Optimizing Item Import Performance in Oracle Product Hub/Inventory

APPLIES TO:

Oracle Product Hub - Version 12.1.1 to 12.1.1 [Release 12.1]
Oracle Inventory Management - Version 12.1.1 to 12.1.1 [Release 12.1]
Oracle Item Master - Version 12.0.6 to 12.0.6 [Release 12]
Information in this document applies to any platform.

ABSTRACT

This document discusses the Import Catalog Items (ICI) program and how to achieve optimal performance for high volumes. It lists a number of functional areas that you can tune for optimal scalability. ICI processing and performance depend on which features you use, as well as how you use and configure your system. This document identifies opportunities for setting up your system to scale for high volumes. It also lists and briefly discusses other item bulk import options available in Oracle Product Hub (also known as Oracle Product Information Management Data Hub (PIMDH)).

ICI performance varies with the features you use and with your system configuration. It is not possible to make absolute hardware recommendations because hardware requirements depend on the features used, the processing volume, and the time window allowed for processing. For instance, more powerful hardware is required to process 10,000 items in one hour than in three hours.

Although it is not possible to predict throughput for a given hardware configuration, it is possible to report the throughput observed on a specific one. Refer to the Performance Results section of this document for throughput observed using a specific set of features and data. These figures are observations, not a benchmark.

HISTORY

Author: Oracle Product Hub Development Team
Create Date: 30-Jul-2010
Update Date: 19-Oct-2010
Expire Date: Not Applicable

DETAILS

Bulk Import Capabilities in PIMDH

Product Hub uses the following methods for importing items in bulk:

  • Import Items (INCOIN) 
    Import Items is the core inventory program provided to upload items in bulk from interface tables. It is available to Inventory and PIMDH users. This program is for creating and/or updating basic items in bulk, without any of the associated PIMDH features such as UDAs. You must load data into the items/revisions interface tables before launching this program; a staging sketch follows this list.
  • Import Catalog Items (ICI) 
    This program is a wrapper on top of the Import Items program and supports bulk load of additional PIMDH child entities, such as user defined attributes (UDAs) and associations. Details of the additional entities supported are discussed in the next section, Functional Background and Process Flow for ICI. As with the Import Items program, you must load data into the interface table before launching this program.
  • Excel Upload (EGOIJAVA) 
    This feature enables business end users to enter data into Excel and upload it to PIMDH. The data from Excel is automatically loaded into an interface table, then the “EGO Spreadsheet Java Concurrent Program” launches and processes the interface records. This program internally calls the same APIs called by Import Catalog Items, so the item import performance after uploading the data from Excel into interface tables is the same as for the ICI program. Most of the discussion in this document also applies to this part of the Excel upload process. The time required to load data from Excel into the interface tables is in addition to this.
    In addition to loading items and their child entities, this program can also create change orders, new item requests and structures, all of which are out of scope of this document.
  • Batch Import using Import Workbench 
    Access the Import Workbench through the Product Information Management Data Librarian responsibility. In the Import Workbench, you can create batches, then add items to the batch, match the items against production to eliminate duplicates, and submit the batch for loading the items into production tables. The Import Workbench internally calls the same EGOIJAVA program mentioned above to load items and their child entities.
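
Both Import Items and Import Catalog Items read from the same open-interface tables. The following is a minimal staging sketch for a single master-organization item; the column list is abbreviated, the actual required columns depend on your setup, and the organization_id (204) and set_process_id (1001) values are placeholders you must replace with values from your instance.

  -- Minimal sketch: stage one master-org item for bulk import.
  INSERT INTO mtl_system_items_interface
         (organization_id,
          segment1,             -- item number
          description,
          process_flag,         -- 1 = pending, ready for import
          transaction_type,     -- 'CREATE' or 'UPDATE'
          set_process_id,       -- batch identifier referenced when launching the program
          last_update_date,
          creation_date)
  VALUES (204,                  -- placeholder organization
          'TEST-ITEM-001',
          'Sample item staged for bulk import',
          1,
          'CREATE',
          1001,                 -- placeholder batch id
          SYSDATE,
          SYSDATE);
  COMMIT;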

Functional Background and Process Flow for ICI

This document specifically focuses on the functionality and performance of the Import Catalog Items program. As noted in the previous section, the Excel Upload and Import Workbench features are wrappers on top of this program, so the discussion also applies to them.

Functional Background

The Import Catalog Items program can import the following entities:

  • Items (both master and organization items)
  • Revisions
  • Item Intersections
    • Suppliers
    • Supplier Sites
    • Supplier Site Stores
  • User Defined Attributes at all data levels
  • Item People
  • Approved Manufacturer Lists
  • Source System Cross References

While validating and importing the above entities, the program accomplishes the following tasks based on your setup:

  • Defaulting of UDAs.
  • Applying templates.
  • Copying information from the source item.
  • Generating Item Number/Description using designated functions where applicable.
  • Applying Rules (assignment and validation rules).
  • Checking for change policies and creating change orders/NIRs where required.

Process Flow

The following list shows a sequence for data processing:

  1. Process standard/style items
    1. Preprocess interface records.
    2. Run Rules.
    3. Generate Item Number/Description if function generation is specified.
    4. Process the item records to create items.
    5. Call the Change Management pre-process API.
    6. Call APIs to process child entities (such as UDAs, Intersections, AML, People).
    7. Call the Change Management post-process API.
  2. Process SKU items, repeating sub-steps 1 through 7 above for the SKU records.

Architecturally, the Import Catalog Items program is divided into smaller programs to handle items and each item's entities. The processing is also split across the Java and PL/SQL layers. The validation and processing of entities such as items and UDAs are mainly performed in PL/SQL while supporting activities like number/description generation and Rules are performed in the Java layer. The Java layer program processes each interface record individually due to the functional nature of Rules and Function Generation.

The PL/SQL programs mainly handle the corresponding interface records in bulk rather than processing one record at a time. However, certain validations and processing, due to their functional nature, are performed on individual interface records or in a smaller chunk of records.

Depending on the availability of resources, customers can submit multiple batches in parallel. Finding the optimal number of threads and the optimal batch size for processing in each thread on your system is an iterative process, which is described in the next section.
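
For scripted or repeated test runs, batches can also be submitted from PL/SQL through the standard concurrent request API. The sketch below is illustrative only: the program short name ('EGOICI') and the parameter list are assumptions, so confirm both in the Concurrent Programs definition form, and replace the user and responsibility IDs with values valid for your instance.

  DECLARE
    l_request_id NUMBER;
  BEGIN
    -- Placeholder session context; replace with IDs valid for your site.
    fnd_global.apps_initialize(user_id => 1318, resp_id => 21818, resp_appl_id => 431);

    l_request_id := fnd_request.submit_request(
                      application => 'EGO',     -- Product Hub application short name
                      program     => 'EGOICI',  -- assumed short name for Import Catalog Items
                      argument1   => '1001');   -- assumed: batch/set_process_id to import
    COMMIT;
    dbms_output.put_line('Submitted request ' || l_request_id);
  END;
  /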

Factors Affecting the Performance of Import Catalog Items

The performance of the Import Catalog Items program depends on numerous factors. This section discusses each of these factors in detail.

  1. Number of items: Larger batches take longer to complete. However, too large a batch size places excessive memory demands on the program, especially in cases involving rules and change management. The ideal batch size depends on resource availability.
  2. Transaction Type: You can submit batches with CREATE or UPDATE transaction types (or a combination of both). Performance varies with each transaction type. CREATE transactions have additional processing requirements, like creating revisions and applying default values.
  3. Number of UDAs Per Item: Each user defined attribute value you specify is represented as a separate row in the UDA interface table, so the volume of this table for a batch can grow very large depending on the number of items in the batch and the number of UDAs per item (see the monitoring sketch after this list). 
    In addition to the UDAs provided by the user, rows are inserted into this interface table from templates and UDA defaults. More UDAs may also be added depending on the rules configuration for the Item Catalog Categories/Attribute Groups participating in the batch.
  4. Associations and other child entities: Adding supplier associations, role assignments, AMLs and cross references in the same batch as the item creation batch requires additional processing and more time for the program to complete. Supplier associations also add more UDAs to the interface table (and further processing time) depending on the setup.
  5. Rules: You can define rules at the Attribute Group level or the Item Catalog Category level. Performance is affected by the number of rules, number of attributes participating in these rules, and the types of rules. As described earlier, assignment rules add more records to the UDA interface table, causing additional processing time. 
    Rules processing is performed in a Java program, unlike the rest of the import program, which is PL/SQL. To avoid multiple database round trips, all information required to process these rules is cached in memory in chunks of 100 items at a time, which makes rules processing memory intensive. If more memory is needed for this process, consider adding the following parameters to the concurrent program definition form: -Xms<memorysize>M -Xmx<memorysize>M. Also, review Document 362851.1, JVM: Guidelines to setup the Java Virtual Machine in Apps Ebusiness Suite 11i and R12, to ensure optimal performance.
  6. Change Management requirements: If a batch is created using the Import Workbench function, you can select the change management option to add changes to a change order or add new items to a new item request. This processing runs in a Java program that is memory intensive and adds processing time.
  7. Type of Items (Style/SKU): Due to the functional nature of SKU items, they are created after the Style items. In batches where SKU items coexist along with Style/Standard items, certain sections of the program are repetitively called, first to process style/standard items, then again to process SKU items. Including SKU items in same batch as Style items also increases the number of records for processing in multiple interface tables, since UDAs, organizations, associations, and so on are defaulted from the Style item to the SKU item.
  8. Items for Child Organizations: Assigning items to child organizations requires the same amount of processing time as creating an item in the child organizations. If you create 1000 items and assign each item to two child organizations in the same batch, the batch essentially creates 3000 items in the items table. In addition, extra processing, such as validating for master controlled attributes, copying master controlled attributes from master records, and so on, increases the processing time for child organization item records. Consider these factors while measuring performance of batches.
  9. Number of Instances of this Program Running: Under ideal circumstances, the time for a single ICI run should remain the same when more than one ICI process runs in parallel. In practice this is seldom the case, since each program shares resources with the other programs running in parallel. CPU and memory utilization determine the number of parallel threads; ideally, CPU utilization should never exceed 80% while all threads are running. 
    In addition, despite best efforts to tune all SQL, a few queries still perform a full table scan on the interface tables, mainly due to the data distribution. These queries degrade as more threads of the program run in parallel, since each program scans through the records of all other batches processing at the same time.
  10. Hardware configuration: The throughput of the ICI program greatly depends on hardware capabilities, such as the number of processors and the amount of memory, in each of the tiers (database tier, middle tier, concurrent manager, and so on). If your installation uses Real Application Clusters (RAC) with multiple nodes for each tier, Oracle recommends bringing down all nodes and retaining only one node per tier for best performance during the initial data load. Additionally, refer to the "Concurrent Processing on RAC Information Center" section in Document 1304305.1, Concurrent Processing - Product Information Center (PIC), and ensure that all database parameters are optimized and identical across nodes.
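
To gauge the interface-table volumes discussed in factors 1 and 3 above, you can run a quick monitoring query against the item and UDA interface tables. This is a hedged sketch: table and column names are as shipped in Release 12.1, and it assumes the batch uses the same identifier as set_process_id in the item interface and data_set_id in the UDA interface.

  -- Row volumes per batch in the item and UDA interface tables.
  SELECT msii.set_process_id                            batch_id,
         COUNT(*)                                       item_interface_rows,
         (SELECT COUNT(*)
            FROM ego_itm_usr_attr_intrfc ui
           WHERE ui.data_set_id = msii.set_process_id)  uda_interface_rows
    FROM mtl_system_items_interface msii
   GROUP BY msii.set_process_id;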

The above factors affect the performance of the item import programs. Next, we will discuss how to measure and, if required, tune the performance of the program.

Measuring and Tuning the Performance of the ICI Program

The process of tuning the performance of the ICI program involves multiple steps:

  1. Ensuring the system configuration prerequisites are met.
  2. Measuring the current performance of the ICI program.
  3. Tuning and measuring the post-tuning performance.

The following sections discuss each of the above steps in detail. 

System Configuration Prerequisites

Before using PIMDH, ensure the system is configured as specified in the following My Oracle Support documents. Some of these documents are specific to the Linux x86-64 platform, but similar documents for other platforms are available in My Oracle Support.

396009.1 - Database Initialization Parameters for Oracle Applications Release 12 
761566.1 - Oracle Applications Installation and Upgrade Notes (Linux X86-64 bit) 
361323.1 - HugePages on Linux 
859397.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11.1.0.7 (if you use 11gR2, review note 823587.1 instead)
466649.1 - Using Oracle 11g Release 1 (11.1.0.7) Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12

The list above is not comprehensive; it is indicative for one specific configuration. If you use a different platform or configuration, look for similar documents relevant to your environment.

Verify the following init.ora parameters in your environment since they can impact the performance of batch processes.

"optimizer_dynamic_sampling" = 2 
"optimizer_index_cost_adj" = 100 
"_optim_peek_user_binds" = false 
"_optimizer_extended_cursor_sharing" = NONE 
"_optimizer_extended_cursor_sharing_rel" = NONE

Gather statistics regularly on your INV/EGO/APPLSYS schema using the Gather Schema Statistics program.
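
The supported route is the Gather Schema Statistics concurrent program named above; for completeness, the same can be done from SQL*Plus with the FND_STATS package, as in this sketch (default sampling options assumed):

  BEGIN
    fnd_stats.gather_schema_stats('INV');
    fnd_stats.gather_schema_stats('EGO');
    fnd_stats.gather_schema_stats('APPLSYS');
  END;
  /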

Measuring the Current Performance of the ICI Program

This section describes the steps needed to measure performance of the Import Catalog Items program.

  1. Ensure the test environment is comparable in configuration to your final production instance. Ensure the instance is isolated and not subjected to varying loads (for example, access by other users or other programs) during the test.
  2. Ensure the performance measurement run is not the first run after a bounce of the database, middle tier, or concurrent manager. If you have bounced any of these components, perform a warm-up run with a small batch of items. This keeps one-time overhead, such as class loading and query parsing, out of the actual performance measurement runs.
  3. One of the factors affecting performance is batch size. Measure performance for the batch size most commonly used by your business process. However, Oracle recommends limiting batch sizes to functionally manageable limits to facilitate review and correction of errors and warnings. A functionally manageable size depends on the setup, which dictates the number of records in the interface table. For example, with 5 UDAs per item, a batch of 5,000 items produces 25,000 records in the UDA interface table; with 100 UDAs per item, the same batch produces 500,000 records.
  4. Ensure that statistics are gathered regularly for the EGO, INV and APPLSYS schemas. If you have performed any major data load or data update activity, gather statistics again before the measurement run. Elapsed times for completed runs can be read as shown in the sketch below.
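
Once a measurement run completes, its elapsed time can be read from the standard concurrent request tables. The sketch below is illustrative; the program name filter is an assumption, so adjust it to the exact program name on your instance.

  -- Elapsed time (in minutes) of recent import runs.
  SELECT fcr.request_id,
         fcpt.user_concurrent_program_name,
         fcr.actual_start_date,
         fcr.actual_completion_date,
         ROUND((fcr.actual_completion_date - fcr.actual_start_date) * 24 * 60, 1) elapsed_minutes
    FROM fnd_concurrent_requests    fcr,
         fnd_concurrent_programs_tl fcpt
   WHERE fcr.concurrent_program_id  = fcpt.concurrent_program_id
     AND fcr.program_application_id = fcpt.application_id
     AND fcpt.language              = 'US'
     AND fcpt.user_concurrent_program_name LIKE 'Import Catalog Items%'  -- assumed name
   ORDER BY fcr.actual_start_date DESC;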