
How To Avoid Data Import Errors - IBM ICM


Import failures can be a source of frustration for companies that utilize the IBM ICM (formerly Varicent) tool to calculate their compensation plans. Frequent failures can make it difficult to meet service level agreements for data availability, and they can be difficult to diagnose and resolve. Both Cloud and On Premise customers can utilize SQL processes to stage and evaluate data prior to importing, thereby stabilizing the import process and significantly reducing the number of import failures.

Import Errors

Import errors typically have one of four causes:

  • Missing values on pick list columns that reference structural tables,
  • Null values in primary key fields,
  • Improperly formatted fields (date and numeric),
  • Duplicate rows.

After implementation, most import errors are caused by missing values on pick lists. This white paper details techniques to deal with this common cause of import failure. The other causes of import failure typically underscore a more systemic issue, and it is recommended that such a file be rejected in full for more detailed analysis.

Function of Pick Lists

Structural tables are a control within IBM ICM that limit data to a set of input values. These tables play an important part in maintaining referential integrity within IBM ICM, ensuring that the same values are used across multiple tables. During table configuration, structural tables can be referenced in the table design. Depending on the configuration options, these structural tables are referred to as pick lists or, more technically, foreign keys. Pick lists manage referential integrity by only allowing specific values to be entered into the table. IBM ICM completes this data validation during the import process by comparing the field data against the underlying structural data. When it identifies new or unexpected data not on the pick list, it generates an import error, preventing the data load to the destination table.

Import failures are stressful for those administering compensation plans, particularly for customers who have daily reporting service level agreements with their payees and sales managers. The frustration is heightened by the fact that the failure is caused by a few records, while over 90% of the data rows are perfectly valid.
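The same pick list check can be reproduced in SQL before the import runs. Below is a minimal sketch, assuming hypothetical StagingTransactions and Products tables and SQL Server style syntax:

    -- Flag staged rows whose ProductCode is not on the Products pick list.
    -- StagingTransactions, Products, and ProductCode are hypothetical names.
    SELECT s.*
    FROM   StagingTransactions AS s
    LEFT JOIN Products AS p
           ON p.ProductCode = s.ProductCode
    WHERE  p.ProductCode IS NULL;  -- no match on the pick list

Rows returned by a query of this shape are exactly the rows that would cause the import to fail.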

To improve stability during the import process, supplemental automated processes can be built to load valid data, update a pick list automatically, and/or quarantine data. These processes make it easier to administer the compensation plan, maintain service level agreements, and reduce the effort required of the administrator to support the model. In addition, these automated processes can be programmed to load new data into the supporting structural tables and data tables in the correct sequence. With the combination of staging tables and SQL imports, one can analyze the data in files and identify potential failure points. Companies with IBM ICM on premise have the additional option of quarantining rows by creating a stored procedure; updating pick lists, however, must be completed within the IBM ICM tool. Quarantining data and dynamically updating the pick lists are two typical approaches that improve import stability by reducing import failures; each has distinct benefits.

Quarantining vs. updating pick lists

The most important decision about how to supplement import processes centers on what to do with the unanticipated data. Quarantining sets aside the data that would cause an import failure, allowing valid data to be loaded while the unexpected data is analyzed. Alternatively, the pick list can be updated automatically with the new data. Quarantining unexpected data is the least risky approach because the invalid data is set aside. In some cases, dynamically updating the pick lists can be considered; the decision generally depends on the compensation plan impact. Reporting items typically represent low risk to the payment process, so the unexpected data can be added to the underlying structural table without much concern. Items impacting pay usually appear on multiple tables and are generally quarantined so that research can be completed on all the compensation impacts.

Potential situational failures:

  • Text data in numeric fields

An unexpected character in a numeric data field causes the record to parse in an unanticipated manner (see the sketch following this list). The file is returned to the data provider to be corrected.

  • New products

Some new products should be included in the compensation plan, while others should be excluded; further conversation with the sales team is needed to resolve this. For products that need to be included, the compensation terms also require loading so that the payee earns the appropriate credit. A quarantine process has to be developed to identify new products and extract them from the data file for additional analysis (see Creating a quarantine process below). Rows for existing products are then identified and loaded into the transaction table.

  • New titles

Titles are used only in reporting and do not impact the compensation plan assignment for the payee. With low risk to compensation payouts, a process is built to stage the data and automatically add the new titles to the title table prior to the payee import (see Creating a pick list update process below). This allows the payee import to load without failing.

  • New locations

Compensation plans for individual performers are indifferent to location; however, regional manager compensation plans take location into account. Accuracy in the daily reporting for individual performers is more important than quarterly reporting for regional managers. A process is developed to stage the data and preload the new locations. In addition, a report is built to identify the newly added rows so that the administrators can complete the mapping of new locations for the regional managers.

  • New compensation plans and cost centers

Unexpected compensation plans need to be researched, while new cost centers have no impact on compensation payouts. A series of staging tables and imports is created to quarantine the unexpected compensation plan information and load the new cost centers onto the cost center table. This allows critical compensation data to be analyzed further and non-critical data to load.
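As a concrete illustration of the first failure mode above (text data in numeric fields), a staged file can be screened with a query like the following sketch. The names are hypothetical, the numeric column is assumed to be staged as text so that malformed values survive the staging load, and TRY_CAST assumes a SQL Server style environment:

    -- Screen a numeric field that was staged as text (hypothetical names).
    -- TRY_CAST (SQL Server 2012+) returns NULL instead of raising an error
    -- when the value cannot be converted.
    SELECT s.*
    FROM   StagingTransactions AS s
    WHERE  s.Amount IS NOT NULL
      AND  TRY_CAST(s.Amount AS DECIMAL(18, 2)) IS NULL;  -- non-numeric text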

These situational examples highlight things to consider when weighing which approach makes the most sense for your compensation plans. A decision tree might look like the one illustrated in Figure 1:

As illustrated in Figure 1, the first design step in stabilizing an import process is the creation of a staging table. A staging table matches the file structure but typically contains only Text, Date, and Numeric formats. Comment fields are outside the scope of this discussion, as they are rare. A new import is then built to load the file data into the staging table. The staging import becomes the first step of the new process and replaces the data import in the scheduler. Once the data is in the staging table, SQL is utilized to analyze it.
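A minimal sketch of such a staging table, with hypothetical column names and SQL Server style types (VARCHAR for Text, DATE for Date, DECIMAL for Numeric):

    -- Hypothetical staging table mirroring the inbound file layout.
    CREATE TABLE StagingTransactions (
        TransactionId VARCHAR(50),   -- Text
        PayeeId       VARCHAR(50),   -- Text
        ProductCode   VARCHAR(50),   -- Text, validated against the pick list
        SaleDate      DATE,          -- Date
        Amount        DECIMAL(18, 2) -- Numeric
    );

Columns that need format screening can instead be declared as VARCHAR, so that malformed values survive the staging load and can be inspected rather than rejected.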

Creating a quarantine process

Where it is necessary to quarantine data for review and analysis, SQL scripts are written to evaluate the data in the staging table against the data in the underlying structural tables. Generally, over 90% of the data in the file will match the existing validation criteria. SQL imports can identify these valid rows, extract them from the file, and write them to the destination table; incorporating this approach helps support service level agreements. Rows that fail the validation criteria can be written to a reject table for further analysis. For customers with IBM ICM on premise, the quarantine process can be completed externally, using stored procedures to identify unexpected data; the quarantined rows can then be imported into the reject table.
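A sketch of the split, reusing the hypothetical staging and pick list tables from above (the destination and reject table names are likewise hypothetical):

    -- Load the rows that pass the pick list check into the destination table.
    INSERT INTO DestinationTransactions
        (TransactionId, PayeeId, ProductCode, SaleDate, Amount)
    SELECT s.TransactionId, s.PayeeId, s.ProductCode, s.SaleDate, s.Amount
    FROM   StagingTransactions AS s
    WHERE  EXISTS (SELECT 1 FROM Products AS p
                   WHERE  p.ProductCode = s.ProductCode);

    -- Quarantine the rows that fail the check, stamped with the load date.
    INSERT INTO RejectTransactions
        (TransactionId, PayeeId, ProductCode, SaleDate, Amount, RejectDate)
    SELECT s.TransactionId, s.PayeeId, s.ProductCode, s.SaleDate, s.Amount, GETDATE()
    FROM   StagingTransactions AS s
    WHERE  NOT EXISTS (SELECT 1 FROM Products AS p
                       WHERE  p.ProductCode = s.ProductCode);

Note that rows with a NULL ProductCode fail the EXISTS test and are quarantined as well, which also catches the null primary key failure mode described earlier.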

Once analysis of the unexpected data is concluded, an import process can be built to update the underlying tables. This process can be repeated for each structural table until all the unexpected data has been addressed. The final step is an import script that loads any resolved records into the destination table. Many of these events may be infrequent, so, armed with the analysis of the unexpected data, administrators may find that a manual update is preferable to an import.

The workflow in Figure 2 displays a quarantine process:

The new imports would be inserted into the scheduler, enabling an automated approach to importing and quarantining the data. A sample quarantine process would look similar to the updated schedule illustrated in Figure 3 below:

Creating a pick list update process

To update structural tables dynamically, SQL scripts are written to evaluate the data in the staging table against the data in the underlying structural tables and identify the unexpected data. An import can then update the underlying structural table with the new values. This process can be repeated for each structural table until all the unexpected data has been addressed. The process can be integrated into the scheduler just prior to the import, enabling a seamless load of unexpected data. When run through the scheduler, IBM ICM tracks the dynamic updates of the structural tables in the audit log. Adding a periodic review of the audit log is considered best practice when implementing dynamic updates; this way, the administration team can stay on top of new data added to the IBM ICM model.
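Using the new titles scenario from above as an example, a sketch of the update step (StagingPayees, Titles, and TitleName are hypothetical names, SQL Server style syntax):

    -- Add any staged titles that are not yet on the Titles pick list.
    INSERT INTO Titles (TitleName)
    SELECT DISTINCT s.TitleName
    FROM   StagingPayees AS s
    WHERE  s.TitleName IS NOT NULL
      AND  NOT EXISTS (SELECT 1 FROM Titles AS t
                       WHERE  t.TitleName = s.TitleName);

Because this insert runs just before the payee import, the incoming titles are already on the pick list when validation occurs, and the import loads without failing.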

The workflow in Figure 4 displays a process to automatically update pick lists:

The new imports would be inserted into the scheduler, enabling an automated approach to importing data and dynamically updating the pick lists. A sample pick list update process would look similar to the updated schedule illustrated in Figure 5 below:

Some data imports may include both high and low risk data. A combination approach can be created that quarantines the high risk data and updates pick lists with the low risk data. The quarantine processes should be ordered first, then the updates to the structural tables, and lastly the import of the valid rows into the destination table. This hybrid approach captures the benefits of both quarantining and updating. A hybrid process would look similar to the updated schedule illustrated in Figure 6 below:
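In addition to the schedule view in Figure 6, the hybrid sequence can be expressed as a single sequenced script. The following sketch mirrors the compensation plan and cost center example above; all table and column names are hypothetical:

    -- Step 1: quarantine high risk rows (unexpected compensation plans).
    INSERT INTO RejectRows (PayeeId, PlanId, CostCenter, Amount, RejectDate)
    SELECT s.PayeeId, s.PlanId, s.CostCenter, s.Amount, GETDATE()
    FROM   StagingRows AS s
    WHERE  NOT EXISTS (SELECT 1 FROM CompPlans AS c WHERE c.PlanId = s.PlanId);

    -- Step 2: dynamically update the low risk pick list (new cost centers).
    INSERT INTO CostCenters (CostCenter)
    SELECT DISTINCT s.CostCenter
    FROM   StagingRows AS s
    WHERE  NOT EXISTS (SELECT 1 FROM CostCenters AS cc WHERE cc.CostCenter = s.CostCenter);

    -- Step 3: load the remaining valid rows into the destination table.
    INSERT INTO DestinationRows (PayeeId, PlanId, CostCenter, Amount)
    SELECT s.PayeeId, s.PlanId, s.CostCenter, s.Amount
    FROM   StagingRows AS s
    WHERE  EXISTS (SELECT 1 FROM CompPlans AS c WHERE c.PlanId = s.PlanId);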

Conclusion

Frequent file import errors caused by missing values on pick lists can create issues in administering compensation plans and reporting results to end-users. Both Cloud and On Premise customers can utilize SQL processes to stage and evaluate data prior to importing, thereby stabilizing the importing process and significantly reducing the number of import failures. Administrators have several options that can be implemented to reduce file failures:

  • Quarantine data rows,
  • Update the pick list dynamically, or
  • Use a hybrid approach, quarantining some rows and dynamically updating some pick lists.


The risk to compensation payouts should drive the choice of methodology:

  • Low payout risk items are good candidates for a dynamic update process
  • High payout risk items are good candidates for a quarantine process


By integrating these automations, administrators will find it easier to identify and resolve issues with unexpected data, meet service level agreements for their end-users, and reduce import errors.

_________________________________________

About the Author:

This article is authored by a team of IBM ICM professionals at Spectrum Technologies LLC. For further information on this topic, please visit us at www.spectrumbiztech.com.