Tuesday, 9 April 2013

Warning : Ignoring duplicate entry at table record 453159; no further warnings will be issued for this table


This warning appears when the reference table used by the Lookup stage contains multiple records with the same key column value. By default, Lookup returns the first matching record it finds and raises this warning, since it cannot know which of the duplicate values is the correct one to return from the reference.

To resolve it, you can select one of the reference links in the "Multiple rows returned from link" dropdown under the Lookup constraints. In that case Lookup will return multiple rows for each input row that matches.

Alternatively, remove the duplicate rows that share the same key columns from the reference data, according to the business requirements; a sketch of this follows below.
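
A minimal de-duplication sketch (the table, key and ordering columns are hypothetical), using an analytic function on the reference data:

SELECT vendor_num, item_desc
FROM (
    SELECT vendor_num,
           item_desc,
           ROW_NUMBER() OVER (PARTITION BY vendor_num ORDER BY last_update_ts DESC) AS rn
    FROM   ref_table
) t
WHERE t.rn = 1

This keeps only the most recent row per key, so the Lookup sees exactly one reference record for each key value.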

Warning : When checking operator: Operator of type "APT_TSortOperator": will partition despite the preserve-partitioning flag on the data set on input port 0.


These warnings appear when a stage in the job uses a specific partitioning method (e.g. Hash, Same, etc.) while the preceding stage has Preserve partitioning (the default) set in its Stage > Advanced tab.

To remove this warning, set Preserve partitioning to Clear in the preceding stage, i.e. the stage just before the one where the particular partitioning method is used.

Warning : APT_CombinedOperatorController(1),0: Data string 'yyyymmdd' does not match format '%yyyy%mm%dd': an integer was expected to match tag %yyyy.


If you get this or a similar warning, check the format string used in the StringToDate()/StringToTimestamp() function.

The syntax is StringToDate("String", "%yyyy%mm%dd"), where, for the format above, the string must look like 20130304.

If your string is in the format '2013-03-04', change the format pattern accordingly, e.g. StringToDate("String", "%yyyy-%mm-%dd").
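
As a quick illustration (the link and column names are hypothetical), the transformer derivations for the two layouts would be:

For strings like 20130304:    StringToDate(lnk_in.order_dt_str, "%yyyy%mm%dd")
For strings like 2013-03-04:  StringToDate(lnk_in.order_dt_str, "%yyyy-%mm-%dd")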

Error : The OCI function OCIStmtExecute returned status 1. Error code: 24,381, Error message: ORA-24381: error(s) in array DML.


This error is usually obtained when there is a problem with the number of records being loaded into the array. To resolve it, find the correct balance between the Record Count and Array Size properties.

The Record Count should be a multiple of the Array Size. In one of our projects we set Record Count to 10000 and Array Size to 2000, and the warning disappeared.

Error : The OCI function OraOCIStmtPrepare returned status -1. Error code: 1,756, Error message: ORA-01756: quoted string not properly terminated. (CC_OraStatement::prepare, file CC_OraStatement.cpp, line 392)


This occurs when the query written in the stage is not formatted correctly. If all the columns and filters are properly defined, check the parameters being passed; most of the time the problem is a discrepancy in a parameter value, typically one that breaks the quoting of a string literal, as in the example below.
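
For example (the query and the parameter name pRegion are hypothetical), a user-defined SQL like

SELECT order_id, region
FROM   orders
WHERE  region = '#pRegion#'

will fail with ORA-01756 if the value passed for pRegion itself contains an unmatched single quote (e.g. O'Brien), because the generated statement closes the string literal early and leaves the trailing quote dangling.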

Fatal Error: Attempt to setIsNull() on the accessor interfacing to non-nullable field "column_name".


As is evident, this error occurs when you try to assign a NULL value to a NOT NULLABLE column. But it can be a little tricky sometimes. For example, if you use trim(col_name) where col_name is a NOT NULLABLE column, the result can still be NULL, because in Oracle trimming a string of only blank spaces gives NULL. So, if you are using a trim in the query and the column is defined NOT NULL, be aware that you can get the above error when blank values (which are in fact NOT NULL) come in that column.
You can resolve it by defining the column as NULLABLE in the DataStage metadata, or by removing the trim from the query and doing the trim in a transformer instead.
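
Another option (a minimal sketch; the column and table names are hypothetical) is to keep the trim in the source SQL but guarantee a non-NULL result with NVL:

SELECT NVL(TRIM(customer_name), ' ') AS customer_name
FROM   source_table

Here a value that is all blanks comes through as a single space instead of NULL, so the non-nullable DataStage column can still be populated. Whether such a substitute value is acceptable depends on the business requirements.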

Error : The OCI function OCIStmtExecute returned status -1. Error code: 1,654, Error message: ORA-01654: unable to extend index schema.index_name by 1280 in tablespace USERS. (CC_OraStatement::executeInsert, file CC_OraStatement.cpp, line 1,642)

This error is caused by insufficient space in the tablespace. Ask the DBA to extend the tablespace used by the concerned table or index.
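
For reference (the datafile path and sizes are hypothetical; the DBA will pick appropriate values), a tablespace is typically extended with something like:

ALTER TABLESPACE USERS
  ADD DATAFILE '/u01/oradata/users02.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;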

ERROR from ExecDirect: ERROR: Attribute 'Col_Name' is of type 'NUMERIC' but expression is of type 'DATE' You will need to rewrite or cast the expression

As obvious as it seems, this error can also appear when you use Insert / Update then Insert in the Netezza Enterprise stage. In the case above, the column was nullable, yet the job still failed. This is because in the Netezza Enterprise stage the entire target metadata must be defined, even for columns that receive no values from the source and are defined as NULLABLE.

Update in Netezza Enterprise stage


Doing an Update or an Update then Insert requires a unique key column to be specified in the Input->Properties tab.

The following needs to be configured in the Input->Properties tab:

Load Method = NZLOAD
Table_Name = 'give your table name'
Update Mode = Update or Update then Insert
Drop Work Table = 'True' (usually we don't create a work table for the load, but if you have one, you can specify it here)
Key Column = specify the key column
(You can add any number of key columns; just add one Key Column property for each column.)

Unique Key Column :
This column can be a sequential number which is generated and may or may not be present in the target table. It is fine if this column is not present in the target table yet still appears in the target table metadata in DataStage.

Usually we generate it in a transformer as a column like SEQ_SURR, with the derivation @PARTITIONNUM + (@NUMPARTITIONS * (@INROWNUM - 1)) + 1.

The only requirement is that it has a unique value for each row coming from the source; see the worked example below.
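
To see why that derivation is unique, here is a worked example assuming the job runs on 4 partitions (@NUMPARTITIONS = 4):

Partition 0 (@PARTITIONNUM = 0): 1, 5, 9, 13, ...
Partition 1 (@PARTITIONNUM = 1): 2, 6, 10, 14, ...
Partition 2 (@PARTITIONNUM = 2): 3, 7, 11, 15, ...
Partition 3 (@PARTITIONNUM = 3): 4, 8, 12, 16, ...

Each partition generates its own sequence, offset by the partition number, so no two rows in the job can receive the same value.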

Updatable Column :

If you want all columns to be updated for the specified key combination, do not set this property. If you want only some of the columns to be updated, add this property once for each column you want to update.

Write Method = Update

Connection :

Provide the connection details required by the properties: Datasource, Database, Password, Server and User.

That's it! Those are the minimum steps needed to configure updates for the Netezza Enterprise stage.

Info : Use of WITH Keyword in SQL query


One of the easiest and most powerful SQL keywords I have found, whether for debugging or for writing complex queries, is the WITH keyword. Using it we can define multiple named blocks, perform different operations between them, and store the result in another block. It should be clear from the query below, where I have defined the blocks SRC, TGT, JN, SRC1, TGT1, EXTRA and MISSING.

(In the SQL below I am validating some of the column values. The values in the target have been loaded by the ETL job, so I am regenerating the source records using the operations required by the business. Here I join the source table table_1 with a reference table table_2 for a particular batch. The result is then joined with table_3 to fetch the additional columns required. Finally, the values generated from the source are compared with the target to find any EXTRA or MISSING records.)
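
For reference, the general shape of a WITH query, shown here as a generic sketch with hypothetical tables t1 and t2, is:

WITH src AS (
    SELECT id, val FROM t1
),
tgt AS (
    SELECT id, val FROM t2
)
SELECT s.id
FROM src s
LEFT JOIN tgt t ON s.id = t.id
WHERE t.id IS NULL

Each named block can be referenced by the blocks that follow it and by the final SELECT, which is exactly how the example below builds up SRC, TGT, JN and the rest.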

The functions performed by the different blocks are :

SRC – extracts rows of table_1 (the source table) for a particular batch.
TGT – extracts rows of table_2 for the same batch.
JN – joins the results of SRC and TGT and applies some filters.
SRC1 – joins the result of JN with table_3.
TGT1 – extracts rows of table_4 (the target table).
EXTRA – finds records which are in the target but not in the source.
MISSING – finds records which are in the source but not in the target.

WITH SRC AS
(SELECT DATEIG,VENDOR_NUM,POSEQNC_NUM,WKSTSQC_NUM,CONCAT(VENDOR_NUM,POSEQNC_NUM) AS PO_NUM FROM table_1 WHERE
BATCH_TS = TIMESTAMP'2012-11-25 04:01:00')
,TGT AS
(
SELECT DATEDOCK,VENDOR_NUM,POSEQNC_NUM,WKSTSQC_NUM FROM table_2 WHERE
BATCH_TS = TIMESTAMP'2012-11-25 04:01:00'
),
JN AS
(
SELECT CASE WHEN DATEIG IS NULL THEN DATEDOCK ELSE DATEIG END AS IN_GATE_DATE FROM SRC LEFT JOIN TGT ON
SRC.VENDOR_NUM = TGT.VENDOR_NUM AND
SRC.POSEQNC_NUM = TGT.POSEQNC_NUM AND
SRC.WKSTSQC_NUM = TGT.WKSTSQC_NUM
--WHERE PO_NUM = '15046327' AND SRC.WKSTSQC_NUM IN ('0001','0005','0009','0013','0017','0002','0006')
)
,SRC1 AS
(
SELECT
IN_GATE_DATE,DAY_ID AS IN_GATE_DATE_ID FROM JN LEFT OUTER JOIN table_3 ON
IN_GATE_DATE = DAY_DATE
)
,TGT1 AS
(SELECT IN_GATE_DATE,IN_GATE_DATE_ID,'TARGET' AS DESCR FROM table_4
--WHERE PURCHASE_ORDER_NUM = '15046327' AND WORKSHEET_NUM IN ('0001','0005','0009','0013','0017','0002','0006')
)
,EXTRA AS
(SELECT IN_GATE_DATE,IN_GATE_DATE_ID FROM TGT1 MINUS SELECT IN_GATE_DATE,IN_GATE_DATE_ID FROM SRC1)
,
MISSING AS
(SELECT IN_GATE_DATE,IN_GATE_DATE_ID FROM SRC1 MINUS SELECT IN_GATE_DATE,IN_GATE_DATE_ID FROM TGT1)

SELECT IN_GATE_DATE,IN_GATE_DATE_ID,'EXTRA' AS DESCR FROM EXTRA
UNION
SELECT IN_GATE_DATE,IN_GATE_DATE_ID,'MISSING' AS DESCR FROM MISSING


Wednesday, 3 April 2013

Warning : When checking operator: When binding input interface field "field_name" to field "field_name": Implicit conversion from source type "int64" to result type "decimal[18,0]": Possible range limitation.



It means you are trying to convert a bigint to a decimal, which requires explicit treatment. Use the AsInteger() function, or assign the column to a stage variable and then use the stage variable to populate the column in the target, as sketched below.
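
A hedged sketch of the stage-variable approach described above (the link, column and stage variable names are hypothetical):

Define a stage variable, e.g. svOrderQty, with its type set to Decimal (18,0), and give it the derivation

lnk_in.ORDER_QTY_BIGINT

Then use svOrderQty, rather than the input column directly, as the derivation of the target decimal column.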

Warning : When checking operator: When validating import schema: At field "field_name": Importing potential null to not nullable field (while using the Netezza Enterprise stage)


In the Netezza Enterprise stage, this warning can sometimes be seen even if the column is defined as NOT NULL in the metadata. To remove it, use the NVL function.

For example, I have a column PURCHASE_ORDER_NUM VARCHAR(20) (NOT NULL). Use NVL in the query, as NVL(PURCHASE_ORDER_NUM, '?') AS PURCHASE_ORDER_NUM, and change the column's nullability to YES in the metadata. Since the column is defined NOT NULL in the source, no '?' value should actually appear in the data.
Now, since we have made the column NULLABLE here, it may still be defined as NOT NULLABLE in the target. In such a case you may have to use a transformer or a modify stage to convert the nullability back to NOT NULLABLE, with a rule like:

If IsNull(link_name.PURCHASE_ORDER_NUM) Then '?' Else link_name.PURCHASE_ORDER_NUM


To use NVL for a decimal or integer field, give an integer or decimal default value. For example, if PURCHASE_ORDER_NUM were an integer or decimal column, I would have used:
NVL(PURCHASE_ORDER_NUM, -1) AS PURCHASE_ORDER_NUM
