To avoid this kind of conflict completely, you need some "intermediate storage" (such as another DB table or a flat file) in which you store the key attributes plus a flag indicating whether each record exists in the staging table. In the next step you source this intermediate storage and can write to the target table without incurring any locks.
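A minimal sketch of that two-step pattern, using SQLite in Python for illustration (the table and column names `staging`, `target`, `interm`, and the flag `in_staging` are my own assumptions, not anything from an actual PowerCenter mapping):

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging (key INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE target  (key INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE interm  (key INTEGER PRIMARY KEY, in_staging INTEGER);
""")
conn.executemany("INSERT INTO staging VALUES (?, ?)", [(1, "a"), (2, "b")])
conn.execute("INSERT INTO target VALUES (2, 'old')")

# Step 1: read the target ONLY here, before any writes to it, and store
# the key attributes plus a flag saying whether the key is in staging.
conn.execute("""
    INSERT INTO interm (key, in_staging)
    SELECT t.key, CASE WHEN s.key IS NULL THEN 0 ELSE 1 END
    FROM target t LEFT JOIN staging s ON s.key = t.key
""")

# Step 2: load the target. The target is never read and written in the
# same pass, so there is no source/target conflict on it.
conn.execute("""
    INSERT OR REPLACE INTO target (key, value)
    SELECT s.key, s.value FROM staging s
""")
```

The point is only the separation of the two passes: all reads of the target happen in step 1, all writes in step 2, with the intermediate table decoupling them.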
That's not the only approach, but in my experience it is the safest one.
Thank you for your reply. Yes, I tried writing the file date to a flat file on Unix and reading the date back from that file in the mapping, to avoid using the same table as source and target. However, I wanted to know whether it is still possible.
It is possible, but the performance penalty (UPDATE vs. INSERT statements) is usually heavy.
Not to forget that, depending on the DBMS, you may encounter deadlocks of the worst kind. Such approaches can even bring the whole DB to a grinding halt; that happened to me approximately ten years ago on DB2 on AIX, and it was the PowerCenter repository of the production environment... not funny, particularly because the admins had to restart the whole AIX box. That's a sacrilege for them.
That's why I firmly suggest *never* using the same table as source and target within the same target load path in a mapping.
It is not recommended to use the same table as both source and target for reading and updating. Database transactions are intended to guarantee atomicity, consistency, isolation, and durability; a sequence of transaction operations should satisfy these ACID properties, and reading from a table while writing to it in the same load can conflict with that.
Thank you. I changed my design and proceeded without using the same table as both source and target. Thank you, everyone.