Consider increasing the DTM buffer size to see if that helps. It may be worth tuning the DTM buffer size manually instead of leaving it set to Auto.
This is the relevant entry in the attached logs:
2021-03-22 17:58:55 : INFO : (10680 | READER_1_1_1) : (IS | IS_TEST_INFA) : DisNodePrd01 : DBG_21699 : Waiting for consumer to free buffer block. Please increase the dtm buffer pool size.
2021-03-22 17:59:48 : INFO : (10680 | WRITER_1_*_1) : (IS | IS_TEST_INFA) : DisNodePrd01 : WRT_8161 :
TARGET BASED COMMIT POINT Mon Mar 22 17:59:47 2021
Also, please take into account that, if SQL Server is not installed on the PowerCenter box, the data has to be read from the SQL Server files, moved over the network, processed by PowerCenter, and then moved back to the SQL Server machine. That takes a good deal of time.
Another point is that the PowerCenter machine may be "physically" weaker than the SQL Server machine. It's not uncommon to have a SQL Server machine with, say, 32 CPU cores and 512 GB of RAM while the PowerCenter machine has only 4 CPU cores and 64 GB of RAM.
There are more things besides these which may lead to such "bad" performance.
Just a starting point.
Also, besides increasing the DTM block size and buffer pool size, please check on the SQL Server side whether static cursor usage is enabled by default.
When static cursors are enabled, a cursor is created for every query issued to the DB. That can take a long time if the query result is in the order of millions of records.
To disable static cursors for the DB, add the following custom property to your service:
For the PowerCenter Integration Service, set the custom property DisableStaticCursorsForSQLServerODBCProvider to Yes in the properties section of the Processes tab of the IS.
For the Data Integration Service of Data Services, set the custom property ExecutionContextOptions.DisableStaticCursorsForSQLServerODBCProvider to Yes in the properties section of the Processes tab of the DIS.
A PowerCenter Integration Service and/or Data Integration Service recycle is required for the changes to take effect.
Usually an UPDATE operation (30 min here) is a much harder operation than a SELECT (about 40 sec here), so I don't think increasing the DTM buffer will help in this case.
30 min for 1 million updates is not too bad performance: roughly 1,000,000 rows / 1,800 s ≈ 550 rows per second.
I suppose that your target table prd_sterling_owner.yfs_item_test has an index on the ITEM_KEY column.
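To illustrate why that index matters, here is a minimal sketch using Python's sqlite3 in place of SQL Server (the table, column, and row counts are invented; only the mechanism carries over): without an index on the key column, every row-by-row UPDATE forces a full table scan.

```python
import sqlite3
import time

# Toy table standing in for the real target; names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ITEM_KEY INTEGER, VAL INTEGER)")
con.executemany("INSERT INTO t VALUES (?, 0)", ((i,) for i in range(50_000)))

def time_updates(n=200):
    """Run n single-row updates by key and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n):
        con.execute("UPDATE t SET VAL = 1 WHERE ITEM_KEY = ?", (i,))
    con.commit()
    return time.perf_counter() - start

no_index = time_updates()            # each UPDATE scans the whole table
con.execute("CREATE INDEX ix_item ON t (ITEM_KEY)")
with_index = time_updates()          # each UPDATE is an index seek

print(f"no index: {no_index:.3f}s, with index: {with_index:.3f}s")
```

On any reasonable machine the indexed run is orders of magnitude faster, which is why a missing key index on the target can turn a 40-second read into a 30-minute write.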
Next, I see in your log:
Output Rows , Affected Rows 
I think that can happen if the target has no key for the update, or if some rows were updated several times. In the second case you must take extra steps to keep only the last values, because otherwise you can get wrong results.
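The "updated several times" case can be sketched like this (a hypothetical miniature using Python's sqlite3; the table name and data are invented, only the mechanism matters): a source feed carrying the same key twice makes the affected-row count exceed the number of target rows, and whichever value comes last in feed order wins.

```python
import sqlite3

# Miniature of the scenario: two target rows, a feed of three updates.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yfs_item (ITEM_KEY TEXT PRIMARY KEY, PRICE REAL)")
con.execute("INSERT INTO yfs_item VALUES ('A', 0), ('B', 0)")

# The source feed contains key 'A' twice with different values --
# the second update silently overwrites the first.
source_rows = [("A", 10.0), ("B", 20.0), ("A", 99.0)]
affected = 0
for key, price in source_rows:
    cur = con.execute(
        "UPDATE yfs_item SET PRICE = ? WHERE ITEM_KEY = ?", (price, key)
    )
    affected += cur.rowcount

print(affected)  # 3 affected rows, though the target has only 2 rows
final_a = con.execute(
    "SELECT PRICE FROM yfs_item WHERE ITEM_KEY = 'A'"
).fetchone()[0]
print(final_a)   # 99.0 -- the last value in feed order wins
```

This is why "Affected Rows" can exceed "Output Rows" of distinct keys, and why the final result depends on the order the session processes the feed.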
I added the custom property on my IS and I can see better performance. The throughput rose to about 1200 rows/sec from about 400.
Thank you, really appreciate it.