Is the number increasing at all, or has it come to a standstill? Also, is there any downstream transformation that might be preventing the Source Qualifier from reading more data into its memory? Can you please attach the session log so we can look for any issues? In particular, check for messages related to DTM buffer blocks being full and recommendations to increase DTM buffer memory. That would at least confirm that there is some blocking transformation downstream.
@gauravsanghi Thanks for your answer. I updated my original post, but that seems to have given it a new link and therefore removed your reply.
I have attached the session log and it indeed has a message in it about the DTM buffer.
Ah, the post was being moderated, that is why I could not see your answer anymore.
We have a query that identifies if there are any records coming in from a day before yesterday. Normally we only load yesterday's data. If there is, we want to also include that data in the select.
Now we often see that the data for the day before yesterday is picked up correctly, but nothing from yesterday is selected. This logic had been working for a long time and we did not change it in any way. The data for yesterday is almost never selected.
On a side note worth mentioning: in the last few weeks it did run successfully a few times.
Do you have any tips on how to increase the DTM buffer, and what other factors should we take into account? Does this impact performance on other flows?
Thanks in advance!
The buffer block message is for SQ_TMP_RR_POS_BON_PG_KASSATYPE. You have sq_OVR_DR_CO_ORDER, which started at 2019-08-06 04:05:21 but did not return its first row until 2019-08-06 04:18:05.
If there is a Joiner or some other blocking transformation that requires data from both SQs, it will prevent the other SQ (SQ_TMP_RR_POS_BON_PG_KASSATYPE) from pushing data further. Once the buffer blocks for this SQ are full, it will log that complaint. Increasing the buffer size or block size will not help performance here, because the real problem is the other SQ running slowly. Once that is resolved, you should not see that error.
I also noticed that with Auto memory allocation, srt_FOR_AGGREGATION got only 16MB of memory, which can accommodate only 161319 rows, while your mapping had 5M rows. My recommendation is to set it to at least 48MB (I would say 64MB to be future-proof) so it can do a 2-pass sort and still get reasonable performance.
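A back-of-the-envelope check of those numbers (a sketch in Python; the cache size and row count come from the session log above, and the per-row width is simply derived from them):

```python
MB = 1024 * 1024

# Figures taken from the session log discussed above:
auto_cache = 16 * MB      # cache that Auto memory gave srt_FOR_AGGREGATION
rows_fitting = 161_319    # rows that cache could hold, per the log
total_rows = 5_000_000    # rows the mapping actually pushed through the sorter

# Observed sorter row width (cache size divided by rows that fit):
bytes_per_row = auto_cache / rows_fitting   # roughly 104 bytes per row

def rows_in_cache(cache_mb: int) -> int:
    """Rows a given sorter cache size can hold at the observed row width."""
    return int(cache_mb * MB / bytes_per_row)

print(round(bytes_per_row), "bytes/row")
print(rows_in_cache(48), "rows fit in a 48MB cache")
print(rows_in_cache(64), "rows fit in a 64MB cache")
```

So a 48MB cache holds roughly ten times as many rows as the 16MB auto allocation did, which is presumably why 48MB is enough for the sort of 5M rows to finish in two passes with reasonable performance.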
Overall, I would say you can ignore this, since it is a very small volume you are dealing with. If you get "snapshot too old" errors or too many UNDO block reads for your query in SQ_TMP_RR_POS_BON_PG_KASSATYPE, you can put in a dummy Sorter to finish reading data from the SQ and close your query. However, you might then see the DTM buffer block warning for the Sorter transformation instead.
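On the DTM buffer sizing question: the usual rule of thumb can be sketched like this (the source and target counts below are placeholders, not values from your mapping; substitute your own, and check your session's actual buffer block size):

```python
KB = 1024

# Placeholders -- substitute the actual counts from your mapping/session:
sources = 4                   # Source Qualifiers in the mapping (hypothetical)
targets = 2                   # targets in the mapping (hypothetical)
partitions = 1                # session partitions
buffer_block_size = 64 * KB   # the session's "Default buffer block size"

# Common PowerCenter guidance: a session needs at least
# (sources + targets) * 2 buffer blocks per partition to move data.
required_blocks = (sources + targets) * 2 * partitions
min_dtm_buffer = required_blocks * buffer_block_size
print(required_blocks, "blocks ->", min_dtm_buffer // KB, "KB minimum")
```

In practice the default (or Auto) DTM buffer size already exceeds this minimum; raising it mainly helps when the log shows the session waiting on free buffer blocks, and as noted above it will not fix a slow upstream SQ.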
This behaviour sounds somewhat weird, so I would guess that it has nothing to do with memory settings. Instead, could you please check with your DBAs, network administrators, and the system administrator of the PowerCenter machine(s) whether anything has been changed in the infrastructure? Maybe a network switch has been replaced, maybe a new hard disk has been set up for the source DBMS... Even the smallest, seemingly unlikely change can have drastic results if the circumstances are at odds.
Thanks for the replies!
We tried increasing the memory for the Sorter as gauravsanghi recommended last Thursday.
The session ran successfully on Friday and Sunday, but not on Saturday.
Increasing the Sorter memory did seem to have some positive effect, but we cannot figure out why the Saturday run still failed. I have attached the session logs for Sunday's successful run and Saturday's failed run to the original post. Can you please have a look?
We would also like to try increasing the DTM buffer size. Do you have recommendations as to what size we should increase it to?
Thanks in advance.
Please have your DBAs check the timeout settings for the source DB connections.
And also have the DB client checked for keep-alive messages; I have seen cases where the source DB expected keep-alive messages every few seconds but the DB client was not set up accordingly.
Thanks for the reply.
We have checked the timeout settings and tried a new configuration: we set SQLNET.INBOUND_CONNECT_TIMEOUT=260, and in Informatica we set Connection Retry Period = 60 in the Connection Object Definition.
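For reference, here is where those Oracle-side settings live, in the server-side sqlnet.ora (a sketch only; please confirm names and placement with the DBAs). Note that INBOUND_CONNECT_TIMEOUT only bounds the connect/logon handshake, so it would not help a connection that drops mid-session; SQLNET.EXPIRE_TIME is the usual server-side keep-alive knob, sending periodic probes that can also keep otherwise-idle connections alive through firewalls:

```ini
# sqlnet.ora on the database server

# Seconds a client gets to complete the connect/authentication handshake:
SQLNET.INBOUND_CONNECT_TIMEOUT = 260

# Minutes between server-side keep-alive (dead connection detection) probes:
SQLNET.EXPIRE_TIME = 10
```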
Unfortunately, this did not solve the issue. They are now checking the keep-alive messages.
We do have multiple workflows reading from this table; those run for a longer time and fetch more data without issues. These flows don't run in parallel with each other. I don't think the connection is breaking during the process.
We would also like to try increasing the DTM buffer, but we are not sure to what value. Do you have any recommendations on that?