Please take a look at the Performance Guide for general guidelines. For special questions you can always refer to this forum again, but repeating the basic explanations from the manuals here wouldn't make much sense.
Check the Informatica Support portal. Informatica provides a freeware tool that calculates the DTM Buffer Size.
For example, you create a session that contains a single partition using a mapping that contains 50 sources and
50 targets. Then, you make the following calculations:
1. You determine that the session requires a minimum of 200 memory blocks:
[(total number of sources + total number of targets) * 2] = (session buffer blocks)
100 * 2 = 200
2. Based on default settings, you determine that you can change the DTM Buffer Size to 15,000,000, or you can
change the Default Buffer Block Size to 54,000:
(session buffer blocks) = (.9) * (DTM Buffer Size) / (Default Buffer Block Size) * (number of partitions)
200 = .9 * 14222222 / 64000 * 1
200 = .9 * 12000000 / 54000 * 1
If the session contains n partitions, set the DTM Buffer Size to at least n times the value for the session
with one partition. The Log Manager writes a warning message in the session log if the number of memory blocks
is so small that it causes performance degradation. The Log Manager writes this warning message even if the
number of memory blocks is enough for the session to run successfully. The warning message also gives a
suggestion for the proper value.
If you modify the DTM Buffer Size, increase the property by multiples of the buffer block size.
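The sizing arithmetic above can be sketched as a small Python helper. This is only an illustration of the documented formula, assuming the default 64,000-byte buffer block size and the 0.9 usable fraction; the function names are made up here and are not part of any Informatica tool or API:

```python
import math

def required_buffer_blocks(num_sources, num_targets):
    # Each source and each target needs at least 2 memory blocks.
    return (num_sources + num_targets) * 2

def dtm_buffer_size(blocks, block_size=64000, partitions=1, usable=0.9):
    # Invert: blocks = usable * dtm_size / block_size * partitions
    raw = blocks * block_size / (usable * partitions)
    # Increase the property by multiples of the buffer block size,
    # as the guide recommends, by rounding up to the next multiple.
    return math.ceil(raw / block_size) * block_size

blocks = required_buffer_blocks(50, 50)  # 50 sources + 50 targets -> 200 blocks
size = dtm_buffer_size(blocks)           # a multiple of 64,000 near 15,000,000
```

With 200 blocks this yields 14,272,000 bytes (223 blocks of 64,000), which is consistent with the documentation's suggestion of raising the DTM Buffer Size to roughly 15,000,000.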
The easiest way to explain this, in my opinion, is as follows:
Go into the mapping and add up all the values in the precision column for ALL your sources and targets. Once you have this number (the maximum amount of memory any one record would require for processing), multiply it by 100. The result is your DTM Buffer Size in bytes, optimized for processing 100 records at a time (this is ideal as per the Performance Guide that Nico was referencing).
Set the DTM Buffer Size in the session to this number and you should be great. It works wonders on performance; I've reduced a run time from 15 minutes to 30 seconds.
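That rule of thumb is simple enough to sketch in a few lines of Python. The precision values below are made-up placeholders; in practice you would read them off the precision column of your own source and target definitions:

```python
# Placeholder column precisions summed across ALL sources and targets.
precisions = [10, 255, 4, 19, 50]

max_record_bytes = sum(precisions)        # worst-case size of one record
dtm_buffer_size = max_record_bytes * 100  # headroom for ~100 records at a time
print(dtm_buffer_size)
```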
Any idea about the total amount of buffer memory that a host can hold? And what happens if the DTM memory required for a session is more than the total DTM memory available on the host?
I faced the same issue recently. I tried changing the buffer size and creating a new workflow/session, but it didn't work.
In my case the issue was with the source file definition. The workflow actually ran successfully the first time; later my teammate changed the precision in the file definition, and the change was not propagated to the other transformations.
Propagating the precision through the mapping solved the issue. Hope this helps someone.