
    Informatica performance tuning

    Dhiman Dey New Member

      Hello experts,

       

      I'm looking for some details on real-world performance tuning in Informatica PowerCenter. I have been working with Informatica for the last 2 years, but so far I haven't had a chance to work with the huge amounts of data where performance tuning is needed.

      Can someone share how they do performance tuning in Informatica PowerCenter in real projects?

      I have gone through the Informatica documentation and I am clear on the concepts, but I would like to hear some real-world stories.

       

      Thanks in advance!

        • 1. Re: Informatica performance tuning
          Smitha HC Guru

          Hi Dhiman,

           

          Please describe the scenario in which you are facing a performance issue. Based on that, we can help you out.

           

          Thanks,
          Smitha

          • 2. Re: Informatica performance tuning
            Seema Yaligar Guru

            Hi Dhiman,

             

            You can refer to the PowerCenter Performance Tuning Guide below, which has information about run-time bottlenecks and how to tune for optimal performance.

             

            Preface

             

            Thanks,

            Seema

            • 3. Re: Informatica performance tuning
              Nico Heinze Guru

              Guys,

               

              it's nice of you to share the links to the documentation, but that is not what Dhiman asked for. Dhiman wants to hear some real-life stories.

               

              I can't contribute much, only two stories.

               

              Once we had a mapping which ran every five minutes. It issued a short SQL statement (via a SQL override) against a DB2 database, and over the years the status table read by this SQL override grew so large that the SELECT alone took approx. 3 minutes. And this mapping still had to be executed every five minutes.

              What we did was replace the whole SQL override with mapping logic. Instead of having DB2 perform a sub-SELECT and join the results of the sub-SELECT back against the table itself, we simply read all the data into the mapping, sorted them, aggregated them, and joined the aggregated results with the original data stream. This process still (the story is from 7 years ago) takes less than a minute via mapping logic.
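
              To make that concrete, the override was structured roughly like the sketch below (a hypothetical reconstruction; the table and column names are invented, not the originals):

                  -- Sketch of a self-joining SQL override of the kind described
                  -- above (invented names; the real statement ran against DB2):
                  SELECT s.item_key, s.status, s.updated_ts
                  FROM   status_tab s
                  JOIN  (SELECT item_key, MAX(updated_ts) AS max_ts
                         FROM   status_tab
                         GROUP BY item_key) latest
                    ON  s.item_key   = latest.item_key
                   AND  s.updated_ts = latest.max_ts

              In the mapping, the sub-SELECT becomes a Sorter plus an Aggregator (MAX of the timestamp per key), and a Joiner reconnects the aggregated rows with the original data stream.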

               

              The other story: a couple of Oracle specialists had built a process in which duplicate records were written to an Oracle table; afterwards a Stored Procedure in Oracle was called to remove the duplicates.

              It turned out that this table contained approx. 130 million records, of which 100 million were duplicates.

              And the Stored Procedure needed approx. 5 hours per night to remove them.

              My suggestion was to avoid writing duplicates altogether. The Sorter transformation can be set up to remove duplicate records from the input data stream, and so we did: we inserted a Sorter set to Sort Unique (the Distinct option) into the mapping, and off we went.

              In the end, with this Sorter, the session ran in 1.5 hours (including removal of the duplicate records), instead of 30 minutes to write the 130 million records including duplicates plus five hours of Stored Procedure runtime.

              So we were able to cut the overall time from approx. 6-7 hours down to 2-2.5 hours.
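
              For illustration: such a Stored Procedure typically performs a classic ROWID-based dedup (the original procedure wasn't posted, so this is an assumption), while the Sorter's Distinct option corresponds to a SELECT DISTINCT over the incoming rows. Invented names throughout:

                  -- Roughly what such an Oracle dedup procedure usually does
                  -- (assumed logic, invented names):
                  DELETE FROM target_tab a
                  WHERE  a.rowid > (SELECT MIN(b.rowid)
                                    FROM   target_tab b
                                    WHERE  b.nat_key = a.nat_key);

                  -- What the Sorter's Distinct option achieves before the
                  -- write, expressed in SQL terms:
                  SELECT DISTINCT nat_key, col1, col2
                  FROM   incoming_rows;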

               

              Both of these are extreme cases. I've been told by colleagues that in "standard" DWH load use cases you can expect to fine-tune session runs by approx. 20-40%, meaning that in good cases the session run time can be cut by up to 40% of the total runtime.

               

              Regards,

              Nico

              • 4. Re: Informatica performance tuning
                Asmita Kesarwani Guru

                Hi,

                 

                Following these standards/guidelines can improve the overall performance:

                • Use a single Source Qualifier to join source tables that reside in the same schema.

                • Make use of the Source Qualifier's "Source Filter" property if the source type is relational.

                • If subsequent sessions do a lookup on the same table, use a persistent cache in the first session. The data remains in the cache and is available to the subsequent sessions.

                • Use flags as integers, as integer comparison is faster than string comparison.

                • Use the table with the smaller number of records as the master table in joins.

                • While reading from flat files, define the appropriate data types instead of reading everything as String and converting afterwards.

                • Connect only the ports that are required by subsequent transformations; for the rest, check whether the ports can be removed.

                • Suppress the generated ORDER BY by placing '--' at the end of the override query in Lookup transformations (see the sketch after this list).

                • Minimize the number of Update Strategy transformations.

                • Group by simple columns in transformations like the Aggregator and the Source Qualifier.

                • Use Router transformation in place of multiple Filter transformations.

                • Turn off Verbose logging when moving workflows to the Production environment.

                • For large volumes of data, drop indexes before loading and recreate them after the load.

                • For large volumes of records, use Bulk load and increase the commit interval to a higher value.

                • Set the commit type to 'Target' in the sessions.
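
                To illustrate the ORDER BY tip above: by default PowerCenter appends an ORDER BY over all lookup ports to the lookup query, and ending the override with a comment marker suppresses it, so you can specify a shorter ORDER BY on just the condition ports yourself. A minimal sketch with invented names:

                    -- Lookup SQL override; PowerCenter appends its own ORDER BY
                    -- after the query, which the trailing '--' turns into a comment.
                    SELECT cust_id, cust_name, cust_segment
                    FROM   customer_dim
                    ORDER BY cust_id --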

                 

                Also, the below document might be helpful:

                https://knowledge.informatica.com/s/article/519927?language=en_US

                 

                Thanks,

                Asmita

                • 5. Re: Informatica performance tuning
                  Dhiman Dey New Member

                  Thanks for sharing the stories, Nico.

                  They really help me understand real-world performance issues.

                  • 6. Re: Informatica performance tuning
                    Dhiman Dey New Member

                    Nico,

                     

                    From your first story, what I understood is: previously everything was done in the SQL override, but later you just selected all the needed records and performed the other operations (like the subquery) using mapping transformations.

                     

                    But in all the mappings I have seen so far, the idea was that, where possible, we should do all the SQL-related work in the Source Qualifier itself, because that is supposed to perform better.

                    Maybe we need to consider it case by case.

                    Please let me know your thoughts.

                    • 7. Re: Informatica performance tuning
                      Dhiman Dey New Member

                      Asmita,

                       

                      Thanks for sharing those tips.

                      • 8. Re: Informatica performance tuning
                        Nico Heinze Guru

                        Yeah, that's an old (mis-)understanding. It dates back to the times when database servers were 64-bit machines and PowerCenter mostly ran on 32-bit machines. And yes, in those times a SQL override was often notably faster than the equivalent mapping logic.

                        Those times are long gone. Only the prejudice that SQL is faster remains.

                         

                        One of the many things I've learned over the past 20 years is this: never trust SQL, and never trust PowerCenter. Sometimes one is faster, sometimes the other. It can indeed happen (I've seen it more than once) that a SQL-based approach was faster in DEV and slower in PROD, and I've seen it the other way round as well.

                        You have to try both ways. And you should take due care that you build your SQL logic and your mapping logic the "right" way, meaning that you don't waste performance in either case.

                        PowerCenter is quite "generous" when it comes to bad mapping design, but there are many ways to "overstretch" this tolerance, meaning that PowerCenter cannot execute truly "bad" mappings fast. It is hard work, small details often come into play, and only sometimes is the outcome clear. But in the end you always have to try both ways.

                         

                        My experience.

                         

                        Regards,

                        Nico

                        • 9. Re: Informatica performance tuning
                          Dhiman Dey New Member

                          Thanks Nico for sharing your experience on this.

                          • 10. Re: Informatica performance tuning
                            Nico Heinze Guru

                            That's what we are here for: sharing.

                             

                            Cheers,

                            Nico

                            • 11. Re: Informatica performance tuning
                              JanLeendert Wijkhuijs Seasoned Veteran

                              Hi,

                               

                              I also have an example.

                               

                              Functional requirement:

                               

                              Every day a full-dump flat file is delivered; it needs to be compared to the previous delivery, and the changes must be extracted and delivered as output (also a flat file).

                               

                              Originally the received file would be read and written to an intermediate location with a timestamp added to the name.

                              In the same mapping there was also a pipeline to the functional target.

                              In this part of the mapping, a lookup against the previous flat file was added, plus a filter so that only the differences from the previous delivery were written to the target.

                               

                              The number of records was about 1 million, increasing very slowly.

                              The total runtime of the workflow varied between 5 and 7 hours.

                              The reason was a field which could contain over 7,000 characters.

                               

                              I adjusted the mapping to use an MD5 hash for the field with up to 7,000 characters and a second MD5 hash over the remaining fields (also a lot of long character fields).

                               

                              So I had a source/Source Qualifier, an Expression transformation to create the hashes, and then wrote only the key fields and the hashes to the target for the next day's comparison.

                              In the lookup I only had to compare the hashes and write the few changed records to the target.

                               

                              The total runtime dropped to 2 minutes.

                               

                              So if you have large character fields and you need to do comparisons, switch to hash fields.
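
                              PowerCenter's expression language provides an MD5() function, so the hashes can be computed in the Expression transformation during the load. Expressed in SQL terms, the comparison then boils down to something like the sketch below (illustrative only: md5() as in PostgreSQL, and all table and column names are invented):

                                  -- Change detection via hashes instead of comparing long
                                  -- character fields directly (invented names):
                                  SELECT t.rec_key
                                  FROM   today_snapshot t
                                  LEFT JOIN yesterday_snapshot y
                                         ON y.rec_key = t.rec_key
                                  WHERE  y.rec_key IS NULL                       -- new record
                                     OR  md5(t.big_field)   <> y.big_field_md5   -- changed
                                     OR  md5(t.other_field) <> y.other_field_md5;

                              Comparing two fixed 32-character hashes is much cheaper than comparing fields of up to 7,000 characters, which is where the drop from hours to minutes comes from.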

                               

                               

                              Regards

                              JanLeendert

                              • 12. Re: Informatica performance tuning
                                Vasudevan TS Guru

                                Hi Dhiman Dey,

                                 

                                You can follow the tuning guide below; it will help you achieve performance improvements.

                                 

                                Performance Guide
