4 Replies Latest reply on Jul 1, 2021 4:33 PM by Arya Nymeria

    Read flat file and fail the workflow with an exit code

    Arya Nymeria Active Member

      Hello Gurus, I am trying to read a file which has three columns. I created a mapping to read it; the second column can have two values, 'P' or 'F'. If I find the value 'P', the exit code is 0 and the workflow should succeed as usual. If 'F', the exit code is 66 and the workflow should fail. Since my environment is Windows, how can we achieve this? I was successful in creating the workflow but not in working out how to pass the exit codes and fail the workflow. Any suggestions?


      Sample data in the file is as below.

      [sample data image not preserved]

      The main focus is the second column, which says pass or fail. Thank you very much.
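      For context, the intended behavior could be sketched in plain Python outside PowerCenter. The file layout here is an assumption (comma-delimited, three columns), since the sample data is not preserved:

```python
import sys

def check_file(path):
    """Return 66 if any record carries 'F' in the second column, else 0.

    The file layout is an assumption: comma-delimited, three columns.
    """
    with open(path) as fh:
        for line in fh:
            cols = line.rstrip("\n").split(",")
            if len(cols) >= 2 and cols[1].strip() == "F":
                return 66
    return 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(check_file(sys.argv[1]))
```

      As the replies below explain, PowerCenter itself cannot surface a custom code like 66 this way; the sketch only illustrates the requested behavior.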

        • 1. Re: Read flat file and fail the workflow with an exit code
          Nico Heinze Guru

          First a technical note: forget about the exit code 66. You can NOT hand over a specific exit code to the calling shell (or batch file or PowerShell script or whatever), only 0 for success or != 0 for failure.
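          To illustrate the point, here is a minimal Python sketch of what a calling wrapper can actually see. The child command is a stand-in (not a real pmcmd invocation); the caller can only branch on zero versus non-zero:

```python
import subprocess
import sys

# Stand-in for the real call (e.g. a pmcmd invocation); here we just
# spawn a child Python process that exits with a non-zero code.
child = [sys.executable, "-c", "import sys; sys.exit(3)"]

result = subprocess.run(child)

# The wrapper distinguishes success (0) from failure (non-zero);
# a specific code such as 66 cannot be handed up from inside the workflow.
if result.returncode != 0:
    print("workflow failed, return code", result.returncode)
else:
    print("workflow succeeded")
```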


          Second, another technical note: it is not a really good idea to fail a workflow as an indicator for some particular content. The point is: when the workflow fails, how do you know WHY it failed? Did it fail due to this special content (in your case, an 'F' in the second column of the input file), or did it fail because some network hiccup caused loss of the connection between the workflow and the Integration Service? Or the connection of the Integration Service to the repository database? Or some database problem within the DBMS itself?

          And so on and so on. There are numerous reasons why a workflow could fail, and all of them need some attention by some skilled personnel.


          Not to forget that most customers don't like the thought of a workflow in production environments to fail because some input data is "wrong" or whatever. Most customers I've seen so far have set up some monitoring system for (at least) their production environment, and whenever a workflow fails in production, some red light is turned on, and several people get very nervous until someone has found out why (and possibly remedied that situation).

          For this reason alone I usually refrain from using a workflow status as an indicator for some special data. In most such cases I suggest that the complete process be redesigned so that no workflow failure is "needed" as an indicator for some unusable data.


          Having written that, you can build a mapping such that the workflow fails "cleanly" if the second column bears an 'F'. Before describing how, we need to know whether this file always contains one line or (as in your example) may contain several lines; if the latter is the case, it's important to know when exactly the workflow shall fail: shall it fail if all records have an 'F'? If at least one record contains an 'F'? If at least half of the records contain an 'F'? Or something else?

          That needs to be clarified, please.
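          To make the alternatives concrete, here is how the three policies differ on a small sample, sketched in Python (the sample values are invented):

```python
# Hypothetical second-column values from a multi-row input file.
flags = ["P", "F", "P", "P"]

fail_count = sum(1 for f in flags if f == "F")

fail_if_any  = fail_count > 0                # fail when at least one 'F'
fail_if_all  = fail_count == len(flags)      # fail only when every record is 'F'
fail_if_half = fail_count >= len(flags) / 2  # fail when at least half are 'F'

print(fail_if_any, fail_if_all, fail_if_half)  # True False False
```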


          Now for the general outline.

          You start with a mapping which "analyses" the source file and identifies whether to fail the workflow or not; I'll explain one example below.

          Now as soon as you have found out whether to fail the workflow or not, you set some port Fail_WF (type Integer) to 0 if the workflow shall succeed and to 1 if it shall fail.

          You forward this port (and no other ports) to a Filter having this condition:

            Fail_WF = 1

          Then you forward this value to a flat-file target with one column of type Number(1).

          Now comes one trick: in the session, you set the target directory to a local directory which exists on each PowerCenter server machine (for example, C:\Temp , but make sure at all costs that this directory DOES exist on each machine) and the file name to NUL . This will lead to the actual file content being written to Nirvana (the NUL device is a "bit bucket" and simply doesn't write anything anywhere).
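          The same "bit bucket" behaviour can be observed from any language. In Python, for example, `os.devnull` names the platform's null device (`nul` on Windows, `/dev/null` elsewhere), and everything written to it is simply discarded:

```python
import os

# os.devnull resolves to the platform's null device:
# 'nul' on Windows, '/dev/null' on POSIX systems.
print(os.devnull)

# Writing to it succeeds, but the bytes are simply discarded.
with open(os.devnull, "w") as sink:
    sink.write("this row is written to Nirvana\n")
```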


          After this session, insert a Control task set to Fail Workflow; set the workflow link between the session and this control task to this condition (assuming that the session is named s_check_file):

            $s_check_file.TgtSuccessRows != 0

          This will make sure that the Control task is executed only if the session did produce one output record, and - thanks to the Filter transformation - this session will produce an output record (even though it vanishes into the NUL device) only if the port Fail_WF was set to 1.


          Now one last thing, namely the mapping.

          For this description I assume that the input file may contain more than one row. Also I assume that the workflow should fail if at least one record contains an 'F' in the second column.

          Simply forward the second port from the input file to an Aggregator transformation. This AGG has no Group-By ports, one input port named FLAG_PORT (namely the second column from the input file), and one output port Fail_WF (type Integer) with the following expression term:

            Count( *, FLAG_PORT = 'F') > 0

          This will make sure that the output port is set to TRUE (=1) if the count of records with FLAG_PORT = 'F' is greater than 0, meaning that at least one such record exists.
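          The expression is just a conditional count. A Python equivalent of what the Aggregator computes (the sample rows are invented):

```python
# Invented sample rows: (first column, FLAG_PORT, third column).
rows = [("id1", "P", "x"), ("id2", "F", "y"), ("id3", "P", "z")]

# Equivalent of Count(*, FLAG_PORT = 'F') with no group-by ports:
# count matching records across the whole input.
f_count = sum(1 for _, flag, _ in rows if flag == "F")

# TRUE is converted to 1 in the Integer output port Fail_WF.
fail_wf = 1 if f_count > 0 else 0
print(fail_wf)  # 1, because one record carries an 'F'
```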




          1 of 1 people found this helpful
          • 2. Re: Read flat file and fail the workflow with an exit code
            Arya Nymeria Active Member

            Thank you very much. It helped.

            • 3. Re: Read flat file and fail the workflow with an exit code
              Nico Heinze Guru

              Out of curiosity, may I ask what exactly you implemented in the end? That might help other people to find a good solution for their needs in similar situations.


              Happy to help when I can. In particular because I didn't give much specific help here.




              • 4. Re: Read flat file and fail the workflow with an exit code
                Arya Nymeria Active Member

                Hey Nico, thank you for all your help. I was able to implement the solution, but there were requirement changes that forced us not to go down that route. The process is now being implemented by the Hadoop team.