Two ways I see:
1) Make sure that the header line is always fed into the target, even if no "real" source data are available.
You can do that either in the mapping itself, using e.g. a Joiner / Union transformation and appropriate mapping logic, or via another mapping which runs before the load mapping and creates the header line; in that case the actual load mapping would no longer use the "Output field names" feature.
2) Create a post-session success command which checks whether the target file is empty and, if so, writes the header line to it.
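For approach #2, the post-session success command could be a small shell script along these lines (a sketch only; the file name and header text are invented, so substitute your session's actual output file and column names):

```shell
# Simulate an empty target file left behind by the session
# (in a real workflow the Integration Service creates this file).
TARGET=empty_target.out          # assumption: the session writes this file
touch "$TARGET"

# -s is true only if the file exists AND is non-empty,
# so the header is written only when the session produced no rows.
if [ ! -s "$TARGET" ]; then
  echo "PERSON_ID,NAME" > "$TARGET"
fi
```

You would configure this (minus the `touch` simulation) as the session's post-session success command, so it runs after every successful load.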
Thank you so much for your reply. I am new to Informatica, so:
1. Could you please elaborate more on the step "You can do that either in the mapping itself using e.g. a Joiner / Union transformation and appropriate mapping logic"?
2. How can we create only the header line through a mapping, for the step "another mapping which runs before the load mapping and creates the header line"?
Approach #1 (mapping logic) would mean this:
You forward your source data to a Joiner transformation as the Detail Input stream. Make sure that each record in this stream is given a running number starting with 1, meaning each record to be written to the target is numbered from 1 to n (n being the total number of records to be written to the target file).
The Master Input stream of this Joiner would be a Source Qualifier (e.g. bound to DUAL in Oracle or SYSIBM.SYSDUMMY1 in DB2) which delivers one single record, namely the field headers for the output file. Send this to the Joiner along with another port of value 0.
Now set up the Joiner to perform a Full Outer Join based on some number port, and set it to Sorted Input.
The join number for the master stream is the value 0 from the "dummy" source.
The join number for the detail stream is the running number (1 to n) from the "real" source.
Thanks to the Joiner being set to Full Outer Join and Sorted Input, the Joiner will forward the master record (with the field headers) to the target file first, followed by all "real" data records.
After the Joiner you will need an Expression transformation which checks which of the two sides (the master side or the detail side) has sent data for this particular record. This can be done by checking whether the join number from the master side is NULL; if so, you have to forward the detail data to the target file, otherwise you have to forward the master data (the field headers) to the target file.
One caveat: make sure that your flat file target definition contains String fields only, otherwise you can't send the field headers (which are of course strings) to the flat file through the Joiner. You simply can't write a field header "PERSON_ID" to a field of data type NUMBER.
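The core idea of this Joiner trick (key 0 for the header record, keys 1..n for the data, then sort on the key so the header comes out first) can be mimicked with plain shell tools. This is only an illustration of the ordering logic, not Informatica itself; the file names and header are made up:

```shell
# The "real" source data
printf 'Alice,30\nBob,25\n' > data.csv

{ printf '0|NAME,AGE\n'        # master record: key 0 plus the header line
  nl -s'|' -w1 data.csv        # detail records: running numbers 1..n
} | sort -t'|' -k1,1n \
  | cut -d'|' -f2- > out.csv   # sort on the key, then drop it

cat out.csv                    # header line first, then the data rows
```

The `sort` on the numeric key plays the role of the Sorted Input Full Outer Join, and the `cut` plays the role of the Expression transformation that forwards only the payload fields.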
Approach #2 works basically the same way, but the logic is split up into two mappings. The first mapping reads some "dummy" source (see above, DUAL or SYSDUMMY1 or whatever) and writes the field headers to its target (which should have only string fields). The session for this mapping would create a new target file.
Your current mapping would then be executed by the second session; it simply appends all its output to the already existing target file (which contains the field headers).
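In shell terms, the two-session split boils down to this (file name and header invented for the sketch):

```shell
# First session: create the target file with only the header line.
echo "PERSON_ID,NAME" > header_first.out

# Second session: append the "real" data to the existing file.
printf '1,Alice\n2,Bob\n' >> header_first.out
```

The crucial detail is that the second session must be configured to append to, not overwrite, the target file.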
Functionally both approaches are equivalent. Some people don't like having more than one session per workflow, so it's more or less a matter of policies and habit which approach to take.
Personally I would prefer a mixture of both approaches: one mapping with two target load paths, the first target load path just writing the header line to the target file, the second target load path writing the "real" data to the output file. This is in my opinion the easiest approach, but you have to understand how the so-called Target Load Plan works in a PowerCenter mapping. It's not difficult, but some people simply don't like this approach.
At session level, use the Header Command option; it can be used to generate the header row in the file target in either case. This will add a header row to the file target once the target file is created by the Integration Service.
To be honest, I haven't tried this with more than one output file per session; did you try your approach with many flat files (because the mapping is supposed to create varying file names)?