First a question: how do you access that VSAM file? Using PowerExchange for mainframes (in this case, PWX for complex flat files, if I recall correctly)? Or by some other means?
Imported as a COBOL source file, not using PowerExchange.
I do not have any OCCURS or REDEFINES in the COBOL Structure
Below is what the SAMPLE structure looks like:
05 HDR PIC X(15).
05 RUN-DATE PIC X(25).
05 RUN-TIME PIC X(25).
05 JOB-ID PIC X(25).
05 ID PIC X(25).
05 ADDR PIC X(50).
05 MODE PIC X(10).
05 TOTAL PIC X(10).
I am not sure how to read the part of the data coming in the header section as a detail record, as I am new to COBOL sources.
This does not look to be the correct copybook.
Header, detail and trailer have three different levels under level 01, which means they are present in one record, not like the one you provided in your example.
As per copybook, your source record will look like: Notification RUN DATE:01/01/2021 RUN TIME:01:30:12 JOB ID:12345C1|ZA|BN--TOTAL ROWS:2
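To make the fixed-width layout concrete, here is a minimal Python sketch that slices a record according to the field widths in the sample copybook above. This is illustrative only (PowerCenter does this for you when you import the COBOL source); the sample values are hypothetical and are padded or truncated to their field widths:

```python
# Fixed-width layout taken from the sample copybook above (name, width in bytes).
COPYBOOK_LAYOUT = [
    ("HDR", 15), ("RUN_DATE", 25), ("RUN_TIME", 25), ("JOB_ID", 25),
    ("ID", 25), ("ADDR", 50), ("MODE", 10), ("TOTAL", 10),
]

def parse_record(record: str) -> dict:
    """Slice one fixed-width record into named fields per the copybook."""
    fields, offset = {}, 0
    for name, width in COPYBOOK_LAYOUT:
        fields[name] = record[offset:offset + width].rstrip()
        offset += width
    return fields

# Hypothetical sample record: each value padded (or truncated) to its width.
values = ["Notification", "RUN DATE:01/01/2021", "RUN TIME:01:30:12",
          "JOB ID:12345", "C1", "ZA", "BN", "ROWS:2"]
rec = "".join(v.ljust(w)[:w] for v, (_, w) in zip(values, COPYBOOK_LAYOUT))
print(parse_record(rec)["RUN_DATE"])  # RUN DATE:01/01/2021
```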
Once you import the table into the mapping using the Normalizer, you can just select the columns required for the target.
If header, detail and trailer are different records (different 01 levels), then I think it is not possible using the Normalizer; we might try PowerExchange multi-record, but that needs some testing, which I have not done yet.
All these unclear points are the reason why I asked how the file is accessed.
Where is this file stored? In what code page? In what file format?
The reason for asking for the file format is this:
Assuming that the mainframe file has been copied to the PowerCenter server, there are different ways to perform such a "copy" operation. For example, mainframe-based FTP utilities can add line-end characters on their own (something which VSAM files simply don't contain). On the other hand, if the VSAM file has been copied to the PowerCenter server in binary format, you either have to use PowerExchange for Complex Flat Files (because this MAY be able to read VSAM structures; I'm not sure about that, that's a question for the PowerExchange forum) or to program your own access utility.

Mainframe files in general are always organised in ways which are completely unknown to the Windows or Unix/Linux world. That's why it's important to know where this file resides, how it's "physically" organised there, what code page it has been written in, and so on.
In general, accessing mainframe sources is not really complicated once you've got PowerExchange for mainframe installed and up and running (and the correct license options), but it's not at all easy if you try to read them like a "normal" CSV file.
My apologies; while being pernickety to the bone I completely forgot the original question (namely how to combine those record contents).
For the following part I assume that the file contains exactly one header record and one footer record, that the header record will physically be read before the first data record, and that the footer record will be read last from the file.
In this case your Application Source Qualifier (you probably will need one) will have three output groups, one for the header record, one for the data records, and one for the footer record.
So what you should do is to use two Joiner transformations to combine these data streams.
The first Joiner will combine the header record stream with all data records.
The second Joiner will then combine this data stream (consisting of the header record and the data records) with the footer record.
So in the end you will have a Joiner (JNR) forwarding data records to e.g. some Expression transformation (EXP); this EXP will find that EITHER the ports from the header record contain data OR the ports from the data record contain data OR the ports from the footer record contain data. There will never be two (or even three) of these port "groups" containing data at the same time.
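To illustrate the shape of that combined stream (this is a plain Python sketch of the data flow, not PowerCenter itself; the field names are made up): each row carries three port groups, and exactly one of them is populated at a time.

```python
from dataclasses import dataclass
from typing import Optional

# Each row in the combined stream carries three "port groups";
# exactly one group is populated, the other two stay None.
@dataclass
class CombinedRow:
    header: Optional[dict] = None
    detail: Optional[dict] = None
    footer: Optional[dict] = None

def combine(header_row: dict, detail_rows: list, footer_row: dict):
    """Yield the combined stream: header first, then details, footer last."""
    yield CombinedRow(header=header_row)
    for d in detail_rows:
        yield CombinedRow(detail=d)
    yield CombinedRow(footer=footer_row)
```

The ordering matters: the downstream Expression relies on seeing the header row before any detail row and the footer row last.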
No problem at all. The very first record in this EXP will be the header record, so you simply have to copy the values from those ports you need into some variable ports. These variable ports will never change for the remainder of the session (because they are set for the header record only).
While the data records are pushed through the EXP, you can e.g. count those data records and perform all sorts of other validation / enrichment based on these records. Save those results not only to some output ports but also keep them in some variable ports.
When the footer record is processed, you can e.g. compare the number of data records (counted in some variable port) with the expected number of records as stored in the footer record. If they match, fine, if they don't, you can do whatever is necessary to keep those data from being processed any further.
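The row-by-row logic of that EXP can be sketched roughly like this in Python (an illustrative sketch only, not PowerCenter syntax; the dict-based row representation and field names are assumptions). The point is that the "variable ports" persist across rows:

```python
# Each incoming row is a dict with a "type" key ("header", "detail", "footer"),
# mimicking the mutually exclusive port groups coming out of the Joiners.
def process(rows: list) -> list:
    v_job_id = None   # variable port: captured once, from the header record
    v_count = 0       # variable port: running count of detail records
    out = []
    for row in rows:
        if row["type"] == "header":
            v_job_id = row["JOB_ID"]          # remember the header values
        elif row["type"] == "detail":
            v_count += 1                      # count data records
            out.append({"ID": row["ID"], "JOB_ID": v_job_id})  # enrich detail
        elif row["type"] == "footer":
            # compare the counted rows with the expected number from the footer
            if v_count != int(row["TOTAL"]):
                raise ValueError("row count does not match footer total")
    return out
```

If the counts don't match, this sketch raises an error; in the real mapping you would instead route the rows somewhere safe so they are not processed any further.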
This is a very rough outline. No offense intended, but I firmly advise you to make sure you understand this general process before we go into the details. Not because I want to tell you how to do your job, but because I want to make sure you understand the general approach before you try to implement anything. This approach is not particularly complicated, but nevertheless there are many opportunities to introduce errors into the process for those who are not experienced enough (and I fear from your question that you are not too experienced with this kind of task).