For this mapping (my requirement), I have 3 different sources. Each source sends one file to its own folder (one file per folder per source), but the bucket name is the same for all three.
At any point in time, there would be only a single file in a folder.
Do you still think a .manifest file is required here?
As per the Support article, the manifest file should be available under the S3 bucket, and at the session level the folder path property should be set as <bucket name>/<folder name>/<file name>.manifest.
When we use AWS S3 as a source, we do not have any property called folder path at the session level, like below.
So I assume that when it says "at the session level", it means we need to update the folder path property in the connector.
I have therefore updated the folder path in the connector to <BUCKETNAME>/<FOLDERNAME>/manifestfilename.manifest
After making these changes, I executed my workflow but got the below error.
I checked with our Cloud Ops person, and per him we have access to the bucket, because I was able to export the source definitions from these S3 paths...
Can someone please suggest what I am doing wrong here?
You have to source the .manifest file in the source object. So instead of sourcing the data file directly, you source the manifest file, which reads the first file listed in the manifest to derive the metadata definition for all the files. It assumes all files in the manifest have the same structure.
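For illustration, a manifest file in this style is typically a small JSON document sitting in the bucket that points at the data files or prefixes to read. The exact schema depends on your connector version, so treat this as a hedged sketch (the <BUCKETNAME>/<FOLDERNAME> placeholders mirror the path above) and verify the fields against the connector documentation:

```
{
    "fileLocations": [
        {
            "URIPrefixes": [
                "s3://<BUCKETNAME>/<FOLDERNAME>/"
            ]
        }
    ],
    "settings": {
        "stopOnFail": "true"
    }
}
```

With one file per folder, each manifest would list just that folder's prefix, which is why three manifests (one per layout) are suggested below in the thread.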
--side note: Please note that PowerExchange for S3 will at some point be EOL. Informatica is no longer actively selling PowerExchange for cloud and big data applications, which S3 was a part of. Any connectivity to cloud-based stores such as Blob, S3, and ADLS will require an IICS footprint to interface with. You also have greater flexibility with S3 in IICS, including features that do not exist in PowerCenter.
Hmm, there is a catch here: the structures of my 3 source files are all different.
Well, sure. In your previous post you mentioned you have 3 different sources, so you would have 3 manifest files, each one tied to a specific source layout. That way you can read them dynamically.
This has been solved. I developed a workaround: rather than reading the files from the AWS S3 bucket directly, I copied the files from the S3 bucket to the infa_shared path on the Unix server.
Refer to this post of mine for the detailed approach: Read the latest .csv file from AWS S3 bucket
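For reference, the staging step in that workaround can be sketched with the AWS CLI. The bucket name, folder name, and infa_shared target path below are placeholders (not the actual values from this thread), and the copy would typically be wired into a pre-session command task:

```shell
# Hypothetical staging sketch: copy the single source file from S3 into the
# infa_shared source-file directory, so the session can read it as a local
# flat file instead of going through the S3 connector.
# Requires the AWS CLI configured with credentials that can read the bucket.
BUCKET="my-bucket"                        # placeholder bucket name
FOLDER="source1"                          # placeholder folder name
TARGET="/infa_shared/SrcFiles/source1/"   # placeholder infa_shared path

# Build and print the copy command; run it (e.g. from a pre-session
# command task) once the placeholders are replaced with real values.
CMD="aws s3 cp s3://${BUCKET}/${FOLDER}/ ${TARGET} --recursive"
echo "$CMD"
```

Since there is only one file per folder at a time, a recursive copy of the folder prefix stages exactly that one file.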