
    The schema refresh for an Amazon S3 flat file read data object is not working as expected in an Informatica BDM dynamic mapping.

    yuva teja movva New Member

      Hi All,

       

      We have a requirement to read multiple gzip-compressed flat files with different schemas from an Amazon S3 bucket.

      We created a dynamic mapping for this, selected the option "At runtime, get data object columns from data source", and ran the mapping.

      What we observed is that when the Data Integration Service (DIS) reads the schema at runtime, it tries to fetch the schema directly from the gzip file without decompressing it, and as a result garbage values are pulled from S3 as column names.

       

      Note: we have set the compression format to gzip in the data object's properties.
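
      To illustrate what we think is happening, here is a rough diagnostic sketch outside of BDM (Python with boto3; the bucket and key names below are just placeholders for our files). Reading the first line of the raw S3 object gives compressed binary garbage, while decompressing it first gives the real header row:

          import gzip
          import io

          import boto3  # assumes AWS credentials are already configured

          s3 = boto3.client("s3")
          obj = s3.get_object(Bucket="my-bucket", Key="landing/customers.csv.gz")
          raw_bytes = obj["Body"].read()

          # Reading the compressed bytes directly, as the schema import seems to do,
          # gives binary garbage instead of column names.
          print("Raw first line:", raw_bytes.split(b"\n", 1)[0][:60])

          # Decompressing first gives the actual delimited header row.
          with gzip.GzipFile(fileobj=io.BytesIO(raw_bytes)) as gz:
              header = gz.readline().decode("utf-8").rstrip("\r\n")
          print("Decompressed columns:", header.split(","))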

       

      Similarly, we tried to read a flat file directly from Amazon S3 using the option "At runtime, get data object columns from data source", but at runtime we observed that all the column names are concatenated into a single column,

      e.g. field_1_field_2_field_3
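
      To make the symptom concrete, it behaves as if the column delimiter is not applied when the header row is imported (the header below is only a made-up example):

          # Sample header line, for illustration only.
          header_line = "field_1,field_2,field_3"

          # Expected schema import: split the header on the configured delimiter.
          print(header_line.split(","))           # ['field_1', 'field_2', 'field_3']

          # What we actually see: the row is treated as one field, and the delimiter
          # characters end up as underscores inside a single column name.
          print([header_line.replace(",", "_")])  # ['field_1_field_2_field_3']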

       

      We do not face this issue when we use Parquet files.

       

      So, is this a known bug in Informatica BDM (10.2.2), or are we missing something?

       

       

      Thanks,

      Yuva