9 Replies Latest reply on Apr 28, 2020 1:18 AM by user165569

    Informatica BDM mapping with SPARK engine writes so many partition files to HDFS

    Vardhan Reddy Active Member

      We have a mapping that runs on the Spark engine and writes its output to a Hive table. When we check the external Hive table's HDFS location after the mapping executes, we see a very large number of file splits, most of them extremely small, and only 3-4 files containing the data we actually need.


      Could you please let us know whether we can force the Spark engine to write the data to HDFS in only a few partitions? Is there a parameter we can pass in the mapping's run-time properties to limit the number of output files?
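
      For context on why this happens: in plain Spark, any shuffle stage (joins, aggregations, sorts) produces `spark.sql.shuffle.partitions` output partitions (the Spark default is 200), and each non-empty partition becomes one file on write, which is why mostly-empty tiny files appear alongside the 3-4 files with real data. A common remedy is to lower that setting. As a sketch only (I have not verified where Informatica BDM exposes Spark properties for a given version; the assumption is that the mapping's run-time advanced Spark properties, or the Hadoop connection's Spark configuration, accept standard Spark key=value pairs):

      ```
      # Hypothetical config fragment -- a standard Spark property, assumed to be
      # settable via the mapping's run-time (Spark) advanced properties.
      # Controls how many partitions a shuffle stage produces, and therefore
      # how many files a post-shuffle write creates. Spark's default is 200.
      spark.sql.shuffle.partitions=10
      ```

      Note this is a global setting for the Spark application, so it affects every shuffle in the mapping, not just the final write; choose a value large enough to keep the heavier shuffles parallel.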