Changed behavior

This release includes changes in behavior for the following connectors.

Cloudera 6.1 package

Effective in this release, the Cloudera 6.1 package, which contains the Informatica Hadoop distribution script and the Informatica Hadoop distribution property files, is part of the Secure Agent installation. The package also contains support for additional Hadoop distributions. You must have the license to use the Cloudera 6.1 distribution package.
To access the Hadoop distributions using Hive Connector or Hadoop Files V2 Connector, you must run the Hadoop distribution script and specify the distribution version for the mapping or elastic job. Even if you want to use only the CDH 6.1 distribution for the source or target, you must still download the CDH 6.1 libraries using the Hadoop distribution script.
Previously, you had to contact Global Customer Support to download and run the Informatica Hadoop distribution script.
Important: These changes do not apply to connectors such as Amazon S3 V2 Connector, Microsoft Azure Data Lake Storage Gen2 Connector, Google Cloud Storage V2 Connector, or Kafka Connector. To use the Cloudera CDH 6.1 libraries with these connectors, you require only the Cloudera CDH 6.1 license.
Steps to access the Hadoop distributions from the Cloudera 6.1 package
You must perform the following tasks to run the script from the Secure Agent installation location and access the Hadoop distributions:
  1. Go to the following Secure Agent installation directory where the Informatica Hadoop distribution script is located:
     <Secure Agent installation directory>/downloads/package-Cloudera_6_1/package/Scripts
  2. Copy the Scripts folder outside the Secure Agent installation directory on your machine.
  3. From the terminal, run the following command from the Scripts folder: ./infadistro.sh
  4. When prompted, select Data Integration or Data Integration Elastic as the job type for which you want to run the script.
  5. When prompted, specify the value of the Hadoop distribution that you want to use.
     The third-party libraries are copied to a directory based on the option that you selected in step 4, where the Hadoop distribution version in the directory path is based on the distribution that you specified.
  6. If you copy the Scripts folder to a machine where the Secure Agent is not installed, perform steps 4 and 5 on that machine.
  7. Set the INFA_HADOOP_DISTRO_NAME property for the DTM in the Secure Agent properties, and set its value to the distribution version that you want to use.
  8. Restart the Secure Agent.
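For example, on a Linux machine, steps 1 through 3 might look like the following commands. This is a minimal sketch: /home/agent/infaagent stands in for the Secure Agent installation directory, and /tmp/Scripts is an example location outside the installation.
  # Steps 1 and 2: locate the Scripts folder in the agent installation and copy it
  # outside the Secure Agent installation directory
  cp -r /home/agent/infaagent/downloads/package-Cloudera_6_1/package/Scripts /tmp/Scripts
  # Step 3: run the Informatica Hadoop distribution script from the copied folder.
  # The script then prompts for the job type and the distribution value (steps 4 and 5).
  cd /tmp/Scripts
  ./infadistro.sh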
Hadoop distributions applicable for mappings and elastic mappings
The following table lists the supported distribution versions that you can access from the Cloudera 6.1 distribution package for mappings and elastic mappings. You must specify the appropriate Hadoop distribution values when you run the Hadoop distribution script and set the DTM property based on the Hadoop distribution that you want to access:
Jobs                          Hadoop Distribution               Distribution Option in infadistro.sh Script   Value in DTM Flag
Data Integration*             Cloudera CDH 6.1                  CDH_6.1                                        CDH_6.1
                              Hortonworks HDP 3.1               HDP_3.1                                        HDP_3.1
                              Amazon EMR 5.20                   EMR_5.20                                       EMR_5.20
                              Azure HDInsight 4.0               HDInsight_4.0                                  HDInsight_4.0
Data Integration Elastic**    Cloudera CDH 6.1                  CDH_6.1                                        CDH_6.1
                              Cloudera CDP 7.1 private cloud    CDP_7.1                                        DTM flag is not required.
                              Cloudera CDW 7.2 public cloud     CDW_7.2                                        DTM flag is not required.
                              Amazon EMR 6.1, 6.2, and 6.3      EMR_5.20                                       EMR_5.20
                              Azure HDInsight 4.0               HDInsight_4.0                                  HDInsight_4.0
Note: For Amazon EMR 6.1, 6.2, and 6.3, the EMR_5.20 distribution option and DTM flag value apply.
*Applies to Hive and Hadoop Files V2 Connector. **Applies to Hive Connector.
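For example, to run Data Integration mappings against Hortonworks HDP 3.1 after you download the libraries, the DTM property would be set as follows. The field layout below is an illustration of the Secure Agent custom properties; the property name and value come from the table above.
  Service: Data Integration Server
  Type:    DTM
  Name:    INFA_HADOOP_DISTRO_NAME
  Value:   HDP_3.1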

Capture debug logs

Effective in this release, if you want the session to capture the debug logs, set the following properties:
  1. In the Custom Configuration section in the agent properties, set the LOGLEVEL=DEBUG flag as a DTM property for the Data Integration Server.
  2. On the Schedule page in the mapping task properties, select the Verbose execution mode.
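For reference, the LOGLEVEL entry in the Custom Configuration section would look like the following; the field layout is an illustration of the agent's custom properties:
  Service: Data Integration Server
  Type:    DTM
  Name:    LOGLEVEL
  Value:   DEBUG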
To exclude the debug logs, change the Verbose execution mode to Standard in the mapping task properties.
Previously, the session logs included the debug logs when you set the LOGLEVEL=DEBUG property and ran the mapping task in Standard execution mode.

Error messages for elastic mappings

Effective in this release, when an elastic mapping fails, the error messages that appear on the user interface are standardized and do not contain the stack trace from exceptions. For details of the error message, you must check the session log.
Previously, error messages contained the exception stack trace and internal details in the message description.

Expression transformations in mappings enabled for pushdown optimization

Effective in this release, when you configure a mapping for pushdown optimization, you can continue to add an Expression transformation to each of multiple sources, followed by a join downstream in the mapping.
Additionally, you can add multiple Expression transformations that branch out from a transformation and then branch back into a single transformation downstream in the mapping.
Previously, if the mapping contained multiple Expression transformations that were connected to a single transformation downstream, pushdown was disabled and the mapping ran without pushdown optimization.

Amazon S3 V2 Connector

Effective in this release, when you specify the customer master key ID in the connection properties and select server-side encryption as the encryption type for complex files, the target file is encrypted with server-side encryption.
Previously, the target file was encrypted with server-side encryption with KMS.
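For context, the following AWS CLI calls show the difference between the two encryption modes at the S3 API level. The bucket and object names are examples; the connector performs the equivalent internally.
  # New behavior: server-side encryption with Amazon S3 managed keys (SSE-S3)
  aws s3api put-object --bucket example-bucket --key output/part-0001 \
      --body part-0001 --server-side-encryption AES256
  # Previous behavior: server-side encryption with KMS (SSE-KMS)
  aws s3api put-object --bucket example-bucket --key output/part-0001 \
      --body part-0001 --server-side-encryption aws:kms \
      --ssekms-key-id <customer master key ID>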

Google BigQuery V2 Connector

Effective in this release, you can suppress post-SQL commands when an error occurs while the mapping writes data to the target table.
Previously, the post-SQL commands ran even if the mapping failed to write the data to the target.

Microsoft Azure Synapse SQL Connector

Effective in this release, Microsoft Azure Synapse SQL Connector includes the following changes:

SAP BAPI Connector

Effective in this release, you can select the Jco Trace option in the BAPI connection properties to store information about the JCo calls in a trace file.
Previously, the Jco Trace option was not available. You could store the information in a trace file only by defining the Trace parameter in the SAP Additional Parameters field.
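For comparison, the previous approach required a Trace entry in the SAP Additional Parameters field, along the lines of the following. This is a hypothetical illustration; the exact parameter name and syntax that the field accepts may differ.
  TRACE=1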