
Informatica Platform


What are we announcing?


Who would benefit from this release?

This release is for all Data Engineering and Data Catalog customers and prospects who want to take advantage of the updated Hadoop distribution support as well as fixes to core platform, connectivity, and other functionality. You can apply this Service Pack after you install or upgrade to Informatica 10.4.0. If you are already on 10.4.0, you can install the Service Pack directly.

What’s in this release?

Data Engineering PAM

  • HDI 3.6 WASB support

Data Engineering Streaming

  • General availability of Databricks Delta Lake targets in streaming mappings
  • General availability of Snowflake targets in streaming mappings
  • General availability of dynamic mapping support with Confluent Kafka sources in streaming mappings
  • Support for the latest Apache Kafka 2.4, Confluent Kafka 5.4, and other PAM updates
  • Performance enhancements


  • Complex Files (DEI): When you read from or write data to complex files, the FileName port now contains the absolute path, including the file name.
  • PowerExchange for Microsoft Azure Data Lake Storage Gen2: The FileName port now contains the endpoint name and the source file path.
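Downstream logic that consumed only a file name may need adjusting now that the port carries the full path. A small sketch, assuming a purely hypothetical FileName port value, of splitting the value into endpoint, directory, and file name:

```python
import posixpath

# Hypothetical FileName port value for an ADLS Gen2 source: the port now
# carries the endpoint name and the absolute source file path.
filename_port = "myaccount.dfs.core.windows.net/mycontainer/landing/2020/01/orders.csv"

# Split off the endpoint, then separate directory and file name.
endpoint, _, path = filename_port.partition("/")
directory = posixpath.dirname(path)
file_name = posixpath.basename(path)

print(endpoint)    # myaccount.dfs.core.windows.net
print(directory)   # mycontainer/landing/2020/01
print(file_name)   # orders.csv
```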

Enterprise Data Catalog


  • Qlik Sense: New resource to extract metadata from applications, stories, and sheets. You can also view lineage derived from the data source.
  • Apache Kafka: New resource to extract metadata from topics in an Apache Kafka cluster. Enterprise Data Catalog integrates with Informatica Intelligent Structure Discovery to discover the message format or leverage the Confluent Kafka Registry.
  • MongoDB: New resource to extract metadata from collections. Enterprise Data Catalog integrates with Informatica Intelligent Structure Discovery to discover the record format. The resource supports metadata extraction only.


  • Lineage enhancements in the Compact View: added transformation logic and expand and collapse functionality for all resources.


Support for:

  • Microstrategy 11.x
  • PowerCenter 10.2HF2
  • QlikSense 3.2
  • Apache Kafka 2.3.0 and Confluent Kafka 5.3.1
  • MongoDB 2.6, 3.4, 3.6, 4.0 and 4.2

Enterprise Data Preparation

  • Support for HDI 3.6 with WASB

Release Notes & Product Availability Matrix (PAM)

Informatica Release Notes:

PowerExchange Adapters for Informatica Release Notes:

Informatica Release Guide:

PAM for Informatica

You can download the Service Packs from here.

What is being announced?


Starting with the release of Informatica Platform 10.4, Solaris 11 Server will no longer be supported. Informatica Platform is a set of data services and information solutions powered by a state-of-the-art data management engine. The End of Life Announcement is applicable for the following products:

  • PowerCenter
  • Big Data Management (BDM), Enterprise Data Catalog (EDC), Enterprise Data Preparation (EDP), and Big Data Streaming (BDS) *
  • Data Quality (DQ)
  • Data Transformation (B2B DT)
  • Test Data Management (TDM) and Data Archive
  • Data Validation Option (DVO)
  • Data Integration Hub (DIH)
  • B2B Data Exchange (DX)
  • Metadata Manager, Profiling and Business Glossary
  • Master Data Management (MDM) including Multidomain MDM, Customer 360, Supplier 360, Product 360*, and Informatica Identity Resolution (IIR)

* These products currently do not support Solaris platform.


Why is Informatica announcing this product end of life?


In 2017, Oracle changed the roadmap for Solaris when it cancelled Solaris 12 in favor of a minor release that caters only to security fixes and patches, with no new features or advances in the operating system. Oracle will continue to provide minor updates to the Solaris operating system and promises to sustain support through 2034, but it does not plan to release major updates in the future. Recent product download data, customer support calls, and sales trends show a decline in Solaris share.


How will this impact customers?

Informatica supports customers on Solaris servers on versions of Informatica that currently support Solaris, and will continue to do so for as long as those versions are supported by Informatica. You can also refer to the EOL Announcement published on Informatica Network. For the full list of End of Life (EOL) announcements, refer to the Informatica Products EOL Notification.


What alternative does a customer have?

As part of an ongoing effort to enable organizations to derive more business value from their investments, Informatica has decided to focus its efforts on delivering superior technology and timely solutions on those platform combinations that are themselves benefiting from investment by their respective vendors. To avoid critical impacts to business operations, customers should plan for a smooth transition to other IT platforms.


Informatica Global Customer Support

For more information, questions, or issues around any Informatica product, open a Support Case through Informatica Network. For additional information on Informatica Global Customer Support, refer to the Informatica Global Customer Support Guide.

What are we announcing?

The release of Informatica 10.4


Who would benefit from this release?

This release is for all customers and prospects who want to take advantage of the latest PowerCenter, Data Engineering Integration, Data Engineering Quality, Data Engineering Streaming, Enterprise Data Catalog, and Enterprise Data Preparation capabilities.


What’s in this release?

This update provides the latest ecosystem and connectivity support, security enhancements, cloud support, and performance enhancements while improving the user experience. Also, the Big Data product family has been renamed to Data Engineering.


The following product names have changed:

  • Big Data Management has changed to Data Engineering Integration.
  • Big Data Quality has changed to Data Engineering Quality.
  • Big Data Streaming has changed to Data Engineering Streaming.
  • Big Data Masking has changed to Data Engineering Masking.


Enterprise Data Catalog and Enterprise Data Preparation are aligned within the Data Catalog product family.

Data Engineering Integration (DEI)

Enterprise Class

  • CI/CD & REST initiatives: Use REST APIs to deploy, update, and query objects and to compare mappings that you develop in a CI/CD pipeline.
  • CLAIRE® recommendations and insights: Provides best practices recommendations for mappings during design time. It also gives insights into mapping design patterns.
  • Debugging enhancements: Collect aggregated cluster logs for a mapping in the Monitoring tool or by using an infacmd ms command.
  • Blockchain support: Connect to a blockchain to use blockchain sources and targets in mappings that run on the Spark engine. (Technical Preview)

Advanced Spark

  • Data Processor on Spark: Process unstructured and semi-structured file formats using the Data Processor transformation on the Spark engine.
  • Profiling on Spark: Run profiles and choose sampling options on the Spark engine. You can perform data domain discovery and run scorecards on the Spark engine.
  • Hierarchical Data Processing enhancements:
    • Midstream hierarchical data parsing: Parse hierarchical JSON and XML data in a midstream string port using intelligent structure models and complex functions.
    • Data preview: Preview hierarchical data using the Spark Jobserver in the Amazon EMR, Cloudera CDH, and Hortonworks HDP environments. Spark Jobserver allows for faster data preview jobs.
    • Intelligent Structure Discovery improvements: Processes additional input types such as ORC, Avro, and Parquet; creates an intelligent structure model from a sample file at design time; and arranges unidentified input data in the sample file as structured JSON in the output model.


PowerCenter

  • Security: Support for cross-realm Kerberos authentication and multiple LDAP servers in a single Informatica domain
  • Leverage cloud infrastructure: Installing PowerCenter on AWS is easier with added support for Amazon Linux as the server OS and PostgreSQL as a repository database
  • Transformation enhancements: Enhanced ability to consume REST web services using an HTTP transformation with added support for PUT, PATCH and DELETE methods
  • Productivity enhancements: New refresh option to refresh metadata in the PowerCenter Designer and Workflow Manager without requiring you to log in again
  • New connectors: PowerExchange for DB2 Warehouse, PowerExchange for PostgreSQL, PowerExchange for Dynamics 365 Sales (Rest API Based)
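The expanded method support in the HTTP transformation mirrors standard REST semantics. As a sketch only, using Python's standard library (the endpoint URL and payloads are hypothetical), PUT, PATCH, and DELETE requests differ mainly in the method verb:

```python
import urllib.request

# PUT replaces a resource, PATCH partially updates it, DELETE removes it.
base = "https://api.example.com/customers/42"  # hypothetical endpoint

put_req = urllib.request.Request(base, data=b'{"name": "Acme"}', method="PUT")
patch_req = urllib.request.Request(base, data=b'{"name": "Acme Corp"}', method="PATCH")
delete_req = urllib.request.Request(base, method="DELETE")

for req in (put_req, patch_req, delete_req):
    req.add_header("Content-Type", "application/json")
    # urllib sends the verb set above; nothing is sent until urlopen() is called.
    print(req.get_method(), req.full_url)
```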

Operational Insights

  • New Advanced Edition: Operational Insights Advanced Edition is a new paid offering on top of the free Operational Insights Base Edition
  • New Operator Console: An operational analytics console with a projects view for effectively viewing PowerCenter, Data Engineering Integration, Data Engineering Quality, and Data Quality operational data; includes permissions management with the Advanced Edition
  • CLAIRE alerts: Automatic detection of and alerts for job anomalies in PowerCenter workflows, based on elapsed time and data processed, with the Advanced Edition

Platform PAM

  • Database support added:
    • Oracle 19c
    • PostgreSQL
    • SQL Server 2019
  • Operating system support added:
    • Amazon Linux 2
    • Ubuntu
    • Windows Server 2019
    • zLinux
  • Operating system support dropped:
    • Solaris
  • Java support:
    • Azul OpenJDK 1.8.0_222
    • IBM JDK
    • Tomcat 7.0.96
  • Browser support:
    • Microsoft Edge Browser (Win 10) 44.18
    • Internet Explorer 11.x
    • Google Chrome 75.x
    • Safari 12.1.1

Platform Update

  • Support for cross-realm Kerberos authentication to allow Informatica nodes, application services, and users to belong to different Kerberos realms.
  • Support for Smart card-based Kerberos single sign-on.
  • Support for multiple LDAP servers in a single Informatica domain.
  • Support for Oracle Universal Directory for LDAP authentication.
  • Support for defining custom LDAP types.
  • Support for Microsoft Azure Active Directory for secure LDAP authentication.
  • Support for Microsoft Active Directory Federation Services 4.0 and PingFederate as Security Assertion Markup Language (SAML) identity providers.
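Cross-realm Kerberos authentication presumes that trust is already configured between the realms' KDCs. As an illustration only (the realm names are hypothetical, and the exact trust setup depends on your KDC), a krb5.conf `[capaths]` fragment declaring a direct trust path between two realms might look like:

```ini
[capaths]
# "." declares a direct trust path (no intermediate realm).
# Both directions are listed so principals from either realm
# can authenticate to services in the other.
    INFA.EXAMPLE.COM = {
        USERS.EXAMPLE.COM = .
    }
    USERS.EXAMPLE.COM = {
        INFA.EXAMPLE.COM = .
    }
```

The trust itself is established by creating matching cross-realm krbtgt principals (for example, krbtgt/INFA.EXAMPLE.COM@USERS.EXAMPLE.COM) with identical keys in both KDCs.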

Model Repository

  • Version Control System Support added:
    • Collabnet Subversion Edge 5.2.4
    • Bitbucket Server 6.4
    • Perforce 2019.1
    • Visual SVN 4.0.2
  • Version Control System Support dropped:
    • Collabnet Subversion Edge 5.2.2
    • Perforce 2014.2
    • Visual SVN - 3.6, 3.7

Informatica Container Utility

Data Engineering Streaming (DES)


Streaming Data Integration

  • Data Quality transformations in streaming mappings: Apply real-time Data Quality transformations to streaming data.
  • Lineage for streaming mappings: Ability to view the lineage for streaming mappings in Enterprise Data Catalog.
  • Enhanced OS support: Data Engineering Streaming is now supported on SuSE Linux.
  • Dynamic mapping support: Ability to run streaming mappings with dynamic mapping support. (Technical Preview)
  • CLAIRE® integration for industry-standard data parsing: Ability to parse HL7 messages using Intelligent Structure Discovery integration. (Technical Preview)
  • Enhanced support for CDC ingestion to Hadoop.
  • Enhanced monitoring capabilities for streaming jobs.
  • Latest Spark and Hadoop distribution support.

Streaming Analytics

  • Confluent schema registry support: Ability to parse complex messages from Kafka using schema from schema registry.
  • SSL support with Kafka: Ability to connect to secure Kafka cluster.

Cloud streaming support

  • Ephemeral cluster support: State support in cloud repositories and workflow support.
  • Serverless streaming in Azure Databricks: Ability to run streaming jobs in an Azure Databricks cluster.

Enterprise Data Preparation (EDP)


  • Upload files directly to the data lake: Data analysts and data scientists can now initiate the data preparation process by uploading files directly to the data lake, without waiting for IT to fulfill their request.
  • Microsoft Excel support: Excel files are now supported for data preparation. CLAIRE® technology helps in automatic table structure discovery used for data preparation and publication. (Technical Preview)
  • Publish as files: File-based data assets can now be prepared and published to the data lake as files, without dependence on a Hive or relational layer.
  • ADLS Gen2 support: Support for file-based data preparation on ADLS Gen2.
  • NULL value handling: Intelligent handling of NULL values during data preparation and publication.
  • Privilege control: You can now enforce privilege control on various user activities.
  • Stabilization and performance: Stabilization and performance improvements address intermittent failures and slowness in data preparation projects.

Enterprise Data Catalog (EDC)



  • Snowflake: New scanner that can extract object and lineage metadata. Lineage metadata includes view to table lineage.
  • Cassandra: New scanner to extract metadata from Cassandra keyspaces, tables, and views. Profiling is not supported.
  • AWS Glue: New scanner to extract metadata from the Glue catalog. The Glue scanner can extract metadata from sources in the AWS environment (S3, Redshift, DynamoDB, RDS) as reference objects. The scanner draws lineage from Glue objects to source objects. Lineage for ETL jobs is not supported.
  • Informatica Data Quality:
    • Extract rules, scorecard definitions, and results, as well as profiling and data domain discovery statistics, from an IDQ or a BDQ Model Repository Service and profiling warehouse. Users who have built Data Quality processes in IDQ/BDQ can now extract the quality scores and visualize them in EDC.
    • Extract profiling and data domain discovery statistics from an IDQ or a BDQ profiling warehouse. Users who have already run profiling and enterprise discovery in IDQ/BDQ can now extract these profiling results and visualize them in EDC.
  • Azure Power BI: New scanner to extract metadata for workspaces, dashboards, reports, datasets, and dataflows, as well as the lineage between them.
  • Google Cloud Storage: New scanner to extract metadata from files and folders. Refer to the PAM for supported file formats. Profiling is not supported.
  • SAP BW: New scanner to extract metadata, lineage, and relationships between SAP Business Warehouse objects. Profiling is not available. (Technical Preview)
  • SAP BW/4HANA: New scanner to extract metadata, lineage, and relationships between SAP BW/4HANA objects. Profiling is not available. (Technical Preview)
  • Informatica Data Engineering Streaming: The Informatica platform scanner now supports extracting metadata from streaming mappings, including streaming sources. Streaming sources are created as reference objects.
  • Google Big Query (Profiling): Support for column profiling and data domain discovery.
  • SAP HANA Database (Profiling): In addition to the extraction of the metadata, EDC is now capable of profiling the SAP HANA database tables and views to extract column profiling and data domain discovery statistics.

Scanner Framework

  • Reference Objects: Extract data lineage and referred objects directly from ETL, BI, and catalogs. Users can search, annotate, govern and view lineage for reference objects.
  • Offline scanner support added for:
    • Amazon Redshift
    • Amazon S3
    • Azure Data Lake Store
    • Azure Microsoft SQL Data Warehouse
    • Azure Microsoft SQL Server
    • Google BigQuery
    • Microsoft Azure Blob Storage
    • Salesforce
    • Workday
    • Axon
    • Business Glossary
    • Custom Lineage
    • Database Scripts
    • Informatica Intelligent Cloud Services
    • Erwin
    • SAP PowerDesigner
    • IBM Cognos
    • QlikView Business Intelligence
    • SAP HANA
    • Snowflake
    • AWS Glue
    • Google Cloud Storage
    • SAP BW
    • PowerBI
    • Cassandra
  • Custom Scanner Enhancements:
    • Support for profiling of sample files: Authors of custom metadata can now provide sample data files that are used to compute profiling statistics.

Business User Experience


  • Summarized lineage view: Users can view lineage at any level from the highest, system-wide view to the granular, field-level view. (Technical Preview)
  • Lineage filters: Users can now apply filters on the lineage view to hide object types and associated links from the view for better clarity.
  • Control lineage: Object dependencies generated by SQL WHERE clauses or lookups are considered control lineage and are now reported in EDC in the tabular summary view of the lineage diagram.
  • Resource-level attributes: Administrators can create custom attributes assigned to specific resources instead of the class type of any resource.
  • Export from search results: Users can now export a list of objects from search results. Import is now possible at a global level containing objects from multiple sources. Import and export jobs can be monitored in a central UI.

Data Provisioning (GA)


  • Data Provisioning: After discovery, users can now move data to a target where it can be analyzed. EDC works with IICS to provision data for end-users. Credentials are supplied by the users for both the source and the target.


  • Sources:
    • Databases: Oracle, SQL Server, Teradata, Hive, Redshift, Azure SQL DB, Azure SQL DW, JDBC
    • Data Lakes: S3, ADLS, Blob, HDFS
    • Applications: Salesforce


  • Targets:
    • BI: Tableau Server, Qlik
    • Data Lakes: S3, ADLS, Blob,  HDFS, Google Cloud Storage
    • Databases: Hive, Redshift, Google Big Query, Azure SQLDB, Azure SQL DW, JDBC, Teradata, Oracle, SQL Server
  • Live Data Preview: Users can now preview source data at the table level by providing source credentials.



  • Unique Key inference: The derived key (PK) information from datasets helps users better understand the characteristics of datasets.
  • Data Domain Discovery in Text (CLOB) fields: Data Domain discovery is now applied to CLOB fields during database source profiling.
  • Scalability
    • Profiling on Spark: Administrators can run profiles using the Spark engine for selected sources.



  • Deployment Support Added
    • Hortonworks HDP 3.1 GA
  • Source Support
    • Hive, HDFS on CDH 6.1, 6.2
    • Hive, HDFS on HDP 3.1
    • Informatica Data Quality 10.2HF2, 10.4
    • SAP BW 7.4 and 7.5
    • Oracle 19c
    • MS SQL Server 2019
    • Cassandra 3.11
    • Informatica Platform 10.2 HF2, 10.2.2HF1, 10.4
    • Informatica PowerCenter 10.2 HF2, 10.4
    • Azure Power BI
    • Google Cloud Storage
    • AWS Glue
    • Snowflake

Data Engineering Quality (DEQ)



  • Profiling on Spark:  Administrators can now run profiles using the Spark engine for selected sources.


  • Address Verification: Updated the Address Verification engine to version 5.15

(PAM and Platform updates as per DEI)


Cloud Ecosystems and Connectivity


  • Amazon:
    • S3 deferred policy check
    • Support for Aurora Postgres
    • Filename ports for both batch and streaming
  • Microsoft/ Azure:
    • ADLS Gen 2: New connector for the native and Spark environments including streaming across HDInsight and Databricks
    • SQL DW: VNET SE authentication support, PDO support via ODBC, Proxy support
    • Blob: Support for Shared Access Signature (SAS) authentication
    • Azure SQL DB: Support for managed instance
    • SQL Server always encrypted support
    • New Dynamics 365 Sales (common data service) support
  • Snowflake
    • Dynamic mapping support for Snowflake
    • Performance improvement
    • Database Push Down Optimization (PDO)
    • Support for Snowflake target in streaming mappings. (Technical Preview)
  • Google
    • BigQuery connector optimized for performance and scale for Spark execution
    • Support for global regions
    • Folders support for Google cloud storage
  • Databricks
    • Databricks is now supported on both the Azure and AWS ecosystems, as well as with Snowflake
    • Databricks Delta Lake support with additional transformations including Streaming pipeline support (Technical Preview)
  • SFDC
    • Support for API 47
    • Salesforce Marketing Cloud support
  • SAP
    • Core Data Services (CDS) support
    • Calculation and Analytics Views
    • Additional data type support, including HANA DB LTRIM and RTRIM
  • Oracle
    • Oracle 19c RAC support
    • Essbase Certification
    • JD Edwards Enterprise One 9.2 Certification
  • Enterprise Data Warehouses
    • Greenplum 5.1 support
    • DB2 Warehouse on Cloud support
    • Minor enhancements to Teradata connector
  • Technology
    • New Kafka connector for PowerCenter and PowerCenter real-time.
    • PowerExchange for JDBC V2: Support for Spark and Databricks to connect to Aurora PostgreSQL, Azure SQL Database, or any database that supports the Type 4 JDBC driver.
    • Enhancements to Complex File, Teradata, MongoDB, Cassandra connectors
    • Enhanced ability to consume a REST service using an HTTP transformation with added support for the PUT, PATCH, and DELETE methods
    • Support for MQ 9.1, JMS 2.0

PowerExchange (PWX Mainframe & CDC)


PAM Changes

  • Database version support added:
    • Db2 for i 7.4
    • Db2 LUW V11.5
    • Oracle 18c
    • MySQL 8.0
    • PostgreSQL 10.x and 11.4
    • Windows 2019 for source & target
  • Dropped support for CICS/TS 4.1
  • Database version support dropped:
    • z/OS Adabas 8.1 and 8.2
    • z/OS Datacom 12 and 14
    • z/OS Db2 9.1 and 10
    • z/OS IDMS 17 and 18
    • z/OS IMS 10, 11, and 12
    • Db2 for i 7.1
    • Oracle 11g R2 & 12c R1 (Linux, UNIX, and Windows)
  • Operating system support dropped:
    • z/OS 1.11, 1.12, and 1.13
    • i5/OS 7.1

New and Extended Utilities

  • A new IBM i installer that runs on Windows guides users through installing and configuring PowerExchange on i5/OS. It can perform both full installations and upgrades.
  • The z/OS Installation Assistant now supports IBM PDSE data sets.
  • The new PWXUMAP utility provides additional reporting capabilities for data maps, extraction maps, and source schemas.
  • Performance of the DTLURDMO utility when processing a large number of data maps has been improved.

General Enhancement or Feature Updates

  • PostgreSQL has been added as a CDC source.
  • PowerExchange can now read DBD information from the IBM IMS catalog when you create data maps or at CDC or IMS unload run-time.
  • Oracle 18c has been certified for operation with PowerExchange. Oracle Express CDC runs as in previous versions of PowerExchange for Oracle CDC.



Intelligent Structure Discovery Enhancements for Data Engineering Integration and Data Engineering Streaming:

  • Ability to process additional input types: ORC, Avro, Parquet.
  • Ability to process HL7 messages. (Technical Preview)
  • Users can create a model based on a sample file in Data Engineering Integration at design time, without having to go through the Informatica Cloud design flow first.
  • The output of unidentified data is structured in a JSON format.




Release Notes:


Release Guide:

What are we announcing?

Informatica 10.2.0 HotFix 2


Who would benefit from this release?

Customers who want to take advantage of fixes to the core platform and the products based on it: PowerCenter, Data Quality, and Big Data Management. The release includes support for new environments as well as fixes to support stable deployments.


PowerExchange for 10.2.0 HotFix 2 will be released on May 17, 2019.


What’s in this release?

  • Platform Update
    • Informatica domain support for cross-realm Kerberos authentication (using Global Catalog). Informatica nodes, application services, and users can belong to different Kerberos realms. Smart card based Kerberos single sign-on is also supported.
  • Platform PAM Update
    • Oracle 18c - Added
    • Azure SQL Database (single database, managed instances, and elastic pool) - Added
    • Java support
      • Azul OpenJDK 1.8.0_192 - Added
      • Oracle Java 8 - Removed
      • The Informatica platform now supports Azul OpenJDK, instead of Oracle Java because Oracle has changed its Java licensing policy, ending public updates for Java 8 effective January 2019. Azul OpenJDK comes bundled with the product.
      • IBM JDK - updated
    • Authentication - Active Directory: Kerberos for Windows 2016 and 2012 R2, and LDAP for Azure Active Directory
    • Model Repository - Version control system support - GIT and Collabnet Subversion Edge
    • Other
      • Microsoft Edge Browser (Win 10) 41.16 - updated
      • Internet Explorer -11.x - updated
      • Google Chrome- 73.x - updated
      • Safari 12.0.3 (macOS 10.13, 10.14) - updated
  • Big Data PAM Update
    • Hadoop Distributions Updates
      • Cloudera CDH: 5.9, 5.10, 5.11, 5.12, 5.13, 5.14, and 5.15
      • Hortonworks HDP: 2.4, 2.5, 2.6
      • MapR: 5.2 with MEP 2.x, 3.x
      • Azure HDInsight: 3.6 supported
      • Amazon EMR 5.4.x, 5.8.x
    • Security Support:
      • LDAP
      • Kerberos support for the Informatica domain and Hadoop clusters
      • Kerberos authentication for clusters and LDAP authentication for Hive (USPS use-case)
      • MapR Tickets
      • MapR Tickets + Kerberos
      • Azure Active Directory
    • Authorization Support:
      • Sentry on all supported distributions
      • Ranger on all supported distributions
    • Encryption Support:
      • SSL / TLS
      • Encrypted zones / Transparent Data Encryption
      • Java based KMS
      • Ranger based KMS
      • Navigator based KMS
    • Resource Scheduling Support:
      • YARN queues
      • Node labeling
    • Perimeter Security Support:
      • Knox
    • High Availability Support:
      • Hive
      • HBase
      • Namenode
      • Resource Manager
      • Hive Metastore
      • Sentry
      • Ranger
      • KMS
      • ZooKeeper
    • Google Big Query: SQL Override and CustomQuery
  • Connectivity Updates
    • New Tableau v3 connector with support for Tableau Hyper target file format
    • Added support for Pushdown Optimization for PostgreSQL
    • Added ODBC based Pushdown Support for Google Big Query
    • Enhancement for Azure SQL DW PDO Support via ODBC
    • Upgraded ODBC and JDBC Drivers to 2.5.8
    • Connector Enhancements
      • Snowflake
      • Azure SQLDW
      • Redshift
      • AWS
      • SAP
    • CCI Enhancements to support new features across the connectors
  • PowerExchange Connector for Kafka
    • Support for Kafka connectivity for PowerCenter as source and target
  • Any Other
    • New PowerCenter transformation function for binary data processing
  • Informatica Metadata Manager PAM Update
    • Repository
      • Oracle 18c - Added
    • Sources
      • Oracle 18c - Added

      • Informatica PowerCenter 10.2 HF2 - Added

      • Informatica Platform 10.2 HF2 and 10.2.2 - Added

Release Notes & Product Availability Matrix (PAM)

Release Notes


Product Availability Matrix (PAM)


You can download the Hotfixes from here.

What are we announcing?

Informatica Service Pack Release 10.1.1 HF2 SP1

Who would benefit from this release?

This release provides the latest updates and fixes for customers who have deployed Informatica Data Quality 10.1.1 HF2.

Informatica recommends that customers running 10.1.1 upgrade to HotFix 2 and apply Service Pack 1 to get the latest fixes for functionality and stability.

Note: This service pack is available for Windows and Linux environments and does not include any PAM changes for OS or connectivity.

Informatica 10.1.1 HotFix 2 Service Pack 1 Release Notes

You can download the Hotfix from here.

What are we announcing?

Informatica 10.2.0 HotFix 1

Who would benefit from this release?

This release is the latest version of 10.2 for all customers and includes support for new environments (see PAM) as well as all fixes to support stable deployments (see Release Notes).

What’s in this release?

Model Repository - Version Control

  • GIT Support - Added support for GitHub Server (hosting service for Git repositories)
  • Collabnet Subversion Edge - 5.2.2 - update

Informatica PowerCenter Docker Utility

Security updates

  • Third-party library upgrades: 21 vulnerable libraries upgraded
  • Migrated from legacy Struts to secure Spring MVC framework across entire product stack
  • Enhanced security for web applications through ‘Content Security Policy’ and advanced implementation of CSRF tokens


Platform PAM Update


  • Operating System Update:
    • AIX 7.1 TL4 - Added
    • AIX 7.1 TL2 & TL3 - Dropped
    • RHEL - 7.3 & 6.7 - Added
    • RHEL - 7.0 , 7.1 ,7.2 and 6.5 , 6.6 - Dropped
    • SUSE 12 SP2 - Added
    • SUSE 12 SP0 & SP1 -  Dropped
    • SUSE 11 SP4 - Added
    • SUSE 11 SP2 & SP3 - Dropped
    • Windows Server 2016 (Server) - Added
    • Windows Server 2008 R2 (Server)- Dropped
    • Windows Server 2008 , 2008 R2 (Client) - Dropped
  • Database support :
    • Oracle 12cR2 - Added
    • SQL Server 2017 - Added
    • Azure SQL DB (Single Database Model) - Added
    • IBM DB2 9.7 & 10.1 - Dropped
    • MS SQL Server 2012 & 2008 R2  - Dropped
  • Authentication Support
    • Windows 2012 R2 & 2016 (LDAP only) - Added
    • Windows 2012, 2008, 2003 - Dropped
    • Oracle Directory Server (ODSEE) LDAP - Added
  • Tomcat Support Update:
    • v 7.0.88 - update
  • JVM Support Update:
    • Oracle Java 1.8.0_171 - update
    • IBM JDK update
  • Others
    • Visio 2007, 2010 (Mapping Architect for Visio) - Dropped
    • Visio 2007, 2010 (Mapping Analyst for Excel) - Dropped
    • Microsoft Edge Browser (Win 10) 40.15 - updated
    • Internet Explorer -11.1155 - updated
    • Google Chrome- 68.0.3440.84 - updated
    • Safari - 11.1.2 ( MacOS 10.13 High Sierra) - updated
    • Adobe Flash Player - 27.x - updated

Big Data PAM Update

  • Hadoop Distributions Update
    • IBM BigInsights - Dropped

Metadata Manager Updates

  • Security Enhancements:
    • SAML support
    • Replaced vulnerable Struts with Spring 
  • Lineage Enhancements:
    • Upgraded to latest version of graph db to fix memory leak related issue and improve performance
    • SkipLineage option in MMREPOCMD
  • PAM update: New Versions Support Added for MM Resources:
    • Oracle 12cR2
    • Microsoft SQL Server 2017
    • Microsoft SSIS 2016
    • Microsoft SSRS 2016
    • ERWin 9.7
    • Microstrategy 10.9
    • Teradata 16.x
    • Informatica PowerCenter 10.1.1 HF2; 10.2.0 HF1
    • Informatica Platform 10.2.1 and 10.2.0 HF1


PowerExchange Mainframe & CDC Updates


  • PAM upgraded support
    • z/OS 2.3
    • CICS/TS V5.4
    • Adabas V8.4.x
    • IMS V15
    • MySQL Enterprise Edition V5.7
    • Amazon RDS (Oracle)
    • Oracle 12cR2 (toleration mode)
  • New CDC Source Capabilities
    • MySQL (Enterprise Edition) is now supported as a source for Changed Data Capture
    • Amazon RDS for Oracle is now supported as a source for Changed Data Capture
  • General Enhancements
    • PWXCMD commands can now be used to communicate with the following Environmental Change Capture Routines:
      • Adabas
      • Datacom
      • IDMS
      • IMS
    • New PowerExchange Mainframe & CDC Utilities
      • PWXUCRGP: prints the contents of the CCT files
      • PWXUGSK: reports SSL configuration information for z/OS PowerExchange listeners


Release Notes


Informatica 10.2.0 Hotfix 1 Release Notes

Informatica PowerExchange Release Notes


Adapters Release Notes:


PAM for Informatica 10.2.0 HotFix 1


You can download the Hotfixes from here.

What are we announcing?

Informatica Big Data Release 10.2.1


Who would benefit from this release?

This release is for all customers and prospects who want to take advantage of the latest Big Data Management, Big Data Quality, Big Data Streaming, Enterprise Data Catalog, and Enterprise Data Lake capabilities.


What’s in this release?

This update provides the latest ecosystem support, security, connectivity, cloud, and performance while improving the user experience.


Big Data Management (BDM)


Enterprise Class


  • Zero client configuration: Developers can now import the metadata from Hadoop clusters without configuring Kerberos Keytabs and configuration files on individual workstations by leveraging the Metadata Access Service
  • Mass ingestion: Data analysts can now ingest relational data into HDFS and Hive with a simple point and click interface and without having to develop individual mappings. Mass Ingestion simplifies ingestion of thousands of objects and operationalizes them via a non-technical interface
  • CLAIRE integration: Big Data Management now integrates with Intelligent Structure Discovery (that is part of Informatica Intelligent Cloud Services) to provide machine learning capabilities in parsing the complex file formats such as Weblogs
  • SQOOP enhancements: The SQOOP connector has been re-architected to support high concurrency and improve performance
  • Simplified server configuration: Cluster configuration object and Hadoop connections are enhanced to improve the usability and ability to perform advanced configurations from the UI
  • Increased developer productivity: Developers can now use the "Run mapping using advanced options" menu to execute undeployed mappings by providing parameter file/sets, tracing level and optimizer levels in the Developer tool. Developers can also view optimized mappings after the parameter binding is resolved using the new "Show mapping with resolved parameters" option.
  • PowerCenter Reuse enhancements: Import from PowerCenter functionality has been enhanced to support import of PowerCenter workflows into Big Data Management
  • GIT Support: Big Data Management administrators can now configure GIT (in addition to Perforce and SVN) as the external versioning repository


    Advanced Spark Support


  • End to end functionality: End to end Data Integration and Data Quality use-cases can now be executed on the Spark Engine. New and improved functionality includes Sequence Generator transformation, Pre/Post SQL support for Hive, support for Hive ACID Merge statement on supported distributions, Address Validation and Data Masking.
  • Data science integration: Big Data customers can now integrate pre-trained data science models with Big Data Management mappings using our new Python transformation.
  • Enhanced hierarchical data processing support: With support for Map data types and support of Arrays, Structs and Maps in Java transformations, customers can now build complex hierarchical processing mappings to run on the Spark engine. Enhancements in gestures and UI enable customers to leverage this functionality in a simple yet effective manner
  • Spark 2.2 support: Big Data Management now uses Spark 2.2 on supported Hadoop distributions
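As an illustration of the kind of logic the new Python transformation can host, the sketch below applies a stand-in for a pre-trained model to rows of data. The port names (`amount`, `risk_score`) and the row-at-a-time shape are assumptions made for the example, not the transformation's actual API.

```python
# Illustrative sketch only: the port names and row-at-a-time scoring shape
# are assumptions, not the Python transformation's documented interface.

def load_model():
    """Stand-in for unpickling a pre-trained model; here a simple rule-based scorer."""
    return lambda amount: 1.0 if amount > 1000 else 0.0

model = load_model()

def score_row(row):
    """Apply the pre-trained model to one input row and emit an output port value."""
    return {"risk_score": model(row["amount"])}

rows = [{"amount": 250}, {"amount": 5000}]
scored = [score_row(r) for r in rows]
print(scored)  # [{'risk_score': 0.0}, {'risk_score': 1.0}]
```

The point is that scoring logic stays in ordinary Python, so a model trained elsewhere can be dropped into a mapping without re-implementing it in transformation expressions.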




  • Ephemeral cluster support: With out-of-the-box ephemeral cluster support for AWS and Azure ecosystems, customers can now auto deploy and auto scale compute clusters from a BDM workflow and push the mapping for processing to the automatically deployed clusters
  • Cloudera Altus support: Cloudera customers can now push the processing to Cloudera Altus compute clusters.
  • Improved AWS connectivity: Amazon S3 and Redshift connectors have received several functional, usability and performance updates
  • Enhanced Azure connectivity: Azure WASB/Blob and SQL DW connectors have received several functional, usability and performance updates


Platform PAM Update


  • Oracle 12cR2
  • SQL Server 2017
  • Azure SQL DB (PaaS/DBaaS, single database model)
  • SQL Server 2008 R2 & 2012 (EOL)
  • IBM DB2 9.7 & 10.1 (EOL)
  • SUSE 12 SP2
  • SUSE 12 SP0: Not Supported
  • Windows Server: Not Supported
  • Model Repository (version controlled): Not Supported
  • Oracle Java 1.8.0_162
  • Tomcat 7.0.84



Big Data Quality (BDQ)



  • Enable data quality processing on Spark: support for Spark scale and execution with Big Data Management
  • Updated Address Verification Engine (AddressDoctor 5.12): enhanced engine with world-wide certifications
  • Support for custom schemas for reference tables: flexible use of reference data with enterprise DB procedures
  • Updated workflow engine: faster start times for the workflow engine


Big Data Streaming (BDS)


Change in Product Name: The product name has changed from "Informatica Intelligent Streaming" to "Big Data Streaming"


Azure Cloud Ecosystem Support


  • Endpoint Support: Azure EventHub as source & target and ADLS as target
  • Cloud deployment: Run streaming jobs in Azure cluster on HDInsight


Enhanced Streaming Processing and Analytics


  • Stateful computing support on streaming data
  • Support for masking streaming data
  • Support for normalizer transformation
  • Support for un-cached lookup on HBase tables in streaming
  • Kafka Enhancements - Kafka 1.0 support & support for multiple Kafka versions


New Connectivity and PAM support


  • Spark Engine Enhancements - Spark 2.2.1 support in streaming, Truncate table, Spark concurrency
  • Relational DB as target - SQL Server and Oracle
  • New PAM - HDInsight
  • Latest version support on Cloudera, Hortonworks, EMR


Enterprise Data Lake (EDL)


Change in Product Name: The product name has changed from "Intelligent Data Lake" to "Enterprise Data Lake"


Core Data Preparation


  • Data Preparation for JSON Lines (JSONL) Files: Users can add JSONL files to a project and structure the hierarchical data in row-column form. They can extract specific attributes from the hierarchy and can expand (or explode) arrays into rows in the worksheet.
  • Pivot and UnPivot: Users can pivot or unpivot columns in a worksheet to transpose/reshape the row and column data in a worksheet for advanced aggregation and analysis.
  • Categorize and One-hot-encoding functions: Users can easily categorize similar values into fewer values to make analysis easier. With one-hot-encoding, the user can convert categorical values in a worksheet to numeric values suitable for machine learning algorithms.
  • Column Browser with Quality Bar: A new panel for browsing columns is added to the left panel in the worksheet. This easy to use column browser interface allows users to show/hide columns, search for columns, highlight columns in the worksheet, etc. The panel also has a Quality bar that shows unique, duplicate and blank value count percentages within the column. The panel can also show any associated glossary terms.
  • Project Level Graphical View: For a project with a large number of assets, the graphical view helps users understand the relationships between input data sources, sheets created, assets published, and Apache Zeppelin notebooks created. Users can navigate to the asset, notebook or the worksheet directly.
  • Insert recipe step, add a filter to an existing step: Users can insert a new step at any location in the recipe. They can also add/modify existing filters for any recipe step.
  • Data Type Inferencing optimization: Users can revert undesired inferencing done by data preparation engine and apply appropriate functions. They can revert or re-infer types as needed.
  • Show where the data in a column comes from: The column overview in the bottom panel now has a Source property that shows if the column corresponds to a physical input source column, another worksheet or a step in the recipe. If the user hovers over a data source name, the application shows details of the formula when available and highlights the appropriate recipe step.
  • UX Improvements in Filter-in-effect, Sampling, Join and Apply Rule panels: The user interface has been improved for clarity of icons and language used, visibility of information and button and better user flow for these panels. Users can also input constant values as inputs in the Apply Rule panel for text based user inputs.
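The JSONL preparation step described above (extracting attributes from a hierarchy and exploding arrays into rows) can be sketched in plain Python. The record structure and field names below are invented for illustration; they are not tied to any Enterprise Data Lake API.

```python
import json

# Hedged sketch of JSONL flattening: pull a nested attribute up into a column
# and expand (explode) an array into one worksheet row per element.
# The field names ("order", "customer", "items") are illustrative only.
jsonl = "\n".join([
    '{"order": 1, "customer": {"name": "Ana"}, "items": ["pen", "ink"]}',
    '{"order": 2, "customer": {"name": "Bo"}, "items": ["pad"]}',
])

rows = []
for line in jsonl.splitlines():
    rec = json.loads(line)
    # Extract a specific attribute from the hierarchy...
    name = rec["customer"]["name"]
    # ...and explode the array into rows.
    for item in rec["items"]:
        rows.append({"order": rec["order"], "customer_name": name, "item": item})

print(rows)
```

Two input records with a two-element and a one-element array yield three flat rows, which is exactly the row-column shape the worksheet view works with.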


Self-service and Collaboration


  • Self-service scheduling: Data Analysts now have the ability to schedule import, publish and export activities. The Import/Publish/Export wizard offers the choice to perform the activity now, or to schedule it. For publish, a “snapshot” of recipes is saved for execution at the scheduled time. Users can continue to work on the project and modify recipes without affecting scheduled activity.
    The “My Scheduled Activities” tab provides details of upcoming activities. The “Manage My Schedules” tab provides details of schedules and enables users to modify schedules.
    Scheduled activities can be monitored on the My Activities page. Functionally it has the same effect as running the activity manually. All the schedules created in Enterprise Data Lake and activities scheduled in Enterprise Data Lake are also visible in the Administrator Console tool.
  • Project History: Users (and IT/Governance staff) can view the important events that happened within a given project. These include events related to Project, Collaborators, Assets, Worksheets, Publications, Scheduled Publications, Notebook etc.
  • Copy-Paste Recipe Steps: Users can copy specific steps or the whole recipe and paste into another sheet in the same project or another project. There is also a way to map the input columns used in the source sheet to the columns present in the target sheet. This enables reuse of each other’s or their own work in the creation of repetitive steps.
  • Quick Filters for asset search in the data lake: In the search results, users have a single-click filter to get all the assets in the data lake that match the search criteria.
  • Recommendation Card UX Improvements: The Recommendation cards in the Project view now show the reason an asset was recommended for inclusion in the project, and what action user should take.
  • Details of Source Filters during Publish: During Publication, the Publish Wizard shows the details of "Source Filters" so the user understands the impact of including or not including the filters.


Enterprise Focus


  • Single Installer for Big Data Management, Enterprise Data Catalog and Enterprise Data Lake: The installation and upgrade flows have been improved and simplified with a single installer. Enterprise Data Lake customers can now install all three products in a single install. The total size of the single installer is just ~7GB due to better compression, as compared to the previous combined size of ~13GB. The process requires fewer domain restarts, and additional configurations can also be enabled in the same single flow.
  • Blaze as Default Execution Engines for Enterprise Data Lake: All Enterprise Data Lake processes using Big Data Management mapping execution now use Blaze as the default engine. This has improved performance and consistency.
  • SAML based SSO: Enterprise Data Lake now supports SAML based Single-Sign-On.
  • Lake Resource Management UI: Administrators can manage the Enterprise Data Catalog resources that represent the external data sources and metadata repositories from which scanners extract metadata for use in the data lake. The Lake Resource Management page also verifies the validity of resources, the presence of at least one Hive resource, etc. so that Enterprise Data Lake functionality is usable. Changes done through the Lake Resource Management page do not require a service restart.
  • Data Encryption for Data Preparation Service node: The temporary data created on Data Preparation Service nodes is encrypted for better security.
  • Demo Version of IT Monitoring Dashboard: A dashboard created in Apache Zeppelin allows administrators to monitor Enterprise Data Lake user activities. The dashboard is not a product feature, but an example to show what is possible with the audit information. The dashboard is an Apache Zeppelin Notebook built on top of the Enterprise Data Lake user event auditing database. The Zeppelin Notebook and associated content are available on request, but it is unsupported. The Audit mechanism has been changed and improved now to support direct queries using JDBC.  
  • Performance Improvement in Import process using CLAIRE: Using the profiling metadata information available in CLAIRE, the import process optimizes the number of sub-processes created, thereby improving overall import performance.


Enterprise Data Catalog (EDC)


  • Intelligence
    • Enhanced Smart Discovery: By clustering similar columns from across data sources, EDC enables users to quickly associate business terms as well as classify data elements. Unsupervised clustering of similar columns is now based on names, unique values and patterns in addition to the existing data overlap similarity.
    • Enhanced Unstructured Data Discovery (Tech Preview): Enhanced unstructured data support for accurate domain discovery using NLP and new file system connectivity.
    • New Data Domain Rules: Override rules and new scan options for more granular control on rule based data domain inference.
  • Connectivity
    • New Filesystems: Added support for cataloging of SharePoint, OneDrive, Azure Data Lake Store (ADLS), Azure Blob and MapRFS
    • New File Formats: Avro and Parquet support added in 10.2.1.
    • Remote File Access Scanner: Mounting folders on Hadoop nodes is no longer required for Linux and Windows file systems; instead, the new remote file access scanner uses SMB for Windows and SFTP for Linux for cataloging.
    • Deep Dive Lineage support for BDM: End to End data lineage from Big Data Management with transformation logic and support for dynamic mappings
    • Data Integration Hub: Users can now scan DIH to access metadata for all objects and its subscriptions and publications.
    • Data Lineage from SQL Scripts (Tech Preview): End-to-end data lineage from hand-coded SQL scripts to understand column-level data flows and data transformations. Includes support for Oracle PLSQL, DB2 PLSQL, Teradata BTEQ, and HiveQL. Stored procedures are not supported in this release.
    • Qlikview: Scan reports and report lineage from Qlikview.
  • User Experience Improvements
    • Manage business context with in-place editing of wiki pages for data assets. A business-user-friendly data asset overview page provides all the business context about the data asset. Inherit descriptions from Axon associations or type your own.
    • SAML Support: For Single Sign-On.
    • Multiple Business Term Linking: Allows custom attribute creation with Axon or BG term type to allow users to link multiple business terms with a single asset.
    • Search Facet Reordering: Catalog Administrators can now reorder the default facet orders making business facets show up higher than the technical facets.
    • New Missing Asset Link Report: To help users identify linked and unlinked data assets for a lineage-type source.
  • Open and Extensible Platform
    • New REST APIs for starting and monitoring scan jobs
    • S@S Interop: Shared Infrastructure, Metadata Repository, Data Domain Definitions and Curation Results shared across EDC and S@S. Users can now scan a resource once to see it in both EDC and S@S.
    • Reduced Sizing: Up to 3X reduction in computation cores required on the Hadoop cluster across all sizing categories
    • Ease of Deployment: Improved validation utilities and an updated distribution (HDP v2.6) for the embedded cluster
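The new REST APIs for starting and monitoring scan jobs could be called from any HTTP client. The sketch below only builds the request; the endpoint path, query parameter, and token header are hypothetical placeholders, not documented EDC routes.

```python
from urllib.parse import urljoin, urlencode
from urllib.request import Request

# Hypothetical sketch: the endpoint path and parameter names below are
# illustrative placeholders, not documented EDC API routes.
def start_scan_request(base_url, resource_name, user_token):
    """Build (but do not send) a POST request that would start a scan job."""
    path = urljoin(base_url, "access/1/catalog/resources/jobs")
    query = urlencode({"resourceName": resource_name})
    return Request(
        f"{path}?{query}",
        method="POST",
        headers={"Authorization": f"Bearer {user_token}"},
    )

req = start_scan_request("https://edc.example.com/", "OracleFinance", "TOKEN")
print(req.get_method(), req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (against a real catalog host and a valid token) would then start the scan; a companion GET on the job resource would support monitoring.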


Release Notes & Product Availability Matrix (PAM)


PAM for Informatica 10.2.1


Informatica 10.2.1 Release Notes


What are we announcing?

Informatica 10.1.1 HotFix 2


Who would benefit from this release?

This release is for all PowerCenter and Data Quality customers and prospects who want to take advantage of fixes to the core platform and connectivity, whether installing HotFix 2 fresh or upgrading from a previous version.


What’s in this release?

  • This update provides the latest HotFix improving the user experience
  • Domain Rename - Allows customers to rename an existing domain. This is beneficial to customers splitting a mixed-use Informatica domain.
  • TPL Upgrades - Upgraded several security-critical Third-Party Libraries (TPLs)
  • PAM Updates – Upgrading the Operating systems, Databases, Java and Tomcat
  • Azure Blob/DW (V1/V2) are certified to work with 10.1.1 HF2 release
  • SAP NW 7.5 Connectivity update



  • Added support for Windows 2016 (Server and Client)
  • Added support for AIX 7.2
  • Updated support for SUSE 12 SP2
  • Updated support for RHEL 7.4
  • Dropped support for AIX 6.1
  • Updated Java version -  IBM java - 8.0 Service Refresh 5  and Oracle Java - Java SE 8u144
  • Updated Tomcat version - Tomcat - 7.0.82


Informatica 10.1.1 Hotfix 2 Release Notes


Informatica PowerExchange 10.1.1 HotFix 2 Release Notes


Informatica PowerExchange Adapters - 10.1.1 HotFix 2 Release Notes


Informatica PowerExchange Adapters for PowerCenter - 10.1.1 HotFix 2 Release Notes


PAM for Informatica 10.1.1 HotFix 2


You can download the Hotfixes from here.

The Informatica Platform is a set of data services and information solutions powered by a state-of-the-art data management engine. The end of support for the Informatica Platform includes the following products:


  • PowerCenter
  • Metadata Manager and Business Glossary
  • PowerExchange (PWX)
  • Data Quality (DQ)
  • Data Transformation (B2B DT)
  • Big Data Management (BDM)
  • Data Integration HUB (DIH)
  • B2B Data Exchange (DX) without OEM Cleo MFT products (see Informatica Product Lifecycle Guide for more details and DX end of life)


Support for all Informatica Platform products with versions 9.0.x, 9.1, 9.5.x, and 9.6.x will end on the following dates:



Version | Release Date | Minimum Support Period | End of Support | Extended Support* | Sustaining Support*
9.6.x   | Jan 2014     | 31 Jan 2018            | 31 Mar 2018    | 31 Mar 2019       | 31 Mar 2021
9.5.x   | Jun 2012     | 30 Jun 2016            | 31 Jul 2017    | 31 Jul 2018       | Not Available
9.1     | Mar 2011     | 31 Mar 2015            | 31 Jan 2017    | 31 Jan 2018       | Not Available
9.0.x   | Dec 2009     | 31 Dec 2013            | 31 Jan 2017    | 31 Jan 2018       | Not Available


* Extended and Sustaining Support are available at an additional charge; Sustaining Support is not available for all product releases. For further information please contact:


For more information on product lifecycle, extended support policy, and end of life details of various Informatica products, view the Informatica Product Lifecycle Guide.

Informatica 10.2.0 includes the following new capabilities:


Big Data Management (BDM)


Ease of use


  • Zero design-time footprint: Customers now no longer need to install stacks/parcels/RPMs on the Hadoop cluster to integrate Informatica BDM with a Hadoop cluster.
  • 1-step Hadoop Integration: Customers can now integrate Informatica BDM with Hadoop clusters in 1 step. They can also keep the Hadoop and BDM environments in sync with 1-click Refresh.
  • Default DIS selection: Developers working with Informatica domains having a single DIS no longer have to select the DIS to be able to execute mappings. Developers can also now select a default DIS per domain instead of per installation.


Platform enhancements


  • Bulk execution engine selection: Administrators can now bulk update both deployed and design-time mappings to leverage Blaze and Spark.
  • Reusing PowerCenter applications: Customers can now import complex PowerCenter mappings with multiple pipelines as well as PowerCenter workflows into BDM.
  • DIS Queuing and Concurrency: Data Integration Service can now queue the job submissions and has concurrency enhancements to enable massive Hadoop pushdown job submission and execution
  • DI on Spark: Customers can now leverage Spark as execution mode for all types of Data Integration use-cases.
  • Complex/Hierarchical data types: BDM now supports Array, Structs and Maps data types to enable applications with complex data types to process the data more effectively.
  • Stateful computing: Customers can now perform Stateful computing using advanced windowing functions now introduced in Spark execution mode.
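The stateful computing bullet above refers to windowing functions on the Spark engine. The plain-Python sketch below illustrates the semantics of one such function, a running total partitioned by a key and ordered by time (what `SUM(amount) OVER (PARTITION BY account ORDER BY ts)` computes in Spark SQL); the column names are invented for the example.

```python
from itertools import groupby
from operator import itemgetter

# Plain-Python illustration of a partitioned, ordered window aggregate.
# Column names ("account", "ts", "amount") are invented for the example.
events = [
    {"account": "A", "ts": 1, "amount": 10},
    {"account": "A", "ts": 2, "amount": 5},
    {"account": "B", "ts": 1, "amount": 7},
]

result = []
ordered = sorted(events, key=itemgetter("account", "ts"))
for account, group in groupby(ordered, key=itemgetter("account")):
    running = 0  # state is kept per partition, reset at each new key
    for e in group:
        running += e["amount"]
        result.append({**e, "running_total": running})

print(result)
```

The per-partition `running` variable is the "state" in stateful computing: it accumulates across rows within a key and resets between keys, which is exactly what the Spark window expression does at cluster scale.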


Connectivity & Cloud


  • SQOOP on Spark: Customers can now use any JDBC-compliant SQOOP driver in the Spark execution mode to ingest data into the Hadoop ecosystem
  • AWS S3 & RedShift on Spark: Amazon AWS customers can now integrate with S3 and RedShift on Spark execution mode.
  • Azure ADLS & WASB on Spark: Microsoft Azure customers can now integrate with HDFS and Hive on both ADLS & WASB in Spark execution mode.
  • New ADLS Connector: We added a new Azure Data Lake Storage Connector on the Spark engine. Azure customers can now read from and write to ADLS.


Ecosystem support


  • Cloudera CDH 5.11
  • HortonWorks HDP 2.6
  • MapR 5.2 MEP 2.x
  • IBM BigInsights 4.2
  • Amazon EMR 5.4
  • Azure HDInsights 3.6


Platform PAM Update


  • Operating System Update:
    • AIX 7.2 TL0 – Added
    • AIX 6.1 TL8 - Dropped
  • Tomcat Support Update:
    • v 7.0.76
  • JVM Support Update:
    • Oracle Java 1.8.0_131
    • IBM JDK


Informatica PowerCenter


Developer/Admin Productivity


  • Users can evaluate expression formula when typing in the expression
  • Automate end-to-end deployment using pmrep Create Query
  • pmrep Create Connection enhanced to support password as a parameter option


Audit Enhancements


  • Audit user login attempts with information about timestamp, IP address, PowerCenter client application name and version
  • Audit PowerCenter metadata XML Import during code deployment with information about logged in user, IP address, file name with path and size


Connectivity & Cloud


  • Ability to connect to host of cloud applications using PowerExchange for Cloud Applications
  • PowerExchange for AWS Redshift
    • Performance improvement
    • Additional AWS region support
  • PowerExchange for AWS S3
    • Source partitioning support
    • Ability to read and write multiple files to S3
    • Additional AWS region support
  • PowerExchange for Azure Blob
    • Added support for append to Blob files
  • Reader performance improvements to PowerExchange for Teradata TPT
  • PowerExchange for SAP
    • HTTPS support added for table reader
    • Additional datatype support for table reader
    • SSTRING datatype support added for IDoc prepare and interpreter transformations
  • SAP HANA (PowerExchange for ODBC)
    • Certified for HANA 2.0
    • Support for bulk loading
    • Added support for “upsert”
  • PowerExchange for Dynamics CRM
    • Made available for AIX operating system
    • Enhanced reader performance


Data Quality (IDQ and BDQ)


Increased flexibility of Business Rules (Rule Specifications)


  • Increased Analyst User Flexibility
    • Remove need to compile
    • Use Rule Specifications directly in profiles and mappings
  • Ensure user intent is reflected
    • No disconnect between Rule Specification and rule implementations in mappings
    • Increased Business <> Technical collaboration through re-usable artefacts
  • More flexible what-if analysis for self-service data analysts through the thin client


Advanced Exception Management Capabilities


  • Support exception data distribution by ranges with flexible definitions in Exception Management processes
  • Enforce basic user validation / prevent NULL user entries
  • Editable Subject for Human Task Notifications
  • Ability to use an external table for task assignment – no need to re-deploy the workflow to change task assignments
  • Updated UI for task assignment
  • Set if a task user must fill in data in error/exception cells at workflow level


Address Verification Updates


  • Integrated Address Verification (AddressDoctor) v5.11 engine
  • View Address Verification Licence, Engine and Data version details from Informatica Developer
  • Country Specific fields added for Austria and Czech Republic


Informatica Intelligent Streaming (IIS)


Enhanced Streaming Analytics Solution


  • Pass-through Mapping support: Customers can now pass the streaming payload (BLOB) as is to the target, bypassing parsing and column projection.
  • Rank Transformation support: Customers can now use Rank transformations in streaming mappings for ranking the data based on relevance.
  • Support for secure Kafka clusters: Customers can now use IIS in Kafka clusters with Kerberos security.
  • Support for replaying messages in Kafka: Customers can now reprocess the messages in Kafka with the replay feature using timestamps.


Cloud Support in Streaming


  • Support for Amazon Kinesis as source: Customers can now source streaming data from Amazon Kinesis and process the data using transformations.
  • Support for Amazon Kinesis Firehose as sink: Customers can now use Amazon Kinesis Firehose as target of streaming mappings and use it to persist data onto AWS S3, Redshift, ElasticSearch
  • Support for stream data processing in cloud: Customers can now run streaming data processing in the cloud on EMR cluster on AWS


New Datatype and New PAM


  • Hierarchical Datatype support: Customers can now process complex hierarchical streaming payloads in JSON, Avro and XML format using IIS
  • MapR Ecosystem support: Customers can now use MapR streams as source & MapR DB, HBase & HDFS as sink
  • Character Delimited data format support: Customers can now use Character Delimited data (CSV) as the data format in streaming.


Ecosystem support


  • Cloudera CDH 5.11
  • HortonWorks HDP 2.6
  • MapR 5.2 MEP 2.x
  • Amazon EMR 5.4
  • Apache Kafka 0.9 & 0.10.x


Intelligent Data Lake (IDL)


Data Preparation Enhancements and DQ Rules support


  • Enhanced Recipe Panel Layout : Users can see all recipe steps in a dedicated panel during data preparation. The recipe steps are clearer and more concise with color codes to indicate function name, columns involved, and input sources. Users can edit steps, delete steps, or go back-in-time to see the state of data at a specific step in the recipe.
  • Applying Data Quality Rules: While preparing data, users can use pre-built rules to cleanse, transform and validate data. Rules can be created using Informatica Developer or Informatica Analyst. With a Big Data Quality license, thousands of pre-built rules are available. Using pre-built rules promotes effective collaboration within business and IT teams.
  • Business Terms for Data Assets in Data Preview and Worksheet View:  Users can view business terms associated with columns in data assets during data preview and data preparation.
  • Data Preparation for Delimited Files: Users can cleanse, transform, combine, aggregate, and perform other operations on delimited HDFS files that exist in the data lake. Files can be previewed before being added to a project.
  • Join Editing: Users can edit the join conditions for an existing joined worksheet, including join keys and join types (such as inner and outer joins).
  • Sampling Editing: Users can change the columns selected for sampling, edit the filters applied, and change the sampling criteria.


Data Validation and Assessment Using Visualization with Apache Zeppelin


After publishing data, users can validate the data visually to make sure that the content and quality are appropriate for analysis. Users can modify the recipe used to prepare the data to address any issues, thus supporting an iterative Prepare-Publish-Validate process.

Intelligent Data Lake integrates with Apache Zeppelin to provide a visualization “Notebook” that contains graphs and charts representing relationships between columns. When you open the visualization Notebook for the first time for a published data asset, IDL uses the CLAIRE engine to create “Smart Visualization suggestions” in the form of histograms based on the numeric columns newly created by the user.

For more details about Apache Zeppelin, see

In addition, users can filter the data during data preview for better assessment of data assets.



Enterprise Readiness


  • Support for Multiple Enterprise Information Catalog Resources in the Data Lake: Administrators can configure multiple Enterprise Information Catalog resources (Hive and HDFS scanners) so users can work with all types of assets and all applicable Hive schemas and HDFS files in the data lake.
  • Support for Oracle as the Data Preparation Service Repository: Administrators can now use Oracle 11gR2 and 12c databases for the Data Preparation Service repository.
  • Improved Scalability for the Data Preparation Service: Administrators can ensure horizontal scalability by deploying the Data Preparation Service on a grid. Improved scalability supports high performance during interactive data preparation for increased data volumes and numbers of users.


Hadoop Ecosystem Support


  • Cloudera CDH 5.11
  • Hortonworks HDP 2.6
  • Azure HD Insight 3.6
  • Amazon EMR 5.4
  • MapR 5.2 MEP 2.x
  • IBM Big Insights 4.2


Enterprise Information Catalog (EIC)


  • Intelligence

    • Composite Domains: With Composite Domains, EIC can discover entities and perform data classifications based on rule-based or machine-learning-based domains. Entity recognition is used by search, facets, classifications, and business glossary recommendations.
    • Unstructured Data Cataloging: EIC can now catalog unstructured data, including formats like Excel, Docs, PowerPoint, PDF, and more. The system uses Domain and Composite Domain discovery to automatically infer semantic types and entities from unstructured files.
    • New Data Domain Creation: Users can now use Catalog Administrator to create data domains with regular expressions and reference tables in addition to mapplet based rules.
    • Value Frequency: Catalog users can now view the top values by occurrence for each column. Value Frequency views are governed by security privileges and permissions.


  • Open and Extensible Platform

    • Open REST APIs: REST APIs for enabling analytics on the metadata repository and integrating EIC with third-party applications.
    • Custom Scanner Framework: Model and Ingest metadata from any kind of data or lineage source
    • Metadata Import/Export: Resource-level metadata import and export to access metadata in an easy-to-understand Excel format and bulk-curate data assets at scale.


  • New Connectivity

    • Azure SQL DB
    • Azure SQL DW
    • Azure WASB File System
    • ERWIN
    • Apache Atlas: Extract lineage from Hadoop jobs: Sqoop, Hive queries etc. to create an end to end view of data lineage
    • Informatica Axon: Axon Glossary scanner to import Axon Business Glossaries in EIC and allow association to technical data assets. Integration includes ability to create business classifications and recommendations based on domain associations.
    • Transformation logic from selected sources: Powercenter, Informatica Cloud and Cloudera Navigator


  • Deployment

    • Solr sharding and replication support for HA and better performance
    • Improved Logging: All service-dependent logs in one file (LDM.log), including HBase, Solr, and ingestion service logs.
    • Multi-home network support when there are multiple network interfaces installed on the box.
    • New utility to validate keytab generation from the KDC server.
    • Ranger certification


  • PAM

    • Deployment
      • Cloudera CDH 5.11
      • Hortonworks HDP 2.6
      • Azure HDInsights 3.6


PowerExchange Mainframe & CDC

  • PAM
    • AIX 7.2 – Added
    • z/OS Adabas 8.3.4 – Added
    • z/OS DB2 V12 (Compatibility) – Added


  • Enhancements
    • PowerExchange Navigator: continued work to support working across multiple PowerExchange editions
    • z/OS DB2 CDC: support for Large Object data
    • LUW Oracle CDC: a facility that enables audit log tracing of Oracle DDL changes


Product Availability Matrix (PAM)


PAM for Informatica 10.2


Release Notes


Informatica 10.2 Release Notes link:


Informatica PowerExchange Version 10.2 Release Notes link:


PowerExchange Adapters for Informatica 10.2 Release Notes link:


PowerExchange Adapters for PowerCenter 10.2 Release Notes link:


Note: As this is a major version, download requests need to be made by opening a shipping case.

What are we announcing?

Informatica 10.1.1 HotFix 1


Who would benefit from this release?

Customers who want to take advantage of fixes to the core platform and all products based on it including Data Quality, B2B Data Transformation, Profiling, PowerCenter, PowerExchange adapters and Big Data products. It also includes updates to ecosystem support for Big Data as well as connectivity for PowerCenter customers.


What’s in this release?

Mainframe and CDC (PowerExchange)


  • z/OS DB2 V12 support.

Big Data Management (BDM)

Big Data Management adds support for the following:


  • Cloudera CDH 5.10 and 5.11. You can use Cloudera CDH 5.10 and 5.11 for both production and non-production use cases. In version 10.1.1 Update 2, Cloudera CDH 5.10 support was a technical preview feature. Big Data Management added support for version 5.11 in this release.
  • Hortonworks HDP 2.5 and 2.6. You can use Hortonworks HDP 2.5 and 2.6 for both production and non-production use cases. Big Data Management added support for Hortonworks HDP 2.6 in this release.
  • HDInsight 3.5. You can use HDInsight 3.5 for both production and non-production use cases. In version 10.1.1 Update 2, HDInsight 3.5 support was a technical preview feature.

Intelligent Streaming (IIS)

Intelligent Streaming adds support for the following:


  • Cloudera CDH 5.10 and 5.11
  • Hortonworks HDP 2.6

Intelligent Data Lake (IDL)

Product Availability Matrix


  • Intelligent Data Lake adds support for the following:
    • Cloudera CDH 5.10 and 5.11
    • HDInsight 3.5


  • You can now use Oracle 12c and 11gR2 as the repository for the Data Preparation Service.



  • You can now use IDL with IBM BigInsights clusters enabled for security through Transparent Data Encryption.



PowerExchange for Cloud Applications


  • You can now connect to Cloud Applications seamlessly from PowerCenter, just the way you connect to on-premises applications with PowerExchange Connectors.

PowerExchange for Microsoft Azure Blob Storage


  • Support for APPEND blob in PowerCenter through PowerExchange for Microsoft Azure Blob Storage.

PowerExchange for Microsoft Azure SQL Data Warehouse


  • Support for UPDATE, UPSERT, and DELETE operations in PowerCenter via PowerExchange for Microsoft Azure SQL Data Warehouse
  • Support for Full Push Down Optimization in PowerCenter via ODBC

PowerExchange for Netezza


  • The performance of the Netezza bulk writer is improved and is now comparable with nzload.

Release Notes & Product Availability Matrix (PAM)


PAM for Informatica 10.1.1 HotFix 1


Informatica 10.1.1 HotFix 1 Release Notes


Informatica 10.1.1 HotFix 1 PowerExchange Adapters Release Notes


Informatica 10.1.1 HotFix 1 PowerExchange Adapters for PowerCenter Release Notes


Informatica 10.1.1 HotFix 1 PowerExchange Release Notes


You can download the Hotfixes from here.

What are we announcing?


Informatica 10.1.1 Update 2


Who would benefit from this release?


This release is for all customers and prospects who want to take advantage of the latest Big Data Management, Enterprise Information Catalog, and Intelligent Data Lake capabilities.


What’s in this release?


This update provides the latest ecosystem support, security, connectivity, cloud, and performance while improving the user experience.


Big Data Management (BDM)


Hive Functionality


Hive truncate table partitioning: You can now configure Blaze mappings to truncate individual Hive partitions without impacting other partitions. This feature is an addition to your ability to truncate an entire Hive table, available since 10.1.1.

Hive filter pushdown: Blaze can now push compatible filter conditions to Hive for better performance.
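For reference, partition-level truncation in HiveQL is expressed as a `TRUNCATE TABLE … PARTITION` statement. The helper below is a minimal, hypothetical sketch of assembling such a statement (the table and partition names are invented for illustration; this is not Informatica's internal mechanism):

```python
def truncate_partition_sql(table, partition_spec):
    """Build a HiveQL statement that truncates a single partition,
    leaving all other partitions of the table untouched."""
    spec = ", ".join(f"{k}='{v}'" for k, v in sorted(partition_spec.items()))
    return f"TRUNCATE TABLE {table} PARTITION ({spec})"

# Hypothetical table and partition columns:
print(truncate_partition_sql("sales", {"dt": "2017-01-01", "region": "EU"}))
# → TRUNCATE TABLE sales PARTITION (dt='2017-01-01', region='EU')
```

Truncating only the named partition is what distinguishes this from the table-level truncate available since 10.1.1.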




SSL/TLS support: You can now integrate BDM with secure clusters that are SSL/TLS enabled.

Hadoop security with IBM BigInsights: You can now use BDM with IBM BigInsights clusters enabled for security through Apache Ranger, Apache Knox, and Transparent Data Encryption.




Node labeling: You can now group Hadoop nodes with similar characteristics via Hadoop’s node labeling. You can also run Blaze & Spark mappings on the labeled nodes.

Fair and Capacity Scheduler support: You can now define and run Blaze & Spark mappings with the Fair Scheduler and the Capacity Scheduler.

YARN queues: You can now execute Blaze & Spark jobs on specific YARN queues.

Multiple GRID manager support: You can now leverage the same Blaze infrastructure services on a Hadoop cluster to run jobs from multiple Informatica domains of the same version.


Data Integration on Hadoop


PC Reuse Report: As a PowerCenter customer, you can now run the PC Reuse Report to analyze the reusability of your existing PowerCenter mappings in BDM. You can interactively analyze and drill into the data by folder, engine, mapping, and functionality.

Stop on errors: You can now configure Blaze mappings to stop as soon as the Blaze engine encounters data rejection.




[Tech Preview only] HDFS & Hive on ADLS: As a Microsoft Azure customer, you can now read data from and write data to HDFS and Hive tables hosted on ADLS with the Spark engine.

[Tech Preview only] HDFS & Hive on WASB: As a Microsoft Azure customer, you can now read data from and write data to HDFS and Hive tables hosted on WASB with the Spark engine.

Hive on S3: As an Amazon AWS customer, you can now interact with Hive external tables with S3 storage in all supported Hadoop distributions.




Sqoop direct support for Oracle: You can now read from and write to Oracle using the Oracle specialized (direct) driver for Sqoop with the Spark engine.

Sqoop direct support for Teradata: Cloudera and Hortonworks customers can now leverage Teradata Connector for Hadoop (TDCH) functionality with the Blaze engine.

MapR-DB: As a MapR customer, you can now read from and write to MapR’s NoSQL database MapR-DB with PowerExchange for MapR-DB.


Ecosystem support


• Cloudera CDH 5.9, [Tech Preview only] CDH 5.10

• Hortonworks HDP 2.3, 2.4, 2.5

• MapR 5.2

• IBM BigInsights 3.5

• Amazon EMR 5.0

• [Tech Preview only] Azure HDInsight 3.5


Enterprise Information Catalog (EIC)


Open and Extensible Platform


One-click deployment on AWS: As an AWS customer, you can deploy EIC in minutes with the one-click EIC marketplace offering on AWS. The offering also contains sample metadata to help jump-start your EIC experience.

Windows and Linux file system support: You can now import metadata and run profiles on files in Windows and Linux file systems to help catalog enterprise file-based data assets which are deployed outside of HDFS and Amazon S3.

Apache Ranger support: You can now deploy EIC on Hortonworks clusters that are Apache Ranger enabled.

Centrify support: You can also deploy EIC on Cloudera clusters that are Centrify enabled.


User Experience Enhancements


Business Terms in the asset view: You can now view related business terms beside column names in the asset view.

Data asset path in asset views: You can now view and easily navigate to any part of the data asset path using the asset path header.

Hyperlinks in custom attributes: You can add hyperlinks to file assets in Box, Dropbox, SharePoint, or any other URL as a string-based custom attribute to help data consumers navigate to related links from EIC assets.

Maximize the Custom Attribute pane: Maximize the custom attribute pane to view the annotations in a wider, full-screen view.




Automatic detection of CSV file headers: Scans of CSV files now automatically detect whether a header row exists. This enhancement is especially helpful when you automate scans of large folders in HDFS, Amazon S3, and Windows and Linux file systems.
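The kind of header detection described above can be approximated with the Python standard library's `csv.Sniffer`, which compares the first row's cell types against the rest of the sample. This is only a rough stand-in for the product's (undocumented here) detection logic, and the sample data is invented:

```python
import csv

def has_header(sample_text):
    """Heuristically decide whether the first row of a CSV sample
    is a header, by comparing it to the data rows that follow."""
    return csv.Sniffer().has_header(sample_text)

# Hypothetical sample: text column headers over a numeric "age" column.
with_header = "name,age,city\nalice,30,paris\nbob,25,rome\n"
print(has_header(with_header))  # → True
```

The heuristic is most reliable when at least one column has a consistent data type (here, integers under "age") that the header cell does not share.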


Product Availability Matrix (PAM)

Deployment and Source Support


• Hortonworks HDP 2.5

• Cloudera CDH 5.9, 5.10

• IBM BigInsights 4.2


Source Support Only


• Azure HDInsights 3.5

• MapR 5.2


Intelligent Data Lake


Product Availability Matrix (PAM)

Hadoop Ecosystem Support


You can now use the following Hadoop distributions as a Hadoop data lake:


• Cloudera CDH 5.9, 5.10

• Hortonworks HDP 2.3, 2.4, 2.5

• Azure HD Insight 3.5

• Amazon EMR 5.0

• IBM BigInsights 4.2


Repository for the Data Preparation Service


• You can now use MariaDB 10.1.x for the Data Preparation Service repository.


Functional Improvements


Column-level Lineage


• Data analysts can now view the lineage of individual columns in a table corresponding to activities such as copying, importing, exporting, publishing, and uploading data assets.


SSL/TLS Support


• You can now integrate IDL with Cloudera 5.9 clusters that are SSL/TLS enabled.


Informatica 10.1.1 Update 2 Release Notes

Informatica 10.1.1 Release Notes

PAM for Informatica 10.1.1 Update 2

This release can be downloaded by opening a shipping request.

What are we announcing?

Informatica 10.1.1


Who would benefit from this release?

This release is for all customers and prospects who need big data management, data quality and data integration solutions.


What’s in this release?

We see strong interest from our customers in starting or expanding their big data initiatives. Informatica provides the most comprehensive big data management solution to enable customers to quickly turn big data into business value.


As part of Informatica 10.1.1, we are excited to release two new products:

  • Enterprise Information Catalog (EIC), the next generation business-user oriented enterprise-wide metadata catalog.
  • Informatica Intelligent Streaming (IIS), which enables continuous data capture and processing for real-time analytics on streaming data.


In this release, we also added new features, updated product availability matrix (PAM) support, improved performance, and expanded connectivity for our existing products.


High-level new capabilities for this release are described in detail below.


Big Data Management (BDM)

  • Expand and update support for Hadoop distributions
    • Cloudera CDH 5.8
    • Hortonworks 2.5
    • IBM BigInsights 4.2
    • AWS EMR 4.6, AWS EMR 5.0
    • Microsoft Azure HDInsight 3.4
    • TDCH through Sqoop
    • Cassandra version upgrade
    • Blaze support for HBase
    • Silent configuration option for Cloudera Manager and Ambari-based distributions
    • Azure HDInsight configuration
    • Deployment of Informatica binaries using Ambari Stacks and Services
    • Ambari integration
    • Eliminated relational database client installation through
      • JDBC support for Lookup transformation
      • JDBC support for data quality transformations
      • Advanced Hive functionality with support for
        • Create, append, and truncate tables
        • Partitioned and bucketed Tables
        • Char, Varchar, Decimal 38 data types
        • Quoted identifiers in column names
        • SQL-based authorization
        • SQL overrides and Hive Views
      • Partitioning support for the Data Masking Option
      • Advanced transformations: Update Strategy, global sort order through the Sorter transformation, Data Process transformation
      • Summary report capabilities in the Blaze Job Monitor
  • Enhanced Spark capability with:
    • Spark 2.0 support
    • Security support for Sentry, Ranger, operating system profiles, and transparent encryption
    • Hive and Sqoop lookup support
    • Java transformation support
    • Binary data type support
    • HBase support
    • Performance optimizations
  • Enhanced cloud support
    • Single-click BDM image deployment on Microsoft Azure and Amazon AWS marketplace
    • Connectivity for AWS S3 and RedShift
  • Better security on the MapR Hadoop cluster with support for MapR tickets
  • Additional connectivity
    • Sqoop for Blaze and Spark modes of execution
  • Enhanced installation and configuration
  • Enhanced the Blaze engine with advanced capabilities
    • Hive connected and unconnected lookups
  • Enhanced workflows with support for nested gateways and Control tasks


Intelligent Data Lake (IDL)

  • Improvements in data preparation
    • Ability to select columns, filter rows, and use randomized sampling
  • Lookup function
  • Sentry storage and table-level authorization
  • Cloudera CDH 5.8, Hortonworks 2.5
  • Windows 10 Edge browser 38.14, Safari 9.1.2
  • SUSE 12
  • Data preview and ingestion from external RDBMS sources (using JDBC through Sqoop)
  • Publication performance improvements with Blaze
  • Export data from the lake to an external RDMS (using JDBC through Sqoop)
  • Export data from the lake as a Tableau data extract file
  • Granular activity tracking for import, export, publish, copy, delete, etc.
  • Enhanced security support
    • Ranger storage, table, and row-level authorization, and masking policy support

  • Updated PAM Support


Enterprise Information Catalog (EIC) – New standalone product with the 10.1.1 release

Enterprise Information Catalog helps data architects, data stewards, and data consumers analyze and understand large volumes of metadata in the enterprise. Users can extract metadata for many objects, organize the metadata based on business concepts, and view data lineage and relationship information for each object. In essence, it is the ‘Google’ for the enterprise, providing a unified view of all data assets and their relationships.


Enterprise Information Catalog maintains a catalog. The catalog serves as a centralized repository that stores all the metadata extracted from different external sources. Enterprise Information Catalog extracts metadata from external sources such as databases, data warehouses, business glossaries, data integration resources, or business intelligence reports. For ease of search, the catalog maintains an indexed inventory of all the assets in an enterprise. Assets represent the data objects such as tables, columns, reports, views, and schemas. Metadata and inferred statistical information in the catalog include profile results, information about data domains, and information about data relationships.


An early version of EIC was part of the BDM package; in 10.1.1 it becomes a standalone product.


New capabilities and enhancement for this release are:

Effective Metadata Management

  • Business Glossary Integration: Integrated Business Glossary ensures alignment of business concepts with technical data assets. It also maximizes the accuracy of searches for data assets that use business terminology, and enables navigating relationships.
  • Column Level Lineage and Impact Analysis: Column/Metric level data lineage helps track data from origin to destination through multiple ETL flows. The detailed visualization helps in impact analysis to assess impact of any changes. It also helps in identifying the right source for any specific field in any given report, file, or table.
  • Resource-Level Security: With resource-level security, catalog administrators can restrict metadata access to users and groups ensuring controlled visibility of non-public resources in the catalog.
  • Synonym Support: Users can directly upload a synonym file to the catalog. The system then uses these synonyms to match asset names in searches.


  • Smart Domains (Domain by Example): With smart domains, catalog users can associate domains to data assets directly in the catalog. The system learns from these associations to automatically associate the domain with similar columns across the enterprise.
  • Data Similarity: Data similarity uses machine learning techniques to cluster similar columns to compute the extent to which data in two columns are the same. Data similarity is internally used by smart domains for domain propagation. It is also available as a relationship in the column relationship diagram.
  • Domain Curation: Users can approve or reject existing domain associations in rule-based and smart domains.
  • Domain Proximity: Columns describing the same entity are generally found together in data assets. Domain proximity utilizes these groupings while performing inference, penalizing conformance when proximal domains are not found in the same table or file.
  • Domain Management: New domain management capabilities allow users to add, view, and edit domains directly from LDM Administrator.
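As a rough illustration of the idea behind data similarity above (EIC's actual implementation uses machine learning techniques and is not reproduced here), a Jaccard overlap of distinct column values captures the basic notion of "how much the data in two columns is the same." The column names and values below are invented:

```python
def column_similarity(col_a, col_b):
    """Jaccard similarity of the distinct value sets of two columns:
    |A ∩ B| / |A ∪ B|. A simple stand-in for a learned similarity score."""
    a, b = set(col_a), set(col_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical columns from two different tables:
customers_country = ["US", "DE", "FR", "US"]
orders_ship_to = ["US", "DE", "IT"]
print(column_similarity(customers_country, orders_ship_to))  # → 0.5
```

Columns whose similarity exceeds a threshold could then be clustered together, which is how a score like this can drive domain propagation in the smart-domain feature described above.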

Open and Extensible Platform

  • Universal Connectivity Framework: EIC 10.1.1 allows users to connect to a broad range of enterprise data sources including databases, data warehouses, big data systems, BI systems, cloud applications and more. This connectivity is provided through metadata bridges from our partner MITI.
  • New Connectivity
    • Hive/HDFS on EMR and HDInsight: Metadata scanning support for Hive on EMR and HDInsight distributions.
    • OBIEE: Support for extracting BI report metadata from OBIEE
    • SAP R/3: New scanner for SAP applications
    • Microsoft SSIS: New scanner for extracting lineage metadata from SSIS

Performance Enhancements

  • Profiling on Blaze is up to 25X faster than Hive on MapReduce
  • 50% faster metadata ingestion compared to 10.1.
  • Similarity Inference that scales linearly with additional resources.


  • Simplified cluster configuration which uses the Ambari or Cloudera Manager URL to determine other parameters automatically.
  • Pre-validation checks to report all deployment errors upfront, helping you fix them quickly instead of going through an iterative process
  • Improved logging with removal of redundant messages

PAM Support

Hadoop Distribution Deployment support

  • Cloudera 5.8
  • Hortonworks 2.5
  • New versions added for existing scanners
    • IBM DB2 11.1
    • Microsoft SQL Server 2016
  • New scanners
    • Hive/HDFS on EMR 5.0
    • Hive/HDFS on HDInsight 3.4
    • OBIEE 11
    • Microsoft SSIS 2008R2 and 2012
    • SAP R/3 5 and 6


Informatica Intelligent Streaming (IIS) – New product with the 10.1.1 release

Informatica Intelligent Streaming enables customers to design data flows to continuously capture, prepare, and process streams of data with the same powerful graphical user interface, design language, and administration tools used in Informatica's Big Data Management.


Out of the box, IIS provides pre-built high-performance connectors such as Kafka, JMS, HDFS, NoSQL databases, and enterprise messaging systems, as well as all data transformations, to enable a code-free method of defining the customer's data integration logic.


Informatica Intelligent Streaming builds on the best of open source technologies in an easy-to-use enterprise-grade offering. In tandem with BDM's data processing capabilities, it provides a single platform for customers to discover insights and build models that can be then operationalized to run in near real-time and capture and realize the value of high-velocity data.


It significantly reduces the time and effort organizations require to build, run, and maintain streaming-based data integration architectures, allowing them to focus on building low-latency data delivery mechanisms for real-time reporting, alerting, and visualization.


Initially built to execute leveraging the Streaming libraries in Apache Spark, it can scale out horizontally and vertically to handle petabytes of data while honoring business service level agreements (SLAs). The automatic generation of whole classes of data flows at runtime based on design patterns means that the business logic is only lightly coupled to the runtime technology, allowing for future application of that logic in the next generation of frameworks, as they mature.


  • IIS provides the following capabilities:
    • Allows users to create and execute streaming (continuous-processing) mappings
    • Leverages the Spark Streaming engine as the execution engine, which provides high scale and availability
    • Provides management and monitoring capabilities for streams at runtime
    • At-least-once delivery guarantees
    • Granular lifecycle controls based on the number of rows processed or time of execution


  • IIS comes with the following streaming/messaging/big data adapters:
    • Source: Kafka, JMS
    • Target: Kafka, JMS, HBase, Hive, HDFS
    • IIS in combination with VDS can also source data from various streaming sources such as Syslog, TCP, UDP, flat file, MQTT, etc.


  • IIS supports the following data types and formats (only for payloads with simple or flat hierarchies):
    • JSON
    • XML
    • Avro


  • IIS supports the following transformations:
    • (New with IIS) Window transformation for streaming use cases, with the option of sliding and tumbling windows
    • Filter, Expression, Union, Router, Aggregator, Joiner, Lookup, Java, and Sorter transformations can be used with streaming mappings and are executed on Spark
    • Lookup transformations can be used with flat file, HDFS, Sqoop, and Hive
  • Hadoop distribution support
    • Cloudera 5.8
      • Apache Spark 2.0
      • Cloudera Distributed Spark 1.6
    • Hortonworks 2.5
      • Apache Spark 2.0
  • Security support
    • Kerberized Hadoop cluster support
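The tumbling-window option mentioned above can be illustrated with a minimal, self-contained sketch. This is plain Python, not the IIS or Spark Streaming API; the event tuples and window size are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, value) events into fixed, non-overlapping
    windows and count events per window. This is the 'tumbling'
    flavor; a sliding window would instead advance by a step
    smaller than the window size, so windows overlap."""
    counts = defaultdict(int)
    for ts, _value in events:
        window_start = (ts // window_size) * window_size
        counts[window_start] += 1
    return dict(counts)

# Hypothetical events at timestamps 0, 3, 5, 9, 10:
events = [(0, "a"), (3, "b"), (5, "c"), (9, "d"), (10, "e")]
print(tumbling_window_counts(events, 5))  # → {0: 2, 5: 2, 10: 1}
```

Each event lands in exactly one window ([0,5), [5,10), [10,15)), which is what makes the aggregation incremental and bounded-memory in a streaming engine.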


Platform PAM Update

  • Operating System Update:
    • Solaris 11
    • Windows 10 client support
  • Database Support Update:
    • SQL Server 2016
    • IBM DB2 11.1
    • Oracle RAC / SCAN certification for PC and Mercury
  • Web Browser Update:
    • Chrome 54.x
    • Microsoft Edge Browser (Windows 10)
  • Tomcat Support Update: v 7.0.70


Informatica Upgrade Advisor

  • Informatica Upgrade Advisor assesses your existing Informatica environment and checks for upgrade readiness. The tool runs a list of rules and provides an upgrade readiness report. Effective in version 10.1.1, you can run the Informatica Upgrade Advisor to check for actions that you need to take before you perform an upgrade.


Informatica Data Quality (IDQ)

  • Exception management
    • Task-based data security features
    • Centralized auditing enabling enterprise-wide deployment
  • Workflow
    • Nested parallel execution providing performance boost
    • Terminate workflow task
  • Reference data pushdown optimizations for Hadoop
    • No database driver installation required on compute nodes for reference data
    • Synchronized pushdown of address validation data on compute nodes
  • Address validation
    • AV 5.9 integration
      • ISTAT for Italy
      • INE Code for Spain


PowerExchange Mainframe and CDC

  • PAM updates
  • Improved or extended functionality
    • Windows 10 client support
    • DB2UDB V11.1
    • i5/OS 7.3
    • SQL-Server 2016
    • Solaris support re-established

  • New functionality

    • SQL-Server access from Linux
    • SMF reporting enhancements
    • DB2 read (via Datamap) “LOB” datatype support


Metadata Manager

  • Netezza Multiple Schema Support
    • This can be consumed at multiple places within Metadata Manager (Lineage, Catalog, etc.)
    • Support for both single and multiple schemas
    • Can view all artifacts for multiple schemas within the Catalog object
    • Addresses use cases where a table is part of one schema and the corresponding view is part of another schema
  • Platform XConnect Improvements
    • Removed the need for workflow dependencies to be deployed to applications for metadata load


Profiling and Discovery

  • Scorecard Dashboard Drilldown
  • Scorecard dashboards now allow users to drill down to the details and navigate toward actionable results
    • A separate drilldown pane is provided to view the drilldown results in the Scorecard homepage
  • Hive/Hadoop Connection Merge for Blaze Mode
    • Hive and Hadoop connections are merged and seen as “Hadoop” for run time environments
    • Blaze mode will be the preferred mode of big data execution while Hive will be used as a fallback option for functional issues
    • For customers upgrading to 10.1.1, execution mode of pre-10.1.1 profiles will switch to Blaze after upgrade
  • Blaze Support for Profiling Drilldown
    • Both profile and scorecard drilldown operations are now pushed down to Blaze (when the execution mode is set to Blaze)
    • Drastic reduction of profiling drilldown time while leveraging the benefits of performance optimized Blaze environments (vs the Data Integration Service)
    • Profile-level logs will continue to be available while logs for Yarn jobs are available under the Blaze Grid Manager


Informatica 10.1.1 Release Notes

PowerExchange 10.1.1 Release Notes

PowerExchange Adapters for Informatica 10.1.1 Release Notes

PowerExchange Adapters for PowerCenter 10.1.1 Release Notes

PAM for Informatica 10.1.1

This release contains 10.1 certification for Windows 10.

PAM for Informatica 10.1

Informatica 10.1.0 includes the following new capabilities:


Big Data Management

  • PAM
    • HDP 2.3.x, HDP 2.4.x
    • CDH 5.5.x
    • MapR 5.1.x
    • HDInsight 3.3
    • IBM BI 4.1.x
  • Functionality
    • File Name Option: Ability to retrieve the file name and path location from complex files, HDFS files, and flat files.
    • Parallel Workflow tasks
    • Run Time DDL Generation
    • New Hive Datatypes: Varchar/Char datatypes in MapReduce mode
    • BDM UTIL: Full Kerberos automation
    • Developer Tool enhancements
      • Generate a mapplet from connected transformations
      • Copy/Paste/Replace ports from/to Excel
      • Search with auto-suggest in ports
      • "Create DDL" SQL enhancements, including parameterization
  • Reuse
    • SQL To Mapping: Convert ANSI SQL with functions to BDM Mapping
    • Import / Export Frameworks Gaps: Teradata/Netezza adapter conversions from PC to BDM
    • Reuse Report
  • Connectivity
    • Sqoop: Full integration with Sqoop in MapReduce mode.
    • Teradata and Netezza partitioning: Teradata read/write and Netezza read/write partitioning support, and Blaze mode of execution.
    • Complex Files: Native support for Avro and Parquet using complex files.
    • Cloud Connectors: Azure DW, Azure Blob, and Redshift connectors in MapReduce mode.


  • Performance
  • Blaze: Blaze 2.0 delivers significant performance improvements and adds more connectors and transformations that run on Blaze. Some of the new features on Blaze are:
    • Performance
      • Map Side join (with persistence cache)
      • Map Side aggregator
    • Transformations
      • Unconnected lookup
      • Normalizer
      • Sequence generator
      • Aggregator pass through ports
      • Data Quality
      • Data Masking
      • Data Processor
      • Joiner with relaxed join conditions for map-side joins (previously, only equijoins were supported)
    • Connectivity
      • Teradata
      • Netezza
      • Complex file reader/writer for limited cases
      • Compressed Hive source/target
    • Recovery
      • Partial recovery is supported, though it is not enabled by default
  • Spark: Informatica BDM now fully supports Spark 1.5.1 on Cloudera and Hortonworks.
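The map-side join listed among the Blaze 2.0 performance features can be sketched as a broadcast hash join: the small side is loaded into an in-memory hash table once, so the large side streams past it without a shuffle/sort phase. This is a minimal Python illustration, not Blaze's actual implementation; the dimension/fact rows are hypothetical:

```python
def map_side_join(small_rows, big_rows, key):
    """Broadcast-hash (map-side) join: build a hash table from the
    small side, then probe it once per row of the big side.
    Returns merged rows with inner-join semantics."""
    lookup = {row[key]: row for row in small_rows}
    joined = []
    for row in big_rows:
        match = lookup.get(row[key])
        if match is not None:  # drop big-side rows with no match
            merged = dict(match)
            merged.update(row)
            joined.append(merged)
    return joined

# Hypothetical dimension (small) and fact (large) tables:
dims = [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]
facts = [{"id": 1, "amount": 10}, {"id": 3, "amount": 7}]
print(map_side_join(dims, facts, "id"))
# → [{'id': 1, 'region': 'EU', 'amount': 10}]
```

The "persistence cache" mentioned above would correspond to reusing the built hash table across tasks instead of rebuilding it per mapper.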


  • Security
    • Security Integration: The following features are added to support infrastructure security in BDM:
      • Integration with Sentry & Ranger for Blaze mode of Execution.
      • Transparent Encryption support.
      • Kerberos: Automation through BDM UTIL.
    • OS Profile: Secured multi tenancy on the Hadoop cluster.


  • License compliance enhancements
    • License expiration warning messages to renew the license proactively
    • License core over-usage warning messages for compliance
  • Monitoring enhancements
    • Domain level resource usage trending
    • Click through to the actual job execution details from the summarized job run statistics reports
    • Historical run statistics of a job
  • Scheduler enhancements
    • Schedule Profile and Scorecard jobs
    • Schedules are now time zone aware
  • Security enhancements
    • OS Profiles support for BDM, DQ, IDL products - Security and isolation for job execution
    • Application management permissions – fine grained permissions on application/mapping and workflow objects





  • Drag and drop Target definition into Source Analyzer to create source definition in Designer
  • Enhancements to address display issues when using long names and client tools with dual monitors
  • SQL To Mapping: Use Developer tool to convert ANSI SQL with functions to PowerCenter Mapping
  • Command line enhancement to assign Integration service to workflows
  • Command line enhancement to support adding FTP connection
  • Pushdown optimization support for Greenplum


New connectors

  • Azure DW
  • Azure Blob

New Certifications

  • Oracle 12cR1
  • MS Excel 2013
  • MS Access 2013

Mainframe and CDC

  • New functionality
    • z/OS 2.2
    • z/OS CICS/TS 5.3
    • z/OS IMS V14 (Batch & CDC)
    • OpenLDAP support to extend security capabilities over more Linux, Unix, and Windows platforms
  • Improved or Extended functionality
  • i5/OS SQL Generation for Source/Target Objects
  • z/OS DB2  Enhancements
    • IFI 306 Interest Filtering
    • Offloading support for DB2 Compressed Image Copy processing
    • Support for DB2 Flash Copy Images
  • Oracle Enhancements
    • Support of Direct Path Load options for Express CDC
    • Allow support of drop partition DDL to prevent CDC failures
    • Process Archived REDO Log Copies
  • Intelligent Metadata Generation (Createdatamaps)
    • Ability to apply record filtering by matching Metadata with physical data


Metadata Manager

  • Universal Connectivity Framework (UCF)
    • Connects to a wide range of metadata sources. The list of metadata sources is provided in the Administrator Guide
    • A new bridge to a metadata source can be easily deployed. Documentation is provided to aid with the deployment
    • Linking, lineage and impact summary remains intact with Native connectivity
    • Connection based linking is available for any metadata source created via UCF
    • Rule Based and Enumerated linking is available for connectors created via UCF
  • Incremental Load Support for Oracle and Teradata
    • Improved load performance by extracting only changed artifacts from relational sources: Oracle and Teradata
    • Lower load on metadata source databases compared to a full extraction
    • XConnect can run in full or incremental mode. Logs contain more details about extracted artifacts in the incremental mode
  • Enhanced Summary Lineage
    • A simplified lineage view for the business user, without any mapping assets or stage assets in the flow
    • Users can drill down from the enhanced summary lineage to get to technical lineage
    • Users can go back to the summary view from the detailed lineage view

Profiling and Discovery

  • AVRO/Parquet Profiling
    • Profile Avro/Parquet files directly without creating Logical Data Objects for each of them
    • Profile on a file or a folder of files; within Big Data Environment or within Native file system
    • Support common Parquet compression mechanisms including Snappy
    • Support common Avro compression mechanisms including Snappy and Deflate
    • Execution modes for profiling Avro/Parquet files: Native, Hive, and Blaze
  • Operational Dashboards
    • The operational dashboard provides separate views of:
      • Number of scorecards
      • Data objects tracked by scorecards
      • Cumulative scorecard trend (acceptable/unacceptable) elements
      • Scorecard runs summary
    • Analyst users can view the operational dashboard in the scorecard workspace
  • Scheduling Support for Profiles/Scorecards
    • Ability to schedule single/multiple profile(s)/scorecard(s)/Enterprise profile(s)
    • Performed from the UI in Administrator Console
  • Profiling/Scorecards on Blaze
    • Use Big Data infrastructure for Profiling Jobs
    • Running Profiling on Blaze is supported on both Analyst and Developer
    • The following jobs are supported in Blaze mode:
      • Column Profiling
      • Rule Profiling
      • Domain Discovery
      • Enterprise Profiling (Column and Domain)
    • Ability to use all sampling options when working in Blaze mode: First N, Random N, Auto Random, All
  • Data Domain Discovery Enhancements
    • Ability to provide the number of matching records as a domain match criterion, which detects domain matches even when only a few records match; this is especially useful when trying to match secure domains
    • Additional option to exclude NULL values from computation of Inference percentage
  • OS Profile Support
    • Providing Execution resource Isolation for Profiles and Scorecards
    • Configuration similar to PowerCenter/Platform for OS Profiles
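The domain-discovery options above (a record-count match criterion and NULL exclusion) can be sketched as follows. This is a simplified Python illustration, not Informatica's inference engine; the SSN-like pattern and sample values are invented:

```python
import re

def domain_conformance(values, pattern, exclude_nulls=True):
    """Return (conformance fraction, match count) for a domain
    pattern over a column. Optionally drop NULLs (None) from the
    denominator, mirroring the 'exclude NULL values from inference
    percentage' option."""
    if exclude_nulls:
        values = [v for v in values if v is not None]
    if not values:
        return 0.0, 0
    matches = sum(1 for v in values if re.fullmatch(pattern, v))
    return matches / len(values), matches

# Hypothetical column with one NULL and one non-conforming value:
ssn_like = [None, "123-45-6789", "987-65-4321", "n/a"]
pct, count = domain_conformance(ssn_like, r"\d{3}-\d{2}-\d{4}")
# A record-count criterion flags the domain when `count` reaches a
# threshold even if `pct` is low -- useful for sparse secure domains.
print(pct, count)  # → 0.6666666666666666 2
```

With NULLs excluded, two of three remaining values conform; a percentage-only rule might miss a sensitive domain that a record-count rule catches.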


Data Transformation 10.1.0

  • New 'Relational to Hierarchical' transformation in Developer
  • New REST API for executing DT services
  • Optimizations for reading & writing complex Avro and Parquet files
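A DT service exposed over the new REST API would be invoked with an ordinary HTTP POST. The sketch below builds such a request with the Python standard library; the endpoint path, service name, and payload shape are assumptions for illustration only — consult the Data Transformation REST API documentation for the real contract.

```python
# Hypothetical sketch of calling a Data Transformation service over
# REST. The URL pattern and payload are invented for illustration.
import json
import urllib.request

def build_dt_request(base_url, service_name, input_doc):
    # POST the input document to the named DT service (hypothetical path).
    url = f"{base_url}/dt/rest/services/{service_name}"
    body = json.dumps({"input": input_doc}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_dt_request("http://dt-host:8080", "InvoiceToXml",
                       "<invoice id='42'/>")
# urllib.request.urlopen(req) would execute the service call.
print(req.full_url)  # http://dt-host:8080/dt/rest/services/InvoiceToXml
print(req.method)    # POST
```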


PAM (Platform – PC/Mercury)

  • Database support update. Added:
    • Oracle 12cR1
    • IBM DB2 10.5 Fix Pack 7
  • Web browser update:
    • Safari 9.0
    • Chrome 51.x
  • Tomcat support update: v7.0.68
  • JVM support update. Updated:
    • Oracle Java 1.8.0_77
    • IBM JDK 8.0

Enterprise Information Catalog

New Connectivity

• File Scanner for HDFS (Cloudera, Hortonworks) and Amazon S3: Catalog supported files and fields in the data lake. Supported for CSV, XML, and JSON file formats.

• Informatica Cloud: Extract lineage metadata from Informatica Cloud mappings.

• MicroStrategy: Support for metadata extraction from MicroStrategy.

• Amazon Redshift: Support for metadata extraction from Amazon Redshift.

• Hive: Added multi-schema lineage support for Hive.

• Custom Lineage Scanner: Manually add links and link properties to existing objects in the catalog; document lineage from unsupported ETL tools and hand-coded integrations.


• Semantic Search: Object type detection from search queries for targeted search results.

• Enhanced Domain Discovery: Granular controls in domain discovery, such as record match and ignore NULLs, for accurate domain matching.

User Experience Improvements

• Enhanced Catalog Home Page: Added new reports for Top 50 Assets in the organization, Trending Searches and Recently Viewed Assets by the user

• Enhanced Administrator Home Page: New dashboard with widgets for task monitoring, resource views, and unassigned connections.

Performance Enhancements

• Profiling on Blaze: Run Profiling and Domain Discovery jobs on Hadoop for Big Data sets

• Incremental Profiling Support: Scanner jobs identify if the table has changed from the last discovery run and run profiling jobs only on the changed tables for selected sources (Oracle, DB2, SQL Server, and HDFS Files).
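The incremental-profiling behavior described above amounts to simple bookkeeping: re-profile a table only when its change marker (for example, a last-modified timestamp) differs from the one recorded at the previous scanner run. A minimal sketch, with invented table names and markers:

```python
# Sketch of incremental-profiling bookkeeping: compare each table's
# change marker against the marker recorded at the last discovery run
# and re-profile only the tables that changed. Data is illustrative.

def tables_to_profile(current, previous):
    """Return tables whose change marker moved since the last run."""
    return [table for table, marker in current.items()
            if previous.get(table) != marker]

previous_run = {"orders": "2016-05-01", "customers": "2016-05-01"}
current_scan = {"orders": "2016-05-20",      # changed   -> profile
                "customers": "2016-05-01",   # unchanged -> skip
                "shipments": "2016-05-19"}   # new table -> profile

print(sorted(tables_to_profile(current_scan, previous_run)))
# ['orders', 'shipments']
```

Note that a table absent from the previous run (a new table) always profiles, since `previous.get(table)` returns `None`.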

• ~4X Performance Improvement in scanning PowerCenter resources.

• ~30X search, search auto-suggest and sort performance improvements


• Added Support for Backup, Restore and Upgrade.

• Added Kerberos support for the embedded cluster.

• Intelligent Email Alerts: Help administrators proactively address potential stability issues in the Catalog setup.

EIC PAM

• RHEL 7 Support Added

• New versions added for existing scanners:

  • Tableau 9.x

  • MapR 5.1 Hive scanner

  • SAP Business Objects 4.1 SP4 through SP6

• New scanners:

  • Amazon Redshift

  • Amazon S3

  • Informatica Cloud R25

  • MicroStrategy 10.x, 9.4.1, 9.3.1


Intelligent Data Lake (IDL)

In version 10.1, Informatica introduces a new product, Intelligent Data Lake, to help customers derive more value from their Hadoop-based data lakes and democratize data for use by everyone in the organization.

Intelligent Data Lake is a collaborative, self-service big data discovery and preparation solution for data analysts and data scientists to rapidly discover raw data and turn it into insights with quality and governance, especially in a data lake environment.

This allows analysts to spend more time on analysis and less time on finding and preparing data, while IT can ensure quality, visibility, and governance.


Intelligent Data Lake provides the following benefits.

  • Data analysts can quickly and easily find and explore trusted enterprise data assets, both inside and outside the data lake, using semantic search, knowledge graphs, and smart recommendations.
  • Data analysts can transform, cleanse, and enrich data in the data lake using an Excel-like spreadsheet interface in a self-service manner, without needing coding skills.
  • Data analysts can publish and share data and knowledge with the rest of the community and analyze the data using their choice of BI or analytic tools.
  • IT and governance staff can monitor user activity related to data usage in the lake.
  • IT can track data lineage to verify that data is coming from the right sources and going to the right targets.
  • IT can enforce appropriate security and governance on the data lake.
  • IT can operationalize the work done by data analysts into a data delivery process that can be repeated and scheduled.


Intelligent Data Lake includes the following features.

  • Search:
    • Find data in the lake, as well as in other enterprise systems, using smart search and inference-based results.
    • Filter assets based on dynamic facets using system attributes and custom-defined classifications.
  • Explore:
    • Get an overview of assets, including custom attributes, profiling statistics for quality, data domains for business content, and usage information.
    • Add business context by crowd-sourcing metadata enrichment and tagging.
    • Preview sample data to get a sense of the data asset, based on user credentials.
    • View asset lineage to understand where data comes from and where it goes, to build trust.
    • See how a data asset relates to other assets in the enterprise through associations with other tables/views, users, reports, data domains, and so on.
    • Discover previously unknown assets through progressive discovery with lineage and relationship views.
  • Acquire:
    • Upload personal delimited files to the lake using a wizard-based interface.
    • Hive tables are automatically created for uploads in an optimized format.
    • Create new assets, or append to or overwrite existing assets, with uploaded data.
  • Collaborate:
    • Organize work by adding data assets to Projects.
    • Add collaborators to Projects with different roles such as Co-owner, Editor, Viewer etc. for different privileges.
  • Recommendations:
    • Improve productivity by using recommendations based on other users’ behaviors and reuse knowledge.
    • Get recommendations for alternate assets that can be used in the Project instead of what is added.
    • Get recommendations for additional assets that can be used in addition to what’s in the project.
    • Recommendations change based on what is in the project.
  • Prepare:
    • Use an Excel-like environment to interactively specify transformations on sample data.
    • See sheet-level and column-level overviews, including value distributions and numeric/date distributions.
    • Add transformations as recipe steps and immediately see the result on the sheets.
    • Perform column-level data cleansing and transformation using string, math, date, and logical operations.
    • Perform sheet-level operations such as Combine, Merge, Aggregate, and Filter.
    • Refresh the sample in a worksheet if the data in the underlying tables changes.
    • Derive sheets from existing sheets and get alerts when parent sheets change.
    • All transformation steps are stored in the recipe, which can be replayed interactively.
  • Publish:
    • Use the power of underlying Hadoop system to run large scale data transformation without coding/scripting.
    • Run data preparation steps on actual large data sets in the lake to create new data assets.
    • Publish the data in the lake as a Hive table in the desired database.
    • Create new assets, or append to or overwrite existing assets, with published data.
  • Data asset operations:
    • Export data from the lake to a CSV file.
    • Copy data into another database or table.
    • Delete the data asset if allowed by user credentials.
  • My Activities:
    • Keep track of my upload activities and their status.
    • Keep track of publications and their status.
    • View log files in case of errors. Share with IT Administrators if needed.
  • IT Monitoring:
    • Keep track of user, data asset, and project activities by building reports on top of the audit database.
    • Answer questions such as top active users, top datasets by size, last update, most reused assets, and most active projects.
  • IT Operationalization:
    • Operationalize the ad-hoc work done by Analysts.
    • Use the Informatica Developer tool to customize and optimize the Informatica BDM mappings translated from the recipes that analysts created.
    • Deploy, schedule, and monitor the Informatica BDM mappings to ensure data assets are delivered at the right time to the right destinations.
    • Ensure that data lake entitlements for access to various databases and tables conform to security policies.
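The recipe model in the Prepare feature above can be sketched as an ordered list of transformation steps that is first replayed on a sample and later operationalized against the full data set. The step names and row data below are invented for illustration; this is not the product's implementation.

```python
# Illustrative sketch of a "recipe": an ordered list of transformation
# steps replayed over rows (here, a small in-memory sample).

def apply_recipe(rows, recipe):
    """Replay each recipe step, in order, over a list of row dicts."""
    for step in recipe:
        rows = step(rows)
    return rows

def uppercase(column):
    # Column-level cleansing step: uppercase one column's values.
    return lambda rows: [dict(r, **{column: r[column].upper()}) for r in rows]

def keep_if(column, predicate):
    # Sheet-level filter step: keep rows whose column passes the predicate.
    return lambda rows: [r for r in rows if predicate(r[column])]

recipe = [uppercase("city"), keep_if("amount", lambda a: a > 100)]

sample = [{"city": "york", "amount": 250},
          {"city": "leeds", "amount": 40}]
print(apply_recipe(sample, recipe))
# [{'city': 'YORK', 'amount': 250}]
```

Because each step is a pure function over rows, the same recipe can be replayed on a refreshed sample or translated into a mapping that runs at scale, which mirrors the publish/operationalize flow described above.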



Informatica Data Quality

Exception management

  • Data-type-based search-and-replace enhancements
  • Non-default schema for exception tables, for greater security and flexibility
  • Task IDs for better reporting


Address Validation

  • IDQ is now integrated with AV 5.8.1
  • Ireland – support for Eircode postal codes
  • France – SNA Hexaligne 3 data support
  • UK – rooftop geocodes



  • IDQ transformations can execute on Blaze
  • Workflow - parallel execution for enhanced performance



  • Scorecard dashboard for single, high level view of scorecards in the repository



Informatica 10.1 Release Notes

PowerExchange Adapters for Informatica 10.1 Release Notes

PowerExchange Adapters for PowerCenter 10.1 Release Notes

PowerExchange 10.1 Release Notes

PAM for Informatica 10.1

10.1 New Features guide


This is a major release; all download requests must be made by opening a shipping request.