Informatica Platform : 2017 : September

Informatica 10.2.0 includes the following new capabilities:


Big Data Management (BDM)


Ease of use


  • Zero design-time footprint: Customers no longer need to install stacks, parcels, or RPMs on the Hadoop cluster to integrate Informatica BDM with a Hadoop cluster.
  • 1-step Hadoop integration: Customers can now integrate Informatica BDM with Hadoop clusters in a single step. They can also keep the Hadoop and BDM environments in sync with a 1-click Refresh.
  • Default DIS selection: Developers working in Informatica domains with a single Data Integration Service (DIS) no longer have to select the DIS before executing mappings. Developers can also set a default DIS per domain rather than per installation.


Platform enhancements


  • Bulk execution engine selection: Administrators can now bulk-update both deployed and design-time mappings to leverage Blaze and Spark.
  • Reusing PowerCenter applications: Customers can now import complex PowerCenter mappings with multiple pipelines, as well as PowerCenter workflows, into BDM.
  • DIS queuing and concurrency: The Data Integration Service can now queue job submissions, and its concurrency enhancements enable massive Hadoop pushdown job submission and execution.
  • DI on Spark: Customers can now use Spark as the execution mode for all types of Data Integration use cases.
  • Complex/hierarchical data types: BDM now supports array, struct, and map data types so that applications with complex data types can process data more effectively.
  • Stateful computing: Customers can now perform stateful computing using the advanced windowing functions introduced in the Spark execution mode.
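The windowing functions behind stateful computing follow standard SQL window semantics (a value accumulated per partition, in order). As a rough plain-Python illustration of those semantics — not Informatica's API, and with made-up sample data — a running total per partition can be sketched as:

```python
from collections import defaultdict

def running_total(rows, partition_key, order_key, value_key):
    """Running total per partition, ordered by order_key -- mimics
    SUM(value) OVER (PARTITION BY key ORDER BY ts) in SQL window terms."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[partition_key]].append(row)
    out = []
    for group in groups.values():
        total = 0
        for row in sorted(group, key=lambda r: r[order_key]):
            total += row[value_key]
            out.append({**row, "running_total": total})
    return out

# Hypothetical account transactions.
rows = [
    {"acct": "A", "ts": 1, "amt": 10},
    {"acct": "A", "ts": 2, "amt": 5},
    {"acct": "B", "ts": 1, "amt": 7},
]
result = running_total(rows, "acct", "ts", "amt")
```

Each account accumulates independently, which is what makes the computation "stateful" across rows of the same partition.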


Connectivity & Cloud


  • Sqoop on Spark: Customers can now use any JDBC-compliant Sqoop driver in the Spark execution mode to ingest data into the Hadoop ecosystem.
  • AWS S3 & Redshift on Spark: Amazon AWS customers can now integrate with S3 and Redshift in the Spark execution mode.
  • Azure ADLS & WASB on Spark: Microsoft Azure customers can now integrate with HDFS and Hive on both ADLS and WASB in the Spark execution mode.
  • New ADLS connector: We added a new Azure Data Lake Storage connector on the Spark engine. Azure customers can now read from and write to ADLS.


Ecosystem support


  • Cloudera CDH 5.11
  • HortonWorks HDP 2.6
  • MapR 5.2 MEP 2.x
  • IBM BigInsights 4.2
  • Amazon EMR 5.4
  • Azure HDInsight 3.6


Platform PAM Update


  • Operating System Update:
    • AIX 7.2 TL0 – Added
    • AIX 6.1 TL8 - Dropped
  • Tomcat Support Update:
    • v 7.0.76
  • JVM Support Update:
    • Oracle Java 1.8.0_131
    • IBM JDK


Informatica PowerCenter


Developer/Admin Productivity


  • Users can evaluate an expression formula as they type the expression
  • Automate end-to-end deployment using pmrep Create Query
  • pmrep Create Connection enhanced to support password as a parameter option


Audit Enhancements


  • Audit user login attempts, capturing the timestamp, IP address, and PowerCenter client application name and version
  • Audit PowerCenter metadata XML imports during code deployment, capturing the logged-in user, IP address, and file name with path and size


Connectivity & Cloud


  • Ability to connect to a host of cloud applications using PowerExchange for Cloud Applications
  • PowerExchange for AWS Redshift
    • Performance improvement
    • Additional AWS region support
  • PowerExchange for AWS S3
    • Source partitioning support
    • Ability to read and write multiple files to S3
    • Additional AWS region support
  • PowerExchange for Azure Blob
    • Added support for append to Blob files
  • Reader performance improvements to PowerExchange for Teradata TPT
  • PowerExchange for SAP
    • HTTPS support added for table reader
    • Additional datatype support for table reader
    • SSTRING datatype support added for IDoc prepare and interpreter transformations
  • SAP HANA (PowerExchange for ODBC)
    • Certified for HANA 2.0
    • Support for bulk loading
    • Added support for “upsert”
  • PowerExchange for Dynamics CRM
    • Made available for AIX operating system
    • Enhanced reader performance


Data Quality (IDQ and BDQ)


Increased flexibility of Business Rules (Rule Specifications)


  • Increased Analyst user flexibility
    • Removes the need to compile
    • Use rule specifications directly in profiles and mappings
  • Ensures user intent is reflected
    • No disconnect between a rule specification and its rule implementations in mappings
    • Increased business-technical collaboration through reusable artifacts
  • More flexible what-if analysis for data analysts through the self-service thin client


Advanced Exception Management Capabilities


  • Support exception data distribution by ranges with flexible definitions in Exception Management processes
  • Enforce basic user validation and prevent NULL user entries
  • Editable subject for Human Task notifications
  • Ability to use an external table for task assignment – no need to redeploy the workflow to change task assignments
  • Updated UI for task assignment
  • Set at the workflow level whether a task user must fill in data in error/exception cells


Address Verification Updates


  • Integrated Address Verification (AddressDoctor) v5.11 engine
  • View the Address Verification license, engine, and data version details from Informatica Developer
  • Country Specific fields added for Austria and Czech Republic


Informatica Intelligent Streaming (IIS)


Enhanced Streaming Analytics Solution


  • Pass-through mapping support: Customers can now pass the streaming payload (BLOB) as-is to the target, bypassing parsing and column projection.
  • Rank transformation support: Customers can now use Rank transformations in streaming mappings to rank data based on relevance.
  • Support for secure Kafka clusters: Customers can now use IIS with Kafka clusters secured by Kerberos.
  • Support for replaying messages in Kafka: Customers can now reprocess messages in Kafka with the replay feature, using timestamps.
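Replay by timestamp works because Kafka can map a timestamp to the earliest offset whose record timestamp is at or after it (the semantics of Kafka's `offsetsForTimes` lookup). As a plain-Python sketch of that lookup — using a hypothetical per-partition index rather than a real Kafka client:

```python
import bisect

def offset_for_timestamp(index, target_ts):
    """Return the earliest offset whose record timestamp is >= target_ts,
    or None if no such record exists -- models Kafka's offsetsForTimes.
    `index` is a list of (timestamp, offset) pairs sorted by timestamp."""
    timestamps = [ts for ts, _ in index]
    i = bisect.bisect_left(timestamps, target_ts)
    return index[i][1] if i < len(index) else None

# Hypothetical (timestamp, offset) pairs for one partition.
index = [(1000, 0), (1005, 1), (1010, 2), (1020, 3)]
replay_from = offset_for_timestamp(index, 1006)
```

Seeking the consumer to the returned offset replays every message from that point in time forward.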


Cloud Support in Streaming


  • Support for Amazon Kinesis as a source: Customers can now source streaming data from Amazon Kinesis and process it using transformations.
  • Support for Amazon Kinesis Firehose as a sink: Customers can now use Amazon Kinesis Firehose as the target of streaming mappings and use it to persist data to AWS S3, Redshift, or Elasticsearch.
  • Support for stream data processing in the cloud: Customers can now run streaming data processing in the cloud on an EMR cluster on AWS.


New Datatype and New PAM


  • Hierarchical datatype support: Customers can now process complex hierarchical streaming payloads in JSON, Avro, and XML formats using IIS.
  • MapR ecosystem support: Customers can now use MapR Streams as a source and MapR-DB, HBase, and HDFS as sinks.
  • Character-delimited data format support: Customers can now use character-delimited data (CSV) as the data format in streaming.
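To make the hierarchical-payload support concrete: processing a nested event means projecting struct fields and array elements into flat columns. A minimal plain-Python sketch of that projection (not IIS itself, with a made-up JSON event) looks like:

```python
import json

# Hypothetical streaming event with nested struct and array fields.
payload = '{"user": {"id": 42, "tags": ["vip", "beta"]}, "event": "click"}'

record = json.loads(payload)

# Project nested struct fields and array elements into flat columns --
# roughly what a hierarchical-to-relational projection does.
flat = {
    "event": record["event"],
    "user_id": record["user"]["id"],
    "first_tag": record["user"]["tags"][0],
}
```

The same idea applies to Avro and XML payloads: the hierarchy is navigated by path and individual leaves become columns.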


Ecosystem support


  • Cloudera CDH 5.11
  • HortonWorks HDP 2.6
  • MapR 5.2 MEP 2.x
  • Amazon EMR 5.4
  • Apache Kafka 0.9 & 0.10.x


Intelligent Data Lake (IDL)


Data Preparation Enhancements and DQ Rules support


  • Enhanced Recipe Panel Layout: Users can see all recipe steps in a dedicated panel during data preparation. The recipe steps are clearer and more concise, with color codes to indicate the function name, columns involved, and input sources. Users can edit steps, delete steps, or go back in time to see the state of the data at a specific step in the recipe.
  • Applying Data Quality Rules: While preparing data, users can use pre-built rules to cleanse, transform, and validate data. Rules can be created using Informatica Developer or Informatica Analyst. With a Big Data Quality license, thousands of pre-built rules are available. Using pre-built rules promotes effective collaboration between business and IT teams.
  • Business Terms for Data Assets in Data Preview and Worksheet View: Users can view business terms associated with columns in data assets during data preview and data preparation.
  • Data Preparation for Delimited Files: Users can cleanse, transform, combine, aggregate, and perform other operations on delimited HDFS files that exist in the data lake. Files can be previewed before being added to a project.
  • Join Editing: Users can edit the join conditions for an existing joined worksheet, including join keys and join types (such as inner and outer joins).
  • Sampling Editing: Users can change the columns selected for sampling, edit the filters applied, and change the sampling criteria.
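The join-type choice the join editor exposes follows standard join semantics: an inner join keeps only matching rows, while a left outer join keeps unmatched left-side rows too. A minimal plain-Python sketch of the difference (illustration only, with hypothetical data — not IDL code):

```python
def join(left, right, key, how="inner"):
    """Minimal dict-based join illustrating inner vs. left-outer semantics."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    out = []
    for lrow in left:
        matches = index.get(lrow[key], [])
        if matches:
            for rrow in matches:
                out.append({**lrow, **rrow})  # matched: merge both sides
        elif how == "left":
            out.append(dict(lrow))  # left outer: keep unmatched left row
    return out

customers = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bob"}]
orders = [{"id": 1, "total": 30}]
inner = join(customers, orders, "id")              # Bob dropped
left = join(customers, orders, "id", how="left")   # Bob kept, no total
```

Editing the join key or switching `how` is exactly the kind of change the worksheet join editor lets users make after the fact.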


Data Validation and Assessment Using Visualization with Apache Zeppelin


After publishing data, users can validate the data visually to make sure that the content and quality are appropriate for analysis. Users can modify the recipe used to prepare the data to address any issues, thus supporting an iterative Prepare-Publish-Validate process.

Intelligent Data Lake integrates with Apache Zeppelin to provide a visualization “Notebook” that contains graphs and charts representing relationships between columns. When you open the visualization Notebook for a published data asset for the first time, IDL uses the CLAIRE engine to create “Smart Visualization suggestions” in the form of histograms based on the numeric columns newly created by the user.

For more details about Apache Zeppelin, see the Apache Zeppelin documentation.

In addition, users can filter the data during data preview for better assessment of data assets.
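The histogram suggestions above are built from a simple summary: equal-width binning of a numeric column into counts. A plain-Python sketch of that binning (illustration of the underlying summary, not CLAIRE's implementation, with hypothetical values):

```python
def histogram(values, bins):
    """Equal-width binning of a numeric column -- the summary that a
    histogram chart is drawn from."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1  # guard against a constant column
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp max into last bin
        counts[i] += 1
    return counts

counts = histogram([1, 2, 2, 3, 9, 10], bins=3)
```

A validation pass over published data can compare such distributions before and after a recipe change, supporting the iterative Prepare-Publish-Validate loop.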



Enterprise Readiness


  • Support for Multiple Enterprise Information Catalog Resources in the Data Lake: Administrators can configure multiple Enterprise Information Catalog resources (Hive and HDFS scanners) so users can work with all types of assets and all applicable Hive schemas and HDFS files in the data lake.
  • Support for Oracle as the Data Preparation Service Repository: Administrators can now use Oracle 11gR2 and 12c databases for the Data Preparation Service repository.
  • Improved Scalability for the Data Preparation Service: Administrators can ensure horizontal scalability by deploying the Data Preparation Service on a grid. Improved scalability supports high performance during interactive data preparation for increased data volumes and numbers of users.


Hadoop Ecosystem Support


  • Cloudera CDH 5.11
  • Hortonworks HDP 2.6
  • Azure HDInsight 3.6
  • Amazon EMR 5.4
  • MapR 5.2 MEP 2.x
  • IBM BigInsights 4.2


Enterprise Information Catalog (EIC)


  • Intelligence

    • Composite Domains: With composite domains, EIC can discover entities and perform data classifications based on rule-based or machine-learning-based domains. Entity recognition is used by search, facets, classifications, and Business Glossary recommendations.
    • Unstructured Data Cataloging: EIC can now catalog unstructured data, including formats such as Excel, Word documents, PowerPoint, PDF, and more. The system uses domain and composite-domain discovery to automatically infer semantic types and entities from unstructured files.
    • New Data Domain Creation: Users can now use Catalog Administrator to create data domains with regular expressions and reference tables, in addition to mapplet-based rules.
    • Value Frequency: Catalog users can now view the top values by occurrence for each column. Value Frequency views are governed by security privileges and permissions.
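The Value Frequency view presents a familiar summary: the most common values in a column with their counts. As a trivial plain-Python sketch of that summary (illustration only, with hypothetical column data — not EIC's implementation):

```python
from collections import Counter

def top_values(column, n=3):
    """Top-n values by occurrence for a column, as (value, count) pairs."""
    return Counter(column).most_common(n)

# Hypothetical column of state codes.
states = ["CA", "NY", "CA", "TX", "CA", "NY"]
top = top_values(states, n=2)
```

In the catalog this summary is computed during profiling and then exposed subject to the viewer's privileges and permissions.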


  • Open and Extensible Platform

    • Open REST APIs: REST APIs for enabling analytics on the metadata repository and integrating EIC with third-party applications.
    • Custom Scanner Framework: Model and ingest metadata from any kind of data or lineage source.
    • Metadata Import/Export: Resource-level metadata import and export to access metadata in an easy-to-understand Excel format and bulk-curate data assets at scale.


  • New Connectivity

    • Azure SQL DB
    • Azure SQL DW
    • Azure WASB File System
    • ERWIN
    • Apache Atlas: Extract lineage from Hadoop jobs (Sqoop, Hive queries, and so on) to create an end-to-end view of data lineage.
    • Informatica Axon: An Axon Glossary scanner to import Axon business glossaries into EIC and allow association with technical data assets. The integration includes the ability to create business classifications and recommendations based on domain associations.
    • Transformation logic from selected sources: PowerCenter, Informatica Cloud, and Cloudera Navigator.


  • Deployment

    • Solr sharding and replication support for HA and better performance
    • Improved logging: All service-dependent logs in one file (LDM.log), including HBase, Solr, and ingestion service logs.
    • Multi-homed network support when multiple network interfaces are installed on the machine.
    • New utility to validate keytab generation from the KDC server.
    • Ranger certification


  • PAM

    • Deployment
      • Cloudera CDH 5.11
      • Hortonworks HDP 2.6
      • Azure HDInsight 3.6


PowerExchange Mainframe & CDC

  • PAM
    • AIX 7.2 – Added
    • z/OS Adabas 8.3.4 – Added
    • z/OS DB2 V12 (Compatibility) – Added


  • Enhancements
    • PowerExchange Navigator: continued work to support working across multiple PowerExchange editions
    • z/OS DB2 CDC support for large object data
    • Oracle CDC on Linux, UNIX, and Windows (LUW): a facility enabling audit-log tracing of Oracle DDL changes


Product Availability Matrix (PAM)


PAM for Informatica 10.2


Release Notes


Informatica 10.2 Release Notes link:


Informatica PowerExchange Version 10.2 Release Notes link:


PowerExchange Adapters for Informatica 10.2 Release Notes link:


PowerExchange Adapters for PowerCenter 10.2 Release Notes link:


Note: As this is a major version, download requests need to be made by opening a shipping case.
