
What are we announcing?


Informatica 10.1.1 Update 2


Who would benefit from this release?


This release is for all customers and prospects who want to take advantage of the latest Big Data Management, Enterprise Information Catalog, and Intelligent Data Lake capabilities.


What’s in this release?


This update provides the latest ecosystem support, security, connectivity, cloud, and performance while improving the user experience.


Big Data Management (BDM)


Hive Functionality


Hive truncate table partitioning: You can now configure Blaze mappings to truncate the Hive partitions without impacting other partitions. This feature is an addition to your ability to truncate an entire Hive table, available since 10.1.1.

Hive filter pushdown: Blaze can now push compatible filter conditions to Hive for better performance.




SSL/TLS support: You can now integrate BDM with secure clusters that are SSL/TLS enabled.

Hadoop security with IBM BigInsights: You can now use BDM with IBM BigInsights clusters enabled for security through Apache Ranger, Apache Knox, and Transparent Data Encryption.




Node labeling: You can now group Hadoop nodes with similar characteristics via Hadoop’s node labeling. You can also run Blaze & Spark mappings on the labeled nodes.

Fair and Capacity Scheduler support: You can now define and run Blaze & Spark mappings with the Fair Scheduler and the Capacity Scheduler.

YARN queues: You can now execute Blaze & Spark jobs on specific YARN queues.

Multiple GRID manager support: You can now leverage the same Blaze infrastructure services on a Hadoop cluster to run jobs from multiple Informatica domains of the same version.


Data Integration on Hadoop


PC Reuse Report: As a PowerCenter customer, you can now run the PC Reuse Report to analyze how readily your existing PowerCenter mappings can be reused in BDM. You can interactively analyze and drill into the data by folder, engine, mapping, and functionality.

Stop on errors: You can now configure Blaze mappings to stop as soon as the Blaze engine encounters data rejection.




[Tech Preview only] HDFS & Hive on ADLS: As a Microsoft Azure customer, you can now read from and write to HDFS and Hive tables hosted on ADLS with the Spark engine.

[Tech Preview only] HDFS & Hive on WASB: As a Microsoft Azure customer, you can now read from and write to HDFS and Hive tables hosted on WASB with the Spark engine.

Hive on S3: As an Amazon AWS customer, you can now interact with Hive external tables with S3 storage in all supported Hadoop distributions.




Sqoop direct support for Oracle: You can now read from and write to Oracle using Oracle specialized (direct) driver for Sqoop with the Spark engine.

Sqoop direct support for Teradata: Cloudera and Hortonworks customers can now leverage Teradata Connector for Hadoop (TDCH) functionality with the Blaze engine.

MapR-DB: As a MapR customer, you can now read from and write to MapR’s NoSQL database MapR-DB with PowerExchange for MapR-DB.


Ecosystem support


• Cloudera CDH 5.9, [Tech Preview only] CDH 5.10

• Hortonworks HDP 2.3, 2.4, 2.5

• MapR 5.2

• IBM BigInsights 3.5

• Amazon EMR 5.0

• [Tech Preview only] Azure HDInsight 3.5


Enterprise Information Catalog (EIC)


Open and Extensible Platform


One-click deployment on AWS:  As an AWS customer, you can deploy EIC in minutes with the one-click EIC marketplace offering on AWS. The offering also contains sample metadata to help jump start your EIC experience.

Windows and Linux file system support: You can now import metadata and run profiles on files in Windows and Linux file systems to help catalog enterprise file-based data assets which are deployed outside of HDFS and Amazon S3.

Apache Ranger support: You can now deploy EIC on Hortonworks clusters that are Apache Ranger enabled.

Centrify support: You can also deploy EIC on Cloudera clusters that are Centrify enabled.


User Experience Enhancements


Business terms in the asset view: You can now view related business terms beside column names in the asset view.

Data asset path in asset views: You can now view and easily navigate to any part of the data asset path using the asset path header.

Hyperlinks in custom attributes: You can add hyperlinks to file assets in Box, Dropbox, SharePoint, or any other URL as a string-based custom attribute to help data consumers navigate to related links from EIC assets.

Maximize the Custom Attribute pane: Maximize the custom attribute pane to view the annotations in a wider, full-screen view.




Automatic detection of CSV file headers: You can now auto-detect the existence of file headers when you scan CSV files. This enhancement is especially helpful when you automate scans of large folders in HDFS, Amazon S3, and Windows and Linux file systems.
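Informatica does not document its detection heuristic here, but the general technique can be sketched with Python's standard library, whose `csv.Sniffer` applies a similar per-column type comparison between the first row and the rows beneath it:

```python
import csv

# A small CSV sample: the header row is textual while the data rows are numeric.
sample = "year,count\n2015,10\n2016,12\n2017,9\n"

# csv.Sniffer.has_header() applies a per-column heuristic: if the first row
# differs in type (or string length) from the rows below it, it is likely
# a header row.
has_header = csv.Sniffer().has_header(sample)
print(has_header)  # True for this sample
```

This is only an illustration of the header-sniffing idea; EIC's own scanner logic is not shown here.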


Product Availability Matrix (PAM)

Deployment and Source Support


• Hortonworks HDP 2.5

• Cloudera CDH 5.9, 5.10

• IBM BigInsights 4.2


Source Support Only


• Azure HDInsight 3.5

• MapR 5.2


Intelligent Data Lake


Product Availability Matrix (PAM)

Hadoop Ecosystem Support


You can now use the following Hadoop distributions as a Hadoop data lake:


• Cloudera CDH 5.9, 5.10

• Hortonworks HDP 2.3, 2.4, 2.5

• Azure HDInsight 3.5

• Amazon EMR 5.0

• IBM BigInsights 4.2


Repository for the Data Preparation Service


• You can now use MariaDB 10.1.x for the Data Preparation Service repository.


Functional Improvements


Column-level Lineage


  • Data analysts can now view the lineage of individual columns in a table corresponding to activities such as copying, importing, exporting, publishing, and uploading data assets.
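At its core, column-level lineage is reachability over a graph of column-to-column mappings. A minimal sketch, with hypothetical asset names and edges invented for illustration (not IDL's actual model):

```python
from collections import deque

# Hypothetical column-to-column lineage edges (target -> its source columns);
# the asset names are invented for illustration only.
lineage = {
    "report.revenue":       ["mart.sales.amount"],
    "mart.sales.amount":    ["staging.orders.total"],
    "staging.orders.total": ["src.orders.qty", "src.orders.price"],
}

def upstream(column):
    """Breadth-first walk of lineage edges to find every origin column."""
    seen, queue = set(), deque([column])
    while queue:
        for parent in lineage.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(upstream("report.revenue")))
```

Walking the same edges in the opposite direction gives impact analysis: which downstream columns are affected if a source column changes.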


SSL/TLS Support


• You can now integrate IDL with Cloudera 5.9 clusters that are SSL/TLS enabled.


Informatica 10.1.1 Update 2 Release Notes

Informatica 10.1.1 Release Notes

PAM for Informatica 10.1.1 Update 2

This release can be downloaded by opening a shipping request.

What are we announcing?

Informatica 10.1.1


Who would benefit from this release?

This release is for all customers and prospects who need big data management, data quality and data integration solutions.


What’s in this release?

We see strong interest among our customers in starting or expanding their big data initiatives. Informatica provides the most comprehensive big data management solution to enable customers to quickly turn big data into business value.


As part of Informatica 10.1.1, we are excited to release two new products:

  • Enterprise Information Catalog (EIC), the next generation business-user oriented enterprise-wide metadata catalog.
  • Informatica Intelligent Streaming (IIS), which enables continuous data capture and processing for real-time analytics on streaming data.


In this release, we also added new features, updated product availability matrix (PAM) support, improved performance, and expanded connectivity for our existing products.


High-level new capabilities for this release are described in detail below.


Big Data Management (BDM)

  • Expanded and updated support for Hadoop distributions
    • Cloudera CDH 5.8
    • Hortonworks 2.5
    • IBM BigInsights 4.2
    • AWS EMR 4.6, AWS EMR 5.0
    • Microsoft Azure HDInsight 3.4
  • TDCH through Sqoop
  • Cassandra version upgrade
  • Blaze support for HBase
  • Silent configuration option for Cloudera Manager and Ambari-based distributions
  • Azure HDInsight configuration
  • Deployment of Informatica binaries using Ambari Stacks and Services
  • Ambari integration
  • Eliminated relational database client installation through:
    • JDBC support for the Lookup transformation
    • JDBC support for data quality transformations
  • Advanced Hive functionality with support for:
    • Create, append, and truncate tables
    • Partitioned and bucketed tables
    • Char, Varchar, and Decimal 38 data types
    • Quoted identifiers in column names
    • SQL-based authorization
    • SQL overrides and Hive views
  • Partitioning support for the Data Masking Option
  • Advanced transformations: Update Strategy, global sort order through the Sorter transformation, Data Processor transformation
  • Summary report capabilities in the Blaze Job Monitor
  • Enhanced Spark capability with:
    • Spark 2.0 support
    • Security support for Sentry, Ranger, operating system profiles, and transparent encryption
    • Hive and Sqoop lookup support
    • Java transformation support
    • Binary data type support
    • HBase support
    • Performance optimizations
  • Enhanced cloud support
    • Single-click BDM image deployment on Microsoft Azure and Amazon AWS marketplace
    • Connectivity for AWS S3 and RedShift
  • Better security on the MapR Hadoop cluster with support for MapR tickets
  • Additional connectivity
    • Sqoop for the Blaze and Spark modes of execution
  • Enhanced installation and configuration
  • Enhanced the Blaze engine with advanced capabilities
    • Hive connected and unconnected lookups
  • Enhanced workflows with support for nested gateways and Control tasks


Intelligent Data Lake (IDL)

  • Improvements in data preparation
    • Ability to select columns, filter rows, and randomization for sampling

  • Lookup function
  • Sentry storage and table-level authorization
  • Cloudera CDH 5.8, Hortonworks 2.5
  • Windows 10 Edge browser 38.14, Safari 9.1.2
  • SUSE 12
  • Data preview and ingestion from external RDBMS sources (using JDBC through Sqoop)
  • Publication performance improvements with Blaze
  • Export data from the lake to an external RDBMS (using JDBC through Sqoop)
  • Export data from the lake as a Tableau data extract file
  • Granular activity tracking for import, export, publish, copy, delete, etc.
  • Enhanced security support
    • Ranger storage, table, and row-level authorization, and masking policy support

  • Updated PAM Support


Enterprise Information Catalog (EIC) – New standalone product with the 10.1.1 release

Enterprise Information Catalog helps data architects, data stewards, and data consumers analyze and understand large volumes of metadata in the enterprise. Users can extract metadata for many objects, organize the metadata based on business concepts, and view data lineage and relationship information for each object. In essence, it is the ‘Google’ for the enterprise, providing a unified view of all data assets and their relationships.


Enterprise Information Catalog maintains a catalog. The catalog serves as a centralized repository that stores all the metadata extracted from different external sources. Enterprise Information Catalog extracts metadata from external sources such as databases, data warehouses, business glossaries, data integration resources, or business intelligence reports. For ease of search, the catalog maintains an indexed inventory of all the assets in an enterprise. Assets represent the data objects such as tables, columns, reports, views, and schemas. Metadata and inferred statistical information in the catalog include profile results, information about data domains, and information about data relationships.


An early version of EIC was part of the BDM package; in 10.1.1 it is a standalone product.


New capabilities and enhancements for this release are:

Effective Metadata Management

  • Business Glossary Integration: Integrated Business Glossary ensures alignment of business concepts with technical data assets. It also maximizes the accuracy of searches for data assets using business terminology and lets users navigate the relationships between terms and assets.
  • Column-Level Lineage and Impact Analysis: Column/metric-level data lineage helps track data from origin to destination through multiple ETL flows. The detailed visualization supports impact analysis to assess the impact of any change. It also helps identify the right source for any specific field in any given report, file, or table.
  • Resource-Level Security: With resource-level security, catalog administrators can restrict metadata access to users and groups, ensuring controlled visibility of non-public resources in the catalog.
  • Synonym Support: Users can directly upload a synonym file to the catalog. The system then uses these synonyms during search, so assets can be found by their synonyms as well as their names.


  • Smart Domains (Domain by Example): With smart domains, catalog users can associate domains with data assets directly in the catalog. The system learns from these associations to automatically associate the domain with similar columns across the enterprise.
  • Data Similarity: Data similarity uses machine learning techniques to cluster similar columns and compute the extent to which the data in two columns is the same. Data similarity is used internally by smart domains for domain propagation. It is also available as a relationship in the column relationship diagram.
  • Domain Curation: Users can approve or reject existing domain associations in rule-based and smart domains.
  • Domain Proximity: Columns describing the same entity are generally found together in data assets. Domain proximity utilizes these groupings during inference, penalizing conformance when proximal domains are not found in the same table or file.
  • Domain Management: New domain management capabilities allow users to add, view, and edit domains directly from the LDM Administrator.
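The exact similarity model is not described here, but the core idea of measuring how much the data in two columns overlaps can be sketched with a simple Jaccard coefficient over distinct values (a conceptual illustration, not Informatica's algorithm):

```python
def jaccard(col_a, col_b):
    """Fraction of distinct values shared by two columns (0.0 to 1.0)."""
    a, b = set(col_a), set(col_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two hypothetical "country" columns from different tables.
customers_country = ["US", "DE", "FR", "US", "JP"]
vendors_country   = ["US", "DE", "IN", "JP"]
score = jaccard(customers_country, vendors_country)  # 3 shared of 5 distinct -> 0.6
```

A high overlap score between two columns is the kind of signal that lets a domain assigned to one column be propagated to the other.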

Open and Extensible Platform

  • Universal Connectivity Framework: EIC 10.1.1 allows users to connect to a broad range of enterprise data sources including databases, data warehouses, big data systems, BI systems, cloud applications and more. This connectivity is provided through metadata bridges from our partner MITI.
  • New Connectivity
    • Hive/HDFS on EMR and HDInsight: Metadata scanning support for Hive on EMR and HDInsight distributions.
    • OBIEE: Support for extracting BI report metadata from OBIEE
    • SAP R/3: New scanner for SAP applications
    • Microsoft SSIS: New scanner for extracting lineage metadata from SSIS

Performance Enhancements

  • Profiling on Blaze is up to 25X faster than Hive on MapReduce
  • 50% faster metadata ingestion compared to 10.1
  • Similarity inference that scales linearly with additional resources


  • Simplified cluster configuration which uses the Ambari or Cloudera Manager URL to determine other parameters automatically.
  • Pre-validation checks report all deployment errors upfront, helping you fix them quickly instead of going through an iterative process
  • Improved logging with removal of redundant messages

PAM Support

Hadoop Distribution Deployment support

  • Cloudera 5.8
  • Hortonworks 2.5

New versions added for existing scanners

  • IBM DB2 11.1
  • Microsoft SQL Server 2016

New Scanners

  • Hive/HDFS on EMR 5.0
  • Hive/HDFS on HDInsight 3.4
  • OBIEE 11
  • Microsoft SSIS 2008R2 and 2012
  • SAP R/3 5 and 6


Informatica Intelligent Streaming (IIS) – New product with the 10.1.1 release

Informatica Intelligent Streaming enables customers to design data flows to continuously capture, prepare, and process streams of data with the same powerful graphical user interface, design language, and administration tools used in Informatica's Big Data Management.


Out of the box, IIS provides pre-built high-performance connectors such as Kafka, JMS, HDFS, NoSQL databases, and enterprise messaging systems as well all data transformations to enable a code-free method of defining the customer's data integration logic.


Informatica Intelligent Streaming builds on the best of open source technologies in an easy-to-use enterprise-grade offering. In tandem with BDM's data processing capabilities, it provides a single platform for customers to discover insights and build models that can be then operationalized to run in near real-time and capture and realize the value of high-velocity data.


It will significantly reduce the time and effort organizations require to build, run and maintain streaming-based data integration architectures and allow them to focus on building low-latency data delivery mechanisms for real-time reporting, alerting and/or visualizations.


Initially built to execute leveraging the Streaming libraries in Apache Spark, it can scale out horizontally and vertically to handle petabytes of data while honoring business service level agreements (SLAs). The automatic generation of whole classes of data flows at runtime based on design patterns means that the business logic is only lightly coupled to the runtime technology, allowing for future application of that logic in the next generation of frameworks, as they mature.


IIS provides the following capabilities:

  • Allows users to create and execute streaming (continuous-processing) mappings
  • Leverages the Spark Streaming engine as the execution engine, which provides high scale and availability
  • Provides management and monitoring capabilities for streams at runtime
  • At-least-once delivery guarantees
  • Granular lifecycle controls based on the number of rows processed or time of execution


IIS comes with the following streaming/messaging/big data adapters:

  • Source: Kafka, JMS
  • Target: Kafka, JMS, HBase, Hive, HDFS
  • IIS in combination with VDS can also source data from various streaming sources such as Syslog, TCP, UDP, flat file, MQTT, etc.


IIS supports the following data types and formats (only for payloads with simple or flat hierarchies):

  • JSON
  • XML
  • Avro


IIS supports the following transformations:

  • (New with IIS) Window transformation, added for streaming use cases with the option of sliding and tumbling windows
  • Filter, Expression, Union, Router, Aggregator, Joiner, Lookup, Java, and Sorter transformations can be used with streaming mappings and are executed on Spark
  • Lookup transformations can be used with flat files, HDFS, Sqoop, and Hive

Hadoop Distribution Support

  • Cloudera 5.8
    • Apache Spark 2.0
    • Cloudera Distributed Spark 1.6
  • Hortonworks 2.5
    • Apache Spark 2.0

Security Support

  • Kerberized Hadoop cluster support
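The sliding vs. tumbling distinction in the Window transformation can be illustrated generically (this sketch is not IIS's API): a tumbling window assigns each event timestamp to exactly one fixed bucket, while a sliding window can assign it to several overlapping ones.

```python
def tumbling(ts, size):
    """Each timestamp falls into exactly one fixed, non-overlapping window."""
    start = (ts // size) * size
    return [(start, start + size)]

def sliding(ts, size, slide):
    """Overlapping windows of length `size`, a new one starting every `slide`."""
    windows = []
    k = (ts - size) // slide + 1   # index of the earliest window containing ts
    while k * slide <= ts:
        windows.append((k * slide, k * slide + size))
        k += 1
    return windows

# An event at t=7s: one 10s tumbling bucket, but two 10s windows sliding by 5s.
print(tumbling(7, 10))     # [(0, 10)]
print(sliding(7, 10, 5))   # [(0, 10), (5, 15)]
```

When `slide` equals `size`, a sliding window degenerates into a tumbling one, which is why the two modes are usually offered as one transformation with two parameters.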


Platform PAM Update

  • Operating System Update:
    • Solaris 11
    • Windows 10 client support
  • Database Support Update:
    • SQL Server 2016
    • IBM DB2 11.1
    • Oracle RAC / SCAN certification for PC and Mercury
  • Web Browser Update:
    • Chrome 54.x
    • Microsoft Edge Browser (Windows 10)
  • Tomcat Support Update: v 7.0.70


Informatica Upgrade Advisor

  • Informatica Upgrade Advisor assesses your existing Informatica environment and checks for upgrade readiness. The tool runs a set of rules and provides an upgrade readiness report. Effective in version 10.1.1, you can run the Informatica Upgrade Advisor to check for actions that you need to take before you perform an upgrade.


Informatica Data Quality (IDQ)

  • Exception management
    • Task-based data security features
    • Centralized auditing enabling enterprise-wide deployment
  • Workflow
    • Nested parallel execution providing performance boost
    • Terminate workflow task
  • Reference data pushdown optimizations for Hadoop
    • No database driver installation required on compute nodes for reference data
    • Synchronized pushdown of address validation data on compute nodes
  • Address validation
    • AV 5.9 integration
      • ISTAT for Italy
      • INE Code for Spain


PowerExchange Mainframe and CDC

  • PAM updates
  • Improved or extended functionality
    • Windows 10 client support
    • DB2UDB V11.1
    • I5/OS 7.3
    • SQL-Server 2016
    • Solaris support re-established

  • New functionality

    • SQL-Server access from Linux
    • SMF reporting enhancements
    • DB2 read (via Datamap) “LOB” datatype support


Metadata Manager

  • Netezza Multiple Schema Support
    • This can be consumed at multiple places within Metadata Manager (Lineage, Catalog, etc.)
    • Support for both single and multiple schemas
    • Can view all artifacts across multiple schemas within the Catalog object
    • Addresses use cases where a table is part of one schema and the corresponding view is part of another schema
  • Platform XConnect Improvements
    • Removed the need for workflow dependencies to be deployed to applications for metadata load


Profiling and Discovery

  • Scorecard Dashboard Drilldown
    • Scorecard dashboards now allow users to drill down to the details and navigate toward actionable results
    • A separate drilldown pane is provided to view the drilldown results in the Scorecard homepage
  • Hive/Hadoop Connection Merge for Blaze Mode
    • Hive and Hadoop connections are merged and seen as “Hadoop” for run time environments
    • Blaze mode will be the preferred mode of big data execution while Hive will be used as a fallback option for functional issues
    • For customers upgrading to 10.1.1, execution mode of pre-10.1.1 profiles will switch to Blaze after upgrade
  • Blaze Support for Profiling Drilldown
    • Both profile and scorecard drilldown operations are now pushed down to Blaze (when the execution mode is set to Blaze)
    • Drastic reduction of profiling drilldown time while leveraging the benefits of performance optimized Blaze environments (vs the Data Integration Service)
    • Profile-level logs will continue to be available while logs for Yarn jobs are available under the Blaze Grid Manager


Informatica 10.1.1 Release Notes

PowerExchange 10.1.1 Release Notes

PowerExchange Adapters for Informatica 10.1.1 Release Notes

PowerExchange Adapters for PowerCenter 10.1.1 Release Notes

PAM for Informatica 10.1.1

This release contains 10.1 certification for Windows 10.

PAM for Informatica 10.1

Informatica 10.1.0 includes the following new capabilities:


Big Data Management

  • PAM
    • HDP 2.3.x, HDP 2.4.x
    • CDH 5.5.x
    • MapR 5.1.x
    • HDInsight 3.3
    • IBM BigInsights 4.1.x
  • Functionality
    • File Name Option: Ability to retrieve the file name and path location from complex files, HDFS, and flat files.
    • Parallel Workflow tasks
    • Run Time DDL Generation
    • New Hive data types: Varchar/Char data types in map-reduce mode
    • BDM UTIL: Full Kerberos automation
    • Developer Tool enhancements
      • Generate a mapplet from connected transformations
      • Copy-paste-replace ports from/to Excel
      • Search with auto-suggest in ports
      • "Create DDL" SQL enhancements, including parameterization
  • Reuse
    • SQL To Mapping: Convert ANSI SQL with functions to BDM Mapping
    • Import / Export Frameworks Gaps: Teradata/Netezza adapter conversions from PC to BDM
    • Reuse Report
  • Connectivity
    • Sqoop: Full integration with Sqoop in map-reduce mode.
    • Teradata and Netezza partitioning: Teradata read/write and Netezza read/write partitioning support, with the Blaze mode of execution.
    • Complex Files: Native support of Avro and Parquet using complex files.
    • Cloud Connectors: Azure DW, Azure Blob, and Redshift connectors in map-reduce mode.


  • Performance
    • Blaze: We have made significant improvements in "Blaze 2.0" by enhancing performance and adding more connectors and transformations that run on Blaze. Some of the new Blaze features are:
    • Performance
      • Map Side join (with persistence cache)
      • Map Side aggregator
    • Transformations
      • Unconnected lookup
      • Normalizer
      • Sequence generator
      • Aggregator pass through ports
      • Data Quality
      • Data Masking
      • Data Processor
      • Joiner with relaxed join condition for map-side joins (previously, only equijoins were supported)
    • Connectivity
      • Teradata
      • Netezza
      • Complex file reader/writer for limited cases
      • Compressed Hive source/target
    • Recovery
      • We do support partial recovery, though it is not enabled by default
  • Spark: Informatica BDM now fully supports Spark 1.5.1 in Cloudera and Hortonworks.


  • Security
    • Security Integration: The following features are added to support infrastructure security in BDM:
      • Integration with Sentry & Ranger for the Blaze mode of execution.
      • Transparent Encryption support.
      • Kerberos: Automation through BDM UTIL.
    • OS Profiles: Secured multi-tenancy on the Hadoop cluster.


  • License compliance enhancements
    • License expiration warning messages to renew the license proactively
    • License core over-usage warning messages for compliance
  • Monitoring enhancements
    • Domain level resource usage trending
    • Click through to the actual job execution details from the summarized job run statistics reports
    • Historical run statistics of a job
  • Scheduler enhancements
    • Schedule Profile and Scorecard jobs
    • Schedules are now time zone aware
  • Security enhancements
    • OS Profiles support for BDM, DQ, IDL products - Security and isolation for job execution
    • Application management permissions – fine grained permissions on application/mapping and workflow objects





PowerCenter

  • Drag and drop a Target definition into the Source Analyzer to create a source definition in Designer
  • Enhancements to address display issues when using long names and client tools with dual monitors
  • SQL To Mapping: Use Developer tool to convert ANSI SQL with functions to PowerCenter Mapping
  • Command line enhancement to assign Integration service to workflows
  • Command line enhancement to support adding FTP connection
  • Pushdown optimization support for Greenplum


New connectors

  • Azure DW
  • Azure Blob

New Certifications

  • Oracle 12cR1
  • MS Excel 2013
  • MS Access 2013

Mainframe and CDC

  • New functionality
    • z/OS 2.2
    • z/OS CICS/TS 5.3
    • z/OS IMS V14 (Batch & CDC)
    • OpenLDAP to extend security capabilities over more Linux, Unix & Windows platforms
  • Improved or Extended functionality
  • I5/OS SQL Generation for Source/Target Objects
  • z/OS DB2  Enhancements
    • IFI 306 Interest Filtering
    • Offloading support for DB2 Compressed Image Copy processing
    • Support for DB2 Flash Copy Images
  • Oracle Enhancements
    • Support of Direct Path Load options for Express CDC
    • Allow support of drop partition DDL to prevent CDC failures
    • Process Archived REDO Log Copies
  • Intelligent Metadata Generation (Createdatamaps)
    • Ability to apply record filtering by matching Metadata with physical data


Metadata Manager

  • Universal Connectivity Framework (UCF)
    • Connects to a wide range of metadata sources. The list of metadata sources is provided in the Administrator Guide
    • A new bridge to a metadata source can be easily deployed. Documentation is provided to aid with the deployment
    • Linking, lineage, and impact summaries remain intact with native connectivity
    • Connection based linking is available for any metadata source created via UCF
    • Rule Based and Enumerated linking is available for connectors created via UCF
  • Incremental Load Support for Oracle and Teradata
    • Improved load performance by extracting only changed artifacts from relational sources: Oracle and Teradata
    • Lower load on metadata source databases compared to a full extraction
    • XConnect can run in full or incremental mode. Logs contain more details about extracted artifacts in the incremental mode
  • Enhanced Summary Lineage
    • A simplified lineage view to the business user without any mapping assets or stage assets in the flow
    • Users can drill down from the enhanced summary lineage to get to technical lineage
    • Users can go back to the summary view from the detailed lineage view

Profiling and Discovery

  • AVRO/Parquet Profiling
    • Profile Avro/Parquet files directly without creating Logical Data Objects for each of them
    • Profile on a file or a folder of files; within Big Data Environment or within Native file system
    • Support common Parquet compression mechanisms including Snappy
    • Support common Avro compression mechanisms including Snappy and Deflate
    • Execution mode for profiling Avro/Parquet files is available in Native, Hive, and Blaze modes
  • Operational Dashboards
    • The operational dashboard provides separate views of:
      • Number of scorecards
      • Data objects tracked by scorecards
      • Cumulative scorecard trend (acceptable/unacceptable) elements
      • Scorecard runs summary
    • Analyst users can view the operational dashboard in the scorecard workspace
  • Scheduling Support for Profiles/Scorecards
    • Ability to schedule single/multiple profile(s)/scorecard(s)/Enterprise profile(s)
    • Performed from the UI in Administrator Console
  • Profiling/Scorecards on Blaze
    • Use Big Data infrastructure for Profiling Jobs
    • Running Profiling on Blaze is supported on both Analyst and Developer
    • The following jobs are supported in Blaze mode:
      • Column Profiling
      • Rule Profiling
      • Domain Discovery
      • Enterprise Profiling (Column and Domain)
    • Ability to use all sampling options when working in the Blaze mode: First N, Random N, Auto Random, All
  • Data Domain Discovery Enhancements
    • Ability to set the number of matching records as a domain match criterion, which detects domain matches even when only a few records match the criteria; especially useful when trying to match secure domains
    • Additional option to exclude NULL values from computation of Inference percentage
  • OS Profile Support
    • Providing Execution resource Isolation for Profiles and Scorecards
    • Configuration similar to PowerCenter/Platform for OS Profiles
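The record-count domain match criterion described above amounts to flagging a column once at least N values fit the domain pattern, rather than requiring a minimum match percentage. A hypothetical sketch, with an illustrative pattern and sample data (not Informatica's implementation):

```python
import re

# Hypothetical "US SSN" data domain; the pattern and sample data are illustrative.
SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def matches_domain(values, pattern, min_records=1, ignore_nulls=True):
    """Flag a column as a domain match once at least `min_records` values fit
    the pattern -- useful for secure domains where only a few rows carry
    real (unmasked) values."""
    if ignore_nulls:
        values = [v for v in values if v not in (None, "")]
    hits = sum(1 for v in values if pattern.match(v))
    return hits >= min_records

column = ["n/a", "123-45-6789", None, "masked", "987-65-4321"]
result = matches_domain(column, SSN, min_records=2)  # two values match
```

The `ignore_nulls` flag mirrors the separate option above to exclude NULL values from the inference computation.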


Data Transformation 10.1.0

  • New 'Relational to Hierarchical' transformation in Developer
  • New REST API for executing DT services
  • Optimizations for reading & writing complex Avro and Parquet files


PAM (Platform – PC/Mercury)

  • Database Support Update: Added:
    • Oracle 12cR1
    • IBM DB2 10.5 Fix Pack 7
  • Web Browser Update:
    • Safari 9.0
    • Chrome 51.x
  • Tomcat Support Update: v 7.0.68
  • JVM Support Update: Updated:
    • Oracle Java 1.8.0_77
    • IBM JDK 8.0

Enterprise Information Catalog

New Connectivity

• File Scanner for HDFS (Cloudera, Hortonworks) and Amazon S3: Catalogs supported files and fields in the data lake. Supported for CSV, XML, and JSON file formats.

• Informatica Cloud: Extract lineage metadata from Informatica Cloud mappings.

• MicroStrategy: Support for metadata extracts from MicroStrategy.

• Amazon Redshift: Support for metadata extract from Amazon Redshift.

• Hive: Added multi-schema lineage support for Hive

• Custom Lineage Scanner: Manually add links and link properties to existing objects in the catalog; document lineage from unsupported ETL tools and hand-coded integrations.


• Semantic Search: Object type detection from search queries for targeted search results.

• Enhanced Domain Discovery: Granular controls in domain discovery like Record match and Ignore NULLs for accurate domain matching.

User Experience Improvements

• Enhanced Catalog Home Page: Added new reports for Top 50 Assets in the organization, Trending Searches and Recently Viewed Assets by the user

• Enhanced Administrator Home Page: New dashboard with widgets for task monitoring, resource views, and unassigned connections.

Performance Enhancements

• Profiling on Blaze: Run Profiling and Domain Discovery jobs on Hadoop for Big Data sets

• Incremental Profiling Support: Scanner jobs identify if the table has changed from the last discovery run and run profiling jobs only on the changed tables for selected sources (Oracle, DB2, SQL Server, and HDFS Files).

• ~4X Performance Improvement in scanning PowerCenter resources.

• ~30X search, search auto-suggest and sort performance improvements
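The incremental profiling idea above, detecting which tables changed since the last discovery run and re-profiling only those, can be sketched generically (the timestamps and table names are invented for illustration; this is not the scanner's actual mechanism):

```python
# Hypothetical incremental-scan filter: re-profile only tables whose
# modification time is newer than the previous discovery run.
last_run = 1_700_000_000          # epoch seconds of the previous scan (assumed)
tables = {                        # table -> last-modified time (assumed)
    "orders":    1_700_000_500,
    "customers": 1_699_999_000,
    "invoices":  1_700_100_000,
}

changed = sorted(t for t, mtime in tables.items() if mtime > last_run)
print(changed)  # ['invoices', 'orders']
```

Skipping unchanged tables is what keeps the load on the source databases lower than a full re-extraction.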


• Added Support for Backup, Restore and Upgrade.

• Added Kerberos Support for embedded cluster

• Intelligent Email Alerts: Help administrators proactively address potential stability issues in the Catalog setup.

EIC PAM

• RHEL 7 support added.

• New versions added for existing scanners:
  • Tableau 9.x
  • MapR 5.1 Hive scanner
  • SAP BusinessObjects 4.1 SP4 through SP6

• New scanners:
  • Amazon Redshift
  • Amazon S3
  • Informatica Cloud R25
  • MicroStrategy 10.x, 9.4.1, 9.3.1


Intelligent Data Lake (IDL)

In version 10.1, Informatica introduces a new product, Intelligent Data Lake, to help customers derive more value from their Hadoop-based data lakes and democratize data for use by everyone in the organization.

Intelligent Data Lake is a collaborative, self-service big data discovery and preparation solution for data analysts and data scientists to rapidly discover raw data and turn it into insights with quality and governance, especially in a data lake environment.

This allows analysts to spend more time on analysis and less time on finding and preparing data. At the same time, IT can ensure quality, visibility, and governance.


Intelligent Data Lake provides the following benefits:

  • Data Analysts can quickly and easily find and explore trusted enterprise data assets within the data lake, as well as outside it, using semantic search, knowledge graphs, and smart recommendations.
  • Data Analysts can transform, cleanse, and enrich data in the data lake in a self-service manner, using an Excel-like spreadsheet interface, without needing coding skills.
  • Data Analysts can publish and share data and knowledge with the rest of the community and analyze the data using their choice of BI or analytic tools.
  • IT and governance staff can monitor user activity related to data usage in the lake.
  • IT can track data lineage to verify that data is coming from the right sources and going to the right targets.
  • IT can enforce appropriate security and governance on the data lake.
  • IT can operationalize the work done by data analysts into a data delivery process that can be repeated and scheduled.


Intelligent Data Lake includes the following features:

  • Search:
    • Find data in the lake, as well as in other enterprise systems, using smart search and inference-based results.
    • Filter assets based on dynamic facets using system attributes and custom-defined classifications.
  • Explore:
    • Get an overview of assets, including custom attributes, profiling statistics for quality, data domains for business content, and usage information.
    • Add business context information by crowdsourcing metadata enrichment and tagging.
    • Preview sample data to get a sense of the data asset, based on user credentials.
    • View the lineage of assets to understand where data is coming from and where it is going, and to build trust.
    • See how a data asset is related to other assets in the enterprise based on associations with other tables/views, users, reports, data domains, etc.
    • Discover previously unknown assets through progressive discovery with lineage and relationship views.
  • Acquire:
    • Upload personal delimited files to the lake using a wizard-based interface.
    • Hive tables are automatically created for the uploads in the most optimal format.
    • Create new assets, or append to or overwrite existing assets, for uploaded data.
  • Collaborate:
    • Organize work by adding data assets to Projects.
    • Add collaborators to Projects with roles such as Co-owner, Editor, and Viewer for different privileges.
  • Recommendations:
    • Improve productivity by using recommendations based on other users’ behaviors and reuse knowledge.
    • Get recommendations for alternate assets that can be used in the Project instead of what is added.
    • Get recommendations for additional assets that can be used in addition to what’s in the project.
    • Recommendations change based on what is in the project.
  • Prepare:
    • Use an Excel-like environment to interactively specify transformations using sample data.
    • See sheet-level and column-level overviews, including value distributions and numeric/date distributions.
    • Add transformations in the form of Recipe steps and immediately see the results on the sheets.
    • Perform column-level data cleansing and data transformation using string, math, date, and logical operations.
    • Perform sheet-level operations such as Combine, Merge, Aggregate, and Filter.
    • Refresh the sample in the worksheets if the data in the underlying tables changes.
    • Derive sheets from existing sheets and get alerts when parent sheets change.
    • All transformation steps are stored in the Recipe, which can be played back interactively.
  • Publish:
    • Use the power of underlying Hadoop system to run large scale data transformation without coding/scripting.
    • Run data preparation steps on actual large data sets in the lake to create new data assets.
    • Publish the data in the lake as a Hive table in the desired database.
    • Create new assets, or append to or overwrite existing assets, for published data.
  • Data asset operations:
    • Export data from the lake to a CSV file.
    • Copy data into another database or table.
    • Delete the data asset if allowed by user credentials.
  • My Activities:
    • Keep track of my upload activities and their status.
    • Keep track of publications and their status.
    • View log files in case of errors. Share with IT Administrators if needed.
  • IT Monitoring:
    • Keep track of User, Data Asset, and Project activities by building reports on top of the audit database.
    • Answer questions such as Top Active Users, Top Datasets by Size, Last Update, Most Reused Assets, Most Active Projects, etc.
  • IT Operationalization:
    • Operationalize the ad hoc work done by Analysts.
    • Use the Informatica Developer tool to customize and optimize the Informatica BDM mappings translated from the Recipe that the Analyst created.
    • Deploy, schedule, and monitor the Informatica BDM mappings to ensure data assets are delivered at the right time to the right destinations.
    • Ensure that data lake entitlements for access to the various databases and tables comply with security policies.
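The Prepare/Publish flow above hinges on the Recipe being an ordered, replayable list of transformation steps: recorded interactively against sample data, then re-run unchanged against the full data set at publish time. A minimal sketch of that model (the step functions and recipe format are illustrative, not IDL's actual API):

```python
def uppercase_column(rows, column):
    """Column-level cleansing step: normalize case."""
    return [{**r, column: r[column].upper()} for r in rows]

def filter_rows(rows, predicate):
    """Sheet-level filter step."""
    return [r for r in rows if predicate(r)]

# A recipe is an ordered list of (step_function, kwargs) pairs. Replaying it
# on a sample gives immediate feedback; replaying it on the full data set
# produces the published asset.
recipe = [
    (uppercase_column, {"column": "state"}),
    (filter_rows, {"predicate": lambda r: r["amount"] > 0}),
]

def run_recipe(rows, recipe):
    for step, kwargs in recipe:
        rows = step(rows, **kwargs)
    return rows

sample = [{"state": "ca", "amount": 10}, {"state": "ny", "amount": -5}]
print(run_recipe(sample, recipe))  # → [{'state': 'CA', 'amount': 10}]
```

Because each step is pure and ordered, the same recipe replays deterministically whether it runs on a 1,000-row sample or, via the translated BDM mapping, on the full data set in the lake.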



Informatica Data Quality

Exception management

  • Data-type-based search-and-replace enhancements
  • Non-default schema support for exception tables, for greater security and flexibility
  • Task IDs for better reporting


Address Validation

  • IDQ is now integrated with Address Validation (AV) 5.8.1
  • Ireland – Support for Eircode (postal codes)
  • France – SNA Hexaligne 3 data support
  • UK – Rooftop geocoding



  • IDQ transformations can execute on Blaze
  • Workflow - parallel execution for enhanced performance



  • Scorecard dashboard for single, high level view of scorecards in the repository



Informatica 10.1 Release Notes

PowerExchange Adapters for Informatica 10.1 Release Notes

PowerExchange Adapters for PowerCenter 10.1 Release Notes

PowerExchange 10.1 Release Notes

PAM for Informatica 10.1

10.1 New Features guide


This is a major release; all download requests must be made by opening a shipping request.


Informatica 9.6.1 HotFix 4 includes the following capabilities:




PAM Certifications:


  • Java is upgraded to 1.8.0_72; Tomcat is upgraded to 7.0.68
  • IBM DB2 10.5 Fix Pack 7 support
  • MS Excel 2013 support (PowerCenter)
  • MS Access 2013 support (PowerCenter)    
  • Oracle 12cR1
  • Microsoft Dynamics CRM 2016 certification
  • Chrome 49.x is supported
  • REST WebServices, SOAP Web Services for HP-UX & Solaris 10 (Informatica Developer) – PowerExchange Applications


Security Enhancements:


  • Upgraded several security-critical Third Party Libraries (TPLs)
  • Ability to work with JRE; JDK no longer required
  • Support for TLS v1.1/v1.2 – TLS v1.0 blocked
  • Ability for customers to choose custom cipher configuration
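For illustration, the same policy (old protocol versions blocked, ciphers chosen explicitly) can be expressed with Python's standard ssl module. This sketch pins TLS 1.2 as the minimum and is not Informatica's internal configuration:

```python
import ssl

# Client context that refuses TLS 1.0/1.1 and pins an explicit cipher
# list, mirroring the "TLS v1.0 blocked / custom cipher configuration"
# items above.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # older protocols refused
context.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")   # custom cipher selection

print(context.minimum_version)
```

Restricting the cipher string to ECDHE with AES-GCM excludes anonymous and MD5-based suites; any server offering only weaker suites fails the handshake rather than silently downgrading.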


Connectivity Enhancements:


  • Data driver upgrade to 7.1.5 Hot Fix
  • TLS 1.2 certification of relational database (Oracle, SQL Server, and DB2) connectivity.
    • Note: TLS 1.2 connectivity to Oracle is not certified on the AIX platform for all 9.6.x and 10.x versions.
  • DB2 – Users can filter schema objects based on the schema name provided in the connection string on the Mercury platform
  • Native bulk connector for PowerCenter


Administrator Tool:


  • License expiration warnings and errors
  • For self-compliance, the Administrator tool now shows warning messages when a license is about to expire and error messages when a license has already expired


Reporting & Dashboard Services:


  • Announcement: Reporting & Dashboard Services (RDS) is on a deprecation path
  • New installs of 9.6.1 HF4 will no longer include the RDS service
  • Upgrades from earlier versions will be supported until end of life
  • SQL queries for all out-of-the-box RDS reports are documented; customers are encouraged to use the reporting tool of their choice and build reports as needed


Business Glossary:


  • Audit trail migration support from MM-BG 9.5.1


Metadata Manager:



  • Support added for SAP BO 4.1 SP6




  • Advanced profiling enhancements: Added a custom flag to optionally disable staging of drill-down data


PowerExchange (Mainframe & CDC)


  • Certification of z/OS 2.2
  • z/OS CDC Adapter enhancements
    • Potential performance improvements by utilizing additional IFI 306 object filtering
    • Ability to identify either User/PSBName as DTL__USER in IMS synchronous capture
  • LUW CDC Adapter enhancements
  • Oracle CDC


      • Oracle Direct Path Load support for Insert operations
      • Capture resilience supporting Oracle Kill Session events


Data Quality


Address Validation transform integration with the AddressDoctor 5.8.1 engine


  • Eircode (postal code) support for Ireland addresses
  • UK address enrichments
    • Delivery point type and Organization enrichments
  • South Korea address enrichments
    • Address ID
  • Germany address enrichments
    • Street Code


Exception management


  • Search and replace enhancements
  • Synonym support for exception tables


Informatica 9.6.1 HotFix 4 Release Notes


PowerExchange Adapters for Informatica 9.6.1 HotFix 4 Release Notes


PowerExchange Adapters for PowerCenter 9.6.1 HotFix 4 Release Notes


Informatica PowerExchange 9.6.1 HotFix 4 Release Notes




PAM for Informatica 9.6.1 HotFix 4


You can download the Hotfixes from here.

What are we announcing?

Informatica BDM is releasing EBFs on top of the v10 Update 1 release.


Who would benefit from this release?

This release is for all customers and prospects who require cloud Hadoop distribution support in Big Data Management.


What’s in this release?

Across the Informatica customer base, we see strong interest in big data analytics in the cloud. As part of this EBF release, we are releasing multiple EBFs to support major Hadoop cloud distributions, along with support for a newer version of MapR.


Here are the details:


  • Azure Ecosystem:
    • HDInsight: Informatica BDM will fully support running data integration and data quality workloads on Azure HDInsight. We will support Ubuntu-based HDInsight cluster version 3.3.
    • Hive on Blob: Hive on Azure Blob storage will be supported in the Hadoop mode of execution.
  • MapR 5.1:
    • Informatica BDM will fully support running data integration and data quality workloads on MapR 5.1.



KB article on How to Install Update1:

PS: Location for EBF @/updates/Informatica10/10.0.0/EBF17167


Informatica Big Data Management 10.0 Update 1 Release Notes


PAM for Informatica Platform v10-Update 1

What are we announcing?

Informatica BDM is releasing the Redshift installer and EBFs on top of the v10 Update 1 release.


Who would benefit from this release?

This release is for all customers and prospects who require cloud Hadoop distribution support in Big Data Management.


What’s in this release?

Across the Informatica customer base, we see strong interest in big data analytics in the cloud. As part of this EBF release, we are releasing EMR and Redshift support.


Here are the details:


  • Amazon Ecosystem
    • EMR Support: Informatica BDM will fully support running data integration and data quality workloads on Amazon EMR. We will support Amazon EMR version 4.3.
    • Redshift Connector: We are also releasing Redshift connector support in BDM. This is a high-performance, partition-aware connector that runs in both native and Hadoop modes of execution.
    • Hive on S3
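A partition-aware connector splits a table read into non-overlapping range predicates so each slice can be loaded in parallel. A minimal sketch of the idea (the SQL text and function names are illustrative, not the connector's actual interface):

```python
def partition_queries(table, column, num_partitions, min_val, max_val):
    """Split a full-table read into range predicates over a numeric
    partition column, so each slice can be read by a separate worker.
    The ranges are half-open and together cover [min_val, max_val]."""
    step = (max_val - min_val + num_partitions) // num_partitions
    queries = []
    for i in range(num_partitions):
        lo = min_val + i * step
        hi = min(lo + step, max_val + 1)
        queries.append(
            f"SELECT * FROM {table} WHERE {column} >= {lo} AND {column} < {hi}"
        )
    return queries

for q in partition_queries("orders", "order_id", 2, 0, 99):
    print(q)
```

Each generated query covers a disjoint key range, so workers never read the same row twice and the union of the slices is the whole table.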


Informatica PowerExchange for Amazon Redshift 10.0.Update 1 Release Notes


PAM for Informatica Platform v10-Update 1


PS: Location for EMR EBF @/updates/Informatica10/10.0.0/EBF17167

The Redshift installer is available through Shipping.



The following are the targeted release dates for 9.1 HF5. You may use this as a guide for scheduling the deployment of future releases/milestones.


** Please note that these dates are subject to change. **




Release timeline for Informatica 9.1 HF5:

  • 9.1 HF5 – First week of August 2012





Informatica MySupport Portal Team