JDBC V2 Connector > Introduction to JDBC V2 Connector > Prerequisite tasks for using JDBC V2 Connector

Prerequisite tasks for using JDBC V2 Connector

Before you run mappings or elastic mappings with JDBC V2 Connector, complete the following prerequisite tasks.
    1. Ensure that your organization has the JDBC V2 Connector license and the Data Integration Elastic licenses.
    2. If you want to run elastic mappings, ensure that the general, Spark, and custom advanced session properties are configured to run elastic mappings on the Serverless Spark engine.
    For more information, see Tasks in the Data Integration help. For more information about the elastic cluster prerequisites, implementation, configurations, and permissions, see the Administrator help.
    3. Download the latest Type 4 JDBC driver version that your database supports from the third-party vendor site.
    If you want to use JDBC V2 Connector to connect to Aurora PostgreSQL, download the Aurora PostgreSQL driver. Informatica has certified Aurora PostgreSQL driver 42.2.6 for JDBC V2 Connector.
    4. Install the Type 4 JDBC driver for the database on the Secure Agent machine and perform the following tasks:
    a. Navigate to the following directory on the Secure Agent machine: <Secure Agent installation directory>/ext/connectors/thirdparty/
    b. Create a folder named informatica.jdbc_v2/common and copy the driver to it.
    c. For elastic mappings, additionally create a folder named informatica.jdbc_v2/spark and copy the driver to it.
    5. Restart the Secure Agent after you copy the driver.
    If you update the driver on the Secure Agent machine and the elastic cluster is already running, you must restart the agent. The changes take effect the next time that the agent starts the cluster. You can see the status on the Elastic Configurations page in Monitor.
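The folder setup in step 4 can be sketched as a short shell script. The installation directory and the driver JAR name below are placeholders, assuming a Linux Secure Agent host; substitute the values for your environment.

```shell
#!/bin/sh
# Sketch of step 4; AGENT_HOME and DRIVER_JAR are placeholder values.
AGENT_HOME="${AGENT_HOME:-/tmp/infaagent}"          # Secure Agent installation directory (assumption)
DRIVER_JAR="${DRIVER_JAR:-postgresql-42.2.6.jar}"   # for example, the certified Aurora PostgreSQL driver

BASE="$AGENT_HOME/ext/connectors/thirdparty"

# Folder required for mappings.
mkdir -p "$BASE/informatica.jdbc_v2/common"
# Additional folder required for elastic mappings.
mkdir -p "$BASE/informatica.jdbc_v2/spark"

# Copy the downloaded Type 4 JDBC driver into each folder, if present.
if [ -f "$DRIVER_JAR" ]; then
  cp "$DRIVER_JAR" "$BASE/informatica.jdbc_v2/common/"
  cp "$DRIVER_JAR" "$BASE/informatica.jdbc_v2/spark/"
fi

# Restart the Secure Agent afterward so it picks up the driver (step 5).
```

Remember that the changes take effect only after the agent restarts.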

Using the serverless runtime environment for elastic mappings

To use the JDBC V2 connection and run elastic mappings in a serverless environment, you must place the JDBC driver JAR files on Amazon S3. The serverless runtime environment retrieves the JDBC driver files from the Amazon S3 location.
To use the serverless runtime environment, perform the following prerequisite tasks on Amazon S3:
  1. Create the following structure for the serverless agent configuration in AWS: <Supplementary file location>/serverless_agent_config
  2. Add the JDBC driver files to the following locations in the Amazon S3 bucket in your AWS account:
     <Supplementary file location>/serverless_agent_config/common
     <Supplementary file location>/serverless_agent_config/spark
  3. Copy the following code snippet to a text editor:
     version: 1
     - fileCopy:
         sourcePath: common/<Driver_filename>
     - fileCopy:
         sourcePath: common/<Driver_filename>
     - fileCopy:
         sourcePath: spark/<Driver_filename>
     - fileCopy:
         sourcePath: spark/<Driver_filename>
    where the source path is the directory path of the driver files in AWS.
  4. Ensure that the syntax and indentation are valid, and then save the file as serverlessUserAgentConfig.yml in the following AWS location: <Supplementary file location>/serverless_agent_config
     When the .yml file runs, the JDBC driver files are copied from the AWS location to the serverless agent directory.
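The S3 layout from the steps above can be sketched by staging the files locally and then uploading the tree to the bucket. The staging path, bucket name, and driver file name below are placeholders, and the upload command assumes the AWS CLI is installed and credentials are configured.

```shell
#!/bin/sh
# Sketch: stage the serverless agent configuration tree, then upload it to Amazon S3.
# STAGE and all file names are placeholders for illustration.
STAGE="${STAGE:-/tmp/serverless_agent_config}"

# Mirror the expected layout under <Supplementary file location>/serverless_agent_config.
mkdir -p "$STAGE/common" "$STAGE/spark"

# Place the JDBC driver JARs and the YAML file into the staged tree, for example:
#   cp postgresql-42.2.6.jar "$STAGE/common/"
#   cp postgresql-42.2.6.jar "$STAGE/spark/"
#   cp serverlessUserAgentConfig.yml "$STAGE/"

# Upload the staged tree to the supplementary file location (requires the AWS CLI):
#   aws s3 sync "$STAGE" "s3://<your-bucket>/<Supplementary file location>/serverless_agent_config"
```

The serverless runtime environment then retrieves the driver files from the Amazon S3 location, as described above.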
For more information about serverless runtime environment properties, see the Administrator help.