
Partitioning for elastic mappings

When you configure a JDBC V2 elastic mapping to read data from or write data to a database that supports a Type 4 JDBC driver, you can configure partitioning for the JDBC V2 source or target to optimize elastic mapping performance at run time. The partition type controls how the Spark engine distributes data among partitions at partition points.
The Spark engine distributes rows of data based on the partition key that you define. Before you configure partitioning, select the partition key on the Fields tab of the Source or Target transformation in the JDBC V2 mapping. If the imported table already has a primary key defined, the Spark engine uses it as the default partition key. Before you import the table, ensure that it has only one partition key.
After you import the table, you can change the partition key on the Fields tab. From the Options list, select Edit Metadata, and then select the partition key. Do not define more than one partition key for the source or target.
The default number of partitions is 1, and you can specify a maximum of 64 partitions. If you specify one partition for the target, the Spark engine uses the number of partitions that you specified for the source as the target partition count.
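To illustrate how rows are distributed among partitions, the following Python sketch splits the value range of a numeric partition key into contiguous sub-ranges, one per partition, in the way a range-partitioned JDBC read typically works. This is an illustrative model only, not the connector's actual implementation; the function name and signature are hypothetical.

```python
def partition_ranges(key_min, key_max, num_partitions):
    """Split the partition-key value range [key_min, key_max] into
    num_partitions contiguous sub-ranges, the way a range-partitioned
    read distributes rows. Hypothetical sketch, not the connector's code."""
    if not 1 <= num_partitions <= 64:  # the documentation states a maximum of 64 partitions
        raise ValueError("number of partitions must be between 1 and 64")
    stride = (key_max - key_min) / num_partitions
    ranges = []
    lower = key_min
    for i in range(num_partitions):
        # The last partition absorbs any rounding remainder.
        upper = key_max if i == num_partitions - 1 else key_min + stride * (i + 1)
        ranges.append((lower, upper))
        lower = upper
    return ranges

# Example: distribute partition-key values 0..1000 across 4 partitions.
print(partition_ranges(0, 1000, 4))
# → [(0, 250.0), (250.0, 500.0), (500.0, 750.0), (750.0, 1000)]
```

With a single partition the whole key range maps to one reader, which is why a skewed or non-unique partition key can leave some partitions with far more rows than others.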
Note: If you want to use an elastic mapping that was configured for partitioning in an earlier release, you must specify the partition key field.

JDBC V2 data types supported for partitioning

The following table lists whether each JDBC V2 data type is supported as a partition key:

Data type      Supported as partition key
Smallint       Yes
Integer        Yes
Bigint         Yes
Decimal        Yes
Numeric        Yes
Real           Yes
Double         Yes
Smallserial    Yes
Serial         Yes
Bigserial      Yes
Char           No
Char(n)        No
Varchar        No
Varchar(n)     No
Text           No
Bytes          No
Boolean        No
Date           No
Time           No
Timestamp      No
Bit            No
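The table above can be restated programmatically. The following sketch is a hypothetical validation helper, not part of the connector; it checks whether a field's data type is one of the numeric types that the documentation lists as valid partition keys.

```python
# Hypothetical helper restating the table above; not part of the connector.
SUPPORTED_PARTITION_KEY_TYPES = {
    "smallint", "integer", "bigint", "decimal", "numeric",
    "real", "double", "smallserial", "serial", "bigserial",
}

def is_valid_partition_key_type(data_type: str) -> bool:
    """Return True if the data type is supported as a JDBC V2 partition key."""
    return data_type.strip().lower() in SUPPORTED_PARTITION_KEY_TYPES

print(is_valid_partition_key_type("Bigint"))   # True
print(is_valid_partition_key_type("Varchar"))  # False
```

Running such a check before deployment catches the case where the selected partition key is a character, binary, boolean, or date/time field, which the table marks as unsupported.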