
Snowflake Cloud Data Warehouse V2 objects in mappings and elastic mappings

You can configure a mapping or an elastic mapping to read from or write to Snowflake tables and views. You can also read from Snowflake external tables and materialized views.
When you create a mapping or an elastic mapping, you can configure a Source or Target transformation to represent a Snowflake Cloud Data Warehouse V2 object.
Use the Mapping Designer in Data Integration to define and configure a mapping or an elastic mapping for Snowflake Cloud Data Warehouse V2 sources, targets, or lookups. Add the Source, Target, or Lookup transformations in the mapping canvas and configure the Snowflake Cloud Data Warehouse V2 source, target, and lookup properties.
Add the mapping or elastic mapping to a mapping task and run the mapping. You can use Monitor to monitor the jobs.
When you run an elastic job, Data Integration runs the Snowflake Cloud Data Warehouse job on a serverless Spark engine infrastructure. The Secure Agent starts the elastic cluster and pushes the job to the cluster for processing. You can use Monitor to monitor both the cluster and the job. As you run additional jobs, the cluster scales up or down accordingly.

Snowflake Cloud Data Warehouse V2 sources in mappings and elastic mappings

In a mapping or an elastic mapping, you can configure a Source transformation to represent a Snowflake Cloud Data Warehouse V2 source.
You can configure partitioning to optimize the mapping performance at run time when you read data from Snowflake Cloud Data Warehouse. The partition type controls how the agent distributes data among partitions at partition points. You can define the partition type as key range partitioning. With partitioning, the agent distributes rows of source data based on the number of partitions that you define.
The following table describes the Snowflake Cloud Data Warehouse V2 source properties that you can configure in a Source transformation:
Property
Description
Connection
Name of the source connection.
You can select an existing connection, create a new connection, or define parameter values for the source connection property.
Note: You can switch between a non-parameterized and a parameterized Snowflake Cloud Data Warehouse V2 connection. The advanced property values are retained during the switch.
If you want to overwrite the source connection properties at runtime, select the Allow parameter to be overridden at run time option.
Specify the parameter file directory and name in the advanced session properties.
Source Type
Type of the source object. Select Single Object, Multiple Objects, Query1, or Parameter.
Note: When you use a custom SQL query to import Snowflake Cloud Data Warehouse tables, the agent fetches the metadata using separate metadata calls.
Parameter
A parameter file where you define values that you want to update without having to edit the task.
Select an existing parameter for the source object or click New Parameter to define a new parameter for the source object.
The Parameter property appears only if you select parameter as the source type.
If you want to overwrite the parameter at runtime, select the Allow parameter to be overridden at run time option.
When the task runs, the agent uses the parameters from the file that you specify in the advanced session properties.
Object
The source object for the task. For a single source, select the source object. When you select the multiple objects source type, you can add source objects and configure the relationships between them.
Filter
Filters records based on the filter condition. Configure a simple filter.
Sort
Sorts records based on the conditions you specify. You can specify the following sort conditions:
  - Not parameterized. Select the fields and type of sorting to use.
  - Parameterized. Use a parameter to specify the sort option.
1Applies only to mappings.
The remaining properties are applicable for both mappings and elastic mappings.
The following table describes the advanced properties that you can configure in a Source transformation:
Advanced Property
Description
Database
Overrides the database specified in the connection.
Schema
Overrides the schema specified in the connection.
Warehouse
Overrides the Snowflake Cloud Data Warehouse name specified in the connection.
Role
Overrides the Snowflake Cloud Data Warehouse role assigned to the user that you specified in the connection.
Pre SQL
The pre-SQL command to run on the Snowflake Cloud Data Warehouse source table before the agent reads the data.
For example, if you want to update records in the database before you read the records from the table, specify a pre-SQL statement.
You can specify multiple pre-SQL commands, each separated with a semicolon.
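For example, a pre-SQL command that updates records in a hypothetical EMPLOYEE table before the read operation might look like the following:
UPDATE EMPLOYEE SET STATUS = 'IN_PROGRESS' WHERE STATUS IS NULL;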
Post SQL
The post-SQL command to run on the Snowflake Cloud Data Warehouse table after the agent completes the read operation.
For example, if you want to delete some records after the latest records are loaded, specify a post-SQL statement.
You can specify multiple post-SQL commands, each separated with a semicolon.
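For example, a post-SQL command that deletes older records from a hypothetical EMPLOYEE_HIST table after the read operation might look like the following:
DELETE FROM EMPLOYEE_HIST WHERE LOAD_DATE < CURRENT_DATE - 30;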
Table Name
Overrides the table name of the imported Snowflake Cloud Data Warehouse source table.
SQL Override
The SQL statement to override the default query used to read data from the Snowflake Cloud Data Warehouse source.
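For example, an override query that reads only a subset of columns and rows from a hypothetical EMPLOYEE table might look like the following:
SELECT EMP_ID, EMP_NAME, HIRE_DATE FROM EMPLOYEE WHERE HIRE_DATE >= '2020-01-01'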
Tracing Level
Determines the amount of detail that appears in the log file. You can select Terse, Normal, Verbose Initialization, or Verbose Data. Default value is Normal.
1Applies only to mappings.
The remaining properties are applicable for both mappings and elastic mappings.

Import Snowflake objects

When you create a Source or Target transformation, you can select the Snowflake database and schema from which you can import the tables.
    1. To select the Snowflake object in the Source or Target transformation, click Select in the Object field.
    2. In the Select Source Object dialog box, select the database and schema, and then select the tables that you want to read from or write to.
    For example, the Select Source Object dialog box lists the employee details tables from the selected Snowflake database and schema. You can select the required table in the Source or Target transformation.

Join multiple Snowflake objects

When you create a Source transformation, you can select multiple objects as the source type and then configure a join to combine the tables. You can either add related objects with PK-FK relationships that are already defined, or you can define a relationship condition to join the tables.
    1. In the Source transformation, select Multiple Objects as the source type.
    2. From the Actions menu, select Add Source Object, and then select the source object that you want to add from the displayed list.
    3. From the Related Objects Actions menu, perform one of the following tasks based on your requirement:
    a. To add a related object that has a PK-FK relationship that is already defined, select Add Related Objects, and then add the related objects. The objects that have PK-FK relationships are displayed.
    b. To set your own conditions to define the relationship between the tables, select Advanced Relationship from the Related Objects Actions menu, add the required objects, and then define the relationship.
      Note: When you configure a join expression, select the fields and define a join query syntax. You must specify only the condition and not the type of join in the query. The condition that you specify in the text box for the expression is appended to the join condition.
      After you define the relationship, you can view the relationships between the Snowflake tables.
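      For example, an advanced relationship condition between hypothetical EMPLOYEE and DEPARTMENT tables that are joined on a department key might look like the following:
      EMPLOYEE.DEPT_ID = DEPARTMENT.DEPT_ID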

Snowflake Cloud Data Warehouse V2 targets in mappings and elastic mappings

In a mapping or an elastic mapping, you can configure a Target transformation to represent a Snowflake Cloud Data Warehouse V2 target.
You can configure partitioning to optimize the mapping performance at run time when you write data to Snowflake Cloud Data Warehouse V2 targets. The partition type controls how the agent distributes data among partitions at partition points. You can define the partition type as passthrough partitioning. With partitioning, the agent distributes rows of target data based on the number of partitions that you define.
The following table describes the Snowflake Cloud Data Warehouse V2 target properties that you can configure in a Target transformation:
Property
Description
Connection
Name of the target connection.
You can select an existing connection, create a new connection, or define parameter values for the target connection property.
Note: You can switch between a non-parameterized and a parameterized Snowflake Cloud Data Warehouse V2 connection. The advanced property values are retained during the switch.
If you want to overwrite the target connection properties at runtime, select the Allow parameter to be overridden at run time option.
Target Type
Type of target object. Select Single Object or Parameter.
Parameter
A parameter file where you define values that you want to update without having to edit the task.
Select an existing parameter for the target object or click New Parameter to define a new parameter for the target object.
The Parameter property appears only if you select parameter as the target type.
If you want to overwrite the target object at runtime, select the Allow parameter to be overridden at run time option.
When the task runs, the agent uses the parameters from the file that you specify in the advanced session properties.
Object
The target object for the task. Select the target object.
You can write data to an existing table, or create a new table in the target by using the Create New at Runtime option.
Create New at Runtime1
Creates a Snowflake Cloud Data Warehouse target table at runtime based on the table type and the path you specify.
To create a target table at runtime, provide the following parameters:
  - Optional. Specify the table type as table.
  - In the Path field, specify the Snowflake Cloud Data Warehouse database name and schema in the following format: <database_name>/<schema>
The agent creates the target table based on the object name and the path you specify.
Note: You can edit the metadata of the source fields before creating the target.
Operation
The target operation. Select Insert, Update, Upsert, Delete, or Data Driven.
Update columns
The temporary key column to update data to or delete data from a Snowflake Cloud Data Warehouse target.
In a mapping or elastic mapping, when you configure an update, update else insert, or delete operation and the Snowflake Cloud Data Warehouse V2 target does not include a primary key column, click Add to add a temporary key. You can select multiple columns.
Important: In an elastic mapping, when you specify an expression for a data driven condition and do not specify a column in the Update Columns field, a validation message does not appear. For DD_INSERT and DD_REJECT, the update column is not mandatory. For other operations such as DD_UPDATE and DD_DELETE, you must select at least one field in the update columns for the operation to work.
Data Driven Condition
Enables you to define expressions that flag rows for an insert, update, upsert, or delete operation when you select the Data Driven operation type.
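For example, a data driven expression might look like the following sketch, which assumes a hypothetical incoming STATUS field:
IIF(STATUS = 'INACTIVE', DD_DELETE, DD_UPDATE)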
1Applies only to mappings.
The remaining properties are applicable for both mappings and elastic mappings.
The following table describes the advanced properties that you can configure in a Target transformation:
Advanced Property
Description
UpdateMode
Loads data to the target based on the mode you specify.
Applicable when you select the Update operation or the Data Driven operation.
Select from one of the following modes:
  - Update As Update. Updates all rows flagged for update if the entries exist.
  - Update Else Insert. The agent first updates all rows flagged for update if the entries exist in the target. If the entries do not exist, the agent inserts the entries.
Database
Overrides the database that you used to import the object.
Schema
Overrides the schema that you used to import the object.
Warehouse
Overrides the Snowflake Cloud Data Warehouse name specified in the connection.
Role
Overrides the Snowflake Cloud Data Warehouse role assigned to the user specified in the connection.
Pre SQL
The pre-SQL command to run before the agent writes to Snowflake Cloud Data Warehouse.
For example, if you want to assign a sequence object to a primary key field of the target table before you write data to the table, specify a pre-SQL statement.
You can specify multiple pre-SQL commands, each separated with a semicolon.
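For example, a pre-SQL command that creates a hypothetical EMP_SEQ sequence object to use for the primary key field might look like the following:
CREATE SEQUENCE IF NOT EXISTS EMP_SEQ START = 1 INCREMENT = 1;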
Post SQL
The post-SQL command to run after the agent completes the write operation.
For example, if you want to alter the table created by using the Create New at Runtime option and assign constraints to the table after you write data to the table, specify a post-SQL statement.
You can specify multiple post-SQL commands, each separated with a semicolon.
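For example, a post-SQL command that adds a constraint to a hypothetical EMPLOYEE target table might look like the following:
ALTER TABLE EMPLOYEE ADD CONSTRAINT PK_EMPLOYEE PRIMARY KEY (EMP_ID);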
Batch Row Size1
The number of rows written to a single file in the agent location. When the number of rows written to the file reaches the value specified, the agent flushes the data queue and starts processing the write commands.
Number of local staging files1
The number of files that represents a single batch of data. The default number of files is 64.
After the agent uploads the specified number of local staging files to the Snowflake user stage, Snowflake unloads the data to the target table.
Truncate Target Table
Truncates the database target table before inserting new rows. Select one of the following options:
  - True. Truncates the target table before inserting all rows.
  - False. Inserts new rows without truncating the target table.
Default is false.
Additional Write Runtime Parameters
Specify additional runtime parameters.
For example, if you want to specify the user-defined stage in the Snowflake database to upload the local staging files, specify the name of the stage location in the following format:
remoteStage=REMOTE_STAGE
If you want to optimize the write performance, you can choose to compress files before writing to Snowflake tables. You can set the compression parameter to On or Off, for example:
Compression=On
By default, compression is on.
Separate multiple runtime parameters with &.
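For example, to use a hypothetical user-defined stage named MY_STAGE and turn off compression, you might specify the following value:
remoteStage=MY_STAGE&Compression=Off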
Table Name
Overrides the table name of the Snowflake Cloud Data Warehouse target table.
Rejected File Path1
The filename and path of the file on the agent machine where you want to write the rejected records.
For example, \rejectedfiles\reject7
Update Override1
Overrides the default update query that the agent generates for the update operation with the update query that you specify.
Success File Directory
Not applicable.
Error File Directory
Not applicable.
Forward Rejected Rows
Determines whether the transformation passes rejected rows to the next transformation or drops rejected rows. By default, the agent forwards rejected rows to the next transformation.
1Applies only to mappings.
The remaining properties are applicable for both mappings and elastic mappings.

Specifying a target

You can use an existing target or create a target to hold the results of a mapping. If you choose to create the target, the agent creates the target when you run the task.
To specify the target properties, perform the following tasks:
    1. Select the Target transformation in the mapping.
    2. On the Incoming Fields tab, configure field rules to specify the fields to include in the target.
    3. To specify the target, click the Target tab.
    4. Select the target connection.
    5. For the target type, choose Single Object or Parameter.
    6. Specify the target object or parameter.
    7. To specify a target object, perform the following tasks:
    a. Click Select and choose a target object. You can select an existing target object or create a new target object at runtime and specify the object name. To create a new target object at runtime, specify the object name and the path where you want to create the table in Snowflake.
    Note: Ensure that the <database name>/<schema name> path is in uppercase letters. If you do not specify the path, the Secure Agent considers the schema and database name that you specified in the Additional JDBC URL Parameters field in the Snowflake Cloud Data Warehouse V2 connection properties.
    b. To create a target object at runtime, select Create New at Runtime.
    c. Enter the name for the target object in the Object Name field.
    d. Specify the TableType as table. The TableType property is optional.
    e. In the Path field, specify the Snowflake Cloud Data Warehouse database name and schema in the following format: <database_name>/<schema>
    f. Click OK.
    8. Specify the advanced properties for the target, if required.

Configuring an update override

You can specify an update override to override the update query that the Secure Agent generates for the update operation.
When you configure an update override, the Secure Agent uses the query that you specify, stages the data in files, and then loads the data into a temporary table by using the Snowflake loader's COPY command. The data from the temporary table is then loaded to the Snowflake target table. The syntax that you specify for the update query must be supported by Snowflake.
You must specify the update query in the following format:
UPDATE <Target table name> SET <Target table name>.<Column1> = :TU.<Column1>, <Target table name>.<Column2> = :TU.<Column2>, … <Target table name>.<ColumnN> = :TU.<ColumnN> FROM :TU WHERE <Target table name>.<Update Column1> = :TU.<Update Column1> AND <Target table name>.<Update Column2> = :TU.<Update Column2> AND … <Target table name>.<Update ColumnN> = :TU.<Update ColumnN>
where :TU represents the incoming data source for the target port.
The Secure Agent replaces :TU with a temporary table name when it runs the mapping and does not validate the update query.
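For example, an update override for a hypothetical EMPLOYEE target table that uses EMP_ID as the update column might look like the following:
UPDATE EMPLOYEE SET EMPLOYEE.EMP_NAME = :TU.EMP_NAME, EMPLOYEE.EMP_SALARY = :TU.EMP_SALARY FROM :TU WHERE EMPLOYEE.EMP_ID = :TU.EMP_ID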

Configuring the .csv file size for a Snowflake Cloud Data Warehouse target

When you create a mapping to write to Snowflake, you can specify the size of the local staging .csv file in bytes. Specify the local staging file size property, csvFileSize, in the Additional Write Runtime Parameters field in the advanced Snowflake target properties. The default file size is 50 MB.
If the intended data size is 50 MB, calculate the csvFileSize value in bytes, for example, 50*1024*1024, and then specify 52428800 as the csvFileSize value. For better performance when you write data to Snowflake, configure an appropriate combination of the number of local staging files and the csvFileSize value.
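For example, to set the local staging file size to 50 MB, specify the following value in the Additional Write Runtime Parameters field:
csvFileSize=52428800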

Configuring loader properties as additional runtime parameters in mappings

You can configure write properties to load data to Snowflake Cloud Data Warehouse in the Additional Write Runtime Parameters field in the Snowflake Cloud Data Warehouse advanced target properties of the Target transformation.
The following list describes some of the additional runtime parameters that you can specify to load data to Snowflake Cloud Data Warehouse:
oneBatch
Supported type: Boolean
Description: Processes all data in a single batch.
Default value: false
remoteStage
Supported type: String
Description: Specifies to use an internal stage.
Note: External stage is not applicable.
Default value: "~" (user stage)
onError
Supported type: String
Description: Specifies the action to perform when an error is encountered while loading data from a file. The values include ABORT_STATEMENT, CONTINUE, and SKIP_FILE.
Default value: CONTINUE
compressFileByPut
Supported type: Boolean
Description: Compresses the file by PUT.
Default value: false
compressDataBeforePut
Supported type: Boolean
Description: Compresses the data before PUT. The loader compresses the data to a gzip format before uploading the data.
Default value: true
copyEmptyFieldAsEmpty
Supported type: Boolean
Description: The COPY command option to set incoming empty fields as null.
Default value: -
enablePGZIP
Supported type: Boolean
Description: Enables parallelism for the file compression.
Default value: true
onError=ABORT_STATEMENT&oneBatch=true
Supported type: onError - String, oneBatch - Boolean
Description: Loads the entire data in a single batch and stops the task if an error occurs. Simultaneously, validates the user-specified reject file path and writes the error records to this file and to the session log.
Default value: -
When you set the values in the additional runtime parameters field, every configured partition initializes a new loader instance and the configured values apply similarly across all the partitions.

Example 1

You want to compress files by using the Put command before loading data to Snowflake.
Specify the following compression option: compressDataBeforePut=false&compressFileByPut=true
If you specify both the options as true, Snowflake considers the compressDataBeforePut option.

Example 2

You want to replace incoming fields that contain empty values with NULL while loading the data to Snowflake.
Specify the copyEmptyFieldAsEmpty Boolean option and set the value to true or false based on your requirement.
Consider the following scenarios before you configure the copyEmptyFieldAsEmpty Boolean parameter:

Example 3

You want to write source data in a single batch to Snowflake. If an error is encountered while loading, you also want to capture the errors.
Specify the onError=ABORT_STATEMENT&oneBatch=true property based on your requirement.
While loading in a single batch, if an error occurs, the Secure Agent checks for the specified reject file name, runs the COPY command, validates the reject file, and then passes the file name to capture the errors, if any.

Configure additional runtime parameters in elastic mappings

You can configure additional properties in elastic mappings to write data to Snowflake. Specify the parameters in the Additional Write Runtime Parameters field in the Target transformation.
The following list describes some of the additional runtime parameters that you can specify to write data to Snowflake:
autopushdown
Optional. Determines whether the automatic query pushdown is enabled.
If you enable pushdown and the query runs on the Spark engine, the Spark engine pushes part of the query to process in Snowflake, thereby optimizing the performance of these queries.
Default is on when the connector uses a compatible Spark version. When the connector does not use a compatible Spark version, the default value is off.
continue_on_error
Optional. Determines whether the COPY command aborts the operation when you enter data that is not valid. For example, you specify a JSON format for a variant data type column that is not valid.
The values include on and off. When you specify the value as on, the COPY command continues even if an error occurs. If you specify off, the COPY command aborts when an error occurs. The default value is off.
It is recommended that you keep the option off. Otherwise, when an error is encountered while copying data into Snowflake, some of the data might be missing.
parallelism
The size of the thread pool to use when the Secure Agent uploads or downloads data between Snowflake and Spark. Default is 4.
Do not change the default value unless you need to increase or decrease the throughput. When you want a high throughput, do not set the parallelism to an arbitrarily large number. A high parallelism value might lead to undesired outputs and slow down the operation.
purge
Determines whether the Secure Agent deletes the temporary files created when transferring data from the Spark engine to Snowflake through the external data transfer. The possible values are on and off. Default is off.
If you set this parameter to on, the Secure Agent automatically deletes the temporary files. If you set this parameter to off, the Secure Agent does not automatically delete the temporary files. Purging works only for data transfers from the Spark engine to Snowflake, but not for transfers from Snowflake to the Spark engine.
usestagingtable
Optional. Determines whether the data loading operation uses a staging table.
The Secure Agent creates a staging table with a temporary name. If the data loading operation is successful, the Secure Agent drops the original target table and renames the staging table to the original target table name. If the data loading operation fails, the Secure Agent drops the staging table and the target table retains the data that it contained before the operation.
Snowflake strongly recommends that you use a staging table. To create a staging table, you must have sufficient privileges to run the COPY command to create a table. If you do not have permissions to create a table, you must load directly without using a staging table.
The values include on and off. If you specify the usestagingtable parameter as on, the Secure Agent uses a staging table. If you specify the value as off, the Secure Agent directly loads the data into the target table. Default is on.
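For example, assuming the same ampersand-separated format that mappings use for additional runtime parameters, you might turn off automatic pushdown and load data directly without a staging table with the following value:
autopushdown=off&usestagingtable=off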

Target statistics of processed rows in Snowflake Cloud Data Warehouse V2 write operations

When you run an insert, update, or delete operation, and the Secure Agent successfully applies all rows to the target without rejecting any rows, the state of the task on the My Jobs page shows Success. If the Secure Agent rejects even one row, the status shows Warning. In this case, the Rows Processed field on the My Jobs page shows the total number of rows that the Secure Agent processed.
To view how many among the processed rows were a success and how many resulted in an error, select the specific instance name and view the Results section. You can view the number of success rows and error rows.
You can also download the session log to get details of the number of output rows, affected rows, applied rows, and rejected rows.
You might also encounter the following scenarios of target statistics for Snowflake Cloud Data Warehouse V2 write operations:

Snowflake Cloud Data Warehouse V2 lookups in mappings and elastic mappings

You can create lookups for objects in a Snowflake Cloud Data Warehouse V2 connection. You can retrieve data from a Snowflake Cloud Data Warehouse V2 lookup object based on the specified lookup condition.
When you configure a lookup in Snowflake Cloud Data Warehouse V2, you select the lookup connection and lookup object. You also define the behavior when a lookup condition returns more than one match.
You can add the following lookups to a mapping or an elastic mapping:
Note: Cached and uncached lookups are not applicable for elastic mappings.
The following table describes the Snowflake Cloud Data Warehouse V2 lookup object properties that you can configure in a Lookup transformation:
Property
Description
Connection
Name of the lookup connection.
You can select an existing connection, create a new connection, or define parameter values for the lookup connection property.
Note: You can switch between a non-parameterized and a parameterized Snowflake Cloud Data Warehouse V2 connection. The advanced property values are retained during the switch.
If you want to overwrite the lookup connection properties at runtime, select the Allow parameter to be overridden at run time option.
Source Type
Type of the source object. Select Single Object, Query1, or Parameter.
Parameter
A parameter file where you define values that you want to update without having to edit the task.
Select an existing parameter for the lookup object or click New Parameter to define a new parameter for the lookup object.
The Parameter property appears only if you select parameter as the lookup type.
If you want to overwrite the parameter at runtime, select the Allow parameter to be overridden at run time option.
When the task runs, the Secure Agent uses the parameters from the file that you specify in the advanced session properties.
Lookup Object
Name of the lookup object for the mapping.
Multiple Matches1
Behavior when the lookup condition returns multiple matches. You can return all rows, any row, the first row, the last row, or an error.
You can select from the following options in the lookup object properties to determine the behavior:
  - Return first row
  - Return last row
  - Return any row
  - Return all rows
  - Report error
Filter
Not applicable.
Sort
Not applicable.
1Applies only to mappings.
The remaining properties are applicable for both mappings and elastic mappings.
The following table describes the Snowflake Cloud Data Warehouse V2 lookup object advanced properties that you can configure in a Lookup transformation:
Advanced Property
Description
Database
Overrides the database specified in the connection.
Schema
Overrides the schema specified in the connection.
Warehouse
Overrides the Snowflake warehouse name specified in the connection.
Role
Overrides the Snowflake role assigned to the user specified in the connection.
Pre SQL
Not applicable.
Post SQL
Not applicable.
SQL Override1
Overrides the default SQL query used to read data from the Snowflake source.
1Applies only to mappings.
The remaining properties are applicable for both mappings and elastic mappings.

Unconnected Lookup transformation

You can configure an unconnected Lookup transformation for the Snowflake Cloud Data Warehouse source in a mapping. Use the Lookup transformation to retrieve data from Snowflake Cloud Data Warehouse based on a specified lookup condition.
An unconnected Lookup transformation is a Lookup transformation that is not connected to any source, target, or transformation in the pipeline.
An unconnected Lookup transformation receives input values from the result of a :LKP expression in another transformation. The Integration Service queries the lookup source based on the lookup ports and condition in the Lookup transformation and passes the returned value to the port that contains the :LKP expression. The :LKP expression can pass lookup results to an expression in another transformation.
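For example, a :LKP expression that calls a hypothetical unconnected lookup named lkp_EmployeeDetails with EMP_ID and DEPT_ID arguments might look like the following:
:LKP.lkp_EmployeeDetails(EMP_ID, DEPT_ID)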
For more information about the Lookup transformation, see Transformations.

Configuring an unconnected Lookup transformation

To configure an unconnected Lookup transformation, select the Unconnected Lookup option, add incoming fields, configure the lookup condition, and designate a return value. Then configure a lookup expression in a different transformation.
    1. Add a Lookup transformation in a mapping.
    2. On the General tab of the Lookup transformation, enable the Unconnected Lookup option.
    3. On the Incoming Fields tab of the Lookup transformation, create an incoming field for each argument in the :LKP expression.
    For each lookup condition you create, add an incoming field to the Lookup transformation.
    4. In the Lookup Object tab, import the lookup object.
    The Multiple Matches property value Return all rows is not applicable in an unconnected lookup.
    5. Designate a return value.
    You can pass multiple input values into a Lookup transformation and return one column of data. Data Integration can return one value from the lookup query. Use the return field to specify the return value.
    6. Configure a lookup expression in another transformation.
    Supply input values for an unconnected Lookup transformation from a :LKP expression in another transformation, such as an Expression transformation. The arguments are local input fields that match the Lookup transformation input fields used in the lookup condition.
    7. Map the fields with the target.

Capture uncached lookup queries

When you enable caching, the Data Integration Server queries the lookup source once and caches the values for use during the session. When you disable caching, each time a row passes into the transformation, a SELECT statement gets the lookup values.
You can capture the uncached lookup queries for a connected and unconnected lookup in the session log. Enable the Lookup Caching Enabled property in the mapping, and then enable the verbose mode in the Snowflake Cloud Data Warehouse mapping task.
When you run the mapping, the Secure Agent logs the uncached lookup queries in the session logs.

Configure dynamic lookup cache

You can configure a dynamic lookup cache to keep the lookup cache synchronized with the Snowflake target.
Enable the Dynamic lookup cache field in the Lookup transformation advanced properties. By default, this field is disabled and represents static cache.
When you enable lookup caching, a mapping task builds the lookup cache when it processes the first lookup request. If the cache is static, the data in the lookup cache does not change as the mapping task runs. If the task uses the cache multiple times, the task uses the same data. If the cache is dynamic, the task updates the cache based on the actions in the task, so if the task uses the lookup multiple times, downstream transformations can use the updated data.
Based on the results of the lookup query, the row type, and the Lookup transformation properties, the mapping task performs one of the following actions on the dynamic lookup cache when it reads a row from the source:
Inserts the row into the cache
The mapping task inserts the row when the row is not in the cache. The mapping task flags the row as insert.
Updates the row in the cache
The mapping task updates the row when the row exists in the cache. The mapping task updates the row in the cache based on the input fields. The mapping task flags the row as an update row.
Makes no change to the cache
The mapping task makes no change when the row is in the cache and nothing changes. The mapping task flags the row as unchanged.
For information about dynamic lookup cache, see Transformations in the Data Integration help.

Rules and guidelines for dynamic lookup cache

Consider the following rules and guidelines when you configure a dynamic lookup cache in Snowflake mappings: