With 10.2.2, Spark supports a (simple) JDBC connection only with the Update strategy; otherwise this requires Sqoop JDBC.
When a dedicated connector exists for a source/target (here, Redshift), that connector is the supported method of access, and the generic JDBC connection is not certified.
Thanks for the inputs. I used the "RedshiftJDBC4-184.108.40.2067.jar" and created a JDBC connection with Sqoop. That worked as well, but below are the issues I'm facing:
1. When the target Redshift table has column names enclosed in double quotes (e.g. "primary"), execution fails with an "Invalid relation" error.
2. While loading a date column into Redshift, I get a date value error.
Any inputs on the above errors?
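For what it's worth, the first symptom is consistent with unquoted SQL hitting a reserved word: PRIMARY is a Redshift keyword, so a column literally named "primary" must be double-quoted in any generated statement. The sketch below is only a hypothetical illustration of that quoting rule (the helper names are mine, not part of Informatica, Sqoop, or the Redshift driver):

```python
# Assumption: the failure comes from generated SQL that leaves reserved-word
# identifiers unquoted. Redshift (like PostgreSQL) requires double quotes
# around such identifiers, with embedded quotes doubled.

RESERVED = {"primary", "order", "user", "table"}  # small sample, not exhaustive

def quote_ident(name: str) -> str:
    """Double-quote an identifier, escaping embedded double quotes."""
    return '"' + name.replace('"', '""') + '"'

def build_insert(table: str, columns: list[str]) -> str:
    """Build an INSERT statement, quoting every identifier defensively."""
    cols = ", ".join(quote_ident(c) for c in columns)
    placeholders = ", ".join(["?"] * len(columns))
    return f"INSERT INTO {quote_ident(table)} ({cols}) VALUES ({placeholders})"

print(build_insert("events", ["id", "primary", "event_date"]))
# INSERT INTO "events" ("id", "primary", "event_date") VALUES (?, ?, ?)
```

If the tool builds the statement itself, you cannot change this from the mapping side, which again points back to the connector not being certified for this path.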
As explained, this is not a supported connector for this source/target, so issues like these are expected.
If you manage to implement this sort of Sqoop job via the Sqoop command line but cannot get the same job to run through Informatica, please check with support and raise a ticket with the details, and we'll endeavour to assist.