If your question is about using a Hive table as both source and target, then yes: import the Hive table as a PDO (physical data object), use it as the source, apply your transformations, and connect to the Hive target table.
Make sure to create a proper Hive connection. When running the mapping in pushdown mode, kindly select Spark or Blaze as the run-time engine, specify the Hadoop connection, and run the mapping.
Thanks for the reply. My doubt is which approach is better: applying the transformation logic in SQL while reading the Hive source table, or applying the transformations available in Informatica BDM within the mapping?
For Hadoop pushdown mappings, we do not extract the data from Hive and move it to the DIS server machine for processing; the entire processing happens on the Hadoop cluster itself. We also optimize the code before pushing the logic to the cluster. Hence it is always recommended to use transformations in the mapping.
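To make the pushdown point concrete, here is a small analogy in Python (a sketch only, not Informatica or Hive code): SQLite stands in for the engine where the data lives, and the two queries contrast "process in place, return a small result" with "pull every row out, then process on the client", which is the extraction that pushdown avoids.

```python
import sqlite3

# Analogy only: SQLite plays the role of the Hive/Hadoop engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100), ("west", 250), ("east", 75)])

# Pushdown style: the engine filters and aggregates in place,
# so only one small result row leaves the engine.
pushed = conn.execute(
    "SELECT region, SUM(amount) FROM sales "
    "WHERE region = 'east' GROUP BY region"
).fetchone()

# Extract style: every row crosses the wire first,
# then the client does the filtering and aggregation.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
pulled = sum(amount for region, amount in rows if region == "east")

print(pushed)   # ('east', 175)
print(pulled)   # 175
```

Both give the same answer; the difference is where the work happens and how much data moves, which is why keeping the logic in mapping transformations that get pushed to the cluster scales better than extracting first.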