I would parameterize the filter condition instead. In dev you can create a parameter set and fully parameterize the condition so it reads something like "COLUMN1 = whatever". The same parameter set in prod would be "1=1". This way the column is only referenced in dev, and when you move the mapping to prod the condition is 1=1, which pulls everything and never touches the column.
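As a rough sketch of that idea (the parameter name, column name, and rule value here are all made up, and the exact parameter syntax depends on your Informatica version):

```
Filter condition in the mapping:   $$FILTER_COND

Dev parameter set:    $$FILTER_COND = TEST_RULE_ID = 'BR01'
Prod parameter set:   $$FILTER_COND = 1 = 1
```

Only the parameter set changes between environments; the mapping itself is untouched.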
Thanks for the response, although I'm not sure that would work for our scenario. Our mappings contain as many as 20-30 business rules. We have a test dataset that we use to validate each business-rule mapplet for a given source, and the filter column essentially links a test record to a business rule, letting us filter only the relevant data for each mapplet when we are testing. The filtering logic is implemented in each mapplet. I believe what you are proposing would require us not only to rework the logic of all our mapplets but also to run test mappings individually for each of our test cases/business rules. I fear we are too far down the road to make that change, and running so many test cases individually would be too cumbersome.
Our alternative is to create a view for each table in our prod environment (source DB) which adds the additional column we have in the test data. But I was hoping that, between parameterization and dynamic ports, there would be a way to simply check whether the additional column exists and, if not, pass a default NULL to an output port, which would trigger the mappings/mapplets to process all data records.
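For what it's worth, the view alternative is straightforward to sketch in plain SQL (table and column names here are invented): the view exposes the prod table with the test-only filter column supplied as a constant NULL, so dev and prod present the same columns to the mapping. Using SQLite just to demonstrate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical prod table: it does NOT have the TEST_RULE_ID filter column.
cur.execute("CREATE TABLE CUSTOMERS (ID INTEGER PRIMARY KEY, NAME TEXT)")
cur.executemany("INSERT INTO CUSTOMERS VALUES (?, ?)", [(1, "Acme"), (2, "Beta")])

# The view adds the missing column as a constant NULL, matching the dev schema.
cur.execute("""
    CREATE VIEW CUSTOMERS_V AS
    SELECT ID, NAME, NULL AS TEST_RULE_ID FROM CUSTOMERS
""")

rows = cur.execute("SELECT ID, NAME, TEST_RULE_ID FROM CUSTOMERS_V").fetchall()
print(rows)  # → [(1, 'Acme', None), (2, 'Beta', None)]
```

Every record comes through with NULL in the extra column, which is exactly the "process everything" signal the mapplets would key off.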
I found a work-around.
The Read object is configured using the dev table with the additional filter column, with the option to get data columns from the data source at run time selected. Then I used a Router transformation, passing in the filter column along with the primary key, with a single output group whose condition is set to TRUE. If the filter column does not exist in the source, the Router's output for that column is NULL. I then join the output from the Router back to the source data on the primary key.
I'm not sure why this works with the router transformation, or if that is expected behavior, but it gets the job done.
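For anyone trying to follow the data flow, here is a minimal sketch of what the workaround appears to do, written as plain Python (function and column names are invented; this is an illustration of the effect, not Informatica behavior): rows are read with whatever columns the source actually has, the "router" passes primary key plus filter column through a single always-TRUE group (a missing column comes through as None/NULL), and the result is joined back to the full rows on the primary key.

```python
def route_filter(rows, pk, filter_col):
    # Single output group with condition TRUE: every row passes.
    # dict.get returns None when the filter column is absent, mimicking
    # the NULL the Router emits when the source lacks the column.
    return [(r[pk], r.get(filter_col)) for r in rows]

def join_back(rows, routed, pk):
    # Join the (pk, filter value) pairs back to the full source rows.
    by_pk = dict(routed)
    return [dict(r, FILTER=by_pk[r[pk]]) for r in rows]

dev  = [{"ID": 1, "VAL": "a", "RULE": "BR01"}]   # dev source has the column
prod = [{"ID": 1, "VAL": "a"}]                   # prod source does not

print(join_back(dev,  route_filter(dev,  "ID", "RULE"), "ID"))
# → [{'ID': 1, 'VAL': 'a', 'RULE': 'BR01', 'FILTER': 'BR01'}]
print(join_back(prod, route_filter(prod, "ID", "RULE"), "ID"))
# → [{'ID': 1, 'VAL': 'a', 'FILTER': None}]
```

In dev each record carries its business-rule tag; in prod the same flow yields NULL for every record, which downstream logic can treat as "process all data".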