I'm not sure your proposed way of working will ever work; perhaps only if you minimize the commit interval, but that will impact performance negatively, which you also don't want with that number of records.
You are working with two tables, so two datasets, and you basically expect PowerCenter and the RDBMS to keep the record counts synchronized, even though inserting the full records consumes much more memory and CPU time per row than inserting the PKs.
In this case I would choose to use a view on the full table that contains only the PK columns of the records.
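A view like that could look as follows; this is only a sketch, and the table name FULL_TABLE, the key column ID, and the view name FULL_TABLE_PK_V are placeholders, not names from the original question:

```sql
-- Placeholder names: substitute your actual table and PK column(s).
CREATE VIEW FULL_TABLE_PK_V AS
SELECT ID
FROM   FULL_TABLE;
```

Reading the PKs through such a view avoids maintaining a second physical table whose row count has to stay in sync with the full table.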
It is possible under certain assumptions. The key term to look up in the PowerCenter manuals is User-Defined Commit, the respective chapter in the Advanced Workflow guide is named Commit Points.
The most important point is that you insert a Transaction Control transformation which feeds both target tables. Make sure you issue a TC_COMMIT_AFTER after every N records (e.g. 5,000). If you rely on the Target Based Commit size in such a case, you cannot guarantee that the same number of records is written to both target tables within each block of data, hence this can't work for you. You need to set the session to User-Defined Commit (hence the Transaction Control transformation).
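A typical way to do this is to count rows in an upstream Expression transformation and test the counter in the Transaction Control condition. This is only a sketch; the port names v_count and o_count are assumptions, while TC_COMMIT_AFTER and TC_CONTINUE_TRANSACTION are the built-in Transaction Control constants:

```
-- Expression transformation (variable/output ports):
v_count = v_count + 1
o_count = v_count

-- Transaction Control transformation, Transaction Control Condition:
IIF(MOD(o_count, 5000) = 0, TC_COMMIT_AFTER, TC_CONTINUE_TRANSACTION)
```

Because the single Transaction Control transformation feeds both targets, the commit point falls on the same source row for both tables.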
There are a few more technical requirements you have to watch out for. Please check the paragraph named Understanding Transaction Control Units. It's not rocket science, but you have to follow the guidelines carefully, otherwise the commit control on both target tables won't work reliably (and that's surely not what you need).
Based on your reply and some additional googling, I found a solution. Instead of Target Based Commit, I have used Source Based Commit and it works.
Since there is no transformation/filter in between, in this scenario Source Based Commit works as expected.
Not a safe approach. The point is that the buffers for the two target tables probably differ in size, and that means that for example the buffer for the PK table may hold 13,000 records while the buffer for the full table may be able to hold 135 records. In such a case it can easily happen that in the PK table no records are written at all while e.g. 2,700 records are written to the full table.
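The effect can be illustrated with a small simulation. The buffer capacities (13,000 and 135 rows) are the illustrative figures from above, not values PowerCenter guarantees; the point is only that two independently flushed buffers of different sizes can hold very different row counts when a commit fires:

```python
# Simulate two target writer buffers that flush to their tables
# independently, each only when it is completely full.

def simulate(total_rows, pk_buffer_size, full_buffer_size):
    """Feed rows to both buffers; return how many rows each target
    table has actually received when `total_rows` have been read."""
    pk_buffered = pk_written = 0
    full_buffered = full_written = 0
    for _ in range(total_rows):
        pk_buffered += 1
        if pk_buffered == pk_buffer_size:       # PK buffer full -> flush
            pk_written += pk_buffered
            pk_buffered = 0
        full_buffered += 1
        if full_buffered == full_buffer_size:   # wide-record buffer full -> flush
            full_written += full_buffered
            full_buffered = 0
    return pk_written, full_written

# Suppose a commit fires after 2,700 source rows: the wide-record buffer
# has flushed 20 blocks of 135 rows, the PK buffer has flushed nothing.
pk, full = simulate(2700, 13000, 135)
print(pk, full)   # 0 2700
```

So at that commit point the full table holds 2,700 committed rows while the PK table holds none, exactly the mismatch described above.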
Until now you've been plain lucky that it worked fine for you.