The Salesforce Standard API processes up to 200 records per API call, whereas the SFDC Bulk API processes about 10,000 records per call.
If the SFDC object contains large data sets, it is advisable to use the SFDC Bulk API. A task on Informatica Cloud Services (ICS) loads large data sets much faster with the Bulk API than the same job run with the Standard API.
Using the Standard API may cause an organization to consume more of its allotted API requests per day than necessary when processing large volumes of data.
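To put rough numbers on that, here is a quick sketch of the call-count arithmetic, using the per-call figures mentioned above (actual limits depend on your org, edition, and task settings):

```python
# Rough sketch: compare how many API calls each mode needs for one load.
# The per-call record limits (200 standard, ~10,000 bulk) are the figures
# quoted above, not authoritative limits for every org.
import math

def calls_needed(record_count: int, records_per_call: int) -> int:
    """Number of API calls required to push record_count records."""
    return math.ceil(record_count / records_per_call)

records = 1_000_000
standard_calls = calls_needed(records, 200)      # Standard API batches
bulk_calls = calls_needed(records, 10_000)       # Bulk API batches

print(standard_calls)  # 5000 calls against the daily API allotment
print(bulk_calls)      # 100 calls for the same data
```

Fifty times fewer calls for the same million records is why the daily API-request allotment matters here.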
Refer to the following link for more information on the Bulk API:
In short, the Bulk API is useful for loading high volumes; that is where you see the difference in performance.
To reduce the number of Salesforce API calls used by tasks, follow the KB article below:
We have noticed that even when using the Bulk API, it takes a while for the records to update. I think there is a queue on the Salesforce side. I am not sure if this can be seen directly, but we have queried the data looking for the last update date, and the number of updated records slowly climbs over time. There is plenty I do not know about this, but that is what I have observed.
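If your tooling exposes the bulk job status, one way to avoid repeatedly querying last-update dates is to poll the job until it reaches a terminal state. A minimal sketch, where `check_job_state` is a hypothetical callable wrapping whatever status lookup you have (the state names follow the Salesforce Bulk API job states):

```python
import time

def wait_for_bulk_job(check_job_state, poll_interval=10.0, timeout=1800.0):
    """Poll an asynchronous bulk-load job until it reaches a terminal state.

    check_job_state: callable returning a job state string such as
      'Queued', 'InProgress', 'JobComplete', 'Failed', or 'Aborted'
      (state names as used by the Salesforce Bulk API).
    Returns the terminal state, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = check_job_state()
        if state in ('JobComplete', 'Failed', 'Aborted'):
            return state
        time.sleep(poll_interval)  # job still queued or running
    raise TimeoutError('bulk job did not reach a terminal state in time')
```

This is just a polling loop sketch; how you fetch the job state depends on whether you go through the Bulk API endpoints directly or through your integration tool's monitoring.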
This also makes the case that when there are child/parent records where one depends on the other, it may make sense to load one set of data first via the Standard API, so that you know it is there before loading the other.
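In outline, that parent-first pattern might look like this (`load_parents_sync` and `bulk_load_children` are hypothetical stand-ins for your actual loaders):

```python
def load_dependent_records(parents, children, load_parents_sync, bulk_load_children):
    """Load parents synchronously first, then bulk-load dependent children.

    load_parents_sync: hypothetical synchronous (Standard API) loader that
      returns a mapping of external key -> new Salesforce parent Id; when it
      returns, the parents are definitely committed.
    bulk_load_children: hypothetical bulk loader; the children are queued
      asynchronously, so completion must be checked separately.
    """
    parent_ids = load_parents_sync(parents)  # done really means done here
    # Resolve each child's parent reference before queueing the bulk load.
    resolved = [
        {**child, 'ParentId': parent_ids[child['parent_key']]}
        for child in children
    ]
    return bulk_load_children(resolved)
```

The point of the sketch is only the ordering: the synchronous parent load guarantees the Ids exist before the asynchronous child load is queued.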
That makes sense. We ran the import several different times and checked the records. Eventually, everything really did update. I assumed it was because the last run of the import did it, but it sounds like it would have gone ahead and done it anyway. I was going on the understanding that when it said it updated all the records, it had finished, but maybe it had just finished adding them to the "queue". Thank you for your response.
Yeah, I am not sure whether that is documented or we just noticed it, but it took us several days of testing to confirm some of that. In your example above, if you set the Standard API batch size to 200 (that is the max), it could be faster to use the Standard API, and then you would know that done really means done. 3,000 records does not seem like much, but I am not sure what all you are sending. Connection speed, agent load, and even the time to read from the source could all be factors. How much? I am not sure, but it can all add up. You may be content with Bulk, and that is fine too.
There are times when I create cases and then turn around and update a table in our source database to show that a case was sent. In those cases, I tend to use the Standard API, so that I know the cases have been created and are not sitting in a queue. Also, we have a large upsert of several objects in the morning to capture yesterday's data. We start it at 6:30 AM and it takes about 30 minutes to send it all, but it appears that it does not actually finish until closer to 9:00. We run some of the smaller loads (like cases, and even real-time updates) later in the day so that we are not stepping on ourselves.
We could probably streamline it a bit and only send deltas and other things, but it is what it is for right now.