We are playing around with some profiling options and trying to understand the auto random N rows setting. Per the documentation, it determines the number of rows to profile based on the size of the source file. Is this a set percentage, or does the percentage of rows profiled change based on the size of the file?
What we are finding is that profiling a random N rows is significantly slower than profiling an auto random N rows on the same file. We did random 10,000 rows, and when we did the auto random it profiled 23,000 rows of 401,000 but ran about 7 times faster. It was even faster than when we selected the first 10,000 rows for profiling.
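We don't know this tool's internals, but one possible explanation for the gap is the sampling strategy itself: drawing an *exact* N random rows typically requires scanning and bookkeeping over every row (e.g. reservoir sampling), whereas a percentage-based "auto" sample can be a cheap independent coin flip per row. A minimal Python sketch of the two approaches, purely hypothetical with respect to the actual product:

```python
import random

def reservoir_sample(rows, n):
    """Exact-size random sample (Algorithm R): must visit every row
    and maintain/replace entries in the reservoir as it goes."""
    sample = []
    for i, row in enumerate(rows):
        if i < n:
            sample.append(row)
        else:
            j = random.randint(0, i)
            if j < n:
                sample[j] = row
    return sample

def bernoulli_sample(rows, fraction):
    """Percentage-based sample: one independent coin flip per row,
    no reservoir bookkeeping; the sample size is only approximate."""
    return [row for row in rows if random.random() < fraction]

rows = range(401_000)
fixed = reservoir_sample(rows, 10_000)          # exactly 10,000 rows
auto = bernoulli_sample(rows, 23_000 / 401_000) # roughly 23,000 rows
print(len(fixed), len(auto))
```

If the product does something along these lines, the "auto" mode could profile more rows yet run faster, because the per-row work is lighter and more amenable to parallel or block-wise reads.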