Yes, a VDS data flow with a file source would read only the unread events. The node does this by maintaining the file source's last read position in a hidden file created under the same directory as the source file. Every time the data flow is (re-)started, it continues from where it left off. To force the data flow to read from the beginning, delete its corresponding position file under the .VDSPos directory (which is under the source file directory).
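As a rough sketch of what that reset looks like, here is a small Python example. The directory layout and the position-file name (`input.log.pos`) are hypothetical stand-ins; only the `.VDSPos` directory name comes from the product behavior described above.

```python
import os
import tempfile

# Simulate a source directory containing a hidden .VDSPos position directory.
# The source directory and position-file name here are hypothetical.
source_dir = tempfile.mkdtemp()
pos_dir = os.path.join(source_dir, ".VDSPos")
os.makedirs(pos_dir)
open(os.path.join(pos_dir, "input.log.pos"), "w").close()  # stand-in position file

# Deleting the position file(s) forces the data flow to re-read from the start.
for name in os.listdir(pos_dir):
    os.remove(os.path.join(pos_dir, name))

print(os.listdir(pos_dir))  # []
```

In practice you would just remove the position file by hand (or in a deploy script) while the data flow is stopped, then restart it.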
To match multiple files in a directory, you can specify the file names using a regular expression.
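To illustrate the idea, this sketch matches rotated log files with a regular expression. The file names and the pattern are made up for the example; the exact regex syntax your VDS version accepts may differ.

```python
import os
import re
import tempfile

# Create a directory with some example files (names are hypothetical).
src = tempfile.mkdtemp()
for name in ("app.log.1", "app.log.2", "other.txt"):
    open(os.path.join(src, name), "w").close()

# Match rotated logs like app.log.1, app.log.2 but not other.txt.
pattern = re.compile(r"app\.log\.\d+")
matched = sorted(n for n in os.listdir(src) if pattern.fullmatch(n))
print(matched)  # ['app.log.1', 'app.log.2']
```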
As for performance, it depends on where the files are and which nodes run the data flow. Based on your question, I assume all the files are on the same host. Each node is a separate process, so it's better not to overload a single node process by having it read multiple files. Instead, you could spread the files across multiple nodes. Also keep in mind that each file read involves disk I/O, which is an expensive operation.
I would recommend thorough testing before production deployment.