Start with a file of a million records. Say each record needs to be tracked with its own try/catch. That might force me to split the file by profile, resulting in a MILLION documents flowing between shapes.
As I understand it, these documents are written to temporary space on the file system, then handed to the shapes as file streams scoped by a for loop.
Is there any optimization when writing to disk? Boomi could recognize that these are very small documents and save hundreds of them in a single file between shapes, then build multiple in-memory streams per aggregated file. Shapes would still see one in-memory stream per document, while on the file system multiple documents are combined, saving file-name space and/or reducing the number of disk writes.
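To make the idea concrete, here is a minimal sketch of what such an aggregation could look like. This is purely hypothetical and not Boomi's actual internals: many small documents are appended to ONE temp file with their offsets recorded in an index, and each document can later be handed out as its own independent in-memory stream. The class and method names (`AggregatedDocStore`, `writeAll`, `openDoc`) are invented for illustration.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Hypothetical sketch (NOT actual Boomi internals): many small documents
// share one temp file; per-document offsets let each one be re-opened
// as an independent stream, as a shape would consume it.
public class AggregatedDocStore {
    private final Path file;
    private final List<long[]> index = new ArrayList<>(); // {offset, length}

    public AggregatedDocStore(Path file) { this.file = file; }

    // Append all documents in one pass: one file name, few disk writes.
    public void writeAll(List<byte[]> docs) throws IOException {
        try (OutputStream out = new BufferedOutputStream(Files.newOutputStream(file))) {
            long offset = 0;
            for (byte[] doc : docs) {
                index.add(new long[]{offset, doc.length});
                out.write(doc);
                offset += doc.length;
            }
        }
    }

    // Hand back document i as its own in-memory stream.
    public InputStream openDoc(int i) throws IOException {
        long[] entry = index.get(i);
        byte[] buf = new byte[(int) entry[1]];
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(entry[0]);
            raf.readFully(buf);
        }
        return new ByteArrayInputStream(buf);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("agg", ".bin");
        AggregatedDocStore store = new AggregatedDocStore(tmp);
        store.writeAll(Arrays.asList(
                "rec1".getBytes(), "record-2".getBytes(), "r3".getBytes()));
        // Each document is recoverable individually despite sharing one file.
        for (int i = 0; i < 3; i++) {
            try (InputStream in = store.openDoc(i)) {
                System.out.println(new String(in.readAllBytes()));
            }
        }
        Files.delete(tmp);
    }
}
```

The point of the sketch is just the trade-off: three documents cost one file and one sequential write instead of three separate temp files, at the price of keeping a small offset index.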
Any thoughts on whether an optimization like this is built in, so that the performance penalty of splitting a file into documents by profile for error-handling purposes is reduced?