I am using the SqlBulkCopy class to do a bulk insert into a SQL Server database.
The original size of the .mdf file associated with the database is 1508 MB.
When I run it (on the same data of about 4 million records) with:
a BatchSize of 100000, the .mdf grows to 1661 MB;
a BatchSize of 1000000, the .mdf grows to 1659 MB.
Why this variation? Such a small difference would be negligible, except that when my tester runs it (on the same data) with a batch size of 100, the .mdf file grows uncontrollably until it uses up all 20 GB available to it, and then it errors out due to lack of space.
Is this because SqlBulkCopy has some fixed size blocks that it allocates?
It's working fine with BatchSizes > 100000, but I want to understand the root cause of this strange behaviour/bug.
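For reference, the way I set the BatchSize is roughly the following (the connection string, destination table name, and the DataTable source are placeholders, not my actual code):

```csharp
using System.Data;
using System.Data.SqlClient;

static void BulkLoad(DataTable source, string connectionString, int batchSize)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.TargetTable"; // placeholder
            bulkCopy.BatchSize = batchSize;   // rows sent to the server per batch
            bulkCopy.BulkCopyTimeout = 0;     // disable the timeout for large loads
            bulkCopy.WriteToServer(source);
        }
    }
}
```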
Your .mdf file is where your data is stored; it should grow roughly in line with your data load. But as "ck" already pointed out, SQL Server lets you specify a growth pattern, i.e. the .mdf file doesn't grow byte by byte every time you insert a row. So depending on your setting, it takes "jumps" in size whenever it needs more space.
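You can inspect (and change) that growth setting yourself. A sketch in T-SQL, assuming your database is called MyDb and its data file's logical name is MyDb_data (both are placeholders, check yours with the SELECT first):

```sql
-- Inspect the current size and autogrowth settings of each file.
-- growth is in 8 KB pages unless is_percent_growth = 1.
SELECT name, size * 8 / 1024 AS size_mb,
       growth, is_percent_growth
FROM MyDb.sys.database_files;

-- Example: switch the data file to a fixed 256 MB growth increment.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_data, FILEGROWTH = 256MB);
```

A small percentage-based or tiny fixed increment combined with very frequent allocations is a common cause of runaway file growth and fragmentation.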
This has nothing to do with whether you load your data with regular INSERT statements or with SqlBulkCopy.