Timeout exception even after setting timeout property for operation

ado.net c# sqlbulkcopy sql-server timeoutexception

Question

Timeout issue:

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.

I have 17 million records to dump into my application database. These records are the result of a comparison operation between the records of two databases.

I compare the records of the two databases, populate the mismatched records (based on some criteria) into a DataTable, and once that DataTable reaches some limit (1000 or 500 rows, etc.) I send it to SqlBulkCopy for bulk import and then empty the DataTable.

I am doing the whole operation inside a transaction so that if I have inserted X records and any error occurs during my comparison process, I can roll back those X records.

But because of this, I am getting a timeout issue during the bulk copy.

I have tried various batch sizes: 5000, 1000, 500, 300, etc. I get timeout issues with all of these batch sizes.

Once I set the bulk copy timeout to 0, but then I got this error:

The transaction log for my database is full.

With a batch size of 1000 it reaches 2.7 million records and then throws timeout errors.

With 500 it reached about 2.1 million records and then threw an error.

With 300, 200 and 100 it also throws timeout errors.

I have also set the connection timeout in my connection string to 30 minutes.
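For reference, the connection string looks roughly like the sketch below (server and database names are placeholders). Note that Connect Timeout is given in seconds and only governs how long opening the connection may take, not how long the bulk copy itself may run:

// Hypothetical connection string for illustration only.
// "Connect Timeout" is in seconds and applies to opening the connection,
// not to WriteToServer, which is governed by SqlBulkCopy.BulkCopyTimeout.
string connectionString =
    "Data Source=MyServer;Initial Catalog=XYZ;Integrated Security=True;" +
    "Connect Timeout=1800"; // 30 minutes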

Code:

using System;
using System.Data;
using System.Data.SqlClient;

public class SaveRepo : IDisposable
{
    DataTable dataTable;
    SqlConnection connection;
    string connectionString;
    SqlTransaction transaction;
    SqlBulkCopy bulkCopy;
    int testId;

    public SaveRepo(string connectionString, int testId) // testId = 10364
    {
        this.connectionString = connectionString;
        this.testId = testId;
        dataTable = new DataTable();
        connection = new SqlConnection(connectionString);
        connection.Open();

        // Everything is written inside a single transaction so it can be rolled back on error.
        transaction = connection.BeginTransaction();
        bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction);
        bulkCopy.BulkCopyTimeout = 60;
        bulkCopy.EnableStreaming = true;
        bulkCopy.DestinationTableName = "dbo.Sales";
        bulkCopy.BatchSize = 100;
        bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);
        bulkCopy.NotifyAfter = 100;
    }

    void OnSqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
    {
        // Progress notification raised every NotifyAfter rows.
    }

    void Dump()
    {
        try
        {
            bulkCopy.WriteToServer(dataTable);
        }
        catch (Exception) // timeout error
        {
            throw; // rethrow without resetting the stack trace
        }
    }

    void FillDatatable(object[] row)
    {
        // Flush the buffered rows once the DataTable is full, then keep adding.
        if (dataTable.Rows.Count == 100)
        {
            Dump();
            dataTable.Clear();
        }
        dataTable.Rows.Add(row);
    }

    public void End()
    {
        transaction.Commit();
        // dispose the stuffs also
    }

    public void Dispose()
    {
        bulkCopy.Close();
        transaction.Dispose();
        connection.Dispose();
        dataTable.Dispose();
    }
}

Is there any other way or solution that I am missing which could solve this timeout issue?

Update: After setting BulkCopyTimeout to 0 and using a batch size of 1000, I got this error once 3,593,000 records had been bulk copied:

Could not allocate space for object 'dbo.Sales'.'PK_dbo.Sales' in database 'XYZ' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

Update 2: I removed the transaction and now open and close the connection for each batch; if an error occurs while dumping any batch, I remove all the previously saved data using testId. This works until about 3 million records have been dumped, then I get this error:

Could not allocate space for object 'dbo.Sales'.'PK_dbo.Sales' in database 'XYZ' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

Execution goes into the catch section, where I try to remove the old data based on testId, but it takes so long that it then throws this error:

The transaction log for my database is full.

void Dump()
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // Reuse the already-opened connection instead of passing the connection string again.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.Sales";
            bulkCopy.EnableStreaming = true;
            try
            {
                bulkCopy.WriteToServer(dataTable);
            }
            catch (Exception)
            {
                // On failure, remove the rows already saved for this test run.
                SalesRepo.Delete(connectionString, testId);
                throw;
            }
        }
    } // the using block closes the connection; no explicit Close() is needed
}
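SalesRepo.Delete is not shown here. Deleting a few million rows by testId in a single statement is itself one large transaction and can fill the log on its own, which matches the error above. A batched delete keeps each statement in its own small transaction; the sketch below is only an illustration and assumes a Sales.TestId column:

// Sketch of a batched delete: each DELETE TOP (...) is its own short transaction,
// so the log space can be reused (after a checkpoint or log backup, depending on
// the recovery model) instead of growing. The TestId column name is an assumption.
public static void Delete(string connectionString, int testId)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        int deleted;
        do
        {
            using (var command = new SqlCommand(
                "DELETE TOP (5000) FROM dbo.Sales WHERE TestId = @testId", connection))
            {
                command.CommandTimeout = 0; // no command timeout for the cleanup
                command.Parameters.AddWithValue("@testId", testId);
                deleted = command.ExecuteNonQuery();
            }
        } while (deleted > 0);
    }
}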

Popular Answer

So with your 2nd option you have actually been able to "dump" the data into the database using the code, only to find out that you are running out of file space in the database.

This can happen when auto-growth is set to false and/or you have reached the maximum file size available on your disk.

Your first attempt failed because the transaction became too big to maintain with the resources your server had.

First, 2 things:

  1. Back up your transaction log if your database is in the full recovery model, or check the database to make sure you actually have the log space.
  2. Stripe your table over several files. You do this by making one filegroup that contains several files; ideally spread them over several disk arrays/controllers so the writes can be parallelized (see the sketch after this list).
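As a rough illustration of point 2 (database name, filegroup name, file paths and sizes are placeholders, and the DDL can just as well be run from SSMS):

// Sketch only: add a filegroup with files spread over different disks so writes
// can be parallelized. Database name, file paths and sizes are placeholders.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    string ddl = @"
        ALTER DATABASE XYZ ADD FILEGROUP SalesData;
        ALTER DATABASE XYZ ADD FILE
            (NAME = SalesData1, FILENAME = 'D:\Data\XYZ_SalesData1.ndf',
             SIZE = 10GB, FILEGROWTH = 1GB) TO FILEGROUP SalesData;
        ALTER DATABASE XYZ ADD FILE
            (NAME = SalesData2, FILENAME = 'E:\Data\XYZ_SalesData2.ndf',
             SIZE = 10GB, FILEGROWTH = 1GB) TO FILEGROUP SalesData;";
    using (var command = new SqlCommand(ddl, connection))
    {
        command.ExecuteNonQuery();
    }
}

Moving the existing dbo.Sales table onto that filegroup would then mean rebuilding its clustered index (the primary key) on the new filegroup, which is best done before the big load rather than after.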

Then,

  • You will have to re-create the indexes, as indexes become disabled after such an error (a sketch follows this list).
  • The data statistics are going to be really bad after such a large insert.
  • Each index slows down the insert by some factor; if the indexes are in bad shape this will really slow things down, depending on how many index splits you get (whether your data gets inserted in the sort order of the index or against it; against it is like stacking crates of beer from the bottom instead of on the top, if you understand what I mean).
  • If you have the Enterprise edition, use a partition function; this will greatly speed up the operation, as you can really process the data in parallel and reduce locking to the partitioned data.
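A sketch of the index handling around the load (the nonclustered index name is a placeholder; the clustered primary key must stay enabled or the table becomes unreadable):

// Sketch: disable nonclustered indexes before the bulk load and rebuild everything
// afterwards; the rebuild also gives the indexes fresh statistics.
static void DisableNonclusteredIndexes(SqlConnection connection)
{
    // IX_Sales_SomeColumn is a hypothetical nonclustered index name.
    using (var command = new SqlCommand(
        "ALTER INDEX IX_Sales_SomeColumn ON dbo.Sales DISABLE;", connection))
    {
        command.ExecuteNonQuery();
    }
}

static void RebuildIndexes(SqlConnection connection)
{
    // Rebuilds the clustered PK and re-enables/rebuilds the disabled nonclustered indexes.
    using (var command = new SqlCommand(
        "ALTER INDEX ALL ON dbo.Sales REBUILD;", connection))
    {
        command.CommandTimeout = 0; // rebuilding millions of rows can take a long time
        command.ExecuteNonQuery();
    }
}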

Try to back up the log in the middle of your "import", as the backup will persist the data that is transactionally committed and your LDF file will be under less stress.
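The log backup can even be issued from the import code every few hundred batches; this sketch assumes the database is in the FULL recovery model and that the backup path exists:

// Sketch: back up the transaction log during the import so committed work is
// persisted and the log space can be reused instead of the LDF growing.
// Database name and backup path are placeholders.
static void BackupLog(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        @"BACKUP LOG XYZ TO DISK = 'D:\Backups\XYZ_log.trn';", connection))
    {
        connection.Open();
        command.CommandTimeout = 0;
        command.ExecuteNonQuery();
    }
}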

Hope this helps,

Walter




Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow