I'm new to SQL Server and am facing the following problem:
I have two identically structured tables. The first one is filled
every 15 minutes and has between 600,000 and 1 million rows.
Once the data has been cleaned up,
I want to transfer every record to the second table, which
contains roughly 38 million rows at this time.
This procedure has to be repeated every 15 to 20 minutes.
The issue is that the data transfer from one table to the other
sometimes takes much longer than 20 minutes.
Initially, when the tables were small, copying took anywhere between 2 and 10 minutes.
Now it just takes too long.
Can anybody help with this? I currently do the transfer with a single SQL query.
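For context, a minimal sketch of the kind of transfer involved, assuming the tables are named [unprocessed logs] and [processed logs] (those names appear elsewhere in this thread) and keyed on a UnixTime column; the exact columns are placeholders:

```sql
-- Move all cleaned-up rows from the staging table into the big table.
-- Table and column names are assumptions for illustration.
INSERT INTO [processed logs] (UnixTime, Source, Message)
SELECT UnixTime, Source, Message
FROM [unprocessed logs];

-- Empty the staging table for the next 15-minute batch.
TRUNCATE TABLE [unprocessed logs];
```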
To begin with, one thing I've learned over the years is that MSSQL is excellent at optimizing all kinds of operations, but it depends heavily on up-to-date statistics for the tables involved. Therefore, before doing the actual inserts, I would recommend executing "UPDATE STATISTICS processed logs" and "UPDATE STATISTICS unprocessed logs"; even on a huge database, these operations don't take very long.

A lot also depends on the target table's indexes, judging by the query. I'm presuming the target table has its clustered index (or PRIMARY KEY) on (at least) UnixTime, because otherwise you get significant data fragmentation: more and more new rows have to be fitted in between the records that are already there. You could work around this by defragmenting the target table occasionally (it can be done online, but it takes a while), but in my view, designing the clustered index (or PK) so that data is always added to the end of the table is the preferable strategy.
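In T-SQL, those suggestions look roughly like this (the index name is illustrative, and the online rebuild option requires an edition that supports it):

```sql
-- Refresh optimizer statistics on both tables before the big insert;
-- this is usually quick even on large tables.
UPDATE STATISTICS [unprocessed logs];
UPDATE STATISTICS [processed logs];

-- Cluster the target table on an ever-increasing key so new rows
-- always land at the end of the table instead of fragmenting it.
CREATE CLUSTERED INDEX CIX_processed_logs_UnixTime
    ON [processed logs] (UnixTime);

-- If the table still fragments over time, rebuild the index
-- occasionally; ONLINE = ON keeps the table available meanwhile.
ALTER INDEX CIX_processed_logs_UnixTime
    ON [processed logs] REBUILD WITH (ONLINE = ON);
```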