I have a table in SQL Server 2008 that will hold millions of rows. The initial design is:
- Code nvarchar(50) PK
- Available bit
- ValidUntil datetime
- ImportID int
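For reference, that design could be sketched as follows (column names as given above; the table and constraint names are illustrative):

```sql
CREATE TABLE dbo.Codes
(
    Code       nvarchar(50) NOT NULL CONSTRAINT PK_Codes PRIMARY KEY,
    Available  bit          NOT NULL,
    ValidUntil datetime     NOT NULL,
    ImportID   int          NOT NULL
);
```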
Users can import 100,000-odd codes at a time, which I will insert using SqlBulkCopy. They can also request batches of up to 10,000 codes for a specific ImportID, as long as the request date is earlier than the ValidUntil date and the code is available.
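The batch request described above might look something like this (a sketch only; `@ImportID` and `@BatchSize` are illustrative parameters, and table/column names follow the question):

```sql
DECLARE @ImportID int = 42, @BatchSize int = 10000;

-- Fetch a batch of codes for one import that are still
-- available and not yet expired.
SELECT TOP (@BatchSize) Code
FROM dbo.Codes
WHERE ImportID   = @ImportID
  AND Available  = 1
  AND ValidUntil > GETDATE();
```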
My question is: would it be better to hold all these codes in one table and use indexes, or to split it into two tables - AvailableCodes and UsedCodes - so that whenever codes are requested they are moved from AvailableCodes into UsedCodes instead of toggling an Available flag? That way the AvailableCodes table would not grow as large, since over time there will be more used codes than available ones, and I am not much bothered about the used codes except for auditing purposes.
Also, if the tables are split, will I still be able to use SqlBulkCopy, given that the codes will still need to be unique across both tables?
I would keep it in one table and create well-defined indexes.
Consider a filtered index for the flag column. This is created with a WHERE clause in T-SQL, or via the Filter page in SSMS.
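For example, a filtered index along these lines could cover the batch lookup while indexing only the rows that are still available (index name and key columns are illustrative; tune them to your actual query):

```sql
-- Only rows with Available = 1 are stored in the index, so it stays
-- small even as used codes accumulate in the table.
CREATE NONCLUSTERED INDEX IX_Codes_Available
ON dbo.Codes (ImportID, ValidUntil)
INCLUDE (Code)
WHERE Available = 1;
```

Because most rows will eventually be used, the filtered index stays far smaller than a full index on the same columns, which addresses the size concern without splitting the table.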