There are more than 50,000 records in the table. I am using Microsoft SQL Server 2016 (RTM-GDR).
I am using the code below to fetch 200 records at a time. It seems to work fine up to about 1,000 records, but after that each call starts taking noticeably longer.
Code
int skip = 0;  // Starts at 0 and increases by 200 on each call (0, 200, 400, and so on)
int take = 200; // Stays the same, since only 200 records are needed at a time
var recordList = context.Records.OrderBy(x => x.IncentiveOUID)
.Skip(skip)
.Take(take)
.ToList();
Generated SQL Code
SELECT TOP (200)
[Extent1].[IncentiveOUID] AS [IncentiveOUID],
[Extent1].[IncentiveID] AS [IncentiveID],
[Extent1].[OrganizationUnitID] AS [OrganizationUnitID],
[Extent1].[ModifiedDate] AS [ModifiedDate],
[Extent1].[ModifiedBy] AS [ModifiedBy],
[Extent1].[Id] AS [Id]
FROM ( SELECT [Extent1].[IncentiveOUID] AS [IncentiveOUID], [Extent1].[IncentiveID]
AS [IncentiveID], [Extent1].[OrganizationUnitID]
AS [OrganizationUnitID], [Extent1].[ModifiedDate]
AS [ModifiedDate], [Extent1].[ModifiedBy]
AS [ModifiedBy], [Extent1].[Id]
AS [Id], row_number() OVER (ORDER BY [Extent1].[IncentiveOUID] ASC) AS [row_number]
FROM [dbo].[cms_IncentiveOUs] AS [Extent1]
) AS [Extent1]
WHERE [Extent1].[row_number] > 2400
ORDER BY [Extent1].[IncentiveOUID] ASC
I am happy to provide more information if needed to explain any of the above.
SQL has no efficient way to Skip: it must read and discard all preceding rows. You're performing a scan on the index every time you fetch a page of records, and each successive page scans more and more of the index, which is why later pages get slower.
Instead of paginating by skipping a row count, consider keyset pagination: fetch the next page starting after the last key you've already seen. Something like:
IEnumerable<Record> GetNext(int fromId = 0, int take = 50) => context.Records
    .Where(x => x.Id > fromId)  // seek past the last row already returned
    .OrderBy(x => x.Id)
    .Take(take)
    .ToList();
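A hypothetical calling loop might look like the sketch below. The Record type and context are assumed from your question; the only state you carry between calls is the last Id seen, not a running offset:

```csharp
int lastId = 0;
while (true)
{
    // Fetch the next page of up to 200 rows after the last Id we saw
    var page = GetNext(lastId, take: 200).ToList();
    if (page.Count == 0)
        break;              // no more rows

    // ... process the page here ...

    // The highest Id in this page becomes the seek key for the next call
    lastId = page[page.Count - 1].Id;
}
```

Note that this only works because Id is unique and the pages are ordered by it; with a non-unique sort key you would need a compound key to avoid skipping or repeating rows at page boundaries.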
With an index on Id, this translates into an efficient index seek, and the cost stays constant no matter how deep into the table you page.
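For comparison with the generated SQL in your question, the keyset query should translate to roughly the following T-SQL (table and column names taken from your question; the exact shape EF emits may differ slightly):

```sql
-- Seeks directly to rows after the previous page's last key;
-- no ROW_NUMBER() and no counting of discarded rows.
SELECT TOP (50)
       [IncentiveOUID], [IncentiveID], [OrganizationUnitID],
       [ModifiedDate], [ModifiedBy], [Id]
FROM   [dbo].[cms_IncentiveOUs]
WHERE  [Id] > @fromId        -- last Id from the previous page
ORDER BY [Id] ASC;
```

Unlike the ROW_NUMBER() query, whose WHERE [row_number] > 2400 forces the engine to enumerate 2,400 rows before returning anything, the seek predicate lets SQL Server jump straight to the starting point in the index.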