I am using EF Core to insert entries, and I noticed that when I debug the line
context.MyEntityDbSet.AddRangeAsync(records) it takes about a second to complete, whereas
context.MyEntityDbSet.AddRange(records) returns instantly. Is there a DB call happening when the
AddRangeAsync method is called? Is it any different from the synchronous AddRange method?
According to the official EF Core docs, AddRangeAsync(IEnumerable&lt;TEntity&gt;, CancellationToken) is meant to be used only with special value generators that require a database round trip. For example, if you use SqlServerValueGenerationStrategy.SequenceHiLo to allocate blocks of IDs in advance, then when a new entity starts being tracked, EF may first need to query the database for a new "hi" value (more about the Hi/Lo algorithm can be found here: What's the Hi/Lo algorithm?). So when the idea is just to put the entities into the Added state and SqlServerValueGenerationStrategy.SequenceHiLo is not involved, AddRange is the method to use.
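For context, Hi/Lo generation is something you opt into per key property. A minimal sketch of that configuration, assuming a hypothetical MyEntity class and a made-up sequence name (neither is from the question):

```csharp
using Microsoft.EntityFrameworkCore;

public class MyEntity
{
    public int Id { get; set; }
}

public class MyContext : DbContext
{
    public DbSet<MyEntity> MyEntityDbSet { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // UseHiLo tells the SQL Server provider to allocate key blocks
        // from a database sequence; fetching a fresh block is the round
        // trip that makes AddRangeAsync genuinely asynchronous.
        modelBuilder.Entity<MyEntity>()
            .Property(e => e.Id)
            .UseHiLo("MyEntityHiLoSequence"); // sequence name is illustrative
    }
}
```

Without a configuration like this, tracking a new entity never touches the database, which is why the synchronous AddRange is the recommended call.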
Most likely yes. From the docs:
This method is async only to allow special value generators, such as the one used by 'Microsoft.EntityFrameworkCore.Metadata.SqlServerValueGenerationStrategy.SequenceHiLo', to access the database asynchronously. For all other cases the non async method should be used.
This means you shouldn't use AddRangeAsync unless you use one of those value generators that need access to the database before they generate a value.
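In practice the call pattern looks like this. A sketch only: context and records are the placeholders from the question, and nothing here runs without a configured DbContext:

```csharp
// No database-backed value generator configured (the common case):
// the synchronous method is correct and completes instantly.
context.MyEntityDbSet.AddRange(records);

// Only when a generator such as SequenceHiLo may need a database
// round trip to reserve the next key block is the async variant useful:
await context.MyEntityDbSet.AddRangeAsync(records);

// Either way, the rows are not written until SaveChanges runs.
await context.SaveChangesAsync();
```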
Using IDENTITY or a sequence to provide the key value doesn't require an explicit database access; the key values are generated when the rows are inserted into the tables.
This is a safe strategy for generating keys on the client side. The server generates a "high" value for each client, which is why a database access is required. The client then increments a "low" value and combines it with the server's high value to generate unique keys, so two clients can never create the same value. It also allows the client to know the key value before the data is actually inserted into the database.
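The scheme above can be sketched in a few lines. This is an in-memory simulation, not EF Core's implementation: the static counter stands in for the database sequence, and the block size is arbitrary:

```csharp
// In-memory simulation of Hi/Lo key generation. In a real system the
// "server" side would be a database sequence guarded by a transaction.
class HiLoGenerator
{
    private static int _serverSequence = 0; // stands in for the DB sequence
    private readonly int _blockSize;
    private int _hi = -1; // current block, fetched from the "server"
    private int _lo;      // position inside the current block

    public HiLoGenerator(int blockSize) => _blockSize = blockSize;

    public int NextKey()
    {
        if (_hi < 0 || _lo >= _blockSize)
        {
            // The database round trip: reserve a fresh block of keys.
            _hi = _serverSequence++;
            _lo = 0;
        }
        // Unique because each client owns its reserved block exclusively.
        return _hi * _blockSize + _lo++;
    }
}
```

Two generators (two "clients") receive different hi blocks, so their keys can never collide, and each client knows its keys before any row is inserted.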
Unsafe strategies - MAX + 1
An unsafe strategy that almost guarantees duplicates is to read the maximum existing key value and start incrementing from it. Apart from the obvious cost of calculating MAX, multiple clients can easily read the same MAX value and start creating duplicate keys.
Even worse, deleting the latest rows will produce new keys with the same values as the already deleted rows, so any other tables that used the old IDs as references will end up pointing to the wrong rows.
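The race is easy to see in a toy model: both clients read MAX before either inserts, so both compute the same "next" key. The List stands in for a table here:

```csharp
using System.Collections.Generic;
using System.Linq;

// Toy model of the MAX + 1 race. The list plays the role of a table
// whose values are the key column.
var table = new List<int> { 1, 2, 3 };

int clientA = table.Max() + 1; // reads 3, computes 4
int clientB = table.Max() + 1; // also reads 3, also computes 4

table.Add(clientA);
table.Add(clientB); // duplicate key: 4 now appears twice
```

The deletion hazard is the same mechanism in reverse: remove the row with key 4 and the next MAX + 1 computation hands out 4 again, silently re-attaching any stale references to a different row.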