Question

In an attempt to log the time spent on SQL Server queries in an ASP.NET Core web application, I've been using the code below: middleware subscribes the following observer to all DiagnosticListeners.

I'm not sure this is the optimal solution performance-wise, and I wondered whether there is a better way using the ASP.NET Core logging API, capturing detailed logging objects directly from EF Core. Ideally I want to keep a running total of the duration of all queries performed during a request, with the total in milliseconds available to the middleware at the end of the request.

using System;
using System.Collections.Generic;
using System.Data.Common;
using System.Diagnostics;
using System.Threading;
using Microsoft.Extensions.DiagnosticAdapter; // for [DiagnosticName]; SubscribeWithAdapter comes from the same package

public class QueryTimingObserver : IObserver<DiagnosticListener>
{
    private readonly List<IDisposable> subscriptions = new List<IDisposable>();

    // AsyncLocal so the stopwatch follows the async flow of each command.
    private readonly AsyncLocal<Stopwatch> stopwatch = new AsyncLocal<Stopwatch>();

    // Note: not synchronised; assumes queries within a request run sequentially.
    private double milliseconds;

    void IObserver<DiagnosticListener>.OnNext(DiagnosticListener diagnosticListener)
    {
        if (diagnosticListener.Name == "SqlClientDiagnosticListener")
        {
            IDisposable subscription = diagnosticListener.SubscribeWithAdapter(this);
            subscriptions.Add(subscription);
        }
    }

    void IObserver<DiagnosticListener>.OnError(Exception error)
    {
    }

    void IObserver<DiagnosticListener>.OnCompleted()
    {
        subscriptions.ForEach(x => x.Dispose());
        subscriptions.Clear();
    }

    [DiagnosticName("System.Data.SqlClient.WriteCommandBefore")]
    public void OnCommandBefore()
    {
        stopwatch.Value = Stopwatch.StartNew();
    }

    [DiagnosticName("System.Data.SqlClient.WriteCommandAfter")]
    public void OnCommandAfter(DbCommand command)
    {
        // Guard against an "after" event with no matching "before"
        // (e.g. a command that started before we subscribed).
        Stopwatch sw = stopwatch.Value;
        if (sw == null)
        {
            return;
        }

        sw.Stop();
        milliseconds += sw.Elapsed.TotalMilliseconds;
    }

    public double Milliseconds
    {
        get { return milliseconds; }
    }
}
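
The middleware wiring looks roughly like this (a minimal sketch: QueryTimingMiddleware is my name for it, and a fresh observer is created per request so Milliseconds only covers that request's queries):

using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class QueryTimingMiddleware
{
    private readonly RequestDelegate next;
    private readonly ILogger<QueryTimingMiddleware> logger;

    public QueryTimingMiddleware(RequestDelegate next, ILogger<QueryTimingMiddleware> logger)
    {
        this.next = next;
        this.logger = logger;
    }

    public async Task Invoke(HttpContext context)
    {
        // A fresh observer per request keeps the running total scoped to this request.
        var observer = new QueryTimingObserver();
        using (DiagnosticListener.AllListeners.Subscribe(observer))
        {
            await next(context);
        }

        // Total SQL time in milliseconds, available once the rest of the pipeline has run.
        logger.LogInformation("SQL time for {Path}: {SqlMs} ms",
            context.Request.Path, observer.Milliseconds);
    }
}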

Popular Answer

Query performance work should happen at two levels: timing the controller/service actions in code, and profiling at the database level. Performance issues crop up for a number of reasons and might be detected only at the controller, only at the database, or at both. Recording performance metrics at the code level should always be configurable so it can be turned on and off easily, because any performance capture is itself a performance hit, plus the resource space needed to record the results.
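
For example, the timing middleware from the question could be registered only when a configuration flag enables it. A sketch of Startup.Configure, where the "Diagnostics:TimeSqlQueries" key is illustrative, not a standard setting:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;

public partial class Startup
{
    public void Configure(IApplicationBuilder app, IConfiguration config)
    {
        // Only pay the diagnostic-subscription cost when the flag is on;
        // flipping it off removes the per-request overhead entirely.
        if (config.GetValue<bool>("Diagnostics:TimeSqlQueries"))
        {
            app.UseMiddleware<QueryTimingMiddleware>();
        }

        app.UseMvc();
    }
}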

At the risk of being a bit off-topic, here are the typical performance pitfalls I come across and how I measure/detect them. The aim is to show why SQL-side profiling is valuable alongside code-based timers for detecting potential performance issues, and for identifying the steps to resolve them.

  1. Rogue lazy-load calls. Detected by SQL profiling. The initial query completes fast and the web request appears to finish quickly in a debugger, but the client doesn't receive a response for some time. After the initial query(ies) complete, the profiler records a number of "by ID" queries: lazy-load calls tripped by the serializer. Cause: the anti-pattern of passing entities to the view/consumer. Solution: use projection to fill view models (see the sketch after this list).
  2. Expensive queries. Detected by code timers and SQL profiling. These queries take a significant amount of time to run, which is detectable while debugging, and they show an equally noticeable execution time in profiler results. Solutions range from checking the executed query for execution-plan improvements (missing or improvable indexes) to determining whether the query can be simplified or broken up in code.
  3. Intermittently slow queries. Detected by code timers and SQL profiling. These can be picked up ahead of time by inspecting profiling results; the tell-tale sign is a query that touches a significantly large number of rows. Under low load these queries can have very reasonable execution times, but under higher load their execution times can spike dramatically. Queries touching many rows risk tying each other up with the row locks needed to avoid dirty reads; locks mean delays and potential deadlocks. Solutions include refining queries to pull back only the data they need so they leverage indexes better, and, for expansive queries behind reports and searches, hitting read-only replicas in larger systems.
  4. EF "isms". Detected by code timers, not profiling. Some EF queries just get a bit weird when projecting entity graphs: the underlying query runs fast and spawns no additional queries, but the EF call sits and churns before returning. Sometimes these stem from complex LINQ expressions that produce a reasonable-looking query yet still seem to sit there. Sometimes the cause is fairly obvious, such as a long-lived DbContext that ends up tracking a good chunk of the entire database. Other times the solution has to be analysed on a case-by-case basis; it usually involves using or altering projection to simplify the data coming back from EF (again, see the sketch below), or breaking a query up into simpler pieces, even if that means two queries instead of one.
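
To make the projection advice in points 1 and 4 concrete, here is a sketch; Order, OrderLine, OrderSummary, and AppDbContext are illustrative types, not from the question:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class OrderSummary
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderQueries
{
    public async Task<List<OrderSummary>> GetSummariesAsync(AppDbContext db, int customerId)
    {
        // Projected results are never tracked, and the serializer receives plain
        // view models, so there is nothing left for it to lazy-load.
        return await db.Orders
            .Where(o => o.CustomerId == customerId)
            .Select(o => new OrderSummary
            {
                Id = o.Id,
                // The aggregate is translated to SQL; only the computed value comes back.
                Total = o.Lines.Sum(l => l.Price * l.Quantity)
            })
            .ToListAsync();
    }
}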

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow