Performance

Discussion of open issues, suggestions and bugs regarding ADO.NET provider for PostgreSQL
Zoran
Posts: 44
Joined: Thu 28 Apr 2005 21:55
Location: Zagreb, Croatia

Performance

Post by Zoran » Thu 03 Sep 2009 13:19

Our 2-core web server with 2 GB RAM worked fine. Then I decided to use LINQ entity classes to access the database.
After the installation, the web server was immediately overloaded, so I increased the number of CPU cores to 8 and the RAM to 12 GB. That didn't help: CPU load stayed between 95 and 100%, and even 12 GB of RAM wasn't enough.
After a little profiling, I found that LINQ queries run 2 to 10 times more slowly than plain-text SQL. Furthermore, the second line in this code took 120 times longer:
Table<PurchaseOrder> po = dataContext.GetTable<PurchaseOrder>();
PurchaseOrder single = po.Single(c => c.Key == key);
After generating the data-access code with a metagenerator and changing the second line, the performance penalty dropped to about 2 times compared to a plain SQL SELECT.
Now I have 8 mostly idle cores, seldom above 10% load, and 10 GB of RAM to spare.
Is that normal?

drichter
Posts: 20
Joined: Fri 21 Aug 2009 14:44

Post by drichter » Fri 04 Sep 2009 08:58

Perform the data access like this, and LINQ will generate a SELECT with the "key" field in the WHERE clause instead of grabbing the whole table first.

PurchaseOrder single = dataContext.PurchaseOrders.Single(c => c.Key == key);

Zoran
Posts: 44
Joined: Thu 28 Apr 2005 21:55
Location: Zagreb, Croatia

Post by Zoran » Sat 05 Sep 2009 07:16

I thought the actual read would be deferred until the second statement, because the table wasn't used in the first one.
With the code you proposed I got similar results. For a different table, 200 LINQ reads took 2.308 sec. while plain SQL took 0.078 sec., about a 30-fold difference.
Do LINQ queries really have to take 2-10 times longer?
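For reference, timings like the ones above can be collected with a simple Stopwatch harness. This is only a sketch: the ReadOne placeholder stands in for the actual database fetch (e.g. db.PurchaseOrders.Single(c => c.Key == key)), which is not shown here.

```csharp
using System;
using System.Diagnostics;

class TimingSketch
{
    static void Main()
    {
        // Time 200 consecutive single-row reads, as in the test above.
        var sw = Stopwatch.StartNew();
        for (int key = 1; key <= 200; key++)
        {
            ReadOne(key);
        }
        sw.Stop();
        Console.WriteLine("200 reads: {0:F3} sec.", sw.Elapsed.TotalSeconds);
    }

    // Placeholder for the real fetch; replace with the LINQ or plain-SQL read
    // you want to measure.
    static int ReadOne(int key) => key;
}
```

Running the same harness once with the LINQ read and once with the plain-SQL read gives directly comparable numbers.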

Shalex
Site Admin
Posts: 9543
Joined: Thu 14 Aug 2008 12:44

Post by Shalex » Mon 07 Sep 2009 14:52

This code

Code:

Table<PurchaseOrder> po = dataContext.GetTable<PurchaseOrder>();
PurchaseOrder single = po.Single(c => c.Key == key);
is absolutely identical to the following:

Code:

PurchaseOrder single = dataContext.PurchaseOrders.Single(c => c.Key == key);
Only one row is retrieved in both cases. You can check this with the DataContext.Log property (it specifies the destination the generated SQL query or command is written to):

Code:

      StringBuilder sb = new StringBuilder();
      db.Log = new System.IO.StringWriter(sb);
The delay is caused by the compilation of the LINQ query. Our compiler is slow. If you need to call the query 200 times, CompiledQuery may be a better solution for you. Please try the following sample:

Code:

      Func<TESTDataContext, int, PurchaseOrder> GetPurchaseOrder =
            Devart.Data.Linq.CompiledQuery.Compile(
                (TESTDataContext ctx, int id) =>
                    ctx.PurchaseOrders.Single(obj => obj.Key == id)
            );

      for (int i = 0; i < 200; i++)
      {
          PurchaseOrder single = GetPurchaseOrder(db, key);
      }

Zoran
Posts: 44
Joined: Thu 28 Apr 2005 21:55
Location: Zagreb, Croatia

Post by Zoran » Thu 10 Sep 2009 22:01

The compiled query is much faster: for my example, 0.391 sec. (LINQ) compared to 0.086 sec. (plain SQL). However, that is still 4.5 times slower, and writing the delegates is awkward. A metagenerated data tier is significantly faster.
200 row fetches in one procedure are seldom needed; that was only for testing purposes. My point is that data access that is 2, 5 or 10 times slower may not be acceptable. For my application, it isn't.
