
Cached updates trouble

Posted: Thu 27 May 2010 12:53
by h.hasenack
Hi

Using ODAC 3.60.0.42 21-Mar-02, RAD Studio 2007

I have a TClientDataSet filled with changes. I call TClientDataSet.ApplyUpdates, which passes the changes on to the TOraTable.

CachedUpdates=True, Options.UpdateBatchSize=1000

These settings do appear to have some effect, but the behaviour is still unexpected.

I expected many DML statements to be sent in one network message, but it appears each insert/update/delete statement is sent separately. Due to network delays this causes a bottleneck when manipulating data from within my app.

Sample of the statement I expected ODAC to send:
--begin of msg

Code: Select all

BEGIN
 INSERT ... INTO ... VALUES...;
 INSERT ... INTO ... VALUES...;
...
 INSERT ... INTO ... VALUES...;
END;
-- end of msg

What ODAC actually sends is:

--begin of msg

Code: Select all

 INSERT ... INTO ... VALUES...;
-- end of msg

--begin of msg

Code: Select all

 INSERT ... INTO ... VALUES...;
-- end of msg

--begin of msg

Code: Select all

 ...
-- end of msg

--begin of msg

Code: Select all

 INSERT ... INTO ... VALUES...;
-- end of msg

BTW, the ClientDataSet solution has nothing to do with this. Adding records in a loop (with cached updates enabled) to the TOraTable and then calling ApplyUpdates displays the same behaviour.
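For clarity, a minimal sketch of the direct TOraTable loop I mean (assuming an open TOraSession; OraTable1 and the ID field are illustrative names, not my real ones):

```pascal
// Illustrative sketch only: component, table and field names are made up.
OraTable1.CachedUpdates := True;
OraTable1.Options.UpdateBatchSize := 1000;
OraTable1.Open;
for i := 1 to 1000 do
begin
  OraTable1.Append;
  OraTable1.FieldByName('ID').AsInteger := i;
  OraTable1.Post;       // the change is only cached locally at this point
end;
OraTable1.ApplyUpdates; // here each INSERT still goes out in its own message
```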

(Tested with Devart's DBMonitor tool)

Posted: Fri 28 May 2010 08:05
by bork
Hello

The batch update mode was added in ODAC version 6.00.0.4. If you want to use this functionality, you should upgrade ODAC to a 6.x.x.x version.

Posted: Fri 28 May 2010 13:44
by h.hasenack
I'm so sorry for entering the wrong version info. I copied it from one of the readmes.

The actual version used is Odac690src.exe, which I downloaded on 13 April 2010.

Kind regards - Hans

Posted: Mon 31 May 2010 09:15
by bork
The problem is that by default TDataSetProvider has the ResolveToDataSet property set to False. That means TDataSetProvider posts all data to the database via SQL queries. In this case the CachedUpdates and UpdateBatchSize properties are ignored.

But you can change the ResolveToDataSet property to True. In that case TDataSetProvider posts data to the TOraQuery via the Edit and Post methods, and you can use the CachedUpdates and UpdateBatchSize properties:

Code: Select all

begin
  Randomize; // RandomRange used below is declared in the Math unit

  ClientDataSet1.Close;

  OraQuery1.Close;
  OraQuery1.CachedUpdates := true;
  OraQuery1.Options.UpdateBatchSize := 1000;
  OraQuery1.Open;

  DataSetProvider1.ResolveToDataSet := True;

  ClientDataSet1.Open;
  while not ClientDataSet1.Eof do
  begin
    ClientDataSet1.Edit;
    ClientDataSet1.FieldByName('Value').AsInteger := RandomRange(0, 100);
    ClientDataSet1.Post;

    ClientDataSet1.Next;
  end;
  ClientDataSet1.ApplyUpdates(-1);

  OraQuery1.ApplyUpdates;
end;

Posted: Mon 31 May 2010 12:56
by h.hasenack
My OraProvider.ResolveToDataSet is set to True (I double-checked it).

And my updates actually DO get cached by the TOraTable. That's just not the problem.

The problem is that within the TOraTable.ApplyUpdates call, a loop is started that sends the "cached" inserts/updates/deletes to the Oracle server.

Somewhere within this loop, only one statement is sent to the server at a time, and then the call waits for completion. After that, the next statement is sent, and so on.

I expected ODAC to send multiple insert/update/delete statements in one message to the Oracle server. This would avoid the slowdown caused by network lag.

Regards - Hans

Posted: Tue 01 Jun 2010 08:47
by bork
I repeated the insertion through TClientDataSet and TOraTable with a greater number of records. All records were inserted in batch mode:

Code: Select all

var
  i: Integer;
begin
  Randomize; // RandomRange is declared in the Math unit

  ClientDataSet1.Close;

  OraTable1.Close;
  OraTable1.Options.UpdateBatchSize := 1000;
  OraTable1.CachedUpdates := true;
  OraTable1.Open;

  DataSetProvider1.ResolveToDataSet := True;

  ClientDataSet1.Open;
  for i := 1 to 5 do
  begin
    ClientDataSet1.Insert;
    ClientDataSet1.FieldByName('ID').AsInteger := i;
    ClientDataSet1.FieldByName('Name').AsString := 'test' + IntToStr(i);
    ClientDataSet1.FieldByName('Value').AsInteger := RandomRange(0, 100);
    ClientDataSet1.Post;

    ClientDataSet1.Next;
  end;
  ClientDataSet1.ApplyUpdates(-1);
  OraTable1.ApplyUpdates;
end;
Maybe the behavior you described depends on properties of TOraSession, TOraTable, or other components. Try to create a new application and add to it: TOraSession, TOraTable, TDataSetProvider, TClientDataSet, and TOraSQLMonitor. Set only the TOraSession.Server, TOraSession.Username, TOraSession.Password, TOraTable.TableName, TOraTable.Options.UpdateBatchSize, TOraTable.CachedUpdates, and TDataSetProvider.ResolveToDataSet properties (do not change any other properties) and try to insert your records again.

If all records are inserted in batch mode correctly, then try to determine which property change turns off the batch mode.

If you cannot determine which property turns off the batch mode, then please send us a sample application reproducing your issue by e-mail.

Posted: Wed 02 Jun 2010 12:07
by h.hasenack
Writing a mini app revealed that ODAC indeed uses BEGIN...END to update.

Unless:

an UpdateObject is created...

Code: Select all

    FORADataset.UpdateObject:=TOraUpdateSQL.Create(self);
    FORADataset.Options.PrepareUpdateSQL:=True; 
Commenting out these two lines made the BEGIN..END block with multiple statements work again. However, I then got a BIND error.

So I had to comment out another line

Code: Select all

    FORADataset.Options.QuoteNames:=True;
Now everything appears to work as expected, and performance has improved accordingly.

Won't I get into trouble when a table name or field name contains spaces? Or is this handled by ODAC automatically? Unfortunately my app allows users to define custom fields in the DB. I may have to replace spaces with underscores automatically if ODAC doesn't handle this.
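To illustrate my concern with a made-up example: in Oracle, an identifier containing a space is only legal when it is written as a quoted identifier, which is what I understand QuoteNames=True is for.

```sql
-- Hypothetical example; MY TABLE / MY FIELD are made-up names.
-- Without quoting, a name containing a space is invalid:
INSERT INTO MY TABLE (MY FIELD) VALUES (1);      -- rejected by Oracle
-- With double quotes it is a legal Oracle quoted identifier:
INSERT INTO "MY TABLE" ("MY FIELD") VALUES (1);
```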

Regards - Hans.

Posted: Thu 03 Jun 2010 10:44
by h.hasenack
So, I managed to get the multi-record update working. This already saves a lot of insertion and updating time. :D

But here's the next bottleneck. :shock: When TClientDataSet.ApplyUpdates is called, updates are committed to the TOraTable through the OraProvider. Which works just fine.

However, after applying each update, the dataset provider evidently performs a Locate call on the TOraTable, and this in turn causes TMemData.PrepareRecNoCache to be fired.

As the record count increases, insert performance gradually decreases from a rocketing 10K rps (100 records inserted) down to a snail's pace of 20 rps (60K records inserted).
Profiling the app reveals that 99% of the CPU time is spent in TMemData.PrepareRecNoCache, which is called for each Locate call, while FRecordNoCache is flushed on every post/delete.

:?: How can I avoid this performance issue whilst keeping the multi-record updates :?:
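For what it's worth, the numbers are consistent with quadratic behaviour (my own back-of-the-envelope estimate, assuming PrepareRecNoCache scans the whole record list on every rebuild): applying n updates, with one cache rebuild per update over the current record count, costs about

```latex
\sum_{k=1}^{n} k = \frac{n(n+1)}{2} \approx \frac{n^2}{2} \quad \text{record scans.}
```

Going from 100 to 60K records multiplies the per-update cost by roughly 600, which is in the same ballpark as the observed drop from ~10K rps to ~20 rps.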

** EDIT **

I managed to work around it by applying some patches. First of all I added a line to

Code: Select all

procedure TCustomDADataSet.SetUniDirectional(Value: boolean);
begin
  if Value <> FUniDirectional then
  begin
    CheckInactive;
    // HH Modification to notify unidirectional for TDataset base class
    inherited SetUniDirectional(Value);
    // HH End modification
    FUniDirectional := Value;
    if FIRecordSet <> nil then
      FIRecordSet.SetProp(prUniDirectional, FUniDirectional);
    if FUniDirectional then
      FetchAll := False;
  end;
end;
And next I added

Code: Select all

  if IsUniDirectional then
  begin
    Result := False;
    Exit;
  end;
to all TMemDataSet.Locate implementations. The same guard should also be applied to FindKey and FindNearest, but for now I omitted this.

I cannot exactly predict all the implications, but for now this seems to fix my update bottleneck problem.

***EDIT***

I managed to bypass the superfluous Locate by implementing my own dataset provider and resolver. All I did was override it like this:

Code: Select all

procedure TSGORADataSetResolver.InternalBeforeResolve(Tree: TUpdateTree);
begin
  if Tree.Delta.UpdateKind <> DB.ukInsert then
    inherited; // inherited tries to locate the record. This makes no sense for inserted records
end;
Well, of course this won't fix the issue when many records are updated or deleted and Locate has to be called. In that case, the old problem with PrepareRecNoCache will turn up again. It's up to Devart to find a way to maintain the recno cache rather than destroy it on each post/delete and rebuild it on each Locate.


Kind regards - Hans