version 5.70.0.28: ORA-21503: program terminated by fatal error

Discussion of open issues, suggestions and bugs regarding ODAC (Oracle Data Access Components) for Delphi, C++Builder, Lazarus (and FPC)
ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

version 5.70.0.28: ORA-21503: program terminated by fatal error

Post by ac » Fri 27 Jan 2006 15:59

Oracle: 9.2.0.7 (Client + Server)
Delphi 7

Since updating to version 5.70.0.28 we have been getting the error "ORA-21503: program terminated by fatal error" several times. We never had this error before.

I'm sorry that I cannot provide a reproducible example, but it seems the error occurs when iterating through a TOraQuery that contains XMLTYPE fields.
Sometimes the error occurs in the TOraQuery.Next method after iterating through ~10000 records.

ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

Post by ac » Fri 27 Jan 2006 16:32

I found a way to reproduce the problem:

PL/SQL script:

Code: Select all

create table lotrec (a integer not null primary key, b xmltype);

begin
  for i in 1 .. 30000 loop
    -- '<a/>' is a placeholder document (createxml needs a non-empty XML value)
    insert into lotrec values (i, xmltype.createxml('<a/>'));
  end loop;
  commit;
end;
/

Then use a TOraQuery with SQL = 'select * from lotrec' and just iterate through all records:

Code: Select all

dataset.First;
i := 0;
while not dataset.Eof do
begin
  Inc(i);          // count iterations; the error appears after ~10000 records
  dataset.Next;
end;
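
The query itself is set up roughly like this (a sketch only; OraSession1 and OraQuery1 are example component names and the connection data is a placeholder):

Code: Select all

// Sketch of the assumed setup; adjust connection data as needed.
OraSession1.Username := 'scott';       // placeholder credentials
OraSession1.Password := 'tiger';
OraSession1.Server   := 'ORCL';        // TNS alias (placeholder)
OraSession1.Connect;

OraQuery1.Session  := OraSession1;
OraQuery1.SQL.Text := 'select * from lotrec';
OraQuery1.Open;
// ...then run the loop above with dataset = OraQuery1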

Challenger
Devart Team
Posts: 925
Joined: Thu 17 Nov 2005 10:53

Post by Challenger » Fri 03 Feb 2006 15:14

We have received your request and the investigation is in progress. Unfortunately, we cannot provide any further information at the moment.

Challenger
Devart Team
Posts: 925
Joined: Thu 17 Nov 2005 10:53

Post by Challenger » Tue 14 Feb 2006 13:15

We have fixed the memory leak that occurred when working with XMLTYPE. This fix will be included in the next ODAC build.

ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

Post by ac » Thu 16 Feb 2006 09:22

Thanks for the fix. Did you also find the problem with Eof and tables containing XMLTYPE/CLOB fields?

Challenger
Devart Team
Posts: 925
Joined: Thu 17 Nov 2005 10:53

Post by Challenger » Thu 16 Feb 2006 09:59

No, we could not reproduce the problem with an invalid value of the Eof property.

ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

Post by ac » Mon 20 Feb 2006 09:34

OK, after further investigation it seems that both the "program terminated by fatal error" exception and the invalid Eof values are related to the memory leak. As soon as my test application (which does little more than the example shown in this thread) reaches 2 GB of physical plus 2 GB of virtual memory allocation, both errors occur. So I hope the memory leak fix will also fix this strange behaviour.

Can you tell me when the next build will be released? Or, if the fix was fairly simple, can you tell me what I should change in the ODAC sources to fix the memory leak myself? Then I could tell you whether the other two problems are also solved by the fix.

Thanks!

Challenger
Devart Team
Posts: 925
Joined: Thu 17 Nov 2005 10:53

Post by Challenger » Tue 21 Feb 2006 13:52

The new build of ODAC will be released next week.

ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

Post by ac » Tue 14 Mar 2006 17:36

With the new version 5.70.0.29 it works for tables containing XMLTYPE columns (memory usage no longer increases).

BUT: we still have problems with tables containing CLOB fields. Try to create a table with 30000 records containing a CLOB field (>3000 characters in every CLOB), then just iterate through the table using "while not Eof do Next": memory usage increases very fast, and after reaching ~2 GB of RAM, Eof returns True.
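
A rough sketch of how such a test table can be created and filled from Delphi (the table name lotclob, the component names OraSession1/OraSQL1 and the filler value are just examples):

Code: Select all

// Sketch only: lotclob, OraSession1 and OraSQL1 are example names.
OraSQL1.Session  := OraSession1;
OraSQL1.SQL.Text := 'create table lotclob (a integer not null primary key, b clob)';
OraSQL1.Execute;

OraSQL1.SQL.Text := 'insert into lotclob values (:a, :b)';
for i := 1 to 30000 do
begin
  OraSQL1.ParamByName('a').AsInteger := i;
  OraSQL1.ParamByName('b').AsString  := StringOfChar('x', 3500); // > 3000 characters
  OraSQL1.Execute;
end;
OraSession1.Commit;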

So my question is: is there a memory leak in CLOB fields, or does memory usage grow simply because new CLOB objects are created while iterating but none are freed (since records that have already been iterated over are kept in memory)?

If the latter is the case: how can I iterate through a big table containing CLOB fields without running out of memory?

Challenger
Devart Team
Posts: 925
Joined: Thu 17 Nov 2005 10:53

Post by Challenger » Thu 16 Mar 2006 11:16

Try setting the DeferredLobRead option to True. This avoids reading CLOB fields while fetching.
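
For example, something along these lines, set before the query is opened (a sketch; it assumes the option is exposed under the dataset's Options property, and uses the lotclob example table from above):

Code: Select all

// Do not read CLOB values during fetch; they are read on demand instead
OraQuery1.Options.DeferredLobRead := True;
OraQuery1.SQL.Text := 'select * from lotclob';
OraQuery1.Open;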

ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

Post by ac » Wed 22 Mar 2006 10:54

This works as long as I don't access the value of the CLOB, but since we read the content of each CLOB within the loop, the behaviour is the same as before: it seems all CLOB values are cached, which leads to an out-of-memory condition. Is there a way to "unload" CLOB fields, or some property that causes the dataset to free the CLOB value after scrolling to the next record?
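
Our loop reads the CLOB value of every record, roughly like this (the column name b is taken from the example table above):

Code: Select all

// Accessing the CLOB content of each record while iterating
dataset.First;
while not dataset.Eof do
begin
  s := dataset.FieldByName('b').AsString;  // s: string, holds the CLOB content
  dataset.Next;
end;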

I'm wondering why we don't have problems with XMLTYPE columns, since they contain more or less the same amount of data in our case. Iterating through a table with XMLTYPE columns does not increase memory usage.

So are XMLTYPE fields handled in a different way than CLOB fields (in terms of caching)?

ac
Posts: 32
Joined: Mon 16 Jan 2006 12:56

Post by ac » Fri 24 Mar 2006 11:51

Finally, I think I found the solution: if I set CacheLobs to False, CLOB values are not cached and memory usage does not increase, even if I modify the CLOB values.
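
In other words, something like this before opening the query (a sketch; OraQuery1 and lotclob are the example names from above, and I assume CacheLobs is found under the Options property):

Code: Select all

// Do not keep fetched LOB values cached in the dataset
OraQuery1.Options.CacheLobs := False;
OraQuery1.SQL.Text := 'select * from lotclob';
OraQuery1.Open;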
