version 5.70.0.28: ORA-21503: program terminated by fatal error
Oracle: 9.2.0.7 (Client + Server)
Delphi 7
Since we updated to version 5.70.0.28, we have been getting the error "ORA-21503: program terminated by fatal error" several times. We never had this error before.
I'm sorry that I cannot provide a reproducible example, but it seems the error occurs when iterating through a TOraQuery that contains XMLTYPE fields.
Sometimes the error occurs in the TOraQuery.Next method after iterating through roughly 10,000 records.
I found a way to reproduce the problem:
PL/SQL script:
create table lotrec (a integer not null primary key, b xmltype);

declare
  i integer;
begin
  for i in 1 .. 30000 loop
    -- '<row/>' is a placeholder; any small, well-formed XML document will do
    insert into lotrec values (i, xmltype.createxml('<row/>'));
  end loop;
  commit;
end;
/
Then use a TOraQuery with SQL = 'select * from lotrec' and just iterate through all records:
dataset.First;
i := 0;
while not dataset.Eof do
begin
  Inc(i);
  dataset.Next;
end;
OK, after further investigation I found out that both the exception "Program terminated by fatal error" and the invalid Eof values seem to be related to the memory leak. As soon as my test application (which does no more than the example shown in this thread) reaches 2 GB of physical plus 2 GB of virtual memory allocation, both errors occur. So I hope the memory-leak fix will also fix this strange behaviour.
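For reference, a minimal sketch of how the memory growth can be logged while iterating, so the error can be correlated with the allocation. This is illustrative helper code, not my original test; it assumes Windows and the PsAPI unit that ships with Delphi 7, and the logging interval is arbitrary:

uses Windows, PsAPI, SysUtils;

// Log the current working set size; call this e.g. every 1000 records
// inside the "while not Eof do Next" loop.
procedure LogWorkingSet(RecordNo: Integer);
var
  Counters: TProcessMemoryCounters;
begin
  Counters.cb := SizeOf(Counters);
  if GetProcessMemoryInfo(GetCurrentProcess, @Counters, SizeOf(Counters)) then
    OutputDebugString(PChar(Format('record %d: working set %d KB',
      [RecordNo, Counters.WorkingSetSize div 1024])));
end;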
Can you tell me when the next build will be released? Or, if the fix is quite simple, can you tell me what I should change in the ODAC sources to fix the memory leak? Then I can tell you whether the other two problems are solved by the memory-leak fix.
Thanks!
With the new version 5.70.0.29 it works for tables containing XMLTYPE columns (the memory usage no longer increases).
BUT: we still have problems with tables containing CLOB fields. Try to create a table with 30000 records containing a CLOB field (more than 3000 characters in every CLOB).
Then just iterate through the table using "while not Eof do Next": the memory usage increases very fast, and after reaching about 2 GB of RAM, Eof returns True.
So my question is: is there a memory leak for CLOB fields, or does the memory usage increase simply because new CLOB objects are created while iterating but never freed (because already iterated records are kept in memory)?
If the latter is the case: how can I iterate through a big table containing CLOB fields without getting an out-of-memory error?
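One possible workaround, sketched below under assumptions: fetch the table in primary-key ranges, so that each open query holds at most a limited number of CLOB values. The table and column names (lobtest, id, c), the batch size of 1000, and the whole approach are illustrative, not something taken from this thread:

uses SysUtils, Ora;

// Iterate the table in key ranges so that at most BatchSize CLOB values
// are held in memory at a time; the previous batch is released on Close.
procedure IterateInBatches(Query: TOraQuery);
const
  BatchSize = 1000; // illustrative value
var
  LastKey, RowsFetched: Integer;
begin
  Query.SQL.Text :=
    'select * from (select * from lobtest where id > :last_key order by id) ' +
    'where rownum <= :batch_size';
  LastKey := 0;
  repeat
    Query.Close;
    Query.ParamByName('last_key').AsInteger := LastKey;
    Query.ParamByName('batch_size').AsInteger := BatchSize;
    Query.Open;

    RowsFetched := 0;
    while not Query.Eof do
    begin
      // ... read Query.FieldByName('c') here ...
      LastKey := Query.FieldByName('id').AsInteger;
      Inc(RowsFetched);
      Query.Next;
    end;
  until RowsFetched < BatchSize; // last (partial) batch reached
  Query.Close;
end;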
This works as long as I don't access the value of the CLOB, but since we read the content of the CLOB within a loop, the behaviour is the same as before: it seems all CLOB values are cached, which leads to an out-of-memory error. Isn't there a possibility to "unload" CLOB fields, or some property that causes the dataset to free the CLOB value after scrolling to the next record?
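For illustration, this is roughly the kind of loop used to read the CLOB content. Here it streams each value through CreateBlobStream rather than AsString; that choice, the column name c, and the buffer size are illustrative assumptions, and this only controls our own temporary buffers, not whatever the dataset caches per record:

uses Classes, DB, Ora;

// Illustrative reading loop: each CLOB is read in chunks through a blob
// stream that is freed per record. This does not release CLOB data the
// dataset itself may keep cached for already visited records.
procedure ReadClobs(Query: TOraQuery);
var
  ClobField: TField;
  Stream: TStream;
  Buffer: array[0..8191] of Char;
  BytesRead: Integer;
begin
  ClobField := Query.FieldByName('c'); // illustrative column name
  Query.First;
  while not Query.Eof do
  begin
    Stream := Query.CreateBlobStream(ClobField, bmRead);
    try
      repeat
        BytesRead := Stream.Read(Buffer, SizeOf(Buffer));
        // ... process Buffer[0..BytesRead - 1] here ...
      until BytesRead = 0;
    finally
      Stream.Free;
    end;
    Query.Next;
  end;
end;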
I'm wondering why we don't have problems with XMLTYPE columns, since in our case they contain more or less the same amount of data. Iterating through a table with XMLTYPE columns does not increase the memory usage.
So are XMLTYPE fields handled differently from CLOB fields (in terms of caching)?