Hi,
We are a registered user of dotConnect for Oracle.
Currently, our project needs to support a large database of 15 million records, where one view has over 200 columns, which we are unable to retrieve from the database.
Please suggest a solution.
Thanks in advance.
Regards,
Rajeev
Handling large database...
-
- Devart Team
- Posts: 2420
- Joined: Wed 02 Nov 2011 09:44
What do you mean when you say that you cannot retrieve the records? If you cannot retrieve records because of an exception, please describe it. If you get no data (an empty result set), please describe your query.
Also please specify the following:
1. Do you use the direct or OCI connection mode?
2. What data types are available in the table you are querying (are there any specific ones, e.g., XMLType or user-defined object types)?
3. The version of the Oracle server.
-
- Posts: 14
- Joined: Mon 07 Feb 2011 09:02
Records cannot be retrieved because of the huge size of the database (10 million rows); when we run the same query on a 2 million-row database, retrieval works fine. There is no exception; the query takes a long time and then times out.
The view contains only standard data types (string, Decimal, DateTime, and Binary).
1. While creating the .edmx file we did not select the Direct option checkbox, so we assume it uses the OCI connection mode.
2. No, there are no user-defined data types.
3. 10.2.0.4.
Please suggest how to retrieve records from a huge database.
Regards,
Rajeev
-
- Devart Team
- Posts: 2420
- Joined: Wed 02 Nov 2011 09:44
Could you please specify the following:
1. Do you use an Entity Framework query when the problem occurs?
2. How many records does your query return?
3. What query do you use?
4. Try using the "Default Command Timeout=0;" connection string parameter.
5. Performance in Entity Framework was discussed in this thread: http://www.devart.com/forums/viewtopic.php?t=20866.
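A minimal sketch of point 4, assuming a typical dotConnect for Oracle connection string (the server and credential values below are placeholders, not taken from this thread):

```csharp
// Hypothetical connection string illustrating "Default Command Timeout=0;".
// A value of 0 means commands wait indefinitely instead of timing out.
string connectionString =
    "Server=orcl;User Id=scott;Password=tiger;" +
    "Default Command Timeout=0;";
```

Note that disabling the timeout only hides the symptom; a query that needs minutes to return 100 rows usually still needs the query-side optimizations discussed below.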
-
- Posts: 14
- Joined: Mon 07 Feb 2011 09:02
1. Yes, we use Entity Framework when this problem arises.
Is there an alternative way to access the Oracle database?
2. We are selecting the top 100 rows from the view, and the view contains around 200 fields.
3. A simple SELECT query with around 4 WHERE clauses:
query = query.Take(100);
4. We will try the timeout option.
5. We will work through that thread.
Please let us know of any observations based on the above answers.
Regards,
Rajeev
1. Check the SSDL part of your model. If there is a DefiningQuery for your entity, it leads to generating a more complex query with a subquery. So, remove the DefiningQuery, if any.
2. Paging can be slow. We recommend optimizing it if possible.
For example: a table has an ID column of the NUMBER type that was filled from a sequence, so the values range from 1 to 15000000. In this case, Where(a => a.ID <= 100) will be much faster than Take(100).
3. Do you really need all 200 fields? If a significant part of them is not required, or some of them store big string/binary data, you can try Table Splitting: http://blogs.msdn.com/b/adonet/archive/ ... table.aspx. Table Splitting can be done easily (with no manual editing of the .edmx) in Entity Developer: drag&drop the particular (unnecessary or big-data) properties from the table on the diagram surface and choose Table Splitting in the dialog.
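The paging advice in item 2 can be sketched as follows. This is an illustrative fragment only: the context class (`Entities`), entity set (`MyViewRows`), and `ID` property are assumed names, not taken from the original model.

```csharp
using (var context = new Entities())
{
    // Take(100) makes the server number/limit the full result set
    // before returning rows, which is expensive on a 10M-row view:
    var pagedRows = context.MyViewRows
                           .Take(100)
                           .ToList();

    // If ID is a sequence-populated, indexed column, a range predicate
    // translates to an indexed WHERE clause and avoids the paging cost:
    var rangedRows = context.MyViewRows
                            .Where(a => a.ID <= 100)
                            .ToList();
}
```

The trade-off is that the range predicate only works when the key values are dense and monotonic (e.g., sequence-filled); for arbitrary ordering you would still need server-side paging.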
Hi Rajeev,
Just an observation: 10 million rows is not a huge number. Both Oracle and dotConnect are used in environments with a much bigger number of records per table, so there is absolutely no reason why you should have problems with 10 million or even 100 million rows. But as others pointed out, 200 columns sounds like a possible design problem. Think the solution through: do you really need so many columns at once? Can you split the data into smaller, more granular sets? Try splitting your model and experimenting with the results; this will let you narrow down the problem.
Good luck.