Issue when restoring very large backup file

Postby ccampbell » Tue 16 Aug 2016 20:36

Hello,

I’m currently using dotConnect for PostgreSQL 7.6.714.0.

I’m having an issue restoring a very large backup file using PgSqlDump. The backup file is 2.2 GB and has almost 10 million lines.

The backup file consists of the table structures, the data (as INSERT commands), and the constraints. These all get loaded into a newly created schema. I don’t know if this will be helpful, but I’ve included the backup file header:

SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET default_tablespace = '';
SET default_with_oids = false;
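
Roughly, the backup and restore are driven by code of this shape. This is a minimal sketch rather than my exact code: the connection string, schema name, and file path are placeholders, and the Schema property and the file-name overloads of Backup/Restore are assumptions about the PgSqlDump API:

using Devart.Data.PostgreSql;

class DumpRestoreSketch
{
    static void Main()
    {
        // Placeholder connection string; adjust host, database, and credentials.
        using (var conn = new PgSqlConnection(
            "Host=localhost;Port=5432;Database=mydb;User Id=postgres;Password=secret;"))
        {
            conn.Open();

            var dump = new PgSqlDump();
            dump.Connection = conn;
            dump.Schema = "my_schema";  // placeholder; assumed property for limiting the dump to one schema

            // Assumed overloads that write/read the script as a file on disk.
            dump.Backup(@"C:\backups\mydb.sql");   // produces the 2.2 GB, ~10-million-line script
            dump.Restore(@"C:\backups\mydb.sql");  // replayed later against the newly created schema
        }
    }
}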

What’s happening is that the restore seems to skip one row in one of the tables, a table that has many thousands of rows. I then receive a data integrity error when the constraints are applied at the end of the restore process, because that one record never got added back into the table. The skipped record *is* in the backup file that was created using PgSqlDump.

If I repeat the process by creating a backup and restoring it, the same record is always skipped. It is also only one record that gets skipped (that I’m aware of); all other records are restored just fine.

As a test, I deleted the record that was getting skipped, thinking perhaps there was something wrong with it. I then performed another backup and attempted to restore it. I received the same error, only for a different record from the same table. The record now generating the error had restored just fine in the first test.

I don’t believe it’s a connection timeout issue, because the record that gets skipped is not near the end of the file. The table itself contains just straight data: no blobs or other unusual data types. I’ve tried this using three different versions of PostgreSQL (9.3, 9.4, and 9.5) and always get the same results.

If you have any suggestions on settings or other things I can try I would greatly appreciate it.

Regards,

Chris

Re: Issue when restoring very large backup file

Postby Pinturiccio » Fri 19 Aug 2016 16:08

We could not reproduce the issue. Please provide a DDL script of your table. Please also send us a snippet of code where you create your PgSqlDump object and initialize its properties.

If possible, send us the script that you restore. You can archive your file and upload it to our FTP server ( ftp://ftp.devart.com/, credentials: anonymous/anonymous ) or to any file exchange service, so that we can download it from there and use it to test your scenario. You can send us the password to the archive via our contact form.

ccampbell wrote: As a test, I deleted the record that was getting skipped, thinking perhaps there was something wrong with it. I then performed another backup and attempted to restore it. I received the same error, only for a different record from the same table. The record now generating the error had restored just fine in the first test.

Could you tell us whether the new problem record (after you deleted the first problem record) is in some random place, or whether it sits right before or after the deleted record?

