Issue when restoring very large backup file
Posted: Tue 16 Aug 2016 20:36
Hello,
I’m currently using dotConnect for PostgreSQL (7.6.714.0).
I’m having an issue restoring a very large backup file using PgSqlDump. The backup file is 2.2 GB and has almost 10 million lines.
The backup file contains the table structures, the data as INSERT commands, and the constraints, all of which get loaded into a newly created schema. I don’t know if this will be helpful, but I’ve included the backup file header below, followed by a simplified sketch of how I invoke the backup and restore:
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET default_tablespace = '';
SET default_with_oids = false;
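Here is the simplified sketch I mentioned; the connection string, file path, and names are placeholders rather than my real values, and the calls are trimmed down from my actual code:

using System;
using Devart.Data.PostgreSql;

class BackupRestoreSketch
{
    static void Main()
    {
        // Placeholder connection string -- the real host, database, and credentials differ.
        using (var connection = new PgSqlConnection(
            "host=localhost;port=5432;database=mydb;user id=postgres;password=*****"))
        {
            connection.Open();

            var dump = new PgSqlDump();
            dump.Connection = connection;

            // Create the script backup (table structures, data as INSERTs, constraints).
            dump.Backup(@"C:\backups\mydb.sql");

            // Later, after the new schema has been created, restore from the same file.
            dump.Restore(@"C:\backups\mydb.sql");
        }
    }
}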
What’s happening is that the restore seems to skip one row in a table that has many thousands of rows. I then receive a data integrity error when the constraints are applied at the end of the restore process, because that one record never got added back into the table. The skipped record definitely is present in the backup file that PgSqlDump created.
If I repeat the process (create a backup, then restore it), the same record is always skipped. Only one record is skipped, as far as I can tell; all other records are restored just fine.
As a test, I deleted the record that was being skipped, thinking perhaps there was something wrong with it. I performed another backup and attempted to restore it, and I received the same error, only for a different record in the same table. That record had restored just fine in the first test.
I don’t believe it’s a connection timeout issue, because the record that gets skipped is not near the end of the file. The table itself contains plain data only, with no blobs or other unusual data types. I’ve tried this with three different PostgreSQL versions (9.3, 9.4, and 9.5) and always get the same result.
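For completeness, after a restore I confirm the row really is missing with a check along these lines; the table and column names here are placeholders, not my real schema:

using System;
using Devart.Data.PostgreSql;

class OrphanCheckSketch
{
    static void Main()
    {
        // Placeholder connection string, as above.
        using (var connection = new PgSqlConnection(
            "host=localhost;port=5432;database=mydb;user id=postgres;password=*****"))
        {
            connection.Open();

            // Find child rows whose referenced parent key is absent after the restore.
            // child_table, parent_table, parent_id, and id are placeholder names.
            var sql =
                "SELECT c.parent_id " +
                "FROM child_table c " +
                "LEFT JOIN parent_table p ON p.id = c.parent_id " +
                "WHERE p.id IS NULL";

            using (var cmd = new PgSqlCommand(sql, connection))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Orphaned reference: {0}", reader.GetValue(0));
            }
        }
    }
}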
If you have any suggestions on settings or other things I can try, I would greatly appreciate it.
Regards,
Chris