Backup Failure: Consider setting the UseZip64WhenSaving

Discussion of open issues, suggestions and bugs regarding database management and administration tools for MySQL
pscheidler
Posts: 5
Joined: Sun 13 Feb 2011 17:51

Backup Failure: Consider setting the UseZip64WhenSaving

Post by pscheidler » Sun 13 Feb 2011 17:59

I am attempting to back up a moderately sized MySQL database using zip file compression. The dbForge Studio version is 4.5.311, and the OS is MS Server 2008. At the completion of the backup, the message "Export fail: Compressed or Uncompressed size, or offset exceeds the maximum value. Consider setting the UseZip64WhenSaving property on the ZipFile instance." is presented. The zip file created is ~5 GB, and I am unable to open it with WinZip 14.0. (From what I can tell, the classic zip format stores sizes and offsets in 32-bit fields, so anything over 4 GB requires the Zip64 extension, which would explain why a ~5 GB archive fails.)

Can anyone provide guidance on how to enable the UseZip64WhenSaving option identified in the error message?

Thanks in advance for assistance.


Below is the dbForge log and error message.

------ Database 'biodb2010' backup started ------
Output file: D:\biodb2010 20110211 2009.zip
Cannot describe an object. SELECT command denied to user 'biodb'@'isi-dev.draper.com' for table 'func'
SELECT command denied to user 'biodb'@'isi-dev.draper.com' for table 'func'
Exported Table: Enum
Exported Table: AttributeGrouping
Exported Table: BioSchema
Exported Table: ConfigProperty
Exported Table: EventType
Exported Table: FeatureGrouping
Exported Table: QuestionType
Exported Table: SeriesGrouping
Exported Table: Signal
Exported Table: SignalValue
Exported Table: Study
Exported Table: UserRole
Exported Table: Attribute
Exported Table: EnumValue
Exported Table: Feature
Exported Table: Protocol
Exported Table: User
Exported Table: FeatureSet
Exported Table: Participant
Exported Table: Question
Exported Table: Sensor
Exported Table: FeatureSetToFeature
Exported Table: ProtocolAttribute
Exported Table: SensorToSignal
Exported Table: Session
Exported Table: AttributeValue
Exported Table: Event
Exported Table: SensorSeries
Exported Table: SessionFile
Exported Table: FeatureSeries
Exported Table: SignalResource
Exported Table: FeatureValue
Exported data from table: Enum (455 rows)
Exported data from table: AttributeGrouping (22 rows)
Exported data from table: BioSchema (13 rows)
Exported data from table: ConfigProperty (1 rows)
Exported data from table: EventType (4 rows)
Exported data from table: FeatureGrouping (11 rows)
Exported data from table: QuestionType (6 rows)
Exported data from table: SeriesGrouping (10 rows)
Exported data from table: Signal (30 rows)
Exported data from table: SignalValue (946564932 rows)
Exported data from table: Study (3 rows)
Exported data from table: UserRole (1 rows)
Exported data from table: Attribute (782 rows)
Exported data from table: EnumValue (2137 rows)
Exported data from table: Feature (95 rows)
Exported data from table: Protocol (3 rows)
Exported data from table: User (1 rows)
Exported data from table: FeatureSet (4 rows)
Exported data from table: Participant (359 rows)
Exported data from table: Question (96 rows)
Exported data from table: Sensor (15 rows)
Exported data from table: FeatureSetToFeature (170 rows)
Exported data from table: ProtocolAttribute (1067 rows)
Exported data from table: SensorToSignal (43 rows)
Exported data from table: Session (359 rows)
Exported data from table: AttributeValue (94525 rows)
Exported data from table: Event (30282 rows)
Exported data from table: SensorSeries (1487 rows)
Exported data from table: FeatureSeries (720701 rows)
Exported data from table: FeatureValue (87206069 rows)
Objects processed: 32
Rows processed: 1034623683
------- Database 'biodb2010' backup finished -------
Export fail: Compressed or Uncompressed size, or offset exceeds the maximum value. Consider setting the UseZip64WhenSaving property on the ZipFile instance.

Alexz
Devart Team
Posts: 165
Joined: Wed 10 Aug 2005 08:30

Post by Alexz » Mon 14 Feb 2011 07:03

Please download the new build of the product (dbForge Studio for MySQL v4.50.331); the UseZip64WhenSaving option is supported now.
Could you check whether the new build resolves the problem?

pscheidler
Posts: 5
Joined: Sun 13 Feb 2011 17:51

Using version .331 resolved the issue. Timing question.

Post by pscheidler » Tue 15 Feb 2011 13:39

At the end of the backup, the message “Zip64 format was used to compress the resulting file because of the large size of the exported data.” was printed. Problem is solved. Thanks.

One more question if I may. The database size is about 100 GB. The backup file is about 5 GB. The backup took about 90 minutes. It looks like the restore is going to take about 24 hours. Does this timing sound reasonable?

Alexz
Devart Team
Posts: 165
Joined: Wed 10 Aug 2005 08:30

Post by Alexz » Tue 15 Feb 2011 14:41

We are investigating the restore performance now. We'll inform you about the results.

Alexz
Devart Team
Posts: 165
Joined: Wed 10 Aug 2005 08:30

Post by Alexz » Wed 16 Feb 2011 14:34

We investigated the problem: in our tests, the restore process took about twice as long as the backup process usually takes.
Please answer the following questions:
* Did you turn off the bulk insert option?
* Did you try performing the backup with the Disable Constraints option enabled?
* Did you use a local server or a remote one?
* Could you send us the CREATE scripts for the SignalValue, FeatureSeries, and FeatureValue tables? If not, please specify which storage engine these tables use.

pscheidler
Posts: 5
Joined: Sun 13 Feb 2011 17:51

Backup/Restore options and Create Scripts

Post by pscheidler » Fri 18 Feb 2011 14:04

When I did the backup, I accepted all the default options: Use bulk insert was checked, and Disable constraints was not checked. The restore was done on a local server; both the restore file and the database were on the same machine, an MS Server 2008 VM. The backup was done against a remote server, so the backup file and the database were on different computers.
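For clarity, here is roughly what I understand those two options to translate to in the generated script. This is only a sketch of my understanding, not actual dbForge output; the FeatureValue column names are taken from the CREATE scripts below.

-- "Disable constraints" (NOT checked in my run) would, as I understand it,
-- wrap the data load like this:
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;

-- "Use bulk insert" (checked in my run) batches many rows per INSERT statement:
INSERT INTO FeatureValue (FeatureValueID, FeatureSeriesID, FeatureID, Timestamp, Value)
VALUES (1, 1, 1, 0, 0.5),
       (2, 1, 2, 0, 1.25);

SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;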
Here are the create scripts.
CREATE TABLE FeatureValue
(
FeatureValueID INTEGER NOT NULL AUTO_INCREMENT,
FeatureSeriesID INTEGER NOT NULL,
FeatureID INTEGER NOT NULL,
Timestamp INTEGER,
Value DOUBLE NOT NULL,
PRIMARY KEY (FeatureValueID),
KEY (FeatureID),
KEY (FeatureSeriesID)
) ENGINE=innodb
;

CREATE TABLE FeatureSeries
(
FeatureSeriesID INTEGER NOT NULL AUTO_INCREMENT,
SessionID INTEGER NOT NULL,
FeatureSetID INTEGER NOT NULL,
SeriesGroupingID INTEGER,
EventID INTEGER,
BeginTime INTEGER,
EndTime INTEGER,
Number INTEGER,
Description VARCHAR(255),
PRIMARY KEY (FeatureSeriesID),
KEY (EventID),
KEY (FeatureSetID),
KEY (SeriesGroupingID),
KEY (SessionID)
) ENGINE=innodb
;


CREATE TABLE SignalValue
(
SensorSeriesID INTEGER NOT NULL,
SignalID INTEGER NOT NULL,
Timestamp INTEGER NOT NULL,
Value DOUBLE NOT NULL,
PRIMARY KEY (SensorSeriesID, SignalID, Timestamp)
) ENGINE=innodb
;

Alexz
Devart Team
Posts: 165
Joined: Wed 10 Aug 2005 08:30

Post by Alexz » Fri 18 Feb 2011 14:26

Thank you for the information. We'll continue the investigation.

Alexz
Devart Team
Posts: 165
Joined: Wed 10 Aug 2005 08:30

Post by Alexz » Wed 23 Feb 2011 09:09

We investigated the described situation. Restoring your database takes so long because your tables have indexes: every inserted row also has to update each secondary index, so if the tables did not have indexes, the restore would be several times quicker.
So, to answer your question "Does this timing sound reasonable?": yes, this timing sounds reasonable for your database.
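
For example, you can list the secondary indexes the server must maintain on every inserted row (using the FeatureValue table from your script):

SHOW INDEX FROM FeatureValue;
-- besides the PRIMARY key, the FeatureID and FeatureSeriesID keys
-- are updated for each of the ~87 million inserted rows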

pscheidler
Posts: 5
Joined: Sun 13 Feb 2011 17:51

Thank you for your review.

Post by pscheidler » Wed 23 Feb 2011 20:17

Title says it all.

Justmade
Posts: 108
Joined: Sat 16 Aug 2008 03:51

Post by Justmade » Thu 24 Feb 2011 04:13

Alexz wrote:We investigated the described situation. Restoring your database takes so long because your tables have indexes: every inserted row also has to update each secondary index, so if the tables did not have indexes, the restore would be several times quicker.
So, to answer your question "Does this timing sound reasonable?": yes, this timing sounds reasonable for your database.
So if the backup could create a script that:
1. creates the table without indexes,
2. appends the data,
3. alters the table to add the indexes,

it would make the restore procedure much faster :twisted: (see the sketch below)
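
A rough sketch of the idea, using the FeatureValue table posted above; this is my own illustration of the three steps, not something dbForge generates today.

-- 1. create the table without secondary indexes
CREATE TABLE FeatureValue
(
FeatureValueID INTEGER NOT NULL AUTO_INCREMENT,
FeatureSeriesID INTEGER NOT NULL,
FeatureID INTEGER NOT NULL,
Timestamp INTEGER,
Value DOUBLE NOT NULL,
PRIMARY KEY (FeatureValueID)
) ENGINE=innodb
;

-- 2. append the data (bulk inserts as in the backup file)
INSERT INTO FeatureValue (FeatureValueID, FeatureSeriesID, FeatureID, Timestamp, Value)
VALUES (1, 1, 1, 0, 0.5),
       (2, 1, 2, 0, 1.25);

-- 3. alter the table to add the secondary indexes in one pass
ALTER TABLE FeatureValue
ADD KEY (FeatureID),
ADD KEY (FeatureSeriesID)
;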

I would love to see this feature in both dbForge and MyDAC's TMyDump, but it is just a suggestion.

Alexz
Devart Team
Posts: 165
Joined: Wed 10 Aug 2005 08:30

Post by Alexz » Thu 24 Feb 2011 08:00

Of course, we tested the approach you suggested, and I should say that creating the indexes afterwards took so much time that the whole restore was even a little slower :(

As for your suggestion about an option to drop keys, we are considering implementing it in one of the future versions of the product.
