Can performance of data comparison be improved?

Posted: Wed 21 Aug 2019 10:32
by euph411
I am trying to compare two relatively large (150 GB) tables.

Is there any way to avoid querying all of the data?

One idea: I've seen other tools (happy to give examples separately) that compare data in blocks/chunks based on the primary key. They calculate a hash/checksum on the MySQL server for each chunk, and only query the underlying rows for comparison if the checksums differ.
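For illustration, here is a minimal sketch of the kind of server-side chunk checksum I have in mind (the table name, column list, and key range below are hypothetical; the BIT_XOR-of-CRC32 pattern is what some existing checksum tools use):

    -- Checksum one primary-key chunk entirely on the server.
    -- Only the row count and one checksum value cross the network.
    SELECT
        COUNT(*) AS row_count,
        BIT_XOR(CRC32(CONCAT_WS('#', id, col1, col2))) AS chunk_checksum
    FROM my_table
    WHERE id BETWEEN 1 AND 10000;

Both servers would run the same query over the same key range; only chunks whose checksums differ would need their actual rows fetched and compared.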

In such a case, I realize the "show identical" feature would be disabled since the dbForge application would not have that data. But for such a large table, it would not make sense to show that data anyway.

This approach would substantially reduce the amount of data transfer across the network, which can be a huge benefit (in terms of both bandwidth and time) if the databases being compared are in different locations.

Thank you.

Re: Can performance of data comparison be improved?

Posted: Wed 21 Aug 2019 13:41
by alexa
Could you please provide us with the examples?

You can send a reply directly to our support system at alexaATdevartDOTcom and supportATdevartDOTcom.

Re: Can performance of data comparison be improved?

Posted: Tue 28 Apr 2020 14:07
by alexa
You could try using dbForge Studio for MySQL, v9.0 Enterprise Trial Beta, where the comparison algorithms were improved: https://www.devart.com/dbforge/mysql/studio/download.html

Also, you could use the 'Ignore...' options on the 'Options' page of the Data Comparison wizard.