Dump single table data to multiple files.

Discussion of open issues, suggestions and bugs regarding ADO.NET provider for MySQL
NSY738
Posts: 6
Joined: Fri 24 Feb 2017 06:08

Dump single table data to multiple files.

Post by NSY738 » Wed 15 Sep 2021 05:23

Hi

I have a table whose dump is around 5 GB. Is it possible to split it automatically into 500 MB files with the Devart dump?

If not, do you have any suggestions on how to achieve this?

Thank you

DmitryGm
Devart Team
Posts: 152
Joined: Fri 11 Dec 2020 10:27

Re: Dump single table data to multiple files.

Post by DmitryGm » Fri 17 Sep 2021 08:30

MySqlDump itself does not have this function, but you could use system utilities to split the dump file into parts.
  • Windows: a file manager such as Total Commander, etc.
  • Linux:

    Code: Select all

    # split the dump into 500 MB parts: dump.00, dump.01, ...
    split -b 500M -d dump.sql dump.
    # rejoin before restore: cat dump.?? > dump.sql

This is suitable if the parts are to be rejoined for restore.

It is more interesting if you want to split the dump into valid SQL chunks that can be executed separately. Perhaps code like this will help you in that case:

Code: Select all

using System;
using Devart.Data;
using Devart.Data.MySql;
using System.IO;
using System.Text.RegularExpressions;

namespace ConsoleApp_dump
{
    class Program
    {
        static void Main(string[] args)
        {
            string cs = "User Id=user;Password=1111;Host=mysql.local;Database=world;";
            string dumpFile = "dupm.sql";
            string partsFiles = "dupm.sql.";
            int maxPartSize = 50000;


            // Create the full dump first
            MySqlConnection conn = new MySqlConnection(cs);
            conn.Open();
            MySqlDump dump = new MySqlDump(conn);
            dump.UseExtSyntax = true;       // use extended MySQL syntax in the generated dump
            dump.CommitBatchSize = 2000;

            dump.Backup(dumpFile);
            conn.Close();
            
            // Split the dump into parts on statement boundaries
            int bytesWrite = 0;     // characters written to the current part so far
            int partsCount = 0;
            StreamWriter sw = new StreamWriter(partsFiles + partsCount, false);

            using (StreamReader reader = new StreamReader(dumpFile))
            {
                string sql = "";
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    sql += line + "\n";
                    // A statement is complete once the accumulated text ends with ";\n"
                    if (Regex.IsMatch(sql, @";\n$"))
                    {
                        // Start a new part if this statement would exceed the size limit
                        if (bytesWrite + sql.Length > maxPartSize)
                        {
                            sw.Close();
                            partsCount++;
                            sw = new StreamWriter(partsFiles + partsCount, false);
                            bytesWrite = 0;
                        }                        
                        sw.WriteLine(sql);
                        bytesWrite += sql.Length;
                        sql = "";
                    }
                }
            }
            sw.Close();
        }
    }
}

If you are using UseExtSyntax=true, remember to set a reasonable CommitBatchSize value as well (CommitBatchSize << maxPartSize).
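
For completeness, here is a minimal sketch of how the parts could then be restored in order on the target server. It assumes the part naming from the code above (dump.sql.0, dump.sql.1, ...) and that the Restore(fileName) overload of MySqlDump is available as the counterpart of the Backup(fileName) call used above; adjust the connection string and paths to your environment.

Code: Select all

using System.IO;
using Devart.Data.MySql;

namespace ConsoleApp_restore
{
    class Program
    {
        static void Main(string[] args)
        {
            string cs = "User Id=user;Password=1111;Host=mysql.local;Database=world;";
            string partsFiles = "dump.sql.";   // same prefix the splitting code used

            using (MySqlConnection conn = new MySqlConnection(cs))
            {
                conn.Open();
                MySqlDump dump = new MySqlDump(conn);

                // Execute the parts strictly in the order they were written:
                // dump.sql.0, dump.sql.1, dump.sql.2, ...
                for (int i = 0; File.Exists(partsFiles + i); i++)
                {
                    // Assumption: Restore(fileName) runs the script file,
                    // mirroring the Backup(fileName) call from the splitting code.
                    dump.Restore(partsFiles + i);
                }
            }
        }
    }
}

Since each part ends on a statement boundary, the files can also be executed as ordinary scripts by any other MySQL client.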
