
Dump single table data to multiple files.

Posted: Wed 15 Sep 2021 05:23
by NSY738
Hi

I have a table whose dump is around 5 GB. Is it possible to automatically split it into 500 MB files via Devart dump?

If not, do you have any suggestions for achieving this requirement?

thank you

Re: Dump single table data to multiple files.

Posted: Fri 17 Sep 2021 08:30
by DmitryGm
MySqlDump itself does not have this feature, but you can use system utilities to split the dump file into parts.
  • Windows: File manager like Total Commander etc…
  • Linux:

    Code: Select all

    # example: split into 500 MB parts named dump.00, dump.01, …
    split -b 500M -d dump.sql dump.
This is suitable if the parts are to be rejoined for restore.
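For that restore case, the parts can simply be concatenated back in order before feeding them to the client. A minimal sketch (the two sample part files stand in for what split -d would actually produce):

```shell
# sample parts as split -b ... -d dump.sql dump. would name them (demo stand-ins)
printf 'INSERT INTO city VALUES (1);\n' > dump.00
printf 'INSERT INTO city VALUES (2);\n' > dump.01

# concatenate in suffix order into a single restorable file
cat dump.?? > dump_restored.sql
# then e.g.:  mysql -u user -p world < dump_restored.sql
```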

It gets more interesting if you want to split the dump into valid SQL chunks that can be executed separately. Perhaps code like this will help you in that case:

Code: Select all

using System;
using Devart.Data;
using Devart.Data.MySql;
using System.IO;
using System.Text.RegularExpressions;

namespace ConsoleApp_dump
{
    class Program
    {
        static void Main(string[] args)
        {
            string cs = "User Id=user;Password=1111;Host=mysql.local;Database=world;";
            string dumpFile = "dump.sql";
            string partsFiles = "dump.sql.";
            int maxPartSize = 50000;   // maximum part size in characters

            using (MySqlConnection conn = new MySqlConnection(cs))
            {
                conn.Open();
                MySqlDump dump = new MySqlDump(conn);
                dump.UseExtSyntax = true;
                dump.CommitBatchSize = 2000;

                dump.Backup(dumpFile);
            }

            int bytesWritten = 0;
            int partsCount = 0;
            StreamWriter sw = new StreamWriter(partsFiles + partsCount, false);

            using (StreamReader reader = new StreamReader(dumpFile))
            {
                string sql = "";
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    sql += line + "\n";
                    // a statement is complete once a line ends with ';'
                    if (Regex.IsMatch(sql, @";\n$"))
                    {
                        // rotate to a new part file if this statement would overflow the current one
                        if (bytesWritten + sql.Length > maxPartSize)
                        {
                            sw.Close();
                            partsCount++;
                            sw = new StreamWriter(partsFiles + partsCount, false);
                            bytesWritten = 0;
                        }
                        sw.Write(sql);
                        bytesWritten += sql.Length;
                        sql = "";
                    }
                }
            }
            sw.Close();
        }
    }
}

If you use UseExtSyntax = true, remember to set a reasonable CommitBatchSize as well, so that a single extended INSERT statement stays well below the part size (CommitBatchSize << maxPartSize).
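The same statement-boundary splitting can also be sketched outside .NET with POSIX awk, under the same assumption that every statement ends with a ';' at end of line; the tiny sample dump and the max/prefix values here are illustrative only:

```shell
# sample input standing in for the MySqlDump output (demo assumption)
printf 'INSERT INTO city VALUES (1);\nINSERT INTO city VALUES (2);\n' > dump.sql

# accumulate lines until one ends with ';', then flush the complete statement,
# rotating to a new part file once the current part would exceed max characters
awk -v max=30 -v prefix="dump.sql." '
BEGIN { part = 0 }
{
    stmt = stmt $0 "\n"
    if ($0 ~ /;$/) {
        if (written + length(stmt) > max) { part++; written = 0 }
        printf "%s", stmt > (prefix part)
        written += length(stmt)
        stmt = ""
    }
}' dump.sql
```

With max=30 the two sample statements land in dump.sql.0 and dump.sql.1; concatenating the parts in order reproduces the original dump.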