Restoring a database dump file that is too large

The backup file of my database was too large to restore in one go.

Separating schema and data

I got the restore working by splitting it into separate ‘schema’ and ‘data’ SQL files.

# use LC_ALL=C because there are multibyte chars in the dump
LC_ALL=C sed '/^LOCK TABLES/,/^UNLOCK TABLES/d' backup.sql > schema.sql
# -n suppresses sed's default output, so only the LOCK…UNLOCK ranges are printed
LC_ALL=C sed -n '/^LOCK TABLES/,/^UNLOCK TABLES/p' backup.sql > data.sql
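As a quick sanity check (my own addition): every line of the dump belongs to exactly one of the two files, provided the data extraction uses `sed -n` so that only the matched ranges are printed, so the line counts of the parts should sum to the total.

```shell
# every line of backup.sql should end up in exactly one of the two files,
# so the line counts of schema.sql and data.sql should add up to the total
wc -l backup.sql schema.sql data.sql
```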

I also needed to disable foreign key checks during the import, because the rows are not inserted in an order that satisfies the foreign key constraints.


echo "SET FOREIGN_KEY_CHECKS=0;" > data_new.sql
cat data.sql >> data_new.sql
echo "SET FOREIGN_KEY_CHECKS=1;" >> data_new.sql
mv data_new.sql data.sql
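A quick way (my own addition) to confirm the wrapper landed correctly is to look at the first and last lines of the file:

```shell
# the import file should now start and end with the FOREIGN_KEY_CHECKS toggles
head -n 1 data.sql   # SET FOREIGN_KEY_CHECKS=0;
tail -n 1 data.sql   # SET FOREIGN_KEY_CHECKS=1;
```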

Separate large tables into their own dump files

The data dump was still large, so I moved out the tables that hold most of the data (such as the events, audits and telescope_* tables).
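To find out which tables dominate a dump (my own addition, not from the original workflow), counting INSERT statements per table is a rough proxy; with extended inserts one line can hold many rows, so treat it only as a hint.

```shell
# rough size proxy: number of INSERT statements per table, largest first
LC_ALL=C grep -o '^INSERT INTO `[^`]*`' backup.sql | sort | uniq -c | sort -rn | head
```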

# table: events
LC_ALL=C grep -A 1000000 "LOCK TABLES \`events\`" backup.sql | \
  grep -B 1000000 -m 1 "UNLOCK TABLES" > events_data.sql

# table: audits
LC_ALL=C grep -A 1000000 "LOCK TABLES \`audits\`" backup.sql \
  | grep -B 1000000 -m 1 "UNLOCK TABLES" > audits_data.sql

# tables: telescope_*
# a sed range handles multiple telescope_* tables: it restarts matching after
# each UNLOCK TABLES, unlike the single grep window used above
LC_ALL=C sed -n '/^LOCK TABLES `telescope_/,/^UNLOCK TABLES/p' backup.sql > telescope_data.sql

# now remove those tables from data.sql
# (sed -i '' is the BSD/macOS form; on GNU sed use plain -i)
cp data.sql data.sql.backup
LC_ALL=C sed -i '' '/^LOCK TABLES `events`/,/^UNLOCK TABLES/d' data.sql
LC_ALL=C sed -i '' '/^LOCK TABLES `audits`/,/^UNLOCK TABLES/d' data.sql
LC_ALL=C sed -i '' '/^LOCK TABLES `telescope_/,/^UNLOCK TABLES/d' data.sql
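With the dump split up, the pieces can be imported one by one. The sketch below is my own assumption about the restore order (schema first, then data); the database name mydb is a placeholder, and the per-table dumps get the foreign-key toggle via the mysql client's --init-command option, since they lack the wrapper added to data.sql.

```shell
# restore in order: schema, main data (FK toggles are already inside data.sql),
# then the large per-table dumps; "mydb" is a placeholder database name
mysql mydb < schema.sql
mysql mydb < data.sql
mysql --init-command="SET FOREIGN_KEY_CHECKS=0;" mydb < events_data.sql
mysql --init-command="SET FOREIGN_KEY_CHECKS=0;" mydb < audits_data.sql
mysql --init-command="SET FOREIGN_KEY_CHECKS=0;" mydb < telescope_data.sql
```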
