Backup/restore issues/fixes #1678

@EdwardMillen

Description

I'm currently trying to back up my websites from one server and restore them onto another, and I've run into various issues with both the backup and restore processes, which I'm now working through and fixing myself.

But one of the initial issues I had was due to a lack of disk space on the old server: the existing backup code copies everything to a separate staging directory and then creates a .tar.gz file from that, meaning that the free disk space needed is considerably more than the size of the website being backed up.
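As a rough illustration of why that costs so much space, here is a minimal Python sketch of the staging-style flow described above. The function name, paths and structure are hypothetical, not the project's actual code; the point is that a full copy of the site exists on disk alongside the archive until cleanup.

```python
import shutil
import tarfile
from pathlib import Path

def backup_with_staging(home_dir: str, staging_dir: str, archive_path: str) -> None:
    """Hypothetical illustration of the staging approach: copy everything into a
    staging directory, then archive it. Peak disk usage is roughly the full size
    of the site (the staged copy) plus the compressed archive."""
    staging = Path(staging_dir)
    # Full copy of the website files into the staging area
    shutil.copytree(home_dir, staging / "homedir")
    # (database dumps, metadata files, etc. would also be written into `staging` here)
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(staging, arcname=".")   # archive the staged copy
    shutil.rmtree(staging)              # staged copy is only removed afterwards
```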

To avoid this, I changed the code to create the .tar.gz file directly from the existing data instead. This produces a file containing exactly the same data in the same structure (including all the metadata, database backups, etc.), but doesn't occupy unnecessary space during the process.
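For comparison, this is a minimal sketch of the direct approach, again with hypothetical names and layout rather than the project's real backup format: the live data is added straight into the .tar.gz under the same archive paths, so no staged copy is ever written to disk.

```python
import tarfile

def backup_direct(home_dir: str, db_dump_path: str, archive_path: str) -> None:
    """Sketch of the direct approach: stream the existing data straight into
    the .tar.gz. Only the compressed archive itself occupies extra disk space."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(home_dir, arcname="homedir")         # website files, added in place
        tar.add(db_dump_path, arcname="mysql.sql")   # pre-made database dump
```

Because `arcname` controls the path recorded inside the archive, the resulting file can keep exactly the same internal structure as one built from a staging directory.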

Testing it with a website consisting of a 916MB home directory and 75MB MySQL dump, the existing code (v2.4.4) used 2.3GB during the backup process, whereas my version used 820MB at most. It was also much faster, and I found similar improvements were possible in the restore process too.

Is there a reason why it's not done this way at the moment? Or if not, should I contribute my changes?
