Question asked more than three years ago
Hello, dear readers of the site shargaev-group.ru! Most hosting providers do not provide support to their clients on VDS hosting: all configuration and security work is left to the webmaster or administrator. Website backups are no exception.
So, to keep this situation from happening to you:
Of course, I have been in such situations myself, and I had to go to great lengths to restore the site. But my advice to you is to learn from other people's mistakes, not your own.
There are several ways to back up your site:
The last two methods will be discussed below.
Hello, dear readers of the site shargaev-group.ru! When you maintain a single site, backing it up is not difficult: 10-15 minutes and a copy of the site is ready! But if you maintain more than 10 sites, all located on different accounts, the backup process starts to take a lot of time.
For this reason, I started thinking about how to optimize the process of backing up sites. My approach is that at a set time an sh script archives the files on the server, creates database dumps and saves them to a specified directory. All that remains is to connect to the server via FTP once or twice a week and download the archives.
I want to draw your attention to the fact that the example is based on Timeweb hosting (I really like this host and most of my projects live there). Don't let that scare you: you can easily adapt the example to your own hosting; the only thing that will differ is the Cron scheduler panel in the hosting web interface.
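To make the idea concrete, here is a minimal sketch of what such an sh script might look like. All paths, database names and credentials below are placeholders, not the actual script used by the author:

```sh
#!/bin/sh
# Minimal backup sketch: archive the site files and dump the database
# into a backup directory. All paths, names and credentials are placeholders.
BACKUP_DIR="$HOME/backups"
SITE_DIR="$HOME/public_html/example.com"
DATE=$(date +%F)

mkdir -p "$BACKUP_DIR"

# Archive the site files
tar -czf "$BACKUP_DIR/example.com-files-$DATE.tar.gz" -C "$SITE_DIR" .

# Dump the MySQL database
mysqldump -u db_user -p'db_password' db_name > "$BACKUP_DIR/example.com-db-$DATE.sql"

# Keep archives for two weeks, delete anything older
find "$BACKUP_DIR" -type f -mtime +14 -delete
```

A crontab entry along the lines of `0 3 * * * /bin/sh /home/user/backup.sh` (path illustrative) would run it every night at 03:00; on Timeweb the same schedule is set through the Cron panel mentioned above.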
Ready? Then let’s get started!
How to get a site archive and a database dump from Timeweb to transfer hosting to Reg.ru
To transfer a site to Reg.ru hosting, you need to provide:
- an archive with the website files (in .zip format);
- a database dump (in .sql format), if one is used.
All actions are performed on the site timeweb.ru. Log in to the Hosting Control Panel.
- How to download an archive with the website files
- How to download a database dump
- What’s next?
- Restoring from a backup
- rsync and backups
- LVM (Logical Volume Manager)
- DRBD
- Thin LVM
- ZFS
- What happened next?
- That's the path!
- How all my blog articles and pages disappeared
- Backing up site files and restoring them in BACKUP Management
- How to backup website files using File Manager
- How to back up MySQL databases and restore them on TimeWeb hosting
- What mistakes may come your way
- Setting up backup on Timeweb virtual hosting
How to download an archive with the website files
Go to the File Manager section, select the folder of the site you want to transfer (in the example, wordpress), and click Archiver:
Enter a name, select the zip format, and click Archive:
Select a folder on your computer where the archive will be saved.
How to download a database dump
If you forgot your database password
Press Export in the window that opens:
Select a folder on your PC where the database dump will be saved.
Done, you have downloaded the database dump.
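If your plan also includes SSH access, roughly the same pair of files can be produced from the command line. This is only a hedged alternative sketch; the paths, database name and user are illustrative:

```sh
# Archive the site files into a .zip (folder name is illustrative)
cd ~/public_html && zip -r ~/wordpress.zip wordpress

# Dump the database into a .sql file (database name and user are illustrative)
mysqldump -u db_user -p db_name > ~/db_name.sql
```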
What’s next?
To transfer the site to Reg.ru hosting, upload the received files to a file-sharing service and generate a download link according to the instructions. Attach the resulting link to the transfer request.
Backups on Timeweb hosting
Information from the original source at the link
Automatic backups
Two automatic backup schemes are currently in use on the hosting:
- On most servers, backups are made once every few days (usually once every three days). The three most recently created backups are available in the control panel in the «Backups» section; older backups are replaced by new ones as they are created.
- On new servers put into service from December 2019 onwards, backups are created daily. Each copy is kept for a month, i.e. 30 copies of the site will be available to you in the control panel.
Gradually, all Timeweb hosting servers will be switched to daily backups. If you want daily backups right now, contact the hosting support service from your timeweb account dashboard; if possible, they will move your account to a server with this backup scheme.
Restoring from a backup
If you need to restore a directory to its state at the time the backup was created (i.e. with new files removed), we recommend renaming the directory first and then performing the restore.
Step 1.
Go to the «Backups» section of the account control panel.
Step 2.
Restore the file/folder/site/database.
Step 3.
Once the restore has started, its progress can be tracked on the «Task Status» tab. A new task is assigned the status «Queued».
After the restore is complete, the task will be given the status «Completed», and a message will be sent to the contact e-mail confirming that the rollback was successful.
If the restore takes a long time (an hour or more), you can contact the support service to find out the reason for this behavior.
We have tried to briefly describe the path the Timeweb team has travelled over 10 years: from rsync, LVM and DRBD to ZFS. This article will be useful to those who run scalable server infrastructure, plan backups and care about keeping systems running smoothly.
- rsync (remote synchronization)
- DRBD (Distributed Replicated Block Device)
- incremental backups under DRBD using LVM
- DRBD + ThinLVM
- ZFS (Zettabyte File System)
rsync and backups
rsync (remote synchronization), strictly speaking, is not about backups at all. It is a program that lets you synchronize files and directories between two locations with minimal traffic. Synchronization can be done both between local folders and with remote servers.
Quite often rsync is used for backups. We used this utility back when the sites were simpler and there were far fewer clients.
Rsync did a decent job, but its biggest problem is speed. The program is very slow and puts a heavy load on the system, and as the amount of data grows it takes even longer.
Rsync can be used as a backup technology, but only for a very small amount of data.
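For reference, a typical rsync invocation of the kind described above might look like this; the host and paths are illustrative:

```sh
# Mirror the site directory to a backup host over SSH;
# only changed files are transferred, deleted files are removed on the target
rsync -az --delete /var/www/site/ backup@backup.example.com:/backups/site/
```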
LVM (Logical Volume Manager)
Naturally, we wanted to make backups faster and with less load, so we decided to try LVM. LVM lets you take snapshots even on ext4, so we could make backups from an LVM snapshot.
We used this technology only for a short time. Although backups were faster than with rsync, they were always full backups, and we wanted to copy only the changes, so we switched to DRBD.
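A backup from an LVM snapshot, as described here, boils down to a few commands; the volume group, volume names and sizes below are illustrative:

```sh
# Freeze the current state of the volume in a snapshot
lvcreate --size 5G --snapshot --name www-snap /dev/vg0/www

# Mount the snapshot read-only and archive it
mkdir -p /mnt/www-snap
mount -o ro /dev/vg0/www-snap /mnt/www-snap
tar -czf /backups/www-$(date +%F).tar.gz -C /mnt/www-snap .

# Clean up
umount /mnt/www-snap
lvremove -f /dev/vg0/www-snap
```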
DRBD
DRBD allows you to synchronize data from one server to another, and only the changes are synchronized, not all of the data. This greatly speeds up the process!
On the storage side we could use LVM and take snapshots. Such a system existed for a very long time and still runs on the part of the servers that we have not yet managed to migrate to the new system.
However, this method still has a drawback: during synchronization, DRBD puts a heavy load on the disk subsystem, which means the server runs slower. As a result, backups interfered with the operation of the main services, that is, user sites. We even tried to run backups at night, but sometimes they simply did not finish by morning. We had to maneuver and alternate backups: one group of servers today, another tomorrow, with backups spread out in a checkerboard pattern.
In addition, DRBD is highly dependent on network speed and affects the performance of both the server being backed up and the server receiving the backup. We needed to look for a new solution!
Thin LVM
At this point, the business set us the task of keeping 30-day backups, and we decided to switch to thin LVM. It did not solve the main problem! We had not expected that supporting thin snapshots would require such high file-system performance. The experience was thoroughly unsuccessful, and we abandoned thin LVM in favor of the usual thick LVM snapshots.
Thin LVM simply was not designed for our purposes: it was originally intended for small laptops and cameras, not for hosting.
It was decided to try ZFS.
ZFS
ZFS is a good file system with a lot of useful features built in. What ext4 achieves by being layered on LVM with a DRBD device attached, ZFS provides by default. The file system itself is very reliable. Copy-on-write deserves a separate mention: this mechanism allows data to be handled very carefully.
ZFS lets you take snapshots that can be copied to storage, and backups can be automated. You do not have to invent anything yourself!
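In its simplest form this is a snapshot plus zfs send to the storage server. The dataset, pool and host names below are illustrative, not the actual tooling described later:

```sh
# Take a point-in-time snapshot of the dataset with the sites
zfs snapshot tank/sites@2021-01-15

# Ship it to the backup server over SSH
# (-F rolls the target dataset back if it already exists)
zfs send tank/sites@2021-01-15 | ssh backup@backup.example.com zfs recv -F backup/sites
```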
The transition to ZFS was very cautious. First, we set up a test stand where we simply experimented for several months; in particular, we tried to reproduce problems with hardware, power, the network, and a full disk. Careful testing allowed us to find the bottlenecks.
A sore spot for ZFS is a full disk. We solved this problem by reserving empty space: when the disk fills up, steps are taken to offload the server and free up space.
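One way to reserve such emergency space in ZFS, sketched here with illustrative names and sizes rather than the exact setup described, is a dedicated dataset with a reservation that can be released when the pool fills up:

```sh
# Set aside space that nothing else can occupy
zfs create -o reservation=50G -o mountpoint=none tank/reserve

# If the pool runs out of space, release the reservation to get room to maneuver
zfs set reservation=none tank/reserve
```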
After testing, we gradually began to bring new servers online and migrate old servers to ZFS. No more problems with backups! You can keep 30- or 60-day backups, or even back up every hour; in any case, the server will not experience excessive load.
We collected all the data in the tables below to compare backups using different technologies.
What happened next?
We plan to update ZFS to OpenZFS 2.0.0 in 2021. We are preparing for the transition, taking into account all the features announced with the release in early December.
That's the path!
This is the path we have chosen for ourselves! Do you solve similar problems? We will be glad if you share your experience in the comments! We hope the article was useful, and if you ever face the task of making backups with the tools built into Linux, our story will help you choose the right solution.
Hello, friends! Today I will tell you how to back up a site and restore it from the control panel of TimeWeb hosting. For reference: a backup is a copy of data that is created and stored so that it can be restored later in case of damage.
I'm sure many of you are now thinking: how much more can be written on this topic, everyone already knows it. Believe me, not everyone. And even those who do know often simply forget to back up their sites. When, for example, was the last time you made a full backup of your site, honestly? Write about it in the comments to this article. Yes, I often forget about it myself, but once a week I make sure to save the database. And I am not writing this article without a reason; the reason is an unpleasant one.
How all my blog articles and pages disappeared
Backups of the blog are all well and good, but I did not rush to use one and decided to start by contacting the support team of my TimeWeb hosting. You never know what might have happened: maybe they had messed something up and were already fixing the mistake, or maybe my blog had been hacked. In any case, at 21:28 Moscow time I wrote to technical support through the ticket system in the hosting admin panel. About an hour passed with no answer. The stated waiting time is up to 24 hours, so you cannot really fault them. But, you understand, I did not want to wait: the blog was empty while visitors kept coming. I talked with Sasha Borisov, and he advised me to write to them via the online chat on the main page of the site rather than from the control panel.
A hosting employee, Karen Gevorgyan, answered me in the chat right away and promised to pass my request along. Literally 5 minutes later I received a message:
I check, and everything is in order. I asked what had caused the content to disappear, and after a while received the answer:
Well, thanks for that! Jokes aside, I want to thank the hosting staff Karen Gevorgyan and Kirill Prokhorenko! Once I started a dialogue with them, the site was restored quickly, within five minutes. So my advice to you: if they do not answer tickets, use the online chat. And for that hint, special thanks to Alexander Borisov ()!
I would not have remembered this incident if I had not seen in my email the next day that I was not the only one having similar problems with WordPress blogs on TimeWeb. Everything was fixed successfully there too, but one of the messages made me think that the topic of backups, or in plain terms creating a backup copy of the site, would be useful to many.
So why might you need a backup of a website or blog? I described one example above. In general, there can be many reasons; here are some of them:
- the hosting failed;
- hackers made a mess;
- you installed a new theme (or plugin) and the site crashed;
- you edited template files and everything broke.
There are many ways to create backups of a WordPress site and its databases, from plugins to downloading files from the hosting manually via FTP. But since a similar incident happened on TimeWeb hosting, I will show how to back up a site from the control panel of this host, using a WordPress blog as the example.
I must say that in 22 months of working with TimeWeb this was my first contact with the support service. (The question of paying partner commissions, which only citizens of Russia can receive, does not count.) Besides, the hosting automatically backs up all the files and MySQL databases of our sites every day, so there seems to be nothing to worry about. But anything can happen, and hosting is not made of iron either. It is better to play it safe and make backup copies of the site yourself as well.
There are two types of backups:
- file system (pictures, themes, plugins, engine files);
- MySQL databases (articles, pages, comments).
Keep in mind that all changes made to the blog since the last backup will be lost when the site is restored from that backup. So the more often you save copies of the site, the less you will have to recover if it is ever lost. The hosting itself offers several backup options.
Backing up site files and restoring them in BACKUP Management
Go to the hosting control panel. Here we will be interested in the two sections indicated in the figure.
Go to “” and open the “” tab. If you have several sites, there will be folders with domain names in which the . Here we will copy it. If desired, you can save any individual directory or file by opening it first.
The hosting automatically makes and stores copies for the last three days. To make a backup of the site files, select a date and click the “” button. Confirm your intention and wait until the backup copy of the folder is created. When copying finishes, the “” column will show “”, and an email about the created backup will be sent to your mailbox.
How to backup website files using File Manager
In the window that opens, you can specify the name of the archive and select its type. Click «» and save to a folder on your computer.
To restore the site from a hosting backup, click the “” button in the “” section. All files will be restored to their state on the date you selected.
How to back up MySQL databases and restore them on TimeWeb hosting
Then go to “” again and download the sql database archive, which looks something like this .
Believe me, you need to back up your site as often as possible, and you should not rely only on the hosting. As the saying goes, until the rooster pecks, we do not move. But it is better to spend a few minutes creating copies and sleep well than to lose your site, which represents years of work. Well, now, honestly, write: when was the last time you backed up your site?
In this article we have given more of an overview of the various approaches to backup, trying to highlight the pros and cons of each.
Creating backups and restoring from them seems to us a good topic for a separate article. If you agree, let us know 🙂 We will collect the material and share the details in the next article.
Sorry, but a snapshot of a file system and its replication is just one approach to backup, merely done in different ways. At the same time, there is nothing about what data you have on these file systems, how the application relates to this backup, what you can restore from such a backup, how, and how quickly. This article is not about backup at all, but about snapshots that we can drag off the main server somewhere, and nothing more.
Eugene, since we are talking about the Timeweb experience, we meant that our hosting stores user sites and databases.
We believe the topic of restoration deserves a separate article. It would start with choosing a protocol for network access to the file systems, then cover how to protect user data during connection and copying, and end with the MySQL dump system.
Have you compared the requirements for RAM of different file systems? They say about ZFS that it needs a lot of RAM — did you have it?
And about backups, I’m basically interested in how people decide on the volume of the disk system for backups: for each conditional 1TB of data, how many TB of disk space do you reserve on the backup server?
Have you compared the requirements for RAM of different file systems? They say about ZFS that it needs a lot of RAM — did you have it?
The amount of RAM consumed by the ARC cache is set by the zfs_arc_max module parameter. In our case it was chosen empirically so as not to interfere with the main services while still providing file performance.
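For illustration, on ZFS on Linux the cap is set via the zfs_arc_max module parameter; the value is in bytes, and the 8 GiB used here is just an example, not the figure from the answer above:

```sh
# Persist the limit so it survives a reboot
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Apply it to the running module without rebooting (works on most ZFS-on-Linux versions)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```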
And about backups, I’m basically interested in how people decide on the volume of the disk system for backups: for each conditional 1TB of data, how many TB of disk space do you reserve on the backup server?
Everything is very individual here, since the size of a snapshot in ZFS depends on the amount of data added. In our case it is 1 to 2 for 30-day retention: for every terabyte of data there are 2 terabytes on the backup server.
And do you have ZFS on Linux?
Why not set up a NAS with many ZFS pools and do zfs send | ssh backup@nas zfs recv, choosing a pool on the NAS by some criterion, such as a server group or a cluster name if there are several? Then send incremental copies between two snapshots, the old one (already sent) and the new one (just taken), via the same send/recv. For automation purposes, the old one can be destroyed after the transfer and the new one renamed into the old one…
You are right, this is the basic scheme of the backup system. Since there are a lot of articles on this topic, as well as reviews of cool automation tools like ZREPL, we decided to cover a slightly different side of the issue.
Do you have ZFS on Linux?
Yes, on Linux.
The «redundant space» figure for rsync is not quite right: you do not have to keep a full copy of all the data for every point in time; you can keep one copy plus the modified files.
ZFS is a little more economical in this respect: one copy plus the modified blocks. Storing only the changes will not work here either.
Besides, it is not always necessary to copy (or even read) the entire file system; it is often enough to work only with the data of a particular application, and there rsync can be much more advantageous, both in speed and in volume.
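The "one copy plus modified files" approach mentioned above is usually achieved with rsync's --link-dest option: unchanged files become hard links to the previous run, and only changed files take new space. Paths and dates here are illustrative:

```sh
# Create today's backup, hard-linking unchanged files to yesterday's copy
rsync -a --delete \
  --link-dest=/backups/site/2021-01-14 \
  /var/www/site/ /backups/site/2021-01-15/
```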
XFS on newer kernels also has reflinks. They let you quickly create copies of directories and files without copying the data blocks; new blocks are allocated only on change. A kind of symbiosis of hardlinks and snapshots, if I may put it that way.
I have been using this mechanism for about a year on one of the backup servers, and I find myself leaning more and more towards ZFS. In reality, reflink copying is far from instantaneous, sometimes taking minutes (on several hundred GB of data). On top of that, there is no way to estimate the actually available free space, which is why the disk can look 85% full and yet sometimes (!) «no free space on device» errors start appearing 🙁
But for not very large backups / hot copies, it may be an interesting alternative for someone.
reflink is a great thing, but you don’t really need it often.
Here is an example: I ran ddrescue on a 1 TB disk and saved the image straight to a file system that supports reflinks (btrfs). Then I do cp --reflink=always and work with a copy of the file, for instance if I need to check the disk.
I do not need a snapshot of the whole disk in such a situation; a «snapshot» of a single file is just what the doctor ordered.
And on btrfs it is done quickly enough.
Yes, small files are fast, but 400 gigabytes of databases sometimes take minutes to copy. In general, a peculiar thing, yes.
Btrfs is not ready to be used in production. 🙂
Keeping three dozen snapshots on a thin LVM volume is indeed not very fast. To solve this problem, I came up with a scheme in which only one snapshot is kept on the working array for each thin volume; it serves to track changes since the last backup. On the backup device, there really are a lot of snapshots.
The working and backup LVM groups are different and located on different RAID groups. If you are interested in the details, I have an article about «LVM and Matryoshka» in my profile.
From the article I did not understand what is copied and where, where the client data lives and in what form, and where you store the backups.
What exactly do you copy with SCP? Data from a specific snapshot created on a shared hosting node? Do client applications run on the ZFS file system, and are snapshots created there?
How does client recovery work in case of a failure: is the last snapshot taken and poured onto a new or standby server, or is there some other logic?
There are two types of servers: client and backup. The file system on which client data resides is ZFS.
Client applications run on the client server. Backups are stored on the backup server.
Snapshots are taken on the client server and copied to the backup server.
The backup scheme is quite simple (a rough sketch follows the steps below):
1. Client server has snapshot 1.
2. Snapshot 2 is created before copying.
3. Snapshot 1 is copied / imported to the backup server. Various tools can be used for this: ssh, scp, netcat, etc. In our case, an application built on the basis of ssh is used.
4. After the copy completes successfully, snapshot 1 is merged/deleted into the main data, and snapshot 2 takes its place.
5. On the next backup, the cycle returns to step 1.
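One common way to implement such a rotation with standard ZFS tools, as promised above, is sketched here; dataset and host names are illustrative, and the in-house ssh-based application mentioned in step 3 is replaced by a plain zfs send over ssh:

```sh
# Assumes snapshot @prev already exists on both the client and the backup server
zfs snapshot tank/sites@new                                 # step 2: new snapshot before copying

zfs send -i tank/sites@prev tank/sites@new \
  | ssh backup@backup.example.com zfs recv backup/sites     # step 3: ship the increment

zfs destroy tank/sites@prev                                 # step 4: the old snapshot is merged into the data...
zfs rename tank/sites@new tank/sites@prev                   # ...and the new one takes its place
```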
Restoring individual files or directories works by copying data from the snapshot on the backup server back to the client server.
Restoring an entire server is possible from the last snapshot, but it’s faster to move disks to a new platform.
Disk failures are monitored. All data must be mirrored on different disks.
Was everything explained clearly? Have all questions been answered?
Why is Hetzner able to make VPS backups and snapshots without qemu-agent and other extras, but you cannot?
If only the Timeweb sites that nothing but the monitoring agent ever visits did not periodically go down, your ZFS with backups would be priceless.
restic is good.
It is much faster than rsync, and it sends and stores only the changes.
It is good, but not many times faster, of course; that is an obvious exaggeration. I use both: there is a difference in speed, but it is not even that noticeable.
Besides, rsync also sends only the changes, and it can store only the changed files by using hardlinks.
Moreover, it is not quite correct to compare a standalone synchronization tool with a more complete solution built specifically for backup. It would make more sense to compare it with rsnapshot and similar tools.