Backups of Microsoft SQL Server


  • UG771963
    started a topic Backups of Microsoft SQL Server


    During presentations about backing up and restoring SQL Server databases, two kinds of questions usually come up. The first are asked right during the presentation, from the audience; the second come afterwards, in private conversation. These "private" questions are often the more interesting ones, and I will try to answer the most complex and interesting of them, instead of writing yet another article about how you should back up your databases, or why you should back them up, or even why you should check your backups (but you really should check your backups).

    Can I restore a backup to a version of SQL Server other than the one on which the backup was made? What problems may arise?

    You can restore a backup to a different version of SQL Server, but only if the version of SQL Server on which you restore the backup is newer than the version on which it was made. In other words, you can restore a backup made on SQL Server 2000 to SQL Server 2005, from SQL Server 2005 to SQL Server 2008 R2, or from SQL Server 2008 to SQL Server 2012, but you can never do this in the opposite direction. Each version of SQL Server makes its own changes to the database and to the files that store it, and Microsoft will not "go back in time" and rewrite previous versions of SQL Server to support those changes. If you really need to move a database to an older version of SQL Server, you will have to script out the schema and the data itself.

    To determine which version of SQL Server a backup was created on, look at the header of the backup file:

    RESTORE HEADERONLY FROM DISK = 'd:\bu\mm.bak';

    The result includes the Major, Minor, and Build versions of the instance of SQL Server on which the backup was made (in the SoftwareVersionMajor, SoftwareVersionMinor, and SoftwareVersionBuild columns). This lets you determine an appropriate version of SQL Server on which to restore the backup.

    When restoring a database to a newer version of SQL Server, it may turn out that something in it is incompatible with that version. The safest approach to migrating to a new version of SQL Server is to run Microsoft Upgrade Advisor (a free utility available for each version of SQL Server) against the database you want to migrate, make sure it is ready, and only then take a backup and restore it on the new instance (in that order; do not restore the backup first and run the advisor afterwards).

    After the restore, the database will be in the compatibility mode of the version of SQL Server it was migrated from. This means that only the functionality supported by the version of SQL Server on which the backup was created will be available to it. To take full advantage of the new version of SQL Server, you need to change the database's compatibility level. This can be done through the GUI, or you can script it:
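    The script itself is missing from this copy of the article; a minimal sketch of what it would look like, assuming a restore onto SQL Server 2012 (the database name is illustrative):

    ```sql
    -- Raise the compatibility level after restoring on SQL Server 2012
    -- (MyDB is an illustrative name)
    ALTER DATABASE MyDB
    SET COMPATIBILITY_LEVEL = 110;
    ```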


    The different numbers denote different versions of SQL Server: 90 for SQL Server 2005, 100 for SQL Server 2008 and 2008 R2, and 110 for SQL Server 2012.

    It is worth adding that not all "jumps" are possible: SQL Server only allows a jump of two versions forward. For example, you cannot restore a backup made on SQL Server 2000 to SQL Server 2012. You first need to restore it on SQL Server 2008, set the appropriate compatibility level, create a new backup, and then restore that on SQL Server 2012.
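    Sketched as a sequence (paths and the database name are illustrative):

    ```sql
    -- On the intermediate SQL Server 2008 instance:
    RESTORE DATABASE LegacyDB FROM DISK = 'd:\bu\legacy2000.bak';
    ALTER DATABASE LegacyDB SET COMPATIBILITY_LEVEL = 100;
    BACKUP DATABASE LegacyDB TO DISK = 'd:\bu\legacy2008.bak';
    -- Then restore d:\bu\legacy2008.bak on the SQL Server 2012 instance.
    ```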

    Can I use the restore operation to create a copy of the database? What can go wrong?

    Yes, you can. If you restore a backup on another server, you need to make sure that the new server has the same drives and paths as the "old" one, or manually specify the correct paths for the database files using the WITH MOVE option of RESTORE DATABASE:

    RESTORE DATABASE NewDB
    FROM DISK = 'c:\bu\mm.bak'
    WITH MOVE 'OldDB' TO 'c:\data\new_mm.mdf',
    MOVE 'OldDB_Log' TO 'c:\data\new_mm_log.ldf';

    Database files have both logical names and physical file names. You just need to list each logical file name and specify a new physical location for it.

    The main problems you may encounter are running out of free space on the drives you are restoring to, or forgetting to specify a new name for the database, in which case SQL Server will try to restore over the existing database.

    When you restore the database on a new server, you may run into the "orphaned users" problem: a database user is mapped to a server login that does not exist on the new server. You will need to fix this.
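    One way to find and fix orphaned users, sketched in T-SQL (the user name is illustrative; sp_change_users_login is deprecated in newer versions in favor of ALTER USER):

    ```sql
    -- List database users whose SID has no matching server login
    EXEC sp_change_users_login 'Report';

    -- Re-map an orphaned user to an existing login (SQL Server 2005 SP2 and later)
    ALTER USER AppUser WITH LOGIN = AppUser;
    ```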

    Can I attach an MDF file as a database if I do not have the transaction log file?

    The only case where this is valid is if the transaction log was lost after the database was cleanly shut down. Even then, it is not a good idea. When a database is attached, the transaction log file is needed, along with the data file, to run the database recovery process (recovery here is not the RESTORE DATABASE operation, but the process that occurs every time SQL Server starts, when it runs through the transaction log and brings the data files into a consistent state). However, in some cases it is possible to attach a data file without a transaction log file; this feature is intended only for cases where the log file has been damaged or lost due to hardware problems and no backups are available. Of course, a database cannot exist without a transaction log, so when you attach a database without its log file, SQL Server will simply recreate it.
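    The attach-without-a-log scenario described here corresponds to the FOR ATTACH_REBUILD_LOG option of CREATE DATABASE; a sketch, with an illustrative path and name:

    ```sql
    -- Rebuilds a new, empty log; succeeds only if the database
    -- was shut down cleanly before the log file was lost
    CREATE DATABASE mm
    ON (FILENAME = 'd:\data\mm.mdf')
    FOR ATTACH_REBUILD_LOG;
    ```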

    Attaching a data file without its transaction log file breaks the log chain, and it may also turn out that the transactional or structural integrity of the database is broken (depending on the state of the database at the moment the log was "lost"). The attach operation for such a database may fail no matter what you do.

    Copying data files and transaction log files is safe only after a detach operation, or after the SQL Server process has shut down cleanly; this guarantees that all transactions completed correctly. Copying or moving database files to another server is a faster way to transfer a database than backing up and restoring, but not as safe (especially if you move the files rather than copy them). Also, remember that you can only attach a database on the same or a newer version of SQL Server.

    My database is on a SAN. I heard that SAN backups are enough. Is that true?

    This may be true. The key is whether your SAN supports SQL Server transactions. If it does, it knows that there are open transactions in the database, and that the data files may therefore be incomplete, because the changes made by those transactions may not yet have been written to disk at the moment of the backup. The backups SQL Server itself takes naturally account for this.

    EMC Data Domain, for example, is a combination of software and a SAN that provides transaction support, and there are products from other vendors, but you still need to check your SAN's documentation. Look for phrases like "transaction consistency" or "transaction aware". If you do not find them, I would advise you to test restoring a database before deciding that SAN backups satisfy all of your backup requirements. Even once you are sure that SAN backups are taken correctly, that does not mean you no longer need "native" SQL Server backups. If you need point-in-time restore, for example, you still have to take transaction log backups with SQL Server.

    Typically, when creating a backup, a SAN with SQL Server support uses SQL Server's VDI interface and "freezes" the database for the duration of the backup. If you start such a backup and look at the SQL Server error log, you will see messages saying that I/O operations have been frozen.

    If you rely on backups created by the SAN, you still need to check the integrity of the database, either on the live database or on copies restored from a SAN backup. Otherwise, you could be backing up a damaged database for a long time without even knowing it.

    Why can't I use copies of the data files made by Windows as backups? I do not need the ability to restore to an arbitrary point in time.

    SQL Server is not a regular desktop application. It manages its files so as to guarantee the ACID properties (Atomicity, Consistency, Isolation, Durability). In short, to ensure that transactions complete correctly, SQL Server tries not to give anyone else access to its files and modifies them itself as it needs.

    If you simply copy a data file, ignoring locks and any transactions that may be executing at that moment, then when you later try to attach that file, it will be in an inconsistent state, which will lead to errors.

    Only if the database is not changing at all can you copy the file and attach it later. If there is even a minimal chance that a single transaction was open while the file was being copied, you will most likely end up with a useless backup. The only safe way to copy data and transaction log files for use as backups is to take the database offline before copying.
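    Taking the database offline before copying, sketched with an illustrative name:

    ```sql
    -- Roll back open transactions and take the database offline
    ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- ...copy the .mdf and .ldf files at the operating-system level...
    ALTER DATABASE MyDB SET ONLINE;
    ```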

    It is much safer and easier to use SQL Server's built-in backup mechanism. Such a backup is a complete copy of your database, and all ACID properties are preserved.

    I have a very small database. Why can't I just "unload" every table to disk to create a backup?

    You can use something like SQLCMD to unload the tables into plain text files, but then, instead of restoring the database with a single command, you will have to execute a whole series of commands. First, you will need to create an empty database. Then you will need to create and load each table from its file. If any table contains an IDENTITY column, you will need to use SET IDENTITY_INSERT on each such table. You will also have to determine carefully the order in which to load the tables in order to preserve referential integrity.
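    The manual restore sequence described above might look roughly like this (database, table, and column names are illustrative):

    ```sql
    CREATE DATABASE RestoredDB;
    GO
    USE RestoredDB;
    GO
    -- Recreate each table, then reload it, preserving IDENTITY values
    CREATE TABLE dbo.Orders (OrderID int IDENTITY PRIMARY KEY, Total money);
    SET IDENTITY_INSERT dbo.Orders ON;
    INSERT INTO dbo.Orders (OrderID, Total) VALUES (1, 19.99); -- one row per exported row
    SET IDENTITY_INSERT dbo.Orders OFF;
    -- ...repeat for every table, in an order that satisfies foreign keys...
    ```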

    Plus, keep in mind that the tables are unloaded to disk at different times, so if the data changed during the unload, after "recovery" the database will not be in a consistent state, and you will have to find and fix the inconsistencies manually.

    Of course, you are free to do all this. On the other hand, you can simply execute the BACKUP DATABASE command and then, when necessary, restore that backup.
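    For comparison, the single-command alternative (name and path illustrative):

    ```sql
    -- One command to back up, one to restore
    BACKUP DATABASE MyDB TO DISK = 'd:\bu\mm.bak';
    RESTORE DATABASE MyDB FROM DISK = 'd:\bu\mm.bak' WITH REPLACE;
    ```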

    Why pay money for third-party backup utilities if SQL Server can do it itself?

    There are three main reasons for using third-party backup programs: management, automation, and functionality. If you are a novice database administrator, or not a database administrator at all but forced to maintain the DBMS on top of your main job, you may not know how, where, and why to configure backups in SQL Server. A good utility (like SQL Backup Pro) can provide exactly the kind of guidance you need to keep your databases safe with backups.

    The backups created by SQL Server itself work fine, but you need to do a fair amount of work to set them up, and even more to automate them. A good third-party utility makes automation very simple. Moreover, it can automate other backup-related processes, such as log shipping and backup integrity checks.

    Finally, although SQL Server backups do what you need, they may not do it in the best way. For example, some utilities compress backups more efficiently, saving disk space and reducing backup time. They also add functionality, such as encrypting the backup file (the built-in SQL Server tools can do something similar only if the database itself is encrypted).

    If the backup is on a network share, can someone read it?

    Unless you have explicitly encrypted the backup file itself, yes: it is a perfectly ordinary file. Anyone who gets access to that share can read it with any text editor, or simply copy it and restore it on another SQL Server instance.

    Moreover, you can extract the database schema or the data from a backup without even restoring it. If you have the SQL Data Compare utility, running it with the /Export switch can pull all the data out of the backup in CSV format by comparing the backup against an empty database, without asking for any password. The same SQL Data Compare can also generate a script that creates the database schema.

    To prevent unauthorized access to backups, you have to do a few things. First, make sure the share where backups are stored is accessible to a limited number of people. Second, keep only the backups you really need. Finally, if you use a third-party backup utility (such as SQL Backup Pro), you can encrypt the backup, so that even if someone gets direct access to the file, they cannot read anything from it.

    Without third-party utilities, you can achieve this using Transparent Data Encryption (TDE).
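    A minimal TDE setup sketch (all names and the password are illustrative); once TDE is on, backups of the database are encrypted as well:

    ```sql
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssword!';
    CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
    GO
    USE MyDB;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert;
    ALTER DATABASE MyDB SET ENCRYPTION ON;
    -- Back up the certificate and its private key separately;
    -- without them the backup cannot be restored on another server.
    ```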

    To ensure the best level of security, you need to perform all of the above.

    Can anyone change the contents of the backup?

    The ability to change the contents of a backup file is not provided. Since a backup is a page-by-page copy of the database (as it existed at the moment the backup was taken), a database restored from it will be in exactly the same state as the original was at backup time.
    When SQL Server reads each page during a restore, it calculates its checksum from the page contents and compares it with the value that was read from the original page at backup time (assuming you used the WITH CHECKSUM option when creating the backup). If someone has changed the backup file, these values will not match and SQL Server will mark the page as damaged.

    Is there some flag I can set when creating a backup that guarantees I will always be able to restore from it?

    If by such a flag you mean having your backup process run RESTORE VERIFYONLY after the backup is created, then no, you cannot be sure that the database can be restored from that backup. RESTORE VERIFYONLY performs a set of two checks.

    First, it checks the backup header to make sure there are no errors in it. If the header is damaged, you will not be able to restore the database from the backup.

    RESTORE VERIFYONLY
    FROM DISK = '<Backup_location>';

    The second check is possible only if the backup was taken with the WITH CHECKSUM option. With that option, during backup creation SQL Server recalculates and verifies the checksums of all the pages it reads. If it encounters a page whose checksums do not match, the backup operation fails with an error. If the check succeeds, BACKUP WITH CHECKSUM computes and stores a checksum of the created backup.
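    A backup taken with this option might look like this (name and path illustrative):

    ```sql
    BACKUP DATABASE MyDB
    TO DISK = 'd:\bu\mm.bak'
    WITH CHECKSUM;
    ```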

    Accordingly, RESTORE VERIFYONLY can then be used to recalculate the checksum and verify that the backup has not been damaged in storage:

    RESTORE VERIFYONLY
    FROM DISK = '<Backup_location>'
    WITH CHECKSUM;

    Problems can arise in two places. First, the header check performed by VERIFYONLY does not check everything that can affect the restore process. This means that RESTORE VERIFYONLY can complete without errors, and the database may still fail to restore from the "verified" backup.

    Second, CHECKSUM cannot detect in-memory corruption. If a data page was updated in memory and then corrupted before being written to disk (and, accordingly, before being backed up), the checksum calculation will not show any error; it merely confirms that the page written to the backup is the same as the page in the database at backup time. That is, if the page was already corrupted when the backup was taken, the checksum cannot catch it, and a restore from that backup may fail.

    The only way to know for sure that you can restore from a backup, and that the resulting database is intact, is to actually restore it and, preferably, run an integrity check on the restored copy.
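    Sketched as a restore to a throwaway copy followed by an integrity check (names and paths illustrative):

    ```sql
    -- Restore to a test copy, then verify its integrity
    RESTORE DATABASE MyDB_Verify
    FROM DISK = 'd:\bu\mm.bak'
    WITH MOVE 'MyDB' TO 'd:\data\mm_verify.mdf',
         MOVE 'MyDB_Log' TO 'd:\data\mm_verify_log.ldf';
    DBCC CHECKDB ('MyDB_Verify') WITH NO_INFOMSGS;
    ```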

    Does the backup contain anything other than data? Can anyone read passwords from it?

    A backup contains more than just the data. It contains the entire database structure: all the data, procedures, views, functions, and the rest of the code, as well as all the database settings. Finally, it contains all the information about database users. For a regular database, each database user is mapped to a SQL Server login. Passwords for such logins are stored with the login, at the server level, so those passwords are not in the backup.

    However, contained databases have the concept of USER WITH PASSWORD, since the whole idea of a contained database implies minimal coupling between the database and the server. In that case the password is in the backup, which can lead to attempts to extract it from there. Passwords are not stored in clear text; they are hashed, just like login passwords (which are stored in the master system database and, of course, end up in its backups).

    Microsoft offers several best practices for securing contained databases.

    Why are indexes, statistics, and other things that are easy to recreate included in the backup? Isn't this just a waste?

    In my opinion, the real waste of time would be trying to separate these things and back up only part of the database. First, how would you even do it? For example, how do you back up data without backing up clustered indexes? You can't: the leaf level of a clustered index consists of the data pages themselves. In a sense, clustered indexes are the tables, so clustered indexes must be in the backup. You could put nonclustered indexes in a separate filegroup and not back it up, but then, after restoring the backup you do have, you would still need to bring that filegroup back to life and rebuild all the indexes. So what have we gained?

    Statistics would also be a problem. SQL Server backs up statistics along with the database (they take up very little space, since the histogram that makes up a statistic is limited to 200 steps) and restores them along with it. But if we recreate the indexes after the restore, because we did not back them up, their statistics will have to be recreated too. That requires additional time, during which the database remains unavailable.

    Finally, I would argue with the phrase "easy to recreate": in an emergency, this whole process can be very confusing, and the people who work with the database will inevitably lose access to it for much longer than with a simple restore from a backup.

    The whole point of a backup is to restore the database as quickly and reliably as possible. The more complicated the recovery process, the less effective the backup. Yes, storing indexes, users, stored procedures, and everything else takes extra space, but the faster recovery you get from having everything in one place is worth it.

    OMG! I just deleted a table! I know it is in the transaction log. How do I get it back?

    Once the transaction has been committed, SQL Server cannot roll it back. DELETE and TRUNCATE remove data in entirely different ways: DELETE removes data row by row, in transactions, while TRUNCATE simply marks the pages on which the data lived as unused. But the consequences of neither operation can be undone by hand-editing the transaction log. Instead, you need to perform a point-in-time restore. You should immediately back up the database's transaction log to preserve all the changes made before you accidentally deleted the data, and then follow the point-in-time recovery steps described in Chapter 6 of this book.
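    A sketch of that sequence, assuming full recovery model (names, paths, and the STOPAT time are all illustrative):

    ```sql
    -- 1. Preserve the tail of the log immediately after the accident
    BACKUP LOG MyDB TO DISK = 'd:\bu\mm_tail.trn';

    -- 2. Restore the last full backup to a copy, without recovering
    RESTORE DATABASE MyDB_Copy FROM DISK = 'd:\bu\mm.bak'
        WITH MOVE 'MyDB' TO 'd:\data\mm_copy.mdf',
             MOVE 'MyDB_Log' TO 'd:\data\mm_copy_log.ldf',
             NORECOVERY;

    -- 3. Restore log backups, stopping just before the DELETE ran
    RESTORE LOG MyDB_Copy FROM DISK = 'd:\bu\mm_tail.trn'
        WITH STOPAT = '2012-06-01T10:14:00', RECOVERY;
    ```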

    Another option is to use third-party utilities, such as SQL Backup Pro, which can perform the restoration of individual database objects online from existing backups.

    What if I just want to use a backup to create a script that builds the database, without actually restoring the backup...?

    SQL Server provides no standard tool for creating such a script. However, utilities such as SQL Compare can generate it. This is easy to do through the GUI, and also possible from PowerShell:

    & 'C:\Program Files (x86)\Red Gate\SQL Compare 8\SQLCompare.exe' /Backup1:C:\MyBackups\MyBackupFile.bak /MakeScripts:"C:\MyScripts\MyBackupScript"

    You can also look at SQL Virtual Restore. This utility mounts a backup on your SQL Server as if you had started a restore from it, but without requiring all the space a real restore would need. A backup mounted this way looks like an ordinary database, and you can script it however you like.
