You were all excited because you read my other post, but you didn’t pay attention to the part about needing a special version of qemu-kvm and were saddened to be hit with this:
error: unsupported configuration: block copy is not supported with this QEMU binary
Don’t fret, I’ll help you get where you want to go. Do everything as root, and don’t do it on a production system … duh
Get your source RPM and prerequisites. Note that while this is current as of this posting, things could change; it's up to you to keep yourself current. For this to work you will need the RHEV version of qemu-kvm, since the version included in CentOS 7 (my platform) doesn't support the blockcopy command in virsh.
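The exact package names and versions change over time, so treat the following as a rough sketch rather than gospel: the qemu-kvm-rhev source RPM filename is a placeholder for whatever you actually download, and depending on how the rebuilt RPMs declare their conflicts you may need to remove or swap out the stock qemu-kvm packages when you install them:

# install the build tooling (assumes CentOS 7 with yum)
yum install -y rpm-build yum-utils
# pull in the build dependencies declared by the source RPM
yum-builddep -y qemu-kvm-rhev-<version>.src.rpm
# rebuild binary RPMs from the source RPM; results land under ~/rpmbuild/RPMS/
rpmbuild --rebuild qemu-kvm-rhev-<version>.src.rpm
# install the rebuilt packages (you may need to remove the stock qemu-kvm first)
yum localinstall -y ~/rpmbuild/RPMS/x86_64/qemu-kvm-rhev-*.rpm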
Start by dumping the XML for the domain somewhere you can grab it again later:
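For example, assuming a domain named vmguest and a scratch location of /root (both placeholders, substitute your own):

# save the domain definition so you can redefine or compare it later
virsh dumpxml vmguest > /root/vmguest.xml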
Today I added the components that create a logfile and clean up the working directory when done. The idea behind the logfile is that a person with no knowledge of the original backup could use the information in it, along with the backed-up files, to create a running restore of the VM. I may someday create a restore script, but not today. The cleanup portion is not working 100%, but it's good enough that I will start using the script in production today; I will debug and fix it later. Here is the Bareos job log for my first full and successful run of the script/backup combo:
bareos-dir Job vmguest-FullImage.2013-10-21_16.02.35_06 waiting 50 seconds for scheduled start time.
bareos-dir shell command: run BeforeJob "/usr/lib/bareos/scripts/vmprep.py -v vmguest.domain.local"
bareos-dir BeforeJob: Found the VMX file and copied it to the backup location /mnt/vmbackup/
BeforeJob: Successfully created a snapshot for your VM
BeforeJob: successfully backed up /vmfs/volumes/datastore1/vmguest.domain.local/vmguest.domain.local.vmdk to the backup location /mnt/vmbackup/
BeforeJob: successfully backed up /vmfs/volumes/550a2145-64112148/vmguest.domain.local/vmguest.domain.local_1.vmdk to the backup location /mnt/vmbackup/
BeforeJob: I deleted the snapshot I took earlier, all is good.
Start Backup JobId 226, Job=vmguest-FullImage.2013-10-21_16.02.35_06
Using Device "FileStorage" to write.
bareos-sd Volume "VM0015" previously written, moving to end of data.
Ready to append to end of Volume "VM0015" size=64551931043
bareos-sd User defined maximum volume capacity 107,374,182,400 exceeded on device "FileStorage" (/home/bareos/storage).
bareos-sd End of medium on Volume "VM0015" Bytes=107,374,157,986 Blocks=1,664,406 at 21-Oct-2013 16:23.
bareos-dir Created new Volume "VM0016" in catalog.
bareos-sd Labeled new Volume "VM0016" on device "FileStorage" (/home/bareos/storage).
Wrote label to prelabeled Volume "VM0016" on device "FileStorage" (/home/bareos/storage)
New volume "VM0016" mounted on device "FileStorage" (/home/bareos/storage) at 21-Oct-2013 16:23.
bareos-sd Elapsed time=00:17:42, Transfer rate=44.48 M Bytes/second
bareos-dir Bareos bareos-dir 12.4.4 (12Jun13):
Build OS: x86_64-unknown-linux-gnu redhat CentOS release 6.2 (Final)
JobId: 226
Job: vmguest-FullImage.2013-10-21_16.02.35_06
Backup Level: Full
Client: "bareos-fd" 12.4.4 (12Jun13) x86_64-unknown-linux-gnu,redhat,CentOS release 6.2 (Final)
FileSet: "VM Image Backup NFS Folder" 2013-10-19 16:56:07
Pool: "VMImage" (From command line)
Catalog: "MyCatalog" (From Pool resource)
Storage: "File" (From command line)
Scheduled time: 21-Oct-2013 16:03:25
Start time: 21-Oct-2013 16:07:24
End time: 21-Oct-2013 16:25:08
Elapsed time: 17 mins 44 secs
Priority: 10
FD Files Written: 7
SD Files Written: 7
FD Bytes Written: 47,245,718,876 (47.24 GB)
SD Bytes Written: 47,245,719,792 (47.24 GB)
Rate: 44403.9 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): VM0015|VM0016
Volume Session Id: 18
Volume Session Time: 1382202217
Last Volume Bytes: 4,458,527,606 (4.458 GB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK
shell command: run AfterJob "/usr/lib/bareos/scripts/vmprep.py -v vmguest.domain.local -p"
bareos-dir AfterJob: I couldn't find file /mnt/vmbackup/vmguest.domain.local.vmdk!
AfterJob: You may want to look at /mnt/vmbackup/
AfterJob: Cleaned out the backup location, ready for the next round.
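For reference, those BeforeJob/AfterJob lines come from director-side run scripts. Here is a minimal sketch of how that might be wired into the Job resource, assuming the script runs on the Director as the log suggests (and that the -p flag triggers the cleanup, judging by the AfterJob output); the rest of the Job directives are omitted:

Job {
  # ... Name, Type, FileSet, Storage, Pool, etc. ...
  Run Before Job = "/usr/lib/bareos/scripts/vmprep.py -v vmguest.domain.local"
  Run After Job = "/usr/lib/bareos/scripts/vmprep.py -v vmguest.domain.local -p"
}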
Per the request below I’ve attached my vmprep.py script (rename vmprep.py.txt to vmprep.py). I’m not a programmer, so don’t hate me if it blows up your stuff.
Here is my way. I've actually had this up and running for some time in another environment using SQL Server Standard 2008, and I'm now in need of configuring a new backup for a 2012 SQL Express instance. There are a few parts, obviously. This article assumes you can configure a Bareos Windows client; if not, there are plenty of other tutorials to help with that.
Part 1 is to create a SQL command that will back up your databases to a file in the location of your choosing. I choose to keep all of my scripts as well as the backup files in the same location, so that if I ever have to restore I can figure out exactly what I did to get the backup working in the first place.
Launch SQL Server Management Studio (go get a cup of coffee while you wait for it to load)
Connect using whatever credentials give you some pretty hefty rights to the database or databases you want backed up.
Drill into the database server and expand Databases, then right-click the database you want to work with and select Tasks and Back Up…
The only thing you should have to change in the resulting dialogue is where you want it saved. The default will work fine, but as mentioned I recommend keeping backup scripts and backup files in one easy-to-find work area. For me it's going to be nice and easy: "c:\dbbackups". There may be performance or capacity implications you'll have to keep in mind in your environment. Also, if you do like me and create a folder at the root of the drive, it's a good idea to pare back the permissions on that folder.
Don't hit OK; that will back up your database right now, which isn't necessarily what you want yet. Instead, tap the down arrow next to the Script button at the top of the dialogue and choose "Script Action to File". For me, I'm putting it in the folder mentioned before.
Part 2 is to create a script or batch file or something that Bareos can call to have the backup run. The file you execute should not complete until the backup file is created, so that Bareos doesn't try to back up the file before it exists. I believe Bareos even halts and "fails" the job if the return status from the script is not 0; I'll probably verify that later. Mine is a simple file that deletes yesterday's backup then creates today's backup:
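Something along these lines, where the instance name, database name, and exact paths are placeholders filled in for illustration:

@echo off
rem remove yesterday's backup file
del /q c:\dbbackups\DATABASENAME.bak
rem run the T-SQL script that SSMS generated, using Windows authentication;
rem -b makes sqlcmd return a non-zero exit code if the backup fails
sqlcmd -b -E -S .\SQLEXPRESS -i c:\dbbackups\backup_db.sql
rem hand sqlcmd's exit status back to Bareos
exit /b %ERRORLEVEL%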
I could add more logic here, but it seems I don’t need to. This has been reliable for me in the past.
Part 3 is to configure the backup FileSet, Job and Schedule. Here's what mine look like:
FileSet {
  Name = "c_dbbackups"
  Include {
    Options {
      Signature = MD5
      Drive Type = fixed
      IgnoreCase = yes
    }
    File = c:/dbbackups
  }
}

Job {
  Name = "hostname_db"
  Type = Backup
  FileSet = "c_dbbackups"
  Schedule = "NightlyFull_2000"
  Storage = File
  Messages = Standard
  Priority = 10
  Pool = Database
  Client = "hostname-fd"
  ClientRunBeforeJob = "c:/dbbackups/backup_db.cmd"
}

Schedule {
  Name = "NightlyFull_2000"
  Run = Level=Full sun-sat at 20:00
}
The obvious key component is the ClientRunBeforeJob directive in the Job definition. This makes sure to run the MS SQL backup prior to running the Bareos file backup.
I should mention the reason I'm doing a Full nightly… obviously this method could be renamed the cheapass backup. As such, there is no interaction between the actual SQL backup and the file backup Bareos is performing. You could do a differential backup (and I have in other installations where bytes are more scarce and databases are bigger), but the actual differential part of it is done way back in Part 1 when you're creating the MSSQL backup script. If you do this, I recommend backing up to a separate file in that same all-encompassing directory; that way all the crap you need is in one place: the scripts, the full, and the diff. If you're backing up a 500 GB DB and you only have a couple of TB to store to… you'll have to do something like this.
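If you go that route, the differential flavor is the same BACKUP DATABASE statement with the DIFFERENTIAL option added, along the lines of the following sketch (database name, instance, and path are placeholders):

rem hypothetical differential backup to its own file in the same directory
sqlcmd -b -E -S .\SQLEXPRESS -Q "BACKUP DATABASE [DATABASENAME] TO DISK = N'c:\dbbackups\DATABASENAME_diff.bak' WITH DIFFERENTIAL, INIT"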
And then we test… Did you expect this error? I did:
ClientBeforeJob: The server principal "NT AUTHORITY\SYSTEM" is not able to access the database "DATABASENAME" under the current security context.
So the last step is to run that .CMD in a security context that has rights to back up the database. The easy solution is to go back into SQL Server Management Studio, expand the DB server, then Security, then Logins, then right-click on NT AUTHORITY\SYSTEM and open the properties dialogue. In there, highlight Server Roles in the left pane, then check sysadmin in the right pane.
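If you'd rather skip the GUI, something like this should be equivalent, run under a Windows account that already has sysadmin rights (the instance name is a placeholder):

rem grant sysadmin to the account the backup script runs as
sqlcmd -E -S .\SQLEXPRESS -Q "EXEC sp_addsrvrolemember 'NT AUTHORITY\SYSTEM', 'sysadmin';"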
After this you can log in to BAT or bconsole, or however you choose, and test your job again.
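For example, from bconsole (the job name matches the Job resource above): run kicks the job off immediately, yes skips the confirmation prompt, and status client lets you watch it go.

*run job=hostname_db yes
*status client=hostname-fd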
Of course, even if it appears to work, you should test your restores, which is a whole different ball of wax. If you're lucky like me, you have a test server you can restore to, since testing restores on a production MSSQL system is an absolute bear. Remember: if you haven't tested restores, you don't have backups!