I am a danish programmer living in Bangkok.
Read more about me @ rasmus.rummel.dk.

Buying a backup server

23 Apr 2013 Things to consider when buying a backup server - this is documentation of a backup server I bought for Favourite Design and how I chose what to buy.

If you are in the market for a backup server but don't have much knowledge of the matter, this page will help you.

Content :

  1. Booting the backup server
  2. Raid Configuration
  3. Install the operating system
  4. Install the Adaptec 6405 raid driver
  5. Setup harddrive failure notification
  6. Install backup software

Appendixes :

  1. Raid performance tests
  2. Rebuild array on harddrive failure
  3. Installing new harddrives
  4. RAID Concepts

Main considerations

In a small office, reusing an expired tower PC as a backup server may suffice. However, this backup server is for our data center rack space, taking backup of our web, database, email & NS servers. Therefore this backup server needs to be a rack server and it needs to be highly available.

Selecting the rack server was quite difficult : lots of needs to balance within a constrained budget, coupled with little knowledge of the matter, and information was hard to get, as online configuration options are virtually non-existing when buying in Thailand. After testing at least one dealer beyond his patience, I had this list of candidates and knowledge :

Brand | Size | Bays | HDD lock-in* | CPU slots | PSU slots | Pre-installed | Comment
IBM   | 2U   | 12   | Yes (17,000 baht/2TB) | 2 | 2 | No  | Extra 2 hot swappable drives in the back (I think convenient for installing the operating system, leaving the 12 front bays clean)
Dell  | 2U   | 8    | Possible | 2 | 2 | No  |
HP    | 2U   | 8    | Possible | 2 | 2 | No  |
Intel | 2U   | 12   | No       | 2 | 2 | No  |
QNAP  | 2U   | 8    | No       | 1 (weak) | 2 | Yes | It's not clear 1) if high capacity harddrives can be installed nor 2) if more than the default 2GB RAM can be installed.
* Apparently it has become common for vendors to design their HDD bays so that ONLY proprietary harddrives can be used - this is really annoying, not least because of the exorbitant prices.

I ended up buying the Intel system for the following reasons :

  • While a pre-installed backup server like QNAP is likely both easy to setup and easy to manage, I wanted an empty metal server because :
    • I can then set up any backup software I want, and indeed I wanted to use Bacula (Bacula is open source, industry grade, free backup software - I have intimate experience with Bacula and it has rescued our company and our customers countless times; never once has it failed - I am going with Bacula).
    • I can then choose any full blown operating system and use the server for other tasks as well, and indeed I wanted to use the new server temporarily as an emergency virtual server host (our existing virtual server host is loaded to the brink of collapsing, but I don't yet have money for an additional dedicated virtual server host). I don't think a pre-installed backup server could conveniently be used as an emergency virtual server host.
    • I am likely more free to configure the physical specs if I choose empty metal.
  • IBM for sure, and likely also Dell & HP, have harddisk lock-in at unbelievably exorbitant prices. Only the Intel HDD bays for sure offered compatibility with third-party harddrives.
  • I wanted as many HDD bays as possible to allow for growth for years to come; 12 bays seemed to be the maximum in my price range and not really more expensive than 8 bays, so 12 bays it had to be.
  • It was very difficult to get information from the dealers; only the Intel dealer was willing to put up with my lack of knowledge.
  • Since it was difficult to get information, I was not sure to which extent Dell, IBM & HP would let me configure the specs of the delivered server, but my guess is that Intel offered the most variable specs.

So after I decided on the Intel empty metal system, I had to decide how to configure the specs (again, information about the configuration options was hard to come by) :

  • Since I wanted to use the backup server also as an emergency virtual server host, I decided the following :
    • CPU should have many threads to cater for virtual CPUs (I changed the default 4 core / 4 thread CPU to a slightly slower 6 core / 12 thread CPU for the same price).
    • While a dedicated backup server can run with very little RAM, a virtual server host MUST have plenty, so I increased the RAM from 4 to 16 GB (RAM is now very cheap, so it was not painful).
    • While for backup purposes it would be enough to start with 2x 2TB drives in a RAID-1 logical 2TB disk, I decided to buy 2x 2TB drives more to create an extra RAID-1 2TB logical drive to host virtual drives.
  • To maximize my storage capacity, I would have liked to use 3TB drives; however, the dealer told me that in Thailand 3TB drives typically had to be ordered from abroad and that it could take up to a month. 1 month is just too long to wait in case of drive failures, so I decided to go with the readily available 2TB drives instead.
  • Instead of using software raid, I chose to pay 16,000 baht for a high end hardware raid card (more than an extra CPU would have cost). I did that because I have an unfounded notion that hardware raid is more professional, and because the dealer recommended it. However, I have since become skeptical whether hardware raid is always better : Hardware versus software raid.
  • To increase availability, I decided to spend 7,500 baht on an extra PSU in case the first PSU should fail.
  • To increase availability, I decided to spend 7,700 baht on an extra drive to use as a spare - and since the raid card allows it, it will be a hotspare.

Apart from the fact that I no longer think hardware raid is worth the money in a backup server, I likely also made another mistake : I did not try to get a server with 2 internal harddrive bays to separate the operating system from the data disks (like in this machine, or the IBM above, which I would have chosen if it was not for the harddrive lock-in).

Backup server physical specification

Ok, here are the final specs for my backup server (which may also be used as an emergency virtual server host).

  • Model : Intel R2312GL4GS service guide : price 62,000 baht
    • Board : Intel S2600GL4
      • CPU slots : 2x R-sockets
      • RAM slots : 16 (supports DDR3 UDIMM, RDIMM or LRDIMM up to 1600 speed)
      • Onboard video card :
    • HDD bays : 12x 3.5" hot plug (the trays seem to be 2.5" compatible as well)
    • PSU : 1+0 (there is 1 and space for 1 more), 750W 80 Plus Platinum
    • Rails : yes
    • Size : 2U
  • CPU : 1x 2.0 GHz/15MB/6C/HT : BX80621E52620 : 10,000 baht
  • RAM : 2x 8GB : KVR13E9/8I : 2,800 baht/piece
  • RAID : Adaptec 6405 user guide : price 16,000 baht.
  • HDD : 5x Western Digital 3.5" 2TB Enterprise : WD2000FYYZ (5 years) : 7,700 baht/piece.
  • PSU : 2x 750W 80 Plus Platinum : price 7,500 baht/piece (1 already included in the case price)
  • Total price delivered : 141,300 baht (ex. VAT)

Booting the backup server

The Intel(R) system Boot Agent will first launch the RAID BIOS and then the system BIOS.

Here is the boot sequence :

  1. Initializing Intel(R) Boot Agent GE
  2. Adaptec RAID BIOS V5.2-0 [Build 19109]
    1. (c) 1998-2012 PMC-Sierra, Inc. All Rights Reserved.
    2. Press <Ctrl><A> for Adaptec RAID Configuration Utility! >>> :
      • Adaptec RAID Configuration Utility will be invoked after initialization. Press ctrl+a to enter the RAID Configuration Utility
        • Adaptec 6405 Family Controller #0 : 6405 is the controller model
    3. Booting the controller kernel
  3. BIOS installed successfully
  4. Press F2 to enter system BIOS

Raid Configuration

With currently only 5 disks (out of 12 possible), I am a little short of options, but this is what I want to configure :

  • 2x 2TB disks in one RAID-1 array : for backup.
  • 2x 2TB disks in one RAID-1 array : for VM host (when I have enough money to buy a dedicated VM host, I will delete this array).
  • 1x 2TB disk as global hotspare : if one of the disks in any of the 2 arrays fails, the hotspare will automatically and immediately be switched in, rebuilding the array.

Adaptec supplies 3 different utilities to create raid arrays and in general manage your Adaptec raid card controlled storage space :

  • Using the ARC utility (Adaptec Raid Configuration) : the ARC utility is built into the Adaptec raid card BIOS, so we need to boot the machine to use it.
  • Using maxView : a browser based (GUI) interface that, installed on an OS, can handle multiple raid cards on multiple machines.
  • Using ARCCONF : a command line (CLI) interface installed on an OS.

However, since both maxView & ARCCONF need to be installed on top of an OS, the ARC utility is the best way to get started, so let's boot the machine and jump into the Adaptec 6405 Family Controller BIOS. The ARC utility presents us with the following menu options :

  • Array Configuration Utility :
    • Manage Arrays : If you have more than one array, you can select an array and choose an action to take :
      • <Enter> : Display Array Properties
      • <Del> : Delete Array
      • <Ctrl+b> : Swap the Array (array order sets the boot order)
      • <Ctrl+s> : Failover assignments : I don't know what that is
      • <Ctrl+f> : Force Online
      • <Ctrl+w> : Power Settings
      • <Ctrl+c> : Cache Settings : (raid controller read & write cache, as opposed to each physical disk's own builtin read & write cache)
        • Read Caching : I set to Enable
        • Write Caching : I set to Enable with ZMM (Zero Maintenance Module)
      • <Ctrl+r> : Rebuild Array : this must be used for damaged arrays if no hotspare is available.
    • Create Array :
      1. Select physical disks to participate in the RAID disk array.
      2. Configure the array :
        • Array Type : options depend on how many physical drives were selected for the array
          • Volume
          • Raid 0
          • Raid 1 (mirror)
          • Raid 1E (enhanced Raid 1) : at least 3 disks
          • Raid 5 : at least 3 disks.
          • Raid 10 : at least 4 disks.
        • Array Label : the name of the raid drive as seen by the OS
        • Array Size : automatically set to maximum. Maybe a reason to set it lower would be to be able to move data from bad sectors to unused sectors.
        • Stripe Size : N/A for Volume, Raid 0 and Raid 1
        • Read Caching :
        • Write Caching : apparently write caching dramatically improves performance but also introduces instability
          • Enable always :
          • Enable with ZMM : Zero Maintenance Module
          • Disable
        • Create RAID via :
          • Build/Verify
          • Clear
          • Quick Init
          • Skip Init
    • Initialize Drives : fast (2 seconds), and it made a formatted drive selectable for raid again (under Disk Utilities I had formatted a drive, after which it was not available for raid nor as hotspare).
    • Rescan Drives
    • Secure Erase Drives : I tried it on a formatted drive, and it took a staggering 1.5 days for a single 2TB drive.
    • Global Hotspares : Select a free disk for hotspare - in case of an array with a damaged disk, the disk assigned as global hotspare will immediately be switched in to rebuild the array.
    • Manage JBOD
    • Create JBOD
  • SerialSelect Utility :
    • Controller Configuration :
      • Drives Write Cache :
      • Runtime BIOS
      • Automatic Failover
      • Array Background Consistency Check
      • Array based BBS Support
      • SATA Native Command Queuing
      • Physical Drives Display during POST
      • DVD/CD-ROM Boot Support
      • Removable Media Devices Boot Support
      • Alarm Control
      • Default Background Task Priority
      • LED Indication Mode
      • Backplane Mode
      • Selectable Performance Mode
    • Advanced Configuration :
      • Time Zone
      • Stay Awake Start
      • Stay Awake End
      • Spinup limit (Internal)
      • Spinup limit (External)
      • Controller info :
        • NVRAM State ............... Clean
        • Controller Memory Size .... 512 MB
        • Controller Serial Number .. 2B421216A3E
  • Disk Utilities :
    1. Probing Topology :
      • 5x Box00:SlotXX WDC WD2000FYYZ-01UL REV# Speed Size. Press ENTER to bring up shortcut menu :
        • Format Disk : low level formatting, erasing all existing data. I tried formatting 1 disk; it took quite precisely 4 hours. After formatting, the disk cannot participate in an array or be used as a hotspare.
        • Verify Disk Media : select to scan disk for defects - very slow.
        • Identify Drive : press ENTER and the bay hosting the disk will turn on a LED - this way we can see where e.g. a damaged drive is.
      • 1x Box00:-- Intel RES2SV240 REW# -- null. If I press ENTER I will get "Not a disk drive".

Install the Operating System

Because the Intel R2312GL4GS system does not come with a CD-drive, I needed to install the operating system from USB. There are several tools for creating bootable USB sticks; however, they differ in their ability to load the installer ISO correctly :

  • Universal USB Installer : I have earlier used this Windows tool to make both Ubuntu and Windows USB installers with great success; however, with the 64 bit Ubuntu 12.04.2 server ISO it did not work. I got the following error : there was a problem reading data from the cd-rom.
  • Linux Live USB Creator : I tried this Windows tool as it was recommended in the official Ubuntu installation guide, however it also did not work with my Ubuntu ISO (many other people have the same problem).
  • Startup Disk Creator : built into Ubuntu desktop. Startup Disk Creator was what actually worked. Here is what I did :
    1. Downloaded the Ubuntu Server ISO to my windows box.
    2. Downloaded the Ubuntu Desktop ISO to my windows box.
    3. Using Ubuntu desktop ISO and VirtualBox, I created an Ubuntu Desktop virtual machine.
    4. On the virtual machine shared folders, I added the Windows host folder in which I had downloaded the Ubuntu Server ISO.
    5. On the virtual machine USB Device Filters, I added a USB filter for the USB Stick I had in my Windows box USB port (after Ubuntu had loaded in the virtual machine I would have to unplug and replug the USB stick on the Windows host machine for Ubuntu to recognize the USB stick).
    6. On the Ubuntu desktop, I clicked on the Dash home icon and wrote : Startup Disk Creator in the search field and then started up the "Startup Disk Creator" program.
    7. Having already shared the Windows host machine folder in which the Ubuntu server ISO was located, it was easy to create the USB installer.
    8. I then booted my backup server on the USB stick and the full Ubuntu 12.04.2 server installation went through without problems.
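One thing I could have done to rule out a corrupt download (the Universal USB Installer error above could in principle be caused by a bad ISO) is to verify the ISO checksum before writing the USB stick. A minimal sketch - the function name is my own, and the expected MD5 must be taken from the official Ubuntu release page :

```shell
#!/bin/sh
# verify_iso FILE EXPECTED_MD5 : succeeds (exit 0) only when the md5 checksum
# of FILE matches EXPECTED_MD5, i.e. the download is intact.
verify_iso() {
  [ "$(md5sum "$1" | awk '{print $1}')" = "$2" ]
}

# hypothetical usage - take the real checksum from the Ubuntu release page :
# verify_iso ubuntu-12.04.2-server-amd64.iso <md5-from-release-page> \
#   && echo "ISO ok" || echo "ISO corrupt - download it again"
```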

Install the Adaptec 6405 raid driver

Here I go through how to install the Adaptec 6405 driver for Ubuntu 12.04.2. If you use another operating system, you should consult the Adaptec 6405 user manual for how to install the driver.

  1. Open a browser on your dev box and navigate to the Adaptec 6405 download page.
  2. Select your operating system (in my case Ubuntu 12)
  3. For Ubuntu 12 there are 2 driver packages : (for other operating systems there may be more)
    • AACRAID Debian and Ubuntu Driver ... for installing Adaptec 6405 driver as part of installing the operating system.
    • Dynamic Kernel Module Source and Drivers ... for installing Adaptec 6405 driver AFTER installing the operating system - I use this one.
  4. Uncompress and unpack the downloaded aacraid-dkms driver archive.
  5. Locate the relevant driver, in my case under drivers/Ubuntu : aacraid_1.2.1.29900-1_all.deb.
  6. Further locate the relevant install instruction text, in my case under drivers/Ubuntu/Ubuntu 12.04.2 LTS : Installing Ubuntu 12.04.2 LTS on Adaptec RAID Controllers.txt.
  7. Copy the driver file, aacraid_1.2.1.29900-1_all.deb, to the backup server, eg. to /var/downloads, using eg. a USB stick or FTP.
  8. Start the installation : (here for Ubuntu 12.04.2)
    1. shell> cd /var/downloads : navigate to where you copied the driver file.
    2. shell> apt-get install build-essential
    3. shell> apt-get install dkms : Dynamic Kernel Module Support (dkms @Wiki).
    4. shell> dpkg -i aacraid_1.2.1.29900-1_all.deb
    5. shell> dkms add -m aacraid -v 1.2.1.29900 : the version number is taken from the driver package filename.
    6. shell> dkms build -m aacraid -v 1.2.1.29900
    7. shell> dkms install -m aacraid -v 1.2.1.29900
    8. shell> dkms status : confirm the Adaptec 6405 driver, aacraid, is installed.
  9. shell> reboot now : reboot the system to be sure the new module is correctly loaded.
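After the reboot it is worth confirming that the kernel actually loaded the module. A small helper sketch (the function name is my own) that checks a module name against lsmod output :

```shell
#!/bin/sh
# module_loaded NAME LSMOD_OUTPUT : exit 0 if NAME appears as a loaded module
# (first word of a line) in the given lsmod output.
module_loaded() {
  echo "$2" | awk '{print $1}' | grep -qx "$1"
}

# on the backup server you would run it against the live module list :
# module_loaded aacraid "$(lsmod)" && echo "aacraid loaded" || echo "aacraid MISSING"
```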

Setup harddrive failure notification

Eventually some raid disk will fail, and if we are not notified when that happens, in time the whole raid will surely fail, in which case data will be lost. Therefore it is mandatory to set up disk failure notification.

The most well-known utility for monitoring raid health and status is mpt-status. Unfortunately, it seems the Adaptec raid controller drivers do not implement the MPI (Message Passing Interface) interface and so cannot use an MPT (Message Passing Technology) utility. As a result, it seems I have to use Adaptec's own raid monitoring software. Adaptec supplies 2 different software packages relevant to monitoring raid health :

  • maxView Storage Manager (MSM) : GUI utility for raid management & monitoring. Each server with raid runs a Storage Manager Agent which reports to a single maxView Storage Manager. maxView Storage Manager can be installed locally or remotely and manage multiple agents, giving you a GUI (browser based) management and monitoring tool handling multiple raid servers. However, the Storage Manager Agent requires a Java runtime environment (5 or higher) and maxView Storage Manager requires Tomcat.
  • arcconf : command line utility for raid management & monitoring. No prerequisites required, it just runs; however, while it enables you to monitor the health of your arrays and your physical disks, no notification is built in.

It is currently too much for me to play with the maxView Storage Manager, so I went for the fast solution and installed the arcconf utility (actually there is no installation - only a single file needs to be copied) :

  1. Open a browser on your dev box and navigate to Adaptec storage manager download page.
  2. Select your operating system (for my Ubuntu I chose Linux 64 bit here)
  3. Download, uncompress and unpack the file (unpacked there are 2 folders : cmdline and manager).
  4. Locate the arcconf file in the cmdline folder
  5. Copy arcconf to the backup server, eg. to /var/downloads, using eg. a USB stick or FTP.
  6. shell> cd /var/downloads : navigate to the folder where you copied the arcconf file.
  7. shell> chmod +x arcconf : enable execution for arcconf.
  8. shell> ./arcconf getconfig 1 ad : check the status of your raid card. 1 is the controller number, AD means get info from the raid card only.
  9. shell> ./arcconf getconfig 1 : get all information from raid card, logic and physical harddrives.

I have not yet set up automatic notification, so I will need to periodically test the health of the backup server raid manually. However, while arcconf does not have built-in notification, others have created utilities on top of arcconf that email a notification if something changes.
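Until I set up a real notification utility, a simple cron-driven sketch on top of arcconf could look like the following. This is my own quick idea, not one of the mentioned utilities - the paths, the mail address and the exact arcconf output format are assumptions, so test the extraction against your own arcconf output first :

```shell
#!/bin/sh
# raid-check.sh : meant to be run from cron, e.g. daily.
# Alerts when a logical device reported by arcconf is not "Optimal".

# classify STATUS : print OK for an Optimal array, otherwise an ALERT line.
classify() {
  if [ "$1" = "Optimal" ]; then
    echo "OK"
  else
    echo "ALERT: logical device status is $1"
  fi
}

# On the backup server (sketch - verify the output format of your arcconf) :
# STATUS=$(/var/downloads/arcconf getconfig 1 ld | \
#          awk -F' : ' '/Status of logical device/ {print $2; exit}')
# RESULT=$(classify "$STATUS")
# case "$RESULT" in
#   ALERT*) echo "$RESULT" | mail -s "RAID problem on backup server" admin@example.com ;;
# esac
```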

Install backup software


While there are several heavyweight backup software vendors out there, I think that for reliable datacenter backup there is hardly any real competition to Bacula : free, open source and enterprise grade.

@ Favourite Design, we have about 3 years of experience with Bacula, for both datacenter backup and our office backup. Bacula has never failed us once, has saved both us and our customers multiple times, and when properly set up it is a joy to manage and very fast.

Bacula is NOT a click-next installation; however, if you are ready to install backup software now, you should continue with : step by step bacula backup tutorial - my most popular tutorial to date.

Appendix : Raid performance tests

I have never done raid performance tests before and do not have the time to research the subject properly. However, I want to use this buying-a-backup-server documentation as a platform for STARTING to learn about storage benchmarking, so that eventually I am not left completely in the dark about whether my storage is performing according to industry standard or whether I could squeeze more speed out of my hardware. I fear that without testing, I risk configuring my storage space sub-optimally, or a lot slower than it should be.

As I don't know what metrics to expect, and I have not been able to find a relevant collection of performance results, I cannot this time confirm whether my raid drives are as fast as they should be; instead, this first time, I will just run a test and publish the result.

Let's get started : (and started is all that I am doing)

  1. Install bonnie++ :
    1. shell> apt-get install bonnie++ : install the bonnie++ harddisk performance test tool.
  2. Notice documentation :
    • shell> man bonnie++ : have a quick look at the man page.
    • /usr/share/doc/bonnie++/readme.html : readme file. Download it to your dev box and load the file in your browser.
  3. Run bonnie++ test with default parameters:
    1. shell> bonnie++ -u root : run test with default parameters writing test result to screen.
      Notice how the test result data are first formatted in a table and then in the last line the same data are duplicated in a CSV format.
    2. shell> bonnie++ -u root > bonnie.result : write bonnie++ test result to a text file instead.
    3. shell> cat bonnie.result : the file was written and the content is the same as we saw before on the screen including the duplicate CSV formatted data in the last line.
  4. Format the bonnie++ result for better readability :
    bonnie++ comes with a utility, bon_csv2html, that takes the last CSV formatted line from the test result and formats the data as an html table - this makes the data much easier to overview.
    1. shell> tail -1 bonnie.result | bon_csv2html > bonnie.html : take the last line (tail -1) of the bonnie++ test result (bonnie.result) and pipe it (|) to bon_csv2html and then write the bon_csv2html resulting html markup to a new file (>) bonnie.html.
    2. Download the bonnie.html file to your dev box and load the file in your browser - you should now have the result data nicely formatted in an html table :
      [bonnie++ default test result : an html table with column groups Sequential Output (Per Char, Block, Rewrite), Sequential Input (Per Char, Block), Random Seeks, Sequential Create and Random Create - each reported as a speed (K/sec or /sec) plus % CPU]
  5. Understanding the bonnie++ results : (this is NOT authoritative, I basically do not know what I am talking about)
    • Size : the size of the files read and written in the first 2 I/O tests; the default is double the RAM, to bypass caching. The backup server has 16 GB RAM and the size here is 31,984 MB - it's fair to say that's double.
    • Latency : the time it takes to prepare the harddrives to read/write (position the read/write heads and transfer the data); however, I don't understand how to interpret the latency data.
    • Sequential Output : (writing to disk)
      • Per Char : write 1 character at a time : very slow at 864 KB/sec, and very CPU intensive as 99% of the CPU was used.
      • Block : write blocks of characters at a time : the write speed is 145 MB/sec (which is very good) and the CPU load is 15%.
      • Rewrite : read and write
    • Sequential Input : (reading from disk)
      • Per Char : reading 1 character at a time : again very CPU intensive; however, reading is clearly faster than writing.
      • Block : reading blocks of characters at a time : the read speed is 356 MB/sec, using nearly the same CPU as for writing (16%).
    • Random Seeks : the number of individual blocks that can be seeked per second : random seek speed is 506 blocks/sec; however, the latency is nearly 2 seconds.
    • Sequential Create : a heavily loaded email server may need to create a lot of files managing its queues, so file creation speed is important for that type of program.
    • Random Create :
  6. Comparing performance results : under construction
    • Compare to a single harddrive :
    • Compare to other raid configurations :
    • Compare to industry standards : I have tried to search a little, but was unable to find a systematic collection of other people's results.
  7. Run a custom bonnie++ test :
    1. bonnie++ -u root -f : -f stands for fast mode and will skip the per char tests; these take a long time, and there are likely not that many real world applications using per char I/O (though I think drivers may often use per char).
    2. bonnie++ -u root -f -s 64000 : set the file size to 4 times the available RAM (instead of the default 2 times) - since I have 16 GB RAM, I set the file size to 64 GB. With a bigger file size, an even smaller part can be cached.
      [bonnie++ test result for the 64 GB run : same html table columns as the default test above]
      Indeed, the I/O speeds (Sequential Output and Sequential Input) are lower, indicating that caching in the available RAM plays a lesser role; however, Random Seeks and Sequential and Random Create have also changed, even though in my understanding they should not - so far I am uncomfortable interpreting the results.

Appendix : Rebuild array on harddrive failure

Since I have set up a global hotspare, I expect a failed harddrive to be replaced immediately and the raid array hosting the failed harddrive to be rebuilt automatically. However, in case no hotspare is available, we may need to rebuild a failed raid array manually.

  1. Reboot
  2. ctrl+A to enter the ARC (Adaptec Raid Configuration) utility.
  3. Identify the slot in which the harddrive failed :
    1. Disk Utilities
    2. Identify Drive
  4. Remove the failed drive
  5. Insert a new identical drive (or bigger drive)
  6. Reboot
  7. ctrl+A to enter the ARC utility
  8. Select Array Configuration Utility
  9. Select Manage Arrays
  10. If you have more than one array, then select which array contains the failed drive
  11. ctrl+R to Rebuild Array

Appendix : Installing new harddrives

Ok, after only about a year in production, the time has come to increase the storage space in my backup server - there is not enough space for our incremental backup.

Already having 2 logical drives, one running backup and the other running customer virtual machines, I am a little nervous about installing new harddrives. Instead of increasing the existing logical harddrives, I have decided to build a new raid-1 logical drive consisting of 2x 4TB disks - this way I don't need to touch the existing 2 arrays - and then reconfigure the backup to run on the new logical drive.

I bought 2x 4TB and 2x 2TB new drives with the intention to create 2 new logical raid-1 drives :

  • ImageBackup (4TB raid-1) : to backup virtual harddrives in good state, so that in case of disaster I can fast replace a broken virtual harddrive.
  • BaculaIncr (2TB raid-1) : to keep all incremental backups from Bacula. I will then later rename the existing Backup volume to BaculaFull.

Installing process :

  1. I installed 4 new physical drives, 2x 4TB and 2x 2TB.
  2. fdisk -l : checking that the drives exist. Showing :
    • /dev/sda 1997 GB : existing logical drive I use for Bacula.
    • /dev/sdb 1997 GB : existing logical drive I use as VM host.
    • /dev/sdc 1997 GB : new logical drive I want to use for Bacula incremental backup - error message : Disk /dev/sdc doesn't contain a valid partition table.
    • /dev/sdd 3999 GB : new logical drive I want to use for backup of virtual harddrives - error message : Disk /dev/sdd doesn't contain a valid partition table.
  3. Time to create a partition table on /dev/sdc :
    1. shell> fdisk /dev/sdc : we will use fdisk to create the partition.
    2. fdisk> m : get list of available operations.
    3. fdisk> n : create new partition.
    4. fdisk partition type> p : I choose Primary partition type (not extended).
    5. fdisk partition number> 1 : I select partition number 1 (maximum is 4).
    6. fdisk first sector> : I press enter to select default
    7. fdisk last sector> : I press enter to select default
    8. fdisk> w : write the partition to disk.
    9. fdisk -l : now shows /dev/sdc without the "doesn't contain a valid partition table" error, and I can also see that the /dev/sdc1 partition has been created.
  4. Also create a partition table on /dev/sdd.
  5. Time to make a filesystem on the new partitions :
    • shell> mkfs.ext4 /dev/sdc1 : first create an ext4 filesystem on the /dev/sdc1 partition.
    • shell> mkfs.ext4 /dev/sdd1 : then create an ext4 filesystem on the /dev/sdd1 partition.
  6. Make mount points for the new partitions :
    • shell> mkdir /media/diskBaculaIncr : this directory will be used to mount the /dev/sdc1 partition (the BaculaIncr logic drive)
    • shell> mkdir /media/diskImageBackup : this directory will be used to mount the /dev/sdd1 partition (the ImageBackup logic drive)
  7. Temporarily mount the drives just for test :
    • shell> mount /dev/sdc1 /media/diskBaculaIncr
    • shell> mount /dev/sdd1 /media/diskImageBackup
    • shell> df -h : PROBLEM : while sdc1 correctly reports 1.8TB, sdd1 reports only 2TB although it should be 4TB (most likely because fdisk created an MBR partition table, which cannot address more than 2TiB - a partition larger than that needs a GPT partition table, created with e.g. parted).
  8. Permanently mount the drives (using /etc/fstab) :
    1. shell> blkid : list the UUID (Universal Unique Identifier) of all partitions.
    2. shell> nano /etc/fstab : open fstab in the nano editor
      1. Add the following line at the end of fstab : UUID=be74d75b-356c-4ac4-b28a-69b90ec24ae2 /media/diskBaculaIncr ext4 errors=remount-ro 0 2
      2. Press ctrl+x to end nano
      3. Press y when prompted whether you want to save the fstab file.
    3. shell> reboot : boot the Ubuntu server.
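The partition -> filesystem -> fstab steps above can be collected in one place. This is only a sketch of what I did, not a tested script - the device name is hard coded, and the destructive commands are commented out on purpose :

```shell
#!/bin/sh
# Sketch of adding a new logical drive to the system (steps 2-8 above).
DEVICE=/dev/sdc                  # double check with fdisk -l first!
MOUNTPOINT=/media/diskBaculaIncr

# fstab_line UUID MOUNTPOINT : print an /etc/fstab entry like the one above.
fstab_line() {
  echo "UUID=$1 $2 ext4 errors=remount-ro 0 2"
}

# The interactive fdisk answers (n, p, 1, default, default, w) can be piped
# in non-interactively. Commented out because it is destructive :
# printf 'n\np\n1\n\n\nw\n' | fdisk "$DEVICE"
# mkfs.ext4 "${DEVICE}1"
# mkdir -p "$MOUNTPOINT"
# UUID=$(blkid -s UUID -o value "${DEVICE}1")
# fstab_line "$UUID" "$MOUNTPOINT" >> /etc/fstab
# mount "$MOUNTPOINT"
```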

Build method : (Build methods : authoritative answer)

  • Build / Verify : the array is available for use immediately; however, performance is impacted while building. I tried this first, but after half an hour it was only 5% finished on a 4TB raid-1 logical drive (2x 4TB physical disks) - so I quit and started to re-initialize the 2x 4TB disks, then started a Clear build.
  • Clear : the array is first available after the build is finished. The build is supposed to be fast, but it seems just as slow as Build/Verify.
  • Quick Init : the array is available immediately. There is no build; only metadata are written - very fast. Can only be used with new disks.
  • Skip Init : used when trying to recover data from a logical drive where multiple physical drives have failed. There is no initialization process.

Appendix : RAID Concepts

  • RAID : Redundant Array of Independent (originally Inexpensive) Disks.
  • Logical disk : if the raid controller combines, say, 3 physical disks in 1 array, the raid controller will expose this 3 disk array as 1 disk to the OS - this disk is called a logical disk, though the OS does not necessarily know. A logical disk is also sometimes called a RAID disk or a virtual disk; however, it's better to use the term logical disk to avoid confusion with virtual machine disks, which are also called virtual disks.
  • JBOD : Just a Bunch Of Disks.
  • BBU : Battery Backup Unit : ensures that cached data not yet written to disk are not lost in case of a power failure (typically for up to 72 hours). If for some reason the disks stop (power failure) while there is still data in the cache, the BBU powers the cache, keeping the cache data ready to be written to disk when you get the disks up and running again.
  • ZMM : Zero Maintenance Module (the BBU has been replaced by the AFM-600 NAND flash memory backup for the 6 series, which is zero maintenance and doesn't require battery replacement, as it uses capacitors to maintain charge while the memory is written to NAND flash). I think ZMM is replacing BBU on the Adaptec raid cards beginning with the 6000 family.
  • Write Cache : not only do physical disks have their own write cache, a RAID controller typically also has a write cache.
  • ARC : Adaptec Raid Configuration : BIOS based utility to manage the raid card (when booting, you can press ctrl+a to enter the ARC utility)
  • ARCCONF : Adaptec Raid Controller Command line : command line utility to manage the raid card.
  • maxView : GUI based utility to manage the raid card (you need the adaptec CD)
  • Disk types supported by Adaptec : (Adaptec 6405 disk compatibility tests)
    • SATA :
    • SAS : Serial Attached SCSI. Adaptec recommends not to combine SATA & SAS drives in the same array.
    • SSD : Solid State Disks :
  • Data striping : improves performance. Segmenting sequential data over multiple drives so that each segment can be accessed concurrently instead of sequentially from only one drive. If one segment is destroyed, the whole data sequence is destroyed, therefore if 1 drive fails all data are lost (except if redundancy like mirror or parity is implemented)
    • Bit level striping
    • Block level striping
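As a sketch of block-level striping: with 3 disks, logical block b lands on disk (b mod 3) at stripe row (b / 3), which is why consecutive blocks can be read from different drives concurrently (disk count and block numbers below are illustrative):

```shell
# Map logical block numbers onto a 3-disk stripe set.
disks=3
for b in 0 1 2 3 4 5; do
  echo "logical block $b -> disk $(( b % disks )), row $(( b / disks ))"
done
```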
  • Parity data : gives data protection. Imagine you have 3 disks, the first 2 holding data and the last holding parity data. If the sum of, say, the first bit of the 2 data disks is even (0 or 2), set the parity bit to 0; if the sum is odd (1), set it to 1. From one of the data disks and the parity disk you can now calculate the bit of the other data disk, because the parity bit tells you whether the sum of the 2 data bits should be even or odd.
  • Distributed parity : this just means that the parity bits are not on one disk only, but instead say for the first 2048 bits on disk 3, for the next 2048 bits on disk 2, for the next 2048 bits on disk 1 and then the next on disk 3 again and so on.
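The even/odd rule above is exactly a bitwise XOR, so a lost byte can be recomputed from the surviving data and the parity. A minimal sketch (the byte values are arbitrary examples):

```shell
# Parity sketch: 2 data "disks" and 1 parity disk, one byte each.
d1=173            # byte from data disk 1 (example value)
d2=92             # byte from data disk 2 (example value)
p=$(( d1 ^ d2 ))  # parity byte = XOR of the data bytes
# Disk 1 fails: recover its byte from disk 2 and the parity byte.
recovered=$(( p ^ d2 ))
echo "parity=$p recovered=$recovered"   # recovered equals d1 (173)
```

This is why losing 1 drive is survivable but losing 2 is not: with two unknowns, the XOR equation can no longer be solved.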
  • Global hotspare : a disk that can immediately be swapped with a failing disk of any array. Say one of your arrays has a disk failure, the RAID adapter will then discard the failing disk and use the hotspare disk instead. It's called hot because it is turned on and can be swapped immediately. It is called global because it is not locked to a specific array but is available for any array (however, when it is in use, it cannot be used to replace another disk).
  • Distributed spare : cannot be shared among arrays because it contains array data. Increases the speed by which an array is rebuilt.
  • Drive segment : a disk drive or part of a disk drive used in an array. If no part of the drive is used in an array, the entire drive is an available segment. A drive with 2 segments can be used in 2 different arrays.

The most popular RAID levels are :

RAID Level | Min. disks | Size | Read speed | Write speed | Fault tolerance | Comment
RAID 0     | 2 | drive count     | very fast | very fast | None            | Block striping (read & write speeds multiply with drive count)
RAID 1     | 2 | 1 drive         | very fast | medium    | drive count - 1 | Mirror (read speed multiplies with drive count)
RAID 1E    | 3 | 50% @ 3 disks   | fast      | medium    | 1 drive         | Mirror & stripes (also called enhanced RAID 1). I am quite unsure how read speed compares to standard level 1.
RAID 5     | 3 | drive count - 1 | fast      | slow      | 1 drive         | Block striping & distributed parity
RAID 5EE   | 4 | drive count - 2 | fast      | slow      | 1 drive         | Stripes, parity & distributed spare (faster rebuild)
RAID 6     | 4 | drive count - 2 | fast      | slow      | 2 drives       | Block striping, double distributed parity
RAID 10    | 4 | 50% @ 4 disks   | very fast | medium    | 1 drive/span    | Mirror & block striping
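The "Size" column above is simple arithmetic. A minimal sketch, assuming n equal disks of s TB each (the function name and the integer TB sizes are my own choices for illustration):

```shell
# Usable capacity per RAID level for n equal disks of s TB each.
usable() {  # usage: usable <level> <n_disks> <tb_per_disk>
  level=$1; n=$2; s=$3
  case $level in
    0)  echo $(( n * s )) ;;        # striping only, no redundancy
    1)  echo "$s" ;;                # mirror: one disk's worth of space
    5)  echo $(( (n - 1) * s )) ;;  # one disk's worth of parity
    6)  echo $(( (n - 2) * s )) ;;  # two disks' worth of parity
    10) echo $(( n * s / 2 )) ;;    # mirrored stripes: half the raw space
  esac
}
usable 5 4 2   # 4x 2TB in RAID 5 -> prints 6
```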

More about RAID levels @ Wikipedia (however it seems to me their speed table may be wrong)


