oVirt Installation Guide

Recently I used oVirt to build a virtualization platform on an IBM BladeCenter. I present to you the documentation for that project.

oVirt Installation, from start to finish.

1. Install CentOS (6.4)

Boot off your installation media, and run through the options.
The hostname should follow this pattern:

hostname.itp.internal
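
If you miss this during install or need to change it later, the hostname on CentOS 6 lives in /etc/sysconfig/network (the name below is just this guide's placeholder); set it for the running session with the hostname command as well.

# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hostname.itp.internal

# hostname hostname.itp.internal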

Time Zone:

Vancouver time zone (GMT -8.00)

Root user password should be consistent with the rest of the nodes.

Choose minimal install, and partition as LVM. Remove the existing partitions in the group and create:

/
named: lv_root
ext4
28000MB

Make sure it is part of the LVM volume group, usually named ‘vg_hostname,’ where hostname is the name you chose. Swap space should fill the remainder of the HDD, around 9000MB or more. Name it swap and make sure it is also part of vg_hostname.

I generally dislike using swap because it is so slow, but it acts as our safeguard against running out of RAM. This partition scheme isn’t generally considered best practice for a *nix server, but this is a single-purpose system and the partitioning matches that of the equivalent “oVirt-node” setup.

Reboot and log in as root, or even better, create a sudoer.

2. Sudo user

Create a maintenance user, and optionally give that user a home directory. I personally like having a home for random wgets and other such scripting debauchery. This user must be in the supplementary group ‘wheel’ for sudo to work correctly.

# useradd -m -g users -G wheel -s /bin/bash USERNAME
# passwd USERNAME

Enter something epic and tough to crack.

Edit the sudoers file to allow the wheel group access to the sudo utility. DO NOT EXPLICITLY ADD USERS HERE! The wheel group is there for a reason; not following this guideline can make user maintenance a pain.

# visudo

Uncomment the wheel line; it can be found near the bottom of the file.

## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL

After configuration of the server is complete, this sudoer should be the only account you use for maintenance.
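
Before logging out of root, it’s worth a quick check that sudo actually works for the new account (USERNAME being whatever you chose above).

# su - USERNAME
$ sudo whoami

If that prints ‘root,’ you’re set.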

3. Static addresses and DNS

We need our addresses to stay consistent, so we will edit two files, one for each NIC. On our servers eth0 and eth1 are iSCSI-related and do not need to be touched.

# cd /etc/sysconfig/network-scripts/
# vi ifcfg-eth2

DEVICE=eth2
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.31.xx.xx
NETMASK=255.255.0.0
GATEWAY=172.31.xx.1

Save the file and do the same for eth3. Increment the address by 1 to keep them consistent.

# vi ifcfg-eth3

DEVICE=eth3
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.31.xx.xy
NETMASK=255.255.0.0
GATEWAY=172.31.xx.1

Edit resolv.conf with our local DNS server and search domain; optionally add a good backup DNS such as Google’s.

# vi /etc/resolv.conf

search itp.internal
nameserver 172.31.x.x
nameserver 8.8.8.8

Reload your new settings.

# service network reload
# service network restart
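
Optionally, confirm the addresses actually stuck before testing DNS:

# ip addr show eth2
# ip route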

Ping Google and a local server’s hostname to make sure your settings are straight.

# ping google.com

Success!!!

# ping srv01

Success!!!

If both are successful, pass go and collect $200.

4. oVirt Repository

First things first: let’s install wget, pull the .repo file down, and bring the system up to date.

# yum -y install wget
# cd /etc/yum.repos.d
# wget http://dev.centos.org/centos/6/ovirt/ovirt.repo
# yum -y upgrade

Chances are you’ve just upgraded the kernel. Reboot the box to apply all changes.

# shutdown -r now

5. DNS

In order for oVirt to work correctly, every server used in the cluster must have a resolvable A and PTR record in DNS. Optionally, you could fake this by editing /etc/hosts; I recommend real DNS records if you have the option available. The FQDN you choose will most likely be the hostname from section 1. I am going to use ‘hostname.itp.internal.’
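
If you do end up faking it with /etc/hosts, the entries look something like this (addresses are placeholders and node02 is just an example name; every host in the cluster needs entries for all of the others):

# vi /etc/hosts

172.31.xx.xx   hostname.itp.internal   hostname
172.31.xx.xy   node02.itp.internal     node02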

After creating the records, ping the FQDN to check that it resolves.

# ping hostname.itp.internal

If you experience “Success!” you are a winner and you may pass.

6. oVirt-engine

If this is the first server you’re setting up in the cluster, you will configure it as the management server. This box will be the heart of the oVirt cluster and will provide the web-based management front end.

If you’ve already done this and need to add nodes, ignore this and skip down to section 7.

# yum -y install ovirt-engine
# engine-setup

Run down the config script and fill in the blanks.

Enter a strong password, use ports 80/443, enter the FQDN of this box, etc.

The data and ISO domains set up here will be overridden in our cluster because they will be shared from a node, so what you choose here isn’t of dire importance. Choose NFS and accept the default location in /var.

More details on this can be found at http://www.oVirt.org/Quick_Start_Guide#oVirt_Engine

The CentOS wiki claims there is an issue with oVirt’s log rotation; let’s take their advice and correct it.

# sed -i 's/`;/ 2>\/dev\/null`;/' /usr/share/ovirt-engine/scripts/ovirtlogrot.sh

You can now open a web browser and check out oVirt; it can be found at the FQDN of this server. Congrats!
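
If the page doesn’t load, make sure the engine service actually came up (service name assumes the stock packages):

# service ovirt-engine status
# chkconfig --list | grep ovirt-engine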

7. VDSM (oVirt-node)

The whole point of this project is to have some fun with virtualization, right? Well, we need a node to handle the task, so it’s about time to set one up. The prerequisites of the VDSM stack are hardware virtualization support (Intel VT-x or AMD-V) and the ‘no execute’ bit, both enabled in the BIOS. If you haven’t enabled these, go do that real quick; if you aren’t sure, you can find out pretty quickly.

# cat /proc/cpuinfo | egrep 'svm|vmx' | grep nx

If all is good, you should see output containing ‘vmx’ (Intel) or ‘svm’ (AMD) along with ‘nx.’

Install the packages

# yum -y install vdsm vdsm-cli

7.1 More network configuration

Edit your interfaces once more and create a bridge called ovirtmgmt. This bridge is required and allows the nodes to communicate with one another.

# cd /etc/sysconfig/network-scripts/
# touch ifcfg-ovirtmgmt

View your interface settings and optionally write them down somewhere. Gut the contents of eth2 and dump them into the bridge.

# cat ifcfg-eth2
# vi ifcfg-ovirtmgmt

DEVICE=ovirtmgmt
ONBOOT=yes
TYPE=Bridge
DELAY=0
BOOTPROTO=static
IPADDR=172.31.xx.xx
NETMASK=255.255.255.0
GATEWAY=172.31.xx.1
NM_CONTROLLED=no

Edit the ifcfg-eth2 file: remove the addressing and add the bridge line.

# vi ifcfg-eth2

DEVICE=eth2
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=ovirtmgmt

Restart your network services and do a quick ping to make sure your bridge is up.

# service network restart
# ping google.com

Success! Make sure vdsmd was configured to start correctly.

# chkconfig --list | grep vdsmd

You should see vdsmd enabled on runlevels 2345; if you do, you’re ready to add this node to the cluster.
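
If it isn’t enabled for those runlevels, something like this should sort it out:

# chkconfig vdsmd on
# service vdsmd start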

8. Add a node to oVirt

Open a web browser, and go to the URL for your oVirt management console.

http://hostname.itp.internal

Log in as admin with the password you set up during section 6.

Admin Portal

Click the Hosts tab, then right-click > New.

Fill out the Name, Address and Root Password fields accordingly. Data Center ‘Default’ and Host Cluster ‘Default’ will do. Allow automatic configuration of the firewall and click OK.

Ignore the warning about power management.

Add a New Host

Sit back and wait while your node is installed; eventually the node will reboot. After it has rebooted, click “Confirm Host has been rebooted.” Do not do this until you are positive it has rebooted. If oVirt claims the install failed, just click “Re-Install” and it will work; occasionally the installer simply times out.

One odd caveat: if the node shows as up but you are unable to migrate a VM to it, check that vdsm is starting correctly.

# service vdsmd status

It may claim the service is down.

# service vdsmd restart

This will show the service stop, start, and then go down again.

This strange bug is caused by a log file being owned by root instead of vdsm. To fix it, change ownership to vdsm:kvm (UID/GID 36).

# cd /var/log/vdsm/
# ls -l

This will show vdsm.log owned by root. Fix it with chown and reboot.

# chown -R 36:36 vdsm.log
# shutdown -r now

9. Bond the NICs

Log into the administration portal and click the node whose NICs you wish to bond. Select “Network Interfaces,” then choose “Setup Host Networks” under the panel at the bottom.

Bond NIC

Remove the ovirtmgmt bridge, then right-click eth2 and select Bond > eth3.

Bond NICs

Select mode 5. Mode 5 (balance-tlb) allows us to bond without any special configuration on our switch hardware, and it provides link aggregation and fail-over.

Bond with Mode5

Add the ovirtmgmt bridge back into the “Assigned Logical Networks” section, check the save network configuration box, and click OK.

Add Bridge to Bond

After the pinwheel spins around a few times, both NICs should show as up and joined to bondX.

Joined NICs

Going back and viewing the contents of ifcfg-ethX, you can see that the changes were applied.

10. Adding a storage domain

We need a place to store our virtual machines so let’s add a new storage domain. NFS makes sharing between the nodes really easy so we are going to use it to fill our storage needs.

10.1. Configure NFS access

By default NFS uses v4 and dynamic ports. We don’t want this, mainly because NFSv4 isn’t fully supported by oVirt yet (it is in the pipeline). For now, make some edits to force NFSv3 and pin the ports.

# vi /etc/nfsmount.conf

NFSvers=3

# vi /etc/sysconfig/nfs

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020
NFS4_SUPPORT="no"

The last line isn’t originally in the config, so just append it at the bottom.

# service nfs restart

10.2. Edit iptables

The majority of the iptables configuration was taken care of back in section 8, when the host was added and VDSM was configured.
The lines below will allow other hosts access to NFS on this box. This should only be performed on the node that will be providing storage to the rest of the cluster via NFS.

Unblock all of the ports needed by NFS and edit the address ranges to fit your network.

# vi /etc/sysconfig/iptables

-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -s 172.31.xx.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT

# service iptables restart

Confirm the needed services are running.

# rpcinfo -p

You should see a list of local NFS related services.

10.3 Add local storage

I recommend preparing a directory for local storage, just in case a fire breaks out and you need somewhere to store virtual machines.

# mkdir /rhev/storage
# chown -R 36:36 /rhev/storage
# chmod -R 755 /rhev/storage

10.4 Partition iSCSI

Local storage is fine and dandy, but if you spent a bunch of money on a fancy iSCSI setup, you’ll probably want to use it. iSCSI appears as a multipath device labeled by its GUID; I prefer to make it human readable. This is optional, and in some cases not even preferred.

# vi /etc/multipath.conf

user_friendly_names yes
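
Note that user_friendly_names belongs in the defaults section of multipath.conf. Reload multipath so the friendly name takes effect, then list the maps to confirm (the mpatha name used below is from my setup and may differ on yours):

# service multipathd reload
# multipath -ll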

Out of habit I use cfdisk and create an msdos-style partition table. If you want to future-proof, use GPT.

Create a partition using all the space on the drive, or in our case drives (RAID 5).

# cfdisk /dev/mapper/mpatha

Once the partition is created, we need to add a filesystem; ext4 will do just fine.

# mkfs.ext4 /dev/mapper/mpathap1

Adding a label to the partition will make referencing it much easier.

# e2label /dev/mapper/mpathap1 /iscsi

Create a mount point.

# mkdir /iscsi

Edit fstab accordingly to make the storage available at boot.

# vi /etc/fstab

LABEL=/iscsi /iscsi ext4 defaults 1 2

# mount /iscsi
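
Double-check the mount before carving it up:

# df -h /iscsi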

We still need a data and an ISO store for our cluster, so create these directories.

# mkdir /iscsi/data
# mkdir /iscsi/iso

These mount points are going to be used solely for oVirt, so let’s adjust the permissions of our new folders and make sure they are owned by vdsm:kvm (UID/GID 36).

# chown -R 36:36 /iscsi
# chmod -R 775 /iscsi

Your storage is now prepared and ready to be served up via NFS.

10.5 Prepare to share!

Now that we have our storage attached and given it the appropriate permissions, let’s configure NFS.

First of all, we need to edit our exports file. This tells NFS where our storage is and allows it to be shared with the cluster. The * tells NFS it can be shared with anyone; ideally this will be adjusted to reflect hostnames or IP ranges, but our goal is just to get it working first. We won’t be using /rhev/storage, but it is there in case we need it later on.

TODO: Change * to 172.31.x.x, as this is a security concern.

The options applied to the shares make them writable and force everyone to the same anonymous user; the anonuid/anongid options guarantee all created files and directories are owned by vdsm:kvm (UID/GID 36).

# vi /etc/exports

/rhev/storage *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
/iscsi/iso *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
/iscsi/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

Apply changes

# service nfs reload
# service nfs restart

After the service is reloaded, you will be able to access the shares at FQDN:/iscsi/data, where FQDN is the name of this node.
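
On the storage node, exportfs will list what is being served; from any other node in the cluster, showmount should show the same thing (FQDN being this storage node):

# exportfs -v
# showmount -e FQDN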

10.6 Attach the storage domains

Log on to your web console and click the Storage tab. Right-click > New Domain.

New Storage Domain

Enter a name, set the function to Data/NFS, and use this node as the host. Set FQDN:/iscsi/data as the export path.

Example:

Name: iscsi_data
Data Center: Default
Domain Function: Data/NFS
Use Host: hostname.itp.internal
Export Path: hostname.itp.internal:/iscsi/data

Click OK, then add another domain for ISOs using the same settings, substituting iso where applicable.

The storage domains will turn to a “green light,” and you can start using them.

Attached Storage Domain

11. ISO uploads

Log on to your oVirt management server (the one with the web console on it) and edit isouploader.conf to reflect the name of your ISO storage domain.

# vi /etc/ovirt-engine/isouploader.conf

iso-domain=iscsi_iso

After that change, engine-iso-uploader will default to that ISO domain when uploading ISO files.
Make a download directory in /root or your sudoer’s home directory.

# mkdir /root/iso_dump
# cd /root/iso_dump
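
If you’re not sure what the ISO domain ended up being called, the uploader can list the ISO domains it knows about (run this on the engine host; it will prompt for the admin password):

# engine-iso-uploader list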

Try it out; I recommend grabbing an ISO you plan on using for your VMs.

# wget http://mirror.stanford.edu/yum/pub/centos/6.4/isos/x86_64/CentOS-6.4-x86_64-bin-DVD1.iso
# engine-iso-uploader -i iscsi_iso upload CentOS-6.4-x86_64-bin-DVD1.iso

Success!

If you plan on running Windows on oVirt in the future, grab the VirtIO driver disc while you’re at it.

# wget http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-52.iso
# engine-iso-uploader -i iscsi_iso upload virtio-win-0.1-52.iso

Success!

Go back to your web console; you should see the uploaded ISO files under Storage > iscsi_iso > Images.

Attached ISO images

12. Spice-xpi

In order to view a VM session via the console, you need a browser with the spice-xpi plugin installed. There are a few methods to get this working, but I find the easiest way to do this quickly is to just set up a VM running Fedora and grab the plug-in from their repository. If you happen to be using a Linux distribution that has this package available, you could optionally just get it from there. Apparently you can install the needed packages on a Windows workstation, but I haven’t had any luck with it. Check out this guide for more information:

http://www.oVirt.org/How_to_Connect_to_SPICE_Console_With_Portal

I won’t explain installing a guest operating system here, since you’re probably smart enough to do this. The ISO can be found here:

http://fedoraproject.org/

The Fedora repo has the plug-in; more info can be obtained by following this link.

http://spice-space.org/download.html

13. Let’s create a VM

Load up Fedora, or your SPICE-enabled browser, and surf over to oVirt’s web console.

Create a new disk: Disks > right-click > Add, then provision the storage for your first VM.

Add Virtual Disk

If you are setting up a Linux guest choose VirtIO as the interface, and allocate the appropriate size disk for your VM. Select the Data Center and Storage Domain you setup earlier. Make it bootable if this is the disk you plan to install the operating system on.

13.1 Create the virtual machine

After creating your virtual disk, go to ‘Virtual Machines’ and right-click > New Server.

New Server Virtual Machine

Fill in the details and go to ‘Boot Options.’ Choose CD-ROM as your first boot device and Hard Disk as the second. Attach the ISO you uploaded in section 11.

VM Boot Options

Click OK, click the VM then Disks, and attach the disk you created a moment ago. Right-click the new VM > Run.

The VM will take a minute to initialize; when the status is ‘Up,’ click the console icon to view the session. SPICE will launch the virtual machine’s console and it is business as usual: from here you can go through the standard Linux installation. After installing and configuring a VM, I generally stop using SPICE and log in via SSH, or RDP in the case of a Windows guest.

Spice Console

14. Linux templates.

To deploy CentOS virtual machines in a hurry, I like to have a template to create new instances from. For webservers I’ll update and install a complete LAMP stack before I run through these steps. For this example, I’ll keep it bare-bones.

Log in to the server you wish to make a template from. Before doing this, be aware it will completely unconfigure the server, so only do this on a fresh install.

If you want to save some time upgrading each server after creation, upgrade first.

# yum -y upgrade

Flag the system to be unconfigured on next boot.

# touch /.unconfigured

Remove any SSH host keys.

# rm -rf /etc/ssh/ssh_host_*
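
Optionally, also clear the persistent net rules so clones don’t come up with their NIC renamed to eth1 (the file is regenerated on boot):

# rm -f /etc/udev/rules.d/70-persistent-net.rules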

Then shut down the virtual machine.

# poweroff

Easy, wasn’t it? Now go to your web console > Virtual Machines > right-click the target VM > Make Template

Give the template a name, description and assign it to a cluster, click OK.

VM Template

After a few minutes the template will be created; be sure it has finished the creation process before trying to use it.

14.1 Deploy the template

For the sake of brevity, just go through the same steps as described in section 13.1. This time click ‘Based on Template’ and choose the template you created in section 14.

Create VM Based on Template

After the creation of your new VM, starting it will launch CentOS configuration mode and prompt you for all the details it needs. Fill in the hostname and IP addresses, and select the services you wish to start. Cool stuff. Every time you need a new server, you can just create one based on this template or another custom stack you have created.

15. Windows Guests

oVirt’s virtualization capabilities are based on the QEMU/libvirt stack. The recommended VirtIO drivers used by the stack provide some really great performance for guests, but unfortunately Windows doesn’t include these drivers in the default installation. So, if you want to get good performance out of your Windows guests, you have to do a bit of a workaround.

For a review on how to add a disk and create virtual machines in OVirt, see section 13.

This method is proven for Server 2012 installs as well as Server 2008. You used to be able to shortcut this by loading the drivers during install to make the disk visible, but because of a change in how drivers are signed, we are doing it the long way.

15.1 Create disks

Create a new disk in the standard fashion. Disks > Right-click > Add. Select an appropriate size HDD for your install.

This time choose IDE as the interface. IDE is pretty slow, but selecting it will allow Windows to detect the disk. If you select VirtIO, Windows will not be able to see the disk during installation.

Create a second disk, 1 GB in size; leave this one as VirtIO.

15.2 Create the Windows VM

Add a new virtual machine, filling in the correct details. Boot from a Windows ISO or PXE-boot the installation media; the second boot device should be your hard disk. Click OK.

Attach both disks to the VM if they are not already attached.

15.3 Add your NIC

Select the VM you just created and go to ‘Network Interfaces.’ Add a new NIC; do not set its type to ‘Red Hat VirtIO’ yet, since Windows has no driver for it at install time. Activate it and click OK.

15.4 Install

Launch the VM and the SPICE console; you will be greeted by the Windows installer. Follow the prompts to install Windows, and shut down after you have confirmed everything is working.

Right-click the Windows VM, select Edit > Boot Options, and attach the VirtIO driver ISO we uploaded in section 11. If you did not upload this ISO, go back and do it now. Change your NIC so that it is using the VirtIO interface.

Start the VM; Windows should gripe about the VirtIO disk and networking. Use Device Manager to install the drivers from the ISO: install the SCSI and NIC drivers. Reboot Windows, then remove the 1 GB disk and change the disk Windows is installed on to VirtIO.

Things should be much speedier now.

Final Words

Repeating these steps, I was able to build out across six blades successfully. There are still some final touches to be done on the project, but it has been rock solid thus far. Thanks for reading; as always, questions and comments are welcome.

About

Tony is a system administrator from Seattle, WA. He specializes in secure, minimal, Linux installations.
