
Setup NFS4 & LiveMigration with OpenNebula 2.2.1 on Ubuntu 11.04 Natty

About two months ago we set up an experimental internal cloud environment for one of our clients using pure commodity servers and open source technology (OpenNebula 2.2.1 on Ubuntu 11.04 Server). Since then the proof-of-concept environment has been working pretty well for our client, hosting internal QA and development workloads as well as some virtualized IT appliances. However, the initial setup was based on the SSH transport, which works well in a small environment but struggles with large VMs. The main disadvantages of the SSH-based configuration are:
  • Performance with large VM images - With the ssh transport, each VM image has to be transferred from the repository to the cloud node by the controller, and then retrieved back from the node (if the save flag is on). Obviously this process consumes a lot of bandwidth with large images.
  • No live migration - Live migration requires VM image storage shared between cloud nodes, and it is the key feature that allows a cloud operator to maintain the no-downtime illusion and elasticity.
  • Vulnerable to data loss - If, like our client, you run your cloud on top of commodity servers or even PCs, then most likely your cloud nodes have no RAID disks or much other hardware redundancy. Since the ssh-based solution transfers the image to the node and lets the node run the VM off its local hard disk, a hard drive failure while the VM is running can lose all or part of your VM data.
To solve these problems the OpenNebula and Linux communities already provide several solutions; in this post I would like to introduce the most straightforward alternative: NFSv4. With NFS, a server can export part of its own file system to client machines, which in turn mount the exported file system locally and use it just like any other local file system. Originally we expected the change to be quick and simple, but it turned out there were a few interesting hurdles, some OpenNebula-specific and some Ubuntu-specific. I hope this post helps you implement a similar solution with little effort.


First: NFS Server

Since this NFS server will be your centralized storage server, a little extra investment in it will not hurt. We recommend at least 2 physical NICs, dual power supplies, and RAID 1/5/10 disks.

(In our environment we chose software RAID 5 with 3 disks and 1 hot spare)
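If you want to build a similar array, a minimal sketch with mdadm looks like this (the device names sdb through sde and the ext4 file system are assumptions; adjust them to your hardware):

# Create a 3-disk RAID 5 array with 1 hot spare (device names are examples)
sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Format the array and mount it as the export root
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /storage
sudo mount /dev/md0 /storage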

Install NFS4 Server

sudo apt-get install nfs-kernel-server

In /etc/default/nfs-common turn on IDMAPD:

sudo vi /etc/default/nfs-common

# NFSv4 specific
NEED_IDMAPD=yes
NEED_GSSD=no # no is default
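One thing worth double-checking while you are in there: NFSv4 id mapping only works when the server and all clients agree on the idmapd domain. A minimal sketch (example.local is a placeholder; use the same value on every machine):

sudo vi /etc/idmapd.conf

# Must be identical on the server and all clients (example.local is a placeholder)
Domain = example.local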

Export the /storage folder:
sudo vi /etc/exports
/storage [nfs client or your network](rw,async,no_root_squash,insecure,no_subtree_check,anonuid=1001,anongid=1001)

Here in /etc/exports we use the async flag to improve NFS performance and map the default uid and gid to oneadmin/cloud (1001/1001).

# Refresh export table

sudo exportfs -r
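To confirm the export took effect, print the active export table:

# Verify the export and its options
sudo exportfs -v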

Create the cloud group and oneadmin user

sudo groupadd --gid 1001 cloud

sudo useradd --uid 1001 -g cloud oneadmin
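A quick sanity check that the ids line up with the anonuid/anongid values in /etc/exports:

# Should print uid=1001(oneadmin) gid=1001(cloud)
id oneadmin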

Now you are done with the server setup.

Second: NFS Client (OpenNebula controller and nodes)

Install NFS Client

sudo apt-get install nfs-common

In /etc/default/nfs-common turn on IDMAPD:

# NFSv4 specific
NEED_IDMAPD=yes
NEED_GSSD=no # no is default


# Create the mount point and add the mount
sudo mkdir -p /storage
sudo vi /etc/fstab
[nfs server]:/storage /storage nfs4 rw,_netdev,rsize=1048576,wsize=1048576,auto 0 0

Restart the OS:
sudo reboot
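If you would rather not reboot, you can activate the fstab entry directly and confirm the mount:

# Mount everything listed in /etc/fstab, then verify
sudo mount -a
df -h /storage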

Now let's test the NFS mount:
# NFS performance test (writes ~100 MB)
dd if=/dev/zero of=/storage/100mb bs=131072 count=800
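The command above writes roughly 100 MB over the mount; you can gauge read throughput the same way:

# Read the test file back
dd if=/storage/100mb of=/dev/null bs=131072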

Shut down OpenNebula and all VMs, then move /srv/cloud/one/var to the NFS mount:

sudo mkdir -p /storage/one

mv /srv/cloud/one/var /storage/one/var

ln -s /storage/one/var /srv/cloud/one/var

(Also merge the var content on the cloud nodes into the same directory if you have been running in ssh mode.)
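If the var tree ends up with the wrong owner after the move, fix it so oned can still reach its files (this assumes the oneadmin/cloud ids created earlier):

# Make sure the shared var tree is owned by oneadmin:cloud (1001:1001)
sudo chown -R oneadmin:cloud /storage/one/var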

Turn Off Dynamic Ownership for QEMU on Nodes

sudo vi /etc/libvirt/qemu.conf

# The user ID for QEMU processes run by the system instance
user = "root"

# The group ID for QEMU processes run by the system instance
group = "root"
dynamic_ownership = 0

Otherwise you might hit an "unable to set user and group to '102:105'" error when migrating VMs.
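Remember to restart libvirt on each node so the qemu.conf change takes effect (on Ubuntu 11.04 the service is libvirt-bin):

sudo service libvirt-bin restart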

Fix LiveMigration Bug

Right now, if you try to live-migrate with this setup, you will get a "Cannot access CA certificate '/etc/pki/CA/cacert.pem'" error, since QEMU by default is configured to use TLS. Since with OpenNebula you have already configured bi-directional passwordless ssh access between the controller and nodes, the easiest fix is to ask QEMU to use ssh instead of TLS.

vi /srv/cloud/one/var/remotes/vmm/kvmrc

export LIBVIRT_URI=qemu+ssh:///system
export QEMU_PROTOCOL=qemu+ssh
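You can sanity-check the ssh transport from the controller before attempting a migration (node01 is a placeholder for one of your node hostnames):

# Should list the node's domains without prompting for a password
virsh -c qemu+ssh://node01/system list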

Disable AppArmor for Libvirtd on Nodes


By default Ubuntu's AppArmor will prevent libvirtd from accessing the NFS mount. You can either add rw permission for the directory (see the sketch after the commands below) or simply disable AppArmor for libvirtd. I chose to disable it:


sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
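If you would rather keep AppArmor enabled, a sketch of the permissive alternative mentioned above (assuming the /storage/one/var path used in this post) is to grant libvirt guests rw access in the qemu abstraction and reload the profile:

sudo vi /etc/apparmor.d/abstractions/libvirt-qemu

# Allow VM images on the NFS mount
/storage/one/var/** rw,

# Reload the profile so the change takes effect
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd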



At this point you should have a working NFS-based OpenNebula setup. Now uncomment the tm_nfs section in oned.conf, then add your hosts back using the new NFS transport:

onehost create [node hostname] im_kvm vmm_kvm tm_nfs
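Once the hosts are back, a quick end-to-end check is a live migration from the CLI (the VM id 0 and host id 1 below are placeholders; pick real ids from the list commands):

# Find a running VM and a target host, then live-migrate
onevm list
onehost list
onevm livemigrate 0 1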

You should now be able to use all the features OpenNebula provides, including live migration, and you should see a significant performance improvement in VM provisioning. Have fun and enjoy :-)


