How to create a pool of VMs for a MAAS environment

We explain how to create a pool of VMs for a MAAS environment.

This can be achieved in two ways:

Current method: use MAAS PODs

Note

As of 2018-07-05 these instructions are not fully validated. Stay tuned for updates.

Install physical node

Before installing the node, configure the storage system so that two LUNs are presented to the node: one will possibly be used for LXC, the second will be used as the destination for KVM disks.

Install node using MAAS:

  • disk: configure one LUN such that it is mounted under path /vm
  • network: create bond0, do not assign any address
  • network: create all needed bond0.<VLANid>
  • network: create a bridge br<VLANid> for each bond0.<VLANid>; configure bond0.601 with a static IP
  • network: create a bridge for bond0 as well and configure it with a static IP (a quick verification sketch follows this list)
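
After the node is deployed, it is worth checking that the bond, the VLAN interfaces and the bridges came up as expected. A minimal check, assuming VLAN 601 as in the list above (repeat for the other VLANs):

ip -d link show bond0
ip -d link show bond0.601
bridge link show
ip addr show br601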

Configure physical node

Install QEMU and virsh as per instructions below: Install qemu-kvm and virsh.

Configure the default pool such that it uses the LVM volume, as per instructions below: Change the default virsh pool to have the VM on an external data storage

Add the SSH public key of the maas user on the MAAS region controller node to ~ubuntu/.ssh/authorized_keys on the KVM host:

  • Make sure the maas users on the MAAS rack controllers share this same key; otherwise you also need to add their SSH public keys to ~ubuntu/.ssh/authorized_keys (a sketch of one way to copy the key follows this list)
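
A minimal sketch of one way to copy the key, assuming the maas user's home directory is /var/lib/maas on the region controller and that a key pair already exists there (paths may differ on your installation):

# on the MAAS region controller: print the maas user's public key
sudo -Hu maas cat /var/lib/maas/.ssh/id_rsa.pub
# on the KVM host: append the printed key
echo '<paste the key here>' >> ~ubuntu/.ssh/authorized_keys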

From MAAS region/rack controllers, verify:

sudo -Hu maas sh -c 'virsh -c qemu+ssh://ubuntu@10.4.0.125/system list --all'

You may need to change permissions on the libvirt socket in order to connect from a remote virt-manager:

chmod 777 /var/run/libvirt/libvirt-sock

Register POD

You should now be able to register your POD by following the official instructions: https://docs.maas.io/2.4/en/nodes-comp-hw

Note that the POD name cannot be changed after the POD has been created.
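
If you prefer the CLI to the web UI, the POD can also be registered through the pods endpoint. A hedged example, assuming a CLI profile named maas-admin and the same KVM host as in the verification step above (parameter names can vary between MAAS releases, so check the built-in help or the API reference for your version first):

maas maas-admin pods create type=virsh power_address=qemu+ssh://ubuntu@10.4.0.125/system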

Create a VM

In POD vocabulary, you compose a VM. Once composed, the VM will appear in the Nodes list in MAAS.

Give your machine a name and select CPU, RAM and disk size(s): the node will be commissioned and will then power off. You can later deploy it like any other node.
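
Composing can also be done from the CLI instead of the web UI. A rough sketch, assuming a profile named maas-admin, a POD with ID 1, memory expressed in MiB and storage in GB (the exact parameter names and storage syntax may differ between MAAS versions, so check the API reference for yours):

maas maas-admin pod compose 1 cores=2 memory=4096 storage=default:32 hostname=bulk-VM-01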

Old method: manual installation

Configure and mount LVM volume

We have created a volume on our Fibre Channel SAN and presented it to the VM host. First we have to identify it with the multipath tool:

multipath -ll

mpatha (3600a0980005de00b000010bb579774a9) dm-1 DELL,MD38xxf
size=4.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=14 status=active
| |- 1:0:0:0 sdb 8:16  active ready running
| |- 8:0:5:0 sdl 8:176 active ready running
| |- 1:0:1:0 sdd 8:48  active ready running
| `- 8:0:6:0 sdn 8:208 active ready running
`-+- policy='round-robin 0' prio=9 status=enabled
  |- 1:0:2:0 sdf 8:80  active ready running
  |- 8:0:7:0 sdp 8:240 active ready running
  `- 1:0:7:0 sdj 8:144 active ready running

In this example the volume is labeled mpatha. Now let’s configure LVM with these steps:

apt-get install lvm2
# Create a volume group on the multipath device identified above
vgcreate KVM_lvm /dev/mapper/mpatha
systemctl enable lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.socket
systemctl start lvm2-lvmetad.service
systemctl start lvm2-lvmetad.socket
# Create a logical volume spanning all free space, then format and mount it
lvcreate -n VM_lvm -l 100%FREE KVM_lvm
mkdir /mnt/vm
mkfs.ext4 /dev/KVM_lvm/VM_lvm
mount /dev/KVM_lvm/VM_lvm /mnt/vm/

Check new partition:

df -h

...

/dev/mapper/KVM_lvm-VM_lvm                                                                4.0T   67M  3.8T   1% /mnt/vm
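
The mount above will not survive a reboot. If you want it to persist, one option (a sketch; double-check the device path shown by df on your system) is to add an fstab entry:

echo '/dev/mapper/KVM_lvm-VM_lvm /mnt/vm ext4 defaults 0 2' >> /etc/fstab
mount -a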

Install and configure node

Install the physical node the way you see fit.

Install QEMU and virsh as per instructions below: Install qemu-kvm and virsh.

Configure the default pool such that it uses the LVM volume, as per instructions below: Change the default virsh pool to have the VM on an external data storage

Clone the VM from a given template

You can start a VM, make all the needed changes and then use it as a template for bulk creation. Before cloning the VM, make sure you enable boot from the network cards (pick the right ones) and disable boot from disk.

Then execute, for example:

for i in `seq -w 01 99`; do virt-clone --original template-VM --name bulk-VM-$i --file /mnt/vm/bulk-VM-$i.qcow2; done
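
As a follow-up you can check that the clones exist and collect their MAC addresses, which you will need when adding the machines to MAAS (virsh domiflist prints one line per network interface, including its MAC):

virsh list --all | grep bulk-VM
for i in `seq -w 01 99`; do virsh domiflist bulk-VM-$i; done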

Add to MAAS environment

From the MAAS node, verify that you can connect to the remote virsh instance (you may first need to add your SSH key to ~root/.ssh/authorized_keys on 10.X.Y.Z):

virsh -c qemu+ssh://root@10.X.Y.Z/system list --all

Select one machine from the list and add it to your MAAS environment:

# setting all the $VARS properly you can do it by CLI
export REGIONSHORT=pa1       # Used to construct machine name in MAAS
export VMNAME=VM-100         # Name of machine within virt-manager
export VMHOST=${REGIONSHORT}-${VMNAME}  # Name of machine within MAAS
# From virt-manager get the MAC of the card connected to the bridge where PXE will happen
export MAC_ADDRESS=01:02:03:04:05:06
export MAAS_USER=maas-admin
export ARCH=amd64
export VMPROVIDER=10.X.Y.Z
export VMPASS=*****
export ZONE=Palermo
maas $MAAS_USER machines create architecture=$ARCH mac_addresses=$MAC_ADDRESS hostname=$VMHOST power_type=virsh power_parameters_power_address=qemu+ssh://root@$VMPROVIDER/system power_parameters_power_id=$VMNAME power_parameters_power_pass=$VMPASS zone=$ZONE

or refer to https://docs.ubuntu.com/maas/2.1/en/nodes-add
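
Once created, you can check from the CLI that the machine shows up and follow its status while it is being commissioned. A small sketch, assuming the same profile and variables as above (if the hostname filter is not supported by your MAAS version, grep the full listing instead):

maas $MAAS_USER machines read hostname=$VMHOST | grep -E '"hostname"|"status_name"'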

NB: select the correct CPU architecture

It’s important to select a KVM-compatible CPU model if you’ll use the OpenStack defaults for nova (see https://docs.openstack.org/newton/config-reference/compute/hypervisor-kvm.html):

e.g. in virt-manager, set the CPU model to kvm64
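
To check (or change) the CPU model of an existing VM you can look at its domain XML; virt-xml, shipped with the virtinst package installed in the next section, can edit it in place. A sketch, assuming a clone named bulk-VM-01 that is currently shut off:

virsh dumpxml bulk-VM-01 | grep -A3 '<cpu'
virt-xml bulk-VM-01 --edit --cpu kvm64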

Common bits

Install qemu-kvm and virsh

Install the following packages:

apt install libvirt-bin qemu-kvm virtinst bridge-utils cpu-checker virt-manager

Verify installation:

kvm-ok
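
If hardware virtualization is available and enabled in the BIOS, kvm-ok should report something like:

INFO: /dev/kvm exists
KVM acceleration can be used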

Change the default virsh pool to have the VM on an external data storage

Backup first:

virsh pool-dumpxml default > pool.xml

then:

virsh pool-destroy default
virsh pool-undefine default
virsh pool-define-as --name default --type dir --target /mnt/vm/
virsh pool-autostart default
virsh pool-start default
# optional, only if you already had VMs on a different pool:
# virsh pool-define-as --name <old pool> --type dir --target <old mount point>
# virsh pool-autostart <old pool>
# virsh pool-start <old pool>

check:

virsh pool-list
virsh pool-info default
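
Volumes created from now on (for example by virt-clone, or when composing a VM through a POD) should land in the new pool; you can confirm this with:

virsh vol-list default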