You probably know the situation: You love FreeBSD, you love ZFS as a filesystem (maybe even as the root filesystem?), but when it comes to virtualisation, FreeBSD users don't have a lot of choice.
Getting it to run on a FreeBSD 9.0 amd64 server (without any GUI) was less complicated than I thought. However, there are some minor things that can easily be missed in the relevant chapters of the FreeBSD Handbook and the FreeBSD Wiki, and neither of them actually tells you how to start your first VM, which is anything but intuitive. (That can be automated later, though.)
Let me walk you through the required steps from installation to getting your first VM on the way.
I'm assuming that your OS is FreeBSD 9.0 amd64, that your system is up to date, and that you have the full source tree in /usr/src and the ports tree in /usr/ports.
After that, install VirtualBox:
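A minimal sketch via the ports tree (note that the kernel modules live in a separate port):

```shell
# Build and install the VirtualBox userland from ports...
cd /usr/ports/emulators/virtualbox-ose && make install clean
# ...and the matching kernel modules.
cd /usr/ports/emulators/virtualbox-ose-kmod && make install clean
```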
The whole compilation will take a while, and you'll be asked to configure and compile a number of dependencies. However, it should run through from start to finish just fine.
Once this is done, load the new kernel modules in this order:
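In concrete terms:

```shell
# vboxdrv must come first; the two network modules depend on it.
kldload vboxdrv
kldload vboxnetflt
kldload vboxnetadp
```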
We'll add all this to the boot loader and startup configuration files later, when we're sure everything is working fine.
If everything worked well thus far, we can carry on with the VM creation.
You should have an ISO image of the guest OS you'd like to run. I'm using FreeBSD in this example too, though the Jail evangelists will start throwing things at me if they read this (and rightly so).
Before we continue, we need to understand that VirtualBox uses a sort of internal registry, which contains references to common network settings, paths and of course the VMs. Therefore, each VM must explicitly be "registered" (storage devices too, though it happens implicitly for them).
This creates a registry entry for a VM called "testMachine":
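Assuming a 64-bit FreeBSD guest (check VBoxManage list ostypes for the exact identifier):

```shell
# Create the VM and register it with VirtualBox in one go.
VBoxManage createvm --name testMachine --ostype FreeBSD_64 --register
```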
There are plenty of different OS types to choose from. You can get a full list with VBoxManage list ostypes. The ostype is essentially choosing a number of default settings, some of which you will want to override in most cases. Let's do that now:
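A sketch matching the settings described below; adjust em0 to whatever your host's actual NIC is called:

```shell
# Override a few of the ostype defaults in a single call.
VBoxManage modifyvm testMachine --memory 1024 --ioapic on --cpus 1 \
    --chipset ich9 --nic1 bridged --bridgeadapter1 em0
```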
In this step we set the memory to 1 GB, switch on the IOAPIC (without this, FreeBSD 9 will panic during boot!), assign one virtual CPU, choose the Intel ICH9 chipset, and add a single bridged interface, which will share the em0 host adapter.
Next, you'll need a disk of course. Let's use VBox's sparse image files for now, which grow over time. (Logically this isn't the ideal choice for a server setup, because it will eventually scatter fragments of your image all over the disk as the file grows, but right now it's nice and easy. You can use iSCSI, raw devices and other common disk image types, if you like.)
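A sketch with an arbitrary 16 GB maximum image size (the .vdi path is up to you):

```shell
# Create a dynamically allocated (sparse) VDI image, 16 GB max.
VBoxManage createhd --filename testMachine.vdi --size 16384
# Add a SATA controller to the VM...
VBoxManage storagectl testMachine --name "SATA Controller" --add sata --controller IntelAhci
# ...and attach the image to it as the first disk.
VBoxManage storageattach testMachine --storagectl "SATA Controller" \
    --port 0 --device 0 --type hdd --medium testMachine.vdi
```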
This looks like a lot of effort just to add a disk to the VM, and it is. What you're actually doing here is creating a disk image, adding a controller of your choice to the VM, and attaching the disk image to that controller.
Next up: CDRom to boot and install the guest OS from:
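Something along these lines (the ISO path is a placeholder for wherever your installation image lives):

```shell
# Add an IDE controller and insert the installation ISO into its virtual tray.
VBoxManage storagectl testMachine --name "IDE Controller" --add ide
VBoxManage storageattach testMachine --storagectl "IDE Controller" \
    --port 0 --device 0 --type dvddrive \
    --medium /path/to/FreeBSD-9.0-RELEASE-amd64-disc1.iso
```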
Here again we add a controller (this time IDE not SATA), attach the CDRom drive to it, and put the FreeBSD installation disk into the virtual tray.
That's all for the preparation.
First, we're using the manual approach. FreeBSD's VirtualBox port comes with neater rc scripts, but they'll also background the process, which is not a good idea right now.
So let's start our VM already:
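The FreeBSD port patches VBoxHeadless with VNC support, so the invocation looks roughly like this:

```shell
# Run in the foreground (no &) so we can watch the output and CTRL-C it later.
VBoxHeadless --startvm testMachine --vnc --vncport 5001 --vncpass <yourpassword>
```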
Note that we add a VNC server here, which listens on port 5001 and uses <yourpassword> for authentication.
If you see this and can't connect with your preferred VNC viewer, make sure to check your firewall settings.
The console output on the host will give some diagnostic details once you're connected via VNC. All other host diagnostics go into the log file, in this case VirtualBox VMs/testMachine/Logs/VBox.log. Conveniently, the logs are rotated on every VM start, which makes it very easy to find the bits relevant to your current (or previous) session.
If everything went well and you're connected via VNC now, you should see FreeBSD booting into the installer just fine. You can install FreeBSD as usual. No pitfalls or caveats here.
When you're finished with the installation and the VM wants to reboot, make sure to hit CTRL-C in the host session (VBoxHeadless) to interrupt the execution and avoid a reboot (it would otherwise boot from CD again).
Obviously there's no point booting from CDRom for future sessions (unless the VM runs off a LiveCD). Make sure it boots from disk from now on:
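A sketch, assuming the IDE controller name from earlier:

```shell
# Eject the ISO from the virtual drive and boot from disk only.
VBoxManage storageattach testMachine --storagectl "IDE Controller" \
    --port 0 --device 0 --type dvddrive --medium emptydrive
VBoxManage modifyvm testMachine --boot1 disk --boot2 none --boot3 none --boot4 none
```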
Now if you start it again as shown above, it should boot from disk into your freshly installed guest:
Tip: For future setups you probably want to modify your VMs before installing the guest OS like this:
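Something like:

```shell
# Disk first, DVD second: the DVD only wins while the disk is not bootable.
VBoxManage modifyvm testMachine --boot1 disk --boot2 dvd --boot3 none --boot4 none
```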
This ensures that it will always boot from disk, unless the disk isn't bootable (or is empty), which is the case when you first install the guest OS.
Obviously CTRL-C in the host session is like a powercut for the VM. There's a more elegant way to shut down the VM. For example you can simulate pressing the power button, which will trigger a clean shutdown in FreeBSD:
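Like so:

```shell
# Simulate pressing the power button; the guest's ACPI handling does the rest.
VBoxManage controlvm testMachine acpipowerbutton
```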
At this point, you can already run VMs as any user who is member of the group vboxusers. However, you cannot currently use any form of network device in unprivileged VMs. Changing that is simple:
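As root on the host:

```shell
# Let members of vboxusers open the VirtualBox network control device.
chown root:vboxusers /dev/vboxnetctl
chmod 0660 /dev/vboxnetctl
```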
And to persist these changes, add the following lines to /etc/devfs.conf:
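These mirror the one-off chown/chmod above in devfs.conf syntax:

```
own     vboxnetctl  root:vboxusers
perm    vboxnetctl  0660
```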
That's all there is to it. Try it: create a user, add them to vboxusers, log in as that user, and go through the normal VM creation steps above.
You will have a set of VMs that you want to keep running, even after a server restart, and which should therefore be included in your normal startup/shutdown procedure. Luckily the port maintainers have thought of this too, and treat it very similarly to how you'd deal with jails. Here's an example for this particular VM.
First add this to /boot/loader.conf to load the virtualbox kernel module:
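For example, together with the matching rc.conf entries. The per-VM variable names below follow the jail-like pattern used by the port's vboxheadless rc script; double-check the script itself for the exact names, and note that the user "vboxuser" is just an example:

```shell
# /boot/loader.conf
vboxdrv_load="YES"

# /etc/rc.conf
vboxheadless_enable="YES"
vboxheadless_machines="test"
vboxheadless_test_vmname="testMachine"
vboxheadless_test_user="vboxuser"
vboxheadless_test_vncport="5001"
```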
After you've got these changes in place, you can refer to the VM by its chosen short name (here "test", which points to "testMachine") when invoking the rc scripts manually to start or stop individual VMs.
Without a given name, the rc scripts will start or stop all of your defined VMs.
Another useful command is service vboxheadless status which lists all of your registered VMs together with their current status (running, powered off).
The Virtio driver performs a lot better than the Intel (em) and AMD PCNet (pcn) drivers, at least within a FreeBSD guest. Actually it's easily twice as fast in my tests.
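On FreeBSD 9 the guest doesn't ship virtio drivers, so they need to come from ports first. Inside the guest:

```shell
# Inside the guest: build and install the virtio kernel modules from ports.
cd /usr/ports/emulators/virtio-kmod && make install clean
```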
Once that's done, add these lines to your /boot/loader.conf:
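That is, the virtio core, the PCI transport, and the vtnet network driver:

```shell
virtio_load="YES"
virtio_pci_load="YES"
if_vtnet_load="YES"
```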
Then shut down your VM and change its network device type:
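For the first (and in this setup only) NIC:

```shell
# With the VM powered off, switch the NIC emulation to virtio.
VBoxManage modifyvm testMachine --nictype1 virtio
```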
You'll see in many places that you should put virtio_blk_load="YES" into your loader.conf as well. However, this is not relevant for VirtualBox, which does not currently support virtio block controllers. There's no need to load a module that will never be used.
Essentially you don't need to do much at all to run Xen VMs in VBox. However, these prerequisites will help:
All these changes can be made in /boot/grub/menu.lst before converting the image. Or if the timeout is set long enough, you can change these settings at boot time.
Once you think your VM is ready for conversion, this is how you transform the image:
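Assuming a raw Xen disk image (the file names here are placeholders):

```shell
# Convert the raw image into a VDI that VirtualBox understands.
VBoxManage convertfromraw xenguest.img xenguest.vdi --format VDI
```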
You can then attach this disk to a VM's storage controller as described earlier, and chances are that it will boot just fine from it, if you attached it as the first disk device.
This works fine for CentOS VMs, which previously ran on Xen. However, if you see problems mounting root during startup, it's likely because fstab is referencing devices by name (like xvda, xvdb etc) rather than their UUIDs. Change them to sda, sdb etc, because VirtualBox doesn't know anything about Xen-specific devices.
According to the FreeBSD Wiki, there are a number of features, so-called Guest Additions, which are interesting in particular when you run Windows VMs on desktop environments. I haven't tried them, because I haven't got a FreeBSD desktop to play with at the moment.
For servers, they offer only one useful feature: synchronisation of the guest's clock with the host's. For me personally that's not worth compiling a load of stuff; NTP will do for now.
However, the above link looks straight-forward enough, if you want to give it a go.
To my knowledge, VRDE (aka RDP) for FreeBSD is not supported by Oracle. If this has changed, or someone has reverse-engineered it, please point me in the right direction.
However, VNC will usually be enough, and is very easy to configure. (See configuration examples above)
This can occasionally come in handy for a soft reboot without logging in via SSH or VNC.
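One way to do this is to send a Ctrl-Alt-Del to the guest, sketched here with the classic press-and-release scancode sequence (verify the sequence against the VBoxManage documentation before relying on it):

```shell
# Send Ctrl-Alt-Del to the guest: make codes 1d 38 53, break codes d3 b8 9d.
VBoxManage controlvm testMachine keyboardputscancode 1d 38 53 d3 b8 9d
```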
The snapshot functionality of VirtualBox goes a bit further than just cloning a disk image. It actually includes the whole configuration of the VM as well, which is very useful if you are trying to find the ideal configuration for your scenario.
However, please keep this in mind:
I'm not quite sure what causes this, but I can reproduce it with both ZFS and UFS as the host filesystem the images are stored on. The error I'd get would be along the lines of:
I believe --pause ensures a consistent state, which is safer than trying to create a snapshot on the fly anyway. And the actual pause is in the order of milliseconds in my tests. In most cases that will be perfectly acceptable.
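For example, with a hypothetical snapshot name:

```shell
# Pause the VM briefly, take the snapshot, then resume it automatically.
VBoxManage snapshot testMachine take pre-upgrade --pause
```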
(Even if it took a few seconds, it would be fine. You won't be running mission-critical things on just a single VM, will you?)
This saved me a lot of time setting up the first VM: http://stdioe.blogspot.co.uk/2012/01/creating-virtual-machine-with.html
Oracle's CLI reference: http://www.virtualbox.org/manual/ch08.html
FreeBSD Handbook's VirtualBox stuff: http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/virtualization-host.html
FreeBSD Wiki: http://wiki.freebsd.org/VirtualBox