
Linode turns 12! Here’s some KVM!

Happy 12th birthday to us!

Welp, time keeps on slippin’ into the future, and we find ourselves turning 12 years old today. To celebrate, we’re kicking off the next phase of Linode’s transition from Xen to KVM by making KVM Linodes generally available, starting today.

Better performance, versatility, and faster booting

Using identical hardware, KVM Linodes are much faster than Xen Linodes. In our UnixBench testing, for example, a KVM Linode scored 3x better than a Xen Linode, and a kernel compile completed 28% faster. KVM has much less overhead than Xen, so you'll now get the most out of our investment in high-end processors.

KVM Linodes are paravirtualized by default, supporting the Virtio disk and network drivers. However, we also now support fully virtualized guests – which means you can run alternative operating systems like FreeBSD, BSD, Plan 9, or even Windows – using emulated hardware (PIIX IDE and e1000). We’re also working on a graphical console (Glish?) which should be out in the next few weeks.

In a recent study of VM creation and SSH accessibility times performed by Cloud 66, Linode did well. The average Linode ‘create, boot, and SSH availability’ time was 57 seconds. KVM Linodes boot much faster – we’re seeing them take just a few seconds.

How do I upgrade a Linode from Xen to KVM?

On a Xen Linode’s dashboard, you will see an “Upgrade to KVM” link in the right sidebar. From there, upgrading your Linode to KVM is a one-click migration. Essentially, you get a much faster Linode just by clicking a button.

How do I set my account to default to KVM for new stuff?

In your Account Settings you can set ‘Hypervisor Preference’ to KVM. After that, any new Linodes you create will be KVM.

What will happen to Xen Linodes?

New customers and new Linodes will still get Xen by default, but Xen will cease being the default in the next few weeks. Eventually we will transition all Xen Linodes over to KVM; however, this is likely to take quite a while. Don’t sweat it.

On behalf of the entire Linode team, thank you for the past 12 years and here’s to another 12! Enjoy!

-Chris


Comments (92)

  1. Author Photo

    Linode continues to be an excellent service provider. Thanks =)

  2. Author Photo

    You’re welcome Alex!

  3. Author Photo

    My Linode continues to be a great value, and runs like a champ. Thanks for constantly improving and making the experience better and better.

  4. Author Photo

    Great news!

    Some questions:

    a) What is the UnixBench result for a KVM Linode 1024?
    b) Is osv.io support planned?
    c) Is live migration planned? 🙂

  5. Christopher Aker

    @rata: 1) Don’t know, 2) No idea, probably? 3) Nope.

  6. Author Photo

    I choose Linode because of Xen

    “Eventually we will transition all Xen Linodes over to KVM” – really hope you are not serious.

  7. Christopher Aker

    Dead serious.

  8. Author Photo

    Linode is AWESOME!

  9. Author Photo

    Why was Xen used in the 1st place?

  10. Christopher Aker

    Ed: we used UML in the first place (2003). Neither Xen nor KVM existed. Then we moved to Xen. Now we’re moving to KVM.

  11. Author Photo

    Hi! Great news! One question: How much downtime on upgrade?

  12. Author Photo

    A word of warning to customers: the KVM upgrade hosed my linode, and now it doesn’t boot. Be warned, this is not a seamless upgrade. I’m going to go open a trouble ticket.

  13. Author Photo

    A followup on my previous comment, it seems that Ubuntu on KVM requires that devtmpfs be enabled on your linode profile. Caker enabled it and now it’s booting fine.

    Suggestion: this wasn’t required on Xen (or at least my linode booted fine before the upgrade), so perhaps the KVM upgrade should automatically enable it?

  14. Christopher Aker

    It’s required under Xen, too – however for reasons not yet understood Ubuntu was more tolerant to missing devtmpfs under Xen. We’re going to look at auto-enabling this during the upgrade. Thanks!
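For anyone wanting to verify this before upgrading, here is a quick check from inside the Linode (standard Linux procfs paths; exact output varies by distribution, and this is my own sketch rather than anything Linode ships):

```shell
# If the running kernel was built with devtmpfs support, the filesystem
# type is listed in /proc/filesystems:
grep devtmpfs /proc/filesystems

# On a healthy boot, /dev itself should be a devtmpfs mount; if it isn't,
# say so instead of failing:
grep '^devtmpfs /dev ' /proc/mounts || echo '/dev is not devtmpfs here'
```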

  15. Author Photo

    The downtime was 8-9 minutes for a 1GB instance (they have to copy the disk images to another host).

  16. Professor Farnsworth

    Good news, everyone!

    ps: <3 Linode

  17. Author Photo
  18. Author Photo

    Done!
    Done!
    Done!
    Done!
    Done!
    Done!

    6 linodes migrated

  19. Author Photo

    Just upgraded my 2G Linode.
    Hmm… the cpu spec seems to have dropped:

    Xen:
    model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    cpu MHz : 2800.044
    bogomips : 5600.08

    KVM:
    model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
    cpu MHz : 2499.994
    bogomips : 5001.32

    However, it’s running much more quickly.

    The kernel build time dropped from 573 to 363 seconds.
    That’s 1.6x faster.

    Many thanks Linode 🙂
    flatcap

  20. Author Photo

    Will you you also offer FreeBSD images in the future?

  21. Author Photo

    OK, how do I check/make sure devtmpfs is enabled on Ubuntu 14.04?

  22. Author Photo

    We upgraded one of our VM that used pvgrub + OpenVZ kernel (2.6.32) and it didn’t boot. We were left at the “grub>” prompt.

    Changing from paravirt to full virt made it work but I’m wondering if there is something we are missing?

  23. Author Photo

    Hmm, can I run Windows Server now? I don’t see the option. 🙂

  24. Author Photo

    Flawless upgrade and immediate performance gains. Thanks guys.

  25. Author Photo

    Just migrated to KVM and CloudLinux OS has stopped working. How can I install my own kernel?

  26. Author Photo

    My Debian Jessie instance migrated seamlessly. It only took a few minutes.

  27. Author Photo

    Finally! Thank you very much! I’m so gonna upgrade to KVM! Now everything is perfect <3

  28. Author Photo

    @Rich Russon: You’re missing the fact that your CPU changed from the E5-2680 v2 to the E5-2680 v3. The old one was a 10-core Ivy Bridge, the new one is a 12-core Haswell.

  29. Author Photo

    Will there be / is there any option to download host images and upload my own images?

  30. Author Photo

    Would suggest proceeding with caution – I attempted a migration this morning, but it failed and now won’t boot at all. Support tells me that unfortunately a hardware issue occurred at exactly the time I attempted the migrate (despite no current hardware issues being shown on https://status.linode.com), and I’m still awaiting an update.
    I would have hoped the KVM migration script would perform a full health check so as to not leave customers stuck in limbo.
    The concept is great, but thus far I’m disappointed.

  31. Author Photo

    Yay! I just migrated my server. It went great. My site feels much snappier. You guys rock! Thanks, Linode. 🙂

  32. Author Photo

    Would it be possible to post a list of CPU flags supported in the new KVMs if different from the current Xen VMs (from /proc/cpuinfo)? I’m currently using AES instructions to accelerate IPsec on the Xen instance, but with KVM VMs aes isn’t always enabled.

  33. Author Photo

    Do you support Nested KVM?

  34. Author Photo
  35. Author Photo

    Does this mean we can have LVM2 root drives?

    Will we be able to use one Linode to build another (i.e., do an AWS-style chroot-build)?

  36. Christopher Aker

    @Micki: Yes. You can download your ‘image’ using Rescue Mode. You can upload your own image the exact same way (in reverse).

    @Tim: you were coming from a troubled host, sadly. Looks like you’re sorted. Sorry for the hassle.

    @Ricardo: it’s currently disabled. We left nesting off for the time being – but we will revisit this soon.

    @Tom: you already could do LVM root. You have all the tools: GRUB, initrd, disk devices you can manage, etc. No?
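The Rescue Mode image transfer caker describes is essentially a dd-over-ssh pipeline. A sketch (the hostname and file names are placeholders; /dev/sda is assumed to be the Linode’s disk as seen from the rescue environment):

```shell
# Download: stream the raw disk out of Rescue Mode, compressed
# (run from the rescue shell; user@example.com is a placeholder):
#   dd if=/dev/sda bs=4M | gzip | ssh user@example.com 'cat > linode.img.gz'
#
# Upload is the same pipeline in reverse:
#   ssh user@example.com 'cat linode.img.gz' | gunzip | dd of=/dev/sda bs=4M

# The same dd|gzip round trip, demonstrated locally against a file
# instead of a disk:
printf 'fake disk contents' > /tmp/disk.img
dd if=/tmp/disk.img bs=4M 2>/dev/null | gzip > /tmp/disk.img.gz
gunzip -c /tmp/disk.img.gz | dd of=/tmp/restored.img bs=4M 2>/dev/null
cmp /tmp/disk.img /tmp/restored.img && echo 'round-trip OK'
```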

  37. Author Photo

    Will we be able to install an OS straight from an ISO or will we still have to go through the old process to migrate it?

  38. Author Photo

    Currently no stock in Japan?

  39. Author Photo

    Can you provide a checklist for a seamless migration? What should I double-check? For example, Linode created /etc/fstab with “/dev/xvda” devices by default. Should I manually replace the device names or not?
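One data point on the fstab question: judging from the outputs elsewhere in this thread, Xen’s /dev/xvda-style names show up as /dev/sda-style names under KVM, so a rename along these lines may be needed (this is an assumption; verify the actual device names with lsblk after migrating, and keep a backup of fstab):

```shell
# Rewrite Xen-style device names (/dev/xvda1 -> /dev/sda1, etc.) in fstab;
# sed -i.bak keeps the original at /etc/fstab.bak:
#   sed -i.bak 's|/dev/xvd|/dev/sd|g' /etc/fstab

# The substitution itself, shown on a sample fstab line:
line='/dev/xvda / ext4 errors=remount-ro 0 1'
printf '%s\n' "$line" | sed 's|/dev/xvd|/dev/sd|g'
# -> /dev/sda / ext4 errors=remount-ro 0 1
```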

  40. Author Photo

    Please consider enabling Nested KVM. We could host oVirt or OpenStack on top of it. Imagine that!

  41. Author Photo

    I can’t find the button!

  42. Author Photo

    I found no such “Upgrade to KVM” link on the right sidebar.
    Is it not yet available in Tokyo data-center?

  43. Author Photo

    Already got FreeBSD running [1] and the Debian benchmark is an improvement [2].

    [1] https://forum.linode.com/viewtopic.php?f=26&t=11818#p67319
    [2] https://forum.linode.com/viewtopic.php?f=26&t=11851#p67217

  44. Author Photo

    Any planned support for Tokyo on the near horizon, or would I be better off migrating elsewhere?

  45. Author Photo

    Interesting. I thought you guys had tested this previously for some reason, although I guess I didn’t feel strongly about it, since the hypervisor is largely invisible to the guest (at least for my workloads): EC2, Linode, and Vr.org use Xen, while DigitalOcean, Rackspace, etc. use KVM. Those benchmark differences are large, though; I am surprised to see such a jump.

  46. Author Photo

    And it is not available for servers in Tokyo yet !!

  47. Author Photo

    Does this apply to all linodes in all locations? I have an existing Xen linode in Japan and I’m not seeing the upgrade option… (I can change the default type in my account but not in the dashboard for my specific linode).

  48. Author Photo

    The upgrade works well and is seamless. I used the KVM upgrade option in the control panel to upgrade 3 Ubuntu servers and a Debian server to KVM. The downtime was only 10 minutes on a Linode 4096 server, and disk I/O has increased 20% (disk I/O was never an issue to begin with). Not bad for clicking a button.

  49. Author Photo

    Will these new KVM VMs still be multicore? I thought KVM didn’t support multiple virtual cpus running on multiple real cpus…

  50. Author Photo

    Happy Birthday Linode! Twelve years is a nice milestone for any company to reach – and I’m glad you reached it for sure! Congratulations for the birthday – and it’s great to see that you’re moving to KVM! 🙂

  51. Author Photo

    Be very careful migrating a linode to KVM at this time. Linode’s process failed at migrating one of my nodes and their support hasn’t addressed the problem in _3_ hours, nor given any clarity on the situation.

  52. Author Photo

    Hello,
    I’m on a 32bit Linode (London DC). Two questions:

    1. I don’t see any button to upgrade to KVM. Is that normal?
    2. Will it be possible to upgrade my Linode to an SSD node AND switch to KVM at the same time?

  53. Author Photo

    No upgrades in Tokyo it looks like?

  54. Author Photo

    How stable can we expect this to be? Would it be wise to migrate mission critical Linodes or better to wait for a few months?

  55. Author Photo

    Can somebody please create a guide/tutorial for installing Windows Server 2012 on the new KVM Linodes? I would really appreciate it!

  56. Author Photo

    @caker:

    Weird. Last time I’d asked about it (just a few months back), I was told that the Xen PV-GRUB you were using didn’t support doing a partitioned, single-disk root drive (i.e., to put “/boot” on /dev/xvda1 and an LVM2 root VG on /dev/xvda2).

    At any rate, that issue aside, more critical to me is “can one use a live Linode instance to do a chroot-install of an OS to a vDisk, then register that vDisk as a bootable instance” (as is doable with AWS). I’d really rather just do everything “in the cloud” rather than having to upload an OS image file.

  57. Author Photo

    I’m still not getting the upgrade button for my Tokyo Linode. Any ETA?

  58. Author Photo

    @Rob: Yes, our systems are still multi-core, no changes there.

    @Keith: Thanks! I’m glad that we’re able to increase the performance for everyone on such a great day!

    @Cal: If you don’t receive a response from Support quickly and it’s urgent, I would suggest giving us a call so we can try to help you out asap.

    @skp: I’m seeing that there is space for London at this time. If you still can’t migrate, I would suggest contacting our support.

    @Mike/Superbarney: Unfortunately it looks like space is out at Tokyo at this time, but you can migrate elsewhere and get the upgrade.

    @losif: We are out of beta, so it should be 100% stable. If you want to be cautious, I would recommend first taking a backup; or, instead of migrating, make a new KVM Linode, migrate your current Linode over to it, and test it there first.

  59. Author Photo

    Hello,

    are you planning to write a HowTo like this:

    https://www.linode.com/docs/guides/run-a-distributionsupplied-kernel-with-pvgrub/
    ?
    I need to know whether it is possible to install a “Distribution-Supplied Kernel” before migrating, or when creating new KVM Linodes (I’m using CentOS 6.X)

    Regards

  60. Author Photo

    I just upgraded two nodes and immediately saw a notable difference in speed.

    Thank you and Happy Birthday!

  61. Author Photo

    @Bakko, here’s what worked for me on a Debian 8.1 image with KVM.

    First, install a kernel and grub2 (apt-get install linux-image-amd64 grub2)

    Debian’s package manager already installed grub on the root filesystem, but do it manually if you need to (grub-install /dev/sda)

    Next, configure grub not to use graphical mode, and to add "console=ttyS0" to the kernel command line (edit /etc/default/grub to include the following lines:
    GRUB_CMDLINE_LINUX="console=ttyS0"
    GRUB_TERMINAL=console
    ).

    Run the ‘update-grub’ command to regenerate /boot/grub/grub.cfg (update-grub)

    Go into the linode dashboard, edit the configuration profile for the image, and under “Boot Settings”, look at the “Kernel” field. Take note of what is currently there (in case you break something and need to go back to it), and then change the Kernel to “GRUB 2”. Click “Save Changes”, and then reboot your linode.

    That got me booting with Debian’s default kernel. If it doesn’t work for you, just set the Kernel field back to what it previously was, save changes, reboot, and try and figure out what went wrong.

  62. Author Photo

    Thank you @AndresSalomon but I’m using CentOS. Regards

  63. Bender Rodríguez

    Note: If you use “GRUB 2” as the “kernel” in the Configuration Profile, then you don’t need “grub-install /dev/sda” at all. That’s only required for “Direct Disk” boot,
    which I would not recommend on disks not having a partition table.

  64. Author Photo

    Hey Linode, the reference guide (https://www.linode.com/docs/platform/kvm) talks about ‘Direct Disk’ booting. What is this, and is it preferable to switch to it?

    Maybe a comment on the reference guide should be added about this.

  65. Author Photo

    I’ve migrated a few Linodes to KVM already, with zero hiccups. One side-effect I noticed, however, is that disks now report as “rotational” (the flag that indicates spinning media rather than an SSD).

    On a KVM Linode:

    # cat /sys/block/sdb/queue/rotational
    1

    On a Xen Linode:

    # cat /sys/block/xvda/queue/rotational
    0

    I had previously been opportunistically setting the IO scheduler to noop where the disk reported rotational 0, so this “breaks” that but it’s not a huge deal; the disk is still fast!
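The opportunistic scheduler selection described above can be sketched like this (the helper function and the deadline fallback are my own illustrative choices, not anything Linode ships):

```shell
# Pick an IO scheduler from a device's rotational flag: noop for SSDs
# (no point reordering requests), deadline otherwise.
pick_scheduler() {
    if [ "$1" -eq 0 ]; then
        echo noop
    else
        echo deadline
    fi
}

pick_scheduler 0   # -> noop
pick_scheduler 1   # -> deadline (what a KVM Linode disk now reports)

# Applying it for real needs root; the sysfs paths are standard Linux:
#   for q in /sys/block/*/queue; do
#       pick_scheduler "$(cat "$q/rotational")" > "$q/scheduler"
#   done
```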

  66. Author Photo

    AWESOME! just upgraded and i’ve got more than 25% stability and performance.

    Thanks.

  67. Author Photo

    Hello,

    maybe this HowTo can be useful to someone:

    “Migrate Linode CentOS 6.6 – 64 bit from XEN to KVM using GRUB and the new ttyS0 console”:

    https://www.voztovoice.org/?q=node/781

    Regards

  68. Author Photo

    Guys,
    Your DevOps automation is impressive. How are you even able to pull this off, something like SaltStack?
    Thank you!

  69. Author Photo

    I just upgraded to KVM and it is faster so far, with more memory to spare. Nice!

  70. Author Photo

    What about Frankfurt?

  71. Author Photo

    I am wondering if you have a more specific upgrade schedule? We are closing down our office for a one-month vacation starting next week and need to prepare so we minimize the risk of firefighting. You say we shouldn’t sweat it. Does that mean we can wait on this upgrade until the end of August or beginning of September?

  72. Author Photo

    I checked this, and I get the following, which is probably why Linux thinks it is rotational now.
    root@icarus:~# smartctl -a /dev/sda
    smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.0-x86_64-linode59] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, http://www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Vendor: QEMU
    Product: QEMU HARDDISK
    Revision: 2.3.
    User Capacity: 51,275,366,400 bytes [51.2 GB]
    Logical block size: 512 bytes
    LU is thin provisioned, LBPRZ=0
    Rotation Rate: 5400 rpm
    Device type: disk
    Local Time is: Mon Jul 6 23:22:57 2015 UTC
    SMART support is: Unavailable – device lacks SMART capability.

  73. Author Photo

    Last time I checked, KVM did not work with CloudLinux. Do you have a way of making it work, or do I need to find a new provider?

  74. Author Photo

    @dev

    How do you get ” 25% stability “? Did your Linode crash four times in one hour before, and now it’s just once? XD

  75. Author Photo

    Basically, KVM is better than Xen; it’s a good upgrade.

  76. » Linode KVM Upgrade this morning Player FM Status

    […] will be cycled this morning to take advantage of Linode’s new KVM setup. It’s been running a while in staging without […]

  77. Author Photo

    I was just looking to move from shared to dedicated resources; looks like I have found a good place.

  78. Author Photo

    Tokyo ETA please …..

  79. Author Photo

    For those of you who ended up with an unbootable system unless you changed to ‘full virtualization’, you may be missing the virtio block driver in your initramfs. (Note: a plain `sudo echo … > file` won’t work, because the redirect runs as your unprivileged user; pipe through `sudo tee` instead.)

    echo 'add_drivers+="virtio_blk"' | sudo tee /etc/dracut.conf.d/kvm.conf
    sudo dracut -f

    Shutdown, change to para-virtualization, pray, and boot.

  80. Author Photo

    Not available in Tokyo, Japan, and there are several questions above about this. Please provide an ETA for when those with Xen Linodes in Tokyo, Japan can upgrade to KVM. Thank you!

  81. Author Photo

    Have we received an answer on CloudLinux and the KVM upgrade?

  82. Author Photo

    The results from our unixbench benchmark of KVM and XEN can be found at: http://wpress.io/unixbench-results-for-digitalocean-linode-kvm-linode-xen/

  83. Author Photo

    You can run CloudLinux under KVM at Linode: http://docs.cloudlinux.com/cloudlinux_on_linode_kvm.html (haven’t tested this myself yet, but I’m about to)

  84. Author Photo

    CloudLinux is working perfectly for me under KVM

  85. Author Photo

    Wonder why Xen did so poorly relative to KVM on those boxes (ex: this shows it only a few percentage points slower: https://major.io/2014/06/22/performance-benchmarks-kvm-vs-xen/ ) …

  86. Author Photo

    Interesting. Nice performance gain. Wonder why it was so large, other comparisons of KVM vs Xen only find like a 1% difference [maybe it’s I/O?] ex: http://wpress.io/unixbench-results-for-digitalocean-linode-kvm-linode-xen/

  87. Author Photo

    @John / @Troy A little late, but it seems CloudLinux doesn’t have any issues with being installed on a KVM node: http://docs.cloudlinux.com/kvm_images.html (otherwise there wouldn’t be any KVM images).

    It’s likely, though, that you’ll have to install a CentOS image, switch to a native kernel (GRUB / pv-grub) instead of the Linode kernels, and then convert it to CloudLinux.

  88. Author Photo

    Thanks Linode for making this such a smooth transition – enjoying the bump in performance.

  89. Author Photo

    Are you willing to share the Xen -> KVM migration script you guys use?

  90. Author Photo

    Lost network connectivity after migration. Turned out to be because eth0 had changed to eth1, so I needed to modify my network settings. Not a big deal for me but may cause a problem for some so I thought it worth mentioning.
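The eth0-to-eth1 rename above is most likely udev’s persistent-net rules at work: the rules file pins the old Xen NIC’s MAC address to eth0, so the new KVM NIC, which has a different MAC, gets eth1. A sketch of what’s going on (the helper function and the MAC address are made up for illustration):

```shell
# Each line of /etc/udev/rules.d/70-persistent-net.rules binds a MAC
# address to an interface name. This helper extracts the NAME= value
# from one such rule line:
iface_from_rule() {
    printf '%s\n' "$1" | sed -n 's/.*NAME="\([^"]*\)".*/\1/p'
}

rule='SUBSYSTEM=="net", ATTR{address}=="f2:3c:91:00:00:01", NAME="eth1"'
iface_from_rule "$rule"   # -> eth1

# One fix: delete the stale rules file so the NIC is enumerated as eth0
# again on the next boot (or, alternatively, update your network config
# to use the new name):
#   rm /etc/udev/rules.d/70-persistent-net.rules
```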

  91. Author Photo

    It has been more than half a year, and the KVM upgrade still seems to be unavailable in the Tokyo datacenter.

  92. Author Photo

    After the auto migration I had an error with booting complaining about the mount points /dev/pts and /dev/shm not existing.

    I eventually fixed it by setting the Automount devtmpfs value to No in my profile configuration, as per the instructions here: http://thomas.broxrost.com/2016/06/15/fixing-boot-problems-after-upgrading-to-ubuntu-9-10-on-linode.
