RailsConf 2007: Bradley Taylor: Virtual Clusters

Posted by Nick Sieger Sat, 19 May 2007 19:30:06 GMT

How does Rails figure into virtualization? Bradley will cover this topic with examples and case studies. Along the way, hardware items may be mentioned, but are not critical. Really, it’s about the design of the clusters, not the bits of plumbing you use to connect them up.

Virtualization is the partitioning of a physical server so that multiple virtual servers can run on it -- Xen, Virtuozzo, VMware, Solaris containers, KVM, etc. Bradley uses Xen. The virtual servers share the same processor (hopefully multi-core), memory, storage, and network cards (but with independent IP addresses), yet run independently of each other. VPS, slice, container, accelerator, VM -- it’s all the same thing. Memory, storage, and CPU can be guaranteed by the virtualization layer.
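To make this concrete, a Xen guest is defined by a small config file (Python syntax, typically under /etc/xen/). A minimal sketch, assuming hypothetical names, paths, and addresses -- this is illustrative, not a config from the talk:

```python
# Hypothetical Xen domU config, e.g. /etc/xen/app1.cfg
kernel = "/boot/vmlinuz-2.6-xenU"          # guest kernel (path is an assumption)
name   = "app1"
memory = 300                               # MB guaranteed to this guest
vcpus  = 1                                 # share of the (hopefully multi-core) CPU
vif    = ["ip=192.168.1.11"]               # independent IP on the shared NIC
disk   = ["phy:vg0/app1-disk,sda1,w"]      # slice of shared storage
root   = "/dev/sda1 ro"
```

Each guest gets its own file like this, which is what makes the "replicate" point below so cheap: copy the config, change the name, IP, and disk, and boot another server.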

Why would you do this?

  • Consolidate -- run on less hardware at lower cost
  • Isolate -- bad apps don’t drag the whole server down; contain intrusions; run different software stacks side by side
  • Replicate -- easily create new servers and deploy them in a standardized, automated way
  • Utilize -- take advantage of all your CPU, memory, and storage resources
  • Allocate -- give each server exactly what it requires, grow and shrink it, and rebalance

Bradley says, “Once you go to virtualization you won’t want to go back. Do the simplest thing that could possibly work.”
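The allocation point is not just a provisioning-time decision: with Xen, resources can be adjusted on a running guest from dom0 with the `xm` tool. A sketch, with an illustrative domain name:

```shell
# Grow a running guest's memory allocation to 512 MB
# (the guest must have been booted with maxmem headroom)
xm mem-set app1 512

# Give the guest a larger share of CPU under the credit scheduler
# (weight is relative to other domains; default is 256)
xm sched-credit -d app1 -w 512
```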

Virtual clusters, then, are a bunch of servers cooperating toward a common goal -- useful whenever you have many versions or copies of one thing: more than one customer, more than one version of the software, and so on.

For Rails, this means a lot of things: you can have many development environments and stages, take advantage of memory isolation, protect Rails apps from co-hosted PHP/Java apps, and make multiple-server scaling accessible. Some example designs:


  • Two servers for production and staging
  • Three for web/db/staging
  • Mixed languages -- instead of 1x1GB server use 3x300MB servers
  • High availability applications with fewer servers
  • Multiple applications -- one server per application
  • Standardized roles/appliances -- mail, ftp, dns, web, db
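Designs like these map naturally onto role-based deployment. A Capistrano recipe sketch of the web/db/staging split, assuming hypothetical hostnames and an app name not taken from the talk:

```ruby
# Hypothetical Capistrano recipe -- each role is a separate virtual
# server, possibly carved out of the same physical machine.
set :application, "myapp"
set :deploy_to,   "/var/www/#{application}"

role :app, "app.example.com"                   # Rails/Mongrel VM
role :web, "web.example.com"                   # front-end web VM
role :db,  "db.example.com", :primary => true  # database VM
```

Because each role is just a hostname, promoting a design from "everything on one VM" to "one VM per role" is a recipe change, not an application change.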


  • They can incubate customers in separate images
  • Dev/staging/production servers
  • Shared SVN/trac
  • 2 physical servers => 8 virtual servers

Boom Design

  • Again, multiple stages
  • Customer staging, with lower uptime requirements
  • Low-traffic apps on a single server, but everything else gets its own dedicated server
  • 2GB memory spread across 9 virtual servers
