JRuby on Rails: Fast Enough
Posted by Nick Sieger Thu, 25 Oct 2007 03:36:00 GMT
People have been asking for a while how fast JRuby runs Rails. (Of course, “fast” has always been a relative term.) We haven’t been quick to answer the question, because frankly we didn’t know. We hadn’t been building real Rails applications on JRuby ourselves yet, and there was no definitive word from the crowd either.
Recently, several guys from ThoughtWorks have been working on a Rails petstore application and benchmark to get to the heart of the matter. Discussion has been heated on the JRuby mailing list, but results have not been conclusive yet.
In the project I’m working on, we’ve committed to using and deploying on JRuby. Eventually we were going to reach the point where we’d need to find out how well our application runs. So today I began running a simple, single-request benchmark on a relatively busy page. The numbers turned out to be rather surprising:
(The raw data is available here.)
Now, MRI (C Ruby) will always run about the same speed no matter how many runs you give it, but it’s well known that the JVM needs time to warm up. And indeed it does; after 250 iterations, Mongrel running on JRuby finally surpasses MRI. The JRuby/Goldspike/Glassfish combo comes close as well.
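For the curious, the test script itself is nothing fancy. Here's a minimal sketch of a serial, single-URL benchmark in that spirit (the URL, iteration count, and CSV output below are placeholders of mine, not the actual script):

```ruby
# Minimal sketch of a serial single-request benchmark.
# The URL, number of iterations, and output file are illustrative placeholders.
require 'uri'
require 'net/http'
require 'benchmark'

url  = URI.parse('http://localhost:3000/some/busy/page')
runs = 500

File.open('timings.csv', 'w') do |out|
  runs.times do |i|
    seconds = Benchmark.realtime { Net::HTTP.get_response(url) }
    out.puts "#{i + 1},#{seconds}"  # per-iteration timings make the JVM warm-up visible
  end
end
```

Plotting the per-iteration times is what makes the warm-up curve, and the eventual crossover, easy to see.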
Some details about the setup:
- I ran the tests on my MacBook Pro Core 2 Duo 2.4 GHz. I didn’t disable one of the cores for the tests, which means that JRuby has an advantage over MRI because it can use both (native threads at work). However, the test script ran the requests serially, which means that the advantage was minimal.
- The application is indeed of the “hydra” variety; the setup is nearly identical to the second diagram on that page. So a single request passes through not one, but two Rails applications in addition to touching the database: it renders an HTML ERb view with data from an ActiveResource-accessed RESTful service (see the sketch after this list). The applications are based on Rails 1.2.3.
- MRI used Ruby 1.8.6 and Mongrel 1.0.1.
- JRuby Mongrel was also version 1.0.1 (details on installing it here).
- JRuby on Glassfish used Glassfish 2 and Goldspike 1.4, deployed in war files via Warbler.
- The two JRuby setups used JDK 1.5 and were tweaked to disable ObjectSpace and use the “server” VM (-server argument to the JVM).
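To make the two-application point above concrete, here is a rough sketch of the kind of ActiveResource access the front-end application performs. The Item model, controller, and service URL are invented for illustration; the real application differs, and this assumes a Rails install with ActiveResource available.

```ruby
# Hypothetical front-end code: an ActiveResource model pointed at the
# back-end Rails service, plus a controller action that feeds an ERb view.
# Item, ItemsController, and the site URL are placeholders.
class Item < ActiveResource::Base
  self.site = 'http://localhost:3001'  # the second Rails app (the RESTful service)
end

class ItemsController < ApplicationController
  def index
    @items = Item.find(:all)  # REST round-trip to the back end, which hits the database
  end
end
```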
The main point I wish to make with these numbers is that JRuby performance is there today, and still has room to grow. There’s no longer any doubt in my mind. Yes, this is a simplistic application benchmark run on a developer’s machine, but it’s a real application. The test may not be exacting in precision, but I see enough in the numbers to believe the results will carry over to production environments. The plot thickens!
Cool!
I observed the same kind of asymptotic warm-up curve in my project with JRuby. Warm-up is a very long stage, but performance is ultimately good, which is what matters on the server side.
But really I would be curious to see your results with Java 6. I can confirm it’s much faster: on a J2EE project I saw a 30% speed increase in pure Java code.
Still, your results are a bit ambiguous. You say JRuby achieves the same speed as C Ruby, but in a way you’ve given it one more CPU, even though the requests are not concurrent. So maybe you could show your results with the second CPU disabled, or at least stress this point more.
Anyway, I’m very optimistic about JRuby’s future. Java integration by itself is already a win, and being able to optimize bottlenecks in Java is another clear win.
Best Regards