Posted by Nick Sieger
Fri, 13 Jul 2007 09:39:00 GMT
The size of this blog seems to have outgrown the small 128MB VPS it’s running on. Any post/comment/change causes the server to swap endlessly, and I know some of you have seen 500 or 502 errors. I hope to remedy this soon, but until then, comments have been disabled, and the site will remain more or less frozen.
If you do have a comment, please send it to me the old-skool way (email -- with your name and URL and the article you’re commenting on), and I’d be happy to add it for you for historical purposes.
If you have any ideas on tuning my servers to avoid hitting the memory limit, I’d appreciate those as well. I’m currently running Ubuntu Dapper with Apache 2.0, Ruby 1.8.4, Mongrel 1.0 with fastthread 0.6.1, Typo SVN revision 947 (a bit old, I know), and SQLite 2. Upgrade to the latest Ruby? Upgrade Typo -- yeah, that would be a little more painful. Switch to Mephisto -- more painful still. Ditch Apache in favor of nginx? Probably, except I’m using Apache for SVN and Trac as well. We’ll see.
I tried running MySQL (4.1) for a while last night, but it too was swapping and the site wouldn’t even render, so I turned it off and reverted to SQLite, which at least allows the site to load, even if it blows chunks when you try to post a comment or an article. Sigh.
Posted by Nick Sieger
Thu, 12 Jul 2007 08:27:00 GMT
Summer is settling in, and so is the JRuby 1.0 release. Most of the core team seemed to take some time off since the release, as the commits and lists have felt quiet compared to the frenetic lead-up to 1.0. I don’t know if that’s a good thing yet -- I’m not bold enough to suggest that it means that JRuby’s just working for everyone, and that the software is bug-free. There have been calls for a point release (probably 1.0.1) and a better roadmap -- we’re working on those and should have something in the next couple of weeks.
On the other hand, the number and quality of blog posts about JRuby seem to be steadily increasing, and compelling applications of JRuby, in both the Ruby/Rails and Java worlds, are being demonstrated more and more. Here are a few examples.
JMS is turning out to be a great place to sprinkle some JRuby magic. Ola started by implementing direct JMS support for ActiveMessaging, eliminating the need for the separate poller process. Nutrun goes the other way and demonstrates how simple it is to run a broker, publisher and subscriber using ActiveMQ and a short JRuby script.
Jeff Mesnil has started jmx4r, a simple DSL leveraging JRuby to write monitoring scripts for your JMX-enabled applications.
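Something like the following gives a flavor (a sketch assuming jmx4r’s documented API; the port and target JVM are hypothetical):

require 'rubygems'
require 'jmx4r'

# Attach to a JVM exposing JMX on port 3000 and poke at the standard
# Memory MBean; jmx4r maps JMX attributes onto Ruby-style methods.
JMX::MBean.establish_connection :host => "localhost", :port => 3000
memory = JMX::MBean.find_by_name "java.lang:type=Memory"
puts memory.heap_memory_usage.get("used")
memory.gc # invoke the MBean's gc operation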
Kyle Maxwell whipped up a compelling plugin for Rails in time for RejectConf using Lucene. Why imitate when you can use the real thing?
Zed Shaw has caught the JRuby bug, and decided to do his own take on scripted Swing applications with Profligacy. Check out the progression of examples as Zed homes in on his final product. As nap points out, Zed’s Layout Expression Language (LEL) is a refreshingly concise take on specifying UI layouts. Cleaner separation of component layout and event handling logic is also a big win. Move over, Groovy SwingBuilder and JavaFX!
Last but not least, my own earlier proposal on Java interface integration has been implemented in current JRuby trunk. Along the way I found the opportunity to toss in some extra sugar and implement proc and block coercion to interfaces as well. This means you can pass a proc or block to a Java method and it will be converted to the appropriate Java interface type (e.g., Runnable and Swing/AWT listeners):
require 'java'
import javax.swing.JButton   # bring the Swing class into scope

button = JButton.new
button.add_action_listener do |event|
  event.source.text = "You pressed me"
end
If closures for Java can’t do this, they will have gotten it wrong. Note that blocks will be converted and used in place of the last argument to the Java method only; if you need to pass behavior to any argument preceding the last, use a proc. Here’s one example that gets progressively better as we switch over to blocks.
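For instance, with this in place a proc should be accepted anywhere a Runnable is expected (a minimal sketch of the coercion just described):

require 'java'

# The proc is coerced to java.lang.Runnable when passed to Thread's
# constructor, per the interface-coercion feature above.
work = proc { puts "running inside a Java thread" }
t = java.lang.Thread.new(work)
t.start
t.join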
Two parting thoughts -- a couple of things that aren’t quite ready, but you should keep your eye on.
GlassFish dev Jean-François Arcand demonstrates parked Rails requests with Grizzly. And you thought you had to do Comet-style request handling outside of Rails? The future of scalable Rails servers looks pretty good to me.
Finally, respected object technologist Alan McKean has started looking at object serialization and persistence for JRuby. You thought you had to wait for that Gemstone-thing that Avi Bryant mentioned at RailsConf? Maybe the wait won’t be too long after all...
Tags jruby
Posted by Nick Sieger
Mon, 11 Jun 2007 15:35:00 GMT
Many libraries and plugins ship custom Rake tasks. Of course, as slick as Rake is for a build and configuration language, it’s still just Ruby code right?
Case in point: I released a version of ci_reporter with a fairly careless bug in a rake task that attempted to << a string into an existing environment variable. It escaped me at the time that Ruby sets up the ENV hash with frozen strings, because my own usage of ci_reporter did not exercise the task in that way.
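The failure mode is easy to reconstruct (a hedged sketch -- the appended string is made up, but the frozen-string behavior is standard Ruby):

# Assuming TESTOPTS is already set, ENV returns a frozen string,
# so an in-place append blows up:
ENV['TESTOPTS'] << ' extra_opts'   # => TypeError: can't modify frozen string

# The fix: build a new string and reassign it instead.
ENV['TESTOPTS'] = "#{ENV['TESTOPTS']} extra_opts"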
So shouldn’t that Ruby code be subjected to the rigor of automated testing just like the rest of your code? It became obvious to me that it must be so. It turns out it’s straightforward to use Rake in an embedded fashion, and invoke targeted tasks in your custom Rake recipes. The examples here use RSpec, since that’s what I use for testing ci_reporter, but you could apply this to Test::Unit as well.
The technique is to create a new instance of Rake::Application, make it the active application, and load your rake scripts into it:
require 'rake'

describe "ci_reporter ci:setup:testunit task" do
  before(:each) do
    # use a fresh Rake application for each example
    @rake = Rake::Application.new
    Rake.application = @rake
    load CI_REPORTER_LIB + '/ci/reporter/rake/test_unit.rb'
  end

  after(:each) do
    Rake.application = nil
  end
end
Notice the use of #load rather than #require, as you want to execute your rake script each time you set up the Rake application object. When tearing down your test or example, you should clean up Rake by setting Rake.application back to nil (or save the previous application and restore it, if you prefer).
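The save-and-restore variant just mentioned would look something like this (a sketch; only the teardown differs from the hooks above):

before(:each) do
  @previous_rake = Rake.application   # remember whatever was active
  @rake = Rake::Application.new
  Rake.application = @rake
  load CI_REPORTER_LIB + '/ci/reporter/rake/test_unit.rb'
end

after(:each) do
  Rake.application = @previous_rake   # put the original back
end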
Now, in the body of your test or example, you invoke your rake task with @rake['target'].invoke. Here, I’m exercising the case of an existing, frozen ENV value. After the task is invoked, I check the value to make sure the variable was modified as expected.
it "should append to ENV['TESTOPTS'] if it already contains a value" do
  ENV["TESTOPTS"] = "somevalue".freeze
  @rake["ci:setup:testunit"].invoke
  ENV["TESTOPTS"].should =~ /somevalue.*test_unit_loader/
end
I was fortunate here that the tasks for which I wrote tests after the fact were simple enough to be testable on their own, which may not always be the case, especially with organic, homegrown Rake tasks that interact with the world outside of Ruby. Still, if your Rake tasks are a critical part of your application, library or plugin, they should be tested. For example, it would be nice if tests could be written for the Rake scripts in Rails’ Railties module to increase coverage there.
Perhaps someone out there will run with this idea, take up the challenge, and write a Rakefile completely in a test-driven or behaviour-driven style. It’s always been a sore point for me with Make, Ant, Maven, and virtually every other build tool in existence that you have no way of automatically verifying that your build script does what you intended, other than manually running it and inspecting its output -- it just feels so dirty! I’d expect test-driven Rake scripts to settle on a level of granularity matching the tasks that need to be done, so that you can combine them in the right ways to make incremental and deconstructed builds simpler.
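As a starting point, a spec-first task might be written like this (entirely hypothetical names -- tasks/version.rake and version:stamp are inventions for illustration, reusing the embedding technique above):

describe "version:stamp task" do
  before(:each) do
    @rake = Rake::Application.new
    Rake.application = @rake
    load 'tasks/version.rake'
  end

  after(:each) do
    Rake.application = nil
  end

  it "writes the project version to a VERSION file" do
    @rake['version:stamp'].invoke
    File.read('VERSION').should =~ /\A\d+\.\d+\.\d+\s*\z/
  end
end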
Tags ruby, testing | 5 comments | no trackbacks
Posted by Nick Sieger
Sun, 10 Jun 2007 04:03:00 GMT
I’d just like to take a moment to echo what Ola has to say about the JRuby 1.0 release. This one is definitely for all of you out there. It’s been incredibly gratifying to see the growth of the community, and the increased amount of positive feedback and success stories with JRuby, and I’m honored to have been part of the team that made 1.0 happen.
We really feel strongly that we’ve put out a quality piece of software, a tool that will make your work more enjoyable, easier, and allow you to inject some creativity and innovation back into the Java stack.
We’ve got a solid base to start from. Being able to run Rails is no small feat, to be sure, but the best is yet to come. You can expect more performance, a complete compiler, support for more applications, and tighter integration with long-standing Java technologies. In addition, we’d like to push the envelope of what both Ruby and Java are capable of, including implementing (even driving) Ruby 2.0 features, leading the way for dynamic language support in the JVM, eased as well as novel ways of doing application deployment, better debugging and tooling, and experiments with new ways of doing concurrent and parallel computing.
Do join up with us -- it’s never too late to hop in and enjoy the fun!
Tags jruby, ruby | 2 comments | no trackbacks
Posted by Nick Sieger
Thu, 24 May 2007 17:16:05 GMT
Stumbling upon a description of a rare Burmese Ruby gemstone housed in the Smithsonian, this line popped out at me:
While sapphire, emerald and diamond gems weighing hundreds of carats exist, high quality Burmese rubies larger than 20 carats are exceedingly rare.
We could rephrase that a bit:
While Java, C++, and C# programs weighing hundreds of thousands of lines of code exist, high quality Ruby programs larger than 2000 lines are exceedingly rare.
Isn’t it strange how you hardly notice a difference?
Tags ruby | no comments | no trackbacks
Posted by Nick Sieger
Wed, 23 May 2007 05:51:36 GMT
I was fortunate to be in town right after RailsConf and attended the inaugural geekSessions event on Rails scalability. The event went off without a hitch: it was well attended, City Club is a classy place, and there was decent food and an open bar. I don’t know the SF geek/startup scene, but pretty much all of the few guys I know were there, along with a ton of other folks. My only complaint is that it could have run at least 30 minutes longer. Socializing was good too, but it seemed like the conversation was just getting started.
Here are some notes for you in my typical rapid-fire style -- hope they’re useful to you.
Ian McFarland
Case study: divine caroline
Servers:
- Load balancer
- Apache + mongrel
- MySQL
- SOLR
Ruby is slow. Rails is slow. The unoptimized app was slow -- 7 pages/sec with ab. So how fast can Rails possibly be? 150 pv/s with a simple text render. That formed a sort of upper bound, one that ruled out fragment/action/partial caching and left full page caching, which brought the throughput to 3500 pv/s. Page caching has its limitations, though:
- Cache coherency
- Writes are more expensive
- Page caching is not applicable to as many pages as you think
But measure first. Pivotal built a drop-in page caching extension to deal with cache coherency issues (soon to be at http://rubyforge.org/projects/pivotalrb)
Jason Hoffman
Jason somehow has the distinction of having made the first four commits to the Rails repository. Joyent/TextDrive/Strongspace.
If your application is successful, you’re going to have a lot of machines. What happens when you have 1000s of machines, 100s of TB, 4 locations, etc. Is this really a Rails issue? In a typical Joyent setup, Rails is only one of 26+ processes on the server stack. So scaling it really doesn’t mean much more than scaling any application. Object creation in Ruby is fast, sockets and threads are slow. So forget sockets and threads.
Instead, use DNS, load balancers, evented mongrels, JRuby/Java, DBMSes (not just RDBMS; LDAP, filesystem, etc.), Rails process doing Rails only, static assets going through a static server, federate and separate as much as you can.
Jeremy LaTrasse
Jeremy’s job is about safety nets; about knowing the underlying infrastructure. Is the hardware/OS/stack important? Can you build safety nets around those so that you can spare cycles when you need to intrude into the system to troubleshoot?
Twitter is in a unique position with the volume of traffic to be able to find some pretty tough bugs, like the recent backtrace issue.
Bryan Cantrill
Measure first! Like Ian said. Is software information? Or a machine? It’s both. Nothing else in human existence can claim this. 3 weeks after Bryan joined Sun, he was working with Jeff (ZFS architect) debugging an issue when Jeff retorted, “Does it bother you that none of this exists? It’s just a representation of some plastic and metal morass in a backroom” (slightly paraphrased).
We’ve been living with bifurcated code -- “if DEBUG; print something” ad nauseam. But this has a cost. So dev code deviates from production code. But we can’t get the data we want, where it matters, in production. Bryan goes on to describe the aforementioned backtrace issue and how it saved Twitter 33% CPU. So don’t pre-optimize, but you’ve got to be prepared to go get the data. In production.
Q & A
What’s the best way to move from one database to two databases (MySQL), when the volume of reads overwhelms one?
Jason doesn’t like the replication approach; it’s not fault tolerant. Reference to Dr Nic’s magic multi-connections gem. Reference to acts_as_readonly. Don’t rely on things that are out of your control; start reading/writing to multiple locations at the application level. Jeremy: So do you want to be in the business of writing SQL or C extensions to Rails? What about MySQL Proxy? Seems OK, but I might not trust it in production. MyTop/InnoTop will tell you about your query volume.
Virtualization: 4 virtual servers w/ web servers on top of a single physical server? Why?
Jason: FreeBSD 4.9 on an early Pentium was the perfect balance of utilization. 18 CPUs by 64G RAM with virtual servers gets us back to that level of utilization. Bryan: Not all virtualization solutions are equivalent! (Solaris containers/zones plug.)
RDBMSes are not good for web applications? Why? Can you give some examples?
Jason: It depends on when you want to join: when people are clicking, or pre-assembled ahead of time. Look at your application and put the data together before people request it. Why does YouTube need an RDBMS? It serves a file that people can comment on.
Mention of Dabble DB, ZFS, Jabber, Atom, Atom over Jabber, etc. as innovative ways of storing objects, data, and so on. GData/GCal most certainly does not store its Atom files in an RDBMS.
Sell Rails apps and have the customer deploy them? What options are available?
Ian: JRuby on Rails with a .war file is an interesting approach. What operational issues/ways to help with scaling remote deployments? Jeremy: Log files are the first line of defense. Jason: Corporate IT are comfortable with Java.
The pessimist in me says that my servers are going to fall over after 5 users. How can I be prepared/not be optimistic about a traffic spike?
Ian: Load test the crap out of the app. Find out the horizontal scaling point. Use solutions like S3 for images. Make sure you can scale by throwing hardware at it. Eventually single points of failure will overcome you (such as a single database), but you can wait until you get to that point before doing something about it.
Jason: You can benchmark your processes and get an idea of what they can do. Most people that want to do something will look at your stuff, and maybe sign up. So front-load and optimize your signup process, possibly by taking it out of Rails.
Jeremy: Conversations with Zed, DHH, etc. have pointed out that sometimes “Rails isn’t good at that, take it out of Rails.” Same thing for the database. Split those things out into a different application.
Bryan: Do your dry land work, know your toolchain, so that when the moment comes, you can dive in and find the problem.
We have a migration that takes a week to run because of text processing. GC was running after every 10th DB statement. Used Rails bench GC patch to overcome the issue with the migration. Any issue running these?
Jason: We run those GC modifications and a few more in production, and they’re fine.
Most conversations revolve around claims like “the database is slow” or “Ruby is slow.” How can we use DTrace to streamline the process?
Jeremy: We spent 20 minutes over lunch (plus some preparation) to find a Memcache issue. It’s worth it to spend a little time to learn the tool.
Bryan: “Awk is God’s gift to all of us.” When DTrace was being reviewed inside of Sun, folks commented “This reminds us of awk.” “Thanks!”
Jason: We’re putting a tracing plugin in Rails as a remote process to collect data from a running app. Apple has shown a commitment to get this into Leopard. Textual and graphical output are possible. I believe in DTrace a lot, and the tooling and documentation will grow beyond its current state as an expert’s tool.
Lastly, what one closing thing would you like to say about Rails scalability?
Ian: Measure.
Jason: Don’t use relational databases.
Jeremy: I thought it was a Joyent sales pitch.
Bryan: Use DTrace (with Joyent accelerators of course).
Tags rails, ruby | 2 comments | no trackbacks
Posted by Nick Sieger
Sat, 19 May 2007 20:34:00 GMT
Chris is here to talk about games, since he used to work for Gamespot. He coded PHP, which is like training wheels without the bike. He had to sit in a glass cube and help keep the site running during E3 last year. There were 100 gajillion teenage boys on their lunch break hitting refresh, and it all blew up. They couldn’t even gzip the responses, because the servers heated up too much. Still, they served 50M pages in a day, without downtime. They did it with Memcache.
Memcache is a distributed hash -- multiple daemons running on different servers. Developed by LiveJournal for their infrastructure; you just put up the servers, and they just work.
Should you use Memcache? No. YAGNI, UYRDNI (unless you really do need it).
Rails and Memcache
Fragments, actions, sessions, objects -- cache it all. You can use:
- memcache-client (by the Robot Co-op guys/Eric Hodel). Marshal.load is 40 times faster than Object.new/loading from the database.
- CachedModel -- integration with ActiveRecord
- Fragment Cache Store
- Memcache session store
...or...
cache_fu
Or, acts_as_cached. It knows about all the aforementioned objects, with a single YAML config file (config/memcached.yml). Word to the wise: don’t use names in your server config file. Use IPs to avoid hitting BIND on every connection to the servers. Don’t let DNS outages bring down your servers.
This is all you need -- if you’re using set_cache, you probably don’t understand how the plugin works. Expire the cache on the “after save” hook, which allows you to cache ID misses as well.
class Presentation < ActiveRecord::Base
  acts_as_cached
  after_save :expire_cache
end
Example: only cache published items
class Presentation < ActiveRecord::Base
  acts_as_cached :conditions => 'published = 1'
end
Cached-scoped-finders (if somebody thinks of a good name, let Chris know). The idea is to move custom finder logic to a method on your model, and then wrap a cache-scoping thingy around it. cache_fu ties this up nicely by giving you a cached method on AR::Base.
class Topic < ActiveRecord::Base
  def self.weekly_popular
    Topic.find :all, ...
  end
end

Topic.cached(:weekly_popular)
Adding the date to the cache key with alias_method_chain:

def self.cache_key_with_date(id)
  ...
end

class << self
  alias_method_chain :cache_key, :date
end
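Filled in, the chain works something like this (a hedged completion -- the date-appending body is a guess at the intent, not Chris’s actual code):

class Topic < ActiveRecord::Base
  acts_as_cached

  # alias_method_chain renames cache_key to cache_key_without_date and
  # points cache_key at cache_key_with_date, so the wrapper delegates:
  def self.cache_key_with_date(id)
    "#{cache_key_without_date(id)}:#{Date.today}"
  end

  class << self
    alias_method_chain :cache_key, :date
  end
end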
Cached loads by ID: Topic.find(1, 2, 3) moves to Topic.get_cache(1, 2, 3), which can parallelize calls to memcached and bring them back as they’re ready.
user_ids = @topic.posts.map(&:user_id).uniq
@users = User.get_cache(user_ids)
You can also cache associations, so that you’re navigating associations via Memcache.
Cache overrides
class ApplicationController < ActionController::Base
  before_filter :set_cache_override

  def set_cache_override
    ActsAsCached.skip_cache_gets = !!params[:skip_cache]
  end
end
reset_cache: Slow, uncached operations can sometimes queue up and wedge a site. Instead, issue cache resets on completion of a request, rather than expiring beforehand. That way, requests that continue to pile up will still use the cached copy until the rebuild is complete.
class Presentation < ActiveRecord::Base
  after_save :reset_cache
end
Versioning: a way to expire cache on new code releases
class Presentation < ActiveRecord::Base
  acts_as_cached :version => 1
end
Deployment: Chris recommends using Monit to ensure your Memcache servers are up.
libketama: consistent hashing that gives you the ability to redeploy Memcache servers without invalidating all the keys.
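The problem libketama solves is easy to demonstrate: with naive modulo placement, changing the server count remaps almost every key (a self-contained sketch, not libketama’s actual algorithm):

require 'zlib'

keys = (1..1000).map { |i| "user:#{i}:profile" }

# Naive placement: server index = hash(key) % server_count. Going from
# 3 to 4 servers moves nearly every key -- effectively a full cache flush.
moved = keys.select { |k| Zlib.crc32(k) % 3 != Zlib.crc32(k) % 4 }.size
puts "#{moved} of #{keys.size} keys relocate when a 4th server is added"
# Consistent hashing keeps that number near keys.size / 4 instead.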
Q: Page caching? A: Nginx with native Memcache page caching, but outside of Rails domains.
Lots of other questions, but dude, Chris talks too fast!
Tags railsconf, railsconf2007 | 2 comments | no trackbacks
Posted by Nick Sieger
Sat, 19 May 2007 19:30:06 GMT
How does Rails figure into virtualization? Bradley will cover this topic with examples and case studies. Along the way, hardware items may be mentioned, but are not critical. Really, it’s about the design of the clusters, not the bits of plumbing you use to connect them up.
Virtualization is the partitioning of a physical server so that you can run multiple virtual servers on it: Xen, Virtuozzo, VMware, Solaris containers, KVM, etc. Bradley uses Xen. The virtual servers share the same processor (hopefully multi-core), memory, storage, network cards (but with independent IP addresses), and so on, yet run independently of each other. VPS, slice, container, accelerator, VM -- it’s all the same. Memory, storage, and CPU can be guaranteed by the virtualization layer.
Why would you do this?
- Consolidate servers for less hardware and cost
- Isolate applications -- bad apps don’t drag the server down; contain intrusions; use different software stacks
- Replicate -- easily create new servers and deploy in a standardized and automated way
- Utilize -- take advantage of all CPU, memory, and storage resources
- Allocate resources -- give a server exactly what it requires, grow and shrink it, and balance loads

Bradley says, “Once you go to virtualization you won’t want to go back. Do the simplest thing that could possibly work.”
Virtual clusters, then, are a bunch of servers cooperating toward a common goal -- if you have many versions or copies of one thing. More than one customer, more than one version of software, etc.
For Rails, this means a lot of things: you can have many development environments and stages, take advantage of memory isolation, protect against PHP/Java, and make multiple-server scaling accessible.
Examples
- Two servers for production and staging
- Three for web/db/staging
- Mixed languages -- instead of 1x1GB server use 3x300MB servers
- High availability applications with fewer servers
- Multiple applications -- one server per application
- Standardized roles/appliances -- mail, ftp, dns, web, db
- They can incubate customers in separate images
- Dev/staging/production servers
- Shared SVN/trac
- 2 physical servers => 8 virtual servers
Boom Design
- Again, multiple stages
- Customer staging, with lower uptime requirements
- Low-traffic apps on a single server, but everything else gets its own dedicated server
- 2GB memory spread across 9 virtual servers
Tags railsconf, railsconf2007 | no comments | no trackbacks
Posted by Nick Sieger
Sat, 19 May 2007 17:22:42 GMT
Cyndi Mitchell -- ThoughtWorks Studios
Enterprise (the “e” word)
Before IT got involved, “enterprise” was a bold new venture. Toyota manufacturing, Skype disruption of telephony.
Enterprise in terms of IT has come to mean bloatware, incompetence, corruption, waste of time, no value.
So this is the battle: the enterprise we need to reclaim (to boldly go where no man has gone before) vs. the bloatware, incompetence, corruption, and fear-based selling.
RubyWorks -- a packaged stack with HAProxy, Mongrel, and Monit, distributed through an RPM repository.
For JRuby support, call Ola.
Tim Bray -- Web Guy from Sun Microsystems
Ways to change the world that are better than just using a cool web framework: http://pragmaticstudio.com/donate/
Sun loves Ruby. Ruby and Rails, that is. The impact of the Ruby language is going to be at least as big as that of Rails on web development.
Sun provided servers for Ruby 2.0 development, and can provide servers for your potentially cool, worthy, open source project, just drop Tim an email.
A few more obligatory plugs for NetBeans and Sun sponsoring the conference. “Pre-alpha,” he says. Hmm, I wonder what Tor would say about that!
JRuby: when would you use JRuby vs. Ruby? If you have no pain, keep using C Ruby. But if you have management concerns, deployment concerns, etc. then by all means do try it!
Obligatory handshake/sandal connection with ThoughtWorks and Cyndi -- running Mingle (and cruisecontrol.rb) with JRuby.
Sun: “Hi, the answer is Java, what was the question?” So why would Sun want to support Ruby? Well, you guys are programmers. Programmers who deliver quality software fast. And those programmers need computers, and OSes, and web servers, and support and services, etc. Plug, plug, plug.
How do you make money on free products? Sun has open-sourced Java, Solaris, even SPARC. Joyent is open-sourcing their stuff. Where does the money come from? 1. Adoption 2. Deployment 3. Monetization at the point of value
What if we win? Are our problems over? No, we’ll have to deal with Java. And .NET. And PHP. From the audience: And COBOL. The Network Is The Computer. The Network Is Heterogeneous. Deal with it. So how do we interoperate?
- Just Run Java (and JRuby, of course!, and JavaScript, and PHP, etc.)
- Use Atom/REST. Everything should have a publish button. Don’t use WS-DeathStar or WCF or WSIT.
Developer issues: Scaling, Static vs. Dynamic, Maintainability, Concurrency, Tooling, Integration, Time to Market. Which two of these matter the most?
Tim’s final assertion: Maintainability and Time to Market, and that’s why we’re all at RailsConf.
Tags railsconf, railsconf2007 | 2 comments | no trackbacks
Posted by Nick Sieger
Fri, 18 May 2007 19:33:31 GMT
Evan is talking about leaving Rails as a full-stack framework and remixing bits and pieces for integration projects. He’s doing it in the context of a case study on Bio, a project at the University of Delaware working with DNA data in large SQL databases. Evan states that all of bioinformatics is an integration problem. (Me: that’s probably true of any research project where data comes from multiple, varied sources. So where does Rails fit in this?)
So how do you cope with this? Use the Rails console as an admin interface, mapping AR onto the legacy schema.
Shadow (gem install shadow) is a REST-ful record server -- a small Mongrel handler that allows you to manipulate the database remotely. It uses dynamic ActiveRecord classes that are created and trashed for each request.
Parallelization -- uses the Sun Grid Engine, which distributes shell scripts across 128 nodes. Used for job and backend processing.
bioruby/bioperl/biopython -- bioinformatics libraries in other languages. bioruby is not complete, but we still want to use Ruby, so he looked at ways of integrating Ruby with other languages. There’s no RubyInline for Perl or Python, and no up-to-date direct/C bindings, so he ended up building a socket-level interface to Python.
Admin tools to consider -- streamlined, active_scaffold, autoadmin, Django (manage.py inspectdb; manage.py syncdb; manage.py runserver). (Wow, come to RailsConf, get a Django demo. Unexpected surprise!)
Extending Rails -- has_many_polymorphs for easy creation of directed graphs.
Frustrating AR tidbits: has_many :through has a huge case statement, with SQL strings everywhere, and tightly intertwined classes. Ugh.
Scaling big webapps: AR/SQL is not the way. Instead, go to a hyper-denormalized model, where the DB is just a big hash. This leads to things like BerkeleyDB, memcached, Madeleine, etc., and MySQL just becomes a persistence store for memcache. One key is moving joins to write time, so that reads don’t need to re-join associations. You’re essentially duplicating/caching the data out to each association, but this makes sharding/splitting of data easier. Example: Flickr user photos vs. photos placed in a group.
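A hedged sketch of that write-time join idea (the models and CACHE constant are hypothetical; get/set are memcache-client’s API):

require 'memcache'
CACHE = MemCache.new('10.0.0.1:11211') # hypothetical memcached server

class Photo < ActiveRecord::Base
  has_and_belongs_to_many :groups
  after_save :push_to_groups

  # Do the "join" when the photo is written: copy its attributes into a
  # denormalized per-group blob, so a read is a single cache get.
  def push_to_groups
    groups.each do |group|
      key = "group:#{group.id}:photos"
      photos = CACHE.get(key) || []
      CACHE.set(key, photos.reject { |p| p['id'] == id } + [attributes])
    end
  end
end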
Evan doesn’t believe that SQL is a viable data store for webapps -- I think he means large-scale webapps. Not everyone who’s trying to build a web application will run into these kinds of issues, so your mileage may vary. Still, it’s refreshing to see more people rebel against the incumbent 30-year gorilla of SQL.
Tags railsconf, railsconf2007 | no comments | no trackbacks