Why Hasn't Ruby Won?

sarahmei
September 20, 2013

For every enthusiastic attendee at a Ruby conference, there are a hundred people who have tried Ruby and walked away. There's also at least one person who's hit the top of Hacker News complaining about it. Are missing features or poor performance chasing people off? Is the community too international, or not responsive enough on GitHub? Maybe the problem is the ones who walk away — the inexperienced masses, poisoned by Flash, Visual Basic and PHP.

Languages and frameworks are interesting things. When you're choosing one, it's important to consider social information about your team — and the project you're evaluating — before making a decision. But while every README has bullet points listing the project's technical features, it's much more challenging to identify and extract the right social data to help that evaluation process.

Let's bring this missing information into the light and look at social data from real teams, real projects and real extracted code. We'll make better decisions. We'll understand why Hacker News exists. Everyone wins! And you still don't have to do Flash.


Transcript

  1. Why Hasn’t Ruby
    Won?
    Sarah Mei
    Ministry of Velocity
    @sarahmei
    Hiya! I’m Sarah Mei.

  2.
    This is my company, the Ministry of Velocity. We’re a small consulting shop here in San
    Francisco, just got going last month. Logo’s new. I have shiny new business cards too that I
    love handing out. I think I’ve assaulted several of you already and made you take one.
    We work on Rails, JavaScript, iOS, and assorted other things. We help you go fast. If you’re
    looking for that kind of thing, come find me, and I will so happily give you a business card
    because I love them. “We” is, by the way, not quite the royal we, because, if you know me,
    you know - I have to have a pair.
    So far it’s been fun. I’m having a good time with it. The best part is that I got to make up my
    own title. So I’m the Head of Propaganda at the Ministry of Velocity.
    I am also...

  3. ...the founder of RailsBridge. You may know us from such programs as the RailsBridge
    Workshops for Women, which have now taught Rails & Ruby to thousands of women at over a
    hundred events.
    What you may not know about RailsBridge is that we’re also starting an initiative to reach out
    to communities of color. We did a joint event with the Black Founders group here in SF a few
    months ago, and we’re looking to do more of that type of thing, while we continue our work
    on the gender gap.
    I know there are folks in this audience who have been both students and volunteers at our
    workshops - sometimes both - and I am super happy to see you all here.
    Now, I started this talk off...

  4. Why Hasn’t Ruby
    Won?
    Sarah Mei
    Ministry of Velocity
    @sarahmei
    ....with a question. It’s kind of an odd one, so let me explain a little bit what I mean here.
    You all use Ruby every day and you love it - I’m not even going to ask for a show of hands,
    because I know you do. Otherwise you wouldn’t be here. But for every one of you, there’s a
    hundred people out there who have tried Ruby and discarded it. Most of them seem to post a
    lot on Hacker News. I mean, ever since Rails hit 1.0, Ruby has been the favorite whipping boy
    of Hacker News.
    Witness:

  5.
    HACKER NEWS U CRAY
    https://www.hnsearch.com
    ...just a few of the articles that have made it to the front page of Hacker News in the last few
    years.
    A bunch of the articles on this list are FUD - Fear, Uncertainty, and Doubt - about Ruby, and
    those are really popular, but even the ones that are neutral or positive about Ruby - and
    especially the ones that are positive - attract horribly ignorant comments about Ruby and
    what it’s good for.
    Now everyone knows not to read the comments on Hacker News, right? Never read the
    comments. They’re a cesspool. I was going to get some screenshots of comments but I felt
    too dirty just going in and looking around and getting out and then I felt like I needed a
    shower or something. Ick. Don’t do it.
    But it’s not just folks in other communities disparaging Ruby. We in the Ruby community poke
    outwards too. We’re constantly...

  6.
    http://www.flickr.com/photos/tonyjcase/2234195102
    ...taking jabs at people who use other, lesser languages. We figure they work somewhere like
    this, and they do Java, or Python, or PHP. Certainly I'm guilty of that myself. I mean, those
    poor people! With their bizarre whitespace conventions, their onerous type-checking, their
    closed-source implementations. I want to save them.

  7.
    http://bit.ly/16f58qB
    Save them with Ruby!
    Why can’t we?
    I have a friend who’s trying to switch out of PHP into something that pays better. Of course
    I’ve given her the pitch for Ruby, but it’s been interesting to hear her thought process around
    switching. And I started wondering if there was something we could learn from that process.

  8. http://www.flickr.com/photos/donnawilliams/6285105157
    Because when you ask developers to evaluate languages, often what happens is they come
    into it with a lot of subconscious expectations based on their previous language. I mean,
    when I switched to Ruby from Java, I wrote Ruby code that looked like Java code for pretty
    much the entire first month. I used for loops!
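    That accent shows up right in the code. Here’s a made-up example of what I mean - the same
    computation written the way a Java developer transliterates it, and the way Ruby wants it:

```ruby
nums = [1, 2, 3]

# Java-accented Ruby: index-based for loop, manual accumulator
squares = []
for i in 0...nums.length
  squares << nums[i] * nums[i]
end

# Idiomatic Ruby: let the collection drive the iteration
squares_idiomatic = nums.map { |n| n * n }

puts squares.inspect            # => [1, 4, 9]
puts squares_idiomatic.inspect  # => [1, 4, 9]
```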
    But there are a lot of other things, beyond the actual code, that people consider when they’re
    evaluating a language.
    For instance:

  9. http://www.flickr.com/photos/obra/200585546
    ...no one starts projects in Smalltalk any more. This is such an amazing picture.
    And this isn’t because Smalltalk isn’t wonderful to work with - it is.
    But...just try posting a question about Smalltalk to Stack Overflow and see how long it takes
    to get answered. Try hiring a senior Smalltalk developer - good luck. They’re even harder to
    find than a senior Ruby developer.
    Well, I’ve met one current Smalltalk developer in my life. Earlier this year at Mountain West
    Ruby Conf in Salt Lake. He told me he was a fulltime Smalltalk developer, and I thought that
    was very impressive. Then he told me that he comes to a couple of Ruby conferences every
    year, and I said “why don’t you go to Smalltalk conferences?” and he said “because there
    aren’t any.”
    [Note: apparently there actually is one in Buenos Aires.]

  10. http://www.flickr.com/photos/chrispark1957/3936556869
    The number of people who use a given language, and how active they are in public, has a
    huge effect on people’s decisions around languages. I wanted to understand better what the
    non-technical influences on people’s decisions were, because I had a hunch that those
    influenced people’s decisions more heavily than the features or performance or other
    technical details of the language.

  11.
    THEORY
    SOCIAL > TECHNICAL
    I think people’s decisions about programming languages are based largely on social factors,
    not on technical ones.
    But I had a sample size of one, my PHP friend, so I figured I could understand this better by
    looking at it at a smaller scale where I could collect more data points.
    Because while I don’t know a lot of people who are making the language decision, I know lots
    of developers who make similar but smaller-scale decisions, all the time.

  12.
    Code location, object model, etc.
    Gems & Libraries
    Frameworks
    Languages
    Decision Frequency
    THE DECISION PYRAMID
    Programming is constant technical decision-making, but it operates at different scales. Every
    day, when we're working on a project, we make hundreds of micro decisions about where
    code will go and how to test it. At a larger scale, we make decisions about which gems to
    use, and even larger, we make decisions about when to switch frameworks, and even larger
    we make decisions about when to switch languages.
    And while I don’t know anybody who makes the language or framework tradeoff regularly, I
    know lots of people who look at gems. I started by asking several of my colleagues to
    describe how they evaluate gems with similar functionality.
    Everyone starts in the same place - looking at the interface of the gem - the features,
    functionality, and usage of the code. This information is easy to find. It's usually listed in the
    README on Github. Let's say...

  13.
    THE INTERFACE
    HTTPARTY
    FARADAY
    https://github.com/lostisland/faraday
    https://github.com/jnunemaker/httparty
    ...you were trying to pick a gem to make HTTP requests, and a quick google search turned up
    httparty and faraday. So assuming you don’t discard httparty for pronunciation reasons right
    away, you look at the README for each project, skip to the usage, and this is what you’d find:
    httparty has class methods, or you can mix it in to your own class.
    Faraday takes a different approach, and gives you a connection object that you use to make
    the calls.
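    To make that contrast concrete, here’s a stripped-down sketch of the two interface shapes -
    methods that live on the class itself, versus an explicit connection object. The request
    logic is stubbed out with strings, so this shows only the shape, not the real gems’ APIs:

```ruby
# httparty-ish shape: request methods available on the class itself
module ClassMethodStyle
  def get(url)
    "GET #{url}"   # stub; the real gem would perform an HTTP request here
  end
end

class GithubClient
  extend ClassMethodStyle   # the class itself gains a .get method
end

# faraday-ish shape: build a connection object, then call methods on it
class Connection
  def initialize(base_url)
    @base_url = base_url
  end

  def get(path)
    "GET #{@base_url}#{path}"   # stub
  end
end

puts GithubClient.get("https://api.github.com/repos")  # call on the class
conn = Connection.new("https://api.github.com")
puts conn.get("/repos")                                # call on the object
```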
    This by itself is not enough for most people to make a decision. Just like the features of a
    language aren’t enough to tell you if it’s worth using, the features of a gem aren’t enough
    either. So I asked people to enumerate everything they do to evaluate a gem. Here’s what I
    got:

  14.
    •Read README (Github)
    •Frequency/recency of commits (Github)
    •Number & age of issues (Github)
    •Comments on issues & pull requests (Github)
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    •Date of last release (Rubygems)
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Recency & completeness of official documentation
    •Availability of books
    •Mentions on screencasts/podcasts
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    •Drop it in and see what happens
    •Build a sample app around it
    EVALUATION TECHNIQUES
    This is a long list. I had to move the font size way down to get it to fit. And it’s not
    exhaustive. I could put more.
    I want to point out a few interesting features it has.

  15.
    •Read README (Github)
    •Frequency/recency of commits (Github)
    •Number & age of issues (Github)
    •Comments on issues & pull requests (Github)
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    •Date of last release (Rubygems)
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Recency & completeness of official documentation
    •Availability of books
    •Mentions on screencasts/podcasts
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    •Drop it in and see what happens
    •Build a sample app around it
    EVALUATION TECHNIQUES
    1
    2
    The first is that different people rank these differently. I always poke around on Github first to
    establish that it has a reasonable interface and activity, and then I ask my co-workers for
    input. But one of the people I talked with works at a company where displaying ignorance is
    not really a good thing, so he never asks his co-workers. He goes straight for documentation
    and tutorials.

  16.
    •Read README (Github)
    •Frequency/recency of commits (Github)
    •Number & age of issues (Github)
    •Comments on issues & pull requests (Github)
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    •Date of last release (Rubygems)
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Recency & completeness of official documentation
    •Availability of books
    •Mentions on screencasts/podcasts
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    •Drop it in and see what happens
    •Build a sample app around it
    EVALUATION TECHNIQUES
    The second is that we rarely do all of these for any given evaluation. Maybe we do if we’re
    evaluating Rails vs. Sinatra, but if it’s httparty vs. faraday, probably not.
    And the last thing is that this list changes as our community changes. For example, before
    Rails, a lot of discussion about Ruby libraries took place on the official English-language
    Ruby mailing list. But today, it wouldn't occur to any of us, I don’t think, to post to the official
    English-language Ruby mailing list when we're trying to pick an http gem.
    So the way we collect and use this data is pretty complicated. We weight things differently at
    different times, different people do different things...is there anything we can pull out of this?

  17.
    •Frequency/recency of commits (Github)
    •Number & age of issues (Github)
    •Comments on issues & pull requests (Github)
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    •Date of last release (Rubygems)
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Recency & completeness of official documentation
    •Availability of books
    •Mentions on screencasts/podcasts
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    EVALUATION TECHNIQUES
    •Read README (Github)
    •Drop it in and see what happens
    •Build a sample app around it
    Interface
    README,
    use gem
    Most of this data is not technical. It’s social. It’s people data. It’s information about the
    maintainers and users of a project.
    There is some technical data - let’s pull that out first. Here’s what everyone starts off with.
    They look directly at the features & functionality of the gem. Not the internals - the external
    interface. We’ll put that stuff over here.

  18.
    •Frequency/recency of commits (Github)
    •Number & age of issues (Github)
    •Comments on issues & pull requests (Github)
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    •Date of last release (Rubygems)
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Recency & completeness of official documentation
    •Availability of books
    •Mentions on screencasts/podcasts
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    EVALUATION TECHNIQUES
    Interface
    README,
    use gem
    Ok. Now, these things...

  19.
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    EVALUATION TECHNIQUES
    Activity
    Commits, issues, PRs,
    releases, docs
    •Frequency/recency of commits (Github)
    •Number & age of issues (Github)
    •Comments on issues & pull requests (Github)
    •Date of last release (Rubygems)
    •Recency & completeness of official documentation
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Availability of books
    •Mentions on screencasts/podcasts
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    Interface
    README,
    use gem
    ...are information about the Activity of a project. How often is it updated, how likely am I to
    get help from the maintainer or get a pull request merged? We’ll pull these out over here, and
    give ourselves a little more space here. Ok, what’s left? Well, all this stuff...

  20.
    EVALUATION TECHNIQUES
    •Number & recency of tutorials/blog posts (Google)
    •Relative popularity to similar gems (Ruby Toolbox)
    •Number & answer status of Stack Overflow questions
    •Opinions of work colleagues
    •Opinions on Twitter
    •Hacker News/Reddit discussions
    •Opinions of outside developers (email, IM, etc.)
    •Availability of books
    •Mentions on screencasts/podcasts
    Popularity
    SO, HN, Google
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    Interface
    README,
    use gem
    Activity
    Commits, issues, PRs,
    releases, docs
    is information about the project's Popularity among other developers. Have any of my co-
    workers used it? How easy will it be to find help when I run into a problem? How likely is it
    that someone else has already fixed a bug by the time I encounter it?
    We’ll pull that stuff out over here, and now we’re left with a couple of outliers. What are these?
    Evaluating the code, and evaluating the tests. Yeah, they don’t really fit in any of the groups
    we have so far. Interface, Activity, and Popularity are all pretty straightforward. There are
    well-known sources of data for this information. But what we have left is a little fuzzier. It’s
    really about how familiar things feel to you. Is this idiomatic Ruby? Does the maintainer share
    my test strategy? How much does the code match up with what I would write if I were going to
    roll my own?
    How much does this code feel like other code you've seen?
    Let’s call this Familiarity.

  21.
    EVALUATION TECHNIQUES
    Popularity
    SO, HN, Google
    •Evaluate code directly (Github)
    •Evaluate the tests (Github)
    Interface
    README,
    use gem
    Activity
    Commits, issues, PRs,
    releases, docs
    Familiarity
    Look at code
    So there’s our last group. These are, broadly speaking, the four categories of data that we
    consider when we’re evaluating a gem.
    Interestingly, only one of these is purely technical data and that’s the Interface. All of the
    other 3 have a social component. Popularity and Activity are almost purely social. And
    Familiarity is partially a technical judgement because you’re spelunking through the internals
    of the code, and partially a social judgement, because you’re doing that in order to find out
    how much the maintainer thinks like you do.
    So certainly by volume, we consider more social data than technical data.
    What happens pretty often in Ruby, actually, is that you have two gems that both have a
    sufficient interface, are about as popular, and are about as active. So the judgement comes
    down to Familiarity. Does the code feel good? Let’s talk a little bit about what that means.

  22.
    THE INTERFACE
    HTTPARTY
    FARADAY
    I want to come back to this example. “Familiarity” is an intuitive judgement, but that doesn’t
    mean we can’t follow the thought process that someone uses when they’re evaluating
    particular code.
    If you recall, httparty has class methods, or a module you can mix in; Faraday gives you a
    connection object. Here’s the thought process one of my colleagues described when I asked
    him to talk me through how he evaluates this code.
    I’ve written it up as a conversation between me and my 4-year-old son. I think you’ll see why
    in a minute.

  23.
    THE CONVERSATION
    ME: I don’t like mixing helper methods into objects.
    4YO: Why?
    4YO: Why?
    ME: It’s a sign there’s another object trying to get out.
    4YO: Why?
    ME: OO design! The Ruby Way is to use objects.
    ME: So that my code is easier to test.
    4YO: Why?
    ME: Testing is important.
    4YO: Why?
    ME: I have a team with mixed skill levels.
    4YO: Can I have an otter pop?
    - I don’t like mixing helper methods into objects.
    - It’s a sign there’s another object trying to get out.
    - OO design! The Ruby Way is to use objects.
    - So that my code is easier to test.
    - Testing is important.
    That last “why” is a question we don’t ask ourselves often enough, I think. In this case, the
    answer was “Because I have a team with mixed skill levels. I need to have confidence in all
    of my code.”
    And of course all conversations in my house end the same way: can I have an otter pop?
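    For what it’s worth, here’s a tiny made-up example of the refactoring that conversation is
    pointing at - the helper mixed into the host object, versus the “other object trying to get
    out” extracted so it can be tested (and swapped) on its own. The tax-calculation domain and
    the 8% rate are invented for illustration:

```ruby
# Before: helper methods mixed into the object that uses them
module PriceHelpers
  def with_tax(amount)
    (amount * 1.08).round(2)   # invented 8% rate, for illustration
  end
end

class Order
  include PriceHelpers

  def initialize(subtotal)
    @subtotal = subtotal
  end

  def total
    with_tax(@subtotal)
  end
end

# After: the helper becomes its own object, testable in isolation
class TaxCalculator
  def with_tax(amount)
    (amount * 1.08).round(2)
  end
end

class BetterOrder
  def initialize(subtotal, calculator = TaxCalculator.new)
    @subtotal   = subtotal
    @calculator = calculator   # injectable, so tests can pass in a fake
  end

  def total
    @calculator.with_tax(@subtotal)
  end
end

puts Order.new(100).total        # => 108.0
puts BetterOrder.new(100).total  # => 108.0
```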

  24.
    THE CONVERSATION
    ME: I don’t like mixing helper methods into objects.
    4YO: Why?
    4YO: Why?
    ME: It’s a sign there’s another object trying to get out.
    4YO: Why?
    ME: OO design! The Ruby Way is to use objects.
    ME: So that my code is easier to test.
    4YO: Why?
    ME: Testing is important.
    4YO: Why?
    ME: I have a team with mixed skill levels.
    4YO: Can I have an otter pop?
    Of course, this is not a conversation we actually have with ourselves. It’s usually subconscious, which is
    why it has a variety of fuzzy labels. The way people described it to me most often was
    "pattern matching." Now that's interesting. Pattern matching is a difficult job for a computer -
    and a relatively easy one for a human brain. Particularly a human brain belonging to...

  25.
    http://www.flickr.com/photos/tobiasmik/5020355210
    ... a software developer.
    You may be familiar with the concept of a neural net. A neural net is software that tries to
    replicate the learning process of a human brain. In very simplified terms, the way it works...

  26.
    A VERY VERY VERY VERY SIMPLE NETWORK
    [Diagram: a single input feeding a tree of branching points, each marked “?”, fanning out
    to several outputs]
    ...is that you construct, in software, a decision tree with algorithms at the branching points -
    that’s these question marks - that determine which path to go down, given a particular input.
    Then you start feeding it "training data", which adjusts the algorithms at the branching
    points, and makes it more likely to correctly process similar inputs. In general, its matching
    power gets greater the more inputs you give it, so it appears to “learn.”
    It turns out that this is a decent approximation of how our human brains learn to match
    patterns as well. The more inputs you give it, the better its matching ability. Or in other
    words, if you want to develop a sense of familiarity, you’ll get better at it the more code you
    process.
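    Here’s a toy version of that idea in code - a single “neuron” with two weighted inputs and a
    threshold, nudged by training data until it classifies the AND function correctly. It’s the
    smallest possible sketch of the training loop described above, not a real neural net:

```ruby
# Training data: inputs and the target output (logical AND)
inputs  = [[0, 0], [0, 1], [1, 0], [1, 1]]
targets = [0, 0, 0, 1]

weights = [0.0, 0.0]   # the "algorithms at the branching points"
bias    = 0.0
rate    = 0.1          # how hard each example nudges the weights

predict = lambda do |x|
  (weights[0] * x[0] + weights[1] * x[1] + bias) > 0 ? 1 : 0
end

# Each pass over the training data adjusts the weights a little,
# making similar future inputs more likely to be handled correctly
100.times do
  inputs.each_with_index do |x, i|
    error = targets[i] - predict.call(x)
    weights[0] += rate * error * x[0]
    weights[1] += rate * error * x[1]
    bias       += rate * error
  end
end

puts inputs.map { |x| predict.call(x) }.inspect  # => [0, 0, 0, 1]
```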
    Now we actually talk about this all the time. Most of you are probably familiar with “the
    10,000 hours” phenomenon.

  27.
    Image: http://www.flickr.com/photos/16210667@N02/9637894936 Book: http://en.wikipedia.org/wiki/Outliers_(book)
    I’ve lost count of the number of people who have told me that you must spend 10,000 hours
    programming in order to get really good at it. Our brains need 10,000 hours of training data
    to reach mastery.
    More generally, what this says is that the only way to master programming is to just wait for it
    to happen. This idea is originally from Malcolm Gladwell's book "Outliers," and it's taken
    pretty deep root in our community. We talk about this all the time, even though we all know
    some counter examples, people who seem to have skipped ahead in line.

  28.
    Image: http://www.flickr.com/photos/fepigio/315844360
    Article: http://www.joelonsoftware.com/articles/HighNotes.html
    We've all had a co-worker with 3 years of programming experience who made better
    decisions than the one with 10 years of experience. We all know, just from our everyday life,
    that quality does not directly correlate with how long someone has been a developer.
    And that is where we tell ourselves a myth - the myth of the "10x" programmer. You folks
    know this one too - this is the idea, that Joel Spolsky wrote about in 2005, that there’s a
    small number of super-developers who are 10 times more productive than the mass of
    average developers.
    Another way to state that is that if you look at the 10,000-hours-to-mastery idea as a...

  29.
    http://www.flickr.com/photos/kumitey/2328256519
    ...long-term learning curve, which of course it is, then there are some people who go up it
    faster. And there are some who seem to stall. The people who reach mastery more quickly are
    the ones we see as 10x.

  30.
    Research: http://bit.ly/14riXPc
    Image: http://www.flickr.com/photos/sovietuk/1878443254
    Now I called it the myth of the 10x programmer. But like most myths, part of it is true. There
    are huge differences in programmer productivity. That's got solid research to back it up,
    going back 50 years, and I've seen it in the wild. You've seen it. We've all felt at some point in
    our careers like we were sitting next to someone just way out of our league.

  31.
    http://www.flickr.com/photos/creature_comforts/5132563874
    But when I hear people talk about this 10x concept, there’s usually another
    dimension. They say that the 10x programmers have a “gift” that some people, no matter
    how hard they try, can never acquire. And that is actually not supported by the research. It's
    not proven or disproven - it just isn't addressed.
    I searched, but as far as I can tell, no one has done the study that I wanted to read - the
    study where they take people who measure average on the productivity scale, and try to get
    them up to the level of their 10x peers.
    Ultimately, both of these ideas - the 10,000 hours and the 10x programmer - are
    unsatisfying. They feel like...

  32.
    http://en.wikipedia.org/wiki/File:Blind_men_and_elephant3.jpg
    ...two blind men feeling along an elephant’s body, trying to figure out what an elephant is
    like. One of them runs his hands over the elephant’s leg, and says, an elephant is like a pillar.
    The other feels the elephant’s tusk and says, an elephant is like a strong pipe!
    They’re both right. An elephant is like both of those things. But it’s also way more than either
    of them.
    These ideas are true, in their way, but they are also vast oversimplifications of a process that
    deserves nuance and deeper understanding.
    Malcolm Gladwell is right. He’s feeling the leg of the elephant, and he sees a learning curve
    that leads to mastery, that it is not easy to surmount. But he’s wrong that your progression is
    purely time-based.
    And Joel Spolsky is right. He’s feeling the tusk, and he sees that there are people who move
    up the curve faster, quickly outpacing their peers. But he’s wrong that it comes from a “gift”
    that their peers cannot acquire.

  33.
    http://www.flickr.com/photos/darthale/5960218251
    My theory is that you can accelerate yourself up that curve. There are natural variations in
    cognitive abilities, but they are not significant enough to explain the enormous productivity
    differences among programmers. Just like the natural variations in cognitive abilities between
    men and women are not enough to explain the gender disparity that we see in our
    community.
    So why do I think you can accelerate yourself up that curve? Well, I’ll tell you a story, and then
    I’ll drop you some science.

  34.
    New York Crowd
    http://www.flickr.com/divya_/5173340326
    At this point in my career, I've worked with hundreds of developers. Most of them, I have pair
    programmed with. Pair programming gives you an unusual glimpse into another person’s
    decision-making process.
    When you pair with someone, in less than a day, you get a really good sense of where they lie
    relative to you on the skill spectrum. And when I think through all the people I’ve paired with
    who either way outperformed everyone else on their team, or who were obviously on a fast
    upward trajectory, one common thread stood out.
    They all, without exception, made a concerted effort to read, write, and understand lots of
    different types of code.

  35.
    experimentation
    http://www.flickr.com/puuikibeach/3299183483
    If they did Rails, they'd spend time studying the insides of gems. If they used rspec, they'd
    study code tested in minitest or test::unit, and they'd write some themselves. If they did Ruby
    and were on a JavaScript-heavy project, they'd throw themselves into learning idiomatic
    JavaScript. Sometimes they'd learn new languages, impractical ones with no immediate
    application, just to see how other people did things.
    Some of them did this outside of work, but most just used work time, a few minutes of
    digression here & there, poking a little deeper into the code, the framework, and the language
    than most people do.

  36. Image: http://www.flickr.com/photos/38287236@N06/4151565491
    Paper: http://www.cell.com/neuron/retrieve/pii/S0896627306004752
    And now for the science. There’s lots of great research out there around how people learn
    things. And one of the most interesting to me recently is a study published in 2006 on the
    effects of novel information in learning.
    They gave people a set of images to study for a fixed period of time. Then later, they were
    asked questions about the details of the images. They found that people whose images
    included a mix of ones they’d seen before, and ones they hadn’t, were significantly better at
    recall than people who just looked at images they’d seen before. And they weren’t just better
    at remembering the novel images - they were better at remembering all of them, both in the
    short-term and the long-term.
    And then it gets even more interesting. They conducted some MRI studies to figure out why
    this was happening, and they discovered that when we encounter a piece of novel
    information, our midbrain responds by releasing dopamine. Dopamine is a neurotransmitter
    that accelerates learning in ways that we don’t fully understand.
    The practical upshot of this is that when you’re learning, mixing bits of new information into
    what you’re studying means that for a given unit of time, you’ll learn more than you would
    otherwise.

  37.
    Variety
    http://www.flickr.com/photos/wwworks/417511823
    And that lines up with my experience. The best programmers I’ve ever worked with don’t
    maximize the number of hours they spend programming. They maximize the variety of the
    code they work in. And that accelerates them up the curve to mastery faster.
    So to summarize - Malcolm Gladwell and Joel Spolsky are both right. And they’re both wrong.
    Our journey up the curve to mastery is more complex than either of their ideas admit. And
    you can hack it.
    So now I want to bring this back to languages. Our investigation of the decision process
    behind gems yielded us this:

  38. of
    EVALUATION TECHNIQUES
    Popularity
    SO, HN, Google
    Interface
    README,
    use gem
    Activity
    Commits, issues, PRs,
    releases, docs
    Familiarity
    Look at code
    The categories of data that we collect and analyze when we’re choosing among gems. Can we
    apply this to languages?
    I decided to do a thought experiment with an example that many of us are navigating right
    now: the decision between ruby and javascript on the server side. Let’s see if these categories
    apply.
    Starting with the Interface.

  39. of
    Interface
    README,
    use gem
    INTERFACE
    With gems, this is the external interface and capabilities. With languages, this is also the
    external interface & capabilities. This is where we talk about things like garbage collection
    and threading models. This, actually, is what most node vs. rails posts focus on. Rails builds
    you a blog, node builds you a chat server. Real projects are almost never that clean-cut, and
    often include enough standard request-response features to merit looking at Rails, and
    enough realtime-ish update-y stuff to merit looking at node. But people use and in fact
    usually focus on this category when they’re comparing languages.

  40. of
    Activity
    Commits, issues, PRs,
    releases, docs
    Interface
    README,
    use gem
    ACTIVITY
Activity - node, rails, javascript, and ruby are all very active projects. But this category is
where you sometimes get people reacting, positively or negatively, to the personalities
involved in a project, because at this larger level, the category includes questions like “how
does the leadership respond to criticism?”

  41. of
    Popularity
    SO, HN, Google
    Interface
    README,
    use gem
    Activity
    Commits, issues, PRs,
    releases, docs
    POPULARITY
Popularity - this also matters when considering languages. Ruby and Rails, being older
projects, probably have the advantage here over node. Certainly on the node side you roll a
lot of your own code for things you could do with a gem in Ruby.
    And finally, ...

  42. of
    Familiarity
    Look at code
    Interface
    README,
    use gem
    Activity
    Commits, issues, PRs,
    releases, docs
    Popularity
    SO, HN, Google
    FAMILIARITY
Familiarity. This was the most complicated one when we were looking at gems, and it’s the
most complicated one here too. At the language level, it means: how well will the
internalized assumptions from my current set of languages match up with the expectations of
this new language?
    And I think when you’re coming from Ruby and looking at JavaScript, they don’t really match
    up well at all. In Ruby we’re used to thinking of software structure being oriented around
    classes and objects. But JavaScript doesn’t have classes. It has prototypes. And that leads to a
    different structure in large systems than we’re used to on this side.
There are lots of projects trying to bridge this gap. CoffeeScript, for example, pretty much
lets you import your class-based structure assumptions wholesale and apply them to
JavaScript. I tend not to recommend it, because I think people should learn prototypes first,
and I think JavaScript applications should be structured differently than Ruby applications.
And that’s great, in an ideal world where there won’t be any real, live people writing your
software.
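That class-versus-prototype mismatch is easy to see in a few lines. Here’s a minimal sketch (not from the talk; the names greeterProto and alice are just illustrative) of how JavaScript shares behavior through the prototype chain rather than through a class:

```javascript
// There is no class here: greeterProto is just a plain object.
var greeterProto = {
  greet: function () {
    return "Hello, " + this.name + "!";
  }
};

// Object.create makes a new object whose prototype link points at
// greeterProto. alice has no greet method of her own; when you call
// alice.greet(), the lookup walks up the prototype chain and finds it.
var alice = Object.create(greeterProto);
alice.name = "Alice";

alice.greet();                                  // => "Hello, Alice!"
Object.getPrototypeOf(alice) === greeterProto;  // => true
```

The structural difference is that behavior lives on a shared, mutable object that instances delegate to at runtime, rather than in a class definition that instances are stamped from - which is exactly the assumption a Ruby developer has to unlearn.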

  43. of
    Familiarity
    Look at code
    Interface
    README,
    use gem
    Activity
    Commits, issues, PRs,
    releases, docs
    Popularity
    SO, HN, Google
    FAMILIARITY
But sometimes you don’t have the time, or maybe the expertise on staff, to convert a team of
Ruby developers into idiomatic JavaScript developers. Because while for gems, familiarity is a
function of one person’s mind, at the scale of languages, familiarity is a function of all the
people who are going to be working with it on this project. Your familiarity with JavaScript
might be fantastic - but what about the rest of your team?
    And as a result? The capabilities and interests of your team are ultimately the strongest driver
    of language decisions. Usually even more than the task you’re trying to complete. It turns out
    - software really is just people.

  44. of
    http://bit.ly/16f58qB
    This will never happen. We can’t save them with Ruby.
    It’s not because Ruby is...

  45. of
    http://www.flickr.com/photos/wwworks/417511823
    ...slow. It's not because Ruby’s...

  46. of
    http://www.flickr.com/picsoflife/6321200672
    ...main contributors don't speak English.
    And it's not because the most famous Ruby developer in the world spends most of his time...

  47. of
    http://37signals.com/svn/posts/2814-behind-the-scenes-37signals-race-car-graphics
    ...racing cars.
    Ruby can’t win, because...

  48. of
    Match
    http://www.flickr.com/therichbrooks/4040197666
    ...language choices hinge on familiarity. And everyone who walks in Ruby’s door has given
    their brain a different set of training data than you gave yours. So not everyone’s going to be
    a match.

  49. of
    http://www.flickr.com/photos/christolmie/2597909194
    This is a game where there is no winning. But there is definitely losing.
Ruby could wither into a niche language like COBOL or Smalltalk. And none of us wants that.
But - remember my theory? My theory is that the learning curve that leads to mastery can be
hacked, that there are things you can do to accelerate your way up. I’ve found one of them.
I’m sure there are others. Let’s figure out what they are, because then we’ll be an
unstoppable force.
    But in the meantime, given the dramatic effect that novel information has on your brain, the
    best thing you can do for Ruby is to go learn something else.
    And then come back.

  50. of
    http://www.flickr.com/photos/jek-a-go-go/3817953924
    I’ll save you an otter pop.

  51. Thank you!
    Sarah Mei
    Ministry of Velocity
    @sarahmei
    Thank you.