
Experiences analyzing OSS ecosystems

September 29, 2016

Inner Source Commons @ Boston, 2016

  1. Outline: What we do; Use cases; Methodology (similar method, different goals in inner source); What we have learned / achievements; Summary
  2. Goal: Share our experience analyzing OSS communities; bring that knowledge to Inner Source; help build trustworthy communities
  3. /me: Bitergia co-founder (CDO); PhD in Free Software Engineering ("Buggy human being activity patterns"); quarterly reports, diversity analysis, ad hoc dashboards
  4. What we do: "Software analytics for your peace of mind." Aggregate info from the usual OSS data sources and enrich data about those OSS ecosystems. Outcomes: data API, actionable dashboards, quarterly reports, training, customizations
  5. What we do: The Grimoire* toolchain helps with this. Supported data sources: Git, Gerrit, GitHub (Enterprise), mailing lists, IRC channels, Slack, Twitter, Jira, Bugzilla, Reps, Meetup, Stack Overflow (Stack Exchange), Redmine, Phabricator... GrimoireLab, MetricsGrimoire @ github.com
  6. What we do: Metrics, dozens of them per data source. Tools to merge info, e.g. projects or developers across several data sources. Panels with several focuses (others can be built)
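Merging the same developer's identities across data sources is the core of the tooling mentioned above. A minimal sketch of the idea, assuming a naive e-mail-based matching key (the record fields and matching rule here are illustrative assumptions, not GrimoireLab's actual API):

```python
from collections import defaultdict

def merge_identities(records):
    """Group activity records from several data sources by e-mail,
    yielding one unified profile per developer."""
    profiles = defaultdict(lambda: {"names": set(), "sources": set()})
    for rec in records:
        key = rec["email"].strip().lower()   # naive matching key
        profiles[key]["names"].add(rec["name"])
        profiles[key]["sources"].add(rec["source"])
    return dict(profiles)

# Hypothetical activity records from three data sources
records = [
    {"name": "Jane Doe", "email": "jane@example.org", "source": "git"},
    {"name": "jdoe",     "email": "Jane@example.org", "source": "irc"},
    {"name": "Jane Doe", "email": "jane@example.org", "source": "gerrit"},
]

merged = merge_identities(records)
print(sorted(merged["jane@example.org"]["sources"]))  # ['gerrit', 'git', 'irc']
```

Real identity unification needs manual curation on top of any automatic rule (people use several addresses), which is why this stays a tooling layer rather than a one-off script.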
  7. What we do, features: Projects/developers/other-layer visualizations. Tooling to aggregate info per project/developer. Drill-down, enrichment process, data API...
  8. Use case, attraction of developers and companies: Marketing, transparency and neutrality. E.g.: who are the relevant members of this community?
  9. Use case, development cycle: From user stories to deployment. E.g.: how fast are we implementing requirements? How long does each phase of development take? Feature request -> backlog -> developing -> reviewing -> CI -> merged into master -> more CI -> deployed to customer
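Measuring the pipeline in slide 9 boils down to differencing the timestamps at which a feature enters each phase. A sketch with an assumed event shape (the phase names and dates are made up for illustration):

```python
from datetime import datetime

# Hypothetical event log for one feature: (phase entered, timestamp)
events = [
    ("feature request", datetime(2016, 9, 1, 9, 0)),
    ("backlog",         datetime(2016, 9, 1, 12, 0)),
    ("developing",      datetime(2016, 9, 5, 10, 0)),
    ("reviewing",       datetime(2016, 9, 9, 10, 0)),
    ("merged",          datetime(2016, 9, 10, 16, 0)),
]

def phase_durations(events):
    """Hours spent in each phase: the gap between the timestamp a
    phase was entered and the timestamp the next phase was entered."""
    durations = {}
    for (phase, start), (_, end) in zip(events, events[1:]):
        durations[phase] = (end - start).total_seconds() / 3600
    return durations

durations = phase_durations(events)
print(durations["reviewing"])  # 30.0
```

Aggregating these per-feature durations over many items is what answers "how fast are we implementing requirements?" at the project level.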
  10. Use case, engagement: Attraction and retention of new members. E.g.: how well are we retaining developers? And compared to others?
  11. Use case, contributors funnel: From first traces to core developer. E.g.: what % of basic contributors are active developers nowadays? E.g.: first traces (bug report, email, feature need) -> first pull request -> first accepted pull request -> active in the project/repository
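The funnel question in slide 11 is answered by expressing each stage as a percentage of the entry stage. A sketch with made-up counts (the numbers are illustrative, not real project data):

```python
# Hypothetical contributor counts at each funnel stage
funnel = [
    ("first trace (bug report, email, feature need)", 500),
    ("first pull request",                            120),
    ("first accepted pull request",                    80),
    ("active in the project/repository",               30),
]

def conversion_rates(funnel):
    """Percentage of contributors reaching each stage, relative to
    the number entering the funnel at the first stage."""
    _, total = funnel[0]
    return {stage: 100.0 * count / total for stage, count in funnel}

rates = conversion_rates(funnel)
print(rates["active in the project/repository"])  # 6.0
```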
  12. Use case, social network analysis: Knowledge evolution. E.g.: from hidden knowledge to shared knowledge. E.g.: who is this developer working with? What are their areas of knowledge?
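One simple way to get at "who is this developer working with?" is a collaboration graph linking developers who touched the same files. This is a sketch of that idea only, with hypothetical commit data, not the SNA method the slide necessarily refers to:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical commit data: (author, file touched)
commits = [
    ("alice", "core/db.py"), ("bob", "core/db.py"),
    ("alice", "ui/panel.js"), ("carol", "ui/panel.js"),
    ("bob", "docs/index.md"),
]

def collaboration_graph(commits):
    """Link two developers whenever they changed the same file."""
    by_file = defaultdict(set)
    for author, path in commits:
        by_file[path].add(author)
    graph = defaultdict(set)
    for authors in by_file.values():
        for a, b in combinations(sorted(authors), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

g = collaboration_graph(commits)
print(sorted(g["alice"]))  # ['bob', 'carol']
```

The per-file grouping doubles as an "areas of knowledge" signal: the set of paths a developer appears in approximates what they know.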
  13. Use case, mentorship: Helping newcomers. E.g.: who is helping others understand the current product? E.g.: who are the newcomers? How fast are newcomers becoming mentors?
  14. (Inner Source) Method: We believe it translates quite directly from the OSS world. Different goals, similar data sources, similar method, probably the same tools to analyze repositories
  15. (Inner Source) Method, goals (some in detail): Increase collaboration. Increase documentation. Increase the number of people working on certain technologies and at different layers
  16. (Inner Source) Method, data sources: The same as in OSS projects! GitHub [Enterprise], mailing lists, IRC channels, Slack, Gerrit, etc.
  17. (Inner Source) Method: Metrics not so usual in the OSS world: productivity, human resources, priorities
  18. Lessons learned, define goals: First-stage goals differ from mature goals. Initially: understand and check. Then: new questions and definitions of KPIs. Finally: definition of alerts and actions
  19. Lessons learned, infrastructure: Infrastructure is similar in most OSS projects, even when they use different tools (mailing lists vs. Gerrit), although some tools help a lot
  20. Lessons learned, inner source: It is feasible to bring that experience and toolchain to the inner source world. Metrics are similar but serve different purposes
  21. Lessons learned: You bring the knowledge of your community and how to proceed; we measure whether this is being successful
  22. Lessons learned, cultural issues: Metrics may lead to undesired situations. People don't like to be tracked. Gamification may not be appealing to devs. Having metrics may change people's behaviour so as to look good at those metrics
  23. Lessons learned, tooling: Several layers help the 'Inner Source Team'. Data API -> data scientists. Quarterly reports -> management. Code review panels -> developers. Demographics panel -> recruiting
  24. Lessons learned, tooling: Whatever you use (dashboards, reports, ...), try to involve your community! They'll feel it's part of their process and they'll help improve that tooling (an inner source project in itself). It's all about transparency and being data-driven
  25. OpenStack, code review: Perception of a decay in code review. Study: time waiting for a submitter or reviewer action (Gerrit). Outcome: training for newcomers, more focused reviewers, and quarterly reports to follow those metrics
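The OpenStack metric above splits a review's elapsed time by who the change was waiting on. A sketch of that split, assuming a simplified event timeline rather than the real Gerrit data model (roles and dates are illustrative):

```python
from datetime import datetime

# Hypothetical review timeline for one change: (actor role, action time)
timeline = [
    ("submitter", datetime(2016, 9, 1, 10, 0)),  # patch uploaded
    ("reviewer",  datetime(2016, 9, 3, 10, 0)),  # review comment
    ("submitter", datetime(2016, 9, 3, 14, 0)),  # new patch set
    ("reviewer",  datetime(2016, 9, 4, 9, 0)),   # approved, merged
]

def waiting_time(timeline):
    """Hours the change spent waiting on each role: after a submitter
    action the change waits for a reviewer, and vice versa."""
    waits = {"submitter": 0.0, "reviewer": 0.0}
    for (role, start), (_, end) in zip(timeline, timeline[1:]):
        waiting_on = "reviewer" if role == "submitter" else "submitter"
        waits[waiting_on] += (end - start).total_seconds() / 3600
    return waits

waits = waiting_time(timeline)
print(waits)  # {'submitter': 4.0, 'reviewer': 67.0}
```

Splitting the wait this way is what turns "review feels slow" into an actionable signal: it shows whether submitters or reviewers are the bottleneck.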
  26. Xen, code review: Perception of a decay in code review. Study: time to merge, cycle time. Outcome: definition of user stories and creation of a dashboard to follow those metrics
  27. Ceph: Goal: aggregate info about machines and performance. Study: analysis of the logs left by those machines. Outcome: dashboard to follow all of those metrics
  28. MediaWiki: Goal: engineering/community metrics focused on volunteers (to foster their participation). Study: analysis of those developers. Outcome: dashboard to follow all of those metrics
  29. Eclipse Foundation: Goal: have a dashboard for generic tracking purposes. Outcome: generic dashboard to follow all of those metrics
  30. Summary: Experts at measuring OSS communities from several perspectives. Infrastructure similar to inner source needs. Bring that experience to inner source; we already have the tools! We'd love to learn together
  31. Last but not least: You apply inner source, but how do you measure success? Indeed, how do you know you're measuring the right thing?