How Ruby Survives
...in the World of the Cloud-Native
Uchio Kondo / GMO Pepabo, Inc.
At RubyKaigi 2018
Uchio Kondo @udzura
Software Engineer @ GMO Pepabo
R&D / Developer Productivity Team / From Fukuoka
• Perfect Ruby/Perfect Ruby on Rails
• yao - Yet Another OpenStack client
• Haconiwa - mruby on containers
• Fukuoka.rb Meetup @ Fukuoka
• Chief Organizer of Fukuoka Regional RubyKaigi 02
Splatoon 2 Main Weapon
• (Forge) Splattershot Pro
• Back to A- in all rules before RubyKaigi ;)
What is Cloud Native?
Refer to CNCF’s definition
• CNCF = Cloud Native Computing Foundation
• There is a description of cloud-native on CNCF's website:
1. Container packaged.
2. Dynamically orchestrated.
3. Microservices oriented.
From FAQ - https://www.cncf.io/about/faq/
Emerging now but...
• As many of you know, containers, orchestrations
and microservices are the next key technologies for
both web developers and operators.
• Why now? Why have these technologies suddenly
gathered attention at the same time?
• How can we optimize or leverage Ruby across
these technologies? We are Rubyists, so we
would like to continue utilizing Ruby in the cloud.
To answer these questions
• I need to talk about my own experience, I mean,
• I am going to talk about what I have been
doing for these 3 years.
• It's "To solve issues of Linux containers and..."
How I decided
to create my own container
I was an operator of a SaaS
• My company had been running a SaaS:
• The service had been using containers (even though this
service was released in August 2012!)
• Utilizing a very early version of LXC (0.7.5...)
Got some troubles...
• Managing container config was hard
• Creating config files with ERB via Chef...
• Could not change resources on the fly
• Changing CPU/memory required a restart
• Too old to upgrade
• The changelog was too big after LXC 1.0
So I decided
• To create my own container runtime
• Aiming to make container operations easy
• I learned about container internals, which seemed
super interesting and cool to me
• ...With mruby!!!
• mruby fits this kind of systems programming
The result: Haconiwa
• I ended up releasing it and talking at RubyKaigi
2016 about my container:
Ruby DSL for Haconiwa
Namespaces to enable, mount points
and resources (e.g. CPU, memory, IP...)
are customizable via the DSL!
Hooks are available
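A minimal Hacofile sketch in the spirit of the DSL described above (the container name, paths, and the hook signature are illustrative approximations; see Haconiwa's README for the real options):

```ruby
# A Hacofile is plain Ruby, evaluated by the haconiwa runtime (mruby).
Haconiwa.define do |config|
  config.name = "new-haconiwa001"        # container name (illustrative)
  config.init_command = "/bin/bash"      # process to run inside

  config.chroot_to "/var/haconiwa/root"  # rootfs location

  # Namespaces to enable
  config.namespace.unshare "mount"
  config.namespace.unshare "ipc"
  config.namespace.unshare "pid"

  # Resources via cgroup
  config.cgroup["cpu.shares"]            = 512
  config.cgroup["memory.limit_in_bytes"] = 256 * 1024 * 1024
end
```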
How I released
a web hosting platform
with my own containers
with a custom stack
Haconiwa was released!!
• Available now on GitHub
• I can create my own containers with the Ruby DSL and
play with them
• This was satisfying to me, but...
A "ngx_mruby man" said:
• ry: "This product is so cool!!"
• ud: "Thanks!"
• ry: "But I wonder what the strong or distinctive
points of this container are, compared with other
containers like Docker or LXC?"
• Finally I found Haconiwa's distinctive points
• Haconiwa has 2 features:
• Composability: Haconiwa can combine many of the
Linux container functionalities: namespaces,
cgroups, capabilities, seccomp...
• Extensibility: Haconiwa has "hooks" and extends
its lifecycle and features with programming in Ruby
For a "new hosting"
• Originally, Haconiwa's idea came from a hosting
service, so these features of Haconiwa are friendly
to hosting:
• Configuration control
• Dynamic resource management
• Then, my colleague proposed a new web hosting
service using Haconiwa!
• After all, the project was started!!!
• Server efficiency / highly integrated architecture:
• This directly affects pricing!
• Continuous upgrade:
• Containers should be kept "managed":
library updates, security, ...
• And ... Availability
A new technology
• A container lifecycle management architecture:
• 1. A container comes up when required, e.g. on an
incoming request.
• 2. The container keeps running until its "lifetime"
expires.
• 3. After the lifetime, the container is dropped,
then restarts once the next request comes.
• cf. "Phoenix Server Pattern" in IaC
• Container state is persisted in only 2 places:
• 1. Contents Management DB (CMDB)
• Containers' desired spec and state are on it
• Containers are converged to the CMDB's state
• 2. Shared storage (e.g. NFS or a managed DB)
• Users' contents are on it
• The container process itself is stateless
1. Get the container's information from the CMDB,
then check whether the container is alive using this info.
If not, invoke a new container at this point.
2. If the container is up, just forward
the request to it.
3. Kill the container after its lifetime,
e.g. 20 minutes. Return to 1.
Merits for hosting
• Resources are allocated just when required
• If there are fewer accesses, fewer resources are used
• Processes are continuously refreshed
• Because the lifetime is limited. This prohibits containers
from getting too fat
• Containers have "host server transparency"
• Containers are forced to be immutable by the refresh,
• so they can be invoked on any host server
• mruby-cli for system tools
(e.g. LVS updates driven by the database)
• Core API to control clusters (written in Go)
• Scheduler for provisioning containers
(written in Go... compatible with Sidekiq's Redis format)
• Dashboard SPA using Nuxt.js
(Sorry for "not by Rails" :bow: )
Sample of FastContainer
1. Get the container's spec via the CMDB.
2. Check whether the specific IP and port are listening.
If listening, just return this IP:port to
nginx, and the request will be forwarded.
3. If not listening, compose the
command to invoke, run it, then
wait until the container is up.
(Note: this code skips sanitization...)
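A pure-Ruby sketch of steps 2 and 3 above (the `FastContainer` class name, the spec hash shape, and the `haconiwa run` invocation are illustrative; the production code runs inside ngx_mruby):

```ruby
require "socket"

# Sketch of the FastContainer dispatch flow.
class FastContainer
  def initialize(spec)
    @spec = spec  # from the CMDB, e.g. {ip:, port:, hacofile:}
  end

  # Step 2: is anything listening on the container's IP:port?
  def listening?
    TCPSocket.new(@spec[:ip], @spec[:port]).close
    true
  rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT, Errno::EHOSTUNREACH
    false
  end

  # Step 3: invoke the container if needed, wait until it is up,
  # then return the upstream address for nginx.
  def ensure_running!
    unless listening?
      spawn("haconiwa", "run", @spec[:hacofile])  # no sanitization, as noted
      sleep 0.1 until listening?
    end
    "#{@spec[:ip]}:#{@spec[:port]}"
  end
end
```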
Comparing with the k8s stack
(Diagram: Runtime / Orchestration / Networking layers
on a clustered node pool)
New orchestration stack
• We had to re-implement this layer
• We wanted very fine-grained control of resource
allocation and container lifecycle management. This
was difficult with the existing software stack at that time.
• The Ruby tag team (ngx_mruby and Haconiwa) was helpful
for implementing this stack
• because it's all Ruby!
With this architecture,
a brand-new hosting service was launched!
How I’m creating
a new orchestrator
The service is running, but...
• There are a lot of tasks and issues
• One of them is "container efficiency"
• We want to provide more user containers using
as few resources as possible
• Imagine that: over 10,000 containers on a server.
Is this realizable?
"10k containers" issue
• 10k containers on one host is challenging:
• Bridge limit: a bridge can have just 1,024 ports
• Namespace creation speed:
• "ip netns add" is slow with older iproute2
• A fat slab cache makes the unshare(2) operation slow
I want a “sandbox”
Smaller, simpler one
• Required features:
• Use FastContainer for lifecycle management
• Requires fewer roles to deploy (I want just 1 or 2 VMs)
• Customizable Hacofiles and rootfs
Now I remember...
• My company was using a smaller, simpler VM
manager than OpenStack for development
• A simplistic libvirt wrapper, using ZeroMQ for RPC
• (BTW, mizzy-san was the chief architect of Sqale...)
maglica for containers
• Core technologies: ZeroMQ (for simple RPC) and
ServerEngine
• ServerEngine: a concise framework for CRuby to
build a valid UNIX daemon/Windows service.
• It helps us write a multi-process worker server.
(Diagram: marfd controller ↔ marfd subscriber)
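A minimal ServerEngine skeleton along the lines of its README (the module names and the worker body are illustrative; a real subscriber would poll ZeroMQ in the loop):

```ruby
require "serverengine"

# Supervisor-side module (hooks such as before_run could go here)
module SubscriberServer
end

# Worker-side module: each forked process runs this
module SubscriberWorker
  def run
    until @stop
      # a real subscriber would receive ZeroMQ messages here
      sleep 1
    end
  end

  def stop
    @stop = true
  end
end

se = ServerEngine.create(SubscriberServer, SubscriberWorker, {
  daemonize:   false,
  worker_type: "process",  # multi-process workers
  workers:     4,
  log:         "-",        # log to stdout
})
se.run  # blocks; handles signals and restarts crashed workers
```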
Ruby for Cloud-Native
Cloud-Native seems a must
• For application and environment portability
• All operations should be coded
• As developers are organized into smaller teams
(e.g. Spotify's "tribes")
• There is only one Ruby product there:
• Fluentd!! (Great job)
From my experience
• Ruby has its own strong points for implementing
cloud-native systems:
1. Learning a DSL is easier for non-devs
2. The combination of CRuby and mruby
1. DSL's merit for learning
• In cloud-native operations, everything is coded
• Operators should learn programming languages,
but this is not always realistic
• because they aren't all programmers.
• IMO a DSL is one of the good solutions:
• Ruby's expressiveness is also good for non-
programmers to learn and use!
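To make the point concrete, here is a toy sketch (all names invented) of why a Ruby DSL reads naturally to non-programmers: the declarations are plain method calls collected via `instance_eval`:

```ruby
# A toy configuration DSL: each "keyword" is just a method call.
class ContainerConfig
  attr_reader :settings

  def initialize
    @settings = {}
  end

  # Declarations available inside the block
  def name(value);   @settings[:name]   = value; end
  def cpu(value);    @settings[:cpu]    = value; end
  def memory(value); @settings[:memory] = value; end

  # Evaluate the user's block in the context of a config object
  def self.define(&block)
    config = new
    config.instance_eval(&block)
    config
  end
end

conf = ContainerConfig.define do
  name   "web001"     # reads like configuration,
  cpu    512          # but it's all Ruby
  memory "256MB"
end

conf.settings  # => {name: "web001", cpu: 512, memory: "256MB"}
```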
c.f. Static formats
• e.g. YAML is good, but it's prone to becoming a
huge wall of text
• ...And is not even "typed".
• Ruby 3's soft typing will be helpful for this type of
configuration as code
2. CRuby and mruby
• We can use both CRuby and mruby in (almost) one
language
• We can use CRuby for:
• Generic server/CLI tools programming
• And we can use mruby for:
• Systems programming!!
• If you are required to control middleware or even
Linux, you can use mruby to access them easily
mruby for systems programming
• Easier memory management
• Just write object alloc/free functions, then we can
hand object management to the GC
• Simpler C bridging API
• Its rules are simpler than those of other languages built on C
• There are many mruby gems, which are reusable.
On the other hand
• What I wanted in Ruby were:
1. Better RPC solution
2. Easier way to use many core
1. Better RPC solution
• gRPC supports Ruby
• But not officially for mruby!
• C/C++ is supported, so creating a binding is possible
in a way
• But this spoils one of gRPC's merits:
• auto-generation of all of the RPC code.
2. Use many cores
• With current Ruby, the only good way is multi-process workers
• Going beyond the GVL is hard
• Multi-process is a classic UNIX way, and reasonable
• In the future, Guild will be a good solution!!
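The classic multi-process approach looks like this sketch (the squaring job is a stand-in for real CPU-bound work; forked processes, unlike threads, are not serialized by the GVL):

```ruby
# Fan work out to forked worker processes: the classic UNIX way
# to use many cores despite the GVL.
jobs = (1..8).to_a

workers = jobs.each_slice(2).map do |slice|
  reader, writer = IO.pipe
  pid = fork do
    reader.close
    # Child: do the (CPU-bound) work and ship the result back
    writer.write(Marshal.dump(slice.map { |n| n * n }))
    writer.close
  end
  writer.close
  [pid, reader]
end

results = workers.flat_map do |pid, reader|
  data = Marshal.load(reader.read)  # blocks until the child closes the pipe
  reader.close
  Process.wait(pid)
  data
end

results  # => [1, 4, 9, 16, 25, 36, 49, 64]
```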
Cloud Native's future
• Kubernetes will be a de facto standard for
developers and operators who want to dynamically
orchestrate containers
• Most applications will be deployed to server-less
platforms, especially applications that are
required to be small and agile.
K8s is agnostic to containers
• In the near future, Haconiwa could run under
k8s, if we make an adapter that speaks CRI!
• Containers other than Docker's might be created this way
• CRI is an interface between the kubelet and container runtimes
Server-less is also agnostic
• A server-less provider just provides an "API". The only
thing developers have to learn is this, and then
they can concentrate their resources on development
• This means providers can choose any technologies
behind the API to solve developers' tasks and issues
Cloud-Native for Ruby!
• Somebody can provide a new function-as-a-service
using Ruby and mruby, in which Ruby is fully supported
• Because using Ruby is a reasonable way to solve
developers' issue of "We want to use Ruby in the cloud"
• I imagine one like AWS Greengrass
• but all in mruby!!
We can create...
To be continued...
Font used in the slides
• Sinkin Sans: https://www.fontsquirrel.com/fonts/
• Under Apache License v2.0
• See https://www.fontsquirrel.com/fonts/sinkin-sans#eula