
How OpenStack is implemented at GMO Public Cloud service

Naoto Gohko
November 02, 2015


http://sched.co/4W1R

GMO Internet started using OpenStack in 2011 for an internal deployment. Building on that experience, they deployed OpenStack to launch the following public cloud services:

Onamae VPS (Domain Name service) in 2012

ConoHa (Hosting Service) in 2013

GMO Application Cloud in 2014

Onamae.com Cloud (IaaS Cloud Server) in 2014

ConoHa (Renewal for Hosting Service) in 2015

Currently, GMO Internet is running the Diablo, Grizzly, Havana, and Juno releases.

In this session, we will explain how GMO Internet deployed OpenStack into our public cloud services "ConoHa" and "GMO Application Cloud". We will show the technical background, which OpenStack components are actually used in those services, and how those deployments benefited the services.

Why we chose OpenStack

Architecture of ConoHa back end infrastructure

Two different OpenStack deployments for those two public cloud services

Speakers
Naoto Gohko
Architect, GMO Internet, Inc.
Our products built on OpenStack: ConoHa public cloud (https://www.conoha.jp/en/) | GMO AppsCloud (https://cloud.gmo.jp/en/)

Hironobu Saitoh
Technical Evangelist, GMO Internet, Inc.

Wednesday October 28, 2015 4:40pm - 5:20pm

Transcript

  1. How OpenStack is implemented at GMO Public Cloud service
    GMO Internet, Inc. Technical Evangelist Hironobu Saitoh / GMO Internet, Inc. Architect Naoto Gohko
  2. Agenda
    • About us
    • Hosting/Cloud services in our business segments
    • OpenStack
    • Why we use OpenStack
    • Technical background
    • About the difference of the two services: ConoHa and GMO AppsCloud
  3. About GMO Internet http://gmo.jp/en Japan's Leading All-in Provider of Internet Services

  4. Business Segments

  5. Infrastructure Business

  6. Using OpenStack at GMO Internet

  7. Public Clouds: we are offering four public cloud services

  8. GMO Pepabo, Inc., a GMO Group company, also uses it as a private cloud platform.

  9. Why we use OpenStack
    • Feature lineups
    • Loosely coupled components
    • Open Source Software
    • Most of the features needed for cloud development were already implemented
    • Different engineering teams can develop each feature simultaneously
    • Enables the engineering team to add specific features when we want

  10. Using OpenStack components (ConoHa): component, ConoHa function, region
    • Keystone: account management, authentication (all regions)
    • Nova: virtual machine (all regions)
    • Neutron: private networking, assigns IP addresses for VMs (all regions)
    • Cinder: block storage (all regions)
    • Swift: object store (Tokyo)
    • Glance: create VM image, auto backup (all regions)
    • Ceilometer: collects customer usage data, cooperates with our payment system (Tokyo)
    • Heat: initializes VMs by cloud-init (all regions)
    • Horizon: staff only (all regions)
  11. Develop OpenStack-related tools
    • Docker Machine: a tool that creates Docker hosts (Golang)
    • Vagrant provider for ConoHa; fixed a problem and sent a pull request
    https://github.com/hironobu-s/vagrant-conoha
  12. Develop OpenStack-related tools
    • conoha-iso: a CLI tool that handles ConoHa-specific APIs (Golang)
    • A WordPress plugin that saves media files to Swift (Object Store)
    https://github.com/hironobu-s/conoha-iso
    https://wordpress.org/plugins/conoha-object-sync/
  13. Finally
    • About us (Hironobu Saitoh)
    • Hosting/Cloud services in our business segments
    • OpenStack (Hironobu Saitoh)
    • Why we use OpenStack
    • Technical background (Naoto Gohko)
    • About the difference of the two services: ConoHa and GMO AppsCloud
  14. OpenStack service: Onamae.com VPS (Diablo)
    • Service XaaS model: VPS (KVM, libvirt)
    • Network: 1Gbps
    • Network model: Flat-VLAN (Nova Network), IPv4 only
    • Public API: none (only web panel)
    • Glance: none
    • Cinder: none
    • ObjectStorage: none
  15. (image-only slide)
  16. OpenStack service: Onamae.com VPS (Diablo)
    • Nova Network: very simple (LinuxBridge)
    • Flat networking is scalable → but there is no added value, such as free configuration of the network
  17. OpenStack service: ConoHa (Grizzly)
    • Service XaaS model: VPS + private networks (KVM + libvirt)
    • Network: 10Gbps wired (10GBase-T)
    • Network model: Flat-VLAN + Quantum ovs-GRE overlay, IPv6/IPv4 dual stack
    • Public API: none (only web panel)
    • Glance: none
    • Cinder: none
    • ObjectStorage: Swift (after Havana)
  18. OpenStack service: ConoHa (Grizzly)
    • Quantum network: it was using an early version of the Open vSwitch full-mesh GRE-VLAN overlay network
    → But when the scale becomes large, communication localizes to specific nodes of the GRE mesh tunnels (together with under-cloud L2 network problems; broadcast storms?)
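A rough illustration of why the full-mesh GRE overlay described above stops scaling (my sketch, not from the talk): every node keeps a point-to-point tunnel to every other node, so the tunnel count grows quadratically, and broadcast/unknown-unicast traffic is replicated once per tunnel.

```python
def gre_full_mesh_tunnels(nodes: int) -> int:
    """Number of point-to-point GRE tunnels in a full mesh of `nodes` hosts."""
    return nodes * (nodes - 1) // 2

# Tunnel count grows as O(n^2); each broadcast frame from a node is
# replicated across all of that node's tunnels.
for n in (10, 50, 200):
    print(n, gre_full_mesh_tunnels(n))
```

At 200 hypervisors that is already 19,900 tunnels, which is consistent with the slide's observation that traffic concentrates on specific nodes as the mesh grows.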
  19. OpenStack service: GMO AppsCloud (Havana)
    • Service XaaS model: KVM compute + private VLAN networks + Cinder + Swift
    • Network: 10Gbps wired (10GBase SFP+)
    • Network model: IPv4 Flat-VLAN + Neutron LinuxBridge (not ML2) + original Brocade ADX L4 LBaaS driver
    • Public API: provided
    • Ceilometer
    • Glance: provided (GlusterFS)
    • Cinder: HP 3PAR (original active-active multipath) + NetApp
    • ObjectStorage: Swift cluster
    • Bare-metal compute: modified cobbler bare-metal deploy driver
  20. GMO AppsCloud (Havana) public API
    • Web panel (httpd, PHP) and API wrapper proxy (httpd, PHP; framework: FuelPHP)
    • OpenStack API input validation, backed by the customer DB and a customer system API
    • Endpoint L7 reverse proxy in front of the Havana Nova, Neutron, Glance, Keystone, Cinder and Ceilometer APIs and the Havana Swift proxy
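The wrapper proxy above validates customer input before forwarding requests to the Havana endpoints. A hypothetical Python sketch of such a validation step (the real wrapper is PHP/FuelPHP; the field whitelist and name rule here are illustrative, not GMO's actual rules):

```python
import re

# Hypothetical whitelist of fields accepted for a "create server" request;
# the deck does not show the real wrapper's rules.
ALLOWED_FIELDS = {"name", "flavorRef", "imageRef"}
NAME_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9\-]{0,62}$")

def validate_create_server(body: dict) -> list:
    """Return a list of validation errors; an empty list means the
    request may be forwarded to the upstream Nova API."""
    errors = [f"unknown field: {k}" for k in body if k not in ALLOWED_FIELDS]
    if not NAME_RE.match(body.get("name", "")):
        errors.append("invalid server name")
    for ref in ("flavorRef", "imageRef"):
        if not body.get(ref):
            errors.append(f"missing {ref}")
    return errors
```

Rejecting malformed input at the proxy keeps bad requests away from the OpenStack control plane and lets the operator enforce customer-specific policy outside the upstream code.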
  21. GMO  AppsCloud(Havana)  public  API

  22. Havana: baremetal compute cobbler driver

  23. Havana: baremetal compute cobbler driver
    Baremetal network:
    • Bonding NIC
    • Tagged VLAN
    • Allowed VLAN + DHCP native VLAN
  24. Swift cluster (Havana to Juno upgrade)
    • SSD storage: container/account servers at every zone
  25. Havana: baremetal compute, Cisco IOS in southbound (https://code.google.com/p/ciscoioscliautomation)

  26. OpenStack Juno: 2 service clusters released: Mikumo ConoHa and Mikumo Anzu
    Mikumo = 美雲 = "beautiful cloud"
    New Juno region released: 10/26/2015
  27. OpenStack Juno: 2 service clusters released
    ConoHa (Juno):
    • Service model: public cloud by KVM
    • Network: 10Gbps wired (10GBase SFP+)
    • Network model: Flat-VLAN + Neutron ML2 ovs-VXLAN overlay + ML2 LinuxBridge (SaaS only); IPv6/IPv4 dual stack
    • LBaaS: LVS-DSR (original)
    • Public API: provided (v2 domain)
    • Compute node: all SSD for booting OS (without Cinder boot)
    • Glance: provided
    • Cinder: SSD NexentaStor zfs (SDS)
    • Swift: shared Juno cluster
    • Cobbler deploy on under-cloud, Ansible configuration
    • Original SaaS services with Keystone auth: email, web, CPanel and WordPress
    GMO AppsCloud (Juno):
    • Service model: public cloud by KVM
    • Network: 10Gbps wired (10GBase SFP+)
    • Network model: L4-LB-NAT + Neutron ML2 LinuxBridge VLAN; IPv4 only
    • LBaaS: Brocade ADX L4-NAT-LB (original)
    • Public API: provided
    • Compute node: flash-cached or SSD
    • Glance: provided (NetApp offload)
    • Cinder: NetApp storage
    • Swift: shared Juno cluster
    • Ironic on under-cloud: compute server deploy with Ansible config
    • Ironic baremetal compute: Cisco Nexus for tagged VLAN module, ioMemory configuration
  28. Compute and Cinder (zfs): SSD
    Toshiba enterprise SSD:
    • Strikes the balance of cost and performance we were after
    • Excellent IOPS performance, low latency
    Benefits of SSD as Compute local storage:
    • Provides faster storage than Cinder boot
    • Easy to take online live snapshots of a VM instance
    • VM deployment is fast
    ConoHa: the Compute option was modified to take online live snapshots of VM instances.
    http://toshiba.semicon-storage.com/jp/product/storage-products/publicity/storage-20150914.html
  29. NexentaStor zfs Cinder: ConoHa cloud (Juno)
  30. Designate DNS: ConoHa cloud (Juno)
    Components of the DNS and GSLB (original) back-end services: Client, API, DNS, Identity endpoint (OpenStack Keystone), Storage DB, Backend DB, RabbitMQ, Central
  31. NetApp storage: GMO AppsCloud (Juno)
    If you use the same NetApp clustered Data ONTAP for both Glance and Cinder storage, copying between the OpenStack services can be offloaded to the NetApp side.
    • Create volume from Glance image (requires that the Glance image not need conversion, e.g. qcow2 to raw)
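The condition in the parenthesis above can be restated as a tiny eligibility check: the offload only applies when the Glance image is already in the format the backend stores volumes in, so no host-side conversion is required. A hedged sketch (illustrative only, not the actual NetApp driver logic):

```python
def can_offload_copy(image: dict, volume_format: str = "raw") -> bool:
    """NetApp-side copy offload needs the Glance image already in the
    format the backend uses for volumes; any conversion step (e.g.
    qcow2 -> raw) forces an ordinary host-side copy instead."""
    return image.get("disk_format") == volume_format

print(can_offload_copy({"disk_format": "raw"}))    # True: offload possible
print(can_offload_copy({"disk_format": "qcow2"}))  # False: host-side copy
```

This is why operators who want the offload typically publish their Glance images in raw format even though qcow2 is smaller.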
  32. Ironic with undercloud: GMO AppsCloud (Juno)
    For compute server deployment: Kilo Ironic, all-in-one.
    • Compute server: 10G boot
    • cloud-init: network
    • Compute setup: Ansible
    The under-cloud Ironic (Kilo) uses a different network and baremetal DHCP than the service baremetal compute Ironic (Kilo).
  33. Ironic (Kilo) baremetal: GMO AppsCloud (Juno)
    Boot baremetal instance:
    • Baremetal server (with SanDisk Fusion ioMemory)
    • 1G x4 bonding + tagged VLAN
    • cloud-init: network + LLDP
    • Network: Cisco Nexus, allowed-VLAN security
    Ironic Kilo + Juno works fine: Ironic Python driver, whole-image write.
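The "cloud-init: network" step with bonding and a tagged VLAN, as on this slide, is typically expressed as a cloud-init network configuration injected at first boot. A hypothetical version-1 network-config fragment (interface names, the VLAN ID and the address are illustrative, not GMO's actual values):

```yaml
# Hypothetical cloud-init network-config (version 1) for a baremetal
# compute node: two physical NICs bonded, with a tagged service VLAN.
version: 1
config:
  - type: bond
    name: bond0
    bond_interfaces: [eth0, eth1]
    params:
      bond-mode: 802.3ad
  - type: vlan
    name: bond0.100          # VLAN ID 100 is illustrative
    vlan_link: bond0
    vlan_id: 100
    subnets:
      - type: static
        address: 203.0.113.10/24
```

Keeping the layout in cloud-init (rather than baked into the image) is what lets the same whole-image write serve nodes on different VLANs.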
  34. OpenStack Juno: 2 service clusters released
    ConoHa (Juno):
    • Service model: public cloud by KVM
    • Network: 10Gbps wired (10GBase SFP+)
    • Network model: Flat-VLAN + Neutron ML2 ovs-VXLAN overlay + ML2 LinuxBridge (SaaS only); IPv6/IPv4 dual stack
    • LBaaS: LVS-DSR (original)
    • Public API: provided (v2 domain)
    • Compute node: all SSD for booting OS (without Cinder boot)
    • Glance: provided
    • Cinder: SSD NexentaStor zfs (SDS)
    • Swift: shared Juno cluster
    • Cobbler deploy on under-cloud, Ansible configuration
    • Original SaaS services with Keystone auth: email, web, CPanel and WordPress
    GMO AppsCloud (Juno):
    • Service model: public cloud by KVM
    • Network: 10Gbps wired (10GBase SFP+)
    • Network model: L4-LB-NAT + Neutron ML2 LinuxBridge VLAN; IPv4 only
    • LBaaS: Brocade ADX L4-NAT-LB (original)
    • Public API: provided
    • Compute node: flash-cached or SSD
    • Glance: provided (NetApp offload)
    • Cinder: NetApp storage
    • Swift: shared Juno cluster
    • Ironic on under-cloud: compute server deploy with Ansible config
    • Ironic baremetal compute: Cisco Nexus for tagged VLAN module, ioMemory configuration
  35. Finally:
    The GMO AppsCloud Juno release went live on 10/27/2015.
    • Deployment of SanDisk Fusion ioMemory is also possible with Kilo Ironic on Juno OpenStack.
    • Compute servers were deployed by Kilo Ironic with an under-cloud all-in-one OpenStack; compute server configuration was applied with Ansible.
    • Cinder and Glance are provided with the NetApp copy-offload storage mechanism.
    • LBaaS is an original Brocade ADX NAT-mode driver.
    On the other hand, the Juno ConoHa was released on 05/18/2015.
    • Designate DNS and a GSLB service were started on ConoHa.
    • Cinder storage is SDS, provided by NexentaStor zfs, for a single volume type.
    • LBaaS is an original LVS-DSR driver.