Infrastructure of Ad-Tech

Agenda:
• What is Ad-Tech in CyberAgent
• Why we chose OpenStack
• Big picture of our private cloud
• Deployment / Operation / Monitoring
• Future of our private cloud
I am leading the system admin team of the CyberAgent Ad-Tech Business Division. I have been managing OpenStack for about one year. Besides OpenStack, I also use cloud platforms such as AWS and GCP. Recently I am most interested in how to automate cloud management using various different tools.
@makocchi
https://www.facebook.com/makocchi0923
I belong to the Ad-Tech Business Division at CyberAgent Inc., where I design and deploy an OpenStack-based private cloud that hosts multiple advertisement-related services CyberAgent Inc. offers. Keeping it neat.
Our ad-tech business is made up of many players, separate companies like this. This diagram does not show all the related companies and players; there are many more. New platforms are also being added day by day.
• More flexible! Many platforms for services exist, and the infrastructure under them has to work with all of them.
• More agile! Infrastructure has to be prepared quickly and deleted when it is no longer necessary.
• More stable! Even one small failure or system outage has a huge impact.
So we need a flexible, agile, and stable cloud platform.
CyberAgent has a strong culture of leveraging open source. Using OSS benefits us in catching up with new technologies and also in cutting costs. When we were evaluating several options, there was strong momentum that OpenStack should be the next mainstream cloud management system. I also thought that the technical skills and motivation of our engineers would improve by learning, deploying, and operating OpenStack, which brings together a lot of different technologies.
• 2013.10: The Ad-Tech Division started in CyberAgent.
• We first used OpenStack only for PoC, then provided over 10 services on OpenStack Grizzly (10+ compute nodes / 10 engineers).
• 2014.06: We provided some Ad-Tech services on OpenStack Icehouse (40+ compute nodes / 3 engineers).
• 2015.03: We provided some Ad-Tech services on OpenStack Juno (100+ compute nodes / 3 engineers).
• 2015.04: Kilo. We will start testing it very soon.
Our clusters:
• Icehouse Production: codename galadeira
• Icehouse Production (Sandbox): codename diana
• Juno Production: codename minerva
• Icehouse Personal Development: codename venus
• Juno Personal Development (Future): codename vesta
• Kilo Production (Future): codename eiskeller
• Icehouse Personal Development (Sandbox)
Specs of diana:
• Number of compute nodes: … computes
• Number of CPU cores: 5,000 cores / 10,000 threads
• Number of VM instances: 1,000+ instances
• Network: dual 10G from server to ToR, dual 40G from ToR to EoR
Monitoring consists of three layers (a rough sketch follows below):
• Existence and TCP port state checks
• API monitoring: response time and high-level function testing
• Standard monitoring: hardware health and OS resources
Templates and scripts may be released on our GitHub.
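As an illustration of the first two layers, here is a minimal sketch in Python. The Keystone host, port, URL, and thresholds are hypothetical (the talk does not name the monitored endpoints or the monitoring tool); in practice such checks would run from a monitoring system rather than a standalone script.

```python
#!/usr/bin/env python
"""Minimal sketch of port-state and API response-time checks.

The host, port, URL, and thresholds below are hypothetical examples,
not values from the actual deployment.
"""
import socket
import time

import requests

KEYSTONE_HOST = "keystone.example.com"  # hypothetical endpoint
KEYSTONE_PORT = 5000
KEYSTONE_URL = "http://keystone.example.com:5000/v2.0"


def check_tcp_port(host, port, timeout=3.0):
    """Existence / TCP port state: is the service listening at all?"""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False


def check_api_response_time(url, threshold=1.0):
    """API monitoring: a simple GET must succeed within the threshold."""
    start = time.time()
    try:
        resp = requests.get(url, timeout=threshold * 5)
    except requests.RequestException:
        return False
    return resp.ok and (time.time() - start) < threshold


if __name__ == "__main__":
    print("keystone port:", check_tcp_port(KEYSTONE_HOST, KEYSTONE_PORT))
    print("keystone api :", check_api_response_time(KEYSTONE_URL))
```

High-level function testing would go one step further, for example periodically booting and deleting a test instance; hardware health and OS resources stay in the standard monitoring layer.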
Not using Neutron LBaaS at the moment
• Reason: our network design differs from the reference implementation, and the LBaaS driver was still in development for Juno.
• Solution: manage load balancers outside Neutron (with some level of multi-tenancy).

Instance tagging & filtering
• Reason: high demand from users migrating from AWS, but not implemented in Juno.
• Solution: use instance "metadata" as a tag field (see the sketch after this list).
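To illustrate the metadata-as-tags workaround, here is a minimal sketch using the Juno-era python-novaclient. The credentials, endpoint, server name, and tag keys are hypothetical, and since (to our knowledge) the Juno Nova API did not offer server-side filtering on metadata, the filtering happens client-side.

```python
"""Sketch: using instance metadata as AWS-style tags.

Credentials, endpoint, and names below are hypothetical examples.
"""
from novaclient import client as nova_client

nova = nova_client.Client("2",                 # compute API version
                          "demo-user",         # username (hypothetical)
                          "secret",            # password (hypothetical)
                          "demo-tenant",       # tenant/project
                          "http://keystone.example.com:5000/v2.0")

# "Tag" an instance by writing key/value pairs into its metadata.
server = nova.servers.find(name="web-01")
nova.servers.set_meta(server, {"role": "web", "env": "production"})

# Filter client-side over the instance list, since the Juno-era API
# exposes no metadata filter for server listings.
web_servers = [s for s in nova.servers.list()
               if s.metadata.get("role") == "web"]
print([s.name for s in web_servers])
```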
https://github.com/CyberAgent/openstack-summit-2015-vancouver/
If you have any questions or feedback, please create an issue on GitHub! We will reply as soon as possible.
https://goo.gl/6dsfSk
See you at the next Tokyo summit!!