between smart endpoints
– Content-agnostic routing
– Rates controlled by endpoints
– Content- and user-agnostic forwarding
• Clean separation of concerns
– Routing and forwarding by network elements
– Rate control, admission control, security at endpoints
application-aware stateful forwarding (e.g., multicast)
• Need QoS guarantees and network-aware endpoints
– For high-QoS applications
– For lousy links
• Need in-network security and admission control
– Endpoint security easily overwhelmed…
of in-network and endpoint services
– Information from the network
– Diversion to a proxy
– Line-rate packet filtering
• All require endpoint processing
– Stateful processing
– Connection-splitting
– Filesystem access
• Three central use cases
– Optimization of network resources, especially bandwidth
– Proximity to user for real-time response
– In-situ sensor processing
function
• Gets the job done, but…
– Appliances proliferate (one or more per task)
– Opaque
– Interact unpredictably…
• Don’t do everything
– E.g., generalized in-situ processing engine for data reduction
• APST, 2005: “The ability to support…multiple coexisting overlays [of proxies]…becomes the crucial universal piece of the [network] architecture.”
of network forwarding and routing
• What it’s not:
– On-the-fly software decisions about routing and forwarding
– In-network connection-splitting store-and-forward
– In-network on-the-fly admission control
– In-network content distribution
– Magic…
• What it is:
– Table-driven routing and forwarding decisions (including drop and multicast)
– Callback protocol from a switch to a controller when an entry is not in the table (“what do I do now?”)
– A protocol that permits the controller to update the switch
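The three mechanisms above can be illustrated with a minimal sketch. This is a simulation with invented class names, not a real OpenFlow library: a switch matches packets against its flow table, a table miss triggers a packet-in callback to the controller, and the controller can also push updates into the table on its own.

```python
# Minimal sketch (hypothetical classes, not a real OpenFlow API) of
# table-driven forwarding with a controller callback on table miss.

class Controller:
    """Answers packet-in callbacks and can update switch tables directly."""
    def __init__(self, policy):
        self.policy = policy                      # dst -> action

    def packet_in(self, switch, dst):             # "what do I do now?"
        return self.policy.get(dst, "drop")       # default: drop

    def update(self, switch, dst, action):        # controller-initiated update
        switch.flow_table[dst] = action

class Switch:
    def __init__(self, controller):
        self.flow_table = {}                      # dst -> action (fwd/drop/...)
        self.controller = controller

    def handle_packet(self, dst):
        if dst not in self.flow_table:            # table miss: callback
            self.flow_table[dst] = self.controller.packet_in(self, dst)
        return self.flow_table[dst]               # table hit: act at line rate

ctrl = Controller({"10.0.0.2": "fwd:port2"})
sw = Switch(ctrl)
print(sw.handle_packet("10.0.0.2"))   # miss -> callback -> "fwd:port2"
print(sw.handle_packet("10.0.0.9"))   # unknown destination -> "drop"
```

Multicast fits the same shape: an action would name a set of ports rather than one.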
are a necessary evil that break the “end-to-end principle” (the network should be a dumb pipe between endpoints)
• Modern View (Peterson): “Proxies play a fundamental role in the Internet architecture: They bridge discontinuities between different regions of the Internet. To be effective, however, proxies need to coordinate and communicate with each other.”
• Generalized Modern View (this talk): Proxies and middleboxes are special cases of a general need: endpoint processing in the network. We need to merge the Cloud and the Network.
In-network general-purpose processors fronted by OpenFlow switches
• Advantages of middleboxes
– Specialized processing at line rate
• Disadvantages of middleboxes
– Nonexistent programming environment
– Opaque configuration
– Vendor-specific updates
– Only common functions get done
– Interact unpredictably…
• Open programming environment: Linux + OpenFlow
• Much broader range of features and functions
• Interactions between middleboxes mediated by OpenFlow rules
– Verifiable
– Predictable
• Updates are software uploads
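Why rule mediation makes interactions verifiable: once each middlebox's behavior is expressed as match/action rules, a controller can check two rule sets against each other statically instead of discovering conflicts in live traffic. A hedged sketch, with matches simplified to exact header fields and `'*'` as a wildcard (the shapes are invented for illustration):

```python
# Detect conflicting rules between two middleboxes expressed as OpenFlow-style
# (match, action) pairs. Simplified matches: dict of field -> value, '*' wild.

def overlaps(m1, m2):
    """Two matches overlap unless some field pins them to different values."""
    fields = set(m1) | set(m2)
    return all(m1.get(f, "*") == "*" or m2.get(f, "*") == "*"
               or m1[f] == m2[f] for f in fields)

def conflicts(rules_a, rules_b):
    """Rule pairs that match overlapping traffic but disagree on the action."""
    return [(ra, rb) for ra in rules_a for rb in rules_b
            if overlaps(ra["match"], rb["match"]) and ra["action"] != rb["action"]]

firewall = [{"match": {"tcp_dst": 80}, "action": "drop"}]
cache    = [{"match": {"tcp_dst": 80}, "action": "fwd:proxy"}]
print(conflicts(firewall, cache))   # the two boxes disagree on port-80 traffic
```

With opaque appliances the same disagreement only shows up as unpredictable runtime behavior; here it is a list the controller can inspect before installing anything.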
I need endpoint processing
– In-situ data reduction
– Next to users
– Where I need filtering
• Can’t always predict these in advance for every service!
• So I need a small cloud everywhere, so I can instantiate a middlebox anywhere
• Solution = Distributed “EC2” + OpenFlow network
• “Slice”: virtual network of virtual machines
• OpenFlow creates the virtual network
• “EC2” lets me instantiate VMs everywhere
of a virtual distributed cloud, with explicit forwarding instructions BETWEEN specified VMs
• Translation onto OpenFlow rules on the physical network AND instantiation on physical machines at appropriate sites
• Effectuation on the physical network AND on physical clouds
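The request-translation step of this pipeline can be sketched as follows. The data shapes are invented for illustration (this is not the GENI API): a slice request names VMs at sites plus forwarding instructions between them, and translation emits per-site VM instantiations and per-slice OpenFlow rules; effectuation would then push both to the physical clouds and switches.

```python
# Illustrative slice-request -> (VM placements, OpenFlow rules) translation.
# Hypothetical data shapes; each slice gets its own VLAN tag for isolation.

slice_request = {
    "vms": {"vm1": "site-A", "vm2": "site-B"},
    "links": [("vm1", "vm2")],     # explicit forwarding between named VMs
}

def translate(req, vlan=100):
    placements = [(vm, site) for vm, site in req["vms"].items()]
    rules = []
    for a, b in req["links"]:      # one rule per direction, tagged per slice
        rules.append({"match": {"vlan": vlan, "dst": b},
                      "action": "fwd:" + req["vms"][b]})
        rules.append({"match": {"vlan": vlan, "dst": a},
                      "action": "fwd:" + req["vms"][a]})
    return placements, rules

placements, rules = translate(slice_request)
print(placements)   # [('vm1', 'site-A'), ('vm2', 'site-B')]
print(rules)        # two directed forwarding rules, both tagged vlan 100
```

The point of the shape is that the virtual network and the VM placements come out of one joint request, which is exactly what the northbound API below has to support.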
Must support a common API
– Can have some variants (switch variants still present a common interface through OpenFlow)
• A DSOS is simply a mixture of three known components:
– Network Operating System
– Cloud Managers (e.g., ProtoGENI, Eucalyptus, OpenStack)
– Tools to interface with the Network OS and Cloud Managers (nascent tools under development)
anticipated in 1.5
– “Support for L4/L7 services”, aka seamless redirection
• Northbound API
– Joint allocation of virtual machines and networks
– Location-aware allocation of virtual machines
– WAN-aware allocation of networks
– QoS controls between sites
• Build on/extend successful architectures
– “Neutron for the WAN”
how do we easily instantiate and stitch together services at multiple sites/multiple providers?
• Multiple sites is easy; multiple providers is not
• Need an easy way to instantiate from multiple providers
– Common AUP/conventions? Probably
– Common form of identity/multiple IDs? Multiple, or bottom-up (e.g., Facebook)
– Common API? Absolutely
• Need to understand what’s important and what isn’t
– E.g., very few web services charge for bandwidth
Each cloud
– 80–150 worker cores
– Several TB of disk
– OpenFlow-native local switching
• Interconnected over an OpenFlow-based L2 network
• Local “Aggregate Manager” (aka controller)
• Two main designs with a common API
– InstaGENI (ProtoGENI-based)
– ExoGENI (ORCA/OpenStack-based)
• Global allocation through federated aggregate managers
• User allocation of networks and slices through tools (GENI portal, Flack)
[Figure: U.S. Ignite City Technical Architecture — a new GENI/Ignite rack pair (OpenFlow switches, FlowVisor, remote management, instrumentation, aggregate manager, measurement, programmable servers, storage, optional video switch) at the existing head-end; Layer 3 GENI control plane; Layer 2 connect (10GE) to subscribers and homes; most equipment not shown]
today is NSFNet circa 1985
• GENI and the SFA: set of standards (e.g., TCP/IP)
• Mesoscale: equivalent to the NSF backbone
• GENIRacks: hardware/software instantiation of the standards that sites can deploy instantly
– Equivalent to a VAX-11 running Berkeley Unix
– InstaGENI cluster running ProtoGENI and OpenFlow
• Other instantiations which are interoperable
– VNode (Aki Nakao, University of Tokyo and NICT)
– Tomato (Dennis Schwerdel, TU-Kaiserslautern)
are all described as “Testbeds”
– But they are really Clouds
– Tests require realistic services
• History of testbeds:
– Academic research → academic/research services → commercial services
– Expect a similar evolution here (but commercial will come faster)
distributed clouds is a pain
– Allocate a network of VMs
– Build VMs and deploy images
– Deploy and run software
• But most slices are mostly the same
• Automate commonly-used actions and pre-allocate typical slices
• 5-minute rule: build, deploy, and execute “Hello, World” in five minutes
• Decide what to build: start with a sample application
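The automation the slide calls for can be sketched as a template-driven deployer: a pre-allocated template slice captures everything that is "mostly the same", and the experimenter supplies only the program that varies. All names here are illustrative, not a GENI tool.

```python
# Hedged sketch of the "5-minute rule": deploy a program onto a pre-allocated
# template slice by replaying the three steps the slide lists per site.

PREALLOCATED = {"sites": ["site-A", "site-B"], "image": "ubuntu-base"}

def deploy(template, program):
    """Expand the template into allocate / image / run steps at each site."""
    steps = []
    for site in template["sites"]:
        steps.append("allocate VM at " + site)
        steps.append("load image %s at %s" % (template["image"], site))
        steps.append("run %s at %s" % (program, site))
    return steps

for step in deploy(PREALLOCATED, "hello_world.py"):
    print(step)
```

The design choice is the same one cloud PaaS systems made: hide allocation behind a template so that "Hello, World" touches none of it.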
System
• Open and Public
– Anyone can contribute layers
– Anyone can host computation
• Why GIS?
– Large and active community
– Characterized by large data sets (mostly satellite images)
– Much open-source, easily deployable software; standard data formats
– Computation naturally partitions and is loosely coupled
– Collaborations across geographic regions and continents
– Very pretty…
Genericize and make available the infrastructure behind the TransGEO demo
– Open to every GENI, FIRE, JGN-X, Ofelia, SAVI…experimenter who wants to use it
• TransGEO is a trivial application on a generic infrastructure
– Perhaps 1000 lines of Python code on top of:
• Key-value store
• Layer 2 network
• Sandboxed Python programming environment
• Messaging service
• Deployment service
• GIS libraries
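The layering claim, a thin application over generic platform services, can be sketched concretely. Both services below are in-memory stand-ins (a dict for the key-value store, a queue for the messaging service); the real infrastructure would supply distributed implementations behind the same small interfaces. The tile-processing worker is an invented stand-in for the GIS application logic.

```python
# Toy version of the TransGEO pattern: the application is a few lines of
# Python against two generic services, a key-value store and a message bus.
import queue

class KVStore(dict):
    """In-memory stand-in for the platform key-value store."""

class Messaging:
    """In-memory stand-in for the platform messaging service (work queue)."""
    def __init__(self):
        self.q = queue.Queue()
    def publish(self, msg):
        self.q.put(msg)
    def consume(self):
        while not self.q.empty():
            yield self.q.get()

def worker(kv, bus):
    """The 'application': process each announced tile from raw to rendered."""
    for tile in bus.consume():
        kv["processed/" + tile] = "rendered(" + kv["raw/" + tile] + ")"

kv, bus = KVStore(), Messaging()
kv["raw/tile-0"] = "satellite-bytes"
bus.publish("tile-0")
worker(kv, bus)
print(kv["processed/tile-0"])   # rendered(satellite-bytes)
```

Because the application touches only the service interfaces, swapping the stand-ins for the distributed versions leaves the 1000 lines of application code unchanged, which is the point of genericizing the infrastructure.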
Permanent, long-running, GENI-wide message service
• Permanent, long-running, distributed Python environment
• Permanent, world-wide Layer-2 VLANs on high-performance networks
• All offered in slices
• All shared by many experimenters
• Model: Google App Engine
• Advantage for GENI: efficient use of resources
• Advantage for experimenters: up and running in no time
iCAIR
Andy Bavier, Marco Yuen, Larry Peterson, Jude Nelson, Tony Mack (PlanetWorks/Princeton)
Chris Benninger, Chris Matthews, Chris Pearson, Andi Bergen, Paul Demchuk, Yanyan Zhuang, Ron Desmarais, Stephen Tredger, Yvonne Coady, Hausi Muller (University of Victoria)
Heidi Dempsey, Marshall Brinn, Vic Thomas, Niky Riga, Mark Berman, Chip Elliott (BBN/GPO)
Rob Ricci, Leigh Stoller, Gary Wong (University of Utah)
Glenn Ricart, William Wallace, Joe Konstan (US Ignite)
Paul Muller, Dennis Schwerdel (TU-Kaiserslautern)
Amin Vahdat, Alvin AuYoung, Alex Snoeren, Tom DeFanti (UCSD)
Integration
Jessica Blaine, Jack Brassil, Kevin Lai, Narayan Krishnan, Dejan Milojicic, Norm Jouppi, Patrick Scaglia, Nicki Watts, Michaela Mezo, Bill Burns, Larry Singer, Rob Courtney, Randy Anderson, Sujata Banerjee, Charles Clark (HP)
Aki Nakao (University of Tokyo)
basically the first Distributed Cloud
– Single application, now generalizing
• But this is OK…
– The Web simply wrapped existing services
• Now in vogue with telcos (“Network Function Virtualization”)
• What’s new/different in GENI/JGN-X/SAVI/Ofelia…
– Support from programmable networks
– “Last frontier” for software in systems
• Open problems
– Siting VMs!
– Complex network/compute/storage optimization problems
• Needs
– “http”-like standardization of APIs at the IaaS and PaaS layers