a Server?
• A server is a computer which primarily interacts with other computers, not humans
• Networking is necessary for servers
• Every kind of network system got its start as a dedicated server!
  – Bridges
  – Routers
  – Firewalls
  – Content caches
• Switches are highly evolved bridges, some to the point of losing the computer part
to Server
• As small computers evolved, it was natural to want to connect them to the big computer
• The only form of network offered by the big computers was to user terminals
• Early networking was all terminal emulation and screen scraping – yecch
• A whole generation of big computers died because they had lousy networking
• The ones with good networking are called servers!
[Diagram: 1970 Illiac IV – rudimentary storage sharing; common interfaces for storage and I/O (channels); beginnings of CPU-to-CPU communication]
Better
• Customers with big problems are all too eager to buy big iron!
• Even if it's not necessary!
• It wasn't I.T., but rather users, who discovered their PCs were as fast as the big shared mainframe
• In 10 years, the mainframe went from being incredibly over-subscribed to under-subscribed
• What if they gave a mainframe and nobody came?
  – Repurpose the storage and call it a server!
• Use mainstream processors – x86, etc.
• Run a general-purpose OS – Windows, Linux, UNIX
• Run in-house or 3rd-party applications
• Reliability is key
• Differentiation going, commoditization coming
• Slowly evolving to be more manageable than PCs
• Look inside a network appliance or a switch or a storage node – what do you see?
  – x86 processors
  – Linux OS
  – Integrated OS & apps
  – Improved reliability & manageability
• The evolution of servers is directly tied to the evolution of everything else in the data center
• In 1964, IBM announced System/360 with a 50:1 ratio between its fastest and slowest processors
  – If you need more capacity, buy the next biggest computer
• Today, the ratio between the fastest and slowest server processors is more like 3:1
  – If you need more capacity, buy more processors
• Formerly, computers were much more than just processors; buying another computer wasn't desirable – and was much harder to manage
• Now, buying clusters of small computers (Scale-Out) is cheaper and faster than using scalable large SMPs (Scale-Up)
Cluster Software
• Writing code for SMPs or clusters is much harder than writing single-threaded code
• Writing cluster code (no shared memory) is totally different from writing SMP code (shared memory)
• The 80s and 90s encouraged SMP software
• Built into languages like Java
  – Being taught worldwide
• Clustering has been the big trend since the late 90s
  – harness lots of cheap computers
  – but the software is very difficult
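The SMP-vs-cluster split above can be sketched in a few lines. This is an illustrative example (not from the slides): the same summation written two ways – SMP style shares memory and must synchronize with a lock, while cluster style shares nothing and communicates only by passing messages.

```go
package main

import (
	"fmt"
	"sync"
)

// smpSum: shared-memory (SMP) style. Goroutines add into one shared total,
// guarded by a mutex because the memory is visible to all of them.
func smpSum(nums []int, workers int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	total := 0
	chunk := (len(nums) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(nums) {
			lo = len(nums)
		}
		if hi > len(nums) {
			hi = len(nums)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			sum := 0
			for _, v := range part {
				sum += v
			}
			mu.Lock() // shared state requires synchronization
			total += sum
			mu.Unlock()
		}(nums[lo:hi])
	}
	wg.Wait()
	return total
}

// clusterSum: message-passing (cluster) style. Each "node" owns its slice
// and sends only its partial result back; nothing is shared.
func clusterSum(nums []int, workers int) int {
	results := make(chan int, workers)
	chunk := (len(nums) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(nums) {
			lo = len(nums)
		}
		if hi > len(nums) {
			hi = len(nums)
		}
		go func(part []int) {
			sum := 0
			for _, v := range part {
				sum += v
			}
			results <- sum // the only communication is this message
		}(nums[lo:hi])
	}
	total := 0
	for w := 0; w < workers; w++ {
		total += <-results
	}
	return total
}

func main() {
	nums := make([]int, 100)
	for i := range nums {
		nums[i] = i + 1 // 1..100
	}
	fmt.Println(smpSum(nums, 4), clusterSum(nums, 4)) // both print 5050
}
```

The lock in `smpSum` is exactly the discipline SMP languages bake in; `clusterSum`'s channel stands in for the network messages a real cluster would send – which is why the two styles don't port easily to one another.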
Rebirth of SMPs
• Frequency limits in chips mean processors don't go faster anymore
• Moore's law of density stays valid – plenty more room on chips
• Intel and AMD are adding more processors to chips to keep increasing performance
• But SMP software is needed to leverage multi-core
• Today, dual-socket, dual-core = 4 processors
• In 2007, quad-socket, quad-core = 16 processors
• Few, if any, real SMP apps use more than 16 processors
• Does multi-core replace the need for clusters?
• No, because the killer app for clusters is fault-tolerance
• Also, clusters are the right approach for the embarrassingly parallel loads caused by zillions of internet connections
• So complex systems will use both SMP and cluster techniques
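Both cluster virtues above – embarrassing parallelism and fault-tolerance – show up in one tiny routine. A hypothetical sketch (node names and the hashing scheme are invented for illustration): because each internet connection is independent, a front end can hash a request to any node, and simply walk past a dead one.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickNode hashes a request key to a node, skipping unhealthy nodes.
// Independent requests mean any healthy node is an acceptable answer.
func pickNode(key string, nodes []string, healthy map[string]bool) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	start := int(h.Sum32() % uint32(len(nodes)))
	for i := 0; i < len(nodes); i++ {
		n := nodes[(start+i)%len(nodes)]
		if healthy[n] {
			return n
		}
	}
	return "" // whole cluster down
}

func main() {
	nodes := []string{"web1", "web2", "web3", "web4"}
	// web3 has failed; its traffic silently lands on its neighbors.
	healthy := map[string]bool{"web1": true, "web2": true, "web3": false, "web4": true}
	for _, req := range []string{"user-17", "user-42", "user-99"} {
		fmt.Println(req, "->", pickNode(req, nodes, healthy))
	}
}
```

An SMP box gets no such free pass: if its one OS image dies, every request dies with it – which is why complex systems end up layering cluster techniques on top of SMP nodes.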
• Virtualization, aka partitioning, evolved as a way to share a large computer among lighter workloads
• A large system without virtualization is probably under-utilized
• VMware brings virtualization to x86 and adds the new benefit of containerization – freeing the OS from specific hardware
• VMware and Xen are critical to the success of multi-core processors
Come Marching…
• The Transaction Iceberg
• Dedicated servers
• Redundant servers
• Scale-Out vs Scale-Up
• Ever-increasing cycles to stay on the net
  – Security
  – SPAM
  – Bots
Iceberg
• What the web user sees in a transaction is just the tip of the iceberg
• How many servers are involved in buying a book on Amazon? 10s, 100s, 1000s?
  – includes warehouses, credit card processors, FedEx/UPS, …
• Pre-processing: how many servers were consumed just getting ready for that transaction?
  – Google runs hundreds of thousands of servers, bothering millions of web servers every day
• Post-processing: feed every transaction into a model and try to predict future behavior
  – Where does Walmart send the next truck?
• Each major application dictates the exact versions & patches of the underlying middleware & OS
• Therefore, each app requires its own OS image
• Each OS requires its own server, or virtual server
• An appliance just pre-packages this mess for the end user
• Clusters are much more common with Linux than with Windows
• Software licensing is still mostly per machine rather than per processor, user, or application
• Software costs of clusters suck unless you're using freeware!
• Servers are viewed as disposable – as long as you don't flush your licenses at the same time
Scale-Out
• Scale-Up, aka Big SMP, is the new dinosaur
  – Huge costs for the coherent memory interconnect
  – Huge costs for processors with CMI
• Multi-core, aka Little SMP, takes away most of the market from Big SMP
• Scale-Out Little Clusters are necessary for reliability
• Scale-Out Big Clusters were previously successful only in HPC & the Internet
• Now bleeding into business analytics
  – OLAP, financial modeling, fraud prevention, …
[Diagram: 2010 – dumb desktops on the Internet; 10 Gbit LANs; pooled storage on SANs; pooled, clustered CPUs; pooled memory?]
a Server?
• A box?
• A blade?
• A partition?
• A virtual machine?
• An OS image?
• A cluster?
• Does a server still exist when it's not running?
• A ServerArray?
• I don't know how many servers there will be in the future
• But the number of server cores will be astronomical!
• Far greater than the number of laptop or desktop cores
• More cores than people!
• More cores than ants??
• Computers can keep each other much busier than humans can!