
Performance tuning for Portals

Presents common practices for performance tuning in high-scalability portals.


Leon Rosenberg

October 08, 2009


Transcript

  1. Performance tuning for portals or why we don’t dress up

    in the cellar. 08 OCT 2009, Parship, Hamburg, by Leon Rosenberg. Tuesday, 13 October 2009
  2. Performance • Computer performance is characterized by the amount of

    useful work accomplished by a computer system compared to the time and resources used. • Short response times. • High throughput. • High availability.
  3. Performance Tuning • Code optimization. • Caching strategy. • Load

    balancing (not today). • Distributed computing (not today).
  4. Before you proceed "We should forget about small efficiencies, say

    about 97% of the time: premature optimization is the root of all evil." Donald Knuth. Whatever you do: measure, find the problem, find the solution, measure again.
  5. The world • We are talking about portals. • We

    are talking about a high read/write ratio (90% of the requests are reads). • We are talking about SOA. • We are talking about a 3T (three-tier) architecture.
  6. The Problem • Users want to see the results of

    an action ASAP. • Users want to store some data and to view a lot of other users’ data. • Databases can store data. • Databases aren’t meant to retrieve the data.
  7. The Metaphor • Databases aren’t meant to retrieve the data,

    they are like cellars: you put everything in and hope that you’ll never need it again.
  8. Typical DB

  9. The Cellar • The best way to store all my

    clothes is to put them into the cellar. • They won’t take up space in the flat (RAM). • They will be labeled, categorized, and safely put away (indexes, tables, foreign keys).
  10. The Cellar (II) • Problem: From time to time I

    want to dress myself. • Problem: I don’t want to run into the cellar for each piece (slow).
  11. Caches • Permanent Cache • Expiry Cache • Query/Method Cache

    • SoftReference Cache • Other
  12. Permanent Cache • White shirt rack. Shoe cabinet. • Achieves

    high cache hit rates continuously. • May have a warm-up time (bad for restarts). • Memory intensive; the footprint is hard to calculate. • Can achieve a 100% cache hit rate.
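The white-shirt-rack idea can be sketched as a map that never evicts. This is a minimal illustration, not the talk's framework; the class and method names (`PermanentCache`, `warmUp`) are made up here:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal permanent cache: entries are loaded once and never evicted.
// Hit rate approaches 100% once the working set is loaded; the cost is
// that the whole set lives in RAM and must be re-loaded after a restart.
public class PermanentCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    public V get(K key) { return store.get(key); }

    public void put(K key, V value) { store.put(key, value); }

    // Warm-up: pre-load the whole working set so the first requests
    // after a restart do not all miss (the "bad for restart" phase).
    public void warmUp(Map<K, V> initialData) { store.putAll(initialData); }

    public int size() { return store.size(); }
}
```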
  13. Caches

  14. Expiry Cache • Seasonal clothing. As soon as I stop

    wearing my winter jacket, it ‘expires’. • Caches away high traffic on presumably stable object state. • Hits on the same object during one request; hits on the same object during traffic peaks. • Trade-off: performance vs. change visibility.
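A sketch of the expiry idea, assuming a simple wall-clock TTL (the class name and TTL mechanism are illustrative assumptions, not the deck's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal expiry (TTL) cache: an entry older than ttlMillis counts as a
// miss. The trade-off from the slide is visible here: a longer TTL gives
// a better hit rate but delays visibility of changes.
public class ExpiryCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long storedAt;
        Entry(V value, long storedAt) { this.value = value; this.storedAt = storedAt; }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiryCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        store.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    public V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.storedAt > ttlMillis) {
            store.remove(key);   // expired, like the winter jacket in summer
            return null;
        }
        return e.value;
    }
}
```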
  15. Query/Method Cache • Take the same piece blindfolded. • Presumes

    that the same query always returns the same result (may be combined with an expiry cache to limit change invisibility). • Mostly ineffective on well-designed (rich) interfaces (compared to an object cache). • Easy to apply: can be wrapped around any implementation.
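"Can be wrapped around an implementation" can be sketched with a JDK dynamic proxy that keys results by method name and arguments. This is a minimal illustration of the idea, assuming (as the slide does) that the same call always yields the same result; the `MethodCache` name is made up here:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Method cache wrapped around any interface implementation via a JDK
// dynamic proxy. Key = method name + argument list; the first call is
// delegated to the target, later identical calls are served from the map.
public final class MethodCache {
    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T target) {
        Map<String, Object> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            String key = method.getName() + Arrays.deepToString(args);
            Object cached = cache.get(key);
            if (cached != null) return cached;
            Object result = method.invoke(target, args);
            if (result != null) cache.put(key, result);   // null stays uncached
            return result;
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}
```

Usage would look like `UserService cached = MethodCache.wrap(UserService.class, impl);` — the client keeps talking to the same interface, which is what makes the wrapper easy to apply.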
  16. SoftReference Cache • Coat rack, wardrobe. • Same as permanent

    cache, but for varying object sizes. • Flexible memory usage. • Ineffective memory usage; may require GC tuning (collection ratios).
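The coat-rack idea maps directly onto `java.lang.ref.SoftReference`: the garbage collector may reclaim the values under memory pressure, so the cache shrinks by itself when the flat runs out of space. A minimal sketch (class name illustrative):

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache whose values the GC is allowed to reclaim when memory gets tight.
// Flexible memory usage, but hit rates depend on GC behavior - hence the
// slide's note about tuning collection ratios.
public class SoftReferenceCache<K, V> {
    private final Map<K, SoftReference<V>> store = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        store.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = store.get(key);
        if (ref == null) return null;
        V value = ref.get();
        if (value == null) store.remove(key);   // value was collected: clean the slot
        return value;
    }
}
```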
  17. Soft Reference Caches

  18. Other • Partial Cache. Caches only the unmodifiable parts of objects

    (e.g. registrationDate; the Manolo Blahnik $400 shoes). • Negative cache. Caches non-existent objects (null-object cache: you don’t have red shoes!). • Index cache. Caches attribute mappings for reverse lookups (name -> id, red -> shoes).
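The negative cache is the simplest of the three to sketch: a set of keys known not to exist, so repeated lookups for missing objects never reach the backend. Names are illustrative:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Negative cache: remembers keys known NOT to exist ("you don't have red
// shoes"), so the backend is not asked again and again for missing objects.
public class NegativeCache<K> {
    private final Set<K> missing = ConcurrentHashMap.newKeySet();

    public void markMissing(K key) { missing.add(key); }

    public boolean isKnownMissing(K key) { return missing.contains(key); }

    // Must be called when the object comes into existence (e.g. on create),
    // otherwise the cache keeps denying an object that is now real.
    public void invalidate(K key) { missing.remove(key); }
}
```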
  19. Usage of Caches • No general patent. • Requires deep

    understanding of the business case. • Most of the effect comes from a good understanding of the business case and the application’s behavior. • HitRate = domain knowledge * operational knowledge.
  20. Proxies • Local. • Remote. • Rerouting.
  21. Local Proxies • Keep the ski-suit in the cellar, but

    take the gloves into the flat for a dog-walk. • Reduces network traffic. Reduces traffic on a remote service. • Powerful in combination with expiry caches (transparent proxies). • Good in combination with partial caches (some methods are served locally).
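A transparent local proxy combining the proxy pattern with an expiry cache might look like the sketch below. `UserService` and `User` follow the deck's class diagrams; the TTL value and the cache internals are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The client sees only the UserService interface; this proxy serves
// getUser from a local expiry cache and delegates misses to the remote
// service - the gloves stay in the flat, the ski-suit stays in the cellar.
interface UserService { User getUser(int userId); }

final class User {
    final int userId;
    final String userName;
    User(int userId, String userName) { this.userId = userId; this.userName = userName; }
}

final class LocalUserServiceProxy implements UserService {
    private static final long TTL_MILLIS = 60_000;   // illustrative value

    private static final class Entry {
        final User user;
        final long storedAt;
        Entry(User user, long storedAt) { this.user = user; this.storedAt = storedAt; }
    }

    private final UserService remote;                // the expensive hop
    private final Map<Integer, Entry> cache = new ConcurrentHashMap<>();

    LocalUserServiceProxy(UserService remote) { this.remote = remote; }

    @Override public User getUser(int userId) {
        Entry e = cache.get(userId);
        if (e != null && System.currentTimeMillis() - e.storedAt <= TTL_MILLIS) {
            return e.user;                           // served locally, no network
        }
        User fresh = remote.getUser(userId);         // network call in real life
        cache.put(userId, new Entry(fresh, System.currentTimeMillis()));
        return fresh;
    }
}
```

Because the proxy implements the same interface, it can be swapped in by the lookup mechanism without any change in client code.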
  22. Remote Proxies • A room in the cellar with stuff

    that I indeed need often. A wall closet. • Reduces traffic on a remote service. • Powerful in combination with Permanent Caches, SoftReference Caches, or Expiry Caches, depending on the concrete use case.
  23. Rerouting Proxies • A special cellar for special pieces. •

    Transparently reroute parts of the traffic to another implementation. • Helps to separate fast and slow traffic. Reduces load on a service. • Powerful in combination with SoftReference and Expiry Caches.
  24. Load balancing • Functional - method-based. • Traffic-oriented -

    parameter-based (mod-ing). • Random - round robin. Whatever it is, it should be transparent to the application.
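The parameter-based variant ("mod-ing") can be sketched in a few lines: the routing decision is a modulo over the parameter, so the same user always lands on the same instance and each instance caches a disjoint slice of the users. The class name is made up here:

```java
// Parameter-based ("mod-ing") distribution: userId % instanceCount picks
// the service instance. In the talk's architecture this decision lives in
// the middleware, so it stays transparent to the application.
public final class ModRouter {
    private final int instanceCount;

    public ModRouter(int instanceCount) { this.instanceCount = instanceCount; }

    public int instanceFor(int userId) {
        return Math.floorMod(userId, instanceCount);  // safe for negative ids
    }
}
```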
  25. What mods - scales • Rerouting distributes traffic by method.

    • Mod-ing distributes traffic by parameter (e.g. userId % constant). Must be performed/configured by the middleware. • Multiple instances of the same service - linear scalability. • Not always possible due to resource ownership (the ‘create’ dilemma).
  26. What doesn’t mod • Combine mods with remote proxies:

    extract the non-creating calls into proxies and mod-distribute the proxies. • Use a publish/subscribe model to synchronize multiple instances. • Distribute domains (each service instance has its own userId pool for creation, etc.).
  27. Use Case • Assume we have a User Object we

    need at least once upon each request, but up to several hundred times (mailbox, favorite lists, etc.), with an assumed average of 20. • Assume we have incoming traffic of 1,000 requests per second.
  28. Classic 3T architecture (class diagram): a User entity (userId, userName,

    regDate, lastLogin); a UserService interface (getUser, getUserByUserName, updateUser, createUser); UserServiceImpl implements it and uses a UserServiceDAO, which creates the User objects.
  29. Classic 3T architecture, collaboration for getUser (sequence diagram): the

    client obtains the UserService facade via LookupUtility.getService (1.1, which creates the facade, 1.1.1), then calls getUser (1.2); the facade delegates to UserServiceDAO.getUser (1.2.1), which queries the database over the network (1.2.1.1).
  30. Classic 3T • The DB will have to handle 20,000

    requests per second. • The average response time must be 0.05 milliseconds. • No DB in the world can handle this (at least not one with fewer than 20 processors).
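The arithmetic behind the slide, spelled out: 1,000 portal requests per second times 20 User lookups per request gives the DB load, and serving that serially leaves 1/20,000 of a second per call.

```java
// The classic-3T load calculation from the slide: incoming traffic times
// lookups per request gives DB calls per second; inverting that gives the
// per-call time budget.
public final class ClassicLoad {
    public static void main(String[] args) {
        int requestsPerSecond = 1_000;
        int userLookupsPerRequest = 20;
        int dbCallsPerSecond = requestsPerSecond * userLookupsPerRequest; // 20,000
        double budgetMillis = 1_000.0 / dbCallsPerSecond;                 // 0.05 ms
        System.out.println(dbCallsPerSecond + " DB calls/s, "
                + budgetMillis + " ms budget per call");
    }
}
```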
  31. The same collaboration annotated with load: 10 * 100 * 20 = 20,000 getUser

    calls per second cross the network and hit the facade, the DAO, and the database.
  32. Performance-optimized 3T architecture (class diagram): LocalUserServiceProxy

    and RemoteUserServiceProxy both implement UserService and each hold a Cache (getFromCache, putInCache); ExpiryCache (expiryDuration), PermanentCache, and SoftReferenceCache implement Cache; cached objects implement Cacheable (getId); UserServiceImpl additionally holds a usernameCache and a nullCache and still uses the UserServiceDAO, which creates the User objects (userId, userName, regDate, lastLogin).
  33. Performance-optimized 3T architecture, collaboration for getUser (sequence

    diagram): the LocalUserServiceProxy first tries its own cache (1.2.1); on a miss it calls the RemoteUserServiceProxy over the network (1.2.2), which tries its cache (1.2.2.1); on a miss the UserServiceImpl checks its permanent cache (1.2.2.2.1) and its negative cache (1.2.2.2.2) before the DAO finally queries the database (1.2.2.2.3.1); on the way back, each layer puts the result into its cache (1.2.2.2.4, 1.2.2.3, 1.2.3).
  34. Optimized 3T • LocalServiceProxy can handle approx. 20% of the

    requests. • With mod 5, five instances of RemoteServiceProxy will handle 16,000 requests/s, or 3,200/s each. They will cache away 90% of those requests. • The 1,600 remaining requests per second will arrive at the UserService.
  35. Optimized 3T (II) • Permanent cache of the user service

    will be able to cache away 98% of those requests. • The NullUser cache will cache away another 1% of them. • At most 16 requests per second will reach the DB, allowing a response time of 62.5 ms --> piece of cake. And no changes in client code at all!
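The funnel from the last two slides, computed step by step. The percentages are the deck's own assumptions (20% local hit rate, 90% remote, 98% permanent cache, with the negative cache stopping half of what remains):

```java
// The optimized-3T funnel: each caching layer strips its share of the
// 20,000 getUser calls per second before the rest moves on.
public final class OptimizedLoad {
    public static void main(String[] args) {
        int total = 20_000;                                     // getUser calls per second
        int pastLocal = total - total * 20 / 100;               // 4,000 stop at the local proxies
        int pastRemote = pastLocal - pastLocal * 90 / 100;      // 14,400 stop at the remote proxies
        int pastPermanent = pastRemote - pastRemote * 98 / 100; // 1,568 stop at the permanent cache
        int reachDb = pastPermanent / 2;                        // the negative cache stops the other 16
        System.out.println(total + " -> " + pastLocal + " -> " + pastRemote
                + " -> " + pastPermanent + " -> " + reachDb + " reach the DB");
    }
}
```

The chain ends at 16 calls per second, which is where the 1,000 ms / 16 = 62.5 ms response-time budget on the slide comes from.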
  36. The same collaboration annotated with load: of the 10 * 100 * 20 = 20,000

    getUser calls per second, 4,000 stop at the local proxies, 14,400 stop at the remote proxy instances, 1,568 stop at the permanent cache, 16 stop at the negative cache, and only 16 make it to the DB. Partytime!
  37. Thank you. Discussion. The End. To be continued. Are you

    still here?