SETTLE: A Tuple Space Benchmarking and Testing Framework

Interested in learning more about this topic? Read the overview of my research: https://www.gregorykapfhammer.com/research/

Gregory Kapfhammer

October 19, 2005

Transcript

  1. SETTLE: A Tuple Space Benchmarking and Testing Framework

    Gregory M. Kapfhammer, Daniel M. Fiedler, Kristen Walcott, Thomas Richardson (Department of Computer Science, Allegheny College); Ahmed Amer, Panos Chrysanthis (Department of Computer Science, University of Pittsburgh). Presented at the 9th JCM, October 19-20, 2005.
  2. Contributions

    A benchmarking framework that can measure throughput and response time while varying the number of clients. A tuple space aging technique that automatically populates the tuple space before benchmark execution. A detailed empirical study that evaluates tuple space performance, the time overhead associated with aging, and the impact of aging on space performance.
  3. Introduction to Tuple Spaces

    Space clients can write, take, and read Entry objects.
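    The write, read, and take operations map directly onto the JavaSpaces interface. The sketch below is a minimal illustration, not part of SETTLE: it assumes a JavaSpace reference obtained elsewhere (for example, via Jini lookup) and uses a hypothetical MessageEntry type.

        import net.jini.core.entry.Entry;
        import net.jini.core.lease.Lease;
        import net.jini.space.JavaSpace;

        // A JavaSpaces Entry class must be public, have a public no-argument
        // constructor, and expose public object-valued fields that act as
        // matching keys (this class would live in its own MessageEntry.java).
        public class MessageEntry implements Entry {
            public String key;
            public String payload;

            public MessageEntry() { }

            public MessageEntry(String key, String payload) {
                this.key = key;
                this.payload = payload;
            }
        }

        class SpaceClientSketch {
            static void demonstrate(JavaSpace space) throws Exception {
                // write: place an entry into the space under a lease
                space.write(new MessageEntry("job-1", "data"), null, Lease.FOREVER);

                // read: return a copy of a matching entry, leaving it in the space;
                // null fields in the template act as wildcards
                MessageEntry template = new MessageEntry("job-1", null);
                Entry copy = space.read(template, null, 1000 /* ms timeout */);

                // take: remove and return a matching entry
                Entry removed = space.take(template, null, 1000 /* ms timeout */);
            }
        }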
  4. SETTLE Approach

    q space clients execute the same benchmark in phases: Startup, Aging, Write, Pause, Take, Aging Cleanup, and Shutdown. Client Cj starts up Tdelay ∈ [Tmin, Tmin + V] ms after Cj−1, and each client pauses for Tdelay ms between the write and take phases. SETTLE measures the response time, R(Si, Cj, O), and the throughput, X(Si, O, q).
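    As a concrete illustration of these phases, here is a hedged sketch of a single client's write-pause-take cycle. The class name, loop structure, and timing bookkeeping are assumptions for exposition (and it reuses the hypothetical MessageEntry type from the previous sketch); this is not SETTLE's actual code.

        import net.jini.core.lease.Lease;
        import net.jini.space.JavaSpace;
        import java.util.Random;

        // Hypothetical sketch of one SETTLE-style benchmark client.
        class BenchmarkClientSketch {
            static void runPhases(JavaSpace space, int clientIndex, long tMin,
                                  long v, int numOperations) throws Exception {
                // Staggered startup: client C_j begins T_delay in [T_min, T_min + V] ms
                // after client C_{j-1}.
                long tDelay = tMin + (long) (new Random().nextDouble() * v);
                Thread.sleep(clientIndex * tDelay);

                long start = System.currentTimeMillis();
                long totalResponse = 0;

                // Write phase: time each operation to obtain R(S_i, C_j, O).
                for (int i = 0; i < numOperations; i++) {
                    long before = System.currentTimeMillis();
                    space.write(new MessageEntry("c" + clientIndex + "-" + i, "data"),
                                null, Lease.FOREVER);
                    totalResponse += System.currentTimeMillis() - before;
                }

                // Pause for T_delay ms between the write and take phases.
                Thread.sleep(tDelay);

                // Take phase: remove the entries this client wrote.
                for (int i = 0; i < numOperations; i++) {
                    space.take(new MessageEntry("c" + clientIndex + "-" + i, null),
                               null, 1000 /* ms timeout */);
                }

                // Throughput X(S_i, O, q) is completed operations per elapsed second.
                double elapsedSec = (System.currentTimeMillis() - start) / 1000.0;
                System.out.println("avg response (ms): "
                        + (totalResponse / (double) numOperations)
                        + ", throughput (ops/sec): " + (2.0 * numOperations) / elapsedSec);
            }
        }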
  5. Tuple Space Aging: Preliminaries

    The benchmark could execute against an empty tuple space, but take operations would then execute faster than normal. The space is instead aged with either automatically generated or recorded/derived workloads. The {r, t, w}-frequency defines the fraction of the workload associated with each space operation (read, take, write), and a frequency is also defined for each possible Entry type.
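    A minimal sketch of how an {r, t, w}-frequency could drive automatic workload generation, assuming the three frequencies are percentages that sum to 100; the class name and the uniform random draw are illustrative choices, not necessarily SETTLE's implementation.

        import java.util.Random;

        // Illustrative sketch: build a workload whose operation mix
        // approximates a given {r, t, w}-frequency.
        class AgingWorkloadSketch {
            static final int READ = 0, TAKE = 1, WRITE = 2;

            static int[] generate(int workloadSize, int rFreq, int tFreq, int wFreq) {
                if (rFreq + tFreq + wFreq != 100) {
                    throw new IllegalArgumentException("frequencies must sum to 100");
                }
                Random random = new Random();
                int[] operations = new int[workloadSize];
                for (int i = 0; i < workloadSize; i++) {
                    int draw = random.nextInt(100);            // uniform in [0, 100)
                    if (draw < rFreq) {
                        operations[i] = READ;                  // about rFreq% of the workload
                    } else if (draw < rFreq + tFreq) {
                        operations[i] = TAKE;                  // about tFreq%
                    } else {
                        operations[i] = WRITE;                 // about wFreq%
                    }
                }
                return operations;
            }
        }

    With the {0, 0, 100} frequency used in the experiments reported later in the deck, every generated operation is a write, so aging simply fills the space with entries.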
  6. Tuple Space Aging: Example

    Automatically populate the space with Entry objects of the same type but with different field values.
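    A minimal sketch of such a population step, reusing the hypothetical MessageEntry type and assuming an already-obtained JavaSpace reference; SETTLE's actual aging code may differ.

        import net.jini.core.lease.Lease;
        import net.jini.space.JavaSpace;

        // Illustrative aging step: write |W| entries of the same type whose
        // field values differ, so later matching must search a populated space.
        class SpaceAgerSketch {
            static void age(JavaSpace space, int workloadSize) throws Exception {
                for (int i = 0; i < workloadSize; i++) {
                    space.write(new MessageEntry("aged-" + i, "payload-" + i),
                                null, Lease.FOREVER);
                }
            }
        }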
  7. Experiment Design

    Dual Intel Xeon Pentium III processors and 512 MB of main memory. GNU/Linux kernel 2.4.18-14smp, the Java 1.4.2 compiler, the Java 1.4.2 VM in HotSpot client mode, and Jini 1.2.1. LinuxThreads 0.10 was configured with a one-to-one mapping between Java threads and kernel processes. Clients C1, ..., Cq executed on the same machine as JavaSpace Si. Other configurations are possible and additional experiments are currently being conducted.
  8. Experiment Parameters

    Parameter                      Value(s)
    Tmin                           200 ms
    V                              50 ms
    Tdelay                         [200, 250] ms
    # of Entry Objects             {1000}
    Aging Workload Size (|W|)      {1000, 3000, 6000, 12000}
    # of Clients (non-aged) (q)    {2, 8, 14, 22}
    # of Clients (aged) (q)        {8, 14}
    {r, t, w}-frequency            {0, 0, 100}
    Entry Objects                  {Null, String, Array, File}
  9. Tuple Space Throughput

    [Figure: throughput (operations/sec) versus the number of clients q for NullIO, StringIO, ArrayIO, and FileIO entries.] When the space is not aged, throughput knees at 8 or 14 clients.
  10. NullIO: Response Time, Throughput

    [Figure: throughput (operations/sec) and response time (ms) versus the number of clients q for NullIO entries.] When throughput knees at 8 clients, the average response time continues to increase linearly.
  11. FileIO: Response Time, Throughput

    [Figure: throughput (operations/sec) and response time (ms) versus the number of clients q for FileIO entries.] When throughput knees at 14 clients, the average response time continues to increase linearly.
  12. NullIO: Impact of Aging

    [Figure: benchmark execution time (sec) versus aging workload size |W| ∈ {1000, 3000, 6000, 12000} for q = 8 and q = 14 clients.] When |W| = 3000, there is a 30% increase in NullIO execution time.
  13. NullIO: Aging Time Overhead

    [Figure: aging's percentage of the benchmark execution time versus aging workload size |W| for q = 8 and q = 14 clients.] Aging never consumes more than 31% of the entire benchmark time.
  14. NullIO: Cleaning Time Overhead

    [Figure: cleaning's percentage of the benchmark execution time versus aging workload size |W| for q = 8 and q = 14 clients.] Cleaning incurs less time overhead than aging due to the use of snapshot.
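    The saving comes from JavaSpace.snapshot, which pre-serializes a template once so it can be reused across many matching operations. A hedged sketch of that cleanup pattern follows; the wildcard MessageEntry template and the use of takeIfExists are illustrative assumptions, not SETTLE's documented cleanup code.

        import net.jini.core.entry.Entry;
        import net.jini.space.JavaSpace;

        // Illustrative cleanup: repeatedly remove aged entries using a snapshot
        // of the template, avoiding re-serialization of the template each call.
        class SpaceCleanerSketch {
            static void clean(JavaSpace space) throws Exception {
                MessageEntry template = new MessageEntry(null, null);  // null fields match anything
                Entry snapshot = space.snapshot(template);

                // takeIfExists returns null once no matching entry remains.
                while (space.takeIfExists(snapshot, null, JavaSpace.NO_WAIT) != null) {
                    // keep removing until no matches are left
                }
            }
        }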
  15. Aging’s Impact on Throughput

    [Figure: throughput (operations/sec) versus aging workload size |W| for NullIO, StringIO, ArrayIO, and FileIO entries.] Aging reduces tuple space throughput as the workload size increases.
  16. Aging’s Impact on Response Time

    [Figure: response time (ms) versus aging workload size |W| for NullIO, StringIO, ArrayIO, and FileIO entries.] Aging increases tuple space response time as the workload size increases.
  17. Related Work

    Bulej et al. focus on regression benchmarking. Sterk et al. evaluate tuple space performance in the context of bioinformatics. Noble and Zlateva measure tuple space performance for astrophysics computations. Hancke et al. and Neve et al. measure tuple space performance through statistically guided experiments. Smith and Seltzer introduced file system aging.
  18. Future Work

    Additional experiments: (i) transient versus persistent tuple spaces, (ii) remote client interactions, (iii) different tuple space implementations, and (iv) new versions of Jini and JavaSpaces. Workload studies for tuple space-based applications. Additional micro, macro, and application-specific benchmarks. Definition-use testing for tuple space-based applications: how do you know your application puts the right data into the space?
  19. Conclusions

    SETTLE measures throughput and response time and supports automatic tuple space aging. In the current SETTLE configuration, JavaSpaces can support between eight and fourteen concurrent local clients without reducing average response time. Tuple space aging can be performed with acceptable time overhead, and aging does support the characterization of worst-case performance.
  20. Resources

    Fiedler et al. Towards the Measurement of Tuple Space Performance. ACM SIGMETRICS Performance Evaluation Review, December 2005. http://cs.allegheny.edu/~gkapfham/research/settle/