Slide 32
Metrics in literature
Query interface metrics:
- usability: how easy the system is to use [145]. It can be evaluated by query specification or task completion time [33, 43, 46, 88, 141, 147, 157, 185, 207, 227, 237, 269, 270, 281], number of iterations or navigation cost [47, 159, 160, 181], miss times [152], ease of gaining more or unique insights [101, 225, 237, 264], accuracy or success in finishing tasks [46, 65, 127, 166, 185, 237, 269, 270], etc.
- learnability: ease of learning the user actions given prior instruction [43, 46, 207, 225], etc.
- discoverability: ability to discover the user actions without prior instruction [152, 192, 207], etc.
- survey / questionnaire: survey questions [65, 127, 141, 155, 157, 159, 185, 207, 225, 264], etc.
- case study: perform real tasks to demonstrate feasibility and in-context usefulness [53, 106, 108, 170, 193, 216, 230, 261, 266, 271, 272, 279], etc.
- subjective feedback / suggestions: [46, 57, 65, 147, 227, 237, 261], etc.
- behavior analysis: sequences of mouse press-drag-release events [147], event state sequences [174], etc.

Performance metrics:
- throughput: transactions / requests / tasks per second: TPC-C, TPC-E [30], [70], etc.
- latency: query execution time or frames per second: [70, 88, 104, 152, 155, 159, 184, 188, 189], etc.
- scalability: performance as the data size, number of dimensions, number of machines, etc. grow: [81, 88, 155, 189], etc.
- cache hit rate: [48, 155, 245], etc.
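As a rough illustration of how the performance metrics above are typically computed, the following Python sketch measures per-query latency, overall throughput, and cache hit rate. The `execute_query` callable and `CountingCache` wrapper are hypothetical names introduced here for illustration, not part of any cited system.

```python
import time

def run_benchmark(execute_query, queries):
    """Measure per-query latency and overall throughput.

    execute_query is an assumed callable that runs one query;
    returns (list of latencies in seconds, queries per second).
    """
    latencies = []
    start = time.perf_counter()
    for q in queries:
        t0 = time.perf_counter()
        execute_query(q)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    throughput = len(queries) / elapsed  # queries per second
    return latencies, throughput

class CountingCache:
    """Minimal memoizing cache that tracks its hit rate."""
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key, compute):
        # compute(key) is called only on a miss
        if key in self.store:
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = compute(key)
        return self.store[key]

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

For example, issuing the same key twice yields one miss then one hit, i.e. a hit rate of 0.5; scalability is evaluated by repeating `run_benchmark` while growing the data size or machine count and plotting the resulting throughput/latency curves.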
https://github.com/ixlab/eval