
Designing for Complex Creative Task Solving

janetyc
October 13, 2017

My NTU INM PhD proposal slides
Transcript

  1. Designing for Complex Creative Task Solving Yi-Ching (Janet) Huang

    2017.10.13 PhD Thesis Proposal Advisor: Jane Yung-jen Hsu, PhD
  2. 4 A Creative Task as an Iterative Process (human-centered design, from IDEO) Intro | Feedback Generation | Feedback Integration | Iteration Process
  3. 5 Writing as an iterative process: 1st version (score: 2), 2nd version (score: 2.5), 3rd version (score: 2.75)
  4. 6 http://push.m-iti.org User Interface Design as an Iterative Process Intro | Feedback Generation | Feedback Integration | Iteration Process
  5. 7 Complex Creative Process Uncertainty A Concrete Solution Intro |

    Feedback Generation | Feedback Integration | Iteration Process
  6. 8 Properties of Creative Tasks 1. Open-ended and ill-defined 2. Answer is not true or false, but how good the answer is 3. Quality is usually evaluated by multiple criteria 4. Quality can be improved by iterative refinement Intro | Feedback Generation | Feedback Integration | Iteration Process
  7. 9 Agapie, Teevan, and Monroy-Hernández. Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting. HCOMP 2015. Sadauskas, Byrne, and Atkinson. Mining Memories: Designing a Platform to Support Social Media Based Writing. CHI 2015. Bernstein, Little, Miller, Hartmann, Ackerman, Karger, Crowell, and Panovich. Soylent: A Word Processor with a Crowd Inside. UIST 2010. Kim, Cheng, and Bernstein. Ensemble: Exploring Complementary Strengths of Leaders and Crowds in Creative Collaboration. CSCW 2014. Hahn, Chang, Kim, and Kittur. The Knowledge Accelerator: Big Picture Thinking in Small Pieces. CHI 2016. Nebeling, To, Guo, de Freitas, Teevan, Dow, and Bigham. WearWrite: Crowd-Assisted Writing from Smartwatches. CHI 2016. Kittur, Smus, Khamkar, and Kraut. CrowdForge: Crowdsourcing Complex Work. UIST 2011. Luther, Hahn, Dow, and Kittur. Crowdlines: Supporting Synthesis of Diverse Information Sources through Crowdsourced Outlines. HCOMP 2015.
  9. 10 Soylent: A Word Processor with a Crowd Inside (Bernstein et al., UIST '10) - Shortn: text shortening - Crowdproof: crowdsourced proofreading - The Human Macro: natural-language crowd scripting - Find-Fix-Verify workflow M. S. Bernstein, G. Little, R. C. Miller, B. Hartmann, M. S. Ackerman, D. R. Karger, D. Crowell, and K. Panovich. Soylent: a word processor with a crowd inside. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, UIST '10, pages 313-322, New York, NY, USA, 2010. ACM. Intro | Feedback Generation | Feedback Integration | Iteration Process
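Soylent's Find-Fix-Verify pattern decomposes an edit into three independent crowd stages so that no single worker can derail the result. A minimal sketch of the control flow (the stage callables and the two-vote threshold are illustrative assumptions, not Soylent's actual implementation):

```python
from collections import Counter

def find_fix_verify(paragraph, find, fix, verify, min_votes=2):
    """Illustrative Find-Fix-Verify: find candidate problem spans,
    collect independent fixes, then keep only the fix that verifiers
    favor. `find`, `fix`, and `verify` stand in for crowd tasks."""
    # Find: workers flag problem spans; keep spans flagged independently
    # by at least `min_votes` workers.
    spans = Counter(s for worker_spans in find(paragraph) for s in worker_spans)
    problems = [s for s, votes in spans.items() if votes >= min_votes]

    accepted = {}
    for span in problems:
        # Fix: several workers each propose a rewrite of the span.
        candidates = fix(paragraph, span)
        # Verify: other workers vote; keep the top-voted rewrite.
        votes = Counter(verify(span, candidates))
        if votes:
            accepted[span] = votes.most_common(1)[0][0]
    return accepted
```

In a real deployment each callable would post a task to a crowd platform; here they can be stubbed with local functions for testing.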
  10. 11 Prior Work (spanning ideation, outlining, creation, revision, and publishing): CrowdLines (Luther et al., 2015), MicroWriter (Teevan et al., 2016), CrowdForge (Kittur et al., 2011), Sparkfolio (Sadauskas et al., 2015), Ensemble (Kim et al., 2014), Soylent (Bernstein et al., 2010), Crowdsourcing in the Field (Agapie et al., 2015), Knowledge Accelerator (Hahn et al., 2016), WearWrite (Nebeling et al., 2016) Intro | Feedback Generation | Feedback Integration | Iteration Process
  13. 14 The Benefits of Iteration (Quality vs. Iteration) Intro | Feedback Generation | Feedback Integration | Iteration Process
  14. 15 Feedback Facilitates High-Quality Results - an iterative process of work and feedback: evaluate the writing, then improve the writing Intro | Feedback Generation | Feedback Integration | Iteration Process
  15. 16 Evaluate & Critique Create & Modify Author Learning Supporting

    Collaboration Feedback Provider Creative Task Solving Framework Intro | Feedback Generation | Feedback Integration | Iteration Process
  16. 17 Evaluate & Critique Create & Modify Author Learning Supporting

    Feedback Integration Collaboration Feedback Generation Feedback Provider Intro | Feedback Generation | Feedback Integration | Iteration Process
  17. 18 Outline - Introduction - Part I: Feedback Generation -

    Part II: Feedback Integration - Part III: Iterative Revision Process - Conclusion Intro | Feedback Generation | Feedback Integration | Iteration Process
  18. 19 Part I: Feedback Generation Writing Feedback Supporting Feedback Provider Writer How do we generate effective feedback that helps writers improve the quality of their writing? Intro | Feedback Generation | Feedback Integration | Iteration Process
  19. 21 Machines can help correct surface errors Intro | Feedback Generation | Feedback Integration | Iteration Process
  20. 22 Criterion (Burstein et al., 2004) - Holistic Scoring: provides diagnostic feedback on grammar, usage, and mechanics; style and diction; and organization and development - Template-based feedback: e.g. feedback for the highest score "6" and for the lowest score "1" Jill Burstein, Martin Chodorow, and Claudia Leacock. Automated essay evaluation: The Criterion online writing service. AI Magazine, 25(3):27-36, 2004. Intro | Feedback Generation | Feedback Integration | Iteration Process
  21. 23 Disadvantages of Existing Feedback Systems - Require large amounts of labeled data - Support only a limited set of topics - Use static feedback templates Intro | Feedback Generation | Feedback Integration | Iteration Process
  22. 24 Experts Peers Crowds Where can we get feedback? Intro

    | Feedback Generation | Feedback Integration | Iteration Process
  25. 25 Rewriting Tools Rewriting Feedback global local Spelling checker Grammar

    checker Free Comment idea Organization checker structure sentence word Local Global ??? Intro | Feedback Generation | Feedback Integration | Iteration Process Human Machine
  26. 26 Writer Feedback Writing Crowd Machine Supporting Writing Revision by

    Crowdsourced Structural Feedback Revision Crowdsourcing Workflow Data Annotations StructFeed Yi-Ching Huang, Jiunn-Chia Huang, and Jane Yung-jen Hsu. Supporting ESL writing by prompting crowdsourced structural feedback. In Proceedings of the Fifth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2017) Yi-Ching Huang, Hao-Chuan Wang, and Jane Yung-jen Hsu. Bridging learning gap in writing education with a crowd-powered system. CHI 2017 Workshop on Designing for Curiosity, Denver, Colorado, USA, 2017. Intro | Feedback Generation | Feedback Integration | Iteration Process
  27. 27 English Oriental (Kaplan, 1966) Rhetorical Patterns of Different Languages

    Intro | Feedback Generation | Feedback Integration | Iteration Process Robert B. Kaplan. Cultural thought patterns in inter-cultural education. Language Learning, 1966.
  28. 28

  29. 29 Crowdsourcing Workflow Structural Feedback Unity Identification Writing Criteria 1.

    multiple topic issue 2. missing topic issue 3. irrelevance issue Topic sentence prediction Irrelevant sentence prediction Crowd Annotations System Overview of StructFeed Intro | Feedback Generation | Feedback Integration | Iteration Process Topic sentence annotation Relevant keyword annotation
  30. 30 Opening Topic Sentence Supporting Sentence Concluding Sentence Closure (optional)

    (optional) Introduction Body Conclusion Essay Structure Paragraph Structure paragraph paragraph paragraph Important! Intro | Feedback Generation | Feedback Integration | Iteration Process
  31. 31 Unity - Keys: 1. All sub-points center on one central idea 2. No irrelevant sentences are used Topic Sentence, Supporting Sentence 1, Supporting Sentence 2, Supporting Sentence 3 (related to the topic sentence), Concluding Sentence Intro | Feedback Generation | Feedback Integration | Iteration Process
  32. 32 Crowdsourcing Workflow for Unity Identification - Topic sentence annotation (Topic task): identify the topic sentence (topic + ideas) - Relevant keyword annotation (Relevance task): highlight the relevant words between two sentences - Filter: filter out paragraphs with no topic sentence (weight >= 2) Intro | Feedback Generation | Feedback Integration | Iteration Process
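The filter step above can be sketched as a simple vote-aggregation pass; a hypothetical illustration (the data layout and the weight >= 2 threshold follow the slide, everything else is assumed):

```python
from collections import Counter

def aggregate_topic_votes(annotations, min_weight=2):
    """Aggregate crowd topic-sentence clicks per paragraph.
    `annotations` maps a paragraph id to the list of sentence indices
    clicked by individual workers. Sentences with fewer than
    `min_weight` votes are dropped; paragraphs left with no surviving
    sentence are flagged as having a missing topic sentence.
    (Sketch of the workflow's filter step, not the thesis code.)"""
    topic_sentences, missing_topic = {}, []
    for pid, clicks in annotations.items():
        votes = Counter(clicks)
        kept = [s for s, w in votes.items() if w >= min_weight]
        if kept:
            # Keep the most-voted sentence as the topic sentence.
            topic_sentences[pid] = max(kept, key=votes.get)
        else:
            missing_topic.append(pid)
    return topic_sentences, missing_topic
```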
  34. 33 Topic Task - identify topic sentence Quality Control -

    native speakers as workers - brief explanation of concept - worked example - annotate sentence by click Explanation Worked example Working area annotate sentence by click Intro | Feedback Generation | Feedback Integration | Iteration Process
  36. 34 Relevance Task Worked example annotate word by click Intro

    | Feedback Generation | Feedback Integration | Iteration Process
  37. 35 Experiment I: Crowd vs. Machine on unity identification (topic sentence identification and irrelevant sentence identification) - 15 essays written by non-native writers - Ground-truth data generated by 2 experts (topic sentences, relevant keywords) - 106 crowd workers; total cost: $26 - 336 topic sentence annotations and 1923 relevant word annotations Intro | Feedback Generation | Feedback Integration | Iteration Process
  38. 36 Machine-Based Solutions for Unity Identification (TF-IDF, ASS, Word2vec, Wordnet, Rule) - TF-IDF-based: the topic sentence is the sentence that contains the most key ideas - Similarity-based (ASS): the topic sentence is the sentence most similar to the other sentences in the paragraph - Rule-based: choose the first sentence as the topic sentence Intro | Feedback Generation | Feedback Integration | Iteration Process
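The rule-based and similarity-based (ASS) baselines can be sketched in a few lines; this is an illustrative reconstruction, not the thesis code (cosine similarity over sentence embeddings stands in for the Wordnet/Word2vec measures named on the slide):

```python
import numpy as np

def rule_based(sentences):
    """Rule baseline from the slide: the first sentence is the topic sentence."""
    return 0

def ass_based(sent_vectors):
    """Similarity baseline (ASS): pick the sentence whose average
    similarity to the other sentences in the paragraph is highest.
    `sent_vectors` are assumed sentence embeddings (e.g., averaged
    Word2vec vectors)."""
    V = np.asarray(sent_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize rows
    sims = V @ V.T                                    # pairwise cosine similarity
    np.fill_diagonal(sims, 0.0)                       # ignore self-similarity
    avg = sims.sum(axis=1) / (len(V) - 1)             # mean similarity to others
    return int(avg.argmax())
```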
  39. 37 Results of Topic Sentence Prediction (Precision / Recall / F1-score) - ASS (Wordnet): 0.403 / 0.360 / 0.380 - ASS (Word2vec): 0.343 / 0.307 / 0.324 - Rule-based: 0.658 / 0.587 / 0.620 - Crowd-based: 0.607 / 0.720 / 0.659 (TF-IDF variants, shown only in the bar chart, scored roughly 0.24-0.28 on all three metrics.) Intro | Feedback Generation | Feedback Integration | Iteration Process
  40. 38 Results of Irrelevant Sentence Prediction (Precision / Recall / F1-score) - Path Similarity (Wordnet): 0.133 / 0.083 / 0.103 - Cosine Similarity (Word2vec): 0.118 / 0.100 / 0.108 - Crowd-based: 0.206 / 0.326 / 0.252 Intro | Feedback Generation | Feedback Integration | Iteration Process
  41. 39 Structural Feedback Writing Criteria 1. multiple topic issue 2.

    missing topic issue 3. irrelevance issue Unity Identification Topic sentence prediction Irrelevant sentence prediction Crowd Annotations Structural Feedback Intro | Feedback Generation | Feedback Integration | Iteration Process
  42. 40 Effective Feedback 1. Obtain a concept of the standard or goal 2. Compare the actual level of performance with the standard 3. Engage in action which leads to closure of the gap (Sadler, D. R. 1989) D. R. Sadler. Formative assessment and the design of instructional systems. Instructional Science, 18(2):119-144, 1989. Intro | Feedback Generation | Feedback Integration | Iteration Process
  43. 41 Feedback Summary Rhetorical Visualization Structural Feedback - topic sentence

    - irrelevant sentence - relevant keywords (crowd annotations) (crowd annotations) (machine prediction) - type of issue - suggested action Intro | Feedback Generation | Feedback Integration | Iteration Process
  44. 42 StructFeed Structural Feedback - Detailed annotations: topic sentence, irrelevant sentence, relevant keywords - Feedback summary: type of issue, suggested action - Writing Tip: provides a quick reminder about basic writing knowledge https://writingfeedback.herokuapp.com/topic_vis?workflow_id=78
  45. 43 topic weight: 1 relevance weight: 1 topic weight: 5

    relevance weight: 3 Intro | Feedback Generation | Feedback Integration | Iteration Process
  46. 44 Field Experiment on ESL Writers - 18 self-motivated ESL learners (8 females, 10 males), 19~34 years old - A between-subjects study - Conditions: C1 (expert feedback): free-form feedback from an expert; C2 (crowd feedback): free-form feedback from a crowd worker; C3 (structural feedback): structural feedback from StructFeed - Procedure: Writing (original version R), Feedback, Rewriting (revised version R') - Measures: time, quantity, cost; quality improvement (R'-R); perceived helpfulness Intro | Feedback Generation | Feedback Integration | Iteration Process
  47. 45 Field Experiment Results - Expert Feedback: time 1~2 days; a single expert; $16; 55.44 suggestions; diff-rating 0.29 (.43); # of equal ratings: 1; # of decreased ratings: 1 - Crowd Feedback: time 10~30 mins; a single worker; $2; 8.11 suggestions; diff-rating 0.38 (.44); # of equal ratings: 1; # of decreased ratings: 1 - StructFeed: time 1~5 hrs; 15-25 workers; $1~1.7; diff-rating 0.54 (.25); all participants who received StructFeed increased the quality of their revisions Intro | Feedback Generation | Feedback Integration | Iteration Process
  48. 46 Discussion - The crowd helps develop better rules for machines - the data-driven template is a "4-paragraph essay template" - StructFeed not only identifies writing issues but also promotes reflection - Does expert feedback perform worse than crowd feedback? - there may be a knowledge gap or misunderstanding between experts and novice writers - writers tend to fix local issues like grammatical errors rather than global issues like improving unity or coherence Intro | Feedback Generation | Feedback Integration | Iteration Process
  49. 47 Conclusion (1) We propose a crowdsourcing workflow that guides crowd workers to annotate topic sentences and relevant keywords for identifying paragraph unity in an article. (2) We demonstrate that our crowd-based method outperforms naive machine-based methods on ill-structured ESL writing with no labeled data. (3) We designed and implemented a crowd-powered system, StructFeed, that generates structural feedback consisting of writing hints and crowd annotations. (4) A field experiment showed that people who received structural feedback from StructFeed performed better than people who received free-form feedback generated by a single expert or a single crowd worker. Intro | Feedback Generation | Feedback Integration | Iteration Process
  50. 48 Part II: Feedback Integration Writing Feedback Feedback Provider Writer Learning Revision How do we support writers in integrating feedback into revisions and facilitate high-quality outcomes? Intro | Feedback Generation | Feedback Integration | Iteration Process
  51. 49 evaluate the writing improve the writing feedback work iterative process Good feedback does NOT always facilitate good results! Intro | Feedback Generation | Feedback Integration | Iteration Process
  52. 50 Too many suggestions may cause problems 1. Information overload

    2. Cost of task switching 3. People focus on easier problems Intro | Feedback Generation | Feedback Integration | Iteration Process
  53. 51 Formative Study: 6 participants (1 female), aged 18-23 - 1 common editing process: (1) browse all comments (2) group similar comments (3) deal with comments in a particular sequence - 4 editing strategies, observed across inexperienced and experienced writers: (1) beginning-to-end editing (2) high-to-low editing (3) low-to-high editing (4) specificity-first editing Intro | Feedback Generation | Feedback Integration | Iteration Process
  57. 52 Writer Feedback Provider Feedback Revision Workflow Feedback Rate Feedback

    Writing High Medium Low Feedback Low Medium High H M M L L L L Feedback Classification Feedback Orchestration Facilitating Better Revision Behavior and Task Performance 1. get feedback 2. classify feedback 3. revise an article in a revision workflow Intro | Feedback Generation | Feedback Integration | Iteration Process
  58. 53 Feedback Classification (Jacobs et al., 1981) - High / Content: the substance of the writing, including the topic sentence expressing the main argument and supporting ideas (e.g. unity of argument, supporting ideas, relevant examples, addressing the question) - High / Organization: the logical organization of the content (e.g. coherence of the content, relations between sentences, logical sequencing) - Medium / Vocabulary: the selection of words that are suitable for the content (e.g. word choice) - Medium / Language Use: the use of correct grammatical forms and syntactical patterns (e.g. fixing grammatical errors, paraphrasing, shortening) - Low / Mechanics: the arbitrary technical conventions of writing (e.g. spelling errors, punctuation, capitalization, format) Intro | Feedback Generation | Feedback Integration | Iteration Process
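The rubric above maps five feedback categories onto three priority levels; a minimal sketch (the dictionary keys are illustrative category labels, not identifiers from the system):

```python
# Priority mapping following the Jacobs et al. (1981) rubric on the slide.
LEVEL = {
    "content": "high", "organization": "high",
    "vocabulary": "medium", "language_use": "medium",
    "mechanics": "low",
}

def classify_feedback(comments):
    """Bucket comments by priority level. Each comment is a
    (text, category) pair; the category labels are assumed to come
    from feedback providers rating each comment."""
    buckets = {"high": [], "medium": [], "low": []}
    for text, category in comments:
        buckets[LEVEL[category]].append(text)
    return buckets
```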
  59. 54 Revision Workflow 1. Concurrent workflow 2. Sequential workflow Intro

    | Feedback Generation | Feedback Integration | Iteration Process
  60. 56 High-to-low editing vs. low-to-high editing: Content & Organization Feedback (high-level), Language Feedback (middle-level), Mechanics Feedback (low-level)
  61. 57 Experiment Design - a within-subjects, counterbalanced experiment design - 12 self-motivated non-native writers - each participant performed 3 rewriting tasks with different topics - 3 experimental conditions: (1) show feedback together (ALL) (2) show feedback sequentially from high to low (HML) (3) show feedback sequentially from low to high (LMH) Intro | Feedback Generation | Feedback Integration | Iteration Process
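The three presentation conditions amount to different orderings of the same classified feedback; a hypothetical sketch of how the stages could be generated:

```python
def order_feedback(buckets, condition):
    """Return feedback stage-by-stage for the three experimental
    conditions on the slide: 'ALL' (everything at once), 'HML'
    (high-to-low), 'LMH' (low-to-high). `buckets` maps a level name
    to its list of comments; the dict layout is an assumption."""
    if condition == "ALL":
        # One stage containing every comment.
        return [buckets["high"] + buckets["medium"] + buckets["low"]]
    order = ["high", "medium", "low"] if condition == "HML" else ["low", "medium", "high"]
    # One stage per level, presented in sequence.
    return [buckets[level] for level in order]
```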
  62. 58 The Benefits of Feedback Classification 1) Filter information and identify the weaknesses of the writing 2) Increase awareness and reduce learning anxiety - it reduces learning anxiety about uncertain or unknown matters 3) Promote learning behaviors - it provides new knowledge about how to evaluate the writing - it helps memorize and organize writing issues and promotes focused learning "This categorized feedback helps me identify my common mistakes easily! When I see the same type of writing issues appearing frequently, I understand that I need to pay more attention to this type of problem in my next writing. (P2)" "I learn new knowledge about writing from categorized feedback, and it helps me learn how to evaluate the quality of the writing by those good or bad examples. I will use the same way to review others' writings in the next time!! (P7)" Intro | Feedback Generation | Feedback Integration | Iteration Process
  63. 59 Sequential Process vs. Concurrent Process - Most participants preferred the sequential process to the concurrent process - sequential process: clear, and editing goes smoothly - concurrent process: saves time and edits efficiently - Preference for an editing process is related to past experience - people with writing experience preferred the high-to-low process, for it is consistent with the knowledge obtained from writing courses - others preferred the low-to-high process - The concurrent process reduces the influence of feedback classification - people tend to follow their original editing behaviors - people tend to take the easy way out and ignore high-level issues Intro | Feedback Generation | Feedback Integration | Iteration Process
  64. 60 - Top-down thinking - the high-to-low process allows people

    to consider big picture first and dive into the details later - Bottom-up thinking - the low-to-high process guides people to develop meaning and momentum to solve high-level issues like content or organization problems. A Sequential Process Guides Logical Thinking “Either low-to-high or high-to-low process helps me develop a thinking order. It’s convenient for me to edit my writing. (P10)” “I start with fixing language-level issues over the whole article and re-read my writing sentence by sentence. Then, I am getting more and more enjoyable because I become familiar with the content and easily recall the situation in which I was writing and understand why I generated those words. It helps me solve the structure issues more easily. (P4)” Intro | Feedback Generation | Feedback Integration | Iteration Process
  65. 61 Editing Conflicts in a Sequential Process - Physical conflicts - separating related comments into different stages leads to misunderstanding - low-level issues that were already solved at the high-level stage still appear, which also leads to confusion - Mental conflicts - uncertainty impedes editing at the high-level stage - too many micro-contributions lead to loss aversion "In the second stage, I rejected a comment suggesting an incorrect edit. However, I realized that the suggestion is correct at the third stage when I recall the previous comment I obtained. (P11)" "In the last stage, the feedback suggests that adding one connecting sentence in a specific paragraph increases the unity of the paragraph. I feel that all my previous efforts would be in vain if I fixed this issue. (P12)" Intro | Feedback Generation | Feedback Integration | Iteration Process
  66. 62 Discussion & Design Implications (I) - Feedback classification guides learning behaviors - both writers and feedback providers need to be guided by a category structure to support the feedback generation and integration process - The low-to-high sequence facilitates difficult problem solving - applying this strategy should carefully consider the amount of low-level feedback - Many micro-contributions may decrease work performance - two possible reasons: 1) a large number of repetitive and monotonous tasks causes boredom; 2) many micro-contributions lead to a feeling of loss aversion - Incorporate automated methods to reduce duplicated tasks and distribute low-level feedback around the content containing high-level writing issues Intro | Feedback Generation | Feedback Integration | Iteration Process
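The implication about reducing duplicated micro-tasks could be prototyped with a simple near-duplicate filter; a sketch under assumed inputs (the SequenceMatcher similarity and the 0.8 threshold are illustrative choices, not the system's method):

```python
import difflib

def dedup_comments(comments, threshold=0.8):
    """Drop near-duplicate feedback comments so writers see each
    issue once. A comment is kept only if it is sufficiently
    dissimilar from every comment already kept."""
    kept = []
    for c in comments:
        if all(difflib.SequenceMatcher(None, c.lower(), k.lower()).ratio() < threshold
               for k in kept):
            kept.append(c)
    return kept
```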
  67. 63 Discussion & Design Implications (II) - Resolving editing conflicts and mental obstacles - group related comments and present them at the same stage - increase transparency of the whole process by providing an overview - Flexible writing support - offer varying writing support based on writers' abilities, current writing stage (drafting, early revision, or editing), or preference Intro | Feedback Generation | Feedback Integration | Iteration Process
  68. 64 Conclusion (1) A formative study identifying four revision strategies that ESL writers apply to their revisions. (2) The concept of feedback orchestration, which uses a category structure to guide an effective revision process by orchestrating feedback of different types. (3) The system ReviseO, which supports three feedback presentation strategies for helping writers resolve issues in a concurrent or sequential process, using expert feedback structured by a standard rubric. (4) Our system helps individuals identify weaknesses, facilitates self-assessment, and promotes reflective practice. The results also demonstrate that a sequential process guides users toward logical thinking and that a low-to-high sequence helps complex problem solving. Intro | Feedback Generation | Feedback Integration | Iteration Process
  69. Crowd Machine Crowdsourcing Workflow Data Annotations Expert Crowd Part III:

    Iterative Revision Process Author Feedback Writing Feedback Feedback Intro | Feedback Generation | Feedback Integration | Iteration Process 65
  70. 67 Low Quality High Quality early-stage middle-stage late-stage R1 R2

    R3 R4 R5 R6 High Uncertainty Low Uncertainty Iteration Quality Revision Process Expert Crowd Crowd Machine StructFeed Intro | Feedback Generation | Feedback Integration | Iteration Process
  71. 68 Content Analysis for Crowd and Expert Feedback Intro |

    Feedback Generation | Feedback Integration | Iteration Process
  72. 69 Results of Content Analysis (Expert vs. Crowd) - Time: 24~48 hrs vs. 10~30 mins - Quantity: 53.67 vs. 13.17 - Cost (USD): $16 vs. $2 - Feedback Type: more vs. less - Feedback Form (Direct/Indirect): 2.7 vs. 3.2 Intro | Feedback Generation | Feedback Integration | Iteration Process
  73. 70 Type of Feedback (Expert vs. Crowd): charts comparing the distribution of the five feedback types (Content, Organization, Vocabulary, Language Use, Mechanics), overall and per article (A1-A6). Intro | Feedback Generation | Feedback Integration | Iteration Process
  74. 71 Form of Feedback (Crowd vs. Expert): distribution across Comment (General), Comment (Specific), and Editing, overall and per article (A1-A6); roughly 76%/16%/8% in one condition and 73%/23%/4% in the other. Intro | Feedback Generation | Feedback Integration | Iteration Process
  75. 72 Findings - A single expert provided more types of feedback than a single crowd worker; multiple crowd workers generated feedback as diverse in type as an expert's. - Both experts and crowds generated less high-level feedback (i.e., content & organization). - Both used "direct editing" to correct errors. Intro | Feedback Generation | Feedback Integration | Iteration Process
  76. 74 Writing Iteration Experiment - 12~18 self-motivated writers - a within-subjects, counterbalanced design - Conditions: C1 (expert feedback): free-form feedback from an expert; C2 (crowd feedback): free-form feedback from a crowd worker; C3 (structural feedback): structural feedback from StructFeed - Procedure: Writing (v1), Feedback 1, Rewriting (v2), Feedback 2, Rewriting (v3), Feedback 3, Rewriting (v4) Intro | Feedback Generation | Feedback Integration | Iteration Process
  77. 75 Feedback Creative Work Feedback Provider Crowd Machine Crowdsourcing Workflow

    Data Annotations Expert Crowd Writer Revision Workflow High Medium Low Feedback Low Medium High H M M L L L L Feedback Classification Feedback Rate Feedback Part I: Feedback Generation Part II: Feedback Integration Part III: Iterative Revision Process Roadmap of Research
  78. 76 Conclusion What we have done - StructFeed - generates structural feedback for supporting high-level thinking in the revision process and enabling reflective practice (CHI Workshop 2017; HCOMP 2017) - Feedback orchestration - provides a categorical structure to scaffold learning and revision behaviors - enables concurrent and sequential workflows for supporting feedback integration (CHI 2018, in submission) What we plan to do - A series of studies to explore significant factors of feedback that impact revision quality - crowd feedback vs. expert feedback - iterative writing experiment - Collaborative feedback by crowd collaboration - improve the global quality of feedback (HCOMP 2015 WIP) Intro | Feedback Generation | Feedback Integration | Iteration Process