Slide 1

Designing for Complex Creative Task Solving. Yi-Ching (Janet) Huang. PhD Thesis Proposal, 2017.10.13. Advisor: Jane Yung-jen Hsu, PhD.

Slide 2

Intro | Feedback Generation | Feedback Integration | Iteration Process

Slide 3

Complex Creative Tasks

Slide 4

A Creative Task as an Iterative Process (human-centered design from IDEO)

Slide 5

Writing as an iterative process: 1st version (score: 2) → 2nd version (score: 2.5) → 3rd version (score: 2.75)

Slide 6

User Interface Design as an Iterative Process (http://push.m-iti.org)

Slide 7

Complex Creative Process: from uncertainty to a concrete solution

Slide 8

Properties of Creative Tasks
1. Open-ended and ill-defined
2. Answers are not simply true or false; what matters is how good an answer is
3. Quality is usually evaluated by multiple criteria
4. Quality can be improved by iterative refinement

Slide 9

- Agapie, Teevan, and Monroy-Hernández. Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting. HCOMP 2015.
- Sadauskas, Byrne, and Atkinson. Mining Memories: Designing a Platform to Support Social Media Based Writing. CHI 2015.
- Bernstein, Little, Miller, Hartmann, Ackerman, Karger, Crowell, and Panovich. Soylent: A Word Processor with a Crowd Inside. UIST 2010.
- Kim, Cheng, and Bernstein. Ensemble: Exploring Complementary Strengths of Leaders and Crowds in Creative Collaboration. CSCW 2014.
- Hahn, Chang, Kim, and Kittur. The Knowledge Accelerator: Big Picture Thinking in Small Pieces. CHI 2016.
- Nebeling, To, Guo, de Freitas, Teevan, Dow, and Bigham. WearWrite: Crowd-Assisted Writing from Smartwatches. CHI 2016.
- Kittur, Smus, Khamkar, and Kraut. CrowdForge: Crowdsourcing Complex Work. UIST 2011.
- Luther, Hahn, Dow, and Kittur. Crowdlines: Supporting Synthesis of Diverse Information Sources through Crowdsourced Outlines. HCOMP 2015.


Slide 11

Soylent: A Word Processor with a Crowd Inside (Bernstein et al., UIST '10)
- Shortn: text shortening
- Crowdproof: crowdsourced proofreading
- The Human Macro: natural-language crowd scripting
- Find-Fix-Verify workflow

Slide 12

Prior Work (across the stages Ideation, Outlining, Creation, Revision, and Publishing)
- CrowdLines (Luther et al., 2015)
- MicroWriter (Teevan et al., 2016)
- CrowdForge (Kittur et al., 2011)
- Sparkfolio (Sadauskas et al., 2015)
- Ensemble (Kim et al., 2014)
- Soylent (Bernstein et al., 2010)
- Crowdsourcing in the Field (Agapie et al., 2015)
- Knowledge Accelerator (Hahn et al., 2016)
- WearWrite (Nebeling et al., 2016)

Slide 13

“The best writing is rewriting.” ― E.B. White

Slide 14

Quality?

Slide 15

The Benefits of Iteration (chart: quality vs. iteration)


Slide 18

Feedback Facilitates High-Quality Results (diagram): in an iterative process, feedback evaluates the writing and work improves the writing.

Slide 19

Creative Task Solving Framework (diagram): the author creates & modifies, the feedback provider evaluates & critiques, and their collaboration supports learning.

Slide 20

Creative Task Solving Framework (diagram): feedback generation (the feedback provider evaluates & critiques) and feedback integration (the author creates & modifies), supporting learning and collaboration.

Slide 21

Outline
- Introduction
- Part I: Feedback Generation
- Part II: Feedback Integration
- Part III: Iterative Revision Process
- Conclusion

Slide 22

Part I: Feedback Generation
How do we generate effective feedback that supports writers in improving the quality of their writing?


Slide 24

Machines can help correct surface errors.

Slide 25

Criterion (Burstein et al., 2004)
- Holistic scoring
- Template-based feedback: diagnostic feedback on grammar, usage, and mechanics; style and diction; and organization and development
- Screenshots: feedback for the highest score (6) and the lowest score (1)
Jill Burstein, Martin Chodorow, and Claudia Leacock. Automated Essay Evaluation: The Criterion Online Writing Service. AI Magazine, 25(3):27–36, 2004.

Slide 26

Disadvantages of Existing Feedback Systems
- Require large amounts of labeled data
- Support only a limited range of topics
- Use static feedback templates

Slide 27

Where can we get feedback? Experts | Peers | Crowds


Slide 30

Landscape of rewriting tools and rewriting feedback, arranged from local to global scope (word, sentence, structure, idea) and from machine to human source:
- Spelling checker and grammar checker (machine; word and sentence level)
- Organization checker (structure level)
- Free comments (human; idea level)
- ??? (the remaining gap)

Slide 31

StructFeed: Supporting Writing Revision by Crowdsourced Structural Feedback
Writer → writing → crowd + machine (crowdsourcing workflow, data annotations) → feedback → revision
Yi-Ching Huang, Jiunn-Chia Huang, and Jane Yung-jen Hsu. Supporting ESL Writing by Prompting Crowdsourced Structural Feedback. HCOMP 2017.
Yi-Ching Huang, Hao-Chuan Wang, and Jane Yung-jen Hsu. Bridging Learning Gap in Writing Education with a Crowd-Powered System. CHI 2017 Workshop on Designing for Curiosity, Denver, Colorado, USA, 2017.

Slide 32

Rhetorical Patterns of Different Languages (Kaplan, 1966): English vs. Oriental
Robert B. Kaplan. Cultural Thought Patterns in Inter-Cultural Education. Language Learning, 1966.


Slide 37

Annotation example: topic sentence and relevant keyword ($1)

Slide 38

System Overview of StructFeed
- Crowdsourcing workflow: topic sentence annotation and relevant keyword annotation (crowd annotations)
- Unity identification: topic sentence prediction and irrelevant sentence prediction
- Structural feedback based on writing criteria: (1) multiple-topic issue, (2) missing-topic issue, (3) irrelevance issue

Slide 39

Essay Structure and Paragraph Structure
- Essay: introduction, body, and conclusion, each made up of paragraphs
- Paragraph: opening (optional), topic sentence (important!), supporting sentences, concluding sentence, closure (optional)

Slide 40

Unity. Keys:
1. All sub-points center on one central idea (supporting sentences relate to the topic sentence)
2. No irrelevant sentences

Slide 41

Crowdsourcing Workflow for Unity Identification
1. Topic: identify the topic sentence (topic + ideas); topic sentence annotation
2. Filter: filter out paragraphs with no topic sentence (weight >= 2)
3. Relevance: highlight the relevant words between two sentences; relevant keyword annotation
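The filter step above is essentially vote aggregation over worker annotations. A minimal Python sketch (names are illustrative, not taken from StructFeed's actual implementation), where each annotation is the index of the sentence a worker picked as the topic sentence, and the `weight >= 2` threshold mirrors the filter in the workflow:

```python
from collections import Counter

def aggregate_topic_votes(annotations, min_weight=2):
    """Aggregate per-worker topic-sentence picks; a sentence counts as
    a topic-sentence candidate only if its vote weight meets the
    threshold (weight >= 2, as in the filter step)."""
    votes = Counter(annotations)  # sentence index -> vote weight
    return [idx for idx, w in votes.items() if w >= min_weight]

# Three workers picked sentence 0, one picked sentence 2:
print(aggregate_topic_votes([0, 0, 2, 0]))  # -> [0]
# No sentence reaches weight 2, so the paragraph would be
# flagged as having no topic sentence and filtered out:
print(aggregate_topic_votes([1, 2, 3]))  # -> []
```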


Slide 43

Topic Task: identify the topic sentence
Quality control:
- native speakers as workers
- a brief explanation of the concept
- a worked example
- annotate a sentence by clicking (in the working area)


Slide 45

Relevance Task: annotate a word by clicking, with a worked example.

Slide 46

Experiment I: Crowd vs. Machine (unity identification: topic sentence identification and irrelevant sentence identification)
- 15 essays written by non-native writers
- Ground-truth data generated by 2 experts (topic sentences and relevant keywords)
- 106 crowd workers; total cost: $26
- 336 topic sentence annotations and 1,923 relevant word annotations

Slide 47

Machine-Based Solutions for Unity Identification
- TF-IDF-based: the topic sentence is the sentence containing the largest number of key ideas
- Similarity-based (ASS, with WordNet or word2vec): the topic sentence is the sentence most similar to the other sentences in the paragraph
- Rule-based: choose the first sentence as the topic sentence
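The three baselines can be sketched as follows. This is an illustrative reconstruction: simple word overlap stands in for the WordNet and word2vec similarities used in the experiments, and all function names are hypothetical:

```python
import math
from collections import Counter

def tokenize(sentence):
    return [w.lower().strip(".,;!?") for w in sentence.split()]

def tfidf_topic_sentence(sentences):
    """TF-IDF-based: pick the sentence carrying the most key ideas,
    scored by the summed tf-idf weight of its words."""
    docs = [tokenize(s) for s in sentences]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    def score(doc):
        tf = Counter(doc)
        return sum(c * math.log(n / df[w]) for w, c in tf.items())
    return max(range(n), key=lambda i: score(docs[i]))

def ass_topic_sentence(sentences):
    """Similarity-based (ASS): pick the sentence most similar to the
    other sentences in the paragraph (Jaccard word overlap here; the
    thesis uses WordNet / word2vec similarity instead)."""
    sets = [set(tokenize(s)) for s in sentences]
    def avg_sim(i):
        return sum(len(sets[i] & sets[j]) / max(len(sets[i] | sets[j]), 1)
                   for j in range(len(sets)) if j != i)
    return max(range(len(sets)), key=avg_sim)

def rule_topic_sentence(sentences):
    """Rule-based: the first sentence is the topic sentence."""
    return 0
```

Each function returns the index of the predicted topic sentence within the paragraph.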

Slide 48

Results of Topic Sentence Prediction

Method | Precision | Recall | F1-score
Crowd-based | 0.607 | 0.720 | 0.659
Rule-based | 0.658 | 0.587 | 0.620
ASS (WordNet) | 0.403 | 0.360 | 0.380
ASS (word2vec) | 0.343 | 0.307 | 0.324
TF-IDF (WordNet / word2vec) | roughly 0.24–0.28 across all three metrics (read from the bar chart)

Slide 49

Results of Irrelevant Sentence Prediction

Method | Precision | Recall | F1-score
Crowd-based | 0.206 | 0.326 | 0.252
Path Similarity (WordNet) | 0.133 | 0.083 | 0.103
Cosine Similarity (word2vec) | 0.118 | 0.100 | 0.108
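Both similarity baselines score each sentence against the topic sentence and flag low-similarity sentences as irrelevant. A minimal sketch, with bag-of-words cosine similarity standing in for the WordNet path similarity and word2vec embeddings reported above; the threshold value is illustrative:

```python
import math
from collections import Counter

def cosine(tokens_a, tokens_b):
    """Cosine similarity between two bag-of-words vectors."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[w] * cb[w] for w in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def irrelevant_sentences(topic_sentence, sentences, threshold=0.1):
    """Flag sentences whose similarity to the topic sentence falls
    below the threshold (the irrelevance issue in StructFeed)."""
    topic = topic_sentence.lower().split()
    return [s for s in sentences
            if cosine(topic, s.lower().split()) < threshold]

print(irrelevant_sentences(
    "dogs are loyal pets",
    ["loyal dogs protect pets", "quantum computers use qubits"]))
# -> ['quantum computers use qubits']
```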

Slide 50

Structural Feedback
- Unity identification (crowd annotations): topic sentence prediction and irrelevant sentence prediction
- Writing criteria: (1) multiple-topic issue, (2) missing-topic issue, (3) irrelevance issue

Slide 51

Effective Feedback (Sadler, 1989)
1. Obtain a concept of the standard or goal
2. Compare the actual level of performance with the standard
3. Engage in action that closes the gap
D. R. Sadler. Formative Assessment and the Design of Instructional Systems. Instructional Science, 18(2):119–144, 1989.

Slide 52

Structural Feedback
- Rhetorical visualization: topic sentence (crowd annotations), relevant keywords (crowd annotations), irrelevant sentence (machine prediction)
- Feedback summary: type of issue and suggested action

Slide 53

StructFeed (https://writingfeedback.herokuapp.com/topic_vis?workflow_id=78)
- Detailed annotations: topic sentence, irrelevant sentence, relevant keywords
- Feedback summary: type of issue and suggested action
- Writing tip: a quick reminder of basic writing knowledge

Slide 54

Annotation weighting examples: topic weight 1 / relevance weight 1 vs. topic weight 5 / relevance weight 3

Slide 55

Field Experiment on ESL Writers
- 18 self-motivated ESL learners (8 female, 10 male), aged 19–34
- A between-subjects study
- Procedure: write the original version R, receive feedback, then rewrite into the revised version R'
Conditions:
- C1 (expert feedback): free-form feedback from an expert
- C2 (crowd feedback): free-form feedback from a crowd worker
- C3 (structural feedback): structural feedback from StructFeed
Measures: time, quantity, cost; quality improvement (R' − R); perceived helpfulness

Slide 56

Field Experiment Results
- Expert feedback: 1–2 days; a single expert; $16; 55.44 suggestions; diff-rating 0.29 (.43); 1 equal rating; 1 decreased rating
- Crowd feedback: 10–30 mins; a single worker; $2; 8.11 suggestions; diff-rating 0.38 (.44); 1 equal rating; 1 decreased rating
- StructFeed: 1–5 hrs; 15–25 workers; $1–1.7; diff-rating 0.54 (.25); all participants received increased ratings
StructFeed increased the quality of revision.

Slide 57

Discussion
- The crowd helps develop better rules for the machine: the data-driven template is a "4-paragraph essay template"
- StructFeed not only identifies writing issues but also promotes reflection
- Why did expert feedback perform worse than crowd feedback?
  - A knowledge gap or misunderstanding may exist between experts and novice writers
  - Writers tend to fix local issues, such as grammatical errors, rather than global issues, such as improving unity or coherence

Slide 58

Conclusion
(1) We propose a crowdsourcing workflow that guides crowd workers to annotate topic sentences and relevant keywords for identifying paragraph unity in an article.
(2) We demonstrate that our crowd-based method outperforms naive machine-based methods on ill-structured ESL writing with no labeled data.
(3) We design and implement a crowd-powered system, StructFeed, that generates structural feedback consisting of writing hints and crowd annotations.
(4) A field experiment showed that people who received structural feedback from StructFeed performed better than people who received free-form feedback from a single expert or a single crowd worker.

Slide 59

Part II: Feedback Integration
How do we support writers in integrating feedback into revisions and facilitate high-quality outcomes?

Slide 60

Good feedback does NOT always facilitate good results!

Slide 61

Too many suggestions may cause problems:
1. Information overload
2. Cost of task switching
3. People focus on easier problems

Slide 62

Formative Study
- 6 participants (1 female), aged 18–23, ranging from inexperienced to experienced writers
- One common editing process: (1) browse all comments, (2) group similar comments, (3) deal with comments in a particular sequence
- Four editing strategies: (1) beginning-to-end editing, (2) high-to-low editing, (3) low-to-high editing, (4) specificity-first editing


Slide 66

Facilitating Better Revision Behavior and Task Performance
1. Get feedback from the feedback provider
2. Classify feedback into high / medium / low levels (feedback classification)
3. Revise the article in a revision workflow (feedback orchestration)

Slide 67

Feedback Classification (Jacobs et al., 1981)

Level | Feedback Type | Definition | Examples
High | Content | The substance of the writing, including the topic sentence expressing the main argument and the supporting ideas. | unity of argument, supporting ideas, relevant examples, addressing the question
High | Organization | The logical organization of the content. | coherence of the content, relations between sentences, logical sequencing
Medium | Vocabulary | The selection of words suitable to the content. | word choice
Medium | Language Use | The use of correct grammatical forms and syntactic patterns. | fixing grammatical errors, paraphrasing, shortening
Low | Mechanics | The technical conventions of writing. | spelling, punctuation, capitalization, format
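The rubric maps directly onto the High/Medium/Low levels that drive the revision workflow. A minimal sketch (the data shapes are illustrative; the real feedback records in the system are richer than a (text, category) pair):

```python
# Rubric categories (Jacobs et al., 1981) mapped to priority levels.
RUBRIC_LEVEL = {
    "Content": "High",
    "Organization": "High",
    "Vocabulary": "Medium",
    "Language Use": "Medium",
    "Mechanics": "Low",
}

def classify_feedback(items):
    """Group (text, category) feedback items into the High/Medium/Low
    levels used by the revision workflow."""
    levels = {"High": [], "Medium": [], "Low": []}
    for text, category in items:
        levels[RUBRIC_LEVEL[category]].append(text)
    return levels
```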

Slide 68

Revision Workflow
1. Concurrent workflow
2. Sequential workflow

Slide 69

ReviseO (system screenshot)

Slide 70

Feedback levels and editing directions (high-to-low editing vs. low-to-high editing):
- High level: content & organization feedback
- Middle level: language feedback
- Low level: mechanics feedback

Slide 71

Experiment Design
- A within-subjects, counterbalanced experiment design
- 12 self-motivated non-native writers; each participant performed 3 rewriting tasks with different topics
- 3 experimental conditions: (1) all feedback shown together (ALL), (2) feedback shown sequentially from high to low (HML), (3) feedback shown sequentially from low to high (LMH)
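The three presentation conditions can be sketched as a small ordering function over classified feedback; a hypothetical sketch, not the actual ReviseO implementation:

```python
LEVEL_RANK = {"High": 0, "Medium": 1, "Low": 2}

def order_feedback(items, condition):
    """Arrange (text, level) feedback items into presentation stages.
    ALL -> one concurrent batch; HML -> sequential, high to low;
    LMH -> sequential, low to high."""
    if condition == "ALL":
        return [list(items)]  # everything shown together in one stage
    present = sorted({lvl for _, lvl in items},
                     key=lambda l: LEVEL_RANK[l],
                     reverse=(condition == "LMH"))
    return [[it for it in items if it[1] == lvl] for lvl in present]
```

For example, under HML the writer would see all High-level comments first, then Medium, then Low; under LMH the order is reversed.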

Slide 72

The Benefits of Feedback Classification
1. Filters information and identifies weaknesses in the writing
2. Increases awareness and reduces learning anxiety about uncertain or unknown matters
3. Promotes learning behaviors: it provides new knowledge about how to evaluate writing, helps memorize and organize writing issues, and promotes focused learning
“This categorized feedback helps me identify my common mistakes easily! When I see the same type of writing issues appearing frequently, I understand that I need to pay more attention to this type of problem in my next writing.” (P2)
“I learn new knowledge about writing from categorized feedback, and it helps me learn how to evaluate the quality of the writing by those good or bad examples. I will use the same way to review others’ writings in the next time!” (P7)

Slide 73

Sequential Process vs. Concurrent Process
- Most participants preferred the sequential process to the concurrent process (sequential: clear, smooth editing; concurrent: everything at the same time, efficient editing)
- Preference for an editing process is related to past experience: people with writing experience preferred the high-to-low process, consistent with the knowledge they obtained from writing courses; others preferred the low-to-high process
- The concurrent process reduces the influence of feedback classification: people tend to follow their original editing behaviors, take the easy way out, and ignore high-level issues

Slide 74

A Sequential Process Guides Logical Thinking
- Top-down thinking: the high-to-low process lets people consider the big picture first and dive into the details later
- Bottom-up thinking: the low-to-high process guides people to build meaning and momentum for solving high-level issues such as content or organization problems
“Either low-to-high or high-to-low process helps me develop a thinking order. It’s convenient for me to edit my writing.” (P10)
“I start with fixing language-level issues over the whole article and re-read my writing sentence by sentence. Then, I am getting more and more enjoyable because I become familiar with the content and easily recall the situation in which I was writing and understand why I generated those words. It helps me solve the structure issues more easily.” (P4)

Slide 75

Editing Conflicts in a Sequential Process
- Physical conflicts: separating related comments into different stages leads to misunderstanding; low-level issues that were already solved at the high-level stage but still appear also cause confusion
- Mental conflicts: uncertainty impedes editing at the high-level stage; too many micro-contributions lead to loss aversion
“In the second stage, I rejected a comment suggesting an incorrect edit. However, I realized that the suggestion is correct at the third stage when I recall the previous comment I obtained.” (P11)
“In the last stage, the feedback suggests that adding one connecting sentence in a specific paragraph increase the unity of the paragraph. I feel that all my previous efforts would be in vain if I fixed this issue.” (P12)

Slide 76

Discussion & Design Implications (I)
- Feedback classification guides learning behaviors: both writers and feedback providers need a category structure to support the feedback generation and integration process
- A low-to-high sequence facilitates difficult problem solving, but applying this strategy should carefully consider the amount of low-level feedback
- Many micro-contributions may decrease work performance, for two possible reasons: (1) a large number of repetitive, monotonous tasks causes boredom; (2) many micro-contributions lead to a feeling of loss aversion
- Incorporate automated methods to reduce duplicated tasks and distribute low-level feedback around content containing high-level writing issues

Slide 77

Discussion & Design Implications (II)
- Resolve editing conflicts and mental obstacles: group related comments and present them at the same stage; increase the transparency of the whole process by providing an overview
- Flexible writing support: offer varying support based on writing ability, current writing stage (drafting, early revision, or editing), or preference

Slide 78

Conclusion
(1) A formative study identifying four revision strategies that ESL writers apply to their revisions.
(2) The concept of feedback orchestration, which uses a category structure to guide an effective revision process by orchestrating feedback of different types.
(3) The system ReviseO, which supports three feedback presentation strategies, helping writers resolve issues in a concurrent or sequential process using expert feedback structured by a standard rubric.
(4) Our system helps individuals identify weaknesses, facilitates self-assessment, and promotes reflective practice. The results also demonstrate that a sequential process guides users toward logical thinking and that a low-to-high sequence helps complex problem solving.

Slide 79

Part III: Iterative Revision Process
Author → writing → feedback → revision, with feedback drawn from an expert, a crowd, or crowd + machine (crowdsourcing workflow, data annotations)

Slide 80

Feedback sources: expert, crowd, and crowd + machine (StructFeed)

Slide 81

Revision Process (diagram): across revisions R1–R6, quality rises from low to high while uncertainty falls from high to low, through early, middle, and late stages; feedback sources (expert, crowd, StructFeed) are positioned along the process.

Slide 82

Content Analysis of Crowd and Expert Feedback

Slide 83

Results of Content Analysis

 | Expert | Crowd
Time | 24–48 hrs | 10–30 mins
Quantity | 53.67 | 13.17
Cost (USD) | $16 | $2
Feedback type | more | less
Feedback form (direct/indirect) | 2.7 | 3.2

Slide 84

Type of Feedback (pie charts: overall distribution plus per-article counts A1–A6; categories: Content, Organization, Vocabulary, Language Use, Mechanics)
- Crowd overall: 17% / 59% / 8% / 16%
- Expert overall: 12% / 51% / 26% / 5% / 6%

Slide 85

Form of Feedback (pie charts: overall distribution plus per-article counts A1–A6; forms: general comment, specific comment, editing)
- Crowd overall: 76% / 16% / 8%
- Expert overall: 73% / 23% / 4%

Slide 86

Findings
- A single expert provided more types of feedback than a single crowd worker; multiple crowd workers together generated as diverse a set of feedback types as the expert did
- Both experts and crowds generated less high-level feedback (i.e., content & organization)
- Both used direct editing to correct errors

Slide 87

Cost comparison: expert feedback ($10.3–25), crowd feedback ($1.5), StructFeed ($1–1.5)

Slide 88

Writing Iteration Experiment
- 12–18 self-motivated writers
- A within-subjects, counterbalanced design
- Procedure: write v1 → feedback 1 → rewrite v2 → feedback 2 → rewrite v3 → feedback 3 → rewrite v4
Conditions:
- C1 (expert feedback): free-form feedback from an expert
- C2 (crowd feedback): free-form feedback from a crowd worker
- C3 (structural feedback): structural feedback from StructFeed

Slide 89

Roadmap of Research
- Part I: Feedback Generation: crowd + machine generate feedback on creative work (crowdsourcing workflow, data annotations; expert and crowd as feedback providers)
- Part II: Feedback Integration: feedback classification (high / medium / low) and a revision workflow for the writer
- Part III: Iterative Revision Process: feedback and revision across repeated iterations

Slide 90

Conclusion
What we have done:
- StructFeed: generates structural feedback to support high-level thinking in the revision process and enable reflective practice (CHI 2017 Workshop; HCOMP 2017)
- Feedback orchestration: provides a categorical structure to scaffold learning and revision behaviors, and enables concurrent and sequential workflows to support feedback integration (CHI 2018, in submission)
What we plan to do:
- A series of studies exploring the significant factors of feedback that impact revision quality: crowd feedback vs. expert feedback; an iterative writing experiment
- Collaborative feedback through crowd collaboration, to improve the global quality of feedback (HCOMP 2015 WIP)

Slide 91

Thank You! Q&A