...tweets as ironic or not.
- Feature scoring: score every tweet in the dataset with the feature scores.
- Training: machine learning using the labelled tweets as the training set and the rest of the dataset as the test set (see the sketch below).

Data workflow
- Counting: estimating the irony percentage per party.
- Correlating: the trend between the ironic tweets each party received before the elections and its election results.
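A minimal sketch of these steps, assuming a pandas DataFrame tweets with hypothetical columns 'party', numeric feature-score columns, and 'label' (1 = ironic, 0 = not, filled only for the manually labelled subset); the classifier choice is also an assumption, not necessarily the one used in the study:

    # Workflow sketch: train on the labelled tweets, predict on the rest, then count per party.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    def irony_percentage_per_party(tweets: pd.DataFrame, feature_cols: list) -> pd.Series:
        labelled = tweets[tweets['label'].notna()]      # manually labelled training set
        rest = tweets[tweets['label'].isna()].copy()    # the remaining (test) tweets

        # Training: fit a classifier on the labelled tweets
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(labelled[feature_cols], labelled['label'].astype(int))

        # Predict an irony label for every remaining, feature-scored tweet
        rest['label'] = clf.predict(rest[feature_cols])

        # Counting: irony percentage per party over the whole dataset
        all_tweets = pd.concat([labelled, rest])
        return all_tweets.groupby('party')['label'].mean() * 100

Any scikit-learn classifier could stand in here; the point is only that the labelled subset trains the model and the predictions over the remaining tweets feed the per-party counting step.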
...irony by examining the following features: signatures, unexpectedness, style, emotional scenarios, frequent trigrams and pleasantness (Reyes et al., 2011 & 2013).
- The hashtag analysis method produces noisy results with low accuracy (Gonzalez-Ibanez et al., 2011).
- Same dataset; positive vs. negative distinction; alignment between actual political results and web sentiment (Kermanidis & Maragoudakis, 2013).
- High accuracy by combining Amazon reviews, tweets and semi-supervised learning (Gonzalez-Ibanez et al., 2011).
...occurrences
- Spoken: spoken style applied in writing
- Rarity: the rarest words
- Meanings: number of WordNet synsets as a measure of ambiguity
- Lexical: punctuation; prosodic repeated letters; metaphors
- Emoticons: smiley faces, etc. (see the sketch below)
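A rough sketch of a few of the surface features above (punctuation, repeated letters, emoticons); the regular expressions are illustrative assumptions, not the exact definitions used in the study:

    # Illustrative surface features for a single tweet
    import re

    def surface_features(tweet: str) -> dict:
        return {
            'punctuation': sum(ch in '!?.,;:' for ch in tweet),          # Lexical: punctuation marks
            'repeated_letters': len(re.findall(r'(\w)\1{2,}', tweet)),   # Prosodic: "sooooo", "yeahhh"
            'emoticons': len(re.findall(r'[:;=][\-^]?[)(DPp]', tweet)),  # Emoticons: smiley faces etc.
        }

    print(surface_features("Sooooo great, another tax hike :)"))
    # expected: {'punctuation': 2, 'repeated_letters': 1, 'emoticons': 1}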
...('vacuum.n.03'), Synset('apartment.n.01'), Synset('space.n.01'), Synset('area.n.02'), Synset('space-time.n.01'), Synset('proportion.n.02'), Synset('size.n.01'), Synset('acreage.n.01')]

"We noticed that ironic tweets present words with more meanings (more WordNet synsets)" (Barbieri et al.)
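The list above looks like WordNet output; a minimal sketch of the "Meanings" feature, counting synsets per word with NLTK (requires nltk.download('wordnet'); averaging over a tweet's words is an assumption about how the feature is aggregated):

    # Counting WordNet synsets per word as an ambiguity ("Meanings") measure
    from nltk.corpus import wordnet as wn

    def mean_synset_count(words):
        counts = [len(wn.synsets(w)) for w in words]
        return sum(counts) / len(counts) if counts else 0.0

    print(wn.synsets('space'))                                  # a synset list like the one above
    print(mean_synset_count(['space', 'vacuum', 'apartment']))  # average ambiguity of the words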
The fluctuation percentage shows a trend at the edges: the 'losing' parties, ND and PASOK, received the most ironic tweets, as did the 'winning' parties, SYRIZA and XA.
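One way to quantify such a trend "at the edges" would be to correlate each party's irony percentage with the magnitude of its vote-share fluctuation; the numbers below are placeholders, not the study's data, and the Pearson test is an illustrative choice, not necessarily the authors' method:

    # Placeholder irony percentages and vote-share changes (percentage points)
    from scipy.stats import pearsonr

    irony_pct   = {'ND': 40.0, 'PASOK': 38.0, 'SYRIZA': 36.0, 'XA': 34.0}
    fluctuation = {'ND': -10.0, 'PASOK': -20.0, 'SYRIZA': +15.0, 'XA': +5.0}

    parties = list(irony_pct)
    r, p = pearsonr([irony_pct[name] for name in parties],
                    [abs(fluctuation[name]) for name in parties])
    print(f"Pearson r between irony % and |vote-share change|: {r:.2f} (p = {p:.2f})")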