Linguamatics, a leader in enterprise text mining, working in collaboration with the NPCU, has announced a new analysis of the instant reactions on Twitter to party leaders during the second UK televised election debate. For those who followed the instant polls after last week's debate, these results will be of great interest. Preliminary results are published of the linguistic analysis of 169,000 tweets, sent by 38,986 twitterers between 8.00pm and 9.30pm on the night of the debate.
Updated results are also presented for the analysis of 211,000 tweets sent by 47,420 twitterers between 8.30pm and 10pm on the night of the first UK election debate, April 15, 2010.
The overall tweet analysis (Figure 1) shows that for the second debate 43% of twitterers who expressed an opinion said that Nick Clegg performed best, down from 57% in the first debate, followed by Gordon Brown (35%, up from 25%), and then David Cameron (22%, up from 18%).
For the second debate, the twitterers indicated that their top three issues were Europe, immigration, and religion, including the discussion of the Pope's visit. The second debate covered a wider variety of topics than the first, so the tweets are spread more widely. Cameron won narrowly on Europe, Clegg on immigration (as in the first debate), and Brown on religion (Figure 2). However, the combined analysis of winners per topic shows that Clegg has maintained his lead (Figure 3), as he does for the debate as a whole (Figure 1).
Linguamatics’ linguistic analysis of the debate transcript itself (Figure 4) shows that, in the second debate, Europe dominated the leaders’ discussion. The twitterers talked more about immigration than the leaders relative to the other topics (Figure 2).
Linguamatics’ I2E text mining software was used to find and summarize tweets that express the same meaning, however they are worded. I2E identifies the range of vocabulary used in tweets and applies linguistic analysis to collect and summarize the different ways opinion is expressed.
Description of the figures in the press release
Figure 1 shows the number of tweets that expressed a positive sentiment towards each of the party leaders.
The analysis identified tweets saying that a particular leader was doing well, made a good point, was liked, and so on. Linguistic filtering removed examples about expectations, e.g. “I hope the leader will do well”, questions, such as “anyone think the leader is doing well?”, and negations, such as “the leader did not do well” or “the leader made no sense”.
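The kind of filtering described above could be sketched as follows. This is a minimal, hypothetical illustration: the phrase lists, patterns, and the `is_positive_opinion` function are invented for this sketch, and I2E's actual linguistic rules are far richer than simple regular expressions.

```python
import re

# Illustrative pattern lists only; not I2E's actual vocabularies.
POSITIVE = re.compile(r"\b(doing well|do well|good point|winning)\b", re.I)
NEGATION = re.compile(r"\b(not|no|never|didn't|doesn't)\b", re.I)
QUESTION = re.compile(r"\?\s*$|^\s*(anyone|does|did|is)\b", re.I)
EXPECTATION = re.compile(r"\b(hope|hopefully|expect)\b", re.I)

def is_positive_opinion(tweet: str) -> bool:
    """Keep only tweets that state a positive opinion outright."""
    if not POSITIVE.search(tweet):
        return False
    # Filter out expectations, questions, and negations, as described above.
    if NEGATION.search(tweet) or QUESTION.search(tweet) or EXPECTATION.search(tweet):
        return False
    return True

print(is_positive_opinion("Clegg is doing well tonight"))        # True
print(is_positive_opinion("I hope Clegg will do well"))          # False (expectation)
print(is_positive_opinion("anyone think Clegg is doing well?"))  # False (question)
print(is_positive_opinion("Brown did not do well"))              # False (negation)
```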
Figure 2 shows the winner per topic, based on the number of relevant positive tweets.
The analysis identified a list of topics by identifying words or phrases which described the discussion subject; for example, Trident, nuclear weapons, armed forces, military, and Eurofighter are assigned to defence. The tweets were then analyzed to find positive comments about each leader in relation to a specific topic.
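The topic assignment step could be sketched as a mapping from synonym groups to topic labels. The keyword lists below are illustrative assumptions (only the defence examples come from the text), and simple substring matching is a stand-in for I2E's linguistic analysis.

```python
# Illustrative keyword-to-topic mapping; only the defence list comes
# from the press release, the others are invented for this sketch.
TOPIC_KEYWORDS = {
    "defence": ["trident", "nuclear weapons", "armed forces",
                "military", "eurofighter"],
    "europe": ["europe", "brussels"],
    "immigration": ["immigration", "immigrants", "border controls"],
}

def topics_for(tweet: str) -> set:
    """Return every topic whose vocabulary appears in the tweet."""
    text = tweet.lower()
    return {topic for topic, words in TOPIC_KEYWORDS.items()
            if any(w in text for w in words)}

print(topics_for("Cameron made a good point on Trident and the armed forces"))
# → {'defence'}
```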
Figure 3 shows the percentage of specific topics won by each leader.
This is an aggregation of all positive tweets about each leader with specific reference to any one of the topics. The same data is used for both Figure 2 and Figure 3.
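How the same per-topic counts feed both figures can be sketched as below. The counts are invented placeholders (the release does not publish raw numbers); only the two-step computation, a per-topic winner (Figure 2) and the percentage of topics won (Figure 3), is the point.

```python
from collections import Counter

# Invented per-topic positive-tweet counts per leader; structure only.
positive_counts = {
    "europe":      {"Cameron": 310, "Clegg": 290, "Brown": 240},
    "immigration": {"Cameron": 180, "Clegg": 350, "Brown": 150},
    "religion":    {"Cameron": 90,  "Clegg": 110, "Brown": 160},
}

# Figure 2: the winner of each topic is the leader with most positive tweets.
winners = {topic: max(counts, key=counts.get)
           for topic, counts in positive_counts.items()}

# Figure 3: percentage of topics won by each leader, from the same data.
tally = Counter(winners.values())
percentages = {leader: 100 * n / len(positive_counts)
               for leader, n in tally.items()}

print(winners)  # {'europe': 'Cameron', 'immigration': 'Clegg', 'religion': 'Brown'}
print(percentages)
```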
Figure 4 shows the number of times a particular topic was mentioned by each leader.
This analysis is based on the debate transcript, not the tweets. As before, a topic is not just a mention of a single word, but brings together words and phrases with similar meaning. The figure shows how important a particular topic was to each leader, based on how many times they mentioned it during the debate.
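A transcript tally of this kind could be sketched as below. The three-turn transcript and the phrase lists are invented for illustration; the real analysis works over the full debate transcript with much richer synonym groups.

```python
from collections import defaultdict

# A toy transcript of (speaker, utterance) turns; wording is invented.
transcript = [
    ("Cameron", "We must reform Europe and control immigration."),
    ("Clegg", "On immigration, we need a fair points-based system."),
    ("Brown", "Europe matters for jobs; our armed forces deserve support."),
]

# Synonym groups, as described above: a topic is a set of related phrases.
TOPIC_PHRASES = {
    "europe": ["europe", "brussels"],
    "immigration": ["immigration", "immigrants"],
    "defence": ["armed forces", "military", "trident"],
}

# Figure 4-style tally: mentions of each topic, per leader.
mentions = defaultdict(lambda: defaultdict(int))
for speaker, utterance in transcript:
    text = utterance.lower()
    for topic, phrases in TOPIC_PHRASES.items():
        mentions[speaker][topic] += sum(text.count(p) for p in phrases)

print({speaker: dict(topics) for speaker, topics in mentions.items()})
```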