This afternoon I’ll be giving a talk at the UvA text analysis symposium series.
The surprising topic of my talk is “Text Analysis in R”. I won’t be using any slides, but the R code that I will be using is available here. For a more graphically fancy presentation, see my previous post.
I’ll be presenting tomorrow (Thu 24 March) at the Text Visualization workshop organized by LSE and Imperial. I’m curious what kind of visualizations people will come up with for the hackathon challenge!
Visualize your corpus with R: Why word clouds aren’t always stupid
Word clouds usually convey only relative word frequency, but by using the other dimensions (colour, x, y) we can convey a lot more information. Using the corpustools and semnet packages we can make word clouds that are both pretty and informative.
Slides [html] [source code]
Teaser: Words in and between 3 topics in the states of the union 2000-2016
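The principle is easy to sketch in base R (this is only an illustration of mapping extra variables onto the plot, not the corpustools/semnet implementation; the words, frequencies, and topic assignments below are made up):

```r
# Toy "word cloud": size encodes frequency, colour and x-position encode topic.
words <- c("economy", "jobs", "taxes", "war", "troops", "security")
freq  <- c(50, 40, 30, 45, 25, 35)
topic <- c(1, 1, 1, 2, 2, 2)
cols  <- c("darkred", "steelblue")[topic]

plot(0, type = "n", xlim = c(0.5, 2.5), ylim = c(0, 4),
     axes = FALSE, xlab = "", ylab = "")
set.seed(1)
text(x = topic + runif(6, -0.3, 0.3),  # x-position encodes topic
     y = runif(6, 0.5, 3.5),           # y is random here, but could encode e.g. time
     labels = words,
     cex = freq / 25,                  # size encodes frequency
     col = cols)                       # colour encodes topic
```

The same idea scales up: any word-level variable (topic loading, sentiment, over-representation in a subcorpus) can be mapped onto colour or position instead of being thrown away.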
Today from 11:00 – 13:00 I will be teaching the first part of the informal R course for the CW master in 2A-59. Everyone is welcome!
Links: [slides] | [handouts] | [data] | [income_topdecile]
Possible content of this and following sessions:
- Getting started: Your data in R
- Merging and transforming data
- Classical statistics and visualization
- Advanced statistics and/or programming
- Analysing texts and networks
1) Bring a laptop
4) Think about what you expect to use R for, and which techniques you would like to learn.
The media have a watchdog function, but that always raises the question of who should watch the media. In this case, our claim about ANP influence at (among others) de Volkskrant was checked by a Volkskrant editor, who concluded (without any methodological justification) that things are not so bad at de Volkskrant, which was then published in de Volkskrant. When we wanted to respond with a letter to the editor, it was refused without any reason given. Instead, the Volkskrant ombudswoman looked into the matter once more, concluding that de Volkskrant is actually quite objective after all, because a point of criticism (irrelevant to our research) was also discussed. That, again, was published in de Volkskrant. To us, a very clear picture emerges: “we at de Volkskrant recommend de Volkskrant”, and critical voices are not welcome. Which leaves us with the question of who should be checking the fact-checkers.
We just got the exciting news that our article on Dutch media coverage of youth crime was accepted for publication by Journalism!
Media and Juvenile Delinquency: A study into the relationship between journalists, politics and public
Nel Ruigrok, Wouter van Atteveldt, Sarah Gagestein, Carina Jacobi
Abstract: Between 2007 and 2011, the number of registered juvenile suspects declined by 44% but the Dutch public did not feel any safer. In this research we study media coverage of youth crime and interview journalists and their sources, in order to investigate the relationship between journalists, their sources and the possible effects on the public with respect to fear of crime. We find an overrepresentation of youth crime in news coverage, especially in the popular press, and a stronger episodic focus over time. All media rely increasingly on powerful sources that favour repressive framing, and this holds especially for the elite press. We conclude that news coverage in all media groups, although in different ways, does contribute to the fear of crime in society and the idea that repressive measures are needed. The fact that this fear of crime is also caused by news coverage is acknowledged, but neither journalists nor politicians are able or willing to change this.
I’m happy to report that the paper I co-authored with Kasper Welbers and others has been accepted for ICA:
A gatekeeper among gatekeepers: The impact of a single news agency on political news in print and online newspapers in the Netherlands.
Kasper Welbers, Wouter van Atteveldt, Jan Kleinnijenhuis, Nel Ruigrok
Abstract: This paper investigates the influence of news agency ANP on the coverage and diversity of political news in Dutch national newspapers, using computational text analysis. We analyzed the influence on print newspapers across three years (1996, 2008 and 2013) and compared influence on print and online newspapers in 2013. Results indicate that the influence of ANP on print newspapers only increased slightly. Online newspapers, however, depend heavily on ANP and are highly similar as a result of it. We draw conclusions pertaining to the gatekeeping role of news agencies in the digital age in general, and in the context of the Netherlands in particular. Additionally, we demonstrate that techniques from the field of information retrieval can be used to perform these analyses on a large scale. Our scripts and instructions are provided online to stimulate the use of these techniques in communication studies.
Links: [download presentation][R source code]
After almost 10 years I’m giving a talk at CLIN (Computational Linguistics in the Netherlands) again. I completely rewrote the clause code from Python to R, which is quite exciting as it will make it much easier to tweak and add rules “client-side”; see github.com/vanatteveldt/rsyntax. I also did a new validation, comparing the results against a new gold standard of manually coded aggressive actions in the 2009 Gaza war, as well as against a “word order co-occurrence” baseline that assumes that the leftmost actor is the agent (subject). Results show convincingly that word order is indeed very fragile in conflict situations:
I also re-evaluated the source extraction, comparing against a baseline that uses the same speech verbs and assumes that an actor left of the speech verb is the source, and everything right of the speech verb the quote. Evaluation shows that recall is the same for both methods (both miss the more ‘subtle’ ways of expressing quotes), but precision is extremely good for the syntactic method while being mediocre for the baseline:
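The two word-order baselines described above are simple enough to state in a few lines of R. A minimal sketch (the toy sentence, actor list, and speech-verb list are made up; the real evaluation runs over full parsed corpora):

```r
# Word-order baselines on a toy tokenised sentence.
tokens <- c("Peres", "said", "that", "Israel", "attacked", "Hamas")
actors <- c("Peres", "Israel", "Hamas")      # known actor list (made up)
speech_verbs <- c("said", "told", "stated")  # known speech verbs (made up)

# Baseline 1: the leftmost actor in the clause is taken to be the agent.
agent_baseline <- function(tokens, actors) {
  tokens[min(which(tokens %in% actors))]
}

# Baseline 2: an actor left of a speech verb is the source;
# everything right of the speech verb is the quote.
quote_baseline <- function(tokens, actors, speech_verbs) {
  v <- which(tokens %in% speech_verbs)[1]
  if (is.na(v)) return(NULL)
  left <- tokens[seq_len(v - 1)]
  list(source = left[left %in% actors],
       quote  = tokens[(v + 1):length(tokens)])
}

agent_baseline(tokens, actors)                # "Peres"
quote_baseline(tokens, actors, speech_verbs)  # source "Peres", quote "that Israel attacked Hamas"
```

On this simple sentence both baselines happen to be right; the evaluation shows how quickly they break down once word order gets more complex, as it does in conflict reporting.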
In my presentation I will present these results as well as a number of substantive findings on the differing biases of Chinese and American newspaper coverage of the 2009 Gaza war. Results show that Chinese media quote Hamas much more frequently and also portray Hamas less as an aggressor.
More visually, the following shows side by side the actions of Israel according to the US and Chinese media, where you can clearly see that the US focuses on aggression towards Hamas and emphasises the reasons for the attack (goal discourse), while China focuses more on civilian Gaza and emphasises the attacks themselves (means discourse).
(Israeli actions, Left: US newspapers; right: Chinese newspapers. Network shows co-occurrence based semantic network of all words in predicates with Israel as subject that are overrepresented in the respective country)
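As an illustration of the kind of network behind such a figure (not the actual analysis; the predicates below are made up), a word co-occurrence network can be built from a list of predicates with base R plus igraph:

```r
# Toy co-occurrence network of words appearing together within predicates.
library(igraph)

predicates <- list(c("attack", "rocket", "hamas"),
                   c("attack", "defend", "civilians"),
                   c("rocket", "hamas", "fire"))
vocab <- sort(unique(unlist(predicates)))

# Binary predicate-by-term matrix; crossprod gives co-occurrence counts.
m <- t(sapply(predicates, function(p) as.integer(vocab %in% p)))
colnames(m) <- vocab
cooc <- crossprod(m)

# Undirected weighted graph; edge width reflects co-occurrence frequency.
g <- graph_from_adjacency_matrix(cooc, mode = "undirected",
                                 weighted = TRUE, diag = FALSE)
plot(g, edge.width = E(g)$weight)
```

In the real analysis the nodes would additionally be filtered to words that are overrepresented in one country's coverage, which is what makes the two panels differ.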
Links: [download presentation][R source code]
Today I’ll be giving a guest lecture in the course on Semantic Network Analysis in the VU course on social capital and networks. I will briefly talk about big data and social science research and the central role of text analysis in “social science 2.0”. I will introduce semantic network analysis as an important form of text analysis and give two examples from my own research: (1) differences in newspaper coverage of the 2009 Gaza war between US and Chinese press, and (2) the dynamics of news diffusion in the Dutch media landscape. You can download the slides here.
Today I will be giving a talk at the Talk of Europe Creative Camp #3 symposium held in the Royal Library (KB). I will be presenting the use of syntactic clause analysis to analyse newspaper coverage, which often contains multiple quoted or paraphrased sources using multiple and competing frames, making it difficult to analyse with frequency based approaches. Download: [paper][presentation]
Who killed whom in the Gaza war? Using syntactic information for relational corpus analysis
Wouter van Atteveldt & Kasper Welbers
This paper shows how syntactic information can be used to automatically extract clauses from text, consisting of a subject, predicate, and optional source. Since the output of this analysis can be seen as an enriched token list or bag of words, normal frequency based or corpus linguistic analyses can be used on this output. Taking the 2008–2009 Gaza war as an example, we show how corpus comparison, topic modelling, and semantic network analysis can be used to explore the differences between US and Chinese coverage of this war.
I’m giving a talk at the Connecting Data for Research symposium held today at the VU University.
In my presentation Dutch journalism since 1945: collecting and analysing large textual data sets with commercial owners I will be presenting some preliminary results from my study of Media Logic in Dutch newspapers since 1945, and focus on the technical and copyright-related problems with gathering, processing, and sharing large textual data sets.