Vacancies for asst. profs. in computational (political) communication

The VU University Amsterdam has three openings for assistant professors, of which two explicitly look for candidates with computational skills:

https://www.academictransfer.com/en/46787/assistant-professors-universitair-docenten/

The VU department of Communication Science aims to hire candidates in the fields of:

a) Political communication / public affairs (from January 2019)

b) Corporate communication and/or marketing communication (from January 2019)

c) Media psychology (from August 1st)

For all three positions, we prefer candidates with a strong focus on the use of new communication technologies (e.g. social media, social robotics, sharing platforms). For positions a) and b) we prefer candidates who apply computational methods such as automatic text analysis, machine learning, or network analysis using programming languages such as R and Python.

If you are an expert in any of these fields, have good computational skills, and want to be part of our fast-growing Computational Communication Science lab (see also http://ccs.amsterdam), please consider applying to one of these positions before May 10th.

Don’t hesitate to email me if you need more information! (wouter@vanatteveldt.com)

CCS Hannover Workshop on Topic Modeling

I will be teaching a workshop on Topic Modeling at the Hannover CCS conference, 13-15 February 2018. All links and materials will be posted on this page.

Inside the filter bubble: JEDS Proposal Accepted!

We were delighted to hear that JEDS/NWO decided to accept our grant proposal for studying personalised online news consumption! In the coming years, our team from the VU, UvA, CWI, and e-Science Center will automatically measure and analyse personalised online news consumption to find out whether online filter bubbles really exist and what effect they have on political knowledge and attitudes.

Team:

  • Wouter van Atteveldt (VU, Communication Science)
  • Laura Hollink (CWI)
  • Damian Trilling (UvA, Communication Science)
  • Antske Fokkens (VU, Computational Lexicology)
  • Judith Möller (UvA, Communication Science)
  • Kasper Welbers (VU, Communication Science)
  • Natali Helberger (UvA, Law)
  • E-science Engineer (NLeSC)
  • Ph.D. Student (VU & UvA)

Abstract:

Online and mobile news consumption leaves digital traces that are used to personalize news supply, possibly creating filter bubbles where people are exposed to a low diversity of issues and perspectives that match their preferences. Filter bubbles can be detrimental for the role of journalism in democracy and are therefore subject to considerable debate among academics and policymakers alike. The existence and impact of filter bubbles are difficult to study because of the need to gather the digital traces of individual news consumption; to automatically measure issues and perspectives in the consumed news content; and to combine and analyse these heterogeneous data streams.

Work packages:

  • WP1: Develop a mobile app to trace individual news consumption and gather user data;
  • WP2: Create a custom NLP pipeline for automatically identifying a number of indicators of news diversity in the news content;
  • WP3: Integrate and analyze the resulting heterogeneous data sets;
  • WP4: Use the resulting rich data set to determine the extent to which news recommender algorithms and selective exposure lead to a lower diversity of issues and perspectives in the filter bubbles formed by news supplied to and consumed by different groups of users.

These analyses allow us to determine the impact of biased and homogeneous news diets on political knowledge and attitudes. The software developed in this project will be open source and re-usable outside the scope of this project by scholars interested in mobile behavior and news production and consumption.
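One simple way to quantify the diversity of a news diet — a sketch only; the project's actual WP2 indicators are richer than a single number — is the Shannon entropy of the issue distribution across the articles a user consumed. The issue labels below are illustrative:

```python
import math
from collections import Counter

def issue_diversity(issues):
    """Shannon entropy (in bits) of the issue distribution in a news diet.

    `issues` is a list of issue labels, one per consumed article.
    Higher entropy means a more diverse diet; 0 means all articles
    covered the same issue.
    """
    counts = Counter(issues)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A diet spread evenly over four issues is maximally diverse for 4 labels
print(issue_diversity(["economy", "health", "immigration", "climate"]))  # 2.0

# A diet split over two issues is less diverse
print(issue_diversity(["economy", "economy", "health", "health"]))  # 1.0
```

Comparing such scores between users exposed to recommender-driven versus editorially curated news feeds is one way the filter-bubble question could be operationalized.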

Python meetup @Byte: Text analysis with Python

Tonight I will be giving a talk at the Python meetup hosted by Byte. I will be talking about “Large scale search and text analysis with Python, Elastic, Celery and a bit of R”, meaning a lot of AmCAT, NLPipe, and R/Python scripts using Django, REST framework, Elasticsearch, Celery, and third-party tools. Download slides here.

Reinout van Rees actually summarized my talk on his blog, thanks!

VU-HPC: Text analysis in R

Monday 5 November I will teach a course on Text Analysis in R at the VU.

To prepare, please install R and Rstudio on your laptop. If you are interested, you can read our recent article on text analysis in R and/or some introductory materials on learning R.

Location: WN-C203
Time: 9:00 – 13:00 (approx)

Data: [github] [meta.rds] [articles.rds] [tokens.rds]

Program:

  • Session I: Introduction [slides]
    • DTMs in R
    • Dictionary analysis with AmCAT and/or quanteda
  • Session II: Corpus Analysis [slides]
    • Simple NLP
    • Corpus analysis and visualization
    • Topic modeling

To install all packages used in the example code, you can run the following commands in R:

install.packages(c("devtools", "corpustools", "quanteda", "topicmodels", "ggplot2", "LDAvis", "slam"))
devtools::install_github("amcat/amcat-r")
devtools::install_github("vanatteveldt/nlpiper")

Link to the visualization presentation

ICA 2017: Crowd sourcing for sentiment analysis

(Wouter van Atteveldt, Mariken van der Velden, Antske Fokkens)

Download slides

Due to the need for context-specific sentiment analysis tools and the rich language used for expressing sentiment in political text, automatic sentiment analysis suffers heavily from the scarcity of annotated sentiment data. This is especially true for directional sentiment, i.e. annotations indicating that a specific holder expresses sentiment about a specific target.

In this paper we use crowdsourcing to overcome this data scarcity problem and develop a tool for classifying sentiment expressed in a text about a specific target. Crowdsourcing is especially useful for sentiment analysis because sentiment coding is a simple but essentially subjective judgment, and the low cost of crowdsourcing makes it possible to code items multiple times, showing the spread of sentiment as well as the point estimate.

We show that crowdsourcing can produce directed sentiment annotations with reasonable accuracy with as few as 2-3 coders per unit, with accuracy increasing up to 10 coders. By selecting sentences on which coders agree, a very high-precision subset of codes can be compiled. It is essential to make the task as simple as possible and to have good ‘gold questions’ for quality control.
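The agreement-based selection can be sketched as follows; the labels, coder counts, and the 0.8 threshold below are illustrative, not the paper's settings:

```python
from collections import Counter

def aggregate(codes):
    """Aggregate multiple crowd codes for one unit.

    Returns the majority label and the share of coders who chose it.
    Units with high agreement form the high-precision subset.
    """
    counts = Counter(codes)
    label, n = counts.most_common(1)[0]
    return label, n / len(codes)

# Crowd codes per sentence: s1 has a dissenting coder, s2 is unanimous
units = {
    "s1": ["neg", "neg", "neg", "pos"],   # 75% agreement
    "s2": ["pos", "pos", "pos", "pos"],   # 100% agreement
}

# Keep only units where at least 80% of coders agree
high_precision = {u: aggregate(c)[0] for u, c in units.items()
                  if aggregate(c)[1] >= 0.8}
print(high_precision)  # {'s2': 'pos'}
```

Because crowd codes are cheap, the full distribution of codes per unit (not just the majority) can also be kept to show the spread of sentiment, as noted above.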

Our future plans are to gather data on sentiment about specific political parties from Dutch and English tweets and political news. These data will be used to compare crowdsourcing to manual expert coding. Moreover, these data will be used to enhance an existing sentiment dictionary and to train a machine learning model. By comparing the outcome of these various approaches, we can show the most cost-effective way to conduct accurate targeted sentiment analysis.

ICA 2017: What are topics?

[Part of the Applications of Topic Modeling panel in the Computational Methods interest group. Download slides or an earlier version of this paper]

LDA topic modeling is a popular technique for unsupervised document clustering. However, the utility of LDA for analysing political communication depends on being able to interpret the topics in theoretical terms. This paper explores the relation between LDA topics and content variables traditionally used in political communication. We generate an LDA model on a full collection of front-page articles of Dutch newspapers and compare the resulting LDA topics to a manual coding of the political issues, frames, and sentiment.

In general, we find that a large number of topics are closely related to a specific issue; and that the different topics that comprise an issue can be interpreted as subissues, events, and specific journalistic framing of the issue. Linear combinations of topics are moderately accurate predictors of hand-coded issues, and at the aggregate level correlate highly. These results validate the use of LDA topics as proxies for political issues, and pave the way for a more empirical understanding of the substantive interpretation of LDA topics.

(Wouter van Atteveldt, Kasper Welbers)

Political Communication @Kobe University

Today, Nel and I will be presenting in a workshop on political communication at Kobe University:

The Netherlands’ 15 minutes of (in)fame:
political coverage and populism in the Dutch 2017 elections
dr. Nel Ruigrok
[download slides]

On the 15th of March, the international press was focused on the Netherlands where, after Brexit and the election of Trump, a third populist success was expected. The extreme-right PVV of Geert Wilders led the polls during the final months of the campaign and was expected to become the biggest party. However, it was the liberal party of the current prime minister that won the election, followed by the PVV. Besides this turn to more right-wing parties, progressive parties also won numerous seats, making the political landscape more fragmented than ever. In this talk we show how media coverage differed during the campaign and discuss possible effects on the voting behavior of different groups of voters.

Clause analysis:
using syntactic information for automatic analysis of conflict coverage
dr. Wouter van Atteveldt
[download slides]

This paper shows how syntactic information can be used to automatically extract clauses from text, consisting of a subject, predicate, and optional source. Since the output of this analysis can be seen as an enriched token list or bag of words, normal frequency-based or corpus-linguistic analyses can be used on this output. Taking the 2008–2009 Gaza war as an example, we show how corpus comparison, topic modelling, and semantic network analysis can be used to explore the differences between US and Chinese coverage of this war.
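The "frequency-based analysis on extracted clauses" idea can be illustrated with a corpus comparison sketch. The clause tuples and the smoothing constant below are hypothetical stand-ins (in the paper these tuples come from a syntactic parser), and the log-ratio is one common overrepresentation measure, not necessarily the one used:

```python
import math
from collections import Counter

# Hypothetical (subject, predicate) clause tuples already extracted
# from two corpora of war coverage
us = [("israel", "attack"), ("israel", "attack"),
      ("hamas", "fire"), ("israel", "defend")]
cn = [("israel", "attack"), ("hamas", "fire"),
      ("hamas", "fire"), ("un", "condemn")]

def log_ratio(a, b, smooth=0.5):
    """Log2 ratio of smoothed relative clause frequencies.

    Positive scores mark clauses overrepresented in corpus `a`,
    negative scores clauses overrepresented in corpus `b`.
    """
    ca, cb = Counter(a), Counter(b)
    na, nb = len(a), len(b)
    return {k: math.log2(((ca[k] + smooth) / na) / ((cb[k] + smooth) / nb))
            for k in set(ca) | set(cb)}

for clause, score in sorted(log_ratio(us, cn).items(), key=lambda kv: -kv[1]):
    print(clause, round(score, 2))
```

Because the clauses behave like tokens, the same comparison works unchanged for subjects only, predicates only, or full subject-predicate-source triples.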

Monday 20 Feb: Research talk @cityu

Don’t you like it? Using CrowdSourcing for Sentiment Analysis of Dutch and English (political) text  

Wouter van Atteveldt, Antske Fokkens, Isa Maks, Kevin van Veenen, and Mariken van der Velden

[Download slides]

Sentiment Analysis is an important technique for many aspects of communication research, with applications from social media analysis and online reviews to negativity in political communication. The subjective and context-specific nature of evaluative language, however, makes it particularly challenging to develop and validate good sentiment analysis tools.

We use crowdsourcing to develop a tool for classifying sentiment expressed in a text about a specific target. Crowdsourcing is especially useful for sentiment analysis because of the subjective nature of the judgment, and the low cost makes it possible to code items multiple times. By comparing crowdsourcing with dictionary analysis and expert coding, we can show the most cost-effective way to conduct accurate targeted sentiment analysis.
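The dictionary approach being compared can be illustrated with a toy example; the word lists below are tiny hypothetical stand-ins for a real sentiment lexicon:

```python
# Hypothetical sentiment word lists; real lexicons are much larger
POS = {"good", "great", "support"}
NEG = {"bad", "fail", "oppose"}

def dictionary_sentiment(tokens):
    """Net sentiment score: (#positive - #negative) / #tokens."""
    pos = sum(t in POS for t in tokens)
    neg = sum(t in NEG for t in tokens)
    return (pos - neg) / len(tokens)

print(dictionary_sentiment("they oppose the bad plan".split()))  # -0.4
```

Note that such a dictionary is inherently target-blind: it scores the whole text, which is exactly why crowd or expert codes about a specific target are needed as a benchmark.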
