I am very excited to announce that we* just launched Computational Communication Research (CCR), a new open-access, peer-reviewed journal dedicated to the development and application of computational methods for communication science. We hope that CCR will serve as a central home for communication scientists with an interest in and focus on computational methods — a place to read and publish the cutting-edge work in our growing subfield.
Please see the inaugural call for papers at http://computationalcommunication.org/inaugural-cfp/ (abstracts 30 Sept, manuscripts 30 Nov), and consider submitting your best computational work to the first issue!
Don’t hesitate to email me for more information; I am looking forward to your submissions!
The PhD Candidate will work as part of the Project Team “Inside the filter bubble: A framework for deep semantic analysis of mobile news consumption traces”, a collaboration between the VU, the UvA (University of Amsterdam), CWI (Centre for Mathematics and Computer Science), and the Netherlands eScience Centre.
In this project we will develop and use tracking techniques to analyse mobile news consumption patterns and determine the impact of biased and homogeneous news diets on political knowledge and attitudes. Online and mobile news consumption leaves digital traces that are used to personalize news supply, possibly creating filter bubbles in which people are exposed to a low diversity of issues and perspectives that match their preferences. Filter bubbles can be detrimental to the role of journalism in democracy and are therefore the subject of considerable debate among academics and policymakers alike. By developing techniques for analysing (mobile) news consumption patterns and determining the impact of selective news exposure on political knowledge and attitudes, this project will contribute to the quality of (digital) journalism, news supply, and democracy.
Together with the other members of the project team, you will conduct the substantive analysis of these data and write papers as outlined in the project proposal, culminating in a dissertation to be defended at the VU. See ccs.amsterdam/projects/jeds for the project description or mail us for the full proposal.
We are looking for a candidate who has either a degree in communication science, journalism, or a related field, with a strong affinity for computational methods; or a degree in data science, computer science, or a similar field, with a strong affinity for news and societal problems. Candidates who expect to obtain their degree in the near future are also invited to apply.
The VU University Amsterdam has three openings for assistant professors, two of which explicitly call for candidates with computational skills:
The VU department of Communication Science aims to hire candidates in the fields of:
a) Political communication / public affairs (from January 2019);
b) Corporate communication and/or marketing communication (from January 2019);
c) Media psychology (from 1 August).
For all three positions, we prefer candidates with a strong focus on the use of new communication technologies (e.g. social media, social robotics, sharing platforms). For positions a) and b) we prefer candidates who apply computational methods such as automatic text analysis, machine learning, or network analysis using programming languages such as R and Python.
If you are an expert in any of these fields, have good computational skills, and want to be part of our fast-growing Computational Communication Science lab (see also http://ccs.amsterdam), please consider applying to one of these positions before May 10th.
Don’t hesitate to email me if you need more information! (firstname.lastname@example.org)
I will be teaching a workshop on Topic Modeling at the Hannover CCS conference, 13-15 February 2018. All links and materials will be posted on this page.
- Session 1: Text analysis in R
- Session 2: Running and validating topic models
- Session 3: Technical details of LDA
- Session 4: Structural Topic Models
- Session 5: Linguistic processing
I am currently at UIUC to participate in the Resteco team meeting and the CRESTA workshop on analysing terrorism coverage. I will give a brief talk on using syntactic analysis and crowd coding for analysing the quality of terrorism coverage. Download Slides
We were delighted to hear that JEDS/NWO decided to accept our grant proposal for studying personalised online news consumption! In the coming years, our team from the VU, UvA, CWI, and e-Science Center will automatically measure and analyse personalised online news consumption to find out whether online filter bubbles really exist and what effect they have on political knowledge and attitudes.
- Wouter van Atteveldt (VU, Communication Science)
- Laura Hollink (CWI)
- Damian Trilling (UvA, Communication Science)
- Antske Fokkens (VU, Computational Lexicology)
- Judith Möller (UvA, Communication Science)
- Kasper Welbers (VU, Communication Science)
- Natali Helberger (UvA, Law)
- E-science Engineer (NLeSC)
- Ph.D. Student (VU & UvA)
Online and mobile news consumption leaves digital traces that are used to personalize news supply, possibly creating filter bubbles where people are exposed to a low diversity of issues and perspectives that match their preferences. Filter bubbles can be detrimental to the role of journalism in democracy and are therefore the subject of considerable debate among academics and policymakers alike. The existence and impact of filter bubbles are difficult to study because of the need to gather the digital traces of individual news consumption; to automatically measure issues and perspectives in the consumed news content; and to combine and analyse these heterogeneous data streams.
- WP1: Develop a mobile app to trace individual news consumption and gather user data;
- WP2: Create a custom NLP pipeline for automatically identifying a number of indicators of news diversity in the news content;
- WP3: Integrate and analyze the resulting heterogeneous data sets;
- WP4: Use the resulting rich data set to determine the extent to which news recommender algorithms and selective exposure lead to a lower diversity of issues and perspectives in the filter bubbles formed by news supplied to and consumed by different groups of users.
These analyses allow us to determine the impact of biased and homogeneous news diets on political knowledge and attitudes. The software developed in this project will be open source and re-usable outside the scope of this project by scholars interested in mobile behavior and news production and consumption.
Tonight I will be giving a talk at the Python meetup hosted by Byte. I will be talking about “Large scale search and text analysis with Python, Elastic, Celery and a bit of R”, meaning a lot of AmCAT, NLPipe, and R/Python scripts using Django, REST framework, Elasticsearch, Celery, and third-party tools. Download slides here.
Reinout van Rees actually summarized my talk on his blog, thanks!
On Monday 5 November I will teach a course on Text Analysis in R at the VU.
To prepare, please install R and Rstudio on your laptop. If you are interested, you can read our recent article on text analysis in R and/or some introductory materials on learning R.
Time: 9:00 – 13:00 (approx)
Data: [github] [meta.rds] [articles.rds] [tokens.rds]
- Session I: Introduction [slides]
- DTMs in R
- Dictionary analysis with AmCAT and/or quanteda
- Session II: Corpus Analysis [slides]
- Simple NLP
- Corpus analysis and visualization
- Topic modeling
To install all packages used in the example code, you can run the following commands in R:
install.packages(c("devtools", "corpustools", "quanteda", "topicmodels", "ggplot2", "LDAvis", "slam"))
Link to the visualization presentation
(Wouter van Atteveldt, Mariken van der Velden, Antske Fokkens)
Due to the need for context-specific sentiment analysis tools and the rich language used for expressing sentiment in political text, automatic sentiment analysis suffers heavily from the scarcity of annotated sentiment data. This is especially true for directional sentiment, i.e. annotations indicating that a holder expresses sentiment about a specific target.
In this paper we use crowdsourcing to overcome this data scarcity problem and develop a tool for classifying sentiment expressed in a text about a specific target. Crowdsourcing is especially useful for sentiment analysis because sentiment coding is a simple but essentially subjective judgment, and the low cost of crowdsourcing makes it possible to code items multiple times, showing the spread of sentiment as well as the point estimate.
We show that crowdsourcing can yield directed sentiment codes with reasonable accuracy with as few as 2-3 coders per unit, with accuracy increasing up to 10 coders. By selecting sentences on which coders agree, a very high-precision subset of codes can be compiled. It is essential to make the task as simple as possible and to have good ‘gold questions’ for quality control.
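To make the aggregation idea concrete, here is a minimal sketch (in Python, with invented example codes, not our actual data or pipeline): each sentence receives sentiment codes (-1/0/+1) from several hypothetical coders, and a majority vote with an agreement threshold yields the high-precision subset.

```python
from collections import Counter

# Hypothetical crowd annotations: sentence id -> sentiment codes
# (-1 = negative, 0 = neutral, +1 = positive) from five coders each.
codes = {
    "s1": [1, 1, 1, 0, 1],
    "s2": [-1, 1, 0, -1, 1],
    "s3": [0, 0, 0, 0, 0],
}

def aggregate(codes_per_sentence, min_agreement=0.8):
    """Majority vote per sentence; keep only sentences where the
    winning label reaches the agreement threshold."""
    result = {}
    for sid, labels in codes_per_sentence.items():
        label, count = Counter(labels).most_common(1)[0]
        agreement = count / len(labels)
        if agreement >= min_agreement:
            result[sid] = (label, agreement)
    return result

print(aggregate(codes))
# s1: 4/5 coders say positive (0.8 agreement) -> kept
# s2: no label reaches the threshold        -> dropped
# s3: unanimous neutral                     -> kept
```

The threshold trades coverage for precision: lowering it keeps more sentences but admits more contested codes.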
Our future plans are to gather data on sentiment about specific political parties from Dutch and English tweets and political news. These data will be used to compare crowdsourcing to manual expert coding, to enhance an existing sentiment dictionary, and to train a machine learning model. By comparing the outcomes of these approaches, we can show the most cost-effective way to conduct accurate targeted sentiment analysis.
[Part of the Applications of Topic Modeling panel in the Computational Methods interest group. Download slides or an earlier version of this paper]
LDA topic modeling is a popular technique for unsupervised document clustering. However, the utility of LDA for analysing political communication depends on being able to interpret the topics in theoretical terms. This paper explores the relation between LDA topics and content variables traditionally used in political communication. We generate an LDA model on a full collection of front-page articles of Dutch newspapers and compare the resulting LDA topics to a manual coding of the political issues, frames, and sentiment.
In general, we find that a large number of topics are closely related to a specific issue; and that the different topics that comprise an issue can be interpreted as subissues, events, and specific journalistic framing of the issue. Linear combinations of topics are moderately accurate predictors of hand-coded issues, and at the aggregate level correlate highly. These results validate the use of LDA topics as proxies for political issues, and pave the way for a more empirical understanding of the substantive interpretation of LDA topics.
(Wouter van Atteveldt, Kasper Welbers)
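The general idea of using linear combinations of topics to predict hand-coded categories can be sketched as follows. This is not the paper's actual pipeline: it uses scikit-learn on an invented four-document toy corpus with made-up issue labels, purely to illustrate the mechanics of topic proportions as classifier features.

```python
# Sketch: fit LDA on a toy corpus and use the per-document topic
# proportions as features in a linear classifier that predicts
# hand-coded issue labels. All documents and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "parliament debates the budget deficit and taxes",
    "tax cuts dominate the budget negotiations",
    "hospital waiting lists and healthcare insurance costs",
    "new healthcare law covers hospital insurance",
]
issues = ["economy", "economy", "health", "health"]  # hand-coded labels

dtm = CountVectorizer().fit_transform(docs)           # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(dtm)                        # topic proportions per document

# Linear combination of topic proportions predicting the manual codes
clf = LogisticRegression().fit(theta, issues)
print(clf.predict(theta))
```

With a real corpus, held-out accuracy of such a classifier (rather than in-sample predictions as here) is what indicates how well topics proxy for manually coded issues.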