Come and work with us! We are hiring for six exciting new PhD positions at the University of Vienna, Austria

We offer a pleasant work environment within a friendly, dynamic, international and young team in Vienna, one of the cities with the highest quality of life worldwide.

The working language will be English, and we are committed to diversity and inclusion. Three of the open positions are associated with the projects "Talking charts" and "Interpretability and Explainability as Drivers to Democracy", funded by the Vienna Science and Technology Fund. The other three open positions are on related topics. Please see the details below. We look forward to your application!


Project “Talking charts”

The project combines perspectives and methods from Computer Science and Science & Technology Studies to explore how charts related to climate change and COVID-19 are created and understood by both researchers and public audiences.

We welcome candidates with a background in Computing (with a specialization in Human-Computer Interaction, Data Visualisation, or Data Science) or in Information Science, Science and Technology Studies, or related areas. Read more about the project and the job description here:

Please get in touch via email with Laura Koesten or Kathleen Gregory if you have any questions about the role descriptions or application process for the Talking Charts project.


Project “Interpretability and Explainability as Drivers to Democracy”

The project concerns machine learning models used to make decisions with significant societal impact in democratic societies. In particular, the project aims to enable the electorate to evaluate machine learning models and their usage in an informed way, through interpretability and explainability of the models used, and through communication of these models and their roles in the decision-making process in suitable forms. The underlying decision-making process is assumed to involve stakeholders with different levels of expertise; achieving the project's aims therefore requires the development of novel machine learning models, visualization approaches, and guidelines.

Please get in touch via email with Sebastian Tschiatschek if you have any questions about the role descriptions or application process for the Interpretability and Explainability project.

Our research groups are also filling the following related PhD positions:

Please get in touch via email with Torsten Möller with questions about these roles.