ICWSM 19 Workshop: Critical Data Science

01.09.2019

Summary report of the ICWSM-2019 Workshop on Critical Data Science

This post summarizes the activities and outcomes of the Workshop on Critical Data Science at ICWSM-2019 in Munich, Germany, and points to future directions for work in critical data science.

–Katja Mayer and Momin M. Malik, 01 September 2019

Overview

In an early suggestion of the term “critical data science,” Jo Bates (2016) writes:

“New data science techniques offer immense potential for scientific advancement and human development – there may even be a role for data science in advancing the democratic project. However, in order to ensure that these advances benefit all, rather than empower the few, it is crucial that data scientists work collaboratively with others to incorporate an analysis of power into their practice.”

Like any other human endeavor, data science will be what we make it. ICWSM, having brought together the social sciences and computer science over the past 13 years, is a prime space for scholars from different disciplines to engage one another around frameworks for responsibly carrying out data science on social phenomena.

We define critical data science as our vision of the practice of working with and modeling data (the “data science”), combined with identifying and questioning the core assumptions that drive that practice (the “critical”)—not just looking at the world, but “back[ing] up and look[ing] at the framework of concepts and assumptions and practices through which [we] look at the world” (Agre, 2000). It can be seen as the intersection (or perhaps the union) of data science and critical data/algorithm studies, and an example of a “critical technical practice” (Agre 1997).

The workshop arose from the premise that only through combining cultures of critique with those of practice can we create responsible and sustainable ways of interdisciplinary collaboration. This workshop was meant to create a space for such a combination, and to explore what it might contain: what will it look like to do data science with awareness of power relations? Who might make up critical practitioners of data science, or how might more emerge? What sorts of communities, coalitions, and collaborations need to exist between technical practitioners and nontechnical analysts, or between those working in academic research, industry, government, or NGOs?

About this document

This serves as an overview and summary of the workshop, but we are also keeping this open as a living document. We invite workshop participants and others to add to it, especially concrete actions to carry out in our professional and scholarly practices and ways to engage our institutions and organizations.

Presentations

The workshop included short presentations by participants to support reflection on their own and neighboring scientific practices, and to create opportunities for further cooperation. Participants covered a broad range of backgrounds: industry data science and engineering, computer science, computational social science, linguistics, classics, environmental and human rights activism, social work, digital democracy, and the arts.

Philippe Saner discussed ideas of data science in education. He observed that “the cultural framing of the ‘sexiness’ of data science… by industry transforms [claimed] neutrality into a prospective vision enabling students to see themselves as future ‘societal leaders’, thus in specific positions of power.” As data science serves as a “space between fields,” he argued for the possibilities of exploring these spaces rather than ignoring them, or leaving them to corporate logic and engineering tradition.

Jared Moore critiqued the use of the “social good” label of many current AI initiatives, noting that it serves to distract from intrinsic ills that come along with the mass deployment of resources (in energy, education, and labor) while generally failing to have a coherent standard of what constitutes “social good” to even assess whether it has been met. He proposed “AI for not bad” as a more honest label for those computer scientists who want to distinguish their work from the amoral mainstream but do not want to commit to the political stances necessary for coherently working towards positive social change.

Publication: Moore, Jared. (2019). “AI for not bad.” [Research Topic: Workshop Proceedings of the 13th International AAAI Conference on Web and Social Media.] Frontiers in Big Data. doi: 10.3389/fdata.2019.00018

Parvathi Subbiah discussed her UK-based research on support for the Chavismo movement in her native Venezuela, focusing on the barriers she has faced: lack of access to training, disciplinary skepticism combined with an inability to offer guidance, and transnational controls (the UK Foreign Office ban on travel to Venezuela). She discussed how online articles of support for Chavismo among non-Venezuelans opened up areas of inquiry in place of the field work she was unable to do, but at the same time in themselves lack critical context. One sentiment analysis tool developed for the US context linked terms such as “state” or “worker” to socialism, which created over-estimates of the prevalence of discussions of socialism and missed prominent themes of anti-imperialist sentiment. By using mixed-methods research, in particular interviews with non-Venezuelan supporters of Chavismo and Venezuelans living abroad, she has been able to identify and overcome the problems of relying on online records alone.
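
To make this kind of over-counting concrete, the toy sketch below shows how a context-free keyword lexicon inflates the apparent prevalence of a theme; the lexicon and example comments are entirely hypothetical and are not the tool or data from her research.

```python
# Toy illustration of keyword-based topic tagging over-counting a theme.
# The lexicon below is hypothetical and not the tool used in the research.
SOCIALISM_KEYWORDS = {"socialism", "socialist", "state", "worker", "workers"}

def tags_socialism(text: str) -> bool:
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & SOCIALISM_KEYWORDS)

comments = [
    "The state must resist imperialist interference.",  # anti-imperialist, not socialism
    "Every worker deserves dignity on the job.",         # labor, not necessarily socialism
    "We are building twenty-first century socialism.",   # actually about socialism
]

flagged = [c for c in comments if tags_socialism(c)]
print(f"{len(flagged)}/{len(comments)} comments flagged as 'socialism'")
# All three comments are flagged even though only one is about socialism:
# context-free keywords inflate the apparent prevalence of the theme.
```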

Jaclyn Sawyer discussed her experience of data science in social work, the discipline that perhaps has the greatest claim to having systematically thought about and been devoted to the practice of “social good” (D’Ignazio, 2018; Patton, 2019). She reflected on her experience “working at the intersection of social welfare, data, and technology, a space that gives voice to experts of the human domain in the digital realm.” Jaclyn herself, like many other workshop participants, builds on rich inter- and transdisciplinary experiences, with her own background stretching across public policy, social work, and data science. With this, she described her work on the front lines of using data science to provide social services in the Data Services and Program Analytics department at Breaking Ground, a non-profit providing homeless street outreach and affordable housing opportunities in New York City. She described what could perhaps be an exemplar of responsible modeling, design, and implementation practice: starting not opportunistically from (patchy, inconsistent, and low-quality) available data, but starting with stakeholders and the perspectives of those they serve, and working with a team to laboriously build a whole pipeline of data collection, modeling, and application around homelessness and housing insecurity. Some particularly admirable parts of the program are that data collection only happens after trust-building with the program’s clients, that data cleaning is a major part of the pipeline, and that the project has gone through multiple development cycles to improve and be responsive.

Tea Brasanac reported on her project, “Visual anonymity and data privacy.” As she pointed out, there is a long journalistic tradition of hiding people’s faces to allow them to maintain anonymity while speaking for themselves on video. When computational tools became able to reliably detect faces, an obvious and relatively easy next step would have been automated, real-time blurring of faces while recording video; yet there is not a single available tool that does this. Tellingly, all development past facial detection has been towards facial recognition. Existing tools for blurring require identifiable video to be uploaded for processing, which, as she learned from interviewing refugees, is not good enough: her interviewees did not want identifiable recordings of themselves to ever exist. Identifying and theorizing this lack, she has also set about addressing it, demoing a tool for real-time facial blurring at the workshop.
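
As a rough illustration of the capability she argued has been neglected, the following is a minimal sketch of on-device, real-time face blurring using OpenCV’s bundled Haar cascade face detector; it is not her tool, and the cascade choice and blur parameters are illustrative assumptions.

```python
# Minimal sketch of on-device, real-time face blurring with OpenCV.
# Not the tool demoed at the workshop; the cascade and blur settings
# are illustrative assumptions. Frames never leave the local machine.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        # Heavy Gaussian blur applied only to the detected face region.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imshow("blurred", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Because the blurring happens frame by frame before anything is stored or transmitted, no identifiable recording ever needs to exist, which is the requirement her interviewees articulated.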

Laura Schelenz presented questions on how to make data collection in the project “Internet of Us” more inclusive, diversity-oriented, and aligned with data protection as well as ethical principles. This was met with challenges from the setup of the project, as the methods of pursuing inclusion and diversity had not been developed in consultation with the Global South communities who were the intended end-users of the project outputs; this led to a robust discussion of how to effectively distribute the resources which we as researchers have access to within the constraints that come along with those resources, and how to best fight structural inequality with projects that are enabled by that very inequality.

Helena Mihaljević presented reflections on ongoing work with Christian Steinfeldt and colleagues on studying gender representation in mathematical publications. On the one hand, we know there are gendered asymmetries in professional opportunities and advancement in academic mathematics (as in every field and profession), and it is worthwhile and important to study these at scale. On the other hand, studying this at scale makes it infeasible to ask individual authors for their gender identification; and name-based automated gender recognition both denies the lived experience of trans and gender non-binary individuals and systematically fails on names of Chinese and Eastern European origin, challenges that are seldom discussed in the literature. The resulting discussion closely paralleled the postcolonial theory idea of “strategic essentialism”: the extent to which it is possible to strategically deploy essentializing categories, with known limits and injustices, for the purposes of fighting other injustices. [We thank Os Keyes for making this connection of strategic essentialism to uses of data and modeling.]

Publication: Mihaljević, Helena, Marco Tullney, Lucía Santamaría, and Christian Steinfeldt. (2019). “Reflections on gender analyses of bibliographic corpora.” [Research Topic: Workshop Proceedings of the 13th International AAAI Conference on Web and Social Media.] Frontiers in Big Data. doi: 10.3389/fdata.2019.00029
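
To make the failure mode concrete, here is a small sketch using the open-source gender-guesser Python package as a stand-in (an assumption on our part; it is not necessarily what the authors used): name-based inference of this kind returns only coarse labels, tends to return “unknown” for many given names of, for example, Chinese origin, and has no way of accommodating trans or non-binary identities.

```python
# Sketch of name-based gender inference and its limits. gender-guesser is
# used here only as an illustration and is not necessarily what the study used.
# pip install gender-guesser
import gender_guesser.detector as gender

detector = gender.Detector()

# First names drawn from hypothetical author records.
names = ["Maria", "Christian", "Wei", "Yuki", "Sasha"]
for name in names:
    # Possible labels: male, female, mostly_male, mostly_female, andy, unknown.
    print(name, "->", detector.get_gender(name))
```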

Gabriel Pereira presented joint work with Annette Markham, experimenting with “algorithmic memory making.” In their art project, the Museum of Random Memory, they use models to deliberately distort recordings, drawing attention to how digital media are mediated in ways usually meant to be invisible, and in so doing asserting control over the transmission of traces, stories, and narratives. Pereira connected this to the need for a future-oriented ethics, an ethics for uses of data now, knowing that future consequences are unforeseen and unforeseeable, while questioning the ethical role of data collectors in presenting data as lived experience.

Publication: Markham, Annette and Gabriel Pereira. (2019). “Experimenting with algorithmic memory-making: Lived experience and future-oriented ethics in critical data science.” [Research Topic: Workshop Proceedings of the 13th International AAAI Conference on Web and Social Media.] Frontiers in Big Data.

Non-attendee submissions

We had some submissions from participants who were not able to attend. We thank them for their participation nonetheless.

Eugene T. Richardson (Department of Global Health and Social Medicine, Harvard Medical School; Department of Medicine, Brigham and Women’s Hospital; and Partners In Health, Sierra Leone) submitted a draft chapter, “Immodest Causal Inference,” from a forthcoming book. The formal representation of causality via directed acyclic graphs has been a topic of growing interest and excitement within data analysis, and indeed presents a counterpoint to the “correlation-only” mainstream of machine learning and its many dangers. But in this chapter, Richardson critiques some of the implications of this formal language. Namely, it leads us away from understanding structural factors (as, within the language of causal graphs, they can be ignored if they are anything but a direct ancestor), thereby leading away from solidarity and radical interventions. We look forward to the finished work as a powerful example of critically examining the technical limitations and psychological implications of a burgeoning paradigm.

Íñigo Martínez de Rituerto de Troya (Data Science for Social Good Europe, Universidade Nova de Lisboa) submitted a statement describing his work “working with the Portuguese national Institute for Employment and Professional Development to help unemployed individuals find work or undergo professional, vocational, or personal training.” He describes trying to balance the use of modeling with a skepticism about attempts to use technical means to address social problems: his current practice includes trying to find ways to co-create predictive systems with those whom the systems will affect, and to reflect on possible negative impacts.

Michael Castelle (Centre for Interdisciplinary Methodologies, University of Warwick) submitted a statement, “Towards a 21st-Century Critical AI: Methods for a Reflexive Deep Learning Practice”, describing his research into epistemic transformations around convolutional and recurrent neural networks. These methods will “likely pose a conceptual threat to traditional disciplinary methods (as well as a financial threat for competitive grants)”, even more than computational social science or digital humanities have thus far; how might we address this? He anticipates, and encourages, the creation of a “trading zone” in which techniques beyond just computer science, statistics, and mathematics are brought to bear to understand neural networks: namely, the history and sociology of science, knowledge, and technology, as well as semiotics and the “anthropological study of cultural and linguistic ideologies”. He described a proof-of-concept contribution to the 2nd Workshop on Abusive Language Online at the 2018 Empirical Methods in Natural Language Processing conference, showing the downstream modeling effects of annotation by context-aware domain experts versus context-unaware non-experts.

Discussion

Workshop presentations and discussions delved into how we can change our socio-technical practices, very much in line with Agre’s (1997) call for a critical technical practice. In our account, as in his, three things are central.

First, critical technical practice requires deeply personal involvement with our scientific routines. Reflection and personal knowledge are a deep part of scientific practice, but scientific disciplines frequently justify themselves by claiming objectivity, neutrality, and universality, leaving little room for reflection (Polanyi, 1966). One of the most enlightening parts of the workshop discussion was asking the computer scientists in the room what biographical aspects or experiences led them to, unlike many technical practitioners, be open to non-technical perspectives; for many, it was personal connections or commitments to political projects.

Second, Agre identifies an experience that we suspect is increasingly common: technical practitioners, those whose day-to-day work or research involves activities like math or coding for doing quantitative analysis and building systems, not only find fundamental limitations in their disciplines but also find themselves at a loss for how to seek answers. Agre describes his own process of looking to the humanities and social sciences, and experiencing a sense of vertigo when he finally learned to read other disciplines in their own terms rather than trying to translate them into the specification of a technical mechanism or a formal procedure.

Lastly, Agre envisions technical practitioners not abandoning their practices after discovering profound limits, but starting to carry out those practices from a fundamentally different foundation: a critical technical one. There is normally a dichotomy between the social scientific “analysts” who produce critical accounts of the ideas and practices of science, and the scientific “actors” who produce those ideas and practices (Collins, 2008); Agre’s call suggests the possibility of a hybrid identity between the two. This theorizes some current practices and future possibilities within communities like the FAT* conference, or niches of critical data and algorithm studies, surveillance studies, and data activism; and it perfectly captures a number of projects presented at the workshop that remain technical products or analyses, but are grounded by something far deeper than the under-theorized pragmatism that drives so much of software engineering and data analysis.

Outputs and future activities

During the workshop we compiled two blocks of questions that could guide our own personal agenda setting.

  1. Politics: What are our experiences of paradigmatic politics? Who are the insiders, and who are the outsiders for effecting change? Do we feel capable of intervening in curricular decision-making, and can we disrupt dominant narratives of big data hegemony, efficiency and objectivity? What does it mean to do data science for good, for whom? What would be my personal priorities: short term, and long term?
  2. Practice: What concrete actions can we take? How can we create spaces and time for collaboration besides always-hectic, project-based logics? Which incentive and reward structures would we need for that? Which skills do we want to establish in the training of the next generation? How can I/we collaborate? With whom? For what tasks?

Workshop participants developed initial ideas on how to design a critical technical practice in data science:

  1. Systematic reflection. An important component is systematic reflection on the issues we face, and the development of best practices for future projects. Potential task: create a template for such systematic reflection (e.g., an application of such a template might be a set of principles of good data science scholarship).
  2. Participatory Action Research. Philippe Saner considers participatory action research (PAR) projects “as a possibility that brings together the different cultures of critique involved to investigate the ‘practices’ (modeling techniques, methods, tools etc.), discourses (framings, imaginations, visions etc.), and structural conditions of data science as a contemporary knowledge formation.” Multiple workshop participants agreed that PAR is an extremely promising framework, as is indeed also being recognized and brought to bear in the world of technology design (Costanza-Chock, 2018; Costanza-Chock et al., 2018).
  3. Building teams. Jaclyn Sawyer’s observations on interdisciplinary practice could serve as a model for building data teams across sectors.
  4. Venues for publishing. We need more white papers for practitioners, and more publishing / exchange formats for transdisciplinary understanding.
  5. Linking sectors. How do we link sectors—non-profits, academia and corporate interests? There are tons of opportunities for data scientists to lend their ambition to nonprofits, such as in deep data dives. But this is not sustainable, as volunteering data scientists move on. What are the incentives?
  6. Education. We have to change the education and training. How do we do this in our ecosystems?
  7. Institution-building. Instead of starting our own efforts, we can acknowledge those who are already doing relevant work in this space. Can we attach our efforts to organizations of critical researchers, who are already advocating change? How do we identify such organizations and choose which ones to join?
  8. Documentation. Making workflows more open and better documented. Which would be the right tools for this?
  9. Ethical principles. We could adopt FAIR data principles (https://www.go-fair.org/fair-principles/), as well as reflect ethical concerns more openly, but while understanding that fairness is not something universal that could simply be built into technology.
  10. Funding. One topic that came up was, especially in the US case, the role of military funding for computational research. Even if people were not actively changing their research and tailoring it to military priorities to attract funding (which almost certainly happens), there would still be a selection effect: research comporting with military goals gets disproportionately supported. And there was little disagreement that critical data science will not fit with military priorities. Can funding instruments better mandate ethical regulation? How can scholars, whether in the US or elsewhere, gain funding for work that challenges structures of power—not just from the military, but from corporations, or foundations with no public accountability? Or if not, what is the alternative: what levels and types of compromise should we accept?

The slides and discussion cards can be found here: https://critical-data-science.github.io/wcds2019slides.pdf.

And we have also started a reading list via Zotero: https://www.zotero.org/groups/2282959/critical_data_science/items

Acknowledgements

There are a number of scholars at the forefront of combining practice and critique, and we were fortunate to have the guidance and input of several of them, who served as our reviewers. Alphabetically by last name, thanks to:

  • Doris Allhutter, political scientist and STS scholar with a focus on software development;
  • Catherine D’Ignazio, Feminist Human-Computer Interaction scholar at MIT and co-author of the forthcoming Data Feminism;
  • Claire Donovan, cross-disciplinary scholar in research evaluation and policy;
  • Mary Gray, Senior Researcher at Microsoft Research New England, and co-author of Ghost Work;
  • Nick Seaver, ethnographer at Tufts University and co-compiler of “Critical Algorithm Studies: A reading list”; and
  • Luke Stark, media studies scholar at Microsoft Research Montreal.

Thanks to Katie Shilton and Casey Fiesler (respectively, information scientist at the University of Maryland, College Park and social computing researcher at University of Colorado Boulder, and co-organizers of the 2018 ICWSM workshop “Exploring Ethical Trade-Offs in Social Media Research”) for their guidance when proposing the workshop.

For additional input, we also thank Ben Green (PhD candidate at Harvard University, visiting researcher at AI Now, and author of “Data Science as Political Action” and The Smart Enough City), Jonnie Penn (historian of science at the University of Cambridge and co-organizer of the History of AI conference series and community), Amy Johnson (digital STS scholar and linguistic anthropologist), and members of the Ethical Tech Working Group at the Berkman Klein Center for Internet & Society at Harvard University.

A special thanks to ICWSM-19 Local Chair Mirco Schönfeld for his organizational efforts on the workshop day, as well as to General Chair Jürgen Pfeffer for his enormous efforts to make the workshops accessible. His success at lowering workshop costs and defraying costs for attendees made it possible for scholars outside of computer science to attend the ICWSM workshops, without which this workshop would not have been nearly the success it was.

We further thank our colleagues Claudia Müller-Birn and Hemank Lamba for their valuable input, reviews, and technical assistance, and regret that they were prevented from attending and participating due to external circumstances.

What science becomes in any historical era depends on what we make of it.

– Sandra Harding, 1991

References

Agre, Philip E. (1997). “Towards a critical technical practice: Lessons learned from trying to reform AI.” Social science, technical systems, and cooperative work: Beyond the great divide. Ed. by Geoffrey C. Bowker, Susan Leigh Star, Will Turner, and Les Gasser. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 131–158.

Agre, Philip E. (2000, July 12). “Notes on critical thinking, Microsoft, and eBay, along with a bunch of recommendations and some URL’s.” Red Rock Eater News Service. https://pages.gseis.ucla.edu/faculty/agre/notes/00-7-12.html.

Bates, Jo. (2016, January 12). “Towards a critical data science – the complicated relationship between data and the democratic project.” LSE Impact Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2016/01/12/towards-a-critical-data-science-data-and-the-democratic-project/.

Collins, Harry. (2008). “Actors’ and analysts’ categories in the social analysis of science.” Clashes of knowledge: Orthodoxies and heterodoxies in science and religion. Ed. by Peter Meusburger, Michael Welker, and Edgar Wunder. Springer, pp. 101–110.

Costanza-Chock, Sasha. (2018, July 16). “Design justice, A.I., and escape from the matrix of domination.” Journal of Design and Science, 3 (5). https://doi.org/10.21428/96c8d426.

Costanza-Chock, Sasha, Maya Wagoner, Berhan Taye, Caroline Rivas, Chris Schweidler, Georgia Bullen, and the Tech for Social Justice Project. (2018). #MoreThanCode: Practitioners reimagine the landscape of technology for justice and equity. Technical Report. Research Action Design & Open Technology Institute. https://morethancode.cc.

D’Ignazio, Catherine. (2018, September 2). “How might ethical data principles borrow from social work?” Medium. https://medium.com/@kanarinka/how-might-ethical-data-principles-borrow-from-social-work-3162f08f0353.

D’Ignazio, Catherine, and Lauren Klein. (2019). Data feminism. MIT Press. https://bookbook.pubpub.org/data-feminism.

Gray, Mary L., and Siddharth Suri. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.

Green, Ben. (2018). “Data science as political action: Grounding data science in a politics of justice.” https://arxiv.org/abs/1811.03435.

Green, Ben. (2019). The smart enough city: Putting technology in its place to reclaim our urban future. MIT Press.

Harding, Sandra. (1991). Whose science? Whose knowledge? Thinking from women’s lives. Cornell University Press.

Patton, Desmond U. (2019, March 24). “Why AI needs social workers and ‘non-tech’ folks.” Noteworthy – The Journal Blog. https://blog.usejournal.com/why-ai-needs-social-workers-and-non-tech-folks-2b04ec458481.

Polanyi, Michael. (1966). The tacit dimension. Doubleday.