Practical AI Transparency: Revealing Datafication and Algorithmic Identities
DOI: https://doi.org/10.33621/jdsr.v2i3.32

Keywords: datafication, methods, algorithmic identity, digital social research, practical transparency, methodology, digital methods, subject access request

Abstract
How does one do research on algorithms and their outputs when confronted with the inherent opacity and black-boxed nature of algorithmic systems, as well as with the limitations of API-based research and the data access gaps imposed by platforms’ gate-keeping practices? This article outlines the methodological steps we undertook to manoeuvre around these obstacles. It is a “byproduct” of our investigation into datafication and the way algorithmic identities are produced for personalisation, ad delivery and recommendation. Following Paßmann and Boersma’s (2017) suggestion to pursue “practical transparency” by focusing on particular actors, we experiment with different avenues of research. We develop and employ an approach of letting the platforms speak and making the platforms speak. In doing so, we also use non-traditional research tools, such as transparency and regulatory tools, and repurpose them as objects of/for study. Empirically testing the applicability of this integrated approach, we elaborate on the possibilities it offers for the study of algorithmic systems while remaining cognizant of its limitations and shortcomings.
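One concrete way the “letting the platforms speak” strand can play out is by reading the data exports that platforms release through their transparency tools or in response to a subject access request. The sketch below is purely illustrative and is not code from the article: the export path ads_information/ads_interests.json and the {"topics": [...]} layout are assumptions chosen for demonstration, loosely modelled on the kind of inferred-interest files such downloads may contain.

```python
import json
from pathlib import Path

# Illustrative only: inspect the ad-interest categories a platform claims
# to have inferred about a user, as found in a data-download / subject
# access request export. The path and JSON layout below are assumptions.
EXPORT_FILE = Path("export/ads_information/ads_interests.json")


def load_inferred_interests(path: Path) -> list[str]:
    """Return the interest categories listed in the (assumed) export file."""
    with path.open(encoding="utf-8") as f:
        data = json.load(f)
    # Assumed layout: {"topics": ["Interest A", "Interest B", ...]}
    return data.get("topics", [])


if __name__ == "__main__":
    for topic in load_inferred_interests(EXPORT_FILE):
        print(topic)
```

Reading such a file alongside what the platform displays in its ad-preferences interface offers one way of comparing what a platform says about a user with the algorithmic identity it acts on.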
Published: 2020
Issue: Vol. 2 No. 3 (2020)
License
Copyright (c) 2020 The Authors
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.