What is Algorithmic Accountability?
Algorithmic accountability refers to the assignment of responsibility for how an algorithm is created and for its impact on society; if harm occurs, accountable systems include a mechanism for redress. Algorithms are products that involve both human judgment and machine learning. While algorithms stand in for calculations and processing that no human could do alone, humans remain the arbiters of a system’s inputs, design, and outcomes. Importantly, the final decision to put an algorithmic system on the market belongs to the technology’s designers and the company behind it.
Critically, algorithms do not make mistakes; humans do. Especially in cases of technological redlining, assigning responsibility is critical for quickly remediating discrimination and assuring the public that proper oversight is in place. In addition to clearly assigning responsibility for the implementation of decisions made by algorithms, accountability must be grounded in enforceable policies that begin with auditing in pre- and post-market trials, as well as standardized assessments of any potential harms. Currently, it is difficult to get technology corporations to answer for the harms their products have caused.
Below we outline how journalists, in consultation with academics and whistleblowers, have taken up the role of auditing algorithms, and how the lack of enforceable regulation has led to a deficit in consumer protections.
Auditing by Journalists
Currently, journalists are an important watchdog for algorithmic bias. Data journalism blends investigative methods from journalism with technical know-how to provide clear and accurate reporting on computational topics. While many algorithms are proprietary, skilled journalists can use “reverse-engineering” techniques to probe what’s inside the black box by systematically pairing inputs with outputs. A second approach is collaborative research with academics and whistleblowers. Particularly for personalization algorithms, which can be difficult or impossible to parse from the perspective of an individual user’s account, peer-sourced research can reveal patterns that give clues about how the underlying algorithms work.
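To make the input–output approach concrete, the sketch below shows one minimal form it can take: matched-pair testing, in which an auditor submits records that are identical except for a single attribute and compares the system’s outputs. This is a simplified illustration, not any particular newsroom’s method; the `score_applicant` function, the field names, and the ZIP codes are hypothetical stand-ins for whatever black-box system and test data an auditor would actually use.

```python
# A minimal matched-pair audit of a black-box scoring system.
# `score_applicant` is a hypothetical placeholder for the opaque
# system under audit (e.g., a web form or API the auditor can query).

from statistics import mean

def score_applicant(record: dict) -> float:
    """Placeholder for the black box: in a real audit, this would
    submit `record` to the live system and return its output."""
    raise NotImplementedError("query the system under audit here")

def matched_pair_audit(base_records, attribute, value_a, value_b):
    """Submit pairs of records identical except for one attribute
    and report the average difference in the system's outputs."""
    gaps = []
    for base in base_records:
        record_a = {**base, attribute: value_a}
        record_b = {**base, attribute: value_b}
        gaps.append(score_applicant(record_a) - score_applicant(record_b))
    return mean(gaps)

# Hypothetical usage: probe whether reported ZIP code alone shifts
# scores, a pattern consistent with technological redlining.
# baseline_profiles = [...]  # realistic test records built by the auditor
# gap = matched_pair_audit(baseline_profiles, "zip_code", "60601", "60624")
# print(f"Average score gap across matched pairs: {gap:+.3f}")
```

A consistent nonzero gap across many matched pairs is evidence of differential treatment, not proof of intent, which is why journalists typically pair such probes with documents, interviews, and expert review.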
Enforcement and Regulation
The governance of algorithms plays out on an ad hoc basis across sectors. In some cases, existing regulations are reinterpreted to apply to technological systems and guide behavior, as with Section 230 of the Communications Decency Act. These instances can be hotly contested, as algorithmic systems raise new issues not previously covered by the logic of existing precedents. In other cases, specific governing bodies are convened to set standards. For example, the Internet Governance Forum has been convened annually by the United Nations since 2006 and attempts to set non-binding guidance on such facets of the internet as the diversity of media content.
However, for accountability to be meaningful, it needs to come with appropriate governance structures. According to Florian Saurwein, Natascha Just, and Michael Latzer, governance is necessary because algorithms pose certain risks, such as the violation of privacy rights and social discrimination (Saurwein et al., 2015). These risks need to be dealt with by the appropriate governance structure, which currently involves little oversight by states. Governance can occur through market and design solutions, such as product innovation that mitigates risk or consumers’ ability to swap risky products for ones they deem safer. Governance can also come from industry self-regulation, where company principles and collective decision-making favor public-interest concerns. Last is traditional state intervention through mechanisms such as taxes and subsidies for certain kinds of algorithmic behavior. The appropriate structure must be matched to the context at hand to ensure that accountability mechanisms are effective.
Because of the ad hoc nature of self-governance by corporations, few protections are in place for those most affected by algorithmic decision-making. Many of the processes for obtaining data, aggregating it, assembling it into digital profiles, and applying it to individuals are corporate trade secrets, placing them beyond the control of citizens and regulators. As a result, there is no agency or body currently in place that develops standards, conducts audits, or enforces the necessary policies.
While law has always lagged behind technology, in this instance technology has become de facto law affecting the lives of millions, a context that demands that lawmakers create policies for algorithmic accountability to ensure these powerful tools serve the public good.