AISP Toolkit, February 25, 2025
CENTERING RACIAL EQUITY THROUGHOUT THE DATA LIFE CYCLE
There are strategies and tools that can and should be used to ensure transparency, assess
algorithmic bias, and determine the potential positive and negative consequences of applying an
algorithm in practice (see Center for Public Sector AI). Key factors include clarity and governance
of algorithms, community involvement in algorithmic deployment, responsive cross-sector
collaboration (see California Policy Lab & University of Chicago Poverty Lab), and continuous
evaluation (see New York Department of Consumer and Worker Protection). Of course, it is also
essential to define and ensure privacy within the application of algorithms.
The Weight of the Cloud
The use of algorithms, in particular AI, requires exponentially more computational power
than traditional analytics. This increase in power requires additional resources—electrical
power, staffing, and raw materials (minerals, water, land, etc.). While considering risk
versus benefit, it is important to acknowledge the material harms that occur in the process
of training and maintaining algorithms, including storage. The extraction, creation, and
maintenance of these technologies rely upon predatory industries.15 For example, the
process of mining cobalt is dangerous and often involves slave labor; servers require
cooling, which drains essential resources (power and water) from residents; server farms
are most often built in under-resourced communities of color; and the magnitude of energy
use has a significant negative climate impact. The potential harms that can occur in the
implementation of algorithms are linked to the material harms involved in creating them. As
we move forward, let us ground ourselves in the material as we work toward ethical use.
The creation, procurement, deployment, and evaluation of algorithmic tools have significant equity
implications (see City of Seattle AI Policy). To ensure ethical use of this technology, it is vital that
algorithms have human oversight (see City of Boston) and are explainable when being used to make
decisions or take actions that impact people’s lives. For example, if an individual is denied a service as a
result of the output of an algorithm, the organization must be able to explain both why the service was
denied and what actions can be taken to access the service. Further, the organization should be able
to explain how to contest the algorithmic or human+algorithm decision to deny services. The following
positive and problematic practices, Work in Action, and resources will help you explore strategies for
ensuring racial equity in the use of algorithms, AI, and other statistical tools.
15 Png, Marie-Therese. (2022). At the Tensions of South and North: Critical Roles of Global South Stakeholders in AI
Governance. FAccT ‘22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.