Positive & Problematic Practices: Racial Equity in the Use of Algorithms & Artificial Intelligence
POSITIVE PRACTICE: Clearly defining the relevant terms (e.g., algorithm, AI, predictive analytics) and how they are being used by the data effort.
PROBLEMATIC PRACTICE: Failing to build understanding of the algorithmic tools being applied, including what problem they aim to solve and the potential benefits and risks of their application.
POSITIVE PRACTICE: Involving community partners in early conversations about the purpose of algorithms and AI to ensure alignment with established priorities.
PROBLEMATIC PRACTICE: Deploying algorithms in high-stakes decision-making (e.g., determining program eligibility and benefits) without careful discernment by community partners in the data governance process.
POSITIVE PRACTICE: Procuring technical vendors that align with the data effort's values and guiding principles (e.g., include evidence of applicable "positive practices" in selection criteria).
PROBLEMATIC PRACTICE: Using technical vendors purely based on legacy contracts, ease of use, or cost when they do not demonstrate understanding of the practical and ethical implications of their tools.
(See Narrowing Technology Solutions for IDS Initiatives, 2022.)
POSITIVE PRACTICE: Clearly articulating roles and responsibilities for oversight of algorithm and AI development, implementation, and evaluation (e.g., managing data governance, including data protection and quality assurance).
PROBLEMATIC PRACTICE: Not providing clear, iterative, and authentic communication channels for input regarding the use of algorithms and AI.
POSITIVE PRACTICE: Using algorithms and AI to provide meaningful services and supports to people represented in the data.
PROBLEMATIC PRACTICE: Using algorithms and AI for increased surveillance, punitive action, monitoring, "threat" amplification via risk scores, or other uses with no clear benefit to people represented in the data.
POSITIVE PRACTICE: Shifting practice from human-in-the-loop to human-led algorithm use (i.e., humans oversee the entire process and can override an algorithm at any point; a minimal sketch of this distinction follows the next item).
PROBLEMATIC PRACTICE: Believing that technology alone solves social problems (i.e., tech solutionism) while neglecting the importance of people in leveraging technological tools to enact change.
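Below is a minimal Python sketch of the human-led pattern described above: the algorithm's output is advisory only, no action is ever taken by default from the model's suggestion, and a person's decision always controls. Every name here (Recommendation, human_led_decision, the eligibility example) is hypothetical and for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    applicant_id: str
    score: float           # model output, e.g., a predicted eligibility score
    suggested_action: str  # what the algorithm proposes

def human_led_decision(rec: Recommendation, reviewer_decision: Optional[str]) -> str:
    """Human-led use: the algorithm only suggests; a person decides.

    The reviewer sees the recommendation, but their decision is final,
    and the absence of a reviewer decision blocks action rather than
    falling back to the model's suggestion.
    """
    if reviewer_decision is None:
        return "pending human review"  # never auto-apply the model output
    return reviewer_decision           # the human choice overrides the algorithm

# Example: the model suggests denial, but the reviewer approves.
rec = Recommendation("A-102", score=0.31, suggested_action="deny")
print(human_led_decision(rec, reviewer_decision="approve"))  # -> approve
print(human_led_decision(rec, reviewer_decision=None))       # -> pending human review
```

By contrast, a human-in-the-loop setup typically applies the model's suggestion unless a reviewer intervenes; the human-led version above inverts that default.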
POSITIVE PRACTICE: Being transparent about the use of algorithms and AI in analyses, decision-making, or other outputs (e.g., describing what data drives an algorithm and how it was tested and validated, citing the use of generative AI in report writing, identifying which department oversees decisions made by automated decision-making systems). A minimal documentation sketch follows the next item.
PROBLEMATIC PRACTICE: Relying on "black box" or proprietary algorithms or AI that do not allow for transparency or replication.
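One lightweight way to put this kind of transparency into practice is a structured record, often called a "model card," covering the details the positive practice names: the data behind the algorithm, how it was tested and validated, any generative AI use, and who oversees it. The sketch below is hypothetical; every field name and value is illustrative, not a standard schema or a real system.

```python
# A minimal, hypothetical "model card" capturing the transparency details
# named above; field names and values are illustrative only.
model_card = {
    "name": "benefits_eligibility_screener",  # hypothetical system
    "purpose": "flag applications for expedited human review",
    "training_data": "2019-2023 application records; see data dictionary",
    "validation": "held-out 2024 cohort; error rates reported by race/ethnicity",
    "known_limitations": "undercounts applicants with thin administrative records",
    "oversight": "human services data governance committee",
    "generative_ai_use": "none in scoring; AI-assisted drafting cited in reports",
    "human_role": "human-led: staff decide; model output is advisory only",
}

# Print the card so it can be published alongside any output the model informs.
for field, value in model_card.items():
    print(f"{field}: {value}")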