CENTERING RACIAL EQUITY THROUGHOUT THE DATA LIFE CYCLE
Positive practice: Recognizing that all data are imperfect, while continually improving the quality of data feeding algorithms, AI, or other tools.

Problematic practice: Building algorithms based on data that reflect bias, disinformation, or power imbalances (e.g., criminal records that reflect disproportionate policing of low-income communities) or have other known data integrity issues (e.g., facial recognition and biometric data).
Positive practice: Using an “Algorithm / AI Red Team” to pressure-test new tools for potential harms, equity issues, and worst-case scenarios before deployment.

Problematic practice: Taking a “move fast and break things” approach to developing algorithms and AI when there are implications for social welfare (i.e., planning to remediate only after issues are identified).
Positive practice: Developing clear metrics for algorithm and AI performance and regularly auditing these tools for fairness, bias, reliability, sustainability, transparency, and explainability (a minimal audit sketch follows below).

Problematic practice: Not communicating clearly about how people’s data are used by algorithms and AI, and about the implications for individual-level privacy.
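To illustrate what one piece of a recurring fairness audit might look like in practice, here is a minimal sketch in Python. It assumes a hypothetical decision log of (group, decision) pairs; the group labels, the four-fifths threshold, and the helper names are illustrative assumptions, not part of the toolkit.

```python
from collections import defaultdict

def selection_rates(records):
    """Favorable-decision rate per group from (group, decision) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, decision in records:
        counts[group][0] += int(decision)
        counts[group][1] += 1
    return {g: favorable / total for g, (favorable, total) in counts.items()}

def disparate_impact_ratio(records):
    """Lowest group selection rate divided by the highest.
    A common audit heuristic (the "four-fifths rule") flags ratios
    below 0.8 for review; it is a screen, not a verdict."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: (group label, 1 = favorable decision)
log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(log))         # ≈ {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(log))  # 0.5 -> flagged for closer review
```

A real audit would run such checks on a schedule, slice results by intersecting demographics, and pair them with the reliability, transparency, and explainability reviews named above.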
Positive practice: Conducting impact assessments of algorithms to examine intended and unintended consequences and disparities in their application, compare outcomes with human decision-making, and document changes based on the findings.

Problematic practice: Having no process to challenge decisions or outputs made by algorithms, AI, or statistical tools, or to seek redress for any harms.
Positive practice: Comparing algorithmic impact results to human decision-making to evaluate both human and automated bias (see the second sketch below).

Problematic practice: Assuming automation yields more or less biased results without interrogating past processes.
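To make that comparison concrete, a companion sketch (again with hypothetical logs and helper names) contrasts group-level outcome rates under human review and under an automated tool, so that neither source of decisions is presumed less biased by default.

```python
from collections import defaultdict

def rates_by_group(records):
    """Favorable-decision rate per group from (group, decision) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, decision in records:
        counts[group][0] += int(decision)
        counts[group][1] += 1
    return {g: favorable / total for g, (favorable, total) in counts.items()}

def compare_decision_sources(human_log, model_log):
    """Print per-group favorable-decision rates and the between-group
    gap for human and automated decisions on the same caseload."""
    for source, log in (("human", human_log), ("model", model_log)):
        rates = rates_by_group(log)
        gap = max(rates.values()) - min(rates.values())
        print(f"{source}: rates={rates}, between-group gap={gap:.2f}")

# Hypothetical logs of the same cases decided both ways
human_log = [("A", 1), ("A", 0), ("B", 0), ("B", 0)]
model_log = [("A", 1), ("A", 1), ("B", 1), ("B", 0)]
compare_decision_sources(human_log, model_log)
```

If both sources show a similar between-group gap, the bias likely lives in the underlying data or process rather than in the automation itself, which is exactly why past processes need interrogating.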