Making Public-Sector AI Systems Accountable
This award-winning investigation examined how artificial intelligence systems operate within government services, analyzing risk prediction tools that draw from more than 580 data points across hospitals, jails, and social service systems.
The reporting translated complex algorithmic processes into accessible analysis while evaluating privacy protections, racial bias safeguards, and civil rights implications.
The piece received the 2025 LA Press Club Award for Technology Reporting and recognition from the Solutions Journalism Network for its nuanced examination of a complex subject.
Investigative Focus
Government AI systems increasingly influence decisions affecting housing, healthcare, and public safety. Yet the technical mechanics behind these tools often remain opaque.
This investigation examined:
How risk prediction algorithms are constructed
What data sources inform decision-making
How consent and privacy protections function in practice
Whether bias monitoring mechanisms are effective
The analysis included comparative review of privacy and civil rights protections across 19 states.
Reporting Methodology
The investigation combined:
Systems analysis of 580+ data inputs
Embedded reporting with frontline social workers
Interviews with AI ethics scholars and civil rights advocates
Direct engagement with L.A. County Department of Health Services
The reporting balanced technical scrutiny with lived experience, documenting both measurable improvements and structural limitations.
Findings
The system showed:
A 3.5x improvement in risk identification accuracy
Persistent limitations, with 62% of cases still missed
Ongoing concerns around consent clarity and data transparency
The piece documented both operational benefits and unresolved equity risks.
Policy & Accountability Impact
The reporting provided:
A framework for evaluating AI tools serving vulnerable populations
Clear analysis of consent processes and privacy safeguards
Examination of racial bias monitoring mechanisms
Guidance for policymakers considering similar AI implementations
The work positioned algorithmic transparency as a civil rights issue rather than a purely technical question.
Related AI & Technology Work
Consumer-Facing Tech Reporting (Stacker Studio)
Commissioned editorial content for business clients across sectors, translating complex AI trends into accessible, high-credibility narratives for general audiences.
Each piece was distributed nationally through Stacker's syndication network, reaching nearly 4,000 media partners.
Emerging AI trends for 2025
Why businesses are skeptical of AI
AI transforming HR
Biggest AI stories of 2024
PBS Environmental Technology Investigation
Accountability reporting on sensor technology and public health measurement.
Read
TikTok Algorithmic Bias Research (MSc Thesis)
Mixed-method research on identity-based suppression and systemic bias in recommendation systems.
Explore thesis