Making Public-Sector AI Systems Accountable
Key Impact
🏆 2025 LA Press Club Award (Technology Reporting, 3rd place)
📋 Solutions Journalism Network Recognition for "nuanced and thoughtful" coverage
🤖 AI Ethics Analysis examining 580+ data points in government systems
🎯 Policy Framework addressing AI governance for vulnerable populations
An award-winning investigation into how artificial intelligence systems operate in government services, translating complex technical processes into accessible analysis while weighing innovation potential against fundamental questions of privacy, consent, and algorithmic fairness.
Translating Technical AI Systems Into Public Understanding
Government AI systems affect millions of people, yet public understanding of how they work remains limited. The challenge: How do you investigate technical systems that operate as "black boxes" while centering human impact? This required explaining algorithmic processes, examining ethical implications, and evaluating both the promises and risks of AI-driven social services—all while making complex technical concepts accessible to policymakers, advocates, and affected communities.
Key Insight
The most effective technology policy communication balances technical accuracy with human stories, showing how algorithms affect real people's lives rather than treating technology as an abstraction.
Research Strategy & Stakeholder Coordination
Technical Investigation
Systems Analysis: Decoded how 580+ data points from hospitals, jails, and social services create risk predictions
Human-Centered Reporting: Embedded with social workers to understand daily AI tool usage and limitations
Expert Consultation: Interviewed national AI ethics researchers and civil rights advocates across multiple institutions
Policy Context: Analyzed privacy laws and civil rights protections across 19 states for comparative framework
Strategic Stakeholder Management
Government Access: Secured transparency from L.A. County Department of Health Services on sensitive AI operations
Academic Expertise: Coordinated perspectives from ethics researchers at Seton Hall University and Touro University
Frontline Workers: Built trust with social workers willing to discuss both successes and concerns about AI tools
Community Accountability: Balanced client privacy with public right to understand AI system impacts
Impact & Professional Recognition
🏆 2025 LA Press Club Award (Technology Reporting, 3rd place)
📋 Solutions Journalism Network Recognition: Selected for "nuanced and thoughtful" coverage of "complicated subject and complicated approach"
📰 Editorial Excellence: Recognized for examining complex AI systems from social worker, client, expert, and policy perspectives
🌍 National Relevance: Framework applicable to communities nationwide considering similar AI implementations
Policy Contribution
Transparency Framework: Detailed examination of consent processes and privacy protections in government AI
Accountability Mechanisms: Analysis of racial bias monitoring and community oversight requirements
Balanced Assessment: Documentation of both successes (3.5x accuracy in risk identification) and limitations (62% of cases still missed)
Ethical Guidelines: Provided framework for evaluating AI systems serving vulnerable populations
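The pairing of a 3.5x accuracy gain with a 62% miss rate can seem contradictory; a brief sketch shows how both can hold at once. The cohort size and baseline hit rate below are hypothetical assumptions chosen only to illustrate the arithmetic, not the county's actual figures.

```python
# Illustrative sketch: a risk model can be several times more accurate
# than the prior approach while still missing most high-risk cases.
# All inputs here are hypothetical, not reported data.

high_risk_cases = 1000                   # assumed true high-risk cases in a cohort
baseline_hit_rate = 0.108                # assumed fraction the prior approach caught
model_hit_rate = baseline_hit_rate * 3.5 # "3.5x accuracy" applied to the baseline

identified = high_risk_cases * model_hit_rate
missed = high_risk_cases - identified

print(f"Identified: {identified:.0f} of {high_risk_cases}")
print(f"Missed: {missed:.0f} ({missed / high_risk_cases:.0%})")
```

Under these assumed numbers the model catches roughly 38% of high-risk cases, so about 62% are still missed even with a 3.5x improvement over the baseline.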
Strategic Communication Approach
Narrative & Ethical Framing
Human-centered approach rooted in frontline experiences
Balanced voice across innovation and risk
Emphasis on privacy, bias, and civil rights
Technical Translation
Used storytelling to explain 580+ data points and algorithm behavior
Connected technical process to regulatory frameworks
Created accessible language for policymakers and advocates
Related AI & Technology Policy Work
Consumer-Facing Tech Content
These editorial stories were commissioned by business clients across sectors through Stacker Studio to position them as thought leaders on AI, innovation, and emerging tech. My role was to craft accessible, journalistic narratives that explained complex topics for general audiences while aligning with brand messaging goals, such as excluding competitors and reinforcing the client's credibility in the space.
Each piece was distributed nationally through Stacker's media syndication network, reaching nearly 4,000 partners.
👉 Emerging AI trends for 2025
👉 Why businesses are skeptical of AI
👉 AI transforming HR
👉 Biggest AI stories of 2024
PBS Environmental Tech Investigation
Investigative reporting on broken sensor tech and government accountability in air quality measurement.
👉 Read the story
TikTok Algorithmic Bias Research (MSc Thesis)
Mixed-method research on identity-based suppression among Black LGBTQ+ creators; emphasis on user behavior, transparency, and systemic bias.
👉 Explore thesis