CCAI Lab partners with communities to design, audit, and evaluate AI systems used in social programs and public services. We bring qualitative research and lived experience to the center of how these tools get built, deployed, and measured.
Impacted communities help shape deployments and design the evaluations that determine whether systems succeed.
Lived experience and qualitative research are data. We build methods that center what metrics may miss.
Governments and service providers are responsible for how AI is used in our social services. Our research builds the community-driven oversight to hold that use accountable.
Why we exist
AI is already making high-stakes decisions in child welfare, housing, healthcare, and education. The people most affected by these systems are rarely included in the conversations about how they're designed, deployed, or evaluated.
We know that AI is being deployed faster than we can understand its human impact. How can we build transparency and trust in our institutions when they rely on these tools for essential human services? Real safety comes from understanding lived experience with AI tools and adapting systems in response. CCAI Lab exists to do that work. We treat community knowledge as essential infrastructure for safety, not an optional add-on.
Focus areas
We focus on AI deployments in sectors where power imbalances are acute and consequences of failure are borne by the most vulnerable.
Examining how teachers and administrators use AI tools in schools, with a focus on what educators, parents, and students themselves see and experience.
Documenting how algorithmic decision-support tools operate in child protective services and benefits determinations, and offering paths to transparency and accountability in decisions that affect families' wellbeing.
Evaluating AI diagnostic and triage tools deployed in under-resourced healthcare settings, with particular attention to how bias or disconnected care compounds across already inequitable systems.
Our commitments
These aren't aspirational values. They're structural commitments that shape every research project, partnership, and publication we produce.
Evaluation criteria are developed with, not for, the people whose lives are shaped by these systems. Metrics that matter only to funders or developers are not sufficient.
Interviews, ethnographies, and testimony are evidence. We build methodologies that treat qualitative insight as essential, not illustrative.
We do not accept funding from companies whose AI systems we study. Our findings serve the public interest, not the interests of developers seeking favorable reviews.
We actively recruit researchers, advisors, and community partners who reflect the demographics of those most affected by AI in public services — particularly women and people of color.
Research that only experts can read cannot drive community accountability. We commit to producing work that is accessible to the people it's about.
Get involved
We're looking for community partners, researchers, policy advocates, and funders who share our commitment to centering lived experience in how AI gets designed, deployed, and evaluated.
We especially welcome outreach from community organizations working in education, social services, and healthcare that want a design partner in exploring how AI is affecting their students, clients, and patients.