CCAI.
Community Centered AI Lab

AI that works
for the people
it's meant to serve

CCAI Lab partners with communities to design, audit, and evaluate AI systems being deployed in social programs and public services. We bring qualitative research and lived experience into the center of how these tools get built, deployed, and measured.

01

Community before deployment

Impacted communities help shape deployments and design the evaluations that determine whether a system succeeds.

02

Qualitative evidence matters

Lived experience and qualitative research are data. We build methods that center what metrics may miss.

03

Public accountability

Governments and service providers are responsible for how AI is used in our social services. Our research builds community-driven oversight.


Real-world AI safety
starts with the community

AI is already making high-stakes decisions in child welfare, housing, healthcare, and education. The people most affected by these systems are rarely included in the conversations about how they're designed, deployed, or evaluated.

We know that AI is being deployed faster than we can understand its human impact. How can we build transparency and trust in our institutions when they rely on these tools for essential human services? Real safety comes from understanding lived experience with AI tools and adapting to what that experience reveals. CCAI Lab exists to do that. We treat community knowledge as essential infrastructure for safety, not an optional add-on.

Where AI meets
public life

We focus on AI deployments in sectors where power imbalances are acute and consequences of failure are borne by the most vulnerable.

Education

AI in Schools & Learning Platforms

Examining how AI tools are used by teachers and administrators in schools, with a focus on what teachers, parents, and students themselves see and experience.

Social Services

Benefits, Child Welfare & Housing

Documenting how algorithmic decision-support tools operate in child protective services and benefits determinations, and offering paths toward transparency and accountability in decisions that affect families' wellbeing.

Healthcare

Clinical & Public Health AI

Evaluating AI diagnostic and triage tools deployed in under-resourced healthcare settings, with particular attention to how bias or disconnected care compounds across already inequitable systems.

Guiding
principles

These aren't aspirational values; they're structural commitments that shape every research project, partnership, and publication we produce.

01

Affected communities define success

Evaluation criteria are developed with — not for — the people whose lives are shaped by these systems. Metrics that matter to funders or developers are not sufficient.

02

Qualitative knowledge is rigorous knowledge

Interviews, ethnographies, and testimony are evidence. We build methodologies that treat qualitative insight as essential, not illustrative.

03

Independence from AI industry

We do not accept funding from companies whose AI systems we study. Our findings serve the public interest, not the interests of developers seeking favorable reviews.

04

Center those most impacted

We actively recruit researchers, advisors, and community partners who reflect the demographics of those most affected by AI in public services — particularly women and people of color.

05

Legibility as accountability

Research that only experts can read cannot drive community accountability. We commit to producing work that is accessible to the people it's about.

Let's build
together

We're looking for community partners, researchers, policy advocates, and funders who share our commitment to centering lived experience in how AI gets designed, deployed, and evaluated.

We especially welcome outreach from community organizations working in education, social services, and healthcare who want a design partner in exploring how AI is impacting their students, clients, and patients.

[email protected]