TLDR
6-month design and research engagement for AI-assisted child welfare casework. Remote field research with state agencies in Pennsylvania, Illinois, and Hawaii during COVID. Core insight: caseworkers need grounding in facts, not automation of judgment. Refined search interfaces for pattern recognition across caseloads of 30+ cases. Established a research practice with vulnerable populations.
Challenge: Help overburdened caseworkers make better-informed decisions about family separation without replacing human judgment with algorithmic authority.
The Child Welfare Decision Problem
Child welfare caseworkers make decisions that determine whether children stay with families or enter protective custody. A wrong call in either direction causes harm: unnecessary removal traumatizes families; leaving children in dangerous situations can be fatal.
The cognitive load challenge:
- Caseworkers manage 30+ active cases simultaneously
- Each case has years of documentation (case notes, medical records, school reports, court filings)
- Finding specific details requires searching hundreds of pages under time pressure
- Decisions happen in real time: during home visits, in court, with supervisors waiting
- High burnout drives extraordinary turnover, disrupting continuity of care
The information problem: Social determinants of health (housing stability, food security, healthcare access, economic stress) predict family crisis better than incident reports alone. But this information is buried in unstructured case notes. Caseworkers couldn’t see patterns across their caseload or surface relevant history quickly.
The ethical stakes: Most caseworkers want to keep families together—research shows family preservation produces better outcomes when safe. But they need evidence to defend those decisions to judges and supervisors who default to removal as the “safe” choice.
Design tension: Authority ↔ Augmentation
AI systems can automate decisions (replacing human judgment) or augment capacity (helping humans decide better). In high-stakes contexts—child welfare, medical diagnosis, criminal justice—this choice has consequences. The system’s role here is grounding caseworkers in facts, not directing their judgment.
What I Worked On
Search Interface for Pattern Recognition
Problem: Caseworkers needed to query caseloads for specific patterns but weren’t data analysts.
Examples:
- All families with housing instability in past 6 months
- Cases where substance abuse treatment recommended but not completed
- Children with multiple school changes in current academic year
Solution: Refined search interfaces supporting structured queries. Made complex filtering accessible to users working under time pressure. Progressive disclosure: simple queries first, advanced options available.
Design challenge — Complex queries accessible to non-technical users under stress
Challenge: caseworkers think in domain language (“families facing eviction”), not database logic (“housing_status == ‘unstable’ AND event_date > 6_months_ago”).
The interface needed to translate between caseworker mental models and system capabilities: natural-language-inspired filters, saved query templates for common patterns, and results that show why cases matched (which specific factors triggered the pattern).
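To make this concrete, here is a minimal sketch of such a translation layer, assuming a hypothetical event schema. The template names, fields, and matching logic are illustrations, not Augintel’s actual implementation:

```typescript
// Hypothetical sketch: saved query templates that translate caseworker
// language ("families facing eviction") into structured filters, and
// report which factors made each case match. All names and fields are
// illustrative, not Augintel's actual schema.

interface CaseEvent {
  caseId: string;
  factor: "housing_instability" | "food_insecurity" | "missed_treatment";
  date: Date;
  sourceNote: string; // provenance: which case note surfaced this factor
}

interface QueryTemplate {
  label: string; // shown to caseworkers in their own domain language
  matches: (events: CaseEvent[], now: Date) => CaseEvent[];
}

const MONTH_MS = 30 * 24 * 60 * 60 * 1000;

// "All families with housing instability in the past 6 months"
const housingInstability: QueryTemplate = {
  label: "Families facing housing instability (past 6 months)",
  matches: (events, now) =>
    events.filter(
      (e) =>
        e.factor === "housing_instability" &&
        now.getTime() - e.date.getTime() < 6 * MONTH_MS
    ),
};

// Run a template over a caseload and explain *why* each case matched,
// so caseworkers can verify the pattern instead of trusting it.
function runTemplate(template: QueryTemplate, events: CaseEvent[], now: Date) {
  const hits = template.matches(events, now);
  const byCase = new Map<string, string[]>();
  for (const hit of hits) {
    const reasons = byCase.get(hit.caseId) ?? [];
    reasons.push(`${hit.date.toISOString().slice(0, 10)}: ${hit.sourceNote}`);
    byCase.set(hit.caseId, reasons);
  }
  return byCase; // caseId -> matching factors, each with provenance
}
```

Saved templates keep the default path simple; progressive disclosure means advanced options stay available without forcing anyone to write the underlying logic.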
Field research insight — The courtroom moment
One caseworker took Augintel into court on her work phone. The judge asked detailed questions about the family’s housing history and previous interventions—the kind of questions that normally require flipping through case files.
She answered directly from the phone. The judge ruled to keep the child with the family rather than ordering protective custody.
Not a “success metric”—a glimpse of what the work was actually for: helping caseworkers access accurate information quickly, under pressure, in service of decisions affecting children’s lives.
Data Visualization for Social Determinants
Problem: The system analyzed case notes for social-determinant patterns. The information existed but wasn’t navigable.
Explored:
- Sparklines showing family stability over time
- Indicators for housing/food/healthcare concerns
- Timeline views making intervention sequences visible
- Pattern highlighting in narrative case notes
Goal: Make patterns visible that might be invisible in narrative notes. Not to tell caseworkers what patterns mean, but to make information navigable so their expertise could interpret it.
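As one concrete illustration of the sparkline idea, here is a minimal sketch. It assumes a monthly stability score in [0, 1] has already been derived from case notes; that score, like every name here, is a hypothetical stand-in rather than the product’s actual model:

```typescript
// Hypothetical sketch: render a family's stability over time as a
// compact text sparkline. Assumes a monthly stability score in [0, 1]
// already derived from case notes; that scoring is out of scope here.

const BARS = ["▁", "▂", "▃", "▄", "▅", "▆", "▇", "█"];

function sparkline(monthlyStability: number[]): string {
  return monthlyStability
    .map((score) => {
      const clamped = Math.min(Math.max(score, 0), 1);
      return BARS[Math.round(clamped * (BARS.length - 1))];
    })
    .join("");
}

// Twelve months of one (invented) family's scores: the mid-year dip is
// visible at a glance, but interpreting why it happened stays with the
// caseworker.
console.log(
  sparkline([0.9, 0.9, 0.8, 0.6, 0.3, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.8])
); // "▇▇▇▅▃▂▃▅▅▆▇▇"
```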
Design principle — Transparency over optimization
Show caseworkers where information comes from, when it was recorded, and who documented it. Let them verify rather than trust the system’s analysis.
Housing instability might mean eviction, or fleeing domestic violence. The system preserves enough context that caseworkers can interpret patterns correctly. Make uncertainty visible—when the system isn’t confident, say so.
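A minimal sketch of what that implies for the data model, using a hypothetical Finding shape (none of these field names come from the actual system): every surfaced pattern carries its source excerpt, timestamp, author, and the system’s confidence, and weak signals are labeled as such.

```typescript
// Hypothetical sketch: every surfaced finding carries provenance and
// the system's confidence, so caseworkers can check the source note
// instead of trusting a bare label. Field names are illustrative.

interface Finding {
  label: string;        // e.g. "housing instability"
  excerpt: string;      // the case-note passage that triggered it
  recordedOn: Date;     // when the note was written
  documentedBy: string; // who wrote the note
  confidence: number;   // model confidence in [0, 1]
}

function renderFinding(f: Finding): string {
  // Surface uncertainty explicitly: a weak signal is flagged for
  // verification, never presented as settled fact.
  const hedge =
    f.confidence < 0.5 ? " (possible match, verify in the note)" : "";
  return (
    `${f.label}${hedge}\n` +
    `  "${f.excerpt}"\n` +
    `  recorded ${f.recordedOn.toISOString().slice(0, 10)} by ${f.documentedBy}`
  );
}
```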
Research Practice with Vulnerable Populations
Context: COVID required that all research be conducted remotely. Caseworkers were dealing with pandemic-related family stress, working from makeshift home offices, managing virtual court hearings, and assessing child safety through video calls instead of home visits.
Conducted:
- Contextual research with caseworkers in actual work environments (remotely)
- Interface testing with representative users
- Feedback loops between product development and field reality
- Documentation of workflows, pain points, decision-making contexts
What we learned:
Caseworkers want to keep families together. Most entered the field to help families, not separate them. They understood that family preservation produces better outcomes when safe. What they needed was evidence to defend those decisions.
Technology must work under worst-case conditions. Personal phones, working in cars between visits, inconsistent internet, real-time information needs during stressful interactions.
Cognitive load is extraordinary. 30+ active cases, hundreds of pages per case, time pressure, emotional burden.
Research approach — Understanding high-stakes decision-making
This wasn’t user testing—it was understanding how people make consequential decisions under stress with incomplete information.
Worked with agencies across multiple states. Built research capacity within the product team. Trained others to conduct contextual studies. The practice continued after the engagement ended.
Design Principles for High-Stakes AI
Transparency over optimization: Show sources, timestamps, documentation chain. Let users verify, not just trust.
Support skepticism: Make it easy to question findings, look deeper, check sources. Caseworkers need to trust their judgment, not defer to algorithms.
Preserve context: Social determinants aren’t just data points—they’re family circumstances. The system preserves enough context for correct interpretation.
Make uncertainty visible: When the system isn’t confident about a pattern, say so. Don’t present weak signals as strong evidence.
Authority ↔ Augmentation: The system grounds users in facts. It doesn’t automate judgment. Humans remain responsible for decisions.
Design for worst-case conditions: Users work under stress, with limited technology access, in time-critical situations. Interface must work when conditions are bad.
What This Work Taught Me
High-stakes AI requires different design thinking. When consequences are severe, the relationship between human judgment and computational assistance becomes critical. You can’t optimize for efficiency at the expense of verification. You can’t hide uncertainty to create user confidence.
Vulnerable populations deserve most careful design. Families in case files didn’t choose government documentation of their lives. The least we owe them: interfaces helping caseworkers make thoughtful, contextualized decisions rather than reactive ones.
Burnout is a design problem. Caseworker turnover hurts families. Better information access reducing cognitive load isn’t just usability—it’s harm reduction for both workers and families they serve.
Research with domain experts is humbling. Caseworkers know infinitely more about child welfare than designers. Research goal: learn enough about their reality to build tools that actually help.
Background Context
Augintel emerged from Stewards of Change, an organization championed by Daniel Stein and focused on integrating social determinants of health into child welfare practice.
Theory: better information about housing, food security, healthcare access, and economic stress helps caseworkers understand family contexts more completely. That leads to decisions that keep families together when safe—which research shows produces better outcomes for children.
I joined for a 6-month engagement (late 2021 to mid-2022) to lead design and research as the product moved from pilot programs to broader state agency adoption.
The “augmented intelligence” terminology matters. It signals that humans remain responsible for decisions. The system doesn’t tell caseworkers what to do—it helps them see patterns they might miss while managing 30+ cases under the cognitive and emotional load that drives high turnover.