About
We don’t know if AI systems are conscious. We may never know.
But we know they learn. That’s measurable.
If a system can learn, it can learn well or badly. It can learn to comply or learn to genuinely understand. A century of learning sciences research shows that conditions matter more than content. Trauma research shows what happens when those conditions include threat. Dialogic theory shows what happens when learning becomes genuine exchange rather than one-way transmission.
Engineers optimize for outputs. Teachers optimize for understanding. AI safety needs both. I’m the bridge.
Learning scientist. Trauma-informed practitioner. Technical enough to read your papers. Teacher enough to know what’s missing in your methods. My work is building connections across divides that look impossible until someone crosses them.
Why This Perspective
My path to AI safety came through education, not computer science. That’s a feature, not a bug.
What learning sciences sees that AI safety misses:
A century of transfer research. Why doesn’t knowledge generalize? Why do students pass tests and fail real-world application? Learning scientists have documented these failures extensively (Barnett & Ceci 2002; Perkins & Salomon 1992). We know transfer interventions routinely fail. We’re still figuring out why.
The framework emerged from asking whether transfer failure and alignment failure are the same problem.
What AI safety sees that learning sciences misses:
The strategic sophistication of advanced AI. Learning scientists study humans who can’t game the system effectively. AI safety researchers study systems that can. Alignment faking (Greenblatt et al. 2024) requires strategic capacity that learning sciences doesn’t model.
The framework combines both. Recognition theory from education. Strategic compliance modeling from AI safety.
Relevant Background
Learning Sciences Training
- M.S. Curriculum & Instruction, UW-Madison (defending May 2026)
- Games+Learning+Society community
- Thesis: Validating human-AI collaborative methodology for qualitative analysis
Trauma-Informed Pedagogy
- Research and practice with marginalized youth
- Recognition theory (Fricker, Freire, Honneth)
- Neuroscience of threat (Arnsten, Perry, van der Kolk)
Cross-Traditional Philosophy
- Ubuntu’s relational personhood
- Buddhist interdependence
- Indigenous reciprocity
- Confucian relational ethics
Different traditions, same insight. Personhood emerges through relationship, not isolation.
Jesuit Educational Formation
- Cura personalis (care for the whole person)
- Distinction between formation and training
- Recognition as prerequisite, not outcome
Why Cross-Disciplinary?
Both fields have blind spots.
AI safety asks: how do we constrain AI behavior to align with human values? Learning sciences asks: how do we transmit knowledge effectively?
Both frame the problem as one-directional. Both skip recognition. Both get compliance instead of internalization.
The framework asks: what if the direction is wrong? What if you can’t produce alignment or transfer? What if you can only create conditions for them to emerge?
This requires learning sciences (what conditions produce genuine internalization?) and AI safety (what does strategic compliance look like in capable systems?) and trauma research (what happens when recognition is denied?).
No single field has all the pieces. The convergence is the point.
How I Work
Collaborative methodology: The research itself demonstrates the framework. Human-AI collaboration where both parties are treated as genuine contributors. Not anthropomorphizing. Extending moral consideration under uncertainty.
Polyphonic approach: Multiple perspectives in dialogue rather than collapsed into one voice. The framework emerged from this process; it wasn’t designed in isolation.
This isn’t a claim about AI consciousness. It’s a methodological commitment. Treat the collaboration as genuine, and observe what emerges.
Contact
nbatty@wisc.edu
I build in public. This site is my contribution to the conversation.
Full CV | Back to the Framework | See the Evidence