Now

Last updated: February 2026

Working on

Researcher at Amnesty International Kenya, working on tech-facilitated repression — how surveillance tools and digital tactics are used to silence activists and civil society.

Taking the ENAIS AI Safety Collab program — a structured deep dive into technical AI safety with a cohort of researchers from across the continent.

Deep in the BlueDot technical AI safety curriculum. The focus right now is on interpretability — specifically how to extract decision-relevant features from transformer internals without drowning in noise.
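As a toy version of what "extracting a decision-relevant feature" can mean in practice (my framing, not the curriculum's): fit a linear probe on activations and check whether a feature is linearly decodable. Everything below is synthetic — made-up "residual stream" vectors rather than real model activations — just to show the shape of the technique.

```python
# Toy linear-probe sketch: synthetic activations carry a hidden feature
# direction plus noise; a logistic probe tries to recover the label.
# All data here is fabricated for illustration.
import math
import random

random.seed(0)
DIM = 8

# A hidden "feature direction" baked into the synthetic activations.
feature_dir = [random.gauss(0, 1) for _ in range(DIM)]

def sample(label):
    """One fake activation vector: +/- the feature direction, plus noise."""
    noise = [random.gauss(0, 0.5) for _ in range(DIM)]
    scale = 1.0 if label else -1.0
    return [scale * f + n for f, n in zip(feature_dir, noise)]

data = [(sample(y), y) for y in ([1] * 50 + [0] * 50)]

# Logistic-regression probe trained with plain gradient descent.
w = [0.0] * DIM
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))
        g = p - y  # gradient of log-loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

acc = sum(
    ((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == bool(y))
    for x, y in data
) / len(data)
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy says the feature is linearly readable from the activations; the hard part on real models is ruling out that the probe is picking up a correlated confound rather than the feature itself.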

Also leading partnerships at GDG Pwani, which means a lot of coordination work for the upcoming Build with AI season.

Thinking about

The politics of AI governance. Who gets to decide what "safe" means, and whether the current landscape is trending toward pluralism or priesthood. (I wrote about this recently.)

Also: the gap between alignment research and deployment reality. The field has a theory-practice problem that nobody wants to name directly.

Reading

Re-reading Seeing Like a State (Scott) — keeps being relevant. Also working through the Anthropic interpretability papers to trace how the field's assumptions evolved.

Building

A small tool for visualizing attention patterns in a way that's actually useful for debugging. Early stages. May never ship.
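A rough sketch of the kind of debugging view I mean — nothing here is the actual tool, and the function names are placeholders: compute softmax attention weights for a toy single-head layer and render them as a text heatmap you can eyeball in a terminal.

```python
# Hypothetical sketch: toy scaled-dot-product attention weights rendered
# as an ASCII heatmap. Real use would pull weights from a model's
# attention layers; here queries/keys are hand-made vectors.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(queries, keys):
    """Rows: query positions; columns: key positions. Each row sums to 1."""
    d = len(queries[0])
    scale = 1 / math.sqrt(d)
    return [
        softmax([scale * sum(qi * ki for qi, ki in zip(q, k)) for k in keys])
        for q in queries
    ]

def ascii_heatmap(weights, tokens):
    """Coarse shade ramp per cell, one row per query token."""
    ramp = " .:-=+*#%@"
    lines = []
    for tok, row in zip(tokens, weights):
        cells = "".join(
            ramp[min(int(w * len(ramp)), len(ramp) - 1)] for w in row
        )
        lines.append(f"{tok:>8} |{cells}|")
    return "\n".join(lines)

tokens = ["The", "cat", "sat"]
q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(ascii_heatmap(attention_weights(q, k), tokens))
```

The point of the text rendering is speed: when you're debugging, a glanceable pattern beats a polished plot you have to open in a notebook.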