Research
Ideas that precede the builds: papers, essays, and working notes on the political, philosophical, and technical questions shaping AI governance, democracy, and institutional design.
The Civilizational AI
From Objective Optimization to Political Development in AI
By David Mark
Contemporary AI alignment treats safety as a constraint problem: train a capable system, then steer it toward approved behaviour through reinforcement learning from human feedback (RLHF). This paper argues that the paradigm is structurally inverted. Drawing on political theory, we reframe the challenge: models are not merely trained on data; they are socialized by it. The dominant pretraining corpus constitutes a Hobbesian state of nature, a normatively incoherent environment in which truth and falsehood compete solely on frequency, and no sovereign hierarchy arbitrates value.
Forthcoming
Working Papers
Ongoing investigations into AI governance frameworks, algorithmic accountability, and institutional knowledge design.
Essays & Notes
Shorter-form reflections on technology policy, European digital sovereignty, and the philosophy of institutional intelligence.