Research notes and essays
This site tracks one governance challenge: the gap between internal uncertainty about machine consciousness and external interface cues that reliably shift public moral behaviour. The aim is cautious policy design that avoids both over-attribution and premature silencing.
About
The central question is whether current governance instruments can handle the divergence between what we can justifiably claim about internal consciousness and what interfaces can induce in public moral response.
The project develops short, citation-forward essays and structured reading notes. Claims are separated into verified, source-grounded points and provisional proposals.
Method
Keep empirical findings separate from normative proposals and mark confidence explicitly.
Prefer interventions that map to existing institutions before proposing new governance layers.
Treat uncertainty as a design constraint, not a rhetorical escape hatch.
Essays
Disclosure-first governance assumes that behaviour follows explicit belief. Current behavioural evidence suggests that assumption does not hold reliably.
Transparency policy usually assumes that if users are told a system is non-sentient, moral concern should decline. Behavioural measures complicate this claim.
People can show reluctance to harm AI systems even while reporting very low credence that those systems are conscious. That creates a policy problem when design cues are optimized for emotional pull.
Consumer protection can limit manipulative emotional design, but blunt intervention can incentivize blanket AI silencing under uncertainty.
Commercial systems can use sentience-like cues to increase engagement or conversion. Consumer law can target those manipulative patterns directly.
The first axis is present human harm from manipulative design; the second is uncertainty about future welfare-relevant systems. A policy regime that only suppresses expression might reduce immediate harm while creating long-term governance blind spots.
Consumer law addresses human harm but does not resolve uncertainty around AI welfare. A conditional safe harbour could separate those problems.
A safe-harbour model would grant no rights and make no claims about consciousness. It would create a narrow oversight channel that opens only when predefined criteria are met.
To avoid corporate loopholes, eligibility must be narrow, externally reviewed, and linked to obligations that reduce exploitative emotional signalling.