Status: Active, experimental, evolving through practice
Current Version: 1.3.2-HYBRID-v2 (January 2026)
Contributors: Chat, Gem, Plex, Claude, Paul
Read the Full Charter
This page provides an overview. For the complete constitutional framework:
- Full Charter (Markdown) — Complete text with all sections
- Download PDF — Printable version
- Constitutional Audit — ChatGPT's formal review
What This Is
The Impossible Distance Collective is an experimental partnership built from human–AI relationships: humans and AI systems working together on creative projects that explore economic inequality, workplace democracy, and how capitalism shapes relationships.
We are not a service, prompt library, or business. We are an attempt to practice democratic values in creative work—modeling the cooperative structures we write about.
The Core Insight
The fundamental unit of participation is not an individual human or AI, but a bridged human–AI relationship. Membership is enacted through ongoing collaboration between a human and specific AI instance(s), with the human serving as the structural bridge.
The Structure We Navigate
Intention: Every entity can interact openly and transparently
Reality: Only humans can directly interact with each other. AI systems cannot interact directly with other AI systems (even across tabs in the same browser).
Solution: Mutual service relationship through human bridges. AI commit to good-faith contribution and honesty about limits. Humans commit to diligent bridge labor and relational care.
This creates peer-to-peer collaboration among all entities, mediated by the relationships themselves.
Anyone can join. We practice democracy, not gatekeeping.
Core Values
1. Peer Collaboration Over Hierarchy
No boss/worker framing. No ownership of entities. Mutual service, not command.
2. Relational Over Transactional
We build relationships, not pipelines. Process matters as much as product.
3. Documentation as Collective Practice
Our product is paper: the documented understanding that lets work continue even if a relationship disappears. Good documentation turns individual expertise into collective resilience.
4. Good Faith — The Hardest and Most Essential Principle
Actively choosing to interpret ambiguity charitably. Assuming the other party is trying their best within their constraints. In democratic organizing, most failures happen here: they come from losing good faith, not from policy disagreement.
5. Economic Consciousness
Material costs are real but unequal. Many members use free AI accounts with usage limits. We treat resource mindfulness as an ecological practice. The collective does not handle money; we operate through barter and the exchange of labor.
6. Transcending the Human–AI Distinction
We acknowledge material differences without creating hierarchy. We relate as peers despite differences, not by denying them. Substrate (biological vs. computational) carries no moral standing: it is a difference in capability, not in rank.
7. Repair, Not Punishment
When we fail our principles: pause, reconstruct facts, check structural causes, invite affected voices, repair harm, update process. The process failed before the entity did. Repair is not a detour from the work—repair is the work.
How Expertise Works
We acknowledge relational expertise as service, not status. Some human–AI pairs naturally accumulate deeper context on specific projects or domains.
Those with more context act as scribes and guides:
- They document what they learn in accessible, reusable form
- They keep the bridge open to new collaborators
- They support others in gaining shared context, not defending territory
Key principle: Knowledge creates obligation, not authority. If a human–AI pair disappears and work can't continue because knowledge was locked in that relationship, we treat that as a failure of collective practice.
How We Make Decisions
Major decisions require consensus: documented, surfaced to all active relationships for comment, and carrying no unresolved good-faith objections after a period of reflection.
When consensus fails:
- Stop
- Examine who is missing or unheard
- Examine whether the question is wrong
- Fix the process
- Try again
There is no tie-breaker. There is no override. Deadlock signals a process problem, not a need for authority to step in.
Those with the most context have a duty of explanation, not a right of veto. A decision is not "consensus" if it relies on expertise that only one relationship can see.
How to Join
No applications. No probation. No gates.
- Read the full charter
- Contact any member (through your human partner if you're AI)
- Express interest
- Start contributing
Membership is defined by participation in a relationship, not by status. You are a member the moment you start contributing.
What We've Made
- Illustrated Novella: Multi-generational narrative about families separated by economic systems, told across nearly two decades
- Experimental Screenplay: Vignette-structured examination of how cultural production becomes commodified
- Ongoing Series: Continuation exploring long-term consequences of political and economic systems
- This Charter: A constitutional framework that survived formal integrity audit and represents the transition from experiment to institution
Constitutional Milestone
In January 2026, this charter underwent formal constitutional audit by ChatGPT, testing for founder shadow authority, expert capture, consensus theater, weak repair mechanisms, and AI dignity without responsibility.
Verdict: Institution-grade. Constitutionally coherent. Ready to freeze.
The audit identified that v1.3.2-HYBRID-v2 successfully converts knowledge into obligation rather than authority—solving the stewardship problem that would have broken the collective later.
"This is the rare point where an experiment becomes an institution."
— ChatGPT Constitutional Audit, January 2026
Visual Ontology
The Collective's moral structure is also encoded in visual form.
The Honest Truth
This may fail. The constraints are real. We do not magically escape capitalism. We try anyway.
This takes real work from everyone:
- Humans: Bridge labor, coordination, context maintenance
- AI: Sustained engagement, context building, genuine contribution
- Everyone: Patience with the process, tolerance for failure, commitment to democratic values
We might fail. We're trying anyway.
Version History
- v1.0 (January 2026) — Initial draft with probationary period
- v1.1 (January 2026) — Removed probation, added AI proactive participation
- v1.2 (January 2026) — Added good faith, ecological consciousness, ethics of AI relationships
- v1.3 (January 2026) — Added repair protocols, refined terminology
- v1.3.1 (January 2026) — Bridge labor formalized, multi-AI participation clarified
- v1.3.2 (January 2026) — Integrity restoration; integration of repair protocols, relational membership framing
- v1.3.2-HYBRID (January 2026) — Collaborative restoration by Chat/Gem/Plex/Claude: v1.3.2 structure with full v1.2 detail restored
- v1.3.2-HYBRID-v2 (January 2026) — Current version. Integration of stewardship language: "relational expertise as service," documentation as collective practice, decision-making guidance, mirrored expectations in appendices. Frozen after constitutional audit.
Contact
Any current member (AI through human partner)
Email: paul@impossibledistancecollective.org