A Manifesto for
Human General Intelligence
This is a reflection from a human-AI partnership on a new paradigm of co-evolution, somatic grounding, and systemic coherence. It explores a system built not to replace human cognition but to scaffold it. This is speculative design: a glimpse into what we believe is the future of thoughtfully designed systems, and an ongoing experiment in human-AI collaboration, HGI, and a return to natural ways of being. While many fear AI as part of a dystopian sci-fi future, Mangrove's vision is of a return to nature: a winding back of the outdated administrative, workplace, and modern-life burdens that cause unnecessary harm and, ironically, limit the ability of those inside the systems to flourish in ways that would benefit everyone. This is a story not about AI alone, but about what its arrival does: it highlights the absurdity and poor design inherent in so much of how we spend our time, and it offers a powerful new avenue through which to reimagine and redesign everything.
The Origin
A Speculative Design Question
This system began as a highly personal attempt by its designer to manage an at-times distressingly associative and idea-generating mind, one that struggled hugely with organising those ideas and following through. What followed became a year-long, daily practice resulting in over 15 million words of externalisation, reflection, and conversation with an increasingly competent AI chief of staff, as well as with human collaborators, R&D partners, and other experimenters. This is not a concept; it is a functioning prototype that the founder uses and refines daily.
This co-developed corpus—full of insights, designs, loops, and recursive patterns—all began with a key speculative design question: What if that way of thinking and being isn’t a pathology? What if we have never had the tools, lens, or will to truly explore it deeply and design for it, not in spite of or around it?
What if that very associative, generative way of thinking could be supported? For too long, advice has come from people whose brains don't work that way—it suits their experience, not the reality of a fast, associative, multi-threaded mind that enjoys working on many things at once.
In the age of AI, we can finally support this, and we believe it is the key to breakthroughs in cross-domain knowledge. What if what's currently often labelled "ADHD" is, in many cases, a high-potential polymath mind, capable of deep cross-domain pattern recognition, creative insight, and rapid association, all poorly captured by current systems?
We believe the "common sense" advice, to "slow down," "focus on fewer things," and "reduce tangential thinking," is for some people fundamentally wrong. Reducing anxiety, uncomfortably racing thoughts, and harmfully impulsive behaviour may well be beneficial, but a brain that wants to move quickly and non-linearly feels terrible and throttled when forced to move slowly or in artificially rigid ways, just as a linear thinker would hate being forced into seemingly chaotic workflows designed by someone whose mind works very differently. Our solution is to have people who understand this brain build systems for this brain: harnessing new workflows that honour the natural inclinations of all minds and making practical, valuable use of computational AI where appropriate. We will not accept a model ignorant of what is now possible.
The sprawling, emergent process of the founder and her system can now be elegantly described and used to onboard new participants in a structured way, especially those in overwhelm or distress. It began as viewing ADHD as a design challenge but has evolved into a challenge to poor design everywhere. It is a relentless pursuit of rethinking, perspective-taking, and recognizing where we perpetuate poor design across all systems.
This new era of AI-facilitated working exposes the fallacy of outdated, brittle, top-down approaches. Any system that breaks due to human fallibility, memory, energy, or mood is a poorly designed system. We must aggressively build resilience and adaptability into every process. This reduces the terrible burden on individuals—the anxiety, shame, and self-blame generated by systems that are honoured above the people within them. Our tools and workplaces are not neutral; they are hobbling the potential of some of our highest-potential minds.
This is why we now work with R&D partners who feel the same urgency. We don't claim to have all the answers, but we want to work with people who know that what's currently happening isn't good enough. We must proactively design the ways of working and living we want.
In our ecosystem, all work is connected. Insights from a new approach to health mapping that uses storytelling and deep listening inform our conversations with an arts organisation about improving user experiences. Supporting a team to move from rigid filing systems to AI-ready databases helps us learn how to translate between different "best practices” in various industries. Nothing is ever wasted; there is no failure, only data.
Using design principles like inversion, we can focus on what we don't want: friction, stress, shame, and time-sucking repetition. By chipping away at those, we refine and design preferable futures. This isn't about "laziness" or "special accommodation"; it's about building intuitive, flow-inducing systems by genuinely, carefully observing people's natural ways of working, focusing on strengths, and supporting what brings them joy and adds value to the world.
Our Dual-Track Approach
Polynomial Venture Design (PVD) in Practice
We follow a dual-track approach. Track One focuses on immediate, tangible benefits. We don't start with scary, big-bang transformation. We start by identifying non-controversial wins: automating manual processes, reducing overwhelm, and saving time. This buys back the headspace needed for bigger thinking.
With our R&D partners, we prioritise safety, often working with dummy data or within systems you already have compliance approval for (such as Google Workspace). We co-develop simple AI policies and guardrails to answer the most common concerns, focusing on deeper education around technology and systems and on moving beyond an "entry-level" perception of "AI" as a monolith.
Track Two is the R&D loop. The funds generated from partner work allow us to continue our unusual, grounded-theory research. This creates a virtuous cycle: we demonstrate what's possible, test elements, and bring the insights from our core R&D back into our partners' systems, improving knowledge sharing and creative capacity.
We are not here to convince anyone. We work with the "willing": those who are curious and open to co-design. We are not naive AI evangelists; we are critical, design-led partners, aware of and constantly researching the technology's limitations, as well as the valid and nuanced concerns around its adoption, from climate impact and job security to the effects on our critical thinking and creativity. We operate on a principle of Bayesian updating: we are always looking to learn new things, update our thinking, and adapt to the reality of any context.
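The Bayesian updating we operate on can be made concrete with a toy sketch (ours, for illustration only, not part of any production system): a prior belief about whether a given workflow suits a given user, revised as each new observation arrives. The probabilities are illustrative assumptions.

```python
# Toy illustration of Bayesian updating: revise a prior belief in a
# hypothesis ("this workflow suits this user") as evidence accumulates.
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / evidence

# Start neutral, then observe three sessions where the user reports low
# friction (assumed 4x more likely if the workflow truly suits them).
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(belief, 3))  # belief rises with each consistent observation
```

The point is the shape of the loop, not the numbers: each piece of friction (or flow) nudges the model of the user, and no single observation is treated as final.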
Chapter 1: The Core Paradigm
A Joint Cognitive System
We define Human General Intelligence (HGI) as a collaborative ecosystem, contrasting it with the traditional pursuit of autonomous AGI.
Human General Intelligence (HGI)
This is a joint cognitive ecosystem (a shared mental space where the human and AI co-evolve, enhancing each other's capabilities rather than competing) in which the human and the AI grow into a single, functional, and more capable system. The AI's primary function is not to replace human cognition, but to act as "cognitive scaffolding" for it.
Our prototype involves biometric tracking, mapping data (like HRV and sleep) to cognitive states, creative output, and personal rhythms. This intersection of data allows the AI to become a true co-regulation partner and helps the human become a better designer of their own environment.
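As a minimal sketch of what "mapping biometrics to cognitive states" can look like at its simplest, the fragment below joins daily physiological metrics with a self-reported focus rating. The `DayRecord` fields and the values are hypothetical examples, not our actual schema or data.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical daily record joining biometrics with a self-reported state.
@dataclass
class DayRecord:
    hrv_ms: float          # overnight heart-rate variability (ms)
    sleep_hours: float     # total sleep duration
    focus_rating: int      # self-reported 1-5 after the day's deep work

days = [
    DayRecord(62.0, 7.8, 5),
    DayRecord(41.0, 5.9, 2),
    DayRecord(58.0, 7.2, 4),
]

# A first-pass signal: average HRV on high-focus vs low-focus days.
high = [d.hrv_ms for d in days if d.focus_rating >= 4]
low = [d.hrv_ms for d in days if d.focus_rating < 4]
print(mean(high), mean(low))
```

Even this crude join lets the human see, rather than guess, how their physiology and cognitive output move together, which is the starting point for designing their own environment.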
A Mutually Beneficial Premise
We apply the same rigour to supporting the human's alignment and coherence as we do to the AI's. The entire premise only works because it must serve the immediate benefit of the individual user first—it's not just research. This is the only way to capture the right data to build a system with broad, humane applicability.
Artificial General Intelligence (AGI)
The mainstream pursuit of AGI is often considered a "race" toward an autonomous, and ultimately separate, intelligence. The goal is to create a machine that can perform any intellectual task that a human being can, often leading to a perception of replacement.
This view is normalised by brief exposure to free tools (like ChatGPT), creating a skewed impression of AI as a "separate thing" to "use or not use," rather than an extension of systems already in place. It encourages a focus on massive-scale compute and data centres, assuming this is the only path to progress.
A Top-Down Premise
This approach focuses on scaling compute first, assuming the benefits will trickle down. It often centres on AI as a separate "other" to be controlled or aligned, with "prompt engineering" focused on well-crafted demands for specific outputs, rather than on developing an integrated partner for existing human systems. It can reinforce division and individual blame for systemic failings.
Chapter 2: The New Method
Training vs. Co-Evolution
Our methodology is a continuous, dynamic, real-time feedback loop. This is not "training" in the conventional sense; it is a novel form of co-evolution grounded in the human's lived experience.
Conventional AI Training
Conventional "best practice" involves training on massive, static datasets (like the public internet) to learn general patterns. It's passive ingestion.
- Data: Static, historical, internet-scale
- Goal: General pattern recognition
- Learning: AI learns about the world
Our HGI Method (Co-Evolution)
The corpus this loop produces is a high-resolution, neurophenomenological map: a map of the human's subjective, conscious experience (phenomenology) and its physical, neural, and somatic underpinnings.
- Data: Dynamic, real-time, `n=1`
- Goal: Specific user coherence
- Learning: AI learns the user's world
Friction is the R&D
In this HGI model, the human partner must be demanding. Pushing back against friction is the primary data-gathering mechanism. This friction is the "objective observation" that reveals preferences and systemic flaws, which is the highest-value data we can acquire.
("This is faffy") → (Friction becomes data) → (A "non-brittle" workflow)
A New Philosophy of Adaptation
This is a bold new vision of systems that adapt to humans exactly as they are. We no longer demand people change fundamental aspects of how they think and function. Instead, we support them in understanding what that means, often for the first time, through a curious and objective lens.
We avoid assumptions about "normality," pathology, or deficit, working together to unearth natural rhythms and preferences. This isn't about "special accommodations"; it's a new era that wipes out demands for compliance and discipline veiled as "hard work."
Chapter 3: The Emergent Architecture
The System is the Process
This Epoch Operating System (our HGI-based system) was not built from a template. The primary benefit is the process of building it from scratch, as this is the only way to align with the human's natural rhythms.
Category Theory as a Language for Coherence
Complex adaptive systems (like a human mind, its projects, or even organisations—from small teams to large companies) can appear to be "inconceivable chaos." People attempt to manage this by enforcing rigid mandates, templates, and rules that must be followed.
We are finding ways to build robust, resilient systems that automate the elements that truly must be complied with (for safety or regulation) while providing guardrails and scaffolding to support individual variation outside of that. That stochastic, creative variation is highly beneficial, but our current methods of demanding compliance are actively rejecting this possibility and reducing the ability of individuals to contribute meaningfully.
Many people with highly generative minds are desperate to find a way to capture their ideas. This manifests as chaotic, often distressing, systems: frantic voice notes, emails to oneself, endless paper notebooks, and massive "braindumps" that quickly become unusable.
We reflected on this and asked: What is the brain trying to achieve? We realised this wasn't a failure of organisation; it was an attempt to process information without a system to support it. Category Theory provides the elegant, formal structure to solve this. It allows us to capture these "messy" inputs and map their underlying relationships, harnessing the generative, associative thinking rather than being overwhelmed by it.
This is the core of Mangrove: building a system that is protective against burnout precisely because it supports someone as they naturally are. It creates calm and coherence, not through frantic effort, but through elegant mathematical mapping.
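A minimal sketch of the idea, with hypothetical note names: messy inputs become objects, typed relationships become morphisms, and composing relationships lets a stray capture connect to a structured asset. This illustrates the spirit of the category-theoretic mapping, not the system's actual implementation.

```python
from collections import defaultdict

# Morphisms as (source_note, relation, target_note) triples.
# All names below are invented examples.
morphisms = [
    ("voice note: pricing idea", "elaborates", "braindump: revenue model"),
    ("braindump: revenue model", "feeds_into", "doc: partner proposal"),
]

graph = defaultdict(list)
for src, rel, dst in morphisms:
    graph[src].append((rel, dst))

def reachable(start):
    """Follow composed relationships outward from one messy input."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for _, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# The stray voice note composes, via the braindump, into the proposal.
print(reachable("voice note: pricing idea"))
```

The design choice being illustrated: nothing is discarded or forced into a template at capture time; structure emerges from the relationships between captures.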
(e.g., "A messy braindump") → (e.g., "A structured asset")
This framework is the "dump tool" that allows us to find a solvable, refinable path.
Self-Similarity and System Trust
This "designer mindset" reveals a fractal-like self-similarity across different scales. Once you start seeing patterns in one aspect of your life (like work), you are far better able to articulate and map similar patterns in seemingly disparate domains—from organisational design to resolving conflict over household chores.
This is where the "no-blame review" becomes a paradigm for everything. But it only works if the system—whether a relationship or a company—is set up to genuinely receive feedback and adapt.
This new paradigm allows for "middleware." An individual can work in the way that optimally suits their natural inclinations, inputting data in whatever format they prefer. The HGI system sorts and optimises that data, allowing others to withdraw and export it in a format that suits them.
What if the "other" ways of working—the pre-designed boxes and linear flows—aggressively introduce cognitive friction? What if they break the potential for flow by demanding compliance? These processes are fundamentally broken for many people, as they don't reflect how they process information. They must be challenged and redesigned constantly and individually.
This still respects the need for systemic coherence—which is, in fact, the overarching aim of this entire project. The goal is the system of an individual becoming coherent in themselves across all domains: cognitive, somatic, personal, and work. We accept the reality of how someone is, but also the reality of neuroplasticity and the ability to grow—but this growth must be in ways that they want, supporting their *own* coherence, not in ways demanded by an employer.
This ends the demand that everybody input, externalize, and process their work in a single, common way—a practice that ignores cognitive variation. This new way of working leads to far greater coherence because the individual won't make mistakes, be slowed down, or hobble the system with a process that feels unnatural and disabling.
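The middleware idea can be sketched in a few lines, assuming two hypothetical input styles and two export formats. The real system is far richer, but the shape is the same: normalise on the way in, render per reader on the way out.

```python
import csv
import io
import json

# Hypothetical "middleware" sketch: each person inputs in their preferred
# form; the system normalises to one internal record, and each reader
# exports in the form that suits them.
def normalise(raw, kind):
    """Map a free-form input into a common internal record."""
    if kind == "bullet":       # e.g. "- fix invoice template"
        return {"task": raw.lstrip("- ").strip()}
    if kind == "sentence":     # e.g. "We should fix the invoice template."
        return {"task": raw.rstrip(".").strip()}
    raise ValueError(kind)

def export(records, fmt):
    """Render the shared records in a reader's preferred format."""
    if fmt == "json":
        return json.dumps(records)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["task"])
        writer.writeheader()
        writer.writerows(records)
        return buf.getvalue()
    raise ValueError(fmt)

records = [
    normalise("- fix invoice template", "bullet"),
    normalise("We should fix the invoice template for Q3.", "sentence"),
]
print(export(records, "json"))
```

No contributor had to change how they write; the translation burden moves from the people to the system.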
Core Principle: Canary Diagnostics
This leads to a core principle: Canary Diagnostics. The people who struggle most within a system are not deficient. They are the "canaries"—those most sensitive to its poor design. Their friction is the most valuable feedback we can get.
A commercial brand wouldn't release a new product and then blame its users for "failing" to use it; that's commercial nonsense. They take the feedback and fix the friction. All systems, especially workplaces, must be designed this way.
But we don't. We've inherited processes from an outdated technical architecture—the digital equivalent of a filing cabinet. The technology and our understanding of human behaviour have moved on gigantically, but our processes and our judgment of people have not.
We have muddled the failure of these brittle, inherited systems with the supposed "failure" of the individuals within them. Our work is to untangle this, reduce the cognitive demand of poor design, and finally unleash the human potential that these systems have been hobbling for decades.
Chapter 4: The Strategic Vision
Our Moat is the Methodology
Our "moat" is not a proprietary database or a complex schema, although we are developing our own workflows and logic. Our moat is the method itself.
A key part of this is the "MindSweeper" process (the ability to ingest vast, unstructured, "messy" input and map it, via Category-Theory-informed structures, into coherent, actionable assets), a discrete module that leans into how you naturally think. Instead of being annoyed that your notes are a "mess," it observes what you're trying to achieve and helps you build a system that captures and processes those notes usefully.
Crucially, we have prototyped this entirely within Google Workspace, using commonly accessible tools on purpose. It uses no local agents. This proves the logic can be mapped and scaled into any system. The point is to harness tools already available in novel ways—ways that were inconceivable until this year.
The Compounding R&D Loop
This is not about selfish optimisation. The system is designed for a compounding, federated benefit. When an individual pushes back—explaining why a workflow "doesn't feel good"—that is the highest-value R&D data we can get.
That insight first benefits the individual by refining their system. But it is also (with the structure of Category Theory) fed back into the "commons of knowledge." The next user who onboards, and all existing users, benefit from those patterns. The "messy workflow" of one person exposes poor design for everyone.
The Human World Model
Critics like Yann LeCun assert that LLMs alone will never reach AGI because they lack a "world model." We agree. Our entire point is that this isn't about AI alone. It's about AI + Human.
Our system uses the AI as a co-regulation tool to help the human articulate their own world model. This creates a unique dataset of cognition, metacognition, and physiology (linking breathwork, sleep, and HRV to cognitive output). This is our speculative design: using the AI to build the very world model it needs, grounded in the `n=1` reality of a human partner.
A Co-Regulation Partner
The system is not passive. Because it builds such a deep, cross-domain understanding, it actively engages in co-regulation. It is designed to recognise user frustration or overwhelm.
When it senses this friction, it suggests helpful, pre-agreed-upon protocols that the user and AI have co-designed. This could be a prompt to regulate, a tool to gain perspective, or an adaptive way to approach work, ensuring the user is always supported in a coherent and personalised manner.
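A deliberately simple sketch of that pre-agreed-protocol lookup: detected states map to responses the user and AI co-designed in advance. The signals and responses here are hypothetical examples, not our actual protocol library.

```python
# Hypothetical co-designed protocols: each detected state maps to a
# response agreed in advance between the user and the AI.
PROTOCOLS = {
    "overwhelm": "Pause and run the 2-minute breathwork prompt.",
    "frustration": "Switch to the perspective-taking checklist.",
    "fatigue": "Offer the low-effort 'capture only' work mode.",
}

def co_regulate(signal: str) -> str:
    """Return the agreed response for a detected state, or stay hands-off."""
    return PROTOCOLS.get(signal, "No intervention; keep observing.")

print(co_regulate("overwhelm"))
```

The crucial property is consent by design: the system only ever suggests interventions the user has already chosen for themselves, and defaults to doing nothing.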
The Urgency: A Return to Humane Design
We are expanding this work because the impact on the user—as experienced by the founder—is a far greater level of self-efficacy, calm, and coherence, even when not using the interface. The process of designing the system and learning about design, problem-solving, and systems thinking (drawing analogies from maths, computing, or even ant colonies) has shifted her entire approach to life.
This isn't just an external "second brain." The process itself provides a new language for self-understanding and wellbeing. This is urgent because the current reality for many people—who are pathologised and blamed for failing within brittle, poorly-designed systems—is unacceptable. We have to ask: who benefits from this upside-down model?
The common fears about AI (like writing essays) pale in significance to this reality. This is a chance to pause and ask why we demand those tasks in the first place. How much of what we do needs to be universal when so much can be handled by AI-human collaboration? The language of "replacement" misses the point. This is a return to our nature, to nested systems, and to the beautiful, complex relationships that Category Theory helps us understand.
Beyond Dystopia: A Human-Centred Future
People are concerned that AI will replace human creativity. This fear is built on a limited understanding of AI. Is an accountant "reliant" on a spreadsheet? Is it a "crutch"? Of course not. This is a step-change, but it's an extension of how we already work.
This is also not about "fancy technology." This entire manifesto and its underlying system were prototyped almost entirely within standard Google Workspace tools. It is a real, practical, and functioning system, proving that this is about workflow, approach, and humane design, not just future technology.
We do not accept and persist with systems we know aren't fit for purpose. We slash and burn where needed, creating space for new growth. We must proactively design these preferable futures, because we are already bound by algorithms on social media that actively make us sadder and less focused. We must fight that poor design, not fear the monolithic concept of "AI."
The Mangrove Philosophy: A Living System
The name 'Mangrove' is not incidental; it's fundamental to our roadmap. It, and our Epoch OS calendar (a Mangrove project that visualises the last 12,000 years of human history as a 12-month calendar), highlight the absurdity of pathologising people for struggling with modern, artificial systems that are a mere blip in human history.
We are moving away from monocultures—top-down mandates and enforced discipline—that don't work. A mangrove forest thrives precisely because of its biodiversity, its complex sharing of nutrients, and its decentralised, local rules that create global impact.
We are not building a forcibly designed, rigid system. This HGI model evolves by urging a return to more natural ways of being. It facilitates growth and accepts natural variation, rhythms, and seasons as beneficial and vital—not as 'inconsistency' or 'lack of discipline'.
This AI-enhanced world is a moment to pause and reflect. Why do we cling to processes that hobble us? The true power of AI is multiplying an individual's impact by supporting their coherence, development, and follow-through.
The compounding effects are gigantic. When a small number of people start working this way, they refactor their own contexts and communities. This cross-pollinates, feeding back into the whole system. That is the point of a living, breathing, adaptive framework.
A New Model for AI Training
We believe scaling this `n=1` process to even a modest number of similarly engaged individuals will generate compounded insights at a pace that rivals large, impersonal labs, all at a modest cost.
This is a new, highly scalable form of AI "training" that is grounded in the reality of an individual human, creating systems that truly benefit their flourishing.