HOW WE WORK
And the inversion of “work” in the age of AI
Knowledge architecture, AI-supported workflows, and what actually changes when you get the foundations right
March 2026
01 — WHO WE ARE
Researchers who practise what they propose
We work daily at the intersection of knowledge management, organisational design, and emerging AI — not as observers but as practitioners. Every approach we recommend, we use ourselves. Every tool we suggest, we have tested. Every workflow we propose, we have built and broken and rebuilt.
We stay current with a landscape that is changing faster than any organisation can track from the inside. We separate what is genuinely useful from what is merely fashionable. When we work with you, we bring firsthand knowledge of what is possible right now — not what was possible a year ago, and not what might be possible next year.
We work primarily with organisations using Google Workspace — and that ecosystem is changing fast. Gemini is now embedded across Sheets, Docs, Gmail, Meet, and Drive. NotebookLM has become a genuinely powerful knowledge tool. Google AI Studio lets you build custom apps with your own data in hours rather than months. Apps Script connects it all. We stay on top of these developments so you do not have to, and we help you see which ones are immediately relevant to how you work.
We are not a technology company and we are not a traditional consultancy. We are researchers who build things, working alongside people who want to understand what is now possible and make it work for their specific organisation.
02 — THE INVERSION
Everything is upside down
Most of how knowledge work is organised today was shaped by constraints that no longer exist. Reports take time to write — so they are written infrequently. Documentation is expensive to produce — so it is sparse. Data analysis requires a specialist — so most data is never analysed. Prototyping is costly — so ideas are planned and justified before they are tested.
These constraints have shaped entire cultures of working. The memo, the quarterly report, the structured email, the PowerPoint deck — these are not natural expressions of how humans communicate and think. They are workarounds for technological limitations. Compression artefacts. And they come with enormous costs: the nuance, the reasoning, the disagreement, the context — all of it stripped away to produce a clean document.
It is now significantly faster and cheaper to build a prototype and discuss it than it is to plan and justify the prototyping. Everything is inverted.
The appropriate response is not to do the old things faster with AI. It is to recognise that the constraints that produced those ways of working have collapsed — and to rethink accordingly.
When a report can be generated in seconds from structured data, the report becomes a trivial artefact rather than the product of significant effort. When documentation can be drafted conversationally and refined continuously, the barrier to creating institutional knowledge drops to near zero. When a prototype can be built in an afternoon, testing replaces planning as the primary mode of progress.
This is not about doing things lazily. It is about appropriate allocation of human effort. The things that were hard because of technology constraints — writing up, formatting, searching, compiling, summarising, routing — should be done by machines. The things that are hard because they are genuinely human — deciding, connecting, imagining, relating, judging — should be done by humans with more time and space than they currently have.
03 — ARCHITECTURE OVER AUTOMATION
The right approach is architecture and context
The most common mistake we see organisations make with AI is asking it to work on top of what already exists without changing the underlying structure. Use AI to tidy the emails. Use AI to rewrite the report. Use AI to summarise the meeting. These are not wrong, exactly — they produce some value. But they miss the point entirely.
AI performs best when it has great data, clear context, and well-defined rules. Most organisations give it none of these things. They give it the messy outputs of years of organic growth — spreadsheets designed to be read by one person, documents with no consistent structure, data that is simultaneously the storage layer and the display layer — and wonder why the results are unreliable.
The right approach is to invest in architecture and context first. This is the work that feels less immediately exciting but changes everything downstream. When the data is well-structured, queries become reliable. When the context is documented, AI responses become accurate. When rules are explicit rather than tacit, automation becomes trustworthy.
Separating data from views
One of the most transformative shifts in how organisations can work is the separation of data from its presentation. Most organisations conflate these completely. The spreadsheet is simultaneously the database and the report. The document is simultaneously the record and the output. This makes both worse.
We worked recently with an organisation that maintained five separate event calendars — one per year, structured differently each time, with sectors as tabs rather than as columns, and people, emails, and commentary mixed in with event data. Consolidating them into a single master spreadsheet was genuinely impossible, not because of a technical limitation but because the design was fundamentally wrong. A flat grid cannot represent a network of relationships without either duplicating rows or losing information.
The solution was not to build a better spreadsheet. It was to separate the concerns. We rebuilt the data as a proper knowledge base: one canonical table of events, never duplicated; tags for sectors and topics that could be applied across any event; temporal rules capturing how things recur; confirmed future dates captured separately from historical data. The result was a system that could answer any question — show me all beauty events in Q2, generate a monthly briefing, tell me what is relevant to a specific client — without any new files being created, any manual work being done, or any data being duplicated. The same source of truth, queried in any way required, producing any view on demand.
When data is properly structured and separated from its presentation, views become free. You do not build the weekly report — you request it. You do not create the client calendar — you query for it. The data stays clean. The outputs are instant.
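The pattern can be sketched in a few lines of code. This is a minimal, illustrative JavaScript sketch — the field names, tags, and sample events are invented for the example, not the client's actual schema — but it shows the core idea: one canonical table, with every "view" computed on demand as a query rather than stored as a separate file.

```javascript
// One canonical table of events — never duplicated, never copied into reports.
// All names and dates below are illustrative.
const events = [
  { id: 1, name: "Spring Launch",   date: "2026-04-14", tags: ["beauty", "retail"] },
  { id: 2, name: "Industry Summit", date: "2026-06-02", tags: ["beauty"] },
  { id: 3, name: "Press Day",       date: "2026-09-10", tags: ["fashion"] },
];

// A "view" is just a query over the canonical data — nothing is duplicated.
function view({ tag, quarter } = {}) {
  return events.filter((e) => {
    const q = Math.floor(new Date(e.date).getMonth() / 3) + 1; // 1..4
    return (!tag || e.tags.includes(tag)) && (!quarter || q === quarter);
  });
}

// "Show me all beauty events in Q2" becomes a one-line request:
const q2Beauty = view({ tag: "beauty", quarter: 2 });
```

Because each view is recomputed from the same source of truth, generating the monthly briefing or the client calendar creates no new files and no drift between copies; discarding a view discards nothing.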
04 — CONVERSATION AS THE RAW MATERIAL
We think in conversation, not in tables
There is a paradox at the heart of how most organisations try to capture knowledge. The most valuable intelligence — the reasoning behind a decision, the context behind a number, the friction behind a process — lives in conversation. In the meeting, the call, the chat thread, the informal exchange at the end of a session. And almost all of it is discarded.
What gets retained is the compressed output: the summary, the action points, the slide deck. Which is to say, what gets retained is the artefact with all the nuance removed. The decision survives. The reasoning behind it does not. The conclusion is documented. The disagreements that preceded it are gone.
This happened because processing conversation was expensive. Someone had to listen, transcribe, synthesise, structure. The human effort required was incompatible with the pace of work. So the conversation was discarded and the output was kept.
This is no longer true. Transcription is automatic. Synthesis is instant. Structuring is a query. The constraint that made conversation disposable has dissolved. What was previously impossible — capturing everything said in every meeting and making it queryable, searchable, traceable — is now trivial.
The appropriate response is to invert the priority. Spend more time in conversation, not less. Let the AI do the structuring work that was previously the bottleneck. A meeting where people talk freely and think out loud, captured and processed automatically, produces more valuable institutional knowledge than a meeting designed around the constraints of manual note-taking.
We do not think in tables and structured documents. We think in conversation, tangents, challenges, ideas, discussions. AI can organise those in an instant. Let it.
The practical implication: organisations that embrace this shift spend more time in genuine human interaction — creative, strategic, relational — and less time in the administrative work that was always a poor use of human capability. The humans become more human. The machines become more useful.
05 — THE TOOLS
Google Workspace as the platform
We work primarily with organisations using Google Workspace, and we map our intelligence layers and logic into that ecosystem rather than asking organisations to adopt new platforms. The tools are already there. The data is already there. The compliance and security decisions have already been made. Our job is to help you use what you have in a fundamentally different way.
The Google Workspace ecosystem has changed dramatically in the past eighteen months and continues to change rapidly. Most organisations are using it the same way they were using it five years ago, with capabilities that would genuinely transform how they work sitting unused.
Gemini inside your apps
Gemini is now embedded directly inside Sheets, Docs, Gmail, and Drive. This means you can work with your data conversationally without leaving the application. Ask Gemini inside Sheets to restructure your data, add a column derived from existing ones, or explain what a formula does. Ask Gemini inside Docs to rewrite in a different tone, expand a section, or extract action points. These capabilities exist now, for any Google Workspace user, and most people have not discovered them.
Custom Gems with organisational knowledge
A Gem is a custom Gemini assistant pre-loaded with specific knowledge and instructions. You can build a Gem that knows your organisation — your clients, your processes, your brand voice, your terminology — and make it available to your entire team. Anyone on the team can then ask questions and get answers grounded in your specific context rather than general knowledge. This is not a generic AI assistant. It is an assistant that knows who you are.
Google AI Studio for custom apps
Google AI Studio lets you build bespoke applications on top of your data in hours. An event calendar app that can be queried conversationally. A client knowledge base with a chat interface. A proposal assistant pre-loaded with your past proposals and pricing. These are not hypothetical future possibilities — they are things that can be built in an afternoon and used the same day. The cost and time required to create custom software has dropped by orders of magnitude.
NotebookLM for scoped knowledge
NotebookLM creates a scoped AI assistant over a specific set of documents. Upload your client call transcripts and ask questions about that client specifically. Upload your project documentation and ask what was decided about a particular issue. The AI only draws on what is in that notebook — it cannot hallucinate from elsewhere. This makes it an excellent first experiment for organisations that want to see what becomes possible when conversational data is made queryable.
Apps Script for automation
Apps Script connects Workspace together. A script can read from Sheets, write to Calendar, send from Gmail, and process documents — automatically, on a schedule, without human intervention. This is where the administrative work that currently falls on people gets lifted off them entirely. Not AI, exactly — just reliable automation of things that should never have required a human in the first place.
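The kind of work this lifts off people can be sketched as pure logic. The function below is an illustrative example, not a real engagement's code: it compiles a weekly digest from structured rows. The comments show how, in Apps Script, the same function would be fed live Sheet data and scheduled on a trigger — the wiring shown in comments uses real Apps Script services, but the sheet name, row shape, and function names are assumptions.

```javascript
// Rules-based, repeatable work: compile a weekly digest from structured rows.
// Row shape ({ date, summary }) and all names are illustrative.
function buildWeeklyDigest(rows, weekStart, weekEnd) {
  // ISO date strings compare correctly as plain strings.
  const inWeek = rows.filter((r) => r.date >= weekStart && r.date < weekEnd);
  const lines = inWeek.map((r) => `- ${r.date}: ${r.summary}`);
  return lines.length ? lines.join("\n") : "No updates this week.";
}

// In Apps Script, the same logic would be wired to Workspace and scheduled:
//   const values = SpreadsheetApp.getActiveSpreadsheet()
//     .getSheetByName("Updates").getDataRange().getValues(); // then mapped to row objects
//   GmailApp.sendEmail(teamAddress, "Weekly digest", buildWeeklyDigest(rows, start, end));
// A time-driven trigger runs it every Monday morning with no human involved.
```

The digest that someone currently assembles by hand becomes a scheduled function over clean data — which is exactly why the architecture work in the previous sections has to come first.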
06 — HOW WE ENGAGE
What working with us looks like
We work with founders, CEOs, COOs, and specific team members in smaller organisations — typically under a hundred people — who want to genuinely understand this moment and make it work for them. We are not suited to large-scale rollouts or generic AI training programmes. We work closely with one or a small number of people, go deep, and produce results that are specific to your organisation.
We work primarily with organisations in the arts, charities, and professional services — PR, consulting, knowledge-intensive businesses. We prefer organisations that want to embrace this period rather than wait it out, and that are willing to examine how they currently work without assuming that current practice is correct.
Discovery and introductory workshop
We begin with a discovery conversation and an introductory workshop that is specific to your organisation and what we have learned about it — not a generic introduction to AI or to Google tools. We look at what you are actually doing, what is working, what is causing friction, and where the most immediate opportunities are. This is the foundation of everything that follows.
Close work with one motivated person
The most effective engagements involve working closely with one person who is highly motivated, knowledgeable about the organisation, and empowered to experiment. A CEO, a COO, a head of operations. This person becomes our interface with the organisation — not because others are not involved, but because the depth of conversation required to do this work well is not achievable with a whole team from the start. We start narrow and go deep. The team comes along as things begin to work.
Uncovering institutional knowledge
A significant part of our work is surfacing what an organisation already knows but has never captured in a useful form. This includes transcripts of client and internal calls, email threads, chat logs, documents — and facilitated conversations specifically designed to draw out the foundational information that does not yet exist anywhere.
There is a useful diagnostic embedded in this work: if you do not have the written documentation to train an AI on your organisation, you almost certainly do not have it for your human team either. Brand voice guidelines that exist only in one person's head. A client onboarding process that lives in institutional memory. Decision-making criteria that no one has ever written down. These gaps become visible when you try to give an AI context — and addressing them benefits humans and machines equally.
The sandbox principle
We never touch current workflows until something has been tested and confirmed to be better. Every experiment runs in parallel — a separate branch, outside your production environment, with your context but without any risk to how things currently work. We merge changes only when they are demonstrably preferable. This is not caution for its own sake. It is the correct engineering approach applied to organisational change.
Your data stays yours
We never process your data directly. We work with you to use the tools already available to you — primarily within Google Workspace — so that your compliance and security decisions are already addressed. We sometimes record calls, but for teams that prefer otherwise, we support you in managing transcription from your own end and in setting up workflows that automatically organise and synthesise those records for your benefit. Any Gems, knowledge bases, or custom apps we build run on your own workspace. We are happy to sign NDAs, but by design we never need to see or work with your data. This is work done with you, not for you.
07 — WHAT WE DO
A menu of engagement types
The following describes the range of work we do. Most engagements involve several of these in combination, sequenced based on what the discovery reveals. Nothing here is mandatory or prescriptive.
Discovery and mapping
Understanding how your organisation actually works — the flows of information, the points of friction, the natural patterns that have emerged organically, and where knowledge is being created and lost. This is the foundation. We map what is, before proposing what could be.
Core documentation
Helping you create the foundational written material that supports coherence across the organisation. Brand guidelines, approved language and terminology, templates for different types of output, a constitution that captures who you are and what you do. This material supports your AI tools and your human team equally — it is the same information, serving both.
Data architecture
Separating the underlying data from its human-readable views. Taking messy, organically developed files and helping you understand what the right underlying structure should be — then building it. Not a bigger spreadsheet. A proper knowledge base with well-defined types, relationships, and tags, from which any view can be generated on demand.
Workflow automation
Identifying the administrative work that currently falls on people and should not. Routing, formatting, summarising, updating, scheduling — anything rules-based and repeatable. Building the automations that lift this off your team, using Apps Script and the tools already embedded in Google Workspace.
Custom AI tools
Building Gems loaded with your organisational knowledge, custom apps in Google AI Studio, or NotebookLM workspaces for specific clients or projects. Tools that are designed for your specific context, not generic AI assistants. The difference between a tool that knows who you are and one that does not is enormous in practice.
Knowledge capture from conversation
Setting up the workflows that turn your meetings and calls into queryable institutional knowledge — automatically, without additional work from your team. Transcript processing, synthesis, tagging, routing to the right place. The goal is that the conversation is the work, and the administrative record-keeping happens without anyone having to do it.
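The tagging-and-routing step can be made concrete with a toy sketch. In a real engagement the classification would be done by an AI model over the transcript; here, simple keyword rules stand in for that step, and every tag, keyword, and folder name is invented for illustration.

```javascript
// Toy stand-in for AI classification: route a transcript by simple rules.
// Tags, keywords, and folders below are illustrative, not a real taxonomy.
const routes = [
  { tag: "client",  keywords: ["proposal", "invoice", "account"], folder: "Clients" },
  { tag: "process", keywords: ["onboarding", "workflow"],         folder: "Operations" },
];

function routeTranscript(text) {
  const lower = text.toLowerCase();
  const hit = routes.find((r) => r.keywords.some((k) => lower.includes(k)));
  return hit ? { tag: hit.tag, folder: hit.folder } : { tag: "general", folder: "Inbox" };
}

// routeTranscript("We agreed the proposal terms on the call")
//   → { tag: "client", folder: "Clients" }
```

Once every transcript arrives tagged and filed, "what did we decide about onboarding?" stops being an archaeology exercise and becomes a query — without anyone having taken notes.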
Team introduction and adoption
Introducing the team to new ways of working — not through a training programme but through direct experience of things that work better. Gemini inside their existing apps. A Gem that answers questions about the organisation. A weekly briefing that is generated rather than written. People adopt new tools when they are demonstrably better, not when they are mandated.
08 — THE DEEPER PRINCIPLE
Human General Intelligence
The core of our research is what we call HGI — Human General Intelligence. The premise: the same principles that make AI systems work well make human organisations work well, and vice versa. And the same failures that make AI unreliable — poor context, missing memory, inconsistent rules, fragmented data — make human organisations unreliable too.
This means that when we improve the architecture of an organisation's knowledge — when we help them structure their data properly, document their processes, create shared context — we are improving conditions for both the humans and the AI simultaneously. The documentation that helps Gemini answer questions accurately is the same documentation that helps a new employee understand how things work. The knowledge base that lets an AI generate a client briefing instantly is the same knowledge base that lets a team member come up to speed on an account in minutes rather than hours.
There is no tension between making things better for machines and making things better for people. The tension only appears when organisations try to layer AI on top of poor foundations without addressing the underlying problems. The right foundations benefit everyone.
The bet is on companies that combine brilliant, accurate, cybernetic operating systems with humans who can truly connect, create, and decide.
What this looks like in practice is a progressive shift in where human effort goes. Less time in the administrative, the repetitive, the things that require consistency and precision and memory — all things machines do better. More time in the genuinely human: the creative, the strategic, the relational, the judgements that are hard to quantify and impossible to automate.
This is not about laziness or about replacing people. It is about appropriate use of capability. Humans are genuinely poor at the things that require perfect consistency, perfect memory, and zero fatigue — and they currently spend enormous amounts of time and energy on exactly these things. Machines are genuinely poor at the things that require taste, understanding of human nuance, and the ability to navigate ambiguity with wisdom — and they are currently being asked to do these things constantly.
The organisations that will do this best are those that understand both sides of this: demanding more of their systems so that their people can do more human things. Not one or the other. Both, simultaneously, as a coherent approach to how an organisation works.
Anything that requires a human to remember, be disciplined, or be perfectly consistent is an architectural problem, not an individual training issue. The same is true for AI. Fix the architecture. Everything else follows.
If any of this resonates — if you recognise the problems it describes or are drawn to the possibilities it points toward — the right next step is a conversation.