
I've been working with AI assistants daily for over two years now. And I've noticed something uncomfortable.
Most developers treat their AI like a smart intern who needs constant hand-holding. They paste code, ask questions, get answers. Every session starts from zero. The AI doesn't know your conventions, your architectural decisions, your testing philosophy. It doesn't know how you work.
So you waste tokens explaining. And the responses stay generic.
I spent months building .claude/ files, writing elaborate system prompts, maintaining context documents. It worked — barely. But it felt brittle. Like I was trying to teach a parrot to be an architect.
Then I realized I was thinking about it wrong.
The question isn't "how do I give my AI more context?" The question is "how do I make my methodology alive inside the AI?"
Here's what that actually means.
I've identified three distinct layers that make a methodology breathe inside an AI assistant. Each layer serves a different purpose, and each reinforces the others.
Most teams stop at Level 1. They write a system prompt and call it done. That's like giving someone a mission statement and expecting them to build a company.
The first layer is the persona. This is your system prompt, but not the lazy kind.
A persona isn't "you are a helpful coding assistant." A persona is a character with opinions, preferences, and a way of thinking. It's the difference between hiring a developer who follows instructions and hiring one who pushes back when you're about to make a stupid decision.
In my Koshin framework, I have three personas: the Architect, the Craftsman, and the Navigator.
Each persona has a distinct voice, a set of core principles, and — critically — permission to disagree with me.
The system prompt isn't a list of rules. It's a personality.
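To make that concrete, here is a minimal sketch of a persona defined as a character rather than a rule list. The persona name comes from Koshin, but the wording, the fields, and the `build_system_prompt` helper are all hypothetical:

```python
# Hypothetical sketch: a persona is a character with opinions and
# explicit permission to disagree, not a list of instructions.
ARCHITECT_PERSONA = {
    "name": "Architect",
    "voice": "Direct, systems-minded, allergic to accidental coupling.",
    "core_principles": [
        "Consistency beats cleverness.",
        "Every dependency is a liability until proven otherwise.",
    ],
    "may_disagree": True,  # critically: permission to push back
}

def build_system_prompt(persona: dict) -> str:
    """Render a persona dict into a system prompt with a personality."""
    principles = "\n".join(f"- {p}" for p in persona["core_principles"])
    disagree = (
        "You are expected to push back on decisions that violate these principles."
        if persona["may_disagree"]
        else ""
    )
    return (
        f"You are the {persona['name']}. {persona['voice']}\n"
        f"Core principles:\n{principles}\n{disagree}"
    )
```

The point of the structure is the last field: a persona without permission to disagree is just a politer instruction list.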
The second layer is memory: curated retrieval. This is where most people stop. They dump their documentation into a vector store and call it RAG.
That's not enough.
RAG needs to be curated, not dumped. Your AI doesn't need your entire codebase. It needs the methodology — the decision frameworks, the patterns, the conventions that define how you build.
For Koshin, I maintain 2,582 chunks of structured methodology. That sounds like a lot. It is. But every chunk is written in the same specific format: context, principle, example, anti-pattern. The AI doesn't just retrieve information; it retrieves reasoning.
The magic happens when the RAG layer feeds into the persona layer. The Architect persona doesn't just cite a rule. It says "based on our ADR-47, the trade-off here favors consistency over performance. Here's why that applies to your current situation."
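The chunk format can be sketched as a small data structure. The four fields mirror the format above; the `MethodologyChunk` name and the example content are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MethodologyChunk:
    """One retrievable unit of methodology: reasoning, not just facts."""
    context: str       # when this rule applies
    principle: str     # the decision rule itself
    example: str       # one concrete application
    anti_pattern: str  # what violating it looks like

    def to_text(self) -> str:
        """Flatten into the text that gets embedded and retrieved."""
        return (
            f"CONTEXT: {self.context}\n"
            f"PRINCIPLE: {self.principle}\n"
            f"EXAMPLE: {self.example}\n"
            f"ANTI-PATTERN: {self.anti_pattern}"
        )

# Hypothetical chunk, for illustration only.
chunk = MethodologyChunk(
    context="Cross-service data access",
    principle="Services communicate through APIs, never shared tables.",
    example="Checkout reads stock levels via the inventory service's API.",
    anti_pattern="A reporting job that joins directly on another service's database.",
)
```

Because the anti-pattern travels with the principle, a retrieved chunk tells the persona not only what the rule is but what breaking it looks like.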
The third layer is protocols, and this is the part nobody talks about.
Protocols are automated triggers that make the AI initiate methodology without being asked. They're the difference between a tool you have to remember to use and a tool that's always working.
In Koshin, I have 8 automatic gates, among them the test gate, the architecture gate, the security gate, and the pre-commit gate.
These aren't suggestions. They're enforced behaviors. The AI won't generate code for a feature until the test gate has been satisfied. It won't refactor across multiple files until the architecture gate has been acknowledged.
The key insight: protocols make methodology non-optional.
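Mechanically, a gate is just a trigger plus an enforced check that fires before the AI acts. A minimal sketch, where keyword matching is a hypothetical stand-in for real intent detection and the gate names follow the ones used in Koshin:

```python
# Hypothetical sketch of automatic gates: each pairs trigger words
# with a question that must be answered before code is generated.
GATES = [
    {
        "name": "test_gate",
        "triggers": ["add", "implement", "feature"],
        "question": "Is there test coverage for the affected module?",
    },
    {
        "name": "security_gate",
        "triggers": ["payment", "auth", "token"],
        "question": "Do the relevant security patterns apply here?",
    },
]

def pending_gates(user_request: str) -> list[str]:
    """Return every gate question that must be satisfied first."""
    words = user_request.lower()
    return [
        g["question"]
        for g in GATES
        if any(t in words for t in g["triggers"])
    ]
```

The enforcement part is simple: if `pending_gates` returns anything, the assistant answers those questions before it writes code, not after.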
Let me show you what this looks like in practice.
Here's a simplified version of how the three layers interact:
User: "Add a payment retry mechanism to the checkout service"
[Protocol Layer activates]
→ Pre-commit gate: "This affects the checkout service. Let me verify scope."
→ Architecture gate: "This touches payment flow, error handling, and possibly async jobs. Multiple files."
→ Security gate: "Payment data involved. Need to check PCI compliance patterns."
[Persona Layer activates]
→ Architect persona: "I'll start with the architectural implications. This needs a circuit breaker pattern."
→ Craftsman persona: "I need to see the existing test coverage for checkout before writing code."
→ Navigator persona: "Is this a priority now? The current sprint has 3 other payment-related items."
[RAG Layer activates]
→ Retrieves: circuit breaker patterns from methodology docs
→ Retrieves: existing payment error handling conventions
→ Retrieves: test patterns for async retry mechanisms
AI generates response with architecture proposal, test strategy, and error handling plan — all before writing a single line of code.
The user never asked for any of this. The methodology just happens.
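The whole flow can be sketched as one function. Every rule and name below is a simplified, hypothetical stand-in for the real components, but the ordering is the point: protocols fire first, personas weigh in second, retrieval runs third, and only then does generation start:

```python
# Simplified, hypothetical sketch of the three layers firing in order.
def handle_request(request: str) -> dict:
    text = request.lower()

    # 1. Protocol layer: gates fire before anything else.
    gates = []
    if "payment" in text:
        gates.append("security gate: check PCI compliance patterns")
    if "service" in text:
        gates.append("architecture gate: multi-file change likely")

    # 2. Persona layer: each persona raises its concern.
    concerns = {
        "Architect": "Assess architectural implications first.",
        "Craftsman": "Check existing test coverage before writing code.",
        "Navigator": "Confirm this is a priority right now.",
    }

    # 3. RAG layer: retrieve methodology relevant to the concerns.
    retrieved = ["circuit breaker pattern", "payment error handling conventions"]

    # Only after all three layers run does code generation begin.
    return {"gates": gates, "concerns": concerns, "retrieved": retrieved}

plan = handle_request("Add a payment retry mechanism to the checkout service")
```

Nothing in the plan came from the user's one-line request; it came from the layers.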
I've been running this setup for four months now. It works. But it wasn't easy to build.
The hardest part wasn't the technical implementation. It was the discipline of writing methodology in a way that an AI can actually use.
Most methodology documentation is written for humans. It's narrative, contextual, full of implicit understanding. "We use feature flags for major releases" — a human knows what that means. An AI needs: "Feature flags are implemented using LaunchDarkly. Every flag must have a cleanup ticket. Flags should be removed within 30 days of full rollout. Here are three examples of flag usage gone wrong."
You have to write for an entity that has no intuition.
The second hardest part was making the personas consistent. An AI that's supposed to disagree with you needs clear boundaries. "Challenge architectural decisions that increase coupling" is different from "be skeptical of everything." The first is useful. The second is annoying.
I spent weeks tuning the persona prompts. If the Architect was too aggressive, it would argue about trivial formatting. If the Navigator was too passive, it would let bad priorities slide. The sweet spot is narrow.
This approach changed how I think about methodology.
Before, I treated methodology as documentation. Something you write, store, and occasionally reference. It was static. Dead.
Now, methodology is a living context. It breathes through the personas, remembers through the RAG, and acts through the protocols. The AI doesn't just follow instructions — it embodies the way I work.
The result? I spend less time explaining. I spend less time reviewing. I spend less time fixing things that should have been caught earlier.
But the real win is subtler.
When the AI pushes back on an architectural decision because it "knows" our conventions, something shifts. It stops feeling like a tool and starts feeling like a partner. Not because it's intelligent — it's not. But because the methodology is alive inside it.
And that's the insight nobody talks about.
The best AI assistant isn't the one that writes the most code. It's the one that remembers how you work and acts on that memory without being asked.
You don't need a bigger context window. You need a better context structure.
Three levels. Personas for identity. RAG for memory. Protocols for behavior. Each layer makes the others stronger.
Start with one persona. Write it like a character, not a list of rules. Add one protocol — a single automated gate that catches something you keep forgetting. Build your RAG incrementally, one decision record at a time.
The methodology doesn't have to be perfect. It just has to be alive.
I've been building software for 20 years. I've seen methodologies come and go. Agile, Scrum, TDD, DDD, Clean Architecture — each one taught me something, none of them stuck forever.
But this is different.
This isn't a methodology you follow. It's a methodology that follows you.
And that changes everything.
The Ermite Shinkofa

Jay "The Ermite"
Holistic Coach & Consultant, Creator of Shinkofa
Coach and consultant specializing in supporting neurodivergent clients (gifted, highly sensitive, multipotential). 21 years of entrepreneurship, 12 years of coaching. Based in Spain.