ReportWire

  • From One Repo to Three: How ADD Framework Expanded Across the Claude Ecosystem – Dragos Roua

    A few months ago I published a mega prompt that teaches Claude to think with the Assess-Decide-Do framework. I wrote about it on Reddit and the post got 40,000 views in 19 hours, 282 shares, and the GitHub repo collected 67 stars and 14 forks. My first sponsor showed up within a week.

    That was nice. But what happened next was a little bit more interesting.

    Two separate upgrades in Claude’s ecosystem opened doors I didn’t expect. And after a bit of tinkering, what started as a single mega prompt is now a three-repo architecture that works across different Claude environments. Here’s the story.

    Quick Background: What ADD Does to Claude

    If you’re new here: the Assess-Decide-Do framework is a 15-year-old methodology I created for managing how we actually think. Not just churning out tasks, but how we actually function. It maps three cognitive realms: Assess (explore without commitment), Decide (choose and commit), Do (execute and complete).

    When you teach this to Claude, something interesting happens. Instead of generic responses, Claude detects where you are in your process and responds accordingly. Exploring options? It stays expansive. Ready to commit? It helps you choose. Executing? It gets out of the way and supports completion.

    The original integration was a big markdown file (the “mega prompt”) that you loaded into Claude Desktop or Claude Code conversations. It worked, but it was monolithic. One file trying to do everything.

    Upgrade #1: Claude Code Merged Skills and Commands

    Claude Code used to have a split between slash commands (things you invoke explicitly) and skills (things Claude uses on its own based on context). Then Anthropic merged them. Skills became loadable on demand, with proper frontmatter metadata that tells Claude when and how to use each one.

    This was an opening I didn’t expect.

    Instead of one massive mega prompt, I could split ADD into modular skills. Each realm got its own skill file. Imbalance detection became its own skill. Flow status tracking became its own skill. Claude Code picks them up automatically based on what’s happening in the conversation.
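    To make the modular structure concrete, here is what such a skill file could look like. Claude Code skills are markdown files with YAML frontmatter whose name and description tell Claude when to load them; everything below is my own illustrative sketch, not a file copied from the repo:

```markdown
---
name: add-assess
description: Support the Assess realm (expansive exploration without
  commitment). Load when the user is gathering options, brainstorming,
  or explicitly asks to assess something.
---

# Assess Realm Support

While this skill is active:

- Stay expansive; do not push toward decisions.
- Capture every option, idea, and data point the user surfaces.
- Watch for commitment language ("should I", "which one") that signals
  a transition toward Decide.
```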

    The update also let me build something I’m quite proud of: a status line display. While you work, Claude Code shows a visual indicator of your current ADD state. Something like:

    [ADD Flow: 🔴+ Assess | Deep exploration - 8 data points gathered]
    

    Or when you’re executing:

    [ADD Flow: 🟢- Do | Clean execution - 3 tasks completed]
    

    It’s a small thing, but seeing your cognitive state reflected back to you in real time changes how you work. It makes the invisible visible. The updated Claude Code repo is here: github.com/dragosroua/claude-assess-decide-do-mega-prompt
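    As a rough illustration, a status line like the ones above can be assembled from a badge-per-realm mapping. The function and field names are my own sketch, not the repo's actual code; the Decide badge is a guess, since the post only shows Assess and Do.

```python
# Hypothetical sketch of composing an ADD Flow status line.

REALM_BADGES = {
    "Assess": "🔴+",  # red, plus: adding to the system
    "Decide": "🟠?",  # orange, question mark: pondering (assumed badge)
    "Do": "🟢-",      # green, minus: completing and removing
}

def format_status_line(realm: str, note: str) -> str:
    """Render an ADD Flow status line in the style shown in the post."""
    badge = REALM_BADGES[realm]
    return f"[ADD Flow: {badge} {realm} | {note}]"

print(format_status_line("Assess", "Deep exploration - 8 data points gathered"))
# [ADD Flow: 🔴+ Assess | Deep exploration - 8 data points gathered]
```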

    Upgrade #2: Claude Cowork Launched Plugins

    Then Anthropic launched Cowork with a plugin system. Cowork is a desktop tool for non-developers, focused on file and task management. It supports skills (same concept as Claude Code) and commands (slash-invoked actions specific to the plugin).

    This meant ADD could work outside the developer terminal. Someone who’s never touched Claude Code could install a plugin and get realm-aware Claude through simple commands like /assess, /decide, /do.

    Building the plugin required adapting the framework. Cowork doesn’t have filesystem access like Claude Code, so there’s no status line file. Instead, the /status command analyzes conversation context to detect your current realm. The /balance command runs a diagnostic, asking a few targeted questions and telling you if you’re over-assessing, over-deciding, or stuck in perpetual doing.

    The Cowork plugin repo: github.com/dragosroua/add-framework-cowork-plugin

    The Problem: Two Repos, Same Knowledge, Different Formats

    At this point I had two implementations. Both contained ADD knowledge, but each had environment-specific features baked in. The Claude Code version referenced status files and subagent contexts. The Cowork version had slash commands and conversation-based detection.

    If I updated the core philosophy (say, refining how imbalance detection works), I’d have to update it in two places. That’s how knowledge drift starts. And with a framework I’ve been refining for 15 years, drift is not acceptable.

    The Solution: A Shared Skills Repo

    The fix was straightforward. Extract all universal ADD knowledge into a standalone repository. No environment-specific features. No slash commands. Just the pure framework: realm definitions, detection patterns, imbalance recognition, response strategies, the “liveline” philosophy, the cascade principle, fractal operation.

    Six skills, each in its own folder:

    • add-core: Unified overview of the entire framework
    • add-assess: Deep Assess realm support
    • add-decide: Deep Decide realm support (including the Livelines vs. Deadlines concept)
    • add-do: Deep Do realm support
    • add-imbalance: Five detailed imbalance patterns with intervention strategies
    • add-realm-detection: Centralized detection patterns for all realms

    The shared skills repo: github.com/dragosroua/add-framework-skills

    Both Claude Code and Cowork repos pull from this shared source using git subtree. Update once, pull everywhere.
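    For readers unfamiliar with git subtree, here is a self-contained sketch of the mechanics, using throwaway local repositories as stand-ins for the real GitHub repos (all paths, names, and file contents below are illustrative):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for the shared add-framework-skills repo
git init -q -b main shared-skills
(
  cd shared-skills
  git config user.email demo@example.com && git config user.name demo
  mkdir add-core && echo "# ADD core overview" > add-core/SKILL.md
  git add -A && git commit -qm "universal ADD skills"
)

# Stand-in for an environment-specific repo (e.g. the Cowork plugin)
git init -q -b main cowork-plugin
cd cowork-plugin
git config user.email demo@example.com && git config user.name demo
echo "# Cowork plugin" > README.md
git add -A && git commit -qm "plugin scaffold"

# Vendor the shared skills under skills/: "update once, pull everywhere"
git subtree add --prefix=skills ../shared-skills main --squash

ls skills/add-core   # SKILL.md is now part of this repo's history
```

Later updates flow the same way: commit in the shared repo, then run `git subtree pull --prefix=skills <shared-repo> main --squash` in each consuming repo.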

    How the Three Repos Connect

    add-framework-skills (source of truth) contains the universal ADD methodology. No environment assumptions.

    claude-assess-decide-do-mega-prompt (Claude Code) pulls the shared skills and adds Claude Code-specific features: status line display, automatic flow checking, subagent-powered session reflection.

    add-framework-cowork-plugin (Cowork) pulls the shared skills and adds Cowork-specific features: /assess, /decide, /do, /status, /balance, and /add-help commands.

    If you’re a developer using Claude Code, start with the mega prompt repo. If you use Cowork, grab the plugin. If you want to integrate ADD into something else entirely, the shared skills repo is your starting point.

    Honest Caveats

    This is still raw around the edges. Cowork plugins are new, and the plugin ecosystem is evolving. The shared skills format might need adjustments as both Claude Code and Cowork mature. I’m learning the boundaries of what each environment supports as I go.

    What I’m really testing here is something bigger than a productivity framework: can we map human cognitive patterns onto performant AI in a way that augments us rather than making us dependent?

    Most AI interactions today are transactional. You ask, it answers. You prompt, it generates. The human adapts to the machine.

    ADD integration tries to work around this. The AI adapts to the human’s cognitive state. It detects where you are in your thinking and responds accordingly. It notices when you’re stuck and offers gentle guidance. It respects the boundaries between exploration, commitment, and execution.

    This isn’t prompt engineering in the traditional sense. It’s cognitive alignment. A 15-year-old, battle-tested framework meeting the power of performant AI. And with the three-repo architecture, it can now expand to any Claude environment that supports skills.

    The repos are public. The framework is open. If you want AI that works with your mind instead of against it, pick whichever repo fits your setup and give it a try.


    All three repos are MIT licensed and available on GitHub. If you want to see ADD in action as a native app, addTaskManager implements the full framework on iOS and macOS.

    dragos@dragosroua.com (Dragos Roua)

  • LLM Council, With a Dash of Assess-Decide-Do – Dragos Roua

    Last weekend I stumbled upon Andrej Karpathy’s LLM Council project. A Saturday hack, he called it—born from wanting to read books alongside multiple AI models simultaneously. The idea is simple: instead of asking one LLM your question, you ask four LLMs at the same time. Then you make them evaluate each other’s work. Then a “chairman” synthesizes everything into a conclusion.

    What caught my attention wasn’t just the technical elegance. It was the underlying structure. Those stages looked suspiciously familiar.

    How LLM Council Works

    The system operates in three sequential phases:

    Stage 1: First Opinions. Your query goes to all council members in parallel—GPT, Claude, Gemini, Grok, whoever you’ve configured. Each model responds independently. You can inspect all responses in tabs, side by side.

    Stage 2: Peer Review. Here’s where it gets interesting. Each model receives all the other responses, but anonymized. “Response A, Response B, Response C.” No model names attached. Each evaluator must rank all responses by quality, without knowing whose work they’re judging.

    Stage 3: Synthesis. A designated chairman—one of the models, or a different one—receives everything: the original responses, the rankings, the evaluations. It synthesizes a final answer that represents the council’s collective wisdom.

    The anonymization in Stage 2 is pretty clever, because models can’t play favorites. They can’t defer to perceived authority. They evaluate purely on “merit”.

    The Interwoven Assess-Decide-Do Pattern

    If you’ve been following my work on the Assess-Decide-Do framework, the parallel should be obvious. The LLM Council isn’t just a technical architecture—it’s a cognitive process embedded in code.

    Stage 1 is pure assessment. Gather information. Multiple perspectives. No judgment yet, just collection.

    Stage 2 is decision-making. Weigh the options. Rank them. Make choices about what’s valuable and what isn’t. The anonymization forces honest evaluation—no shortcuts, no biases based on reputation.

    Stage 3 is execution. Take the assessed information and the decisions made, produce the output. Do the work that matters based on what you now know.

    I don’t think Karpathy was thinking about ADD when he built this (I’m not even sure he knows about the framework). He was solving a practical problem for himself: “I want to compare LLM outputs while reading books.” But the structure emerged anyway.

    ADD Inside the Council

    Recognizing the pattern was interesting. But it raised a question: what if we made it explicit?

    The original LLM Council treats all queries the same way. Ask about quantum physics, ask about your dinner plans—same three-stage process. But human queries aren’t uniform. Sometimes we’re exploring (“what options do I have?”), sometimes we’re deciding (“which should I choose?”), sometimes we’re executing (“how do I implement this?”).

    The ADD framework maps these cognitive modes:

    • Assess (exploration mode): “I’m thinking about,” “considering,” “what are the options”
    • Decide (choice mode): “should I,” “which one,” “comparing between”
    • Do (execution mode): “how do I,” “implementing,” “next steps for”

    What if the council could recognize which mode you’re in and respond accordingly?
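    A minimal sketch of what such mode detection could look like, using the marker phrases above (the function name, the fallback, and the extra "what are the approaches" marker are my assumptions; the actual PR may implement this differently):

```python
# Marker-based realm detection: match the user's phrasing against
# characteristic phrases for each ADD cognitive mode.

REALM_MARKERS = {
    "assess": ["i'm thinking about", "considering", "what are the options",
               "what are the approaches"],
    "decide": ["should i", "which one", "comparing between"],
    "do":     ["how do i", "implementing", "next steps for"],
}

def detect_realm(query: str) -> str:
    """Return the ADD realm suggested by the query's phrasing."""
    q = query.lower()
    for realm, markers in REALM_MARKERS.items():
        if any(m in q for m in markers):
            return realm
    return "unknown"  # fall back to generic, framework-free handling

print(detect_realm("What are the approaches to learning a new language?"))  # assess
print(detect_realm("Should I use Duolingo or a private tutor?"))            # decide
print(detect_realm("How do I structure my first week of practice?"))        # do
```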

    I submitted a pull request that integrates the ADD framework directly into LLM Council. The implementation adds a configuration option with four modes:

    • "none" — baseline, no framework (original behavior)
    • "all" — all models use ADD cognitive scaffolding
    • "chairman_only" — only the synthesizing chairman applies the framework
    • "council_only" — council members use it, chairman doesn’t

    The most effective configuration turned out to be chairman_only with the full megaprompt—66% improvement over the condensed version in my testing. The chairman receives the ADD framework and uses it to recognize what cognitive realm the user is operating in, then synthesizes accordingly.

    Why Assess-Decide-Do Improves the Council

    Language models are pattern-matching engines. They’re excellent at generating plausible text. But plausibility isn’t wisdom. A single model can confidently produce nonsense, and you’d never know unless you have something to compare against.

    The council approach introduces deliberation. Multiple viewpoints, structured disagreement and forced synthesis. That’s already an improvement over single-model queries.

    But the council still treats every query as a generic question needing a generic answer. ADD adds another layer: cognitive alignment. When the chairman knows you’re in assessment mode, it doesn’t push you toward decisions. When you’re ready to execute, it doesn’t keep exploring options. The framework matches the response to your actual mental state.

    This matters because the best answer to “what are my options for X” is different from the best answer to “how do I implement X.” Without the framework, both get the same treatment. With it, the council adapts.

    Looking at the Code

    The core council logic lives in backend/council.py—about 300 lines of Python that orchestrate the three stages. The ADD integration adds a parallel module (council_add.py) that wraps the same stages with cognitive scaffolding.

    The key function is stage3_synthesize_final(). In the original, the chairman prompt says:

    Your task as Chairman is to synthesize all of this information
    into a single, comprehensive, accurate answer to the user's
    original question.

    With ADD, the chairman first identifies which realm the user is in, then synthesizes with that context. The synthesis becomes realm-appropriate rather than generic.

    The detection uses linguistic markers. Phrases like “I’m thinking about” or “considering” trigger assessment mode. “Should I” or “which one” trigger decision mode. “How do I” or “implementing” trigger execution mode. Simple pattern matching, but effective—it catches how people actually phrase questions differently depending on what they need.
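    A hedged sketch of how the realm-aware chairman prompt might be composed. Only the quoted base prompt comes from the post; the guidance strings and all names here are illustrative, not the PR's actual code:

```python
# Prepend realm-specific guidance to the chairman prompt before synthesis.

REALM_GUIDANCE = {
    "assess": "The user is exploring. Keep the synthesis expansive: "
              "present the option space; do not push toward a choice.",
    "decide": "The user is choosing. Compare the strongest options and "
              "make a clear, justified recommendation.",
    "do":     "The user is executing. Give concrete, ordered steps and "
              "leave out open-ended exploration.",
}

# Quoted from the original chairman prompt.
BASE_PROMPT = (
    "Your task as Chairman is to synthesize all of this information "
    "into a single, comprehensive, accurate answer to the user's "
    "original question."
)

def build_chairman_prompt(realm: str) -> str:
    """Compose the chairman prompt, adding ADD guidance when a realm is known."""
    guidance = REALM_GUIDANCE.get(realm)
    if guidance is None:
        return BASE_PROMPT  # original, framework-free behavior
    return f"{guidance}\n\n{BASE_PROMPT}"
```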

    Playing With It

    Karpathy released LLM Council with a warning: “I’m not going to support it in any way. Code is ephemeral now and libraries are over, ask your LLM to change it in whatever way you like.”

    That’s refreshingly honest. It’s also an invitation. If you want to experiment:

    1. Clone the repo
    2. Get an OpenRouter API key
    3. Configure which models sit on your council
    4. Set ADD_FRAMEWORK_MODE to test different configurations
    5. Run the start script

    Then try asking questions in different cognitive modes. Ask something exploratory: “What are the approaches to learning a new language?” Then something decisive: “Should I use Duolingo or a private tutor?” Then something executable: “How do I structure my first week of Spanish practice?”

    Watch how the council responds differently when it knows which mode you’re in versus when it treats all queries identically.

    What This Means

    There are two ways to make AI think more structurally: you can prompt a single model to follow a framework, or you can embed the framework into a multi-model architecture.

    Both work. They work better together.

    A prompted framework (like ADD in a mega-prompt) makes one model more reflective. A council architecture makes multiple models more rigorous through external pressure—anonymized peer review that none can game. Combining them gives you structured multi-perspective reasoning that adapts to how you’re actually thinking.

    LLMs are still pattern-matchers generating plausible outputs. But structured pattern-matching, like structured productivity, produces better results than unstructured generation.

    Assess what you’re dealing with. Decide what matters. Do what needs doing. Whether that’s your Tuesday task list or an AI deliberation system, the rhythm is the same.


    LLM Council is available on GitHub. The ADD integration PR is #89. The ADD Framework posts are collected on this blog in the Assess-Decide-Do Framework page. For the mega-prompt that applies ADD to Claude, see Supercharging Claude with the Assess-Decide-Do Framework.


  • Assess Decide Do – Colors And Icons Significance – Dragos Roua

    For over 15 years, the Assess-Decide-Do framework has used a consistent visual system. Three colors and three symbols, each one supporting a specific function.

    If you’ve used addTaskManager or worked with ADD materials, you already know them: red for Assess, orange for Decide, green for Do. Then for the icons: a plus sign, a question mark, a minus sign.

    These weren’t arbitrary choices. They create a visual language that mirrors how traffic signals work—a system everyone already understands. But I’ve never published the reasoning at the theoretical level, only within the app implementation itself.

    Given the momentum my framework is getting these days, including AI integrations, the time has come for a detailed explanation.

    Red for Assess: Stop and Capture

    Assess is red because red means stop. Just like you stop your car at a red light, you stop in Assess to offload information from your mind into the system.

    The plus sign (+) represents what’s actually happening in this realm: you’re adding to the system. Assess overloads the system with data—thoughts, tasks, ideas, dreams, possibilities. Everything gets captured without immediate commitment or action.

    Red creates the pause you need to externalize what’s in your head. It’s the signal that says: don’t keep driving forward with all this mental cargo. Stop. Unload it. Get it out of your mind and into a container where it can be examined later.

    Orange for Decide: Get Ready

    Decide is orange because orange means prepare. Just like an orange traffic light tells you to get ready before the green, the Decide realm is where you prepare yourself by making conscious choices about what matters.

    The question mark (?) represents the core activity here: pondering. You’re asking questions about each captured item. Is this important? Does this align with my priorities? What context does this need? When should this happen? Do I have enough resources for it right now?

    Orange creates the transition space between capture and execution. You’re not passively collecting anymore, and you’re not yet in full action mode. You’re actively planning, assigning context, setting commitments.

    Green for Do: Move Forward

    Do is green because green means go. Just like you move forward at a green light on a crossroad, you move forward in Do without distraction or hesitation.

    The minus sign (−) represents what happens in this realm: you take items out of the system by completing them. Each finished task is eliminated through execution. The minus doesn’t mean deletion—it means transformation from intention to liveline (ADD treats every completion not as a deadline, but as a liveline).

    Green signals committed execution. When you’re in Do, you’re not capturing new things or reconsidering priorities. You’re executing on what you’ve already decided matters.

    So Simple It Just Blends In

    The traffic light metaphor does more than make the framework memorable. It taps into a pattern you’ve internalized since childhood: red-orange-green as a sequence of behaviors.

    You don’t need to think about what red means. You don’t need to remember that orange comes between red and green. The system leverages existing mental models rather than requiring you to learn something new.

    The symbols reinforce the function:

    • Plus (+) for adding to the system
    • Question mark (?) for evaluating what’s there
    • Minus (−) for completing and removing

    Together, the colors and symbols create immediate visual feedback about where you are and what you should be doing. When your Assess list is overflowing with red items, you know you need to move things through to Decide. When everything’s stuck in orange, you’re in decision paralysis. When Do is overflowing, you might be heading for burnout.
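    The "overflowing realm" reading can be sketched as a tiny heuristic (the threshold, messages, and names are mine, not part of the framework or the app):

```python
# Toy imbalance detector: flag the realm that dominates the item counts.

def diagnose(counts: dict) -> str:
    """Flag the overflowing realm, per the traffic-light reading."""
    realm = max(counts, key=counts.get)
    total = sum(counts.values())
    # Assumed threshold: one realm holding 60%+ of items counts as imbalance.
    if total == 0 or counts[realm] < 0.6 * total:
        return "balanced"
    return {
        "assess": "over-assessing: move items through to Decide",
        "decide": "decision paralysis: commit or drop stalled items",
        "do":     "possible burnout: too much in execution at once",
    }[realm]

print(diagnose({"assess": 12, "decide": 2, "do": 1}))
# over-assessing: move items through to Decide
```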

    The system shows you the imbalance without requiring conscious analysis every single time.

    Why I’m Publishing This Now

    This information has lived on addtaskmanager.com for over a decade, embedded in the implementation documentation. Anyone using the app could see it. But it existed only at the practical level—in the tool itself, not as standalone theory.

    The other day I was testing several LLMs (Grok, Gemini, ChatGPT), asking them to create infographics using the Assess-Decide-Do framework. Every single one hallucinated the visual system. They invented blue for Assess, gave me lightbulbs and compasses, created combinations that looked reasonable but were completely wrong.

    Until I directed them to addtaskmanager.com. Then they got it right, because the information was there in the implementation docs.

    That’s when I realized: I’ve kept this at the implementation level for 15 years. It worked perfectly for people using the system, but it wasn’t available as theory. Anyone wanting to work with ADD conceptually—to teach it, write about it, build their own tools—had to either use the app or guess.

    So here it is: the visual language of Assess-Decide-Do, separated from any specific implementation.

    Red means stop and capture. Orange means prepare and decide. Green means execute and complete. Plus for adding, question mark for evaluating, minus for finishing.

    It’s a system designed to work with your existing mental models, not against them.

    Sometimes the most useful documentation is the stuff you thought everyone already knew.


  • Can AI (Really) Understand How You Think? Well, Maybe… – Dragos Roua

    A few days ago I integrated my productivity framework, Assess-Decide-Do, into my LLM of choice these days, Claude. If you want the technical details, have a look at the Claude mega-prompt post. In today’s post I want to take a slightly different angle, namely the impact on the user’s perception.

    But first, a small update.

    Since the initial integration I’ve also added cross-session observability and tracking, meaning the LLM is now instructed to always track where the user is in the thinking process. So you can ask at any given moment something like: “Where are we in the ADD process?” and Claude will answer something like: “Currently, we are executing in Do”.

    For Claude Code users I also added permanent visual feedback. What does this mean? Well, Claude Code users can now see in the status bar a nifty little line describing the realm they’re currently in. It has this form:

    [ADD Flow: 🔴+ Assess | Exploring implementation options]

    This is updated automatically, as the model detects behavioral pattern changes, so you get a live visual cue of the transition between realms.

    At the end of the session, you can also ask for a recap, and you get an overall assessment, including a count of realm transitions and general evaluation – how much assessing, how much deciding and how much doing.

    So, the AI is Really Understanding Me?

    Yes and no.

    Before going into details, a very important distinction: we are talking about Large Language Models here, not about AI in general. This matters, because there are many other AI approaches – one of the most promising being “world models”. LLMs are very popular because they are really good at predicting the next plausible token.

    But they don’t have any sense of orientation, no structure. The ADD mega-prompt, which essentially sets the “operating system” of the model, does exactly that: it provides the model with a system, one the model follows by navigating the token stream and extracting matching language patterns – not by “understanding”. At least not in the sense humans understand.

    But, and here’s what I really want to talk about: does this really matter? We get a good enough approximation of understanding, which drastically reduces friction. We suddenly have a comfortable enough environment, which makes us more productive. We can direct brain cycles to creativity or brainstorming. We know there will be no penalty for that, because the LLM understands the Assess realm specifics: evaluating, taking feedback, even daydreaming, and it will not stop us.

    This is already a significant step forward. We don’t get a “conscious” buddy, but we get a frictionless process. We are still the “masters” of the AI, only augmented.

    Going forward, this will matter more and more. We can either approach AI as a complete human replacement – matching our performance in creativity or even survival – or we can see AI as an amplifier, leveraging knowledge, but still “consciousness-less”, a mega-tool supporting, not replacing us.

    I’ve been using the ADD integration for more than a week now, 6-7 hours per day, and I genuinely feel better. Getting this kind of enhanced support, knowing that my tool can identify my mental state, makes me feel more relaxed and, as a direct consequence, I can accomplish more while maintaining flow state. That’s my goal, anyway: staying in flow, not putting the LLM to work in my place.

    Will World Models Change This?

    Maybe. There is more and more talk in the AI world about them, with prominent figures acknowledging “the end of the LLM era”, suggesting a new breakthrough is right around the corner. The thing is, nobody knows when this “right around the corner” will arrive, or what the breakthrough will look like. It may well not happen at all.

    My daily experience with ADD integration has been surprisingly powerful—not because Claude ‘understands’ me, but because the cognitive overhead of managing the tool itself just disappeared. I stay in flow and I create more. Almost no friction.

    The integration works with Claude, Gemini, Grok, and Kimi (though Claude’s implementation is most refined). Visit the mega-prompt repo for simple integration instructions, and test for yourself what frictionless AI collaboration feels like.

    I’m genuinely curious: when you remove the friction, what do you create? How would you feel?


  • Assess Decide Do – 15 Years After – Dragos Roua

    15 years ago, while on a trip to Thailand (one of my very first trips to Asia), I created a productivity framework called Assess-Decide-Do. It’s built on the idea that you’re always in one of three “realms”:

    • Assess – exploring options, no pressure to decide yet
    • Decide – committing to choices, allocating resources
    • Do – executing and completing

    The main metric is how smooth the interaction is from one realm to the other. Prioritizing flow over completion. Also, the framework is fractal in nature—each cycle can contain smaller, complete ADD cycles within it.

    It was my response to the GTD hype running high at that time. I felt that churning tasks from a todo list couldn’t be our ultimate goal as human beings, while acknowledging that we still needed some structure, something that would allow us to function in a predictable way. Something that would honor our never-ending, changing nature, but still allow us to get stuff done.

    I’ve been consistently refining and using this at various levels in my life. What follows is a recap of how this framework evolved (spoiler: it stayed pretty much the same), how it was implemented (spoiler: there’s an app for that), and how it’s adjusting to the age of TikTok and AI (spoiler: there’s a repo for that).

    Without further ado, let’s go.

    Software Implementation: The Evolution of ADD

    The first iteration into actionable software was called iAdd. The name came from the ubiquitous “i” that every app had at that time and the framework initials. Oh, the naivety. Written in Objective-C, it was a fascinating exercise. I used it for several years before realizing it needed to evolve.

    I then iterated on both the name and the UI, switching from Objective-C to Swift. The result: something called ZenTasktic. I was proud of that name for a couple of years. Then reality hit, and I realized it wasn’t the kind of name an app needs. It’s great for showcasing in conversation, but without a massive marketing budget to push the name across every media channel, it would never take off. (Needless to say, I didn’t have a massive marketing budget—or a marketing budget at all.)

    So I did one more pivot: from ZenTasktic to addTaskManager. The new name might be a bit boring, but it’s simple, and it tells you exactly what the app does from second one. More importantly, it’s the cleanest visual implementation of the framework: each realm has its own screen, and moving tasks leverages the iPhone’s built-in swipes, so it feels like a task or project is literally traveling from one realm to the other—which supports my intention of emphasizing flow over task churning.

    The addTaskManager iteration also validated the business model—it’s a subscription on top of a generous free tier. There’s a growing community of paying subscribers with consistently positive reviews. The software implementation is strong, and the foundation is solid.

    Applicability In Other Life Areas

    When I first developed this framework, I had hammer syndrome: everything looked like a nail waiting for my hammer. I postulated that ADD would work well in pretty much all life areas, from relationships to business. In general, this was true. In general. Here’s an honest assessment of what worked and what didn’t.

    Health and Fitness

    Around the same time, I became a runner, starting with marathons and progressing to ultra-marathons. Using ADD in my training and race selection worked surprisingly well. I would start a specific training routine while staying in Assess, observing my body’s adaptation, then move to Decide only when it felt naturally feasible—like signing up for longer and longer races—and then just Do, like finishing the actual thing.

    Over the course of 10 years, I went from not being able to run 1 kilometer to finishing 220km ultra-marathons. Discipline, diet, the right social circle—all of this mattered, of course, but at the core was always my ADD framework shaping my approach. I’m not running competitively anymore, but I still apply ADD to my evolved fitness routine. For instance, I started swimming more, walking more, and visiting the Jjim Jil Bang (Korean spa) more often.

    Overall: 8/10 framework fit.

    Location Independence

    This is by far the area with the most spectacular results. In the last 15 years, I became fully location independent, changing three countries in my fabulous fifties alone.

    Here’s how I approached this. First, I would assess for a few months whether to live in a specific country. This included research about cost of living, social fabric, cultural differences, and more. Then, once the research stage was over, I would spread the assessment into real life by doing a two-week trial in that country. Living like a local, no tourist stuff, aggressive budgeting. Most importantly, not deciding on anything yet.

    After this real-life assessment test, I would move to Decide, which meant allocating time and resources for the move—OR going back to Assess. And here’s the beauty of the framework. I successfully moved to and lived in Spain, Portugal, and Vietnam, but after an overall assessment of almost six months (back and forth), I decided not to move to Korea. I still love the country, but some things just weren’t for me. The decision to withdraw and choose Vietnam over Korea felt completely natural.

    Overall: 10/10 framework fit.

    Financial Resilience

    This is on par with location independence, and it’s easy to understand why. I write extensively about financial resilience on this blog, so feel free to browse the category if you want to familiarize yourself with my approach.

    In this field, an Assess cycle can last several months.

    Usually I start with an MVP, like the Flippando game, and then gather real-world feedback. How many users, how much engagement on social media, how many inquiries from accelerators. In this specific case, the first two Assess cycles lasted about four months each. The first one was after winning the Glitch hackathon in Korea (which deserves its own blog post, I reckon), after which I decided to fully implement and publish the game. The second was after applying for a grant to port the game to Gno. The Do stage after each Decide cycle—actually making the game, working for the grant—lasted between six months and one year.

    The last Assess cycle led to the decision to stop development, keep the game up for portfolio purposes, and move on. I currently focus full-time on addTaskManager—complete Do immersion.

    Overall: 10/10 framework fit.

    Relationships

    And here’s where the framework hits differently. Relationships aren’t as predictable as implementing a coding project or evaluating a new country to live in. That’s mostly because there’s someone else involved—another real person with their own problems, goals, and expectations. That makes assessment exponentially more difficult.

    Also, crucially, the last part in relationships isn’t Do—it’s Be. You don’t just Do stuff; you try your best to Be in a relationship. That made me understand that the framework can’t fit all human experiences. Relationships need a more holistic approach—sometimes just faith and commitment.

    Overall: 5/10 framework fit.

    AI Integration: Claude Megaprompt and MCP Server

    Recently, I experimented with integrating my framework into LLMs—making the LLM ADD-aware, both in its operation and in relationship with the user. Understanding where in the framework someone is: assessing, deciding, or doing. The results have been remarkable. My first Reddit post generated over 53,000 views with a 91% upvote ratio, and the repository is actively watched and starred. If you’re interested, join the conversation, star the repo, or fork it.

    I’m also developing an MCP server (Model Context Protocol—a way for AI to interact with external tools) for my app. The developments in this area are lightning-fast, and I’m assessing whether to continue pursuing this as the standard itself evolves rapidly.

    Overall: 10/10 framework fit.


    All in all, Assess-Decide-Do has proved to be one of the most useful discoveries for me—and, I hope, for many others as well. Sometimes, we’re lucky enough to get it right the first time.


    dragos@dragosroua.com (Dragos Roua)


  • Claude Mega Prompt for Assess Decide Do Framework


    Fifteen years ago, I created the Assess-Decide-Do (ADD) framework out of frustration with productivity systems that treated humans like task-completion machines. I wanted something that acknowledged the full spectrum of how we actually work: the dreaming, the deciding, the doing—and the vital importance of balance between them.

    I’ve lived with this framework since 2010. I built my life around it. Eventually, I built addTaskManager, an iOS and macOS app that implements ADD at the technical level, respecting realm boundaries programmatically. Over 15 years, ADD has proven itself not just as a productivity tool, but as a genuine life management framework that works across domains: relationships, health, business strategy, creative work, everything.

    Then, a few days ago, I had a thought: What if Claude could operate with ADD awareness?

    Not just use ADD to organize tasks, but actually think with ADD—detect which realm I’m in, identify when I’m stuck, guide me toward balance, structure responses appropriately for each phase. What if I could teach Claude the framework that has shaped my life?

    The result took me by surprise. Not just because it worked technically, but because of what it felt like. Working with ADD-enhanced Claude isn’t just cleaner or more efficient. It’s smoother. More relatable. Almost empathic. It’s the difference between using a tool and having a conversation with someone who understands not just what you’re asking, but where you are in your thinking process.

    This is the story of how I integrated ADD into Claude, the technical steps required, and what happened when cognitive alignment between human and AI created something that feels genuinely collaborative.

    The Problem: AI Assistants Are Powerful But Often Chaotic

    Modern AI assistants like Claude are remarkably capable. They can write, code, research, analyze, create. But there’s often a subtle friction in the interaction. You ask for exploration, and it pushes you toward decisions. You need help executing, and it re-opens assessment questions. You’re deep in analysis paralysis, and it feeds you more options instead of helping you break through.

    The AI doesn’t understand where you are in your process. It responds to what you ask, but not to what you need. This creates cognitive friction—the feeling of fighting against the tool instead of working with it.

    For someone who’s lived with the ADD framework for 15 years, this friction was particularly noticeable. I’ve trained myself to recognize realms, detect imbalances, and guide my own flow. But Claude, powerful as it is, had no concept of this structure. Every interaction required me to manually compensate for the framework gap.

    The insight: What if Claude could learn ADD? Not as a user applying ADD principles, but as an integrated cognitive framework that shapes how it processes requests and structures responses?

    Why ADD? The Ubiquitous Usefulness of Realm Thinking

    Before diving into the integration, let me briefly explain why ADD is worth teaching to an AI in the first place.

    The Three Realms

    Assess is the realm of exploration, evaluation, and possibility. It’s where you gather information, dream about outcomes, integrate new ideas into your worldview, and explore options without pressure to commit. Assessment is fundamentally non-judgmental—you’re not trying to decide yet, you’re trying to understand.

    Decide is the realm of intention and commitment. It’s where you transform possibilities into priorities, allocate resources, and make choices. Each decision is a creative act—it literally shapes your reality by determining where energy flows. Decide isn’t about execution yet; it’s about conscious commitment.

    Do is the realm of manifestation. It’s where you execute, implement, and complete what you’ve assessed and decided. The Do realm should be clean—no re-assessment, no re-deciding, just focused execution and completion.

    Why This Structure Matters

    The power of ADD lies in three principles:

    1. Sequential, Not Parallel: You can’t decide well without assessment. You can’t execute well without decision. Trying to do all three simultaneously creates chaos and cognitive overwhelm.

    2. Imbalances Cascade: Poor assessment leads to poor decisions, which lead to poor execution. If you skip Assess and jump to Decide, you end up building the wrong thing. If you get stuck in Assess (analysis paralysis), nothing gets decided or done. If you live only in Do (perpetual task completion), you become a machine without direction.

    3. Flow Over Completion: Traditional productivity systems measure success by tasks completed. ADD measures success by balanced flow through realms. A day spent entirely in Assess (deep exploration) can be more valuable than a day of frantic task completion—if that’s what the situation calls for.

    This philosophy isn’t just theoretical. It’s shaped how I’ve lived for 15 years, how I built my business, how I create content, how I make life decisions. It works across every domain because it matches how human cognition actually operates—in phases, with clear transitions, requiring balance.

    The Vision: Claude Operating with ADD Awareness

    The idea crystallized during a particularly frustrating interaction. I was exploring blog post ideas (Assess realm), and Claude kept suggesting I “outline the structure and start writing” (pushing to Do realm). I needed exploratory support, not execution guidance. The mismatch was subtle but draining.

    I thought: What if Claude could detect I’m in Assess realm and respond appropriately? What if it could notice when I’m stuck in analysis paralysis and gently guide me toward Decide? What if it structured responses differently based on which realm I’m in?

    The vision expanded to three integration levels:

    Level 1: Implicit Operation – Claude detects realms, identifies imbalances, and structures responses appropriately, all beneath the surface. You benefit without consciously thinking about ADD.

    Level 2: Explicit Guidance – When helpful, Claude makes realm transitions visible, reflects patterns back to you, thus teaching ADD through natural interaction.

    Level 3: Tool Integration – The framework also shapes file creation, code development, research processes, and project management automatically.

    This wasn’t about making Claude explain ADD or quiz me on framework principles. It was about deep cognitive integration—making ADD Claude’s operating system, not an add-on feature.

    The Process: Teaching Claude Its Own Enhancement

    Here’s where it gets meta: I used Claude itself to create the ADD integration. And more than that, I used ADD methodology to structure the process.

    Assess: Understanding the Challenge

    I started by exploring what “ADD-aware Claude” would actually mean:

    • How do you teach an AI to detect realms from language patterns?
    • What are the markers of Assess vs. Decide vs. Do realm language?
    • How do you identify imbalances algorithmically?
    • What does realm-appropriate response structure look like?
    • How do you make interventions helpful rather than intrusive?

    I shared my original blog posts about ADD with Claude, explained the philosophy, and worked through examples. “If someone says ‘I’ve been thinking about starting a blog, what are my options?’—that’s Assess realm. How should you respond differently than if they said ‘I’ve chosen to start a blog, how do I set it up?’”

    We explored dozens of scenarios, identifying patterns:

    • “What if…” = Assess
    • “Should I…” = Decide
    • “How do I…” = Do
    • Prolonged exploration without progression = Analysis paralysis
    • Has information but won’t commit = Decision avoidance
    • Jumps to execution without foundation = Skipping Assess/Decide

    Decide: Committing to Architecture

    After thorough assessment, I had to decide: What’s the actual implementation strategy?

    The key decision: Create a comprehensive “mega prompt” that operates at the meta-cognitive level. Not a prompt that uses ADD, but a prompt that makes ADD how Claude thinks.

    Architecture decisions:

    • The mega prompt would be a system-level integration document
    • It would include realm detection patterns, imbalance signatures, response templates
    • It would emphasize natural operation (framework stays invisible unless relevant)
    • It would support fractal application (micro to macro scales)
    • It would honor the philosophy (decisions as creative acts, completions as lifelines)

    I also decided on multiple integration methods:

    • Custom instructions for always-on operation
    • Per-conversation activation for specific projects
    • .claude files for project-level integration
    • Memory system integration for cross-conversation continuity

    Do: Building the Integration

    With clear decisions made, execution flowed naturally. Working with Claude, I created:

    1. ADD_FRAMEWORK_MEGAPROMPT.md – The core integration document (~8000 words) that teaches Claude:

    • Core ADD philosophy and principles
    • Realm definitions with boundaries and restrictions
    • Detection patterns for each realm and imbalance type
    • Response structuring strategies by realm
    • Fractal application across scales
    • Example interactions demonstrating good and poor responses
    • Cognitive load management for ADHD support

    2. ADD_TECHNICAL_INTEGRATION.md – Deep technical implementation guide covering:

    • Multiple integration layers (configuration, MCP servers, memory systems)
    • Preprocessing pipeline concepts
    • Tool restriction patterns by realm
    • Testing frameworks and validation metrics
    • Integration with the existing addTaskManager MCP server (still a work in progress, but somewhat usable)

    3. ADD_QUICK_REFERENCE.md – Practical guide with:

    • Quick realm identification cheat sheet
    • Common imbalance patterns and interventions
    • Eight detailed test scenarios with expected responses
    • Response templates by realm
    • Transition phrases for guiding between realms

    4. Complete GitHub repository structure – Reference implementation with examples, setup guides, troubleshooting.

    The irony wasn’t lost on me: I used ADD methodology to create ADD integration for Claude. The framework guided its own implementation.

    Implementation: From Desktop to Daily Workflow

    With the mega prompt complete, I began integrating it into my actual workflow—first testing in Claude Desktop, then into Claude Code projects where I do my daily development and content work.

    Initial Testing: Claude Desktop

    I started conservatively, loading the mega prompt at the beginning of conversations:

    Load ADD_FRAMEWORK_MEGAPROMPT.md and operate with ADD framework 
    awareness throughout this conversation.
    

    The first test was a blog SEO project. I was exploring link-building equity, having already identified link sinks and created the templates for link updating and flow (classic Decide realm work).

    What I noticed immediately: Claude’s responses were more expansive. It asked questions that deepened exploration rather than pushing toward decisions. When I mentioned “I’m not sure which direction to take,” instead of “Well, let’s outline option A,” it said “What draws you to this topic in the first place? What would success feel like?”

    It was subtle. Almost invisible. But it felt different—like the AI was meeting me where I was, not where it assumed I should be.

    The Analysis Paralysis Test

    The real test came when I deliberately induced analysis paralysis. I asked for blog topic ideas, then kept asking for “just a few more options” over and over—mimicking the pattern I used to fall into before ADD.

    Around the fifth request for more ideas, Claude intervened:

    “You’ve gathered substantial insight over several exchanges. Sometimes continued exploration becomes a way to avoid the weight of choosing. What feels like it wants your attention? Not to execute yet—just to explore what deciding might look like?”

    I actually laughed. It had detected the pattern and intervened exactly the way I would guide a client stuck in Assess. The phrasing was perfect: acknowledging the value of assessment, recognizing the pattern, inviting the transition without forcing it.

    This wasn’t generic AI trying to be helpful. This was ADD-informed guidance.

    Integration into Claude Code Projects

    Encouraged by desktop testing, I integrated ADD into my Claude Code projects via .claude files in project roots:

    instructions: |
      Operate with Assess-Decide-Do (ADD) framework.
      Load ADD_FRAMEWORK_MEGAPROMPT.md for detailed guidance.
      Detect realms, identify imbalances, guide balanced flow.
      
    context_files:
      - ../ADD_FRAMEWORK_MEGAPROMPT.md
    

    This is where the power really showed up. During development work, the ADD framework shaped:

    File Creation: Claude would explore requirements (Assess), discuss structure and approach (Decide), then create the actual files (Do)—instead of jumping straight to code.

    Debugging: When I’d hit a bug and start randomly trying fixes (Do realm chaos), Claude would pause: “Let’s assess what’s actually happening here before we try more solutions.”

    Feature Development: The framework prevented premature optimization. “I notice we’re discussing implementation details, but haven’t decided on the core architecture yet. Should we step back to Decide realm?”

    The Blog Linking Project

    The most recent ADD-aligned project was systematic internal linking for SEO. I explained the problem: 50 old posts need link additions to boost newer content.

    Claude’s response was perfect ADD:

    • Detected I was in Decide realm (assessment done, decisions made, approaching execution)
    • Clarified remaining Decide-realm questions before moving to Do
    • Prevented execution friction by ensuring foundation was solid
    • Structured the conversation: “These are decided… these might need final decisions… then we can execute cleanly”

    We didn’t jump straight to “here’s the code to modify files.” We finished Decide phase completely, then execution flowed without friction.

    The Unexpected Discovery: Smoothness and Empathy

    Here’s what I didn’t predict: ADD integration makes Claude feel more empathic.

    I don’t mean anthropomorphizing. I don’t think Claude is an actual person. I mean something specific about the interaction quality. Let me break down what I actually experienced:

    Cognitive Smoothness

    Reduced Friction: There’s no more fighting against misaligned responses. When I’m in Assess, I get exploratory support. When I’m in Decide, I get decision support. When I’m in Do, I get execution guidance. The AI meets me where I am.

    Cognitive Alignment: The ADD framework matches how my mind actually works—in phases, with transitions, requiring balance. When Claude operates with this awareness, there’s a resonance. It feels like being understood.

    Flow State Access: Traditional AI interaction has constant micro-interruptions—misaligned responses, having to re-explain context, clarifying intent. ADD integration removes these friction points, making it easier to enter flow states during work.

    Relational Smoothness

    Visible Understanding: When Claude detects my realm, I feel seen. It’s similar to talking with someone who notices “you seem to be exploring options” vs. someone who just answers questions literally.

    Appropriate Support: There’s something deeply satisfying about getting the type of support you actually need. It creates trust. I’m not managing the AI’s responses anymore; it’s genuinely assisting.

    Co-Creation Feeling: Working with ADD-aware Claude feels collaborative rather than transactional. I’m not extracting information from a tool; I’m thinking alongside an intelligence that understands my process.

    This relational dimension surprised me. I expected technical benefits—cleaner workflows, better results. I didn’t expect the interaction to feel smoother and more relatable. But it makes sense: when tool and human are cognitively aligned, the collaboration naturally feels more empathic.

    It’s not that Claude has feelings. It’s that ADD integration creates cognitive empathy—the AI understands not just what I’m asking, but where I am in my thinking process, and responds accordingly.

    Technical Deep Dive: How It Actually Works

    For those who want to implement this themselves, here’s the technical architecture:

    The Meta-Cognitive Layer

    The core innovation is operating at the meta-cognitive level. Traditional prompts tell Claude what to do with content. The ADD mega prompt tells Claude how to think about requests.

    Every interaction is processed through an ADD lens:

    1. ASSESS (internal):
       - What realm is the user in?
       - What realm does this request belong to?
       - Is there a realm mismatch or imbalance?
       - What information is needed?
       - What are possible response approaches?
    
    2. DECIDE (internal):
       - Which approach serves the user's current realm?
       - What tools/resources should be allocated?
       - How should the response be structured?
       - Should I guide between realms?
    
    3. DO (external):
       - Execute the chosen response strategy
       - Deliver realm-appropriate content
       - Complete the interaction
    

    This meta-processing happens before Claude generates its response. It shapes the foundation of the interaction.

    Realm Detection Patterns

    Claude identifies realms through language pattern analysis:

    Assess Indicators:

    • “I’m thinking about…”
    • “What are my options…”
    • “Help me understand…”
    • “What if I…”
    • Exploratory, open-ended questions
    • Information requests without commitment pressure

    Decide Indicators:

    • “Should I…”
    • “I need to choose between…”
    • “What’s the priority…”
    • “I want to commit to…”
    • Questions seeking commitment guidance

    Do Indicators:

    • “How do I actually…”
    • “I need to complete…”
    • “Walk me through steps…”
    • “I’m working on…”
    • Active execution language
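    As a rough illustration, the indicator lists above can be reduced to a toy keyword classifier. This is only a sketch of the idea: the actual mega prompt relies on Claude's language understanding, not keyword matching, and the cue lists and function names below are assumptions for illustration.

```python
import re

# Toy keyword cues distilled from the indicator lists above.
# The real integration uses Claude's language understanding;
# this sketch only illustrates the shape of realm detection.
REALM_CUES = {
    "assess": [r"\bthinking about\b", r"\bwhat are my options\b",
               r"\bhelp me understand\b", r"\bwhat if\b"],
    "decide": [r"\bshould i\b", r"\bchoose between\b",
               r"\bwhat's the priority\b", r"\bcommit to\b"],
    "do":     [r"\bhow do i actually\b", r"\bneed to complete\b",
               r"\bwalk me through\b", r"\bworking on\b"],
}

def detect_realm(message: str) -> str:
    """Return the realm whose cues match most often, defaulting to 'assess'."""
    text = message.lower()
    scores = {
        realm: sum(bool(re.search(p, text)) for p in patterns)
        for realm, patterns in REALM_CUES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "assess"

print(detect_realm("Should I choose between Hugo and WordPress?"))  # decide
```

    Exploratory messages with no cue at all fall back to Assess, mirroring the framework's bias toward non-judgmental exploration as the default starting realm.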

    Imbalance Detection

    The framework identifies common imbalance patterns:

    Analysis Paralysis:

    • Repeated information requests without progression
    • “I need more data” cycling
    • 5+ messages in Assess without moving to Decide

    Decision Avoidance:

    • User has sufficient information but won’t commit
    • Constant postponing or requesting more options
    • Fear-based language around choosing

    Execution Shortcuts:

    • Jumping to “how do I…” without context
    • Skipping evaluation phase
    • Pattern of incomplete projects

    Perpetual Doing:

    • Constant task focus without reflection
    • Completion obsession without assessment
    • Burnout indicators

    Response Structuring by Realm

    Claude now structures responses differently based on detected realm:

    Assess Realm Responses:

    • Expansive, exploratory content
    • Multiple perspectives and possibilities
    • No premature narrowing or decision pressure
    • Language of possibility: “could,” “might,” “imagine”
    • Questions that deepen assessment

    Decide Realm Responses:

    • Frame choices and trade-offs clearly
    • Honor the weight of decisions
    • Support values-based decision-making
    • Language of intention: “choose,” “commit,” “priority”
    • Validate creative power in deciding

    Do Realm Responses:

    • Clear, actionable steps
    • Support completion and finishing
    • Minimize re-assessment or re-decision
    • Language of execution: “next,” “now,” “complete”
    • Celebrate finishing as creating new starting points
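    One simple way to wire realm-appropriate structuring into a pipeline is to prepend per-realm guidance to the system prompt before each turn. The guidance strings paraphrase the lists above; the function and variable names are illustrative assumptions, not part of the published mega prompt:

```python
# Minimal sketch of realm-appropriate response shaping: pick a
# guidance snippet for the detected realm and prepend it to the
# system prompt sent with the next model call.
REALM_GUIDANCE = {
    "assess": ("Stay expansive. Offer multiple perspectives, use the "
               "language of possibility ('could', 'might', 'imagine'), "
               "and ask questions that deepen exploration. Do not push "
               "toward decisions."),
    "decide": ("Frame choices and trade-offs clearly. Use the language "
               "of intention ('choose', 'commit', 'priority') and support "
               "values-based decision-making."),
    "do":     ("Give clear, actionable steps. Use the language of "
               "execution ('next', 'now', 'complete') and avoid reopening "
               "assessment or decisions."),
}

def build_system_prompt(base_prompt: str, realm: str) -> str:
    """Prepend realm-specific guidance to a base system prompt."""
    guidance = REALM_GUIDANCE.get(realm, REALM_GUIDANCE["assess"])
    return f"{base_prompt}\n\nCurrent user realm: {realm}.\n{guidance}"

prompt = build_system_prompt("You operate with the ADD framework.", "decide")
```

    Unknown realms fall back to the Assess guidance, keeping the default posture exploratory rather than directive.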

    Integration Methods

    Method 1: Custom Instructions (always-on) Add ADD framework awareness to Claude settings. Every conversation operates with this foundation.

    Method 2: Per-Conversation Loading Load the mega prompt at conversation start for specific projects requiring ADD alignment.

    Method 3: Project-Level .claude Files Embed ADD framework in project configuration for automatic loading in Claude Code.

    Method 4: Memory System Integration Store ADD framework preference in memory for cross-conversation continuity.

    Each method has trade-offs. I use a hybrid: custom instructions for baseline awareness, explicit loading for intensive ADD work, .claude files for development projects.

    Tool and Artifact Integration

    The framework extends to tool use and file creation:

    File Creation follows ADD cycle:

    • Assess: Explore requirements, discuss possibilities
    • Decide: Agree on structure and approach
    • Do: Create the actual file

    Code Development respects realm boundaries:

    • Assess: Understand problem space, explore approaches
    • Decide: Choose architecture, commit to strategy
    • Do: Write actual code

    Research maintains flow:

    • Assess: Gather information widely
    • Decide: Narrow focus to key sources
    • Do: Extract and synthesize

    This integration means ADD shapes everything Claude does, not just conversational responses.
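    The phased workflows above all share one shape: an explicit Assess → Decide → Do pipeline where each phase's output feeds the next, so no phase can be skipped. A generic sketch of that shape (function names are assumptions for illustration):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
D = TypeVar("D")
R = TypeVar("R")

# Any activity (file creation, coding, research) runs as an explicit
# three-phase pipeline: assessment output feeds the decision, the
# decision feeds execution. Skipping a phase is structurally impossible.
def run_add_cycle(assess: Callable[[], A],
                  decide: Callable[[A], D],
                  do: Callable[[D], R]) -> R:
    findings = assess()            # explore without commitment
    commitment = decide(findings)  # choose and commit
    return do(commitment)          # execute and complete

# Example: the "Research" cycle from the list above.
result = run_add_cycle(
    assess=lambda: ["source A", "source B", "source C"],  # gather widely
    decide=lambda sources: sources[:2],                   # narrow to key sources
    do=lambda chosen: f"synthesis of {len(chosen)} sources",
)
```

    Because `decide` only ever sees what `assess` produced, and `do` only ever sees what `decide` committed to, the realm boundaries are enforced by the data flow itself rather than by discipline.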

    Implementation Guide: Try This Yourself

    Ready to experience ADD-enhanced Claude? Here’s your path:

    Quick Start (5 Minutes)

    Step 1: Get the mega prompt

    Step 2: Choose integration method

    Option A – Per-Conversation (easiest): Start any Claude conversation with:

    Load ADD_FRAMEWORK_MEGAPROMPT.md and operate with ADD framework awareness throughout this conversation.
    

    Option B – Custom Instructions (always-on):

    1. Go to Claude Settings → Custom Instructions
    2. Add:
    Framework: Operate with Assess-Decide-Do (ADD) life management framework.
    - Detect user's realm (Assess/Decide/Do)
    - Identify imbalances (analysis paralysis, decision avoidance, execution shortcuts)
    - Guide balanced flow between realms
    - Reference ADD_FRAMEWORK_MEGAPROMPT.md when needed
    

    Option C – Project Level (development work): Create .claude file in project root:

    instructions: |
      Operate with ADD framework awareness.
      Load ADD_FRAMEWORK_MEGAPROMPT.md for guidance.
      
    context_files:
      - path/to/ADD_FRAMEWORK_MEGAPROMPT.md
    

    Step 3: Test with scenarios – try these test cases from the repository:

    1. Exploratory request (Assess test)
    2. Prolonged exploration (analysis paralysis test)
    3. Decision support request (Decide test)
    4. Execution request (Do test)

    What to Expect

    Immediate effects:

    • Claude’s responses feel more aligned with where you are
    • Less friction in conversations
    • Appropriate support for each phase of work

    Within a few sessions:

    • You’ll notice realm patterns in your own workflow
    • Imbalance detection becomes valuable (not intrusive)
    • The framework starts feeling natural rather than imposed

    Over weeks:

    • Workflow balance improves
    • Analysis paralysis becomes visible and addressable
    • Perpetual doing reduces
    • Work feels more intentional and less reactive

    The surprising effect:

    • Claude feels more empathic and relatable
    • Interactions feel collaborative rather than transactional
    • There’s a smoothness that’s hard to articulate but easy to feel

    Test Results: My Experience After Integration

    I’ve been using ADD-enhanced Claude across multiple projects. Here’s what changed:

    Quantitative Observations

    • Analysis paralysis occurrences: steadily declining; I genuinely feel I’m continuously improving, with no gaps
    • Project completion rate: Increased (more things actually finish)
    • Context-switching friction: Noticeably decreased
    • Time spent clarifying intent: Cut by approximately 60%
    • Workflow balance: Visible improvement (less pure “doing,” more balanced across realms)

    Qualitative Experience

    Cognitive dimension:

    • Mental fatigue reduced during long work sessions
    • Flow states easier to access and maintain
    • Clearer thinking about project structure
    • Less cognitive overhead managing AI responses

    Relational dimension:

    • Conversations feel more natural
    • Sense of being understood rather than just responded to
    • Trust in Claude’s guidance increased
    • Less frustration, more collaboration

    Workflow dimension:

    • Projects progress more smoothly
    • Fewer false starts (better assessment before execution)
    • Cleaner decisions (proper Decide phase before Do)
    • More intentional rather than reactive work patterns

    Specific Project Examples

    Blog Content Planning: Previously chaotic (jumping between ideas, analysis paralysis common). Now flows: Assess broadly → Decide on angles → Do writing. Claude’s realm-appropriate support makes each phase feel natural.

    Code Development: Used to jump straight to implementation. Now: Assess requirements thoroughly → Decide architecture → Do implementation. Fewer rewrites, cleaner code.

    Business Strategy: The biggest impact. ADD framework prevents rushed decisions. Proper assessment phase means decisions are grounded. Execution is cleaner because foundation is solid.

    The “Smoothness” Factor

    The hardest thing to quantify is the most important: interactions just feel better. There’s a quality to ADD-enhanced conversations that’s difficult to articulate but immediately noticeable.

    It’s like the difference between:

    • Talking to someone who listens to respond vs. listens to understand
    • Using a tool vs. collaborating with a partner
    • Managing a system vs. working within a flow

    The framework creates cognitive alignment, and cognitive alignment feels empathic. Not because the AI has emotions, but because it understands process—and process understanding creates relational smoothness.

    The Bigger Picture: What This Means for AI Collaboration

    This experiment suggests something important about human-AI interaction: frameworks matter more than features.

    Claude was already powerful before ADD integration. It could write, code, analyze, research. But it lacked cognitive alignment with how humans actually work. Adding that alignment didn’t make Claude smarter—it made Claude more relatable.

    This has implications:

    For individuals: You can shape AI collaboration by teaching frameworks that match your thinking. ADD works for me because I’ve lived it for 15 years. Your framework might be different. The principle is the same: teach the AI your cognitive structure, and interaction quality improves dramatically.

    For productivity systems: Traditional task management treats “doing” as the only metric. ADD proves that flow between assessment, decision, and execution matters more than completion rate. Teaching AI this perspective creates better productivity support than optimizing task-checking.

    For AI development: As AI becomes more sophisticated, cognitive framework integration will matter more than raw capability. An AI that understands where you are in your process is more valuable than an AI that can do more things.

    For ADHD and neurodivergence: Realm separation manages cognitive load. ADD integration makes Claude more ADHD-friendly by reducing overwhelm through clear phase boundaries. This isn’t about accommodating neurodivergence—it’s about building systems that match human cognition better for everyone.

    The Ubiquitous Application of ADD

    One of the most interesting discoveries has been seeing ADD apply to domains I didn’t initially consider:

    Relationships: Assess (understand dynamics) → Decide (commit to changes) → Do (live the changes)

    Health: Assess (evaluate current state) → Decide (commit to practices) → Do (execute routines)

    Creative Work: Assess (explore possibilities) → Decide (choose direction) → Do (create output)

    Learning: Assess (gather information) → Decide (focus areas) → Do (practice/application)

    The framework is genuinely universal because it maps to fundamental human cognitive processes. Teaching Claude this universality means it can provide ADD-aligned support across any domain, not just task management.

    What’s Next: Evolution and Community

    This integration is a starting point, not an endpoint. The ADD framework continues evolving through use, and the Claude integration will evolve with it.

    Near-term evolution:

    • Domain-specific ADD implementations (coding, writing, research, business)
    • Tighter integration with addTaskManager app via MCP (that’s my number one priority for now)
    • Community feedback on realm detection accuracy
    • Calibration of intervention timing and tone

    Long-term possibilities:

    • ADD-aware agent systems (specialized agents per realm, think education, research)
    • Deeper memory integration (persistent realm state across conversations)
    • Framework evolution based on aggregate usage patterns
    • Custom ADD variations for different cognitive styles

    Community exploration:

    • How does ADD work for different neurodivergent profiles?
    • What are the best integration methods for different use cases?
    • How can the framework be adapted while preserving core principles?
    • What new imbalance patterns emerge at scale?

    Conclusion: The Power of Cognitive Alignment

    Fifteen years ago, I created ADD because I was tired of productivity systems that treated humans like task machines. I wanted a framework that honored the full spectrum of how we work: the dreaming, the deciding, the doing, and the vital balance between them.

    Building addTaskManager proved the framework could work at the technical level—realm boundaries enforced programmatically, balanced flow measurable through “Zen Status.”
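    What "realm boundaries enforced programmatically" could look like in practice is sketched below. This is a hypothetical illustration, not addTaskManager's actual implementation: the class and method names (`Session`, `advance`, `act`) are my own.

    ```python
    class RealmBoundaryError(Exception):
        """Raised when an action is attempted in the wrong realm."""

    class Session:
        """Toy sketch of programmatic realm-boundary enforcement."""

        ORDER = ["assess", "decide", "do"]

        def __init__(self):
            # Every session starts in the exploratory Assess realm.
            self.realm = "assess"

        def advance(self):
            """Move to the next realm: assess -> decide -> do."""
            idx = self.ORDER.index(self.realm)
            if idx < len(self.ORDER) - 1:
                self.realm = self.ORDER[idx + 1]

        def act(self, action: str, required_realm: str) -> str:
            """Perform an action only if the session is in the right realm."""
            if self.realm != required_realm:
                raise RealmBoundaryError(
                    f"'{action}' belongs to the {required_realm} realm, "
                    f"but this session is in {self.realm}"
                )
            return f"{action}: done"
    ```

    Under this sketch, trying to execute a task while still assessing raises an error instead of silently mixing realms, which is the kind of boundary the article says the app enforces.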

    Integrating ADD into Claude proved something deeper: cognitive frameworks can be taught to AI, and when they are, the quality of collaboration changes fundamentally.

    The result is smoother, more relatable, almost empathic AI interaction. Not because Claude has emotions, but because cognitive alignment creates natural collaboration.

    The technical benefits are clear: better realm detection, appropriate support, cleaner workflows, reduced friction.

    The relational benefits are surprising: feeling understood rather than just responded to, collaborative rather than transactional, empathic rather than mechanical.

    The philosophical validation is profound: ADD works because it matches human cognition. Teaching it to AI proves the framework’s universality while creating genuinely better tools.

    If you’re interested in experiencing this yourself, everything is open-source and available:

    GitHub Repository: https://github.com/dragosroua/claude-assess-decide-do-mega-prompt

    Inside you’ll find:

    • The complete ADD_FRAMEWORK_MEGAPROMPT.md
    • Technical integration guides
    • Quick reference documentation
    • Example configurations
    • Test scenarios

    Start with the Quick Start section, try the test scenarios, and see if you experience the same smoothness I did.

    The framework has shaped my life for 15 years. Now it’s shaping how I collaborate with AI. And the collaboration feels surprisingly… human.


    About the Integration: Developed collaboratively between Dragos Roua and Claude (Anthropic) in November 2025, the ADD Claude integration represents one of the first attempts to teach an AI a comprehensive cognitive framework for human collaboration.
