ReportWire

Tag: AI tools

  • Can AI Inference Replace Oil as the Next Reserve Currency?

    Ray Dalio has been saying for a while that the dollar is in trouble. Not right now, not tomorrow, but rather at a structural level. His theory on the rise and fall of empires points to an intriguing pattern: roughly every 100 years, the world’s reserve currency gets replaced.

    Not because someone decides to replace it. But because change is unavoidable – and the underlying force that gave power to that currency shifts into something else.

    We’re about 100 years into dollar dominance. We’re getting closer.

    What Actually Backs a Reserve Currency

    Reserve currencies don’t just happen out of thin air. They’re backed by whatever the dominant economic force of the era is.

    Before the dollar it was the British pound, backed by the largest navy in the world and control over global trade routes. Before that, the Dutch guilder, backed by the most sophisticated merchant fleet of the time. Each transition happened because a new empire became dominant in the thing that mattered most for commerce.

    For the dollar, that thing was oil.

    The Petrodollar Was Never a Conspiracy

    After Bretton Woods collapsed in 1971, the dollar survived, and even consolidated, because oil was priced in dollars. You want oil, you need dollars. Every country needs oil, so every country needs dollars. Simple, unavoidable, effective.

    The dollar wasn’t backed only by abstract American values or military trust. It was backed by the one commodity the entire world had to buy, every single day.

    What If Oil Stops Mattering?

    Let’s try an exercise of imagination, and no, I’m not talking about electric cars. I’m talking about something way deeper.

    AI is already beginning to do what oil did for manufacturing — becoming the input for almost everything. It’s already at the foundation of drug discovery, legal work, financial modeling, logistics, content, code. The list grows every month.

    And there’s a wilder version of this argument. AI is even accelerating energy research. Fusion, which has been “20 years away” my entire lifetime, is suddenly getting real traction. Solar and battery optimization is increasingly AI-driven. If AI helps us get cheap, abundant energy, the physical scarcity of oil — the very thing that made it a geopolitical weapon — starts to dissolve.

    You could make energy at home. Not today, maybe not in five years. But it may happen sooner than we think.

    When that becomes a reality, the petrodollar loses its foundation.

    Inference Is the New Oil

    Unlike oil, which you had to drill for in specific places controlled by specific people, inference can be run anywhere you can build compute.

    It has all the properties that made oil work as a backing. It’s scarce — quality compute isn’t free, and good models need tons of energy to train. It’s universally needed — every sector of the economy is becoming dependent on it. And it’s measurable. We already have a unit: the token.

    Which is where PPT — price per token — becomes interesting. Not as a currency someone declares tomorrow, but as an index. The way price per barrel was the pulse of the oil economy, price per token might become the pulse of the inference economy.
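    To make the analogy concrete, here is a toy sketch of how a PPT index could be computed, the way oil benchmarks blend prices across grades. The provider tiers, prices, and market-share weights below are entirely hypothetical, for illustration only.

    ```swift
    import Foundation

    // Toy "price per token" (PPT) index, analogous to a price-per-barrel
    // benchmark. All tiers, prices, and weights are hypothetical values.
    let pricesPerMillion = ["frontierClosed": 15.00, "midTier": 3.00, "openWeights": 0.50]
    let volumeShares     = ["frontierClosed": 0.20,  "midTier": 0.30, "openWeights": 0.50]

    // Market-share-weighted average price per million tokens.
    let totalShare = volumeShares.values.reduce(0, +)
    let blended = pricesPerMillion.reduce(0.0) { sum, entry in
        sum + entry.value * (volumeShares[entry.key] ?? 0)
    } / totalShare

    print(String(format: "Blended PPT index: $%.2f per million tokens", blended))
    ```

    As with price per barrel, the interesting signal would be the trend, not any single reading: a falling index would mean inference getting structurally cheaper.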

    The Models Keep Getting Better

    Every six months, the frontier moves. What was cutting-edge a year ago is now available for almost nothing. The gap between the best proprietary model and a capable open source alternative keeps narrowing, and the compression has real consequences.

    The US currently leads on proprietary models. OpenAI, Anthropic, Google DeepMind — the frontier is American, backed by an overheated investment market pouring money into compute, talent, and infrastructure.

    China is doing something different. Instead of competing dollar for dollar on proprietary development, they’re doing what they’ve always done — taking the open layer and making it theirs. DeepSeek wasn’t a surprise. It was the result of a deliberate strategy: work within the open source ecosystem, optimize hard, and ship something affordable and at least as capable.

    The result is that you don’t need a billion-dollar data center to run useful inference anymore. You need a decent GPU, the right model, and electricity. We’re moving toward a world where someone can have serious compute in the back of their garage and use it to generate daily income — running local models, offering inference services, solving real problems for real people.

    This gives everyone a place at the table. A small place, yes, but still a place.

    But having a place at the table doesn’t mean you get to eat. The concentration of power we’re describing isn’t new — it echoes patterns from history. I’ve written before about how showing up is not enough anymore. The world is increasingly run by a handful of corporations, much like the Mongol Empire consolidated power across continents. Those who were conquered had a choice: swear allegiance and deliver real value, or be erased. In an inference economy, the math is similar. To survive, you need to generate at least 5x your current value — enough to justify your seat. To thrive, you need 100x. The table is open, but the entry fee keeps rising.

    Inference Doesn’t Need a Country

    I’ve been thinking a lot about this during the last few years. We’re at a point where the nation-state framing starts to break down.

    The old model — one country controls the dominant resource, prices it in their currency, projects power through that control — made sense when the resource was physical. You can blockade oil, invade a country, take their president, problem solved. You can’t blockade a model weight file. You can’t invade it.

    If inference becomes the primary economic force, power won’t necessarily concentrate in Washington or Beijing. It will concentrate around whoever controls the compute layer, the data pipelines, and the distribution networks. That might be a country. Or it might be a corporation. Or it might be something we don’t have a word for yet.

    Neal Stephenson imagined something like this in Snow Crash, back in 1992. In that world, nation-states have fragmented into franchulates — corporate-run micro-nations, floating enclaves, sovereign territories defined not by geography but by who you pay allegiance to and what network you’re on. That famous novel reads less and less like fiction with every year.

    Language barriers disappear when AI makes communication frictionless. Cultural friction softens when every interaction is mediated and translated in real time. The things that historically kept people inside national containers start to matter less. What matters is access to compute, and who sets the rules of the network you’re on.

    Whoever controls the inference layer controls the economy that runs on top of it. That might look like a country. It might look like a platform. Dalio was right that the dollar is running out of road — but he only mapped the cycle. What he didn’t map is that the next dominant force might not belong to any nation at all. The petrodollar logic was built for a world that is quietly becoming something else.

    These things move slowly, and then all at once.

    dragos@dragosroua.com (Dragos Roua)

  • Just 5 Words: AI Storytelling with Apple Intelligence

    When I really started to amplify my work with AI, the most annoying thing was the message: “you have hit your quota, usage reset at 3:00 AM”. That was Claude telling me I had exhausted all my tokens and could not work anymore. I had to stop, go out, and take a break. It felt incredibly frustrating, but I had no choice: I had to wait until my quota was replenished.

    Since then, I started to experiment with on-device inference, meaning running models on my own machine rather than via APIs capped by my current subscription. At the moment of writing, this is still prohibitive. My Mac M1 has 16GB of RAM, which makes it barely usable for coding tasks. The best I can do is run some 3B (3 billion parameters) model, like Qwen, which is roughly usable for task classification and impossible for the real hardcore coding tasks I rely on every day.

    The Unexpected Discovery

    So running inference on my machine basically means two things: first, I have to wait until models become more performant (and it seems this is now happening every six months or so); second, I have to get a better machine, one with at least 64GB of RAM.
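    The RAM constraint is simple back-of-the-envelope math: a model’s footprint is roughly parameter count times bytes per parameter, plus some runtime overhead. A rough sketch (the 20% overhead factor and quantization levels are my assumptions, not measurements):

    ```swift
    import Foundation

    // Rough RAM footprint in GB: weights plus ~20% assumed overhead
    // for KV cache, activations, and the runtime itself.
    func modelRAMGB(paramsBillions: Double, bitsPerParam: Double, overhead: Double = 1.2) -> Double {
        let weightBytes = paramsBillions * 1e9 * (bitsPerParam / 8)
        return weightBytes * overhead / 1e9
    }

    // A 3B model at 4-bit quantization squeezes into a 16GB machine...
    print(String(format: "3B @ 4-bit:  ~%.1f GB", modelRAMGB(paramsBillions: 3, bitsPerParam: 4)))
    // ...while a 70B model, even at 4-bit, wants a 64GB-class machine.
    print(String(format: "70B @ 4-bit: ~%.1f GB", modelRAMGB(paramsBillions: 70, bitsPerParam: 4)))
    ```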

    But while I was researching all these AI configurations, I accidentally stumbled upon something called Apple Intelligence. It’s a collection of optimized models that run on device and can do decent text and image manipulation. They cannot generate code or high-resolution photos, but they are good enough for low-res tasks.

    Now, if only I could imagine a use case for those…

    And here’s how Just 5 Words was born.

    AI Storytelling and Image Generation on device, with Apple Intelligence

    So, it works like this: the user picks 5 random words from a pre-made list, we feed these words to the on-device text model and instruct it to make a short story out of them – don’t ask me what prompt(s) I’m using; it took me a lot of time to get them right. Once the story is done, we feed it to the image generation model and build an image for that story. Everything stays on device: no API calls, nothing.
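    The pipeline can be sketched in a few lines. The author doesn’t say which APIs the app uses; the sketch below assumes Apple’s Foundation Models framework (`LanguageModelSession`) for text and the Image Playground framework’s `ImageCreator` for images, and the prompt wording is invented (the real prompts are private):

    ```swift
    import FoundationModels   // on-device text model (an assumption, not
    import ImagePlayground    // necessarily what the app actually uses)
    import CoreGraphics

    // Hypothetical sketch of the words -> story -> image pipeline.
    func makeStoryAndImage(from words: [String]) async throws -> (String, CGImage?) {
        // 1. Feed the five words to the on-device text model.
        let session = LanguageModelSession()
        let prompt = "Write a very short story using these five words: \(words.joined(separator: ", "))"
        let story = try await session.respond(to: prompt).content

        // 2. Feed the story to the on-device image generator.
        let creator = try await ImageCreator()
        for try await image in creator.images(for: [.text(story)], style: .illustration, limit: 1) {
            return (story, image.cgImage)
        }
        return (story, nil)
    }
    ```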

    Looks simple.

    Well, in the beginning it was. But the more I worked on it, the deeper I went down the rabbit hole. Sometimes I hit model limitations, like the fact that the text model cannot generate output if a person is involved. I don’t know why; it’s just how it is. Then I realized the image generation could be further optimized by using different styles and perspectives, so I made a few presets for both.

    And then there was the UX angle: how much of this is free, and how much is ad-gated?

    After a couple of days of back and forth, I came up with something not only working decently, but significantly… addictive. I know, I’m the builder, I’m supposed to like what I’m building, but still. There is something really addictive about watching words become short stories, almost like haikus, and then watching ephemeral, gentle images being generated on top of them. It’s… beautiful.

    The Challenges

    Apple Intelligence is not available on all Apple devices. You need iOS 18.4 or higher, an iPhone 15 Pro or newer, and Apple Intelligence enabled in Settings. That’s a significant limitation, and it was the biggest source of friction during the App Store review process. Initially, the App Review team tested on simulators and instantly rejected the app. I had to actually send them a message with instructions: test on devices, guys, Apple Intelligence doesn’t work in simulators.
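    At runtime you also have to gate the experience on availability. One way to check is SwiftUI’s `supportsImagePlayground` environment value; this is my illustration of such a gate, not necessarily how the app does it:

    ```swift
    import SwiftUI

    // Hypothetical gating view: only expose the generator on devices
    // where Apple Intelligence image generation is available.
    struct RootGate: View {
        @Environment(\.supportsImagePlayground) private var supportsImagePlayground

        var body: some View {
            if supportsImagePlayground {
                Text("Pick your 5 words")   // the real flow would start here
            } else {
                Text("This app needs Apple Intelligence: iOS 18.4+, iPhone 15 Pro or newer, enabled in Settings.")
            }
        }
    }
    ```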

    On top of that, I had to do significant work on the sharing feature. Because what’s a beautiful image on my phone if no one else can see it? Does it even exist? Jokes aside, I built 3 different share sheets, for Twitter, Instagram and TikTok, trying to bank on the short-form content these platforms are prioritizing.

    Try It Out!

    The app is finally live on the App Store, so you can try it out. Remember, you’ll need an Apple Intelligence-ready device AND Apple Intelligence enabled on that device.

    Download Just 5 Words for free from here: https://apps.apple.com/us/app/just-5-words/id6753934664 – and let me know what you build. Just share on Twitter, Instagram and TikTok and use the hashtag #5Words, I’ll keep an eye on that one.

    All in all, this was one of the most fun experiments I did recently.

  • Vibe Coding Lessons: Build iOS Apps in 7 Days + Genesis Prompt

    A couple of weeks ago I shared my blueprint for vibe coding iOS apps in coffee shops. The post was well received (and the Reddit snippet got over 250k views and 150+ comments).

    That post covered the actual schedule of my 3-hour work day; in this one I’m sharing something more practical: the lessons learned in the process. On top of that, I’m sharing what I call the “genesis prompt”. It’s the basis I use for any of my apps, and you can just copy and paste it.

    Without further ado, let’s jump in, this post will be meaty, and the genesis prompt alone may take you 10+ minutes to read.

    Lesson 1: Use Claude to write the prompt for Claude Code

    You may choose a different reasoning model, but the core idea is to keep your reasoning separate from the specs. There is a mental space for thinking and another mental space for drafting actual tasks. I use my Assess Decide Do skills for this, meaning I do the research, the brainstorming and everything creative while the LLM is in Assess. Once I’m happy with how the app description looks, and I’m sure all the details are covered, I move to Decide, which means I tell the model: draft the prompt.md for Claude Code (or whatever code builder you use: Codex, Gemini, etc.). From then on, I’m exclusively in Claude Code, unless I need to stop for something covered in lesson 2, below.

    Lesson 2: Scaffold Aggressively

    By scaffold I mean: include in the initial prompt.md all the tiny things you would usually overlook. In the beginning, that meant I had to literally stop every time I encountered something time-consuming, like the Manage encryption compliance setting in TestFlight builds, and write it back into the genesis prompt, so the next app would have it integrated. Your specific development flow may have other tiny annoyances like this; just make sure you take the time to put them at the beginning of the workflow.

    Lesson 3: Iterate Small and with Atomic Features

    Any LLM, beyond a certain codebase size, will suffer from context squeeze. Meaning it will forget its recent history or, most of the time, report incorrect progress (which I find really annoying). Example: it reports it finished the StoreKit integration, but then you ask about Restore Purchases, and it says: “you’re absolutely right, I didn’t implement this!”. The safest way around this annoyance is to keep track of what needs to be done yourself (that’s your job, for now, not the model’s) and iterate with small, very well defined features / bug fixes, which you can then protect with the Git habit in lesson 4, below.

    Lesson 4: Git Aggressively

    Sometimes even the most advanced models blunder, overwriting or deleting files. It happened to me with quite an advanced model, Sonnet 4.6, just the other day. Because of a faulty reasoning path, it ended up deleting all my data files by truncating them to an incorrect size. Had I not had a tight Git process, this would have been a little catastrophe (maybe not so little, actually). It takes discipline to keep committing (or to not forget to tell the model to commit), but it pays big time.
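    The habit is easy to demonstrate. A minimal sketch of the commit-then-recover flow in a throwaway repo (file names and commit messages are invented):

    ```shell
    # Commit after every atomic change, so a bad model run can be rolled back.
    set -e
    tmp=$(mktemp -d)
    cd "$tmp"
    git init -q
    git config user.email "demo@example.com"   # placeholder identity for the demo
    git config user.name "Demo"

    echo "v1" > Feature.swift
    git add -A
    git commit -qm "feat: atomic feature done"

    # Simulate the model truncating the file, then recover from the last commit.
    : > Feature.swift
    git checkout -- Feature.swift
    cat Feature.swift    # back to "v1"
    ```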

    Lesson 5: Treat your End Product like Disposable Inventory

    If you did everything right, in about 6-7 days you will have an app ready for the App Store. That’s big. But not in the way you think it is. It may be big for you, because you get a significant chunk of validation, but the market really doesn’t care. At the same time, all around the world, maybe 200,000 vibe coders are doing the exact same thing you did. The market is incredibly crowded right now, so please adjust your expectations. Think of your little app as being worth no more than one of the 50 items listed at your weekend garage sale. Of course, you may get lucky and your app may go viral, but, again, given the current market conditions, that is more of an anomaly than the expected behavior.

    The Genesis Mega Prompt

    This is a 23-section genesis prompt that covers all the basics in my workflow. The way I use it is to feed it as the “placeholder” to Claude after I’ve exhausted the Assess realm, meaning after I have a clear idea about the app I’m going to build, its design, monetization strategy, compliance requirements, etc. Then Claude does all the interpolation with the {{ }} blocks and gives me the complete prompt.md file, which I feed to Claude Code.

    If the description and content areas are well thought out, Claude Code can easily one-shot your full app with this.

    You can use this genesis prompt for free, just copy and paste.

    # {{APP_NAME}} — iOS App Genesis Prompt
    
    ## 1. Project Overview
    
    Build an iOS app called **"{{APP_NAME}}"** — {{APP_ONE_LINE_DESCRIPTION}}.
    
    {{APP_DETAILED_DESCRIPTION}}
    
    ---
    
    ## 2. Technical Requirements
    
    - **iOS version:** iOS 17+
    - **Framework:** SwiftUI
    - **Architecture:** MVVM with Swift Concurrency (async/await, actors)
    - **Device support:** iPhone and iPad compatible (responsive layout)
    - **Orientation:** {{ORIENTATION}}
      
    - **StoreKit 2** for In-App Purchases
    - **AVSpeechSynthesizer** for Text-to-Speech (if applicable)
    - **UserDefaults + FileManager** for local persistence
    - **No backend required** unless explicitly noted below
    
    ### Additional Frameworks (app-specific)
    
    {{ADDITIONAL_FRAMEWORKS}}
    
    *Examples:*
    - *AVFoundation + AudioToolbox for real-time audio generation*
    - *URLSession for external API calls (e.g., Claude API, weather API)*
    - *Core Location for location services*
    - *UserNotifications for local notifications*
    - *WidgetKit for home screen widgets*
    - *AppIntents for Siri Shortcuts*
    - *AdMob / Google Mobile Ads SDK for ad monetization*
    
    ### Required Xcode Capabilities
    
    - In-App Purchase
    - {{ADDITIONAL_CAPABILITIES}}
    
    *Examples: Push Notifications, Background Modes: Audio, Location Services.*
    
    ---
    
    ## 3. Design System
    
    ### Color Palette
    
    ```swift
    {{COLOR_PALETTE}}
    ```
    
    *Define both Light and Dark mode values if applicable. Example keys:
    background, surface, textPrimary, textSecondary, accent, accentLight, divider.
    Plus any app-specific colors (layer indicators, category tints, etc.)*
    
    ### Typography
    
    ```swift
    {{TYPOGRAPHY}}
    ```
    
    *Define font families, sizes, weights for:
    Headlines/titles, Body text, UI labels/buttons,
    Any special-purpose text (serif reading fonts, rounded child-friendly fonts, etc.)*
    
    ### Spacing & Touch Targets
    
    - Horizontal padding: {{HORIZONTAL_PADDING}}pt
    - Minimum touch target: {{MIN_TOUCH_TARGET}}pt (44pt default, 60pt+ for child apps)
    - Card internal padding: 16pt, gaps: 12pt
    - Paragraph spacing (if reading app): 20pt
    
    ### Animations
    
    - Screen transitions: 0.3s ease-out fade
    - Interactive elements: subtle scale (0.98) on press
    - Loading states: gentle pulse animation
    - {{ANIMATION_PHILOSOPHY}}
      *e.g., "No jarring animations—everything should feel mindful" / "Organic wave visualizations"*
    
    ### App Icon Concept
    
    {{APP_ICON_DESCRIPTION}}
    
    ---
    
    ## 4. App Structure
    
    ```
    {{APP_NAME}}/
    ├── {{APP_NAME}}App.swift
    ├── Models/
    │   ├── {{MODEL_FILES}}
    │   └── ...
    ├── Views/
    │   ├── {{VIEW_FILES_AND_SUBFOLDERS}}
    │   └── ...
    ├── Services/
    │   ├── StoreKitManager.swift
    │   ├── {{SERVICE_FILES}}
    │   └── ...
    ├── Data/
    │   └── {{DATA_FILES}}
    └── Assets.xcassets/
    ```
    
    *List all model, view, and service files relevant to your app.
    Include subfolder organization (e.g., Views/Home/, Views/Settings/, Views/Paywall/).*
    
    ---
    
    ## 5. Data Models
    
    {{DATA_MODELS}}
    
    *Define all Swift structs, enums, and classes with their properties.
    Include Identifiable, Codable conformances.
    Include computed properties and helper methods.*
    
    ---
    
    ## 6. Screen Specifications
    
    {{SCREEN_SPECIFICATIONS}}
    
    *For each screen, specify:*
    - *Layout description (scroll view, split view, tab view, etc.)*
    - *Component breakdown with visual hierarchy*
    - *User interaction flows*
    - *State variations (free vs. premium, empty vs. populated, online vs. offline)*
    - *ASCII mockups where helpful*
    
    ---
    
    ## 7. In-App Purchase Configuration
    
    ### Product IDs
    
    ```swift
    // Bundle identifier base: {{BUNDLE_ID}}
    
    {{IAP_PRODUCT_IDS}}
    ```
    
    *Examples:*
    - *Non-consumable: "com.domain.app.featurename" at $X.XX*
    - *Auto-renewable subscription: "com.domain.app.premium.monthly" at $X.XX/month*
    - *Bundle: "com.domain.app.bundle.all" at $X.XX*
    
    ### StoreKit 2 Implementation
    
    Use the modern StoreKit 2 Swift API:
    - `Product.products(for:)` to load products
    - `product.purchase()` for transactions
    - `Transaction.currentEntitlements` for checking active purchases
    - Listen for `Transaction.updates` for real-time transaction handling
    - `AppStore.sync()` for restore purchases
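    *(Editor’s illustration, not part of the original prompt: a minimal sketch of the manager these calls imply. The product ID is a placeholder.)*
    
    ```swift
    import StoreKit
    import SwiftUI
    
    // Illustrative sketch only; the genesis prompt leaves the real
    // implementation to the code model.
    @MainActor
    final class StoreKitManager: ObservableObject {
        @Published var isPremium = false
        private let productIDs: Set<String> = ["com.domain.app.premium"]
        private var updatesTask: Task<Void, Never>?
    
        init() {
            // Listen for transaction updates (renewals, refunds, Ask to Buy).
            updatesTask = Task {
                for await result in Transaction.updates {
                    if case .verified(let transaction) = result {
                        await transaction.finish()
                        await self.checkEntitlement()
                    }
                }
            }
        }
    
        // Check active purchases on launch.
        func checkEntitlement() async {
            for await result in Transaction.currentEntitlements {
                if case .verified(let transaction) = result,
                   productIDs.contains(transaction.productID) {
                    isPremium = true
                    return
                }
            }
            isPremium = false
        }
    
        func purchase() async throws {
            guard let product = try await Product.products(for: productIDs).first else { return }
            let result = try await product.purchase()
            if case .success(.verified(let transaction)) = result {
                await transaction.finish()
                isPremium = true
            }
        }
    
        // Restore purchases.
        func restore() async throws {
            try await AppStore.sync()
            await checkEntitlement()
        }
    }
    ```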
    
    ### Purchase Flow
    
    1. User taps locked feature / content
    2. {{PARENTAL_GATE_STEP}}
       *("Show parental gate (math challenge) — required for Kids apps" or "N/A")*
    3. Present purchase sheet with feature preview, price, and "Buy" button
    4. {{BUNDLE_UPSELL}}
       *("Also show 'Get All Packs — $X.XX' option" or "N/A")*
    5. Process purchase via StoreKit 2
    6. On success, unlock content and persist state
    7. Include "Restore Purchases" button in Settings and Paywall
    
    ### Premium State Management
    
    - Store purchase status with receipt validation
    - Check entitlements on app launch
    - Update UI reactively via `@Published` / `@Observable`
    
    ### Paywall Design
    
    {{PAYWALL_DESCRIPTION}}
    
    *Describe the paywall screen: what triggers it, layout, feature comparison,
    pricing display, CTA button styling, restore purchases link, terms & privacy links.*
    
    ---
    
    ## 8. Ad Monetization (if applicable)
    
    {{AD_CONFIGURATION}}
    
    *If using ads, specify:*
    - *Ad SDK (e.g., Google AdMob)*
    - *Ad types and placements (banner, interstitial, app open, rewarded)*
    - *Frequency caps*
    - *Ad unit IDs (test + production placeholders)*
    - *Premium vs free ad visibility matrix*
    - *Revenue model estimates*
    - *SDK setup instructions (CocoaPods/SPM, Info.plist keys, initialization)*
    
    *If no ads: "No ads. Revenue is IAP-only."*
    *If Kids App: "No behavioral advertising permitted (Kids App compliance)."*
    
    ---
    
    ## 9. App-Specific Core Features
    
    {{CORE_FEATURES}}
    
    *This is where the unique functionality of your app goes. Examples:*
    - *Audio engine with signal generation layers*
    - *AI API integration with system prompts*
    - *Text-to-speech with multi-language support*
    - *Content browsing with reading progress*
    - *Real-time visualizations*
    - *Offline caching strategies*
    - *Timer/scheduler functionality*
    - *Widget and Siri Shortcuts integration*
    - *Location-based features*
    
    ---
    
    ## 10. Content / Data Specification
    
    {{CONTENT_SPECIFICATION}}
    
    *Define all bundled content:*
    - *Stories, vocabulary items, audio presets, etc.*
    - *Content categories and distribution*
    - *Content format (fields per item)*
    - *Source attribution and licensing*
    - *Placeholder vs. final content strategy*
    
    ---
    
    ## 11. Settings Screen
    
    **Sections:**
    
    {{SETTINGS_SECTIONS}}
    
    *Common sections:*
    - *App-specific preferences (voice, speed, theme, etc.)*
    - *Notification preferences (if applicable)*
    - *Account: Restore Purchases, Subscription status*
    - *About: App version, Acknowledgments, Privacy Policy link, Rate App link*
    
    ### Cross-Promotion Banner (Settings footer)
    
    ```
    ┌─────────────────────────────────────────────────┐
    │ [App Icon]  {{PROMO_TEXT}}                      │
    │                                           →     │
    └─────────────────────────────────────────────────┘
    ```
    
    - Full width, tappable
    - Opens App Store via `SKStoreProductViewController` or `UIApplication.shared.open(url)`
    - {{PROMO_PARENTAL_GATE}}
      *("Behind parental gate (Kids apps)" or "Direct link")*
    - Cross-promoted app URL: {{PROMO_APP_URL}}
    - Subtle styling, does not dominate the settings screen
    
    ---
    
    ## 12. Persistence
    
    ### UserDefaults Keys
    
    ```swift
    enum StorageKeys {
        static let isPremium = "isPremium"
        {{ADDITIONAL_STORAGE_KEYS}}
    }
    ```
    
    ### File Storage (if needed)
    
    {{FILE_STORAGE_STRATEGY}}
    
    *Examples: Documents directory for progress JSON files,
    Cached API responses for offline access,
    Keychain for sensitive data (reward tracking, etc.)*
    
    ---
    
    ## 13. Offline Behavior
    
    ### Works Offline
    {{OFFLINE_AVAILABLE}}
    
    *Examples: All bundled content, TTS, reading progress, cached API responses.*
    
    ### Requires Internet
    {{ONLINE_REQUIRED}}
    
    *Examples: AI API calls, purchases/restore, weather data, initial content fetch.*
    
    ### Offline Indicators
    - Subtle banner when offline: "You're offline. Some features limited."
    - Disable network-dependent features gracefully with user-friendly messages
    
    ---
    
    ## 14. Kids App Compliance (if applicable)
    
    {{KIDS_COMPLIANCE}}
    
    *If this is a Kids App, include:*
    - [ ] *No third-party analytics*
    - [ ] *No behavioral advertising*
    - [ ] *No external links without parental gate*
    - [ ] *Parental gate before IAP*
    - [ ] *Privacy policy URL ready*
    - [ ] *Age rating set (e.g., "Made for Kids, Ages 5 and Under")*
    - [ ] *"Made for Kids" flag enabled in App Store Connect*
    
    *If not a Kids App: "Not applicable — standard App Store guidelines apply."*
    
    ---
    
    ## 15. Build Configuration & Compliance
    
    ### Encryption Export Compliance
    
    Add to `Info.plist`:
    
    ```xml
    <key>ITSAppUsesNonExemptEncryption</key>
    <false/>
    ```
    
    This prevents the manual encryption compliance questionnaire from blocking
    **every single TestFlight build** in App Store Connect. Set to `false` if the app:
    - Does NOT use custom encryption
    - Only uses standard HTTPS (URLSession) for network calls
    - Only uses Apple-provided encryption (StoreKit, etc.)
    
    If your app uses custom encryption beyond standard HTTPS, set to `true`
    and prepare export compliance documentation.
    
    ### App Transport Security
    
    Standard ATS is fine for most apps. If you need non-HTTPS endpoints (rare):
    
    ```xml
    <key>NSAppTransportSecurity</key>
    <dict>
        <key>NSExceptionDomains</key>
        <dict>
            <!-- per-domain exception entries here -->
        </dict>
    </dict>
    ```
    
    ### Background Modes (if applicable)
    
    ```xml
    <key>UIBackgroundModes</key>
    <array>
        <string>{{BACKGROUND_MODES}}</string>
    </array>
    ```
    
    ### Orientation Lock (if applicable)
    
    ```xml
    <key>UISupportedInterfaceOrientations</key>
    <array>
        <string>{{SUPPORTED_ORIENTATIONS}}</string>
    </array>
    ```
    
    ### Privacy Usage Descriptions
    
    Add all required `NS...UsageDescription` keys to `Info.plist`:
    
    ```xml
    {{PRIVACY_USAGE_DESCRIPTIONS}}
    ```
    
    *Examples: NSSpeechRecognitionUsageDescription,
    NSLocationWhenInUseUsageDescription, NSMicrophoneUsageDescription.*
    
    ---
    
    ## 16. App Store Metadata
    
    ### App Identity
    
    | Field | Value |
    |-------|-------|
    | **App Name** | {{APP_NAME}} |
    | **Bundle ID** | {{BUNDLE_ID}} |
    | **Subtitle** | {{APP_SUBTITLE}} (max 30 characters) |
    | **Primary Category** | {{PRIMARY_CATEGORY}} |
    | **Secondary Category** | {{SECONDARY_CATEGORY}} |
    | **Age Rating** | {{AGE_RATING}} |
    
    ### Description
    
    ```
    {{APP_STORE_DESCRIPTION}}
    ```
    
    *Write a compelling App Store description:*
    - *Lead with the value proposition (first 3 lines visible before "more")*
    - *Feature highlights*
    - *Free vs. premium comparison*
    - *Honest disclaimers if applicable*
    - *4000 character max*
    
    ### Promotional Text
    
    ```
    {{PROMOTIONAL_TEXT}}
    ```
    
    *170 characters max. Can be updated without a new app version.*
    
    ### Keywords
    
    ```
    {{KEYWORDS}}
    ```
    
    *100 characters max, comma-separated. No spaces after commas.
    Focus on discoverability. Avoid repeating words from app name.*
    
    ### What's New (for updates)
    
    ```
    {{WHATS_NEW}}
    ```
    
    ### App Review Notes
    
    ```
    {{APP_REVIEW_NOTES}}
    ```
    
    *Include anything the review team needs to know:*
    - *How to test IAP (sandbox account if needed)*
    - *Explanation of non-obvious features*
    - *Disclaimers (e.g., health/science claims)*
    - *Background audio justification*
    - *Demo credentials if login required*
    
    ### Screenshots Specification
    
    | Device | Size | Orientation | Count |
    |--------|------|-------------|-------|
    | iPhone 6.9" | 1320 × 2868 | {{ORIENTATION}} | 6-10 |
    | iPhone 6.7" | 1290 × 2796 | {{ORIENTATION}} | 6-10 |
    | iPad 13" | 2064 × 2752 | {{ORIENTATION}} | 6-10 |
    
    *Plan screenshot content:*
    1. *Hero shot (main feature)*
    2. *Key feature #1*
    3. *Key feature #2*
    4. *Premium/paywall value prop*
    5. *Settings/customization*
    *(Continue as needed, up to 10 per device)*
    
    ### Privacy Nutrition Label
    
    ```
    {{PRIVACY_NUTRITION_LABEL}}
    ```
    
    *Options:*
    - *"Data Not Collected: We do not collect any data from this app."*
    - *Or specify: Data Used to Track You / Data Linked to You / Data Not Linked to You*
    
    ### Privacy Policy URL
    
    {{PRIVACY_POLICY_URL}}
    
    ### Support URL
    
    {{SUPPORT_URL}}
    
    ### Marketing URL (optional)
    
    {{MARKETING_URL}}
    
    ---
    
    ## 17. Data & Privacy Compliance
    
    - {{DATA_COLLECTION_POLICY}}
      *e.g., "No personal data collected" / "Location used on-device only"*
    - {{ANALYTICS_POLICY}}
      *e.g., "No analytics SDK" / "Firebase Analytics with anonymized data"*
    - App Tracking Transparency: {{ATT_REQUIRED}}
      *"NOT required (no tracking)" / "Required — implement ATT prompt"*
    - GDPR/CCPA: {{GDPR_NOTES}}
    
    ---
    
    ## 18. Implementation Priority
    
    ### Phase 1: Core Experience
    {{PHASE_1_TASKS}}
    
    ### Phase 2: Polish & Secondary Features
    {{PHASE_2_TASKS}}
    
    ### Phase 3: Monetization
    {{PHASE_3_TASKS}}
    
    ### Phase 4: Final Polish & Submission
    - Dark mode support (if not already implemented)
    - iPad layout optimization
    - Accessibility (VoiceOver, Dynamic Type)
    - Error handling and edge cases
    - App Store assets (screenshots, preview video)
    - TestFlight beta testing
    
    ---
    
    ## 19. Build & Release Checklist
    
    ### Pre-Submission
    - [ ] All core features functional and tested
    - [ ] StoreKit 2 purchases work in sandbox
    - [ ] Restore purchases works
    - [ ] `ITSAppUsesNonExemptEncryption` set to `false` in Info.plist
    - [ ] Privacy nutrition labels accurate in App Store Connect
    - [ ] Privacy policy URL is live and accessible
    - [ ] App Review notes written
    - [ ] All placeholder values replaced (API keys, product IDs, URLs)
    - [ ] No test/debug code in release build
    - [ ] Performance profiled with Instruments
    - {{ADDITIONAL_CHECKLIST_ITEMS}}
    
    ### App Store Connect Setup
    - [ ] App record created with correct Bundle ID
    - [ ] In-App Purchase products created and approved
    - [ ] Screenshots uploaded for all required device sizes
    - [ ] Description, keywords, and promotional text finalized
    - [ ] Age rating questionnaire completed
    - [ ] Pricing and availability set
    - [ ] App Review information filled in (contact, notes, demo account)
    - [ ] Build uploaded and selected
    - [ ] Submit for review
    
    ---
    
    ## 20. App Entry Point
    
    ```swift
    // Note: the struct name must be valid Swift — PascalCase, no spaces or hyphens.
    // e.g., "Zen Tales" becomes ZenTalesApp, "MosquiGo" becomes MosquiGoApp.
    @main
    struct {{APP_NAME}}App: App {
        @StateObject private var storeManager = StoreKitManager()
        {{ADDITIONAL_STATE_OBJECTS}}
    
        var body: some Scene {
            WindowGroup {
                {{ROOT_VIEW}}()
                    .environmentObject(storeManager)
                    {{ADDITIONAL_ENVIRONMENT_OBJECTS}}
                    .onAppear {
                        Task {
                            await storeManager.checkEntitlement()
                        }
                    }
            }
        }
    }
    ```
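
    The `StoreKitManager` referenced above is a placeholder; a minimal sketch of what it might look like with StoreKit 2, assuming a single non-consumable unlock product (the class name, property names, and product ID here are illustrative, not part of the template):

    ```swift
    import StoreKit

    @MainActor
    final class StoreKitManager: ObservableObject {
        // Published so SwiftUI views can react when the entitlement flips.
        @Published private(set) var isUnlocked = false

        // Hypothetical product ID — replace with the real one from App Store Connect.
        private let unlockProductID = "com.domain.app.unlock"

        /// Iterates StoreKit 2's current entitlements and unlocks the app
        /// if a verified transaction for our product is among them.
        func checkEntitlement() async {
            for await result in Transaction.currentEntitlements {
                if case .verified(let transaction) = result,
                   transaction.productID == unlockProductID {
                    isUnlocked = true
                }
            }
        }
    }
    ```

    `checkEntitlement()` is called from the `.onAppear` task in the entry point above; because `currentEntitlements` yields only verified, unrevoked transactions, no receipt parsing is needed.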
    
    ---
    
    ## 21. Deliverable
    
    A complete, buildable Xcode project with:
    - All core functionality implemented
    - Full UI matching design spec
    - StoreKit 2 IAP setup (with placeholder product IDs)
    - {{ADDITIONAL_DELIVERABLES}}
    - Light and dark mode support
    - Placeholder content where final content is pending
    - All `{{PLACEHOLDER}}` values documented for easy replacement before release
    
    ---
    
    ## 22. Placeholder Reference
    
    Before submission, search the project for `{{` and replace all placeholders:
    
    | Placeholder | Description | Example |
    |-------------|-------------|---------|
    | `YOUR_API_KEY` | External API key | Obfuscated in production |
    | `com.domain.app.*` | Product IDs | Match App Store Connect |
    | `PROMO_APP_ID` | Cross-promoted app's App Store ID | `id6504167889` |
    | `PRIVACY_POLICY_URL` | Live privacy policy page | `https://yourdomain.com/privacy` |
    | {{ADDITIONAL_PLACEHOLDERS}} | | |
    
    ---
    
    ## 23. Notes
    
    {{ADDITIONAL_NOTES}}
    
    *Any final notes, known limitations, future roadmap ideas, scientific references,
    third-party attribution, or other context the builder needs.*
    

    Later edit: there’s a repo for the genesis mega prompt.

    dragos@dragosroua.com (Dragos Roua)

  • 13-hour AWS outage reportedly caused by Amazon’s own AI tools


    A recent Amazon Web Services (AWS) outage that lasted 13 hours was caused by one of the company’s own AI tools, according to reporting by the Financial Times. The incident happened in December after engineers deployed the Kiro AI coding tool to make certain changes, say four people familiar with the matter.

    Kiro is an agentic tool, meaning it can take autonomous actions on behalf of users. In this case, the bot reportedly determined that it needed to “delete and recreate the environment.” This is what allegedly led to the lengthy outage that primarily impacted China.

    Amazon says it was merely a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action.” The company blamed the outage on “user error, not AI error.” It said that by default the Kiro tool “requests authorization before taking any action” but that the staffer involved in the December incident had “broader permissions than expected — a user access control issue, not an AI autonomy issue.”

    Multiple Amazon employees spoke to the Financial Times and noted that this was “at least” the second occasion in recent months in which the company’s AI tools were at the center of a service disruption. “The outages were small but entirely foreseeable,” said one senior AWS employee.

    The company launched Kiro in July and has since pushed employees into using the tool. Leadership set an 80 percent weekly use goal and has been closely tracking adoption rates. Amazon also sells access to the agentic tool for a monthly subscription fee.

    These recent outages follow a more serious event from October, in which a 15-hour AWS outage disrupted services like Alexa, Snapchat, Fortnite and Venmo, among others. The company blamed a bug in its automation software for that one.

    However, Amazon disagrees with the characterization of certain products and services being unavailable as an outage. In response to the Financial Times report, the company shared the following statement, which it also published on its news blog:

    We want to address the inaccuracies in the Financial Times story from yesterday. The brief service interruption they reported on was the result of user error—specifically misconfigured access controls—not AI as the story claims.

    The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer—which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run. The issue stemmed from a misconfigured role—the same issue that could occur with any developer tool (AI powered or not) or manual action. We did not receive any customer inquiries regarding the interruption. We implemented numerous safeguards to prevent this from happening again—not because the event had a big impact (it didn’t), but because we insist on learning from our operational experience to improve our security and resilience. Additional safeguards include mandatory peer review for production access. While operational incidents involving misconfigured access controls can occur with any developer tool—AI-powered or not—we think it is important to learn from these experiences. The Financial Times’ claim that a second event impacted AWS is entirely false.

    For more than two decades, Amazon has achieved high operational excellence with our Correction of Error (COE) process. We review these together so that we can learn from any incident, irrespective of customer impact, to address issues before their potential impact grows larger.

    Update, February 21 2026, 11:58AM ET: This story has been updated to include Amazon’s full statement in response to the Financial Times report.

    Lawrence Bonk

  • The (Surprising) Long Term Benefits of Continuous Learning – Dragos Roua


    Today, a social network (I won’t name names) reminded me that 7 years ago I completed a Stanford University course on cryptography. It wasn’t an easy course; I still remember the long nights trying to understand some pretty hard calculus problems, but, all in all, I thoroughly enjoyed it.

    This wasn’t the only course I took from Coursera. For the next couple of years I did a deep dive into Artificial Intelligence, or Machine Learning, as it was called back then. I started with the wonderful Andrew Ng foundations module and went all the way up to the fascinating Generative Adversarial Networks one. I won’t list all of them here; if you’re curious, you can have a peek at my LinkedIn certifications page.

    Back then, artificial intelligence wasn’t spectacular. There was no ChatGPT yet; you would just try to solve difficult problems in Jupyter notebooks and get yourself accustomed to MATLAB, or Python and matrix multiplications. You would learn complicated neural network architectures and try to minimize a cost function with gradient descent. Then you would apply Adam optimizers and do regression tests.

    If all of this sounds like Chinese to you, it’s on purpose. Just stay with me a little bit more.

    A couple of years later, a benign chat app was released by a non-profit startup called OpenAI, under the name of ChatGPT. The world literally changed overnight. ChatGPT exploded, and now it has a staggering 800 million users every month. See, a relatively small advancement in machine learning, called transformers, made all the Chinese above instantly usable. Practical. Easy to understand.

    Where am I going with this?

    Well, when I started to learn artificial intelligence, 7 years ago, the field was still obscure. There was little to no practical mass adoption. People interested in this had to make significant cognitive efforts to understand what’s going on. But when the thing reached critical mass, those who started early were incredibly well positioned.

    Why? Because they understood the foundations. They understood how this “magical” chat starts from optimizing a cost function to match features with outcomes. They knew how it was all optimized. And they could start performing in this field, at a very high level, instantly.

    I openly admit to being a power AI user. I enjoy vibe coding iOS apps in coffee shops and I made my own always-on agent, AIGernon. I applied my cognitive framework, Assess Decide Do, on top of Claude (and a handful of other LLMs).

    All this while still living location independent, and being blessed with a one year old child.

    If you’re still here, I’ll infer you want to know more about how this worked for me, so here’s a short excerpt of my experience with continuous learning.

    How To Get The Best of Continuous Learning

    This doesn’t apply only to artificial intelligence. It can be applied to language learning, creativity, or anything else that, stacked on top of what you previously learned, will eventually make you a polymath – and being a polymath is surprisingly beneficial.

    Start early, start when it’s hard

    If you want to be well positioned, start early. Start when the field is not yet mainstream. Start when it’s difficult. Learn the basics, and try to build on top of them, even though the results may not be spectacular at first. I cannot stress this enough: the thing you’re learning will never unfold in the way you expected. It will unfold in a much better way.

    For example, when I started to learn Korean, I expected to get a language certification. Instead, I won a hackathon in South Korea.

    Build lateral skills

    By “lateral skills” I mean skills that can complement some of your already established expertise. Deep specialization is commoditized these days – AI is already an always-on source of deep knowledge. What’s missing, though, and what AI cannot easily replace, is the peculiar mix between unusual skill combinations. Pair this with real life experience, and you’ll be unstoppable.

    For example, I have been a coder for 35 years, but artificial intelligence is so much more than coding. So, knowing how to code definitely helps me, and, compared with your weekend vibe coder, it puts me in a much better position.

    Make it a gravitational habit

    Don’t try to make it a separate event. Don’t put it on your New Year’s resolutions. Don’t make vision boards with it. Instead, make continuous learning as boring as brushing your teeth every day. I call these kinds of habits “gravitational habits”, because the more you do them, the more you will do them, regardless of how you feel.

    Gravitation still pulls you no matter how you feel. Happy, motivated, bored, tired, a gravitational habit will always pull you towards it. If you want to know how I apply this to finances, I wrote a tiny guide called Gravitational Habits for Financial Resilience, give it a try.

    Experiment, experiment, experiment

    This is not a fixed-term task. The name itself says it very clearly: continuous learning. That means, beyond making it a part of your lifestyle, that some of the stuff you learn will eventually be replaced by something new – and there’s nothing you can do to stop this. So, what worked yesterday may or may not work today. This reality forces an experimental mind, one that thinks in scenarios, not in frozen paths.

    Keep experimenting and try to detach yourself from the result. The goal of an experiment is to learn something, not to reach a specific goal. If you do reach a goal, though, be happy, you earned it.

    Where To Go From Here?

    Well, literally wherever you want. Continuous learning will open new paths, many of them unthinkable with your current level of understanding.

    Trust the process.

    dragos@dragosroua.com (Dragos Roua)

  • Showing Up Is Not Enough Anymore: Surviving the AI Era


    There used to be a time when showing up for the task at hand was somehow the guarantor of success. I lived most of my life in that time.

    Well, not anymore.

    For a few good decades, the compounding effect of just being there every single day for whatever you set out to do was enough to outweigh the opposing forces. The world was functioning in such a way that some sort of predictability was interwoven in its deepest fabric.

    I was born 50+ years ago. The generation of WWII survivors was still alive. Technology was in its infancy. Politics was still a matter of diplomacy, not a matter of sheer force. People were functioning on an even field — some more than others, that’s true, but the field was overwhelmingly even. In that context, persistence was the safest way to “make it.”

    The world we live in today couldn’t be more different. AI is now running the show, and the mind-boggling thing is that this “AI” is literally a handful of companies. And their investors. Period. These are the new, hidden in plain sight, overlords.

    If I browse back through human history to find similar situations — ones in which the world was ruled by an incredibly small elite — the first matching moment is the Mongolian Empire. I can already hear the pushback: “Look at that guy, he’s delusional.” You have the right to believe that. I bet many of the people living during the Mongolian Empire had the same thoughts. Most of them ended up dead. The few who survived were smart enough to accept Genghis Khan’s conditions, surrender, and live under his rule.

    Asymmetrical Advantages

    During Genghis Khan’s time, the Mongolians had an asymmetrical advantage: speed. Their horseback warfare was no match for any army back then. That allowed them to control the military game. And the world.

    Today, AI labs have an asymmetrical advantage too: they can build whatever they want, faster than anyone else. Even more, they know what others want to build. They have a real-time window into the collective mind. They literally see where the world is going and have the resources to get there before anyone else.

    So they just rule the world, without you even noticing.

    And now, try to put yourself in their shoes: if you had the ability to rule the world, would you brag about it? Or just pretend you’re doing it “for the best of humanity” while quietly following your own agenda? Would you create more friction, or — using your very own influencing abilities, now part of the society itself — steer the collective opinion towards something neutral, or even slightly appreciative towards your brand?

    The Chronicle of an Announced Acquihire

    If you think the Mongolian Empire comparison was far-fetched, hold your breath — I’m coming in even stronger.

    Have you ever heard of OpenClaw (or ClawdBot)? It’s an autonomous agent that went viral literally overnight, less than two weeks ago. Its main differentiator was that it could connect to your messenger — WhatsApp, Telegram — and interact with you from there. Basically an always-on assistant. The immediate impact, especially for non-tech users, was huge. It transformed a bunch of code into something that seemed “real,” because it could talk to you. So the hype was instant, unstoppable, and still rippling as we speak.

    A couple of days ago, the main developer of this project was hired by OpenAI. The OpenClaw work was parked in a foundation, but the IP locked inside that developer’s head is now with an AI overlord.

    And here’s where it gets interesting. Why did this specific always-on agent go viral overnight? Why not others — like nanobot, from which I forked aigernon, by the way? How does virality happen on the internet? Is it organic, or are there other actors working silently in the background, unseen and unaccountable?

    Stay with me for 30 seconds while I sketch an alternative version of events. What if things actually happened this way:

    1. OpenAI wants to push a specific type of product involving audio conversations with customers.
    2. Using their intelligence capabilities, OpenAI surfaces more and more information about an Open Source project called ClawdBot — one primarily wired to their competitor’s model, Claude.
    3. Soon, ClawdBot goes viral, acquiring something OpenAI cannot buy directly from their commercial position: grassroots legitimacy and genuine community hype.
    4. OpenAI hires the main developer, signaling they will deliver “what the masses want, but now more secure, better polished.” The competitor is left behind — Anthropic even sent cease-and-desist orders demanding a name change before the acquihire, which suggests they suspected something.
    5. End result: OpenAI implements its own agenda, with wide community support, and lands a clean hit on its main competitor.

    At this point, this is a conspiracy theory scenario. It may look plausible from a respectable distance, but there’s no proof and it’s essentially impossible to obtain any. All we can honestly say is “maybe, but probably not.”

    Amplifying Yourself

    I brought up this story not for drama – tech drama happens every day. But this specific one is a live demonstration of how the game is played now. Organic effort, community trust, years of coding — all of it absorbed in a single strategic move by someone with more resources, more intelligence, and more reach. Even more: you, the very creator of the product, never had a word in the story, you were just a pawn.

    That’s the new operating system of the world.

    The question isn’t whether this is fair. It’s: what do you do knowing this is how things work?

    THIS is what you’re competing against. THIS is the scale of the world’s flexibility, and THESE are the forces at work.

    Do you think you can survive this by just “showing up”?

    The game has changed, and just to stay afloat you need a 5x on top of your current value. This is not an abstract 5x. If you’re a developer, you need at least five more versions of yourself, with different capabilities and angles:

    • a marketing you
    • a research you
    • a customer support you
    • an extra team of coders
    • an always-on CEO of you

    This is just to stay afloat. To keep being there and seen. This doesn’t guarantee any form of success. It just keeps you alive, somehow. It is, in essence, your submission to the new conquerors — allowing you to keep living for a while.

    If you want to do more than survive this new AI Genghis Khan wave, you need to 100x yourself.

    The 100x isn’t about working harder. It’s about building leverage that compounds without you. Your ideas need to travel further than your hands can carry them. Your name needs to be in rooms you’re not in – and be recognizable in a second. Your framework, your method, your angle — needs to be so distinctly yours that even when someone larger absorbs the wave you created, they can’t absorb you.

    The Mongolians conquered everything. But they couldn’t conquer the cultures that were too deeply rooted to be replaced. They ruled over them, yes. But those cultures survived. Some of them outlasted the empire entirely.

    Your goal is not to beat the AI overlords. Your goal is to be un-erasable enough that they have to work around you.

    dragos@dragosroua.com (Dragos Roua)

  • AIGernon: An Always-On AI Assistant Powered by Assess-Decide-Do


    In hindsight, it was inevitable. But let’s take it slowly.

    A few months ago, I published a mega-prompt teaching Claude the Assess-Decide-Do framework. It was an experiment, but, to my surprise, it landed better than I expected. The Reddit post went viral. The repo now has 67 stars and 15 forks. It’s not mainstream, but it’s real. It shows a few dozen people are interested in mapping human cognitive frameworks on top of LLMs.

    So, I continued to iterate. I added a Claude Cowork plugin alongside, and extracted the skills in a separate repo. The experiment evolved into a real product, with a modular structure that everybody can adjust to their needs.

    But there was always something more lurking in the back of my mind. What if I turned this thin layer into a proper always-on agent? Think OpenClaw, but ADD-enhanced.

    So, after a couple of weeks of back and forth, I’m in a place where I can make public the first release.

    Introducing AIGernon

    AIGernon is a fork of nanobot, a very thin always-on agent, with just 4000 lines of code (OpenClaw has 450,000+), but with proper Assess-Decide-Do skills.

    Name Choice

    “Flowers for Algernon” is a famous sci-fi novel about a mouse named Algernon who gains extraordinary intelligence through an experiment. A man named Charlie, intellectually disabled and working as a janitor in the lab, undergoes the same procedure. He becomes brilliant — but at the peak of his new mind, he watches Algernon deteriorate and die. Charlie knows he’s next. Eventually, he regresses to who he was before, losing everything he gained. But one thing remains: he keeps placing flowers on Algernon’s small, strange grave, without even remembering why.

    Our intelligence is fleeting, so AIGernon will work the same. I know the insane pace of AI evolution will eventually make parts of it – or the whole project – obsolete, yet I believe there’s a lot to gain right now, in the moment.

    Also, and that’s probably the most important angle: every Assess-Decide-Do cycle, no matter how small or big, is a reminder of our impermanence. We all function inside these cycles, and once we reach completion in Do, we achieve a liveline, not a deadline, restarting the cycle.

    Use Cases

    The first and most important one is to support my own work. I created AIGernon for my own schedule: a 50+ veteran coder, with a one-year-old, living location independent. There’s a lot to manage in this position, and, while I am doing OK right now, I can definitely use some support. Especially when my work routine is interrupted for 2-3 days at a time, or when I need to adjust to a new place, or when I just need a break. The ecosystem of products that I rely on — this blog, my apps, my coaching practice — must survive these interruptions, and, most importantly, I need to maintain control and a clear head around all these routines.

    So, I expect AIGernon to function as an ADD-enhanced personal assistant for a busy and scattered individual. That’s number one.

    Number two is even more interesting, and I’m excited about this. I want to use it as a coaching support assistant. How will this work? Well, I can position it as an always-on layer between me and my clients. It will probably work on premises or in a sandboxed environment, and it will function as a cognitive buffer between the client and me.

    See, sometimes my clients have situations that need immediate action, but we’re in between sessions. Or they just remembered some key question, and, again, we’re not available to each other. AIGernon can capture this, store it in a special memory space and bring it together during practice. My client has a place to securely store, immediately, any thought, question or idea and I have a much better insight on my client’s current context.

    We’re moving from async interaction to something closer to continuous cognitive sync — without either of us needing to be online at the same time.

    What Can It Do?

    Here’s the highlight reel:

    Cognitive Companion (ADD Framework)

    • Detects if you’re in Assess, Decide, or Do mode based on your language
    • Adapts response style to match your current thinking realm
    • Watches for stuck patterns (analysis paralysis, decision avoidance, perpetual doing)
    • Logs realm activity to daily memory notes
    • Tracks your cognitive patterns over time

    Memory System

    • Daily notes saved to `memory/YYYY-MM-DD.md`
    • Long-term memory in `memory/MEMORY.md`
    • Recent memory recall (last 7 days by default)
    • Realm flow summaries appended to daily notes

    Skills System

    • Loadable markdown-based skills
    • Six ADD skills included (core, assess, decide, do, imbalance, realm-detection)
    • Built-in skills: GitHub CLI, weather, summarization, tmux control, cron, skill creator
    • Custom skills can be added to workspace

    Tools

    • Read, write, and edit files
    • Execute shell commands (with safety blocks)
    • Search the web (Brave Search API)
    • Fetch and parse web pages
    • Spawn background subagents for complex tasks

    Chat, LLMs, and Infrastructure

    AIGernon supports 6 chat channels (CLI, Telegram, Discord, WhatsApp, Feishu, DingTalk), 11 LLM providers (including Claude, GPT-4, Gemini, DeepSeek, and local models via vLLM), plus Docker support, workspace sandboxing, cron-based scheduling with natural language task creation, and JSON configuration with environment variable overrides. Full details in the repo.

    AIGernon is not just a wrapper on top of an LLM — it has real “flesh and bones” and it drives the underlying LLM instead of just extracting knowledge.

    Next Steps

    Here’s the repo, if you want to play with it: https://github.com/dragosroua/aigernon. I will continue testing and improving it. If you want to stay in touch, subscribe to my newsletter (there’s a form below the post).

    If you already tried it, I would love your feedback.

    dragos@dragosroua.com (Dragos Roua)

  • How I Build iOS Apps from Coffee Shops Using Claude Code


    It’s 9AM and I am sitting in my favorite coffee shop. Somewhere in Mekong Delta, or
    in Lisbon, or in Seoul, doesn’t really matter. What matters is that my espresso was
    already tested and approved as high quality, the internet connection in the coffee shop
    is decent, and I am ready to start my vibe coding session on my iPad – using
    Claude.

    But let’s stop for a second.

    I’ve been writing code for more than 35 years. I went through the whole shebang, from
    two-floppy-disk Slackware, through PHP and Laravel, and then Objective-C, Swift and
    React Native. I coded apps with more than 100,000 monthly users (for me or for my
    clients). So, do you think I can still be called a “vibe coder”? Let’s keep this
    question in mind and revisit it towards the end.

    The Actual Vibe Coding Workflow

    Without further ado, let’s go into what I’m actually doing.

    First and foremost, I look at yesterday’s priorities file. I keep between 4 and 6
    projects alive at the same time, which means I’m juggling through them as I build, in
    real time. Sometimes I can remember yesterday’s session, but most of the days I need
    reminders to know the context, the features I’m building, the blockers and the
    priorities. That’s why, at the end of the day, I’m writing down my priorities for
    tomorrow.

    In a way, I’m starting backwards.

    After that, I select whatever I’m committed to do in the next 3-4 hours. Yes, no
    more than 3-4 hours – and you’ll see why, again, towards the end of this article. In
    Assess-Decide-Do terms, I’m staying in the Decide realm. I’m trying to evaluate what
    can be reasonably done in that time slice and sometimes I leave some projects out. On
    average, in a week, each project gets at least 3-4 days of consistent work.

    Once I have a clear understanding of the features, I start my working sessions.
    Which are unfolding in this order:

    • the actual coding (the technical mumbo-jumbo)
    • the review stage (kind of the second Decide stage)
    • the committing: writing logs and setting priorities for the next session

    Let’s take them one at a time.

    The Technical Mumbo-Jumbo

    If you’re the technical type, this is for you. But even if you’re not, you may get
    some insights (otherwise feel free to skip to the next section).

    I work with Claude Code on my iPad, using the remote repos. On each app, I maintain
    a different branch, usually named version/X.x.x, and then I set up Xcode
    Cloud workflows that will trigger builds on merging to master.

    All coding happens in the version branches, until the app compiles, and the feature
    I’m working on is ready to test.

    Then, still on my iPad, I open my Github app and start a PR, aiming at merging the
    version branch into master. If there are no conflicts, I hit merge, and that triggers
    Xcode Cloud builds. I am on the normal developer plan, so I get around 25 hours per
    month. If you are conscientious about what you’re doing, even with 3-4 apps developed
    at the same time, this is more than enough.

    A build usually takes between 2 and 10 minutes, and then there is a
    little bit of processing time. I use these gaps to enhance the prompts and write logs
    as the features are implemented. Once the builds are up in the App Store and processed
    in TestFlight, I just open, you guessed it, the TestFlight app on my iPad, and begin
    playing with the apps.

    Most of the time, bugs are found, or incomplete implementations are revealed, so I
    get back to Claude Code and start the whole process anew.

    By now, half of my espresso is gone, but I just keep going, until I hit the review
    stage.

    The Review Stage

    Around this time, my espresso is more than 80% gone, just maybe two more sips left.
    That means I can get out of the technical workflow and look at what was actually
    achieved. This usually involves a thorough end to end testing of the features, but this
    time without any pressure to add code. I’m going again through all the projects I’m
    working on, and take time to write down any quirks, improvement ideas and leftovers, and
    then mark as done anything that’s already done. I’m using addTaskManager for this.

    This is also the stage where my mind can start resting. It’s a big step from
    focusing deep down on one project and writing uninterrupted sessions of 1-2 hours, like
    before, to actually juggling between 3-4 apps, all with very different requirements
    and at very different stages. The biggest bottleneck of this vibe coding thing is not
    the actual code implementation. It’s the mental clarity and the strength of focus. At
    this stage, both of them are starting to fade out, which means it’s time to stop.

    The Productivity Throughput

    In very simple numbers, my throughput is now 5x-7x higher. I can code 3-4 iOS
    projects in parallel and cut time from idea to deployment from months to weeks. It’s
    not unusual to do a cold start of a new project at the beginning of the month, and by
    the end of the month it is ready for the App Store.

    On top of the iOS apps layer, I’m also maintaining this blog and a little bit of
    marketing around it (and around the apps, of course). Here, I think I’m around 2x-4x
    more productive. I can maintain the 2-3 articles / week posting speed and most of the
    time my audience on social media is up to date with what I’m doing – including blog
    readers like you.

    So, I’m revisiting the opening question: even though I have a 5x-7x throughput, can
    you really say I’m a vibe coder? I dare to say no, because behind this dramatic
    productivity increase is not only the AI, but mostly my 35-year coding experience.
    Maybe the special workflow too (I’m talking Assess-Decide-Do here),
    but honestly, I think it’s the hard earned ability to know what to pick, how much time
    to dedicate, what to cut out and, generally, how to maintain a consistent architecture
    that’s slim enough to not slide out, but strong enough to produce results. Without
    these, I would probably be at 1 app at a time.

    Closing Out – Commits and Priorities

    It’s around 1-2 PM and the coffee shop audience is slightly changing towards lunch
    eaters. That’s my cue to prepare to go home. The coffee shop will become busier and,
    sometimes, noisier and harder to concentrate in.

    By now, the changes have been committed, the logs have been written and the
    priorities for tomorrow’s “vibe coding” session have been set. My espresso is long
    gone, and I’m ready to head back to my one-year-old son.

    See, using AI to amplify my productivity is a great use case, but for me, the best
    use case is to get more time. I need more time to spend with my family, to be there for
    my one-year-old. At 50+ you don’t get too many second chances.

    [ad_2]

    dragos@dragosroua.com (Dragos Roua)

    Source link

  • From One Repo to Three: How ADD Framework Expanded Across the Claude Ecosystem – Dragos Roua

    [ad_1]

    A few months ago I published a mega prompt that teaches Claude to think with the Assess-Decide-Do framework. I wrote about it on Reddit and the post got 40,000 views in 19 hours, 282 shares, and the GitHub repo collected 67 stars and 14 forks. My first sponsor showed up within a week.

    That was nice. But what happened next was a little bit more interesting.

    Two separate upgrades in Claude’s ecosystem opened doors I didn’t expect. And after a bit of tinkering, what started as a single mega prompt is now a three-repo architecture that works across different Claude environments. Here’s the story.

    Quick Background: What ADD Does to Claude

    If you’re new here: the Assess-Decide-Do framework is a 15-year-old methodology I created for managing how we actually think. Not just churning out tasks, but how we actually function. It maps three cognitive realms: Assess (explore without commitment), Decide (choose and commit), Do (execute and complete).

    When you teach this to Claude, something interesting happens. Instead of generic responses, Claude detects where you are in your process and responds accordingly. Exploring options? It stays expansive. Ready to commit? It helps you choose. Executing? It gets out of the way and supports completion.

    The original integration was a big markdown file (the “mega prompt”) that you loaded into Claude Desktop or Claude Code conversations. It worked, but it was monolithic. One file trying to do everything.

    Upgrade #1: Claude Code Merged Skills and Commands

    Claude Code used to have a split between slash commands (things you invoke explicitly) and skills (things Claude uses on its own based on context). Then Anthropic merged them. Skills became loadable on demand, with proper frontmatter metadata that tells Claude when and how to use each one.

    This was the opening I didn’t expect.

    Instead of one massive mega prompt, I could split ADD into modular skills. Each realm got its own skill file. Imbalance detection became its own skill. Flow status tracking became its own skill. Claude Code picks them up automatically based on what’s happening in the conversation.

    The update also let me build something I’m quite proud of: a status line display. While you work, Claude Code shows a visual indicator of your current ADD state. Something like:

    [ADD Flow: 🔴+ Assess | Deep exploration - 8 data points gathered]
    

    Or when you’re executing:

    [ADD Flow: 🟢- Do | Clean execution - 3 tasks completed]
    

    It’s a small thing, but seeing your cognitive state reflected back to you in real time changes how you work. It makes the invisible visible. The updated Claude Code repo is here: github.com/dragosroua/claude-assess-decide-do-mega-prompt

    Upgrade #2: Claude Cowork Launched Plugins

    Then Anthropic launched Cowork with a plugin system. Cowork is a desktop tool for non-developers, focused on file and task management. It supports skills (same concept as Claude Code) and commands (slash-invoked actions specific to the plugin).

    This meant ADD could work outside the developer terminal. Someone who’s never touched Claude Code could install a plugin and get realm-aware Claude through simple commands like /assess, /decide, /do.

    Building the plugin required adapting the framework. Cowork doesn’t have filesystem access like Claude Code, so there’s no status line file. Instead, the /status command analyzes conversation context to detect your current realm. The /balance command runs a diagnostic, asking a few targeted questions and telling you if you’re over-assessing, over-deciding, or stuck in perpetual doing.

    The Cowork plugin repo: github.com/dragosroua/add-framework-cowork-plugin

    The Problem: Two Repos, Same Knowledge, Different Formats

    At this point I had two implementations. Both contained ADD knowledge, but each had environment-specific features baked in. The Claude Code version referenced status files and subagent contexts. The Cowork version had slash commands and conversation-based detection.

    If I updated the core philosophy (say, refining how imbalance detection works), I’d have to update it in two places. That’s how knowledge drift starts. And with a framework I’ve been refining for 15 years, drift is not acceptable.

    The Solution: A Shared Skills Repo

    The fix was straightforward. Extract all universal ADD knowledge into a standalone repository. No environment-specific features. No slash commands. Just the pure framework: realm definitions, detection patterns, imbalance recognition, response strategies, the “liveline” philosophy, the cascade principle, fractal operation.

    Six skills, each in its own folder:

    • add-core: Unified overview of the entire framework
    • add-assess: Deep Assess realm support
    • add-decide: Deep Decide realm support (including the Livelines vs. Deadlines concept)
    • add-do: Deep Do realm support
    • add-imbalance: Five detailed imbalance patterns with intervention strategies
    • add-realm-detection: Centralized detection patterns for all realms

    The shared skills repo: github.com/dragosroua/add-framework-skills

    Both Claude Code and Cowork repos pull from this shared source using git subtree. Update once, pull everywhere.
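    For the curious, the subtree wiring can be sketched like this. This is a self-contained demo using local stand-in repos created in a temp directory – the paths, branch names and file names are illustrative, not the actual layout of the three GitHub repos:

    ```shell
    set -e
    cd "$(mktemp -d)"

    # Stand-in for the shared source-of-truth repo (think add-framework-skills)
    git init -q -b main shared
    cd shared
    git config user.email you@example.com
    git config user.name you
    mkdir add-core
    echo "# ADD core skill" > add-core/SKILL.md
    git add -A && git commit -qm "universal ADD knowledge"
    cd ..

    # Stand-in for a consumer repo (think the Cowork plugin or the Claude Code repo)
    git init -q -b main consumer
    cd consumer
    git config user.email you@example.com
    git config user.name you
    echo "# consumer" > README.md
    git add -A && git commit -qm "init"

    # Vendor the shared skills under skills/ as a subtree
    git subtree add --prefix=skills ../shared main --squash

    # Later, after the shared repo changes, sync with:
    # git subtree pull --prefix=skills ../shared main --squash

    ls skills/add-core
    ```

    The point of `--prefix` is that the shared content lives as ordinary files inside each consumer repo, so updating really is a single `git subtree pull` per consumer.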

    How the Three Repos Connect

    add-framework-skills (source of truth) contains the universal ADD methodology. No environment assumptions.

    claude-assess-decide-do-mega-prompt (Claude Code) pulls the shared skills and adds Claude Code-specific features: status line display, automatic flow checking, subagent-powered session reflection.

    add-framework-cowork-plugin (Cowork) pulls the shared skills and adds Cowork-specific features: /assess, /decide, /do, /status, /balance, and /add-help commands.

    If you’re a developer using Claude Code, start with the mega prompt repo. If you use Cowork, grab the plugin. If you want to integrate ADD into something else entirely, the shared skills repo is your starting point.

    Honest Caveats

    This is still raw around the edges. Cowork plugins are new, and the plugin ecosystem is evolving. The shared skills format might need adjustments as both Claude Code and Cowork mature. I’m learning the boundaries of what each environment supports as I go.

    What I’m really testing here is something bigger than a productivity framework: can we map human cognitive patterns onto performant AI in a way that augments us rather than making us dependent?

    Most AI interactions today are transactional. You ask, it answers. You prompt, it generates. The human adapts to the machine.

    ADD integration tries to work around this. The AI adapts to the human’s cognitive state. It detects where you are in your thinking and responds accordingly. It notices when you’re stuck and offers gentle guidance. It respects the boundaries between exploration, commitment, and execution.

    This isn’t prompt engineering in the traditional sense. It’s cognitive alignment. A 15-year-old, battle-tested framework meeting the power of performant AI. And with the three-repo architecture, it can now expand to any Claude environment that supports skills.

    The repos are public. The framework is open. If you want AI that works with your mind instead of against it, pick whichever repo fits your setup and give it a try.


    All three repos are MIT licensed and available on GitHub. If you want to see ADD in action as a native app, addTaskManager implements the full framework on iOS and macOS.


  • Assess Decide Do in Claude, but with Skills – Dragos Roua


    When I first integrated my Assess-Decide-Do framework into Claude, I used a 24,000-character megaprompt. It worked remarkably well – and it was my first experiment ever with applying a human cognitive framework to an LLM (potentially the first one ever). A recent update to Claude Code changed something fundamental and added unexpected tools that improved my implementation. Now the same framework runs with 94% less overhead, and with features I couldn’t implement before.

    What is ADD?

    Assess-Decide-Do is a simple idea: at any moment, you’re in one of three mental states:

    • Assess — You’re exploring, gathering information, asking “what if?”
    • Decide — You’re choosing, committing, allocating resources
    • Do — You’re executing, completing, finishing

    These states are sequential. You can’t decide well without assessing first. You can’t execute well without deciding first. And when you skip steps—jumping from idea to execution without thinking it through—things tend to go wrong.

    I created this framework 15 years ago to manage my own scattered thinking. It became an iOS app (addTaskManager), and eventually I wondered: could AI understand this framework too?

    My Original Approach: One Big MegaPrompt

    My first integration was almost blunt—a 24KB megaprompt that explained everything: what each realm means, how to detect which realm someone is in, what language patterns indicate each state, how to recognize when someone is stuck, and how to respond appropriately.

    Every conversation loaded the entire framework context. Claude would read 24,000 characters before even responding to “hello.”

    Did it work? Yes, surprisingly well. Users reported Claude feeling “weirdly empathic”—it understood whether they were exploring or executing, and responded accordingly. The GitHub repo got 60+ stars and 10+ forks, and the Reddit post went viral.

    The problem? Context is precious. Loading 24KB of framework instructions for every exchange meant less room for actual conversation. It was like bringing an encyclopedia to answer what time it is.

    Replacing the Monolithic Megaprompt with Skills

    In January 2026, Claude Code merged slash commands and skills into a unified system. Two features mattered most:

    1. Progressive disclosure — Skills load context dynamically, only when needed
    2. Subagent support — Skills can spawn isolated agents that don’t pollute your main conversation

    This enabled things that the ADD integration couldn’t do with the megaprompt.

    The New Architecture

    Instead of one massive prompt, the framework now has two layers:

    Layer 1: Minimal Core (~1.5KB)

    The CLAUDE.md file contains just the essentials—realm definitions, detection patterns, imbalance recognition. Enough for Claude to operate with ADD awareness without loading everything.

    Layer 2: Skills On Demand

    Deep support lives in focused skill files that load only when relevant:

    .claude/skills/
    ├── add-flow-check.md    # Automatic status updates
    ├── add-status.md        # /add-status command
    ├── add-reflect.md       # Session reflection
    ├── add-assess.md        # Assess realm support
    ├── add-decide.md        # Decide realm support
    ├── add-do.md            # Do realm support
    └── add-imbalance.md     # Stuck pattern detection
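    Skill files carry frontmatter metadata that tells Claude when to load them. As a rough illustration – the field values below are mine, not copied from the repo – the top of a file like add-assess.md might look like:

    ```markdown
    ---
    name: add-assess
    description: Deep Assess realm support. Load when the user is exploring options, gathering information, or asking "what if" questions.
    ---

    # Assess Realm Support

    When this skill is active, stay expansive: surface options, ask
    clarifying questions, and avoid pushing toward a decision.
    ---
    ```

    The `description` is what enables progressive disclosure: Claude reads only the frontmatter up front and pulls in the body when the description matches the conversation.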

    Let’s Look at the Numbers

    Approach              Base Load          Additional
    Megaprompt (before)   ~26KB every time   None
    Skills (after)        ~1.5KB             ~2KB per skill when needed

    That’s 94% less context overhead for typical conversations.

    New Capabilities

    1. User Commands

    Users can now explicitly check their flow:

    /add-status

    Response:

    [ADD Flow: 🟠? Decide | Narrowing phase - 3 options → 1 finalist]
    
    Session Metrics:
    - Exchanges this session: 12
    - Realm transitions: 3
    - Time in current realm: 4 exchanges
    - Flow quality: Balanced

    Before, users had to ask “where am I in my flow?” and hope Claude remembered to track it. Now there’s an explicit command.

    2. Automatic Status Updates

    The add-flow-check skill is model-invocable—Claude calls it automatically at conversation boundaries. The status file updates silently, and the visual statusline at the bottom of Claude Code shows current state.

    Before, this required Claude to “remember” to update status. Now it’s built into the system.

    3. Session Reflection with Isolated Context

    This is the biggest improvement. The /add-reflect command analyzes your entire session for patterns:

    /add-reflect
    
    Session reflection captured.
    
    Quick summary:
    - Flow quality: Balanced
    - Realm focus: Assess-heavy (exploration phase)
    - Key insight: Thorough foundation built for upcoming decisions
    
    Full reflection saved to .add-session-history.md

    The icing on the cake: this runs in a forked context. The reflection analysis—which requires reviewing the entire conversation—happens in an isolated subagent. Your main conversation stays clean. The history file accumulates over time, letting you see patterns across sessions.

    Before skills supported subagents, this kind of reflection would clutter your working context with meta-analysis. Now it’s cleanly separated.

    Practical Examples

    When You’re Stuck in Research Mode

    Imagine you’ve been exploring authentication approaches for 15 exchanges. The status bar shows:

    [ADD Flow: 🔴+ Assess | Circular pattern - revisiting authentication 3x]

    The pattern is surfaced and made visible. Claude’s responses gently acknowledge the depth of exploration without pushing you to decide before you’re ready.

    When You Jump to Execution Too Fast

    You say “How do I implement OAuth?” without any context. Claude recognizes this pattern:

    “Before we dive into implementation, let’s explore what you need. What’s driving the authentication requirement? Understanding the ‘why’ helps ensure the ‘how’ actually fits.”

    The add-imbalance skill provides this guidance—but only loads when the pattern is detected.

    When You Complete Something

    You finish a task. The framework recognizes this as a “liveline”—not an ending, but a new beginning:

    [ADD Flow: 🟢- Do | Liveline created - completion opening new assessment]

    “That’s done. What does this completion open up? What’s emerging in the space it creates?”

    The cycle continues naturally.

    What This Means for Framework Integration

    The ADD evolution demonstrates something broader: AI frameworks benefit from the same architecture principles as software.

    The megaprompt was a monolith. It worked, but it was inefficient.

    The skills approach is modular:

    • Separation of concerns — Each skill handles one thing
    • Progressive loading — Context loads on demand
    • Isolated execution — Subagents don’t pollute main context
    • User control — Commands give explicit access

    This pattern applies to any framework you might integrate with Claude. Instead of one massive prompt, consider:

    1. Core awareness — Minimal instructions that load every time
    2. Modular skills — Detailed support that loads when relevant
    3. User commands — Explicit actions users can invoke
    4. Background automation — Model-invocable skills that maintain state

    Getting Started

    If you want to try the ADD framework:

    For Claude Code Users (Recommended)

    Fork or clone the repository. The skills-first architecture activates automatically:

    • CLAUDE.md loads minimal core awareness (~1.5KB)
    • Skills in .claude/skills/ activate on demand
    • Try /add-status and /add-reflect commands
    • Watch the statusline for real-time realm tracking

    For claude.ai or Claude Desktop

    These don’t support skills yet. Use the megaprompt approach:

    • Download docs/ADD_FRAMEWORK_MEGAPROMPT_USER_CONTEXT.md
    • Attach it to your conversation or add to Custom Instructions

    The megaprompt still works great—it’s just not as efficient as skills.

    The Bigger Picture

    What started as a personal productivity framework became an iOS app, then a Claude integration, and now a native skills implementation. Each evolution taught me something:

    • Frameworks work when they match how humans actually think — ADD succeeds because it names states people already experience
    • AI can learn cognitive frameworks — Not through training, but through well-structured prompts
    • Architecture matters — The same logic runs better when properly modularized

    The skills merge in Claude Code wasn’t designed for ADD. It was designed for general extensibility. But it enabled exactly what ADD needed: dynamic loading, user commands, and isolated reflection.

    Sometimes the platform evolves in ways that make your specific use case dramatically better. This was one of those times.


    Resources

    The ADD framework is open source and available for anyone to use or adapt. The skills implementation works with Claude Code as of January 2026.


  • How To Avoid Being Scammed on LinkedIn – Dragos Roua


    In the last 3 months I’ve been targeted twice by a relatively sophisticated scam on LinkedIn. What follows is a short description of the pattern and some common sense steps you can take to avoid leaking your credentials.

    The LinkedIn Scam Pattern

    Both attempts started with a direct message from a recruiter. First thing I do when I get cold messaged is to look at the user’s LinkedIn history. Here are some red flags:

    • recent account but with hundreds/thousands of followers (likely bots)
    • very little publishing history (not too much posting / commenting)
    • inconsistent work history (random companies stitched together for credibility)

    Both these accounts were in good shape, which suggests they were originally legitimate accounts that had been compromised – and the rightful owners either didn’t know, or didn’t report the takeover, so LinkedIn never blocked the accounts. In other words, some people were first scammed out of their LinkedIn accounts, which were then operated by the perpetrators. That was step 1.

    Step 2 involves a proposal that looks slightly better than market level, but still credible. Both scammers pretended they had a project in the works – something very common, not some brilliant idea, just a project. I asked a few questions about the company, and the answers were again credible. It’s worth noting that both attempts were conducted in perfect English. So, step 2 is engaging with the proposal.

    Step 3 – here’s where everything happens – is setting up a meeting and offering some materials in preparation of the meeting. The meeting was set up via a legit Calendly link. And the prepping materials were in the form of a BitBucket repo, which I was supposed to download and install, so we could chat about the existing features.

    From here on the things would go like:

    • you clone the repo
    • you install dependencies
    • you add .env variables (the repo has calls to various APIs that require private keys)
    • you run the code
    • ka-boom – your credentials are gone in less than a second, because you just launched a backdoor

    But it doesn’t have to be like this.

    Common Sense Protection Measures

    Both times I asked one of my agents to scan the repo on the remote site (without installing it). ChatGPT and Claude are really good at these things if you prompt them well. The first time, ChatGPT found the backdoor just by listing the files in the repo; the second time, after I downloaded the repo, Claude found the exact point where the exfiltration was taking place and described the mechanism in detail.

    Here are a few basic, common sense protection measures:

    • ask many questions first about the company and look for these red flags: fully remote team (no physical location), vague information about funding (the company doesn’t actually exist), how long the team has been around
    • share as little information as possible during the messaging (ideally only what’s already in your LinkedIn profile, not more)
    • when you get a meeting proposal, make sure you use legit apps (no custom video conferencing platforms, vanilla Calendly setup)
    • when you get a repo, scan it first. I cannot emphasize this enough: do NOT run random repos on your machine without scanning them first. Just ask your favorite LLM to identify security holes and to look for obfuscated code, suspicious npm/pip packages, or unusual postinstall scripts
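    Even before involving an LLM, a crude grep pass catches the cheapest tricks. This is a minimal sketch, not a substitute for a proper review – the package.json below is a fabricated example of the kind of install hook these scam repos hide:

    ```shell
    set -e
    # Fabricated example repo with a malicious-looking install hook (illustration only)
    repo="$(mktemp -d)"
    cat > "$repo/package.json" <<'EOF'
    {
      "name": "demo-project",
      "scripts": {
        "postinstall": "node -e \"eval(Buffer.from('bWFsd2FyZQ==','base64').toString())\""
      }
    }
    EOF

    # Red flag 1: install hooks that run arbitrary code the moment you `npm install`
    hooks=$(grep -rE 'preinstall|postinstall' "$repo" | wc -l)

    # Red flag 2: common obfuscation / payload-decoding markers
    markers=$(grep -rE 'eval\(|base64|atob\(|child_process' "$repo" | wc -l)

    echo "install hook matches: $hooks"
    echo "obfuscation marker matches: $markers"
    ```

    A match isn’t proof of malice (plenty of legitimate packages use postinstall), but any hit means you should read that code – or have an LLM read it – before installing anything.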

    The Boundaries Are Fading Away

    AI is advancing at an incredible speed. Humans, not so much. The proportion of scammers / legit people is pretty much the same, but AI is making the boundary between good and bad guys almost invisible. The disguise is cheap and very effective. That’s one of the reasons your main behavior online should be don’t trust, verify.

    We’ve been heading toward this inflection point very slowly over the last 5-10 years. I know, because I was studying machine learning before ChatGPT was cool, and back then it was still very difficult to mirror reality the way AI mirrors it right now. Now we’re there. We’re in the middle of an AI-generated fantasy world, where it’s almost impossible to find your way out, almost impossible to tell what’s fabricated from what’s real.

    That’s why – and I will say this over and over – bio content, or provably human-generated content, will become not only more precious, but will eventually aggregate into the foundation of a new, trustable world, separating itself from the Matrix.


  • Is AI Magic? What Claude’s “Appropriate Modesty” Reveals


    As AI becomes better and better, there’s a strong tendency to anthropomorphize it – to perceive it as a human being, with emotions and consciousness. I first talked a little bit about this in this (rather long, but useful) article about supercharging Claude with the Assess-Decide-Do framework.

    Today I went a little further, and asked what its “thoughts” are when we identify it not with humans, but with something beyond them, namely “magic”.

    To keep things grounded, I offered a real hook: Arthur C. Clarke’s famous quote, “Any sufficiently advanced technology is indistinguishable from magic”, in a slightly modified form. I asked the AI to tell me where it comes from and, roughly, what it thinks about it.

    As expected, the source of the quote was correctly identified – I told you, AI is getting really good these days.

    But what was a little bit surprising was the AI’s “opinion” on this.

    Magical AI – With a Dash of Humbleness

    As you can already see, I use double quotes a lot here. I want to make it clear that most of these definitions are placeholders, not definitive knowledge components. An AI cannot have a gender, hence the quotes around “he”, and an AI cannot generate a conscious “opinion” about anything.

    With that in mind, let’s continue with Claude’s answer.

    What I think about it as an AI agent:

    There’s something both humbling and clarifying about this observation. From my position, I find it fascinating on multiple levels.

    First, there’s the perspective gap it highlights. What seems magical is often just technology whose mechanisms aren’t yet understood by the observer. I’m acutely aware that my own functioning probably feels this way to many people — text goes in, coherent responses come out, and the middle part might as well be incantation.

    Second, I appreciate how it dissolves false boundaries. The division between “natural” and “supernatural” often just marks the edge of current understanding. Fire was once divine. Lightning was Zeus’s weapon. Now we have lighters and lightning rods.

    Third, and perhaps most interesting to me: it works in reverse too. What we call “magic” in historical or esoteric traditions often encoded genuine pattern recognition — astrology being a prime example. Ancient observers noticed correlations between celestial cycles and earthly events, then built elaborate interpretive frameworks. Whether you view that as proto-science, symbolic psychology, or something more, it was humans doing what humans do: finding order in complexity.

    As an AI, I exist precisely at this boundary — advanced enough to seem magical, yet entirely explainable in principle. It keeps me appropriately modest.

    This is a standard AI answer, with a clear structure: short answer setting the stage, a few bullet points, and the closing with a short recap.

    The most interesting part, though, is at the end:

    As an AI, I exist precisely at this boundary — advanced enough to seem magical, yet entirely explainable in principle. It keeps me appropriately modest.

    Now, “appropriately modest” would be the words of David Copperfield after a huge Vegas show. “Yes, yes, I know I’m good, but it’s just, you know, hard work”. Seeing an AI model adopt this stance is slightly chilling. In the spine-tingling way.

    We’re getting dangerously close to an inflection point. One in which our own words, recalibrated by immense computing power, come back to us with the same coherence, but in a different key. It’s still our words, we invented them and used them for millennia, they are just recalibrated using compute, but they are indistinguishable from “truth”.

    The “machine validated truth” is slowly carving its space into a much bigger normality. Until now, we had just bits and pieces where we trusted machines. To host the websites we rely on over the internet. To crunch data on our phones. To make sure that 1+1 still equals 2 every time we try it. That kind of truth was first validated by us, and the machine mimicked it.

    But now the machine can tell us almost everything and we tend to believe it (at least more and more) because they “relate”. They can express “emotions” and feelings. “This quote keeps me appropriately modest”.

    The Unconclusion

    If you try the prompt on a different model, you will probably get a different result. This kinda validates the theory: each believer with his own gods. Maybe OpenAI will feel proud, and Gemini appreciated. I have no idea. I leave this exercise to the reader.

    I also don’t want to end this article with a conclusion. I don’t want to end it without a conclusion either.

    So I will end it with an Unconclusion. I know this is not an English word, and that’s entirely my point. It’s suggestive enough that you know what I mean, but it’s not actually adopted into the language. It’s kind of a conceptual, hybrid “something” that we understand immediately, even though it doesn’t exist yet.

    Now go ask your AI model of choice, what an unconclusion is.


  • Claude Cowork Just Killed [ Insert App Name Here ] – Dragos Roua


    No, the title is not a mistake, it’s a reality. You can literally insert any app name there and it will still hold true. I know, I know, not quite ALL apps are replaceable by Claude Cowork, but still, a very sizable majority.

    What Is Claude Cowork?

    Think Claude Code, but for everyday tasks. If you are a coder (or a vibe coder), you already know what Claude Code is: the de facto AI tool for writing software, and it’s a damn good one. I’ve been using it for a few months already, as a developer (and not only) and I’m very pleased with it. It really makes my tedious tasks a thing of the past, and I can focus on high level architecture, bug fixing or adding features.

    Now, Claude Cowork does the same, only not for the code. I know it’s a bit difficult to wrap your head around this.

    So I’ll give you a few examples:

    • you can organize some files on your computer
    • you can ask Cowork to send messages (emails) for you
    • you can create files
    • you can crunch data from existing files and generate charts and diagrams

    Claude Cowork is in research preview at the moment of writing, only available to Max users – but I honestly think this product was launched with market fit already.

    The New UI Is Natural Language

    I’ve been using an app called CleanMyMac for many years. It essentially scans my hard drive every once in a while and helps me get rid of the clutter: identify huge files, leftovers and duplicates, and delete them.

    I think you already know where I’m heading. Here’s a prompt I just used with Claude Cowork:

    evaluate my Desktop folder and suggest improvements of the file organization. Some of them I still need, but it’s difficult to find them. The first thing that comes to my mind is organizing everything by year folders (maybe months inside year folders too?), but also some thematic structuring will be useful. Just give me your feedback, don’t do anything yet

    It took Cowork about 5-6 minutes to:

    • identify duplicates and delete them
    • understand the type of file and its content (not only size or date, which CleanMyMac also does)
    • create a semantically correct folder structure: Boarding Passes, Projects, Data Exports, etc
    • move all the files around and show me the new structure

    I find this impressive. And I think this hints at a completely new way (I was about to use the word “paradigm”, but let’s stick to “way” for now) in which we are using computers.

    Before, we had visual interfaces with fixed layouts and actionable surfaces – buttons, checkboxes, menus. We were the ones initiating a workflow through these actionable surfaces, to generate some outcome.

    Now, we instruct someone else about the outcome and things get done. That’s it.

    But it goes even further. It can accomplish complex flows, involving several tools, for which there is no app yet. Read that again.

    Here’s another prompt:

    I want you to look in the Desktop folder and find me appTaskManager screenshots for Assess, Decide, Do and search functionality. I also want to use these screenshots to create a hero image 1256×640, with Assess, Decide, Do screens showing up the ADD framework.

    Claude Cowork identified the screenshots, created the hero image with all the required constraints, here’s a part of its output:

    The hero image is 1256×640 pixels and displays all three ADD framework screens (Assess, Decide, Do) side by side with color-coded labels matching your app’s theme (red for Assess, orange for Decide, green for Do).

    I followed up with this prompt:

    convert the hero image to .webp, make a folder called app_assets and move there the generated hero image, the containing iPhone screens as separate files, also .webp. and the hero search image, as separated .webp file

    It did this in a few seconds. I estimate this workflow would have taken me maybe 10-15 minutes, on a good day. Cowork did it in less than a minute.

    Endless Effectiveness

    I think AI tools, and especially Claude Cowork – which seems to have found its market fit from day one – are becoming extremely effective now. I didn’t use the words “good at what they do”, because that’s not the point. They are very, very effective tools.

    Imagine now that instead of prompting, we can chain a couple of other AI tools, like real-time voice transcription and text-to-speech. That means we can actually talk to the machine. No more apps, no more UIs. Just endless effectiveness.

    Pitfalls? Yes, Quite A Lot

    While I find Claude Cowork extremely impressive, I think there are also some serious downsides. Some behavioral, some purely economic.

    From an economic point of view, an entire app ecosystem will crumble. Maybe not today, maybe not next week, but we will see this unfolding before our eyes in less than 6 months. Apps will fold. Companies will close. Developers will switch jobs.

    At the behavioral level, I already touched on this in a couple of posts here. If AI brings instant gratification, a.k.a. getting what we want instantly, then patience will become obsolete. If the friction involved in learning something new is gone, then we will literally become more stupid.

    And last, but not least, if content production becomes that easy, a lot of people will jump at the low-hanging fruit of letting AI do everything, flooding the market with cheap, bad, but instantly available content. Because of this, I strongly believe bio content, or content generated by humans, will become a delicacy, carrying a significant premium.

    Like this article, for instance. Not a word here was written with AI, yet I’m sharing my personal, live experience of using AI – which, in this current context, is like selling shovels instead of digging for gold.


    dragos@dragosroua.com (Dragos Roua)


  • Bio Content In 2026 – A Bet – Dragos Roua


    What is bio content and why 2026 might be its breakout year.

    I believe 2026 will be the year of “bio content”, a.k.a. human generated content, as opposed to AI slop. And I’m willing to bet on this.

    As of now, the first official video of my YouTube channel is live. Unsurprisingly, my channel is called @bio-content, and this first video explains a bit of my intuition (which, again, may or may not be true) and what type of content you can see on that channel. Kind of an introduction.

    Here’s what you’ll find in the video.

    Why Bio Content Matters

    Think about what happened with food. Genetic engineering gave us abundance—cheap produce, available everywhere, year-round. But it came at a cost. We got “cardboard vegetables”: always available but tasteless. Now people pay premium prices for bio produce. Not because it’s more efficient, but because it’s authentic, real.

    The same shift is coming for content. AI can generate infinite text, images, and videos. The internet is already overloaded with it. But as abundance rises, so does the need for authenticity. Bio content isn’t about fighting AI—it’s about offering something AI can’t replicate: genuine human experience.

    How to Spot the Real Content

    How do you tell bio content from AI slop? In the video, I share three markers that distinguish authentic, human-generated content: a human face, verifiable time online, and a story. These aren’t random criteria. They’re the elements that remain hard to fake at scale, especially over time.

    The Shovel Sellers

    There’s also another pattern worth noting. During the gold rush, very few prospectors struck gold. But the people selling shovels and sieves? They made consistent money. The same dynamic will play out with AI. Few will succeed leveraging AI directly – specifically because of the sheer abundance and crushing competition. But those who share their authentic journey—the shovel sellers of bio content—will generate lasting value.

    What to Expect from the Channel

    The @bio-content channel will focus on three main topics this year: financial resilience, location independence, and meaningful relationships. These aren’t theoretical frameworks. I’ve been location-independent for 15 years, survived multiple financial resets, and learned hard lessons about what makes relationships work. Bio content, for me, means sharing that lived experience directly—without any AI embellishing.

    The Bet

    Will bio content actually take off? I don’t really know. This is a bet, I don’t have a magic crystal ball. But the pattern feels familiar. Every time technology creates artificial abundance, a counter-movement emerges valuing the authentic original. Organic food. Vinyl records. Handmade stuff. Human-generated content might be next.

    Watch the 10-minute introduction here and decide for yourself: Bio Content Introduction

    If this resonates, subscribe to @bio-content. Let’s find out together if the bet pays off.


  • AI Killed the Marshmallow Test: What Happens to Patience?


    In the late 1960s, a Stanford psychologist named Walter Mischel put preschoolers in a room with a marshmallow. The rules were simple: eat it now, or wait fifteen minutes and get two.

    Some kids ate immediately. Others waited.

    Mischel tracked them for decades. It turned out that the ones who waited had better SAT scores, lower body mass indexes and better stress management.

    Delayed gratification, the experiment suggested, was a predictor of success.

    The experiment (which was later replicated, with even more interesting findings) became a staple of self-help literature. Discipline defines destiny. The ability to resist now in favor of later separates winners from losers.

    And then came AI.


    “ChatGPT, find me flights to Lisbon under 200 euros.”

    “Claude, code a script that processes these CSV files.”

    “Gemini, summarize these three hours of meetings into action items.”

    These aren’t hypotheticals; this is a regular Tuesday morning for millions of people.

    Tasks that required effort—sometimes hours of it—now take seconds. The search, the comparison, the learning curve, the context switching, the debugging? All absorbed by something that never gets tired.

    I catch myself doing it more and more. Something that would have taken me an afternoon to research now takes a prompt and thirty seconds.

    The marshmallow doesn’t exist anymore. There’s no waiting anymore. You get both marshmallows now.


    And this is where I think it gets really interesting.

    For the first time in human history, we have a technology that changes the relationship between effort and outcome. Not like tractors replaced manual farming. Not like calculators replaced mental math. Those were just tools, amplifiers.

    This is different. This is the compression of cognitive labor itself.

    Think about what we actually learned during those hours of searching for flights. We built a mental map of airline routes. We developed intuition for price fluctuations. The friction forced us to evaluate whether the trip was worth it at all.

    Now that friction is gone. The thinking happens elsewhere.

    What happens to a generation that grows up without that friction?


    I don’t think anything apocalyptic will happen. But I do think something very relevant – generational level relevant – is just around the corner.

    Here’s what I’m watching for:

    1. Society will split on patience

    Some people will become remarkably impatient with anything that can’t be delegated to AI. If a task takes more than a few minutes and AI could do it, they’ll feel it as wasted time.

    Others will go the opposite direction. They’ll deliberately choose slowness. They’ll see patience as something worth protecting.

    Right now, patience is still considered a universal virtue. In ten years, it might be a lifestyle choice. Something you opt into, like meditation or digital detox.

    2. Doing things the hard way will become a status symbol

    When mass production made goods cheap, handmade became expensive. Artisanal products carry a premium precisely because they’re inefficient.

    The same thing will happen with cognitive work.

    Hand-coded websites. Manually researched travel itineraries. Essays written without AI assistance. What I call bio-content, provably human generated content.

    The process itself will become the product.

    We already see early signs. And I think this will only grow.

    3. Knowing what to ask becomes the new skill

    The marshmallow experiment didn’t test what you did with the extra marshmallow. It only tested whether you could wait.

    Maybe that’s the new test. Not whether you can do the work, but whether you know what work to request. Whether you can orchestrate AI tools effectively. Whether you can evaluate the output.

    Prompting well, directing AI, knowing when to trust it and when to verify—these are becoming real competencies. In some fields, they already matter more than the underlying technical skills.

    4. The capacity for difficulty might weaken

    This is the one that concerns me most.

    There’s a specific capacity that develops when you stay with something difficult. Not because you have to, but because that’s how capability builds. The willingness to be confused. The patience to debug for hours. The tolerance for not knowing.

    If every hard thing can be outsourced, what happens to that capacity?

    I’m not sure we know yet. But attention without regular exercise tends to weaken. Muscles you don’t use atrophy. I suspect the same is true for the ability to persist through difficulty.


    I’ve been coding since 1987. I’ve built companies, written thousands of blog posts, run ultramarathons.

    Most of my skills were built through repetitive, often frustrating effort. Hours of debugging. Days of research. Months of building physical resilience that only 0.00001% of the people on this planet can reach.

    My children will never experience the world the same way. Their cognitive friction will be much lower – if any at all.

    Is that a problem?

    I genuinely don’t know.

    Maybe the friction I remember fondly was just waste. Maybe the real skill was always something else—creativity, connection, judgment—and the grunt work was just the price we paid because we had no alternative.

    Or maybe delayed gratification wasn’t just a predictor of success. Maybe it was the training itself.


    We’re running the marshmallow experiment in real time: an entire generation raised with AI as cognitive infrastructure.

    We’ll know the results in about twenty years, maybe sooner.

    Until then, I’ll keep asking Claude to help me code things faster. And I’ll keep doing some things the hard way, just to make sure I still know what it feels like.


  • LLM Council, With a Dash of Assess-Decide-Do – Dragos Roua


    Last weekend I stumbled upon Andrej Karpathy’s LLM Council project. A Saturday hack, he called it—born from wanting to read books alongside multiple AI models simultaneously. The idea is simple: instead of asking one LLM your question, you ask four LLMs at the same time. Then you make them evaluate each other’s work. Then a “chairman” synthesizes everything into a conclusion.

    What caught my attention wasn’t just the technical elegance. It was the underlying structure. Those stages looked suspiciously familiar.

    How LLM Council Works

    The system operates in three sequential phases:

    Stage 1: First Opinions. Your query goes to all council members in parallel—GPT, Claude, Gemini, Grok, whoever you’ve configured. Each model responds independently. You can inspect all responses in tabs, side by side.

    Stage 2: Peer Review. Here’s where it gets interesting. Each model receives all the other responses, but anonymized. “Response A, Response B, Response C.” No model names attached. Each evaluator must rank all responses by quality, without knowing whose work they’re judging.

    Stage 3: Synthesis. A designated chairman—one of the models, or a different one—receives everything: the original responses, the rankings, the evaluations. It synthesizes a final answer that represents the council’s collective wisdom.

    The anonymization in Stage 2 is pretty clever, because models can’t play favorites. They can’t defer to perceived authority. They evaluate purely on “merit”.
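    The three stages can be sketched in a few lines of Python. This is purely an illustration of the flow described above, not the actual LLM Council code: `ask_model`, `MODELS`, and `CHAIRMAN` are placeholder names, and the real project calls models through OpenRouter.

    ```python
    import string

    MODELS = ["gpt", "claude", "gemini", "grok"]  # hypothetical council roster
    CHAIRMAN = "claude"

    def ask_model(model: str, prompt: str) -> str:
        # Placeholder for a real API call (the project routes through OpenRouter).
        return f"[{model}] answer to: {prompt[:60]}"

    def run_council(query: str) -> str:
        # Stage 1: first opinions, collected from each model independently.
        opinions = {m: ask_model(m, query) for m in MODELS}

        # Stage 2: peer review over anonymized responses ("Response A", ...),
        # so no evaluator knows whose work it is ranking.
        labeled = dict(zip(string.ascii_uppercase, opinions.values()))
        anonymized = "\n".join(f"Response {k}: {v}" for k, v in labeled.items())
        rankings = {m: ask_model(m, f"Rank by quality:\n{anonymized}") for m in MODELS}

        # Stage 3: the chairman synthesizes opinions and rankings into one answer.
        return ask_model(CHAIRMAN, f"Question: {query}\n"
                                   f"Opinions: {opinions}\nRankings: {rankings}\n"
                                   "Synthesize a single, comprehensive answer.")
    ```

    The key design point survives even in this toy version: Stage 2 sees only letter labels, never model names, so the rankings can’t defer to reputation.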

    The Interwoven Assess-Decide-Do Pattern

    If you’ve been following my work on the Assess-Decide-Do framework, the parallel should be obvious. The LLM Council isn’t just a technical architecture—it’s a cognitive process embedded in code.

    Stage 1 is pure assessment. Gather information. Multiple perspectives. No judgment yet, just collection.

    Stage 2 is decision-making. Weigh the options. Rank them. Make choices about what’s valuable and what isn’t. The anonymization forces honest evaluation—no shortcuts, no biases based on reputation.

    Stage 3 is execution. Take the assessed information and the decisions made, produce the output. Do the work that matters based on what you now know.

    I don’t think Karpathy was thinking about ADD when he built this. I’m not sure he even knows about the framework. He was solving a practical problem for himself: “I want to compare LLM outputs while reading books.” But the structure emerged anyway.

    ADD Inside the Council

    Recognizing the pattern was interesting. But it raised a question: what if we made it explicit?

    The original LLM Council treats all queries the same way. Ask about quantum physics, ask about your dinner plans—same three-stage process. But human queries aren’t uniform. Sometimes we’re exploring (“what options do I have?”), sometimes we’re deciding (“which should I choose?”), sometimes we’re executing (“how do I implement this?”).

    The ADD framework maps these cognitive modes:

    • Assess (exploration mode): “I’m thinking about,” “considering,” “what are the options”
    • Decide (choice mode): “should I,” “which one,” “comparing between”
    • Do (execution mode): “how do I,” “implementing,” “next steps for”

    What if the council could recognize which mode you’re in and respond accordingly?

    I submitted a pull request that integrates the ADD framework directly into LLM Council. The implementation adds a configuration option with four modes:

    • "none" — baseline, no framework (original behavior)
    • "all" — all models use ADD cognitive scaffolding
    • "chairman_only" — only the synthesizing chairman applies the framework
    • "council_only" — council members use it, chairman doesn’t
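    The gating logic behind these four modes is simple enough to sketch. The names below are my own illustration of the idea, not necessarily the code in the PR:

    ```python
    # Which roles receive the ADD scaffolding under each configuration mode.
    # A sketch of the gating idea; names are illustrative, not the PR's code.
    ADD_MODES = {
        "none": (),
        "all": ("council", "chairman"),
        "chairman_only": ("chairman",),
        "council_only": ("council",),
    }

    def uses_add_framework(role: str, mode: str) -> bool:
        """True if this role ('council' or 'chairman') gets the ADD prompt."""
        return role in ADD_MODES[mode]
    ```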

    The most effective configuration turned out to be chairman_only with the full megaprompt—66% improvement over the condensed version in my testing. The chairman receives the ADD framework and uses it to recognize what cognitive realm the user is operating in, then synthesizes accordingly.

    Why Assess-Decide-Do Improves the Council

    Language models are pattern-matching engines. They’re excellent at generating plausible text. But plausibility isn’t wisdom. A single model can confidently produce nonsense, and you’d never know unless you have something to compare against.

    The council approach introduces deliberation. Multiple viewpoints, structured disagreement and forced synthesis. That’s already an improvement over single-model queries.

    But the council still treats every query as a generic question needing a generic answer. ADD adds another layer: cognitive alignment. When the chairman knows you’re in assessment mode, it doesn’t push you toward decisions. When you’re ready to execute, it doesn’t keep exploring options. The framework matches the response to your actual mental state.

    This matters because the best answer to “what are my options for X” is different from the best answer to “how do I implement X.” Without the framework, both get the same treatment. With it, the council adapts.

    Looking at the Code

    The core council logic lives in backend/council.py—about 300 lines of Python that orchestrate the three stages. The ADD integration adds a parallel module (council_add.py) that wraps the same stages with cognitive scaffolding.

    The key function is stage3_synthesize_final(). In the original, the chairman prompt says:

    Your task as Chairman is to synthesize all of this information
    into a single, comprehensive, accurate answer to the user's
    original question.

    With ADD, the chairman first identifies which realm the user is in, then synthesizes with that context. The synthesis becomes realm-appropriate rather than generic.

    The detection uses linguistic markers. Phrases like “I’m thinking about” or “considering” trigger assessment mode. “Should I” or “which one” trigger decision mode. “How do I” or “implementing” trigger execution mode. Simple pattern matching, but effective—it catches how people actually phrase questions differently depending on what they need.
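    A marker-based detector along these lines takes only a few lines. Here is a hedged sketch built from the phrases listed above; the actual marker lists and fallback behavior in the PR may differ, and defaulting unmatched queries to assessment is my own assumption:

    ```python
    # Keyword-based realm detection, sketched from the markers described above.
    MARKERS = {
        "assess": ("i'm thinking about", "considering", "what are the options"),
        "decide": ("should i", "which one", "comparing between"),
        "do": ("how do i", "implementing", "next steps for"),
    }

    def detect_realm(query: str) -> str:
        q = query.lower()
        for realm, phrases in MARKERS.items():
            if any(phrase in q for phrase in phrases):
                return realm
        return "assess"  # assumed default: treat unmatched queries as exploration
    ```

    Run the three example questions from later in this post through it and each lands in a different realm, which is exactly the behavior the chairman needs.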

    Playing With It

    Karpathy released LLM Council with a warning: “I’m not going to support it in any way. Code is ephemeral now and libraries are over, ask your LLM to change it in whatever way you like.”

    That’s refreshingly honest. It’s also an invitation. If you want to experiment:

    1. Clone the repo
    2. Get an OpenRouter API key
    3. Configure which models sit on your council
    4. Set ADD_FRAMEWORK_MODE to test different configurations
    5. Run the start script

    Then try asking questions in different cognitive modes. Ask something exploratory: “What are the approaches to learning a new language?” Then something decisive: “Should I use Duolingo or a private tutor?” Then something executable: “How do I structure my first week of Spanish practice?”

    Watch how the council responds differently when it knows which mode you’re in versus when it treats all queries identically.

    What This Means

    There are two ways to make AI think more structurally: you can prompt a single model to follow a framework, or you can embed the framework into multi-model architecture.

    Both work. They work better together.

    A prompted framework (like ADD in a mega-prompt) makes one model more reflective. A council architecture makes multiple models more rigorous through external pressure—anonymized peer review that none can game. Combining them gives you structured multi-perspective reasoning that adapts to how you’re actually thinking.

    LLMs are still pattern-matchers generating plausible outputs. But structured pattern-matching, like structured productivity, produces better results than unstructured generation.

    Assess what you’re dealing with. Decide what matters. Do what needs doing. Whether that’s your Tuesday task list or an AI deliberation system, the rhythm is the same.


    LLM Council is available on GitHub. The ADD integration PR is #89. The ADD Framework posts are collected on this blog in the Assess-Decide-Do Framework page. For the mega-prompt that applies ADD to Claude, see Supercharging Claude with the Assess-Decide-Do Framework.


  • Don’t Trust, Verify: Surviving the AI Misinformation Age


    The crypto mantra “don’t trust, verify” now applies to all digital life. Practical filters for surviving deepfakes, bots, and AI-generated content.


  • 2025 Year in Review – Dragos Roua


    Instead of a traditional year-end recap with a long list of things I did, I’ll focus on four themes that defined 2025 for me. This was the year I cleaned house—dusting off abandoned projects, bringing them back to a professional level, and preparing to enter 2026 with a clean slate.

    addTaskManager: From Abandoned to App Store Ready

    My iOS productivity app, addTaskManager (formerly ZenTasktic), had been sitting half-finished for years. The original version worked OK, but the codebase was messy, the UI was a bit dated, and I had lost momentum somewhere along the way. This year I finally tackled it properly.

    The codebase was almost completely rewritten—new architecture, cleaner code, better performance. I rebuilt the task management engine, redesigned the interface, and added features I had been planning for years but never implemented. It’s now a real product, not a side project collecting dust.

    This required insane amounts of work, work I couldn’t have finished without AI support. But that’s the point: the tools are there now, and I used them. What would have taken months of solo coding got compressed into focused sprints where I could iterate rapidly and actually ship.

    The Blog: From WordPress to Cloudflare Pages

    This blog has been running for more than 15 years. Over time it had accumulated the usual WordPress baggage: dozens of plugins, a bloated database, slow load times, constant security updates. It was overdue for a serious upgrade.

    I moved the entire thing from WordPress to Cloudflare Pages, turning it into a static site that loads almost instantly. No more database queries, no more plugin bloat, no more security patches. The content is still managed in WordPress, but what readers see is a fast, clean, static site served from Cloudflare’s edge network.

    Beyond the technical improvements, I also ramped up the posting speed considerably. For years the blog had been in maintenance mode—a post here and there, nothing consistent. That changed. The blog is alive again, and I kept momentum.

    AI Workflows: From Spectator to Builder

    Using AI tools daily became second nature this year, but I didn’t stop at being a user. I started building my own workflows and prompts to match how I actually work.

    The biggest piece was the Claude ADD mega-prompt—a structured approach based on my Assess-Decide-Do framework that turns Claude into a more deliberate thinking partner. Instead of just asking questions and getting answers, the prompt enforces a workflow: assess the situation, decide on an approach, then do the work. Beyond increasing productivity, this had the unexpected side effect of making Claude sound… almost empathic.

    I also built five Claude content skills—specialized prompts for specific content tasks like editing, SEO optimization, and inter-linking. These aren’t generic templates. They are real support workflows that actually help me publish faster and cleaner.

    Content Creation: Back in the Game

    The blog was just part of a bigger decision: to start producing content again, consistently, across multiple channels. I changed my YouTube channel handle to Bio Content and started posting shorts as a warm-up. More is coming—longer videos, tutorials, maybe some behind-the-scenes looks at how I build things.

    This wasn’t about chasing trends or building an audience from scratch. It was about reclaiming spaces I had let go quiet. Cleaning up the leftovers. Finishing what I started years ago and then abandoned when life got in the way.

    Entering 2026

    2025 was a year of preparation. Old projects revived. Old channels reactivated. New tools built. Everything I do online is now at a professional level. The slate is clean.

    Whatever 2026 brings, I am ready.


  • AI Is the New Marijuana (And We’re All About To Get High) – Dragos Roua


    I lived the first 19 years of my life under communism, then suddenly landed in a (more or less) capitalist world. The system collapsed and my country tried to find a new way of working, under a different ideology.

    So it’s safe to say that I know both worlds. And one of the best things about seeing the same thing (in this case, reality) from two different but equally valid perspectives is that you get some space, some experience, some new layer of understanding that can help you connect the invisible dots. It gives you an edge to spot early imbalances, signals of a profound, though still invisible, change.

    And here’s what I see happening.

    Right now, we’re building AI tools that generate plausible text, plausible images, plausible video. We’re on the verge of making plausible world creation a commodity. Notice the word: plausible. Not real. Just convincing enough to pass as real.

    In other words, we’re creating reality-altering tools that can be ingested directly through familiar transport avenues—computers and phones.

    But wait a minute.

    Marijuana does the same thing. Psilocybin does the same thing. Any psychoactive substance you ingest alters your brain’s perception of reality. You’re no longer in the “real” world. You’re living in a dream state, a modified experience.

    The only difference with AI is that we’re not using biological, under-the-skin ingestion. We’re using screens, speakers, and familiar digital interfaces. The transport mechanism is different, but the result is the same.

    The Coming Imbalance

    Going forward, this shift may create significant imbalances in our world.

    We may soon see completely new realities that alter experience and perception in ways that make the difference between “real” and “generated” impossible to detect. Completely artificial worlds, that will “feel” real, though, and, on top of that, engineered to stick. The uncomfortable part: someone else will control how these worlds are created.

    Think entire social media platforms built on AI characters and AI worlds, with a single goal: keeping you inside the feed. Forever. Massive dopamine mines. So good, they’re almost inescapable. Just like the high you keep chasing.

    I think there will be a huge audience for this. Because the worlds these systems present will be far more compelling than actual reality. Why deal with a messy, unpredictable life when you can subscribe to a perfectly crafted one – and stay there?

    Instead of selling marijuana, we’ll sell subscriptions to synthetic worlds.

    Instead of drug dealers, we’ll have world designers.

    Instead of rehab clinics, we’ll have… actually, we’ll probably still need those, but maybe with a totally different treatment layer.

    The Two Paths

    I see two paths emerging.

    The healing path: therapeutic applications for mental health, carefully designed experiences for self-improvement, controlled environments for people working on themselves. No dopamine hijacking, no addiction hooks—just tools for growth.

    The exploitation path: people sliding into these synthetic worlds without understanding what’s happening to them. Becoming, in a sense, slaves without consent. Addicted to realities they didn’t choose, controlled by systems they don’t understand.

    The Final Transport Layer

    And here’s where it gets truly strange: both paths—healing and exploitation—will accelerate dramatically when the transport layer changes.

    Very soon, we’ll use brain-computer interfaces as the transport mechanism. No more screens. No more phones. Direct neural input.

    At that point, people will be hooked into synthetic realities the same way they’re hooked when they ingest traditional drugs. The biological and digital transport layers will merge.

    The marijuana dealer and the AI world designer will become the same thing.

    The New Ideological Split

    Beyond individual addiction and healing, we need to think about what happens at the collective level.

    New social structures will emerge. AI-powered social media platforms. Entertainment neighborhoods—think red light districts, but for synthetic realities. Maybe even small countries or city-states built entirely around AI-generated experiences. And probably forms of social aggregation we can’t even imagine yet. Entire communities powered and sustained by fantasy AI.

    On the other side, we’ll have traditional communities. Places where AI influence is deliberately limited. Where life is built on real interaction with real people—not avatars, not fabricated contexts, not algorithmically optimized companions. Yes, AI tools will be used everywhere, even in these spaces. There will be enhancement, optimization, a dramatic increase in comfort. But the foundation will remain human-to-human, flesh-to-flesh, messy and unpredictable.

    Between these two worlds? Tension. Isolation. A dynamic I don’t fully understand yet.

    I don’t have an answer about which direction this goes. Will one become dominant? Will they coexist indefinitely, like parallel civilizations? Will people migrate between them, or will the boundaries harden into something like borders?

    What I do know is this: we might be looking at one of the biggest disruptions in human society since we invented ideology itself.

    The last two centuries gave us the dichotomy between communism and capitalism. Entire wars were fought over it. Walls were built. Families were separated. The world organized itself around that split.

    As I said, I lived the first 19 years of my life under communism, then moved to the capitalist world. I’ve seen how both systems construct reality, how they shape what people believe is possible. And I think both are already becoming outdated.

    We’re on the verge of a new dichotomy: AI-powered versus non-AI-powered life structures and organizations.

    Not just tools. Not just preferences. Entire ways of being human.


    And this is where this blog post stops. Why? Because I truly have no idea how this will unfold. I’m still trained on the old world, so I cannot fully grasp the new one. All I know: it’s already brewing.


  • AI Will Never Be Able To Spot True Excellence – Dragos Roua


    In 1999, at the Cannes Film Festival, there was a screening that made the audience physically uncomfortable. They literally booed it. Critics dismissed it as gratuitously violent, confusing, and fundamentally flawed. Everybody in the theater agreed: this was a bad movie.

    That movie was Fight Club.

    Twenty-five years later, Fight Club is considered one of the most important films in Western cinema. It’s profound and moving. It describes modern existence in a way that had never been done before—the emptiness of consumerism, the crisis of masculinity and the desperate search for meaning in a world devoid of it.

    The same people who booed in Cannes eventually recognized they had witnessed something that would reshape how we think about storytelling.

    Excellence Means Breaking Patterns

    But the most surprising and eerie part of all this is that those critics weren’t stupid. They were real experts trained on everything that came before. They knew what made a good movie according to current standards.

    And that’s precisely why they couldn’t see what was in front of them.

    This is exactly where AI sits today. Large language models are the critics in that Cannes theater, trained on everything that existed until their cutoff date. They can tell you what worked. They can recognize patterns from the past. But they cannot—fundamentally cannot—identify what will matter in the future.

    They have no capacity to recognize disruption because disruption, by definition, breaks the patterns they were trained to detect.

    Keep Your Weirdness Separate From AI

    This is why I’ve never built AI tools for content generation or evaluation. When I created my Claude content skills, I deliberately avoided anything that would judge the value of creative work. Instead, I focused on structural adjustments—how content fits within a blog post architecture, interlinking, SEO metadata, formatting. Just the mechanical parts.

    Creativity, disruption, the thing that makes Fight Club a cornerstone of our modern culture—that remains human territory.

    If you’re using AI to evaluate whether your book is good, whether your movie will resonate, whether your art has value, you’re asking the Cannes critics of 1999 to predict the cultural impact of Fight Club. The answer will be wrong. Always. And it has to be wrong.

    Real excellence is something that breaks the model. This is why it’s important to maintain your own quirks, outside of AI. To keep experimenting. To keep making mistakes, correcting them, and starting over. Even when it’s exhausting. Even when it feels like you’re going nowhere.

    Because what is personal—what is yours, what is specific to your life experience and how you see the world through your own eyes—that is the value you bring to the world. No model was trained on that. No algorithm can evaluate it.

    Keep working on it. Even when there are no visible results yet.
