Tagged: Claude, Observations
Strategy sessions generate obsolete code with no way of knowing which is which
Apr 2026

When you're deliberating, actively researching, and pivoting strategy inside a chat with a large language model, you end up with code or CLI prompts that were generated earlier in the conversation but are no longer relevant. Further down the same conversation you change direction, which generates new code going a different way. The difficulty is compounded when you work the way I do: strategizing from my phone, then sitting down at my computer to implement whatever came out of those sessions. You either run all the code sequentially to make sure you don't miss anything, or try to remember manually which parts of the conversation are obsolete because of a mid-session pivot and which are still fresh and applicable. It would be a genuinely useful feature for builders if these models automatically detected code or prompts that no longer match the current direction of the build and struck them through, so you don't have to track this yourself or waste credits and time running prompts you no longer need.
Projects fragment across chats, and there's no native way to chain them
Mar 2026

Everything I am building for VYNS involves multiple parallel workstreams across multiple chats: product, infrastructure, brand, legal, GTM, marketing, build and deployment prompting, and more. Each lives in a different chat thread, often in different AI tooling. The problem is that there is no native mechanism to hand off state between sessions. You can't chain outputs from one conversation into another without significant manual effort and time to get the context right. I currently manage this with a combination of hand-written notes, copy-paste, and session summaries that I paste back in to start again. That friction is real; I estimate it costs roughly a 20-30% tax in context-reconstruction time, and it compounds daily.
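The manual handoff described above can be semi-automated. This is a minimal sketch, not my actual tooling: a per-workstream state dict rendered into a paste-ready summary block for the next chat. All field names and the example content are illustrative assumptions.

```python
from datetime import date

def render_handoff(state: dict) -> str:
    """Render a workstream state dict as a paste-ready context block
    for starting a fresh chat session."""
    lines = [
        f"## Handoff: {state['workstream']} ({state['updated']})",
        "Current decisions:",
    ]
    lines += [f"- {d}" for d in state["decisions"]]
    lines.append("Superseded (do NOT act on):")
    lines += [f"- {d}" for d in state["obsolete"]]
    lines.append(f"Next step: {state['next_step']}")
    return "\n".join(lines)

# Illustrative workstream state; in practice this would live in a
# small file per workstream and be updated at the end of each session.
state = {
    "workstream": "infrastructure",
    "updated": str(date.today()),
    "decisions": ["Deploy on a single region first"],
    "obsolete": ["Earlier multi-region plan from a prior chat"],
    "next_step": "Write the deploy script for the chosen region",
}

print(render_handoff(state))
```

Keeping the superseded decisions in the handoff, explicitly flagged, addresses the pivot problem too: the new session is told what not to act on rather than rediscovering it.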
AI as technical co-founder is real, but the context window is the bottleneck
Mar 2026

I use Claude as my main technical co-founder for the VYNS build, along with tools from OpenAI, Gemini, and xAI for various reasons I'll likely get into on my blog one day. The collaboration is genuine and unique: the systems hold architecture decisions, debate tradeoffs, and generate production code. I feed different channels and chats live updates and make real-time decisions based on the current AI landscape from multiple angles, including policy and regulation, new versioning, and tech releases. But as sessions grow, context fills and conversations end. Especially in Claude, to be honest, but that's because I'm most reliant on it currently. Each new chat starts cold, and the discontinuity compounds over a long build. You rebuild context constantly, and the AI's knowledge of your project either resets or pulls outdated information from earlier chats, rather than accumulating properly and syncing to the current decision framework instead of obsolete or pivoted ideas. Memory and past-chat search help at the edges, but the fundamental architecture is still per-session.