Tagged: LLM, Observations

Strategy sessions generate obsolete code with no way of knowing which is which

Apr 2026

When you're deliberating, researching, and pivoting strategy inside a chat with a large language model, you end up with code or CLI prompts that were generated earlier in the conversation but are no longer relevant: further down in the same conversation you changed direction, and the model generated new code going a different way. The difficulty is compounded when you work the way I do: strategizing from my phone, then going to my computer to implement whatever came out of those sessions.

You sometimes have to run all the code sequentially to make sure you don't miss anything, or try to remember manually which parts of the conversation are obsolete because of a mid-session pivot and which are still fresh and applicable. It would be a genuinely useful feature for builders if these models automatically detected code or prompts that no longer match the current direction of the build and struck them through, so you don't have to track this yourself or waste credits and time running prompts you no longer need.
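To make the idea concrete, here is a minimal sketch of one crude heuristic a client could run over a transcript: extract the fenced code blocks, key each one by the file path mentioned just before it, and flag every block except the most recent one per file as superseded. Everything here is hypothetical, including the keying rule and the `flag_stale` name; a real feature would need the model's semantic judgment, not just regexes.

```python
import re

# Hypothetical staleness pass over a chat transcript (a sketch, not a
# real product feature). Blocks that share a file-path key are assumed
# to target the same artifact; only the latest one survives unflagged.

FENCE = re.compile(r"```\w*\n(.*?)```", re.S)
PATH = re.compile(r"[\w./-]+\.(?:py|sh|js|ts|yml|toml)")

def flag_stale(transcript: str):
    blocks = []
    for m in FENCE.finditer(transcript):
        # Look for a file path in the line or two just before the fence
        preceding = transcript[:m.start()].rsplit("\n", 2)[-2:]
        key_match = PATH.search(" ".join(preceding))
        key = key_match.group(0) if key_match else None
        blocks.append({"key": key, "code": m.group(1), "stale": False})
    # Remember the index of the last block seen for each file key
    latest = {key: i for i, b in enumerate(blocks)
              if (key := b["key"]) is not None}
    for i, b in enumerate(blocks):
        if b["key"] is not None and latest[b["key"]] != i:
            b["stale"] = True  # superseded by a later block for the same file
    return blocks

transcript = (
    "Here is deploy.sh:\n```sh\necho v1\n```\n"
    "Actually, pivot. New deploy.sh:\n```sh\necho v2\n```\n"
)
for b in flag_stale(transcript):
    print(b["key"], "STALE" if b["stale"] else "current")
```

A renderer could then apply strikethrough to any block marked stale, which is exactly the UI affordance the post is asking for.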