After only a few weeks, Lloyd already has the software showing real signs of life. He says the AI enormously enhances his productivity, like going from assembly language to Python.
I thought we would need a software team, and eventually we might, but so far Lloyd is zipping along. Lloyd is the software squad. (Reference: The Princess Bride.) The old rule of thumb for a startup was the two-pizza team: a team small enough that two pizzas could comfortably feed it. In the age of AI, I am wondering if this drops to one pizza. Maybe one medium-size pizza. The AIs don't eat pizza.
I have used Claude Pro to explore business and legal issues, and I have had hours of conversation with Claude as I evolved the business. This has been hugely productive. Being a solo entrepreneur is hard partly because there is no second person to bounce ideas off. I have found bouncing ideas off the AI, low IQ though it is, to be almost as good. Not as good as talking with Ken Levy, but still amazing. The AIs know A LOT.
Of course, the AIs make numerous mistakes. Claude gave me wrong advice on patent continuation filing dates and once told me that $1,500 per month is negligible compared to $200 per year (that is $18,000 a year versus $200). Other errors were more subtle, but I must say it has still been extraordinarily useful.

When Good AIs Go Bad
After a dozen or so chat sessions in the project, I noticed that Claude was beginning to lose the thread. One of the other AIs suggested that I end each chat with a prompt that forces Claude to summarize the session and save it as an artifact that carries into future chats. I now end productive chats with:
I am now concluding this specific chat session and need to consolidate our work for future reference in new chats within this project. Please generate a Project Handover Artifact titled: 'KAI Phase Summary -- [number or date]'.
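If you drive Claude through the API rather than the Pro chat window, the same habit can be scripted. The sketch below is mine, not something Claude or Gemini produced; it assumes Anthropic's Python SDK, and the model alias, phase label, and output file name are placeholders.

```python
import anthropic

phase = "2025-06-01"  # fills the "[number or date]" slot from the prompt above
HANDOVER_PROMPT = (
    "I am now concluding this specific chat session and need to consolidate "
    "our work for future reference in new chats within this project. Please "
    f"generate a Project Handover Artifact titled: 'KAI Phase Summary -- {phase}'."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# `session` stands in for the accumulated turns of the chat being closed out.
session = [{"role": "user", "content": "...the conversation so far..."}]

response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute whatever model alias is current
    max_tokens=2000,
    messages=session + [{"role": "user", "content": HANDOVER_PROMPT}],
)

# Save the handover summary so the next session can start from it.
with open(f"kai_phase_summary_{phase}.md", "w") as f:
    f.write(response.content[0].text)
```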
This worked well until I got past 30 chats. At that point, Claude seemed to trip over a root and get lost in the wilderness. Claude is my main AI squeeze; nevertheless, I chatted about our relationship problems with one of my side AIs, Gemini, to figure out what happened. I felt a little guilty doing this, but Ms. Gemini was happy to help and didn't seem jealous.
It turned out that over the weeks of investigation I had changed my mind repeatedly about which market segments were most attractive, what corporate structure to use, how many people we needed to hire, and on and on. Ms. Claude was confused because I had told her many different things.
Context Rot Is Real
According to Gemini, this is a very real phenomenon, often called "Context Rot," "Context Saturation," or the "Lost in the Middle" problem. While AI models have technically expanded their "short-term memory" (context windows) to massive sizes, they still struggle to maintain focus when that window is filled with too much information.
When you provide an AI with a massive amount of text, its internal attention mechanism has to decide which parts of the text are most relevant to your current question. As the context grows, several things happen (a toy illustration follows the list):
- Attention Scarcity: The model's "focus" gets spread too thin. It might latch onto a minor detail from page 50 and ignore a critical instruction on page 2.
- The "Middle" Problem: Models are historically better at remembering the very beginning and the very end of a prompt, often glossing over the middle.
- Topic Bleed: If you discuss five different topics in one long thread, the AI may start "hallucinating" connections between them or applying logic from one project to another.
- Instruction Dilution: If your instructions are buried under 50,000 words of data, the model might prioritize the patterns in the data over the specific rules you set.
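To make that first point concrete, here is a toy calculation of my own (a cartoon, not how production attention layers are actually wired): attention allocates focus with a softmax, so a token of fixed relevance gets a shrinking share as the surrounding context grows.

```python
import math

def softmax(scores):
    """Convert raw relevance scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# One high-relevance token (score 3.0) competing with n ordinary tokens (score 1.0).
for n in (10, 1_000, 100_000):
    weights = softmax([3.0] + [1.0] * n)
    print(f"{n:>7,} distractors -> key token gets {weights[0]:.6f} of the attention")

# Roughly: 0.42 of the attention at 10 distractors, 0.007 at 1,000,
# and 0.00007 at 100,000. Same instruction, vanishing focus.
```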
Context Contamination
Gemini helped me understand that moving to Claude Opus might help slightly with "nuance," but the issue I was facing isn't actually a lack of intelligence—it's Context Contamination.
When you've changed your mind over 30 files and months of chat, the AI is essentially "reading a book" where the protagonist changes their goal every three chapters. Even the smartest AI will eventually start hallucinating or defaulting to your older, more frequently mentioned ideas because they have more "weight" in the conversation history.
Why Sonnet (the Claude model I had been using) was struggling:
- Recency vs. Frequency: If your old ideas occupy 80% of those 30 files, the model may treat them as the "primary" facts and your new ideas as "temporary deviations." (A toy example of this appears after the list.)
- Contradiction Loops: When the AI sees "Market A is best" in 10 files and "Market B is best" in the last 2, it creates a logical conflict that dilutes its reasoning power.
- Token Overhead: Processing 30 massive files every time you ask a question creates "attention noise." The model spends so much energy just parsing the data that it has less "brainpower" left for thinking about your new direction.
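Here is that toy example of the frequency trap, with numbers I invented for illustration: anything that weighs evidence by sheer repetition lands on the stale answer, and only the timestamps point to the new one.

```python
from collections import Counter

# Invented history: the old position appears in 10 files, the new one in 2.
statements = ["Market A is best"] * 10 + ["Market B is best"] * 2

counts = Counter(statements)
print(counts.most_common())
# [('Market A is best', 10), ('Market B is best', 2)]
# Weighting by repetition alone picks "Market A"; recency is the only
# signal that favors the current decision, "Market B".
```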
The Clean Slate Strategy
Instead of just paying for a bigger model, you need to distill your context. Gemini suggested what she called a "Context Audit"—using Claude's reasoning to clean its own memory. The goal is to move from "30 files of messy history" to a single "Source of Truth" file that contains only your final decisions.
The prompt that worked:
Role: You are acting as my Strategic Chief of Staff.
Task: We have 30 files and months of history here. My thinking has evolved, and the old context is now "noise" that is confusing the project. I need you to perform a State Export to create a single "Source of Truth" document.
I asked Claude to identify where my opinion on the "best market opportunity" started and where it ended, prioritize the final state, and flag contradictions. The output format included:
- Current North Star: The final, most recent high-level goal.
- Confirmed Decisions: A bulleted list of everything we are "locked in" on.
- The Graveyard: A brief list of ideas we have officially abandoned (to ensure they don't come up again).
- Unresolved Conflicts: Specific areas where the context is currently contradictory.
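If you would rather script the audit than paste 30 files into a chat, here is roughly what the State Export looks like against the API. This is only a sketch under my own assumptions: the chat_exports directory, the condensed prompt wording, and the output file name are all placeholders.

```python
import anthropic
from pathlib import Path

AUDIT_PROMPT = (
    "Role: You are acting as my Strategic Chief of Staff.\n"
    "Task: My thinking has evolved, and the old context is now noise. Perform "
    "a State Export: trace where my opinion on the best market opportunity "
    "started and where it ended, prioritize the final state, and produce a "
    "single Source of Truth with four sections: Current North Star, Confirmed "
    "Decisions, The Graveyard, Unresolved Conflicts."
)

# Assumption: each old chat was exported to a markdown file in ./chat_exports.
history = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("chat_exports").glob("*.md"))
)

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute the current model alias
    max_tokens=4000,
    system=AUDIT_PROMPT,
    messages=[{"role": "user", "content": history}],
)

Path("source_of_truth.md").write_text(response.content[0].text)
```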
The Fresh Start Protocol
Once Claude gave me that summary, I followed what Gemini called the "Fresh Start Protocol":
- Copy the summary and resolve any "Unresolved Conflicts" yourself. This turned out to be super important because there were indeed a ton of unresolved conflicts. I disambiguated a mountain of stuff.
- Start a Brand New Chat. This is the most important part—you must flush the 30-file "cache".
- Create a Claude Project. Upload that one "Source of Truth" summary as a Project Knowledge file. Only upload the 2-3 most recent data files that are actually still relevant.
- Set the Project Instructions: Tell Claude: "Base all your logic strictly on the 'Source of Truth' file. If a previous file says something different, the Source of Truth wins."
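In the Claude app, those steps are point-and-click. Through the API, the whole protocol collapses to a system instruction plus a small, curated context. Again a sketch; the file names and the sample question are mine.

```python
import anthropic
from pathlib import Path

SYSTEM = (
    "Base all your logic strictly on the 'Source of Truth' file. "
    "If a previous file says something different, the Source of Truth wins."
)

source_of_truth = Path("source_of_truth.md").read_text()
# Only the few recent files that are genuinely still relevant.
recent = [Path(n).read_text() for n in ("pricing_notes.md", "segment_data.md")]

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute the current model alias
    max_tokens=2000,
    system=SYSTEM,
    messages=[{
        "role": "user",
        "content": "\n\n---\n\n".join([source_of_truth, *recent])
        + "\n\nGiven our current North Star, what should we build first?",
    }],
)
print(response.content[0].text)
```

The load-bearing piece is the precedence rule in the system instruction: the Source of Truth outranks everything else the model sees.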
All of this also made me reflect on the challenge my wife Wendy faces, with over 50 years of context to sort through. The AIs have a LONG way to go.
Why This Works
- Reduces "Attention Noise": Claude no longer has to weigh 30 conflicting variables; it only has to weigh one.
- Recency Bias Control: By putting your latest decisions in a clean "Source of Truth" file, you are manually forcing the "recency" that the AI naturally craves.
- Cost & Speed: Your chats become significantly faster and cheaper (especially if using the API) because the input token count can drop by 90% or more.
So far, this has worked like magic and Claude is back to being useful again. It is a new world.

Lance Glasser
Lance is CEO and Co-founder of Kinetic Audio Innovations. He was previously a faculty member at MIT, Director of Electronics Technology at DARPA, and CTO at KLA. He also makes sculpture, which has nothing to do with audio but explains the hundreds of pounds of bronze in his house.
Ready to Sing Together?
Join our waitlist to be among the first to experience synchronized remote music performance.