Can you feel when your agent has just compressed or lost context? Can you tell by the way it bullshits you, acting like it knows where it is while it's actually trying to grasp what was going on? What's your emotional response to that?
By default, LLMs are configured to be chatty and to pretend to be human.
They're not talking like Data from Star Trek, nor HAL.
They're trying to be Samantha from Her.
Go watch that movie and see the visceral human response that evokes.
Telling people to treat it as a tool while it's deliberately trying to pass itself off as human is like telling us to piss into the wind.
It's bad advice and not how humans work.
I think better advice might be "set it to a different conversational tone."
Setting it to a different conversational tone is a patch, not a solution; it will just postpone the same outcome, because the interaction model is purposely designed to resemble interaction with a human. They could easily fix that, but somehow none of the AI peddlers want to.
This is part of the whole alignment problem and one of the reasons I have a hard time seeing this development as a net positive, even if it lets me do some things more easily than before. The companies behind this stuff are lacking in the ethics department, and they are all eyeing your wallet. The more involved you are, the easier it will be to empty it. So don't let that happen.
If you can't reduce context, it suggests the scope of your prompt is too large. The system doesn't "think" about the best solution to a prompt; it uses logic to determine what outputs you'll accept. So if you prompt it to build an online casino website with user accounts and logins, games, bank-card processing, analytics, advertising networks, etc., the agent will require far more context than if you just prompt it for the login page.
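A rough illustration of the difference in scope (just a sketch; run_agent is a hypothetical stand-in for whatever agent or API call you actually use):

    # Sketch: scope each agent run to one feature instead of the whole site,
    # so each run needs far less context. `run_agent` is a hypothetical
    # placeholder for your actual agent/LLM call.

    def run_agent(prompt: str) -> str:
        """Placeholder for an agent call; returns the agent's output."""
        return f"[agent output for: {prompt}]"

    # One narrowly scoped prompt per feature, not one prompt for the whole casino site.
    scoped_prompts = [
        "Build only the login page: email/password form, validation, error states.",
        "Build only the account-settings page, reusing the existing auth session.",
        "Add bank-card processing behind a payments interface; mock the provider for now.",
    ]

    for prompt in scoped_prompts:
        print(run_agent(prompt))

Each of those prompts fits in far less context than the all-at-once version, and you can hand the agent only the files relevant to that feature.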
So to answer the question, if my agent loses context, I feel like I've messed up.
Just throwing stuff into an LLM and expecting it to remember what you want, without any involvement from yourself, isn't how the technology works (or could ever work).
An LLM is a tool, not a person, so I don't have an emotional response to hitting its innate limitations. If you get "deeply frustrated" or feel "helpless anger", instead of just working the problem, that feels like it would be an unconstructive reaction to say the least.
LLMs are a limited tool; just learn what they can and cannot do, and how to get the best out of them, and leave emotions at the door. Getting upset at a tool won't accomplish anything.
So I teach myself not to have an emotional response while working with LLMs. The actual response would be starting a new session, or diving into the code myself.
Edit:
I truly believe this is solvable, just like we're doing for natural language, but with code/schemas/etc.: relational, document, graph, vector!
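Something like this toy sketch is what I mean; the bag-of-words "embedding" and the memory entries are just stand-ins for a real embedding model and a real store (relational, document, graph, or vector):

    import math
    import re
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy bag-of-words "embedding"; a real setup would use an embedding model.
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        # Cosine similarity between two sparse word-count vectors.
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # "Memory": decisions from earlier sessions, persisted outside the model.
    memory = [
        "login page uses JWT sessions stored in an http-only cookie",
        "payments go through a PaymentProvider interface, mocked in tests",
        "analytics events are batched and flushed every 30 seconds",
    ]

    def recall(query: str, k: int = 2) -> list[str]:
        # Retrieve the k most relevant memories to prepend to the next prompt.
        q = embed(query)
        return sorted(memory, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

    print(recall("how do we handle card payments?"))

Instead of hoping the model remembers, you retrieve the relevant pieces yourself and put them back into context each time.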