With a large enough training database, this can produce results surprisingly similar to the real thing in a lot of simple cases.
The real problem is that the only foolproof way to detect when they don't is to painstakingly duplicate and verify every result an LLM produces.
But this verification negates much of the advantage LLMs are supposed to offer. So the natural human response is to simply skip the due diligence. And this is a liability issue waiting to happen.
The cost of AI liability has yet to be priced into the market. I expect that some companies and AI service providers will start to restrict or even prohibit the use of AI where the cost of liability outweighs any real benefit.
Liability disclaimers have no legal force in a lot of professional services. Selling fake "intelligence" to doctors and lawyers is a risky proposition.
The future of agents is dictated by the future state of LLMs, not the present state.