So in a sense, we are more forgiving of ourselves than anything.
In fact, in my domain that's almost always the case. LLMs rarely get it right. Getting something done that would take me a day takes a day with an LLM, only now I don't fully understand what was written, so there's no real value added, just loss.
It sure can be nice for solved problems and boilerplate, though.
[citation needed]
This entire article is just the meaningless vibes of one guy who sells AI stuff.
Bruh either had help, or he's the most trite writer ever.
At least personally, this was obvious to me years before AI was around. Whenever we had clear data that pointed to an obvious conclusion, I found it didn't matter whether _I_ stated the conclusion, regardless of whether the data was included. I got a lot more leeway by simply presenting the data that supported my conclusion and letting my boss come to it himself.
In the first situation the conclusion was now _my_ opinion, and everyone's feelings got involved. In the second, the magic conch (usually a spreadsheet) stated the conclusion, so no feelings were triggered.