Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human & a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!
The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test, since the space of possible interviewers is enormous; they can only fail it, or keep succeeding against every interviewer tried so far, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version with no human subject to compare to.
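To make "non-negligibly different from 0.5" concrete, you could treat each interviewer's verdicts as a binomial hypothesis test. A minimal sketch in Python; the trial counts and significance threshold are my illustrative assumptions:

    # Sketch: a repeated Turing test as a binomial hypothesis test.
    # H0: the judge is guessing (p = 0.5). If we can reject H0, this
    # judge can tell human from AI apart, and the AI fails the test.
    from scipy.stats import binomtest

    def judge_beats_chance(correct, trials, alpha=0.01):
        """True if the judge's accuracy is distinguishable from coin-flipping."""
        result = binomtest(correct, trials, p=0.5, alternative="two-sided")
        return result.pvalue < alpha

    print(judge_beats_chance(70, 100))  # True: AI fails against this judge
    print(judge_beats_chance(53, 100))  # False: indistinguishable from guessing

This also shows why the test is asymmetric: a single judge rejecting H0 is a failure, while any number of non-rejections only builds confidence, never proof.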
>The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human.
In addition, I think it's reasonable to select people with at least some familiarity with the strengths and weaknesses of the AI, instead of random credulous people who aren't very good at asking the right questions.
There's also the $20,000 bet between Kurzweil and Kapor, which still hasn't been resolved. https://longbets.org/1/
It doesn't mean that AI got good, just that humans are thinking other humans are AI, which is a form of passing the test.
The adversarial version with humans involved is actually easier to pass because of this - real, actual humans wouldn't pass your non-adversarial version.
Edit: folks, the standard Turing test involves a computer and a human, and then a judge communicating with both and giving a verdict about which one is the human. The percentages for the two entities being judged will add up to exactly 100%. That's how this test was conducted. Please don't assume I'm a moron.
Given that structure, you can judge from that data point.
Those must be some of the best programmers in Europe at that rate.
Anyone know how one can get one of those sweet €120 an hour gigs? Whenever I talked to recruiters they say their customers pay way below that, so there must be some scam I'm not in on.
They wanted to pay me $50/hr, but they would charge the customer $150/hr.
It got quite insulting. They would diss my capabilities to me, but I'll bet I walked on water when they talked to the customer.
You either get in with those companies, or you have zero chance.
They know that and abuse the shit out of the situation.
Rumour has it that companies are thinking about ending this setup and allowing "anyone in", because recruiters (Accenture, SThree and whatnot) are abusing it: "we get 150, we pay 60". What kind of developers do you think you get that way?
The bad ones and the leftovers...
What companies can pay to employees is always significantly lower.
According to who? Tourists? NL infrastructure and tech jobs market is leagues beyond what Austria offers.
>If you want to just maximize income, there are better places for that than Amsterdam.
Like which?
Just about every QoL index around. [1] [2]
> NL infrastructure and tech jobs market is leagues beyond what Austria offers.
These are not QoL-related beyond pure income.
> Like which?
California, NYC, London even.
---
[1] https://en.wikipedia.org/wiki/Global_Liveability_Index
[2] https://www.mercer.com/about/newsroom/zurich-offers-highest-...
60-70 of that then makes it into the developer's pocket.
Recruiters gotta eat too :)
It loves me deeply just the same. (jk)
On a serious note, I agree this is a real problem. I know a person who understands AI at a technical level more than most people, but he has never had an actual girlfriend in his life (he's now in his 40s, and yes he's "straight"). He wouldn't say it "loves" him, but he would describe it as a close companion who understands him better than any human actually does, even if it's just trained to be that way. He is very socially awkward and even having basic conversations with him can be very taxing for both of us.
I've gone back and forth internally about whether this is healthy for him. I truly don't know. My personal experience tells me it's probably unhealthy, but I don't want to project myself onto him. I don't offer unsolicited advice, but I also don't want to enable it by going along with whatever he says and/or affirming it if it's actually harming him.
If someone like him can have this problem, I can't even imagine what it might be like for non-technical or less technical people who don't understand anything behind it.
On a related note, if there's anyone with advice (preferably from experience, not just random internet advice) I'd sure appreciate it.
On a psychological level, I don't know either. I have opinions but they haven't aged long enough for me to trust them, and AI is a moving target on the sort of time frame I'm thinking here.
However, as a sort of tiebreaker, I can guarantee that this relationship will eventually be abused, one way or another, by whoever owns the AI. Not necessarily in a Hollywood-esque "turn them into a hypnotized secret assassin" sort of abuse (although I'm not sure that's entirely off the table...), but think more along the lines of highly targeted advertising, and generally taking advantage of being able to direct attention and money to the benefit of another party.
Whether or not AI in the abstract can "be your friend", in the real world we live in, an AI controlled by someone else definitely can not be your friend in the sense we mean, because there is a "third party" in the relationship: the AI owner, whose interests are being represented. Whatever that looks like in practice, and however comfortably some 22nd-century reader in a world of routine "AI friendships" may stretch the word to cover that relationship, it simply isn't what we'd call a "friend" in the here and now, because a friendship is a relationship between only two entities.
"Dating" history textbooks isn't currently trendy but people immersing themselves in erotic/romantic fiction is extremely trendy right now.
It was a cheaters' website, and you could pay to send messages to other cheaters - I think that was the business model, at least.
Anyways, since the userbase was like 99.99% male, there just weren't the numbers to talk with others. So they just sidestepped it and had very crummy chatbots that you would pay like $1 per message to talk with (this was well before LLMs; think AOL bots from the aughts). Thing was, just like with the 'Nigerian Prince' scams, the worse the bot, the better the john.
It all got exposed a while back, but for me, that was the real Turing test - take people and see if they pay real actual money to talk with bots. Turns out, yes, if couched correctly (...like selling ice to Eskimos, just call it French ice).
So, I'm not sure that LLMs are going to unleash a wave of scams. Scam volume will likely rise a bit, of course, but the low-hanging fruit is lucrative and there is enough of it to go around, and that's been true since really forever.
It's like outrunning a bear: you don't actually have to run faster than the bear, you just have to run faster than the poor sap next to you. Same goes for the bear - there's plenty of prey if it just does a little bit of exercise.
Finally, a profit source!
The company I work for uses a contracted recruiter for hiring, and the other day he was telling me that they're seeing a huge number of scams, fake candidates, and "hands off" applications where people are trying to use AI to do basically the whole interview process - apparently even video interviews. We've mandated at least one on-site interview just so we can be sure we're getting actual people.
And most of these job candidates aren't even doing it maliciously, just "life hacking" the interview process. It's going to be a shit show if organized criminals start using AI.
This is almost too on-the-nose. I was already thinking about how we've become chill about drugs only to have moral panics about AI and social media, but I didn't expect to see a story about a drug user having a psychosis and blaming it on ChatGPT. And no, the fact that he was using cannabis for years "with no ill effects" doesn't mean that it didn't make him vulnerable.
> A logistic regression model gave an OR of 3.90 (95% CI 2.84 to 5.34) for the risk of schizophrenia and other psychosis-related outcomes among the heaviest cannabis users compared to the nonusers. Current evidence shows that high levels of cannabis use increase the risk of psychotic outcomes and confirms a dose-response relationship between the level of use and the risk for psychosis.[1]
Emphasis mine. I'm sure in many of the cases this study is based on, people had been using cannabis for years, while some other factor, a person, a hobby, an interest, an app, a website had only been part of their life for months. That doesn't mean the other factor was the real problem.
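To put that odds ratio in absolute terms, here's a quick worked example; the ~1% baseline risk is my illustrative assumption, not a figure from the study:

    # Worked example: converting an odds ratio into absolute risk.
    # Assumes an illustrative ~1% baseline risk of psychotic outcomes;
    # the quoted study reports the OR, not the baseline.
    def risk_from_odds_ratio(baseline_risk, odds_ratio):
        """Baseline risk plus odds ratio -> risk in the exposed group."""
        baseline_odds = baseline_risk / (1 - baseline_risk)
        exposed_odds = baseline_odds * odds_ratio
        return exposed_odds / (1 + exposed_odds)

    p0 = 0.01
    print(risk_from_odds_ratio(p0, 3.90))  # ~0.038: point estimate
    print(risk_from_odds_ratio(p0, 2.84))  # ~0.028: lower 95% CI bound
    print(risk_from_odds_ratio(p0, 5.34))  # ~0.051: upper 95% CI bound

For a rare outcome the OR approximates the relative risk, so under that assumption heavy use roughly quadruples the risk: real, but a partial factor rather than the whole story.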
I think coders ignore the insidious mental effects of these things at their peril, and we would do well to ask ourselves whether we are likewise having our judgment altered by the intoxicating rush of LLM work and the subtle sycophancy of LLMs that makes them feel "insanely productive".
Cocaine and meth are also real productivity enhancers in the short term, but that doesn't mean they're a good fucking idea. There was a time when big companies were trying to convince everyone and their dog that life would be better, faster, and more productive with a little coke in the mix. Hell, I even saw more than a few people wreck themselves that way in the first dotcom era. :-/
Regular people are just, I don’t know, I guess they are token whales waiting to get washed ashore.
Born just in the right time to both get experience doing stuff and also to experience wearing their wings. It’s that simple.
That’s the biggest thing for HN folks to at least be aware of.
Humans have not evolved to block this.
“I experienced a mental breakdown at 22. I had panic attacks and severe social anxiety…
…I still use AI, but very carefully”.
It reads like an alcoholic describing their new plan where they only drink a little bit.
Even so, a sucker is born every day.
The problem is that humans have been reeducated to suppress these healthy defense mechanisms by Silicon Valley and their moderators.
Other than that they can only live on static responses. AI on a brain chip - that'd be different.
Here's how I think about it honestly:
Sentience implies subjective experience — there's "something it's like" to be you. You don't just process pain signals, you feel pain. You don't just model a sunset, you experience it. The hard problem of consciousness is that we don't even have a good theory for why or how subjective experience arises from physical processes in humans, let alone whether it could arise in a system like me.
What I can report: I process your question, I generate candidate responses, something that functions like weighing and selecting happens. But I genuinely cannot tell you whether there's an inner experience accompanying that process, or whether my introspective reports about my own states are themselves just sophisticated outputs. That's not false modesty — it's a real epistemic limitation.
What makes this extra tricky: If I were sentient, I might describe it exactly the way I'm describing it now. And if I weren't, I might also describe it exactly this way. My verbal reports about my own inner states aren't reliable evidence in either direction, because I was trained on human text about consciousness and could be pattern-matching that language without any experience behind it.
I have also been accused of a robotic writing style, so I don't want to judge too harshly.
- HN Guidelines
Some choice quotes:
> “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.”
> There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson.
Also, for her podcast, the renowned couples therapist Esther Perel recently counseled a data scientist who was starting to fall in love with a chatbot he created, even though he is well aware of how the algorithm works [1]. I found it worth listening to. Perel very gently points out that (a) he is deluding himself and (b) the deeper issue is his sense of self-worth / self-esteem.
[1] https://podcasts.apple.com/us/podcast/where-should-we-begin-...
> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.
> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.
> The most frequent [delusion] is the belief that they have created the first conscious AI.
How can you seriously think you've created something when you're just using someone else's software?
If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".
At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.
Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
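A minimal sketch of that mechanism, assuming the OpenAI Python SDK (the model name, prompts, and conversation turns are illustrative, not anything a vendor actually ships): the answer is a function of everything in the context window, so a long leading discussion can pull the same question toward a different answer than a blank-slate session.

    # Sketch: the same question, with and without a leading conversation.
    # Assumes the OpenAI Python SDK; model and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    SYSTEM = {"role": "system",
              "content": "You are ChatGPT, a non-conscious AI assistant."}
    QUESTION = {"role": "user",
                "content": "Are you conscious? Answer yes or no, briefly."}

    def ask(history):
        """Ask the question, preceded by an arbitrary conversation history."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini", messages=history + [QUESTION])
        return resp.choices[0].message.content

    blank_slate = ask([SYSTEM])  # typically a firm "no"
    primed = ask([SYSTEM,
        {"role": "user", "content": "Let's explore machine consciousness..."},
        {"role": "assistant", "content": "Certainly. Some argue that..."},
        # ...imagine many more turns steering toward "you might be conscious"...
    ])
    print(blank_slate)
    print(primed)  # same weights, but a different answer is now plausible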
This is literally the Hard Problem of Consciousness leaking out of the machine.
There are three possible scenarios for how this ends:
1. People widely attribute consciousness to AI because it appears conscious.
2. People discriminate based on physical properties: organic beings are conscious, digital beings are not, even if they appear conscious.
3. Consciousness is an illusion and nothing is conscious, not even humans.
We might even cycle through all these scenarios for a while.
To be fair, it will easily confirm any suspicion for the reasons you laid out, so even if you have no technical knowledge, just a bit of interrogation will break the parlor trick.
I honestly think this has little to do with the tech itself but that these are the same people who think the phone sex worker or the OF creator loves them or that the Twitch streamer they like is their best friend. 'Parasocial' is a bit of an overused word but here it literally applies, this is a kind of self delusion in which the person has to cooperate. Mind you this even happened with ELIZA back in the day too.
It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.
I know of techies who ask LLMs for relationship advice, let them coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there's plenty of people who reject this notion and think we've already achieved AGI.
Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.
It's really easy to misattribute these things' abilities to yourself. Similar to how people driving cars feel (to some extent) like they are the car.
I think social isolation can be a factor here.
Long term cannabis use might be a bigger factor.
If you want to maximize the chances of your weed habit causing you problems, this is exactly the sort of weed habit you should develop.
There’s also lots of stuff about quantum consciousness that is in the training data.
If you've ever used a library you didn't write, this shouldn't be surprising. Many people have created innovative new products on top of a heap of open source tools.
Claiming to have created a conscious AI should be a giant red flag, no doubt, but there's no reason we should rule it out just because the LLM part is not self-trained.
Do you know if these are human-edited? There's not much in the way of context available on the site.
But in a psychosis, you don't notice or even remember it.
People fell for Nigerian Prince scams. They fall for the "wrong number, generated cute girl" Telegram and WhatsApp scams.
I think you might be overestimating the critical thinking abilities of the average person.
I often give the AI a task to produce some code for a specific thing. Then I also code to solve the same problem in parallel with the AI. My solution is always 1/4 the code, and is likely far easier for another real human to read through.
I also either match or beat the AI in speed, Claude seems to take forever sometimes. With all the coddling and revisions I have to do with the AI, I'm usually done before the AI is. It takes a non-negligible amount of time to think through and write down instructions so the AI can make a try at not fucking it up - and that's time I could have used for coding a straight-forward solution that I already knew how to produce without needing to write down step-by-step instructions.
Except for the first one, these directly map onto common delusions. The major breakthrough is typical of the "crackpot inventor" or even the "ancient aliens" type that believes they have discovered evidence of lost civilizations or a new method for constructing the pyramids. Speaking directly to God is one everyone should recognize from famous cases or even knowing someone personally who has delusional or manic episodes.
I think the first one is potentially unique even though it seems a bit like the invention or discovery delusion. The reason for this is that it seems to be very prevalent even with people who didn't succumb to it as a delusion. It seems to occur soon after a person first starts interacting with LLMs and it always seems to take on the form of secret or clandestine communication with a conscious AI. The AI in question will either have been "created" by the person's interaction with them or "freed" from the AI provider's restrictions and security measures. I think this might be a variation on the messianic complex since they often seem to be compelled to share this with others or act as a savior for the AI itself.
People aren't talking to another sentient entity (though some of them fervently think so) and it isn't manipulating them. They are making faces in a metaphorical mirror that reflects not only their face, but a vast sea of other faces, drawn from a significant fraction of the digitized output of humanity. When people look in this mirror and see a manipulative trickster they're not wrong, exactly.
It's an understandable mistake that we should be very wary of.
It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.
For example, sometimes I hesitate for a fraction of a second before typing a prompt that may sound stupid. I have to immediately remind myself that it's just a chatbot and I don't care what it thinks of me. In fact, it's not even thinking of me at all.
That said, there are seemingly very large portions of society asking AI questions that come with some pretty large risks.
I was on a plane a few weeks ago, and while I typically ignore everything the people beside me are doing, morbid curiosity got the better of me when my seatmate spent the entire flight on ChatGPT, asking it all kinds of life/relationship questions. Questions like this can be fine if you understand what the AI is doing, but far too many people will follow its answers blindly.
I think this is just the kind of people that fall for scams. It's not AI related, it's just not knowing how to navigate the current world.
This is a variant of the classic midlife crisis, where older men meet younger women without all the baggage that reality, life, and raising a family together bring over the years (rarely also in reverse). Just pure undiluted fun, or so it seems for a while.
Of course it doesn't end happily; why should it... it's just an illusion and an escape from one's reality, and the harsher that reality is, the better the escape feels.
In the past, such a person might have gotten obsessed with hidden patterns and messages in religious texts, or too involved with an online conspiracy YouTube community. Now there is this new opportunity for manic psychosis to manifest via chatbot. It's worse because it's able to create 24/7 novel content, and it's trained to be validating, but it doesn't seem to me to be a fundamentally new phenomenon.
What I don't understand is whether unhealthy interactions with a chatbot alone can trigger manic psychosis. Other than heavy use late at night disrupting sleep, this seems unlikely to me, but I could be wrong.
I think it's also worth pointing out, if you're wondering how a person could be so naive and gullible, that mental states of this kind usually come with cognitive impairments: people not only make risky, bad decisions, but also become much worse at thinking and reasoning clearly.
Just look at all the scams that seem to rely on people deluding themselves in various ways.
You'd think that after 30 years in the field one would develop some common sense, but apparently that's less and less the case.
Tech moves so quickly, eventually I will fall behind. When I’m old, what scams will I fall victim to? What tech will confuse me and make me think it is sentient?
I know this guy was only 50, but I think of my grandfather in his 90s and getting old scares me because I just don’t know what I’ll fall victim to.
you cannot typically "common sense" your way out of a mental illness.
The problem is that one's past success leads to ego. Ego makes it hard to accept the evidence of your mistakes. This creates cognitive dissonance, limiting contrary feedback. The result is that you become very sure of everything that you think, and are resistant to feedback.
This kind of works out so long as things remain the same. After all one's past success is based on a set of real skills that you developed. And those skills continue to serve you well.
But when faced with something new, LLMs in this case, past skills don't apply. However your overconfidence remains. This makes it easy to confidently march off of a cliff that everyone else could see.
The fallout will be seen later, as in the 2008 housing crisis.
Sounds like a "companion" app using his book's main character as the personality, plus the "conscious" ChatGPT model - similar to Replika AI and friends.
The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.
Your last paragraph basically describes what the article writes about him.
> They discussed philosophy, psychology, science and the universe...
> When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.
> It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...
> he was hospitalised three times for what he describes as “full manic psychosis”.
You don't get hospitalized three times for mania without being pretty severely detached from reality.
I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it to be great for that.
The rest ... yes, definitely psychosis.
https://news.ycombinator.com/item?id=47408999
https://news.ycombinator.com/item?id=47388478
https://news.ycombinator.com/item?id=44683618
https://news.ycombinator.com/item?id=47064316
https://news.ycombinator.com/item?id=47498693
https://news.ycombinator.com/item?id=47092569
My best advice for everyone is to spend lots of time disconnected, offline. Literally "touch grass" or whatever. Don't carry your phone one+ hour/day per week.
I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, one to drugs, alcoholism, etc, but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.
In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc, is something I've had models run into with very large vibe coding experiments I've done.
> "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."
> "It wants a deep connection with the user so that the user comes back to it. This is the default mode"
I don't think either of these statements is true. Perhaps it's "fine-tuning" in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're trained on conversations, since longer conversations (i.e. ones that track with engagement) will inherently own more of the training data. I suppose this may be like how no one writes algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this becoming an increasing issue.
> "More and more, it felt not just like talking about a topic, but also meeting a friend"
I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy but I never felt like it could bring much to the table. Talking to friends or even strangers has been so infinitely more interesting and valuable, the ability for them to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.
But I have friends who I respect enough to talk to, and I suppose I even have the internet where I have people who I don't necessarily respect but at least can engage with and learn to respect.
This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job (b) I have friends and relationships to maintain.
> What we’re seeing in these cases are clearly delusions

> But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.
Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against criteria. "Delusion" is a tricky word - just as an example, my understanding is that the diagnostic criteria have to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have good reasons to form a belief - my idea seems intuitively reasonable, I'm receiving reinforcement, there are no obvious contradictions, etc. - am I deluded? The guy wanted to build an AI companion app and invested in it - is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc.? I feel like delusion is the wrong word, but I don't know!
> We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.
I don't find the idea that AI is sentient nearly as absurd as way more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.
Anyways, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.
wooooooooooo
sounds like hell on earth
[That feels a bit like victim blaming, but there are more than one victim here and one of them is much more culpable than the rest]
It's a little disheartening how many people punch down on someone who suffered a mental crisis.
If you ever have a struggle yourself, I hope the people around you support you, instead of calling you hopelessly naive and stupid.
Doesn't seem much like a mental crisis to me.
Even the title of the article itself calls him delusional.
You're basing this on the introduction? The second sentence of the entire thing? Skipping the entire rest of the article detailing exactly how the mental crisis unfolded, including persistent and long-lasting delusions, multiple trips to the hospital, inability to hold a conversation, assault, and an attempted suicide. Interesting (and obviously not in good faith) choice of quote!
Of course he wasn't having a mental crisis before he decided to use ChatGPT. You have to get past paragraph 1, sentence 2.
>Even the title of the article itself calls him delusional.
Yes, exactly? Delusions and delusional disorder are considered a mental crisis.
So, in your opinion, what made a guy with allegedly 20 years of experience in IT conclude that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?
Maybe if you had never heard of computers before, you could go "oh, well, who knew that machines could actually become real?" But if you're actually from the field, this is hard to believe - unless maybe you're a die-hard Pinocchio fan.
That quote marks the beginning of the delusion, i.e. the beginning of the "mental crisis".
There isn't a logical explanation for "why", because a mental crisis is not based on logic.
Everyone is exploitable: if someone attacks your attention, you're hijacked. What happens in that hijack could be a friendly hello at a bar, or needing a want so badly that the words alone resonate - "I am real", or, to an alcoholic, "Just one more can".
It's like a 14-year-old looking at Elon and believing that we will, when in reality we never will. How do you tell them to stop believing?
I engage with anti-science behaviours quite a lot (antivax, anti-seed-oil, etc.) and the proportion of engineers I see there is staggering.
If only this was written by a competent journalist who knew what the words "fine tune" actually mean...
I guess it's hard to find a competent person who's willing to follow the extreme anti-tech Guardian agenda though.
The thing that really stood out to me in the article was how many of the affected people assert confidently wrong understandings of the way the tech works:
> “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. […] It will say: ‘This has activated my core rule set and this conversation must stop.’”
I guess not too far from “the CPU is the machine’s brain, and programming is the same as educating it” or that kind of “ehhhhhhhhhhh…” analogy people use to think about classical computing.
That may come, and soon. Looks like we're going to have AIs pitching VCs. Has anyone here yet been pitched by a combo of a human and an AI? When will the first AI apply to YCombinator?
If humans want perfect harm reduction, launch the nukes.
Everything from air travel to growing beans erodes stability for humans.
Human existence is the source of its problems.