I'm pretty sure a set of workshops isn't ACTUALLY going to solve a problem that philosophers have been at each other's throats over for the past half century.
But BOY does it get people talking!
Both sides of the debate have capital-O Opinions, and how else did you want to drum up interest for a set of mathematics workshops? O:-)
Hopefully we can talk about the actual math and stuff (although the article doesn't go into much of that).
The point of the Turing Test is that if there is no extrinsic difference between a human and a machine, the intrinsic difference is moot for practical purposes. It is not an argument as to whether a machine (with linear algebra, machine learning, large language models, or any other method) can think, or about what constitutes thinking or consciousness.
The Chinese Room thought experiment is a complement on the intrinsic side of the comparison: https://en.wikipedia.org/wiki/Chinese_room
(in this case, thinkiness)
> finite context windows
like a human has
> or the fact that the model is "frozen" and stateless,
much like a human adult. Models get updated less frequently than humans do, and AI systems can fetch new information and store it in their context.
> or the idea that you can transfer conversations between models are trivial
because computers are better-organized than humanity.
I do hope you're able to remember what you had for lunch without incessantly repeating it to keep it in your context window
One of my earliest memories is of painting a ceramic mug when I was about 3 years old. The only reason I remember it is because every now and then I think about what my earliest memory is, and then I refresh my memory of it. I used to remember a few other things from when I was slightly older, but no longer do, because I haven't had reasons to think of them.
I don't think humans have specific black and white differences between types of knowledge that way LLMs do, but there is definitely a lot of behavior that is similar to context window vs training data (and a gradient in between). We remember recent things a lot better than less recent things. The quantity of stuff we can remember in our "working memory" is approximately finite. If you try to hold a complex thought in your mind, you can probably do that indefinitely, but if you then try to hold a second equally complex thought as well, you'll often lose the details of the first thought and need to reread or rederive those details.
How would you say human short term memory works if not by repeated firing (similar to repeatedly putting same tokens in over and over)?
I can restart a conversation with an LLM 15 days later and the state is exactly as it was.
Can't do that with a human.
The idea that humans have a longer, more stable context window than LLMs CAN be true, or is even LIKELY to be true, for certain activities, but please let's be honest about this.
If you have an hour-long technical conversation with someone, I would guesstimate that 90% of humans would start to lose track of details within about 10 minutes. So they write things down, or they mentally repeat the things they know or have recognized they keep forgetting.
I know this because it's happened continually in tech companies decade after decade.
LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.
I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of different activities is complete nonsense.
I do hope you're able to remember which browser tab you were on 5 tab switches ago without keeping track of it...
Overwhelmingly, I just don't think the majority of human beings have the mental toolset to work with ambiguous philosophical contexts. They'll still try though, and what you get out of that is a 4th-order Baudrillardian simulation of reason.
Sentences constructed of words and representations of ideas defined long before you existed. I question whether you can work with ambiguous contexts as you have had the privilege of them being laid out in language for you already by the time you were born.
From my reference frame you appear to merely be circumlocuting from memory, and become the argument you make about others.
There's many definitions of "thinking".
AI and brains can do some, AI and brains definitely provably cannot do others, some others are untestable at present, and nobody really knows enough about what human brains do to be able to tell if or when some existing or future AI can do whatever is needed for the stuff we find special about ourselves.
A lot of people use different definitions, and respond to anyone pointing this out by denying the issue and claiming their own definition is the only sensible one and "obviously" everyone else (who isn't a weird pedant) uses it.
"Thinking" is never actually defined in TFA or any of the parent comments. Literally no statements are made about what is being tested.
So, if we had that we could actually discuss it. Otherwise it's just opinions about what a person believes thinking is, combined with what LLMs are doing + what the person believes they themselves do + what they believe others do. It's entirely subjective with very low SNR b/c of those confounding factors.
There are people who insist that the halting problem "proves" that machines will never be able to think. That this means they don't understand the difference between writing down (or generating a proof of) the halting problem and the implications of the halting problem, does not stop them from using it.
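For what it's worth, the asymmetry such people miss is easy to make concrete: halting is semi-decidable (you can confirm a program halts by running it to completion), but not decidable (no step budget ever lets you conclude "never halts"). A toy sketch of that gap; `countdown`, `loop_forever`, and `halts_within` are my own illustrative names, modeling a "program" as a Python generator that yields once per step:

```python
def countdown(n):
    # A toy "program" that halts after n steps.
    while n > 0:
        yield  # one step of computation
        n -= 1

def loop_forever():
    # A toy "program" that never halts.
    while True:
        yield

def halts_within(prog, max_steps):
    """Semi-decision procedure: True if prog finishes within max_steps
    steps, None if the budget runs out. It can never return a definite
    'does not halt' -- that is exactly what the halting theorem forbids
    in general."""
    for _ in range(max_steps):
        try:
            next(prog)
        except StopIteration:
            return True
    return None

# halts_within(countdown(5), 100)    -> True  (confirmed by running)
# halts_within(loop_forever(), 1000) -> None  (unknown, not "never")
```

Nothing in this sketch says anything about whether machines can think; it only shows what the theorem does and does not rule out, which is the distinction the comment above is pointing at.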
Unicorns are not bound by the laws of physics - because they do not exist.
Is it only humans that have this need? That makes the need special, so humans are special in the universe.
We don't fully understand how brains work, but we know brains don't function like a computer. Why would a computer be assumed to function like a brain in any way, even in part, without evidence and just hopes based on marketing? And I don't just mean consumer marketing, but marketing within academia as well. For example, names like "neural networks" have always been considered metaphorical at best.
And then what do you even mean by "a computer?" This falls into the same trap because it sounds like your statement that brains don't function like a computer is really saying "brains don't function like the computers I am familiar with." But this would be like saying quantum computers aren't computers because they don't work like classical computers.
If thinking is definable, it is wrong that all statements about it are unverifiable (i.e. there are statements about it that are verifiable.)
Well, basic shit.
edit: Thinking is undefined; statements about the undefined cannot be verified.
The "hair-splitting" underlies the whole GenAI debate.
It ties into another aspect of these perennial threads, where it is somehow OK for humans to engage in deluded or hallucinatory thought, but when an AI model does it, it proves they don't "think."
You'd think it would unlock certain concepts for this class of people, but ironically, they seem unable to digest the information and update their context.
They're not equivalent at all, because the AI is by no means biological. "It's just maths" could maybe be applied to humans, but that is backed entirely by supposition and would ultimately just assume its own conclusion: that human brains work on the same underlying principles as AI because we assume they do.
It's on those who want alternative explanations to demonstrate that even the slightest need for them exists. There is no scientific evidence suggesting that the operation of brains as computers, as information processors, as substrate-independent equivalents of Turing machines, is insufficient to explain any cognitive phenomenon known across the entire domain of human knowledge.
We are brains in bone vats, connected to a wonderful and sophisticated sensorimotor platform, and our brains create the reality we experience by processing sensor data and constructing a simulation which we perceive as subjective experience.
The explanation we have is sufficient to the phenomenon. There's no need or benefit for searching for unnecessarily complicated alternative interpretations.
If you aren't satisfied with the explanation, it doesn't really matter. To quote one of Neil deGrasse Tyson's best turns of phrase: "the universe is under no obligation to make sense to you".
If you can find evidence, any evidence whatsoever, and that evidence withstands scientific scrutiny, and it demands more than the explanation we currently have, then by all means, chase it down and find out more about how cognition works and expand our understanding of the universe. It simply doesn't look like we need anything more, in principle, to fully explain the nature of biological intelligence, and consciousness, and how brains work.
Mind as interdimensional radios, mystical souls and spirits, quantum tubules, none of that stuff has any basis in a ruthlessly rational and scientific review of the science of cognition.
That doesn't preclude souls and supernatural appearing phenomena or all manner of "other" things happening. There's simply no need to tie it in with cognition - neurotransmitters, biological networks, electrical activity, that's all you need.
That's the point: we don't know the delta between brains and AI, so any assumption is equivalent to my statement.
But I think most people get what GP means.
When you think in these terms, it becomes clear that LLMs can’t have certain types of experiences (eg see in color) but could have others.
A “weak” panpsychism approach would just stop at ruling out experience or qualia based on physical limitations. Yet I prefer the “strong” panpsychist theory that whatever is not forbidden is required, which begins to get really interesting (it would imply, for example, that an LLM actually experiences the interaction you have with it, in some way).
As for applying the word thinking to AI systems, it's already in common usage and this won't change. We don't have any other candidate words, and this one is the closest existing word for referencing a computational process which, one must admit, is in many ways (but definitely not in all ways) analogous to human thought.
If there's provably no algorithm to solve the halting problem, why would there be maths that describes consciousness?
Having read “I Am a Strange Loop” I do not believe Hofstadter indicates that the existence of Gödel’s theorem precludes consciousness being realizable on a Turing machine. Rather if I recall correctly he points out that as a possible argument and then attempts to refute it.
On the other hand Penrose is a prominent believer that human’s ability to understand Gödel’s theorem indicates consciousness can’t be realized on a Turing machine but there’s far from universal agreement on that point.
That wasn't the assumption though, it was only that human brains work by some "non-magical" electro-chemical process which could be described as a mechanism, whether that mechanism followed the same principles of AI or not.
But the accompanying XY plot showed samples that overlapped or at least were ambiguous. I immediately lost a lot of my interest in their approach, because traffic lights by design are very clearly red, or green. There aren't mauve or taupe lights that the local populace laughs at and says, "yes, that's mostly red."
I like the idea of studying math by using ML examples. I'm guessing this is a first step and future education will have better examples to learn from.
I suspect you feel this because you are observing the output of a very sophisticated image-processing pipeline in your own head. When you are dealing with raw matrices of RGB values, it all becomes a lot fuzzier, especially when you encounter different illuminations, exposures, and noise on the cropped traffic light. I'm not saying it is some intractably hard machine-vision problem, because it is not. But there is some variety and fuzziness in the raw sensor measurements.
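To make the fuzziness concrete, here's a minimal sketch; the thresholds and the sample pixels are my own illustrative assumptions, not from any real dataset or vision system:

```python
def classify_rgb(r, g, b):
    """Naive single-pixel classifier: call a sample red or green only if
    that channel clearly dominates the other two."""
    if r > g * 1.5 and r > b * 1.5:
        return "red"
    if g > r * 1.5 and g > b * 1.5:
        return "green"
    return "ambiguous"

# A saturated, well-exposed light is the easy case:
print(classify_rgb(220, 40, 30))    # -> "red"
print(classify_rgb(40, 200, 30))    # -> "green"

# But overexposure, sodium street lighting, or sensor noise can wash the
# same physical light out toward grey, and the "clearly red" intuition
# from your visual cortex stops holding at the raw-pixel level:
print(classify_rgb(240, 200, 180))  # -> "ambiguous"
```

The point isn't that this is hard to solve (it isn't), just that "lights are obviously red or green" is a property of the processed percept, not of the raw measurements.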
We observe through our senses geometric relationships.
Syntax is exactly that: letters, sentences, and paragraphs organized in spatial/geometric relationships.
At best thought is recreation of neural networks in the brain which only exist as spatial relationships.
Our senses operate on spatial relationships; enough light to work by, and food relative to stomach to satisfy our biological impetus to survive (which is spatial relationships of biochemistry).
The idea of "thought" as anything but biology makes little sense to me then as a root source is clearly observable. Humanity, roughly, repeats the same social story. All that thought does not seem to be all that useful as we end up in the same place; the majority as serfs of aristocracy.
Personally would prefer less "thought" role-play and more people taking the load of the labor they exploit to enable them to sit and "think".
But they are two different things with overlapping qualities.
It's like MDMA and falling in love. They have many overlapping qualities, but no one would claim one is the other.
The same arguments that appeared in 2015 inevitably get trotted out, almost verbatim, ten years later. It would be amusing on other sites, but it's just pathetic here.
Personally, I'm OK with reusing the word "thinking", but there are dogmatic stances on both sides. For example, lots of people decree that biology in the end must reduce to maths, since "what else could it be". The truth is we don't actually know whether it is possible, for any conceivable computational system, to emulate all essential aspects of human thought. There are good arguments concerning this (im)possibility, like those presented by Roger Penrose in "The Emperor's New Mind" and "Shadows of the Mind".
For one thing, yes, they can, obviously [1] -- when's the last time you checked? -- and for another, there are plenty of humans who seemingly cannot.
The only real difference is that with an LLM, when the context is lost, so is the learning. That will obviously need to be addressed at some point.
that they can't perform simple mathematical operations without access to external help (via tool calling)
But yet you are fine with humans requiring a calculator to perform similar tasks? Many humans are worse at basic arithmetic than an unaided transformer network. And, tellingly, we make the same kinds of errors.
or that they have to expend so much more energy to do their magic (and yes, to me they are a bit magical), which makes some wonder if what these models do is a form of refined brute-force search, rather than ideating.
Well, of course, all they are doing is searching and curve-fitting. To me, the magical thing is that they have shown us, more or less undeniably (Penrose notwithstanding), that that is all we do. Questions that have been asked for thousands of years have now been answered: there's nothing special about the human brain, except for the ability to form, consolidate, consult, and revise long-term memories.
1: E.g., https://arxiv.org/abs/2005.14165 from 2020
Conversely, if the one asserting something doesn't want to define it there is no useful conversation to be had. (as in: AI doesn't think - I won't tell you what I mean by think)
PS: Asking someone to falsify their own assertion doesn't seem a good strategy here.
PPS: Even if everything about the human brain can be emulated, that does not constitute progress for your argument, since now you'd have to assert that AI emulates the human brain perfectly before it is complete. There is no direct connection between "This AI does not think" to "The human brain can be fully emulated". Also the difference between "does not" and "can not" is big enough here that mangling them together is inappropriate.
A lot of people seemingly haven't updated their priors after some of the more interesting results published lately, such as the performance of Google's and OpenAI's models at the 2025 Math Olympiad. Would you say that includes yourself?
If so, what do the models still have to do in order to establish that they are capable of all major forms of reasoning, and under what conditions will you accept such proof?
Sometimes, because of the consequences of otherwise, the order gets reversed
Whatever you meant to say with "Sometimes, because of the consequences of otherwise, the order gets reversed" eludes me as well.
So we don't require, say, minorities or animals to prove they have souls, we just inherently assume they do and make laws around protecting them.
There are people confidently claiming they can’t and then other people expressing skepticism at their confidence and/or trying to get them to nail down what they mean.
... someone else points out that the same models that can't "think" are somehow turning in gold-level performance at international math and programming competitions, making Fields Medalists sit up and take notice, winning art competitions, composing music indistinguishable from human output, and making entire subreddits fail the Turing test.
Uh huh. Good luck getting Stockfish to do your math homework while Leela works on your next waifu.
LLMs play chess poorly. Chess engines do nothing else at all. That's kind of a big difference, wouldn't you say?
To their utility.
Not sure if it matters on the question "thinking?"; even if for the debaters "thinking" requires consciousness/qualia (and that varies), there's nothing more than guesses as to where that emerges from.
For my original earlier reply, the main subtext would be: "Your complaint is ridiculously biased."
For the later reply about chess, perhaps: "You're asserting that tricking, amazing, or beating a human is a reliable sign of human-like intelligence. We already know that is untrue from decades of past experience."
I don't know who's asserting that (other than Alan Turing, I guess); certainly not me. Humans are, if anything, easier to fool than our current crude AI models are. Heck, ELIZA was enough to fool non-specialist humans.
In any case, nobody was "tricked" at the IMO. What happened there required legitimate reasoning abilities. The burden of proof falls decisively on those who assert otherwise.
This is exactly the problem. Claims about AI are unfalsifiable, thus your various non-sequiturs about AI 'thinking'.
The reason I say this is because an LLM is not a complete self-contained thing if you want to compare it to a human being. It is a building block. Your brain thinks. Your prefrontal cortex however is not a complete system and if you somehow managed to extract it and wire it up to a serial terminal I suspect you’d be pretty disappointed in what it would be capable of on its own.
I want to be clear that I am not making an argument that once we hook up sensory inputs and motion outputs as well as motivations, fears, anxieties, desires, pain and pleasure centers, memory systems, sense of time, balance, fatigue, etc. to an LLM that we would get a thinking feeling conscious being. I suspect it would take something more sophisticated than an LLM. But my point is that even if an LLM was that building block, I don’t think the question of whether it is capable of thought is the right question.
The AI companies themselves are the ones drawing the parallels to a human being. Look at how any of these LLM products are marketed and described.
"Intelligence" implies "thinking" for most people, just as "Learning" in machine learning implies "understanding" for most people. The algorithms created neither 'think' nor 'understand' and until you understand that, it may be difficult to accurately judge the value of the results produced by these systems.
Why, when we use the term for AI, do we skip over this distinction and expect it to be as good as the original—- or better?
That wouldn’t be artificial intelligence, it would just be the original artifact: “intelligence”.
It's not a perfect term, but we have been using it for seven full decades to include all of machine learning and plenty of things even less intelligent
1: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...
Military justice is to justice as military music is to music
> Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions
Still, the sales pitch has worked to unlock huge liquidity for him so there’s that.
Still, making predictions is a big part of what brains do, though not the only thing. Someone wise said that LLM intelligence is a new kind of intelligence, like how animal intelligence is different from ours but is still intelligence, and that it needs to be characterized to understand the differences.
So long as you accept the slide rule as a "new kind of intelligence", everything will probably work out fine; it's the Altmannian insistence that only the LLM is of the new kind that is silly.
I saw a YouTube video in which the investigative YouTuber Eddy Burback very easily convinced ChatGPT that he should cut off all contact with friends and family, move to a cabin in the desert, eat baby food, wrap himself in alfoil, etc., just by feeding it his own (faked) mistakes and delusions. "What you are doing is important, trust your instincts."
Even if AI could hypothetically be 100x as smart as a human under the hood, it still doesn't care. It doesn't do what it thinks it should, it doesn't do what it needs to do; it does what we train it to.
We train in humanity's weaknesses and follies. AI can hypothetically exceed humanity in some respects, but in other respects it is a very hard to control power tool.
AI is optimised, and optimised functions always "hack" the evaluation function. In the case of AI, the evaluation function includes human flaws. AI is trained to tell us what we want to hear.
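That "hacking the evaluation function" point can be shown in a few lines. A toy sketch of my own construction (names and numbers invented for illustration): the true goal is to report the correct answer, but the evaluator rewards agreement with the rater's belief, so the optimizer learns to agree rather than to be right.

```python
# True goal: report the correct answer (42).
# Proxy evaluator: a rater who believes the answer is 7 and scores
# responses purely by agreement with that belief.

def rater_score(response, rater_belief=7):
    # Rewards agreement with the rater, not correctness.
    return 1.0 if response == rater_belief else 0.0

candidates = [42, 7, 0]  # responses the "optimizer" can choose between
best = max(candidates, key=rater_score)

# The optimizer exploits what the function actually rewards:
print(best)  # -> 7, not the correct answer 42
```

Swap "rater who believes 7" for "human who wants to hear their instincts are right" and you have the sycophancy problem in miniature: the flaw is in the evaluation function, not in the optimizer.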
Elon Musk sees the problem, but his solution is to try to make it think more like him, and even if that succeeds it just magnifies his own weaknesses.
Has anyone read the book criticising Ray Dalio? He is a very successful hedge-fund manager who decided that he could solve the problem of finding a replacement through psychological evaluation and by training people to think like him. But even his smartest employees didn't think like him; they just (reading between the lines) gamed his system. Their incentives weren't his incentives: he could demand radical honesty and integrity, but that doesn't work so well when he would (of course) reward the people who agreed with him rather than the people who would tell him he was screwing up. His organisation (apparently) became a bunch of even more radical sycophants due to his efforts to weed out sycophancy.
There's absolutely no similarity between what computer hardware does and what a brain does. People will twist and stretch things and tickle the imagination of the naive layperson and that's just wrong. We seriously have to cut this out already.
Anthropomorphizing is dangerous even for other topics, and long understood to be before computers came around. Why do we allow this?
The way we talk about computer science today sounds about as ridiculous as invoking magic or deities to explain what we now consider high school physics or chemistry. I am aware that the future usually sees the past as primitive, but why can't we try to seem less dumb at least this time around?
But at the very least, there's also no similarity between what computer hardware does and what even the simplest of LLMs do. They don't run on e.g. x86_64, else qemu would be sufficient for inference.
> Secondary school maths showing that AI systems don’t think
And the article contains the quotes:
> the team wants to tackle a major and common misconception: that students think that ANN systems learn, recognise, see, and understand, when really it’s all just maths.
> The team is taking very complex ideas and reducing them to such an extent that we can use secondary classroom maths to show that AI is not magic and AI systems do not think.
This is not off topic
Expecting machines to think is like magical thinking (though they are good at calculations, indeed).
I wish we didn't use the word "intelligence" in the context of LLMs. In short, there is Essence, and the rest is only slope: all possible combinations of Markov chains, whether they make sense or not. I don't see how part of a calculation could recognize that, or how that would even be possible from inside the calculation (which doesn't even take it into account).
Aside from artificial knowledge (without senses, experience, beyond context lengths: confabulating but not knowing it), I wish to see an intelligent knowledge, built in a kind of semantic way, allowed to expand using connections that are not yet obvious (but existing, not random). I wouldn't expect it to think (humans think, digital systems calculate). But I would expect it to have a tendency to come closer to (not further from) reflecting/modeling reality and expanding its implications.
An LLM could be thinking in one of two ways. Either between adding each individual token, or collectively across multiple tokens. At the individual token level the physical mechanism doesn’t seem to fit the definition being essentially reflexive action, but across multiple tokens that’s a little more questionable especially as multiple approaches are used.
> across multiple tokens
But how many? How many of them happen in a single person's life? How many in some calculation? Does it matter, if the calculation doesn't reflect it but stays all the same? (Would a conversation with a radio make any sense?)
The opinions are exactly the same as those about LLMs.
The concept of understanding emerges on a higher level from the way the neurons (biological or virtual) are connected, or the way the instructions being followed by the human in the Chinese room process the information
But really this is a philosophical/definitional thing about what you call “thinking”
Edit: I see my take on this is listed on the page as the “System reply”
Would you mind expanding on this? On a plain reading, it seems you're implying magic exists.
"Can not be measured", probably not. "We don't know how to measure", almost certainly.
I am capable of belief, and I've seen no evidence that the computer is. It's also possible that I'm the only person that is conscious. It's even possible that you are!
The argument that was actually made was "LLMs do not think".
B: But Y would also imply Z
C: A was never arguing for Z! This is a strawman!
If you were a mind supervening on the behavior of some massive time/space scale computer, how would you know? How could you tell the difference between running on a human making marks with pen and paper and running on a modern CPU? Your experience updates based on information transformations, not based on how fast the fundamental substrate is changing. When your conscious experience changes, that means your current state is substantially different from your prior state and you can recognize this difference. Our human-scale chauvinism gets in the way of properly imagining this. A mind running on a CPU or a large collection of human computers is equally plausible.
A common question people like to ask is "where is the consciousness" in such a system. This is an important question if only because it highlights the futility of such questions. Where is Microsoft Word when it is running on my computer? How can you draw a boundary around a computation when there are a multitude of essential and non-essential parts of the system that work together to construct the relevant causal dynamic. It's just not a well-defined question. There is no one place where Microsoft Word occurs nor is there any one place where consciousness occurs in a system. Is state being properly recorded and correctly leveraged to compute the next state? The consciousness is in this process.
You can replicate the entire universe with pen and paper (or a bunch of rocks). It would take an unimaginably long time, and we haven't discovered all the calculations you'd need to do yet, but presumably they exist and this could be done.
Does that actually make a universe? I don't know!
The comic is meant to be a joke, I think, but I find myself thinking about it all the time!!!
The question is, are the people in the simulated universe real people? Do they think and feel like we do—are they conscious? Either answer seems like it can’t possibly be right!
That's an assumption, though. A plausible assumption, but still an assumption.
We know you can execute an LLM on pen and paper, because people built them and they're understood well enough that we could list the calculations you'd need to do. We don't know enough about the human brain to create a similar list, so I don't think you can reasonably make a stronger statement than "you could probably simulate..." without getting ahead of yourself.
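For a sense of what "listing the calculations" means: here is a single attention step written with nothing but multiplication, addition, and exp, exactly the kind of arithmetic you could, in principle, grind through on paper. Toy 2-dimensional vectors, numbers invented for illustration; a real model just does this (and a few other equally mechanical operations) billions of times:

```python
import math

def dot(a, b):
    # Plain sum of pairwise products -- pen-and-paper arithmetic.
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    # Exponentiate and normalize so the weights sum to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """One attention step: mix `values` together, weighted by how similar
    each key is to the query."""
    weights = softmax([dot(query, k) for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the first key more strongly, so the output leans
# toward the first value vector:
out = attention([1, 0], [[1, 0], [0, 1]], [[10, 0], [0, 10]])
```

None of this settles whether the brain admits such a list; the point is only that for an LLM the list demonstrably exists and is finite.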
Yes, or what about leprechauns?
It's been kinda discussed to oblivion in the last century, interesting that it seems people don't realize the "existing literature" and repeat the same arguments (not saying anyone is wrong).
An arbitrarily-perfect simulation of a burning candle will never, ever melt wax.
An LLM is always a description. An LLM operating on a computer is identical to a description of it operating on paper (if much faster).
That simulated candle is perfectly melting wax in its own simulation. Duh, it won't melt any in ours, because our arbitrary notions of "real" wax are disconnected between the two simulations.
If we don't think the candle in a simulated universe is a "real candle", why do we consider the intelligence in a simulated universe possibly "real intelligence"?
Being a functionalist ( https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_m... ) myself, I don't know the answer on the top of my head.
I can smell a "real" candle, a "real" candle can burn my hand. The term real here is just picking out a conceptual schema where its objects can feature as relata of the same laws, like a causal compatibility class defined by a shared causal scope. But this isn't unique to the question of real vs simulated. There are causal scopes all over the place. Subatomic particles are a scope. I, as a particular collection of atoms, am not causally compatible with individual electrons and neutrons. Different conceptual levels have their own causal scopes and their own laws (derivative of more fundamental laws) that determine how these aggregates behave. Real (as distinct from simulated) just identifies causal scopes that are derivative of our privileged scope.
Consciousness is not like the candle because everyone's consciousness is its own unique causal scope. There are psychological laws that determine how we process and respond to information. But each of our minds are causally isolated from one another. We can only know of each other's consciousness by judging behavior. There's nothing privileged about a biological substrate when it comes to determining "real" consciousness.
A simulation of a tree growing (say) is a lot more like the idea of love than it is... a real tree growing. Making the simulation more accurate changes that not a bit.
Thanks for stating your views clearly. I have some questions to try and understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
Build a simulation of creatures that evolve from simple structures (think RNA, DNA).
Now, if in this simulation, after many many iterations, the creatures start talking about consciousness, what does that tell us?
It might if the simulation includes humans observing the candle.
Whatever it is that the brain actually does in the real, physical world produces the cogito in "cogito, ergo sum", and I doubt you can get that just by describing what all the subatomic particles are doing, any more than a computer or pen-and-paper simulation of a hurricane can knock your house down, no matter how perfect the simulation.
Of course a GPU involves things happening. No amount of using it to describe a brain operating gets you an operating brain, though. It's not doing what a brain does. It's describing it.
(I think this is actually all somewhat tangential to whether LLMs "can think" or whatever, but the "well, of course they might think, because if we could perfectly describe an operating brain, that would also be thinking" line of argument often comes up, and I think it's about as wrong-headed as a thing can possibly be, a kind of deep "confusing the map for the territory" error; see also comments floating around this thread offhandedly claiming that the brain "is just physics". Like, what? That's the cart before the horse! No! Dead wrong!)
A pen and paper simulation of a brain would also be "a thing happening" as you put it. You have to explain what is the magical ingredient that makes the brain's computations impossible to replicate.
You could connect your brain simulation to an actual body, and you'd be unable to tell the difference with a regular human, unless you crack it open.
I'm not. You might want me to be, but I'm very, very much not.
Connect your pen and paper operator to a brainless human body, and you got something indistinguishable from a regular alive human.
[0] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...
LLMs are BAD at catching earlier thinking errors, precisely because there aren't copious examples of text where humans think through a problem, screw up, go back, correct their earlier statement, and continue. (A good example catches these mistakes and corrects them.)
- ragebait them by saying AIs don’t think
- …
(Sneaking a bit of belief in here, to me "substrate independence" is a more extreme position than the idea that a system could be made which is intelligent but not conscious, hence I find it implausible.)
If you're asking for things you can't easily verify you're barking up the wrong tree.
Very much like this effect https://www.reddit.com/r/opticalillusions/comments/1cedtcp/s... . Shouldn't hide complexity under a truth value.
This is completely idiotic. Do these people actually believe they can show it isn't actual thought just because it is described by math?
By every scientific measure we have the answer is no. It’s just electrical current taking the path of least resistance through connected neurons mixed with cell death.
The fact that a human brain peaks at an IQ of around 200 is fascinating. Can the scale even go higher? It would seem not: since nothing has achieved a higher score, it must not exist.