I have heard this mentioned several times in the last decade or so: "The only thing a quantum computer definitively does better than a classical computer is simulating a quantum computer."
Whether this capability is useful is up in the air.
Note that in practice, classical computers are going to be better at factoring numbers for the foreseeable future.
Whether or not quantum computers have practical applications, the prize itself is not evidence of that.
All of which are based on existing technologies that have been delivering for decades if not an entire century (vehicle efficiency). Even something as nebulous as "AI systems" has been around for twenty years in the form of Google's original semantic search capabilities.
This "Quantum AI" prize, however, is a solution in search of a problem.
If you have significantly better quantum computers, you can solve realistic problems, yes.
But what's not being spelled out here is that as far as we know classical computers will still totally smoke them unless you allow a large probability of inaccurate results.
And if you are fine with inaccurate results, classical randomized algorithms set a much harder bar to beat.
Keep in mind that qubit requirements keep tumbling down as people work hard to squeeze out as much as possible from a limited number of qubits.
A perhaps more relevant example: Einstein did not have the cell phone camera in mind when developing his theory of the photoelectric effect.
Reminds me of that story about the self-evolving chip that was tasked to learn how to classify tones and instead took advantage of specific flaws in its own package.
The photoelectric effect had been well known for decades; Einstein just gave a good explanation of behavior that was already known from experiments. It would have been equally easy for the designers of the first video camera vacuum tubes, which were used in early television, to design them based only on the known experimental laws, ignoring Einstein's explanation.
On the other hand, the formulae of the stimulated emission of radiation, complementing the previously known phenomena of absorption and spontaneous emission, were something new, published for the first time by Einstein in 1917. They are the most original part of Einstein's work, together with general relativity, but their practical applications are immensely more important for now than the applications of general relativity, which are limited to extremely small corrections in the results of some measurements with very high resolutions.
The inventions of the masers and lasers after WWII would not have been possible without knowing Einstein's theory of radiation.
AFAIK the CCD technology continues to be used only in large-area expensive sensors inside some professional video cameras, in applications like astronomy, microscopy, medical imaging and so on.
CCD was the first thing that came to mind as 'charge' is right in the name.
Out of curiosity, I looked up invention dates: CCD in 1969, CMOS in 1963, and the CMOS sensor in 1993 (quite a gap). I was playing with DRAM light sensitivity in the lab in the late 80's. I'm guessing CMOS had too much noise to be useful for a long while or something.
I'm not saying quantum computing won't pan out, but for it to do so, some fundamental piece is still missing.
In contrast this effort is trying to imagine and monetize GPS before relativity is discovered.
Maybe, when we have quantum computers, one nerd makes an accidental discovery that enables us to build a room temperature superconductor, and maybe not. But if we don't let people research freely what they're interested in and only things that will pan out, we're going to lose out on a lot of things.
I didn't say quantum computing research is useless.
My point is that we are not at a stage where we can offer a small prize and find monetizable uses for it.
Fundamental research requires a lot more funding than this.
The theory of relativity was discovered decades before GPS. Similarly, the theory of quantum computing was discovered in the 1990s.
I agree with the sentiment: this is trying to find applications for a technology (a large fault-tolerant quantum computer) that doesn't exist yet. I just think relativity is the wrong comparison. I do not think this effort is worthless merely because we lack fault-tolerant quantum computers. Theory alone can take one very far.
Number factorization and anything else in BQP is also a use for them.
Everything quantum is linear; how is this distinction overcome?
Also, doesn't solving this problem hint at quantum gravity?
"[N]ature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
0. https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-...
That is an absurd claim, if taken by itself. Most of nature relevant to us behaves classically. If you want to do simulations of a building or a car or an earthquake or the climate, modeling it as a quantum system would be absurd.
The basic problem is that the number of states grows exponentially with the size of the system. You very quickly have to start making approximations, and it takes an enormous amount of classical computing power and memory to handle even relatively small systems.
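To put a rough number on that exponential growth, here is a back-of-envelope sketch (plain Python, the function name is mine) of the memory a brute-force state-vector simulation would need:

    # Memory needed to hold the full state vector of an n-qubit system on a
    # classical machine: 2**n complex amplitudes at 16 bytes each (complex128).
    def state_vector_bytes(n_qubits: int) -> int:
        return (2 ** n_qubits) * 16

    for n in (20, 30, 40, 50):
        print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.3f} GiB")
    # 20 qubits: ~0.016 GiB; 30: 16 GiB; 40: ~16,000 GiB; 50: ~16 million GiB.
    # Hence the need for approximations well before "relatively small" systems.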
However, quantum computers also have to deal with noise and errors. So far, that's not very different.
(If we manage to build error-correcting quantum computers, that might change.)
This is very different from, say, an approximation that adds in a small amount of noise that you can estimate. The approximations in simulating quantum systems classically can radically change the behavior of the system, in ways that you might not understand or be able to easily estimate.
Because a simulation is too difficult, there are approximate formulae for computing the quantities of interest, like the energy levels of the spectrum of the hydrogen atom.
These approximate formulae include a large number of constants which are given in the paper linked by you and which are adjusted to match the experimental results.
A simulation of the hydrogen atom would start from a much smaller set of constants: the masses of the proton and of the electron, the magnetic moments of the proton and of the electron, the so-called fine structure constant (actually the intensity of the electromagnetic interaction) and also a few fundamental constants depending on the system of units used, which in SI would be the elementary electric charge (determined by the charge of a coulomb in natural units), the Planck constant (determined by the mass of a kilogram in natural units) and the speed of light in vacuum (determined by the length of a meter in natural units).
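To illustrate what starting from that small set of constants looks like at the crudest level, here is a hedged sketch (plain Python, rounded constant values, my own names) of the leading-order Bohr energy levels; the QED corrections discussed below are exactly what this simple formula leaves out:

    # Leading-order (non-relativistic, Bohr) hydrogen levels from a handful of
    # constants. No fine structure, no Lamb shift, no hyperfine splitting.
    ALPHA = 7.2973525693e-3        # fine-structure constant (dimensionless)
    ME_C2_EV = 510_998.95          # electron rest energy, eV
    MP_OVER_ME = 1836.152673       # proton/electron mass ratio

    mu_c2 = ME_C2_EV * MP_OVER_ME / (1.0 + MP_OVER_ME)   # reduced-mass rest energy

    def bohr_level_eV(n: int) -> float:
        """E_n = -mu c^2 alpha^2 / (2 n^2), in eV."""
        return -mu_c2 * ALPHA ** 2 / (2 * n * n)

    print(bohr_level_eV(1))   # ~ -13.598 eV
    print(bohr_level_eV(2))   # ~  -3.400 eV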
The constants are determined by fitting the theory to the best available measurements (not only in hydrogen). This is exactly what fundamental constants do: They convert unit-less theory expressions into measurable quantities.
You can solve such equations in fractions of a second only for very low precisions, much lower than the precision that can be reached in measurements.
For higher precision in quantum electrodynamics computations, you need to include an exponentially increasing number of terms in the equations, which come from higher order loops that are neglected when doing low precision computations.
When computing the energy levels of the spectrum of a hydrogen atom with the same precision as the experimental results (which exceed by a lot the precision of FP64 numbers, so you need to use an extended precision arithmetic library, not simple hardware instructions), you need either a very long time or a supercomputer.
I am not sure how much faster that can be done today, e.g. by using a GPU cluster, but some years ago it was not unusual for the comparisons between experiments and quantum electrodynamics to take some months (but I presume that the physicists doing the computations were not experts in optimization, so perhaps it would have been possible to accelerate the computations by some factor).
The most accurate hydrogen spectroscopy (of the 1S-2S transition) has reached a relative accuracy of a few parts in 1E15 which is around an order of magnitude above the precision of FP64 numbers.
That absolute frequency is computed from the ratio between an optical frequency and the 9 GHz frequency of a cesium clock, which is affected by large uncertainties due to the need for bridging the gap between optical frequencies and microwave frequencies.
The frequency ratios between distinct lines of the hydrogen atom spectrum or between lines of the hydrogen atom spectrum and lines in the optical spectra of other atoms or ions can be known with uncertainties in parts per 1E18, one thousand times better.
When comparing a simulation with the experiment, the simulation must be able to match those quantities that can be measured with the lowest uncertainty, so the simulated values must also have uncertainties of at most parts per 1E18, or better per 1E19.
This requires more bits than FP64 provides. The extended precision of the Intel 8087 would barely be enough to express the final results, but it would not be enough for the intermediate computations, so one really needs quadruple-precision computations or double-double-precision computations, the latter being faster on hardware that only implements FP64.
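For anyone curious what double-double arithmetic actually is, here is a minimal sketch (plain Python, a simplified "sloppy" addition, my own names): a value is carried as an unevaluated sum of two FP64 numbers, and error-free transformations keep the low part from being lost:

    # Knuth's TwoSum: s + err equals a + b exactly, in FP64 arithmetic.
    def two_sum(a: float, b: float):
        s = a + b
        bb = s - a
        err = (a - (s - bb)) + (b - bb)
        return s, err

    # Simplified double-double addition: (hi, lo) pairs, roughly 31 decimal digits.
    def dd_add(x, y):
        s, e = two_sum(x[0], y[0])
        e += x[1] + y[1]
        return two_sum(s, e)

    print(1.0 + 1e-20 == 1.0)                    # True: plain FP64 drops the tail
    print(dd_add((1.0, 0.0), (1e-20, 0.0)))      # (1.0, 1e-20): the tail survives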
I have not attempted to compute the QED corrections myself, so I cannot be certain how difficult that really is.
Nevertheless, section VII of this CODATA paper, and also the previous editions of the CODATA publications, some of which were more detailed, are not consistent with what you say, i.e. with the corrections being easy to compute.
For each correction there is a long history of cited research papers that would need to be found and read to determine how exactly they have been computed. For many of them there is a history of refinements in their computation and of discrepancies between the values computed by different teams, discrepancies that have sometimes been resolved by later, more accurate computations, but also some where the right value was not yet known at the date of this publication.
If the computations were so easy that anyone could do them with pen and paper, there would have been no need for several years to pass in some cases before the correct computation was validated, nor for the very slow improvement in the accuracy of the computed values in other cases.
Isn't it fun to get your own field of expertise (wrongly) explained to you on the internet?
Edit: I never said that it is easy to derive the corrections listed in the CODATA paper. However, it is relatively easy to calculate them.
And, hopefully, other quantum systems in general!
I can see that helping with material science. That can have huge multiplier effects on the rest of the economy.
But I agree with you, that other serious applications of quantum computers seem to be thin on the ground.
If that is your understanding, then yes, you have unfortunately fallen for the very mistaken reporting on this.
There are specific algorithms that quantum computing can solve faster than regular computers. Some of these algorithms are incredibly important, and a faster solution to them would cause serious changes in the world. The prime example is Shor's algorithm, which would make factoring large numbers much faster and would effectively break many/most encryption schemes in the world. So it's a big deal!
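For context on where the quantum speed-up actually enters: only the period-finding step of Shor's algorithm is quantum; the surrounding scaffolding is classical. A hedged sketch (plain Python, with the quantum step replaced by an exponentially slow brute force, function names mine):

    from math import gcd
    from random import randrange

    def order(a: int, N: int) -> int:
        """Multiplicative order of a mod N, by brute force. This is the one step
        a quantum computer (via the quantum Fourier transform) speeds up."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor_via_order(N: int) -> int:
        while True:
            a = randrange(2, N)
            g = gcd(a, N)
            if g > 1:
                return g                     # lucky guess already shares a factor
            r = order(a, N)
            if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                return gcd(pow(a, r // 2, N) - 1, N)

    print(factor_via_order(15))              # 3 or 5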
Nevertheless, this isn't true in general - not every algorithm is known to have such a speedup. (I'm not an expert, I'm not sure if we know that there isn't a general speedup, or think there might be but haven't found it.)
I could totally see the argument that they are physically impractical and therefore not likely to be actually used vs parallelizing conventional computers.
But no, I'm fairly sure that QC in general isn't known to be able to solve NP problems. And since Traveling Salesman is NP complete (iirc), I don't think there's a QC alg to solve traveling salesman in P (otherwise that would imply QC would solve all of NP in P, which I know isn't true). Where did you see indication otherwise?
FWIW, my favorite CS blogger is Scott Aaronson, and the subtitle of his blog has always been: "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel." This reflects the very common misunderstanding of how QC works.
Recursion is more of a property of how you write your algorithm, than of the problem.
E.g. Scheme or Haskell as languages have no built-in facilities for iteration, no for-loops, no while-loops; the only thing you get is function calls. (Well, that and exceptions etc.)
However, you can build for-loops as libraries in both Scheme and Haskell. But they will be built on top of recursion.
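A tiny illustration of that point (in Python for familiarity rather than Scheme or Haskell; the names are mine): a counted loop expressed with nothing but function calls:

    # A "for loop" built purely out of recursion: no loop construct anywhere.
    # (In Python this hits the recursion limit for large ranges, since Python
    # does not do tail-call optimisation; Scheme implementations are required to.)
    def for_loop(i, stop, body, acc):
        if i >= stop:
            return acc
        return for_loop(i + 1, stop, body, body(acc, i))

    # Sum of squares of 0..9, expressed through the recursive "loop":
    print(for_loop(0, 10, lambda acc, i: acc + i * i, 0))   # 285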
To make matters even more interesting: once you compile to native code, all the recursion (and also all the for-loops and other fancy 'structured programming' constructs) go away, and you are back to jumps and branches as assembly instructions.
None of this changes whether a given class of problems is in P or NP or (almost!) any other complexity class. Just like getting a faster computer doesn't (usually!) change the complexity class of your problems.
> I could totally see the argument that they are physically impractical and therefore not likely to be actually used vs parallelizing conventional computers.
Quantum computers are impractical now. But as far as we can tell, that's "just" an engineering problem.
In the long run one of two things will happen:
- Either engineering improves and we will be able to build good quantum computers (though perhaps still not better than classical computers for the same amount of effort)
- Or, we discover new principles of quantum mechanics.
Basically, quantum mechanics is one of our most successful physical theories, if not the most successful theory. And as far as we understand it, it allows quantum computers to be built.
So either we will eventually manage to build them, or (more excitingly!) that's not possible, and we will discover new physics that explains why. We haven't really discovered new physics like that in a while.
A quantum computer is not a general purpose computer that can be programmed in the way we're used to - it's more like an analog computer that is configured rather than programmed. It's only useful if the problem you are trying to solve can be mapped into a configuration of the computer.
If a task can't be efficiently reduced to such a problem, then a QC probably won't ever help at all; the square-root time advantage from Grover's algorithm is too easily overwhelmed by simple engineering factors.
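A back-of-envelope illustration of that "overwhelmed by engineering factors" point, with made-up but not unreasonable numbers (both timings below are assumptions, not measurements):

    from math import sqrt

    N = 2 ** 40                  # size of an unstructured search space
    t_classical = 1e-9           # assumed ~1 ns per classical candidate check
    t_quantum = 1e-3             # assumed ~1 ms per error-corrected Grover iteration

    print("classical:", N * t_classical / 3600, "hours")        # ~0.31 hours
    print("Grover:   ", sqrt(N) * t_quantum / 3600, "hours")    # ~0.29 hours
    # With a ~10^6x per-step overhead, the quadratic advantage only starts to pay
    # beyond roughly 2^40 candidates, and the classical search parallelises trivially.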
If it's sampling, you don't have to deal with the exponential here.
“We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.”
Which is the basis for the experiment discussed here: https://scottaaronson.blog/?p=5122
If by "AI" you include operations research (as opposed to statistical machine learning), yes, adiabatic quantum annealing makes sense for certain optimisation problems which you can manage to naturally formulate as a QUBO problem. By 'naturally' I mean it won't blow up the number of variables/qubits, as otherwise a far simpler algorithm on classical computer would be more efficient. I know someone who published QUBO algorithms and ran them on a NASA D-Wave machine, while I myself was using a lot of annealing for optimisation, I didn't want to get involved in that field.
But if you want to do machine learning on a large amount of data using quantum annealing, no, that's terribly badly matched, because the number of physical qubits needed is proportional to the amount of data you want to feed in.
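To make the QUBO formulation concrete, here is a toy example (plain Python/NumPy, brute-forced classically; an annealer would simply be handed the same Q matrix): max-cut on a triangle written as minimising x^T Q x over binary x. The matrix below is my own worked example, not from the work mentioned above:

    import itertools
    import numpy as np

    # Max-cut on the triangle {0-1, 1-2, 0-2}: cut size = sum over edges of
    # (x_i + x_j - 2*x_i*x_j). Minimising -cut gives this upper-triangular Q
    # (diagonal entries carry the linear terms, since x_i**2 == x_i for binary x).
    Q = np.array([[-2.0,  2.0,  2.0],
                  [ 0.0, -2.0,  2.0],
                  [ 0.0,  0.0, -2.0]])

    best = min(itertools.product([0, 1], repeat=3),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print(best)   # e.g. (0, 0, 1): isolating one vertex cuts 2 of the 3 edges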
This statement should delimit between theory and experiment.
Theoretically, the question of building a quantum computer with low enough noise to run Shor's has been solved. In fact it was solved by Shor himself in the 1990s: https://arxiv.org/abs/quant-ph/9605011.
Experimentally, we are just getting started with building quantum computers with low enough noise: https://blogs.microsoft.com/blog/2024/04/03/advancing-scienc....
Experimentally, it will always be open whether a quantum computer can run Shor's until one actually runs Shor's. The point is that progress in the field has not stagnated since its founding.
The paper does not prove anything about the upper limit for the number of bad qubits that are physically realisable.
There are doubts that this upper limit, which is not yet known, is high enough for most practical applications.
Nothing can prove how many qubits can be realizable except trying to realize them. There will never be a theorem that says "The maximum number of qubits that can ever be controlled in a lab is X". That's what experiment is for.
I will say, it's difficult to doubt that the upper limit on the number of qubits we can realize is effectively unbounded. We can now trap ~10 million atoms and efficiently control hundreds of them: https://www.nature.com/articles/s41586-023-06927-3. The question is not "Could we ever realize billions of qubits?". It's "When can we realize billions of qubits?". The answer could be decades or centuries but as long as people are building these devices, it will happen eventually.
Cai 2023 (https://pages.cs.wisc.edu/~jyc/Shor-Algorithm-with-Noise.pdf) showed that there is a definite noise floor for Shor's and gave a way to determine this.
Newer algorithms are better than Shor's in that they are smaller in space and time and more resilient to noise. The state of the art is Ragavan et al., https://arxiv.org/abs/2310.00899. Cai's insight about the QFT applies to this algorithm too, but is less damaging, as it scales the noise floor as n^(3/2) rather than n^2. It does seem to be believed that error correction can be implemented to bring the effective noise down, but it also seems that this will be very expensive for large numbers - probably at the scale of previous estimates, which indicate that more than 20 million qubits would need to be operational for about 8 hours. The gates required will have to be much better than the current ones (my reading is about an order of magnitude), but this is believed to be on the table for the future. I think these two assertions by the community are debatable, but honestly these folks are the experts and we have to go with what they say and respect them on this until it turns out that they were wrong.
From what I read it was never the case that Shor's was thought to be any more noise tolerant than other algorithms, but Cai proved it, which is different. There is some debate about the insight, because a chunk of the community is like "this is not helpful and will make us even more misunderstood than we already are", but personally I find this attitude really irritating because it's part of the gradual reveal that QC people do about the practicality of the tech. I have no respect for anyone working in QC who doesn't say something like "there is no chance of a practical application in the next 30 years and it's likely that there will be no practical application for 50 years at least. But this is really interesting and important physics, and the techniques we are developing may help us build new classical devices or other quantum devices on a shorter timescale." This rider should be mandated for all grant applications and in every popular interview. Instead of which we hear (repeatedly) "whooo whooo QC will crack RSA any day now and help us do machine learning better than you can ever imagine". These folks say "whelp, I never said that", but the reality is that they use the idea to seduce politicians and CEOs into giving up significant money that they would not if they had a clear idea of the technical problems with QC, money which could be used to do a lot of good if spent on things like feeding kids and stopping climate change.
This is introducing new security issues because things like QKD and post quantum are problematic in new ways. QKD has end points that are silently vulnerable to people with guns, pliers and evil intents. Post quantum is very computationally expensive and errr suspicious in errr various ways. Introducing it is going to create gaps, and those gaps are unnecessary if current encryption is not threatened.
Quantum ML is another thing that makes me really annoyed. The algorithms need data to be loaded (surprise), which is extremely problematic and slow, and they need quantum memory, which exists only on a very small scale and whose tech just seems wildly impractical to me. Yet folks are out there talking about it as if it's coming in the next 5, 10, 20 years! The reality is that we are about 6 major breakthroughs from doing this, and once we have those 6 breakthroughs, expect a practical implementation to take at least another 10 years. Again - I have no problem with the theoretical exploration of the topic, but to simulate circuits and make claims about how they will work without mentioning that you don't have a clue how to implement the system that will implement them is pretty bad behaviour.
All they need to do is put a single line in the paper like "This technique has theoretical interest in computer science and physics, but major open questions prevent it from being practical for real-world problem solving in the near or medium term." Then I have total respect. 100%. And I think that the work is interesting.
But no, because "want money" and "don't give a shit about the damage we are doing".
If a problem can be reduced to efficiently sampling from the summation of an exponentially large number of FFTs, then a quantum computer will destroy a classical computer.
It's largely an open research problem whether there are useful quantum algorithms between those two problem classes.
You can get a Turing award if you can show this, even theoretically.
But yeah it’s not guaranteed or even likely that quantum computers will be very useful for many computer science problems, though I’m also optimistic about that given the progress made in the last 30 years. Physics / science isn’t guaranteed to be easy or progress at a steady pace.
And developing new materials.
A simulation of a plane is a model, implemented by humans, of how physics works.
A simulation of a quantum system is still a human implementation of mechanics. A quantum computer per my understanding should not be able to add anything to that. You’re not able to just say well this thingy has quantum mechanics and this is a quantum computer so it’s better able to do that. Quantum computers are about speeding up calculations by structuring specific problems such that wrong answers are not calculated. Not emulating quantum mechanics and then pulling the answer from the ether.
So… what am I missing here? How could quantum computer aided search spaces specifically aid simulating quantum systems that are not quantum computer aided search spaces?
Quantum computers are a means of systematically creating and modifying complicated sums of exponentially large FFTs, and then efficiently sampling from the resulting distribution.
Note that you typically still need to sample many times to get a meaningful answer, which is where the “wrong answers are not calculated” ultimately comes from: if you can arrange for most or all of the factors corresponding to “wrong” answers in the sampled distribution to cancel out (such as the term for the number 4 when trying to factor 15), then when you sample several times from that distribution, very few or perhaps none of those samples will have been drawn from the “wrong” part of the distribution, and so you waste less time testing bad samples.
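A small classical sketch of that cancellation (NumPy standing in for the quantum Fourier transform; the sizes are chosen for the factoring-15 example, everything else is my own illustration): put equal amplitude on the indices where 7^x mod 15 returns to 1 (period 4), Fourier transform, and only a few outcomes survive:

    import numpy as np

    M = 64                                  # 2**6 basis states ("6 qubits")
    r = 4                                   # period of 7**x mod 15
    state = np.zeros(M, dtype=complex)
    state[::r] = 1.0                        # amplitude on x = 0, 4, 8, ...
    state /= np.linalg.norm(state)

    after_qft = np.fft.fft(state) / np.sqrt(M)   # unitary DFT as an idealised QFT
    probs = np.abs(after_qft) ** 2
    print(np.flatnonzero(probs > 1e-9))     # [ 0 16 32 48]: only multiples of M/r
    # Every other outcome has cancelled; sampling now reveals the period r = M/16 = 4.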
A quantum computer is potentially useful for simulating quantum systems because the _models_ for those systems are ridiculously complex in _exactly_ this way. It won't help if the model is wrong, but our problem is currently that we can't really run the calculations our current models imply beyond slightly-larger-than-toy examples.
How is this not “wrong answers are not calculated”? You gave a lot more detail on the mechanics of how these probability amplitudes are canceling each other out but the answer seems the same?
I don’t follow how this maps to helping simulate the quantum systems. Quantum computers are good at finding solutions to problems efficiently. But the quantum systems we are describing are not solution seeking systems. They’re going to be just interacting components with entanglement whatever’s going on. How would the avoidance of bad samples aid the simulation of a system like that?
All the examples given that I have seen were for making a hardwired simulator for a concrete quantum system, e.g. some chemical macromolecule of interest, to be used much in the same way as analog computers were used in the past for simulating systems governed by differential equations too complex to be simulated by the early digital computers in an acceptable time.
Does that actually follow the same framework as “traditional” “quantum computing”, e.g. struggles with error correction / problem formulation specifically designed to avoid unnecessary calculations?
It feels like although a quantum simulator could viably work to simulate a specific system, it shouldn’t make it any easier to actually understand the system it is simulating and could maybe just indicate how complex by the amount of variation in simulation outcomes (which isn’t useless). Is that accurate?
Excuse my terminology here
A good part of theoretical chemistry today relies on simulating quantum systems.
A paper I found on edge detection: https://arxiv.org/html/2404.06889v1#:~:text=Quantum%20Hadama....
It is my understanding that one use is very straightforward: quickly solving problems in the O(n!) (factorial) complexity class.
I could be misunderstanding QC though.
But aren't these problems all in the factorial and exponential classes?
Also, at least the most famous problem which has an efficient QC algorithm, integer factorization, has no proven lower complexity bound on classical computers. That is, while the best algorithm we know for it is exponential (or, technically, sub-exponential), it is not yet proven that no polynomial classical algorithm exists. The problem isn't even known to be NP-complete, so it's not like finding an efficient algorithm would prove P=NP.
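For a feel of what "sub-exponential versus polynomial" means in practice, a rough comparison (plain Python; the GNFS L-notation with its o(1) term dropped, and Shor counted crudely as ~n^3 gates, both of which are simplifications of mine):

    from math import exp, log

    def gnfs_ops(n_bits: int) -> float:
        """Heuristic GNFS cost: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))."""
        ln_N = n_bits * log(2)
        return exp((64 / 9) ** (1 / 3) * ln_N ** (1 / 3) * log(ln_N) ** (2 / 3))

    def shor_gates(n_bits: int) -> float:
        return float(n_bits) ** 3           # crude polynomial gate count

    for n in (512, 1024, 2048, 4096):
        print(f"n={n}: GNFS ~{gnfs_ops(n):.1e} ops, Shor ~{shor_gates(n):.1e} gates")
    # Sub-exponential still blows up with key size; polynomial does not. What the
    # comparison ignores is the (currently enormous) cost of each fault-tolerant gate.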
Regardless, 1/50 the value of the prize doesn't seem egregious to me.
I look forward to welcoming "Quantum AI" to that graveyard.
(Or Promotion Oriented Programming.)
Where is the AI?
The founder of the Google Quantum AI lab is researching how quantum computers can be applied to AI: https://www.youtube.com/watch?v=3iEEvRfTTEs
And if so, why aren't they throwing crypto into the mix?
"Quantum Blockchain AI" does kind roll off the tongue nicely...
Archived versions of the website [1] mention work on machine learning more prominently, as do some of the linked research publications from a few years back [2].
Perhaps the unit was given the name AI initially but its focus later shifted to quantum computing?
Later: A Google blog post from 2021 explains the connection of quantum to AI better than the current website does [3].
[1] https://web.archive.org/web/20201210053216/https://quantumai...
[2] https://scholar.google.com//citations?view_op=view_citation&...
[3] https://blog.google/technology/ai/unveiling-our-new-quantum-...
1. Barren plateaus in quantum neural network training landscapes: https://www.nature.com/articles/s41467-018-07090-4
2. Power of data in quantum machine learning: https://www.nature.com/articles/s41467-021-22539-9
3. Quantum advantage in learning from experiments: https://www.science.org/doi/10.1126/science.abn7293
When computers first began development during WW2, they were a response to immediate demands for particular functionality from many technical areas. Quantum computing seems to go the exact opposite route, first of building up an (admittedly very interesting) technology and then later figuring out if there actually is anything useful to do with it. The connection to AI is particularly interesting, because it seems to be built entirely out of a combination of two buzzwords.
IBM and Google have invested massively because someone there thought that it actually would be useful. But that hasn't happened yet and to be honest it doesn't look like that will change any time soon.
In regards to neural networks, they were pretty much a complete dead end until computing power increased enough to make them viable. In that case an external technology had to come along to make it work.
Plus, what is the near future anyway? If big tech companies spend 2 billion a year on quantum computing for the next 25 years (which is how long it took to get from Geoffrey Hinton's book to fully commercial applications), that's 50 billion. OpenAI + Anthropic are valued at >100 billion. Even if they just broke even, doesn't that seem worth it to control the next generation of computing?
And idk other than this, robotics (self driving, manufacturing, agriculture) and AGI what else is there to bet on for the coming decades?
Shouldn't they also specify how many physical qubits are needed to build one logical qubit with the given error rate?
Milestone 6 reduces the error rate from 10^-6 to 10^-13 while increasing physical qubits by only 10x. That doesn't work out well if you need 1000x more physical qubits per logical one...
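For what it's worth, the textbook surface-code rule of thumb suggests the scaling is gentler than that. A hedged sketch (plain Python; the physical error rate, threshold, and prefactor below are assumptions of mine, not Google's published numbers): logical error per round ~ A*(p/p_th)^((d+1)/2), with roughly 2*d^2 physical qubits per logical qubit at code distance d:

    A, p, p_th = 0.1, 1e-3, 1e-2       # assumed prefactor, physical error rate, threshold

    def distance_for(target: float) -> int:
        """Smallest (odd) code distance whose estimated logical error is <= target."""
        d = 3
        while A * (p / p_th) ** ((d + 1) / 2) > target:
            d += 2
        return d

    for target in (1e-6, 1e-13):
        d = distance_for(target)
        print(f"target {target:.0e}: distance {d}, ~{2 * d * d} physical qubits per logical qubit")
    # With these assumed numbers: d=9 (~160 physical) vs d=23 (~1060 physical),
    # roughly a 6-7x increase per logical qubit rather than 1000x.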
The Quantum AI lab was founded in 2013, about a decade before LLMs took off. The founder of the lab is researching how quantum can be applied to AI: https://www.youtube.com/watch?v=3iEEvRfTTEs
Oh hey apparently there are some 26MB of mp4s that didn't load at first, they served a 2.4MB "Mobile" mp4 instead, so it's not all bad.
Massive assumption that it was built by an external team. But I'd hope so :)
No, seriously, the amount of cash our governments (Canadian here) pump into projects like this always annoys me.
Quantum computers do exist.
They are extremely useful and do things that digital computers cannot do. Categorically cannot.
The cynicism in this thread is crazy
I’m not being snarky; I haven’t read much about quantum computing and I am genuinely interested in practical applications.
As it is, your comment just reads as “you people are a bunch of idiots, you’re all wrong and crazy, and I’ll insult you repeatedly but not offer one single corrective argument”. It would be insane for anyone to change their mind based on your comment, you’ll only get people to double down.
But that is false. A classical computer can simulate a quantum computer. Performance is the difference, not inherent ability.
>The cynicism in this thread is crazy
The cynicism stems from people telling others for at least a decade that practical quantum computers will change the world. Even the URL invokes deep cynicism in me, as it randomly combines quantum computing with the current hot thing. When will OpenAI switch to quantum computing for their LLMs? Next year?
Practical quantum computers will change the world (break RSA 2048). The question is "when". The people who have a timeline of ~10 years instead of decades contribute to what we in our community call "Quantum hype" and it's very much frowned upon by most of the members in the community.
> combines Quantum computing with the current hot thing
The founder of this lab has an illustrious career in machine learning and is now researching how quantum computers can help with that.
Perhaps via some practical (non-crypto) application of factoring large numbers?
Quantum computers also change the world by solving circuit-SAT https://en.wikipedia.org/wiki/Circuit_satisfiability_problem more efficiently than classical computers.
They also change the world by simulating quantum systems efficiently, which classical computers cannot do. This has profound implications for physics.
But as far as I know none of the existing quantum computers works well enough to achieve that 'quantum supremacy', yet.
So I have no clue what Hewlberno is talking about with their comment.
And I'll qualify that as a real quantum computer which exists in physical form today, not something theoretical in someone's research paper.
So we have a new tech like Blockchain, it takes a few years and gets really "big".
We have an AI revolution, ok the research has been ongoing for years but it seems like there has been overnight advancement.
I'm sure people are expecting the same with quantum computing, but it's not just a re-use of existing tech (transistors) it is development of brand new tech.
It is better to imagine it as research into fusion power generation. It will take a long time, no one is sure just how useful it will be.
But saying a research quantum computer can't outperform an existing computer is like saying CERN can't outperform a coal-fired power plant.
Not only two different things, but one of them isn't even designed to generate power.
Last milestone was February 2023:
https://blog.google/inside-google/message-ceo/our-progress-t...
edit: it might make sense to be worried about the feasibility of cooling it, since I think the infrastructure required for keeping the quantum-y parts of the computer cold is extensive, but the actual energy itself is probably irrelevant.
A hover tip on "quantum computing": scale, useful, complex, fundamental, quantum.
What did I miss?
Oh, computer!
Quantum machine learning is postulated to overcome the issue of LLMs exhibiting incredible intuition but limited intelligence.
See also eg https://scottaaronson.blog/?p=2756