Turns out he was just writing LLM prompts way ahead of his time.
For some reason that one wasn't even included in the list of series books on the inside jacket of the other books I had.
I remember I had to really hunt for it, and it was from a different publisher. Never knew why.
I don't think that's the main problem; there are a lot of moral dilemmas where even humans can't agree on what's right.
If each human could pause the state of the world, gather all information, and then decide, they would act humanely.
Just think of abortion, wars or legalizing drugs. People disagree completely over those because nobody agrees which choice would be the moral one.
HPMOR offers a solution called 'coherent extrapolated volition': ordering the superintelligent machine not to obey the stated rules to the letter, but to act in the spirit of the rules instead. Figure out what the authors of the rules would have wished for, even though they failed to put it in writing.
We are debating scifi, of course.
What if the original author was from long ago and doesn't share modern sensibilities? Of course you can compensate to some extent when formulating the rules, but I imagine there will always be potential issues.
I recall that story of the guy who tried to use AI to recreate his dead friend as a microwave and it tried to kill him[0].
You couldn't sell a sci-fi story where AIs just randomly go insane sometimes and everyone accepts it as a cost of doing business because "humans are worse," but that's essentially reality. At least not as anything but a dark satire that people would accuse of being a bit much.
[0] https://thenextweb.com/news/ai-ressurects-imaginary-friend-a...
Certainly AI safety isn't perfect, but if you're going to criticize it at least criticize the AIs people actually use today. It's like arguing cars are unsafe and pointing to an old model without seatbelts.
It's not surprising at all that people are willing to use AIs even if they give dangerous answers sometimes, because they are useful. Surely they're less dangerous than cars or power tools or guns, and all of those have legitimate uses which make them worth the risk (depending on your risk tolerance).
And now I wouldn’t even trust them to understand the laws 100% of the time.
Or so says Ted Chiang: https://en.m.wikipedia.org/wiki/The_Lifecycle_of_Software_Ob...
Like seemingly all torment nexii, the warning part of the tale is forgotten.
Sorting garbage is a terrible job for humans, but it's a terrible one for robots too. Those fancy mechanical actuators etc are not going to stand up well to garbage that's regularly saturated with liquids, oil, grease, vomit, feces, dead animals, etc.
An example: a friend worked on accurate built-in weighing machines for trucks, which could measure axle weight and load balance to meet compliance for bridges and other purposes. He found it almost impossible to make units which could withstand the torrents of chemical and biological wet materials which regularly leak into a truck. You would think "potting" electronics would address this problem, but even that turns out to have severe limits. It's just hard to find materials which function well when subjected to a range of chemicals. Stuff which is flexible is especially at risk here: the way you make things flex is to use softeners, which in turn give the material other properties, like porosity or susceptibility to attack by some combinations of acid and alkali.
These units had NO MOVING PARTS because they were force transducers. They still routinely failed in service.
Rubbish includes bleaches, acids, complex organics, grease, petrochemicals, waxes, catalyst materials, electricity, reactive surfaces, abrasives, sharp edges...
They are not saying "don't try"; they are saying "don't be surprised if it doesn't work at scale, over time."
It's necessary to follow things to their logical conclusion.
Humans can generally stand this without an issue.
In fact you wouldn't replace a lot of jobs that involve this: doctors, nurses, emergency workers, caregivers…
It just happens to be difficult. But people love doing difficult things as long as they're: a) rewarding, b) respected, and c) sufficiently paid.
Does the graph of that pay scale cross the cost graph of this robot?
Maybe just paying a living wage is a simpler answer than most AI enthusiasts want to admit.
Why's everything gotta have arms and graspers? It's so inefficient.
Robots aren't climbing trees or chasing food. They don't need tails, either.
We have designed a lot of processes and workplaces around the assumption that the 'machine' working there will be around 160-190 cm tall, with two arms with graspers on the end, equipped with stereo colour vision cameras. The closer you make your new machine match that spec, the fewer changes you have to make to your current setup. It also makes it easier to partially swap in robots over time, rather than ripping everything out and building something completely new.
Having worked at a company close to this field, the real answer though is that both approaches are being pursued right now. People building new facilities from scratch are building entirely automated systems where the 'robot' is the whole machine. People with existing facilities are more interested in finding ways to add robots to their current workflow with minimal changes.
Do you have a picture of a facility where they would have to replace humans with humanoid robots and a conveyor would not work?
I believe a short drop and hydraulically actuated metal plates are the more typical solution. That offers a high level of precision.
It just had cameras, visual detection, some compressed-air nozzles, and millisecond (nanosecond?) reaction times to separate out the non-recyclable materials.
The mapping between sensor signals and material types is usually hardcoded from laboratory test results.
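To make that concrete, here's a toy sketch of what such a hardcoded mapping might look like. All of it is illustrative: the band values, the material classes, and the nozzle interface are made up, not from any real sorting system.

    # Toy sketch: lab-calibrated signatures -> material class.
    # Band names, numbers, and the nozzle API are hypothetical.
    NIR_SIGNATURES = {
        "PET":  (0.62, 0.71),   # (band_a, band_b) reflectance centroids
        "HDPE": (0.55, 0.80),
        "PVC":  (0.48, 0.52),
    }
    REJECT = {"PVC"}  # materials to blow out of the stream

    def classify(band_a, band_b):
        """Nearest-centroid lookup against the calibration table."""
        def dist(sig):
            return (band_a - sig[0]) ** 2 + (band_b - sig[1]) ** 2
        return min(NIR_SIGNATURES, key=lambda m: dist(NIR_SIGNATURES[m]))

    def on_detection(band_a, band_b, nozzle):
        if classify(band_a, band_b) in REJECT:
            nozzle.fire()  # millisecond-scale air burst diverts the item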
For a long time the term "artificial intelligence" had simply gone out of favour, but I do remember the days when a good AI research lab had a bunch of Symbolics Lisp machines.
Laser interferometry and DCT image distance, primarily.
Led to nothing. At least not at the time. AFAIK the initial garbage stream is still manually inspected and separated at most sites.
And the people doing that have a much higher risk of getting sick, because of all sorts of bacteria, mold, spores, chemicals, VOCs, whatever.
Not to mention the stink.
They used AI to identify and sort.
One issue was just the sheer muck of trash: if someone dropped an open smoothie, all sorts of sensors got covered, etc.
Really cool idea, I thought, though.
The Gemini robot tech is cool as heck, don't get me wrong, but it doesn't seem particularly well suited to industrial automation.
You can just send explosives into both those things, and it's cheaper and more effective.
This is because if it becomes easy, then it won't matter, and all the marketing, non-profit orgs, and everything else goes away, making it a non-problem.
While I am sure you will find people who will like these ideas and want them, they will have zero control.
At this point recycling is a marketing thing. And it’s more important that people think about the cause than solve the problem.
Get everyone to dump their crap into one pile and actually invest in industrial processes to sort the crap out.
Huge con: this is a complex problem with possibly poisonous/explosive ramifications if it goes wrong.
Huge Pro: If we can solve this issue, that is a society changing capability, forevermore.
Or until armageddon/robot overlords/singularity/zombie plague at least.
We are asked to do hundreds of little things that mildly inconvenience us in order to maintain some social contract. Sure they could be made easier/nonexistent with better technology, but I:
1) don't see why asking people to do their part is silly
2) don't see why this particular problem would be more frustrating than e.g. the others I've mentioned. I feel like they are all similar on the "effort" scale.
Although I guess I'd admit that asking people to sort recycling properly is very different from relying on them to.
If 10% of people don't put their cart away, then 90% of carts still get put away. If 10% put things into the recycling bin that shouldn't go there, then 100% of that batch of material becomes unsuitable for recycling process unless expensive remediation is done first.
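A quick back-of-the-envelope sketch of that asymmetry (the batch size is an assumption, just to show the shape of the math):

    # If each of n items independently has a 10% chance of being
    # mis-sorted, almost no batch survives clean. n = 50 is made up.
    p_missort = 0.10
    n_items = 50
    p_clean_batch = (1 - p_missort) ** n_items
    print(f"{p_clean_batch:.2%}")  # ~0.52% of batches avoid remediation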
But I don't think this is a good system of caring for our environment. If we cared properly, rather than half-arsing it, we'd have a proper industrial system with known outputs that we could improve upon. Instead we seem to have a "feel good you did your part, now forget about it" process. I guess it is shambling its way to something more, but it doesn't seem to be in a rush - kinda the same way the world agrees on acting on climate change but no one is in a rush.
That’s about 25% by weight of all that gets recycled in the country.
Metals, industrial scrap, and other sources are 75% of what gets recycled in the US.
We are blue collar businesses, with high labor costs. Many are exploring robotics actively for repetitive tasks. We have some robots in our process, looking for more when the ROI makes sense.
It may not be 100x, but there will be value in robots in recycling.
If plastic recycling was actually being done and was profitable, I don't think there'd be a Pacific garbage patch and PFAS in my heart right now.
# The Reality of Plastic Recycling:
- Low recycling rates: Only 9% of all plastic worldwide is actually recycled[1][2]. In the United States, the recycling rate for plastic waste is even lower, at just 5-6%[5].
- Limited recyclability: Most types of single-use plastic cannot be recycled in the United States. Only plastic #1 and #2 bottles and jugs meet the minimum legal standard to be labeled recyclable[1].
- Downcycling: The majority of recycled plastic is of inferior quality, resulting in downcycling rather than true recycling[2].
- Economic challenges: Recycling plastic is often not economically viable compared to producing new plastic[4].
# Industry Deception:
The myth of plastic recycling has been perpetuated by the plastic and oil industries for decades:
- Misleading labeling: The Resin Identification Codes (RICs) on plastic products were created by the industry to give the impression of a vast and viable recycling system[3].
- Disinformation campaigns: The fossil fuel industry has benefited financially from promoting the idea that plastic could be recycled, despite knowing since 1974 that it was not economically viable for most plastics[3].
- Lack of commitment: In 1994, an Exxon chemical executive stated, "We are committed to the activities, but not committed to the results," regarding industry support for plastics recycling[5].
# Environmental and Health Impacts:
- Pollution: Most plastic items labeled as recyclable often end up in landfills, incinerators, or polluting the environment[1].
- Health hazards: Plastic waste contamination affects soil, water, and air quality, potentially impacting human health[4].
# Conclusion
The concept of widespread plastic recycling is largely a myth propagated by the plastic industry to distract from the real issues of plastic pollution and to avoid regulation. While some plastic can be recycled, the current system is far from effective or sustainable. To address the plastic crisis, focus needs to shift from recycling to reducing plastic production and consumption.
[1] https://www.greenpeace.org/usa/the-myth-of-single-use-plasti...
[2] https://www.plasticsoupfoundation.org/nl/blog/recycling-myth
[3] https://www.earthday.org/plastic-recycling-is-a-lie/
[4] https://kosmorebi.com/en/plastique-le-mythe-du-recyclage/
[5] https://www.pbs.org/newshour/show/the-plastic-industry-knowi...
(Also there seems to be some kind of video auto-play/pause/scroll thing going on with the page? Whatever it is, it's broken.)
The trick was that the belt was too tight for an average human to put on with brute force, and disabling the tensioner or using tricks would require better-than-average mechanical skills, which their specially chosen 'random humans' lacked.
Reference video (saw your clip is robot-only, but the robot vs human video is more telling):
https://techcrunch.com/2023/12/07/googles-best-gemini-demo-w...
… but don’t disappoint shareholders…
It's an impressive demo, but perhaps you are misremembering Jarvis from Iron Man, which is not only far faster but effectively a full AGI system even at that point.
Sorry if this feels pedantic, perhaps it is. But it seems like an analogy that invites pedantry from fans of that movie.
Jarvis is AGI, yes, but is not what's being referred to here.
edit: it didn't.
Don't need the robot to smash a grape when we can use a fake grape that won't smash.
This video reeks of the same shenanigans as perpetual motion machine videos.
Time to think bigger.
I want to strap robot arms to paralyzed people so they could walk around, pick up stuff, and climb buildings with them.
Doc Ock style.
Isn't programming just text anyway?
Well in developing countries you can hire people to do house chores.
At least until the autonomous corporations really take over.
Ehh, no need - just let the LLM figure out what to build in your garage.
Can these spatial reasoning and end-effector tasks be reliably repeated, or are we just looking at the robotic equivalent of "trick shots" where the success rate is in the single digits?
I'd say Okura and Vinci are the current leaders in multi-axis multi-arm end-effectors and they have nothing like this.
e.g. the robot can put that particular fake banana in that particular bowl placed in that particular location. Give it another banana and another bowl and run for cover.
It's not just researchers. Engineers at Google get to spin up products and throw spaghetti at walls, too. Google has more money than God to throw around.
Google's ad dominance will probably never go away unless antitrust action by the FTC/DOJ/EU force a breakup. So they'll continue to lead as long as they can subsidize and outspend everyone else. Their investments compound and give an almost unassailable moat into deep tech problems.
Google might win AI and robotics and transportation and media and search 2.0. They'll own everything.
With a reasonable degree of success. In their last quarter (see https://abc.xyz/investor/earnings/) 25% of their revenue was non-ads, and that percentage has been consistently increasing.
That's just one of their many business units.
ChatGPT has a good chance to kill Google search -> kill Google.
ML models like ChatGPT rely on the open web for training data, particularly for information about recent events. But models like ChatGPT are horrible about linking to their sources. That means that sources that rely on ad revenue or even donations to exist will effectively disappear as ChatGPT steals all of their traffic. With no cash flow, the sites with current data disappear. With no new training data, ChatGPT stagnates.
ChatGPT is basically a parasite on the free and open web - taking content but not giving back.
This might have been true 10-15 years ago. I assure you it is not the case today.
Search is in real danger of becoming mostly obsolete. Ads aren't safe.
There’s not enough money paid to drivers in the world today to repay the investment in autonomous driving from direct revenues. It’ll be an expected feature of most cars, and priced at epsilon.
Autonomous driving and the attendant safety improvements will turn out to be a gift to the world paid for by Google ad revenue, startup investors, and later, auto companies.
> They’ll never make their money back.
I agree here, because the profit margin on taxi services is too low. Well, on an unrealistically long time horizon, like 50 years, they might make it back, but with surely much worse returns than investing that same money into US Treasury bonds.

> Autonomous driving is mostly software and will be commoditized very shortly after it works well.

I disagree here. To be clear, when we talk about AI/ML here, I separate it into a few parts: (1) the code that does training, (2) the training data, (3) the resulting model weights, (4) the code that does the inference. As I understand it, self driving uses a lot of inference. (Not an expert, but please correct me if wrong.) How can Waymo's software be "commoditized very shortly after it works well" if competitors don't have (2) and (3)? The training data that Waymo has is incredibly valuable. (Tesla and some Chinese car companies also have mountains of it.)
I assume Google is being very careful to keep the speeds well below the “oops, it took your jaw off” threshold.
But that all just poofs away in a year or two as inferencing hardware gets better/faster. And for many use cases, the slowness/awkwardness doesn't really matter as long as the job gets done.
"AI working in meatspace" was supposed to be hard, and its rapidly becoming clear that isn't going to be the case at all.
Yet again, ours proves to be a really boring dystopia.
Well, for now, at least.
I know who will be the first shown the door when the next round of layoffs comes: the guy saying "you can't make money that way."
When I see open roles at these companies I think the projects I'm going to work on in the future will be more and more irrelevant to society as a whole.
Anyway, this is amazing. Please delete/remove my post if it seems like this adds nothing to the conversation
The man behind the curtain here has an army of engineers, unlimited cloud nodes and basically has harvested all the data currently available in the world.
It doesn't get any better than this right now.
What's next? They'll ping you later on LinkedIn with this awesome idea that you need to make sure runs on a $1 microcontroller with a rechargeable battery that is supposed to last at least all day.
The actual scary stuff is the dilution of expertise. We contributed for a long time, sharing our knowledge for internet points (Stack Overflow, open source projects, etc.), and it has already been harvested by the AIs; anyone who pays for access to these services for tens of dollars a month can bootstrap really quickly and do what might have needed years of expertise before.
It will dilute our current service value little by little, but you know what, it has always been like this; it is just faster now.
In the meantime, learn to automate the automator, that's the way to get ahead.
Everyone has to collect their own data and pool it together or else there won't be any progress.
And it won't ever get any worse.
These were developed without the big bucks, so the tech has improved for smaller players at least.
But yes, in general, models won't get worse than they are now (or if they do, they won't stay that way.) At Google, search has been enshittified for business reasons, not technical ones.
What scares me more is the opposite of that: information scarcity leading to less accessible intelligence on newer topics.
I’ve completely stopped posting on Reddit since the API changes, and I was extremely prolific before[1] because I genuinely love writing about random things that interest me. I know I’m not the only one: anecdotally, the overall quality of content on Reddit has nosedived since the change and while there doesn’t seem to be a drop in traffic or activity, data seems to indicate that the vast majority of activity these days is disposable meme content[2]. This seems to be because they’re attempting desperately to stick recommendation algorithms everywhere they can, which are heavily weighted toward disposable content since people view more of it. So even if there were just as many long discussion posts like before, they’re not getting surfaced nearly as often. And discussion quality when it does happen has noticeably dipped as well: the Severance subreddit has regularly gotten posts and comments where people question things that have already been fully explained in the series itself (not like subtext kind of things, like “a character looked at the camera and blatantly said that in the episode you’re talking about having just watched” things). Those would have been heavily downvoted years ago, now they’re the norm.
But if LLMs learn from the in-depth posting that used to be prominent across the Internet, and that kind of in-depth posting is no longer present, a new problem presents itself. If, let’s say, a new framework releases tomorrow and becomes the next big thing, where is ChatGPT going to learn how that framework works? Most new products and platforms seem to centralize their discussion on Discord, and that’s not being fed into any LLMs that I’m aware of. Reddit post quality has nosedived. Stack Overflow keeps trying to replace different parts of its Q&A system with weird variants of AI because “it’s what visitors expect to see these days.” So we’re left with whatever documentation is available on the open Internet, and a few mediocre-quality forum posts and Reddit threads.
An LLM might be able to pull together some meaning out of that data combined with the existing data it has. But what about the framework after that? And the language after that? There’s less and less information available each time.
“Model collapse” doesn’t seem to have panned out: as long as you have external human raters, you can use AI-generated information in training. (IIRC the original model collapse discussions were the result of AI attempting to rate AI generated content and then feed right back in; that obviously didn’t work since the rater models aren’t typically any better than the generator models.) But what if the “data wells” dry up eventually? They can kick the can down the road for a while with existing data (for example LLMs can relate the quirks of new languages to the quirks of existing languages, or text to image models can learn about characters from newer media by using what it already knows about how similar characters look as a baseline), but eventually quality will start to deteriorate without new high-quality data inputs.
What are they gonna do then when all the discussion boards where that data would originate are either gone or optimized into algorithmic metric farms like all the other social media sites?
[1]: https://old.reddit.com/user/Nathan2055
[2]: I can’t find it now, but there was an analysis about six months ago that showed that since the change a significant majority of the most popular posts in a given month seem to originate from /r/MadeMeSmile. Prior to the API change, this was spread over an enormous number of subreddits (albeit with a significant presence by the “defaults” just due to comparative subscriber counts). While I think the subreddit distribution has gotten better since then, it’s still mostly passive meme posts that hit the site-wide top pages since the switchover, which is indicative of broader trends.
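Circling back to the rater-gating point above: mechanically it reduces to something very simple. A minimal sketch, where the function names and the score threshold are placeholders rather than any lab's actual pipeline:

    # Keep only AI-generated samples that external human raters score
    # highly; feeding unrated generations straight back in is the loop
    # behind the original "model collapse" results described above.
    def build_training_set(generated_samples, human_rating, threshold=4):
        return [s for s in generated_samples if human_rating(s) >= threshold]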
As people use AI more and more for coding and problem solving, the providing company can keep records and train on them. I.e., if person A solved the problem of doing X on product Y, then when person B tries to do the same, it can either already be trained into the model or the model can look up similar problems and solutions. This way the knowledge isn't gone or isolated; it's being saved and reused. Ideally it requires permission from the user, but price cuts can be motivating. Like all the main players today have free versions which can collect interaction data. With millions of users, that's much more than online forums ever had.
I'm just scared about a future where humans (say the next generation, kids 1-5 years of age right now) lack in-depth knowledge of almost everything and it's mostly AI writing low-level code, so there are no more "human experts."
We've already seen this happening, where Gen Z mostly interacts with the world using phones and struggles with older operating systems/desktops, just like older generations. AI is going to make that 10x worse going forward.
In 10 years, no trace of our current practices will remain in a recognizable form. It'll take longer than most of us think -- imagine how nonplussed Winograd and the rest of the SHRDLU-era AI gurus would have been to see how long it took to pull off the dice-matching trick in the video -- but when it does happen, it'll happen faster than we think. We're not yet at the tipping point, but it's close.
Isn’t that the ultimate goal?
But let's say it's accomplished. What will end up happening is that AI (should it work to the extent it's been hyped) will replace all the 'fun' jobs, and we'll be left with either no jobs (and no income), or the most menial physical labor imaginable.
We'll probably spend all day consuming media and socializing. That's the optimistic view, of course.
This includes reading, gardening, playing sports, learning to play the piano, etc. Of course some of that is consuming media, but I don't think people will just become couch potatoes.
Think of all the things "high society" accomplished in Victorian England because they had the time, energy and resources.
Definitely believe we are reaching some sort of global maximum in terms of intelligence in our current structure of society.
While the ERP is boring as hell compared to the creative coding results, the latter is a novelty and often has no intrinsic value.
Also, I see these videos and get deja vu of Boston Dynamics demos from years back. Not seeing anything new here, except that this is just an early beta version of Boston Dynamics robots backed by different models.
Also, the number of people around the demo set tells me a lot of supervision and retakes happened. I often do not trust such demos (experience from seeing what goes on behind the scenes, with cherry-picked takes being published).
Anyways, my point is, just chill out. I remember how AWS/GCP/Heroku etc. were supposedly eradicating the IT admin, but instead we now have dedicated DevOps and IAM specialist roles… and every day I see a 7:1 ratio of job vacancies for DevOps:SWE.
The math: well, basic signal processing means you know some algebra and differential calculus. Which is enough, unless, again, you want to prove theorems or invent new algos.
I think where the real work will be is taking these models and creating real products out of them.
LLMs were one breakthrough out of a multitude of breakthroughs in AI in the past decade. I think there only need to be a couple more breakthroughs in the next decade for it to come full circle.
Whenever I hear a naysayer open his mouth, it's like he's insulting a baby. Look, it can't talk! No way it can ever program!
Either way it's not just you. The people creating AI are also as a side effect creating the training data for AI to replace them. So no one is safe.
One key point we must first understand: coding is NOT software engineering or even programming! Writing code is the last bit, a minimal fraction of the job description (unless you are actually an indie dev or working for a consulting firm). The core tasks include actually untangling the numerous vague requirements, understanding the domain, figuring out the best approaches, performing various tests and checks, validating ideas, figuring out a cost-effective solution, preparing a rough architecture, deciding on an actual set of tech, and aligning a horde of people so that everyone is on the same page; only then does the coding start.
My IDE has been reading my mind via auto-suggestions for many years, and patterns/frameworks exist to reduce the amount of code I need to write. The issue is that, with these AI models, I just need to abuse my fingers slightly less. The other core duties are not yet solved and remain the same archaic procedures everywhere, in any and all serious roles.
And speaking of consulting firms, they are also clever and often have several implementations of the same stuff, which they can modify a bit and sell for big money.
So in the end, people who jump into the pit because they are afraid of the juju mask are the prime target of the juju mask. For the rest of us, life goes on, with minor bumps when the MBA comes to the desk and asks if it is possible to lay off a few people to jack up the stock price this quarter yet… while subscribing to that new agentic engineer product suite for double the fees of what the laid-off people actually cost, because their best friend at the golf club said the price will eventually become reasonable but the benefits are immediate.
People overestimate the changes that could happen within a couple years, and totally underestimate the changes that would happen in decades.
Perhaps it's a consequence of change having some kind of exponential behavior. The first couple years might not feel like anything in absolute terms, but give it 10 or 20 years and you'll see a huge change in retrospect.
IMHO, I don't think anyone needs to panic now, changes happen fast these days but I don't think things are going to drastically change in these ~2-3 years. But the world is probably going to look very different in 20 years, and in retrospect it will be attributed to seeds planted in these couple years.
In short I think both camps are right, except on different timescales.
Anecdotally, the amount of hype and interest has been growing exponentially. This will push progress to a maximal pace. The next 10 years will be significantly faster than the last 10 years.
If you are a professional, please tell me how much new code you write vs. performing other stuff (meetings, alignment, feature planning, system design, benchmarks, bug fixes, releases). For me, the ratio of coding to non-coding is around 10:90 in an average week. Some weeks, the only code I write is the suggestions on code reviews.
Consider the unmistakable trend: In the early 2010s, deep learning fundamentally transformed machine perception, image recognition, and natural language processing, setting new standards far surpassing earlier methods. By 2016, AlphaGo decisively defeated human champions, showcasing unprecedented strategic depth previously assumed beyond AI’s reach. Shortly after, AlphaFold solved the protein-folding problem, revolutionizing computational biology and drug discovery by rapidly predicting complex molecular structures. In parallel, generative adversarial networks (GANs) and diffusion models ushered in a new era of AI-driven image creation, enabling systems like DALL·E and Midjourney to generate strikingly detailed images, surreal artwork, and hyper-realistic visuals indistinguishable from human craftsmanship. AI’s ability to synthesize realistic voices and human-like speech has dramatically improved through innovations like WaveNet and advanced text-to-speech technologies, leading to widespread practical adoption in virtual assistants and accessibility tools.
Beyond imagery and voice, generative AI has also broken new ground in music composition, where models now produce compositions so sophisticated they are difficult to distinguish from professional human creations. Transformer-based models like GPT-3 and GPT-4 represent a seismic shift in language generation, enabling nuanced conversation, creative writing, complex reasoning, and contextual understanding previously believed impossible. ChatGPT further pushed conversational AI into mainstream utility, effortlessly handling complex user interactions, problem-solving tasks, and even creative brainstorming. Recent innovations in AI-driven video generation and editing—demonstrated by advancements like Runway’s Gen-2—indicate rapidly expanding possibilities for automated multimedia creation, streamlining production pipelines across industries.
Moreover, reinforcement learning breakthroughs have expanded significantly beyond gaming, improving complex logistics operations, real-time decision-making systems, and autonomous navigation in robotics and vehicles. The impressive capabilities demonstrated by autonomous driving systems from Tesla and Waymo further underscore AI’s advancing proficiency in real-world environments. Meanwhile, specialized large language models have emerged, demonstrating near-expert performance in fields such as law, medicine, and finance, streamlining tasks like legal research, medical diagnostics, and financial forecasting with unprecedented accuracy and efficiency.
These advances are not isolated phenomena—they represent continuous, accelerating progress. Today, AI assists with summarization, automated requirement analysis, preliminary architecture design, and domain-specific problem-solving. Each year brings measurable improvements, steadily eroding the barrier between supportive assistance and true cognitive engagement.
Your recognition of AI's limitations today is valid but dangerously incomplete. You fail to account for the rapid pace at which these limitations are being overcome. Each "core task" you've identified—domain understanding, requirement analysis, nuanced decision-making—is precisely within AI's increasingly sophisticated reach. The clear historical evidence signals a near-inevitable breakthrough into human-level reasoning within our professional lifetimes.
In disregarding this aggressively upward trendline, you're making the same grave error committed by those who previously underestimated transformative innovations like personal computing, the internet, and mobile technology. Recognizing current limitations without acknowledging clear indicators of impending revolution isn't merely shortsighted—it's strategically negligent.
Thus the rabid propaganda is great evidence that these technologies are not revolutionary.
Good catch. History is full of revolutionary technologies that succeeded only because everyone kept them a total secret.
And as for hype—what else could possibly spread information besides media? In the past, it was newspapers, expositions, and public demonstrations that built excitement. Today, it’s social media amplifying discussions. But surely, widespread interest in a technology today must mean it’s all just empty propaganda, because, as history shows, real groundbreaking innovations only succeed when no one is paying attention.
That’s why the iPhone was a total flop—Apple really should have just quietly released it in a few stores and hoped people figured it out instead of, you know, making a big deal about it.
I'd be more worried being a junior front-end mobile or web dev.
It's like we automate their creation, and the users automate hiding / clicking them.
The hardest part is data collection.
27 years later, 99.99% of trips are driven by humans.
Real-world robotics takes multiple decades to pan out. These demos are just that: demos. What you are seeing will not remotely impact your life before the 2050s, if ever.
What you should be worried about, however, is you (not your job) becoming irrelevant if you don't learn to write firmware using state-of-the-art AI tooling.
At the minimum: learn to work with Cursor (or equivalent). Make sure you work at a company that uses state-of-the-art AI tooling.
If you want to go further: learn to code (e.g. in python). Take undergrad/grad level courses in math, statistics and fundamentals of deep learning.
And FFS, chill.
1. If you can't beat em, join em - get a masters in ML or something and work at a higher level of the stack. I know several folks who have pivoted this way and are happy.
2. Double down - there will always be layers below the AI (drivers, silicon, etc). Sure, ChatGPT will continue chipping away at how much coding we do day to day, but at the end of the day, good luck to an AI trying to debug I2C bus clock stretching issues IRL. And you can bet that these robotic actuators have plenty of hardware issues to debug.
3. Pick an area of tech which is relatively immune (medical/automotive/aerospace) due to regulatory and other barriers to AI entry. You don't necessarily need robotics/CV experience to do this.
4. GTFO (I say this with no judgment) - and work in a different industry which is less likely to be automated. It would not be the most lucrative option but is certainly an option.
I understand that many people don't live in America and don't know how to use a coffee maker. That is 100% irrelevant. There is a frustrating tendency in AI circles to conflate domain knowledge with intelligence, in a way that invariably elevates AI and crushes human intelligence into something tiny.
[1] The hard part would be psychological (e.g. keeping the chimp focused), not cognitive. And of course the chimp would need to bring a chimp-sized ladder... It would be an unlawful experiment, but I suspect if you trained a chimp to use a specific coffee maker in another kitchen, forced the chimp to become addicted to coffee, and then put the animal in a totally different kitchen with a different coffee maker (but similar, i.e. not a French press), it would figure out what to do.
It also excludes corner cases like "what if they don't have any filters?" Should the robot go tearing through the house till it finds one, or do nothing? But what if there were some in the pantry — does that fail the test? There are all kinds of implicit assumptions here that make it quite hard.
As for the point of corner cases being hard - I mean that's the point here, isn't it?
But sure, without proper compensation a lot of people would probably just say "I can't do it" as a way of avoiding the task.
Wasn't that easy?
Hope to share the details here soon.
As a matter of fact, they may very well end up being the last bastion.
This is a great achievement and I'm not underestimating the work of the people involved. But similar videos have been put together by many research labs and startups for years now.
I feel like Google's a bit lost. And Sundar's leadership has not been good for this, if we're honest.
GOOG is around the same price as it was in 2022, which means the AI wave passed through them with zero effect. With other tech companies doubling/tripling their market cap during this time, Sundar really left a trillion dollars of unrealized value on the table (!); also consider that Google had all the cards at one point. Quite mediocre, imo.
Sure, there’s almost nothing for you to buy from them right now from these efforts, except to the extent that it contributes behind the scenes to their ad business. That might be a problem for them, but not for the rest of us that benefit from the publications, software and services.
Even after the massive total market correction in the last few weeks, the earliest that GOOG was the same price as today is not even a full year ago. In fact, it's up 90% since 2022.
Any stock market source would tell you GOOG was ~140 USD at the start of 2022. Today it is ~170 USD. A 20% increase over three years, about the same rate as inflation and S&P.
This is extremely trivial to verify. Was this written by a GPT bot?
It's just the up and down of the entire market (and these big techs dominate S&P 500). I don't think that actually indicates anything.
You don't have to release a profitable product, but to compete over the next several decades you are going to need to own valuable land in the remote territories where patent wars are being fought today. I'm guessing Google's meta-strategy is a type of patent colonialism.
I see, are you a VC in the valley?
I have not read it yet (it's being released in May), so can't give a true recommendation, but I did preorder it. The state of art is changing fast, and the book is expected to capture at least some snapshot of it.
And many popular robotics demos are either controlled by humans or scripted, so it's useful to have the "Autonomous" label as well to clear up confusion. For example, I know a lot of people who thought the recent Unitree G1 demos were autonomous.
These models are nowhere near that for now, but I'll be watching to see if the big investments into synthetic data generation over the next few years get them closer.
They can be competent enough to cook meals in a controlled environment (one built for machines and specific dishes) without ever being able to replicate the same in a human restaurant.
There are a few companies that do robotic cooking, and it has its challenges due to the above reason. I am not aware of the cost problem though.
How cool would it be if this replaced someone at Subway, though?
1) tweak something like that to increase the likelihood of certain situations (abundance of a particular resource/object; frequent geographical feature), and
2) instruct your embodied AI to control a "player" model (to whatever degree of accuracy in articulation/mobility) to wander and perform certain types of tasks.
Being able to order food and handle bureaucracy in these languages while speaking only English would be amazing. This seems like a simpler problem than tackling robots in 3D space, yet it’s still unsolved.
Perhaps it escapes the brightest minds at Google that people can grasp things with their eyes closed. That we don't need to see to grasp. But designing good robots with tactile sensors is too much for our top researchers.
All the best ideas are tried repeatedly until the input technologies are ripe enough.
But perhaps a great benefit of tactile input is its simplicity. Instead of processing thousands of pixels, which are subject to interference from changing light conditions, one only has to process perhaps a few dozen tactile inputs.
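As a rough illustration of that input-size gap — the gripper API and the numbers here are invented, just to show the contrast:

    # A camera frame is ~300k light-dependent values; a gripper pad
    # might expose a few dozen pressure readings. Gripper API invented.
    N_PIXELS = 640 * 480          # per frame, sensitive to lighting
    N_TAXELS = 24                 # tactile cells, lighting-independent

    def close_until_contact(gripper, threshold=0.8):
        """Squeeze until any tactile cell reports firm contact."""
        while max(gripper.read_taxels()) < threshold:
            gripper.step_close()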
Ex-Googler so maybe I'm just spoiled by access to non-public information?
But I'm fairly sure there's plenty of public material of Google robots gripping.
Is it a play on words?
Like, "we don't need to see to grasp", but obviously that isn't what you meant. We just don't need to if we saw it previously, and it hadn't moved.
EDIT: It does look like the video demonstrates this, including why you can't forgo vision (changing conditions, see 1m02s https://youtu.be/4MvGnmmP3c0?t=62)
Those are realistically the 'natural' developments in the domain knowledge of Robotics/Computer Science.
However, what GP (I think) is raising is the blind spot that robotics currently has around proprioception and tactile sensing at the end-effector as well as along the kinematic chain.
As in, you can accomplish this with just kinematic position and force feedback and visual servoing. But if you think of any dexterous primate, they will handle an object and perceive texture, compliance, brittleness, etc. in a much richer way than any state-of-the-art robotic end-effector.
Unless you devote significant research to creating miniaturized sensors that give a robot an approximation of the information-rich sources in human skin, connective tissue, muscles, and joints (tactile sensors, tensile sensors, vibration sensors, force sensors), that blind spot remains.
For the inverse of the robot problem: younger me, spoiled by youth and thinking multitouch was the beginning of a drumbeat of steady revolution, distinctly thought we were a year or two out from having haptics that could "fake" the sensation of feeling a material.
I swear there was stuff to back this up...but I was probably just on a diet of unquestioning, and projecting, Apple blogs when the taptic engine was released, and they probably shared one-off research videos.
Blind people can grasp.
Stretch goal: small kids can walk on their hands.
Sorry, but I just burst out laughing at my own comment, when I considered the technical difficulties in trying to teach a robot to handle the change of context needed to balance on its hands, rather than its feet, let alone walk around on them. Ahaha.
Also, I vaguely remember similar demos without the AI hype. Maybe it was from DeepMind, or another upstart back in 2015.
Fuck it, make the arms big enough and it can do laundry, load/unload dishwasher, clean up after cooking/eating.
I can finally see this happening. Probably Tesla first tho.
"We consulted with ourselves and decided that we're a-okay!"
I expect they are more honest than the Tesla men-in-suits debacle, but my trust is low.
What do we know to be the facts?
And you can train the model by yourself without relying on cloud services; a sketch of what that training loop boils down to follows the links below.
Some URLs to get you started:
https://huggingface.co/lerobot
https://github.com/huggingface/lerobot/blob/main/examples/10...
the latest project: Le Kiwi, using the SO-ARM100 arm:
https://github.com/huggingface/lerobot/blob/main/examples/11...
the super advanced HopeJR shoulder + arm + hands:
https://github.com/TheRobotStudio/HOPEJr
cool video: https://www.youtube.com/watch?v=VKHfy2vACyw
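As promised, here's what the core of "training it yourself" can look like — a minimal behavior-cloning sketch in plain PyTorch, not the actual lerobot API, with placeholder observation/action dimensions:

    # Minimal behavior-cloning sketch: imitate (observation, action)
    # pairs recorded while teleoperating the arm. Not the lerobot API.
    import torch
    import torch.nn as nn

    OBS_DIM, ACT_DIM = 12, 6      # placeholder joint-state/target sizes

    policy = nn.Sequential(
        nn.Linear(OBS_DIM, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, ACT_DIM),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def train(demos, epochs=10):
        for _ in range(epochs):
            for obs, act in demos:   # each a torch.Tensor
                loss = nn.functional.mse_loss(policy(obs), act)
                opt.zero_grad()
                loss.backward()
                opt.step()

The real libraries wrap this in dataset loaders, camera encoders, and fancier policy classes, but the loop itself fits on a laptop GPU.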
Not being a high profile target myself, I'd rather take that risk and see where it goes. Unfortunately it's the high profile targets themselves that make the decisions, so after the first few incidents I figure there will be this whole mess where they try to clamp down on access to such things without sufficient forethought.
If you want to start a list of all the bozos Google wasted oodles of money on, you're going to be here a while.
I would love to experiment with something like this, but every time I try to figure out what hardware to do it with, there's a thousand cheap no-name options and then, bam, $30k+ for the pro ones.
However, when I looked at the BOM I was surprised that the actual arm they use is an incredibly expensive off-the-shelf arm: https://www.trossenrobotics.com/viperx-300
For a much cheaper option take a look at https://github.com/huggingface/lerobot (this is an AI training library/framework) which uses the SO-100 arm https://github.com/TheRobotStudio/SO-ARM100 (one arm is $123). See: https://www.youtube.com/watch?v=n32OmyoQkfs
There's also the Parol6 arm, which is more performant than the SO-100, but more expensive: https://source-robotics.com
Uhhh, I mean that's nice, but how about: "That's why we will never sell our products to military, police, or other openly violent groups, and will ensure that our robots will always respond instantly to commands like, 'stop, you're hurting me', which they understand in every documented human language on earth, and with which they will comply regardless of who gave the previous command that caused violent behavior."
Who is building the robot cohort that is immune - down to the firmware level - to state coercion and military industry influence?
Now, clean up the kitchen.
I assume Google made the choice of selling the "brain" for any "body," whoever develops it. Something like Android.
1.) Has cutting edge in house AI models (Like OpenAI, Anthropic, Grok, etc.)
2.) Has cutting edge in house AI hardware acceleration (Like Nvidia)
3.) Has (likely) cutting edge robotics (Like Boston Dynamics, Tesla, Figure)
4.) Has industry leading self driving taxis (Like Tesla wants)
5.) Has all the other stuff that Google does. (Like insert most tech companies)
The big thing that Google lacks is excitement and hype (Look at the comments for all their development showcases). They've lost their veneer, for totally understandable reasons, but that veneer is just dusty, the fundamentals of it are still top notch. They are still poised to dominate in what the current forecasted future looks like. The things that are tripping Google up are relatively easy fixes compared to something like a true tech disadvantage.
I'm not trying to shill, despite how shill-like this post objectively is. It's just an observation that Google has all the right players and really just needs better coaching. Something that isn't too difficult to fix, and something shareholders will get eventually.
Google as a whole has a long history of not being able to successfully build great products out of great tech. That seems wrong from looking at search, Gmail, Maps*, Docs*, etc., but I think these are cases where a single great insight or innovation so dominated the rest of the product qualities that it made the product successful on its own (PageRank, AJAX, realtime collaboration). There have been so many other cases where this pattern didn't hold, and even though Google had better tech, it wasn't so much better on one axis as to pull the whole product along with it.
That's the problem I see here. Maybe they have a better model. Can they make it a better product? OpenAI and Anthropic seem to ship faster, with a clearer vision, and more innovation with features around the model. Is their AI hardware acceleration really going to be a game changer if it's only ever available in-house?
I do believe in Waymo, but only because they've been incrementally investing and improving it for 15 years. They need to do that with all products, instead of giving up when they're not an instant hit.
*Maps, Docs, and YouTube were acquired with their key advantages in place, so I wonder how much they even count.
Better management with long-term thinking would utilize Google's enormous base of talented engineers far better.
Larry Page was making the rounds when all of this AI hype started. He seemed to have a much more aggressive stance, even ruffling feathers about how many hours Google employees should be working to compete in AI. And there is obviously Demis Hassabis who is the most likely contender for a replacement.
I doubt it is an easy position to fill. But Pichai has presided over this lackluster Google. Even if he isn't strictly to blame, I am surprised he hasn't been replaced.
I guess it is easy to view it from my own perspective, one tinged with a hope for invention and innovation. But the market probably loves the financial stability Pichai has brought to the table and doesn't care about the flaws I see.
And I'm not sure why I have rose-tinted glasses for Nadella. I believe MS has been doing well financially (not something I've studied) while also supporting things I believe are valuable (e.g. VS Code, GitHub, TypeScript). Maybe I just wish I felt the same kind of balance in Google.
That’s such a refreshing change from the “DIE OPEN SOURCE DIE” attitude that Gates/Ballmer had.
I also love GitHub, TypeScript, and VSCode. These have become the foundation of my development toolset. That was something Gates did well, and Ballmer gave lip service to (“developers! developers!”) but for me only recently has Microsoft actually been maintaining good quality developer tools again.
That’s where my goodwill comes from anyway.
Google makes a better Office Suite (Gmail, Docs, Maps), ironically. But it’s hard for me to get too excited about that. It’s been pretty stagnant for 10 years.
Just off the top of my head, under Tim Cook the company managed to:
* Propel smartwatches as a brand new product category into the mainstream and be the leader in that category.
* Propel AirPods as a brand new product category into the mainstream (and be the leader in that category as well).
* Smoothly transition to ARM (aka Apple Silicon) with great success.
* Various behind the scenes logistical/supply-chain achievements (which makes sense, as Tim Cook is the logistics/supply-chain guy by specialization).
None of those things were simple or uncontroversial. In fact, I remember the pushback people and the press had against smartwatches and airpods, calling Apple washed out and Tim Cook a bean-counter. And these are just the largest examples off the top of my head, there are definitely more. However, Google doesn’t seem to have even a singular product win of such magnitude in the past 10 years.
In the meantime, what did Google do productwise? Catching up on the cloud compute game to AWS (while nearly killing it due to their PR nightmare announcements during 2019-2020 iirc), killing their chat app that finally managed to gain enough mainstream traction (Hangouts) and then rebranding/recreating it at least twice since then, redoing their payments app multiple times (gWallet vs gPay vs whatever else there was that I forgot), etc.
I am trying to be generous here, and of course Apple had their misses too (the butterfly keyboard on 2016-2019 intel macbooks, homepod is kinda up in the air as a product category, mac pro stagnating, etc.). But I legitimately cannot think of a single consumer product that Google knocked out of the park or any that wowed me.
This sucks, because I know for a fact it has nothing to do with their engineers lacking the skill to execute on a new innovative product (as evident by Google being early to the AI/transformers era and being fundamental to what is happening with AI right now). Google has all the technical prerequisites to succeed. But the product and organizational strategies there are by far the most cartoonishly bad I’ve ever seen for such a company.
I don't want to blame it on Sundar, because I cannot say for sure that the root of this dysfunction is at his level. I just know it is on some level between org directors and Sundar, but not where exactly. I just know that killing off a whole org working on a truly innovative AR product, only for most of those people to switch to Meta and continue working on an improved version of the exact same thing (the Orion glasses), wasn't the move. And I just know that having 5+ major reorgs in one year for a single team is not normal or good.
TLDR: apologies for the long rant, but the short version is that Google under Sundar has absolutely zero sense for internal organization management or delivering products to consumers. And comparing him to Tim Cook (who has been the CEO through the AirPods/Apple Watch/ARM macbooks era) is unfair to Tim Cook and is based purely on the public image.
Vision Pro might succeed or fail, and that’s fine. I tried it, and it is clearly a significant step towards the future, but I am not sure of it becoming a successful product at its current price point and in its current state.
I am not judging CEOs or companies negatively for taking ambitious product bets and not always striking gold on those bets. I am judging them negatively for not having any product wins and not taking any ambitious product bets.
Imagine being so replete with cash that after paying all your costs, all your salaries, all your R&D - you still can't find a way to spend 200 billion, so you threw a chunk of it away as tax and put the rest in the bank.
The price of a share should be utterly irrelevant to them.
Take chat, one of Google's biggest fumbles. They had a good thing with Gtalk. Really screwed things up with Hangouts (thanks, Vic!), added the weird Allo to the mix, almost turned things around, and then brought in Chat to compete with Slack as opposed to AIM...WhatsApp.
If they had just incrementally invested in chat, even if they swapped out back ends, they could have kept most of their user base, maybe even have grown it. Gchat was pretty popular, even during the rise of Facebook Messenger.
But they screwed around with the public-visible product side of things too much, and revealed their tech stack and org chart as product changes. There was no product-first, continuity-oriented planning.
Many of the decisions companies make are to ensure the cow they are currently milking very efficiently does not die. This is bad for the rest of us, especially if they place barriers to innovation.
Maybe that's the problem: there is no one rallying individual for Waymo. They should just spin it off, make it an independent private company, and retain % ownership.
I somehow feel Google would be way better if it were run like Berkshire: the CEO just focuses on capital allocation and lets the managers do their jobs in their respective companies - YT, Waymo, search, cloud, DeepMind.
I'm not sure that culture can dissipate in Google at this juncture.
Waymo is all about partnerships with carmakers.
> they have blogged about their cutting-edge protobuf tsunami capabilities.
Not sure if you recall the blog post url or title, but I'm curious to read more.
Do you have a link to this?
There's far too much value and scale in the company and they can't even focus their energies appropriately.
YouTube is the most valuable media property in the world. As a standalone company, it would still outperform Netflix on the basis of ads alone.
The monopolistic stuff Google is pulling off with Chrome/Android/Search is unfathomably market distorting, so those business units alone could/should be pulled apart. The tech sector would probably be better off if YouTube, Waymo, and GCP/AI efforts were similarly split up.
Also there is MS, which wants to pay for search-engine placement, and that's a fact.
They'd just have to watch out for similar antitrust action.
I don’t think the same logic applies to Google Docs as it does to YouTube. The original companies behind Docs, Sheets, and Slides were practically unknown, and Google deserves credit for their evolution, features, and clear vision. Developing an office suite might be “easier” from a vision standpoint since the category already exists, whereas marketing something like Gemini Robotics is an entirely different challenge. Just my two cents.
They had a thinking model way back, which was pretty good, with clean CoT and performance close to R1. But it never got any marketing whatsoever.
Veo 2 has really good performance too, yet its rollout is so slow that Chinese competitors are getting all the attention, because they are just easier to access.
It feels to me that Google is reliving its experience with messengers where you they have multiple competing roadmaps from different parties. The execution is disoriented and slow.
They will have to catch up in 2025, the fact grok is this good in one year is a wake up call to everyone, especially Google.
If they failed to do so, Gemini is going nowhere, it already has no tractions outside of Google, nobody’s first instinct when it comes to AI is Gemini
Doesn't it ring a bell that very few, if any, of the "AGI achieved" people seem to have backgrounds in or exposure to classical NLP, Google, or cultures that make heavy use of IMEs? To me the situation looks like Googlers have "seen that trick" before, and are doing the bare minimum to keep the company from losing presence in this AGI hype storm.
They have Gemini, rolled out AI in Workspace, and I believe they still have the most capable million-token-context model.
ChatGPT is already among the top 5 websites people visit. It is behind Google, but it will eat into Google's business very soon. That will happen regardless.
It's an artifact of their size -- no large corporation has vision or direction. Best they can aspire to is "stay the course". It's just something that inevitably happens as companies grow and age.
It won’t go anywhere; Windows is still a thing.
But ChatGPT is a fundamental threat to its search business. It replaces Google for me 50% of the time.
It is the natural-language search engine people have been trying to build.
You might say: yeah, but I can spot those mistakes. But can you really? I showed my fifth-grade son the result of asking whether hippos are intelligent, and the absurdity of the answer didn’t leap out at him. Now consider something more subtly wrong, like an invented precedent in an AI-generated legal brief, or a non-existent citation, or a citation that doesn’t support the claim, and it’s a disaster.
For sure, hallucinations will always be there, but I don't think they will hinder its takeover; the usefulness trumps the shortcomings.
Yesterday I tried asking ChatGPT "Can an Amazon L6 software engineer afford a house in [location]", without explicitly using the search mode. It went to levels.fyi to look up salary and redfin to look up housing price (exactly how I would have done it myself), and gave me a reasonable answer that agrees with my own analysis, and is definitely much faster than clicking things around myself.
OpenAI is fading away fast. Plus, all the major leaders have left and Microsoft is leaving too; I don't feel its future is promising anymore.
Maybe the company and their business model are doomed to fail, but I'm grateful for what they enabled so far.
On other similar products, like Google's NotebookLM and OpenAI's GPT 4.5 Research Mode: both products are awesome.
> VideoFX isn't available in your country yet
Maybe that's why?
I still maintain the reason they're playing catch-up with everyone else wrt. LLMs is that their Gemini models were not available in the EU until recently. Back when they were doing their releases, years ago, like everyone else here I took one look, saw the "not available in your country" banner, and stopped caring at all.
That's why nobody knows about it.
Obligatory overview of things Google has killed, because it's easy to forget some of the gems:
Sure it could've used a bit of a facelift and some other tweaks, but they have a history of launching new, half-baked products instead of just maintaining the existing ones.
I even messaged from my GTalk to my Facebook as a test, which worked because both were Jabber. Both companies closed both services off to anyone else. Sadly.
It has a problem executing on that tech to create great products. It has a real problem with canning any project that doesn't have a billion users within a year.
Honestly, they fail to understand how lucky they got with DoubleClick, and culturally their entire project-evaluation criteria are based on the assumption that they can do another computer-science rain dance to make it rain ads-level cash.
Maps used to be the absolute best, and now I frequently get baffling driving directions in a major US metro area. No improvements within the last 10 years. New Pixel phones are worse than the latest Samsung. A huge lead in AI absolutely totaled; their investment in Anthropic is their only hope. Inference HW accelerators that no one uses.
They are becoming like M$ - I expect M$ to be this terrible at product development - but at least M$ is fantastic at making money despite terrible products.
Google has allowed the search experience to slide so much that people would rather use some slow-ass unreliable chatbot. Are they really losing the war on SEO, or have they decided that the internet-of-shit (i.e. affiliate marketing) is more valuable?
I am seeing bot-generated reviews more and more often, and when I look at what happened to search, I don't have a lot of faith in Google to do a better job with maps. But I sure hope they do, because I'm with you - I really do rely on Maps reviews.
So while they have a bunch of cool tech on the possibility horizon the only thing the market cares about is the ability to make money and there's some uncertainty on that front.
I think what keeps Google up at night is knowing that their ads business which pays all of the bills could be upended by regulation or by disruptive consumer AI of some kind and they’d then have approximately nothing in terms of revenue.
Due to circumstances, they have a different business model from OpenAI, Claude, Grok, etc.
Open-Claude-Grok: "our AI is so cool, AGI next year" but we are losing money so invest in us at a $crazy bn valuation
Google: We are swimming in money from ads so no need to hype anything. If anything saying we will dominate AI as well as search, email, video, ads, browsers, phones etc would just get us broken up. So advance quietly.
Gemini has been one of the most cost-efficient models. Probably this is exactly what Google needs for productization.
Not sure if this is "vision" or "management" or whatever, but it feels like they're just self-shackling in every possible direction. There are something like 50 different teams involved in a major launch, and each adds some process, infra requirement, review, integration, or whatever, in good faith. Imagine how much time, effort, and compromise you'd need to appease all of them.
I think the recent memo from Sergey shows that the leadership finally acknowledges this problem. Solving it is a different story, of course, but a long-standing disconnect between ICs, management, and leadership has been the culprit here, and at least some awareness might not hurt.
Based on P/E the US stock market is overvalued. So I would be careful with "undervaluation". Most undervalued tech stocks are probably in China.
Google has also lost a lot to LLMs. I use Perplexity now 50% of the time where I would have used Google. I also read a lot about "degoogling" and "going off Amazon". My impressions of both companies are not the best. I have a Gmail account I never got access back to, even with the right password. And Amazon defrauded me of 40 USD: they claimed in a chat that they would reimburse express shipment after they f'ed up, but then did not and called it a "misunderstanding".
I have somewhere a list of the most valuable companies, and it changed every decade. So past performance is no guarantee of future performance :-)
Bust is unlikely with an ETF. They rebalance without you having to do anything. Most tech might come from China in the future.
https://finance.yahoo.com/quote/CQQQ/
It is at the same price it was at in 2019. You can't rebalance them because the techs all boom and bust together, so there isn't any point. Still, if you think Chinese tech is going to take off big soon, now is a great time to get in (or at least use it as an emerging-market hedge).
But yes, CQQQ is exactly the one I am going to buy.
They foster the confusion themselves.
I once tried to rebrand an in-house, purely dev facing product. I failed.
The only thing you left out of this analysis is their valuation. The market values Google at $2.05T (just over $2,000,000,000,000) which is 21 times their earnings (net profit). They are valued at $250 per person on Earth while selling, annually, $43.75 per person on Earth (sales) of which $12 per person is their profit.
How much would you pay to own a golden goose laying $12 in gold per year? Like, $250? If so you are the proud buyer of Google right now. (There is a buyer on every sale of every stock and this is the price they are paying right now.)
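If you want to sanity-check that arithmetic, here's a minimal Python sketch. The world-population figure of roughly 8.2 billion is my assumption; the market cap, P/E, and per-person sales figures come from the comment above:

```python
# Back-of-the-envelope check of the per-person valuation figures above.
WORLD_POP = 8.2e9         # assumed world population, ~2024 (my assumption)
MARKET_CAP = 2.05e12      # $2.05T market cap, per the comment
PE = 21                   # price-to-earnings ratio, per the comment
SALES_PER_PERSON = 43.75  # annual sales per person, per the comment

earnings = MARKET_CAP / PE              # implied annual net profit
revenue = SALES_PER_PERSON * WORLD_POP  # implied annual revenue

print(f"implied earnings:  ${earnings / 1e9:.0f}B/yr")      # ~ $98B
print(f"implied revenue:   ${revenue / 1e9:.0f}B/yr")       # ~ $359B
print(f"cap per person:    ${MARKET_CAP / WORLD_POP:.0f}")  # ~ $250
print(f"profit per person: ${earnings / WORLD_POP:.0f}")    # ~ $12
```

The numbers hang together: $250 of market cap buying roughly $12 of annual profit per person is just the 21x P/E restated.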
Apple (AAPL): 34.07
Microsoft (MSFT): 35.07
Amazon (AMZN): 36.69
Alphabet (GOOGL): 21.82
Meta Platforms (META): 24.49
Nvidia (NVDA): 41.33
Tesla (TSLA): 87.87
From this perspective, Google and, to a lesser extent, Meta stand out as being valued quite conservatively.
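To make that comparison concrete, here's a small sketch (tickers and ratios copied from the list above) converting each P/E into an earnings yield, i.e. the annual profit you buy per dollar of share price:

```python
# Earnings yield = 1 / (P/E): annual profit per dollar of share price.
pe_ratios = {
    "AAPL": 34.07, "MSFT": 35.07, "AMZN": 36.69, "GOOGL": 21.82,
    "META": 24.49, "NVDA": 41.33, "TSLA": 87.87,
}
for ticker, pe in sorted(pe_ratios.items(), key=lambda kv: kv[1]):
    print(f"{ticker:6s} P/E {pe:6.2f}  earnings yield {100 / pe:.1f}%")
# GOOGL comes out around 4.6% vs MSFT's ~2.9%: the market prices a
# dollar of Microsoft earnings roughly 60% higher than one of Google's.
```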
Do I think Microsoft is performing 50% better than Google? Not really, no.
On the tech side they are excellent, but on the management/business/corporate culture side they have repeatedly proven that they are much less competent than pretty much everyone else.
Fortunately for them, they have a very prolific cow to milk with their ads business, and that's where they get their valuation from; but their tech is legitimately undervalued, because they have repeatedly shown that they don't know how to convert it into business.
There is nuance. Saying A about B and being wrong does not imply that saying A about C means you're wrong. It is indeed possible to lose focus on revenue and die. But it is also possible to focus too much on revenue and die. It is unclear whether Google will achieve anything from its "pure research" investments, but they certainly have room to try, and I personally am glad they are doing so.
They were profoundly wrong, but not about Bell Labs' ability to create value from their research. That, they were absolutely dead-on about. AT&T and Bell Labs were absolutely awful at reading the room about what their technology could do and how it could be monetized.
Some of that was just packaging things the right way, and some of it - like charging absolutely insane license fees for UNIX in the 80s and 90s during the beginnings of the personal computing revolution - was because of lazy execs who didn't want to really put in any effort. Either way, I'm not using a Bell Labs LabsBook Pro to write code for a UNIX OS, and I'm not using Bellgle to search for information. AT&T ultimately thought the best way to create value from Bell Labs was to sell that division.
We're in a long, hot AI summer, but we've had winters too. Who knows which hemisphere they're in at Google right now.
In addition to the other comment mentioning Boston Dynamics, they also employ a lot of folks who were formerly at the Open Source Robotics Foundation(?) (OSRF) (it's more complicated than that), which is behind the ROS1/ROS2 frameworks that are widely (though not universally) used. They also have an internal division or whatever, Intrinsic Robotics (or is it Intrinsic AI? too lazy to check). Plenty of smart people that I've met are involved there!
But I remain skeptical of the top level comment's take, given the lack of any robotics product execution of note by Google for a very long time now.
Discussing with a coworker yesterday, they also pointed out the distinction I'd missed, which is that there are separate robotics groups doing research (Gemini) and applied work (Intrinsic? physicalintelligence.company; these are under the Alphabet or X projects umbrella, I infer? Really haven't paid attention.)
Google Cloud is decent, again in my opinion, because they can more or less copy the product vision from AWS and focus on the technical excellence.
When were you last excited to use a Google product or service?
Part of the problem is also their internal incentives, which lead to lots of products being retired way too soon, leaving behind a lot of users and hurting their reputation badly.
They definitely have pricing power and also a large stake in Anthropic, so I'm not worried about them.
Even extraordinary products are rarely going to do that. Their AI products could be a huge success, and still not significantly change how valuable the company is.
> Google lacks is excitement and hype
People (me included) used to look up to Google and the projects they had: 80/20 work/project time, moonshot projects, all the Google perks, etc. It felt like the place to be. Fast forward 10 years, and I just want antitrust to shatter it into smithereens.
> that veneer is just dusty
The problem is systemic, affecting the whole org from top to bottom, and especially the top. They either get a new CEO who turns things around or become another IBM.
They are a roadblock to a lot of the startups backing the current administration
All of the technology in the world doesn't make up for that.
Why work hard to be a part of that?
If scifi authors aren't keeping up it's hard to expect the rest of us to. But the macro and micro economic changes implied by this technology are huge. Very little of our daily lives will be undisrupted when it propagates and saturates the culture, even with no further fundamental advances.
Can anyone recommend scifi that makes plausible projections around this tech?
This is largely a function of what science fiction you read. Military SF is basically about retelling Horatio Hornblower stories in space, and it has never been seriously grounded in science. This isn't a criticism, exactly.
But if you look at, say, the award-winning science fiction of the 90s, for example you have A Fire Upon the Deep, the stories that were republished as Accelerando, the Culture novels, etc. All of these stories assume major improvements in AI and most of them involve breakneck rates of technological change.
But these stories have become less popular, because the authors generally thought through the implications of (for example) AI that was sufficiently capable to maintain a starship. And the obvious conclusion is, why would AI stop at sweeping the corridors? Why not pilot the ship? Why not build the ships and give them orders? Why do people assume that technological progress conveniently stops right about the time the robots can mop the decks? Why doesn't that technology surpass and obsolete the humans entirely?
It turns out that humans mostly want to read stories about other humans, which is where many of the better SF authors have been focusing for a while now.
> Its technology is how a society copes with physical reality: how people get and keep and cook food, how they clothe themselves, what their power sources are (animal? human? water? wind? electricity? other?) what they build with and what they build, their medicine — and so on and on. Perhaps very ethereal people aren’t interested in these mundane, bodily matters, but I’m fascinated by them, and I think most of my readers are too.
> Technology is the active human interface with the material world.
Seeing drones do all the work unfortunately isn't very interesting though.
Also I love the Zones of Thought series and The Culture.
If you're open to Theory Fiction, you can read Nick Land. Even his early 1990s texts still feel futuristic. I think his views on the autonomization of AI, capital, and robots - and their convergence - are very interesting. [1]
This is just one of the side plots of the book; I think it could've been the whole plot of a book on its own.
Tell me, which corporation exactly is kidnapping and drugging people to enslave them and then discard their bodies at sea to feed the capitalist global machine?
It seems like you have a big scoop if you are doing on the ground reporting, because that seems like it would be international news if it was real!
https://www.cbc.ca/radio/thecurrent/the-current-for-nov-12-2...
https://www.ap.org/news-highlights/seafood-from-slaves/2015/...
The sun is dying. A capable team is assembled and put into cryosleep in an automated ship for a journey to a neighboring star system to try to diagnose the problem. Only one member survives, and they have amnesia.
The novel does a great job of explaining the process of troubleshooting under pressure and with incomplete information.
Strong warning: Start with either book 2 (Player of Games) or book...7, Look to Windward.
I strongly suggest you skip book 1 until you're comfortable with the rest of the books that focus on the Culture itself, and not some weird offshoot story that barely involves the Culture.
Though I wouldn't recommend starting with any of the stories in the series. Or reading any at all. Find a summary or a Cliff's Notes instead. Iain M Banks has a talent for making great stories tedious.
It has a cyborg/AI as protagonist and paints a really interesting world with AIs and synthetic biology in it. It also does a good job of just shutting up about things it can't talk about, like interplanetary travel.
I've also seen it suggested that Harry Potter might be a more realistic look at what proliferated AI might be like.
"Will the security update finish before we're discovered and killed by the hunter seeker, stay tuned to find out more!"
It does such a good job building a convincing world, and it's really good at just not going into details it can't speak on (like how interplanetary travel works), while some of its takes (e.g. small anti-personnel drones) seem almost prescient after Ukraine.
All the synthetic biology and even the depictions of AIs and their struggles are really compelling, too.
Unironically, Wall-E. Humans leave earth behind on a ship where everything is automated.
This seems like it's rooted in reality.
It's pretty normal for it to take a few years to write a good book so I wouldn't look to science fiction to keep up to date on the latest tech hype train. This is probably a good thing because when the hype dies down or the bubble bursts, those books would often end up looking very dated and laughably naive.
There's a lot of books about AGI already which is probably more fun to write about than what passes for AI right now. Still, I'm sure that eventually we'll see characters getting their email badly summarized in fiction too.
My brother in Christ, ChatGPT blew up just 25 months ago. Give it time.
It seems unlikely that any company (Google included) will have a robotics moat.
If we see a real world application that a business actually uses, or that people want to use, that's great. But why announce the prototype with the lab demos? It's premature. Better to wait until you have a good real life working use case to brag about.
Because that's how you attract the media attention, talent, and financing you need both to go from prototype to product and to have a market ready for the product when it's ready.
Especially when other people are already publicly known to be working in the domain.
Lol, you need to drive up hype and convince investors you are not falling behind. Not even being cynical here, I think it's a good idea from a business perspective.
Had they focused more on driving innovation and not on profit/being relevant, they could have had another win instead of another Google+. Instead, we got African-German Nazis.
Google was already an advertising monopoly by the time this happened, and his job is to sell ads and minimize costs...the rest of Google is just there for marketing & public relations.