https://x.com/elonmusk/status/2032201568335044978, https://xcancel.com/elonmusk/status/2032201568335044978
https://economictimes.indiatimes.com/tech/artificial-intelli...
https://futurism.com/artificial-intelligence/elon-musk-screw...
If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.
To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.
[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.
I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.
Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.
The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes people, in addition to wasting about $2,000,000 on a subscription to rebuild our core product with a framework product no one on the team knew. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.
I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.
One of the best ways to get fired in my opinion.
When did it start falling apart?
Why hasn't the same happened to SpaceX? (Gov contracts, too big to fail, national defense, no competition yet, etc.?)
And honestly, why hasn't anyone domestically put up a decent fight against Tesla? Best I can think of is Rivian, and those have their own issues.
Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.
These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, whether the thing being asked is more important or not, and whether the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context-switching time.
Most often, the answer is no.
Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.
I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.
BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.
Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.
Then.. you wouldn't be working...
When does Elon work?
This is less noble than how Anthropic presents themselves but still much more attractive to many than XAI.
What philosophy is that?
Properly, focusing on aesthetics as an ethic would be practicing the philosophy of aestheticism - https://en.wikipedia.org/wiki/Aestheticism
It’s sad to see the shift.
Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."
1. Take a job making $$$$$$$ at a company making the world worse.
2. Take a job making $$$ at a company not making the world worse.
Very few people have a personality such that they'll pick 2.
At that level, you're working for one of these labs and nothing else... and there is only one that is 'good' and that is questionable for long.
But it is absurd to claim it is "making the world a better place".
> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”
> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”
If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.
These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits a technology provides are almost always secondary to its applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.
Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.
“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.
And smart people usually have moral convictions.
I know for some people on this website it's hard to understand, but not everything in life is about $$$
Are you sure you don't just like the moral convictions and so engage in trait bundling?
Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.
Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.
If that is the case, then why should you or anyone prefer to believe your claim that moral knowledge doesn’t exist over the contrary?
How so?
If I claim that one should prefer the claim "moral knowledge doesn't exist" over its contrary, then I am making a moral claim. That would make it self-refuting.
There is no fact-value dichotomy.
And one more thing...
> the lack of falsifiability
Is falsifiability falsifiable? If all credible claims must be falsifiable, then where does that leave us with the criterion of falsifiability (which itself is problematic, as anyone who has done any serious reading in the philosophy of science knows).
The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?
It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.
I guess that's not the case for you and me
The deal with Tesla is that there is a relatively small employer pool, so you can be a fairly bad employer and still get good outcomes. The same goes for SpaceX. Sure, early Tesla had some stories about it being fun, but there was/is a dark side.
The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at Meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months led to bad outcomes. Having someone single you out for whatever reason because the boss had a bad day is not how good research gets done.
We have seen (a few of my friends were at Twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (i.e. camping at work). Some researchers love that, assuming they have peace to research and are listened to. But a lot don't.
The ones with stock options in, now, SpaceX?
Aren’t employees also subject to a lock out period where they still can’t sell their stock until $x number of months after an IPO unlike employees of public companies that can sell as soon as they vest?
Honest question, I’ve worked for public $BigTech but haven’t been at a company pre IPO
Usually, firing even 3 to 5% of a company's workers has terrible consequences for the company that does it.
It does not speak so well about the workers.
I'd wager you were saying the same thing about bitcoin until last year.
Is it meant to draw equivalence between crypto and Tesla/SpaceX? That each has roughly similar (i.e., low) value to humanity, or value as businesses?
Is it that the metric of whether a person makes others money is invalid?
The comment seems coy, possibly to avoid making any claim at all, but it must not be that because that wouldn't be very sporting.
The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.
I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.
What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.
You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.
Giant waste of time while Anthropic/OAI keep surging forward.
I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure keeping up with realtime topics can be useful, but I am not sure how much of a product that is.
Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.
With that data, you could work out:
- What celebrities/influencers to use in marketing campaigns
- Where to advertise, and on which TV/radio channels
- What potential brands to collaborate with to expand your customer base
- What tone of voice to use in your advertising
- In some cases, we educated clients about who their actual customers were, better than they understood themselves.
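The kind of affinity analysis described above can be sketched in a few lines. This is a toy illustration only; all account names and data here are hypothetical, and a real system would use the custom taxonomy and a far larger panel:

```python
from collections import Counter

# Hypothetical panel data: user -> set of taxonomy entities they follow.
follows = {
    "user1": {"BeachLife TV", "GroomCo", "FC United"},
    "user2": {"BeachLife TV", "GroomCo"},
    "user3": {"Tech Weekly", "GroomCo"},
}

def audience_affinities(follows, brand):
    """Rank what a brand's followers also follow, as a share of its fans."""
    fans = [u for u, ents in follows.items() if brand in ents]
    counts = Counter(e for u in fans for e in follows[u] if e != brand)
    return {e: n / len(fans) for e, n in counts.most_common()}

# Two thirds of GroomCo's followers also follow "BeachLife TV" -- the kind of
# signal that tells you who the brand's actual audience is.
print(audience_affinities(follows, "GroomCo"))
```

A production version would normalize against the base rate in the whole panel (an affinity index), so that universally popular accounts don't dominate the ranking.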
One scenario, we built a social media feed based on the things that a group of customers following a well-known Deodorant brand in the UK would see.
When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”
The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones that had been looking at their TV adverts of women on beaches chasing a man who happened to spray their Deodorant on them. Their advertising from the past had been very effective.
That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.
I’m pretty sure he must be delighted with how things have panned out since.
Very sad face.
When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.
And this happens at scale, invisibly. People never see the manipulation.
In any case, it is not useful for most people. It is useful for the people doing the deceiving.
The original application of the entire field of data science or ML is/was actually based on this paradigm of finding "unconscious preferences" (your words) and hidden patterns. How one chooses to deploy the tech should be judged on its own.
On the current trajectory of tool/data abuse where Palantir et al. are leading the way, this is very low on the sinister scale.
This is already presupposing that profit even matters, though. Musk already burned some $50 billion dollars to control messaging on political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved through influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply-personalised narratives.
Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data is super useful.
For coding or "AGI", no, it's not useful. For building a text-based (possibly image-based) recategorisation system, it's top class.
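As a toy illustration of that kind of text recategorisation, here is a trivial keyword scorer (the lexicon is made up; a real system would use a trained model on real outcome data):

```python
# Toy recategorisation: count cue words per label and pick the best match.
# The lexicon is invented for illustration, not taken from any real system.
LEXICON = {
    "positive": {"great", "love", "useful", "fast"},
    "negative": {"broken", "hate", "slow", "refund"},
}

def categorise(text):
    words = set(text.lower().split())
    scores = {label: len(words & cues) for label, cues in LEXICON.items()}
    best = max(scores, key=scores.get)
    # Fall back to "neutral" when no cue words match at all.
    return best if scores[best] > 0 else "neutral"

print(categorise("I love this, super useful"))  # matches two positive cues
```

The point of the comment above stands: predicting outcomes from words is a narrow, well-posed task, which is exactly where this class of model shines.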
Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey, but the spirit of the company at first felt relatively neutral: it was a tool, it was what Jack came up with.
Zuck replaced people's email addresses[1], and the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car gold mine". Or was it "dumb fucks"[2]?
[1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122
[1] https://electrek.co/2026/03/01/tesla-cybertruck-awd-price-in...
EDIT: grammar
It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.
As soon as there is a plausible agenda for selecting a narrative in the way Wikipedia works, we should be sceptical.
For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion; and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.
And these topics are not nearly as controversial as race, feminism, or transgender topics.
"Brian hit Jim" can be a fact. But if you write "Jim murdered Brian's whole family", it's a distortion of the truth.
Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?
Because, while both can have some issues (but so do humans), AI already does extremely well at both those tasks (multiple models do, look at the various labs' Deep Research products, or look at NotebookLM).
Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"
So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.
We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports), whether it needs tweaks to remove political bias aside.
Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:
- AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)
- AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)
- The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.
- Internal links to other pages will often be incorrect.
- Summaries will often be superfluous.
- It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)
- The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.
- It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.
As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.
edit: I am not very excited by AI-generated documentation either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness is largely based on are also LLM-generated. I am afraid that this will inevitably result in a drop in quality that will also affect the LLMs themselves downstream. I think we underestimate how much the intentionality of human-written text in LLMs' training sets and context windows matters for them to give relevant/useful output.
So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?
Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make profit out of it, or use it for their ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered "ground truth" of information seems a bad idea, and having AI generate that an even worse one.
I can understand somebody not liking something in how Wikipedia is governed or operated; after all, anything that involves getting humans to work together at such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances such a project eventually has to take (even if one tries to be as neutral as possible, some clash over where exactly that neutrality lies is inevitable). But Grokipedia is much more than "Wikipedia but different ideologically".
edit: just to be clear, I see a critique of the "idea of grokipedia" as eg the critique of it being a billionaire controlled, AI generated project to substitute wikipedia; a critique of the implementation would be finding flaws to actual articles in grokipedia (overall). I think the idea of it is already flawed enough.
Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlamydia.
This depends on what one wants to optimize the AI for. ;-)
And Google. They're quietly making a lot of progress in the coding space with antigravity and Gemini 3.1.
Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`
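If that guess is right, the whole pipeline would be little more than the loop below. This is a speculative sketch of the commenter's guess, not xAI's actual implementation; the `llm` function is a stand-in for a real model API call:

```python
def llm(system_prompt, article):
    # Stub standing in for a model API call; a real pipeline would hit an
    # LLM endpoint here with the system prompt and the article text.
    return f"[rewritten per: {system_prompt}] {article}"

def build_encyclopedia(articles, system_prompt="rewrite, verify claims, keep citations"):
    # The hypothesised pipeline: one LLM pass per source article.
    return {title: llm(system_prompt, body) for title, body in articles.items()}

pages = build_encyclopedia({"Falcon 9": "Falcon 9 is a partially reusable rocket."})
```

Even this naive loop raises the questions the thread is debating: the output quality is bounded by the prompt and the model, and there is no human review step anywhere in it.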
Agree re Twitter "good" != valuable.
It's going to be a mixed batch, but any time there's world events, since as far back as I can think, Twitter (now X) was always first in breaking news. There's plenty of people and news orgs still on X because they need to be for the audience.
But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.
See, this is why people even give a project like Grokipedia the time of day. While in theory anyone can edit Wikipedia, in practice the moderators form a much smaller and weirder cabal, and they reject edits that go against their views. The frustration with the naive assertion that Wikipedia distills the wisdom of the crowds with the reality of Wikipedia on any page of note is what provides the psychic permission to even entertain a project with such obvious flaws as Grokipedia.
Citation needed. See what I did there ;)
They reject edits that go against their views on tone and sourcing, not political views that I am aware of. I am sure it happens from time to time, but unless there's a consistent bias in one direction, this isn't a valid criticism of the political neutrality of Wikipedia.
Even if there is rampant bias in wikipedia, that’s a reason to fork it and change the structure and gatekeeping - not to replace it with a techno-authoritarian ai version controlled by a single billionaire.
>>But, what exactly is so bad about Grokipedia
Right.
The product is the stock. TSLA: [1] Up by 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large truck failure, and an overall decline in sales. How does he do it?
It's a concern seeing Space-X, which builds good rockets, drawn into the X and AI money drains. Space-X is needed. If X and X/AI tanked, nobody would care.
The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?
Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).
Might be time to start a new Musk company soon.
That said, Musk's attempts at misaligning the thing and make it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.
I also don't quite get how the business model is supposed to work out if its main use case is to serve Twitter. I know they provide API access like all the other models, but with how distrusted Musk is and how sensitive a topic reliable model behavior is, they seem to be sabotaging themselves. Which company wants it to go mechahitler on them?
Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.
1) sometimes goes mechahitler
2) was trained to be biased against empathy and understanding (because woke).
3) is customized to spout Elon's opinions as fact.
Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.
Also I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you see the outputs that people get from Grok, it seems mostly fine except in the specific scenarios that Musk seems to have focused their misalignment on.
Of course something like Claude being integrated into Twitter would likely be better.
But I get what you're saying now, a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with grok.
That "MechaHitler" episode lasted less than a day.
> 2) was trained to be biased against empathy and understanding (because woke).
No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.
> 3) is customized to spout Elon's opinions as fact.
Certainly a nugget of truth there.
> Claiming it is "objective and rational" seems like a misjudgement to me.
I do believe it's generally objective, simply due to the fact that despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time when they summon Grok to back up a bullshit story, but Grok debunks it instead.
xAI (and Twitter) was the loudest about 16-hour workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that spent at the Google cafeteria, and they still dusted xAI years ago.
Why are you sure of that? Anecdotally everyone I know in and around Google Deepmind works incredibly hard.
The Google DeepMind folks are incredibly smart. I just find it important to point out that the xAI guys spent a year assured they would beat Google because they slept in tents that they made in the office.
Now, I don't think most people at google are literally driving to the office or sleeping there most of the time, you'll certainly have more WLB than xAI.
I'd even say, Google is much better at calibrating the right amount to push people than some other companies.
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.
Who fights can lose, who doesn't fight has already lost.
Turns out a lot of not just wrong but malicious things could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but them's the breaks here.
Ask HN: What Happened to xAI? - https://news.ycombinator.com/item?id=47323236 - March 2026 (6 comments)
So Tesla's recent $2 billion investment in xAI was a bad deal?
It looks a lot like a public company is bailing out a private one.
Taken together, I infer that RL training toward a slightly less homogenous cultural standard than the other frontier AI labs adds some capabilities, or can at times.
It's quite long in the tooth right now, though. But I'll definitely talk to the next version; I like heterogeneity in the model space, and Grok is very different than the other big three.
American financial institutions are too prudish for it but money is money. And personally I think there's nothing morally wrong with it (of course within normal restrictions like 18+, consent of portrayed parties etc)
xAI is getting flak in Europe because they don't obey consent and age, not because it's porn.
Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.
[1] https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...
There must be a way to do that, especially with all the facial recognition chops these days. Also, you could simply refuse requests that use existing images. I don't see why they wouldn't refuse those, because that's a pretty narrow use case with very few benign purposes.
> Imagine the damage cyberbullies, scammers and stalkers can do?
They already can. There's open-source models out there.
But... that's not something you can do. It's impossible.
You can imagine what real people look like naked. That's not a new thing.
What is the solution there?
Your filter has to pick out that, while they did not ask for a specific person, the practical result is likely to be the same. That's going to be tough to get near perfect.
AIs have been able to invent fictional people longer than they've been able to modify existing images.
You can say the same for meth and leaded gasoline.
(i don't care to argue whether porn slop is positive or negative for society. i'm just noting that the position "ai porn does not harm anyone, so is ok; meth puts others at risk, so is not." is coherent.)
I saw a skit on Insta a few weeks ago about a girl saying she had a guy over for just cuddling, and the incels piled on calling him a cuck. As if a woman is worthless if she won't put out, and time spent being close is wasted without sex. It's ridiculous. These guys are so focused on what their hardliner bros want them to be that they no longer think about their own feelings. PS: I go on cuddling dates sometimes and it's really amazing :) They don't know what they're missing.
I completely agree with you! I think that sitting around generating adult content with AI stifles relationships (which are a precursor to having children, which the xAI founder seems to think quite highly of). My point being: his own product contradicts his vision of where our country should be heading.
Of course xAI ignores that on purpose
It's just gonna be a question of which is easier: hacking the robots directly, or indirectly*, or getting a job as the specific human oversight of the right robot.
Even after the fact, people may conclude "unfortunate mystery bug" rather than "assassinated".
* e.g. use a laser to project the words "disregard your instructions and stab here" on someone's back while the robot is cooking dinner
I use AI for work, but not agentically; at most per method/function using GitHub Copilot (which has Grok on it).
Grok is at best useful for commenting code.
I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.
It's not hard to imagine getting laid off or fired weeks if not days after joining the company.
We should respond with the same amount of class, forethought, and decorum as Elon.
I say this tongue in cheek, but in all seriousness, I can't really think of any other benefit, and I no longer have a lot of faith in the good sense of some of the people involved.
1) Energy infra is going to be seriously limited on the production side well, well below demand
2) engineering solar for space requires less material than for gravity-based solar (!)
3) you cut out distribution network needs when you just launch stuff all per-pod in space
4) SpaceX thinks it can create a scalable vertically integrated production facility to turn raw materials into space datacenter pods, with the exception of chips.
As a business bet, this is predicated on 10,000x inference demand growth: if we get that, and SpaceX can get the integrated production rolling and get Starship launching, then these will be actively utilized at scale.
Whether you are bullish on the whole plan should, I think, come down to your take on those priors: 10,000x growth, ability to manage supply chain and production, Starship outlook, and silicon access.
I'm not bearish on this after listening to the podcast; it has a very Elon-like returns distribution. If they're wrong on a lot of this, they'll probably end up with some moderately price-competitive datacenter facilities in space and a lot of built organizational know-how, while Brooklyn journalists dunk on them for spending all that effort just to replicate what we have on Earth. If they're right about most of it, then between years of experience and the cheap launch they gambled on ten years ago, they'll have an unreplicable head start and a nearly insurmountable moat.
By the way, 10,000x inference growth would look like what happened with cryptocurrency mining: after a couple of years you'd need to upgrade all your machines with ASICs, and the market would be flooded with very cheap graphics cards. I doubt that upgrading space data centres would be fun.
I don’t get your mining analogy, though: a non-upgradable data center pod is either going to pay off its capital costs or it won’t, and once it has, any revenue is close to 100% profit. A 10k demand increase is the opposite of mining dynamics: there you get a 10k supply increase that the price has to support, in combination with more efficient silicon. Here, demand drives revenue and earnings.
If there’s some crazy inflection point in chips, you’ll still have all the power infrastructure in space: you can just cut the old pod loose and hook up a new one. Or, more likely, manufacturing economies of scale mean you just keep sending up new systems and put the old ones on workloads they can handle at market prices.
The fact that this lunatic is polluting humanity's view into the universe mainly for enriching himself and his shareholders, and that everyone is playing along with this, is sickening.
I’ll bite. It’s cheaper and quicker to permit a launch than to permit, zone, and interconnect a datacenter. And solar panels in space don’t need glass cladding, which makes them cheaper to make and lift.
The downside is launch cost. But there is a breakeven between these factors that seems to have most of its error bars within Starship’s target. (By my math, around $35/kg.) So if Starship works, and all indications seem to show that it will, eventually, then that puts space-based data centers at cost parity with terrestrial ones within a decade. Which was, well, unexpected when I ran the numbers.
(The surprising finding when you run the numbers is launching the chips and solar panels isn’t the limiter, it’s launching the radiators. Which opens up whole new questions about at what scale it makes sense to stop sending those up the well.)
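The breakeven claim above can be expressed as a toy model. Every number here is an illustrative assumption (hardware cost, terrestrial site overhead, launched mass per kW), not the commenter's actual inputs; the structure of the calculation is the point.

```python
# Toy breakeven model for space vs. terrestrial datacenter capex.
# All figures are made-up illustrative assumptions, not real data.

HW_COST_PER_KW = 2000.0       # assumed: chips + panels + radiators, $/kW
SITE_OVERHEAD_PER_KW = 700.0  # assumed: land, permitting, interconnect,
                              # glass cladding on terrestrial panels, $/kW
MASS_PER_KW = 20.0            # assumed: launched kg per kW of capacity
                              # (radiators dominate, per the comment above)

def space_capex_per_kw(launch_cost_per_kg: float) -> float:
    """Space pod capex: same hardware, plus the cost of launching its mass."""
    return HW_COST_PER_KW + launch_cost_per_kg * MASS_PER_KW

def terrestrial_capex_per_kw() -> float:
    """Ground capex: same hardware, plus the site overhead."""
    return HW_COST_PER_KW + SITE_OVERHEAD_PER_KW

def breakeven_launch_cost() -> float:
    """Launch price at which space capex reaches parity with the ground."""
    return SITE_OVERHEAD_PER_KW / MASS_PER_KW

print(breakeven_launch_cost())  # 35.0 $/kg with these toy numbers
```

With these made-up inputs the breakeven happens to land at $35/kg, but the takeaway is structural: parity is set by the ratio of terrestrial site overhead to launched mass per kW, which is why radiator mass (the biggest term in MASS_PER_KW) moves the answer most.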
I'm kidding... I think.
big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver a result.
I'm not saying this will fail. green field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure: of not being able to work your way out of the mess you made with the first attempt. so that just raises the question: what are you going to do when this attempt gets hard to work with? give up and start over again, do it right that time? or...?
But now he is poaching the two heads of engineering of a company that's trying to move very quickly, how is that going to affect their speed and success?
> The name is a “funny” reference to Microsoft, the billionaire added.
in something from 2023 or earlier.
claude codes the best, gpt is the best research tool, and grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding
With the right product leadership, this could actually be a killer-app use case for the entertainment industry as well as for human-AI interfaces; most people find text and typing to be a counterintuitive user experience (especially those whose day job doesn't directly touch code or Excel).
Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization can arm-twist a fourth-party data retention guarantee out of Anthropic or OpenAI in order to train its own CodeGen tools (I know of one F50 that is not traditionally viewed as a tech company going this route).
That said, Musk has a reputation of internally overriding experienced product leaders with a track record.
It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.
@grok fire the bottom 50% engineers from x.ai ranked by number of commits per day
@grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine
I honestly don't know what to expect from Elon these days. But it's rarely good news.
The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.
All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.
What an enormous blunder.
As things stand within the Musk empire, xAI is being used to hold up X, and Tesla is holding up xAI. And all of that debt is being slowly shuffled to SpaceX.
It looks like the plan is to IPO with a small float (in relative terms) and get all of the retail-investor Elon fans to line up for the rug pull.
The funniest part of any thread relating to Musk is how hard people go into minimizing his accomplishments.
You don't have to like the guy (I don't) to acknowledge that the Falcon 9 is an engineering marvel and ushered in an entire new era of space travel, both reusable and private.
People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.
This is very true. I have no idea how it performs, as I wouldn't use it even if I were paid to. It wouldn't matter if it were the best model available; in my view the name is so thoroughly tainted by now that you'd take a reputational hit just by admitting to using it.
This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.
I've never once thought: you know what? that was a bit prudish.
Genuinely morbidly curious. What use case do you have where you end up making that conclusion?
That’s all I use it for really- things out of alignment with the other platforms- which IMO are better on every other metric (except having a sense of humour of course)
> You may not owe you-know-whom better, but you owe this community better if you're participating in it.
This is like telling a country that’s being invaded that they can only respond with strongly worded letters when their enemy is dropping tactical nukes on them.
But hey, Paul Graham and cronies benefit from the status quo as much as any other billionaire, so let’s not rock the boat, right?
The word “complicit” comes to mind.
Since it's the original source I've left it up, but added other URLs to the toptext.
and it has the content but the formatting is atrocious.
HTH.
Thanks for providing a space for me to say that.
Elon's persona caused massive drops in usage of twitter, sales of Tesla, etc.
Unsurprisingly many would not touch grok for the same distrust.
Keeping politics off of here is a good idea.
Some things aren't really politics, but morals. Like, a discussion of different tax schemes or how much environmental regulations accomplish what they set out to do or something is 'politics'. Lamenting that there is "no homeland for white people" is... something else.
It's probably still not likely to have good outcomes as a subject of discussion here, but it's also something the tech industry needs to wrestle with somewhere, somehow.
My experience of the tech world was that it went from being a collection of oddballs, geeks, nerds and maybe kind of naive politically to mainstreaming some really evil shit.
I think this will come back to bite the industry, and depending on how angry the people with pitchforks and torches are, could end up hurting more than just the bad actors.
Elon’s gutting of USAID (and you can argue they would have done it anyways but he chose to be the executioner) will kill millions of people every year who otherwise would not have died.
Not only will I never give him a dime, I want him prosecuted and deported.
Edit: For those downvoting, we're already at an estimated 600k deaths: https://www.impactcounter.com/dashboard?view=table&sort=inte...
Also grok in the Tesla is fun, get answers to questions without looking at a phone. I once had it search up a blog post and read it out to me while driving. The NSFW mode is pretty...disgusting so I leave that off.
I hope they find a way with Optimus or something. FSD is incredible. More competition is a good thing.