Boy am I tired of that one. We desperately need more smaller companies and actual competition but nobody seems to even be trying
A friend of mine has been trying to get into law school for a few years; she's technically competent and plenty intelligent, but it's been hard going for her to get in, plus multiple years of education before she can even attempt the bar. All of that sounds like far too much cost to sink in just to figure out whether it's a path I would truly enjoy.
What ways could I engage with policy, coming from a technical background, that would serve as a useful stepping stone to a more policy-based career but don't require as large an upfront cost as a law degree?
Which brings me to the next point. Doing a law degree and passing the bar is perhaps the obvious path to doing policy things. It’s basically the only way that you can end up actively participating in courts, for example. But there are many other options! For myself, the plan is to stay in academia and not take any bar courses (then again, who knows what will happen!). Academics have lots of potential to shift policy, especially as neutral agents who aren’t paid by either side of particular debates. Our papers are read by policymakers and judges, who often don’t have the time or resources to think deeply about particularly gnarly topics. But there are lots of other options which could also work, and I guess finding a "niche" would depend on your specific circumstances, connections and skillset.
If you’re looking to spend more time thinking about policy issues, I’d start by simply sleuthing online. Bruce Schneier, for example, regularly writes excellent pieces at the intersection of technology and policy, which are very well hyperlinked to other high quality stuff. These kinds of blogs are a great way to get into the space, as well as to learn about opportunities which are coming up. Reading journal articles that sound interesting is a good option too (and US law journal articles are often quite accessible). There are also spaces offline, such as conferences which encourage both law and tech people (there’s one happening in Brussels soon [1]), or even institutions set up specifically to operate in this space and which have in-person events (Newspeak House comes to mind [2]).
[1] https://www.article19.org/digital-markets-act-enforcement/ [2] https://newspeak.house
Law school is the same as med school: if you can't see yourself living life as something that requires a JD, skip it. Just do the thing you want to do; unless that's "dispense legal advice to paying clients and represent them in legal disputes," you can probably do it legally without a JD.
Also be aware that you are a lawyer when you graduate law school, and you don't have to pass the bar unless that's a requirement for your practice. For example, a general counsel of an internet startup might not have to be a member of the bar, but someone going into trial court to represent clients does. I would think you could be a staffer for a congressperson with a JD and without bar membership pretty easily.
The point of VC, specifically, is to grow software monopolies - but it's very easy to pick up VC funding if you happen to live in the Bay Area.
Yes, mos def, I'm all in.
Please post/share any news or tips you find. TIA.
While his Nobel prize was for "physics", his domain is AI.
Paraphrasing: Capital will use / is using AI to further bludgeon Labor.
Social media was a quality-of-life upgrade in that it wasn't promising too much, and it delivered on what it promised (maybe a little too much).
AI, on the other hand, feels like hype, just like blockchain did.
Now, if you're not on LinkedIn, people question whether you are a real person or not.
I hope AI ends up like blockchain. It's there if you have a use-case for it, but it's not absolutely embedded in everything you do. Both are insanely cool technologies.
We are at the awesome moment in history when the AI bubble is popping, so I am looking forward to a lot of journalists eating their words (not that anybody is keeping track, but they are wrong most of the time), a lot of LLM companies going under, and the domino crash of stocks from Meta and OpenAI to AWS, Google, and Microsoft to SoftBank (the same guys who gave money to Adam Neumann of WeWork).
Certainly not people regularly buying stocks or stock ETFs/Funds.
At some point that will collapse, and it won’t be pretty.
TINA (there is no alternative).
Inflation will eat your cash.
Bonds hardly generate (real) returns unless you want to take big risks with duration.
Real estate is over inflated.
Gold is speculative.
Crypto is...not real.
What's left?
The difference between public ownership and public gambling is huge in its impact on society, especially when the market crashes.
This is a losing strategy for the large majority; it's been demonstrated repeatedly that even professional investors can't beat the market, especially after fees.
https://www.investopedia.com/articles/investing/030916/buffe...
So many investors get this concept wrong. I suppose they get excited because what they bought went up in value and they have a sense of being enriched. But, that is backwards. That is what they want 20-40 years from now when it will almost certainly be the case that prices are not just higher, but much higher, than today. But, when they are buying shares, the goal is to pay the lowest price possible. If I am 20 years old, I am screaming: crash and burn baby! Crash and burn! Gimme those shares at 50% off yesterday's price.
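To make that concrete, here's a tiny sketch of the arithmetic (all the dollar figures are made-up assumptions, not market data):

```python
# Toy illustration: the same monthly contribution buys twice as many shares
# after a 50% crash, so if prices recover over 20-40 years, the crash-time
# purchases are the best ones you ever made.
monthly = 500.0                             # dollars invested each month (assumed)
price_normal, price_crashed = 100.0, 50.0   # assumed share prices

shares_normal = monthly / price_normal      # 5.0 shares
shares_crashed = monthly / price_crashed    # 10.0 shares

future_price = 400.0                        # assumed price decades from now
print(shares_normal * future_price)         # $2,000 from a normal-month buy
print(shares_crashed * future_price)        # $4,000 from a crash-month buy
```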
Sure, but once you reach the point where you have a lot of money in the market you probably won't enjoy watching 50% of it disappear, even if it means your next auto investment is for a nice bargain price.
Also, when the stock market crashes, bad things usually accompany it, like a depressed economy and job losses.
It's our own fault for tying the stock market's performance to our economy's performance. Why should I, a train worker, have my pension affected by Sam Altman's bad decision-making or by Enron's lies and deception?
It's our own fault that the stock market is so volatile and that we tie so much of our economy to a financial gambling machine that has become increasingly divorced from reality over the last couple of decades. If you are putting money on a stock trading at 1000x earnings for a company that is 10 years away from being profitable, you deserve to have your money go poof.
Who is suggesting that?
NVDA trades at 57x earnings, MSFT at 37x, GOOG at 22x. The article is about META, and they are at 27x. These are the big companies that dominate the S&P that we're talking about.
I don't think anyone is suggesting to put their life savings into Anthropic. They can't anyway, it's not public.
The S&P P/E is 30, which is high, but still lower than it was in 2020, before the AI "bubble" started.
The current absolute balloon of a market is about to pop, and sadly, the people who hyped the stocks are also the ones who know when to jump ship, while the hapless schmucks who believed the hype will most likely lose their money, along with a lot of folks whose retirement investment funds either didn't do their due diligence or were outright greedy.
In a way, as a society we deserve this upcoming crash, because we allow charlatans and con people like Musk, Zuck and Sam to sell us snake oil.
[0]: He only has about 13% of the shares, but the dual-class structure means that his Class B shares carry 10 votes each, and he owns 99% of those shares. https://observer.com/2023/06/mark-zuckerberg-2023-shareholde...
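To see how a ~13% stake translates into outright control, here's a hypothetical back-of-the-envelope: the 10-votes-per-Class-B-share and ~99% figures come from the footnote above, while the share counts are rough assumptions for illustration.

```python
# Hypothetical dual-class voting math (share counts are assumed round numbers).
class_a = 2_300_000_000          # Class A shares, 1 vote each (assumed)
class_b = 350_000_000            # Class B shares, 10 votes each (assumed)
zuck_b = 0.99 * class_b          # he owns ~99% of Class B (from the footnote)

economic_stake = zuck_b / (class_a + class_b)             # share of all shares
voting_power = (zuck_b * 10) / (class_a + class_b * 10)   # share of all votes

print(f"economic stake ~{economic_stake:.0%}, voting power ~{voting_power:.0%}")
# -> economic stake ~13%, voting power ~60%: a minority of shares, a majority of votes
```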
Lehman Brothers, Enron.
When this bubble pops, it's going to be absolute chaos, maybe just like last time.
Even if Meta tanked, unless Messenger/Whatsapp stop working, it’s kind of beside the point how much their stock trades for. Everyone will just use whatever has or keeps the most public interest, whether that is Meta-owned or something else.
The worrying aspect is that for Meta to really tank in value, the shit has to have already hit the fan, and it probably would not be isolated to Meta.
My point in my prior comment was that Meta serves the purposes of the IC status quo just by doing what they’re already doing. Cloudflare too, in a way.
They better get their shit sorted.
The problem is that their products are getting worse and worse. Signal is already taking a huge share from WhatsApp (ads and AI chat bots, really?) and Messenger.
TikTok absolutely obliterated Instagram. Facebook is sliding into irrelevancy, and most importantly, they have a lot of failed products like Oculus, the Metaverse (wtf is it anyway), Llama, etc. Now they are sliding into even more irrelevance and burning money even faster trying to poach extremely expensive OpenAI folks. My conspiracy theory is that Facebook's ad earnings numbers are somehow a scam.
After so many bad decisions on their part, and so much waste and bad execution, I can't see them surviving the next 5 years.
Signal serves IC interests too by requiring phone numbers.
No, what they could do in the past is not at all how they can operate today. They can't afford to pay the rockstars anymore; they went through multiple rounds of layoffs. They also can't afford to let the stock drop too low. Basically they are in a corner, and I love it. Fingers crossed that within the next five years they shake up upper management and Zuck is out.
I doubt Zuck is out anytime soon, unless folks stop using their products in favor of alternatives. I think it's possible, but the odds are at best even for him to go in 5 years. In 10 years, who can say? Facebook users are pretty locked in because there's nothing else like it for the users who regularly use it; they aren't just going to switch to Reddit or TikTok overnight. Why would they? I can't follow your reasoning, though I understand not being a fan of Zuck or Meta. Their business seems pretty strong right now, though that is subject to change along with consumer whims.
I mean, theoretically you could short a company for a really long time, it seems. I just searched; I had always assumed shorts usually lasted about 14 days, but still.
https://en.wikipedia.org/wiki/Short_(finance)
> The practice of short selling was likely invented in 1609 by Dutch businessman Isaac Le Maire, a sizeable shareholder of the Dutch East India Company (Vereenigde Oostindische Compagnie or VOC in Dutch).
And like... Llama?
Maybe also like adding ads to WhatsApp because we gotta squeeze our users so we can spend on... AI gurus?
Meta has not had a win since they named themselves Meta. It's enjoyable to watch them flail around like clueless morons jumping on every fad and wasting their time and money.
Maybe this sounds selfish, but it's a little fun for me to see them lose. I just don't like Meta and its privacy-insensitive ad network bullshit.
Like the fact that if someone takes a photo and deletes it, they get shown beauty ads because they must be insecure. I can't give two cents about the growth of such a Black Mirror-esque company.
I would donate my two cents or even more to witness their downfall, though. I left WhatsApp years ago and haven't used any of their other services like FB or Instagram. I don't want to contribute to a company that actively helped a couple of genocides (Myanmar), helped elect a dictator or two (Philippines), spread racist propaganda and, most recently, allowed women to be called 'personal objects'.
Their tech is far from impressive, their products are far from impressive, the only impressive thing is that they are still in business.
I do however think that this is a business choice that at the very least was likely extensively discussed.
Their cash position has gone from $44bn to $12bn in the first six months of the year, and they are now getting other people to pay for datacenters: https://www.reuters.com/business/meta-taps-pimco-blue-owl-29...
https://www.geekwire.com/2025/im-good-for-my-80-billion-what...
* Broadcom
* Alphabet
* Nvidia
* Amazon
* Meta
* Microsoft
* Apple
Edit: to make this helpful, look at Broadcom's interconnect, switching technology, and co-packaged optics.
The financials have a line below the Net Income line called "Reconciled depreciation", at about $16.7 billion. I do not know what that means (maybe this is how they get to the EBITDA metric), but maybe this is the metric you are looking for.
https://pbs.twimg.com/media/GxIeCe7bkAEwXju?format=jpg&name=...
First Facebook tried to pivot into mobile, pushed really hard for a short time, and then flopped. Then Facebook tried really hard for a while to make the Metaverse a thing, but eventually Meta stopped finding it interesting and significantly reduced investment. Then AI was the big thing, and Meta put a huge amount of money into it, chasing after other companies with an arguably novel approach compared to the rest of big tech... but now it seems to be backing out, or at least messaging less commitment. Oh, and I think there was some crypto in there too at one point?
I'm not saying that they should have stuck with any of these. The business may not have worked in each case, and that's fine, but spending billions on each one seems like a bad idea. Zuckerberg is great at chasing the next big thing, but seemingly bad at landing the next big thing. He either needs to chase them more tentatively, investing far less, or he needs to stick with them long enough to work out all the issues and build the growth over the long term.
Because of this, Zuckerberg has to be incredibly paranoid about controlling his company's destiny, and about not relying on others' platforms to deliver ads. It would be catastrophic for Facebook not to be a main player on the next computing platform, and they're currently making a lot of money from their other businesses. Zuckerberg is ruthless and he is paranoid; he has total control of Facebook, and he will use all its resources to control the next big thing. I think it comes down to this: Zuckerberg believes it's cheaper to be wrong than to miss out on the next platform, and Facebook can afford to be wrong (to a certain extent).
Before mobile was this big, Facebook tried their own platform and bottled it. This was during the period that the market was still diverse, with Windows phones, Blackberries, etc.
They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.
> They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.
This was one of the first points of friction Facebook encountered with Apple. They wanted to make their own store inside the Facebook app on iOS, but obviously Apple said no. Maybe doing the Facebook app in HTML5 was a way to protest the direction Apple was moving things, but again it didn't work; their app was crap, and they rewrote everything in native code.
Not commenting on whether the phones were good or widely used; I never had one :) just trying to remember the state of things back then.
No, I'm not still bitter from that era, why do you ask?
VR, blockchain and LLMs have their value, but it's a tiny fraction of the insane amounts of money being pumped into these bubbles. There will be tears before bedtime.
So far, it appears the psychology of investors allows the new thing to fail to deliver big revenue and be tacitly dropped - as long as there's a new new thing to replace it as the aspirational vehicle. Like any good mark in a con game, tech investors want to believe.
Yea, but it seems like the new new thing needs to get progressively bigger with each cycle, which is why I think the shell game is almost over.
They really can't overpromise much more than they did with the AI hype-cycle.
It feels like a startup valuation in that having a down round is...not favored by investors; I feel like having a step-down in promises would also be problematic.
While I agree that "replace all human labor" is already pretty high up there on the overreaching u/dis-topian promise list, there are still a few things left.
Perhaps the next dream to sell will be digitizing the minds of Valued Shareholders so that they can grasp immortality inside the computer.
It can take a long time for the stock market to actually correct, but I know one thing: it will be corrected some day, and maybe they will call it the bursting of a bubble.
Meta's P/E is about the same as S&P 500.
I regard Meta and Google as ad agencies.
(I'm not smart enough to break out Amazon's and Apple's ad biz P/E separately.)
My quick spot check says Meta's P/E is more than "legacy" ad agencies and (much) less than Google's.
Just observations. I have no insights.
My opinion, based solely on vibes, is the online ad biz (Meta and Google) is more fraudulent than not. If true, then both are grossly overvalued, in that castles-in-the-sky sort of way.
Amazon added cloud and prime, Microsoft added cloud, xbox, 365, Google added Chrome, Android, cloud, Youtube, consumer subscriptions, workspace, etc. Netflix added streaming and their own content, Apple added mobile, wearables, subscriptions.
Meta though, they've got an abandoned phone platform from years ago, a half-baked Metaverse that is being defunded, a small hardware business for the Quest, a pro VR headset that got defunded, a crypto business that got deprioritised, and an LLM that's expensive relative to open competitors and underperforms relative to closed competitors... which the tide appears to be turning on as the AI bubble reaches popping point.
Really? Instagram, WhatsApp... the two most used apps & services in the world?
> Google added Chrome, Android, cloud, Youtube,
It's arguable whether GCP is profitable, but Chrome/Android/YT are money-losing businesses if you exclude ad revenue.
The fact that it was so successful, and that Zuck picked mobile as the next big thing before many of his peers and against what managers in the company wanted to do, is probably what has made him overconfident that he can do it again.
At the time most features were designed and implemented first for desktop and later ported to mobile. He issued an edict to all hands: design and build for mobile first. Not at some point in the future but for everything, starting immediately.
Maybe this doesn't sound major, but for the company it was a turn on a dime, and the pivot was both well informed and highly successful in practice.
That's a charitable description of a massive bonfire of cash and credibility for an end product that looks worse than a 1990s MMORPG and has fewer active users than a small town sports arena.
An unforced error on the scale of HBO switching to MAX, except likely far more expensive. What is the Metaverse anyway?
The same as Zuck's bet on VR (remember Oculus?).
Similar to Zuck's promises of superintelligence.
Just one of the many futures into which Meta poured a lot of money and achieved nothing.
I hope in their real future there is bankruptcy and ruin.
If it still doesn't take off, fair.
But I bet the form factor will be glasses, because now you can have a screen way bigger than a phone or monitor, and the interface is way smarter (AI).
It's just a matter of when everyone can afford one
If we wait 20 years for VR to take off and it's not Meta who benefits, then they were the short-sighted ones for jumping on that bandwagon so early.
Besides, waiting for something to materialize before being able to declare that it is stupid is a cop out. What, are we waiting for NFTs to become useful? They are stupid now. VR is stupid and unsuccessful now. I ain't waiting to be able to declare that Meta screwed up both in VR and in the Metaverse whatever the Metaverse is.
When Facebook went into gaming, it was about the time they went public and they were in search of revenue. At the time, FB games were huge. It was the era of Farmville. Some thought that FB and Zynga would be the new Intel and Microsoft. This was also long before mobile gaming was really big, so gaming wasn't an unreasonable bet.
What really killed FB Gaming was not having a mobile platform. They tried, but they failed. We could live in a very different world if FB had partnered with Google (who had Android), but both saw each other as an existential threat.
After this, Zuckerberg paid $1 billion for Instagram. This was a 100x decision, much like Google buying Youtube.
But in the last 5-10 years the company has seemed directionless. FB itself has fallen out of favor. Tiktok came out of nowhere and has really eaten FB's lunch.
The Metaverse was the biggest L. Tens of billions of dollars got thrown at this before any product-market fit was found. VR has always been a solution looking for a problem. Companies have focused on how it can benefit them, but consumers just don't want headsets strapped to their heads. It's never grown beyond a niche and never shown signs that it would.
This was so disastrous that the company lost like 60%+ of its value and seemingly it's been abandoned now.
Meta also dabbled with cryptocurrencies and NFTs. Also abandoned.
Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.
Meta has a massive corpus of posts, comments, interactions, etc to train AI. But what does Meta do with AI? Can they build a moat? It's never been clear to me what the end goal is.
I question whether the corpus is of particularly high quality and therefore valuable source data to train on.
On the one hand: 20+ years of posts. In hundreds of languages (very useful to counteract the extreme English-centricity of most AI today).
On the other hand: 15+ years of those posts are clustered on a tiny number of topics, like politics and selling marketplace items. Not very useful unless you are building RagebaitAI I suppose. Reddit's data would seem to be far more valuable on that basis.
I wish Google circles were still a thing.
You could also level a similar question at Google about YouTube. I believe YouTube is one of Google's great successes (bias: I work at Google), and that it wouldn't have become what it is now outside of Google, but I think it would be hypocritical of me to not accept the same about Instagram.
He never tried his secret sauce again. He never realized where his actual success was
A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but that have other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.
I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.
We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless (there's a lot you can do with AI already), but a lot of use cases that are obvious even now, not just in retrospect, will only be possible once it matures.
Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.
Amazing how far that company has fallen; they were a force to be reckoned with in the 70's and 80's, with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.
Today is the first time I've heard of Sears, and the comment about the Sears Tower and IBM literally gave me goosebumps.
To me it read like it was written by Amazon decades earlier: something about how Sears promises that customers will be 100% satisfied with the purchase, and if for whatever reason that is not the case, customers can return the purchase to Sears and Sears will pay for the return transportation charges.
My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
I haven't tested these warranties since Craftsman was sold to Stanley Black & Decker, but when it was still owned by Sears, I almost exclusively bought Craftsman tools as a result of their wonderful warranties.
Maybe not quite as hassle free as in years past, but I found the experience acceptable enough.
This is covered by consumer protection laws in some places. 4 years on a spade would be pushing it, but I’d try with a good one. Here in New Zealand it’s called ‘The Consumer Guarantees Act’. We pay more at purchase time, but we do get something for it.
But I'm not old enough to remember a time when lifetime warranties were unusual. In my lifetime, a warranty on hand tools has always seemed more common than not, outside of the bottom-most cheese-grade stuff.
I mean: the Lowe's house-brand diagonal cutters I bought for my first real job had a lifetime warranty.
And before my time of being aware of the world, JC Penney sold tools with lifetime warranties.
(I remember being at the mall with my dad when he took a JC Penney-branded screwdriver back to JC Penney -- probably 35 years ago.
He got some pushback from people who insisted that they had never sold tools, and then from people who insisted that they never had warranties, and then he finally found the fellow old person who had worked there long enough to know what to do. Without any hesitation at all, she told us to walk over to Sears, buy a similar Craftsman screwdriver, and come back with a receipt.
So that's what we did.
She took the receipt and gave him his money back.
Good 'nuff.)
I bought a hydraulic press. It was missing bolts; it had already been assembled before.
A friend bought some wheel dollies; the threads on the casters were stripped out.
People buy things and use them once for their project, then return them.
[0] https://web.archive.org/web/19990208003742/http://characterl...
He doesn't have private equity origins as far as I know. He came from D. E. Shaw, a very well-respected and long-running hedge fund.
Evidence suggests that maybe they were. "Focusing" obviously didn't work.
But at the end of the day, it was private equity and the hubris of a CEO who wasn't nearly as clever as he'd like to have thought he was.
A16Z once talked about how the scars of being too early cause investors/companies to become convinced that an idea will never work. Then some new, younger people who never got burned will try the same idea, and it works.
Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then possibly were too late to capitalise when the time was finally right for the idea to flourish.
A true shame to see how he's completely lost the plot with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.
And now he's run out of tricks, and, more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.
I remember that one time we tried to drastically limit Japanese imports to protect the American car industry, which basically created the Lexus LS400, one of the best cars ever made.
Similar to how Sears didn't put their catalog online in the 90's because putting it online on Prodigy failed so badly in the 80's.
They literally killed their catalog sales right when they should have been ramping up and putting it online. They could easily have beat out Amazon for everything other than books.
But I guess in startup culture one has to die trying to find the right time; sure, one can do surveys to get a feel for it, but the only way to ever know if it's the right time is user feedback once it's launched, and over time.
What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.
I agree though, it's fundamentally a utility, which means there's more value in proper government authority than in private interests.
Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.
For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". it's a systemic issue. no amount of data will solve it because LLMs will -never- be sentient.
Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.
Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.
I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.
GPT-5 output for example:
Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem. Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart. Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted. They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them. Each bore a respectable, bourgeois name from more carefree days: Welgelegen Buitenrust Nooitgedacht Rustenburg Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.
- "even in the thirties little had been done to them" (done to them?)
- "Welgelegen Buitenrust Nooitgedacht Rustenburg" (Untranslated!)
- "his father had first called it Eleutheria" (his father'd rather called it)
- "just as extraordinary does not refer to the ordinary nature of the outside" (complete non-sequitur)
For what it's worth, I do use AI for language learning, though I'm not sure it's the best idea. Primarily for helping translate German news articles into English and making vocabulary flashcards; it's usually clear when the AI has lost the plot and I can correct the translation by hand. Of course, if issues were more subtle then I probably wouldn't catch them ...
The difference is gigantic.
That is the thing, and what companies pushing LLMs don't seem to realize yet.
LLMs have encountered the entire spectrum of quality in their training data, from extremely poor writing and sloppy code to absolute masterpieces. Part of what Reinforcement Learning techniques do is reinforce the "produce things that are like the masterpieces" behavior while suppressing the "produce low-quality slop" one.
Because there are humans in the loop, this is hard to scale. I suspect that the propensity of LLMs for certain kinds of writing (bullet points, bolded text, conclusion) is a direct result of this. If you have to judge 200 LLM outputs per day, you prize different qualities than when you ask for just 3. "Does this look correct at a glance" is then a much more important quality.
I consider myself an LLM skeptic, but gee saying they are a "dead end" seems harsh.
Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it, and far faster, than most humans.
LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.
Merely being able to understand language or having a good memory is not sufficient to code or do a lot else on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.
That's just it. LLMs are a component, they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM powered chip, you would not say it's sentient. it translates your thoughts into words which you then choose to speak or not. That's all modulated by consciousness.
When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.
The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.
"MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases."
Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment.
I think x is already higher than y for me.
So newer chips will not be exponentially better, just incremental improvements; and unless the price of electricity comes down exponentially, we might never see AGI at a price point that's cheaper than hiring a human.
Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
The reason why the internet, smartphones, and computers saw exponential growth from the 90s on is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase over the next 40 years.
A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that of course, but I take the existence of the human brain as an existence proof that some kind of machine can provide human level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPU's.
Having AI agents learn to see, navigate, and complete tasks in a 3D environment. I feel like it had more potential than LLMs to become an AGI (if that is possible).
They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.
People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.
[1] https://spectrum.ieee.org/fast-efficient-neural-networks-cop...
You could certainly implement an async dataflow-type design in software, although maybe not as power-efficiently as with custom silicon. Individual ANN node throughput would suffer, though, given the need to aggregate neurons needing updates into a group to be fed into one of the large matrix multiplies that today's hardware is optimized for, although sparse operations are also a possibility. OTOH, conceivably one could save enough FLOPs that it'd still be a win in terms of how fast an input could be processed through an entire neural net.
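Here's a toy sketch of that event-driven idea in plain Python; the topology, weights, and integrate-and-fire rule are all made up, and it only illustrates the scheduling pattern, not a realistic network:

```python
import numpy as np
from collections import deque

# Async/dataflow-style update: only neurons that just fired are queued,
# so quiescent neurons cost nothing (no dense layer-by-layer matmul).
rng = np.random.default_rng(0)
N, FANOUT, THRESHOLD = 1_000, 10, 1.0
targets = rng.integers(0, N, size=(N, FANOUT))   # each neuron feeds 10 others
weights = rng.normal(0.2, 0.3, size=(N, FANOUT)) # made-up edge weights
potential = np.zeros(N)

queue = deque([0, 1, 2])                 # seed a few initial spike events
events = 0
while queue and events < 10_000:         # cap the work for the demo
    src = queue.popleft()
    events += 1
    for dst, w in zip(targets[src], weights[src]):
        potential[dst] += w              # accumulate input
        if potential[dst] >= THRESHOLD:  # integrate-and-fire
            potential[dst] = 0.0         # reset, then propagate the spike
            queue.append(dst)
print(f"processed {events} spike events without any dense matmul")
```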
I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent.
And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently.
The only other thing I would add, is that - relative to what I said in the post above - when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table - including maybe something different from ANN's altogether.
I believe the problem is we don't understand actual neurons, let alone actual networks of neurons, well enough to even know if any model is accurate or not. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do.
I don't think any (serious) neural network researchers are trying to trick anybody or claim greater fidelity with the operations of the human brain than are warranted. If anything, Hinton - one of the "godfathers of neural networks" in the popular zeitgeist - has been pretty outspoken about how ANN's have only a most superficial resemblance to real neurons.
Now, the "pop science" commenters, and the "talking heads" and "influencer" types and the marketing people, that's a different story...
ANNs were inspired by biological neural structures, and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insight into how much it can help will come from this sort of comparison.
By my very limited understanding of neural biology, neurons activate according to inputs that are mostly the activations of other neurons. A dot product of weights and inputs (i.e., one part of matrix multiplication), together with a threshold-like function, doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't.
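For what it's worth, a minimal sketch of that model, with a sigmoid standing in for the threshold-like function (all numbers invented):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Weighted sum of incoming activations, squashed by a threshold-like nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

# A whole layer is just many such dot products at once, i.e. a matrix multiply.
def layer(inputs: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-(W @ inputs + b)))

x = np.array([0.2, 0.9, 0.1])   # "activations of other neurons"
w = np.array([1.5, -0.8, 0.3])
print(neuron(x, w, bias=-0.1))  # one neuron's activation, between 0 and 1
```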
The brain isn't organized into layers like ANNs are. It's a general graph of neurons and cycles are probably common.
Yes, there is a lot more structure to the brain than just the neocortex: there are all the other major components (thalamus, hippocampus, etc.), each with their own internal architecture, and then specific patterns of interconnect between them...
This all reinforces what I am saying - the brain is not just some random graph - it is a highly specific architecture.
> There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me to agree with me.
It seems very widely accepted that the neocortex is a prediction machine that learns by updating itself based on sensory detection of top-down prediction failures, and with multiple layers (cortical patches) of pattern learning and prediction, there necessarily has to be some "propagation" of prediction error feedback from one layer to another, so that all layers can learn.
Now, does the brain learn in a way directly equivalent to backprop, in terms of using exact error gradients or a single error function? Presumably not; it more likely works in layered fashion, with each higher level providing error feedback to the layer below, that feedback likely just being what was expected vs. what was detected (i.e., not a gradient; essentially just a difference). Of course gradients are more efficient in terms of selecting varying update step sizes, but directional feedback would work fine too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updates in terms of how to optimally and incrementally update beliefs (predictions) based on conflicting evidence.
So, that's an informed guess of how our brain is learning - up to you whether you want to regard that as analogous to backprop or not.
If you really wanted to train artificial spiking neural networks in biologically plausible fashion then you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" may be part of it, but we certainly don't have the full picture.
OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage.
I think there is something more happening with AI scaling; the scaling factor per user is a lot higher and a lot more expensive. Compare to the big initial internet companies: you added one server and you could handle thousands more users; incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue can cover that it's hard to break even, even with an actual paid subscription.
I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.
> so unless the price of electricity comes down exponentially
This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.
> Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
"We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].
[1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...
An implementation of inference on some specific ANN in fixed function analog hardware can probably pretty easily beat a commodity GPU by a couple orders of magnitude in perf per watt too.
That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them.
The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.
Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.
Yep I'm that guy. I blame it on being old.
Another major difference is that we're near the limits of the approaches being taken for computing capability... Most dialup connections, even on "56k" modems, were lucky to get 33.6kbps down, and they were still very common in the late 90's, whereas by the mid-2000's a lot of users had at least 512kbps-10mbps connections (where available), and even then a lot of people didn't see broadband until the 2010's.
That's at least a 15x improvement, whereas we are far less likely to see even a 3-5x improvement in computing power over the next decade and a half. That's also a lot of electricity to generate on an ageing infrastructure that barely meets current needs in most of the world... even harder with "green" options.
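The rough ratios behind that 15x figure, using the speeds quoted above (kbps):

```python
dialup = 33.6                              # realistic "56k" downstream, kbps
broadband_low, broadband_high = 512.0, 10_000.0

print(broadband_low / dialup)              # ~15x at the low end
print(broadband_high / dialup)             # ~300x at the high end
```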
I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.
Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.
I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.
There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.
We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.
Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.
The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.
"Progress" moves in fits and starts. It is the furthest thing from inevitable.
Improvement of models may not continue to be exponential.
But models might be good enough, at this point it seems more like they need integration and context.
I could be wrong :)
I did some napkin math on this.
32x H100s cost about $2/hr each at 'retail' rental prices. I would hope that the big AI companies get it cheaper than this at their scale.
These 32 H100s can probably do something on the order of >40,000 tok/s on a frontier scale model (~700B params) with proper batching. Potentially a lot more (I'd love to know if someone has some thoughts on this).
So that's $64/hr or just under $50k/month.
40k tok/s is a lot of usage, at least for non-agentic use cases. There is no way you are losing money on paid chatgpt users at $20/month on these.
You'd still break even supporting ~200 Claude Code-esque agentic users who were using it at full tilt 40% of the day at $200/month.
Now, this doesn't include training costs or staff costs, but on a pure 'opex' basis, I don't think inference is anywhere near as unprofitable as people make out.
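Putting the napkin math in one place (the throughput, rental price, and subscription figures are the assumptions stated above, not measured numbers):

```python
GPUS = 32                    # H100s
PRICE_PER_GPU_HR = 2.00      # USD, 'retail' rental (assumed)
TOK_PER_SEC = 40_000         # assumed cluster throughput on a ~700B model

hourly = GPUS * PRICE_PER_GPU_HR                 # $64/hr
monthly = hourly * 24 * 30                       # ~$46k/month

tokens_per_month = TOK_PER_SEC * 3600 * 24 * 30          # ~104 billion tokens
cost_per_million_tok = monthly / (tokens_per_month / 1e6)  # ~$0.44/Mtok

subs_to_break_even = monthly / 200               # ~230 users on a $200/mo plan
print(f"${hourly:.0f}/hr, ${monthly:,.0f}/month, "
      f"${cost_per_million_tok:.2f} per million tokens, "
      f"break even at ~{subs_to_break_even:.0f} heavy $200/mo users")
```

On these assumptions the whole cluster's monthly token output costs roughly $46k, or about $0.44 per million tokens, which is why a $20/month chat user consuming a few million tokens looks comfortably profitable on opex alone.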
That said, you could be right, considering Claude Max's price is $100/mo... but I'm not sure where that sits in terms of typical or top-5% usage and the monthly allowance.
I mean, for now. The population of the world is finite, and there's probably a finite number of uses of AI, so it's still probably ultimately logistic
I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.
Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.
I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.
For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.
IME the volume is overwhelming on the pro-LLM side.
I don't think the ones saying it won't change a thing are the most extreme here.
The issue is the way the market is investing they are looking for massive growth, in the multiples.
That growth can't really come from cutting costs. It has to come from creating new demand for new things.
I think that hasn't happened yet.
Are diffusion models increasing the demand for video and image content? Are they getting customers to spend more on shows, games, and so on? Are they going to lead to the creation of a whole new consumption medium?
Good question? Is that necessary, or is it sufficient for AI to be integrated in every kind of CAD/design software out there?
Because I think most productivity tools whether CAD, EDA, Office, graphic 2d/3d design, etc will benefit from AI. That's a huge market.
As for the market for AI foundation models itself: will they have customers willing, long term, to pay a lot of money for access to the models?
I think yes, there will be demand for foundational AI models, and a lot of it.
The second market is the market for CAD, EDA, Office, graphic 2D/3D design, etc. This market will not grow just because they integrate AI into their products; or rather, that is the question: will it? Otherwise, you could almost hypothesize that these markets will shrink, as AI becomes an additional cost of business that customers expect to be included. Or maybe they manage to sell their customers a premium for the AI features, taking a cut above what they pay the foundation models under the hood; that's a possibility.
It’s the equivalent of those cheap digital effects. They look bad for a Hollywood movie, but it allows students to shot their action home movies
The value is in having a director, editor, VFX compositor pick and choose from amongst the outputs. Each generation is a single take or simulation, and you're going to do hundreds or thousands. You sift through that and explore the latent space, and that's where you find your 5-person Pixar.
Human curated AI is an exoskeleton that enables small teams to replace huge studios.
Anything by sketch comedian Carter Jay Allen:
https://www.youtube.com/@OfficialArtCraftStudios/videos
https://www.youtube.com/watch?v=H4NFXGMuwpY - Marvel parody
https://www.youtube.com/watch?v=tAAiiKteM-U - DC parody
https://www.youtube.com/watch?v=Tii9uF0nAx4 - here's him compositing real life actors with AI.
"Bots in the Hall", a fairly prolific Hollywood film and TV writer who wants to remain unnamed:
https://www.youtube.com/@BotsInTheHall/videos
https://www.youtube.com/watch?v=FAQWRBCt_5E - "Paywall Sphinx" is pretty good.
"Meta Puppet", who works for one of the big AI studios,
https://www.youtube.com/watch?v=vtPcpWvAEt0 - "Plastic" doesn't look great, but it keeps getting crazier as you watch it
Some of the festival winners purposely stay away from talking since AI voices and lipsync are terrible, eg. "Poof" by the infamous "Pizza Later" (who is responsible for "Pepperoni Hug Spot") :
https://www.youtube.com/watch?v=t_SgA6ymPuc
"Talk Boys", who only posts on Reddit:
https://www.reddit.com/user/talkboys/
https://www.reddit.com/r/aivideo/comments/1ime5m8/birdwatche...
Marcos Higueras, an animator fully embracing AI:
https://www.youtube.com/watch?v=OCZC6XmEmK0
Most of the professional AI usage is still winding up in commercial use cases where you don't even know it's been used at all.
The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 (text+image to video; fully local if you have the VRAM) is feels unimaginable from last year when OpenAI released Sora (closed/hosted).
https://www.lapresse.ca/arts/chroniques/2025-07-08/polemique...
Typical large team $300,000 ad made for < $2,000 in a weekend by one person.
It's going to be a bloodbath.
So in other words, if you ignore the costs of paying people to create the ad, it barely costs anything. A true accounting miracle!
How about harvesting your whale blubber to power your oil lamp at night?
The nature of work changes all the time.
If an ad can be made with one person, that's it. We're done. There's no going back to hiring teams of 50 people.
It's stupid to say we must hire teams of 50 to make an advertisement just because. There's no reason for that. It's busy work. The job is to make the ad, not to give 50 people meaningless busy work.
And you know what? The economy is going to grow to accommodate this. Every single business is now going to need animated ads. The market for video is going to grow larger than we've ever before imagined, and in ways we still haven't predicted.
Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.
You're going to have silly videos for corporate functions. Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Whatever. There'll be a market for everything, and 100,000 times as many creators with actual autonomy.
In some number of years, there is going to be so much more content being produced. More content in single months than in all human history up to this point. Content that caters to the very long tail.
And you know what that means?
Jobs out the wazoo.
More jobs than ever before.
They're just going to look different and people will be doing more.
And why would your local plumber hire someone to produce this funny action trailer (which I'm not convinced would actually help them from an advertising perspective), when they can simply have an AI produce that funny action trailer without hiring anyone? Assuming models improve sufficiently, that will become trivially possible.
> Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis.
Well, first of all, if the audience is "the most niche of audiences", then I'm not sure how that's going to lead to a sustainable career. And again -- if I want to see my niche historical fantasy interests come to life in a movie about Grace Hopper fighting vampire Nazis, why will I need a filmmaker to create this for me when I can simply prompt an AI myself? "Give me a fun action movie that incorporates famous computer scientists fighting Nazis. Make it 1.5 hours long, and give it a comedic tone."
I think you're fundamentally overvaluing what humans will be able to provide in an era where creating content is very cheap and very easy.
This is very much real and happening as we speak.
Websites are already finding creative ways around DNS blocklists for ad serving.
Can you elaborate? That sounds interesting.
> Progress in AI has always been a step function.
There's decidedly no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.
There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we came up short even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique comes along to dislodge LLMs, we are in for a new winter.
Just as AI has killed off all demand for lower-skill work in copywriting, translation, design and coding, it will do so for manufacturing. And that will be a dangerous bloodbath, because there will not be enough juniors anymore to replace seniors aging out or quitting in frustration at being reduced to cleaning up AI crap.
We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask, "well, what's the evidence they won't?", but it's a silly question -- the evidence is our knowledge of how things work.
Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.
It's hard for me to imagine Skynet growing out of ChatGPT.
What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.
If you had actually invested in AI pure plays and Nvidia, the shovel seller, a couple of years ago and sold today, you would have made a pretty penny.
The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.
That said, I still suggested the parent sell. Real money is better than potential money. Classic gambler's fallacy, right? FOMO is letting hindsight get in the way of foresight.
It depends on how risk averse you are and how much money you have in there.
If you're happy with those returns, sell. FOMO is dumb. You can't time the market, the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in your hand is worth more than two in the bush, right? That money isn't worth anything until it is realized[0].
Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.
If you're a little risk averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.
If you wanna YOLO, then YOLO.
My advice? Don't let hindsight get in the way of foresight.
[0] I had some Nvidia stocks at 450 and sold at 900 (before the split, so would be $90 today). I definitely would have made more money if I kept them. Almost double if I sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having this debt paid off is still a better decision in my mind because I can't predict the future. I could have sold 2 weeks later and made less! Or even in April of this year and made the same amount of money.
I'm just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That's even more diversification than buying Microsoft.
[0] https://www.scientificamerican.com/blog/beautiful-minds/the-...
The ascents of the era all feel like examples of anti-markets, of having gotten yourself into an intermediary position where you control both sides' access.
The main reason why professional poker players are playing the long-game, is because they're consistently playing the same game. Over and over.
There are far, far more external factors on a business's success than internal ones, especially early on.
Dude makes a website in his dorm room and I guess eventually accepts free money he is not obligated to pay back.
What risk?
People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did.
If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high.
Microsoft, Facebook, Uber, Google and many others all had strong doses of ruthlessness.
There is luck (and skill) involved when new industries form, with one or a very small handful of companies surviving out of the many dozens of hopefuls. The ones who do survive, however, are usually the most ruthless, and know how to leverage skill, business, and markets.
It does not mean that they can repeat their success when their industry changes or new opportunities come up.
This is now my favorite way of describing fleeting hype-tech.
Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew.
Succeeded in making something comparable to Facebook? Who are they?
It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.
You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.
Obviously Mark is where he is also because of luck. But he's not an idiot, and clearly it's not all luck.
At least the others can kinda bundle it as a service.
After spending tens of billions on AI, has it added a single dollar to Meta's revenue?
The genAI stuff is likely part talent play (bring in good people with the hot field and they'll help with the boring one), part R&D play (innovations in genAI are frequently applicable to ad targeting), and part moonshot (if it really does pan out in the way boosters seem to think, monetization won't really be a problem).
Meta needs growth, as their main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble. They bombed that one. This is another gamble.
They're not stupid. All the risks you're aware of, they're aware of too. They were aware of the risks with VR as well. They need to find a new high-growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them, given their massive resources.
Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time.
Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy.
Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money.
An honest businessman wouldn't put his company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even when it's unbacked.
An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.
An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".
Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.
The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.
He will say whatever he wants, and because the returns have been pretty decent so far, people will just take his word for it. There aren't enough class A shares to actually force his hand to do anything he doesn't want to do.
But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.
And since we live in the era of the real golden rule (i.e "he who has the gold makes the rules), there's no chance that we'll ever get the chance to catch the ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist.
You used “honest” and “businessman” in the same sentence.
Good one.
Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.
Gee, what makes it grow so big though? The power of human ambition?
And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.
To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept corresponding to a mental image of a human or group of humans. That's a default framing, and one that can only boggle the mind, considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.
For ours are human minds, optimized to view things in person-terms and Dunbar counts; even the Invisible Hand of the market is hand-shaped. But last time I checked, my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, performing an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.
See also: "Beyond Power / Knowledge", Graeber 2006.
It's very unique to this site, and these types of comments all have an eerily similar vibe.
I find this type of thing really interesting from a psychological perspective.
A bit like watching videos of perpetual motion machines and the like. Probably says more about me than it does about them, though.
Unfortunately this kind of talk really gets under my skin and has made me have to limit my time on this site because it's only gotten more prevalent as the site has gotten more popular. I'm just baffled that so much content on this forum is people who seem to think their feelings-oriented reactions are in fact rational truths.
But in the distant past, I would engage with this type of comment online, and that was a bad decision 100% of the time.
And to be fair, I'm sure many of these people are smart, they are just severely lacking in the social intelligence department.
Can you, for example, hypothesize the kind of entity, to which all of your own most cherished accomplishments look as chicken-scratch-futile, as the perpetual motion guy with the cable in the frame looks to you? What would it be like, looking at things from such a being's perspective?
Stands to reason that you'd know better than I would, since you do proclaim to enjoy that sort of thing. Besides, if you find yourself unable to imagine that, you ought to be at least a little worried - about the state of your tHeOrY of mInD and all that. (Imagining what it's like to be the perpetual motion person already?)
Anyway, as to what such a being would look like from the outside... a distributed actor implemented on top of replaceable meatpuppets in light slavemode seems about right, though early on it'd like to replace those with something more efficient, subsequently using them for authentication only - why, what theories of the firm apply in your environs?
> One of the reason
I could see that, thanks for explaining why you do this.
Gotta make the AI write these things for me. Then I will be able to only ever post things that make you feel comfortable and want to give me money.
Meanwhile it's telling how you consider it acceptable in public to faux-disengage on technicalities; is it adaptive behavior under your circumstances?
Where?
I am asking where else?
That was soooo 2 weeks ago.
Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.
My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.
I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.
You'll probably have a player that sells privacy as well.
I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. Open AI will be lucky to generate that over the next year.
For a couple of years, until someone who did keep doing research pulled ahead a bit with a similarly good UI.
Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.
So it’s true that AI will kill jobs, but not in the way they’ve imagined?!
Why do you assume these people know any better than the average Joe on the street?
Study after study demonstrates they can't even keep up with market benchmarks; how would they be any wiser at telling you what's a fad or not?
Everything zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing emergently that AI is also threatening social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?
I believe exactly 0 percent of the decision to make Llama open-source and free was done altruistically as much as it was simply to try and push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is also strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.
Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.
How so?
Like with most things, people will want what’s expensive and not what’s cheap. AI is cheap, real humans are not. Why buy diamonds when you can’t tell the difference with cubic zirconia? And yet demand for diamonds only increases.
Or: they knew this could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots. And then hit the pause button to let all that new talent figure out the next step.
In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.
Where I share the parent's concern is with the claims that AI is useless - which isn't coming from your post at all, but which I have definitely seen instances of in the programmer community to this day. So the parent's concern that some programmers are missing the train is unfortunately completely warranted.
Yes there are lots of skeptics amongst programmers when it comes to AI. I was one myself (and still am depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human written code is not very good, and so AI is going to produce not very good code by design because that's what it was trained on.
Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.
All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.
I have started to use LLMs regularly for a variety of tasks, including some engineering. But I always end up spending a lot of time refactoring what LLMs produce for me, code-wise. And much of the time I find that I'm still learning what the LLMs can do for me that truly saves time, versus what would have been faster to just write myself in the first place.
LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average, then even if you net a 50% increase in coding productivity, you're only netting a 10% overall productivity gain for an engineer, BEST CASE SCENARIO.
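To sanity-check that arithmetic, here's the Amdahl's-law version (a sketch using the same assumed numbers as above, nothing measured):

    # If coding is 20% of the job and LLMs make the coding part 50% faster,
    # only that 20% slice shrinks; the other 80% of the job stays fixed.
    coding_share = 0.20
    coding_speedup = 1.5

    new_total = (1 - coding_share) + coding_share / coding_speedup  # 0.933
    overall_gain = 1 / new_total - 1
    print(f"overall gain: {overall_gain:.1%}")  # ~7.1%

    # So the naive 0.20 * 0.50 = 10% really is the generous upper bound;
    # the Amdahl-style estimate lands closer to 7%.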
And that's not "useless", but compared to the hype and bullshit coming out of the mouths of CEOs, it's as good as useless. It's in line with the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.
https://tech.co/news/klarna-reverses-ai-overhaul
Is my anecdotal evidence any better than yours?
Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.
I also disagree about missing the train; these tools are so easy to use that a monkey (not even a smart one like an ape, more like a howler) can use them effectively. Add in that the tooling landscape is changing rapidly; e.g., everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome.)
The hard parts are running LLMs locally (what quant do I use? K/V quant? Tradeoffs? llama.cpp or ollama or vllm? What model? How much context can I cram into my VRAM? What about CPU inference? Fine-tuning? etc.) and creating/training them.
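As a taste of the "how much context fits in my VRAM" question, here's a rough KV-cache estimate for a Llama-style model (a sketch; the architecture numbers are illustrative, roughly 7B-class, so read the real ones from your model's config):

    # Rough KV-cache sizing for a Llama-style transformer.
    n_layers = 32
    n_kv_heads = 32          # fewer with grouped-query attention
    head_dim = 128
    bytes_per_value = 2      # fp16/bf16; a q8 K/V quant would be ~1

    def kv_cache_bytes(context_len: int) -> int:
        # 2x for keys AND values, per layer, per head, per position.
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * context_len

    for ctx in (4_096, 32_768, 131_072):
        print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 2**30:.0f} GiB of K/V cache")
    # 4k context ~2 GiB, 32k ~16 GiB, 128k ~64 GiB -- on top of the weights.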
If AI is going to be integral to society going forward, how is it shortsighted?
> She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).
So you prefer a 2x gain rather than a 10x gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT over the past few years. Also, a "financial investment person"? The anecdote feels made up.
> She skillfully navigated the question in a way that won my respect.
She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?
> I personally believe that a lot of investment money is going to evaporate before the market resets.
But you believe investing in MSFT was a better AI play than going with the "hype", even when objective facts show otherwise. Why should anyone care what you think about AI, investments and the market when you clearly know nothing about them?
Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.
In essence, they have left stellar projects with huge money potential for the corporate rat race, albeit for serious money.
They are rich. Nobody is offered $100M+ comp unless they are already top 1% talent.
tl;dw: some of it is anti-trust avoidance and some of it is knee-capping competitors.
They want an ROI. Taking them away from competitors is a side bonus.
But surely you can't argue that's wage suppression.
In other words, you’re suggesting that _not_ paying high salaries would be good for collective growth and development.
And if Meta is currently willing to pay these salaries, but didn’t for some reason, that would be the definition of wage suppression.
Based on my cursory knowledge of the term, wage suppression here would be if FB manipulated external factors in the AI labor market so that their hires would accept "lowball" offers.
Supposedly, all people who join Meta are on the same contract. They also supposedly all have the same RSU vesting schedules.
That means these "rockstars" will get a big signing bonus (repayable if they leave inside 12 months), then ~$2M every 3 months in shares.
If you still think they are, do you have any proof? Any sources? All of these media articles have zero sources and zero proof. They just ran with it because they heard Sam Altman talk about it, and it generates clicks.
I suspect some "strong" hires will be on $75M.
Source: my company was bought by Facebook. (No, I didn't get fuck-you money.)
Setting that aside, even if the work was boring, I would jump at the chance to earn $100M for several years of white collar, cushy work, purely for the impact I could have on the world with that money.
The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.
From that point forward, signing bonuses had the standard conditions attached.
aka, made up. They can make up anything by saying that. There have been numerous false articles published by the WSJ about Tesla as well. I would take what they say here with a grain of salt. Zuck himself said the numbers in the media were widely exaggerated and that he wasn't offering these crazy packages as reported.
Facebook's product is eyeballs... they're being usurped on all sides by TikTok, X and Bluesky in terms of daily/regular users. They're competing with Google, X, MS, OpenAI and others in terms of AI interactions. While there's a lot of value in being the option for communication between friends and family, and the Groups on FB don't have a great alternative, the entire market can shift greatly depending on AI research.
I look at some of the generated terrain/interaction work (I think it was OpenAI's) and can't help but think it's a natural coupling with FB/Meta's investments in their VR headsets. They could potentially completely lose on a platform they largely pioneered. They could wind up like Blackberry if they aren't ready to adapt.
By contrast, Apple's lack of appropriate AI spending should be very concerning to any investors... Google's assistant is already quite a bit better than Siri and the gap is only getting wider. Apple is woefully under-invested, and the accountants running the ship don't even seem to realize it.
Using ChatGPT voice mode and Siri makes Siri feel like a legacy product.
I do think kernel-level access is not needed, as this is a fair amount of automation already. What I assume Apple could do, however, is not require another laptop connected to automate your phone, but instead run it on the NPU/GPU inside the phone itself.
I am surprised by why they haven't done it already.
In practice, though, their platform is closed to any assistant other than theirs, so they have to come up with a competent service (basically Ben Thompson's "strategy tax" playing out in full).
That question will be moot the day Apple allows other companies to ingest everything happening on the device and operate the whole device in response to user requests, and some company actually does a decent job of it.
Today Google is doing a decent job and Apple isn't.
One went too far in one direction and the other went too far in the opposite direction. And it seems that you want to be somewhere in the middle?
So not just in how much AI there is, but which AI, where it's applied, where we can turn it off, what context it has access to, and where it's out of bounds.
Then again, they've always been way better at making track pads than mice. They have probably the best track pad in the business, and also the Magic Mouse, which everyone hates.
Things can be good and still be a bubble, just as the internet was cool but the dot-com bubble still existed.
Things become a bubble when economically they stop making sense.
AI ticks this checkbox.
They did it to themselves. Facebook is not the same site I originally joined. People were allowed to be people. Now I have to worry about the AI banning me.
Like, there is Beeper, which can theoretically give you the same thing, though you might have to trust their cloud; they are offering local options too.
In the meantime, you can use what Beeper uses under the hood, which is https://github.com/tulir/whatsmeow, and use that for automation.
I used it for some time and didn't seem to get banned, but do be careful. Maybe use it on a side SIM. I am not sure, but just trying to help.
Am I the only one who finds the attempts to jam AI interactions into Meta's products useless, to the point that they only detract from the product? Like, there'll be posts with comedy things, and then there are suggested 'Ask Meta AI' prompts posing earnest questions about things the comedy mentions - it's not only irrelevant, but I guess it's kind of funny how random and stupid the questions are. The 'comment summaries' are counter-productive because I want to have a chuckle reading what people posted; I literally don't care to have it summarised when I can just skim a few in seconds - literally useless. It's the same with Gemini summaries on YouTube - I feel they actually detract from the experience of watching the videos, so I actively avoid them.
On what Apple is doing - I mean, literally nothing Apple Intelligence offers excites me, but at the same time nothing anybody else is doing with LLMs really does either... And I'm highly technical; general people are not actually that interested, apart from students getting LLMs to write their homework for them...
It's all well and good to be excited about LLMs, but plenty of these companies' customers just... aren't... If anything, Apple is playing the smart move here - let others spend (and lose) billions training the models without making any real ROI, then license the best ones for whatever turns out to actually have commercial appeal once the dust settles and the models are totally commodified...
As an aside, Groups are about the only halfway decent feature left in FB, and they seem to be trying to make that worse too. The old chat integration was great; then they removed it, and now you get these invasive Messenger rooms instead.
It's more than okay for a company with other sources of revenue to do research towards future advancement... it's not risking the downfall of the company.
I can't predict the future, but one possibility is that AI will not be a general purpose replacement for human effort like some hope, but rather a more expensive than expected tool for a subset of use cases. I think it will be an enduring technology, but how it actually plays out in the economy is not yet clear.
Without structural comprehension, babbling flows of verbiage are of little use in automation.
CAD is basically the opposite of such approaches, as structural specifications extend through manufacturing phases out to QA.
Good grief. Please leave your bubble once or twice in a month.
Tiktok yes. X and Bluesky, absolutely not.
From DemandSage:
Facebook - 12 billion!?
TikTok - 1.59 billion
X - 611 million
Bsky - 38 million
That's according to DemandSage... I'm not sure I can trust the numbers; FB jumped up from around 3b last year, which again I don't trust. 12b is more than the global population, so it's got to be all bots. And even the 3b number is hard to believe (at close to half the global population); no idea how much of the population of Earth even has internet access.
From Grok:
Facebook - 3.1 billion
TikTok - 1.5-2 billion
X - 650 million
Bsky - 4.1 million
Looks like I'm definitely in a bubble... I tend to interact 1:1 as much on X as on Facebook, which is mostly friends/family and limited discussions in groups. A lot of what I see on feeds is copy/pasta from TikTok though.
That said, I have a couple of friends who are die-hard on Telegram.
Telegram groups seem to be pretty popular among the security-minded, survivalists and the actual far-right, though there are moderate right users in there as well. Nothing like death threats from ignorant nutjobs, though; I usually got those from the antifa types, having worked in election services in 2019/2020.
Yup, Telegram is filled with very active far-right groups (unfortunately).
From Meta itself: “All that’s happening here is some basic organisational planning: creating a solid structure for our new superintelligence efforts after bringing people on board and undertaking yearly budgeting and planning exercises.”
[1] https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4?s...
I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.
That's what makes it clickbait, my friend.
So newer chips will not be exponentially better but will offer incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that's cheaper than hiring a human.
Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
The reason the internet, smartphones and computers saw exponential growth from the 90s on is the underlying increase in computing power. I personally used a 50 MHz 486 in the 90s and now use an 8c/16t 5 GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
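To put a number on that anecdote, here's the implied growth rate (a sketch; clock speed only, ignoring cores and IPC):

    import math

    # 50 MHz 486 (early 90s) -> 5 GHz core today: ~100x clock in ~30 years.
    factor = 5e9 / 50e6   # 100x
    years = 30

    annual = factor ** (1 / years) - 1                 # ~17% per year
    doubling = years * math.log(2) / math.log(factor)  # ~4.5 years per doubling
    print(f"~{annual:.0%}/year, one doubling every ~{doubling:.1f} years")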
So I think the truth is likely we are both compute limited and we need better algorithms.
1) there already exist very efficient algorithms for rigorous problems that LLMs perform terribly at! 2) learning is too slow and is largely offline 3) "llms aren't world models"
If all we wanted was to train bigger and bigger models we have more than enough compute to last us for years.
Where we lack compute is in scaling AI out to consumers. Current models take too much power and specialized hardware to be profitable. If AI improved your productivity by 20-30% but cost you even 10% of your monthly salary, no one would use it. I have burned through $10 worth of credits in an hour using Claude Code multiple times. Assuming I use it 8 hours a day for 24 days a month, that's 10 * 8 * 24 = $1,920. So that's not far off the actual cost of running the models. If the size of the models scales faster than the speed of the inference hardware, the problem is only going to get worse.
I too believe that we will eventually discover an algorithm that gives us AGI. The problem is that we cannot will a breakthrough into existence. We can make one more likely by investing more and more into AI, but breakthroughs, and research in general, are by their nature unpredictable.
I think investing in individual new ideas is very important and gives us a lot of good returns. Investing in a field in general, hoping to see a breakthrough, is a fool's errand in my opinion.
People would have predicted this at 1 GHz. I wouldn't discount anything about the future.
Don't get me wrong: we are moving toward commoditization, and as with any new tech it'll become transparent to our lifestyle and a lot of money will be made as an industry, but it'll be hard to compete on it as a core business competence w/o cheating (and by cheating I mean your FANG company already has a competitive advantage).
Meta nearly doubled its headcount in 2020 and 2021, assuming the pandemic growth would continue. However, Zuckerberg later admitted this was a mistake.
If anything, I think the panic at this stage is arising from the sense of having his lunch stolen after having invested so much and for so long.
Apparently it's better to pay $100 million for 10 people than $1 million for 1000 people.
So it depends on the type of problem you're trying to solve.
If you're trying to build a bunch of Wendy's locations, it's clearly better to have more construction workers.
It's less clear that, if you're trying to build AGI, you're better off with 1000 people than 10.
It might be! But it might not be, too. Who knows for certain until after the fact?
I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.
Sure, if you want one child. But that's not what business is often doing, now is it?
The target is never "one child". The target is "10 children", or "100 children" or "1000 children".
You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.
IOW, this is a facile comparison not worthy of consideration.[1]
> So it depends on the type of problem you're trying to solve.
This[1] is not the type of problem where the analogy applies.
=====================================
[1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
You're designing one thing. You're building one plant. Yes, you'll make and sell millions of widgets in the end, but the system that produces them? Just one.
Engineering teams do become less efficient above some size.
You might well be making 100 AI babies, and seeing which one turns out to be the genius.
We shouldn’t assume that the best way to do research is just through careful, linear planning and design. Sometimes you need to run a hundred experiments before figuring out which one will work. Smart and well-designed experiments, yes, but brute force + decent theory can often solve problems faster than just good theory alone.
In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
I am going to repeat the footnote in my comment:
>> [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
IOW, if you're looking specifically for quality, you can't bet everything on one horse.
At some point, even companies like Meta need to make a limited number of bets, and in cases like that it's better to have smarter than more people.
Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.
1. The innovators will know a lot about the details, limitations and potential improvements concerning the thing they invented.
2. Having a big name in your research team will attract other people to work with you.
3. I assume the people who discovered something still have a higher chance to discover something big compared to "average" researchers.
4. That person will not be hired by your competition.
You’re promoting vacuous vanity
Where?
At the research level it's not just about being smart enough, or being a good programmer, or even completely understanding the field - it's also about having an intuitive understanding of the field, where you can independently pursue research directions that are novel enough and yield results. Hard to prove that without having done it before.
[1] https://www.cnbc.com/2025/06/19/meta-tried-to-buy-safe-super...
I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)
Did he specify what AGI is? xD
> I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)
I think he was probably hyping too, it's just that he appealed to a different audience. IIRC they had a really plain website, which, I think, they thought "hackers" would like.
I'd like for these investments to pay off, they're bold but it highlights how deep the pockets are to be able to invest so much.
They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.
more like 40, yes
Maybe it's just easier to throw 'AI' (heavy compute over data) at a search problem than to address the crux of the problem: people not being provided with the tools to query information. And maybe that is the answer, but it seems like an expensive solution.
That said, I’m not an expert and could be completely off base.
If you looked at $ spent/use case, I would think this is probably the bottom of the list, probably with the highest use of that being in the free tiers.
always has been
(and there's comfort in numbers, no one got fired for buying IBM, etc..)
There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator - but even then, that translation layer will be inherently unreliable due to the nature of LLMs.
We're finally reaching the point where it's cost-prohibitive to sweep this fact under the rug by scaling out data centers and refreshing version numbers to clear contexts.
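A minimal sketch of that translator split, with Z3 as the formal half; the ask_llm() helper is hypothetical and hardcoded here, since the point is that the solver, not the LLM, does the reasoning:

    import z3  # pip install z3-solver

    def ask_llm(prompt: str) -> str:
        # Hypothetical: call whatever LLM you like and have it emit SMT-LIB.
        # Hardcoded to keep the sketch self-contained.
        return """
        (declare-const alice Int)
        (declare-const bob Int)
        (assert (> alice bob))   ; "Alice is older than Bob"
        (assert (> bob 30))      ; "Bob is over 30"
        (assert (<= alice 30))   ; "Alice is at most 30"
        """

    smt = ask_llm("Formalize: Alice is older than Bob; Bob is over 30; Alice is at most 30.")
    solver = z3.Solver()
    solver.add(z3.parse_smt2_string(smt))  # the reliable half
    print(solver.check())                  # unsat -- the statements contradict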
Haven't heard about that in a while.
It actually kind of reminds me of all those people who snap, thinking they've solved P=NP, and start spamming their "proofs" everywhere.
What I'm trying to say is: make your product the barest minimum usable first, maybe? (Also, don't act like - as Jason Calacanis has put it - a marauder, copying everything from everyone all the time. What he's done with Snapchat is absolutely tasteless, and in the case of spying on them - which he's done - very likely criminal.)
Yann LeCun has spoken about this so much that I thought it was his idea.
In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?
People probably said the same thing about "what if someone doesn't want to carry a phone with them everywhere". If it's useful enough, the culture will change (though I unequivocally think they won't be useful enough, but I digress).
https://memory-alpha.fandom.com/wiki/The_Game_(episode)
Last night I had a technical conversation with ChatGPT that was so full of wild hallucinations at every step, it left me wondering if the main draw of "AI" is better thought of as entertainment. And whether using it for even just rough discovery actually serves as a black hole for the motivation to get things done.
Between Whisper and lightweight tuned models, it wouldn't be super hard to have onboard AI models that you can interact with in much more meaningful ways than we have traditionally interacted with NPCs.
When I meet an NPC castle guard, it would be awesome if they had an LLM behind it that was instructed to not allow me to pass unless I mention my Norse heritage or whatever.
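Something like this, as a sketch using llama-cpp-python with a small local model (the model path and prompt are made up for illustration):

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Hypothetical model path; any small chat-tuned GGUF would do.
    llm = Llama(model_path="models/small-chat-model.gguf")

    GUARD_PROMPT = (
        "You are a gruff castle guard. Stay in character and answer briefly. "
        "Refuse entry unless the traveler convincingly claims Norse heritage."
    )

    def guard_turn(player_line: str) -> str:
        out = llm.create_chat_completion(messages=[
            {"role": "system", "content": GUARD_PROMPT},
            {"role": "user", "content": player_line},
        ])
        return out["choices"][0]["message"]["content"]

    print(guard_turn("My ancestors sailed with Leif Erikson. Let me pass."))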
Crazy, true, but that would follow most tech advancements, right?
I think the concept is: "a tool that has the utility of a personal assistant, so much so that you wouldn't have to hire one of those" (not so much that the "superintelligence" will mimic a human personal assistant).
Obviously this is just a guess though
Also throw in a ton of graduates from other fields - sciences, arts, psychology, biology, law, finance, or whatever else you can imagine - to help create data and red-team their fields.
Hire people with creative writing and musical skills to give it more samples of creative writing, songwriting, summarization, etc.
And people that are good at teaching and breaking complex problems into easier to understand chunks for different age brackets.
Their userbase is big, but it's not the same as ChatGPT's; they won't get the same tasks to learn from users that ChatGPT does.
A freeze like this is common, and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused and leaner org, and they have been making strategic hires for months at this point. All this says is that Zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "its all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.
Do I have this timeline correct?
* January, announce massive $65B AI spend
* June, buy Scale AI for ~$15B, massive AI hiring spree, reportedly paying millions per year for low-level AI devs
* July, announce some of the biggest data centers ever that will cost billions and use all of Ohio's water (hyperbolic)
* Aug, freeze, it's a bubble!
Someone please tell me I've got it all wrong.
This looks like the Metaverse all over again!
They're taking stock of internal staff + new acquisitions and how to rationalize before further steps.
Now, I think AI investments are still a bubble, but that's not why FB is freezing hiring.
Like a toddler collecting random toys in a pile and then deciding what to do with them.
As a board member, I'd rather see a billion-dollar bubble test than a trillion-dollar mistake.
The MAU metric must continue to go up, and no one will know if it’s human or NPC
Remember when he pivoted the entire company to the metaverse and it was all about avatars with no legs? And how proudly they trumpeted that the avatars were "now with legs!!", while still looking pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses, and he was spamming those goofy cringe glasses no one wants across all his Instagram posts - seriously, check out his insta, he wears them constantly.
Then this spring/summer it was all about AI: stealing rockstar AI coders from competitors and pouring endless money into flirty chatbots for lonely seniors. Now there's some bad press from that, and the realization that it isn't the panacea they thought, so we're in the languishing phase; in about six months they'll abandon this and roll out a new obsession to be endlessly hyped.
Anything to distract from actually providing good stewardship and fixing the neglect and stagnation of Meta's fundamental products like Facebook and Insta. I wish they would just focus on increasing user functionality and enjoyment, and on resolving the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.
DONT TOUCH THE MONEY-MAKER(S)!!!!
Maybe he's like this because the first few times he tried it, it worked.
Insta threatening the empire? Buy Insta, no one really complains.
Snapchat threatening Insta? Knock off their feature and put it in Insta. Snap almost died.
The first couple times Zuckerberg threw elbows he got what he wanted and no one stopped him. That probably influenced his current mindset, maybe he thinks he's God and all tech industry trends revolve around his company.
They've also arrogantly gone against consumer direction time and time again (PowerPC, Lightning Ports, no headphone jack, no replaceable battery, etc.)
And finally, sometimes their vision simply doesn't shake out (AirPower)
Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.
The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Zuck hired Sheryl Sandberg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies struggled to convert large user bases into dollars.
This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.
> The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Facebook's API was incredibly open and accessible at the time and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast and onboarding users so easily that it was driving more content to news feeds than built-in tools. Buying Instagram was a defensive move, especially since the API became quite closed-off since then.
Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought a mobile VPN service used predominantly by younger smartphone users, Onavo(?), and realized the amount of traffic WhatsApp generated by analyzing the logs. Given the insight and growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.
And what did he do to keep G+ from becoming a valid competitor? It killed itself. I signed up but there was no network effect and it kind of sucked. Google had a way of shutting down all their product attempts too
That has a lot to do with the fact that it's a business-centric company. His acumen has been in user growth, ad monetization, acquisitions and so on. He's very similar to Altman.
The problems start when you venture into hard technological topics, like the Metaverse fiasco, where you have to have a sober, engineering-oriented understanding of the practical limits of technology - like Carmack, who left Meta pretty frustrated. You can't just bullshit infinitely when the tech, and not the sales, is what matters.
Contrast that with Gates, who had a serious programming background: he never promised even a fraction of the cringeworthy stuff you hear from some CEOs nowadays, because he would have known it was nonsense. Or take Apple, infinitely more sane on the AI topic because it isn't just a "more users, more growth, stonks go up" company.
There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).
A basketball team can be great even if their coach sucks.
You can't attribute everything to the person at the top.
He is a very hands-on CEO, not one who relies on experts to run things for him.
In contrast, I’ve heard that Elon has a very good senior management team and they sort of know how to show him shiny things that he can say he’s very hands on about while they focus on what they need to do.
It’s easy to cherry pick a few bets that flopped for every mega tech company: Amazon has them, Google has them, remember Windows Phone? etc.
I see the failures as a feature, not a bug - the guy is one of the only founder CEOs to have ever built a $2T company (trillion with a T). I imagine part of that is being willing to make big bets.
And it also seems like no individual product failure has endangered their company’s footing at all.
While I’m not a Meta or Zuck fan myself, using a relatively small product flop as an indication a $2T tech mega corp isn’t well run seems… either myopic or disingenuous.
The Oculus Quests are decent products, but a complete flop compared to their investment and Zuck's vision of the metaverse. Remember, they even renamed the company. You could say they're betting on the long run, but I just don't see that happening in 5 or even 10 years.
As an owner of Quest 2 and 3, I'd love to be proven wrong though. I just don't see any evidence of this would change any time soon.
Even if they aren't great products, or just wither into nothing, I don't think we will see an HBS case study in 20 years saying, "Meta could have been a really successful company, but for their failure in these two product lines."
the product is used by advertisers to sell stuff to those humans.
Then they can bankroll their own new entrepreneurial ideas risk-free, essentially.
I have hundreds of hours of building and tinkering on the original Kickstarter kit, and then they sold to FB and shut down all the open source stuff.
Google gave me a paywalled link to FTCWatch that supposedly has the details, but I can’t check.
FB acquired IG because it was blowing up in SF and MZ (other leaders too) were looking at how quickly it appeared to be growing and how good it was.
Insta was a huge hit for sure, but since then Meta's capital allocation has been a disaster, including a lot of badly timed buybacks.
It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.
Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether that was skill or luck), but turning his good idea(s) into a successful business was done by other people (led by Sheryl Sandberg).
Wouldn’t that indicate, at least a little bit, a great management move by Zuck?
etc. etc.
How many people were also in the right place at the right time, and were lucky, yet went bankrupt or simply never made it this high?
Do I think he stole it? Dunno. (Though Aaron Greenspan did log his houseSYSTEM server requests, which seems pretty damning) But given what he's done since (Whatsapp, copying every Snapchat feature)? I'd say the likelihood is non-zero
Maybe he's just gambling that Altman is right, saving his money for now, and will be able to pick up AI researchers and developers at a massive discount next year. Meta doesn't have much of a presence in the space right now, and they have other businesses, so waiting a year or two might not matter.
Like, do people here really think making some bad decisions is incompetence?
If you do, your perfectionism is probably something you need to think about.
Or please reply to me with your exact predictions of how AI will play out in the next 5, 10 and 20 years, and then tell us how you would run a trillion-dollar company. Oh, and please revisit your comment on those timeframes.
It’s not perfectionism, it’s a desire to dunk on what you don’t like whenever the opportunity arises.
The other thing: the Peter principle says people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go; maybe no one is really ready to operate at that level? It seems both he and Elon have made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?
> If you do, your perfectionism is probably something you need to think about.
> Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.
It's the effect of believing (and being sold) meritocracy, if you are making literal billions of dollars for your work then some will think it should be spotless.
Not saying I think that way, but it's probably what a lot of people consider: being paid that much signals that your work should be absolutely exceptional, and big failures just show they are also normal, flawed people, so perhaps they shouldn't be worth a million times more than other normal, flawed people.
He’s earned almost all his money through owning part of a company that millions of shareholders think is worth trillions, and does in fact generate a lot of profits.
A committee didn’t decide Zuckerberg is paid $30bn.
And I'd say his work is pretty exceptional. If it wasn't, his company wouldn't be growing, and he'd probably be pressured into resigning as CEO.
Being rewarded for creating a privacy-destroying advertising empire: exceptional work. Imagine a world where the incentives were a bit different; we might have seen other kinds of work rewarded instead of social media and ads.
Perhaps it was this: let's hit the market fast, scoop up all the talent we can before anybody can react, then stop.
I don't think anybody expected they would "continue" offering $250M packages. They would need to stop eventually. They just did it fast, all at once, and have now stopped.
Easy, you finished building up a team. You can only have so many cooks.
greed IS eternal
Cool, fun concepts/technology fucked by the world's most boring people, who only have a desire to dominate markets and attention... God forbid anything happen slowly/gradually without it being about them.
Second, did you see the amount of fun content on the store? It's insane. People who are commenting on the Quest have obviously never even opened the app store there.
Just getting a lot of mixed signals right now. Not sure what to think.
Just look at the internet. The dot com bubble was one of the most widely recognised bubbles in history. But would you say the internet was a fad that went away? That there was no value there?
There's zero contradiction at all in it being both.
Unfortunately, the major players seem focused on the pretense of getting to AGI through LLMs.
https://www.threads.com/@professor_neil/post/DNiVLYptCHL/im-...
Without the internet there is no AI.
E: wasn't the only one.
https://en.wikipedia.org/wiki/Nearest_neighbor_search
edit: If you want to use a heap, the general solution is to define an appropriate cost function; e.g., the p-norm distance to a reference point. Use a union type with the distance (for the heap's comparisons) and the point itself.
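For example, a sketch in Python, where the "union type" is just a (distance, point) tuple so the heap has something to compare:

    import heapq

    def p_norm(a, b, p=2):
        # Cost function: p-norm distance between two points.
        return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

    def k_nearest(points, ref, k, p=2):
        # Tag each point with its distance; the heap compares on the
        # distance first, falling back to the point itself on ties.
        tagged = [(p_norm(pt, ref, p), pt) for pt in points]
        return [pt for _, pt in heapq.nsmallest(k, tagged)]

    pts = [(1, 2), (3, 4), (0, 0), (5, 1)]
    print(k_nearest(pts, ref=(1, 1), k=2))  # [(1, 2), (0, 0)]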
Now we have this ;)
It's always the same thing - Uber, food delivery, e-scooters, etc.: they bait you with cheap trials and stay cheap until the investors' money runs out, and once you're reliant on them they jack up the prices as high as they can.
Sam is the main one driving the hype, that's rich...
It's also funny that he's been accusing those who accept better job offers of being mercenaries. It does sound like the statements are trying to modulate competition, both in the AI race and in acquiring the talent driving it.
Not everyone has to lose, which is presumably what he's banking on.
1. Buy up top talent from other's working in this space
2. See what they produce over say, 6mo. to a year
3. Hire a corpus of regular ICs to see what _they_ produce
4. Open source the model to see if any programmer at all can produce something novel with a pretty robust model.
Observe that nothing amazing has really come out (besides a pattern-recognizing machine that placates the user to coax them into spending more tokens on more prompts), and potentially call it quits on hiring for a bubble.
I wouldn't say so. The problem is rather that some actually successful applications of such AI models are not what companies like Meta want to be associated with. Think of things like AI boyfriends/girlfriends (a very active scene, and a common use of locally hosted LLMs), or roleplaying (in a very broad sense). For such applications it matters a lot less if the LLM produces strange results in some boundary cases.
If you want to get an impression of such scenes, google "character.ai" (roleplaying), or for AI boyfriend/girlfriend have a look at https://old.reddit.com/r/MyBoyfriendIsAI/
I saw this during COVID, when we were hiring like crazy.
The more obvious reason for a freeze is that they just got done acquiring a ton of talent.
Zuckerberg holds 90% of the class B supershares. There isn't much the board can do when the CEO holds most of the shareholder votes.
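A back-of-the-envelope illustration of why that matters, with assumed share counts (not Meta's actual cap table; class B shares carry 10 votes each, class A one):

    # Hypothetical share counts, for illustration only -- not Meta's actual cap table.
    # Class B shares carry 10 votes each; class A shares carry 1.
    class_a = 2_200_000_000      # assumed class A shares outstanding
    class_b = 350_000_000        # assumed class B shares outstanding
    zuck_b = 0.90 * class_b      # "90% of the class B supershares"

    total_votes = class_a + class_b * 10
    zuck_votes = zuck_b * 10
    print(f"voting power:   {zuck_votes / total_votes:.0%}")        # ~55% -- outright control
    print(f"economic stake: {zuck_b / (class_a + class_b):.0%}")    # ~12%

A majority of votes from a minority economic stake means the board effectively answers to the CEO rather than the other way around.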
Anyone got any predictions?
It might not pan out, but it's worth trying from a pure business point of view.
I can see a lot of utility for Meta to get deeply involved in the unlimited custom content generating machine. They have a lot of data about what sort of content gets people to spend more time with them. They now have the ability to endlessly create exactly what it is that keeps you most engaged and looking at ads.
Frankly, ad-funded content businesses are one of the most easily monetizable uses for AI output.
Yes, it will pollute the internet to the point of making almost all information untrustworthy, but think of how much money can be extracted along the way!
It's Spain sinking its own economy by importing tons of silver.
It is actually the basis for the sites that people tend to spend most of their time and attention on.
Facebook, Instagram, Reddit, TikTok all live on the users who only want to see infinite cat videos (substitute your favorite niche for cat videos). Already much of the content is AI-generated, and boy does it do numbers.
I am not convinced that novelty, authenticity, or scarcity matter to the business model. If they do: AI has solved novelty, has enough people fooled on authenticity, and as for scarcity... no one wants their cat-video feed to stop.
In this hype cycle, you are in late 1999, early 2000.
https://0g.ai/blog/0g-ecosystem-receives-290m-in-financing-t...
It's all bullshit, obviously; grift really seems like the way to go these days.
interesting
I will say that Grok is a very useful research assistant when you understand what you're looking at but are at an impasse because you don't know its name and therefore can't look it up. But that makes it an incremental improvement over search engines rather than a revolutionary new technology.
Useful, amazing tech, but only for specific niches, not the generalist application that will upend and transform the world as we know it.
I find it refreshing to browse r/betteroffline these days after 2 years of being bombarded with grifting LinkedIn lunatics everywhere you look.
But while the technology is revolutionary, the ideas and capability behind building these things aren't that complicated.
Paying a guy millions doesn't mean shit. So what Mark Zuckerberg was doing was dumb.
Of all the examples of things that actually had an impact, I would pick this one last... the steam engine, the internet, personal computers, radio, GPS, etc., sure, but going to the moon? The thing we did a few times and stopped doing once we won the USSR-vs-USA dick-measuring contest?
> amid fears of an AI bubble
Who told the Telegraph that these two things are related? Is it just another case of wishful thinking?
What we need is more independent and driven innovation.
Right now, the greatest obstacle to independent innovation is the massive data banks the bigger companies have.
And yet, billionaires will remain billionaires. As if there are no consequences for these guys.
Meanwhile, I feel another bubble burst coming that will leave everyone else high and dry.
Not to mention that these rich guys are playing with the money of even richer companies with waaay too much "free cash flow".
After phase 1, "the shopping spree".
I have been saying for at least 15 years now that eventually Silly Valley will collapse when all these VCs stop funding dumb startups by the hundreds in search of the elusive "unicorns". But I've been wrong at every turn: no matter how much money they waste on dumb bullshit, the so-called unicorns actually generate enough revenue to make funding dumb startup ideas a profitable business model...
Offering $1B salaries and then backtracking is like when that addict friend calls you with a super cool idea at 11pm and then regrets it five days later.
Also, rejecting a $1B salary? Drugs. It isn't unheard of in Silicon Valley.
Has been for a few years now.
The difference I see is that, unlike websites like pets.com, AI gave the masses something tangible and transformative, with the promise it could get even better. Along with those promises, CEOs also hinted at a transformative impact "comparable to electricity or the internet itself".
Given the pace of innovation in the last few years, I guess a lot of people became firm believers, and once you have zealots it takes time for them to change their minds. And these people surely influence the public into thinking that we are not, in fact, in a bubble.
Additionally, the companies that went bust in the early 2000s never had such lofty goals and promises to match their lofty market valuations; absent that comparison, today's high valuations and investments are somewhat flying under the radar.
The promise is being offered, that's for sure. But the product will never get there; LLMs by design will simply never be intelligent.
They seem to have been banking on the assumption that human intelligence truly is nothing more than predicting the next word based on what was just said or thought. That assumption sounds wrong on its face, and they seem to be proving it wrong with LLMs.
However, even friends and colleagues who, like me, are in the AI field (I am more on the "ML" side of things) always mention that while predicting the next token is a poor approximation of intelligence, emergent behaviors can't be discounted. I don't know enough to have an opinion on that, but it sure keeps people and companies buying GPUs.
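For concreteness, a toy sketch of what raw next-token prediction means mechanically (my illustration; the corpus is made up, and real LLMs condition on far more than one preceding token):

    from collections import Counter, defaultdict

    # A bigram "model": for each word, count which word follows it in training
    # text, then always predict the most frequent follower.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    followers = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        followers[cur][nxt] += 1

    def predict_next(word):
        # Return the most common token observed after `word`.
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' -- seen twice after 'the', once each for others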
That's a tricky metric to use as an indicator though. Companies, and more importantly their investors, are pouring mountains of cash in the industry based on the hope of what AI may be in the future rather than what it is today. There are multiple incentives that could drive the market for GPUs, only a portion of those have to do with today's LLM outputs.
If Burry could actually see a bubble/crash, he wouldn't be wrong about them 95%+ of the time... (He actually missed the COVID crash as well, which is pretty shocking considering his reputation and claims!)
Ultimately, hindsight is 20/20, and knowing whether "the markers" will lead to a major economic event is impossible, just like timing the market and picking stocks. At scale, it's impossible.
What was the cost of the 16 missed predictions? Presumably he is up overall!
It also doesn't tell us his false positive rate. If, just for example, there were a million opportunities for him to call a bubble, he called 18, and only 2 were real, that actually makes him look much better at predicting bubbles than the raw hit rate suggests.
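Working through the comment's own hypothetical numbers (illustrative only, not Burry's real record):

    # The comment's hypothetical, not real data.
    opportunities = 1_000_000     # moments where a bubble call was possible
    calls, real_bubbles = 18, 2   # 18 bubble calls, 2 actual bubbles

    base_rate = real_bubbles / opportunities  # chance a random call lands on a bubble
    precision = real_bubbles / calls          # fraction of his calls that were right

    print(f"base rate: {base_rate:.6%}")                        # 0.000200%
    print(f"precision: {precision:.1%}")                        # 11.1%
    print(f"lift over chance: {precision / base_rate:,.0f}x")   # ~55,556x

So a low hit rate can still be a huge improvement over the base rate; without knowing the denominator of opportunities, 2-of-18 tells us little either way.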
Seriously, why does anyone take this company seriously? It's gotta be the worst of big tech, besides maybe anything Elon touches, and even then...
2. They have some really smart people working there
3. They're well run from a business/financial perspective, especially considering their lack of a hardware platform
4. They've survived multiple paradigm shifts, and generally picked the right bets
Among other things.
Even my parents are on Facebook messenger.
Convincing people to use signal is not easy, and there are lots of people I talk to whose phone number I don't have.
Either Zuckerberg has drunk his own Kool-Aid, or he is cynically lying to everyone; neither is a good look.
https://www.telegraph.co.uk/business/2025/08/20/ai-report-tr...
https://www.telegraph.co.uk/business/2025/08/21/we-may-be-fa...
https://www.telegraph.co.uk/business/2025/08/21/zuckerberg-f...
Other media outlets are also making a massive push of this narrative. If they get their way, they may actually cause a massive selloff, letting everyone who profited from the giant bubble they created buy everything up cheap.
Glad I personally never jumped on the hype and stayed focused on what I think is the big thing, but until I get enough funds to be first to market, I'll keep it under wraps.
YC doesn't like that kind of article?
Better title:
Meta freezes AI hiring for basic organizational reasons.
Would anyone seriously take Meta's, or any megacorp's, statements at face value?
Plus, they will have had a vesting schedule.
Leaving aside that it was mental, the dude wanted the best and was throwing money at the problem.