DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.
It's a toxic, fear-based culture. The moment you join, the people around you are already thinking about how to scapegoat you. People gatekeep the actual work and save it for political favorites, while everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it.
But looking at the track record, there's a very concerning lack of execution around critical strategic objectives. Take the metaverse - I know most people laugh at it because they think it was a bad idea to start with. I push that aside and look at the execution. They poured a startling amount of money into it, and the end result - technically - sucks. This is not good execution of a bad idea. This is incompetent execution of an untested idea. After 5 years of huge investment, the characters in Horizon Worlds still look like cartoons. All the advertised features - hyper-realistic worlds, generative world building, etc. - failed to materialise. They made a face-saving pivot to mobile, where they claim it is successful, but I have literally never heard of anyone using it. I suspect the traffic is entirely synthetic, driven from their existing properties.
Then you can look at AI. You can say the jury is still out on their AI reboot, but it has been out a long time now, and it seems like at best they are just clawing their way back to par with the leading AI labs. Even that is being generous, because so little has been released. What is certain is that they went from a leading position as recently as 2022-2023 to falling completely off the radar - despite still holding the undisputed leading AI framework in PyTorch.
I have to conclude there's a genuine culture and execution problem that probably centers on the fact that Zuck is simply not a good people manager. And his relationship with the next level down (Andrew Bosworth etc) is such that he doesn't enable them to be either. And this all permeates through to an organization that delivers at a fraction of what it should given the resources it is expending.
Hmm... I don't struggle, I enjoy it. The goal isn't to start glossy product production; it's to learn how to do it. As soon as that's obvious, the project is usually shelved - except for the 'main line' projects, which together can result in something significant.
So this applies to even, say, mid-level developers? Wouldn't you get work assigned to you after you're hired, or do you actually have to hunt for your own projects, like you might in some consulting firms?
The rest of big tech isn't much better. Big G is less stressful, but you'll see vicious and cringey behavior left and right. Hyped large startups are cults and 100% cringe. Meta is kind of the worst of both worlds though. "But they pay so well". Yeah, also: life is short.
Someone forwarded an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted from ChatGPT.
For, say, the HN gang that thinks in terms of context shifts, information load, and things on that wavelength, the problem with that situation is obvious - but I realised then that it is not at all obvious to the average person. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.
There is zero understanding or consensus of acceptable practices around that sort of thing baked into societal norms right now.
"Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."
They are at high risk.
Employees using ChatGPT to renegotiate their salary are seriously showing a lack of cognitive awareness.
I think this is a reasonable standard to hold; otherwise, like many before have said... send me the prompt. It's actually more interesting/better to know that a coworker is struggling to communicate about something.
I often send out the LLM version, but still check if it contains the original thoughts correctly.
It's not a bad way to extend your vocabulary & catch spelling mistakes.
Please don’t do this. You probably aren’t aware of how badly this can land. It’s not just about containing your original thoughts; it’s about the verbosity, repetitiveness, and absurdity of it all.
Grammarly is a much better tool for these kinds of purposes, and it actually guides and teaches you to improve your writing along the way.
This is the root frustration spreading across workplaces everywhere. Before AI, there was simply no way for someone to generate a design document, Jira ticket, or pull request without investing a lot of their own time and effort into producing what you saw.
LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.
For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: They can now generate the appearance of doing a lot of work with no more than a few lines of asking an LLM to produce documents.
If anyone spends the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send another document over with the fixes. Now they've even captured you into doing their work for them!
For teams or even entire companies that were relying on appearances of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail-job worker in the world just received a tool that will generate the appearance of doing their job for them, and may even be plausibly correct most of the time. One person can generate volumes of design documents and Jira tickets, and even copy and paste witty responses into the company Slack, and appear by volume to be the most engaged and dedicated employee while doing less actual work than ever before.
I think teams that already had good review cultures with managers who cared about the output rather than the metrics are doing fine because anyone even a little bit engaged can spot the AI copy-and-paste employees with even a little inspection. The lazy managers who relied on skimming documents and plotting number of PRs or lines of code changed are in for a rude awakening when they discover the employees dominating their little games are the ones doing the most damage to the team.
Oh, we know. It's pretty clear in many cases.
And frankly, the best signal now is: the shorter it is, the greater the likelihood it was at least expensive for the human to produce. Said another way - a shorter thing is easier to make sense of completely, and if it's garbage, it's garbage. At least the cost borne by you was minimised!
Lmgtfy was a passive-aggressive (but not really passive) way to say “hey, are you too dumb to google this?”. Sending somebody ai output feels the same to me - the message you’re sending to the recipient is “here, you’re obviously too dumb to ask an LLM about this yourself”. Except some people don’t seem to realize that’s the message they’re sending
Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.
The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the ai output is almost always more useful than the output itself.
The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.
Firms are only going to pay out to model producers if they are getting returns in excess of the cost of financing projects over time. If a firm does not see this happen, it reduces its spend on tokens. Simple.
It's a whole lot more nuanced than some shitty game theory.
That may be the case, but every day LLMs feel less like the next big thing and more like 3D printing: here to stay, but not nearly as ubiquitous and earth-shattering as people made them out to be.
If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.
Agentic coding is a bit different, particularly if a great deal of effort and intelligence goes into it, but that's a quite different thing than just cranking out slop apps.
So sad to think that a generation or two ago, everyone wanted to emulate the HP Way. Now all of that is gone and unless you are a superstar, you're just a commodity to be managed, and extinguished when the time comes.
I remember that there is a passage in the book where the HP guys go and meet with other leaders of American corporations, and most of them felt that they did not have any kind of obligation back to society. I am a huge fan of the HP Way, but they were unusual, and not the norm.
I thought of this during his various scandals at the end of the 2010s. Everything was a PR reaction for him, rather than looking inward. The best PR is not being an asshole. I wonder if he's thought about it.
Or put another way, 850,000,000 hours. It took 5-15 billion human hours of work to go to the moon. They steal one moon program's worth of human time from humanity every 6 or so years. At the scale they operate, we need to judge them on that scale. Mark gets paid/rewarded at that scale; he needs to be judged on the same scale, not on 'the impact per individual'.
Meta has stolen multiple moon programs from humanity (and again, I am way under-measuring) for that one change, in order to increase their billions of dollars.
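As a rough sanity check of the "one moon program every 6 or so years" claim (a back-of-the-envelope sketch, assuming the 850,000,000 hours is an annual figure and using the 5-15 billion hour Apollo estimates mentioned above):

```python
# Hypothetical annual human-hours consumed (the 850M figure from the comment)
hours_per_year = 850_000_000
years = 6

consumed = hours_per_year * years  # total hours over the claimed period

# Apollo program estimates: 5 to 15 billion human-hours
apollo_low, apollo_high = 5_000_000_000, 15_000_000_000

print(consumed)                              # 5100000000
print(apollo_low <= consumed <= apollo_high) # True
```

So 6 years of usage at that rate lands at about 5.1 billion hours, at the low end of the Apollo range, which is consistent with the "way under-measuring" caveat.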
https://www.quora.com/How-many-man-hours-went-into-the-Apoll...
By employing psychologists who figure out how to make it addictive?
It's been said before how interesting it is that Zuckerberg, who made a social site, is pretty introverted. It's because he stole it - and he's always been stealing things. He did it to WhatsApp. He copied Snapchat multiple times. He thinks people are "dumb fucks" rather than "look, people shouldn't give info away, but now that I have it I'll do everything I can to keep it secure" (I DON'T like Google, but my understanding is they have far fewer data problems). That's the mark of a certain kind of person which I'll, I suppose, not name. It's insulting to the web, what he does.
No, it's just a common fallacy. If you don't like the guy, isn't "zuckerbergian" an example of helping him live rent free in people's heads?
There are a lot of people in the world who lack basic human empathy to such an extent that it is nearly impossible for them to just not be an asshole.
I don't know for sure if this applies to Mark Zuckerberg but based on all the second-hand anecdotal information I've heard about him "empathy" as he understands it is a product branding feature rather than a human emotion.
No amount of hate will fix it, and no amount of tracking will hide all but the most hidden secrets, so he better get over it. In his situation, hating leakers is like Garfield hating Mondays.
I say this as someone self-employed who burned almost $1000 on tokens last month. And had a lot of fun doing it.
I think all these companies front-loading staff reductions are actively sabotaging themselves in the worst possible way in this regard.
I’m in a dreadful situation right now. Everyone on the team got a Claude account, but I’m a contractor, so not me (the only dev on a team of 25 consultants). Someone on the team assigned me a task to review a Claude skill that opens up tickets for me. I’m not even using Claude, and the official policy is no AI use for development…
Otherwise it’s been a mixed bag. The pace has definitely picked up, and the things I actually enjoyed doing (UI) it does very well. The things that are actually hard (backend logic) it sucks at, and it has painted me into a corner too many times.
It's still insane to me that Meta thought this would be a good idea, or that employees would be comfortable with it even though they claim it's only used for anonymous AI training.
It's the other way around -- they're monitoring the computers to train AI.
Meta may know that their employees will put up with it, given how depressing the job market is right now, but unhappy, cynical, resentful employees do not produce good software and innovations.
there's a real financial cost to treating devs like cage-raised livestock.
I've never been happier, I can now build everything I've been wanting to build, really fast, with very few bugs.
I'm able to get 3x the work done. Greenfield stuff appears almost immediately.
My job is providing value to customers, not worshipping at the cathedral of software that will last forever. Nothing lasts forever.
Start treating software as ephemeral. It'll click.
This doesn't mean write low quality, unmaintainable software. It just means focus on getting stuff to your customer.
Writing in super typesafe languages with the highest level of strictness helps a lot. My AI stack is Rust and Typescript.
Basically, AI will produce slop if left unattended - but it's not really its fault. It's a process failing, like not supervising the interns. Using AI the Right Way(tm) is a mental workout, quite a bit slower, but extremely rewarding (in my experience).
The fallacy is believing that some kind of invisible hand will guide it to automatically produce an equal outcome and therefore any and all regulation is inherently bad. This seems to be a prevailing belief in the US in the current climate.
We have institutions that are designed to redistribute power and they are called governments. People have to believe in their role in doing that enough to actually empower them to implement it though.
Ultimately you end up either going for totalitarianism (whether to arrest development in the status quo, maintain a state of anarcho-primitivism, or impose technocratic tedium), or we resist that and break out by trying to forge forward into some unknown, uncharted territory.
In practice we have no choice but to aim for the unknown and hope. Can't lie and say I can see what the way through all this is though.
I am hoping for the best, but life has taught me hard not to bet against humanity's worst instincts.
edit: add whether
I have a friend in a position of some influence, and am currently trying to persuade them to stop being so comfortable trusting in humanity to come to the right decisions for exactly that reason.
In the logic of Idiocracy, the way that an AI would "allow" the future society portrayed in the movie is by letting dumb people systematically have more kids than smart people, and "not allowing" this would entail some kind of coercive eugenics policy aimed at getting smart people to have more kids than they would otherwise be inclined to.
None of the points of Idiocracy depend on whether intelligence is by nature or by nurture. The premise of the movie stays exactly the same if you replace those two minutes of backstory with a dysfunctional education system, the return of child labour, an increase in teen pregnancies, and anti-intellectualism in general.
Edit: sorry, either I misread your comment or it was changed. On the premise that, ignoring the intro, a nurture-based idiocracy could be possible, I would suggest it's the thoroughness and extent of the dumbing-down that wouldn't be possible if it was based on nurture.
Are you implying widespread infidelity here, or are you making the case that something besides "nature" may be determining intelligence?
There is still quite a lot of randomness in genes; the idea that intelligence would always be the average of the parents would require that a very large number of SNPs are involved. GWAS studies do say this, but that is more a side effect of using linear regression for the scores, as linear regression assumes independence, which I think is not a safe assumption. I think some intelligence genes can be recessive, so you can have two carrier parents where 1/4 of their children will be smarter than either of them.
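The carrier-parent scenario is just standard Mendelian inheritance; a quick enumeration (a toy single-locus model for illustration only, not a claim about the actual genetics of intelligence) recovers the 1/4 figure:

```python
from itertools import product

# Toy single-locus Mendelian model: 'a' is a hypothetical recessive allele,
# and only the 'aa' genotype expresses the trait.
# Both parents are carriers ('Aa'), so each passes 'A' or 'a' with equal chance.
parent1 = parent2 = ("A", "a")

offspring = list(product(parent1, parent2))  # 4 equally likely genotype combos
expressing = [g for g in offspring if g == ("a", "a")]

fraction = len(expressing) / len(offspring)
print(fraction)  # 0.25 -- one in four children expresses the recessive trait
```

This only shows where the 1/4 comes from in the single-gene case; a polygenic trait would behave very differently.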
I should also add: by 'pretty regularly' I mean from the point of view of the smart people. Given a sample of smart people, how often are they notably smarter than both parents?
Also, did you think about why dumber people might have more children? A large part of the reason is national policy choices.
In this respect, that movie is a great filter for virtue-signalling low-intelligence societal rejects.
Educated people can neglect their kids. And less educated people can still recognize when education is valued by society.
Now look around. Do you feel like we're living in a society that values education? Did the successful people that kids see in their formative years get there through education, and/or do they visibly value education beyond lip service?
Some of them, yes. But I'd argue that between influencers, teachers' pay, and the increasingly obvious nepotism and corruption by people in power, the situation is looking pretty dire. We don't truly value education as a society and are therefore teaching a new generation that education isn't to be valued. And that has nothing to do with genetics.
One such perspective is Tools for Conviviality, a 1973 book by Ivan Illich.
Your ultimatum is imaginatively anemic.
There has not been "an effort to stop people noticing this". The Unabomber Manifesto has been available everywhere and published across mediums from the start. The topic has been beaten to death by everyone from anarchists to eco-fascists to internet edgelords since it was released. It has also occupied a place of debate in academia, being studied and criticized in a lot of courses.
The Unabomber Manifesto wasn't even a particularly good critique of this topic. It just happened to become a popular one because he was a terrible person who murdered a lot of people and wanted to murder a lot more. The common criticism of the manifesto is that it was a bunch of cliches tied together with writing that appeared eloquent, and that he forced it into notoriety by being a literal terrorist.
It doesn't stop comments like this from implying that he was on to something or the next step of implying that there's some broader conspiracy to stop us all from noticing that he had a point. The latter conspiracy breaks down when you look at how much everyone knows about the manifesto and how it has been reprinted and discussed to death for years. He even wrote and published entire freaking books from prison.
It's like Silicon Valley overdosed on Adderall.
You can have the same tech, just in 5 human generations. I don't see why you have to have it now.
I recently published an article about the Luddites. If you look at their actual demands, they were not anti-tech. They were labor activists. Life got much, much worse for most people in the industrial revolution until the laws they advocated were finally implemented.
https://www.disruptingjapan.com/the-real-luddites-would-have...
There’s no way they would have been pro-AI. It would take a very skilled VC to warp the world enough to make that sound true.
Yet
LLMs can be "trivially" decentralized by expanding the concept of intellectual property to also cover algorithmic processing. It's just about how we set up our laws and rules.
It might be possible to organize all that with volunteers and some paid work, but how, in practice? Stallman seems kind of out of the game at this point, and there is no Linus Torvalds figure for this either, as of now.
Well yes there is. It's Karpathy.
Temporarily embarrassed millionaires; I cannot get around that issue toward collective action, toward myself contributing to an answer. I'm stuck. I can't unsee its truth =/. The individual will choose enrichment. We all will.
It is just that people's preferences don't matter, as billionaires have disproportionately more power.
If basically everyone transacts with Amazon, willingly, how is it possible that Bezos is the bad guy? I get that it's not black and white, but the point stands: he didn't overthrow the government; we put him there.
Reading the contents of proposed bills is a herculean task, to the extent that even our elected representatives dedicated to the task don't do so a significant fraction of the time. There's perhaps a good argument that's mostly because representatives (particularly in the House) spend too much time fundraising, but imagine the outcome when the burden is placed on people who have (sometimes several) completely independent, full-time jobs.
I would also argue that there's value in debating bills before passing them, but this opportunity for debate would all but disappear in a direct democracy, both because it's an additional burden on top of the time needed to read the bills and because it's a logistical nightmare to set up a proper debate venue that can properly accommodate everyone.
On top of that, you have to deal with the fact that the majority of US adults' literacy levels are below 6th grade, making them less likely to understand legislation they read or be able to engage in meaningful debate about it.
I think I'd want to fix our electoral system to make it more representative of the public (i.e. use something better than winner-take-all, first-past-the-post) before I'd even want to try tackling the monumental problems that we'd face in trying to enable a direct democracy for anything beyond the local city/municipality level.
> Maybe I'm not thinking through the difficulties well enough, but what we have with elected representatives campaigning on one set of ideals and then voting the complete opposite way is unacceptable. At least, that should be grounds for imprisonment.
I'm with you somewhat in spirit, but I think the devil's in the details.
A particular concern I'd have with doing this is that it's fairly common for representatives to attach riders to bills that have little to nothing to do with the original text. As such, there may be times when my representative may be forced to vote against a bill, the core of which is something they campaigned on, because one or more riders are completely unacceptable.
I do think there's probably value in providing a mechanism to recall representatives and senators, not the least of which is because we've seen in recent history several such politicians do full 180s and even change political parties upon election.
I don't think we want to open the pandora's box of incarcerating representatives based upon their voting history, though.
In the 1960s there was a young man who graduated from the University of Michigan. Did some brilliant work in mathematics - specifically, bounded harmonic functions. Then he went on to Berkeley, was an assistant professor, showed amazing potential. Then he moved to Montana, and he blew the competition away.
The printing press drew sooo much state violence everywhere, too.
Oral contraceptives are a fight the USA is losing to the extremist Christian Republicans. Right now the line is right at Misoprostol. And shithole states like Texas even criminalize day-after pills and 'suspect' miscarriages.
And horrible tech like "weatherproof camera + AI + battery + solar + cell" (FLOCK) are easy to implement and already have been used in tracking women with miscarriages in Texas and across the country.
It seems like for every new tech, there's one really cool, good thing for the public, a few neutral things, and 1-3 absolutely terrible things.
And those terrible things make money. Lots of money.
Other technologies like surveillance (and, perhaps, AI) are more clearly centralizing and enabling of power.
The difference matters a lot if you're having mixed feelings about working in technology.
A popular claim against open source is that it alone is insufficient to prevent abuse (here: accumulation of power).
The recent decade has shown many cases of that, with corporations adopting open source projects without giving power to their users - e.g. Android or cloud services.
Perhaps if we understood open source less as a process and more as a movement (that is: if the libre software movement were more popular), things would be different.
I'll go further and say that it accelerated getting us into this mess we're in today.
The OSI is owned and controlled by the tech titan hyperscalers who benefit from free labor.
Useful "open source software" always gets encrusted by the big titans that then build means to control the tech, and then the means to control us. And just to rub salt in the wounds, they rarely compensate the original authors.
Android is Linux, right? Then why can't we install our own software? Why does it spy on us? Open source is so great, right?
95% of humans will never own a phone that gives them freedom. And we enabled that.
Everything we as tech people own is also getting locked down. We're going to have to start providing our state ID to access the internet soon.
But OMG, Year of Linux on the Desktop 2012!!12
Pretty soon you won't even be able to use your Linux. Everything will be attested.
Open source hasn't stopped power from accruing to the titans. It's accelerated their domination.
People rush to defend Google and Amazon when you criticize how they profit off of Redis, Elasticsearch, etc. The teams that build the tech aren't becoming wealthy, and most of the bytes flowing through those systems are doing so behind closed source AWS/GCP/Azure offerings.
These companies then use their insane reach to tax everything that moves. Google owns 92% (yes, 92%!) of URL bars and they tax every search, especially searches for other companies' trademarks. They do even better - they turn it into a bidding war. Almost nothing that exists in the world today can make it to you without being taxed by them.
If they don't like your content, you just disappear.
Mobile platforms have never been ours. We can't install what we want. We're soon going to be locked at the firmware level to just Google and Apple and forced to use their adblocking-free, tracker-enabled "browsers" (1984 telescreens). Any competition can't get started due to the massive scale required, meanwhile Apple and Google tax everything at 30% and start correlating everything you do, everyone you talk to, everywhere you go in their panopticon.
"Open source" was wool pulled over our eyes so that we happily built, supported, and enabled this.
Open source should be replaced with "our proletariat users and small businesses can have this for free, but businesses listed on any stock exchange cannot commercialize this ever unless they pay out the nose for it".
"Source available" / shareware is peak. Give your users the thing, and the means to maintain it after you're gone, but tell Google et al. to go away.
"Fuck you, pay me" as the artists frequently say.
But also, let's stop giving the Death Star free labor.
(edit: I'd love a feedback sampling of the heavy downvotes. OSI purists? Goog employees? Surely MIT/BSD fans and not anyone who follows Stallman.)
Few of us remember the "fight" and discussions that happened when Firefox first pondered the idea of allowing encrypted video on the platform. Same with Linux. This was when The powers that be forced Netflix and other video distributors to introduce that opaque tech in the web. The same thing happened with DeCSS and Linux DVD playing; but that generation was a bit more... revel.
But we as a society are indeed slowly and steadily giving away our rights of many, for the rights of few cartels.
It's been a sad journey to see for someone born in the early 80s.
Increasing geopolitical multi-polarity may force big tech to give up ground. The EU and ASEAN in particular should be hitting Google et al. with the regulatory hammer.
When we get clearer heads back in power (Lina Khan was great, but moved much too slow), they ought to carve the tech cos into Baby Bells. Horizontally so they have to compete with themselves.
> It's been a sad journey to see for someone born in the early 80s.
The dream of the open web, privacy, freedom of speech, and freedom of computing is being killed by the oligarchy. And they convinced us the progressive thing to do was to give them our labor - they hung us with it.
I lived in places without any of those and I wouldn't want to do it again.
As a Gen Xer, I grew up with a strong belief in the "goodness" of technology, of its power to make people's lives better and to ameliorate suffering. So after 25 years of seeing so much invested into technology that actively makes people's lives worse (e.g. ad-tech, social media algorithms), and even conservatively just results in the huge accumulation of wealth and power to the very few, I can't help but feel extremely disillusioned.
Yes, I like showers and soap and running water, but I rarely see the type of economic investment into tech these days that will have as broad of a beneficial impact as running water did.
Here's the problem: you can't.
First, people have disagreements, often very fundamental ones, over what "benefits us all". There's no way to resolve many such disagreements short of brute force.
Second, "enforce"--note the last five letters of that word--means some people are given the power to do things to other people that, if anyone else did them, would be crimes. Throw you in jail, fine you, restrict the things you can do. Indeed, that's how David Friedman, whose "The Machinery of Freedom" is worth reading, defines a government. And the problem is that government still has to be done by humans, and humans can't be trusted with the power to do such things.
Ultimately the only defense we have is to not give other people such power. Not governments, not tech giants, nobody. But that requires a degree of foresight that most people don't have, or don't want to take the time to exercise, particularly not if something juicy is in front of them. How many people back when Facebook first started would have been willing to simply not use it--because they foresaw that in a couple of decades, Facebook would become a huge monster that nobody knows how to rein in? If my own personal circle is any guide, the answer is "not enough to matter"--of all the people I know, I am the only one who does not use Facebook and never has. And even I didn't refuse to use it back when it first started because I saw what things would be like today--I just had an instinctive reaction against it and listened to that reaction, and then watched the trainwreck slowly develop over the years since.
So we're stuck. Even if we end up deciding that, for example, the government will break up the tech giants, slap huge fines on Zuckerberg, Bezos, etc., maybe confiscate a bunch of their property, maybe even make them do a bunch of community service, possibly even some of them serve some jail time--it will still be just other humans doing things to them that no humans can be trusted to do. It won't fix the root problem. It will just kick the can down the road a little longer.
It is not possible to "enforce" a "value system". True.
But we can have a value system we share that benefits us all. We can work together to improve our welfare as a whole.
> Ultimately the only defense we have is to not give other people such power.
That is untrue. We must have webs of trust, and within those webs there are power hierarchies. The trick is that those hierarchies must not be arbitrary nor permanent.
This is a problem that has been wrestled with, and solved, several times in human history. Any human structure is vulnerable to outside attack, and most (all?) vulnerable to internal decay, but they can be established and are worth establishing.
Examples I can think of are the Republican militias as described by George Orwell in Homage to Catalonia, and activist groups I have been involved with here in Aotearoa.
Nothing lasts forever - that does not mean we cannot work together for good things.
--
In one group I was involved in, we used to have monthly meetings, by phone, of the organising committee. We had thirty members (all activists running hot with their own opinions) and made decisions by consensus. I was on that committee for five years and we went over our allotted two hours twice: once by ninety minutes (that was a day!) and the other time by five minutes. Nobody ever felt unheard - that mattered to us.
It is possible, and people have been doing it forever. The good things we make are vulnerable, but we should still aspire to them and achieve them.
That's not what I said. What I said was that you can't enforce a value system "that benefits us all"--because "us all" will never agree on what value system that should be, at least not once you get beyond small groups of people. But of course you can just declare by fiat that your preferred value system "benefits us all", and ignore objections, and if you have enough brute force at your disposal, you can enforce it. You just won't be enforcing a value system that actually "benefits us all".
> We must have webs of trust
Yes.
> within those webs there are power hierarchies.
Only if you let that happen. But in a sane web of trust, you don't--because in a sane web of trust, everybody understands that power--in the sense of someone you trust doing something that harms you, simply because you're unable to prevent them--is a betrayal of trust.
> This is a problem that has been wrestled with, and solved, several times in human history.
I disagree. I certainly don't see the Republican Militias in the Spanish Civil War as solving this problem.
> that does not mean we cannot work together for good things
Of course we can. But our ability to do that without violating any trusts is limited, often very severely, by how many people we can get to agree with us, without any force or coercion being applied, on what "good things" to work together for. Unfortunately utopian dreamers and "revolutionaries" throughout human history have failed to recognize this basic fact, and their attempts to make a better world have always resulted in mass suffering and death.
If "us all" is a small enough group, sure. It doesn't scale, though.
I think that organized religion wants to say a few words here.
What needs to change is the system in which that technology exists inside, because otherwise removing technology will still keep us on the same trajectory to the same destination, only much slower and possibly with much more pain.
What we're seeing right now with layoffs and everything else is simply an acceleration of our current trajectory. We were always going to get here, AI just got us here a few decades ahead of schedule.
For once, however, we have a technology that could let us change this trajectory. I've said this before, but the capital class held so much power because it took a lot of people, and hence a lot of capital, to take on large endeavors that created new wealth. But things were rigged such that those who provided the capital also captured most of that new wealth.
Now, just as AI lets companies (i.e. capital) do the same things with fewer people, it also lets people do the work of entire companies by themselves... i.e. without capital. That is a big enough shift in power dynamics to alter the trajectory in previously inconceivable ways.
True during the mainframe era. Not true during the PC age. Perhaps true again during the frontier model / data center age. Maybe not true again once hostable open-weights models become efficient and good enough.
I have to very regularly remind myself many people genuinely believe this shit and are not straight up evil/maniacs, it's getting harder
We could have fun defining what's good usage but we're so far from it, it would just make me sad.
So you're not really complaining about technology making things worse. You're complaining about wealth inequality, which is a direct result of the mode of production and the organization of the economy.
Internet access should, at this point, be basically free. The best Internet in the country is municipal broadband. It's better and it's cheaper. It's owned by the town, city or county that it's in, which means it's owned by the citizens of that municipality.
Instead what we have in most of the country are national ISPs like Verizon, Comcast, Spectrum and AT&T and the prices are sky high. They are only sky high so somebody far away can continue to extract profit from something that's already built and not that expensive to build.
You will get lied to by people saying national ISPs have economies of scale. Well, if that were true, why is municipal broadband so much better and cheaper, relatively speaking? Why would there be state laws that make municipal broadband illegal? Why would national ISPs lobby for such laws?
How would your country function, if all medical staff, construction, rail, sewage, police and firefighters suddenly worked half as long or not at all, starting tomorrow?
Because my home country tried this whole "if we seize the means of production from the wealthy elites, we won't have to work as hard anymore" ~80 years ago, and guess what happened to the workers? Were they working less hours for more money, OR, were they working just as much while also starving and being plagued by shortages?
The problem with your logic is that it only applies to bullshit Western corporate office jobs that aren't actually doing much useful work in their 40 hours anyway. All those office jobs that don't need 40 hours of work were subsidized by endless money printing. That's why you see so many layoffs now: during the ZIRP era companies were hiring people just to raise headcount and boost stock valuations for gullible investors they could rug-pull, but the bubble has popped and the jig is up.
And it only works in a world where you own the world reserve currency and where globalisation, free trade and international competition do not exist, because the countries that work harder than you will outcompete and subjugate you in the long run, so you end up making their sneakers and phones for 60h/week while they kick back and live off printing money.
Medical professionals have better diagnostics, health records, MRIs and other imaging equipment and so on. The medical profession is pretty much a perfect example of my point, actually. Do we train more doctors (per-capita) or just expect existing doctors to work more hours? There are a whole bunch of vested interests in constraining doctor supply.
Likewise, resident physicians are incredibly profitable for hospitals because they create a lot of value and cost nothing. You see this where various parties are trying to increase emergency medicine residencies from 3 to 4 years.
Hospitals hate fully-qualified attending physicians because they can't artificially suppress their salaries. It's why we've gotten things like Nurse Practitioners, Physician Assistants, CRNAs, etc. It's also why you see cases like the recent one in Oregon where private equity is trying to destroy physician organizing; I'm of course talking about Peace Health and ApolloMD.
We also make medical people spend a bunch of time dealing with insurance BS, for literally no reason.
This isn't just a BS "corporate job" thing.
Now it's been 20 years. The technology is mature and many of the patents have expired, but GMO has done absolutely nothing to solve world hunger.
We do not need GMO for food, I agree.
But where is the genetically engineered heart muscle that sits in the engine bay of my car, running on nutrient solution, excreting CO2 and urine while driving my car with a hydraulic motor?
My theory is that AI and robotics have the potential to break capitalism as we know it. We will probably reach a point where machines will be better than humans at pretty much anything and there will be almost no need for workers who just do a job (like most of us). But if nobody has money to buy things then there is no point in producing anything. Not sure where this will be going but I am pretty sure the capitalists will not voluntarily share the gains.
In theory all this progress should be great and exciting for humanity but without changing the system there may be dark times coming for most of us. I always have to think of Marshall Brain's "Manna" story. It may be a spot on prediction of things to come.
Technology is not a good-only or evil-only thing. There are use cases that are beneficial and use cases that are not beneficial. The technology by itself isn't what makes things worse. Even many thousands of years ago, humans used weapons to bash in other humans. Remember Ötzi: https://en.wikipedia.org/wiki/%C3%96tzi#Body he was killed by an arrow, most likely shot by someone else (around 3230 BC). Nuclear energy is used as a weapon or as a source for the generation of energy (or rather, the transformation of energy). And so on and so forth.
IMO the biggest question has less to do with technology than with the distribution of wealth and opportunity. I think oligarchs need to be made impossible; right now they are causing a ton of problems. Technology also creates problems, I agree, but I would not subscribe to "technology makes everything worse". That does not seem to be a realistic assessment.
You shouldn't blame technology. You should blame the maniacs that have latched on to it as a way of extending their power. You should blame the government for their failures of regulation. You should blame the media for failing to cover this obvious problem.
The people who want to subjugate you are the problem.
no no, we're not doing that.
I get that Hacker News would rather avoid inconvenient issues or simply satirize them, but I couldn't care less about that.
Tons of us called for common sense guard rails and a little bit of actual intention as we rolled out LLM’s, but we were all shouted down as “luddites” who were “obstructing progress.”
We all knew this was coming. It’s been incredibly frustrating knowing how preventable so much of it has been and will continue to be.
Edit: these responses are absurd. Banning GPU’s…? What are you on about? Who said anything about stopping or banning LLM’s? Did none of you see “guardrails”? “A little bit of actual intention”? Where are you getting these extreme interpretations?
I’m talking basic regulatory framework stuff. Regulations around disclosure, usage, access, etc. you know, all the stuff we neglected and are now paying for with social media in droves? We have done this song and dance so many times. No one is going to take away your precious robot helper, we’re just saying “maybe we should think about this for more than two seconds and not be completely blinded by dollar signs.” I mean people have literally died in my state because Zuckerberg wants to save a few bucks building his data center.
It feels like AI evangelists come out the woodwork seething if anybody even implies you shouldn’t be allowed to do literally whatever you want at all times.
Clearly, powers that be learned all too well from internet rollout.
I stopped reading. No point in engaging this if that’s how you’re kicking things off.
You're not helping.
> Meta also introduced internal dashboards to track employees’ consumption of “tokens,” a unit of A.I. use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many A.I. agents that others had to introduce agents to find agents, and agents to rate agents, two people said.
Maybe the first to be laid off should be the ones that thought it made sense to track token consumption. Goodhart's Law doesn't even apply in this scenario because that's a dumb metric whether or not you're using it to evaluate employees.
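For context, the "roughly four characters per token" figure quoted from the article is only a rule of thumb; real tokenizers vary by model and language. A minimal sketch of how such an estimate works (the function name and the 4-chars ratio are illustrative assumptions, not any vendor's API):

```python
# Rough heuristic from the quoted article: one token ~ four characters
# of English text. Real tokenizers (BPE etc.) differ per model, so this
# is only a ballpark estimate, not an exact count.
def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    """Estimate token count from character length."""
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("Hello, world!"))  # 13 chars -> ~3 tokens
```

Which is exactly why it makes a poor productivity metric: it measures volume of text pushed through a model, not the value of anything produced.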
The dashboard got announced publicly and just about everyone's usage went up by 100%-200% almost immediately and hasn't come back down, but nothing I'm tracking shows any increase in output since then. We absolutely saw productivity gains a few months ago, but it feels like now people are just burning tokens for the sake of it.
On top of that, as a reaction to the rising costs, we've now gone from unlimited token use to every engineer now having a monthly token budget of $600. I get why that was done, but we're a publicly traded US tech company worth 10s of billions of dollars. We're not hurting for money and the knock on effects are just crazy. For example, I had an engineer in sprint planning say about a large migration type ticket, "Can we hold that ticket until the end of the month? I don't want to burn through all my tokens this early in the month." I just cannot imagine that that's the culture that our executive team was trying to cultivate when they first purchased these tools.
I'm not anti-AI and actually really enjoy using AI for development, but over and over I've watched business leaders shoot themselves in the foot trying to force more AI use on their employees in pursuit of ever increasing productivity. I just keep thinking that there's no way that any productivity gains we've seen from the forced, tracked AI usage are enough to offset the productivity lost from anxiety and churn caused by the unrealistic productivity expectations, vanity metrics, and mass layoffs that have come along with increased AI adoption.
If someone gave me unfettered access to inference of modern LLMs, there would be no concept of measurement other than the total system wide capacity of whatever the company had available.
Only if you assume in good faith that the point is to evaluate employees for productivity on some stated goal for the company or role. If you try to view the metric from other possible positions, the one I think fits best is the promotion of token consumption by all means. This is useful for signaling to the broader market that AI is profitable and merits more investment, and may be part of a deal between them and whoever they're buying tokens from. It makes more sense to me that Meta would be more interested in leveraging its control over people to manipulate the state of the world, market, and general sentiment than having them work on stable, well-established and market-dominant software services that really only need to be kept chugging along. Isn't mass-manipulation their whole business? Why wouldn't they use their employees and internal structure to contribute?
It seems like a common conclusion from a management that wants to push for AI adoption. I doubt it’s super effective, but we’ll see how it turns out.
Edit: and if you question that, you are a troublemaker to add to the list
Ooof. Famous last words.
Ford-style assembly lines made the work of factory workers more miserable. Partially automated cashier stations did the same thing.
I don't think there is any point in trying to resist automation, as the efficiency benefits are too important.
In those cases there was a transition period; nowadays only a small fraction of the human population works to produce food, and their job is more about planning, finance and orchestrating machine work, but many specialised jobs were lost or made miserable in the process.
IMHO any job that can be done by a machine should not be done by a human, the tricky part is going there with as little undesirable effects as possible.
I would have been happy writing Z80 and 68000 assembly code for an entire career.
If we look at automation beyond assemblers (e.g. compilers), even if you or I might be content without it, I think it's safe to say that the vast majority of programmers are glad they don't have to write assembly.
The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable.
> Partially automated cashier did the same thing.
I've not once heard anyone in the service industry make this complaint.
> as the efficiency benefits are too important.
You can squeeze every last drop of productivity from your employees. In the short term this may even show up as profit. In the long term it only works if you hold a monopoly position.
The whole innovation was about making the jobs as simple and repetitive as possible so humans would basically work like robots.
Once you're there, having removed any agency and freedom, pushing the hours to the limits of human exhaustion is just one logical step.
Yes it was jarring for me to experience that.
So they make fewer mistakes. Not that they become zombies that you are then able to abuse.
> pushing the hours to the limits of human exhaustion is just one logical step.
There's nothing logical about ignoring consequences. Which is probably why the "union strike" even exists. It's fighting illogic with illogic.
Also, avoid using Meta Pay aka Facebook Payments, where a user can send a payment to another user via the Messenger app. Someone sent me money a few weeks ago, and two weeks later they still have the payment marked as "Completed" on the sending side and "Cancelled" on the receiving side. I told the sender to just do a chargeback with their bank because Facebook basically stole the money. Don't use Meta Pay for sending payments to anyone. When you try to open a "case" about it, you call a call center in Indonesia where the people have no access to anything about the transaction; they just send it up the chain, only for you to get an automated response telling you to do something the web site doesn't even offer as an option. I don't think there are any humans in the loop, besides the Indonesian call center that has no access to any of what you're calling about.
My current theory of tech layoffs is that over the last decade or so, churn-inducing practices like stack ranking have gone out of vogue. One can speculate as to why. Perhaps generational change made middle management unwilling to do the dirty work? Nevertheless it happened.
However, companies still want to, and some would argue need to, eliminate low performers, so now they periodically do a companywide reduction in force and frame it with whatever justification is handy, macroeconomic conditions, AI, whatever.
This hypothesis would explain phenomena like companies hiring aggressively during or after a layoff, and why the layoffs keep happening year after year.
It seems to be a thing that comes and goes as the job market is weaker or stronger
Just to do some back-of-the-envelope math on this: getting GE (~15%?) ratings at IC5 (the plurality of employees) in a SWE/PE role in a west coast metro (95-100% pay scale) puts you somewhere around $550,000 with a flat stock price. (But most employees will get lower ratings.) I haven't run the numbers with the smaller 2026 refreshers and the revised "Checkpoint" rating scale.
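For anyone who wants to redo the envelope math, a sketch of how such a total-comp figure decomposes. Every number below is a hypothetical placeholder (assumed base, bonus target, rating multiplier, and RSU vest), chosen only so the pieces sum to roughly the $550k ballpark above; they are not actual pay-band data:

```python
# Hypothetical decomposition of a big-tech total-comp estimate.
# All figures are illustrative assumptions, not real pay bands.
base = 260_000                 # assumed IC5 base near the top of a band
bonus = base * 0.20 * 1.25     # assumed 20% target with a high-rating multiplier
rsu_vest = 230_000             # assumed annual RSU vest at a flat stock price

total = base + bonus + rsu_vest
print(round(total))  # 555000 -- in the ~$550k ballpark cited above
```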
https://www.nbcdfw.com/news/nbc-5-responds/meta-users-contin...
I think the last good move Meta made was buying IG. Maybe not good for IG users but absolutely a great move for Meta. Not quite as good as Google buying Youtube but it's up there. Best $1 billion any company has probably spent.
But Facebook is a graveyard of conspiracy theory Debras, anti-vaxxers and your racist uncle just posting links all day. Sharing links was a contentious decision; it clearly improves short-term engagement, but (IMHO) it destroys the platform's initial purpose of keeping in touch with friends and family.
Let's not forget too that Meta spent probably billions on building its own crypto (i.e. Libra). But that was just a taste of what was to come. The Metaverse was one of the largest boondoggles in corporate history: $70B+ with no product-market fit. It was an entirely ego-driven "build it and they will come" moment from somebody who doesn't know what to do with the empire he's built and who is surrounded by Yes Men.
Facebook and AI feels a lot like Microsoft and mobile. Microsoft just completely missed the boat based on poor leadership and conflicting priorities (eg wanting one Windows code base for all devices). Facebook has a huge corpus of human communication and engagement, which should be a treasure trove for building AI but I don't think anybody really believes Meta knows what they're doing or will get anywhere doing it.
I've seen this in big tech companies: big initiatives get well funded. Seasoned veterans swoop in and cash the fattest checks (in bonus stock) until the entire thing falls apart. Think Google Wave.
What I really think is going to kill these companies is the corporate layoffs or, rather, what they represent. They represent big tech companies turning into Corporate America where politics defines your careers, the company seems incapable of doing anything due to competing fiefdoms (a la Intel) and middle management just reorganizes every 6-12 months so nobody in management ever faces the consequences of their actions.
Monitoring your employees' keystrokes with AI isn't going to help either. But management (or the consultants they end up paying) are never going to come to the conclusion that the problem is management.
Whereas if you're half-competent and at a startup, AI is an incredible opportunity to try to leap ahead while the prices are subsidized (by the big tech behemoths fighting with each other).
The reason is a complete inversion of Ownership and Agency.
For a decade of ZIRP, big tech convinced its employees that they were "changing the world" and that what we did mattered. Sure, the exorbitant salaries and constantly rising stock value didn't hurt, but honestly, other than the FIRE cultists, for most of us the difference between 200k/year and 800k/year didn't feel like much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg). No, most people were missionaries, not mercenaries.
2021 was the first crack. The comps went crazy, half the industry turned over, and the ones who didn't felt a bitter sting where it became blatantly clear that all the new arrivals were just in it for the $$$, and the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.
Then came the yearly layoffs, chipping away further and reminding every employee that they're at the mercy of a spreadsheet and the whims of people three levels above them in the org chart, regardless of the economic reality of their product or their personal productivity.
And now we're here, and it's clear that all of the above is still relevant. The old-timers who hung around see that their personal output doesn't matter and their product's P&L doesn't matter. All that matters is 1) the company's AI strategy (and if they're not part of it, they're secondary), and 2) tokenmaxing.
How can anyone find joy in this environment unless they're purely in it for the comp?
I couldn't. I left my big tech job in December after 15 years, and have not been this happy at work since pre-COVID.
I can’t believe I read this sentence, lol.
800k is the ability to buy a house and support a family on a single income. Do you see how many people lament the days when this was possible? So many memes about the lifestyle Homer Simpson could provide that many modern families can't? 800k makes it possible.
It’s a huge lifestyle upgrade, especially if your partner wants to do something artistic, academic, or otherwise less profitable.
But yeah, "no difference between 200 and 800", while spelling out some MASSIVE differences is quite a statement.
https://en.wikipedia.org/wiki/Backward_bending_supply_curve_...
>2021 was the first crack. The comps went crazy, half the industry turned over, and the ones who didn't felt a bitter sting where it became blatantly clear that all the new arrivals were just in it for the $$$, and the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.
Also SVB collapsed in late 2022, notice that AI hype started right after.
The working class is replaceable. So is the "executive" class. Only the billionaire/Epstein class will prevail in this hyper-capitalistic society.