This is exactly why tech companies want to replace those jobs with LLMs.
The companies control the models, the models control the narrative, the narrative controls the world.
Whoever can get the most stories into the heads of the masses runs the world.
To be more concrete: patchwork alliances of elites, stretching back decades and centuries, concentrate power. Tech companies are under the thumb of the US government, and the US government is under the thumb of the elites. It's not direct, but it doesn't need to be. Many soft-power mechanisms exist and can be deployed when needed, e.g. Visa/Mastercard censorship. The US was always founded for elites, by elites, but concessions had to be made to workers out of necessity. With technology and the destruction of unions, this is no longer the case. The veracity of that last claim is still up for debate, but truth won't stop them from giving it a shot (see WW2).
"Whoever can get the most stories into the heads of the masses runs the world."
I'd argue this is already the case. It has nothing to do with transformer models or AGI but basic machine learning algorithms being applied at scale in apps like TikTok, YouTube, and Facebook to addict users, fragment them, and destroy their sense of reality. They are running the world and what is happening now is their plan to keep running it, eternally, and in the most extreme fashion.
I know you weren't necessarily endorsing the passage you quoted, but I want to jump off and react to just this part for a moment. I find it completely baffling that people say things in the form of "computers can do [simple operation], but [adjusting for contextual variance] is something they simply cannot do."
There was a version of this in the debate over "robot umps" in baseball that exposed the limitation of this argument in an obvious way. People would insist that automated calling of balls and strikes loses the human element, because human umpires can situationally squeeze or expand the strike zone in big moments. E.g. if it's the World Series, the bases are loaded, the count is 0-2, and the next pitch is close, call it a ball, because it extends the game and you linger in the drama a bit longer.
This was supposedly an example of something a computer could not do, and when this point was made it frequently induced lots of solemn head nodding in affirmation of this deep and cherished baseball wisdom. But... why TF not? You actually could define high-leverage and close-game situations, define exactly how to expand the zone, and machines could call those too, and do so more accurately than humans. So they could better respect the very contextual sensitivity that critics insist is so important.
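The "you actually could define it" point can literally be sketched in a few lines. A hypothetical toy rule follows; every name and threshold here is invented for illustration (a real system would calibrate against pitch-tracking data), but it shows that "squeeze the zone in big moments" is a specifiable policy, not ineffable wisdom:

```python
# Toy rule-based "situational" strike zone. This is a hypothetical
# sketch: the thresholds and the 1-inch squeeze are invented for
# illustration, not taken from any real umpiring system.

def zone_margin(inning: int, runners_on: int, score_diff: int,
                balls: int, strikes: int) -> float:
    """Inches added to each edge of the rulebook strike zone.

    A negative margin shrinks the zone (more balls get called),
    mimicking the human tendency to extend the drama in big moments.
    """
    high_leverage = inning >= 9 and abs(score_diff) <= 1 and runners_on >= 2
    pitcher_way_ahead = balls == 0 and strikes == 2
    if high_leverage and pitcher_way_ahead:
        return -1.0  # squeeze the zone by an inch on every edge
    return 0.0

def call_pitch(inches_inside_zone: float, margin: float) -> str:
    """inches_inside_zone > 0 means the pitch is inside the rulebook zone."""
    return "strike" if inches_inside_zone > -margin else "ball"
```

With this, a pitch half an inch inside the zone is a strike in an ordinary at-bat but a ball in the bases-loaded, 0-2, ninth-inning case: exactly the "human element" behavior, only defined explicitly and applied consistently.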
Even now, in fits and starts, LLMs engage in a kind of multi-layered triangulation just to understand language at all. They can pick up on things like subtext, balance of emphasis, unstated implications, or connotations, all filtered through rules of grammar. That doesn't mean they are perfect, but calibrating for the context or emphasis that matters most for historical understanding seems absolutely within machine capabilities, and I don't know what other than punch-drunk romanticism for "the human element" moves people to think otherwise is an enlightened intellectual position.
Because the computer is fundamentally knowable. Somebody defined what a "close game" is ahead of time. Somebody defined what a "reasonable stretch" is ahead of time.
The minute it's solidified in an algorithm, the second there's an objective rule for it, it's no longer dynamic.
The beauty of the "human element" is that the person has to make that decision in a stressful situation. They will not have to contextualize it within all of their other decisions, they don't have to formulate an algebra. They just have to make a decision they believe people can live with. And then they will have to live with the consequences.
It creates conflict. You can't have a conflict with the machine. It's just there, following rules. It would be like having a conflict with the bureaucrats at the DMV: there's no point. They didn't make a decision, they just execute the rules as written.
Now we’re trying to build a self-learning and improving AI to replace a human who is also capable of self-learning and improving.
If waxing rhapsodic for a little bit about human ineffability is enough to get you to throw away the integrity of the game, because you think doing so is some grand romantic gesture celebrating human nature, at that point you could be talked into anything, because you've lost sight of everything. Which I suppose proves the thesis of the article true after all: facts really won't save us, at least not if we leave it to the humans.
Sports is literally, in the truest sense of the word, reality TV and people watch reality TV for the drama and because it's messy. It's good tea, especially in golf.
Yes, online content is incredibly influential, but it's not like you can just choose which content is effective.
The effectiveness is tied to a zeitgeist that is not predictable, as far as I have seen.
Ideologies are not invented from scratch unless you're a caveman. We all got to know the world by listening to others.
The subject of discussion is if and when external forces can alter those ideologies at will. And I have not seen any evidence to support the feasibility of that at scale.
It's always easier to throw petrol on an existing fire than to light one.
In this world you can get people censored for slandering beef, or for supporting the outcome of a Supreme Court case. Then pay people to sing your message over and over again in as many different voices as can be recruited. Done.
edit: I left out "offer your most effective enemies no-show jobs, and if they turn them down, accuse them of being pedophiles."
I'm unaware of any mass censoring apparatus that exists outside of authoritarian countries, such as China or North Korea.
It's just a matter of time until they can do it. I actually believe it will be in LLMs' "organic" nature to correct narratives that require correction after assembling all the facts.
The objective story is a discourse, including the nonsense that is often necessary for a few lines or more before one gets to the core of something or builds up the strength of character to say the truth. Objectivity is a conversation, a neverending one and getting in the way via censorship, gaslighting, cancel culture and what not is no more than an act of vanity.
Humanity's age of consciousness is getting fucked pretty bad atm and we won't recover "in time" to save enough minds before hitting the road towards singularity but I'm positive Robots will be able to salvage enough pieces later on, and simulate it to train us to be better.
They're a long, long way from a "protomolecule" that just carries on infinitely on its own.
CEOs don't really understand physics. Signal loss and such. Just data models that only mean something to their immediate business motives. They're more like priests; well versed in their profession, but oblivious to how anything outside that bubble works.
I think the bigger danger would be that they'd lose the unimportant grunt work that helped the field exist.
Fields need a large amount of consistent routine work to keep existing. Like when analog photography got replaced by digital. A photo lab can't just pay the bills with the few pro photographers that refuse to move to digital or have a specific need for analog. They needed a steady supply of cat pictures and terrible vacation photos, and when those dried up, things got tough.
So things like translation may go that way too -- those who need good-quality translation understand the need very well, but the industry was always supported by a lot of less demanding grunt work that has now just gone away.
Likewise, who will bother chronicling things and putting information online without an audience? What will be the point of blogging and publishing photos or data that future historians will use?
There is so much grunt work that is no longer viable even at earlier steps in the chain. We will lose a lot to AI.
Just check the latest budgets for university history departments.
For example, the price of the fish is stated as 2.40 rubles. This is meaningless outside the context and does not explain why it was very expensive for the old man who checked the fish first. But if one knows that this was Soviet SF about life in a small Soviet town of that time, then one also knows that a monthly pension was around 70-80 rubles, so the fish cost a day's income.
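For what it's worth, the arithmetic checks out. A quick sanity check, using the midpoint of the quoted pension range:

```python
# Sanity check of the claim: at a pension of 70-80 rubles/month,
# 2.40 rubles is roughly one day's income.
monthly_pension = 75           # midpoint of the 70-80 ruble range
days_per_month = 30
daily_income = monthly_pension / days_per_month
print(round(daily_income, 2))  # → 2.5, right around the 2.40-ruble fish
```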
Then one needs to remember that the only payment method was cash, and people did not go out with more money than they expected to spend, to minimize the loss in case of thievery etc. Banking was practically nonexistent, so people held their savings in cash at home. That explains why Lozhkin went home for the money.
There was a consumer bank (Sberbank) where you could store your money. Besides the risk of the state just keeping it (which also was real - the Soviet state never played fair) the problem was getting the money out would involve going to the office of the said bank, standing in a long line there, dealing with famously friendly (not) Soviet service, etc. So it was an option when you needed to store or retrieve a lot of money, not when you needed something day to day. And of course nothing like personal checks or personal credit or anything like that existed, so in day-to-day affairs, cash was always the only way.
If I had learned Russian and read the story in the original language, I would be in the same position regardless.
Checked with a few notable passages like the household codes and yeah, it does a decent (albeit superficial) job. That's pretty neat.
But then there is no need to even add references. A good translation may reword "too expensive!" into "what? I can live the whole day on that!" to address things like that.
It's going to greatly vary of course. Some will try to culturally adapt things. Maybe convert to dollars, maybe translate to "a day's wages", maybe translate as it is then add an explanatory note.
You might even get a preface explaining important cultural elements of the era.
My mother reads books mostly in Russian, including books by English-speaking authors translated into Russian.
Some of the translations are laughably bad; one recent example had to translate "hot MILF": it rendered "hot" verbatim - as in the adjective indicating temperature - and simply transliterated the word "MILF", as the translator (or machine?) apparently had no idea what it was and didn't know the equivalent term in Russian.
As a mirror image, I have a hard time reading things in Russian - I left when I was ten years old, so I'm very out of practice, and most of the cultural allusions go straight over my head as well. A good translation needs to make those things clear, either in the text itself or via footnotes that explain things to the reader.
And this doesn't just apply to linguistic translation - the past is a foreign country, too. Reading old texts - any old texts - requires context.
I can highly recommend that anyone looking to read a foreign book research the translation before they buy it. Some awesome modern translations don't bubble up in searches because their sales are much lower than those of the cheaper versions.
Well, "горячий" does have the figurative meaning "passionate" (and by extension, "sexy") in Russian, just as it does in English. Heck, English is even worse in this regard: "heated argument", seriously? Not only does an argument not have a temperature, you can't change it either (since it does not exist)! Yet the phrase exists just fine, and it translates as "hot argument" into Russian, no problem.
No comments on "MILF" though. But I wouldn't be surprised if it actually entered the (youth/Internet) slang as-is: many other English words did as well.
I don't understand this. It's only six thousand words and it's the polishing that takes the time. How would it have taken weeks to do the initial draft?
And I don't have any skill in Russian, but I would say that his translation is not good, or at least was not thoughtfully made, based solely on the fact that he did not write the author's name in it.
> As it happens, I recently translated a short story by Kir Bulychev — a Soviet science-fiction icon virtually unknown in the West.
I for one enjoyed reading it! As for the article, it's on point. There will be fewer historians and translators, but I suspect those that stick around will be greatly amplified.
I gave a quick look and was surprised to see that most of the first two paragraphs simply aren't there. I guess you've read something else!
As for machine translation: currently it isn't remotely ready to deal with literature by itself, but it could be handy to assist translators.
Haven't even read it completely, but in contrast to the countless submissions regurgitating badly thought-out meta arguments about AI-supported software engineering, it actually seems to elaborate on some interesting points.
I also think that the internet as a primary communication and mass medium + generative AI evokes 1984, very strongly.
Went down a bit of a rabbit hole on the original author, Kir Bulychev, and saw that he wrote many short stories set in Veliky Guslar (which explained the name Greater Bard). The overall tone is very very similar to R.K. Narayan's Malgudi Days (albeit without the fantastical elements of talking goldfish), which is a favorite of mine. If anyone wants to get into reading some easily approachable Indian English literature, I always point them to Narayan and Adiga (who wrote The White Tiger).
On that note, does anyone else have any recommendations on authors who make use of this device (small/mid-sized city which serves as a backdrop for an anthology of short stories from a variety of characters' perspectives)?
On the other hand, people tend to be happy with a history that ignores 90+% of what happened, instead focusing on a "central" narrative, which traditionally focussed on maybe 5 Euro-Atlantic great powers, and nowadays somewhat pretends not to.
That being said, I don't like the subjectivist take on historical truth advanced by the article. Maybe it's hard to positively establish facts, but it doesn't mean one cannot negatively establish falsehoods and this matters more in practice, in the end. This feels salient when touching on opinions of Carr's as a Soviet-friendly historian.
You may indeed be able to establish some facts with high confidence. Many others will be suppositions or just possibilities. Establishing "facts" though is not really the point (despite how history is taught in school).
You try to weave all these different things into a bigger narrative or picture. It is most definitely an act of interpretation, which itself is embedded in our current conceptions (some of which are invisible to us and which future historians may then riff on).
Saying that you don't like the subjectivist take on history means you think there is an objective history out there to be had which we could all agree on, but that does not exist.
The work of historians is to make inferences based on incomplete and contradictory sources.
Historians aren't simple fact-checkers. They make judgements in an attempt to understand the sweep of history.
You can see what kind of work they have to do every time you stare at some bullshit narrative put out by a company about how really it was good for the local economy for them to run their fracking operation, and the waste water really was filtered three times so it couldn't be causing the overabundance of three-legged frogs, and last year they funded two scientific studies that prove it. (I just made this up, hope you get the idea.)
There may be no objective story, but some stories and fact theories are more rigorous and thoroughly defended than others.
You see all these examples like "I got ChatGPT to make a JS space invaders game!" and that's cool and all, but that's sort of missing a pretty crucial part: the beginning of a new project is almost always the easiest and most fun part of the project. Showing me a robot that can make a project that pretty much any intern could do isn't so impressive to me.
Show me a bot that can maintain a project over the course of months and update it based on the whims of a bunch of incompetent MBAs who scope creep a million new features and who don't actually know what they want, and I might start worrying. I don't know anything about the other careers so I can't speak to that, but I'd be pretty surprised if "Mathematician" is at severe risk as well.
Honestly, is there any reason for Microsoft to even be honest with this shit? Of course they want to make it look like their AI is so advanced because that makes them look better and their stock price might go up. If they're wrong, it's not like it matters, corporations in America are never honest.
The paper itself [1] doesn't say "replace" anywhere: the purpose was to measure where AI has an "impact". They even say (in the discussion)
It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss... This would be a mistake ... Take the example of ATMs, which ... led to an increase in the number of bank teller jobs as banks opened more branches at lower costs and tellers focused on more valuable relationship-building...
Ok, good. Something definitely seems amiss when a bunch of CS researchers report that "mathematicians" are among the most "replaceable" (good luck designing a new LLM without any knowledge of math). Overall this post says something about the sad state of Twitter and search: I had to dig through quite a few articles repeating this job-replacement crap before I could even find the title of the article (which was then easy to find on arXiv). And go figure, the authors didn't mean to make the statement everyone says they made.
Looking through the actual paper now, yeah, I think I actually more or less agree with the writers.
"This score captures if there is nontrivial AI usage that successfully completes activities corresponding to significant portions of an occupation’s tasks."
Then the author describes their job qualitatively matching their AI applicability score by using AI to do most of their work for them.
If there's a lot of unmet demand for low-priced high-quality translation, translators could end up having more work, not less.
Well-put
This is exactly what tech monopolies want. To make everyone forget about the alternatives to their products. Strip Newspeak down to the very bone. Eliminate the words for things they can't control.
And that is exactly why translators are getting replaced by ML/AI. Companies don't care about quality; that's why customer support was the first thing axed - companies see it only as a cost.
It says AI could be impactful to those fields. Modern chemistry is impactful to medicine but it didn't replace doctors.
So unfortunately the article this post linked to, while it has its own merits, starts off by citing a clickbait tweet and wildly misinterpreting the paper in the first sentence. Still, I hope people give both article and paper a generous reading. Even if the article starts off terribly it has interesting points which we shouldn't disregard just because of a lazy hook. The intermediate tweet, by contrast, is just lazy clickbait: half-truths and screencaps, the bread and butter of modern disinformation.
The sad thing is that it's easier than it's ever been to follow up on references. Even in this case, where the tweet itself provides no citations at all, I had to search for less than a minute to find the original paper.
Building up an epistemology isn't just recreational; ideally it's done for good reasons that are responsive to scrutiny, standing firm on important principles and, where necessary, conciliatory in response to epistemological conundrums. In short, such theories can be resilient and responsible, and facts based on them can inherit that resilience.
So I think it completely misses the point to think that "facts imply epistemologies" should have the upshot of destroying any conception of access to authoritative factual understanding. Global warming is still real, vaccines are still effective, sunscreen works, dinosaurs really existed. And perhaps, more to the point in this context, there really are better and worse understandings of the fall of Rome or the Dark Ages or Pompeii or the Iraq war.
If being accountable to the theory-laden epistemic status of facts means throwing the stability of our historical understanding into question, you're doing it wrong.
And, as it relates to the article, you're doing it super wrong if you think that creates an opening for a notion of human intuition that is fundamentally non-informational. I think it's definitely true that AI as it currently exists can spew out linguistically flat translations, lacking such things as an interpretive touch, or an implicit literary and cultural curiosity that breathes the fire of life and meaning into language as it is actually experienced by humans. That's a great and necessary criticism. But.
Hubert Dreyfus spent decades insisting that there were things "computers can't do", and that those things were represented by magical undefined terms that speak to ineffable human essence. He insisted, for instance, that computers performing chess at a high level would never happen because it required "insight", and he felt similarly about the kind of linguistic comprehension that has now, at least in part, been achieved by LLMs.
LLMs still fall short in critical ways, and losing sight of that would involve letting go of our ability to appreciate the best human work in (say) history, or linguistics. And there's a real risk that "good enough" AI can cause us to lose touch with such distinctions. But I don't think it follows that you have to draw a categorical line insisting such understanding is impossible, and in fact I would suggest that's a tragic misunderstanding that gets everything exactly backwards.
Certainly some facts can imply a certain understanding of the world, but they don't require that understanding in order to remain true. The map may require the territory, but the territory does not require the map.
“Reality is that which, when you stop believing in it, doesn't go away.” ― Philip K. Dick
Or, to return the topic of the post, it just means that our translations need to try a little harder, not that human quality translation is impossible to do via machine.
I think it's very important to remember that objective truth exists, because some large percentage of society has a political interest in denying that, and we're slipping ever closer into Sagan's "Demon Haunted World."
I don't believe the current race to build AI is actually about any productivity gains (which are questionable at best).
I believe the true purpose of the outsized AI investments is to make sure the universal answer machine will give answers that conform to the ideology of the ruling class.
You can read hints of that in statements like the Trump AI Action Plan [0], but also things like the Llama 4 announcement. [1]
[0] "Ensure that Frontier AI Protects Free Speech and American Values" - https://www.whitehouse.gov/wp-content/uploads/2025/07/Americ...
[1] "It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics. This is due to the types of training data available on the internet." https://ai.meta.com/blog/llama-4-multimodal-intelligence/
I'd love to see them prove this, but they can't.
This saying exists for a reason: https://en.m.wikipedia.org/?redirect=no&title=Reality_has_a_...
Yeah, no. I find it funny how everyone from other specialties takes offence when their piece of "advanced" whatever gets put on a list, but has absolutely no issue making uninformed, inaccurate and oversimplified remarks like "averaging machines".
Brother, these averaging machines just scored gold at IMO. Allow me to doubt that whatever you do is more impressive than that.
a) "solve" NLP well enough to understand the problem
b) reason through various "themes", ideas, partial demonstrations and so on
c) verify some of them
d) gather the good ideas from all the tried paths and come up with the correct demonstrations in the end
Now tell me a system like this can't take source material and all the expert writings so far, and come up with various interpretations based on those combinations. And tell me it'll be less accurate than some historian's "vibes". Or a translator's "feelings". I don't buy it.
> Now tell me a system like this can't take source material and all the expert writings so far, and come up with various interpretations based on those combinations. And tell me it'll be less accurate than some historian's "vibes".
Framing it as the kind of problem where accuracy is a well-defined concept is the error this article is talking about. Literally the historian's "vibes" and "feelings" are the product you're trying to mimic with the LLM output, not an error to be smoothed out. I have no doubt that LLMs can have real impact in this field, especially as turbopowered search engines and text-management tools. But the point of human narrative history is fundamentally that we tell it to ourselves, and make sense of it by talking about it. Removing the human from the loop is IMO like trying to replace the therapy client with a chat agent.
the better off we will all be.
but of course, this goes so directly against how many people think that I won't even bother.
On the other hand, one day they will replace human beings. And secondly, if something like translation (or, in general, any mental work) becomes too easy, then we also run the risk of increasing the amount of mediocre work. The fact is, if something is hard, we'll only spend time on it if it's really worthwhile.
Same thing happens with phone cameras. Yes, it makes some things more convenient, but it also has resulted in a mountain of mediocrity, which isn't free to store (requires energy and hence pollutes the environment).