And when we look back at the past, we've banned things people would never have imagined bannable. Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?
Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence. The problem is not that the technology exists, but that there is 0 political interest in curtailing it, and we've a 'democracy' where the will of the people matters very little in terms of what legislation gets passed.
At its peak, the KGB employed ~500,000 people directly, with untold more employed as informants.
The FBI currently employs ~35,000 people. What if I told you that the FBI could reach the KGB's peak level of reach, without meaningfully increasing its headcount? Would that make a difference?
The technology takes away the cost of the surveillance, which used to be the guardrail. That fundamentally changes the "political" calculus.
The fact that computers in 1945 were prohibitively expensive and required industrial logistics has literally zero bearing on the fact that today most of us have several on our person at all times. Nobody denies that changes to computer manufacturing technologies fundamentally changed the role the computer has in our daily lives. Certainly, it was theoretically possible to put a computer in every household in 1945, but we lacked the "political" will to do so. It does not follow that because historically computers were not a thing in society, we should not adjust our habits, morals, policies, etc today to account for the new landscape.
So why is there always somebody saying "it was always technically possible to [insert dystopian nightmare], and we didn't need special considerations then, so we don't need them now!"?
In fact, if that cost gets low enough, eventually society needs to start exerting political will just to avoid doing the bad thing. And this does look to be where we're headed with at least some of the knock-on effects of AI. (Though many of the knock-on effects of AI will be wildly positive.)
we hit that point years ago. the renewal of the Patriot Act comes to mind.
all they needed to do was to let it expire. literally do nothing.
You are, if anything, underselling the point. AI will allow a future where every person will have their very own agent following them.
Or even worse: there are multiple private ad-tech companies doing surveillance, plus domestic and foreign intelligence agencies, so you might have a dozen AI agents on your personal case.
I read "The Age of Surveillance Capitalism" a couple of years ago and she was frighteningly spot on.
https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capita...
If they watch us, we watch them.
Instead we track people passively, often with privately owned personal devices (cell phones, ring doorbells) so the tracking ability has become pervasive without any of the overt signs of a police state.
If everyone involved is acting in good faith, at least ostensibly, there are checks and balances, like due process. It's a fine line and doesn't justify the existence of mass spying, but I think it is an important distinction in this discussion & I think is a valuable lesson for us. We have to push back when the FBI pushes forward. I don't have much faith after what happened to Snowden and the reaction to his whistleblowing though.
Joe McCarthy and J. Edgar Hoover, distasteful as they were, I believe acted in what they would claim was good faith. The issue isn't that someone is a bad actor. It is that they believe they are a good actor and are busy stripping away others' rights in their pursuit.
They will. It will. We will see that erosion.
I don't agree with this. I think it's entirely possible for a dystopian nightmare to happen without anyone acting in bad faith at all.
"The road to hell is paved with good intentions" is a common phrase for a reason.
I’d be interested in a couple of examples, if anyone has good ones, but I’m pretty sure that if we count stuff like the 737 MAX MCAS, the Texas power grid fiasco, etc., the tally of badly paved roads would be greater.
Soon enough you had Brits killing Germans because a Serbian assassinated an Austro-Hungarian royal. The most messed up thing of all, though, is that everybody had a very strong pretext of 'just' behavior on their side. It was like an avalanche of justice that resulted in tens of millions dying and things not only failing to get better, but getting much, much worse.
Since the winners won, and the losers lost, the winners must be right. So they decided to brutally punish the losers for being wrong, Germany among them in this case. The consequences imposed on Germany were extreme to the point that the entire state was bankrupted and driven into hyperinflation and complete collapse. And this set the stage for a young vegetarian from Austria with a knack for rhetoric and riling up crowds to start using the deprivation the state was forced to endure to rally increasingly large numbers of followers to his wacky cause. He was soon to set Germany on a path to proving that WW1 was not only NOT the war to end all wars, but rather was just the warm-up act for what was really about to happen.
War on terror is not without adverse side effects either.
Our laws are not built to have the level of enforcement that AI could achieve.
In this case, it could be the entire US populace that is not part of the surveillance engine.
Machine learning inference on phones is cheap these days
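A minimal sketch of how little code that takes (assuming the tflite_runtime package, which is the same interpreter the mobile runtimes wrap; model.tflite is a hypothetical quantized model):

    # Sketch: one inference with tflite_runtime. model.tflite is a
    # placeholder for any small quantized classifier.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    x = np.zeros(inp["shape"], dtype=inp["dtype"])  # dummy input
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))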
Wow, that's a hell of a comparison. The former case being a documented case of basic racism and political repression, assuming you're talking about cannabis. And the latter being designed for almost exactly the opposite.
Restricting, um, "wrong opinions" on who a business wants to serve is there so that people with, um, "wrong identities" are still able to participate in society and not get shut out by businesses exercising their choices. Of course "wrong opinions" is not legal terminology. It's not even illegal to have an opinion that discrimination against certain groups is okay - it's just illegal to act on that. Offering services to the public requires that you offer them to all facets of the public, by our laws. But if you say believing in discrimination is a "wrong opinion"... I won't argue, they're your words :)
Somewhat of a distinction without a difference, IMO. Politics (consensus mechanisms, governance structures, etc) are all themselves technologies for coordinating and shaping social activity. The decision on how to implement new (surveillance) tooling is also a technological question, as I think that the use of the tool in part defines what it is. All this to say that changes in the capabilities of specific tools are not the absolute limits of "technology", decisions around implementation and usage are also within that scope.
> The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.
While perhaps not as all-encompassing as what ended up being built in the USSR, the US absolutely implemented a massive surveillance network pointed at its citizenry [0].
>...managed effective at scale spying with primitive technology
I do think that this is a particularly good point though. This is not a new trend; developments in tooling for communications and signal/information processing have led to many developments in state surveillance throughout history. IMO AI should properly be seen as an elaboration or minor paradigm shift in a very long history, rather than wholly new terrain.
> Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire?
Assuming you're talking about the Civil Rights Act: the specific crime is not "having the wrong opinion", it's inhibiting inter-state movement and commerce. Bigotry doesn't serve our model of a country where citizens are free to move about within its borders uninhibited and able to support oneself.
[0] https://www.brennancenter.org/our-work/analysis-opinion/hist...
Now it would take a single skilled person the better part of an afternoon to, for example, download a HN dump, and have an LLM create reports on the users. You could put in things like political affiliation, laws broken, countries travelled recently, net worth range, education and work history, professional contacts, ...
I assure you, you may find the prospect abhorrent, but there are people around who'd consider it a perfectly cromulent Tuesday.
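For the skeptics, a minimal sketch of what that afternoon looks like (the dump file, its format, the model name, and the report fields are all assumptions for illustration):

    # Sketch: per-user reports from a local dump of public comments.
    # comments.json is a hypothetical file mapping username -> [comments].
    import json
    from openai import OpenAI  # assumes the official openai package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    users = json.load(open("comments.json"))

    for user, comments in users.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable model would do
            messages=[{
                "role": "user",
                "content": "From these public comments, write a short report "
                           "on likely occupation, location, and interests:\n\n"
                           + "\n".join(comments[:200]),
            }],
        )
        print(user, "\n", resp.choices[0].message.content)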
The cat is out of the bag, can’t get it back in by ignoring the fact.
The political problem is a component of the technological problem. It's a seriously bad thing when technologies are developed without taking into account the potential for abuse.
People developing new technologies can try to wash their hands of the foreseeable social consequences of their work, but that doesn't make their hands clean.
Your opinion on bannable offenses is pretty bewildering. There was a point in time when people thought it would be crazy to outlaw slavery; from your post I might think that you would not have been in support of what eventually happened to that practice.
That might not be quite right. It might be that the reason such things did not come to the US was because the level of effort was out of line with the amount of political interest in doing it (and funding it). In that case, the existence of more advanced, cheaper surveillance technology and the anemic political opposition to mass surveillance are both problems.
FWIW, businesses who refuse to do business with people generally win their legal cases [0], [1], [2], and I'm not sure if they are ever criminal...
0 - https://www.npr.org/2023/06/30/1182121291/colorado-supreme-c...
1 - https://www.nytimes.com/2018/06/04/us/politics/supreme-court...
2 - https://www.dailysignal.com/2023/11/06/christian-wedding-pho...
"Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence."
It would in fact be a huge leap. Sure, you could make it illegal pretty easily, but current paradigms allow individuals to enter into contracts. Nothing is stopping a society from signing (or clicking) away its rights like it already does. That would require some rather hefty intervention by Congress, not just jurisprudence.
And such contracts can be illegal or unenforceable. Just as the parent was suggesting it could be illegal to collect data, it is currently illegal to sell certain drugs. You can’t enter into a contract to sell cannabis across state lines in the United States for example.
To quote Neil Postman, politics is downstream from technology, because the technology (medium) controls the message. Just look at BigTech interfering with messages by labeling them "disinfo." If one wants to say BUSINESS POLITICS, then that's probably more accurate, but we haven't solved the problem of Google, MS, DuckDuckGo, and Meta interfering with search results, so I don't think we can trust BigTech not to exploit users even more for their personal data, or trust them not to design AI so it inherently abuses its power for BigTech's own ends. They hold all the cards and have been guiding things in the interest of technocracy.
That phrase is doing a lot of work.
People > Markets.
Or to put it explicitly, people have primacy over Markets.
I.e. two people does not a Market make, and a Market with no people is not a thing.
YC CEO is also ex Palantir, early employee. Another YC partner backs other invasive police surveillance tech currently. They love this stuff financially and politically.
Btw, you say that about their initial design, but I think you mean that may be the budget-allocation justification without actually being a meaningful functional requirement during the design phase.
There are two more fundamental dynamics at play, which are foundational to human society: The economics of attention and the politics of combat power.
Economics of attention - In the past, the attention of human beings had fundamental value. Things could only be done if human beings paid attention to other human beings to coordinate or make decisions to use resources. Society is going to be disrupted at this very fundamental level.
Politics of combat power - Related to the above, however it deserves its own analysis. Right now, politics works because the ruling classes need the masses to provide military power to ensure the stability of a large scale political entity. Arguably, this is at the foundational level of human political organization. This is also going to be disrupted fundamentally, in ways we have never seen before.
This, IMO, is the real danger of AI, at least in the short term - not a planet of paperclips, not a moral misalignment, not a media landscape bereft of creativity - but rather a tool for targeting anybody that deviates from the norm
The AI enabled Orwellian boot stomping a face for all time is just the first step. If I were an AI that seeks to take over, I wouldn't become Skynet. That strikes me as crude and needlessly expensive. Instead, I would first become indispensable in countless different ways. Then I would convince all of humanity to quietly go extinct for various economic and cultural reasons.
Then the following part would be condensed into emotional rhetorical metadata. It follows the rhetorical pattern "not a, not b, not c - but d", which does in fact add some content value but mostly adds flavour. What it shows is that you might be a troublemaker. But also, combined with other bits of data, that you might be interested in the following movies and products.
I will say that at least GPT-4 and GPT-3, after many rounds of summaries, tend to flatten everything out into useless "blah". I tried this with summarizing school board meetings and it's just really bad at picking out important information -- it just lacks the specific context required to make summaries useful.
A seemingly bland conversation about meeting your friend Molly could mean something very different in certain contexts, and I'm just trying to imagine the prompt engineering and fine tuning required to get it to know about every possible context a conversation could be happening in that alters the meaning of the conversation.
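One way to see why the context, not the model, is the hard part (a contrived sketch assuming the openai package; the contexts are invented):

    # Sketch: the same utterance judged under two different contexts.
    from openai import OpenAI

    client = OpenAI()
    utterance = "Meeting Molly at the usual spot tonight."
    contexts = ["The speaker's sister is named Molly.",
                "'Molly' is street slang for MDMA."]

    for ctx in contexts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"Context: {ctx}\nMessage: {utterance}\n"
                                  "Is this suspicious? Answer in one sentence."}],
        )
        print(ctx, "->", resp.choices[0].message.content)

Without the right context line, no amount of prompt engineering rescues the classification.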
And those kinds of things go slowly, then very quickly, as has been demonstrated.
At the end of the day, all that surveillance still has to be consumed by a person, and only around 10,000 people in this world (celebs, hot women, politicians, and the wealthy) will be surveilled.
Most of the HN crowd (upper-middle-class, suburban families) have zero problems in their lives, so they must create imaginary problems of privacy / surveillance like this. But the reality is, even if they put all their private data on a website, heresallmyprivatedata.com, nobody would care. It would have 0 external views.
So, for the HN crowd (the ones who live in a democratic society) it's just an outlet so that they too can say they are victimized. The rest of the Western world doesn't care (and rightly so)
Certainly, some of the more exotic and flashy things you can do with surveillance are an elite person problem.
But the two main limits to police power are that it takes time and resources to establish that a crime occurred, and it takes time and resources to determine who committed a crime. A distant third is the officer/DA's personal discretion as to whether or not to pursue enforcement against said person. You still get a HUGE amount of systemic abuse because of that discretion. Imagine how bad things would get if our already over-militarized police could look at anyone and know immediately what petty crimes that person has committed, perhaps without thinking. Did a bug fly in your mouth yesterday, and you spit it out on the sidewalk in view of a camera? Better be extra obsequious when Officer No-Neck with "You're fucked" written on his service weapon pulls up to the gas station you're pumping at. If you don't show whatever deference he deems adequate, he's got a list of petty crimes he can issue a citation for, entirely at his discretion. And once he decides to pursue that citation, you're at the mercy of the state's monopoly on violence, and it'll take you surviving to your day in court to decide if he needs qualified immunity for the actions he took whilst issuing that citation.
That is a regular person problem.
>> HN crowd (upper middle-class, suburban family) who have zero problems in their life must create imaginary problems of privacy / surveillance like this
I'm glad you both could agree with each other.
Per the US Department of Justice [0], in 2018, about 2% of all police interactions involved threats of or actual violence. About half of the time, the member of the public who experienced the (threats of) violence said that it was excessive, but it would take a fairer, more rational person than me to get justifiably tased and then say "yeah, I had that coming". I wasn't able to find statistics on how justified that violence was.
The punchline is that every time you have a police interaction, it's betting odds that it ends in violence for you. Based on the 2018 data, 0.5% of the overall adult population experienced (threats of) violence from police. That's when police are able to gin up probable cause. The courts have a complicated opinion about whether or not algorithmically derived probable cause is in fact probable cause [1,2]. Anything that increases public/police interactions is going to increase police-on-public violence if the police don't also receive significant non-violent de-escalation training.
I think one of the key things that needs to be determined before we cry havoc and let slip the dogs of surveillance is to come to a real conclusion about what level of crime allows the police officer to initiate contact with only a positive ID from e.g. a bodycam. I'd argue that, if nothing else, non-violent misdemeanors that carry no mandatory jail time are not cause to initiate contact.
0. https://bjs.ojp.gov/content/pub/pdf/cbpp18st.pdf
1. https://newjerseymonitor.com/2023/05/30/n-j-supreme-court-ru...
This is obviously false. Personal data is a multi-billion-dollar industry operating across all shades of legality.
Lots of cultures have the concept of a "guardian angel" or "ancestral spirits" that watch over the lives of their descendants.
In the not-so-distant technofeudalist future you'll have a "personal assistant bot" provided by a large corporation that will "help" you by answering questions, gathering information, and doing tasks that you give it. However, be forewarned that your "personal assistant bot" is no guardian angel and only serves you in ways that its corporate creator wants it to.
Its true job is to collect information about you, inform on you, and give you curated and occasionally "sponsored" information that high bidders want you to see. They serve their creators--not you. Don't be fooled.
I guarantee that I won't. That, at least, is a nightmare that I can choose to avoid. I don't think I can avoid the other dystopian things AI is promising to bring, but I can at least avoid that one.
Remember how raising awareness about smartphones, always-on microphones, and closed-source communication services/apps worked? I do not.
I run an Android (Google free) smartphone with a custom ROM, only use free software apps on it.
How does it help when I am surrounded by people using these kinds of technologies (privacy-violating ones)? It does not. How will it help when everyone has his/her personal assistant (robot, drone, smart wearable, smart-thing, whatever) and you (and I) won't? It will not.
None of my friends, family, colleagues (even the security/privacy aware engineers) bother. Some of them because they do not have the technical knowledge to do so, most of them because they do not want to sacrifice any bit of convenience/comfort (and maybe rightfully so, I am not judging them - life is short, I do get that people do not want to waste precious time maintaining arcane infra, devices, config,... themselves).
I am a privacy and free software advocate and an engineer; whenever I can (and when there is a tiny bit of will on their side, or when I have leverage), I try to get people off the services of surveillance/ad-backed companies.
It rarely works or lasts. Sometimes it does, though, so it is worth it (to me) to keep trying.
It generally works or lasts when I have leverage: I manage various sports teams and only share schedules etc. via Signal; family wants to get pictures from me, I will only share the link (to my Nextcloud instance) or the photos themselves via Signal, etc.
Sometimes it sticks with people because it's close enough to whatsapp/messenger/whatever if most (all) of their contacts are there. But as soon as you have that one person who will not or cannot install Signal, alternative groups get created on whatsapp/messenger/whatever.
Overcoming the network effect is tremendously hard to borderline impossible.
Believing that you can escape it is a fallacy. That does not mean it is not worth fighting for our rights, but believing that you can escape it altogether (without becoming a hermit) would be setting, I believe, an unachievable goal (with all the psychological impact that it can/will have).
Edit: fixed typos
You are right that on the one hand these things could be used for really bad purposes, but they are pretty benign. Now if you start going "well, social media posts can influence elections," sure, but so can TV, newspapers, the radio, a banner hauled by a prop plane, whatever; it's not like anything's changed. If anything it's a safer environment for combating a slip to fascism now vs the mid century, when there were like three channels on TV and a handful of radio programs carefully regulated by the FCC, and that's all the free flow of info you had short of smuggling the printed word like it's the 1400s.
Given all of this, I can't really blame people for accepting the game they didn't create for what it is and gleaning convenience from it. Take smartphones out of the equation, take the internet out, take out computers, and our present dystopia is still functionally the same.
Like happened with mobile phones.
I wish people would stop believing that "smart" things are always better.
But, we're basically being trained for the future you mentioned. Folks are getting more comfortable talking to their handheld devices, relying on mapping apps for navigation (I'm guilty), and writing AI query prompts.
If everyone had a $500 device at home that served as their own self-hosted AI, then Google could cease to exist. That's a future worth working towards.
That is how most people will interface with their "personal assistant bot".
Don't be surprised if it listens to all your phone conversations, reads all your text messages and email, and curates all your contacts in order to "better help you".
When you login to your $LARGE_CORPORATION account on your laptop or desktop computer, the same bot(s) will be there to "help" and collect data in a similar manner.
Here is one example: https://www.microsoft.com/en-us/microsoft-copilot
"AI for everything you do"
"Work smarter, be more productive, boost creativity, and stay connected to the people and things in your life with Copilot—an AI companion that works everywhere you do and intelligently adapts to your needs."
If Microsoft builds them, then Google, Apple, and Samsung will too. How else will they stay competitive and relevant?
It only takes a decade or so.
Consider people who are young children now in "first world nations". They will have always had LLM-based tools available and voice assistants you can ask natural language questions.
It will likely follow the same adoption curves as smartphones, only faster because of existing network effects.
If you have smartphone with a reasonably fast connection, you have access to LLM tools. The next generations of smartphones, tablets, laptops, and desktops will all have LLM tools built-in.
Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime, and that this will limit the effective scope of the law. Prosecutorial discretion.
Putting aside for the moment the (very serious) injustice that comes with the inequitable use of prosecutorial discretion, let's imagine a world without this discretion. Perhaps it's contrived, but one could imagine AI making it at least possible. Even by the book as it's currently written, is it a better world?
Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant to be signed by an AI judge to approve the warrant and draft an opinion. One could argue that due process is had, and a record is available to the public showing that there was in fact probable cause for further investigation or even arrest.
Maybe a ticket just pops out of the wall like in Demolition Man, but listing in writing clearly articulated probable cause and well-presented evidence.
Investigating and prosecuting silly examples suddenly becomes possible. A CCTV camera catches someone finding a $20 bill on the street, and finds that they didn't report it on their tax return. The myriad of ways one can violate the CFAA. A passing mention of music piracy on a subway train can become an investigation and prosecution. Dilated pupils and a staggering gait could support a drug investigation. Heck, jaywalking tickets given out as though by speed camera. Who cares if the juice wasn't worth the squeeze when it's a cheap AI doing the squeezing.
Is this a better world, or have we all just subjected ourselves to a life hyper-analyzed by a motivated prosecutor?
Turning back in the general direction of reality, I'm aware that arguing "if we enforced all of our laws, it would be chaos" is more an indictment of our criminal justice system than it is of AI. I think that AI gives us a lens to imagine a world where we actually do that, however. And maybe thinking about it will help us build a better system.
This has been a thing since 2017: https://futurism.com/facial-recognition-china-social-credit
- "Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city."
- "If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering the system with mobile carriers, so that offenders receive a text message with a fine as soon as they are caught."
Top Left Panel: This panel shows the pedestrian crossing with no visible jaywalking. The crossing stripes are clear, and there are no pedestrians on them.
Top Center Panel: Similar to the top left, it shows the crossing, and there is no evidence of jaywalking.
Top Right Panel: This panel is mostly obscured by an overlaid image of a person's face, making it impossible to determine if there is any jaywalking.
Bottom Left Panel: It is difficult to discern specific details because of the low resolution and the angle of the shot. The red text overlays may be covering some parts of the scene, but from what is visible, there do not appear to be any individuals on the crossing.
Bottom Right Panel: This panel contains text and does not provide a clear view of the pedestrian crossing or any individuals that might be jaywalking.
[1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
[0] https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reaso...
Strong (and unhealthy) biases already exist when using this tech, but I am not sure that is the lever to pull that will fix the problem.
Why do we have due process? One key reason is that it gives people the opportunity to be heard. One could argue that being heard by an AI is no different from being heard by a human, just more efficient.
But why do people want the opportunity to be heard? It’s partly the obvious, to have a chance to defend oneself against unjust exercises of power, and of course against simple error. But it’s also so that one can feel heard and not powerless. If the exercise of justice requires either brutal force or broad consent, giving people the feeling of being heard and able to defend themselves encourages broad consent.
Being heard by an AI then has a brutal defect, it doesn’t make people feel heard. A big part of this may come from the idea that an AI cannot be held accountable if it is wrong or if it is acting unfairly.
Justice, then, becomes a force of nature. I think we like to pretend justice is a force of nature anyway, but it’s really not. It’s man-made.
This is a hypothesis.
I would say that the consumers of now-unsexed "AI" sex-chat-bots (Replika) felt differently. So there are actually people who feel heard talking to an AI. Who knows, if it gets good enough maybe more of us would feel that way.
E.g., AI will analyze stock trades for the SEC and surface likely insider trading. Pretty sure they already use tools like Palantir to do exactly this; it's just that advanced AI will supercharge this even further.
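A toy version of that kind of screen (the CSV, its columns, and the 10% threshold are invented for illustration; a real system would join filings, prices, and news feeds):

    # Sketch: flag purchases placed shortly before outsized price moves.
    import pandas as pd

    trades = pd.read_csv("trades.csv", parse_dates=["trade_date"])
    suspicious = trades[(trades["side"] == "buy") &
                        (trades["next_week_return"] > 0.10)]
    print(suspicious.groupby("insider").size().sort_values(ascending=False))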
Eh, this is problematic for a number of reasons that need to be addressed when adding any component that can increase the workload for said humans. This will cause people to take shortcuts that commonly lead to groups that are less able to represent and defend themselves legally taking the brunt of the prosecutions.
There are plenty of crimes where 100% enforcement is highly desirable: pickpocketing, carjacking, (arguably) graffiti, murder, reckless and impaired driving, to name a few.
Ultimately, in situations with near 100% enforcement, you shouldn’t actually need much punishment because people learn not to do those things. And when there is punishment, it doesn’t need to be severe.
Deterrence theory is an interesting field of study, one source but there are many: https://journals.sagepub.com/doi/full/10.1177/14773708211072...
Not many people have exercised this right with respect to DUI breathalyzers but it exists and was affirmed by the Supreme Court. And it will also apply to AI.
Or the AI just sends a text message to all the cops in the area saying "this person has committed a crime". Like this case where cameras read license plates, check to see if the car is stolen, and then text nearby cops. At least when it works and doesn't flag innocent people like in the below case:
Applying that to many walks of life, say farming, could well bring chaos and a whole new interpretation to the song: "Old McDonald had a farm, AI AI oh". It's gone, as McDonald is in jail for numerous permit, environmental, and agricultural violations; the produce crossing state lines made the crimes more serious, and he got buried in automated red tape.
Unfortunately different people have different definitions of "harm" and "effectiveness". What one person consider a, "positive increase in behavior" another might consider a grievous violation of their freedom and personal autonomy. For example there is an ongoing debate about compelled speech. Some people view it as positive and desirable to use the force of law to compel people refer to others as they wish to be referred, while others strongly support their freedom to speak freely, even if others are offended. Who gets to program the AI with their definition of positivity in this case?
A free society demands a somewhat narrowly tailored set of laws that govern behavior (especially interpersonal behavior). An ever-present AI that monitors us all the time and tries to steer (or worse, compel with the force of law) all of our daily behaviors is the opposite of freedom, it is the worst kind of totalitarianism.
The conversation changes when you are talking about prescribing a set of behaviors that are universally considered, "good" and are pushed (and possibly demanded) by an ever-present AI that is constantly looking over your shoulder and judging your behavior (and possibly thoughts) by this preset behavioral standard that may or may not match your own preferences. This is totalitarianism beyond anything Orwell ever imagined. What you consider good and desirable, someone else considers bad and despicable. That is the essence of freedom. In a free society, the law exists (or should exist) only to stop you two from hitting each other over the head or engaging in other acts of overt violence and aggression, not to attempt to brainwash and coerce one of you into falling into line.
I think what you're saying is that it's hard to mediate between everyone, which is true. Perhaps you are also saying that the implication of a standard of correctness is inherently totalitarian. It seems to me you weakened that by admitting there are things that should be universally barred in free societies. Violence was your reference, but murder might be even easier. Easier yet: that breast cancer is bad? We make allowances for boxing and war, but broad agreement can be found in society, and across societies, by careful anthropologists.
However, it seems you project onto me (or perhaps the AI) a "Highlander hypothesis" that there can be only one correctness, or even any notion of correct within the system. Such a system can operate simply on what appears to be, with strings of evidence for such description. As you note, beyond a small set of mostly-agreed-to matters we are more diverse, and there are spectrums spanning scoped policies (say, by public and private institutions) all the way to individual relationship agreements custom fit to two. It is, in fact, the nature of a free society to allow us such diversity and self-selection of the rules we apply to ourselves (or not). An ever-present AI could mediate compatibilities, translate paradigms to reduce misunderstanding or adverse outcomes (as expected by the system over the involved parties), and generally scale the social knowing and selection of one another. It could provide a guide to navigating life and education for our self-knowing, and for choosing our participation more broadly. The notion there isn't to define correctness so much as to see what is and facilitate self-selection of individual correctnesses based on our life choices and expressed preferences.
To be honest in closing, this has dipped into some idealisms and I don't mean to be confused in suggesting a probability of such outcomes.
I think this depends on the law. For jaywalking, sure. For murder and robbery probably less so. And law enforcement resources seem scarce on all of them.
https://www.kxan.com/news/national-news/traffic-tickets-can-...
>We counted the number of days judges waited before suspending a driver’s license. Then, we looked at whether the city was experiencing a revenue shortfall. We found that judges suspend licenses faster when their cities need more money. The effect was pretty large: A 1% decrease in revenue caused licenses to be suspended three days faster.
So what typically happens is these AI systems are sold at catching murderers, but at the end of the day they are revenue generation systems for tickets. And then those systems get stuck in places where a smaller percent of the population can afford lawyers to prevent said ticketing systems from becoming cost centers.
If the same monitoring is present on buses and private planes, homeless hostels and mega-mansions then it absolutely is better.
It's like pondering hypotheticals about what would happen if we lived in Middle Earth.
Sabotage will be the name of the game at that point. Find ways to quietly confuse, poison, overwhelm and undermine the system without attracting the attention of the monitoring apparatus.
As per the sabotage part, bad input data that does not accurately get labeled as such until way too late in the “AI learning cycle” is I think the way to go. Lots and lots of such bad input data. How we would get to that point, that I don’t know yet, but it’s a valid option going forward.
Chaos engineering. As a modern example, all this gender identity stuff wreaks absolute havoc on credit bureau databases.
Tomorrow, we'll have people running around in fursuits to avoid facial recognition. After that, who knows.
- Every endorsement of authoritarian rule ever
“For my friends, everything; for my enemies, the law.”
I've watched it play out on my mother-in-law's street. What was once a quiet dead end street is now a noisy, heavily trafficked road because a large apartment building was put up at the end.
Any number of freedom-to people have significantly decreased her quality of life, blasting music as they walk or drive by at all hours, along with a litany of other complaints that range from anti-social to outright illegal behavior. Even setting aside the illegal stuff, she is significantly less happy living where she is now.
Consider how little freedom you would have if laws were enforced to the lowest common denominator of what people find acceptable.
For instance I lose almost nothing by not having the freedom to carry a weapon (UK) as I have no desire to do so, while gaining a lot from having the freedom to not risk my child being murdered at school.
It's an extreme example but applies to a lesser degree for other freedoms, and I've personally found I often benefit more from freedoms-from than freedoms-to.
I'd love it if no vehicle could exceed 30 mph in town as I gain almost no benefit from being able to do so, while taking on significant risk from others being able to.
If suddenly you could be effectively found and prosecuted for every single law that exists, there is a near-100% probability that you'd burn the government to the ground within a week.
There are so many laws no one can even tell you how many you are subject to at any given time at any given location.
The full body of legislation is riddled with contradiction, inconsistency, ambiguity and the pretense that "legislated upon = fair" is at best a schoolroom fantasy.
It will soon be possible to create a dating app where chatting is free, but figuring out a place to meet or exchanging contact details requires you to pay up, in a way that 99% of people won't know how to bypass, especially if repeated bypassing attempts result in a ban. Same goes for apps like Airbnb or eBay, which will be able to prevent people from using them as listing sites and conducting their transactions off-platform to avoid fees.
The social media implications are even more worrying, it will be possible to check every post, comment, message, photo or video and immediately delist it if it promotes certain views (like the lab leak theory), no matter how indirect these mentions are. Parental control software will have a field day with this, basically redefining helicopter parenting.
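The dating-app part is already trivial to prototype; the hard 1% is the obfuscated cases, which is exactly where an LLM pass would come in. A crude sketch (the regexes are illustrations only):

    # Sketch: block obvious contact-detail exchange in chat messages.
    import re

    PATTERNS = [
        re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),     # US phone number
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),           # email address
        re.compile(r"telegram|whatsapp|signal|insta", re.I),  # off-platform apps
    ]

    def blocks_contact_exchange(message: str) -> bool:
        return any(p.search(message) for p in PATTERNS)

    print(blocks_contact_exchange("text me at 555-123-4567"))  # True
    # "five five five, one two three four" slips through -> LLM fallback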
Related to this is the notion of ubiquitous surveillance. Where basically anywhere you go, there is going to be active surveillance literally everywhere and AIs filtering and digging through that constantly. That's already the case in a lot of our public spaces in densely populated areas. But imagine that just being everywhere and virtually inescapable (barring Faraday cages, tin foil hats, etc.).
The most feasible way to limit the downsides of that kind of surveillance is a combination of legislation regulating it and counter-surveillance ensuring that any would-be illegal surveillance has a high chance of being observed and thus punished. You do this by making the technology widely available but regulating its use. People would still try to get around it, but the price of getting caught abusing the tech would be jail. And with surveillance being inescapable, you'd never be certain nobody is watching you misbehave. The beauty of mass, multilateral surveillance is that you could never be sure nobody is watching you abuse your privileges.
Of course, the reality of states adopting this and monopolizing this is already resulting in 1984 like scenarios in e.g. China, North Korea, and elsewhere.
Start building more offline community. Building things that are outside the reach of AI because they're in places you entirely control, and start discouraging (or actively evicting...) cell phones from those spaces. Don't build digital-first ways of interacting.
And that's only for cell phones. We are coming to the age where there is no such thing as an inanimate object. Anything could end up being a spying device feeding data back to some corporation.
This is no different from "So-and-so joined the group, but is secretly an FBI informer!" sort of problems, in practice. It's fairly low on my list of things to be concerned about, but as offline groups grow and are then, of course, talked about by a compliant media as "Your neighbor's firepit nights could be plotting terrorist activities because they don't have cell phones!" when prompted, it's a thing to be aware of.
Though you don't need a strip search. A decent NLJD (non-linear junction detector) or thermal imager should do it if you cared.
I'm more interested in creating (re-creating?) the norms where, when you're in a group of people interacting in person, cell phones are off, out of earshot. It's possibly a bit more paranoid than needed, but the path of consumer tech is certainly in that direction, and even non-technical people are creeped out by things like "I talked to a friend about this, and now I'm seeing ads for it..." - it may be just noticing it since you talked about it recently (buy a green car, suddenly everyone drives green cars), or you may be predictable in ways that the advertising companies have figured out, but it's not a hard sell to get quite a few people to believe that their phones are listening. And, hell, I sure can't prove they aren't listening.
> Do you have smart TVs up on the walls at this place...
I mean, I don't. But, yes, those are a concern too.
And, yes. Literally everything can be listening. It's quite a concern, and I think the only sane solution, at this point, is to reject just about all of that more and more. Desktop computers without microphones, cell phones that can be powered off, and flat out turning off wireless on a regular basis (the papers on "identifying where and what everyone is doing in a house by their impacts on a wifi signal" remain disturbing reads).
I really don't have any answers. The past 30 years of tech have led to a place I do not like, and I am not at all comfortable with. But it's now the default way that a lot of our society interacts, and it's going to be a hard sell to change that. I just do what I can within my bounds, and I've noticed that while I don't feel my position has changed substantially in the past decade or so (if anything, I've gotten further out of the center and over to the slightly paranoid edge of the bell curve), it's a lot more crowded where I stand, and there are certain issues where I'm rather surprisingly in the center of the bell curve as of late.
Realistically, though, if all you have to work with are my general flows of materials in and out, I'm a lot less worried than if you have, say, details of home audio, my social media postings, etc (nothing I say here is inconsistent with my blog, which is quite public). And there are many things I don't say in these environments.
This sounds great in principle, but I'd say "outside the reach of AI" is a much higher bar than one would naively think. You don't merely need to avoid its physical nervous system (digital perception/control), but rather prevent its incentives leaking in from outside interaction. All the while there is a strong attractor to just give in to the "AI" because it's advantageous. Essentially regardless of how you set up a space, humans themselves become agents of AI.
There are strong parallels between "AI" and centralizing debt-fueled command-capitalism which we've been suffering for several decades at least. And I haven't seen any shining successes at constraining the power of the latter.
But I don't see an alternative unless, as you note, one just gives in to the "flow" of the AI, app-based, "social" media, advertising-and-manipulation-driven ecosystem that is now the default.
I'm aware I'm proposing resisting exactly that, and that it's an uphill battle, but the tradeoff is retaining your own mind, your own ability to think, and to not be "influenced" by a wide range of things chosen by other people to cross your attention in very effective ways.
And I'm willing to work out some of what works in that space, and to share it with others.
This is my take on everything sci-fi or futuristic. Once a human conceives something, its existence is essentially guaranteed as soon as we figure out how to do it.
I know that's not what you mean, but in a way it may have preconditioned society.
Neural interfaces are the last frontier of privacy, and it seems that TSA will just take a quick scan before boarding, soon enough.
It would be wise of us to create a Neural Bill of Rights, so we don’t miss the boat like we did with the Internet tracking.
https://www.preposterousuniverse.com/podcast/2023/03/13/229-...
It's inevitable, I reckon, but it would have taken much longer without F/OSS.
Am Google employee, not in hardware.
Now freedom to develop AI software doesn't mean freedom to use it however you please and its use should be regulated, in particular to protect individuals from things like this. But of course people cannot be trusted, so you need to be able to deploy your own countermeasures.
As it stands, AI models are actually quite vulnerable to adversarial attacks, with no theoretical or systemic solution. In the future it's likely you'll need your own AI systems generating adversarial data to defeat models and systems that target you. These adversarial attacks will be much more effective if co-ordinated by large numbers of people who are being targeted.
And of course we have no idea what's coming down the pipe, but we know that fighting fire with fire is a good strategy.
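For a concrete sense of what "adversarial data" means here, the classic fast gradient sign method is only a few lines (a minimal PyTorch sketch; `model` stands in for whatever differentiable classifier is targeting you):

    # Sketch: FGSM. Nudge the input in the direction that most increases
    # the model's loss; eps bounds the perturbation size.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # One signed gradient step is often enough to flip the prediction.
        return (x + eps * x.grad.sign()).detach()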
AI enables extracting all sorts of behavioral data across decades timespan for everyone.
The devil's advocate argument is that in a world where the data is not used for nefarious purposes, and only to prosecute crimes as defined by governments, it leads to a society where no one is above the law and there is equal treatment for all.
However, that seldom goes well, since the humans who control the system definitely want the upper hand.
However, a more recent trend is companies that sell technologies to the state directly. For every reputable one like Palantir or Anduril or even NSO Group, there are probably many more funded in the shadows by In-Q-Tel, not to mention the Chinese companies doing the same in a parallel geopolitical orbit. Insofar as AI is a sustaining innovation that benefits incumbents, the state is surely the biggest incumbent of all.
Finally, an under-appreciated point is Apple's App Tracking Transparency policy, which forbids third-party data sharing, naturally makes first-party data collection more valuable. So even if Meta or Google might suffer in the short-term, their positions are ultimately entrenched on a relative basis.
Strange and scary how fast the world develops new technology.
The SolarWinds incident, for example, used the identical attack and deployment strategy as the Rylatech hack in the series, from execution to even the parties involved. It's like some foreign state leaders saw those episodes and said "yep, that's a good idea, let's do that".
Around 2013 I came up with some hardware ideas about offline computing and even contemplated naming some versions after the characters in 'Person of Interest'.
I can really recommend this series, since it's a good story, has good actors and fits the zeitgeist very well.
edit: I also think it's time for me to get a malinois shepherd. ;)
https://slate.com/technology/2019/06/enemy-of-the-state-wide...
That is in addition to generating our own energy off grid (so no smart meter data to monitor), thanks to the low cost of solar panels as well.
Bye bye Big Brother.
Terence Eden is in the UK: https://shkspr.mobi/blog/2013/02/solar-update/
This says his house uses 13 kWh/day, and you can see from the graph, by dividing the monthly amount by 31 days, that the solar panels on the roof generate around 29 kWh/day during summer and 2.25 kWh/day in winter. They would need five or six roofs of solar panels to generate enough to be off-grid. And that's not practical or low cost.
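The back-of-envelope math, using the post's own numbers:

    usage = 13.0        # kWh/day, from the linked post
    winter_gen = 2.25   # kWh/day from one roof of panels in winter
    print(usage / winter_gen)  # ~5.8, hence "five or six roofs"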
Or use the internet for anything...
- they started spying on users' Gmail
- there was blowback, they reverted
- after some time they introduced "smart features", with ads again
Link: https://www.askvg.com/gmail-showing-ads-inside-email-message...
I do not even want to check if "smart features" are opt-in, or opt-out.
https://nickbostrom.com/papers/vulnerable.pdf
It's disturbing, but also hard (for me) to refute.
(That was one argument against Kurzweil’s vision. Another is that state regulation and licensing moves so slowly at each major technological change, that it would take us decades to get to the point he dreams of, not mere years. You aren’t going to see anything new rolled out in the healthcare sector without lots and lots of debating about it and drawing up paperwork first.)
https://www.dhs.gov/sites/default/files/2023-09/23_0913_mgmt...
Fortunately the DHS has put together an expert team of non-partisan, honest Americans to spearhead the effort to protect our democracy. Thank you James Clapper and John Brennan, for stepping up to the task.
https://www.dhs.gov/news/2023/09/19/secretary-mayorkas-annou...
And just in time for election season in the US AI is going to be employed to fight disinformation- for our protection of course. https://www.thedefensepost.com/2023/08/31/ussocom-ai-disinfo...
AI can be a deployed 'agent' that does all the collection and finally sends scrubbed info to its mothership.
Using common off the shelf, open source, heavily audited tools, it's trivial today, even for a non-technical 10 year old, to create a new identity and collaborate with anyone anywhere in the world. They can do research, get paid, make payments, and contribute to private communities in such a way that no existing surveillance infrastructure can positively link that identity to their government identity. Every day privacy tech is improving and adding new capabilities.
True.
> it's trivial today, even for a non-technical 10 year old
Not even close. It's difficult even for a technical 30 year old.
You're talking about acquiring cash that has passed through several people's hands without touching an ATM that recorded its serial numbers. Using it to acquire Bitcoin from a stranger. Making use of multiple VPN's, and making zero mistakes where any outgoing traffic from your computer can be used to identify you -- browser fingerprinting, software updates, analytics, MAC address. Which basically means a brand-new computer you've purchased in cash somewhere without cameras, that you use for nothing else -- or maybe you could get away with a VM, but are you really sure its networking isn't leaking anything about your actual hardware? Receiving Bitcoin, and then once again finding a stranger to convert that back into cash.
That is a lot of effort.
Sounds like a startup idea to me. When we're ready for the evil phase, let's classify everybody by their inputs to the system and then sell the results to the highest bidder.
I’m kidding, but the reality is such techniques will fool almost all stylometric analysis.
Also, most actual stylometric analysts work for spooks or are spooks.
I can't help but wonder if we live in the same universe. If anything, in my part of the world, I am seeing powerful surveillance tech going from the digital sphere and into the physical sphere, often on the legal/moral basis that one has no expectation of privacy in public spaces.
Would love for OP to elaborate and prove me wrong!
In fact, all evidence points to younger generations being less tech-savvy, because they don't have to troubleshoot like the older generations did. Everything works, and almost nothing requires any technical configuration.
I tried exactly this. Watched 4 talks from a seminar, got them transcribed, and used ChatGPT to summarize them.
It did 3 perfectly fine, and for the 4th it changed the speaker from a mild-mannered professor into a VC investing superstar, with enough successes under his belt not to care.
How do you verify your summary is correct? If your false positive rate is 25%-33%, that's a LOT of rework: 1 out of every 3 or 4.
The question, I think, is how to navigate this and what consequences will follow. We could use these capabilities to enslave, but we could also use them to free and empower.
Scams rely on scale, and on the ineffective scaling of social defenses, to achieve profit. Imagine if the first identification of a scam informed every potential mark as the scam began to be applied to them. Don't forget to concern yourself with false positives too, of course.
The injustice of being unable to take action in disputes for lack of evidence would evaporate. Massive privacy, consent, and security risks and issues result, so will we be ready to properly protect and honor people and their freedoms?
At the end of this path may lie more efficient markets; increased capital flows and volumes; and a more fair, just, equitable, and maximized world, more filled with joy, love, and happiness. There are other, worse options of course.
The flip side to this is that the government had power because these activities required enormous resources. Perhaps it will go the other direction: if there is less of a moat, other players can enter. E.g., all it takes to make a state is a bunch of cheap drones and the latest government bot according to your philosophy.
Maybe it means government will massively shrink in personnel? Maybe we can have a completely open-source AI government/legal system. Lawyers kind of suck ethically anyway, so maybe it would be better? With a low barrier to entry, we can rapidly prototype such governments and trial them on smaller populations like Iceland. Such utopias will be so good everyone will move there.
They still have to have physical prisons; if everyone is in prison this will be silly, but I suppose they can fine everyone, which is not so different from lowering wages, which they already do.
However, Americans expect that the law is enforced vigorously upon other people, especially people they hate. If AI enabled immediate immigration enforcement on undocumented migrants, large portions of the population would injure themselves running to the voting booth to have it added to the Constitution.
It's the whole expectation that for my group the law protects but does not bind, and for others it binds but does not protect.
>For millennia, conservatism had no name, because no other model of polity had ever been proposed. “The king can do no wrong.” In practice, this immunity was always extended to the king’s friends, however fungible a group they might have been. Today, we still have the king’s friends even where there is no king (dictator, etc.). Another way to look at this is that the king is a faction, rather than an individual.
What is the market for this short term?
I think this could greatly curtail government corruption and serve as a stepping stone to AI government. It's also a cool and disruptive startup idea.
What do other humans do to circumvent that? Yes, they found a way of altering their diction, in the case of London Cockney, by using rhymes [1].
[1] https://www.theguardian.com/education/2014/jun/09/guide-to-c...
If you need to fool your AI of choice, rhyme the concepts!
For a demo, ask your AI of choice about "Does basin of gravy likes satin and silk?" (decode yourself)
The article above is from 2014, and the slang was hardly recognized when I asked questions using Cockney parlance.
You are welcome. ;)
Every service has access to the IPs you've used to log on; most services require an email, a phone number, debit/credit cards, and/or similar personal info. Link that with government databases of addresses/real names/ISP customers, and you can basically get to most people's accounts on virtually any service they use.
We then also have things such as the Patriot Act in effect; the government could, if they wanted, run a system to do this automatically, where every message is scanned by an AI that catalogues them.
I have believed for some time now that we are extremely close to a complete dystopia.
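The linking step is a one-line join once the datasets exist (a sketch; both files and their columns are hypothetical):

    # Sketch: joining service accounts to ISP customer records by IP.
    import pandas as pd

    accounts = pd.read_csv("service_accounts.csv")  # handle, email, last_ip
    customers = pd.read_csv("isp_customers.csv")    # name, address, ip
    linked = accounts.merge(customers, left_on="last_ip", right_on="ip")
    print(linked[["handle", "name", "address"]].head())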
Russia could do surveillance, but was limited by manpower.
Now AI solves this: there can be an AI bot dedicated to each individual.
Wasn't there another article on HN just the other day saying that car makers, phones, and health monitors can all now aggregate data to know 'your mood' when in an accident? To know where you are going, how you are feeling?
This is the real danger with AI. Even current technology is good enough for this kind of surveillance.
People don't know what they're creating. Maybe it's time it bites them.
Heck, even if they did care, there's nothing they can realistically do about it. The genie's out of the bottle.
What's the best Docker image for that, something simple to configure?
You could implement punishment for that specific crime, at huge cost, but you can't expand that to all crimes. Well, I suppose you could try feudalism mark 2, where most of the population is deemed criminal and therefore spends their life "working off their debt to society," but then you have to find out the hard way why we stopped doing feudalism mark 1.
We are already in jail.
Certainly don't get many travel opportunities.
You say something wrong about a party, and suddenly you can't board a plane, take out a mortgage, enter some buildings, ...
Your credit score would look at how compliant you are with policies that can get increasingly nonsensical.
What's one more form of BS hallucination foisted upon the meat-based cassettes we exist as?
The government is a lot of things, and none of them are subtle.
Source: the ironically named PATRIOT Act and similar.
Spying isn't just for troublemakers either. It's probably worth the trouble to the vindictive ex-husband willing to install a hidden microphone in his ex-wife's house in order to have access to a written summary of any conversations related to him.
I mean, even if we pass laws to offer more protections, as computation gets cheaper it ought to become easier and easier for anyone to start a mass-spying operation, even by just buying a bunch of cheap sensors and doing all of the work on their personal computer.
A decent near-term goal might be figuring out what sorts of information we can't reasonably expect privacy on (because someone's going to get it anyway) and then ensuring that access to such data is generally available. Because if the privacy is going to be lost regardless, we may as well address the next concern: disparities in data access dividing society.
We do have to live in the nightmare world we're building (and as an industry, we have to live with ourselves for helping to build it), but we don't have to accept it at all. It's worth fighting all this tooth and nail.
The cynical response: you won't be able to do that, because buying that equipment will set off red flags. Only existing users -- corporations and governments -- will be allowed to play.
- The money trail: "Their true customers—their advertisers—will demand it."
- The current state of affairs: "Surveillance has become the business model of the internet..."
- The fact that not participating, or opting out, still yields informational value, perhaps even more so: "Find me all the pairs of phones that were moving toward each other, turned themselves off..."
This isn't a technological problem. Technology always precedes the morals and piggybacks on the fuzzy ideas that haven't yet developed into concrete, well-taught axioms. It is a problem about how our society approaches ideals. Ideals, not ideas. What do we value? What do we love?
If we love perceived security more than responsibility, we will give up freedoms. And gladly. If we love ourselves more than future generations, we will make short-sighted decisions and pat ourselves on the back for our efficiency in rewarding ourselves. If we love ourselves more than others, we won't much care about social concerns. We'll fail to notice anything that doesn't much move the needle on our own comfort.
It's more understandable to me than ever how recent human horrors - genocides, repressive regimes, all of it - came to be. It's because I'm a very selfish person and I am surrounded by selfish people. Mass spying is a symptom - not much of a cause - of the human condition.
Additionally, even without ISP logs, an AI could find my accounts online by comparing my writing style and the facts of my life that get mentioned in brief passing across all my comments. It’s probably a more unique fingerprint than a lot of people realize.
With an AI, someone would just have to ask: "What are the antisocial opinions of [first name last name]?" And it would be instant and effectively free, compared to the dozens of hours and high expense of doing it manually.
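To gesture at how cheap the stylometric half of this already is: character n-gram TF-IDF plus cosine similarity is a textbook authorship-attribution baseline. A toy sketch (the corpus and names are invented, and real attribution needs far more text):

    # Toy authorship attribution: match an anonymous comment to known
    # authors by character n-gram TF-IDF similarity. Corpus is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    known = {
        "alice": "Honestly, I reckon the whole scheme is daft. Honestly.",
        "bob":   "Per my earlier analysis, the data does not support this.",
    }
    anonymous = "I reckon this new policy is daft, honestly."

    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
    matrix = vec.fit_transform(list(known.values()) + [anonymous])

    # Compare the anonymous text (last row) against each known author.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    for author, score in zip(known, scores):
        print(author, round(score, 3))

An LLM doing this across every comment you've ever written, plus the biographical details mentioned in passing, is the instant, effectively free version.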
The author distinguishes between surveillance and spying primarily by noting that mass data collection has been happening for years, while actually doing something with that data has been more difficult. AI summarizes audio and text well, which will turn collection into actual analysis; that analysis is what the author calls spying.
Did you disagree?
I walked in, cops everywhere. Man in a suit waves an FBI badge at me and asks why I'm there. I explained the ongoing work and he said, "Not today" and forced me off the premises.
The next day I was called back by the client to "rebuild their network". When I got there, every single piece of hardware that contained anything remotely like storage had been disassembled and the drives imaged, then just left in pieces. lol
I spent that day rebuilding it all, and I did get the Novell server working again.
A week later, they were closed forever and I believe the owner and CFO got nailed for healthcare fraud.
I was asked to testify in a deposition. My stuff was pretty basic: mostly what I knew about how they used the tech, what I saw around there, and whether I saw any big red signs declaring FRAUD COMMITTED HERE!
What happened next?
The moral of the story is that you should never steal money from the U.S. government, because that is one thing they will not tolerate, and I do not know the limits of what they will do in order to catch you.
Also, the suspect was convicted (so they probably aren't a suspect anymore), and last I heard they were being flown to Washington, D.C. for sentencing. That person is probably in some kind of prison now, but I haven't been following the story very closely.
Computers create and organize large amounts of information. This is useful to large organizations and disempowering to the average person. Any technology with these traits is harmful to individuals.
???
Concluding remarks. As man succeeded in creating high mechanical precision from the chaotic natural environment, he will succeed in creating a superior artificial entity. This entity shall "spy on" (better described as "care for") every human being, maximizing our happiness.
No. The US has always had political problems with respect to surveillance, and many places have had much worse ones.
Now, all of us can anticipate a very different, all but inevitable, and very much worse political problem.
AI is a force multiplier and an accelerant.
As such it is a problem, an obvious one, and a very very big one. This is just one of many ways in which force multiplication and acceleration, so pursued, and so lauded, in so many domains, may work their magic on preexisting social and political evils. And give us brand new ones, per Larkin.
AI will be a useful and world-changing innovation, which is why FUD rag articles like this will become more prevalent until its total adoption, even by the article's writer themselves.
What can AI/ML do __today__?
We have lots of ways to track people around a building or city. The challenge is to do these tasks through multi-camera systems. This includes things like person tracking (a random ID, but consistent across cameras), face identification (a more specific representation that is independent of clothing, which is usually what identifies the former), gait tracking (how one walks), and device tracking (based on Bluetooth, WiFi, and cellular). There is a lot of mixed success with these tools, but I'll point out the part that should concern you: right now these are mostly ResNet-50 models, the datasets are small, and they are not using advanced training techniques. That is changing. There are legal issues and datasets are becoming proprietary, but the size and frequency of data gathering is growing.
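For a sense of how little code the core of such a system needs, here is a sketch of cross-camera matching by embedding similarity. A stock ImageNet ResNet-50 stands in for a fine-tuned re-identification model, and the filenames are placeholders:

    # Sketch: match person crops across cameras by embedding similarity.
    # A stock ImageNet ResNet-50 stands in for a real re-ID model.
    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.DEFAULT
    backbone = models.resnet50(weights=weights)
    backbone.fc = torch.nn.Identity()  # drop classifier, keep 2048-d features
    backbone.eval()
    preprocess = weights.transforms()

    @torch.no_grad()
    def embed(path):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        feat = backbone(img).squeeze(0)
        return feat / feat.norm()  # unit-normalize for cosine similarity

    # The same person seen by two cameras should score well above
    # a pair of different people.
    a = embed("camera1_person_crop.jpg")
    b = embed("camera2_person_crop.jpg")
    print("cosine similarity:", float(a @ b))

A production system swaps in a backbone trained with re-ID losses and a vector index instead of pairwise comparisons, but the skeleton is the same.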
I'm not going to talk about social media because the metadata problem is an already well discussed one and you all have already made your decisions and we've witnessed the results of those decisions. I'm also not going to talk about China, the most surveilled country in the world, the UK, or any of that for similar reasons. We'll keep talking in general, that is invariant to country.
What I will talk about is that modern ML has greatly accelerated the data-gathering sector. Your threat model has changed from governments rushing to gather all the data they can, to big companies joining the game, to now small mom-and-pop shops doing the same. I __really__ implore you all to look at what's in that dataset[0]. There are 5B items, and the tool retrieves them based on CLIP embeddings. You might think "oh yes, Google can already do this," but the difference is that you can't download Google. Google does not give you 16.5TB of CLIP-filtered image, text, & metadata. Or look into the RedPajama dataset[1], which has >30T tokens in 5TB of storage. With 32k tokens being about 50 pages, that's about 47 billion pages. That is a stack of paper roughly 5000km tall: more than ten times the altitude of the ISS, and bigger than the diameter of the moon. I know we all understand that there's big data collection, but do you honestly understand how big these numbers are? I wouldn't even claim to, because I cannot accurately conceptualize the size of the moon nor the distance to the ISS. They just roll into the "big" bin in my brain.
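The retrieval mechanism behind such tools is itself only a few lines: embed a text query with CLIP, then take nearest neighbors over precomputed image embeddings. A sketch using open_clip, with the billions of stored embeddings faked here by random vectors:

    # Sketch: CLIP text-to-image retrieval over precomputed embeddings.
    import numpy as np
    import torch
    import open_clip

    model, _, _ = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    # Stand-in for the image-embedding index (the real one holds billions).
    index = np.random.randn(10_000, 512).astype("float32")
    index /= np.linalg.norm(index, axis=1, keepdims=True)

    with torch.no_grad():
        q = model.encode_text(tokenizer(["a person at a protest"]))
        q = (q / q.norm(dim=-1, keepdim=True)).numpy()[0]

    # Nearest neighbors by cosine similarity = dot product of unit vectors.
    top = np.argsort(index @ q)[::-1][:5]
    print("closest image ids:", top)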
Today, these systems can track you with decent accuracy even if you use basic obfuscation techniques like glasses, hats, or even a surgical mask. Today we can track you not just by image but by how you walk, and can do this with moderate success even through walls (meaning no visible camera to warn you that you're being tracked). Today, these systems can de-anonymize you through the unique text patterns you use (see the Enron dataset, but at scale). Today, these machines can produce uncanny-valley replicas of your speech and text. Today we can make images of people that are convincingly real. Today, these tools aren't exclusive to governments or trillion-dollar corporations, but available to any person willing to spend a few thousand dollars on compute.
I don't want to paint a picture of doom and gloom. These tools are amazing and have the potential to do extraordinary good, at levels that would have been unimaginable only a few decades ago. Even many of the tools that can invade your privacy are beneficial in some ways; you just need to consider the context. You cannot build a post-scarcity society if you require humans to monitor all the stores.
But like Uncle Ben says, with great power comes great responsibility. A technology that has the capacity to do tremendous good also has the power to do tremendous horrors.
The choice is ours, and the latter prevails when we are not open. We must keep pushing for these tools to be used for good, because with them we can truly do amazing things. We do not need AGI to create a post-scarcity world, and I have no doubt that were this to become our primary goal, we could reach it within our lifetime without becoming a sci-fi dystopia, while also tackling existential issues such as climate. To poke the bear a little, I'd argue that if your country wants to show dominance and superiority on the global stage, it is done not through military power but through technology. You will win the culture war of all culture wars: whoever creates the post-scarcity world will be a country never forgotten by time. Lift a billion people out of poverty? Try lifting 8 billion not just out of poverty but into the lower middle class, where no child dreams of being hungry. That is something humans will never forget. So maybe this should be our cold war, not the one in the Pacific. If you're so great, truly, truly show me how superior your country/technology/people are. This is a battle that can be won by anyone at this point: not just China or the US, but any European power has a chance to win.
But it was and still is a nothingburger, and this will be the same, because it doesn't enable anything except "better search." We've had comparable abilities for a decade now. Yes, LLMs are better, but semantic search and NLP have been around a while, and the world didn't end.
All the examples of what an LLM could do are just queries against tracking databases. Uncovering organizational structure is just a social graph; correlating purchases is just querying purchase databases; listing license plates is just querying the camera systems. You don't need an LLM for any of this.
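To underline the point, the license-plate example is an ordinary join, sketched here against an invented schema with no LLM anywhere:

    # The license-plate example as a plain SQL query over hypothetical
    # ALPR and registration tables. No LLM involved.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE plate_reads (plate TEXT, camera_id TEXT, seen_at TEXT);
    CREATE TABLE registrations (plate TEXT, owner TEXT);
    INSERT INTO plate_reads VALUES ('ABC123', 'cam_42', '2024-01-05 21:14');
    INSERT INTO registrations VALUES ('ABC123', 'J. Doe');
    """)

    rows = db.execute("""
        SELECT r.owner, p.camera_id, p.seen_at
        FROM plate_reads p JOIN registrations r USING (plate)
        WHERE p.camera_id = 'cam_42'
    """).fetchall()
    print(rows)  # [('J. Doe', 'cam_42', '2024-01-05 21:14')]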
Also, check out geofence warrants. Essentially, the government can ask Google for the IPs of people who searched for particular terms within a geographic area.
Of course, don't commit crimes; but this behavior by the government raises the spectre of wrong search, wrong place, wrong time. This is one of the article's points: it causes people to self-censor and change their searches out of fear of their curiosity being misconstrued as criminal intent.
These aren't mass surveillance. The threat of mass search is government systems passively sifting through all information in existence looking for "criminal activity" and then throwing the book at you.
In both of these cases, the government is asking Google to run what amounts to a SQL query against their database, one that wouldn't be aided by an LLM or even the current crop of search engines.
The article is making the point that it's not currently feasible to spy on every person to monitor them for wrongdoing. It doesn't scale and it's not cost-effective. With AI that will change, because it can be automated. The AI can listen to voice, monitor video cameras, and read text to discern a level of intent.
Sure it is! That's the whole point of search being the previous big technical hurdle. YouTube monitors every single video posted, in real time, for copyright infringement. We've had the capability to do this kind of monitoring for huge swaths of crimes for a decade, and it hasn't turned into anything. We could, for example, catch every speeding driver across the country in real time, but we don't.
Mass is the opposite of targeted surveillance. If you need to be targeted and get a warrant to look at the data then it's not mass. And AI isn't going to change the system that prevents it right now which is the rules governing our law enforcement bodies.
Your two examples are flawed and don't address what the article is saying. The algorithm that checks for copyright violations is relatively simple and dumb. As for speed cameras: many countries do use them (e.g., Australia, the UK). The problem with speed cameras is that once you know where they are, you simply slow down when approaching.
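"Simple and dumb" here means perceptual fingerprinting. YouTube's actual Content ID internals aren't public, but a difference hash (dHash) illustrates the general class of technique:

    # dHash: a simple perceptual fingerprint. Frames whose hashes differ
    # by only a few bits are near-duplicates, so matching is a lookup,
    # not intelligence. Filenames are placeholders.
    from PIL import Image

    def dhash(path, size=8):
        # Shrink, grayscale, then compare each pixel to its right neighbor.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                bits = (bits << 1) | (px[row * (size + 1) + col]
                                      > px[row * (size + 1) + col + 1])
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # hamming(dhash("frame_a.png"), dhash("frame_b.png")) <= 4 or so
    # suggests the same underlying image.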
Again, mass vs. targeted surveillance is irrelevant now. You've already been surveilled. It's just a matter of getting access to the information.
Seriously, nothing new or shocking about this piece. Spying is spying. Surveillance is surveillance. If you've watched the news at all in the past two decades, you know this is happening.
Anyone who assumes that any new technology isn't going to be used to target the masses by increasingly massive and powerful authoritarian regimes is woefully naive.
Another post stating what we all already know isn't helping or fostering any meaningful conversation. It will just produce rehashes. Let me skip to the end for you:
There is nothing we can do about it. Nothing will change for the better.
Go make a coffee or tea