Regardless of whether or not it works perfectly, surely we can all relate to the childhood desire to 'speak' to animals at one point or another?
You can call it a waste of resources or someone's desperate attempt at keeping their job if you want, but these are marine biologists. I imagine cross-species communication would be a major achievement, and it seems like a worthwhile endeavor to me.
If we want to know something about what's going on in the ocean, or high on a mountain or in the sky or whatever - what if we can just ask some animals about it? What about for things that animals can naturally perceive that humans have trouble with - certain wavelengths of light or magnetic fields for example? How about being able to recruit animals to do specific tasks that they are better suited for? Seems like a win for us, and maybe a win for them as well.
Not sure what else, but history suggests that the more people have been able to communicate with each other, the better the outcomes. I assume this holds true more broadly as well.
Pelagic gillnets are probably the gear that still has the most issues with dolphin bycatch, and acoustic pingers that play a loud ultrasonic tone when they detect an echolocation click are already used to reduce interactions in some fisheries.
Humanity’s relationship with animals is so schizophrenic. On the one hand, let’s try to learn how to talk to cute dolphins and chat with them about what it’s like to swim!, and on the other, well yeah, that steak on my table may have once led a subjective experience before it was slaughtered, and mass-farming it wrecks the ecosystem I depend on to live, but gosh it’s so tasty, I can’t give that up!
At the same time, I want to be as humane as practical; I don’t want to cause needless suffering to any creature. If I kill a bug, I don’t want it to suffer. Same with food animals.
The more like me an animal is, the less I want to eat it.
There are a lot of humans. Any action to forcefully reduce the number of humans or to forcefully reduce birth rates is almost certainly way more morally abhorrent to me than doing what is necessary to feed those humans.
This is akin to saying "humans are violent, so I am unapologetic about obeying biological imperatives to commit violence".
So just be honest: you WANT to eat meat because you like it, consequences be damned.
And of course if you truly want to feed as many humans as possible the only solution is vegetarianism or even veganism. Meat is just way too wasteful to be a decent solution.
This myth needs to die. Two thirds of all farmland on this planet is pasture [1] that isn’t fertile enough to grow food for humans except by raising animals on it. If we were to switch to a plant based diet, the vast majority of our farmland as a civilization becomes unusable. Most of the world uses animals to generate calories from unproductive land, first via dairy and then slaughtering the animals for food.
Not to mention, animals have been crucial sources of sustainable fertilizer for many thousands of years, without which agriculture would never have been as productive.
That situation always auto-corrects as resource availability shifts.
What does happen is humans find things like mice / locust / kangaroo plagues inconvenient, so we decide to intervene.
It's not like lions get tired of all those pesky gazelle getting up in their grillz and find the need to get about in helicopters thinning the herd.
But white vegans aren’t prepared to actually reckon with the logical conclusion of their ideas. Go read David Benatar (he’s a vegan who’s actually consistent, btw).
> I am unapologetic about obeying biological imperatives to eat other animals.
Deep history exists in our "biological" context and is a critical reality, but invoking some "biological imperative" as a reason to act on it strikes me as a strange place to start.
Context: am biochemist, and I think about biology and biochemistry as a very integrated part of my worldview. But I don't harken to any biological imperative for my actions and choices. It explains them, it doesn't command them. Distorting our biology and psychology is what makes us human and agentic imho
Life is a game of shifting carbon. To stay alive, you need to kill. But you can try to limit that to the least amount of killing required, and to killing those life forms without sentience as we understand it. This is the foundation of any ethical reasoning.
Having said all that, I also reject the vertical ordering of life on the tree of evolution. Plants are just very different from us, not necessarily higher or lower. Considering we have to make a choice as to what we are ready to sacrifice to survive, we can still choose those life forms that likely are not capable of suffering like we do, before turning to those more similar to us.
To do this likely would require large-scale war.
- No, f... the sharks!
Side bonus, we also don’t kill the highly sentient and highly intelligent creatures you’re concerned about.
Those people can all just starve, and you're fine with that?
1. https://www.worldwildlife.org/stories/will-there-be-enough-f...
However, if universal communication were to be achieved, don't you think the animals are going to be pretty pissed to discover what we have done with their kingdom?
"Hi Mr Dolphin, how's the sea today?" "Wet and a plastic bottle cap got lodged in my blowhole last night..."
So, the story involves an animal DNA archivist interacting with what's presented as the last living humpback whale, focusing on its isolation etc. It turns out the research lab's goal is to trick the whale by faking mating signals, aiming to get it to reveal information about whale history and culture. It's essentially data mining the animal.
What's going on down in the sea over there? Would you mind pulling that thing from here to there?
Or whatever - I don't know what we'll figure out to do, but certainly something.
As far as them being mad at us, I doubt they will be, but I'd be interested to get their perspective - if they have one.
I do not believe we can expect anything resembling a human level of intelligence to be discovered.
Certainly will be interesting to see how much we can bribe dolphins to do once we have faster communication methods.
These problems are generally industrial in nature, so it's usually quite knowable where a large source of pollution comes from.
There just isn't a political will to actually enforce laws.
The cynicism on display here is little more than virtue signalling and/or upvote farming.
Sad to see such thoughtless behaviour has reached even this bastion of reason.
Plenty of people genuinely dislike the concentration of economic and computing power that big tech represents. Them expressing this is not "virtue signaling", it is an authentic moral position they hold.
Plenty of people genuinely dislike the disregard for labor and intellectual property rights that anything Gen AI represents. Again, an authentic moral position.
"Virtue signaling" is for example, when a corporate entity doesn't authentically support diversity through any kind of consequential action but does make sure to sponsor the local pride event (in exchange for their logo being everywhere) and swaps their social media logos to rainbow versions.
The problem with virtue signaling is that it’s parroting virtue for social praise. This parrot-like, repeater-node behavior often attempts to move the conversation to virtue talking points and away from the specific topic.
To be clear, this is just about online virtue signaling. It’s just as silly in the physical world - certain attire, gestures, tribal obedience, etc.
Moreover, if all statements made in such a context needed to be acted out in some way, that would negate the whole purpose of the abstract space.
The purpose of my rhetoric in this thread has been to illustrate the issues with your definition rather than to say something about myself.
The harder question is that of risk management: weighing the computing power we like on the one hand against its tendency to enable both megalomaniacs at the high end and the unspeakable depravity of child pornography at the low.
Mindlessly parroting such talking points where they're not applicable is also a form of virtue-signalling.
And the comments in this thread are predominantly such virtue signalling nonsense.
Calling something "trendy" is a great way to try to dismiss it without actually providing any counterargument. The deep suspicion of anything Google does is extremely well justified IMHO.
If a tobacco company invested in lung cancer research that resulted in some treatment breakthroughs, that research should be celebrated, while their main business should continue to be condemned.
Please give me one example in the last decade where Meta or Google research has led to actual products or open-sourced technologies, and not just expensive proprietary experiments shelved after billions were spent on them.
I'll wait.
Sometimes it's good just to know things. If we needed to find a practical justification for everything before we started exploring it, we'd still be animals.
To me this task looks less like next token prediction language modeling and more like translating a single “word” at a time into English. It’s a pretty tractable problem. The harder parts probably come from all the messiness of hearing and playing sounds underwater.
I would imagine adapting to new vocab would be pretty clunky in an LLM based system. It would be interesting if it were able to add new words in real time.
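One common heuristic for growing a vocabulary after training (a hedged sketch, not anything the article describes; all names here are illustrative): append a row to the model's token-embedding matrix, initialized from the mean of the existing rows so the new "word" starts as a neutral, typical vector and gets refined by further training.

```python
import numpy as np

def add_token(embeddings: np.ndarray, init: str = "mean") -> np.ndarray:
    """Append one new token row to an embedding matrix.

    Mean-initialization is a common trick: the new token begins as an
    'average' vector rather than random noise, which tends to be a
    gentler starting point for fine-tuning.
    """
    if init == "mean":
        new_row = embeddings.mean(axis=0, keepdims=True)
    else:  # small random init as an alternative
        new_row = np.random.default_rng(0).normal(
            scale=0.02, size=(1, embeddings.shape[1]))
    return np.vstack([embeddings, new_row])

# Toy vocabulary: 5 "words" in a 4-dimensional embedding space.
emb = np.arange(20, dtype=float).reshape(5, 4)
emb2 = add_token(emb)
print(emb2.shape)  # (6, 4)
```

The clunky part isn't this resize itself but everything downstream: the model has never seen the new token in context, so it needs fresh training data before the embedding means anything.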
https://www.nytimes.com/2017/12/08/science/dolphins-machine-...
The difference between recognizing someone from hearing them, and actually talking to them!
If it can't do the most basic stuff, please explain to me how in the fuck it is going to understand dolphin language, and why we should believe its results anyway?
It's rather unsound reasoning, but you certainly can.
I personally was happy to see this project get built. The dolphin researchers have been doing great science for years, and from the computational/mathematics side it was quite neat to see how that was combined with the Gemma models.
There was a NASA-funded attempt to communicate with Dolphins. This eccentric scientist created a house that was half water (a series of connected pools) and half dry spaces. A woman named Margaret Howe Lovatt lived full-time with the Dolphins, attempting to learn a shared language between them.
Things went completely off the rails in many, many ways. The lead scientist became obsessed with LSD and built an isolation chamber above the house. This was like the sensory deprivation tanks you get now (often called float tanks). He would take LSD and place himself in the tank and believed he was psychically communicating with the Dolphins.
1. https://www.theguardian.com/environment/2014/jun/08/the-dolp...
She also had sex with a male dolphin called Peter.
>He would take LSD and place himself in the tank and believed he was psychically communicating with the Dolphins.
He eventually came to believe he was communicating with a cosmic entity called ECCO (Earth Coincidence Control Office). The story of the Sega game "Ecco the Dolphin" [1] is a tongue-in-cheek reference to this. I recommend watching the Atrocity Guide episode on John C. Lilly and his dolphin "science" [2]. It's on par with The Men Who Stare at Goats (the non-fiction book [3], not the movie).
He has a website that looks like it's been untouched since his death, 2001: http://www.johnclilly.com/
[1] https://en.wikipedia.org/wiki/Ecco_the_Dolphin
[2] https://www.youtube.com/watch?v=UziFw-jQSks
[3] https://en.wikipedia.org/wiki/The_Men_Who_Stare_at_Goats
Paraphrasing Carl Sagan: "You don't go to Japan, kidnap a Japanese man, start jking him off, give him fing acid, and then ask him to learn English!"
Imagine having to explain dragnet surveillance every time someone finds out you know how to code.
Imagine having to explain the Exxon Valdez every time someone asks you about your car.
Roll out the reparations!
LLMs are multi-lingual without really trying, assuming the languages in question are sufficiently well-represented in their training corpus.
I presume their ability to translate comes from the fact that there are lots of human-translated passages in their corpus; the same work in multiple languages, which lets them figure out the necessary mappings between semantic points (words.)
But I wonder about the translation capability of a model trained on multiple languages but with completely disjoint documents (no documents that were translations of another, no dictionaries, etc).
Could the emerging latent "concept space" of two completely different human languages be similar enough that the model could translate well, even without ever seeing examples of how a multilingual human would do a translation?
I don't have a strong intuition here but it seems plausible. And if so, that's remarkable because that's basically a science-fiction babelfish or universal translator.
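The idea of aligning two independently learned "concept spaces" has actually been explored for human languages (unsupervised word-embedding alignment). A minimal sketch, under the assumption that you already have monolingual embeddings for each language plus a small seed dictionary: solve an orthogonal Procrustes problem to rotate one space onto the other. Fully unsupervised variants bootstrap the seed pairs (e.g. adversarially) instead of assuming them; the toy vectors below are made up.

```python
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal Procrustes: find orthogonal W minimizing ||X @ W - Y||_F.

    Closed form: if X^T Y = U S V^T (SVD), then W = U @ V^T.
    X holds seed-word embeddings in language A, Y their counterparts in B.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy data: "language B" embeddings are an exact rotation of
# "language A" embeddings, so alignment should recover the rotation.
rng = np.random.default_rng(42)
A = rng.normal(size=(10, 4))                    # 10 words, 4-dim embeddings
true_rot = np.linalg.qr(rng.normal(size=(4, 4)))[0]
B = A @ true_rot

W = procrustes_align(A, B)
print(np.allclose(A @ W, B))  # True: the rotation is recovered
```

Real embedding spaces are only approximately isometric, so in practice the rotation gives nearest-neighbor translation candidates rather than exact matches.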
In the case of non-human communication, I know there has been some fairly well-motivated theorizing about the semantics of individual whale vocalizations. You could imagine a first pass at something like this if the meaning of (say) a couple dozen vocalizations could be characterized with a reasonable degree of confidence.
Super interesting domain that's ripe for some fresh perspectives imo. Feels like at this stage, all people can really do is throw stuff at the wall. The interesting part will begin when someone can get something to stick!
> that's basically a science-fiction babelfish or universal translator
Ten years ago I would have laughed at this notion, but today it doesn't feel that crazy.
I'd conjecture that over the next ten years, this general line of research will yield some non-obvious insights into the structure of non-human communication systems.
Increasingly feels like the sci-fi era has begun -- what a time to be alive.
Yes. I remember reading that the EU parliamentary proceedings in particular are used to train machine translation models. Unfortunately, I can't remember where I read that. I did find the dataset: https://paperswithcode.com/dataset/europarl
Languages encode similar human experiences, so their conceptual spaces probably have natural alignments even without translation examples. Words for common objects or emotions might cluster similarly.
But without seeing actual translations, a model would miss nuances, idioms, and how languages carve up meaning differently. It might grasp that "dog" and "perro" relate to similar concepts without knowing they're direct translations.
And it gets even more complex because the connotations of "dog" in the USA in 2025 are unquestionably different from "dog" in England in 1599. I can only assume these distinctions also hold across languages. They're not a direct translation.
Let alone extreme cultural specificities... To follow the same example, how would one define "doge" now?
>By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort.
But this doesn't really tell me anything. What does it mean to "help researchers uncover" this stuff? What is the model actually doing?
The article reads like the press releases you see from academic departments, where an earth shattering breakthrough is juuuuust around the corner. In every single department, of every single university.
It's more PR fluff than substance.
The bad outcome is the "AI" will translate our hellos as an insult, the dolphins will drop the masquerade, reveal themselves as our superiors and pound us into dust once and forever.
Picture the last surviving human surrounded by dolphins floating in the air with frickin laser beams coming out of their heads... all angrily asking "why did you say that about our mother?".
And in the background, ChatGPT is saying "I apologize if my previous response was not helpful".
But, since context is so important to communication, I think this would be easier to accomplish with carefully built experiments with captive dolphin populations first. Beginning with wild dolphins is like dropping a guy from New York City into rural Mongolia and hoping he'll learn the language.
To understand their language we need shared experiences, shared emotions, common internal worlds. Observation of dolphin-dolphin interaction would help but to a limited degree.
It would help if the dolphins are also interested in teaching us. Dolphins, or we, could say to the other '... that is how we pronounce sea-cucumber'. Shared nouns would be the easiest.
The next level, a far harder level, would be to reach the stage where we can say 'the emotion that you are feeling now, that we call "anger"'.
We will not quite have the right word for "anxiety that I feel when my baby's blood flow doesn't sound right on the Doppler".
Teaching or learning 'ennui' and 'schadenfreude' would be a whole lot harder.
This raises a question: can one fully feel and understand an emotion we do not have a word for? Perhaps Wittgenstein has an answer.
Postscript: I seem to have triggered quite a few of you and that has me surprised. I thought this would be neither controversial nor unpopular. It's ironic in a way. If we can't understand each other, understanding dolphin "speech" would be a tough hill to climb.
For all the words that they don't have in their language, we/they can invent them. Just like we do all the time: artificial intelligence, social network, skyscraper, surfboard, tuxedo, black hole, whatever...
It might also be possible that dolphins' language uses the same patterns as our language(s) and that an LLM that knows both can manage to translate between the two.
I suggest a bit more optimistic look on the world, especially on something that's pretty-much impossible to have any negative consequences for humanity.
If you had read this part --
"But that alone would tell us almost nothing of what dolphin dialogue means.
To understand their language we need shared experiences, shared emotions, common internal worlds. Observation of dolphin-dolphin interaction would help but to a limited degree."
it ought to have been clear that what I am arguing is that a corpus of dolphin communication fed to an LLM alone will not suffice. A lot of investment has to be made in this part -- to understand their language we need shared experiences, shared emotions, common internal worlds.
I am sure both you and I would be very happy the day we can have some conversation with dolphins.
There are tons of shared experiences and shared emotions. It's not like there's some hidden organism that we discovered are making noise from within the dark matter. These are animals in the oceans. Plenty of shared experiences and emotions. Dolphins have feelings. Anyway... let's agree to disagree. I fully support this project and am optimistic about its outcomes.
Not at all.
I believe just throwing a corpus of dolphin-dolphin vocalizations at an LLM will fall very short.
I quote myself again -- 'But that alone would tell us almost nothing of what dolphin dialogue means".
Note the emphasis on the word alone.
What needs to happen is to build shared experiences, perhaps with a pod and incorporate that into the ML training process. If this succeeds this exercise of building shared experience will do heavier lifting than the LLM.
Had you spent less effort in coming up with insults and used the leftover processing bandwidth to understand my position it would have made for a nicer exchange. For restoring civility to our conversation I indeed do not hold high hopes.
I suspect the experience of being a dolphin is stranger and more alien than we will ever know. This is a creature that employs its sense of sonar as part of how it understands the world, and has evolved with that sense. It will have concepts related to sonar and echolocation that we cannot grasp. We might be able to map a clumsy understanding of them, e.g. a dolphin cannot smell, it might be able to understand "my nose can taste the air", but is that the same? At least in humans with sensory deficiencies, there are parts of the brain that have evolved alongside the same senses that an unimpaired person has.
Maybe we could finagle an interspecies pidgin, but I wouldn't be surprised if we just fool ourselves for a while before we realize that dolphin language is just different. Even the word language brings along a set of rules and concepts that are almost certainly uniquely human.
Even a limited success would gladden my heart.
Why? With modern AI there exists unsupervised learning for translation, where you don't have to explicitly make translation pairs between the two languages. It seems possible to eventually create a way to translate without having to have a teaching process for individual words like you describe.
We do share (presumably) experiences of hunger, pain, and happiness, the perception of gradations of light and of shape/form within them, and some kind of dimensionally bound spatial medium they exist in as individuals and move through. Of course they might not conceive of these as "dimensions" or "space", but they would surely have analogs for directional words. Given that they aren't constrained to live on top of a 2D surface, though, these might not be "up", "down", "left", "right", but something in the vein of "lightward" or "darkward" as the two main directions, plus some complicated 3D rotational coordinate system for modeling direction. Who knows, maybe they even use quaternions!
For the subset of shared experiences and emotions this should be possible, not only that, I feel that we must try (as in, it's a moral/ ethical obligation).
Training an ML on dialogues alone will not be enough. One would need to spend a lot of time to build up a wealth of shared experiences, so that one can learn the mapping/correspondence.
Without grounding in some form of experience one can learn grammar and syntax but not understanding. "Chrome red" is a whole lot easier to teach than say the concept of "jealousy" when that's not part of a shared world of experience.
It's possible to learn a dictionary without understanding any of what those words mean. Dictionary just gives relations among the dictionary words themselves. That's it.
It takes a sensory or emotional experience to ground those words for learning.
Nouns are easy because you can point and teach: there is a correspondence between the word 'apple' and the physical object that you are experiencing now. Abstract concepts and emotions are much harder. There the need for shared experience is much stronger.
There's quite a bit of recorded knowledge about these things, e.g. the experiences of Helen Keller. There's a story of a deaf man who could use sign language, but had an overwhelming and tearful experience in his thirties when it finally clicked that the sign for 'door' has a correspondence with the door that his teacher was pointing at. Until that point, signing was just some meaningless ritualistic ceremony that needed to be mastered for social acceptance.
> Even with emotions different languages independently came up with words for them and we can still translate between those languages.
Of course. That's a no-brainer that different human languages have come up with names for experiences they share.
The hard part is learning the correspondence between say two nouns in different languages that mean the same thing.
It's perfectly possible for an unsupervised ML to use the French word 'rouge' in a French sentence, but the notion that 'rouge' corresponds to 'red' in English has to come from some shared, grounded experience/emotion.
The French word-to-word relationship graph has to get connected to the English word-to-word relationship graph.
BTW, for people born deaf and blind it's an enormous challenge just to get to the point where the person understands that things have names. For Helen Keller, for example, it was a very non-trivial event when it finally clicked that the wet sensation she was feeling had a correspondence with what her teacher was writing on her arm. They were lucky that wetness was an experience common between her and her teacher, lucky that Helen Keller could experience wetness. Someone or something has to play the same role for dolphins and us. Just a corpus will not suffice.
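The point about word-word relationship graphs needing a grounded anchor can be sketched concretely. In the toy example below (all words and graphs illustrative, not real data), two languages have structurally identical relation graphs; purely structural matching cannot distinguish the correct mapping from a swapped one, but a single grounded seed pair breaks the symmetry.

```python
# Toy co-occurrence graphs: each word maps to its set of related words.
en = {"red": {"apple"}, "blue": {"sky"}, "apple": {"red"}, "sky": {"blue"}}
fr = {"rouge": {"pomme"}, "bleu": {"ciel"}, "pomme": {"rouge"}, "ciel": {"bleu"}}

def consistent(mapping, g1, g2):
    """True if the mapping preserves every relation of g1 inside g2."""
    return all(
        {mapping[n] for n in nbrs} == g2[mapping[w]]
        for w, nbrs in g1.items()
    )

m1 = {"red": "rouge", "apple": "pomme", "blue": "bleu", "sky": "ciel"}
m2 = {"red": "bleu", "apple": "ciel", "blue": "rouge", "sky": "pomme"}  # swapped!

# Structure alone cannot tell these apart: both preserve all relations.
print(consistent(m1, en, fr), consistent(m2, en, fr))  # True True

# One grounded seed pair ("red" <-> "rouge", learned from shared
# experience) breaks the symmetry: only m1 respects it.
print(m1["red"] == "rouge", m2["red"] == "rouge")  # True False
```

Real embedding spaces are less symmetric than this toy graph, which is why unsupervised alignment sometimes works anyway, but the degenerate case shows why grounding helps.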
> WDP is beginning to deploy DolphinGemma this field season with immediate potential benefits. By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort. Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.
And then the world will suddenly understand...
Dolphins are cool animals. Google AI decodes dolphins. Google AI is cool.
Separately, could invoking it anytime someone appears excited be described as distrustful of human sincerity or integrity?
After working through these exercises, my answers are no/yes, which leaves me having to agree it's clearly cynical (because "define:cynical" returns "distrustful of human sincerity or integrity").