This is kind of weird and reductive, comparing specialist models to generalist ones. How good is GPT-3's game of Go?
The post reads as kind of… obvious: old news padding out a recruiting pitch? We've known for years that OpenAI hires the kind of specialist workers this post mentions.
It is even weirder when you remember that Google had already released Meena[1], which was trained on natural language...
[1] And BERT before it, but it is less like GPT.
I obviously think that we still need subject-matter experts. This article argues correctly that the "data generation process" (or as I call it, experimentation and sampling) requires "deep expertise" to guide it properly past current "bottlenecks".
I have often phrased this to colleagues this way: we are reaching a point where you cannot just throw more data at a problem (especially arbitrary data). We have to think about what data we intentionally use to train models. With the right sampling of information, we may be able to build better models faster and more cheaply. But that requires knowing what data to include and how to construct a representative sample with enough "resolution" to resolve all of the nuances the problem calls for. Which means, again, that subject-matter expertise does matter.
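To make the "intentional sampling" point concrete, here is a toy sketch of stratified sampling. The function name, categories, and counts are mine, purely illustrative, not anything from the article: instead of sampling data uniformly (which drowns rare cases), you cap each stratum so rare categories keep enough representation.

```python
import random
from collections import Counter, defaultdict

def stratified_sample(items, key, per_stratum, seed=0):
    """Take up to per_stratum items from each stratum, so rare
    categories keep enough 'resolution' in the resulting set."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[key(item)].append(item)
    sample = []
    for group in strata.values():
        k = min(per_stratum, len(group))  # small strata are kept whole
        sample.extend(rng.sample(group, k))
    return sample

# 1000 common examples vs. 10 rare ones: uniform sampling of 60 items
# would expect fewer than one rare example; stratified keeps all 10.
data = [("common", i) for i in range(1000)] + [("rare", i) for i in range(10)]
sample = stratified_sample(data, key=lambda x: x[0], per_stratum=50)
print(Counter(label for label, _ in sample))  # Counter({'common': 50, 'rare': 10})
```

Deciding which strata matter and how much resolution each needs is exactly where the subject-matter expert comes in.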
It offered a fascinating look into the future, and one insight in particular stuck with me.
It basically said that in the future, answers would be cheap and plentiful, and questions would be valuable.
With AI, I think this will become more true every day.
Maybe AI can answer anything, but won't we still need people to ask the right questions?
That said, I think there are ultimately some questions that have no answers, regardless of how we try to answer them. For chaotic systems, even tiny uncertainties in the inputs grow into large differences in the outputs. In that sense, we can always ask questions, but sometimes our questions can never be precise enough to get meaningful answers. That statement is hard to wrap your head around without taking a course in chaos theory.
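A minimal numerical illustration of that sensitivity (my own example, not from the thread), using the logistic map in its chaotic regime: two starting points that differ by one part in ten billion end up on completely different trajectories within a few dozen steps.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n); chaotic at r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # inputs differ by 1e-10

# Early on the trajectories are indistinguishable...
print(abs(a[1] - b[1]))
# ...but the gap roughly doubles each step, so by step 50
# the two trajectories are completely decorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

No matter how precisely you pose the question "where is the system at step 50?", any uncertainty in the input makes the answer meaningless past the divergence horizon.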
I'm fine with a bit of speculative fiction, but I prefer it to be less dystopian than "The Inevitable". Got any good solarpunk recommendations?
The funny part is that it argues in favour of scientific expertise, but at the end it says they actually want to hire engineers instead.
I suppose scientists will tell you that has always been par for the course...
Hopefully nothing endangers people...
Very weird reasoning. Without AlphaGo and AlphaZero, there's probably no GPT? Each was a stepping stone, wasn't it?
Right but wrong. AlphaGo and AlphaZero are built on very different techniques from GPT-style LLMs. Google created the Transformer, which leads much more directly to GPTs; RLHF is the other piece, and it was largely developed inside OpenAI by Paul Christiano.
As did Google. They had their own language models before and at the same time, but chose architectures for them that made them less suitable to what the market actually wanted. Contrary to the claim above, OpenAI seemingly "won" because of GPT's design, not so much because of the data (although the data was also necessary).
Anyway, good times for society.
Is that actually true? Is the mini-industry of people looking at pictures and classifying them dead? Does Mechanical Turk still get much use?
Or we could just, you know, not do that at all.