This take is as bad as it is banal. LLMs are glorified autocomplete? Today I used an LLM to help me debug an issue with a docker-compose file, figure out what was wrong with my database connection, extract structured data from tens of millions of tokens in a web scraper I'm building, and talk through how that scraper should be designed and how I should structure the data. How is this like autocomplete at all? I ask natural language questions on a broad array of topics and get back mostly correct answers, and that's just autocomplete?
I think a lot of people know a little bit about how LLMs are trained, and then decide that the little bit they know is all there is. Yes, next-token prediction is part of the training process; no, it's not the only part, and it doesn't capture what makes LLMs useful.
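To be concrete about what that one part is, here's a minimal numpy sketch of the next-token objective used in pretraining (hypothetical shapes, not any particular framework's API); stages like instruction tuning and RLHF then build on the same model:

    import numpy as np

    def next_token_loss(logits, targets):
        # Pretraining objective, sketched: cross-entropy of predicting
        # token t+1 given tokens 1..t.
        #   logits:  (seq_len, vocab_size) model outputs, one row per position
        #   targets: (seq_len,) the input sequence shifted left by one token
        shifted = logits - logits.max(axis=-1, keepdims=True)       # numerical stability
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
        return -log_probs[np.arange(len(targets)), targets].mean()  # average over positions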
This type of criticism - I noticed OP is an author - is like summing up OP's novels as glorified mitosis. Yes, she started as a single cell that divided over and over, and to some extent that really is responsible for what OP is - but it's a bad description that fails to capture what's important.
porridgeraisin 30 days ago [-]
Re: glorified autocomplete
People who have zero understanding use this phrase, and their belittling intent is clear.
People who have a vague understanding think there is something else truly magical going on.
People who have a bit more understanding, such as Yann LeCun, say "glorified autocomplete" (or similar) and mean that no matter what behaviour you "bootstrap" out of the autocomplete process (conversation, search, etc.), you're, in the end, limited by the mechanics of next-word prediction.
Sort of like that popular normal distribution meme.
Whether this is sufficient for AGI or not is still a matter of definition and opinion, of course.
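For anyone who hasn't seen those mechanics spelled out, roughly this loop is all there is at inference time (a hypothetical sketch; `model` stands in for any callable returning per-position scores over the vocabulary):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())                   # numerical stability
        return e / e.sum()

    def generate(model, tokens, max_new_tokens=50):
        # Autoregressive decoding: chat, tool use, search -- every
        # behaviour is produced one next-token choice at a time.
        for _ in range(max_new_tokens):
            logits = model(tokens)                # (len(tokens), vocab) scores
            probs = softmax(logits[-1])           # distribution over the next token
            tokens.append(int(np.argmax(probs)))  # greedy pick; sampling also works
        return tokens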
consumer451 30 days ago [-]
> Re: glorified autocomplete
Linus Torvalds: ~"just predicting the next token is not the insult that people think it is, that's mostly what we are all doing."
markisus 30 days ago [-]
I like the glorified mitosis analogy. And psychology is glorified particle physics. It's kind of true, but you just know there is some other level of description that would explain it better.
lowlevel 30 days ago [-]
It’s actually Auto Incorrect.
rsynnott 29 days ago [-]
Yeah, not sure why anyone would expect Leckie, of all people, to be particularly thrilled with LLMs.