Any thoughts on the paragraph following your excerpt:
The most persuasive way you can demonstrate the reality of AI, though, is to describe how it is already being used today. Not in speculative sci-fi scenarios, but in everyday offices and laboratories and schoolrooms. And not in the ways that you already know — cheating on homework, drawing bad art, polluting the web — but in ones that feel surprising and new.
With that in mind, here are some things that AI has done in 2024.
Mostly hyping up very simple things?

LLMs don’t add anything over actively scanning for a handful of basic rules and scanning links. Flagging anything that references a bank but links to a domain outside a whitelist of legitimate bank domains for a given country would likely be more effective.
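For concreteness, the rule-based alternative being described is roughly this much code (a minimal sketch; the keyword list and domain whitelist are invented for illustration):

```python
import re
from urllib.parse import urlparse

# Invented whitelist of legitimate bank domains for one country (illustration only).
LEGIT_BANK_DOMAINS = {"examplebank.com", "firstnational.com"}
BANK_KEYWORDS = re.compile(r"\b(bank|banking|account suspended)\b", re.IGNORECASE)

def linked_domains(text: str) -> set[str]:
    """Collect a crude registrable domain (last two labels) for every URL."""
    hosts = set()
    for url in re.findall(r"https?://\S+", text):
        host = urlparse(url).hostname or ""
        hosts.add(".".join(host.split(".")[-2:]))
    return hosts

def looks_like_bank_phish(message: str) -> bool:
    """Flag messages that talk about banking but link outside the whitelist."""
    if not BANK_KEYWORDS.search(message):
        return False
    domains = linked_domains(message)
    return bool(domains) and not domains <= LEGIT_BANK_DOMAINS

print(looks_like_bank_phish(
    "Your bank account was suspended, verify at http://examplebank.secure-login.ru/x"
))  # True: mentions a bank, links to a non-whitelisted domain
```

Real mail filters layer many such rules, but that is the point being made: no model weights required.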
The language stuff is the only part they’re actually good at.
Chatbots are genuine dogshit, PDF to podcast is genuine dogshit, poetry is genuine dogshit.
Respectfully, none of the aforementioned examples are simple, or else humans wouldn’t have needed to leverage AI to make such substantial progress in less than 2 years.
They are simple, but they are not easy. Sorting M&Ms according to colour is also a simple task for any human with normal colour vision, but doing it with an Olympic-sized swimming pool full of M&Ms is not easy.
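To put the simple-vs-easy distinction in concrete terms: the per-item decision rule is a one-liner, and everything hard about the pool-sized version is throughput (a toy sketch with made-up colours):

```python
from collections import defaultdict

def sort_mms(mms):
    """Bin M&Ms by colour; the per-item decision is a one-line lookup."""
    bins = defaultdict(list)
    for mm in mms:
        bins[mm].append(mm)  # the entire per-item "algorithm"
    return bins

# The hard part is pure scale, not the rule: an Olympic pool holds on the
# order of a billion M&Ms, so at one per second a lone sorter needs ~30 years.
pool_sample = ["red", "blue", "green"] * 3  # stand-in for the actual pool
print({colour: len(batch) for colour, batch in sort_mms(pool_sample).items()})
# {'red': 3, 'blue': 3, 'green': 3}
```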
Computers are very good at examining data for patterns, and doing so in exhaustive detail. LLMs can detect patterns of types not visible to previous algorithms (and sometimes screw up royally and detect patterns that aren’t there, or that we want to get rid of even if they exist). That doesn’t make LLMs intelligent, it just makes them good tools for certain purposes. Nearly all of your examples are just applying a pattern that the algorithm has discerned—in bank records, in natural language, in sound samples, or whatever.
As for people being fooled by chatbots, that’s been happening for more than fifty years. The 'bot can be exceedingly primitive, and some people will still believe it’s a person because they want to believe. The fewer obvious mistakes the 'bot makes, the more lonely and vulnerable people will be willing to suspend their disbelief.
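For reference, the ELIZA-class bots of the 1960s were barely more than pattern-matching templates like these (illustrative rules, not Weizenbaum’s actual script):

```python
import re

# A few ELIZA-style rules: match a pattern, reflect it back as a question.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching reflection, else a canned prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return FALLBACK

print(respond("I am lonely these days"))
# How long have you been lonely these days?
```

That is close to the whole trick, and people confided in it anyway.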
Do you have an example of human intelligence that doesn’t rely on pattern recognition through previous experience?

None of the ones that actually work resemble intelligence. They’re basic language skills from a tool that has no path to anything resembling intelligence. There’s plenty you can do algorithmically if you’re willing to lose a lot of money on every individual usage.
And again, several of them are egregious lies about shit that is actually worse than nothing.
At what point do you think that your opinion on AI trumps the papers and studies of researchers in those fields?

Actual researchers aren’t the ones lying about LLMs. It’s exclusively corporate people, and people who have left research for corporate paychecks, playing make-believe that they resemble intelligence.
That said, the academic research space is also a giant mess, and you should take even peer-reviewed papers with a grain of salt, because many can’t be replicated and there is a good deal of actual fraud.
The market is incredibly irrational and massive bubbles happen all the time.
The number of users, when all the search engines are forcibly injecting it into every search (and hemorrhaging money to do it)? Just as dumb a metric.
Did you actually just bring that up as a positive?
The author from the article did. It’s a bit of a stretch, as are the last 2-3 pieces of the list 🤷🏾‍♂️. The first few are still pretty big.