The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com)
In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)
"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?
You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."
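Frankle's inflate-then-compress loop can be caricatured in a few lines of Python. The `inflate` and `compress` functions below are toy stand-ins for LLM calls (not real API usage), but they make the point concrete: the round trip adds volume without adding information.

```python
BULLETS = ["ship the Q3 report", "schedule the review", "update the budget"]

def inflate(bullets):
    """Toy stand-in for 'use ChatGPT to turn bullet points into a long email'."""
    filler = ("I hope this message finds you well. Per our earlier "
              "discussion, I wanted to circle back regarding the following: ")
    return filler + ", and furthermore ".join(
        f"we should {b}" for b in bullets) + ". Best regards."

def compress(email, bullets):
    """Toy stand-in for 'summarize the email back down to bullet points':
    all it can recover is exactly what the sender started from."""
    return [b for b in bullets if b in email]

email = inflate(BULLETS)
print(len(email) > sum(len(b) for b in BULLETS))   # → True: the text grew...
print(compress(email, BULLETS) == BULLETS)         # → True: ...but nothing new arrived
```

Tons of information changing hands, all of it fluff: the only payload that survives the round trip is the original bullet list.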
But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...
The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.
We failed that test with the internet. Let's not fail it with A.I.
Humans are like water (Score:5, Interesting)
We will follow the path of least resistance like water flows due to gravity.
For a business owner, that means replacing people with basic AI because it's less expensive. For a litigator, it means an opportunity to churn out enough paperwork to drown a target and make them give up if they don't have deep pockets. It will mean 'little people' not paying for basic art, music, or prose at the expense of growing the next generation of artists, composers, and writers.
Everyone will make the short term choice that is in their self-interest, because if they don't they will lose out to someone who does. You can't focus on a long term issue if the short term issue will prevent you from ever getting to it.
Re:Humans are like water (Score:5, Interesting)
AI is absolutely no different. For some kids, AI will accelerate their learning. For other kids, it won't make a difference in the slightest. The bottom quartile of humanity is near-illiterate. I seriously doubt AI will change that number much.
Re:Humans are like water (Score:5, Insightful)
So much hand-wringing. Every tool humans developed has made life easier in some way. From the invention of fire, to writing, to the printing press, the automobile, the computer and the internet.
Yeah, but look, fire and the automobile in particular have been abused way beyond what's sensible. That doesn't mean stop the clock, but it does mean that there's potential for abuse in new technologies, and we should pay attention.
The danger of AI is really not that it will allow some humans to reach their full potential, it's that it will allow some humans to reach their full potential for abuse. Yes, the problem is always what humans will do with it, so? Humans are still in charge, so that's still relevant. Humans becoming not in charge is a whole separate fear, and so far not at all relevant.
Re: (Score:2)
Most people are like that. Not everyone is or we would still be living in caves.
The real problem (Score:1)
Actual humans who write click-bait articles with lame-ass logic about a technology that is in its infant stages.
Learn to code (Score:2)
Re: (Score:2)
It definitely has reduced the mental hurtles
So it's an analgesic then, too. Is there anything AI can't do?
NeXT revolution (Score:2)
The first AI developer to reflect on how the PC took off can leverage artificial intelligence the same way. ChatGPT is the equivalent of the PC-DOS command line, 50 years of technological advances later.
Can it run a PC?
Put a mainframe in your pocket? Supercomputer Network anywhere on the planet? How many networks can it SNOW into a seamless orthogonal datastore?
Can ChatGPT operate in reverse? As a psychoanalyst, diagnostic SME, MD, etc.?
Can it be put on a Refinery to increase efficiency, Elec. Transmission Grid to load balance and Max s
Train them on more limited sources (Score:3)
If you want them to supplement (or, ugh, replace) e.g. textbooks, then you're going to have to train them on textbooks and directly supporting material, not the whole internet. They'll still be wrong if the textbooks are wrong, and they'll still invent things that sound like they belong in a textbook and are total bullshit, unless you come up with some way to check that what they said is supported by something in the training data. That means retaining that data and making it rapidly searchable, and coming up with some kind of system that specifically compares the probable bullshit generated to the source material (based on match relevance of search results?). But since the source material is much smaller, that becomes feasible.
Trying to have a text generator that can be all things to all people is senseless. But maybe with enough work you could make one that was actually useful for some specific purpose.
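The verification loop the parent comment sketches — flag generated text unless it matches something in a retained, searchable source corpus — can be mocked up with a crude lexical-overlap check. Real systems would use proper retrieval and semantic matching; the `corpus`, the tokenization, and the `threshold` here are all illustrative assumptions.

```python
def token_overlap(claim, passage):
    """Fraction of the claim's tokens that also appear in a source passage."""
    claim_tokens = set(claim.lower().split())
    passage_tokens = set(passage.lower().split())
    if not claim_tokens:
        return 0.0
    return len(claim_tokens & passage_tokens) / len(claim_tokens)

def is_supported(claim, source_passages, threshold=0.7):
    """Accept a generated sentence only if some retained source passage
    lexically covers most of it; otherwise treat it as probable bullshit."""
    return max(token_overlap(claim, p) for p in source_passages) >= threshold

# Tiny illustrative "textbook" corpus (made up for this sketch).
corpus = [
    "water boils at 100 degrees celsius at sea level",
    "the mitochondrion is the powerhouse of the cell",
]

print(is_supported("water boils at 100 degrees celsius", corpus))         # → True
print(is_supported("water boils at 50 degrees celsius on mars", corpus))  # → False
```

Swapping the overlap function for real search relevance scores is the hard part, but the shape of the check — generate, retrieve, compare, reject — stays the same, and it only works because the source corpus is small enough to index and search exhaustively.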
Just the Facts (Score:2)
Re: (Score:3)
What makes you think that the AIs will be trained on facts, and not directly on the spinning that their creators want to propagate?
Parlor Game (Score:5, Insightful)
What promise? (Score:3)
To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.
When I look at the cast of characters investing in AI, LLMs...etc the only promise that comes to mind is a promise to aggregate even more power into the hands of a few sociopaths.
What is being promised and by whom?
Re: (Score:2)
Obviously, those who control the AI influence and control the answers it provides. Which means the monied interests, given training costs.
So let the race to the bottom continue...
That's not AI's job (Score:4)
We use calculators at work to save time & effort because mentally calculating stuff is only beneficial in the early stages of numeracy & mathematics. When calculating is not the learning objective, we need to reduce the time & effort it uses up (a finite resource) in order to have the mental resources left to achieve the objective, i.e. it makes us more efficient. Likewise, if we're looking at genre & register features of written discourse, it helps to have an LLM generate the model/example passages that we want to analyse for those features. We're still thinking long & hard about certain things in certain ways, but we're cutting out the irrelevant hard work.
AFAIK, LLMs cannot design learning sequences like that or mediate learners while they work on such tasks. The best they can do is regurgitate a kind of middle-ground average of typical lesson plans of that type, which are often incoherent when evaluated from a "science of learning" perspective (a set of sub-disciplines of cognitive behavioural psychology). AI can't do education yet. Only pundits & entrepreneurs who are trying to bring their "exciting new products & services" to market to start generating revenue before the investor money starts running out believe that the current state-of-the-art AI can teach.
Easier (Score:2)
The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort.
I almost choked on my laughter. My experience is that current LLMs generate more work than they save. If your business is based around facts, then LLMs absolutely cause more problems than they solve. If your business is based solely on word count, then LLMs are a gold mine.
The recent story of a lawyer who used ChatGPT to research cases is a good example. For uses like that, every single word spouted by LLMs needs to be manually cross-referenced and verified by a human. It would be cheaper and much more
Don't hold your breath (Score:3)
Pet peeve: marketing types now seem to think everything has to be ChatGPT-related. There is even bloody ChatGPT-designed underwear. Ugh, how far humanity has fallen in the misuse of technology...
Will not affect deeper insights or new ideas (Score:2)
The reason is simple: most humans cannot have these anyway. The few who can will have no trouble still finding the space to think about them.
Learning (Score:1)
why oh why (Score:2)
Why oh why would I give a shit about what Ezra Klein thinks about AI?