
The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com)

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)

"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."
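
(To make Frankle's loop concrete, here is a minimal sketch of the round trip, assuming the openai Python package with an API key in the environment; the model name and prompts are purely illustrative.)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    bullets = "- Q3 deadline moved\n- budget frozen\n- new hire starts Monday"

    # Sender inflates three bullet points into a long, formal email.
    email = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Write a long, formal email covering:\n" + bullets}],
    ).choices[0].message.content

    # Recipient compresses it straight back down to bullet points.
    summary = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Summarize this email as three bullet points:\n" + email}],
    ).choices[0].message.content

    print(summary)  # roughly the original bullets, after two paid API calls

(Two model calls and no new information: the "tons of information changing hands" is exactly the fluff Frankle describes.)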

But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

We failed that test with the internet. Let's not fail it with A.I.


Comments:
  • by Baron_Yam ( 643147 ) on Sunday May 28, 2023 @01:25PM (#63557473)

    We will follow the path of least resistance, like water flowing downhill.

    For a business owner, that means replacing people with basic AI because it's less expensive. For a litigator, it means an opportunity to churn out enough paperwork to drown a target and make them give up if they don't have deep pockets. It will mean 'little people' no longer paying for basic art, music, or prose, at the expense of growing the next generation of artists, composers, and writers.

    Everyone will make the short-term choice that is in their self-interest, because if they don't, they will lose out to someone who does. You can't focus on a long-term issue if the short-term issue will prevent you from ever getting to it.

    • by hdyoung ( 5182939 ) on Sunday May 28, 2023 @01:43PM (#63557493)
      Meh. So much hand-wringing. Every tool humans developed has made life easier in some way. From the invention of fire, to writing, to the printing press, the automobile, the computer and the internet. It doesn’t matter which generation - there are always people who are driven to improve themselves and can make it happen. And there are always people who won’t, or can’t, improve themselves, and they fall behind. Sometimes it’s their own fault, but often it’s because of external circumstances beyond their control.

      AI is absolutely no different. For some kids, AI will accelerate their learning. For other kids, it won’t make a difference in the slightest. The bottom quartile of humanity is near-illiterate. I seriously doubt AI will change that number much.
      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday May 28, 2023 @02:52PM (#63557609) Homepage Journal

        So much hand-wringing. Every tool humans developed has made life easier in some way. From the invention of fire, to writing, to the printing press, the automobile, the computer and the internet.

        Yeah, but look, fire and the automobile in particular have been abused way beyond what's sensible. That doesn't mean stop the clock, but it does mean that there's potential for abuse in new technologies, and we should pay attention.

        The danger of AI is really not that it will allow some humans to reach their full potential, it's that it will allow some humans to reach their full potential for abuse. Yes, the problem is always what humans will do with it, so? Humans are still in charge, so that's still relevant. Humans becoming not in charge is a whole separate fear, and so far not at all relevant.

    • by gweihir ( 88907 )

      Most people are like that. Not everyone is, or we would still be living in caves.

  • by Anonymous Coward

    actual humans who write click-bait articles with lame-ass logic about a technology that is in its infancy

  • I'm having fun using it to build scripts, speed up troubleshooting, and learn new things like Python and Ansible. It definitely has reduced the mental hurdles I've had taking that jump in the past.
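    (A rough sketch of that workflow, assuming the openai Python package; the model name, prompt, and filename are made up for illustration.)

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        task = "Write an Ansible playbook that installs and starts nginx on Ubuntu."

        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": task}],
        )

        # Save the draft for human review -- never apply generated playbooks blindly.
        with open("nginx_draft.yml", "w") as f:
            f.write(reply.choices[0].message.content)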
  • The first AI developer to reflect on how the PC took off can leverage artificial intelligence. ChatGPT is the PC-DOS command-line equivalent, after 50 years of technological advances.
    Can it run a PC?
    Put a mainframe in your pocket? A supercomputer network anywhere on the planet? How many networks can it SNOW into a seamless orthogonal datastore?
    Can ChatGPT operate in reverse? As a psychoanalyst, diagnostic SME, MD, etc.?
    Can it be put on a refinery to increase efficiency, or an electric transmission grid to load balance and Max s

  • If you want them to supplement (or, ugh, replace) e.g. textbooks, then you're going to have to train them on textbooks and directly supporting material, not the whole internet. They'll still be wrong if the textbooks are wrong, and they'll still invent things that sound like they belong in a textbook and are total bullshit unless you come up with some way to check that what they said is supported by something in the training data — which means retaining that data and making it rapidly searchable, and coming up with some kind of system that specifically compares the probable bullshit generated to the source material, based on match relevance of search results (a sketch of one such check follows this comment). But since the source material is much smaller, that becomes feasible.

    Trying to have a text generator that can be all things to all people is senseless. But maybe with enough work you could make one that was actually useful for some specific purpose.
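
    (A sketch of one such check, as referenced above: index the source material, then flag generated sentences with no close match in it. Uses scikit-learn; the corpus and the 0.3 cutoff are invented for illustration.)

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        textbook = [
            "Mitochondria produce ATP through cellular respiration.",
            "DNA replication is semi-conservative.",
        ]
        generated = ("Mitochondria generate ATP via respiration. "
                     "Mitochondria were first described on Mars in 1998.")

        vectorizer = TfidfVectorizer().fit(textbook)
        source_vecs = vectorizer.transform(textbook)

        for sentence in generated.split(". "):
            # Similarity between this sentence and its closest source passage.
            score = cosine_similarity(vectorizer.transform([sentence]), source_vecs).max()
            if score < 0.3:
                print("unsupported claim?", sentence)

    (A production system would likely use embeddings rather than TF-IDF, but the shape of the check is the same: a small, retained corpus makes verification tractable.)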

  • So AI is used to generate long documents with lots of text and then AI is used to boil that down to the substance and actual information provided. I guess that is bad if your entire economic value is spinning information to suit the interests of your employer. But it seems it would improve communication, not diminish it.
    • What makes you think that the AIs will be trained on facts, and not directly on the spinning that their creators want to propagate?

  • Parlor Game (Score:5, Insightful)

    by RossCWilliams ( 5513152 ) on Sunday May 28, 2023 @02:28PM (#63557557)
    ChatGPT is just an AI parlor game. The actual uses of AI are far less interesting, far more powerful and far narrower in focus. We will, of course, have more AI toys, but eventually we will also have sophisticated tools that are trained for specific purposes.
  • by WaffleMonster ( 969671 ) on Sunday May 28, 2023 @02:39PM (#63557587)

    To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

    When I look at the cast of characters investing in AI, LLMs, etc., the only promise that comes to mind is a promise to aggregate even more power into the hands of a few sociopaths.

    What is being promised and by whom?

    • Obviously, those who control the AI influence and control the answers it provides. Which means the monied interests, given training costs.

      So let the race to the bottom continue...

  • by VeryFluffyBunny ( 5037285 ) on Sunday May 28, 2023 @03:48PM (#63557713)
    It's not AI's job to educate us or even exercise/develop our brains/knowledge. That's what we have education & training for, i.e. someone to push us to think long & hard about certain things in certain ways because the benefits outweigh the costs, usually considerably. If we treat education & training as luxury commodities that only the rich can afford, we end up depriving large percentages of the population that could otherwise acquire useful productive knowledge & skills, & so they can't boost the economy in ways that benefit everyone for decades to come.

    We use calculators at work to save time & effort because mentally calculating stuff is only beneficial in the early stages of numeracy & mathematics. When calculating is not the learning objective, we need to reduce the time & effort it uses up (a finite resource) in order to have the mental resources left to achieve the objective, i.e. it makes us more efficient. Likewise, if we're looking at genre & register features of written discourse, it helps to have an LLM generate the model/example passages that we want to analyse for those features. We're still thinking long & hard about certain things in certain ways, but we're cutting out the irrelevant hard work.

    AFAIK, LLMs cannot design learning sequences like that or mediate learners while they work on such tasks. The best they can do is regurgitate a kind of middle-ground average of typical lesson plans of that type, which are often incoherent when evaluated from a "science of learning" perspective (a set of sub-disciplines of cognitive behavioural psychology). AI can't do education yet. Only pundits & entrepreneurs who are trying to bring their "exciting new products & services" to market to start generating revenue before the investor money starts running out believe that the current state-of-the-art AI can teach.
  • The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort.

    I almost choked on my laughter. My experience is that current LLMs generate more work than they save. If your business is based around facts, then LLMs absolutely cause more problems than they solve. If your business is based solely on word count, then LLMs are a gold mine.

    The recent story of a lawyer who used ChatGPT to research cases is a good example. For uses like that, every single word spouted by LLMs needs to be manually cross-referenced and verified by a human. It would be cheaper and much more

  • by ukoda ( 537183 ) on Sunday May 28, 2023 @06:00PM (#63557907) Homepage
    Given how easy the Internet is compared with A.I., don't hold your breath about "We failed that test with the internet. Let's not fail it with A.I." An interesting article that seems well thought out. However, if you expect people to use A.I. wisely, you are underestimating how lazy the average person is, and how shallow the average person's thought processes are. The main lesson the Internet has taught us in recent years is how easily people can get sucked into misinformation. A.I. is set to make the problem an order of magnitude worse.

    Pet peeve: marketing types now seem to think everything has to be ChatGPT-related. There is even bloody ChatGPT-designed underwear; ugh, how far humanity has fallen in the misuse of technology...
  • The reason is simple: most humans cannot have those anyway. The few that can will have no trouble still finding the space to think about them.

  • Well, some learning is about connecting the dots from old experience to new. The other part is forgetting what is not important. Human learning (seen through the lens of the latest technology, as always) is like computing: you throw information away. That’s why your computer gets hot.
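    (For what it's worth, that closing quip has a real physical footing: Landauer's principle puts a lower bound on the heat released when a bit of information is erased, E_min = k_B · T · ln 2, roughly 2.9 × 10^-21 joules per bit at room temperature, so discarding information genuinely does warm the machine, if only infinitesimally.)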
  • Why oh why would I give a shit about what Ezra Klein thinks about AI?
