I have many conversations with people about large language models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples of how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier especially is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • IzzyScissor@lemmy.world · 8 months ago

    It’s your phone’s ‘predictive text’, but if it were trained on the internet.

    It can guess what the next word should be a lot of the time, but it’s also easy for it to go off the rails.
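
    For the technically curious, here’s a toy sketch of the same idea: a tiny “predictive text” model that counts which word follows which in some training text, then generates by repeatedly picking a likely next word. This is a bigram model, vastly simpler than a real LLM, and the training text here is made up purely for illustration, but the core “guess the next word” principle is the same.

        import random
        from collections import defaultdict, Counter

        # Made-up training text, just for illustration.
        training_text = (
            "the cat sat on the mat . the dog sat on the rug . "
            "the cat chased the dog . the dog chased the cat ."
        )

        # Build a table: for each word, count the words that followed it.
        follows = defaultdict(Counter)
        words = training_text.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1

        def generate(start, length=10):
            """Generate text by repeatedly sampling a likely next word."""
            out = [start]
            for _ in range(length):
                options = follows[out[-1]]
                if not options:  # dead end: nothing ever followed this word
                    break
                # Sample in proportion to how often each word followed.
                choices = list(options.keys())
                weights = list(options.values())
                out.append(random.choices(choices, weights=weights)[0])
            return " ".join(out)

        print(generate("the"))
        # e.g. "the dog sat on the mat . the cat chased"
        # Fluent-looking, but the model has no idea what a cat or a mat is --
        # it only knows which words tend to follow which.

    A real LLM conditions on thousands of previous words using a neural network instead of a simple lookup table, which is why it goes off the rails far less often, but it is still doing the same basic job: predicting the next word.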