I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier in particular is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • kava@lemmy.world
    2 months ago

    It’s all just fancy statistics. It turns words into numbers. Then it finds patterns in those numbers. When you enter a prompt, it finds numbers that are similar and spits out an answer.

    You can get into vectors and backpropagation and blah blah blah, but essentially it’s a math formula. We call it AI, but it’s not fundamentally different from solving 2x + 4 = 10 for x.
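The “fancy statistics” idea above can be shown with a toy sketch. Real LLMs use learned vectors and neural networks, but the simplest possible version of “find patterns in the numbers and spit out an answer” is a bigram model: count which word follows which, then always predict the most common follower. The corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it follows "the" most often in this corpus
```

The model has no idea what a cat is; it only knows that the number attached to the pair (“the”, “cat”) is the biggest. Scale the counting up by many orders of magnitude and replace raw counts with learned vectors, and you get something that sounds fluent for the same underlying reason.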