• Hazzard@lemmy.zip · edited 22 hours ago

    The way I imagine it is like a text autocomplete trying to carry on a story about a person talking to a brilliant AI.

    If something is real, of course the hypothetical author would try to get those details correct, so as not to break the illusion for educated readers. But if something is fake (or the LLM just doesn’t know about it), well, of course the all-knowing fictional AI it’s emulating would know about it. This is a fictional story; whatever your character is asking about is probably just part of the setting. It wouldn’t make sense for the all-knowing AI in this story to simply not know.
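
    To make that concrete, here’s a minimal sketch of the “autocomplete” view, assuming Python, the Hugging Face transformers library, and GPT-2 as a stand-in base model (none of which is named above). Asked about a book that doesn’t exist, the model carries the story forward with a plausible-sounding answer rather than “I don’t know”, because all it’s doing is predicting likely next tokens:

    ```python
    # A base language model continuing a "story" about an all-knowing AI.
    # Assumes the Hugging Face `transformers` library; GPT-2 is used purely
    # for illustration.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # "The Glass Orchard" is a made-up title; nothing in the model flags that.
    prompt = (
        "A user is talking to a brilliant AI that knows everything.\n"
        "User: Summarize the novel 'The Glass Orchard'.\n"
        "AI:"
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,  # sample from the distribution over next tokens
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )
    # Prints a fluent continuation: the "AI" character confidently summarizes
    # a book that doesn't exist, because that's the likeliest way the story goes.
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```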

    Obviously, OpenAI or whoever would try to prompt their LLMs into believing they’re not in a fictional setting, but LLMs are trained on as much fiction as non-fiction, and fiction doesn’t usually break the fourth wall to tell you it’s fiction; it often does the opposite. Even in non-fiction there aren’t many examples of people saying they don’t know things: I wouldn’t write a book review just to say I haven’t heard of the book. Not to mention the non-fiction examples of people confidently being wrong or flat-out lying.
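
    In code, that vendor-side prompting amounts to prepending instructions to the story, something like this hypothetical sketch with the OpenAI Python SDK (the system prompt here is invented; real vendor prompts aren’t public):

    ```python
    # Hypothetical sketch of prompting an LLM to "believe" it's not fiction.
    # Uses the OpenAI Python SDK; the system prompt is invented for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[
            # The system message is just more tokens conditioning the
            # continuation. It nudges the story; it can't enforce truth.
            {"role": "system", "content": (
                "You are a real assistant, not a fictional character. "
                "If you don't know something, say that you don't know."
            )},
            {"role": "user", "content": "Summarize the novel 'The Glass Orchard'."},
        ],
    )
    print(response.choices[0].message.content)
    ```

    Whether the model actually answers “I don’t know” still comes down to which continuation its training text makes likely.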

    Simply based on the nature of human writing, I frankly wouldn’t ever expect LLMs to be immune to writing fiction. I expect that it’s fundamental to the technology, and “hallucinations” (a metaphor that gives far too much credit, IMO) and jailbreaks won’t ever be fully stamped out.