• Thorry@feddit.org · 1 day ago

    I recently read a cool book and wanted to know what other people thought about it. I had no idea how to find out, probably obscure forums or something. But with search engines being shit these days, I could only find one-line reviews. I was looking for something a little more in-depth.

    So I thought hey, let’s try some kind of LLM-based solution, this is something it should be able to do, right? So I told ChatGPT hey, I read this book and I liked it, what are some common praises and criticisms of that book? And the “AI” faithfully did as told. A pretty good summary of pros and cons, with everything explained properly without becoming too verbose. Some of the points I agreed with, others less so. Wow, that’s pretty neat.

    But then alarm bells started ringing in my head. Time for a sanity check. So in a new chat I posed the exact same question, word for word, except I replaced the name of the book and the name of the author with something completely made up. Real-sounding for the context, not obviously fake, but weird enough that a human would give pause. And of course, not similar to anything that actually exists. The damn thing proceeded to give a result very similar to the one before. Different points, but the same format and gist. In-depth points about the pacing and predictability of a book I made the fuck up just seconds earlier.
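    If anyone wants to run the same sanity check, here’s roughly the shape of it as a Python sketch (assuming the OpenAI Python SDK; the model name, the example book, and the made-up title are all just placeholders):

        # Sanity check: ask the same review question about a real book and
        # about a plausible-sounding book that does not exist, then compare.
        # Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI()

        PROMPT = ('I read the book "{title}" by {author} and I liked it. '
                  'What are some common praises and criticisms of that book?')

        def ask(title: str, author: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o",  # placeholder; any chat model works
                messages=[{"role": "user",
                           "content": PROMPT.format(title=title, author=author)}],
            )
            return response.choices[0].message.content

        real = ask("The Name of the Wind", "Patrick Rothfuss")  # real book
        fake = ask("The Ash Cartographer", "Elena Marwick")     # invented just now

        print("REAL:\n" + real)
        print("\nFAKE:\n" + fake)
        # If the second answer is a confident list of praises and criticisms
        # instead of "I don't know this book", you've reproduced the failure.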

    I almost fell into the trap of thinking LLMs could be useful in some cases. But in fact they are bullshit generators that just happen to be right some of the time.

    • Hazzard@lemmy.zip · 20 hours ago

      The way I imagine it in my head is like a text autocomplete trying to carry on a story about a person talking to a brilliant AI.

      If something is real, of course the hypothetical author would try to get those details correct, so as not to break the illusion for educated readers. But if something is fake (or the LLM just doesn’t know about it), well, of course the all-knowing fictional AI it’s emulating would know about it. This is a fictional story; whatever your character is asking about is probably just part of the setting. It wouldn’t make sense for the all-knowing AI in this story to just not know.
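      You can watch this dynamic in miniature with a small open model; it just carries the story forward, real book or not. A rough sketch (assuming the Hugging Face transformers library; the book title is made up):

          # Next-token "autocomplete" in action: a small causal language model
          # (GPT-2) continues whatever story the prompt starts, whether or not
          # the thing being described actually exists.
          from transformers import pipeline

          generator = pipeline("text-generation", model="gpt2")

          prompt = ('The brilliant AI considered the question about the novel '
                    '"The Ash Cartographer" and replied: "The strengths of the book are')
          result = generator(prompt, max_new_tokens=60, do_sample=True)
          print(result[0]["generated_text"])
          # GPT-2 has never seen this (nonexistent) book, but the most plausible
          # continuation of the story is a confident review, so that is what
          # next-token prediction produces.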

      Obviously, OpenAI or whoever would try to prompt their LLMs to believe they’re not in a fictional setting, but the LLMs are trained on as much fiction as non-fiction, and fiction doesn’t usually break character to tell you it’s fiction; it often does the opposite. And even in non-fiction there aren’t many examples of people saying they don’t know things. I wouldn’t write a book review just to say I haven’t heard of the book. Not to mention the non-fiction examples of people confidently being wrong or flat-out lying.

      Simply based on the nature of human writing, I frankly wouldn’t ever expect LLMs to be immune to writing fiction. I expect that it’s fundamental to the technology, and that “hallucinations” (a metaphor that gives far too much credit, IMO) and jailbreaks won’t ever be fully stamped out.

    • leftytighty@slrpnk.net · 22 hours ago

      the only time they’re useful is when assisted by an algorithmic search that provides good contextual information for them to summarize and, more importantly, link to for verification…
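      something like this search-then-summarize pattern, roughly (the search() stub is a stand-in for whatever real search backend you have; the summarizer call assumes the OpenAI Python SDK):

          # Search-then-summarize: the algorithmic search supplies the context,
          # the LLM only condenses it, and every claim carries a URL you can
          # actually follow and check.
          from openai import OpenAI

          client = OpenAI()

          def search(query: str) -> list[tuple[str, str, str]]:
              """Placeholder: swap in a real search API that returns
              (title, url, snippet) tuples from an actual index."""
              raise NotImplementedError("plug in your search backend here")

          def summarize_with_sources(query: str) -> str:
              sources = search(query)
              context = "\n".join(
                  f"[{i}] {title} ({url}): {snippet}"
                  for i, (title, url, snippet) in enumerate(sources, start=1)
              )
              prompt = (
                  f"Using ONLY the numbered sources below, summarize what they "
                  f"say about: {query}\n"
                  f"Cite sources as [n] and list their URLs at the end.\n\n"
                  f"{context}"
              )
              response = client.chat.completions.create(
                  model="gpt-4o",  # placeholder model name
                  messages=[{"role": "user", "content": prompt}],
              )
              return response.choices[0].message.content

          # the point: you verify by following the links, not by trusting the model.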

      if you’re struggling to find good results online it will absolutely not be helpful; if you’re struggling to read the results, then it might help you home in on an area and save you time.

      however, chances are you’ll continue to get worse at independent information gathering