• jsomae@lemmy.ml · 11 days ago

    It’s not useful to talk about the content LLMs create in terms of whether they “understand” it or not. How would you verify whether an LLM understands what it’s producing? Do you think it’s possible that some future technology could have this understanding? Do humans understand everything they produce? (A lot of people get pretty far by bullshitting.)

    Shouldn’t your argument apply equally to aimbots? After all, does an aimbot really understand the strategy, the game, the je ne sais quoi of high-level play?