• skisnow@lemmy.ca · 2 days ago

    the tech lobbying group CCIA Europe, whose members include Alphabet, Meta, and Apple.

    Freedom of speech is nice and all, but shit like this needs to stop because it’s a direct assault on democracy

    • xxce2AAb@feddit.dk · 2 days ago

      Yeah. While I agree that “Europe isn’t the US” and that we definitely need “smarter AI rules”, I highly doubt my idea of what that means matches that of those corporate entities.

      By all means, use an LLM to chew through huge scientific datasets to search for correlations a human would never have noticed, or to come up with a 400-page mathematical “proof” that can at least inform a human-driven refinement process to achieve actual understanding. But practically every other use of “AI” I’ve seen so far is a blursed waste of power at best and societally corrosive at worst.

      • skarn@discuss.tchncs.de · 1 day ago

        use a LLM to chew through huge scientific datasets to search for correlations a human would never have noticed

        Are LLMs really any good for something like that?

        • xxce2AAb@feddit.dk · 1 day ago

          Here I’m imprecisely using “LLM” as a general stand-in for “machine learning”. The only role I see for LLMs in that kind of endeavor is letting researchers ask natural-language questions about the dataset and get results. But with that correction made, yes: even simple polynomial partitioning of hyper-dimensional datasets is incredibly good at detecting clustering/correlations/patterns no human would ever be able to perceive. It’s helpful in other ways too, such as predicting (i.e. guessing at) properties of hitherto unknown compounds or alloys based on the known properties of existing ones, which has been very useful in everything from chemistry through materials science to plasma physics.

          Point is, there are plenty of useful and constructive uses for these technologies, but those are not the ones actually being funded. What investors are throwing money at is tools that rip off other people’s work without compensation, that enable positive (in the bad cybernetic sense) feedback loops with users, or that aim to replace large swathes of the workforce with nothing to replace the jobs lost, none of which will do anything good for societal or economic stability.
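          A minimal sketch of the kind of partitioning meant here, using a tiny hand-rolled k-means on synthetic data (everything in it is illustrative, not any specific research pipeline):

```python
import numpy as np

# Synthetic stand-in for a "hyper-dimensional dataset": two Gaussian
# blobs in 50 dimensions -- far too many axes for a human to eyeball,
# trivial to partition algorithmically.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=0.0, scale=1.0, size=(200, 50))
blob_b = rng.normal(loc=5.0, scale=1.0, size=(200, 50))
data = np.vstack([blob_a, blob_b])

def kmeans(x, k, iters=20):
    """Plain Lloyd's algorithm with deterministic farthest-point init."""
    centroids = [x[0]]
    for _ in range(k - 1):
        # next centroid: the point farthest from all chosen so far
        dists = np.min([np.linalg.norm(x - c, axis=1) for c in centroids], axis=0)
        centroids.append(x[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # assign every point to its nearest centroid, then recompute the means
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([x[labels == i].mean(axis=0) for i in range(k)])
    return labels

labels = kmeans(data, k=2)
print(np.bincount(labels))  # the two hidden blobs come back out as two partitions
```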

          • skarn@discuss.tchncs.de · 1 day ago

            Oh, but I know full well that ML can do wonders in many of these circumstances; my doubts were about LLMs specifically.

            And my question was mostly an honest one. The chance of learning something I really didn’t expect is worth the risk of being “that guy”.

            The main current application of LLM seems to be the production of gargantuan amounts of horseshit.