• 0 Posts
  • 5 Comments
Joined 3 days ago
Cake day: July 11th, 2025


  • It’s like reading an article about a petrol refining company which, having prior experience with gasoline as a useful and profitable substance, decides to seek venture capital to develop a petrol-based fire extinguisher. They obtain the funding - presumably because some people with money just want to see the world burn, and/or because being rich and having brains are not necessarily strongly correlated - but once the product is developed, tests conclusively prove the project’s early detractors right: the result is invariably more fire, not less. And they “don’t know how to fix it while still adhering to the vision of a petrol-based fire-extinguisher”.



  • Here I’m imprecisely using “LLM” as a general stand-in for “machine learning”. The only role I see for LLMs in that kind of endeavor is letting researchers ask natural-language questions about the dataset and get results. But with that correction made, yes, even simple polynomial partitioning of hyper-dimensional datasets is incredibly good at detecting clusters/correlations/patterns no human would ever be able to perceive, and is helpful in other ways - predicting (i.e. guessing) properties of hitherto unknown compounds or alloys based on the known properties of existing ones, which has been very helpful in everything from chemistry through materials science to plasma physics. Point is, there are plenty of useful and constructive uses for these technologies, but those are not the ones actually being funded. What investors are throwing money at is tools that rip off other people’s work without compensation, enable positive (in the bad cybernetic sense) feedback loops with users, or aim to replace large swathes of the workforce with nothing to replace the jobs lost - which will obviously do nothing good for societal or economic stability.
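
    The kind of pattern mining described above can be sketched in a few lines. This is a minimal, invented example (the dataset, sample sizes, and the planted feature pair are all assumptions, not from the comment): an exhaustive scan of a noisy high-dimensional dataset for the one feature pair hiding a strong correlation - trivial for a machine, hopeless for a human eyeballing 20 columns of noise.

    ```python
    # Sketch: find a hidden pairwise correlation in synthetic
    # high-dimensional data. Everything here is invented for
    # illustration; stdlib only, no ML framework required.
    import random

    random.seed(0)
    N_SAMPLES, N_FEATURES = 500, 20

    # Mostly independent Gaussian noise features...
    data = [[random.gauss(0, 1) for _ in range(N_FEATURES)]
            for _ in range(N_SAMPLES)]
    # ...with one planted linear relationship between features 3 and 17.
    for row in data:
        row[17] = 0.9 * row[3] + random.gauss(0, 0.2)

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Brute-force scan of all feature pairs; report the strongest.
    best = max(
        ((i, j) for i in range(N_FEATURES) for j in range(i + 1, N_FEATURES)),
        key=lambda ij: abs(pearson([r[ij[0]] for r in data],
                                   [r[ij[1]] for r in data])),
    )
    print(best)  # the planted pair stands far above the noise floor
    ```

    Real pipelines swap the brute-force scan for smarter partitioning and handle nonlinear relationships, but the principle - machines exhaustively checking combinations no human could - is the same.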


  • Yeah. While I agree that “Europe isn’t the US” and that we definitely need “smarter AI rules”, I highly doubt my idea of what that means matches that of those corporate entities.

    By all means, use an LLM to chew through huge scientific datasets in search of correlations a human would never have noticed, or to come up with a 400-page mathematical “proof” that can at least inform a human-driven refinement process toward actual understanding - but practically every other use of “AI” I’ve seen so far is a blursed waste of power at best and societally corrosive at worst.