Are LLMs really any good for something like that?
Here I’m imprecisely using “LLM” as a general stand-in for “machine learning”. The only role I see for LLMs in that kind of endeavor is letting researchers ask natural-language questions about the dataset and get results. But with that correction made, yes: even simple polynomial partitioning of hyper-dimensional datasets is incredibly good at detecting clusterings, correlations, and patterns no human would ever be able to perceive. It is helpful in other ways too, such as predicting (i.e. guessing) the properties of hitherto unknown compounds or alloys from the known properties of existing ones, which has been very useful in everything from chemistry through materials science to plasma physics. The point is, there are plenty of useful and constructive uses for these technologies, but those are not the ones actually being funded. What investors are throwing money at are tools that rip off other people’s work without compensation, enable positive (in the bad, cybernetic sense) feedback loops with users, or aim to replace large parts of the workforce with nothing to replace the jobs lost, which will obviously do nothing good for social or economic stability.
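A toy version of the “guess properties of an unmeasured compound from measured ones” idea can be sketched with a much simpler cousin of the techniques above: plain one-dimensional polynomial regression. Everything here is hypothetical — the data is synthetic and the “alloy property” is invented purely for illustration.

```python
import numpy as np

# Hypothetical scenario: we have measured some property (say, a melting
# point in kelvin) for 20 alloy compositions, and want to guess the value
# for a composition we have not measured. Data is synthetic.
rng = np.random.default_rng(0)
known_fraction = np.linspace(0.0, 1.0, 20)              # alloying fraction
true_property = 300 + 150 * known_fraction - 80 * known_fraction**2
measured = true_property + rng.normal(0, 2, size=known_fraction.shape)

# Fit a degree-2 polynomial to the noisy measurements of known alloys...
coeffs = np.polyfit(known_fraction, measured, deg=2)

# ...then "predict" (i.e. guess) the property of an untested composition.
predicted = np.polyval(coeffs, 0.37)
print(f"predicted property at fraction 0.37: {predicted:.1f}")
```

Real materials-informatics work replaces the single composition fraction with a high-dimensional descriptor vector and the polynomial with a more flexible model, but the shape of the idea — interpolate unknown properties from known neighbours — is the same.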
Oh but I know full well that ML can do wonders in many of these circumstances, my doubts were about LLMs specifically.
And my question was mostly an honest one. The chance of learning something I really didn’t expect is worth the risk of being “that guy”.
The main current application of LLMs seems to be the production of gargantuan amounts of horseshit.