An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that.
As much as I agree with the sentiment, and as much as I despise the current state of tech and LLMs, software and tech in general are very brittle, riddled with problems and human mistakes (a "bug" is just a made-up word that allows displacement of responsibility).
“AI design is inherently defective and will never work correctly.”
Guess we need to jam it into more things!
Just rambling, I don't really have a useful point.