An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that.
I’ve explained several times now exactly how and why it’s not a bubble. If you have problems understanding basic English, that’s not something I can help you with.
What did I say in the comment you just replied to?
I get it. We’ve done this circle before.
You’re going to say “it’s not a bubble”.
I’m going to ask, “so is the stock price accurate?”
You’re going to say it’s a bubble without using the word “bubble”: “any investor can tell the price has skyrocketed above what it should be worth.”
And then, like the Patrick/Man Ray meme, I’m going to say “so it’s a bubble,” you’ll respond “no,” and we’ll repeat the circle.
I said I’m done.