I burned down a forest to confirm
Don’t ask it to name an NFL team that doesn’t end with ‘s’
DeepSeek eventually gets it, but its DeepThink takes a good ten minutes of racing ‘thoughts’ and loops to figure it out.

I use it for code too and I’ve noticed the same problems. Sometimes it really does help (saves me a search on StackOverflow), but other times it gives me odd spaghetti code, misunderstands the question, or, like you said, does something in an odd and inefficient way. But when it works it’s great. And you can give it a “skeleton” of the code you want, of sorts, and have it fill it out.
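To give a rough idea of what that skeleton looks like, here’s a made-up sketch (the types and function names are invented for illustration, not from any real project):

```typescript
// Hypothetical skeleton to paste into the chat: the TODOs mark
// what I want the model to fill in. None of this is real code.

interface Post {
  slug: string;
  title: string;
  publishedAt: Date;
}

function sortPostsByDate(posts: Post[]): Post[] {
  // TODO: return a new array sorted newest-first; don't mutate the input
  return [];
}

function groupPostsByYear(posts: Post[]): Map<number, Post[]> {
  // TODO: group posts by publishedAt.getFullYear()
  return new Map();
}
```

Spelling out the signatures and TODOs up front leaves it a lot less room to wander off and restructure everything.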
But if it doesn’t get it on the first try, I’ve found that it will never get it. It’ll just go in circles. And it has rewritten my code and turned it to mush, rewritten parts I tell it not to touch, etc.
I’m not as big on the anti-LLM train that the rest of Hexbear is on. It’s a very specific tool, and I’ve gotten some use out of it, but it’s no general intelligence. And I do like joining in the occasional poking fun at it.
LLMs are a tool for a job. I feel a bit of guilt about using them and contributing to turning this world to shit, but, like, I also drive a car and I’ve been eating meat again. I know my contribution to those things pales in comparison to what megacorps are doing, but it still weighs on my mind as guilt. (we live in a society and all that)
But I’ve found that to try and “control” something like ChatGPT, you ask it for small chunks, similar to how programmers might break a problem into smaller bits and piece them together. I’ve had a lot of success that way, until it inevitably starts hallucinating and shitting the bed. It still cracks me up when I feed it AstroJS code and it spits out ReactJS and adds keys everywhere, even though none of my code is in a return statement.
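For anyone who hasn’t used Astro, here’s a made-up component (not my actual code) showing why that’s so funny: the `.map()` sits in a template that gets rendered to static HTML once at build time, so there’s no React reconciliation step and a `key` prop does nothing:

```astro
---
// Hypothetical Astro component. The .map() lives in the template,
// not inside a React component's return, so there's nothing for
// a `key` prop to do here.
const links = [
  { href: "/posts", label: "Posts" },
  { href: "/about", label: "About" },
];
---
<ul>
  {links.map((link) => (
    <li><a href={link.href}>{link.label}</a></li>
  ))}
</ul>
```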