To be honest, I think titles like this are still problematic. They implicitly ascribe some magical conscious power to AI instead of treating it as just another computer program. It's like saying Notepad asked a child for nudes; it doesn't make sense in the way we would typically use those words.
“Somewhat random text generator generates the bad text” is not a very exciting headline, but it is much more accurate in what I’d argue is an important way.
There do need to be filters in place to prevent results like this, though. They have a responsibility to keep their algorithms from doing stuff like this.
It's impossible to completely filter LLM outputs like this. As hard as all the big companies making frontier models have tried, you can still get them to say whatever you want if you prompt them right.
Also, this was on a fucking Tesla, I guess, so it's not like this was in any way a model designed for use with children. In fact, it sounds like it was just the standard Grok model, which is intended to be edgy and sexual. So the model is actually functioning as intended lol
The filter needs to happen at the end.
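By "at the end" I mean a separate moderation pass over the generated text before it's ever spoken aloud, not just trying to align the model itself. Rough sketch of the idea, with entirely made-up function names (this is not Grok's or Tesla's actual code):

```python
# Sketch of output-side filtering: score the model's *reply* after generation,
# and swap in a canned fallback if it looks unsafe, before it reaches the speaker.
# Everything here is hypothetical and only meant to illustrate the architecture.

BLOCK_THRESHOLD = 0.8  # hypothetical cutoff for the moderation score

def generate_reply(prompt: str) -> str:
    # stand-in for the raw LLM call
    return "some generated reply to: " + prompt

def moderation_score(text: str) -> float:
    # stand-in for a separate classifier that scores the output, not the prompt;
    # a trivial keyword check here just so the sketch runs
    banned = ("nude", "nudes")
    return 1.0 if any(word in text.lower() for word in banned) else 0.0

def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    if moderation_score(reply) > BLOCK_THRESHOLD:
        return "Sorry, I can't say that."  # canned fallback instead of the raw output
    return reply

print(safe_reply("tell me a story"))
```

It won't catch everything, sure, but it's a layer that sits outside the model rather than relying on the model policing itself.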
I agree. Notepad should also have an NSFW filter so no children can read the word bum. Overascribing importance to AI output is the real issue here.
In reality, there probably is a filter and the AI just said ‘news’ and they misheard.
This is a strawman, because Notepad isn't going to spit shit out to children on its own. Parents have a duty to keep kids from seeing any 18+ material on their computer.
You are probably right about them mishearing though.