Alt
At the top is a screenshot from Wikimedia Commons showing an image that was updated to a larger size with the comment saying “Improved image”. Below it is the goose chasing meme with the goose twice asking “Where did the pixels come from?”.
No.
The increased “detail” is entirely made up, based on whatever the AI model considers likely to be there given similar images. AI isn’t somehow magically finding pixels that don’t exist; it’s effectively just guessing them.
I swear, were none of you people around for the LSD dogslugs of early image generation/style matching? Or the slightly newer “this person is not real” portrait generators that would merge hair and glasses, and often give multiple sets of eyes when glasses were at odd angles? This is effectively that with considerably more training data thrown at it.
It’s all made up. The AI isn’t taking another picture of the object with a higher-resolution camera. It’s spreading out the existing pixels and making a best guess to fill in the blanks. Maybe that’s fine for a family portrait or something (I don’t agree, but you do you), but that’s definitively not OK for any sort of actual reference material.
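To make the “spreading out pixels and filling in the blanks” point concrete, here’s a minimal sketch of classic bilinear upscaling in plain Python (function name and the toy 2×2 image are illustrative, not from any particular library). Every “new” pixel is just a weighted average of existing neighbours; AI upscalers replace that averaging with a learned guess, but either way no real detail is recovered:

```python
def bilinear_upscale(img, factor):
    """Upscale a 2D grayscale image (list of lists) by an integer factor.

    Each output pixel is interpolated from the four nearest source
    pixels -- i.e. it is computed, not observed.
    """
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        sy = oy / factor              # map output row back into source space
        y0 = min(int(sy), h - 1)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for ox in range(w * factor):
            sx = ox / factor          # map output column back into source space
            x0 = min(int(sx), w - 1)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Weighted average of the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 "image" upscaled to 4x4: the in-between values (e.g. 50 between
# 0 and 100) never existed in the source; they are invented by the math.
small = [[0, 100],
         [100, 0]]
big = bilinear_upscale(small, 2)
```

The `50` values in the output are the whole argument in miniature: they look like extra resolution, but they’re derived purely from the pixels around them.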
Like that time when an upscaler turned Barack Obama into a white man.
Thanks for the link. I don’t know whether I should laugh or cry. It’s hilariously bad, and people embrace this humanity-destroying tech cheerfully.
How do you think image enhancement has always worked? Do you think AI tools are ignoring 30 years of Photoshop equations?
And no, we shouldn’t be fiddling with primary sources. Do you think Wikipedia is a primary source?