Somewhat. I am not familiar with this exact type of algorithm, but the general name is “encoder-decoder” architecture. Broadly speaking, you have an input (the original image) and you want to create an output (obviously). You want the input and the output to be “very similar” according to some definition, but you imagine that the AI algorithm has two parts: an encoder, which extracts as much meaningful information as possible from the input, and a decoder, which takes that information and generates something new out of it. In practice, this information is stored as a list of numbers, and we do not impose any prior meaning on them (we do not say, for example, that the first number is the number of people in the image); the algorithm learns to make the best use of the encoding.
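If it helps to see it concretely, here is a very rough sketch in PyTorch of what that looks like. The sizes, layers, and loss are just placeholders I made up for illustration, not any particular model:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a 28x28 grayscale image flattened into 784 numbers,
# squeezed down to a 16-number "middle" code (the list of numbers above).
INPUT_SIZE = 784
CODE_SIZE = 16

# Encoder: extracts the meaningful information into a short list of numbers.
encoder = nn.Sequential(
    nn.Linear(INPUT_SIZE, 128),
    nn.ReLU(),
    nn.Linear(128, CODE_SIZE),
)

# Decoder: takes that list of numbers and generates a new image from it.
decoder = nn.Sequential(
    nn.Linear(CODE_SIZE, 128),
    nn.ReLU(),
    nn.Linear(128, INPUT_SIZE),
    nn.Sigmoid(),  # keep pixel values between 0 and 1
)

# "Very similar" is made precise by a loss function: here, the mean squared
# error between the original image and the reconstruction.
loss_fn = nn.MSELoss()

x = torch.rand(1, INPUT_SIZE)      # a fake "image" as a flat vector
code = encoder(x)                  # the list of numbers with no fixed meaning
reconstruction = decoder(code)     # the decoder's attempt to rebuild the input
loss = loss_fn(reconstruction, x)  # how "similar" the output is to the input
```

Training just nudges the encoder and decoder weights so that this loss gets smaller, and the meaning of each number in `code` emerges on its own.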
Two different machines running the same algorithm, trained independently, might end up with completely different middle information. The only thing that matters is that the “encoder” and the “decoder” parts both know what’s going on. (Basically, yes, it’s random, but the computer knows how to interpret it, where “know” is used very loosely here.)
Sorry for the rant! I hope you found it interesting