Hacker News

Yeah, but the theoretical endpoint of training a GAN is that the generator gets so good that the discriminator has to resort to guessing — it becomes unable to tell, with any accuracy, whether the example it is shown is real or generated.
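(A quick sketch of why that's the theoretical endpoint, not from the thread itself: against fixed densities, the Bayes-optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), so once the generator matches the data distribution exactly, D*(x) = 0.5 everywhere — a coin flip. The 1-D Gaussian setup below is just an illustration.)

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_discriminator(x, mu_g):
    # Bayes-optimal discriminator: D*(x) = p_data / (p_data + p_g).
    # Hypothetical setup: "real" data ~ N(0, 1), generator ~ N(mu_g, 1).
    p_data = normal_pdf(x, 0.0, 1.0)
    p_g = normal_pdf(x, mu_g, 1.0)
    return p_data / (p_data + p_g)

# Imperfect generator: the discriminator is nearly certain a typical
# real sample (x = 0) is real.
print(optimal_discriminator(0.0, mu_g=3.0))  # close to 1

# Perfect generator (mu_g = 0, distributions identical): the
# discriminator is reduced to guessing.
print(optimal_discriminator(0.0, mu_g=0.0))  # exactly 0.5
```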


I don't think that ever happens in training, at least in the image domain. The classifier can always find some subtle clue.

