CycleGAN is a [[generative adversarial network]] for translating an image from one domain (e.g., a photo) to another domain (e.g., the painting style of Monet), first introduced in the paper [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/pdf/1703.10593).
The algorithm learns a mapping $G: X \to Y$ (e.g., from photos to Monet paintings) together with an inverse mapping $F: Y \to X$, under the cycle-consistency constraint $F(G(x)) \approx x$ (and, symmetrically, $G(F(y)) \approx y$), so that translating an image to the other domain and back recovers the original. This is akin to what humans do when imagining the actual landscape that Monet painted, or the way Monet might paint a given landscape.
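The cycle-consistency constraint is typically enforced as an L1 reconstruction loss on both cycles. A minimal sketch in PyTorch, using tiny stand-in convolutions for $G$ and $F$ (the paper's generators are ResNet-based networks, and the full objective also includes adversarial losses omitted here):

```python
import torch
import torch.nn as nn

# Stand-in generators; real CycleGAN uses much deeper ResNet-based models.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # G: X -> Y (photo -> Monet)
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # F: Y -> X (Monet -> photo)

l1 = nn.L1Loss()
x = torch.randn(1, 3, 64, 64)  # a batch of "photos" from domain X
y = torch.randn(1, 3, 64, 64)  # a batch of "paintings" from domain Y

# Forward cycle:  x -> G(x) -> F(G(x)) should reconstruct x.
# Backward cycle: y -> F(y) -> G(F(y)) should reconstruct y.
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
cycle_loss.backward()  # gradients flow into both generators
```

Minimizing this loss alongside the adversarial losses is what lets CycleGAN learn the translation without paired training examples.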