
Image-Image-Translation-Using-GAN-Variants

Objective

  • Compare the conditional GAN with the basic GAN.
  • Build an understanding of adversarial networks.
  • Learn a loss function adapted to the task instead of hand-engineering one.
  • Train a generative model that matches its generated distribution to the real data distribution; minimizing the distance between the two distributions is critical for generating content that looks good, looks new, and looks as if it came from the original data distribution.
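The learned loss described above can be sketched with the standard adversarial objectives. Below is a minimal NumPy sketch, assuming a discriminator that outputs probabilities; the conditional (pix2pix-style) variant scores (input, output) pairs and adds an L1 term pulling outputs toward the ground truth (the paper uses a weight of 100; the function and variable names here are illustrative, not from this repository):

```python
import numpy as np

def bce(pred, label):
    # Binary cross-entropy on discriminator probabilities.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred))

def gan_losses(d_real, d_fake):
    # Basic GAN: the discriminator scores images on their own.
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    g_loss = bce(d_fake, 1.0)  # non-saturating generator loss
    return d_loss, g_loss

def cgan_losses(d_real_pair, d_fake_pair, fake, target, lam=100.0):
    # Conditional GAN: the discriminator scores (input, output) pairs,
    # and the generator adds an L1 term toward the ground-truth target.
    d_loss = bce(d_real_pair, 1.0) + bce(d_fake_pair, 0.0)
    g_loss = bce(d_fake_pair, 1.0) + lam * np.mean(np.abs(fake - target))
    return d_loss, g_loss
```

A discriminator that correctly separates real from fake (e.g. `d_real=0.9`, `d_fake=0.1`) yields a lower discriminator loss than one that is guessing at 0.5, which is exactly the signal that drives the minimax game.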

Dataset

Dataset used: facades (paired building photos and architectural label maps)

Link to the dataset : http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/
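The archives at that link store each training example as a single image with the two domains side by side (256×512 for facades). A minimal sketch of splitting such a pair, assuming the image is already loaded as a NumPy array; which half is the photo and which is the label map depends on the dataset, so this just returns the left and right halves:

```python
import numpy as np

def split_pair(combined):
    # `combined` is an (H, 2W, C) array, e.g. (256, 512, 3) for facades.
    # Split it down the middle into the two paired domains.
    h, w2, c = combined.shape
    w = w2 // 2
    return combined[:, :w], combined[:, w:]
```

Each half can then be fed to the generator as input or used as the translation target, depending on the direction being trained.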

Results

The results in this experiment suggest that conditional adversarial networks are a promising approach for many image-image translation tasks, especially those involving highly structured graphical outputs. These networks learn a loss adapted to the task and data at hand, which makes them applicable in a wide variety of settings.

Credits

Thanks to Phillip Isola for the wonderful paper on Image-Image translation using Conditional Adversarial Networks (pix2pix).
