.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_beginner_dcgan_faces_tutorial.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_dcgan_faces_tutorial.py:


DCGAN Tutorial
==============

**Author**: `Nathan Inkawhich <https://github.com/inkawhich>`__

Introduction
------------

This tutorial will give an introduction to DCGANs through an example. We
will train a generative adversarial network (GAN) to generate new
celebrities after showing it pictures of many real celebrities. Most of
the code here is from the dcgan implementation in
`pytorch/examples <https://github.com/pytorch/examples>`__, and this
document will give a thorough explanation of the implementation and shed
light on how and why this model works. Don’t worry, no prior
knowledge of GANs is required, but it may require a first-timer to spend
some time reasoning about what is actually happening under the hood.
Also, for the sake of time, it will help to have a GPU or two. Let’s
start from the beginning.

Generative Adversarial Networks
-------------------------------

What is a GAN?
~~~~~~~~~~~~~~

GANs are a framework for teaching a DL model to capture the training
data’s distribution so we can generate new data from that same
distribution. GANs were invented by Ian Goodfellow in 2014 and first
described in the paper `Generative Adversarial
Nets <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>`__.
They are made of two distinct models, a *generator* and a
*discriminator*. The job of the generator is to spawn ‘fake’ images that
look like the training images. The job of the discriminator is to look
at an image and output whether or not it is a real training image or a
fake image from the generator. During training, the generator is
constantly trying to outsmart the discriminator by generating better and
better fakes, while the discriminator is working to become a better
detective and correctly classify the real and fake images. The
equilibrium of this game is when the generator is generating perfect
fakes that look as if they came directly from the training data, and the
discriminator is left to always guess at 50% confidence that the
generator output is real or fake.

Now, let’s define some notation to be used throughout the tutorial,
starting with the discriminator. Let :math:`x` be data representing an
image.
:math:`D(x)` is the discriminator network which outputs the (scalar)
probability that :math:`x` came from training data rather than the
generator. Here, since we are dealing with images, the input to
:math:`D(x)` is an image of CHW size 3x64x64. Intuitively, :math:`D(x)`
should be HIGH when :math:`x` comes from training data and LOW when
:math:`x` comes from the generator. :math:`D(x)` can also be thought of
as a traditional binary classifier.

For the generator’s notation, let :math:`z` be a latent space vector
sampled from a standard normal distribution. :math:`G(z)` represents the
generator function which maps the latent vector :math:`z` to data-space.
The goal of :math:`G` is to estimate the distribution that the training
data comes from (:math:`p_{data}`) so it can generate fake samples from
that estimated distribution (:math:`p_g`).

So, :math:`D(G(z))` is the probability (scalar) that the output of the
generator :math:`G` is a real image. As described in `Goodfellow’s
paper <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>`__,
:math:`D` and :math:`G` play a minimax game in which :math:`D` tries to
maximize the probability it correctly classifies reals and fakes
(:math:`logD(x)`), and :math:`G` tries to minimize the probability that
:math:`D` will predict its outputs are fake (:math:`log(1-D(G(z)))`).
From the paper, the GAN loss function is

.. math:: \underset{G}{\text{min}} \underset{D}{\text{max}}V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[logD(x)\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log(1-D(G(z)))\big]

In theory, the solution to this minimax game is where
:math:`p_g = p_{data}`, and the discriminator guesses randomly if the
inputs are real or fake. However, the convergence theory of GANs is
still being actively researched and in reality models do not always
train to this point.
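
As a concrete check of this equilibrium: if :math:`p_g = p_{data}` and the
discriminator outputs :math:`D(x) = \tfrac{1}{2}` for every input, the value
function above evaluates to

.. math:: V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}\big[log\tfrac{1}{2}\big] + \mathbb{E}_{z\sim p_{z}(z)}\big[log\tfrac{1}{2}\big] = -log(4)

which matches the global optimum derived in Goodfellow’s paper.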

What is a DCGAN?
~~~~~~~~~~~~~~~~

A DCGAN is a direct extension of the GAN described above, except that it
explicitly uses convolutional and convolutional-transpose layers in the
discriminator and generator, respectively. It was first described by
Radford et al. in the paper `Unsupervised Representation Learning With
Deep Convolutional Generative Adversarial
Networks <https://arxiv.org/pdf/1511.06434.pdf>`__. The discriminator
is made up of strided
`convolution <https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d>`__
layers, `batch
norm <https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm2d>`__
layers, and
`LeakyReLU <https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU>`__
activations. The input is a 3x64x64 image and the output is a
scalar probability that the input is from the real data distribution.
The generator is comprised of
`convolutional-transpose <https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d>`__
layers, batch norm layers, and
`ReLU <https://pytorch.org/docs/stable/nn.html#relu>`__ activations. The
input is a latent vector, :math:`z`, that is drawn from a standard
normal distribution and the output is a 3x64x64 RGB image. The strided
conv-transpose layers allow the latent vector to be transformed into a
volume with the same shape as an image. In the paper, the authors also
give some tips about how to set up the optimizers, how to calculate the
loss functions, and how to initialize the model weights, all of which
will be explained in the coming sections.
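
To build intuition for how these strided layers change spatial size,
here is a small standalone sketch (not part of the tutorial’s model
code) showing that a convolution with kernel size 4, stride 2, and
padding 1 halves the spatial dimensions, while the matching
convolution-transpose doubles them:

.. code-block:: python

    import torch
    import torch.nn as nn

    # Standalone shape check: with kernel_size=4, stride=2, padding=1,
    # Conv2d halves the spatial size and ConvTranspose2d doubles it.
    x = torch.randn(1, 3, 64, 64)                                      # a dummy 3x64x64 image
    down = nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1)         # 64 -> 32
    up = nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1)  # 32 -> 64

    h = down(x)
    print(h.shape)      # torch.Size([1, 8, 32, 32])
    print(up(h).shape)  # torch.Size([1, 3, 64, 64])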



.. code-block:: default


    from __future__ import print_function
    #%matplotlib inline
    import argparse
    import os
    import random
    import torch
    import torch.nn as nn
    import torch.nn.parallel
    import torch.backends.cudnn as cudnn
    import torch.optim as optim
    import torch.utils.data
    import torchvision.datasets as dset
    import torchvision.transforms as transforms
    import torchvision.utils as vutils
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    from IPython.display import HTML

    # Set random seed for reproducibility
    manualSeed = 999
    #manualSeed = random.randint(1, 10000) # use if you want new results
    print("Random Seed: ", manualSeed)
    random.seed(manualSeed)
    torch.manual_seed(manualSeed)






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Random Seed:  999


Inputs
------

Let’s define some inputs for the run:

-  **dataroot** - the path to the root of the dataset folder. We will
   talk more about the dataset in the next section
-  **workers** - the number of worker threads for loading the data with
   the DataLoader
-  **batch_size** - the batch size used in training. The DCGAN paper
   uses a batch size of 128
-  **image_size** - the spatial size of the images used for training.
   This implementation defaults to 64x64. If another size is desired,
   the structures of D and G must be changed. See
   `here <https://github.com/pytorch/examples/issues/70>`__ for more
   details
-  **nc** - number of color channels in the input images. For color
   images this is 3
-  **nz** - length of latent vector
-  **ngf** - relates to the depth of feature maps carried through the
   generator
-  **ndf** - sets the depth of feature maps propagated through the
   discriminator
-  **num_epochs** - number of training epochs to run. Training for
   longer will probably lead to better results but will also take much
   longer
-  **lr** - learning rate for training. As described in the DCGAN paper,
   this number should be 0.0002
-  **beta1** - beta1 hyperparameter for Adam optimizers. As described in
   paper, this number should be 0.5
-  **ngpu** - number of GPUs available. If this is 0, code will run in
   CPU mode. If this number is greater than 0 it will run on that number
   of GPUs



.. code-block:: default


    # Root directory for dataset
    dataroot = "data/celeba"

    # Number of workers for dataloader
    workers = 2

    # Batch size during training
    batch_size = 128

    # Spatial size of training images. All images will be resized to this
    #   size using a transformer.
    image_size = 64

    # Number of channels in the training images. For color images this is 3
    nc = 3

    # Size of z latent vector (i.e. size of generator input)
    nz = 100

    # Size of feature maps in generator
    ngf = 64

    # Size of feature maps in discriminator
    ndf = 64

    # Number of training epochs
    num_epochs = 5

    # Learning rate for optimizers
    lr = 0.0002

    # Beta1 hyperparam for Adam optimizers
    beta1 = 0.5

    # Number of GPUs available. Use 0 for CPU mode.
    ngpu = 1








Data
----

In this tutorial we will use the `Celeb-A Faces
dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`__ which can
be downloaded at the linked site, or in `Google
Drive <https://drive.google.com/drive/folders/0B7EVK8r0v71pTUZsaXdaSnZBZzg>`__.
The dataset will download as a file named *img_align_celeba.zip*. Once
downloaded, create a directory named *celeba* and extract the zip file
into that directory. Then, set the *dataroot* input for this notebook to
the *celeba* directory you just created. The resulting directory
structure should be:

::

   /path/to/celeba
       -> img_align_celeba  
           -> 188242.jpg
           -> 173822.jpg
           -> 284702.jpg
           -> 537394.jpg
              ...

This is an important step because we will be using the ImageFolder
dataset class, which requires there to be subdirectories in the
dataset’s root folder. Now, we can create the dataset, create the
dataloader, set the device to run on, and finally visualize some of the
training data.
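
One detail worth noting in the transform pipeline below:
``transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))`` shifts the
``ToTensor`` output from :math:`[0,1]` to :math:`[-1,1]`, which matches
the Tanh output range of the generator we build later. A minimal,
purely illustrative check:

.. code-block:: python

    import torch
    import torchvision.transforms as transforms

    # Illustrative only: normalizing with mean 0.5 and std 0.5 maps [0, 1] -> [-1, 1]
    norm = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    print(norm(torch.zeros(3, 2, 2)).unique())  # tensor([-1.])
    print(norm(torch.ones(3, 2, 2)).unique())   # tensor([1.])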



.. code-block:: default


    # We can use an image folder dataset the way we have it setup.
    # Create the dataset
    dataset = dset.ImageFolder(root=dataroot,
                               transform=transforms.Compose([
                                   transforms.Resize(image_size),
                                   transforms.CenterCrop(image_size),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                               ]))
    # Create the dataloader
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                             shuffle=True, num_workers=workers)

    # Decide which device we want to run on
    device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")

    # Plot some training images
    real_batch = next(iter(dataloader))
    plt.figure(figsize=(8,8))
    plt.axis("off")
    plt.title("Training Images")
    plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(),(1,2,0)))






.. image:: /beginner/images/sphx_glr_dcgan_faces_tutorial_001.png
    :class: sphx-glr-single-img




Implementation
--------------

With our input parameters set and the dataset prepared, we can now get
into the implementation. We will start with the weight initialization
strategy, then talk about the generator, discriminator, loss functions,
and training loop in detail.

Weight Initialization
~~~~~~~~~~~~~~~~~~~~~

From the DCGAN paper, the authors specify that all model weights shall
be randomly initialized from a Normal distribution with mean=0,
stdev=0.02. The ``weights_init`` function takes an initialized model as
input and reinitializes all convolutional, convolutional-transpose, and
batch normalization layers to meet this criterion. This function is
applied to the models immediately after initialization.



.. code-block:: default


    # custom weights initialization called on netG and netD
    def weights_init(m):
        classname = m.__class__.__name__
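        # 'Conv' matches both Conv2d and ConvTranspose2d layer class names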
        if classname.find('Conv') != -1:
            nn.init.normal_(m.weight.data, 0.0, 0.02)
        elif classname.find('BatchNorm') != -1:
            nn.init.normal_(m.weight.data, 1.0, 0.02)
            nn.init.constant_(m.bias.data, 0)








Generator
~~~~~~~~~

The generator, :math:`G`, is designed to map the latent space vector
(:math:`z`) to data-space. Since our data are images, converting
:math:`z` to data-space means ultimately creating an RGB image with the
same size as the training images (i.e. 3x64x64). In practice, this is
accomplished through a series of strided two-dimensional convolutional
transpose layers, each paired with a 2d batch norm layer and a ReLU
activation. The output of the generator is fed through a Tanh function
to return it to the input data range of :math:`[-1,1]`. It is worth
noting the existence of the batch norm functions after the
conv-transpose layers, as this is a critical contribution of the DCGAN
paper. These layers help with the flow of gradients during training. An
image of the generator from the DCGAN paper is shown below.

.. figure:: /_static/img/dcgan_generator.png
   :alt: dcgan_generator

Notice how the inputs we set in the input section (*nz*, *ngf*, and
*nc*) influence the generator architecture in code. *nz* is the length
of the z input vector, *ngf* relates to the size of the feature maps
that are propagated through the generator, and *nc* is the number of
channels in the output image (set to 3 for RGB images). Below is the
code for the generator.



.. code-block:: default


    # Generator Code

    class Generator(nn.Module):
        def __init__(self, ngpu):
            super(Generator, self).__init__()
            self.ngpu = ngpu
            self.main = nn.Sequential(
                # input is Z, going into a convolution
                nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False),
                nn.BatchNorm2d(ngf * 8),
                nn.ReLU(True),
                # state size. (ngf*8) x 4 x 4
                nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 4),
                nn.ReLU(True),
                # state size. (ngf*4) x 8 x 8
                nn.ConvTranspose2d( ngf * 4, ngf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 2),
                nn.ReLU(True),
                # state size. (ngf*2) x 16 x 16
                nn.ConvTranspose2d( ngf * 2, ngf, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf),
                nn.ReLU(True),
                # state size. (ngf) x 32 x 32
                nn.ConvTranspose2d( ngf, nc, 4, 2, 1, bias=False),
                nn.Tanh()
                # state size. (nc) x 64 x 64
            )

        def forward(self, input):
            return self.main(input)








Now, we can instantiate the generator and apply the ``weights_init``
function. Check out the printed model to see how the generator object is
structured.



.. code-block:: default


    # Create the generator
    netG = Generator(ngpu).to(device)

    # Handle multi-gpu if desired
    if (device.type == 'cuda') and (ngpu > 1):
        netG = nn.DataParallel(netG, list(range(ngpu)))

    # Apply the weights_init function to randomly initialize all weights
    #  to mean=0, stdev=0.02.
    netG.apply(weights_init)

    # Print the model
    print(netG)






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Generator(
      (main): Sequential(
        (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
        (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (5): ReLU(inplace=True)
        (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (8): ReLU(inplace=True)
        (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (11): ReLU(inplace=True)
        (12): ConvTranspose2d(64, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (13): Tanh()
      )
    )
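
As a quick sanity check (illustrative only, assuming ``netG``, ``nz``, and
``device`` are defined as above), the generator maps a batch of latent
vectors of shape ``(N, nz, 1, 1)`` to ``(N, 3, 64, 64)`` images whose values
lie in :math:`[-1,1]` thanks to the final Tanh:

.. code-block:: python

    # Illustrative sanity check of the generator's input/output shapes
    z = torch.randn(16, nz, 1, 1, device=device)
    with torch.no_grad():
        samples = netG(z)
    print(samples.shape)                               # torch.Size([16, 3, 64, 64])
    print(samples.min().item(), samples.max().item())  # values stay within [-1, 1]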


Discriminator
~~~~~~~~~~~~~

As mentioned, the discriminator, :math:`D`, is a binary classification
network that takes an image as input and outputs a scalar probability
that the input image is real (as opposed to fake). Here, :math:`D` takes
a 3x64x64 input image, processes it through a series of Conv2d,
BatchNorm2d, and LeakyReLU layers, and outputs the final probability
through a Sigmoid activation function. This architecture can be extended
with more layers if necessary for the problem, but there is significance
to the use of the strided convolution, BatchNorm, and LeakyReLUs. The
DCGAN paper mentions it is a good practice to use strided convolution
rather than pooling to downsample because it lets the network learn its
own pooling function. Also, the batch norm and leaky ReLU functions
promote healthy gradient flow, which is critical for the learning
process of both :math:`G` and :math:`D`.


Discriminator Code


.. code-block:: default


    class Discriminator(nn.Module):
        def __init__(self, ngpu):
            super(Discriminator, self).__init__()
            self.ngpu = ngpu
            self.main = nn.Sequential(
                # input is (nc) x 64 x 64
                nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf) x 32 x 32
                nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ndf * 2),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf*2) x 16 x 16
                nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ndf * 4),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf*4) x 8 x 8
                nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ndf * 8),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf*8) x 4 x 4
                nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
                nn.Sigmoid()
            )

        def forward(self, input):
            return self.main(input)








Now, as with the generator, we can create the discriminator, apply the
``weights_init`` function, and print the model’s structure.



.. code-block:: default


    # Create the Discriminator
    netD = Discriminator(ngpu).to(device)

    # Handle multi-gpu if desired
    if (device.type == 'cuda') and (ngpu > 1):
        netD = nn.DataParallel(netD, list(range(ngpu)))
    
    # Apply the weights_init function to randomly initialize all weights
    #  to mean=0, stdev=0.02.
    netD.apply(weights_init)

    # Print the model
    print(netD)






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Discriminator(
      (main): Sequential(
        (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (1): LeakyReLU(negative_slope=0.2, inplace=True)
        (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): LeakyReLU(negative_slope=0.2, inplace=True)
        (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): LeakyReLU(negative_slope=0.2, inplace=True)
        (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (10): LeakyReLU(negative_slope=0.2, inplace=True)
        (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
        (12): Sigmoid()
      )
    )
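
Analogously (again purely illustrative, assuming ``netD`` and ``device``
from the cells above), the discriminator collapses a batch of 3x64x64
images down to one probability per image:

.. code-block:: python

    # Illustrative only: D maps (N, 3, 64, 64) images to N scalar probabilities
    imgs = torch.randn(16, 3, 64, 64, device=device)
    with torch.no_grad():
        probs = netD(imgs).view(-1)
    print(probs.shape)                                              # torch.Size([16])
    print(0.0 <= probs.min().item() and probs.max().item() <= 1.0)  # True, thanks to the Sigmoid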


Loss Functions and Optimizers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With :math:`D` and :math:`G` setup, we can specify how they learn
through the loss functions and optimizers. We will use the Binary Cross
Entropy loss
(`BCELoss <https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss>`__)
function which is defined in PyTorch as:

.. math:: \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]

Notice how this function provides the calculation of both log components
in the objective function (i.e. :math:`log(D(x))` and
:math:`log(1-D(G(z)))`). We can specify what part of the BCE equation to
use with the :math:`y` input. This is accomplished in the training loop
which is coming up soon, but it is important to understand how we can
choose which component we wish to calculate just by changing :math:`y`
(i.e. GT labels).
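
For example (purely illustrative, using the ``torch`` and ``nn`` modules
imported earlier), with a prediction of 0.9, a target of 1 selects the
:math:`-log(x)` term of the BCE loss, while a target of 0 selects the
:math:`-log(1-x)` term:

.. code-block:: python

    # Illustrative only: BCELoss reduces to -log(x) for target 1
    # and to -log(1 - x) for target 0.
    bce = nn.BCELoss()
    pred = torch.tensor([0.9])
    print(bce(pred, torch.ones(1)))   # tensor(0.1054)  == -log(0.9)
    print(bce(pred, torch.zeros(1)))  # tensor(2.3026)  == -log(0.1)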

Next, we define our real label as 1 and the fake label as 0. These
labels will be used when calculating the losses of :math:`D` and
:math:`G`, and this is also the convention used in the original GAN
paper. Finally, we set up two separate optimizers, one for :math:`D` and
one for :math:`G`. As specified in the DCGAN paper, both are Adam
optimizers with learning rate 0.0002 and Beta1 = 0.5. To keep track
of the generator’s learning progression, we will generate a fixed batch
of latent vectors drawn from a Gaussian distribution
(i.e. fixed_noise). In the training loop, we will periodically input
this fixed_noise into :math:`G`, and over the iterations we will see
images form out of the noise.



.. code-block:: default


    # Initialize BCELoss function
    criterion = nn.BCELoss()

    # Create batch of latent vectors that we will use to visualize
    #  the progression of the generator
    fixed_noise = torch.randn(64, nz, 1, 1, device=device)

    # Establish convention for real and fake labels during training
    real_label = 1.
    fake_label = 0.

    # Setup Adam optimizers for both G and D
    optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
    optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))








Training
~~~~~~~~

Finally, now that we have all of the parts of the GAN framework defined,
we can train it. Be mindful that training GANs is somewhat of an art
form, as incorrect hyperparameter settings lead to mode collapse with
little explanation of what went wrong. Here, we will closely follow
Algorithm 1 from Goodfellow’s paper, while abiding by some of the best
practices shown in `ganhacks <https://github.com/soumith/ganhacks>`__.
Namely, we will “construct different mini-batches for real and fake”
images, and also adjust G’s objective function to maximize
:math:`logD(G(z))`. Training is split up into two main parts. Part 1
updates the Discriminator and Part 2 updates the Generator.

**Part 1 - Train the Discriminator**

Recall, the goal of training the discriminator is to maximize the
probability of correctly classifying a given input as real or fake. In
terms of Goodfellow, we wish to “update the discriminator by ascending
its stochastic gradient”. Practically, we want to maximize
:math:`log(D(x)) + log(1-D(G(z)))`. Due to the separate mini-batch
suggestion from ganhacks, we will calculate this in two steps. First, we
will construct a batch of real samples from the training set, forward
pass through :math:`D`, calculate the loss (:math:`log(D(x))`), then
calculate the gradients in a backward pass. Secondly, we will construct
a batch of fake samples with the current generator, forward pass this
batch through :math:`D`, calculate the loss (:math:`log(1-D(G(z)))`),
and *accumulate* the gradients with a backward pass. Now, with the
gradients accumulated from both the all-real and all-fake batches, we
call a step of the Discriminator’s optimizer.

**Part 2 - Train the Generator**

As stated in the original paper, we want to train the Generator by
minimizing :math:`log(1-D(G(z)))` in an effort to generate better fakes.
As mentioned, this was shown by Goodfellow to not provide sufficient
gradients, especially early in the learning process. As a fix, we
instead wish to maximize :math:`log(D(G(z)))`. In the code we accomplish
this by: classifying the Generator output from Part 1 with the
Discriminator, computing G’s loss *using real labels as GT*, computing
G’s gradients in a backward pass, and finally updating G’s parameters
with an optimizer step. It may seem counter-intuitive to use the real
labels as GT labels for the loss function, but this allows us to use the
:math:`log(x)` part of the BCELoss (rather than the :math:`log(1-x)`
part) which is exactly what we want.
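
To see why this matters numerically, here is a small illustrative
computation (not part of the tutorial’s training code): when
:math:`D(G(z))` is close to 0, as it typically is early in training, the
gradient of :math:`log(1-D(G(z)))` with respect to :math:`D(G(z))` is
small, while the gradient of :math:`-log(D(G(z)))` is large:

.. code-block:: python

    import torch

    # Illustrative only: compare gradients of the two generator objectives
    # when the discriminator easily rejects the fakes (D(G(z)) ~ 0.01).
    d_gz = torch.tensor(0.01, requires_grad=True)

    torch.log(1 - d_gz).backward()
    print(d_gz.grad)   # tensor(-1.0101)  -- nearly flat, weak learning signal

    d_gz.grad = None
    (-torch.log(d_gz)).backward()
    print(d_gz.grad)   # tensor(-100.)    -- much stronger learning signal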

Finally, we will do some statistics reporting, and at the end of each
epoch we will push our fixed_noise batch through the generator to
visually track the progress of G’s training. The training statistics
reported are:

-  **Loss_D** - discriminator loss calculated as the sum of losses for
   the all real and all fake batches (:math:`log(D(x)) + log(1 - D(G(z)))`).
-  **Loss_G** - generator loss calculated as :math:`log(D(G(z)))`
-  **D(x)** - the average output (across the batch) of the discriminator
   for the all real batch. This should start close to 1 then
   theoretically converge to 0.5 when G gets better. Think about why
   this is.
-  **D(G(z))** - average discriminator outputs for the all fake batch.
   The first number is before D is updated and the second number is
   after D is updated. These numbers should start near 0 and converge to
   0.5 as G gets better. Think about why this is.

**Note:** This step might take a while, depending on how many epochs you
run and if you removed some data from the dataset.



.. code-block:: default


    # Training Loop

    # Lists to keep track of progress
    img_list = []
    G_losses = []
    D_losses = []
    iters = 0

    print("Starting Training Loop...")
    # For each epoch
    for epoch in range(num_epochs):
        # For each batch in the dataloader
        for i, data in enumerate(dataloader, 0):
        
            ############################
            # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
            ###########################
            ## Train with all-real batch
            netD.zero_grad()
            # Format batch
            real_cpu = data[0].to(device)
            b_size = real_cpu.size(0)
            label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
            # Forward pass real batch through D
            output = netD(real_cpu).view(-1)
            # Calculate loss on all-real batch
            errD_real = criterion(output, label)
            # Calculate gradients for D in backward pass
            errD_real.backward()
            D_x = output.mean().item()

            ## Train with all-fake batch
            # Generate batch of latent vectors
            noise = torch.randn(b_size, nz, 1, 1, device=device)
            # Generate fake image batch with G
            fake = netG(noise)
            label.fill_(fake_label)
            # Classify all fake batch with D
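            # (.detach() stops gradients from flowing back into G, so D's update leaves G's gradients untouched)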
            output = netD(fake.detach()).view(-1)
            # Calculate D's loss on the all-fake batch
            errD_fake = criterion(output, label)
            # Calculate the gradients for this batch, accumulated (summed) with previous gradients
            errD_fake.backward()
            D_G_z1 = output.mean().item()
            # Compute error of D as sum over the fake and the real batches
            errD = errD_real + errD_fake
            # Update D
            optimizerD.step()

            ############################
            # (2) Update G network: maximize log(D(G(z)))
            ###########################
            netG.zero_grad()
            label.fill_(real_label)  # fake labels are real for generator cost
            # Since we just updated D, perform another forward pass of all-fake batch through D
            output = netD(fake).view(-1)
            # Calculate G's loss based on this output
            errG = criterion(output, label)
            # Calculate gradients for G
            errG.backward()
            D_G_z2 = output.mean().item()
            # Update G
            optimizerG.step()
        
            # Output training stats
            if i % 50 == 0:
                print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                      % (epoch, num_epochs, i, len(dataloader),
                         errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
        
            # Save Losses for plotting later
            G_losses.append(errG.item())
            D_losses.append(errD.item())
        
            # Check how the generator is doing by saving G's output on fixed_noise
            if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
                with torch.no_grad():
                    fake = netG(fixed_noise).detach().cpu()
                img_list.append(vutils.make_grid(fake, padding=2, normalize=True))
            
            iters += 1






.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    Starting Training Loop...
    [0/5][0/1583]   Loss_D: 1.6264  Loss_G: 5.5242  D(x): 0.5733    D(G(z)): 0.5501 / 0.0065
    [0/5][50/1583]  Loss_D: 0.1114  Loss_G: 23.4742 D(x): 0.9339    D(G(z)): 0.0000 / 0.0000
    [0/5][100/1583] Loss_D: 0.3762  Loss_G: 7.0341  D(x): 0.9237    D(G(z)): 0.1865 / 0.0046
    [0/5][150/1583] Loss_D: 0.8386  Loss_G: 6.3855  D(x): 0.9629    D(G(z)): 0.5016 / 0.0039
    [0/5][200/1583] Loss_D: 0.4339  Loss_G: 5.4600  D(x): 0.8892    D(G(z)): 0.2140 / 0.0092
    [0/5][250/1583] Loss_D: 0.5490  Loss_G: 4.5556  D(x): 0.8126    D(G(z)): 0.1890 / 0.0194
    [0/5][300/1583] Loss_D: 1.7886  Loss_G: 5.4333  D(x): 0.3171    D(G(z)): 0.0010 / 0.0135
    [0/5][350/1583] Loss_D: 0.4518  Loss_G: 3.3310  D(x): 0.8501    D(G(z)): 0.1995 / 0.0539
    [0/5][400/1583] Loss_D: 0.5199  Loss_G: 3.6276  D(x): 0.8246    D(G(z)): 0.2059 / 0.0483
    [0/5][450/1583] Loss_D: 0.5284  Loss_G: 4.0333  D(x): 0.8048    D(G(z)): 0.1978 / 0.0273
    [0/5][500/1583] Loss_D: 0.6033  Loss_G: 8.4737  D(x): 0.8796    D(G(z)): 0.3173 / 0.0006
    [0/5][550/1583] Loss_D: 0.7022  Loss_G: 5.4508  D(x): 0.6441    D(G(z)): 0.0176 / 0.0078
    [0/5][600/1583] Loss_D: 0.4758  Loss_G: 4.4126  D(x): 0.7855    D(G(z)): 0.1233 / 0.0238
    [0/5][650/1583] Loss_D: 0.5149  Loss_G: 3.9850  D(x): 0.8638    D(G(z)): 0.2435 / 0.0335
    [0/5][700/1583] Loss_D: 0.4534  Loss_G: 5.0981  D(x): 0.8910    D(G(z)): 0.2449 / 0.0127
    [0/5][750/1583] Loss_D: 0.6278  Loss_G: 3.8169  D(x): 0.6902    D(G(z)): 0.1095 / 0.0441
    [0/5][800/1583] Loss_D: 1.0548  Loss_G: 3.0892  D(x): 0.4656    D(G(z)): 0.0103 / 0.0887
    [0/5][850/1583] Loss_D: 0.6514  Loss_G: 3.3644  D(x): 0.6789    D(G(z)): 0.0920 / 0.0591
    [0/5][900/1583] Loss_D: 0.4422  Loss_G: 4.4616  D(x): 0.8122    D(G(z)): 0.1277 / 0.0266
    [0/5][950/1583] Loss_D: 0.4723  Loss_G: 2.9061  D(x): 0.7486    D(G(z)): 0.0873 / 0.0828
    [0/5][1000/1583]        Loss_D: 0.2464  Loss_G: 4.2105  D(x): 0.9499    D(G(z)): 0.1494 / 0.0300
    [0/5][1050/1583]        Loss_D: 0.3191  Loss_G: 3.8300  D(x): 0.8011    D(G(z)): 0.0521 / 0.0355
    [0/5][1100/1583]        Loss_D: 0.7374  Loss_G: 6.6006  D(x): 0.8990    D(G(z)): 0.3993 / 0.0029
    [0/5][1150/1583]        Loss_D: 0.7097  Loss_G: 5.0215  D(x): 0.9179    D(G(z)): 0.3838 / 0.0152
    [0/5][1200/1583]        Loss_D: 0.3887  Loss_G: 4.5167  D(x): 0.8335    D(G(z)): 0.1324 / 0.0223
    [0/5][1250/1583]        Loss_D: 0.5976  Loss_G: 5.1642  D(x): 0.8116    D(G(z)): 0.2645 / 0.0088
    [0/5][1300/1583]        Loss_D: 0.5579  Loss_G: 5.4790  D(x): 0.8874    D(G(z)): 0.3044 / 0.0080
    [0/5][1350/1583]        Loss_D: 0.4429  Loss_G: 3.9459  D(x): 0.8400    D(G(z)): 0.1866 / 0.0283
    [0/5][1400/1583]        Loss_D: 0.7603  Loss_G: 5.9099  D(x): 0.9328    D(G(z)): 0.4162 / 0.0068
    [0/5][1450/1583]        Loss_D: 0.6680  Loss_G: 2.5292  D(x): 0.6290    D(G(z)): 0.0628 / 0.1193
    [0/5][1500/1583]        Loss_D: 0.7403  Loss_G: 3.5778  D(x): 0.7298    D(G(z)): 0.2564 / 0.0479
    [0/5][1550/1583]        Loss_D: 0.9482  Loss_G: 2.2228  D(x): 0.5085    D(G(z)): 0.0259 / 0.1950
    [1/5][0/1583]   Loss_D: 0.4829  Loss_G: 5.4435  D(x): 0.8927    D(G(z)): 0.2774 / 0.0062
    [1/5][50/1583]  Loss_D: 0.5384  Loss_G: 3.5764  D(x): 0.7957    D(G(z)): 0.1975 / 0.0468
    [1/5][100/1583] Loss_D: 0.4915  Loss_G: 2.9807  D(x): 0.7410    D(G(z)): 0.1189 / 0.0823
    [1/5][150/1583] Loss_D: 0.5866  Loss_G: 6.3416  D(x): 0.9297    D(G(z)): 0.3542 / 0.0037
    [1/5][200/1583] Loss_D: 0.2911  Loss_G: 2.6558  D(x): 0.8257    D(G(z)): 0.0579 / 0.1377
    [1/5][250/1583] Loss_D: 0.8595  Loss_G: 1.5750  D(x): 0.5520    D(G(z)): 0.0478 / 0.2838
    [1/5][300/1583] Loss_D: 0.5594  Loss_G: 4.5668  D(x): 0.9297    D(G(z)): 0.3349 / 0.0200
    [1/5][350/1583] Loss_D: 0.5649  Loss_G: 2.7800  D(x): 0.7363    D(G(z)): 0.1643 / 0.0889
    [1/5][400/1583] Loss_D: 0.9665  Loss_G: 1.0203  D(x): 0.4794    D(G(z)): 0.0218 / 0.4487
    [1/5][450/1583] Loss_D: 0.3356  Loss_G: 4.6167  D(x): 0.9052    D(G(z)): 0.1766 / 0.0187
    [1/5][500/1583] Loss_D: 0.4024  Loss_G: 2.8893  D(x): 0.7783    D(G(z)): 0.0861 / 0.0854
    [1/5][550/1583] Loss_D: 0.3540  Loss_G: 3.9102  D(x): 0.8786    D(G(z)): 0.1807 / 0.0298
    [1/5][600/1583] Loss_D: 0.5110  Loss_G: 2.5565  D(x): 0.7344    D(G(z)): 0.1232 / 0.1101
    [1/5][650/1583] Loss_D: 0.3995  Loss_G: 2.9546  D(x): 0.8101    D(G(z)): 0.1219 / 0.0742
    [1/5][700/1583] Loss_D: 0.4375  Loss_G: 3.3834  D(x): 0.8453    D(G(z)): 0.2048 / 0.0480
    [1/5][750/1583] Loss_D: 0.6208  Loss_G: 4.6837  D(x): 0.8452    D(G(z)): 0.3229 / 0.0138
    [1/5][800/1583] Loss_D: 1.1145  Loss_G: 3.9938  D(x): 0.4306    D(G(z)): 0.0047 / 0.0338
    [1/5][850/1583] Loss_D: 0.4114  Loss_G: 4.0355  D(x): 0.8738    D(G(z)): 0.1994 / 0.0298
    [1/5][900/1583] Loss_D: 0.4831  Loss_G: 2.5995  D(x): 0.7637    D(G(z)): 0.1497 / 0.1003
    [1/5][950/1583] Loss_D: 0.3806  Loss_G: 3.7194  D(x): 0.8905    D(G(z)): 0.2062 / 0.0358
    [1/5][1000/1583]        Loss_D: 1.5130  Loss_G: 6.1749  D(x): 0.9722    D(G(z)): 0.6690 / 0.0062
    [1/5][1050/1583]        Loss_D: 0.5971  Loss_G: 3.1698  D(x): 0.6297    D(G(z)): 0.0186 / 0.0726
    [1/5][1100/1583]        Loss_D: 1.0992  Loss_G: 6.0371  D(x): 0.9737    D(G(z)): 0.5846 / 0.0060
    [1/5][1150/1583]        Loss_D: 0.4457  Loss_G: 2.1755  D(x): 0.7468    D(G(z)): 0.1056 / 0.1586
    [1/5][1200/1583]        Loss_D: 0.3984  Loss_G: 3.4682  D(x): 0.7907    D(G(z)): 0.0971 / 0.0539
    [1/5][1250/1583]        Loss_D: 0.5057  Loss_G: 4.7373  D(x): 0.9230    D(G(z)): 0.3077 / 0.0141
    [1/5][1300/1583]        Loss_D: 0.9985  Loss_G: 5.8650  D(x): 0.8885    D(G(z)): 0.5133 / 0.0083
    [1/5][1350/1583]        Loss_D: 0.4293  Loss_G: 2.6021  D(x): 0.7474    D(G(z)): 0.0751 / 0.1019
    [1/5][1400/1583]        Loss_D: 0.5045  Loss_G: 2.8468  D(x): 0.7427    D(G(z)): 0.1037 / 0.0881
    [1/5][1450/1583]        Loss_D: 0.3507  Loss_G: 3.0918  D(x): 0.7724    D(G(z)): 0.0511 / 0.0655
    [1/5][1500/1583]        Loss_D: 1.1847  Loss_G: 3.3812  D(x): 0.8411    D(G(z)): 0.5284 / 0.0725
    [1/5][1550/1583]        Loss_D: 0.4028  Loss_G: 3.5689  D(x): 0.8817    D(G(z)): 0.2147 / 0.0423
    [2/5][0/1583]   Loss_D: 0.7377  Loss_G: 4.6098  D(x): 0.9659    D(G(z)): 0.4522 / 0.0167
    [2/5][50/1583]  Loss_D: 0.5393  Loss_G: 4.0369  D(x): 0.8889    D(G(z)): 0.3072 / 0.0254
    [2/5][100/1583] Loss_D: 0.4597  Loss_G: 2.5869  D(x): 0.7958    D(G(z)): 0.1736 / 0.0993
    [2/5][150/1583] Loss_D: 0.5577  Loss_G: 2.2547  D(x): 0.6718    D(G(z)): 0.0836 / 0.1634
    [2/5][200/1583] Loss_D: 0.7188  Loss_G: 1.5030  D(x): 0.5650    D(G(z)): 0.0443 / 0.2827
    [2/5][250/1583] Loss_D: 1.1683  Loss_G: 6.1777  D(x): 0.9521    D(G(z)): 0.6162 / 0.0036
    [2/5][300/1583] Loss_D: 0.6982  Loss_G: 2.5517  D(x): 0.7172    D(G(z)): 0.2429 / 0.1032
    [2/5][350/1583] Loss_D: 0.4838  Loss_G: 2.2077  D(x): 0.7890    D(G(z)): 0.1824 / 0.1401
    [2/5][400/1583] Loss_D: 0.4602  Loss_G: 2.7857  D(x): 0.8668    D(G(z)): 0.2405 / 0.0777
    [2/5][450/1583] Loss_D: 0.4930  Loss_G: 2.5116  D(x): 0.8281    D(G(z)): 0.2305 / 0.1044
    [2/5][500/1583] Loss_D: 0.4766  Loss_G: 2.6458  D(x): 0.7167    D(G(z)): 0.0872 / 0.0968
    [2/5][550/1583] Loss_D: 0.5509  Loss_G: 3.9324  D(x): 0.9133    D(G(z)): 0.3293 / 0.0295
    [2/5][600/1583] Loss_D: 0.7929  Loss_G: 1.1335  D(x): 0.5408    D(G(z)): 0.0511 / 0.3819
    [2/5][650/1583] Loss_D: 0.8192  Loss_G: 2.1384  D(x): 0.7262    D(G(z)): 0.3206 / 0.1629
    [2/5][700/1583] Loss_D: 0.4602  Loss_G: 2.7125  D(x): 0.8307    D(G(z)): 0.2093 / 0.0920
    [2/5][750/1583] Loss_D: 0.5119  Loss_G: 2.2325  D(x): 0.7761    D(G(z)): 0.1892 / 0.1344
    [2/5][800/1583] Loss_D: 1.1312  Loss_G: 0.4463  D(x): 0.4050    D(G(z)): 0.0485 / 0.6771
    [2/5][850/1583] Loss_D: 0.7631  Loss_G: 3.0198  D(x): 0.8333    D(G(z)): 0.3977 / 0.0615
    [2/5][900/1583] Loss_D: 0.5862  Loss_G: 2.4207  D(x): 0.8249    D(G(z)): 0.2861 / 0.1113
    [2/5][950/1583] Loss_D: 0.5416  Loss_G: 3.7393  D(x): 0.8790    D(G(z)): 0.3037 / 0.0308
    [2/5][1000/1583]        Loss_D: 0.5487  Loss_G: 2.5007  D(x): 0.8409    D(G(z)): 0.2817 / 0.1058
    [2/5][1050/1583]        Loss_D: 0.7193  Loss_G: 4.0475  D(x): 0.9127    D(G(z)): 0.4277 / 0.0262
    [2/5][1100/1583]        Loss_D: 1.4707  Loss_G: 4.6470  D(x): 0.9758    D(G(z)): 0.6966 / 0.0181
    [2/5][1150/1583]        Loss_D: 0.5844  Loss_G: 2.0984  D(x): 0.7574    D(G(z)): 0.2236 / 0.1480
    [2/5][1200/1583]        Loss_D: 0.3437  Loss_G: 2.2059  D(x): 0.8207    D(G(z)): 0.1176 / 0.1335
    [2/5][1250/1583]        Loss_D: 0.5055  Loss_G: 2.3362  D(x): 0.8170    D(G(z)): 0.2368 / 0.1258
    [2/5][1300/1583]        Loss_D: 0.4507  Loss_G: 2.8706  D(x): 0.7544    D(G(z)): 0.1161 / 0.0766
    [2/5][1350/1583]        Loss_D: 0.5470  Loss_G: 3.3723  D(x): 0.8926    D(G(z)): 0.3214 / 0.0463
    [2/5][1400/1583]        Loss_D: 0.9641  Loss_G: 3.0495  D(x): 0.8483    D(G(z)): 0.4755 / 0.0721
    [2/5][1450/1583]        Loss_D: 0.5181  Loss_G: 3.5450  D(x): 0.9221    D(G(z)): 0.3303 / 0.0355
    [2/5][1500/1583]        Loss_D: 0.5329  Loss_G: 3.0808  D(x): 0.8424    D(G(z)): 0.2737 / 0.0591
    [2/5][1550/1583]        Loss_D: 0.9095  Loss_G: 4.0908  D(x): 0.9340    D(G(z)): 0.5122 / 0.0277
    [3/5][0/1583]   Loss_D: 0.4430  Loss_G: 3.3243  D(x): 0.8835    D(G(z)): 0.2486 / 0.0495
    [3/5][50/1583]  Loss_D: 0.6570  Loss_G: 2.4231  D(x): 0.7289    D(G(z)): 0.2482 / 0.1092
    [3/5][100/1583] Loss_D: 0.5586  Loss_G: 2.5726  D(x): 0.7371    D(G(z)): 0.1860 / 0.0970
    [3/5][150/1583] Loss_D: 0.5271  Loss_G: 2.3653  D(x): 0.7921    D(G(z)): 0.2170 / 0.1213
    [3/5][200/1583] Loss_D: 0.6721  Loss_G: 1.2943  D(x): 0.6453    D(G(z)): 0.1418 / 0.3147
    [3/5][250/1583] Loss_D: 0.4530  Loss_G: 2.1571  D(x): 0.7205    D(G(z)): 0.0878 / 0.1428
    [3/5][300/1583] Loss_D: 0.8066  Loss_G: 1.5920  D(x): 0.5923    D(G(z)): 0.1741 / 0.2636
    [3/5][350/1583] Loss_D: 0.5565  Loss_G: 2.7188  D(x): 0.8051    D(G(z)): 0.2518 / 0.0851
    [3/5][400/1583] Loss_D: 0.8214  Loss_G: 2.9371  D(x): 0.8724    D(G(z)): 0.4426 / 0.0722
    [3/5][450/1583] Loss_D: 1.0815  Loss_G: 4.5328  D(x): 0.9108    D(G(z)): 0.5763 / 0.0169
    [3/5][500/1583] Loss_D: 0.6918  Loss_G: 1.7718  D(x): 0.6808    D(G(z)): 0.1928 / 0.2130
    [3/5][550/1583] Loss_D: 0.5033  Loss_G: 1.8411  D(x): 0.7257    D(G(z)): 0.1302 / 0.1931
    [3/5][600/1583] Loss_D: 0.5027  Loss_G: 2.3626  D(x): 0.8346    D(G(z)): 0.2338 / 0.1285
    [3/5][650/1583] Loss_D: 0.6631  Loss_G: 2.0975  D(x): 0.7444    D(G(z)): 0.2593 / 0.1485
    [3/5][700/1583] Loss_D: 1.7371  Loss_G: 4.9957  D(x): 0.9629    D(G(z)): 0.7359 / 0.0122
    [3/5][750/1583] Loss_D: 1.4641  Loss_G: 3.9181  D(x): 0.9288    D(G(z)): 0.6653 / 0.0346
    [3/5][800/1583] Loss_D: 0.5301  Loss_G: 2.9642  D(x): 0.8297    D(G(z)): 0.2536 / 0.0668
    [3/5][850/1583] Loss_D: 0.5240  Loss_G: 2.1904  D(x): 0.8074    D(G(z)): 0.2362 / 0.1363
    [3/5][900/1583] Loss_D: 0.9364  Loss_G: 1.2861  D(x): 0.4764    D(G(z)): 0.0516 / 0.3363
    [3/5][950/1583] Loss_D: 0.6738  Loss_G: 4.1477  D(x): 0.8992    D(G(z)): 0.4022 / 0.0222
    [3/5][1000/1583]        Loss_D: 1.6685  Loss_G: 0.2151  D(x): 0.2626    D(G(z)): 0.0157 / 0.8410
    [3/5][1050/1583]        Loss_D: 0.9580  Loss_G: 0.6088  D(x): 0.4859    D(G(z)): 0.1081 / 0.5795
    [3/5][1100/1583]        Loss_D: 0.5047  Loss_G: 3.5423  D(x): 0.8540    D(G(z)): 0.2623 / 0.0425
    [3/5][1150/1583]        Loss_D: 0.8336  Loss_G: 1.6990  D(x): 0.5161    D(G(z)): 0.0700 / 0.2464
    [3/5][1200/1583]        Loss_D: 0.5757  Loss_G: 2.4373  D(x): 0.7537    D(G(z)): 0.2142 / 0.1214
    [3/5][1250/1583]        Loss_D: 0.6389  Loss_G: 3.1549  D(x): 0.8880    D(G(z)): 0.3575 / 0.0642
    [3/5][1300/1583]        Loss_D: 1.0447  Loss_G: 4.0795  D(x): 0.8784    D(G(z)): 0.5382 / 0.0261
    [3/5][1350/1583]        Loss_D: 0.6479  Loss_G: 1.9783  D(x): 0.8343    D(G(z)): 0.3247 / 0.1747
    [3/5][1400/1583]        Loss_D: 0.7147  Loss_G: 3.5368  D(x): 0.8459    D(G(z)): 0.3786 / 0.0413
    [3/5][1450/1583]        Loss_D: 0.5627  Loss_G: 2.9205  D(x): 0.8159    D(G(z)): 0.2682 / 0.0721
    [3/5][1500/1583]        Loss_D: 0.6769  Loss_G: 3.1224  D(x): 0.8233    D(G(z)): 0.3376 / 0.0601
    [3/5][1550/1583]        Loss_D: 0.5646  Loss_G: 2.7571  D(x): 0.8818    D(G(z)): 0.3223 / 0.0811
    [4/5][0/1583]   Loss_D: 1.3418  Loss_G: 0.9554  D(x): 0.3348    D(G(z)): 0.0177 / 0.4459
    [4/5][50/1583]  Loss_D: 0.7190  Loss_G: 4.2272  D(x): 0.9131    D(G(z)): 0.4125 / 0.0206
    [4/5][100/1583] Loss_D: 1.0983  Loss_G: 0.9872  D(x): 0.4127    D(G(z)): 0.0345 / 0.4148
    [4/5][150/1583] Loss_D: 0.7669  Loss_G: 1.4016  D(x): 0.5799    D(G(z)): 0.1152 / 0.2821
    [4/5][200/1583] Loss_D: 0.7154  Loss_G: 1.6569  D(x): 0.5755    D(G(z)): 0.0667 / 0.2369
    [4/5][250/1583] Loss_D: 0.7288  Loss_G: 3.5352  D(x): 0.8817    D(G(z)): 0.4094 / 0.0418
    [4/5][300/1583] Loss_D: 0.6061  Loss_G: 2.5184  D(x): 0.7974    D(G(z)): 0.2757 / 0.1004
    [4/5][350/1583] Loss_D: 0.4088  Loss_G: 2.4608  D(x): 0.7802    D(G(z)): 0.1232 / 0.1116
    [4/5][400/1583] Loss_D: 0.5897  Loss_G: 2.9590  D(x): 0.8059    D(G(z)): 0.2752 / 0.0668
    [4/5][450/1583] Loss_D: 0.5102  Loss_G: 2.6430  D(x): 0.8532    D(G(z)): 0.2687 / 0.0899
    [4/5][500/1583] Loss_D: 0.5295  Loss_G: 2.3221  D(x): 0.7113    D(G(z)): 0.1296 / 0.1285
    [4/5][550/1583] Loss_D: 0.8546  Loss_G: 1.4486  D(x): 0.5382    D(G(z)): 0.1096 / 0.2841
    [4/5][600/1583] Loss_D: 3.3059  Loss_G: 0.3194  D(x): 0.0628    D(G(z)): 0.0169 / 0.7621
    [4/5][650/1583] Loss_D: 1.0403  Loss_G: 0.8618  D(x): 0.4325    D(G(z)): 0.0455 / 0.4782
    [4/5][700/1583] Loss_D: 0.5510  Loss_G: 2.8474  D(x): 0.8633    D(G(z)): 0.2991 / 0.0757
    [4/5][750/1583] Loss_D: 0.4623  Loss_G: 2.8267  D(x): 0.7870    D(G(z)): 0.1715 / 0.0761
    [4/5][800/1583] Loss_D: 0.6027  Loss_G: 2.0048  D(x): 0.6876    D(G(z)): 0.1535 / 0.1719
    [4/5][850/1583] Loss_D: 0.7115  Loss_G: 1.9836  D(x): 0.6691    D(G(z)): 0.1985 / 0.1952
    [4/5][900/1583] Loss_D: 0.7157  Loss_G: 4.8457  D(x): 0.9443    D(G(z)): 0.4355 / 0.0119
    [4/5][950/1583] Loss_D: 0.5791  Loss_G: 3.1310  D(x): 0.8842    D(G(z)): 0.3268 / 0.0634
    [4/5][1000/1583]        Loss_D: 0.6034  Loss_G: 1.7793  D(x): 0.6845    D(G(z)): 0.1616 / 0.2084
    [4/5][1050/1583]        Loss_D: 0.6926  Loss_G: 3.3605  D(x): 0.8985    D(G(z)): 0.3938 / 0.0484
    [4/5][1100/1583]        Loss_D: 0.4776  Loss_G: 2.1112  D(x): 0.8122    D(G(z)): 0.2056 / 0.1499
    [4/5][1150/1583]        Loss_D: 0.5559  Loss_G: 3.1977  D(x): 0.8655    D(G(z)): 0.3034 / 0.0550
    [4/5][1200/1583]        Loss_D: 0.4915  Loss_G: 2.6348  D(x): 0.8186    D(G(z)): 0.2196 / 0.0936
    [4/5][1250/1583]        Loss_D: 0.5159  Loss_G: 2.3747  D(x): 0.7484    D(G(z)): 0.1679 / 0.1167
    [4/5][1300/1583]        Loss_D: 1.0828  Loss_G: 3.4695  D(x): 0.9347    D(G(z)): 0.5740 / 0.0454
    [4/5][1350/1583]        Loss_D: 0.5815  Loss_G: 2.0328  D(x): 0.7521    D(G(z)): 0.2216 / 0.1605
    [4/5][1400/1583]        Loss_D: 0.4324  Loss_G: 2.5044  D(x): 0.8152    D(G(z)): 0.1751 / 0.1057
    [4/5][1450/1583]        Loss_D: 0.5948  Loss_G: 2.4716  D(x): 0.8414    D(G(z)): 0.2993 / 0.1147
    [4/5][1500/1583]        Loss_D: 0.6803  Loss_G: 1.9229  D(x): 0.6134    D(G(z)): 0.1124 / 0.1857
    [4/5][1550/1583]        Loss_D: 0.6874  Loss_G: 1.4396  D(x): 0.6081    D(G(z)): 0.1117 / 0.2813


Results
-------

Finally, let’s check out how we did. Here, we will look at three
different results. First, we will see how D and G’s losses changed
during training. Second, we will visualize G’s output on the fixed_noise
batch for every epoch. And third, we will look at a batch of real data
next to a batch of fake data from G.

**Loss versus training iteration**

Below is a plot of D & G’s losses versus training iterations.



.. code-block:: default


    plt.figure(figsize=(10,5))
    plt.title("Generator and Discriminator Loss During Training")
    plt.plot(G_losses,label="G")
    plt.plot(D_losses,label="D")
    plt.xlabel("iterations")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()





.. image:: /beginner/images/sphx_glr_dcgan_faces_tutorial_002.png
    :class: sphx-glr-single-img




**Visualization of G’s progression**

Remember how we saved the generator’s output on the fixed_noise batch
periodically during training. Now, we can visualize the training
progression of G with an animation. Press the play button to start the
animation.



.. code-block:: default


    #%%capture
    fig = plt.figure(figsize=(8,8))
    plt.axis("off")
    ims = [[plt.imshow(np.transpose(i,(1,2,0)), animated=True)] for i in img_list]
    ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)

    HTML(ani.to_jshtml())





.. image:: /beginner/images/sphx_glr_dcgan_faces_tutorial_003.png
    :class: sphx-glr-single-img




**Real Images vs. Fake Images**

Finally, let’s take a look at some real images and fake images side by
side.



.. code-block:: default


    # Grab a batch of real images from the dataloader
    real_batch = next(iter(dataloader))

    # Plot the real images
    plt.figure(figsize=(15,15))
    plt.subplot(1,2,1)
    plt.axis("off")
    plt.title("Real Images")
    plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(),(1,2,0)))

    # Plot the fake images from the last epoch
    plt.subplot(1,2,2)
    plt.axis("off")
    plt.title("Fake Images")
    plt.imshow(np.transpose(img_list[-1],(1,2,0)))
    plt.show()





.. image:: /beginner/images/sphx_glr_dcgan_faces_tutorial_004.png
    :class: sphx-glr-single-img




Where to Go Next
----------------

We have reached the end of our journey, but there are several places you
could go from here. You could:

-  Train for longer to see how good the results get
-  Modify this model to take a different dataset and possibly change the
   size of the images and the model architecture
-  Check out some other cool GAN projects
   `here <https://github.com/nashory/gans-awesome-applications>`__
-  Create GANs that generate
   `music <https://www.deepmind.com/blog/wavenet-a-generative-model-for-raw-audio/>`__



.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 31 minutes  15.517 seconds)


.. _sphx_glr_download_beginner_dcgan_faces_tutorial.py:


.. only:: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example



  .. container:: sphx-glr-download

     :download:`Download Python source code: dcgan_faces_tutorial.py <dcgan_faces_tutorial.py>`



  .. container:: sphx-glr-download

     :download:`Download Jupyter notebook: dcgan_faces_tutorial.ipynb <dcgan_faces_tutorial.ipynb>`


.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.readthedocs.io>`_