Show List

Generative Adversarial Networks

Generative Adversarial Networks (GANs) are a type of neural network architecture that consists of two components: a generator and a discriminator. The generator is trained to produce new data that is similar to the training data, while the discriminator is trained to distinguish between the generated data and the real data. The generator and discriminator are trained simultaneously in a min-max game, where the generator tries to fool the discriminator by generating realistic data, and the discriminator tries to correctly classify the data as real or fake.

Here's an example of a simple GAN in Python using the Keras library:

scss
Copy code
from keras.models import Sequential from keras.layers import Dense, Flatten, Reshape from keras.optimizers import Adam import numpy as np import matplotlib.pyplot as plt # define the generator model generator = Sequential() generator.add(Dense(128, input_shape=(100,), activation='relu')) generator.add(Dense(784, activation='sigmoid')) generator.add(Reshape((28, 28))) # define the discriminator model discriminator = Sequential() discriminator.add(Flatten(input_shape=(28, 28))) discriminator.add(Dense(128, activation='relu')) discriminator.add(Dense(1, activation='sigmoid')) # define the GAN model gan = Sequential() gan.add(generator) gan.add(discriminator) # compile the discriminator model discriminator.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5), metrics=['accuracy']) # compile the GAN model gan.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5)) # load the MNIST dataset from keras.datasets import mnist (X_train, _), (_, _) = mnist.load_data() # normalize the data X_train = X_train / 255.0 # define the training loop def train(epochs, batch_size): for epoch in range(epochs): # train the discriminator idx = np.random.randint(0, X_train.shape[0], batch_size) real_images = X_train[idx] noise = np.random.normal(0, 1, (batch_size, 100)) fake_images = generator.predict(noise) x = np.concatenate((real_images, fake_images)) y = np.concatenate((np.ones((batch_size, 1)), np.zeros((batch_size, 1)))) d_loss, d_acc = discriminator.train_on_batch(x, y) # train the generator noise = np.random.normal(0, 1, (batch_size, 100)) g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1))) # print the progress print("Epoch:", epoch, "Discriminator loss:", d_loss, "Discriminator accuracy:", d_acc, "Generator loss:", g_loss) # save generated images if epoch % 100 == 0: noise = np.random.normal(0, 1, (10, 100)) generated_images = generator.predict(noise) plt.figure(figsize=(10, 1)) for i in range(10): plt.subplot(1, 10, i+1) plt.imshow(generated_images[i], cmap='gray') plt.axis('off') plt.savefig("gan_images/epoch_%d.png" % epoch) plt.close() # train the GAN train(epochs=5000, batch_size=128)

In this example, we first define the generator and discriminator models as separate Keras models. The generator takes a 100-dimensional noise vector as input and produces a 28x28 image as output. The discriminator takes a 28x28 image as input and produces a binary output indicating whether the image is real or fake.

We then combine the generator and discriminator models into a single GAN model by stacking them. We compile the discriminator model with binary cross-entropy loss and the Adam optimizer with a learning rate of 0.0002 and a beta value of 0.5. We compile the GAN model with binary cross-entropy loss and the same optimizer.

Next, we load the MNIST dataset and normalize the pixel values to be between 0 and 1. We define the training loop, which consists of training the discriminator on a batch of real and fake images, and then training the generator to produce images that are more likely to fool the discriminator. We print the loss and accuracy of the discriminator and generator for each epoch, and save 10 generated images every 100 epochs.

Finally, we call the train() function with 5000 epochs and a batch size of 128 to train the GAN. The generated images are saved in the "gan_images" directory.


    Leave a Comment


  • captcha text