# iDCGAN-iGAN
## Model Description
iDCGAN-iGAN is a variant of the Deep Convolutional Generative Adversarial Network (DCGAN) architecture, implemented in PyTorch and trained to generate images from input noise.
It is suitable for tasks such as image generation and creative applications where synthetic imagery is required.
## Training Procedure
iDCGAN-iGAN was trained adversarially: a generator network learns to create realistic images while a discriminator network learns to distinguish real images from generated ones. The two networks are optimized jointly, each improving in response to the other. A sketch of both networks follows the list below.
- **Generator Architecture**: Convolutional layers with transposed convolutions, batch normalization, and ReLU activations.
- **Discriminator Architecture**: Convolutional layers with LeakyReLU activations and batch normalization.
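
The exact layer configuration is not published in this card. The following is only a hedged sketch of what the two networks might look like, assuming a 64-dimensional latent code (matching the usage example below), 64x64 RGB output, and standard DCGAN channel widths; these are assumptions, not the released configuration.

```python
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: latent code -> 64x64 RGB image (illustrative sketch)."""
    def __init__(self, latent_dim=64, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # (latent_dim, 1, 1) -> (feature_maps*8, 4, 4)
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # -> (feature_maps*4, 8, 8)
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # -> (feature_maps*2, 16, 16)
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # -> (feature_maps, 32, 32)
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # -> (channels, 64, 64), values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """DCGAN-style discriminator: 64x64 RGB image -> real/fake logit (illustrative sketch)."""
    def __init__(self, feature_maps=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # (channels, 64, 64) -> (feature_maps, 32, 32); no batch norm on the first layer
            nn.Conv2d(channels, feature_maps, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feature_maps, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feature_maps * 2, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feature_maps * 4, feature_maps * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # (feature_maps*8, 4, 4) -> single logit per image
            nn.Conv2d(feature_maps * 8, 1, 4, 1, 0, bias=False),
        )

    def forward(self, x):
        return self.net(x).view(-1)
```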
### Hyperparameters
- Optimizer: Adam
- Learning Rate: `0.0002`
- Beta1: `0.5`
- Beta2: `0.999`
- Device: `cuda`
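
The training script itself is not included in this card. The snippet below is only a sketch of how these hyperparameters are typically wired into a DCGAN training step, assuming the `Generator`/`Discriminator` sketches above and a binary cross-entropy adversarial loss; the actual loss formulation and batch size used for iDCGAN-iGAN are not documented here.

```python
import torch
import torch.nn as nn

# The card lists `cuda`; fall back to CPU so the sketch runs anywhere
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
generator = Generator().to(device)          # sketches from the section above
discriminator = Discriminator().to(device)

criterion = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))

def train_step(real_images, latent_dim=64):
    """One adversarial update: discriminator first, then generator."""
    real_images = real_images.to(device)
    batch_size = real_images.size(0)
    real_labels = torch.ones(batch_size, device=device)
    fake_labels = torch.zeros(batch_size, device=device)

    # Discriminator: push real images toward label 1, generated images toward 0
    noise = torch.randn(batch_size, latent_dim, 1, 1, device=device)
    fake_images = generator(noise)
    loss_d = (criterion(discriminator(real_images), real_labels)
              + criterion(discriminator(fake_images.detach()), fake_labels))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator label its samples as real
    loss_g = criterion(discriminator(fake_images), real_labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```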
## Intended Use
This model is intended for research and educational purposes in GAN architectures and synthetic image generation. It can also be fine-tuned for similar generative tasks.
## Model Output
The model generates a synthetic image given a latent vector as input. The latent vector is sampled from a standard Gaussian distribution and reshaped to `(batch, latent_dim, 1, 1)` before being fed to the convolutional generator, as shown in the example below.
## How to Use the Model
Here's an example of how to load and use the generator model:
```python
import torch
from model import Generator # Assuming the generator class is in a file called model.py
# Load the generator weights (map_location keeps this working on CPU-only machines)
model = Generator()
model.load_state_dict(torch.load('gan-generator.pth', map_location='cpu'))
model.eval()

# Sample a latent vector and generate an image
latent_vector = torch.randn(1, 64, 1, 1)  # batch of 1, 64-dimensional latent code
with torch.no_grad():
    generated_image = model(latent_vector)
```
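
The output tensor can then be written to disk. Assuming the generator follows the common DCGAN convention of a final `Tanh` producing values in `[-1, 1]` (an assumption; the output range is not documented in this card), `torchvision` can rescale and save it:

```python
from torchvision.utils import save_image

# Rescale from the assumed [-1, 1] range to [0, 1] and write a PNG
save_image(generated_image, 'sample.png', normalize=True, value_range=(-1, 1))
```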