Exploring Stable Diffusion with Diffusers: A Powerful Image Generation Technique

Honyee Chua


Jan 01, 2024

3 min read


The Diffusers library has gained popularity in the Stable Diffusion community, offering fast and efficient image generation. With the recent release of xformers 0.0.16 on PyPI, xformers' memory-efficient attention can now be added to a diffusers setup simply via pip install -U xformers. This article delves into why Stable Diffusion is effective, the components involved, and actionable advice for optimal usage.
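Getting set up is a single pip step. A minimal sketch (package names are the standard PyPI ones; the libraries beyond xformers are my assumption about a typical diffusers setup):

```shell
# xformers 0.0.16+ ships prebuilt wheels on PyPI (memory-efficient attention).
pip install -U xformers
# Assumed companion packages for a typical Stable Diffusion workflow.
pip install -U diffusers transformers accelerate
```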

Understanding Stable Diffusion:

Stable Diffusion leverages an autoencoder with a spatial reduction factor of 8. An input image of shape (3, 512, 512) is compressed into a latent of shape (4, 64, 64): each spatial dimension shrinks by a factor of 8, for 8 × 8 = 64 times fewer spatial positions. This remarkable reduction enables the rapid generation of high-resolution images, even on machines with limited GPU resources such as a 16 GB Colab GPU. During the diffusion process, the text prompt is converted by a text encoder into an embedding space that the U-Net can condition on.
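The memory savings can be checked with a little arithmetic. A minimal NumPy sketch, using the 4-channel latent shape of Stable Diffusion's VAE:

```python
import numpy as np

# A 512x512 RGB image in pixel space.
image = np.zeros((3, 512, 512), dtype=np.float32)

# The VAE compresses each spatial dimension by a factor of 8;
# Stable Diffusion's latents have 4 channels.
reduction = 8
latent = np.zeros((4, 512 // reduction, 512 // reduction), dtype=np.float32)

spatial_ratio = (512 * 512) // (64 * 64)   # 8 * 8 = 64x fewer spatial positions
element_ratio = image.size / latent.size   # total elements: 48x fewer

print(latent.shape)    # (4, 64, 64)
print(spatial_ratio)   # 64
print(element_ratio)   # 48.0
```

Note that the 64× figure is the spatial reduction; counting the channel change from 3 to 4, the latent holds 48× fewer numbers overall.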

The Role of U-Net:

The encoder and decoder parts of the U-Net are composed of ResNet blocks. The encoder compresses the image representation into a lower-resolution representation, while the decoder upsamples that lower-resolution representation back to the original resolution with reduced noise. The U-Net's output is a prediction of the noise residual, which is used to compute the denoised image representation.
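To make the encoder/decoder resolution story concrete, here is a toy NumPy-only sketch (not the real U-Net, which uses learned ResNet and attention blocks): downsampling halves the resolution, upsampling restores it, and a skip connection carries encoder features across to the decoder.

```python
import numpy as np

def downsample(x):
    """Halve spatial resolution by 2x2 average pooling."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    """Double spatial resolution by nearest-neighbour repetition."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Toy latent at Stable Diffusion's working resolution.
latent = np.random.rand(4, 64, 64)

# Encoder path: compress to a lower-resolution representation.
skip = latent                   # skip connection saved for the decoder
low = downsample(latent)        # (4, 32, 32)

# Decoder path: upsample and merge the skip connection, U-Net style.
decoded = upsample(low) + skip  # back to (4, 64, 64)

print(low.shape, decoded.shape)
```

The skip connections are what let the decoder recover fine detail that the compressed representation alone would lose.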

Incorporating VAE Model:

The VAE model consists of an encoder and a decoder. The encoder converts the image into a low-dimensional latent representation, which serves as the input to the U-Net model. The decoder transforms the latent representation back into an image. The U-Net's output, the noise residual, is then used together with a scheduler algorithm to compute the denoised latent image representation.
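The scheduler's job can be illustrated in its simplest form: given the U-Net's noise prediction, recover an estimate of the clean latent. This NumPy sketch implements the standard closed-form x0 estimate used by DDIM-style schedulers (a simplified illustration, not the diffusers API):

```python
import numpy as np

def predict_x0(x_t, eps, alpha_bar):
    """Estimate the clean latent x0 from a noisy latent x_t,
    the predicted noise eps, and the cumulative signal level alpha_bar,
    inverting: x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps
    """
    return (x_t - np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha_bar)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 64, 64))    # "clean" latent
eps = rng.standard_normal((4, 64, 64))   # true noise
alpha_bar = 0.7                          # signal level at some timestep

# Forward process: noise the latent.
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# If the U-Net predicted the noise perfectly, x0 is recovered exactly.
x0_hat = predict_x0(x_t, eps, alpha_bar)
print(np.allclose(x0_hat, x0))  # True
```

Real schedulers apply this kind of update iteratively over many timesteps, which is where their different trade-offs come from.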

Choosing the Right Scheduler Algorithm:

To achieve stable diffusion, it is recommended to use one of the following scheduler algorithms:

  1. PNDM scheduler (used by default)
  2. DDIM scheduler
  3. K-LMS scheduler

These algorithms each have their own advantages and disadvantages, so it's important to experiment and determine which one suits your specific requirements.
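In diffusers, swapping schedulers is a one-line configuration change. A hedged sketch (the model identifier is an assumption about an available checkpoint, and running this downloads model weights, so it is shown as configuration only):

```python
from diffusers import StableDiffusionPipeline, DDIMScheduler, LMSDiscreteScheduler

# Model id is an assumption; substitute whichever checkpoint you use.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The default scheduler (PNDM for classic SD checkpoints) can be replaced
# by constructing another scheduler from the same config:
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# ...or the K-LMS scheduler:
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
```

Because every scheduler is built from the same config, switching between them requires no other changes to the pipeline.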

Optimizing Image Size Selection:

When selecting image sizes, it is suggested to adhere to the following guidelines:

  1. Ensure that both height and width are multiples of 8.
  2. Going below 512 in either dimension may result in lower image quality.
  3. Going beyond 512 in both dimensions will lead to repeated image regions. To create non-square images, it is best to use a value of 512 in one dimension and a larger value in the other.
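These guidelines can be encoded in a small helper. A hypothetical, stdlib-only sketch (the function name and behaviour are my own, not part of diffusers):

```python
def check_image_size(height: int, width: int) -> list[str]:
    """Return a list of warnings for a requested Stable Diffusion image size."""
    warnings = []
    if height % 8 or width % 8:
        warnings.append("height and width should both be multiples of 8")
    if height < 512 or width < 512:
        warnings.append("dimensions below 512 may reduce image quality")
    if height > 512 and width > 512:
        warnings.append(
            "both dimensions above 512 may cause repeated image regions; "
            "keep one dimension at 512 for non-square images"
        )
    return warnings

print(check_image_size(512, 512))  # []
print(check_image_size(768, 512))  # [] -- a safe non-square choice
print(check_image_size(640, 768))  # repeated-regions warning
```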


Stable Diffusion with Diffusers offers a powerful technique for fast and efficient image generation. By understanding the underlying components, such as the autoencoder, the U-Net, and the scheduler algorithms, users can harness its full potential. Experimenting with different schedulers and choosing image sizes carefully are key to obtaining high-quality results. With xformers available as a pip-installable package, integrating Stable Diffusion into your projects is now more accessible than ever.

Actionable Advice:

  1. Experiment with different scheduler algorithms to find the one that best suits your needs.
  2. Ensure image sizes are multiples of 8 and consider the trade-off between image quality and performance when selecting dimensions.
  3. Take advantage of the reduction factor in Stable Diffusion to generate high-resolution images even with limited GPU resources.
