Image Generation w/ Stable Diffusion
This tutorial shows you how to run an arbitrary number of parallel Stable Diffusion inference tasks on ByteNite—generating multiple images from the same prompt, all at once.
We’ll use PyTorch and Hugging Face’s diffusers library to set up a Stable Diffusion pipeline, specifically from runwayml/stable-diffusion-v1-5. The jobs will run on 16-core CPUs with 32 GB of RAM.
We’ll use a simple replicator Partitioner, which fans out a number of tasks based on a parameter set during job creation.
Estimated time: ~45 min
Difficulty: Medium
We’ve prepared a public container image for this tutorial. It includes all required dependencies—like torch and diffusers—bundled into a single build for amd64.
Use this container image in your manifest.json, setting it under platform_config -> container.
Since Stable Diffusion is resource-intensive, set your minimum resource requirements:
min_cpu: 16
min_memory: 32
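Putting these settings together, the relevant part of manifest.json might be shaped like this (the container image name is a placeholder and the exact schema may differ; check the ByteNite manifest reference):

```json
{
  "platform_config": {
    "container": "your-registry/img-gen-diffusers:latest"
  },
  "min_cpu": 16,
  "min_memory": 32
}
```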
Go into your ./img-gen-diffusers/app/main.py file and set up the image generation logic.
We’ll start by importing a few libraries:
json, os, time, psutil — for system utilities.
torch, diffusers — for the image generation task.
Next, we’ll wrap the core logic inside a function named generate_image, which:
Accepts a prompt and an output path as inputs.
Saves the generated image to the output path.
In the entry point, you will:
Load parameters from the APP_PARAMS environment variable.
Build the output path using the TASK_RESULTS_DIR variable (where ByteNite expects your result to be saved).
Call the generate_image function to process the input and write the output.
The main.py code, including the generate_image function, looks like this:
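A minimal sketch of such a script, following the steps above (this is an illustration, not the exact tutorial code; the pipeline call uses the standard diffusers API, and the guard on APP_PARAMS is added so the script only runs inside a ByteNite task):

```python
import json
import os
import time


def generate_image(prompt: str, output_path: str) -> None:
    """Run Stable Diffusion v1.5 on CPU and save the result to output_path."""
    # torch and diffusers are heavyweight, so import them only when generating.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
    )
    pipe = pipe.to("cpu")

    start = time.time()
    image = pipe(prompt).images[0]
    print(f"Generated image in {time.time() - start:.1f}s")

    image.save(output_path)


def load_params() -> dict:
    """Job parameters arrive as JSON in the APP_PARAMS environment variable."""
    return json.loads(os.environ["APP_PARAMS"])


def build_output_path(filename: str = "output.png") -> str:
    """ByteNite expects task results under TASK_RESULTS_DIR."""
    return os.path.join(os.environ["TASK_RESULTS_DIR"], filename)


if __name__ == "__main__" and "APP_PARAMS" in os.environ:
    params = load_params()
    generate_image(params["prompt"], build_output_path())
```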
We want this partitioner to fan out identical tasks according to the num_replicas parameter from the job request.
Since a new task is created for every file found in the chunks directory, the partitioner needs to write num_replicas chunk files into that directory.
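One way to sketch this fan-out logic (the environment-variable names PARTITIONER_PARAMS and CHUNKS_DIR are hypothetical stand-ins here; consult the ByteNite partitioner docs for the real ones):

```python
import json
import os


def write_chunks(num_replicas: int, chunks_dir: str) -> None:
    """Write one placeholder chunk file per replica.

    ByteNite creates one task per file in the chunks directory, so the file
    contents don't matter here; only the file count does.
    """
    os.makedirs(chunks_dir, exist_ok=True)
    for i in range(num_replicas):
        with open(os.path.join(chunks_dir, f"data_{i}.bin"), "wb") as f:
            f.write(b"replica")


if __name__ == "__main__" and "PARTITIONER_PARAMS" in os.environ:
    # Hypothetical env var names, for illustration only.
    params = json.loads(os.environ["PARTITIONER_PARAMS"])
    write_chunks(int(params["num_replicas"]), os.environ["CHUNKS_DIR"])
```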
Now that both img-gen-diffusers and fanout-replica are ready, it’s time to connect them with a template.
In this case, we don’t need an assembler, because we’ll store all output images directly into a bucket. So, we’ll specify "passthrough" as the assembler.
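A template tying the two together could be shaped roughly like this (the field names are illustrative, not the exact ByteNite template schema; only the app, partitioner, and assembler values come from this tutorial):

```json
{
  "id": "img-gen-diffusers-template",
  "app": "img-gen-diffusers",
  "partitioner": "fanout-replica",
  "assembler": "passthrough"
}
```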
Now, upload your template:
Let’s make some inference magic happen! 🚀
You can set up, launch, and retrieve your distributed image generation job easily with this ready-made Postman collection:
This Postman collection automates a job workflow with these steps:
Get Access Token: Obtain an access token using your API key.
Create Image Generation Job: Send a POST request to the Job Create endpoint, including these fields in the request body: templateId (img-gen-diffusers-template), dataSource, dataDestination, and params.
Quick notes about the request fields:
Bypass data source: Since the app doesn’t need real input data, we use bypass to generate a dummy file for the partitioner.
Customizing parameters: Adjust the params to set your number of replicas (num_replicas) and to write your own prompt under app.prompt to generate different images.
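For orientation, a request body carrying these fields might look something like the following (the nested structure and the dataSource/dataDestination values are placeholders, not the documented ByteNite schema; only templateId, bypass, num_replicas, and app.prompt come from this tutorial):

```json
{
  "templateId": "img-gen-diffusers-template",
  "dataSource": { "dataSourceDescriptor": "bypass" },
  "dataDestination": { "dataSourceDescriptor": "bucket" },
  "params": {
    "partitioner": { "num_replicas": 4 },
    "app": { "prompt": "an astronaut riding a horse on the moon" }
  }
}
```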
Launch Job: Initiate the job.
Check Job Status: Monitor the job status.
Get Job Results: Access the job results.
Get Job Logs: Review the job logs.