Stable Diffusion is a powerful AI tool for generating images, but it can be resource-heavy. Running it on a DigitalOcean GPU Droplet gives you the computing power you need. In this guide, we’ll learn how to set it up using the Stable Diffusion WebUI by AUTOMATIC1111. We’ve made it easy, so even if you’re not a technical expert, don’t worry—just follow along!
Stable Diffusion can technically run on a CPU, but it’s slow. Running it on a GPU drastically improves performance, and DigitalOcean’s GPU Droplets are NVIDIA H100s that you can spin up on demand. Note that these are currently in early availability and will be released for everyone soon!
Create a GPU Droplet
Log into your DigitalOcean account, create a new Droplet, and choose a plan that includes a GPU. A basic GPU plan should suffice for image generation.
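If you prefer working from the command line, you can also create the Droplet with the doctl CLI. Below is a minimal sketch; the size and image slugs are assumptions, so check doctl compute size list and the GPU Droplet documentation for the exact values available to your account:

# Hypothetical example: the --size and --image slugs below are assumptions.
doctl compute droplet create stable-diffusion \
  --region nyc2 \
  --size gpu-h100x1-80gb \
  --image gpu-h100x1-base \
  --ssh-keys <your-ssh-key-id>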
Add a New User (Recommended)
Instead of using the root user for everything, it’s better to create a new user for security reasons:
adduser do-shark
usermod -aG sudo do-shark
su do-shark
cd ~/
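If you connected as root over SSH, you can optionally reuse root’s authorized keys so you can log in as do-shark directly next time (do-shark can run these with sudo thanks to the step above):

# Optional: copy root's authorized SSH keys to the new user.
sudo mkdir -p /home/do-shark/.ssh
sudo cp /root/.ssh/authorized_keys /home/do-shark/.ssh/authorized_keys
sudo chown -R do-shark:do-shark /home/do-shark/.ssh
sudo chmod 700 /home/do-shark/.ssh
sudo chmod 600 /home/do-shark/.ssh/authorized_keys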
Once you’re logged in, update the Droplet and install the necessary tools:
sudo apt update
sudo apt install -y wget git python3 python3-venv
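A quick sanity check that everything landed on the PATH:

# Both commands should print a version number.
python3 --version
git --version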
Clone the Stable Diffusion WebUI repository from GitHub, then set up a Python virtual environment and install the dependencies:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
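Before moving on, it’s worth confirming that PyTorch (installed via the requirements file) can actually see the GPU from inside the virtual environment:

# Should print True when CUDA is available to PyTorch.
python3 -c "import torch; print(torch.cuda.is_available())"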
To take advantage of GPU acceleration, you’ll need to reinstall xFormers with a CUDA-enabled build. This step ensures your environment is optimized for performance:
pip uninstall -y xformers
pip install xformers --extra-index-url https://download.pytorch.org/whl/nightly/cu118
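You can verify that the CUDA-enabled build imported cleanly:

# Prints the installed xFormers version; an import error here usually
# means the wheel does not match your Python or CUDA version.
python3 -c "import xformers; print(xformers.__version__)"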
gpustat
To monitor your GPU utilization while running Stable Diffusion, you can use a tool called gpustat. This tool gives you real-time information about your GPU usage, including memory, temperature, and current load.
To install and use gpustat, follow these steps:
Install gpustat using pip:
pip install gpustat
After installation, you can monitor your GPU utilization by running the following command in another terminal (--color enables colored output and -i 1 refreshes the stats every second):
gpustat --color -i 1
If you have a model download link, you can easily install it using the wget command. Here’s how to download and install the SDXL model:
wget -O models/Stable-diffusion/stable-diffusion-xl.safetensors "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
This command downloads the SDXL model and saves it in the models/Stable-diffusion/ directory with the filename stable-diffusion-xl.safetensors. Once the download is complete, the model will be ready for use in your Stable Diffusion setup.
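The base SDXL checkpoint is roughly 7 GB, so a quick file-size check confirms the download actually completed:

# A tiny file here usually means the download failed or fetched an error page.
ls -lh models/Stable-diffusion/stable-diffusion-xl.safetensors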
Now, it’s time to launch the Stable Diffusion WebUI. Run the following command to start the interface with Gradio sharing, xFormers GPU acceleration, API access, and insecure extension access enabled:
./webui.sh --share --xformers --api --enable-insecure-extension-access
Once the WebUI is running, open your browser and go to https://[HASHING].gradio.live to access the interface. Note that this link will expire in 72 hours.
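Because we launched with --api, the WebUI also exposes a REST API you can script against. For example, you can list the checkpoints it can see (run this on the Droplet itself; remotely, substitute your gradio.live URL for http://127.0.0.1:7860):

# Returns a JSON list of available models, including the SDXL checkpoint.
curl http://127.0.0.1:7860/sdapi/v1/sd-models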
After running the webui.sh script, you can follow these steps to install a model through the CivitAI Browser extension:
Navigate to the “Extensions” tab in the WebUI.
Go to the “Available” sub-tab.
Click the orange button labeled “Load from” to load the available extensions from the repository.
In the search bar, type “CivitAI Browser+” and click the Install button.
Once the installation is complete, go to the “Installed” sub-tab.
Click Apply and restart UI to activate the extension.
After clicking the restart button, your console may appear to stop at “Reloading” due to the relaunch. You will need to use the new https://[HASHING].gradio.live link generated in the console output from the terminal.
Once the WebUI restarts, you will see a new tab called “CivitAI Browser+”. This extension lets you easily search for and install models directly from CivitAI.
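As an aside, WebUI extensions are ordinary git repositories, so you can also install them from the terminal by cloning into the extensions/ directory and restarting the WebUI. A sketch, assuming this is the extension’s repository URL (verify it on the extension’s GitHub page before running):

# Hypothetical direct install; confirm the repository URL first.
git clone https://github.com/BlafKing/sd-civitai-browser-plus extensions/sd-civitai-browser-plus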
For this demo, search for “Western Animation” within the CivitAI Browser+ tab and install it. Choose the version with the Superman thumbnail. We will use this model in the next part of the crash course to generate images using text-to-image (txt2img).
Stable Diffusion is a powerful AI image generation tool that uses positive prompts and negative prompts to guide the AI in creating specific images. This tutorial will show you how to write prompts related to marine life and how to use negative prompts to improve the quality of your images in Stable Diffusion WebUI.
Prompts are the core part of generating images. Positive prompts tell the AI what you want to see, while negative prompts help exclude unwanted elements. Here are examples related to marine life to show you how to write prompts.
When writing prompts, use English to describe what you want to generate. You can use simple sentences or comma-separated keywords to describe the features. Here are some marine life-related examples:
Generate a sea turtle swimming over a coral reef:
a sea turtle swimming over a coral reef
Or, simplified as keywords:
sea turtle, swimming, coral reef, ocean
Generate a school of colorful fish:
colorful fish, swimming in the ocean, school of fish, tropical fish
Negative prompts are useful for excluding unwanted elements, especially when generating multiple images. Here are some common negative prompts to avoid low-quality or incorrect results:
lowres, bad anatomy, blurry, text, error, cropped, worst quality, jpeg artifacts, watermark, signature, low quality
You can also add specific elements that you don’t want in your marine life images, like human characters or buildings:
nsfw, weapon, blood, human, car, city, building
Stable Diffusion WebUI’s txt2image feature allows you to generate images based on the prompts you write. Here’s how to use it:
Enter Positive and Negative Prompts: In the left text box, enter the marine life-related prompts, such as:
colorful fish, coral reef, underwater, ocean, vibrant colors
For the negative prompts, exclude unwanted elements:
lowres, bad anatomy, text, blurry, weapon, human
Select Sampling Method: Try “DPM++ 2M SDE Heun” or “Euler a” as the sampling method.
Set Image Dimensions and Steps: Set the width and height to 1024x512 and the sampling steps to 30. You can also enable “Hires. fix” with its default settings to improve fine details, which can help with detailed subjects like marine life.
Generate the Image: Click the “Generate” button at the top right to start generating the image. Once done, you can save or adjust the image as needed; if you’d rather script this step, see the API example below.
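Since the WebUI was started with --api, the same generation can also be run without the browser. Here is a minimal sketch mirroring the settings above; the response returns the image base64-encoded in its images field:

# txt2img over the REST API with the same prompts and settings as the UI.
curl -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "colorful fish, coral reef, underwater, ocean, vibrant colors",
        "negative_prompt": "lowres, bad anatomy, text, blurry, weapon, human",
        "width": 1024,
        "height": 512,
        "steps": 30
      }'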
Stable Diffusion WebUI provides different syntaxes to improve the precision of image generation. Here are some useful ones:
Attention/Emphasis: Use parentheses ( ) to emphasize certain elements in the prompt. Each pair of parentheses increases the attention given to the enclosed words by a factor of 1.1, so ((blue)) weights “blue” roughly 1.21x. For example, to highlight the color of a dolphin:
dolphin, ((blue)), ocean, swimming
Prompt Switching: You can switch prompts partway through the generation process with the [from:to:step] syntax. The following example renders a shark for the first 10 sampling steps, then switches to a whale:
[shark : whale : 10] swimming in the ocean
Generate an octopus underwater:
octopus, underwater, ocean, coral reef, vibrant colors
Negative prompt:
lowres, blurry, bad anatomy, text, human
Generate a dolphin jumping out of the water:
dolphin, jumping out of the water, ocean, sunset, splash, realistic
Negative prompt:
lowres, bad anatomy, blurry, text, car, building
Generate a shark swimming in deep water:
shark, swimming, deep ocean, dark blue water, scary, realistic
Negative prompt:
lowres, bad anatomy, blurry, text, human, building
This is just the beginning of your journey in creating Gen-AI art with Stable Diffusion on DigitalOcean’s GPU Droplets. In the upcoming series, we’ll dive deeper into running a dockerized Stable Diffusion API on DigitalOcean Kubernetes with GPU nodes, and explore real-world use cases alongside other DigitalOcean products. Stay tuned for more exciting insights and tutorials!
Ready to take your AI projects further? Explore more tutorials on AI and cloud solutions at DigitalOcean.