this post was submitted on 23 Jul 2023
10 points (100.0% liked)

Stable Diffusion


Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.

Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.


I was curious, do you run Stable Diffusion locally? On someone else's server? What kind of computer do you need to run SD locally?

[–] IanM32 5 points 1 year ago (1 children)

I run it locally. I prefer having the most control I can over the install, what extensions I want to use, etc.

The most important thing for running it, in my opinion, is VRAM. The more the better; get as much as you can.
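If you're not sure how much VRAM your card has, or how much is actually in use, a quick check with PyTorch looks something like this (a minimal sketch, assuming an NVIDIA card and a CUDA build of torch):

```python
import torch

# Report total and currently allocated VRAM on the first CUDA device.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    allocated_gb = torch.cuda.memory_allocated(0) / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB total, {allocated_gb:.2f} GB allocated")
else:
    print("No CUDA device detected")
```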

[–] [email protected] 2 points 1 year ago (2 children)

I run it locally too. I have a 10 GB 3080.

I haven't had VRAM issues. Could you elaborate on your statement?

I know with local LLaMA I've been limited to 13B models.

[–] IanM32 2 points 1 year ago

Stable Diffusion loves VRAM. The larger and more complex the images you're trying to produce, the more it'll eat.

My line of thinking is that if you have a slower GPU it'll generate slower, sure, but if you run out of VRAM it'll straight up fail and shout at you.
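For what it's worth, if you do hit out-of-memory errors, the diffusers library has a few knobs that trade speed for VRAM. A rough sketch (the checkpoint name and settings here are just an example, not anything specific to this thread):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load SD 1.5 in half precision to roughly halve VRAM use.
# "runwayml/stable-diffusion-v1-5" is just an example checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Trade a bit of speed for a lower peak memory footprint.
pipe.enable_attention_slicing()

# Larger resolutions eat more VRAM; 512x512 is the comfortable default for SD 1.5.
image = pipe("a lighthouse at sunset", height=512, width=512).images[0]
image.save("lighthouse.png")
```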

I'm not an expert in this field though, so grain of salt, YMMV, all that.