this post was submitted on 11 Jun 2023

Stable Diffusion


Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.

Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.

founded 1 year ago

1 comment
[email protected] 1 point 1 year ago

I started by generating some rooms that matched the general colour scheme I was looking for. I then fed my favourite into ControlNet's 'Reference Only' mode with a prompt that more closely matched the view I wanted, which gave me a closeup of the window. I then outpainted a few times, and inpainted the robots, the table and the clutter. Finally I ran it through ControlNet Tile with the SD Ultimate Upscale script, with a scaling value of 4x, a CFG of 2.5, and the DPM++ SDE sampler at 50 steps (overdid the steps). I used the Realistic Vision 2.0 model and the LowRA LoRA.
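
For context, the Tile + Ultimate Upscale step works by scaling the image up first (4x here) and then re-diffusing it one tile at a time, with overlapping tiles so the seams can be blended. Below is a minimal sketch of just the tile-grid arithmetic involved; the `tile_size` and `overlap` values are illustrative, not the script's actual defaults, and the real script also handles seam masks and per-tile diffusion:

```python
# Sketch of the tile-grid arithmetic behind a tiled 4x upscale pass.
# tile_size/overlap are illustrative; the real Ultimate SD Upscale
# script additionally blends seams and runs diffusion on each tile.

def tile_grid(width, height, scale=4, tile_size=512, overlap=64):
    """Upscale the canvas by `scale`, then return (x0, y0, x1, y1)
    boxes covering it with tiles that overlap by `overlap` pixels."""
    w, h = width * scale, height * scale
    step = tile_size - overlap  # distance between tile origins
    boxes = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            # clamp tiles at the right/bottom edges to the canvas
            boxes.append((x, y, min(x + tile_size, w), min(y + tile_size, h)))
    return (w, h), boxes

# e.g. a 512x512 source at 4x becomes a 2048x2048 canvas of 25 tiles
canvas, boxes = tile_grid(512, 512)
```

Each tile is then redrawn at low denoising strength with the Tile ControlNet keeping it anchored to the upscaled content, which is why a low CFG such as 2.5 works well there.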