this post was submitted on 01 Jul 2023

Stable Diffusion


Hi there, I've been toying around with Stable Diffusion for some time now and I'm starting to get a bit of a feel for it. So far, I'm quite happy with my workflow for generating image compositions (i.e. getting the characters, items, poses, and so on that I want). But these images sometimes look quite crude. The question is: what tools can I use to iron out images and make them look crisper, while keeping the image composition the same throughout? Any tips are highly welcome.

[–] [email protected] 3 points 1 year ago (1 children)

Honestly, I go through a process where I place the image into img2img and re-render it at increasingly lower denoising strengths until I get a result that looks good. I tend to alternate between samplers too, depending on what I'm going for. I haven't quite got the hang of inpainting yet, but I've seen other people's workflows of upscaling the image and then addressing problem areas individually.
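The decreasing-denoise loop described above could be sketched with Hugging Face diffusers. This is only a sketch: the model name, the strength values, and the `strength_schedule` helper are my assumptions, not something from the comment, and the pipeline calls are left commented because they need a downloaded model and a GPU.

```python
def strength_schedule(start=0.6, stop=0.2, steps=4):
    """Return progressively lower img2img denoising strengths.

    Higher strength = more of the image is re-rendered; lowering it
    each pass means later passes mostly clean up detail instead of
    changing the composition. The values here are assumptions.
    """
    step = (start - stop) / (steps - 1)
    return [round(start - i * step, 3) for i in range(steps)]

# Hypothetical usage with diffusers' img2img pipeline (assumed setup):
#
# from diffusers import StableDiffusionImg2ImgPipeline
# pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5"
# ).to("cuda")
#
# image = init_image  # the crude composition you started from
# for strength in strength_schedule():
#     image = pipe(prompt=prompt, image=image, strength=strength).images[0]
```

Swapping the sampler between passes (as the comment mentions) would correspond to changing `pipe.scheduler` between iterations.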

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

Yeah, it's really kind of a process right now. Most of the time I can get an okay image with a really good prompt, but if I want a great image it's usually:

  • Form the perfect prompt (a couple of hours of fine-tuning)
  • Move to img2img and re-render to make it a bit more cohesive
  • Pick one from the batch, use inpainting
  • For each round of inpainting:
    • Paint the area
    • Alter the prompt
    • Make a batch that's nice
    • If good, move to the next round of inpainting, else repeat
  • A couple of final rounds of img2img
  • Scale up
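The inpainting rounds above could be sketched like this. Everything here is hypothetical: the mask files, the prompt tweaks, the `pick_best`/`load_mask` helpers, and the diffusers inpainting pipeline setup are my assumptions, and in practice picking the best image from each batch is a manual step.

```python
def round_prompt(base, tweak):
    """Combine the base prompt with a per-round adjustment,
    mirroring the 'alter prompt' step of each inpainting round."""
    return f"{base}, {tweak}" if tweak else base

# One (mask, prompt tweak) pair per inpainting round -- hypothetical values:
rounds = [
    ("hands_mask.png", "detailed hands"),
    ("face_mask.png", "sharp facial features"),
]

# Hypothetical usage with diffusers' inpainting pipeline (assumed setup):
#
# from diffusers import StableDiffusionInpaintPipeline
# pipe = StableDiffusionInpaintPipeline.from_pretrained(
#     "runwayml/stable-diffusion-inpainting"
# ).to("cuda")
#
# for mask_path, tweak in rounds:
#     batch = pipe(
#         prompt=round_prompt(base_prompt, tweak),
#         image=image,
#         mask_image=load_mask(mask_path),   # hypothetical helper
#         num_images_per_prompt=4,           # "make a batch that's nice"
#     ).images
#     image = pick_best(batch)  # manual choice in practice; repeat round if none are good
```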

It's quite a process right now to make it perfect. Overall, if you want something really specific, it may take a few hours of fine-tuning and adjustment to get it there.