Stable Diffusion

Hi there, I've been toying around with Stable Diffusion for some time now and I'm starting to get a bit of a feel for it. So far, I'm quite happy with my workflow for generating image compositions (i.e. getting the characters, items, poses, .... that I want), but the results sometimes look quite crude. The question is: what tools can I use to polish these images and make them look crisper, while keeping the composition exactly the same? Any tips are highly welcome.
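
One common approach that needs no extra tools is to run the rough render back through img2img at a low denoising strength: the model re-details textures and edges, but the composition survives because most of the original image is kept. Below is a minimal sketch using the diffusers library; the model checkpoint, file names, and strength value are just assumptions to illustrate the idea, not a fixed recipe.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion checkpoint for img2img refinement
# ("runwayml/stable-diffusion-v1-5" is just an example model).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from the rough composition you already like
# ("rough_render.png" is a placeholder file name).
init_image = Image.open("rough_render.png").convert("RGB").resize((512, 512))

# Low strength (~0.2-0.35) keeps the composition; the model only
# re-details textures, edges, and small features.
result = pipe(
    prompt="sharp, detailed, high quality illustration",
    image=init_image,
    strength=0.3,
    guidance_scale=7.5,
).images[0]

result.save("refined_render.png")
```

Raising `strength` gives the model more freedom to change things; somewhere past roughly 0.5 the composition usually starts to drift.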

[–] tryingnottobefat 2 points 1 year ago

I use SD to make character portraits for TTRPGs online and I've had decent luck fixing images with Photoshop (Beta) generative fill. It's good at some things, like replacing one eye so that it better matches the other. It's okay at noses. It's very bad at mouths. It's good at removing backgrounds or replacing something with background. Extending images is 50-50, but it can be nice for getting a character centred in frame, or for filling in the top of their head if it was cut off. It's also pretty good at blending two images: for example, I often have a good full-body image with a really weird face, and generative fill makes it a lot easier to paste a different face on top and blend the edges.

I know this is a paid option for what you're asking, but hopefully it helps.
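
For anyone who wants the same kind of spot fixes without Photoshop, inpainting in Stable Diffusion itself covers most of these cases (fixing an eye, swapping a face, filling in background): you paint a mask over the region you want redone and leave everything else untouched. Here is a minimal sketch with the diffusers inpainting pipeline; the checkpoint, file names, and prompt are placeholders, not a definitive setup.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load an inpainting-specific checkpoint
# ("runwayml/stable-diffusion-inpainting" is just an example model).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# The portrait to fix and a mask image: white pixels are regenerated,
# black pixels are kept exactly as they are (placeholder file names).
portrait = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask = Image.open("face_mask.png").convert("RGB").resize((512, 512))

# Only the masked region (e.g. one eye, the mouth, the background)
# is repainted; the rest of the composition is untouched.
fixed = pipe(
    prompt="symmetrical eyes, clean detailed face",
    image=portrait,
    mask_image=mask,
).images[0]

fixed.save("portrait_fixed.png")
```

The mask can be drawn in any image editor; giving it slightly soft edges helps the repainted region blend into the rest of the portrait.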