There's a way to do this in Auto1111 (sort of):
- generate an image using only part of your total steps
- enable the OpenPose ControlNet
- add the partially generated image as the ControlNet input
- set the ControlNet's starting control step to the halfway point (or wherever you stopped)
- re-generate the image with the same seed and settings
This feels pretty janky, though. I think you could do it better (and in one shot) in ComfyUI: decode the partially generated latent, feed the resulting image to a ControlNet preprocessor node, then pass the ControlNet conditioning plus the original half-finished latent into a new KSampler node. You'd then finish generation (continuing from the original latent) at whatever step you split off.
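Either way, the split comes down to the same step arithmetic. Here's a minimal sketch; `split_steps` is a hypothetical helper, and the range semantics are meant to mirror how you'd configure two KSampler (Advanced) nodes (`start_at_step`/`end_at_step`, with the first pass returning leftover noise):

```python
def split_steps(total_steps: int, split_fraction: float):
    """Return (start, end) step ranges for the two sampling passes.

    First pass: runs steps [0, split) and must return its leftover
    noise so the latent is genuinely half-finished.
    Second pass: resumes at `split` (with ControlNet conditioning
    attached) and runs to `total_steps`.
    """
    split = round(total_steps * split_fraction)
    first_pass = (0, split)             # start_at_step=0, end_at_step=split
    second_pass = (split, total_steps)  # start_at_step=split, end_at_step=total
    return first_pass, second_pass

# Splitting a 30-step generation at the halfway point:
print(split_steps(30, 0.5))  # ((0, 15), (15, 30))
```

The key detail is that the second pass starts exactly where the first ended, rather than re-noising from scratch, which is what makes the single-shot ComfyUI version cleaner than the Auto1111 round trip.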