With ControlNet, you can upload a source image to influence your AI image generations.

You can use this feature to do a bunch of cool things like maintaining compositions, copying poses, and transforming art styles. It all depends on the preprocessor you choose.

Each preprocessor creates a unique ‘map’ of the source image. This map tells the Model how the generated image should be composed and oriented, or even what patterns it needs to contain.
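To build intuition for what a ‘map’ is, here’s a minimal sketch of an edge-style preprocessor using plain NumPy. This is a simplified stand-in for illustration only, not the actual ControlNet preprocessor code (real preprocessors like Canny, OpenPose, or depth estimators are far more sophisticated):

```python
import numpy as np

def edge_map(image: np.ndarray) -> np.ndarray:
    """Build a crude edge 'map' from a grayscale image (H, W) in [0, 255].

    Illustrative only: it shows the idea of distilling a source image
    into a structural guide that a model can follow.
    """
    img = image.astype(np.float32)
    # Horizontal and vertical gradients via simple finite differences.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    magnitude = np.sqrt(gx**2 + gy**2)
    # Normalize to [0, 255] so the map can be saved and viewed like an image.
    if magnitude.max() > 0:
        magnitude = magnitude / magnitude.max() * 255
    return magnitude.astype(np.uint8)

# A toy 'source image': a bright square on a dark background.
src = np.zeros((64, 64), dtype=np.uint8)
src[16:48, 16:48] = 255
ctrl = edge_map(src)
print(ctrl.shape)        # (64, 64)
print(ctrl[16, 16] > 0)  # the square's border lights up in the map
print(ctrl[32, 32])      # flat interior produces no edge signal: 0
```

The resulting map is blank wherever the source image is flat and bright along its edges, which is exactly the kind of structural outline a line-based ControlNet preprocessor hands to the Model.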

Let’s look at an example to understand this better. We’ll use the image below as the ControlNet source and experiment with different preprocessors and prompts to see what we can create.

<aside> 💡 Pro Tip: Try to select an aspect ratio that matches your source image to avoid cropping or distortion in your generated images.

</aside>

ControlNet source image


Copy a human pose with OpenPose

(works with non-SDXL models only)

Control Map made by the OpenPose preprocessor

OpenPose transfers human poses from your source image to your AI-generated characters.

Model: CyberRealistic
Prompt: gritty raw street photography, a woman, sitting in a cafe, (hyperrealism:1.2), (8K UHD:1.2), (photorealistic:1.2), shot with Canon EOS 5D Mark IV, detailed face, detailed hair
