Basic-Flux Inpainting Workflow – Easily Replace Characters
[FULL GUIDE]
Apr 28, 2025 · by Mickmumpitz

Introduction
With the Basic-Flux Inpainting workflow, you can use either a single start image or a start video to replace a character or object using inpainting.
This allows you to seamlessly replace a person or element in an image or video – giving you maximum flexibility and creative freedom.

🎨 Workflow Sections
🟨 Important Notes
⬜ Input / Output / Model Loaders
🟩 Prompt / Conditioning
🟧 Inpaint
🟪 ControlNets / Adapters
🟥 Latent / Sampling

Installation
Download the .json file and drag and drop it into your ComfyUI window.
Install the missing custom nodes via the manager and restart ComfyUI.
Download Models
FLUX CHECKPOINT: flux1-dev-fp8
📁 ComfyUI_windows_portable\ComfyUI\models\checkpoints
Search for "Flux Dev fp8" in the Model Manager and install the "flux1-dev-fp8" checkpoint provided by ComfyOrg.
CONTROL NET: InstantX/FLUX.1-dev Controlnet (Union)
📁 Open the ComfyUI Manager > Model Manager > search for “union” > install
You can also use the newer version of the ControlNet. Download it here:
https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0/tree/main
Put it in 📁\ComfyUI\models\controlnet
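If you prefer to fetch the newer Union ControlNet outside the ComfyUI Manager, a minimal Python sketch using the huggingface_hub library could look like the one below. The exact filename inside the Shakker-Labs repository is an assumption; check the repo's file list before running.

from huggingface_hub import hf_hub_download

# Download the Union-Pro 2.0 ControlNet straight into the ComfyUI controlnet folder.
path = hf_hub_download(
    repo_id="Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0",
    filename="diffusion_pytorch_model.safetensors",  # assumed filename; verify in the repo
    local_dir=r"ComfyUI\models\controlnet",          # adjust to your install path
)
print("Saved to:", path)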
You can find the WORKFLOWS & EXAMPLE FILES here:
Before You Begin: Thank you for considering supporting us! Since these workflows can be complex, we recommend testing the free versions first to ensure compatibility with your system. We cannot guarantee full compatibility with every system, which is why we always provide the main functionality for free. Please take a moment to read through the entire guide. If you encounter any issues:
Input LoRA
In this workflow, you can even use multiple LoRAs at once.

This is especially useful if you want to maintain a consistent character across multiple shots. It ensures that your character stays visually coherent throughout.
You can also use LoRAs to apply a specific style to your images or videos. In our example, we used a special LoRA to achieve the desired look.
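Outside of ComfyUI, the same idea of stacking a character LoRA and a style LoRA can be sketched with the diffusers library. This is only an illustration under assumed file names and weights, not the workflow's own loader nodes.

import torch
from diffusers import FluxPipeline

# Load the base Flux model (bfloat16 keeps memory usage manageable on a single GPU).
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Stack two LoRAs: one for the consistent character, one for the overall style.
pipe.load_lora_weights("path/to/character_lora.safetensors", adapter_name="character")  # placeholder path
pipe.load_lora_weights("path/to/style_lora.safetensors", adapter_name="style")          # placeholder path

# Weight the character LoRA more strongly than the style LoRA (illustrative values).
pipe.set_adapters(["character", "style"], adapter_weights=[1.0, 0.7])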

Prompt + Mask
Here, you can input either a video or a single image.
If you choose to input a video, only the first frame will be used. This is useful for video workflows such as WanVideo Startimage + CN. That's why frame_load_cap is set to 1.
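As a rough illustration of what frame_load_cap = 1 amounts to, the snippet below grabs only the first frame of a video and saves it as the still image to work on (OpenCV, placeholder file names).

import cv2

# Open the input video and read exactly one frame, the equivalent of frame_load_cap = 1.
cap = cv2.VideoCapture("input_video.mp4")   # placeholder path
ok, first_frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("start_image.png", first_frame)  # this still image is what gets masked and inpainted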

After loading the image, you’ll receive a message like:

Once you dismiss the message, you can draw a mask inside the Preview Bridge node. Right-click on the image and select Open in MaskEditor.

Then paint over the person you want to replace. It looks roughly like this:

In the Positive Prompt below, you describe what should be shown in the masked area. Don’t forget to click Save to store the mask.
If you are using a Consistent Character LoRA, be sure to include its trigger name in the prompt. For example, if you are using the a1mber LoRA, include a1mber in the prompt, as shown in the image below.
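Purely as an illustration, a positive prompt with the trigger token could look like this; apart from the a1mber trigger from the example above, the wording is made up.

# The LoRA trigger token is simply part of the positive prompt text.
positive_prompt = "a1mber, young woman standing in the street, cinematic lighting"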

Inpainting
In the inpainting step, the area you masked earlier will be cut out for image generation.
Through this inpainting process, only the masked region is modified; the rest of the image stays untouched. The regenerated part is then seamlessly blended back in, making the person replacement look natural.
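Conceptually, the final blend step works like the small NumPy/PIL sketch below: pixels inside the mask come from the newly generated image, everything else from the original. This illustrates the idea only, not the actual ComfyUI node code, and the file names are placeholders.

import numpy as np
from PIL import Image

# Original frame, the regenerated (inpainted) image, and the mask you painted.
original  = np.asarray(Image.open("start_image.png").convert("RGB"), dtype=np.float32)
generated = np.asarray(Image.open("inpainted.png").convert("RGB"), dtype=np.float32)
mask      = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32)[..., None] / 255.0

# Inside the mask: take the new image. Outside: keep the untouched original.
blended = generated * mask + original * (1.0 - mask)
Image.fromarray(blended.astype(np.uint8)).save("result.png")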

ControlNet
This workflow uses both an OpenPose ControlNet and a Depth ControlNet.
You can adjust the settings under Apply Advanced ControlNet as needed. Typically, I use OpenPose more strongly, as reflected in the values, but Depth-ControlNet is also important. Depending on the scene, I adjust the strength of the depth map individually — sometimes higher, sometimes lower — based on how well the pose is detected.
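For readers who want to see the pose-heavier, depth-lighter weighting spelled out in code, here is a rough sketch using the diffusers Flux ControlNet pipeline with the InstantX Union model rather than the Apply Advanced ControlNet nodes. The prompt, strength values, and control-mode indices are assumptions used as examples; check the model card of the ControlNet you installed.

import torch
from diffusers import FluxControlNetPipeline, FluxControlNetModel
from diffusers.models import FluxMultiControlNetModel
from diffusers.utils import load_image

# One Union ControlNet handles several control types; we use it twice, once per mode.
union = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Union", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=FluxMultiControlNetModel([union, union]),
    torch_dtype=torch.bfloat16,
)

image = pipe(
    prompt="a woman in a red coat standing on a city street",              # placeholder prompt
    control_image=[load_image("openpose.png"), load_image("depth.png")],   # placeholder control maps
    control_mode=[4, 2],                       # assumed mode indices: 4 = pose, 2 = depth
    controlnet_conditioning_scale=[0.8, 0.4],  # pose weighted more strongly than depth
).images[0]
image.save("controlled_output.png")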

Sampler
The Sampler generates the replaced character in the masked area and stitches it back into the image.
If you're not satisfied with the initial result, you can adjust the noise_seed value in the RandomNoise node to generate different outcomes. This way, you can quickly explore variations until you find the one that fits your project best.
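What changing noise_seed does can be sketched in a few lines of torch: the seed fixes the starting noise, so the same seed reproduces the same result and a new seed gives a new variation. Illustrative code only, not the RandomNoise node itself; the latent shape is a placeholder.

import torch

def initial_noise(seed: int, shape=(1, 16, 128, 128)):
    # A seeded generator makes the starting noise, and therefore the result, reproducible.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

# Same seed, same noise; a different seed produces a different variation of the masked area.
assert torch.equal(initial_noise(42), initial_noise(42))
assert not torch.equal(initial_noise(42), initial_noise(43))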

© 2025 Mickmumpitz