Create hyperrealistic CONSISTENT CHARACTERS from one input image!

[FULL GUIDE]

Oct 7, 2025

by Mickmumpitz

I just made generating consistent characters & training LoRAs MUCH EASIER!

With the newest version of the Consistent Character Creator, 3.5, you will not only generate a set of varied images for LoRA training, it will also automatically generate the whole dataset, including detailed captions, for you. Just drag and drop it into your favorite LoRA trainer, like AI Toolkit or FluxGym, and start training.

This version also handles HYPERREALISTIC as well as STYLIZED CHARACTERS a lot better!

Recently, a new version of the Qwen Image Edit model was released, and it's better in every single way. It also allows for easier installation since we no longer need to rely on extra LoRAs for additional functionality. New features include:

  • Better consistency

  • Pose transfer from any image

  • Improved Virtual Try-On

  • Advanced upscaling & dataset creation

🎨 Workflow Sections

⬜ Input / Output / Model Loaders
🟪 Modular Groups
🟥 Dataset Creation
🟨 Important Notes
🟩 Virtual Try On

Installation

  • Download the .json workflow file

  • Drag and drop it into ComfyUI

  • Install missing custom nodes if necessary → Update ALL

  • Download the models. You can find the links directly in the workflow and below.

  • To select the correct GGUF version for your system, choose the largest version that fits within your available VRAM.

  • Hit “R” to refresh or restart ComfyUI. Double check the model loader nodes and select the correct model from the list!

Download Models

Qwen-Image-Edit-GGUF
https://huggingface.co/QuantStack/Qwen-Image-Edit-GGUF/tree/main
📁 ComfyUI/models/unet/gguf

Qwen-Image-Lightning-4Steps
https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors
📁 ComfyUI/models/lora

qwen_2.5_vl_7b_fp8_scaled.safetensors
https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/text_encoders
📁 ComfyUI/models/clip/qwen

qwen_image_vae.safetensors
https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files/vae
📁 ComfyUI/models/vae

flux1-dev-fp8.safetensors
https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors
📁 ComfyUI/models/checkpoints

uso-flux1-dit-lora-v1.safetensors
https://huggingface.co/Comfy-Org/USO_1.0_Repackaged/resolve/main/split_files/loras/uso-flux1-dit-lora-v1.safetensors
📁 ComfyUI/models/loras

uso-flux1-projector-v1.safetensors
https://huggingface.co/Comfy-Org/USO_1.0_Repackaged/resolve/main/split_files/model_patches/uso-flux1-projector-v1.safetensors
📁 ComfyUI/models/model_patches

sigclip_vision_patch14_384.safetensors
https://huggingface.co/Comfy-Org/sigclip_vision_384/resolve/main/sigclip_vision_patch14_384.safetensors
📁 ComfyUI/models/clip_vision
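To sanity-check the installation, here is a small sketch that verifies the files above landed in the expected folders. The GGUF file is omitted because its name depends on the quant you chose, and the folder names may differ slightly between ComfyUI versions, so adjust the paths if yours do.

```python
from pathlib import Path

# Relative paths mirror the download list above (GGUF omitted: its file name varies).
REQUIRED = {
    "models/lora/Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors",
    "models/clip/qwen/qwen_2.5_vl_7b_fp8_scaled.safetensors",
    "models/vae/qwen_image_vae.safetensors",
    "models/checkpoints/flux1-dev-fp8.safetensors",
    "models/loras/uso-flux1-dit-lora-v1.safetensors",
    "models/model_patches/uso-flux1-projector-v1.safetensors",
    "models/clip_vision/sigclip_vision_patch14_384.safetensors",
}

def missing_models(comfy_root: str) -> list[str]:
    """Return the relative paths of required files that are not present."""
    root = Path(comfy_root)
    return sorted(rel for rel in REQUIRED if not (root / rel).exists())

# e.g. missing_models("ComfyUI") -> [] once everything is in place
```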

Before You Begin:

Thank you for considering supporting us! Since these workflows can be complex, we recommend testing the free versions first to ensure compatibility with your system. We cannot guarantee full compatibility with every system, which is why we always provide the main functionality for free!

Please take a moment to read through the entire guide. If you encounter any issues:

  1. Check the troubleshooting section at the end of the guide

  2. If problems persist, visit our Discord's #need-help channel and use the search function—many common issues have already been resolved

  3. If you cannot find it there, ask and we will try to help you. Give as much information as possible, including screenshots and the error message.

The workflow consists of two parts:

  1. The upper groups, where you generate your character's images in different poses, lighting scenarios and framings.

  2. The dataset creation groups where you can upscale your images and generate captions for them.

USING THE WORKFLOW

1 - GENERATE YOUR CHARACTER

  • Run the workflow in two steps: First generate your character images THEN activate the dataset creation group below and run again.

  • Import an image of your character. Both full body and close-up images work equally well.

  • Important: Give your character a name! All your images will be put into a new folder created in your ComfyUI output folder.

  • Optional: Create a simple one-sentence prompt. Describe the image style and clothing, especially elements not fully visible in your input image.

  • The workflow is modular: You can duplicate groups and adjust the prompts!

Remember: You can always tweak things and change prompts! Also, the workflow is modular—you can duplicate entire groups if you want more Virtual Try-On options, for example.

Speaking of the Virtual Try-On section: both it and the Pose Transfer group require imported reference images to work!

2 - DATASET CREATION

1: Activate the DATASET CREATION group here AFTER you ran the first part of the workflow!

2: Download these models. You can use any SDXL LIGHTNING model for upscaling.

3: Set your final upscale resolution here.

4: OPTIONAL: Adjust the prompt for upscaling here.

5: Your images from the first part will be loaded here.

6: The KSampler for the upscaling setup. It is matched to the requirements of the model. To the left, you can change the “strength_end” value in the “Timestep Keyframe Interpolation” node. Lower values = better quality, less likeness // higher values = less quality, more likeness.
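To picture what “strength_end” does, here is a sketch of the strength ramp, assuming a linear interpolation from a start strength of 1.0. The node offers several interpolation modes, so this is only an illustration of the trade-off, not the node's actual code.

```python
def strength_schedule(steps: int, strength_start: float = 1.0, strength_end: float = 0.6) -> list[float]:
    """Linearly interpolate the control strength from start to end over the steps."""
    if steps == 1:
        return [strength_start]
    return [
        strength_start + (strength_end - strength_start) * i / (steps - 1)
        for i in range(steps)
    ]

# A lower strength_end lets the upscale model deviate more in the late steps
# (more detail, less likeness); a higher strength_end keeps it pinned to the
# source image (more likeness, softer detail).
print([round(s, 3) for s in strength_schedule(5, 1.0, 0.6)])  # [1.0, 0.9, 0.8, 0.7, 0.6]
```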

7: This will add post-processing effects like grain and chromatic aberration. You can bypass this group if you don’t want them.

8: You can see the caption previews for all the images here. The name of your character is used as the keyword.

If your caption names don’t match the filenames, delete everything in the upscale folder and run the DATASET CREATION part again.
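If you want to check the image/caption pairing yourself before training, here is a small sketch. It assumes the common LoRA-trainer convention of one `.txt` caption per image with the same base name; the function name and folder layout are illustrative.

```python
from pathlib import Path

def check_dataset(folder: str, keyword: str) -> list[str]:
    """Report images without a matching caption, and captions missing the keyword."""
    problems = []
    root = Path(folder)
    for img in sorted(root.glob("*.png")) + sorted(root.glob("*.jpg")):
        cap = img.with_suffix(".txt")  # trainer convention: same base name, .txt extension
        if not cap.exists():
            problems.append(f"no caption for {img.name}")
        elif keyword.lower() not in cap.read_text(encoding="utf-8").lower():
            problems.append(f"keyword missing in {cap.name}")
    return problems

# e.g. check_dataset("ComfyUI/output/<character name>/upscale", "<character name>")
# returns [] when every image has a caption containing the keyword.
```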

© 2025 Mickmumpitz
