Control Multiple Consistent Characters & Camera

[FULL GUIDE]

Mar 7, 2025

by Mickmumpitz

One of the biggest challenges with AI image generation is maintaining character consistency, especially when you want to control camera angles, character poses, or have multiple characters interacting in the same shot. 

This guide will walk you through a powerful workflow that combines ComfyUI with Blender to give you unprecedented control for creating AI movies, comics, children's books, virtual influencers, or company mascots.

Table of Contents

  1. Prerequisites and Setup

  2. Character Creation

    • Option A: LoRA Training Workflow

    • Option B: SDXL Workflow (No LoRA Training)

  3. 3D Character Creation

  4. Environment Setup

  5. Scene Composition

  6. AI Rendering Workflow

    • Single Character Setup

    • Two Characters Setup

  7. Animation (Optional)

  8. Advanced Techniques

Prerequisites and Setup

Before we dive into creating consistent characters, let's make sure you have all the necessary tools installed and configured properly.

Required Software

  • ComfyUI - Our primary AI image generation interface

  • ComfyUI Manager - Makes installing models and nodes much easier

  • Blender - Free 3D software for posing and scene creation

Required Models and Nodes

For Flux/ControlNet Workflow:

Flux is a powerful image model that produces high-quality results when used with LoRAs. To set it up:

  1. Install ComfyUI Manager and open ComfyUI

  2. From Manager, install:

    • Flux_DEV checkpoint (search "flux" in model manager)

    • ControlNet Union Model (search "union" in model manager)

    • Download the 8-step LoRA for Flux:

      • Follow the link in the node to download the pytorch model

      • Save to ComfyUI/models/loras directory

      • Rename to "8-step-LoRA.safetensors" (this LoRA cuts sampling down to 8 steps, which speeds up generation significantly; a minimal rename sketch follows below)

For a full ComfyUI installation walkthrough, see: https://youtu.be/E_D7y0YjE88?si=2Lf3PJVOv1MWR9TC
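
As a tiny illustration of the save-and-rename step, here is a Python sketch; the downloaded file name is a placeholder, since the real one depends on the link in the node:

```python
from pathlib import Path

# Move the downloaded 8-step LoRA into ComfyUI's loras folder under the
# name the workflow expects. "downloaded_lora.safetensors" is a placeholder.
loras_dir = Path("ComfyUI/models/loras")
downloaded = Path("Downloads/downloaded_lora.safetensors")  # placeholder name
downloaded.rename(loras_dir / "8-step-LoRA.safetensors")
```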

For SDXL Workflow:

If you prefer not to train LoRAs or have a less powerful GPU, the SDXL workflow is an excellent alternative:

  1. From Manager, install:

    • Juggernaut XL checkpoint (a versatile SDXL model)

    • Promax ControlNet (search "promax" in model manager)

    • ComfyUI IP Adapter Plus custom node pack (this is crucial for character consistency)

Required Custom Nodes:

The workflow uses several custom nodes that extend ComfyUI's functionality:

  1. Open ComfyUI Manager

  2. Click "Install missing custom nodes"

  3. Select all and install

  4. Restart ComfyUI

Character Creation

Traditional AI image generation struggles with consistency when placing multiple characters in a scene - they often merge features or have inconsistent proportions. We'll solve this with two different approaches.

Option A: LoRA Training Workflow

Training LoRAs (Low-Rank Adaptations) for your characters gives the best quality and consistency:

  1. Follow my "Consistent Character Creator" tutorial to:

    • Generate a varied dataset of your character from different angles

    • Train a LoRA model for your character

    • Ensure you create separate LoRAs for each character

  2. Note your trigger words for each character LoRA - you'll need these in your prompts later

This approach requires a decent GPU, but produces the most consistent results for character features, especially when viewed from different angles.

Option B: SDXL Workflow (No LoRA Training)

If you don't have a powerful GPU or want faster results, this alternative approach works surprisingly well:

  1. Prepare a clear frontal image of your character

  2. This will be used with IP Adapter in the workflow, which acts as a visual reference

The IP Adapter essentially transforms your reference image into a sort of visual prompt, helping maintain consistency without any training. The sketch below shows the same idea outside ComfyUI.
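
For a concrete picture of what the IP Adapter does, here is a hedged sketch using the diffusers library rather than the ComfyUI IP Adapter Plus node pack; the model and adapter repos are standard public ones, but the file names and scale value are illustrative assumptions:

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

# The reference image is injected as a "visual prompt" alongside the text.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
pipe.set_ip_adapter_scale(0.6)  # how strongly the reference steers the image

reference = load_image("character_front.png")  # your clear frontal image
result = pipe(
    prompt="standing in an office, soft evening light",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
result.save("ip_adapter_test.png")
```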

3D Character Creation

Used Workflow: 250227_Hunyuan3D+Upscale_v01.json

To truly control our characters' poses and camera angles, we need to create 3D models that we can position and animate.

Using Hunyuan 3D (Free Option)

Hunyuan 3D is an incredible free tool that can generate a 3D model from a 2D image:

  1. Install Hunyuan 3D

  2. Character Model Creation:

    • Import your frontal character image

    • Click "Queue" to generate the 3D model

    • Wait for processing (may take several minutes depending on your system)

  3. Enhance Textures (Optional but Recommended):
    The default textures from Hunyuan 3D can be pretty bad, so I created an additional ComfyUI workflow to upscale them. This one:

    • Uses Juggernaut XL and Florence 2 for higher-quality textures

    • Applies ControlNet Union and IP Adapter to maintain character likeness

    • Lets you experiment with the sampler settings (values between 0.35 and 0.40 work well)

    • Includes a face detailer to improve facial features and expressions

This step makes a huge difference in the final quality - the improved textures will make your character much more recognizable.
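
As a rough analogue outside ComfyUI, here is a minimal diffusers img2img sketch; the model ID and file names are assumptions, and the `strength` parameter plays the role of the low value recommended above (composition stays put, detail gets sharpened):

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Illustrative low-denoise refinement pass over a baked texture.
# Model ID and file paths are placeholder assumptions.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16
).to("cuda")

texture = load_image("character_texture.png")
refined = pipe(
    prompt="detailed skin and fabric texture, sharp, high quality",
    image=texture,
    strength=0.38,            # ~0.35-0.40, as recommended above
    num_inference_steps=30,
).images[0]
refined.save("character_texture_refined.png")
```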

Alternative Options

If you prefer web-based tools, here is a selection of results from different generators:

Rigging in Blender

Once you have your 3D model, it needs a skeleton (rig) to be posable:

  1. Import your 3D model into Blender

  2. Activate the Rigify add-on - this gives us a powerful pre-built human rig:

    • Edit > Preferences > Add-ons > Search for "rigify"

    • Enable the add-on

  3. Add a meta-rig as a starting point:

    • Add > Armature > Human (Meta-Rig)

    • Align with your 3D model

  4. Adjust the rig to fit your character:

    • Enter Edit Mode

    • Activate X-Axis Mirror (so adjustments happen symmetrically)

    • Move bones to fit your geometry

    • For asymmetrical models, deactivate X-Axis Mirror for final adjustments

  5. Generate the rig - this transforms the simple meta-rig into a fully featured animation rig:

    • Select the rig

    • Go to Data Properties

    • Click "Generate Rig"

  6. Clean up geometry - 3D models often have overlapping vertices that can cause issues:

    • Select the model

    • Enter Edit Mode

    • Select all vertices (A)

    • Mesh > Clean Up > Merge By Distance

    • Use a value around 0.002 to remove duplicate vertices without changing appearance

    • This step removed about 3,000 vertices in our example without visibly changing the model

  7. Bind the rig to your model:

    • Select the model first

    • Then select the rig

    • Press Ctrl+P

    • Choose "Armature Deform with Automatic Weights"

  8. Hide extra bones:

    • Hide most of the face rig if not needed

    • The rig might look complicated, but we only need the main body controls

Your character is now ready to pose! The rig doesn't need to be perfect - even if there are some small issues, they'll be fixed in the final AI rendering stage. If you'd rather script these steps, see the bpy sketch below.
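
Here is a minimal bpy sketch covering the add-on, the cleanup, the rig generation, and the binding. It assumes your imported mesh is named "Character" (a placeholder); fitting the meta-rig bones to your model is still best done by hand in Edit Mode:

```python
import bpy

# Enable the Rigify add-on (Edit > Preferences > Add-ons > "rigify").
bpy.ops.preferences.addon_enable(module="rigify")

# Clean up geometry: Mesh > Clean Up > Merge By Distance (~0.002).
mesh = bpy.data.objects["Character"]  # placeholder object name
bpy.context.view_layer.objects.active = mesh
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.002)  # "Merge By Distance" in the UI
bpy.ops.object.mode_set(mode='OBJECT')

# Add the human meta-rig, then fit its bones to the model in Edit Mode.
bpy.ops.object.armature_human_metarig_add()
metarig = bpy.context.object  # align this to your character by hand

# ...after adjusting the bones, generate the full animation rig:
bpy.ops.pose.rigify_generate()
rig = bpy.context.object  # the generated rig becomes the active object

# Bind: model first, then rig, then "Armature Deform with Automatic Weights".
bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```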

Environment Setup

Your characters need a world to exist in. Here are three approaches for creating environments:

Option 1: 3D Modeling

The traditional approach that gives the most control:

  1. Generate assets using Hunyuan3D for furniture and props

  2. Use free asset packs to fill out your scene

  3. Create traditional models in Blender for anything specific you need

Option 2: 360° Environment Generation

A faster approach using AI to generate your entire environment:

  1. Use the 360° image workflow for Flux:

    • Apply the 360 HDR LoRA to generate a complete surrounding environment

    • Generate a 360° environment with a depth map (this gives spatial information)

  2. Import into Blender as a world sphere:

    • Create an icosphere

    • Create a shader with:

      • Environment image as equirectangular emission texture

      • Depth map (inverted) to create actual depth

      • RGB curves node to shape the room's dimensions

While not perfect, this approach creates a believable environment much faster than modeling everything manually, and any imperfections will be cleaned up in the final AI rendering. The shader setup is sketched below.
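
Here is a hedged bpy sketch of that world-sphere shader; the image paths are placeholders, and the displacement scale and curve shape will need tuning per scene:

```python
import bpy

# Icosphere that becomes the world shell; raise subdivisions if the
# displaced geometry looks faceted.
bpy.ops.mesh.primitive_ico_sphere_add(subdivisions=6, radius=5.0)
sphere = bpy.context.object

mat = bpy.data.materials.new("Env360")
mat.use_nodes = True
mat.cycles.displacement_method = 'DISPLACEMENT'  # Blender 4.1+: mat.displacement_method
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out = nodes.new("ShaderNodeOutputMaterial")
emit = nodes.new("ShaderNodeEmission")
env = nodes.new("ShaderNodeTexEnvironment")           # equirectangular projection
env.image = bpy.data.images.load("//env_360.png")     # placeholder path
depth = nodes.new("ShaderNodeTexEnvironment")
depth.image = bpy.data.images.load("//env_depth.png") # placeholder path
invert = nodes.new("ShaderNodeInvert")
curves = nodes.new("ShaderNodeRGBCurve")              # shape the room's dimensions
disp = nodes.new("ShaderNodeDisplacement")

links.new(env.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
links.new(depth.outputs["Color"], invert.inputs["Color"])  # inverted depth
links.new(invert.outputs["Color"], curves.inputs["Color"])
links.new(curves.outputs["Color"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"], out.inputs["Displacement"])

sphere.data.materials.append(mat)
```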

Option 3: Texture-based Approach

A hybrid approach that combines modeling with AI texturing:

  1. Model basic geometry in Blender (walls, floors, major furniture)

  2. Use Stable Diffusion or Flux to texture the models with realistic materials

  3. See my dedicated tutorial for this specific workflow

This method balances control with efficiency, letting you focus your modeling efforts on the structure while AI handles the details.

Scene Composition

Now it's time to bring everything together to create your scenes:

  1. Append your characters and environment into one Blender scene

  2. Create camera shots for your scenes - think like a film director:

    • Consider framing, composition, and storytelling through camera placement

    • Set up multiple cameras for different angles if needed

  3. Pose your characters:

    • Focus on key poses for each shot (like traditional animation "keyframes")

    • Use a blocking approach (no interpolation needed) - just the important poses

    • This mirrors traditional 3D animation workflow where key poses are established first

  4. Set up lighting to establish the mood:

    • Experiment with lighting to create the desired atmosphere

    • For our example film, we created a late evening atmosphere just after sunset

    • Good lighting dramatically improves your final renders

Don't worry about making everything perfect in Blender - we're mainly creating a layout that our AI rendering will transform. The poses and camera angles are what matter most.
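
If you like to script your shots, here is a small bpy sketch that adds a camera, makes it the active render camera, and renders the layout frame you'll later feed into ComfyUI; the transform values and output path are placeholder assumptions:

```python
import bpy
from math import radians

# Add a camera for one shot; location/rotation are placeholder values.
bpy.ops.object.camera_add(location=(0.0, -4.0, 1.6),
                          rotation=(radians(85), 0.0, 0.0))
cam = bpy.context.object
cam.name = "Shot_010"
bpy.context.scene.camera = cam  # make this the active render camera

# Render the layout frame that will be dropped into the ComfyUI workflow.
scene = bpy.context.scene
scene.render.resolution_x = 1280
scene.render.resolution_y = 720
scene.render.filepath = "//renders/shot_010.png"  # placeholder path
bpy.ops.render.render(write_still=True)
```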

AI Rendering Workflow

This is where the magic happens - transforming our basic 3D layouts into polished, artistic renderings using AI. We have two main approaches: Flux or SDXL.

You can find the FREE WORKFLOWS & INSTALLATION GUIDE here: Patreon Free Post

For ADVANCED WORKFLOWS & EXAMPLE FILES: Patreon Advanced Post

Flux Version (Higher Quality)

Single Character Setup

For scenes with just one character:

  1. Import the single character workflow in ComfyUI

  2. Drag and drop your Blender render into the input

  3. Configure the character prompt:

    • Include LoRA trigger word to activate your character's specific features

    • Describe the character's appearance and expression

    • Add environment description (office, outdoor scene, etc.)

  4. Load the character LoRA at maximum strength for best consistency

  5. Configure ControlNet:

    • Typically only Tile ControlNet is needed (95% of the time)

    • The workflow uses keyframe interpolation to gradually reduce the ControlNet strength over the sampling steps

    • This preserves composition while allowing creative details to emerge (a strength-ramp sketch follows after this list)

  6. Run the workflow by clicking "Queue Prompt"

  7. Experiment with different seeds if needed - the seed has a huge impact on the final image

  8. For better hands or details:

    • Bypass the 8-step LoRA (which prioritizes speed)

    • Increase steps to 25 or higher (slower but higher quality)
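
To make the "gradually reduce control" idea concrete, here is a self-contained sketch of a linear strength ramp. The function and its start/end values are illustrative assumptions, not the workflow's actual node settings:

```python
# Illustrative sketch: ramp ControlNet influence down across sampling steps,
# so early steps lock in the Blender layout and later steps are free to
# invent detail. Start/end values are assumptions - tune to taste.
def control_strength(step: int, total_steps: int,
                     start: float = 1.0, end: float = 0.2) -> float:
    t = step / max(total_steps - 1, 1)
    return start + (end - start) * t

# Example: an 8-step run starts fully constrained and ends loosely guided.
print([round(control_strength(s, 8), 2) for s in range(8)])
# [1.0, 0.89, 0.77, 0.66, 0.54, 0.43, 0.31, 0.2]
```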

Two Characters Setup

For scenes where characters interact:

  1. Import the two character workflow in ComfyUI

  2. Drag and drop your Blender render with two characters

  3. The workflow automatically detects and masks each character using SAM2 segmentation

    • This is crucial to prevent character features from "bleeding" into each other

  4. Configure each character separately:

    • Set prompt with LoRA trigger word for Character 1

    • Set prompt with LoRA trigger word for Character 2

    • Set environment prompt for the background/setting

  5. Load the appropriate LoRAs for each character

  6. Use Tile ControlNet for composition consistency

  7. Run the workflow

  8. If characters switch positions, swap the masks in the nodes

    • Sometimes the automatic detection might mix up which character is which; a small mask-ordering helper is sketched below
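
If you want to sanity-check the mask assignment outside ComfyUI, here is a small illustrative helper; the file names and the left-to-right convention are assumptions:

```python
import numpy as np
from PIL import Image

def order_masks_left_to_right(mask_paths):
    """Sort binary character masks by the horizontal centroid of their
    white pixels, so the leftmost figure is always "Character 1"."""
    masks = [np.array(Image.open(p).convert("L")) > 127 for p in mask_paths]
    centroids = [np.argwhere(m)[:, 1].mean() for m in masks]  # mean x position
    return [m for _, m in sorted(zip(centroids, masks), key=lambda t: t[0])]

# Hypothetical usage with two exported SAM2 masks:
left, right = order_masks_left_to_right(["mask_a.png", "mask_b.png"])
```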

SDXL Version (No LoRA Training)

This version works surprisingly well without requiring any LoRA training:

Single Character Setup

  1. Import the SDXL workflow in ComfyUI

  2. Drag and drop your Blender render

  3. Configure the character prompt:

    • Use a list-style prompt format, which SDXL tends to prefer

    • Add detailed character description

    • Add negative prompt to prevent unwanted elements

  4. Configure ControlNet:

    • Use both Tile and Canny ControlNet

    • Canny extracts outlines from the original image, helping maintain structure (a minimal preprocessing sketch follows after this list)

    • Adjust strength as needed

  5. Configure IP Adapter:

    • Load the frontal image of your character as a visual reference

    • This helps maintain consistency without LoRA training

    • Uses the ComfyUI IP Adapter Plus node pack

  6. Run the workflow
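
For reference, the Canny step boils down to classic edge detection. Here is a minimal OpenCV sketch; the thresholds are common defaults, not the workflow's exact values:

```python
import cv2

# Extract outlines from the Blender render so ControlNet can hold the
# structure of the shot. Threshold values are typical defaults.
render = cv2.imread("blender_render.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(render, threshold1=100, threshold2=200)
cv2.imwrite("blender_render_canny.png", edges)
```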

Two Characters Setup

  1. Import the SDXL two character workflow

  2. Drag and drop your Blender render with two characters

  3. The workflow automatically detects and masks characters

  4. Configure each character separately:

    • Set prompt for Character 1

    • Set prompt for Character 2

    • Set environment prompt

    • Add negative prompt

  5. Configure ControlNet (Tile and Canny)

  6. Configure IP Adapter:

    • Load frontal image for Character 1

    • Load frontal image for Character 2

  7. Run the workflow

  8. If characters switch positions, swap the masks in the nodes

Advanced Workflows (Patreon)

For even better results, the advanced versions add:

  1. Face detailer - enhances facial features:

    • Particularly useful for shots where faces are small or distant

    • Automatically detects and improves all faces in the image

  2. Upscaler - two options, chosen with a switch in the workflow:

    • Option 1 (faster): general prompt-based upscaling (set the switch to 2)

    • Option 2 (higher quality): regional LoRA and prompt upscaling (set the switch to 1)

    • The second option maintains better character consistency but is much slower

Don't expect perfect results on the first try - play with the seed values and control strengths to get exactly the image you want.

Animation (Optional)

If you want to create a full animated video from your character images:

  1. Generate key frames using the workflow above

  2. Use Kling AI for video interpolation:

    • Upload start and end frames

    • Describe the action in natural language ("character turns head" etc.)

    • Set relevance to 0.7 for closer adherence to images

    • This balances between following your keyframes and adding natural movement

  3. For dialog scenes:

    • Use ElevenLabs for voice generation and voice changing

    • Use Kling lip sync to add lip movements to match the audio

    • Tip: Use negative prompts like "talking, screaming" to prevent unwanted mouth movements

  4. For sound effects:

    • Try MM Audio to generate audio based on video

    • It analyzes the video frames and creates appropriate sound effects

Advanced Techniques

Style Changes

For visual variety or special sequences:

  1. Use the image-to-image workflow to modify images

  2. Apply different styles to create visual contrast

  3. Cut between styles for dramatic effect

    • In our example film, we used an "early AI" style for the action sequence

    • This created a distinct visual language for that part of the story

Troubleshooting Tips

  • If character masks are incorrect, create manual masks in Blender using Mat ID passes

  • If faces look poor in distant shots, use the face detailer node

  • For hand issues, increase steps and bypass the 8-step LoRA

  • Try multiple seeds for problematic images - sometimes a small seed change fixes issues

  • If characters' features are mixing, make sure the masks are properly separated

Remember that this workflow is quite flexible - don't be afraid to experiment and adapt it to your specific needs. The goal is to give you unprecedented control over AI-generated characters while maintaining consistency throughout your project.

© 2025 Mickmumpitz
