From 3D Generation to Texturing

[FULL GUIDE]

Jun 10, 2025

by Mickmumpitz

In this guide, we’ll walk you through a comprehensive workflow that starts with creating a 3D model and then moves into the texturing phase. This guide is designed to help you understand each step of the process, from generating your initial 3D model to applying detailed textures. By the end, you'll have a fully textured 3D model ready for your projects.

🎨 Workflow Sections

🟨 Important Notes
⬜ Input / Output / Model Loaders
🟩 Prompt / Image Generation
🟪 Image Processing
🟥 Latent / Sampling
🟧​ Texture Processing

Installation

Download the .json file and drag and drop it into your ComfyUI window.
Install the missing custom nodes via the manager and restart ComfyUI.

Hunyuan3DWrapper Installation

Currently, the Hunyuan3DWrapper has a bug when installed through the ComfyUI Manager: the folder it creates under ComfyUI_windows_portable\ComfyUI\custom_nodes contains a typo, which causes issues in later steps.

The folder is incorrectly named comfyui-hunyan3dwrapper — note the missing “u” in “hunyuan”.

Correcting the folder name to “comfyui-hunyuan3dwrapper” will solve this issue.

After that, you need to run two commands to finish the installation. Type cmd into the address bar of your ComfyUI_windows_portable folder to start a terminal session directly in that folder, then run the following commands.

First run this command:
python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-Hunyuan3DWrapper\requirements.txt

Then run this command:
python_embeded\python.exe -m pip install ComfyUI\custom_nodes\ComfyUI-Hunyuan3DWrapper\wheels\custom_rasterizer-0.1.0+torch260.cuda126-cp312-cp312-win_amd64.whl

Now restart ComfyUI and the installation should be completed!

Download Models

hunyuan3d-dit-v2-0-fp16.safetensors
https://huggingface.co/Kijai/Hunyuan3D-2_safetensors/blob/main/hunyuan3d-dit-v2-0-fp16.safetensors
📁 ComfyUI/models/diffusion_models/

controlnet-promax
Install via Model Manager.
Search for:
controlnet promax and install → xinsir/ControlNet++: All-in-one ControlNet (ProMax model)

juggernautXL_juggXIByRundiffusion:
https://huggingface.co/misri/juggernautXL_juggXIByRundiffusion/blob/main/juggernautXL_juggXIByRundiffusion.safetensors
📁 ComfyUI/models/checkpoints
Any SDXL model will work; just don't forget to adjust the sampler settings!

4x-ClearRealityV1.pth:
https://huggingface.co/skbhadra/ClearRealityV1/blob/main/4x-ClearRealityV1.pth
📁 ComfyUI/models/upscale_models
You can use other upscaling models here too.

face_yolov8m.pt:
Install via Model Manager.
Search for: face_yolov8m

You can find the WORKFLOWS & EXAMPLE FILES here:

Before You Begin:

Thank you for considering supporting us! Since these workflows can be complex, we recommend testing the free versions first to ensure compatibility with your system. We cannot guarantee full compatibility with every system, which is why we always provide the main functionality for free!

Please take a moment to read through the entire guide. If you encounter any issues:

  1. Check the troubleshooting section at the end of the guide

  2. If problems persist, visit our Discord's #need-help channel and use the search function—many common issues have already been resolved

  3. If you cannot find it there, ask and we will try to help you. Give as much information as possible, including screenshots and the error message.

1. Image Generation for 3D Model Generation

In the first step of the workflow, you have the flexibility to generate the initial character image in several ways. You can create the character image directly from scratch, use an open-pose image combined with ControlNet for more dynamic posing, or upload a pre-existing image that you've already created. By default, the tool is set to generate an image from scratch. Simply enter a description in the prompt to define what you want to create. As always, feel free to experiment with the seed value if you’re looking to achieve different variations and results.

If you choose to use an open-pose image, you'll need to activate the ControlNet input and apply ControlNet to guide the pose.

Alternatively, if you want to use an image you’ve already created, you can activate the input image node below and switch the connections accordingly. This gives you the freedom to start from a variety of sources and tailor the character creation to your needs.

2. Removing the Background

This step is automated—no input required. This section replaces the image background with a clean white fill, ensuring a neutral and consistent base for 3D Mesh generation.
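Under the hood, replacing the background with white amounts to alpha compositing the subject over a solid white layer. The following NumPy sketch illustrates the idea (it is not the actual node's implementation; array names are illustrative):

```python
import numpy as np

def composite_on_white(rgba: np.ndarray) -> np.ndarray:
    """Blend an RGBA image (float values in 0..1) over a solid white background."""
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]  # trailing axis on alpha for broadcasting
    white = np.ones_like(rgb)
    # Standard "over" operator: foreground * alpha + background * (1 - alpha)
    return rgb * alpha + white * (1.0 - alpha)

# One fully transparent pixel and one fully opaque pixel
pixels = np.array([[[0.2, 0.4, 0.6, 0.0],
                    [0.2, 0.4, 0.6, 1.0]]])
result = composite_on_white(pixels)
```

Transparent areas become pure white, while the opaque subject keeps its colors, which gives the mesh generator a neutral, consistent input.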

3. Generate 3D Mesh

In this section, the 3D mesh is generated. Usually, you don’t need to make any adjustments here either.

4. Delighting the Image

To neutralize lighting on the image, this section de-lights the generated image. De-lighting often reduces fine details, so a blending operation reintroduces them. By default, a blend value of 0.5 is applied, which you can adjust to fine-tune the balance between the original image and the de-lit version.
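The blend itself is a simple linear interpolation between the two images. A minimal sketch, assuming images as float arrays in 0..1 (the exact parameter direction in the node may differ; here blend = 1.0 means fully de-lit):

```python
import numpy as np

def blend_images(original: np.ndarray, delit: np.ndarray, blend: float = 0.5) -> np.ndarray:
    """Linearly interpolate between the original and the de-lit image.

    blend = 0.0 keeps the original; blend = 1.0 uses only the de-lit version.
    """
    return original * (1.0 - blend) + delit * blend

original = np.full((2, 2, 3), 0.8)  # bright original
delit = np.full((2, 2, 3), 0.4)     # flatter, de-lit version
mixed = blend_images(original, delit, blend=0.5)  # every value becomes 0.6
```

Raising the blend value pushes the result toward the flat, evenly lit version; lowering it preserves more of the original shading and detail.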

5. Render Textures

This section generates multiple views of the 3D model for consistent image generation later in the workflow.

Set the texture resolution in the Hy3D Render MultiView node by adjusting the texture_size parameter. A resolution of 2048 pixels typically strikes a good balance between quality and performance, but for higher detail, 4096 pixels can be used.

On the right-hand side of the screen, you'll see preview images showing position maps and depth maps. For the upcoming image generation, the depth map will be used.

6. Texture Sampling

In this step, additional views of your model are generated using the image from the Delighting section as a style and detail reference. This ensures texture consistency across all angles, helping to produce a seamless and visually coherent final texture.

7. Upscaling and Face Fix

Here, all sampled images are upscaled to enhance resolution and sharpness. Be sure to revisit the prompt in the upscaling module to specify how the final texture should appear. Optionally, enable the face detailer to refine facial features—particularly useful for character models. If your model doesn’t have a face, you can disable the nodes.

8. Texture Inpainting + Normal & Height Maps

In the final stage, your textures are exported. You’ll receive:

  • A fully inpainted texture file, with any gaps intelligently filled in

  • A normal map

  • A height map

With these files, your 3D model is ready for use in games, renderings, or real-time engines.
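For context, a normal map can be derived from a height map by taking the height field's gradients. This illustrative NumPy sketch shows the standard construction (it is not the workflow's actual node):

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Convert a 2D height map (values in 0..1) into a tangent-space normal map."""
    dy, dx = np.gradient(height.astype(np.float64))  # finite-difference slopes
    # The surface normal is perpendicular to the slope: (-dx, -dy, 1), then normalized
    n = np.stack([-dx * strength, -dy * strength, np.ones_like(dx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5  # remap from [-1, 1] to [0, 1] for storage as RGB

flat = np.zeros((4, 4))
normals = height_to_normal(flat)  # a flat surface maps to the classic (0.5, 0.5, 1.0) blue
```

The strength parameter exaggerates or softens the slopes, which is why normal maps baked from the same height data can look more or less pronounced.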

9. Final Model + Texture Preview

At the end, you should see a preview of your model and texture!

© 2025 Mickmumpitz
