3D Model Texturing Made Easy [ADVANCED]
[FULL GUIDE]
Jun 10, 2025 ᛫ by Mickmumpitz

This step-by-step guide shows you how to generate textures for a GLB 3D model using a smart, automated workflow. It’s perfect for producing a large number of variations quickly and with precise control. Whether you're experimenting with styles, iterating on ideas, or producing consistent outputs at scale, this process helps you stay flexible without the need for manual texture painting.
Ideal for 3D artists, designers, and creators, it offers a fast and intuitive way to explore texture possibilities without sacrificing control.
🎨 Workflow Sections
🟨 Important Notes
⬜ Input / Output / Model Loaders
🟩 Prompt / Image Generation
🟪 Image Processing
🟥 Latent / Sampling
🟧 Texture Processing

Installation
Download the .json file and drag and drop it into your ComfyUI window.
Install the missing custom nodes via the manager and restart ComfyUI.
Hunyuan3DWrapper Installation
Currently the Hunyuan3DWrapper has a bug when installing with the ComfyUI Manager.
In your custom_nodes folder under ComfyUI_windows_portable\ComfyUI\custom_nodes you can see that the installed folder has a typo in its name, which causes issues at later steps.
The folder is incorrectly named comfyui-hunyan3dwrapper (note the missing “u” in “hunyuan”).

Correcting the folder name to “comfyui-hunyuan3dwrapper” will solve this issue.

After that you need to run two commands to finish the installation. To open a terminal directly in that folder, type cmd into the address bar of your ComfyUI_windows_portable folder.

In that terminal, run the following two commands to complete the Hunyuan3DWrapper installation.
First run this command:
python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-Hunyuan3DWrapper\requirements.txt
Then run this command:
python_embeded\python.exe -m pip install ComfyUI\custom_nodes\ComfyUI-Hunyuan3DWrapper\wheels\custom_rasterizer-0.1.0+torch260.cuda126-cp312-cp312-win_amd64.whl
This wheel targets the portable build's Python 3.12 with torch 2.6.0 and CUDA 12.6; if your install differs, pick a matching wheel from the wheels folder if one is available.
Now restart ComfyUI and the installation should be complete!
Download Models
controlnet-promax
Install via Model Manager.
Search for "controlnet promax" and install: xinsir/ControlNet++: All-in-one ControlNet (ProMax model)
juggernautXL_juggXIByRundiffusion:
https://huggingface.co/misri/juggernautXL_juggXIByRundiffusion/blob/main/juggernautXL_juggXIByRundiffusion.safetensors
📁 ComfyUI/models/checkpoints
Any SDXL model will work; just don't forget to adjust the sampler settings!
4x-ClearRealityV1.pth:
https://huggingface.co/skbhadra/ClearRealityV1/blob/main/4x-ClearRealityV1.pth
📁 ComfyUI/models/upscale_models
You can use other upscaling models here too.
face_yolov8m.pt:
Install via Model Manager.
Search for: face_yolov8m
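If you prefer downloading from a script rather than the browser, the two Hugging Face files above can also be fetched with the huggingface_hub package (optional; the target folders assume the default ComfyUI layout):

from huggingface_hub import hf_hub_download

# Checkpoint -> ComfyUI/models/checkpoints
hf_hub_download(
    repo_id="misri/juggernautXL_juggXIByRundiffusion",
    filename="juggernautXL_juggXIByRundiffusion.safetensors",
    local_dir="ComfyUI/models/checkpoints",
)

# Upscaler -> ComfyUI/models/upscale_models
hf_hub_download(
    repo_id="skbhadra/ClearRealityV1",
    filename="4x-ClearRealityV1.pth",
    local_dir="ComfyUI/models/upscale_models",
)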
Download Models for ComfyUI IPAdapter plus
Download all models for the ComfyUI IPAdapter plus nodes. You can find them on the official GitHub page: https://github.com/cubiq/ComfyUI_IPAdapter_plus
It is sufficient to download only the first two sections of the models.
1. Download the clip_vision models and copy them into:
📁 /ComfyUI/models/clip_vision
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors (download and rename to this exact filename)
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors (download and rename to this exact filename)
clip-vit-large-patch14-336.bin (download and rename to this exact filename)
2. Download the ipadapter models and copy them into:
📁 /ComfyUI/models/ipadapter (create it if not present)
ip-adapter_sd15.safetensors, Basic model, average strength
ip-adapter_sd15_light_v11.bin, Light impact model
ip-adapter-plus_sd15.safetensors, Plus model, very strong
ip-adapter-plus-face_sd15.safetensors, Face model, portraits
ip-adapter-full-face_sd15.safetensors, Stronger face model, not necessarily better
ip-adapter_sd15_vit-G.safetensors, Base model, requires bigG clip vision encoder
ip-adapter_sdxl_vit-h.safetensors, SDXL model
ip-adapter-plus_sdxl_vit-h.safetensors, SDXL plus model
ip-adapter-plus-face_sdxl_vit-h.safetensors, SDXL face model
ip-adapter_sdxl.safetensors, vit-G SDXL model, requires bigG clip vision encoder
ip-adapter_sd15_light.safetensors, v1.0 Light impact model
You can find the WORKFLOWS & EXAMPLE FILES here:
Before You Begin: Thank you for considering supporting us! Since these workflows can be complex, we recommend testing the free versions first to ensure compatibility with your system. We cannot guarantee full compatibility with every system, which is why we always provide the main functionalities for free! Please take a moment to read through the entire guide. If you encounter any issues:
1. Insert Your 3D Model
Start by uploading your model in GLB format using the “Hy3D Upload Mesh” node.
At the bottom of the interface, the "Hy3D Mesh UV Wrap" node is off by default, since it's only needed for models that don't already have UVs. If your model lacks UVs, enable this node to auto-generate UV coordinates.
Once done, your model will appear in the 3D preview window.
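If you're not sure whether your GLB already contains UVs, you can check it outside ComfyUI with a small script using the trimesh Python library (an optional helper; the file name is just an example):

import trimesh

scene = trimesh.load("model.glb", force="scene")  # example file name
for name, geom in scene.geometry.items():
    uv = getattr(geom.visual, "uv", None)  # texture visuals carry UVs, plain color visuals don't
    print(f"{name}: UVs present = {uv is not None}")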

2. Render Textures
This section generates multiple views of the 3D model for consistent image generation later in the workflow.
Next, set the texture resolution in the Hy3D Render MultiView node by adjusting the texture_size parameter. A resolution of 2048 pixels typically strikes a good balance between quality and performance, but for higher detail, 4096 pixels can be used.
On the right-hand side of the screen, you'll see preview images showing position maps and depth maps. For the upcoming image generation, the depth map will be used.

3. Selecting the Main View
Now choose the main view for your model. This view serves as the primary reference for creating consistent textures. For humanoid models, the front view (usually value 1) is ideal. For horizontally oriented models like dinosaurs, a side view might capture the geometry better. Set the corresponding view value to proceed.

4. Generating the Image for the Main View
With your main view selected, it's time to generate an image. Use the prompt field to describe how you'd like the texture to appear. If the result isn't quite right, tweak the seed value in the KSampler to generate variations. Repeat until you're satisfied with the appearance.

5. Reference Image (Optional)
In the Reference Image section, you can enhance the image generated in the Image Generation step by incorporating a reference image. This reference image will influence the style, details, or overall look of the generated character. If you have a specific image that you’d like your final result to resemble, you can upload it here.
You can also fine-tune how much influence the reference image has by adjusting the IP adapter settings. Specifically, you can change the weight to control how strongly the reference image affects the output, and adjust the start and end values to determine when the influence begins and ends during the generation process.


6. Inpainting
Use the Inpainting section to refine specific parts of your generated image. If there’s an area you’re not satisfied with—like the character’s face—you can make targeted adjustments.
Start by creating a mask over the area you want to improve. For example, if you want to enhance the face, draw a mask on the face area.

Then, add a description like "face" to the prompt and generate a new inpainted version of the image. This new image will incorporate the changes based on your adjustments, using the original image from the Image Generation section as a reference. This allows you to fine-tune and perfect specific details of your character.

7. Removing the Background
This step is automated—no input required. The section replaces the image background with a clean white fill, ensuring a neutral and consistent base for the final texture.
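For reference, the result of this step is equivalent to compositing the masked-out subject over a plain white canvas; here is a minimal sketch of that idea (not the exact node implementation, and the file names are examples):

from PIL import Image

rgba = Image.open("main_view_cutout.png").convert("RGBA")  # subject with alpha mask
white = Image.new("RGBA", rgba.size, (255, 255, 255, 255))
Image.alpha_composite(white, rgba).convert("RGB").save("main_view_white_bg.png")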

8. De-Lighting the Image
To neutralize lighting baked into the image, this section de-lights the generated image. This often reduces fine details, so a blending operation reintroduces them. By default, a blend value of 0.5 is applied, which you can adjust to fine-tune the balance between the original image and the de-lit version.
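Conceptually, the blend is a simple linear mix of the original and the de-lit image; here is a minimal sketch of that idea (file names are illustrative, and the actual node may differ in detail):

import numpy as np
from PIL import Image

original = np.asarray(Image.open("main_view.png"), dtype=np.float32)
delit = np.asarray(Image.open("main_view_delit.png"), dtype=np.float32)

blend = 0.5  # 0.0 keeps only the original, 1.0 keeps only the de-lit image
mixed = (1.0 - blend) * original + blend * delit
Image.fromarray(mixed.clip(0, 255).astype(np.uint8)).save("main_view_blended.png")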

9. Texture Sampling
In this step, additional views of your model are generated using the main view as a style and detail reference. This ensures texture consistency across all angles, helping to produce a seamless and visually coherent final texture.

10. Upscaling and Face Fix
Here, all sampled images are upscaled to enhance resolution and sharpness. Be sure to revisit the prompt in the upscaling module to specify how the final texture should appear. Optionally, enable the face detailer to refine facial features—particularly useful for character models. If your model doesn’t have a face, you can disable the nodes.

11. Finalizing the Textures
In the final stage, your textures are exported. You’ll receive:
A fully inpainted texture file, with any gaps intelligently filled in
A normal map
With these two files, your 3D model is ready for use in games, renderings, or real-time engines.
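If you want to check the result outside ComfyUI, one option is to hook the two exported maps up to the model in Blender. A rough sketch using Blender's Python API (run inside Blender with the imported model selected; the file names are examples, not fixed workflow outputs):

import bpy

obj = bpy.context.active_object  # the imported mesh
mat = bpy.data.materials.new("GeneratedTexture")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

# Color texture -> Base Color
color = nodes.new("ShaderNodeTexImage")
color.image = bpy.data.images.load("//texture_inpainted.png")
links.new(color.outputs["Color"], bsdf.inputs["Base Color"])

# Normal map -> Normal (normal maps must be read as non-color data)
normal_img = nodes.new("ShaderNodeTexImage")
normal_img.image = bpy.data.images.load("//normal_map.png")
normal_img.image.colorspace_settings.name = "Non-Color"
normal_map = nodes.new("ShaderNodeNormalMap")
links.new(normal_img.outputs["Color"], normal_map.inputs["Color"])
links.new(normal_map.outputs["Normal"], bsdf.inputs["Normal"])

obj.data.materials.append(mat)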

12. Final Texture Preview
At the end you should see a preview of your texture!

© 2025 Mickmumpitz