diff --git a/docs.json b/docs.json
index 23475b9d..87d12d08 100644
--- a/docs.json
+++ b/docs.json
@@ -250,6 +250,13 @@
"group": "Audio",
"pages": ["tutorials/audio/ace-step/ace-step-v1"]
},
+ {
+ "group": "Utility",
+ "pages": [
+ "tutorials/utility/preprocessors",
+ "tutorials/utility/frame-interpolation"
+ ]
+ },
{
"group": "Partner Nodes",
"pages": [
@@ -942,6 +949,13 @@
"group": "音频",
"pages": ["zh-CN/tutorials/audio/ace-step/ace-step-v1"]
},
+ {
+ "group": "Utility",
+ "pages": [
+ "zh-CN/tutorials/utility/preprocessors",
+ "zh-CN/tutorials/utility/frame-interpolation"
+ ]
+ },
{
"group": "合作伙伴节点",
"pages": [
diff --git a/tutorials/utility/frame-interpolation.mdx b/tutorials/utility/frame-interpolation.mdx
new file mode 100644
index 00000000..a8973c23
--- /dev/null
+++ b/tutorials/utility/frame-interpolation.mdx
@@ -0,0 +1,61 @@
+---
+title: "Frame interpolation workflow"
+description: "Learn how to use frame interpolation to smooth motion and increase frame rate in ComfyUI video workflows"
+sidebarTitle: "Frame interpolation"
+---
+
+## What is frame interpolation?
+
+Frame interpolation generates intermediate frames between existing ones, producing smoother motion and better temporal consistency. It is a common video post-processing step that can significantly improve the perceived quality of generated videos.
+
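+Conceptually, the simplest interpolator is a plain cross-fade between neighboring frames. The sketch below illustrates that idea in Python/NumPy (function names are ours, not part of any ComfyUI node); production interpolation nodes instead use motion-aware models such as RIFE or FILM, which avoid the ghosting a naive blend produces on fast motion.
+
+```python
+import numpy as np
+
+def blend_midframe(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
+    """Linearly blend two uint8 frames; t=0.5 gives the midpoint frame."""
+    a = frame_a.astype(np.float32)
+    b = frame_b.astype(np.float32)
+    return ((1.0 - t) * a + t * b).clip(0, 255).astype(np.uint8)
+
+def interpolate_2x(frames: list) -> list:
+    """Insert one blended frame between each consecutive pair: N frames -> 2N - 1."""
+    out = []
+    for a, b in zip(frames, frames[1:]):
+        out.extend([a, blend_midframe(a, b)])
+    out.append(frames[-1])
+    return out
+```
+
+One pass of midpoint insertion effectively doubles the frame rate, so a 16fps clip becomes 32fps over the same duration; a second pass doubles it again.
+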
+## Use cases
+
+Frame interpolation is useful for:
+- **Increasing frame rate in short clips** - Convert low-FPS videos to higher frame rates
+- **Smoothing motion in generated or edited video** - Reduce stuttering and stepping artifacts
+- **Preparing sequences for downstream video models** - Standardize frame rate for consistent processing
+- **Fixing low-FPS generations** - Smooth out low-frame-rate clips without regenerating them
+
+## Why frame interpolation matters
+
+Many image and video generation workflows still default to 16fps, which can introduce noticeable stutter, stepping, and uneven motion—especially in camera moves and character animation.
+
+Frame interpolation is an effective way to smooth these artifacts without regenerating the source frames, making motion feel more natural while preserving the original composition and timing.
+
+Interpolation does not have to be the final post-process: used as a preprocessing step, it lets you standardize frame rate early and feed cleaner temporal data into larger animation and video pipelines.
+
+Run on Comfy Cloud
+
+## Getting started
+
+1. Update ComfyUI to the latest version or use Comfy Cloud.
+2. Download the workflow linked above or find it in Templates on Comfy Cloud.
+3. Follow the pop-up dialogs to download the required models and custom nodes.
+4. Load your video frames, adjust interpolation settings, and run the workflow.
+
+## Related tutorials
+
+- [Wan Video](/tutorials/video/wan/wan2_2) - Video generation workflows
+- [LTX Video](/tutorials/video/ltxv) - Video generation with LTX models
diff --git a/tutorials/utility/preprocessors.mdx b/tutorials/utility/preprocessors.mdx
new file mode 100644
index 00000000..4502703f
--- /dev/null
+++ b/tutorials/utility/preprocessors.mdx
@@ -0,0 +1,152 @@
+---
+title: "Preprocessor workflows"
+description: "Learn how to use depth estimation, lineart conversion, pose detection, and normals extraction preprocessors in ComfyUI"
+sidebarTitle: "Preprocessors"
+---
+
+## What are preprocessors?
+
+Preprocessors are foundational tools that extract structural information from images. They convert images into conditioning signals like depth maps, lineart, pose skeletons, and surface normals. These outputs drive better control and consistency in ControlNet, image-to-image, and video workflows.
+
+Using preprocessors as separate workflows enables:
+- Faster iteration without full graph reruns
+- Clear separation of preprocessing and generation
+- Easier debugging and tuning
+- More predictable image and video results
+
+## Depth estimation
+
+Depth estimation converts a flat image into a depth map representing relative distance within a scene. This structural signal is foundational for controlled generation, spatially aware edits, and relighting workflows.
+
+This workflow emphasizes:
+- Clean, stable depth extraction
+- Consistent normalization for downstream use
+- Easy integration with ControlNet and image-edit pipelines
+
+Depth outputs can be reused across multiple passes, making it easier to iterate without re-running expensive upstream steps.
+
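+A common convention is simple min-max normalization to a stable range before handing the map to ControlNet or a save node. This is a minimal sketch of that convention (function names are ours); note that estimators such as MiDaS or Depth Anything emit relative depth, so the exact scaling expected downstream can vary by model.
+
+```python
+import numpy as np
+
+def normalize_depth(depth: np.ndarray) -> np.ndarray:
+    """Rescale a raw depth map to [0, 1] so downstream nodes see a stable range."""
+    d = depth.astype(np.float32)
+    d_min, d_max = d.min(), d.max()
+    if d_max - d_min < 1e-8:          # flat map: avoid divide-by-zero
+        return np.zeros_like(d)
+    return (d - d_min) / (d_max - d_min)
+
+def depth_to_uint8(depth01: np.ndarray) -> np.ndarray:
+    """Pack a [0, 1] depth map into an 8-bit grayscale image."""
+    return (depth01 * 255.0).round().astype(np.uint8)
+```
+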
+Run on Comfy Cloud
+
+## Lineart conversion
+
+Lineart preprocessors distill an image down to its essential edges and contours, removing texture and color while preserving structure.
+
+This workflow is designed to:
+- Produce clean, high-contrast lineart
+- Minimize broken or noisy edges
+- Provide reliable structural guidance for stylization and redraw workflows
+
+Lineart pairs especially well with depth and pose, offering strong structural constraints without overconstraining style.
+
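+Learned lineart models produce much cleaner results, but a crude approximation helps show what the preprocessor is extracting. A hedged sketch using OpenCV's Canny edge detector (thresholds are illustrative):
+
+```python
+import cv2
+import numpy as np
+
+def rough_lineart(image_bgr: np.ndarray, low: int = 100, high: int = 200) -> np.ndarray:
+    """Crude lineart: grayscale -> mild blur -> Canny edges -> invert to dark-on-white."""
+    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
+    gray = cv2.GaussianBlur(gray, (3, 3), 0)   # mild denoise reduces broken, noisy edges
+    edges = cv2.Canny(gray, low, high)
+    return 255 - edges                          # lines dark, background white
+```
+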
+Run on Comfy Cloud
+
+## Pose detection
+
+Pose detection extracts body keypoints and skeletal structure from images, enabling precise control over human posture and movement.
+
+This workflow focuses on:
+- Clear, readable pose outputs
+- Stable keypoint detection suitable for reuse across frames
+- Compatibility with pose-based ControlNet and animation pipelines
+
+By isolating pose extraction into a dedicated workflow, pose data becomes easier to inspect, refine, and reuse.
+
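+As a sketch of how pose data is typically consumed, assume OpenPose-style keypoints, one (x, y, confidence) triple per joint, plus a bone list pairing joint indices (all names here are illustrative, not a specific node's API):
+
+```python
+import cv2
+import numpy as np
+
+def draw_pose(canvas: np.ndarray, keypoints: list, bones: list, min_conf: float = 0.3) -> np.ndarray:
+    """Render a skeleton, skipping joints below the confidence threshold."""
+    for i, j in bones:                          # bones: [(idx_a, idx_b), ...]
+        (x1, y1, c1), (x2, y2, c2) = keypoints[i], keypoints[j]
+        if c1 >= min_conf and c2 >= min_conf:
+            cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
+    for x, y, c in keypoints:                   # joints as filled circles
+        if c >= min_conf:
+            cv2.circle(canvas, (int(x), int(y)), 3, (0, 0, 255), -1)
+    return canvas
+```
+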
+Run on Comfy Cloud
+
+## Normals extraction
+
+Normals estimation converts a flat image into a surface normal map—a per-pixel direction field that describes how each part of a surface is oriented (typically encoded as RGB). This signal is useful for relighting, material-aware stylization, and highly structured edits.
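+
+The standard RGB encoding maps each unit vector component from [-1, 1] to [0, 255]. A minimal sketch of the round trip (channel-last arrays assumed; note that Y-axis conventions differ between OpenGL- and DirectX-style normal maps):
+
+```python
+import numpy as np
+
+def encode_normals(normals: np.ndarray) -> np.ndarray:
+    """Map unit normals in [-1, 1] to RGB in [0, 255]: rgb = (n + 1) / 2 * 255."""
+    length = np.clip(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-8, None)
+    n = normals / length                        # re-normalize defensively
+    return ((n + 1.0) * 0.5 * 255.0).round().astype(np.uint8)
+
+def decode_normals(rgb: np.ndarray) -> np.ndarray:
+    """Invert the encoding back to unit direction vectors."""
+    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
+    length = np.clip(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8, None)
+    return n / length
+```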
+
+This workflow emphasizes:
+- Clean, stable normal extraction with minimal speckling
+- Consistent orientation and normalization for reliable downstream use
+- ControlNet-ready outputs for relighting, refinement, and structure-preserving edits
+- Reuse across passes so you can iterate without re-running earlier steps
+
+Normal outputs can be used to:
+- Drive relight/shading changes while preserving geometry
+- Add a stronger 3D-like structure to stylization and redraw pipelines
+- Improve consistency across frames when paired with pose/depth for animation work
+
+Run on Comfy Cloud
+
+## Getting started
+
+1. Update ComfyUI to the latest version or use Comfy Cloud.
+2. Download the workflows linked above or find them in Templates on Comfy Cloud.
+3. Follow the pop-up dialogs to download the required models and custom nodes.
+4. Review inputs, adjust settings, and run the workflow.
+
+## Related tutorials
+
+- [ControlNet](/tutorials/controlnet/controlnet) - Use preprocessor outputs with ControlNet
+- [Depth ControlNet](/tutorials/controlnet/depth-controlnet) - Depth-guided image generation
+- [Pose ControlNet](/tutorials/controlnet/pose-controlnet-2-pass) - Pose-guided image generation
diff --git a/zh-CN/tutorials/utility/frame-interpolation.mdx b/zh-CN/tutorials/utility/frame-interpolation.mdx
new file mode 100644
index 00000000..497a7673
--- /dev/null
+++ b/zh-CN/tutorials/utility/frame-interpolation.mdx
@@ -0,0 +1,61 @@
+---
+title: "帧插值工作流"
+description: "学习如何在 ComfyUI 视频工作流中使用帧插值来平滑运动和提高帧率"
+sidebarTitle: "帧插值"
+---
+
+## 什么是帧插值?
+
+帧插值在现有帧之间生成中间帧,从而实现更平滑的运动和改进的时间一致性。这种技术对于视频后处理至关重要,可以显著提高生成视频的质量。
+
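+从概念上讲,最简单的插值器是相邻帧之间的简单交叉淡化。下面的示例用 Python/NumPy 演示了这一思路(函数名为示意用途,并非任何 ComfyUI 节点的 API);实际的插值节点使用 RIFE、FILM 等运动感知模型,以避免简单混合在快速运动中产生的重影。
+
+```python
+import numpy as np
+
+def blend_midframe(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
+    """Linearly blend two uint8 frames; t=0.5 gives the midpoint frame."""
+    a = frame_a.astype(np.float32)
+    b = frame_b.astype(np.float32)
+    return ((1.0 - t) * a + t * b).clip(0, 255).astype(np.uint8)
+
+def interpolate_2x(frames: list) -> list:
+    """Insert one blended frame between each consecutive pair: N frames -> 2N - 1."""
+    out = []
+    for a, b in zip(frames, frames[1:]):
+        out.extend([a, blend_midframe(a, b)])
+    out.append(frames[-1])
+    return out
+```
+
+一次中点插帧会使帧率实际翻倍:16fps 的片段在相同时长下变为 32fps;再执行一次则再次翻倍。
+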
+## 使用场景
+
+帧插值适用于:
+- **提高短片的帧率** - 将低帧率视频转换为更高帧率
+- **平滑生成或编辑视频中的运动** - 减少卡顿和阶梯状伪影
+- **为下游视频模型准备序列** - 标准化帧率以实现一致处理
+- **修复低帧率生成** - 无需重新生成即可平滑低帧率片段
+
+## 为什么帧插值很重要
+
+许多图像和视频生成工作流仍然默认为 16fps,这可能会引入明显的卡顿、阶梯效应和不均匀的运动——尤其是在相机移动和角色动画中。
+
+帧插值是一种有效的方法,可以在不重新生成源帧的情况下平滑这些伪影,使运动感觉更自然,同时保留原始构图和时序。
+
+帧插值不仅可以作为最终后处理步骤,还可以用作预处理步骤——允许你尽早标准化帧率,并将更干净的时间数据输入到更大的动画和视频管道中。
+
+在 Comfy Cloud 上运行
+
+## 开始使用
+
+1. 将 ComfyUI 更新到最新版本或使用 Comfy Cloud。
+2. 下载上面链接的工作流或在 Comfy Cloud 的模板中找到它。
+3. 按照弹出对话框下载所需的模型和自定义节点。
+4. 加载你的视频帧,调整插值设置,然后运行工作流。
+
+## 相关教程
+
+- [万相视频](/zh-CN/tutorials/video/wan/wan2_2) - 视频生成工作流
+- [LTX 视频](/zh-CN/tutorials/video/ltxv) - 使用 LTX 模型的视频生成
diff --git a/zh-CN/tutorials/utility/preprocessors.mdx b/zh-CN/tutorials/utility/preprocessors.mdx
new file mode 100644
index 00000000..fb809945
--- /dev/null
+++ b/zh-CN/tutorials/utility/preprocessors.mdx
@@ -0,0 +1,152 @@
+---
+title: "预处理器工作流"
+description: "学习如何在 ComfyUI 中使用深度估计、线稿转换、姿态检测和法线提取预处理器"
+sidebarTitle: "预处理器"
+---
+
+## 什么是预处理器?
+
+预处理器是从图像中提取结构信息的基础工具。它们将图像转换为条件信号,如深度图、线稿、姿态骨架和表面法线。这些输出可以在 ControlNet、图生图和视频工作流中提供更好的控制和一致性。
+
+将预处理器作为独立工作流使用可以实现:
+- 无需重新运行完整图表即可快速迭代
+- 预处理和生成的清晰分离
+- 更容易调试和调优
+- 更可预测的图像和视频结果
+
+## 深度估计
+
+深度估计将平面图像转换为表示场景中相对距离的深度图。这种结构信号是受控生成、空间感知编辑和重新打光工作流的基础。
+
+此工作流强调:
+- 干净、稳定的深度提取
+- 一致的归一化以供下游使用
+- 与 ControlNet 和图像编辑管道的轻松集成
+
+深度输出可以在多个处理过程中重复使用,使迭代更容易,无需重新运行昂贵的上游步骤。
+
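+一种常见约定是在将深度图交给 ControlNet 或保存节点之前做简单的 min-max 归一化。下面是该约定的最小示例(函数名为示意用途);注意 MiDaS、Depth Anything 等估计器输出的是相对深度,下游所需的具体缩放方式可能因模型而异。
+
+```python
+import numpy as np
+
+def normalize_depth(depth: np.ndarray) -> np.ndarray:
+    """Rescale a raw depth map to [0, 1] so downstream nodes see a stable range."""
+    d = depth.astype(np.float32)
+    d_min, d_max = d.min(), d.max()
+    if d_max - d_min < 1e-8:          # flat map: avoid divide-by-zero
+        return np.zeros_like(d)
+    return (d - d_min) / (d_max - d_min)
+
+def depth_to_uint8(depth01: np.ndarray) -> np.ndarray:
+    """Pack a [0, 1] depth map into an 8-bit grayscale image."""
+    return (depth01 * 255.0).round().astype(np.uint8)
+```
+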
+在 Comfy Cloud 上运行
+
+## 线稿转换
+
+线稿预处理器将图像提炼为其基本边缘和轮廓,去除纹理和颜色,同时保留结构。
+
+此工作流旨在:
+- 生成干净、高对比度的线稿
+- 最小化断裂或噪声边缘
+- 为风格化和重绘工作流提供可靠的结构指导
+
+线稿与深度和姿态配合特别好,提供强大的结构约束而不会过度约束风格。
+
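+学习型线稿模型的结果要干净得多,但一个粗略的近似有助于说明预处理器在提取什么。以下是使用 OpenCV Canny 边缘检测的示意草图(阈值仅作演示):
+
+```python
+import cv2
+import numpy as np
+
+def rough_lineart(image_bgr: np.ndarray, low: int = 100, high: int = 200) -> np.ndarray:
+    """Crude lineart: grayscale -> mild blur -> Canny edges -> invert to dark-on-white."""
+    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
+    gray = cv2.GaussianBlur(gray, (3, 3), 0)   # mild denoise reduces broken, noisy edges
+    edges = cv2.Canny(gray, low, high)
+    return 255 - edges                          # lines dark, background white
+```
+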
+在 Comfy Cloud 上运行
+
+## 姿态检测
+
+姿态检测从图像中提取身体关键点和骨骼结构,实现对人体姿势和动作的精确控制。
+
+此工作流专注于:
+- 清晰、可读的姿态输出
+- 适合跨帧重用的稳定关键点检测
+- 与基于姿态的 ControlNet 和动画管道的兼容性
+
+通过将姿态提取隔离到专用工作流中,姿态数据变得更容易检查、优化和重用。
+
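+作为姿态数据通常如何被使用的示意,假设采用 OpenPose 风格的关键点,即每个关节一个 (x, y, confidence) 三元组,外加一个由关节索引配对组成的骨骼列表(此处所有名称均为示意,并非某个具体节点的 API):
+
+```python
+import cv2
+import numpy as np
+
+def draw_pose(canvas: np.ndarray, keypoints: list, bones: list, min_conf: float = 0.3) -> np.ndarray:
+    """Render a skeleton, skipping joints below the confidence threshold."""
+    for i, j in bones:                          # bones: [(idx_a, idx_b), ...]
+        (x1, y1, c1), (x2, y2, c2) = keypoints[i], keypoints[j]
+        if c1 >= min_conf and c2 >= min_conf:
+            cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
+    for x, y, c in keypoints:                   # joints as filled circles
+        if c >= min_conf:
+            cv2.circle(canvas, (int(x), int(y)), 3, (0, 0, 255), -1)
+    return canvas
+```
+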
+在 Comfy Cloud 上运行
+
+## 法线提取
+
+法线估计将平面图像转换为表面法线图——一个描述表面每个部分朝向的逐像素方向场(通常编码为 RGB)。这种信号对于重新打光、材质感知风格化和高度结构化的编辑非常有用。
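+
+标准的 RGB 编码将每个单位向量分量从 [-1, 1] 映射到 [0, 255]。以下是编码往返的最小示例(假设通道在最后一维;注意 OpenGL 与 DirectX 风格的法线贴图在 Y 轴约定上不同):
+
+```python
+import numpy as np
+
+def encode_normals(normals: np.ndarray) -> np.ndarray:
+    """Map unit normals in [-1, 1] to RGB in [0, 255]: rgb = (n + 1) / 2 * 255."""
+    length = np.clip(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-8, None)
+    n = normals / length                        # re-normalize defensively
+    return ((n + 1.0) * 0.5 * 255.0).round().astype(np.uint8)
+
+def decode_normals(rgb: np.ndarray) -> np.ndarray:
+    """Invert the encoding back to unit direction vectors."""
+    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
+    length = np.clip(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8, None)
+    return n / length
+```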
+
+此工作流强调:
+- 干净、稳定的法线提取,最小化斑点
+- 一致的方向和归一化以供可靠的下游使用
+- ControlNet 就绪的输出,用于重新打光、优化和保持结构的编辑
+- 跨处理过程重用,无需重新运行早期步骤即可迭代
+
+法线输出可用于:
+- 在保持几何形状的同时驱动重新打光/着色变化
+- 为风格化和重绘管道添加更强的 3D 结构
+- 与姿态/深度配合用于动画工作时提高跨帧一致性
+
+在 Comfy Cloud 上运行
+
+## 开始使用
+
+1. 将 ComfyUI 更新到最新版本或使用 Comfy Cloud。
+2. 下载上面链接的工作流或在 Comfy Cloud 的模板中找到它们。
+3. 按照弹出对话框下载所需的模型和自定义节点。
+4. 检查输入,调整设置,然后运行工作流。
+
+## 相关教程
+
+- [ControlNet](/zh-CN/tutorials/controlnet/controlnet) - 将预处理器输出与 ControlNet 配合使用
+- [深度 ControlNet](/zh-CN/tutorials/controlnet/depth-controlnet) - 深度引导的图像生成
+- [姿态 ControlNet](/zh-CN/tutorials/controlnet/pose-controlnet-2-pass) - 姿态引导的图像生成