VTON-HandFit: Virtual Try-on for Arbitrary Hand Pose Guided by Hand Priors Embedding

Yujie Liang1*, Xiaobin Hu2*, Boyuan Jiang2, Donghao Luo2†, Kai Wu2,
Wenhui Han2, Taisong Jin1†, Chengjie Wang2
1Xiamen University, 2Tencent
*Indicates Equal Contribution, †Indicates Corresponding Authors.

Abstract

Although diffusion-based image virtual try-on has made considerable progress, emerging approaches still struggle to effectively handle hand occlusion (i.e., clothing regions occluded by the hand), leading to a notable degradation of try-on performance. To tackle this issue, which is widespread in real-world scenarios, we propose VTON-HandFit, which leverages the power of hand priors to reconstruct the appearance and structure of hand-occluded regions. First, we tailor a Handpose Aggregation Net with a ControlNet-based structure that explicitly and adaptively encodes global hand and pose priors. In addition, to fully exploit hand-related structure and appearance information, we propose a Hand-feature Disentanglement Embedding module that disentangles the hand priors into structure-parametric and visual-appearance features, and customize a masked cross-attention for the decoupled feature embedding. Finally, we design a hand-Canny constraint loss to better learn structural edge knowledge from the hand template of the model image. VTON-HandFit outperforms the baselines in qualitative and quantitative evaluations on a public dataset and our self-collected hand-occlusion HandFit-3K dataset, particularly for arbitrary hand-pose occlusion cases in real-world scenarios.
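The exact formulation of the hand-Canny constraint loss is not given on this page. As a minimal sketch, we assume the Canny edge map of the model image's hand template serves as a fixed structural target, while the generated image contributes a differentiable Sobel edge magnitude so gradients can flow; the function name, thresholds, and masking scheme below are illustrative assumptions.

import cv2
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Differentiable edge magnitude of a (B, 3, H, W) image in [0, 1]."""
    gray = img.mean(1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def hand_canny_loss(generated: torch.Tensor, model_image: torch.Tensor,
                    hand_mask: torch.Tensor, low: int = 100, high: int = 200):
    """L1 between generated-image edges and Canny edges of the hand template.

    generated, model_image: (B, 3, H, W) tensors in [0, 1].
    hand_mask: (B, 1, H, W) binary mask of the hand region.
    """
    # Fixed Canny target from the model image (non-differentiable, used as a label).
    targets = []
    for img in model_image:
        gray = (img.mean(0) * 255).clamp(0, 255).byte().cpu().numpy()
        targets.append(torch.from_numpy(cv2.Canny(gray, low, high)).float() / 255.0)
    target = torch.stack(targets).unsqueeze(1).to(generated.device)

    # Penalize edge discrepancies only inside the hand-template region.
    pred = sobel_edges(generated)
    diff = torch.abs(pred - target) * hand_mask
    return diff.sum() / hand_mask.sum().clamp(min=1.0)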

Method


An overview of our VTON-HandFit network. The network consists of two main components: the Hand-feature Disentanglement Embedding and the Handpose Aggregation Net. The Hand-feature Disentanglement Embedding module uses the HaMeR model to extract hand priors, including the hand type \( T_h \), 3D vertices \( V_h \), spatial joint locations \( J_{2d} \), and joint rotation matrices \( \theta_h \). These priors are processed by the Hand-Struct processor to derive structural features \( c_{struc} \). Simultaneously, hand images cropped with bounding boxes are processed by DINOv2 and the Hand-Appear processor to obtain visual features \( c_{appear} \). The structural and visual features are integrated via masked cross-attention. The Handpose Aggregation Net controls body and hand poses by aggregating DWPose, DensePose, and hand-depth maps.
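Below is a minimal sketch of how the masked cross-attention could inject the decoupled hand features. The assumed behavior (not confirmed by this page) is that spatial denoising-UNet tokens attend to the concatenated structure tokens \( c_{struc} \) and appearance tokens \( c_{appear} \), and the attended update is applied only inside the down-sampled hand mask; all dimensions and names are illustrative.

import torch
import torch.nn as nn

class MaskedCrossAttention(nn.Module):
    def __init__(self, dim: int, ctx_dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, kdim=ctx_dim,
                                          vdim=ctx_dim, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, c_struc, c_appear, hand_mask):
        """
        x:         (B, N, dim)      spatial tokens from the denoising UNet
        c_struc:   (B, S, ctx_dim)  hand structure-parametric tokens
        c_appear:  (B, A, ctx_dim)  hand visual-appearance tokens (DINOv2-based)
        hand_mask: (B, N, 1)        binary hand mask at the token resolution
        """
        ctx = torch.cat([c_struc, c_appear], dim=1)  # decoupled hand context
        out, _ = self.attn(self.norm(x), ctx, ctx)
        # Residual update restricted to the hand region only.
        return x + hand_mask * out

# Example with hypothetical shapes: a 64x48 latent grid and 778 mesh-vertex tokens.
x = torch.randn(2, 64 * 48, 320)
c_struc, c_appear = torch.randn(2, 778, 768), torch.randn(2, 257, 768)
mask = (torch.rand(2, 64 * 48, 1) > 0.9).float()
print(MaskedCrossAttention(320, 768)(x, c_struc, c_appear, mask).shape)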

Quantitative Results


HandFit-3K Dataset


BibTeX

@article{liang2024vton,
  title={VTON-HandFit: Virtual Try-on for Arbitrary Hand Pose Guided by Hand Priors Embedding},
  author={Liang, Yujie and Hu, Xiaobin and Jiang, Boyuan and Luo, Donghao and Wu, Kai and Han, Wenhui and Jin, Taisong and Wang, Chengjie},
  journal={arXiv preprint arXiv:2408.12340},
  year={2024}
}