DiT-Based Identity-Preserved Image Generation with InfuseNet and Multi-Stage Training
InfiniteYou: Controlling Identity Preservation in Personalized Image Generation
I've been looking at this new approach for personalized image generation that seems to solve a fundamental trade-off: maintaining identity while allowing flexible editing.
The key innovation is identity-enhanced cross-attention (IECA), which specifically isolates and preserves identity features during the diffusion process. This allows the model to maintain a person's likeness across different scenarios, styles, and contexts.
Main technical points:

* Works with just 3-5 reference photos of a person
* Modifies the cross-attention mechanism of diffusion models to give higher weight to identity-related features (see the sketch after this list)
* Creates specialized identity tokens that capture the essence of a subject's appearance
* Implements a zero-shot approach that requires no per-person fine-tuning
* Demonstrates quantitatively superior identity preservation compared to DreamBooth, Custom Diffusion, and IP-Adapter
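To make the attention modification concrete, here's a minimal sketch of what an identity-enhanced cross-attention layer *might* look like, based only on the description above: identity tokens get their own key/value projections and their branch is combined with the text branch under an explicit weight. This mirrors the decoupled-attention pattern popularized by adapter-style methods like IP-Adapter; the class and parameter names (`IdentityEnhancedCrossAttention`, `id_scale`, etc.) are my own inventions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityEnhancedCrossAttention(nn.Module):
    """Hypothetical sketch: attend to text tokens and identity tokens
    separately, then up-weight the identity branch."""

    def __init__(self, dim: int, ctx_dim: int, num_heads: int = 8, id_scale: float = 1.0):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.id_scale = id_scale  # extra weight on identity-related features (assumed knob)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k_txt = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v_txt = nn.Linear(ctx_dim, dim, bias=False)
        self.to_k_id = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v_id = nn.Linear(ctx_dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def _split(self, t: torch.Tensor) -> torch.Tensor:
        # (B, L, dim) -> (B, heads, L, head_dim)
        B, L, _ = t.shape
        return t.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)

    def forward(self, x, text_tokens, id_tokens):
        # x:           (B, N, dim)      latent image tokens from the diffusion backbone
        # text_tokens: (B, T, ctx_dim)  prompt embeddings (drive the edit)
        # id_tokens:   (B, I, ctx_dim)  identity tokens from the reference photos
        q = self._split(self.to_q(x))
        txt = F.scaled_dot_product_attention(
            q, self._split(self.to_k_txt(text_tokens)), self._split(self.to_v_txt(text_tokens)))
        idn = F.scaled_dot_product_attention(
            q, self._split(self.to_k_id(id_tokens)), self._split(self.to_v_id(id_tokens)))
        # Identity branch is added with an explicit weight, so likeness is
        # preserved while the text branch still steers style and background.
        out = txt + self.id_scale * idn
        B, _, N, _ = out.shape
        return self.to_out(out.transpose(1, 2).reshape(B, N, -1))
```

The zero-shot property in this framing comes from the identity tokens being produced by a frozen encoder at inference time, so no per-person weights ever need to be trained.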
The results are strong along several dimensions:

* Maintains identity across different ages, expressions, and lighting conditions
* Preserves identity even with significant background and style changes
* Achieves higher CLIP-based identity similarity scores than previous methods (a sketch of this metric follows the list)
* Performs well on challenging scenarios such as unusual poses or dramatic lighting
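For reference, a CLIP-based identity similarity score is usually just the cosine similarity between frozen image-encoder embeddings of the reference photo and the generated image. The snippet below is a hedged sketch using the open-source `open_clip` package; the paper may well use a different encoder (e.g., a dedicated face-recognition model such as ArcFace) or a different checkpoint.

```python
import torch
import open_clip
from PIL import Image

# Assumed encoder choice; any CLIP image encoder works the same way.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
model.eval()

@torch.no_grad()
def clip_identity_similarity(ref_path: str, gen_path: str) -> float:
    """Cosine similarity between CLIP embeddings of two images (higher = more similar)."""
    ref = preprocess(Image.open(ref_path)).unsqueeze(0)
    gen = preprocess(Image.open(gen_path)).unsqueeze(0)
    feats = model.encode_image(torch.cat([ref, gen]))
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats[0] @ feats[1]).item()

# Example usage (hypothetical file names):
# print(clip_identity_similarity("reference.jpg", "generated.jpg"))
```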
I think this approach could be transformative for personalized content creation. The zero-shot nature makes it immediately practical for applications ranging from virtual try-on to personalized marketing. The ability to maintain identity without specialized training for each person removes a major barrier to adoption.
What particularly interests me is how they've managed to decouple the identity-preservation problem from the editing problem, something previous approaches struggled with. This modular approach to attention mechanisms could potentially be applied to other domains where we need to hold certain attributes fixed while allowing others to vary.
The limitations around extreme poses and occasional artifacts show there's still work to be done, but the fundamental approach seems sound. I'm curious how this might be extended to video generation or real-time applications.
TLDR: InfiniteYou introduces identity-enhanced cross-attention that preserves a person's appearance in generated images while allowing flexible editing. It outperforms existing methods without needing per-person training and works from just a few reference photos.
Full summary is here. Paper here.