Nvidia Research Introduces DiffUHaul, an AI Tool That Allows Object Relocation in Images


Nvidia researchers introduced a new artificial intelligence (AI) model Monday that can relocate objects in an image. Dubbed DiffUHaul, the tool can spatially understand the context of an image and move an object from one place to another without altering the background or the shape of the object itself. The unique aspect of this technique is that it is training-free, meaning it works on top of existing pre-trained diffusion models and requires no additional training of its own. The new technology was showcased by the company at the Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) Asia 2024 conference.

In a research paper, Nvidia researchers detailed the new AI tool. The technology was developed in collaboration with The Hebrew University of Jerusalem, Tel Aviv University, and Reichman University. With it, the researchers aimed to solve a prominent limitation of AI image generation models: relocating objects within an image in a spatially aware way.

The paper highlights that this particular editing task has remained a bottleneck for AI scientists because image models lack spatial reasoning. Existing visual models can understand the context of an image, but they cannot move objects within it, as they do not understand how a shift across the 2D frame should appear once rendered in the scene.

With DiffUHaul, Nvidia claims this issue can be solved. Built on an image diffusion architecture, the tool applies attention masking during the denoising steps to preserve the object's high-level appearance. It builds on BlobGEN, a localised text-to-image model that grounds generation in spatial blob layouts. Further techniques were introduced to reconstruct real images so that the localised model can render the object in its designated place.
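To give a rough sense of what attention masking during denoising means in practice, the sketch below confines self-attention to designated spatial regions. This is not Nvidia's DiffUHaul code; the token layout, region indices and mask construction are hypothetical simplifications for illustration only.

```python
# Minimal sketch of attention masking at one denoising step. NOT Nvidia's
# DiffUHaul implementation: the token layout, the region indices and the mask
# construction are hypothetical and only illustrate the idea of restricting
# self-attention to designated spatial regions.

import torch
import torch.nn.functional as F


def masked_self_attention(q, k, v, attn_mask):
    # q, k, v: (num_tokens, dim); attn_mask: (num_tokens, num_tokens), 1 = allowed
    scores = (q @ k.T) / (q.shape[-1] ** 0.5)
    scores = scores.masked_fill(attn_mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v


# Toy setup: 16 image tokens with 8-dimensional features at one denoising step.
num_tokens, dim = 16, 8
q, k, v = (torch.randn(num_tokens, dim) for _ in range(3))

# Hypothetical regions: tokens 0-3 are the object's source location,
# tokens 8-11 its target location. Object tokens attend only within the
# source/target regions; all other tokens attend everywhere.
attn_mask = torch.ones(num_tokens, num_tokens)
object_region = list(range(0, 4)) + list(range(8, 12))
for i in object_region:
    attn_mask[i] = 0
    attn_mask[i, object_region] = 1

out = masked_self_attention(q, k, v, attn_mask)
print(out.shape)  # torch.Size([16, 8])
```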

On the front end, users will be able to type a text prompt identifying the object they want moved, and the AI repositions it while adjusting the background accordingly. From the demonstrations shared by the company, it is unclear whether the editing tool understands the appearance changes that come with spatial movement. For instance, if an airborne balloon is moved to the ground, its shape and perspective would also change, and the AI might not capture that, given the absence of task-specific training.
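For a sense of how such a front end might accept inputs, here is a hypothetical interface sketch. The function and parameter names below are invented for illustration and do not come from Nvidia; they only reflect the inputs described above: an image, a text prompt identifying the object, and a destination.

```python
# Hypothetical interface sketch; not an actual DiffUHaul API. The class,
# function and parameter names are invented to illustrate the inputs a
# user-facing object-relocation tool would need.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class RelocationRequest:
    image_path: str                        # source image to edit
    object_prompt: str                     # text identifying the object, e.g. "the red balloon"
    target_position: Tuple[float, float]   # normalised (x, y) centre of the destination


def move_object(request: RelocationRequest) -> str:
    """Placeholder for an object-relocation backend: it would locate the
    prompted object, move it to the target position, adjust the background,
    and return the path of the edited image."""
    raise NotImplementedError("No backend model is included in this sketch.")


# Example of the kind of request a user's prompt would translate into:
request = RelocationRequest(
    image_path="beach.png",
    object_prompt="the red balloon",
    target_position=(0.5, 0.8),
)
```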


