Intel Labs creates 360-degree images using AI
Intel Labs has introduced a new AI diffusion model that generates realistic 3D visual content from text prompts.
The Latent Diffusion Model for 3D (LDM3D) creates vivid, immersive 360-degree images, complete with depth maps, from a given text prompt, using almost the same number of parameters as latent Stable Diffusion uses to create 2D images.
LDM3D was trained on a dataset constructed from a subset of 10,000 samples of the LAION-400M database. The research team used Intel Labs' Dense Prediction Transformer (DPT) large depth-estimation model to provide relative depth for each pixel of a generated image.
The combined output was used to create DepthFusion, an application that leverages standard 2D RGB photos and depth maps to build 360-degree visual experiences.
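DepthFusion's internals are not described in the article, but the core idea of pairing an RGB image with a per-pixel depth map is that depth lets you synthesize nearby viewpoints: nearer pixels shift more than distant ones when the virtual camera moves. The sketch below is a deliberately naive illustration of that principle (not Intel's implementation); `reproject` and its `baseline` parameter are hypothetical names for this example.

```python
import numpy as np

def reproject(rgb, depth, baseline=5.0):
    """Naive depth-image-based rendering: shift each pixel horizontally
    by a disparity inversely proportional to its depth, simulating a
    small sideways camera translation. Illustrative only, not DepthFusion."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    # Disparity in pixels: nearer pixels (small depth) move further.
    disparity = (baseline / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = rgb[y, x]
    return out
```

A real renderer would reproject through the camera intrinsics and fill the holes this leaves behind occluded regions, but the inverse-depth disparity relationship is the same one that makes an accurate depth map valuable for immersive scenes.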
The model was trained on an Intel AI supercomputer powered by Intel Xeon processors and Intel Habana Gaudi AI accelerators.
LDM3D was recently awarded the Best Poster Award at the 3DMV workshop at this year's Conference on Computer Vision and Pattern Recognition (CVPR) in Vancouver, Canada.
“Unlike existing latent stable diffusion models, LDM3D allows users to generate an image and a depth map from a given text prompt using almost the same number of parameters,” Intel Labs AI/ML Research Scientist Vasudev Lal commented.
“It provides more accurate relative depth for each pixel in an image compared to standard post-processing methods for depth estimation and saves developers significant time when developing scenes.”