How to make 3D content available on a massive scale – the biggest challenge in AR

“Why don’t we see 3D content available in everyday applications?”

That was the thought that popped into my head three years ago, when I was finishing my PhD and working as a Researcher at the Fraunhofer Institute for Computer Graphics Research. At the time, I observed that, apart from video games and sporadic VR initiatives hailed as the ‘next big thing’, 3D visualisations were not common in daily online life. I concluded that the technology was not being exploited to its full potential, despite the many possibilities it offered.

There were a few reasons why this was the case pre-2017. Firstly, hardware and software capabilities did not allow for real-time rendering with a high degree of realism. Secondly, powerful 3D graphics processors were only available as dedicated hardware for gaming PCs or consoles, and VR hardware was simply not affordable for non-professionals.

Interactive 3D visualisations also required dedicated, system-dependent software which was usually not pre-installed on everyday machines. Last but not least, since the technology was rather unusual in everyday life, most users found 3D assets difficult to manipulate.

Fast-forward to 2017 and all of those limitations had been overcome, with the technical requirements for photorealistic 3D content fulfilled. We just had to wait and see what the first everyday “killer app” would be.

Today, I see 3D technology spanning a number of different and, if I may say so, great use cases.

Just take a look at IKEA’s “Place” app. It uses Augmented Reality enabled by 3D technology to allow customers to virtually place true-to-scale furniture models in their own homes. Users can furnish a whole room in just one tap, and check out with another.

More recently, Kanye West announced a new website for his fashion label, Yeezy, which is heavily dependent on 3D technology. The groundbreaking platform features a 3D model that walks and talks on screen in the outfit chosen by the user, blurring the lines between e-commerce and video games.

The possibilities are not limited to online retail 

3D visualisations can be used to create digital catalogues of museums’ collections and expand the reach of that knowledge by making content available globally rather than just on-site; serve as repair manuals for after-sales support in industrial applications; or even assist medical evaluation by providing clearer pictures of blood vessels, organs and bones.

One of the catalysts for the increased incorporation of 3D technology in everyday applications was the advent of WebGL, a JavaScript API for the real-time rendering of interactive 3D graphics within any browser, without additional plug-ins. Suddenly, rotating an object on your phone or desktop screen has gone from something odd to an intuitive action. 
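To give a sense of how little code this takes today, here is a minimal sketch of an in-browser viewer built with three.js, a popular open-source library layered on top of WebGL. The model file name is a placeholder, and the exact setup will of course vary by project.

```typescript
// Minimal in-browser 3D viewer built on WebGL via three.js.
// "product.glb" is a placeholder for any glTF asset.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1, 3);

// Drag to rotate, scroll to zoom: the "intuitive action" mentioned above.
const controls = new OrbitControls(camera, renderer.domElement);

new GLTFLoader().load('product.glb', (gltf) => scene.add(gltf.scene));

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});
```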

There is, however, still a large gap between what is technically possible and what has been adopted by real-world applications. That’s mainly because, compared to the creation of 2D imagery such as photos or videos, creating 3D data still requires a lot of manual work and dedicated expertise. 

It goes without saying that the use of three-dimensional computer-generated imagery – especially for commercial purposes – requires a high level of quality and visual appeal. But creating realistic 3D assets at a large scale remains challenging, especially when it comes to reducing and compressing large files without a notable loss in visual quality. In other words, 3D content workflows need to be standardised so that the 3D files required by real-world use cases can actually be created in a scalable manner.

The tech behind the 3D revolution

The first technology that will help to bring about this revolution is 3D scanning. This now requires only a relatively small device which can quickly scan products such as shoes and produce 3D files. At the moment, 3D scanners need to be manually operated, but in the future I believe this process will take place on automated production lines capable of producing large numbers of 3D files at speed.

Computer vision and machine learning can also be used to automatically generate digital three-dimensional models for computer-aided design. By extracting lines, edges and measurements from imagery and applying algorithms, 3D models with detailed dimensions can be created for integration with other applications. 
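As a toy illustration of the image-analysis side of this, the sketch below computes edge strength for a 2D image in the browser using a Sobel filter; reconstructing measurements and full 3D geometry from such features involves far more machinery, so treat this as the very first step of such a pipeline only.

```typescript
// Sobel edge extraction on a canvas ImageData object: a small illustration of
// the 2D feature-extraction step that image-based 3D reconstruction builds on.
function sobelEdges(img: ImageData): Float32Array {
  const { width: w, height: h, data } = img;

  // Convert RGBA pixels to a single grey channel.
  const grey = new Float32Array(w * h);
  for (let i = 0; i < w * h; i++) {
    grey[i] = 0.299 * data[4 * i] + 0.587 * data[4 * i + 1] + 0.114 * data[4 * i + 2];
  }

  const edges = new Float32Array(w * h);
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const p = (dx: number, dy: number) => grey[(y + dy) * w + (x + dx)];
      // Horizontal and vertical Sobel responses.
      const gx = -p(-1, -1) - 2 * p(-1, 0) - p(-1, 1) + p(1, -1) + 2 * p(1, 0) + p(1, 1);
      const gy = -p(-1, -1) - 2 * p(0, -1) - p(1, -1) + p(-1, 1) + 2 * p(0, 1) + p(1, 1);
      edges[y * w + x] = Math.sqrt(gx * gx + gy * gy); // edge strength per pixel
    }
  }
  return edges;
}
```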

But what is most needed is a fast, reliable way of making 3D files small enough to be viewed on every platform. We need software that automatically produces optimised 3D content without laborious manual work: software that can process thousands of 3D scanned data sets at once on a single PC, preparing assets for visualisation by drastically reducing the amount of data without a visible impact on visual quality.
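As a rough sketch of what such an automated step can look like, the snippet below simplifies a glTF asset with the open-source glTF-Transform and meshoptimizer libraries (not the tooling discussed in this article); the simplification ratio and error threshold are illustrative assumptions that a real pipeline would tune per asset class.

```typescript
// Batch-friendly mesh simplification of glTF assets with glTF-Transform.
// The ratio and error values below are illustrative assumptions only.
import { NodeIO } from '@gltf-transform/core';
import { dedup, weld, simplify, prune } from '@gltf-transform/functions';
import { MeshoptSimplifier } from 'meshoptimizer';

async function optimise(inputPath: string, outputPath: string): Promise<void> {
  const io = new NodeIO();
  const doc = await io.read(inputPath);

  await doc.transform(
    dedup(),  // merge duplicate meshes, materials and textures
    weld(),   // merge identical vertices so simplification behaves well
    simplify({ simplifier: MeshoptSimplifier, ratio: 0.25, error: 0.001 }),
    prune(),  // drop unused resources left behind
  );

  await io.write(outputPath, doc);
}

// Looped over a folder of scans, e.g. optimise('scan_0001.glb', 'scan_0001.web.glb'),
// this is the kind of unattended processing the paragraph above describes.
```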

If you think about it, reducing the resolution of an image and saving it as a JPEG is an absolute no-brainer. Processing 3D assets should be similarly easy and accessible to everyone. We’re used to seeing 3D content in games and movies, but I predict it will become as much a part of our day-to-day lives as pictures are right now.


About the Author

Max Limper is CEO and co-founder of DGG, the creators of RapidCompact, software that frees up creative teams’ time by automating the optimisation of high-quality, compact 3D models ready for web, mobile, AR and XR platforms.

Featured image: ©DragonImages