Nefertari’s tomb is hailed as one of the finest in all of Egypt. And now visitors can explore it in exquisite detail without hopping on a transoceanic flight.
Nefertari was known as the most beautiful of the many wives of Ramses II, a pharaoh renowned for his colossal monuments. The tomb he built for his favorite queen is a shrine to her beauty: every centimeter of the walls in the tomb’s three chambers and connecting corridors is adorned with colorful scenes.
Like most of the tombs in the Valley of the Queens, this one had been plundered by the time archaeologists discovered it in 1904. And while preservation efforts have been made, the site remains extremely fragile, not to mention remote for most of the world’s population.
Simon Che de Boer and his New Zealand-based VFX R&D company, realityvirtual.co, have found a way to digitally preserve Nefertari’s tomb and give countless individuals the chance to see inside it.
Nefertari: A Journey to Eternity is a VR experience that uses high-end photogrammetry, visual effects techniques, and AI to create an amazingly detailed experience that returns Queen Nefertari’s tomb to its original glory. Visitors can digitally walk around, view the scene from different angles, and zoom in for a closer look.
It’s an amazingly realistic substitute for those who might otherwise have to travel to the other side of the Earth to experience it.
Powerful Data Crunching with NVIDIA Quadro GPUs
To replicate the tomb’s elaborate details, Che de Boer captured nearly 4,000 42-megapixel photographs of the site, then combined photogrammetry (the science of making measurements from photographs) with deep learning methods for processing and visualization.
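At the heart of photogrammetry is recovering a 3D point from its projections in two or more photographs taken from known camera positions. The sketch below shows the textbook linear (DLT) triangulation method; it is a generic illustration, not realityvirtual.co’s pipeline, and the camera matrices and point are made up for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two images with known 3x4 camera matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular vector)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras, then recover it
X_true = np.array([0.2, 0.1, 5.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]
X_rec = triangulate(P1, P2, x1, x2)  # ≈ [0.2, 0.1, 5.0]
```

Real photogrammetry packages solve this jointly for millions of points and thousands of cameras, with bundle adjustment refining everything at once, which is where GPU acceleration pays off.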
NVIDIA GPUs played a critical role in processing the many hours of photogrammetric data collected onsite, crunching it many times faster than would be possible on CPUs.
GPUs were also integral to performing 3D reconstruction and presenting detailed textures. Working on powerful HP workstations equipped with high-end NVIDIA Quadro GPUs, realityvirtual.co converted the data into a dense 24-billion-point 3D point cloud, using RealityCapture for the initial reconstruction and Autodesk MeshMixer and Maya for initial clean-up. The team then ran an in-house, proprietary pipeline to refine and optimize the model: filling holes, extrapolating material characteristics, removing noise, and cleaning up artifacts.
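The noise-removal step has a classical counterpart worth sketching: statistical outlier removal, which drops points whose average distance to their nearest neighbors is unusually large. This is a generic technique, not realityvirtual.co’s proprietary pipeline, and the brute-force distance matrix below is only viable for small demos; production tools use k-d trees or GPU kernels to scale to billions of points.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: keep points whose mean distance to
    their k nearest neighbours is within std_ratio standard deviations
    of the cloud-wide average."""
    # Full pairwise distance matrix -- demo-scale only
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))            # dense cluster of scan points
noise = rng.normal(size=(5, 3)) * 20 + 50    # stray noise far from the surface
cleaned = remove_outliers(np.vstack([cloud, noise]))
```

The same idea, applied per-region with adaptive thresholds, is what keeps scanner speckle out of the final mesh.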
These very large datasets were then optimized for real-time rendering in Unreal Engine at a stable 90 frames per second, retaining all 24 billion points of detail via texture streaming from Granite. The experience features full dynamic lighting, volumetric fog, reflections, effects, and 3D spatial audio.
“With these large datasets, speed of processing and playback is key,” said Che de Boer. “NVIDIA’s new architecture combined with Unreal Engine adds a level of speed and power that’s unbeatable with this enormous amount of data.”
AI: Creating More Realistic VR
No visit to an ancient tomb would be believable with signs of recent modernization in view. To remove them, realityvirtual.co took all of the data captured at the location and used the programmable Tensor Cores and 24GB of VRAM on a single high-end NVIDIA Quadro GPU to train their super-sampling network.
Once the network learned to understand what it was looking at, it could modify each image to show how the scene would appear with modern artifacts removed. For instance, exit signs, plaques, handrails, floorboards, and halogen lighting were painted out via in-painting methods and replaced with contextually aware content drawn from the spaces around them.
To cover gaps in images, remove unwanted elements, or fix overlap areas in the source photogrammetry images, realityvirtual.co infilled these areas with elements from the surrounding environment, leveraging a new AI-based image inpainting method developed by NVIDIA Research and coming soon to software developers through the NVIDIA NGX technology stack. (Learn more about AI InPainting.)
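NVIDIA’s AI InPainting is a learned method, but the underlying idea of filling a masked region with plausible content from its surroundings also has a simple classical form: diffusion inpainting, where values from the known boundary propagate into the hole. The toy example below is that classical stand-in, not the NGX method.

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Classical diffusion inpainting: iteratively replace masked pixels
    with the mean of their four neighbours so that values from the known
    surroundings propagate inward."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()              # crude initial guess
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]                  # update only the masked pixels
    return out

# Toy wall: a smooth horizontal gradient with a bright "exit sign" on it
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True                      # region occupied by the sign
img[mask] = 1.0                                # the modern artifact
restored = diffuse_inpaint(img, mask)          # gradient continues through it
```

Diffusion can only produce smooth fills; the appeal of the learned approach is that it can hallucinate texture and structure, such as continuing a painted hieroglyphic pattern across the hole.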
“Without the kind of memory the high-end NVIDIA Quadro provides, processing the data from our 42-megapixel images would not have been possible,” said Che de Boer. “We use NVIDIA CUDA and cuDNN extensively in both our photogrammetry and AI processes and throughout all aspects of our creation pipeline to achieve the most realism. It looks absolutely amazing. You get a real sense of being there, and it’s only going to get better once we integrate NVIDIA RTX real-time ray tracing into our future releases.”
More recent in-house releases of the “Tomb” have been run through realityvirtual.co’s own super-sampling methods, which train the super-sampler on their own datasets and add another level of detail to the final texture maps.
At that point, a viewer can’t distinguish individual pixels no matter how close they get to the tomb’s artifacts. In addition, more recent projects now use realityvirtual.co’s deepPBR methods to extrapolate contextually aware normal, de-lit diffuse, roughness, and displacement maps. These are invaluable for working with physically based rendering engines such as Unreal Engine.
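The deepPBR extraction itself isn’t public, but the classical relationship between two of those maps is easy to sketch: a normal map can be derived from a displacement (height) map with finite-difference gradients. This is the standard non-learned technique, shown here for illustration only.

```python
import numpy as np

def height_to_normals(height, strength=1.0):
    """Derive a tangent-space normal map from a height (displacement)
    map via finite-difference gradients, remapped to [0, 1] texture range."""
    gy, gx = np.gradient(height.astype(float))     # slopes in y and x
    n = np.stack([-gx * strength,
                  -gy * strength,
                  np.ones_like(height, dtype=float)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)  # unit-length normals
    return 0.5 * (n + 1.0)                          # [-1, 1] -> [0, 1]

# Toy relief: a raised bump in the middle of a flat surface
y, x = np.mgrid[0:64, 0:64]
height = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 100.0)
normals = height_to_normals(height, strength=8.0)
```

A learned model effectively runs this in reverse and under unknown lighting, inferring plausible height, roughness, and un-lit color from photographs alone.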
These models were trained on the project’s own data, a great example of AI using its own data to improve itself. The result is an educational simulation that’s available for free on the Steam gaming platform and requires a Vive, Rift, or Windows Mixed Reality headset.
To continue documenting heritage sites and digitally preserving them for years to come, Che de Boer recently formed a strategic relationship with Professor Sarah Kenderdine of EPFL, a prestigious research university in Lausanne, Switzerland. Together they’re looking to virtually re-create New Zealand’s Christchurch Cathedral as it existed before it was damaged by a 2011 earthquake, as well as other, as-yet-undisclosed locations of similar prestige.
“These are locations that everyone knows about but only a few get to access,” said Che de Boer. “Our goal is to make these sites accessible to people around the world who wouldn’t otherwise get an opportunity to experience them in their lifetime.”