Collaborating remotely on physical objects has always been a challenge, but SharedNeRF, a new remote conferencing system, aims to change that. SharedNeRF lets a remote user freely manipulate a 3D view of a scene, making it easier to assist with complex physical tasks such as debugging intricate hardware. The system combines two different graphics rendering techniques to provide both photorealistic visuals and instantaneous feedback, substantially improving the remote collaboration experience.
Developed by Mose Sakashita, a doctoral student in information science, SharedNeRF has been described as a potential paradigm shift in remote collaboration tools. Sakashita built the system during a 2023 internship at Microsoft, working with Andrew Wilson ’93, who majored in computer science at Cornell. The work will be presented at the Association for Computing Machinery conference on Human Factors in Computing Systems (CHI ’24), where it has already received an honorable mention. According to Wilson, traditional video conferencing systems fall short for tasks involving physical objects, which is what makes SharedNeRF a game-changer in the field.
SharedNeRF renders the scene in 3D using a neural radiance field (NeRF), a technique that leverages artificial intelligence to construct realistic depictions with reflections, transparency, and textures. The system captures the scene with a combination of cameras, including a head-mounted camera and an RGB-D camera that records both color and depth. This data is fed into a NeRF deep learning model, which generates a 3D representation of the scene for remote collaborators to explore. By combining the detail-rich visuals of NeRF with faster point cloud rendering, SharedNeRF offers users an experience that is both dynamic and immersive.
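The trade-off described above can be illustrated with a minimal sketch. This is not the authors' actual implementation; the function names, timings, and threading scheme are all hypothetical stand-ins. The idea: when the remote user moves the viewpoint, return a fast point-cloud frame immediately for responsiveness, kick off the slower NeRF render in the background, and swap in the photorealistic result once it is ready.

```python
import threading
import time

def render_point_cloud(viewpoint):
    # Stand-in for fast point-cloud rasterization (near-instant).
    return f"point-cloud frame @ {viewpoint}"

def render_nerf(viewpoint):
    # Stand-in for slow, photorealistic NeRF rendering.
    time.sleep(0.05)
    return f"NeRF frame @ {viewpoint}"

class HybridRenderer:
    """Serve the best frame currently available for a viewpoint."""

    def __init__(self):
        self._lock = threading.Lock()
        self._nerf_cache = {}   # viewpoint -> finished NeRF frame
        self._pending = set()   # viewpoints with a NeRF render in flight

    def frame(self, viewpoint):
        with self._lock:
            if viewpoint in self._nerf_cache:
                # Photorealistic frame is ready; use it.
                return self._nerf_cache[viewpoint]
            if viewpoint not in self._pending:
                # Start the slow NeRF render in the background.
                self._pending.add(viewpoint)
                threading.Thread(
                    target=self._render_nerf_async,
                    args=(viewpoint,),
                    daemon=True,
                ).start()
        # Instant feedback while the NeRF render catches up.
        return render_point_cloud(viewpoint)

    def _render_nerf_async(self, viewpoint):
        frame = render_nerf(viewpoint)
        with self._lock:
            self._nerf_cache[viewpoint] = frame
            self._pending.discard(viewpoint)

renderer = HybridRenderer()
first = renderer.frame("front")   # immediate point-cloud frame
time.sleep(0.2)                   # give the NeRF render time to finish
second = renderer.frame("front")  # photorealistic frame once ready
```

In this sketch the point cloud gives instantaneous feedback while the viewpoint is moving, and the NeRF frame replaces it as soon as the background render completes, mirroring the two-renderer design described above.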
In a study involving seven volunteers who tested SharedNeRF by collaborating on a flower-arranging project, the system received positive feedback. Users found that SharedNeRF allowed them to have better control over the scene, enabling them to independently change viewpoints and zoom in on details. The inclusion of an avatar representing the local collaborator’s head further enhanced the immersive experience, providing remote users with insight into the other person’s perspective. The majority of volunteers preferred SharedNeRF over traditional video conferencing tools or point cloud rendering alone, highlighting the system’s potential for enhancing remote collaboration.
Although currently designed for one-on-one collaboration, SharedNeRF has the potential to evolve into a platform that supports multiple users simultaneously. Future enhancements may focus on improving image quality and exploring virtual reality or augmented reality integration to offer a more immersive experience. Sakashita and the team of researchers behind SharedNeRF are dedicated to pushing the boundaries of remote collaboration technology, with ongoing work aimed at refining the system for broader applications and enhanced user experiences.
Overall, SharedNeRF represents a significant advance in remote collaboration, offering a glimpse of more interactive and dynamic virtual teamwork. With its innovative use of NeRF rendering and its focus on user control and immersion, SharedNeRF is poised to redefine how people work together across distances.