Using Mobile VR to Assess Claustrophobia During an MRI

New Methods for Exposure Therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Recognizing that many patients undergoing MRI scans experience claustrophobia, they wanted to use a VR simulation to introduce prospective patients to what being inside an MRI machine might feel like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While 360 videos exist that convey what an MRI might look like, they fail to address the major factor contributing to claustrophobia: the perceived confinement of the bore. 360 video tends to skew the environment, making it seem farther away than it is and so failing to induce the feeling of enclosure that a real MRI bore produces. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

To simulate the experience of an MRI accurately, a CGI MRI machine was constructed and imported into the Unity game engine. A customizable avatar representing the viewer's body was also added to give viewers a sense of self. Wearing a VR headset, viewers see their avatar body and the true proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said over the course of a real MRI, and the intimidating boom of the machine plays as the simulated scan proceeds.
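Under the hood, the experience amounts to a scripted timeline: verbal cues and scanner sounds fire at set moments as the virtual table carries the avatar into the bore. A minimal Python sketch of that sequencing idea is shown below; the timings, file names, and play() call are hypothetical stand-ins, not details of the actual Unity app.

```python
import time

# Hypothetical timeline for the simulated scan: (seconds from start, audio cue).
# Timings and file names are illustrative only.
SCAN_TIMELINE = [
    (0,   "instructions_hold_still.ogg"),    # technologist's verbal instructions
    (10,  "table_motor.ogg"),                # table carries the avatar into the bore
    (35,  "instructions_scan_starting.ogg"),
    (40,  "scanner_boom_loop.ogg"),          # the intimidating knocking of the scan
    (160, "instructions_scan_complete.ogg"),
]

def play(clip: str) -> None:
    """Stand-in for the engine's audio playback call."""
    print(f"playing {clip}")

def run_scan_sequence(timeline) -> None:
    start = time.monotonic()
    for offset, clip in timeline:
        # Wait until the next cue is due, then fire it.
        time.sleep(max(0.0, offset - (time.monotonic() - start)))
        play(clip)

if __name__ == "__main__":
    run_scan_sequence(SCAN_TIMELINE)
```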

Two modes are provided within the app, feet first and head first, to accommodate the most common scanning procedures shown to induce claustrophobia.

To make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Customer Discovery Using 360 Video

Year after year, students in Professor Dawn White's Entrepreneurship 411 course are tasked with "customer discovery" – a process in which students interested in creating a business interview professionals in a given field to assess their needs, how a proposed product could address those needs, and how it might ease the difficulties they encounter on a daily basis.

Often when given this assignment, students would defer to their peers for feedback instead of reaching out to strangers working in the fields of interest. Because that demographic is so similar to the students themselves, the results tended to be biased and rarely got to the root of why someone might want or need a specific product. Looking for an alternative approach, Dawn teamed up with her longtime friend Professor Alison Bailey, who teaches DEI at the University, and Aileen Huang-Saad from Biomedical Engineering, and approached the Duderstadt Center with their idea: what if students could conduct their customer discovery by interviewing a simulated, more diverse professional?

After exploring several possible development routes, including motion-capture-driven CGI avatars, the team settled on 360 video as the platform for the simulation. Viewed in an Oculus Rift VR headset, 360 video ultimately gave the highest sense of realism and immersion when conducting an interview, which was important for making the interview process feel authentic.

Up until this point, 360 videos had been largely passive experiences: they did not allow users to tailor the experience based on their choices or interact with the scene in any way. The Customer Discovery project required the 360 videos to be responsive – when a student asked a recognized customer discovery question, the appropriate video response needed to be triggered. Achieving this required both programming logic to trigger the different videos and integrated voice recognition, so students could ask a question out loud and have their speech recognized within the application.

Dawn and Alison sourced three professionals to serve as their simulated actors for this project:

Fritz discusses his career as an IT professional

Fritz – a young black man with a career as an IT professional

Cristina – a middle-aged woman with a noticeable accent, working in education

Charles – a white adult man employed as a barista

These actors were chosen for their authenticity and diversity, having qualities that might lead interviewers to make certain assumptions or expose biases in their interactions with them. With the help of talented students at the Visualization Studio, the actors were filmed responding to various customer discovery questions using a Ricoh Theta 360 camera and a spatial microphone (which allows for spatial audio in VR, so the sound seems to come from the direction where the actor is sitting). So that footage of one response could be blended with the next, the actors had to return their hands and faces to the same pose between responses, allowing the clips to be aligned. They were also filmed giving generic responses to any unplanned questions that might be asked, as well as twiddling their thumbs and waiting patiently – footage that could be looped to fill idle time between questions.

Once the footage was acquired, the frame ranges for each response were noted and passed off to programmers to implement in the Duderstadt Center's in-house VR rendering software, Jugular. As an initial prototype of the concept, the application was designed to run as a proctored simulation: students wore an Oculus Rift and asked their questions out loud, while a proctor listened in and triggered the appropriate actor response using keyboard controls. For a more natural feel, Dawn was interested in exploring voice recognition to automate the process.
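Although the prototype's actual trigger logic lives inside Jugular, the proctored setup can be pictured as a simple lookup from a keypress to a frame range in the 360 footage, with the idle "waiting" footage looped between questions. The Python sketch below is illustrative only; the frame ranges, key bindings, and player object are hypothetical.

```python
# Each actor response lives in one long 360 video and is identified by the
# frame range noted during editing. All values here are made up for illustration.
FPS = 30

RESPONSES = {
    "1": ("tell_me_about_your_role",   (120, 620)),
    "2": ("biggest_daily_frustration", (640, 1310)),
    "3": ("generic_fallback",          (1340, 1500)),
}
IDLE_LOOP = (0, 110)  # actor waiting patiently; looped between questions

def frames_to_seconds(frame_range, fps=FPS):
    start, end = frame_range
    return start / fps, end / fps

def on_proctor_key(key, player):
    """Proctor presses a key after hearing the student's question; the player
    object (hypothetical) seeks the video to the matching response."""
    if key in RESPONSES:
        _, frame_range = RESPONSES[key]
        player.play_range(*frames_to_seconds(frame_range))
    else:
        player.loop_range(*frames_to_seconds(IDLE_LOOP))
```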

Within Jugular, students view an interactive 360 video in which they are seated across from one of three professionals available for interviewing. Using the microphone embedded in the Oculus Rift, they can ask questions out loud; the questions are recognized by Dialogflow, which in turn triggers the appropriate video response, allowing students to conduct mock interviews.

With Dawn employing several computer science students to tackle the voice recognition element over the summer, the feature was integrated into Jugular using a Dialogflow agent and Python scripts. Students could now be immersed in the Oculus Rift, speaking to an actor filmed in 360 video, and have their questions interpreted as they asked them out loud through the Rift's embedded microphone.
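The project's exact integration code isn't published, but querying a Dialogflow agent from Python generally follows the pattern sketched below, assuming the student's question has already been transcribed to text and that a Dialogflow project and credentials are configured. The returned intent name would then drive the video triggering described above; the intent and clip names here are hypothetical.

```python
# pip install google-cloud-dialogflow
from google.cloud import dialogflow

def detect_question_intent(project_id: str, session_id: str, text: str) -> str:
    """Send one transcribed question to a Dialogflow agent and return the
    display name of the matched intent."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en-US")
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.intent.display_name

# Hypothetical mapping from matched intents to actor response clips:
INTENT_TO_CLIP = {
    "AskAboutRole": "fritz_role_response",
    "AskAboutFrustrations": "fritz_frustrations_response",
}
```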

Upon it’s completion, the Customer Discovery application was piloted in the Visualization Studio with Dawn’s students for the Winter 2019 semester.

Leisure & Luxury in the Age of Nero: VR Exhibit for the Kelsey Museum

The Kelsey Museum's most ambitious exhibition to date, Leisure & Luxury in the Age of Nero: The Villas of Oplontis Near Pompeii, features over 230 artifacts from the ancient world. These artifacts originate from the ancient villas of Oplontis, a site near Pompeii that was destroyed when Mt. Vesuvius erupted.

Real world location of the ancient villa of Oplontis

The traveling exhibit explores the lavish lifestyle and economic interests of ancient Rome's wealthiest. The site is still being excavated and remains off limits to the general public, but as part of the Kelsey's exhibit, visitors can experience the location with the assistance of virtual reality headsets like the Oculus Rift and on tablet devices.

The Duderstadt Center worked closely with curator Elaine Gazda and the Oplontis Project team from the University of Texas to optimize a virtual re-creation for the Oculus Rift and MIDEN and to generate panoramic viewers for tablet devices. The virtual environment maps high-resolution photographs and scan data captured on location onto the surfaces of 3D models, giving a very real sense of being at the actual site.

Visitors to the Kelsey can navigate Oplontis in virtual reality through the use of an Oculus Rift headset, or through panoramas presented on tablet devices.

Visitors to the Kelsey can traverse this re-creation using the Rift, or they can travel to the Duderstadt Center to experience it in the MIDEN. Viewers can not only experience the villa as it appears in the modern day, they can also toggle on an artist's re-creation of what it would have looked like before its destruction. In the re-created version of the scene, the ornate murals covering the walls of the villa are restored, and foliage and ornate statues populate the grounds. Alongside the virtual reality experience, the Kelsey Museum will also house a physically reconstructed replica of one of the villa's rooms as part of the exhibit.

Leisure & Luxury in the Age of Nero: The Villas of Oplontis Near Pompeii is a traveling exhibit that will also be on display at the Museum of the Rockies at Montana State University in Bozeman and at the Smith College Museum of Art in Northampton, Massachusetts.

On Display at the Kelsey Museum, Leisure & Luxury in the Age of Nero: The Villas of Oplontis Near Pompeii

Michigan Alumnus: Libraries with No Limits

The Duderstadt Center's MIDEN is featured on the cover of the Michigan Alumnus with the caption "Libraries of the Future". This tribute to Michigan's high-tech libraries continues on page 36 with an article exploring the new additions to the libraries that enhance student and instructor experiences. The article introduces new visualization stations in the Duderstadt Center (dubbed "VizHubs") that are similar to the collaborative work spaces found at Google and Apple.

Read the full article here.

Steel Structures – Collaborative Learning with Oculus Rift

Civil & Environmental Engineering: Design of Metal Structures (CEE 413) uses a cluster of Oculus Rift head-mounted displays to visualize buckling metal columns in virtual reality. The cluster is configured in the Duderstadt Center's Jugular software so that the instructor leads a guided tour using a joystick while three students follow along. This configuration lets the instructor control movement around the virtual object while the students are free only to look around.
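Conceptually, the guided tour comes down to the instructor's station broadcasting its navigation pose while each student station applies that pose as the shared viewpoint and layers its own head-tracked look direction on top. The Python sketch below illustrates that leader/follower pattern; it is not Jugular's actual networking code, and the port, message format, and apply_pose callback are assumptions.

```python
import json
import socket

PORT = 5005  # hypothetical port for pose messages on the local network

def broadcast_pose(position, rotation):
    """Instructor side: send the current navigation pose to all followers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    message = json.dumps({"pos": list(position), "rot": list(rotation)})
    sock.sendto(message.encode(), ("255.255.255.255", PORT))

def follow_poses(apply_pose):
    """Follower side: apply each received pose as the shared viewpoint.
    The headset's own head tracking still controls where the student looks."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, _ = sock.recvfrom(4096)
        pose = json.loads(data)
        apply_pose(pose["pos"], pose["rot"])
```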

Developed in collaboration with the Visualization Studio using the Duderstadt Center's Jugular software, the simulation can run both on the Oculus Rift and in the MIDEN.

Lia Min: RAW, April 7 – 8

Research fellow Lia Min will be exhibiting "RAW" in the 3D Lab's MIDEN April 7 and 8 from 4 to 6 pm. All are welcome to attend. Lia Min's exhibit is an intersection of art and science, assembled through her training as a neuroscientist. Her data set, commonly referred to as a "Brainbow", focuses on a lobe of a fruit fly brain at the base of an antenna. The visualization scales microns to centimeters, enlarging the specimen to an overall visual volume of about 1.8 x 1.8 x 0.4 meters.

Mammoth Calf Lyuba On Display

Mammoth Calf Lyuba, a Collaborative Exploration of Data

On Nov. 17-19, the Duderstadt Center's visualization expert, Ted Hall, will be in Austin, Texas, representing the Duderstadt Center at SC15, a supercomputing conference. The technology on display will project people in Austin into the MIDEN, the University of Michigan's immersive virtual reality cave, letting visitors in both Ann Arbor and Austin explore the body of a mummified mammoth.

The mummified remains of Lyuba.

The mammoth in question is a calf named Lyuba, found in Siberia in 2007 after being preserved underground for 50,000 years. The specimen is considered the best-preserved mammoth mummy in the world and is currently on display at the Shemanovskiy Museum and Exhibition Center in Salekhard, Russia.

University of Michigan professor Daniel Fisher and his colleagues at the University of Michigan Museum of Paleontology arranged to have the mummy scanned using X-ray computed tomography at Ford Motor Company's Nondestructive Evaluation Laboratory. Adam Rountrey then applied a color map to the density data to reveal the internal anatomical structures.

Lyuba with her skeleton visible.

The Duderstadt Center received this data as an image stack for interactive volumetric visualization. The stack comprises 1,132 JPEG slices at a resolution of 762x700 pixels per slice, and each resulting voxel is one cubic millimeter.

When this data is brought into the Duderstadt Center's Jugular software, the user can interactively slice through the mammoth's volume by manipulating a series of hexagonal cut planes, revealing its internal structure. In the MIDEN, the user can explore the mammoth the same way while it appears to exist in front of them in three dimensions. The MIDEN's Virtual Cadaver uses a similar process.
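Conceptually, each cut plane is a resampling of the voxel volume along an arbitrary plane. The NumPy/SciPy sketch below shows one way to perform that resampling; it illustrates the general technique rather than Jugular's implementation, and the example dimensions simply echo the image stack described above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, u, v, size=512, spacing=1.0):
    """Sample a (z, y, x) volume on an arbitrary cut plane.

    origin -- a point on the plane, in voxel coordinates
    u, v   -- two orthogonal unit vectors spanning the plane
    Returns a size-by-size image of interpolated voxel values.
    """
    steps = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(steps, steps)           # plane-local coordinates
    pts = (np.asarray(origin, float)[:, None, None]
           + np.asarray(u, float)[:, None, None] * uu
           + np.asarray(v, float)[:, None, None] * vv)  # 3 x size x size
    return map_coordinates(volume, pts, order=1, mode="constant", cval=0.0)

# For a stack like the one above (1,132 slices of 762x700 pixels), an
# axis-aligned cut through the middle might look like:
# img = oblique_slice(volume, origin=(566, 350, 381), u=(0, 1, 0), v=(0, 0, 1))
```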

For the demo at SC15, users in Texas can occupy the same virtual space as a user in Ann Arbor's MIDEN. Via a Kinect sensor in Austin, a 3D mesh of the remote user is projected into the MIDEN alongside Lyuba, allowing simultaneous interaction with and exploration of the data.

Showings will take place in the MIDEN

Sean Petty and Ted Hall simultaneously explore the Lyuba data set, with Ted’s form being projected into the virtual space of the MIDEN via Kinect sensor.

More about the Lyuba specimen:
Fisher, Daniel C.; Shirley, Ethan A.; Whalen, Christopher D.; Calamari, Zachary T.; Rountrey, Adam N.; Tikhonov, Alexei N.; Buigues, Bernard; Lacombat, Frédéric; Grigoriev, Semyon; Lazarev, Piotr A. (2014). "X-ray Computed Tomography of Two Mammoth Calf Mummies." Journal of Paleontology 88(4):664-675. DOI: http://dx.doi.org/10.1666/13-092
https://en.wikipedia.org/wiki/Lyuba
http://www.dallasnews.com/lifestyles/travel/headlines/20100418-42-000-year-old-baby-mammoth-4566.ece

Surgical Planning for Dentistry: Digital Manipulation of the Jaw

CT data was brought into Zbrush & Topogun to be segmented and re-topologized. Influence was then added to the skin mesh allowing it to deform as the bones were manipulated.

Hera Kim-Berman is a Clinical Assistant Professor at the University of Michigan School of Dentistry. She recently approached the Duderstadt Center with an idea that would allow surgeons to prototype jaw surgery using patient-specific data extracted from CT scans. Hera's concept involved digitally manipulating portions of the skull in virtual reality, just as surgeons would when physically working with a patient, allowing them to preview different scenarios and evaluate how effective a procedure might be before committing to surgery.

Before re-positioning the jaw segments, the jaw has a shallow profile.

After the Duderstadt Center received the CT scan data, Shawn O'Grady extracted 3D meshes of the patient's skull and skin using Magics. From there, Stephanie O'Malley worked with the models to make them interactive and suitable for real-time platforms. This involved bringing the skull into software such as ZBrush and cutting the mesh into segments corresponding to the areas Hera identified as places where the skull would be separated during surgery. The mesh was also optimized to run at a higher frame rate on real-time platforms, and the skin mesh was "re-topologized" so it could deform more smoothly. The segmented pieces of the skull were then reassembled and assigned influence over areas of the skin in a process called "rigging". This allowed regions of the skin to move with the selected bones as a surgeon separated and shifted them in 3D space.
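The rigging step described here is essentially linear-blend skinning: each skin vertex follows a weighted combination of the bone segments that influence it, so the face deforms as the jaw pieces are moved. A minimal NumPy sketch of that underlying math is shown below; it is a generic illustration, not the production rig, which was authored in the modeling tools and evaluated by the engine.

```python
import numpy as np

def skin_vertices(rest_vertices, weights, bone_transforms):
    """Linear-blend skinning.

    rest_vertices   -- (n, 3) vertex positions of the skin in its rest pose
    weights         -- (n, b) influence of each bone on each vertex (rows sum to 1)
    bone_transforms -- (b, 4, 4) current transform of each bone segment
    Returns the deformed (n, 3) vertex positions.
    """
    n = rest_vertices.shape[0]
    homog = np.hstack([rest_vertices, np.ones((n, 1))])          # (n, 4)
    # Transform every vertex by every bone, then blend by the skin weights.
    per_bone = np.einsum("bij,nj->bni", bone_transforms, homog)  # (b, n, 4)
    blended = np.einsum("nb,bni->ni", weights, per_bone)         # (n, 4)
    return blended[:, :3]
```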

After re-positioning the jaw segments, the jaw is more pronounced.

Once a working model was achieved, it was passed off to Ted Hall and student programmer Zachary Kiekover to be implemented in the Duderstadt Center's Jugular engine, allowing the demo to run at large scale in stereoscopic 3D within the MIDEN as well as on smaller head-mounted displays like the Oculus Rift. More intuitive user controls were also added, allowing easier selection of the various bones with a game controller or with motion-tracked hand gestures via the Leap Motion. Surgeons could not only view the procedure from all angles in stereoscopic 3D, but also physically grab the bones they wanted to manipulate and transpose them in 3D space.

Zachary demonstrates the ability to manipulate the model using the Leap Motion.

Virtual Cadaver – Supercomputing

The Virtual Cadaver is a visualization of data provided by the Visible Human Project of the National Library of Medicine. This project aimed to create a digital image dataset of complete human male and female cadavers.

Volumetric anatomy data from the Visible Human Project

The male dataset originates from Joseph Paul Jernigan, a 38-year-old convicted Texas murderer who was executed by lethal injection and donated his body for scientific research in 1993. The female cadaver remains anonymous and has been described as a 59-year-old Maryland housewife who died of a heart attack. Her specimen contains several pathologies, including cardiovascular disease and diverticulitis.

Both cadavers were encased in a gelatin and water mixture and frozen to produce the fine slices that comprise the data. The male dataset consists of 1,871 slices taken at 1-millimeter intervals; the female dataset comprises 5,164 slices taken at 0.33-millimeter intervals.

The Duderstadt Center was directed to the female dataset in December 2013. To load the data into the virtual reality MIDEN (a fully immersive, multi-screen, head-tracked CAVE environment) and a variety of other display environments, the images were pre-processed into JPEGs at 1024x608 pixels. Every tenth slice is loaded, so the figure is formed from 517 slices at 3.3-millimeter spacing. A generic image-stack loader was written so that a 3D volume model can be produced from any stack of images, not just the Visible Human data. It can also be configured to load a denser sampling of slices over a shorter range should a subset of the model need to be viewed in higher detail.
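A generic loader of this kind boils down to reading every Nth image from a folder and stacking the results into a 3D array. The Python sketch below illustrates the idea; the file naming, stride, and libraries are assumptions rather than details of Jugular's own loader.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_image_stack(folder, stride=10, start=0, end=None):
    """Build a 3D volume from any folder of numbered slice images, optionally
    skipping slices (stride) or limiting the range (start/end) so that a
    subset can be sampled more densely for higher detail."""
    paths = sorted(Path(folder).glob("*.jpg"))[start:end:stride]
    slices = [np.asarray(Image.open(p)) for p in paths]
    return np.stack(slices)

# e.g. every tenth slice of the pre-processed Visible Human female stack:
# volume = load_image_stack("visible_human_female_jpegs", stride=10)
```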

Users can navigate around the data in passive cinema-style stereoscopic projection. In the case of the Virtual Cadaver, the body appears much as it would to a surgeon, revealing the various bones, organs, and tissues. Using a game controller, users can position sectional planes arbitrarily to view cross-sections of the subject, making cuts that would be very difficult to produce in a traditional anatomy lab. The system accommodates markerless motion tracking through devices like the Microsoft Kinect and supports multiple simultaneous users interacting with a shared scene from remote locations across a network.