Scientific Visualization of Pain

XR at the Headache & Orofacial Pain Effort (HOPE) Lab

Dr. Alexandre DaSilva is an Associate Professor in the School of Dentistry, an Adjunct Associate Professor of Psychology in the College of Literature, Science & Arts, and a neuroscientist in the Molecular and Behavioral Neuroscience Institute.  Dr. DaSilva and his associates study pain – not only its cause, but also its diagnosis and treatment – in his Headache & Orofacial Pain Effort (HOPE) Lab, located in the 300 N. Ingalls Building.

Dr. Alex DaSilva slices through a PET scan of a “migraine brain” in the MIDEN, to find areas of heightened μ-opioid activity.

Virtual and augmented reality have been important tools in this endeavor, and Dr. DaSilva has brought several projects to the Digital Media Commons (DMC) in the Duderstadt Center over the years.

In one line of research, Dr. DaSilva has obtained positron emission tomography (PET) scans of patients in the throes of migraine headaches.  The raw data obtained from these scans are three-dimensional arrays of numbers that encode the activation levels of dopamine or μ-opioid in small “finite element” volumes of the brain.  As such, they’re incomprehensible.  But, we bring the data to life through DMC-developed software that maps the numbers into a blue-to-red color gradient and renders the elements in stereoscopic 3D virtual reality (VR) – in the Michigan Immersive Digital Experience Nexus (MIDEN), or in head-mounted displays such as the Oculus Rift.  In VR, the user can effortlessly slide section planes through the volumes of data, at any angle or offset, to hunt for the red areas where the dopamine or μ-opioid signals are strongest.  Understanding how migraine headaches affect the brain may help in devising more focused and effective treatments.
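The blue-to-red mapping described above can be sketched in a few lines. This is an illustrative example only, not the DMC software's actual implementation; the function name and linear RGB interpolation are assumptions:

```python
def activation_to_rgb(value, vmin, vmax):
    """Map a scalar activation value (e.g., a PET voxel) to a
    blue-to-red gradient: vmin renders blue, vmax renders red."""
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))   # clamp to [0, 1]
    return (t, 0.0, 1.0 - t)    # (R, G, B): blue fades out as red fades in

# A voxel halfway between the extremes renders purple:
print(activation_to_rgb(0.5, 0.0, 1.0))
```

Each voxel's color is computed this way, and a section plane simply selects which voxels to draw; the red end of the gradient marks the strongest dopamine or μ-opioid signals.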

Dr. Alex DaSilva’s associate, Hassan Jassar, demonstrates the real-time fNIRS-to-AR brain activation visualization, as seen through a HoloLens, as well as the tablet-based app for painting pain sensations on an image of a head. [Photo credit: Hour Detroit magazine, March 28, 2017. https://www.hourdetroit.com/health/virtual-reality-check/]

In another line of research, Dr. DaSilva employs functional near-infrared spectroscopy (fNIRS) to directly observe brain activity associated with pain in real time, as the patient experiences it.  As Wikipedia describes it: “Using fNIRS, brain activity is measured by using near-infrared light to estimate cortical hemodynamic activity which occurs in response to neural activity.”  [https://en.wikipedia.org/wiki/Functional_near-infrared_spectroscopy]  The study participant wears an elastic skullcap fitted with dozens of fNIRS sensors wired to a control box, which digitizes the signal inputs and sends the numeric data to a personal computer running a MATLAB script.  From there, two pieces of DMC-developed software enable neuroscientists to visualize the data in augmented reality (AR).  The first is a MATLAB function that opens a Wi-Fi connection to a Microsoft HoloLens and streams the numeric data out to it.  The second is a HoloLens app that receives that data stream and renders it as blobs of light that change hue and size to represent the ± polarity and intensity of each signal.  The translucent nature of HoloLens AR rendering allows the neuroscientist to overlay this real-time data visualization on the actual patient.  Being able to directly observe neural activity associated with pain may enable a more objective scale than asking a patient to verbally rate their pain, for example “on a scale of 1 to 5”.  Moreover, it may be especially helpful for diagnosing or empathizing with patients who are unable to express their sensations verbally at all, whether due to simple language barriers or due to other complicating factors such as autism, dementia, or stroke.
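The blob rendering can be sketched as a mapping from each signed channel value to a hue and a scale. This is a hypothetical illustration, not the actual HoloLens app code; the function name, hue values, and base size are assumptions:

```python
def signal_to_blob(signal, max_abs):
    """Map one signed fNIRS channel value to a blob's hue and scale.
    Positive values render warm (red hue), negative values cool (blue);
    the signal's magnitude drives the blob's size."""
    intensity = min(abs(signal) / max_abs, 1.0)  # normalize to [0, 1]
    hue = 0.0 if signal >= 0 else 0.67           # HSV hue: red vs. blue
    scale = 0.5 + intensity                      # base size plus magnitude
    return hue, scale
```

A strongly negative channel, for example, yields a large blue blob; a weak signal of either polarity stays near the base size.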

Yet another DMC software development, the “PainTrek” mobile application, also initiated by Dr. DaSilva, allows patients to “paint their pain” on an image of a manikin head that can be rotated freely on the screen, a more convenient and intuitive reporting mechanism than filling out a conventional questionnaire.

PainTrek app allows users to “paint” regions of the body experiencing pain to indicate and track pain intensity.

Architectural Lighting Scenarios Envisioned in the MIDEN

ARCH 535 & ARCH 545, Winter 2022

Mojtaba Navvab, Ted Hall


Prof. Mojtaba Navvab teaches environmental technology in the Taubman College of Architecture and Urban Planning, with particular interests in lighting and acoustics.  He is a regular user of the Duderstadt Center’s MIDEN (Michigan Immersive Digital Experience Nexus) – in teaching as well as sponsored research.

On April 7, 2022, he brought a combined class of ARCH 535 and ARCH 545 students to the MIDEN to see, and in some cases hear, their projects in full-scale virtual reality.

Recreating the sight and sound of the 18-story atrium space of the Hyatt Regency Louisville, where the Kentucky All State Choir gathers to sing the National Anthem.

ARCH 535: To understand environmental technology design techniques through case studies and compliance with building standards.  VR applications are used to view the design solutions.

ARCH 545: To apply the theory, principles, and lighting design techniques using a virtual reality laboratory.

“The objectives are to bring whatever you imagine to reality in a multimodal perception; in the MIDEN environment, whatever you create becomes a reality.  This aims toward simulation, visualization, and perception of light and sound in a virtual environment.”

Recreating and experiencing one of the artworks by James Turrell.

“Human visual perception is psychophysical because any attempt to understand it necessarily draws upon the disciplines of physics, physiology, and psychology.  A ‘Perceptionist’ is a person concerned with the total visual environment as interpreted in the human mind.”

“Imagine if you witnessed or viewed a concert hall or a choir performance in a cathedral.  You could describe the stimulus generated by the architectural space by considering each of the senses independently as a set of unimodal stimuli.  For example, your eyes would be stimulated with patterns of light energy bouncing off the simulated interior surfaces or luminous environment while you listen to an orchestra playing or a choir singing with correctly auralized room acoustics.”

A few selected images photographed in the MIDEN are included in this article.  For the user wearing the stereoscopic glasses, the double images resolve into an immersive 3D visual experience that they can step into, with 270° of peripheral vision.

Students explore a daylight design solution for a library.

Learning to Develop for Mixed Reality – The ENTR 390 “VR Lab”

XR Prototyping

For the past several years, students enrolled in the Center for Entrepreneurship’s Intro to Entrepreneurial Design Virtual Reality course have been introduced to programming and content-creation pipelines for XR development using a variety of Visualization Studio resources. Their goal? Create innovative applications for XR. From creating video games to changing the way class material is accessed with XR-capable textbooks, if you have an interest in learning how to make your own app for the Oculus Rift, the MIDEN, or even a smartphone, this might be a class to enroll in. Students interested in this course are not required to have any prior programming or 3D modeling knowledge, and if you’ve never used a VR headset, that’s OK too. This course will teach you everything you need to know.

Henry Duhaime presents his VR game for Oculus Rift, in which players explore the surface of Mars in search of a missing NASA rover.
Michael Meadows prototypes AR capable textbooks using a mobile phone and Apple’s ARKit.

Multi-Sensing the Universe

Envisioning a Toroidal Universe

Robert Alexander teamed with Danielle Battaglia, a senior in Art & Design, to compose and integrate audio effects into her conceptual formal model of the Toroidal Universe.  Danielle combined Plato’s notion of the universe as a dodecahedron with modern notions of black holes, worm holes, and child universes.  Their multi-sensory multiverse came together in the MIDEN and was exhibited there as part of the Art & Design senior integrative art exhibition.

Interested in using the MIDEN to do something similar? Contact us.

Behind the Scenes: Re-creating Citizen Kane in VR

inside a classic

Stephanie O’Malley


Students in Matthew Solomon’s classes are used to critically analyzing film. Now they get the chance to be the director of arguably one of the most influential films ever produced: Citizen Kane.

Using an application developed at the Duderstadt Center with grant funding provided by LSA Technology Services, students take on the role of the film’s director and record a prominent scene from the movie using a virtual camera. The film set, which no longer exists, has been meticulously re-created in black-and-white CGI using reference photographs from the original set, with a CGI Orson Welles acting out the scene on repeat. His actions were performed by motion capture actor Matthew Henerson, carefully chosen for his likeness to Welles, and the Orson avatar was generated from a photogrammetry scan of Henerson.

Top down view of the CGI re-creation of the film set for Citizen Kane

To determine a best estimate of the set’s scale for 3D modeling, the original film footage was analyzed: doorways were measured, actor heights compared, and footsteps counted. With feedback from Citizen Kane expert Harlan Lebo, details as fine as the topics of the books on the bookshelves were determined.

Archival photograph provided by Vincent Longo of the original film set

Motion capture actor Matthew Henerson was flown in to play the role of the digital Orson Welles. In a carefully choreographed session directed by PhD student Vincent Longo, the iconic scene from Citizen Kane was re-enacted while the original footage played on an 80″ TV in the background, ensuring every step aligned perfectly with the original.

Actor Matthew Henerson in full mocap attire amidst the makeshift set for Citizen Kane – Props constructed using PVC. Photo provided by Shawn Jackson.

The boundaries of the set were taped on the floor so the data could be aligned to the digitally re-created set. Eight Vicon motion capture cameras, the same kind used throughout Hollywood for films like Lord of the Rings and Planet of the Apes, formed a circle around the makeshift set. These cameras track the actor’s motion using infrared light reflected off small markers affixed to the motion capture suit. Any props used during the recording were carefully constructed out of cardboard and PVC (to be 3D modeled later) so as not to obstruct his movements. The 3 minutes of footage being re-created took 3 days to capture, comprising over 100 individual mocap takes and several hours of footage, which were then compared for accuracy and stitched together to complete the full route Orson travels through the environment.

Matthew Henerson
Orson Welles

Matthew Henerson then swapped his motion capture suit for an actual suit, similar to the one worn by Orson Welles in the film, and underwent 3D scanning using the Duderstadt Center’s photogrammetry resources.

Actor Matthew Henerson wears asymmetrical markers to assist the scanning process

Photogrammetry is a method of scanning existing objects or people, commonly used in Hollywood and throughout the video game industry to create CGI likenesses of famous actors. The technology has been used in films like Star Wars (an actress similar in appearance to Carrie Fisher was scanned and then further sculpted to create a more youthful Princess Leia), and entire studios are now devoted to photogrammetry scanning. The process relies on several digital cameras surrounding the subject and taking simultaneous photographs.

Matthew Henerson being processed for Photogrammetry

The photos are fed into software that analyzes them on a per-pixel basis, looking for common features across multiple photos. When a feature is recognized, it is triangulated using the camera’s focal length and its position relative to other identified features, allowing millions of tracking points to be generated. From these, an accurate 3D model can be produced, with the original digital photos mapped onto its surface to preserve photo-realistic color. These models can be further manipulated: sometimes they are sculpted by an artist, or, with the addition of a digital “skeleton”, they can be driven by motion data to become a fully articulated digital character.
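The triangulation step can be illustrated with the classic two-ray midpoint calculation: each camera that sees a matched feature defines a viewing ray, and the 3D point is estimated where those rays pass closest to one another. This is a simplified sketch of the geometry, not the photogrammetry software's actual algorithm:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Estimate the 3D position of a feature matched in two photos
    as the midpoint of the closest points on the two viewing rays.
    o1, o2: camera centers; d1, d2: unit directions toward the feature."""
    # Solve for t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|
    b = o2 - o1
    d1d2 = d1 @ d2
    denom = 1.0 - d1d2 ** 2        # zero only if the rays are parallel
    t1 = (b @ d1 - (b @ d2) * d1d2) / denom
    t2 = ((b @ d1) * d1d2 - b @ d2) / denom
    p1 = o1 + t1 * d1              # closest point on ray 1
    p2 = o2 + t2 * d2              # closest point on ray 2
    return (p1 + p2) / 2.0
```

Real photogrammetry pipelines repeat this over millions of matched features, and across many more than two cameras, with a bundle-adjustment step refining the camera positions themselves.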

The 3D-modeled scene and scanned actor model were joined with the mocap data and brought into the Unity game engine, where the functionality students would need to film within the 3D set was developed. A virtual camera was built with the same settings found on a film camera of that era. Viewing the scene in a virtual reality headset like the Oculus Rift, students can pick up the camera and physically move around to position it anywhere in the CGI environment, often capturing shots that would be difficult to achieve on a conventional film set. The footage students capture within the app can be exported as MP4 video and edited in their software of choice, just like any other camera footage.

After its use in Matthew Solomon’s course in the Winter of 2020, the Citizen Kane project was recently on display as part of iLRN’s 2020 Immersive Learning Project Showcase & Competition. With Covid-19 making the conference a remote experience, the project could be experienced in virtual reality by conference attendees using the FrameVR platform. Highlighting innovative ways of teaching with VR technologies, attendees from around the world were able to learn about the project and watch student edits made using the application.

Citizen Kane on display for iLRN’s 2020 Immersive Learning Project Showcase & Competition using Frame VR

Novels in VR – Experiencing Uncle Tom’s Cabin

A Unique Perspective

Stephanie O’Malley


This past semester, English Professor Sara Blair taught a course at the University titled “The Novel and Virtual Realities.”  The purpose of this course was to expose students to different methods of analyzing novels and to ways of understanding them from different perspectives by utilizing platforms like VR and AR.

Designed as a hybrid course, her class was split between a traditional classroom environment and an XR lab, providing a comparison between traditional learning methods and more hands-on, experiential lessons through immersive, interactive VR and AR simulations.

As part of her class curriculum, students were exposed to a variety of experiential XR content. Using the Visualization Studio’s Oculus Rifts, her class was able to view Dr. Courtney Cogburn’s “1000 Cut Journey” installation – a VR experience that puts viewers in the shoes of a Black American man growing up in the time of segregation, allowing viewers to see firsthand how racism affects every facet of his life. They also had the opportunity to view Asad J. Malik’s “Terminal 3” using augmented reality devices like the Microsoft HoloLens. Students engaging with Terminal 3 see how Muslim identities in the U.S. are approached through the lens of an airport interrogation.

Wanting to create a similar experience for her students at the University of Michigan, Sara approached the Duderstadt Center about the possibility of turning another novel into a VR experience: Uncle Tom’s Cabin.

She wanted her students to understand the novel from the perspective of its lead character, Eliza, at the pivotal moment when, as a slave, she is trying to escape her captors and reach freedom. But she also wanted to give her students the perspective of the slave owner and the other slaves tasked with her pursuit, as well as that of an innocent bystander watching the scene unfold.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin

Using Unreal Engine, the Duderstadt Center was able to make this a reality. An expansive winter environment was created based on imagery detailed in the novel, and CGI characters for Eliza and her captors were produced and paired with motion capture data to drive their movements. When students put on the Oculus Rift headset, they can choose to experience the moment of escape through the perspective of Eliza, her captors, or a bystander. To better evaluate which components contributed to students’ feelings during the simulation, versions of these scenarios were provided with and without sound. With sound enabled as Eliza, you hear footsteps in the snow gaining on you, the crack of the ice beneath your feet as you leap across a tumultuous river, and the barking of a vicious dog at your heels, all adding to the tension of the moment. While viewers are able to freely look around the environment, they are passive observers: they have no control over the choices Eliza makes or where she can go.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin – Freedom for Eliza lies on the other side of the frozen Ohio river.

The scene ends with Eliza reaching freedom on the opposite side of the Ohio River, leaving her pursuers behind. What followed the students’ experience with the VR version of the novel was a deep class discussion on how the scene felt in VR versus how it felt reading the same passage in the book. Some students wondered what it might feel like to instead be able to control the situation and decide where Eliza goes, or, as a bystander, to move freely through the environment as the scene plays out, choosing which party (Eliza or her pursuers) was of most interest to follow in that moment.

While Sara’s class has concluded for the semester, you can still try this experience for yourself – Uncle Tom’s Cabin is available to demo on all Visualization Studio workstations equipped with an Oculus Rift.

Using Mobile VR to Assess Claustrophobia During an MRI

new methods for exposure therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Aware that patients undergoing MRI scans often experience claustrophobia, they wanted to use VR simulations to introduce prospective patients to what being inside an MRI machine might feel like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While there are 360° videos that convey what an MRI might look like, these fail to address the major factor contributing to claustrophobia: the perceived confined space within the bore. 360° videos tend to skew the environment, making it seem farther away than it would be in reality, and thereby fail to induce the feelings of claustrophobia that the actual bore produces. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

In order to simulate the effects of an MRI accurately, a CGI MRI machine was constructed and ported to the Unity game engine. A customizable avatar representing the viewer’s body was also added to give viewers a sense of self. When a VR headset is worn, the viewer’s perspective allows them to see their avatar body and the real proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said throughout the course of a real MRI, with the intimidating boom of the machine occurring as the simulated scan proceeds.

Two modes are provided within the app: Feet first or head first, to accommodate the most common scanning procedures that have been shown to induce claustrophobia.  

In order to make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Students Learn 3D Modeling for Virtual Reality

making tiny worlds

Stephanie O’Malley


ArtDes240 is a course offered by the Stamps School of Art & Design, taught by Stephanie O’Malley, that teaches students 3D modeling and animation.  As one of only a few 3D digital classes offered at the University of Michigan, AD240 draws students from several schools across campus looking to gain a better understanding of 3D art as it pertains to the video game industry.

The students in AD240 are given a crash course in 3D modeling in 3D Studio Max and level creation within the Unreal Editor. Within Unreal, all of their objects are positioned, terrain is sculpted, and atmospheric effects such as time of day, weather, or fog are added.

“Candyland” – Elise Haadsma & Heidi Liu, developed using 3D Studio Max and Unreal Engine

With just 5 weeks to model their entire environment, bring it into Unreal, package it as an executable, and test it in the MIDEN (or on the Oculus Rift), the students produced truly impressive projects. Art & Design students Elise Haadsma and Heidi Liu took inspiration from the classic board game Candyland to create a life-size game-board environment in Unreal, consisting of a lollipop forest, mountains of Hershey’s Kisses, even a gingerbread house and a chocolate river.

Lindsay Balaka, from the School of Music, Theatre & Dance, chose to create her scene using the Duderstadt Center’s in-house rendering software, Jugular, instead of Unreal Engine. Her creation, “Galaxy Cakes”, is a highly stylized cupcake shop, reminiscent of an episode of the 1960s cartoon The Jetsons, complete with spatial audio emanating from the corner jukebox.

Lindsay Balaka’s “Galaxy Cakes” environment
An abandoned school, created by Vicki Liu in 3D Studio Max and Unreal Engine

Vicki Liu, also of Art & Design, created a realistic horror scene using Unreal. After navigating down a poorly lit hallway of an abandoned nursery school, you find yourself in a run-down classroom inhabited by some kind of madman. A tally of days passed has been scratched into the walls, an eerie message is scrawled on the chalkboard, and furniture haphazardly barricades the windows.

While the goal of the final project was to create a traversable environment for virtual reality, some students took it a step further.

Art & Design student Gus Schissler created an environment composed of neurons in Unreal, intended for viewing within the Oculus Rift. He then integrated data from an Epoch neuroheadset (a device capable of reading brain waves) to allow the viewer to interact with the environment using only their thoughts. The viewer’s mood, as detected by the Epoch, not only changed the way the environment looked by adjusting the intensity of the light emitted by the neurons, but also allowed the viewer to think specific commands (push, pull, etc.) in order to navigate past various obstacles in the environment.

Students spend the last two weeks of the semester scheduling time with Ted Hall and Sean Petty to test their scenes and ensure everything runs and looks correct on the day of their presentations. This class not only introduced students to the design process, but also gave them hands-on experience with emerging technologies as virtual reality continues to expand in the game and film industries.

Student Gus Schissler demonstrates his neuron environment for the Oculus Rift, which uses inputs from an Epoch neuroheadset for interaction.

Passion & Violence: Anna Galeotti’s MIDEN Installation

Anna Galeotti, a Ph.D. Fulbright Scholar (Winter 2014), explored the concept of “foam” or “bubbles” as a possible model for audiovisual design elements and their relationships. Her art installation, “Passion and Violence in Brazil”, was displayed in the Duderstadt Center’s MIDEN.

Interested in using the MIDEN to do something similar? Contact us.