The M.I.D.E.N. transports you into Nova’s Reality

Using Virtual Production Techniques to Film a Music Video

Authors

Emma Powell and Eilene Koo


Photo Credit: Eilene Koo

The Michigan Immersive Digital Experience Nexus (MIDEN) doesn’t seem very large at first: a 10 x 10 x 10 foot space, a blank canvas ready for its next project, like “Nova’s Reality.” For this special performance, though, the MIDEN transformed into an expansive galactic environment, placing electronic jazz musician Nova Zaii in the middle of a long subway car traveling through space. 

The technological centerpiece of Zaii’s performance was the Nova Portals, a cone-shaped instrument of his own invention that creates sound from motion, without requiring direct contact. Zaii created the Nova Portals while a student at the University of Michigan, where he majored in Performing Arts Technology and Jazz Studies. 

Several members of the U-M community helped bring Zaii’s performance to life inside the MIDEN. Akash Dewan, a graduated senior and director of the project, first met Nova in his freshman year. “I first met Nova in my freshman year in the spring, when I decided to shoot Masimba Hwati’s opening ceremony for his piece at the UMMA,” Dewan said. Masimba Hwati is a Zimbabwean sculptor and musician who made the Ngoromera, a sculpture made out of many different objects and musical instruments. When Hwati performed at the opening ceremony, his drummer was Nova Zaii. 

“I took photos throughout the performance and took some particularly cool photos of Nova, so I decided to chat with him after the event and show him the photos I took,” Dewan said. “Since then, I have been following his journey on Instagram as a jazz musician part of the Juju Exchange [a musical partnership between Zaii and two childhood friends] and inventor of the Nova Portals, which I’ve always taken a fascination towards due to my interest in the intersection of technology and art.”

Following major hardware upgrades, the MIDEN’s image refresh rate has doubled and its projectors are brighter, supporting stereoscopic immersion for two simultaneous users — four perspective projections for four eyes, 60 times per second. 

Sean Petty, senior software developer at the DMC’s Emerging Technologies Group, assisted the “Nova’s Reality” team inside the MIDEN and explained how the space was used. 

“This project is using the MIDEN as a virtual production space, where the MIDEN screens will deliver a perspective correct background that will make it appear as if the actor is actually in the virtual environment from the perspective of the camera,” Petty said. “To accommodate this, we modified the MIDEN to only display one image per frame, rather than the four that would be required for the usual two user VR experience. We also reconfigured the motion tracking to track the motion of the camera, rather than the motion of the VR glasses.”

The high-speed projection refresh rate of 240 scans per second allowed for flicker-free recording by the video camera.
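Petty’s description of a “perspective correct background” refers to rendering the scene through an asymmetric (off-axis) frustum computed from the tracked camera position relative to each screen wall. Below is a minimal sketch of that computation, following the widely used generalized perspective projection formulation; it is not the MIDEN’s actual code, and the screen-corner coordinates, camera position, and clip distances are hypothetical.

```python
import numpy as np

def off_axis_frustum(eye, pa, pb, pc, near, far):
    """Asymmetric frustum (left, right, bottom, top) for a flat screen.

    eye -- tracked camera position (world coordinates)
    pa, pb, pc -- screen corners: lower-left, lower-right, upper-left
    near, far -- clipping distances along the screen normal
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen "right" axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen "up" axis
    vn = np.cross(vr, vu)                      # screen normal, toward the eye
    vn /= np.linalg.norm(vn)

    va, vb, vc = pa - eye, pb - eye, pc - eye  # eye-to-corner vectors
    d = -np.dot(va, vn)                        # eye-to-screen distance

    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top, near, far

# Hypothetical 10 ft (~3.05 m) front wall of a MIDEN-like cube, camera off-center.
pa  = np.array([-1.524, 0.0,   -1.524])   # lower-left corner
pb  = np.array([ 1.524, 0.0,   -1.524])   # lower-right corner
pc  = np.array([-1.524, 3.048, -1.524])   # upper-left corner
eye = np.array([ 0.3,   1.6,    0.0])     # tracked camera position
print(off_axis_frustum(eye, pa, pb, pc, near=0.1, far=100.0))
```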

“This entire process was extremely inspiring for all of us involved, and maintains my strong drive to continue to find new, fresh interdisciplinary approaches to visualizing music,” said Dewan. After this project and their graduation, each member will continue their individual creative work. 

The full project will be published on novazaii.com and akashdewan.com.

Photo Credit: Eilene Koo


Find more of their creative works through their Instagram profiles:

Performer & Inventor of the Nova Portals: Nova Zaii, @novazaii

Director, Co-Director of Photography, Editor, 3D Graphics Developer: Akash Dewan, @akashdewann

Audio Engineer/Technician: Adithya Sastry, @adithyasastry

3D Graphics Developer: Elvis Xiang

Co-Director of Photography: Gokul Madathil, @madlight.productions

BTS Photographer: Randall Xiao, @randysfoto

MIDEN Staff + Technical Support: Sean Petty and Theodore Hall

Volunteers Needed: Study on Virtual Reality Technologies

Participate in a VR Study with the Warden Lab

Hello Wolverines!


Volunteers are needed for a research study by the Warden Lab. We are conducting an innovative experiment to understand how people interact with digital information presented in virtual reality (VR). This study (Institutional Review Board reference ID HUM00260931) will involve responding to questions about information presented in VR. Your participation will help us explore potential applications of VR technologies in enhancing our understanding and engagement with digital content.


Session Duration: The experiment will be completed in a single session, lasting no more than 1.5 hours.

Location: On North Campus, Room G733 in the basement of the Industrial and Operations Engineering Building (IOE, 1205 Beal Avenue, Ann Arbor, MI 48109).

Compensation: Participants will receive $20 (cash) after completion of the experiment.

Requirements to participate:

● Be at least 18 years old.
● Have normal or corrected-to-normal vision.
● Not be susceptible to motion sickness or cyber sickness.
● Not have a history of seizure disorders.
● Not be colorblind.


We strongly encourage people with previous VR experience to apply for participation.


If you are interested, please fill out the Interest Form to participate in our experiment.

Renew Scleroderma

Mobile Health Tracking App for managing Scleroderma

The Renew Scleroderma app aims to assist individuals with scleroderma by giving them access to a full list of resources and activities designed to help manage their condition. RENEW stands for Resilience-based Energy Management to Enhance Well-being.

Scleroderma is a rare autoimmune condition that causes inflammation and thickening of the skin, particularly in the hands and face. The excess collagen can also advance to internal organs, potentially causing complications in multiple bodily systems. Those diagnosed with scleroderma carry a high symptom burden and need to learn strategies for self-management.

The mobile app presents users with information on scleroderma as well as weekly activities they can perform to manage the condition. Users set goals within the app and track health behaviors such as activity pacing, sleep, relaxation, and engagement in physical activity. Progress toward these goals is accessible to an assigned health coach through a secure web portal. Patients have regular check-ins with their health coach to discuss their progress and adjust their management plan based on what the app records. Symptoms can be tracked in real time, and participants can log health behaviors specific to the learning modules of the RENEW program. 
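As a purely hypothetical sketch of the kind of data the app and coach portal might exchange (the real app’s data model is not described here), the snippet below models weekly goals and tracked behaviors drawn from the categories mentioned above.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical behavior categories, taken from the description above.
BEHAVIORS = ("activity_pacing", "sleep", "relaxation", "physical_activity")

@dataclass
class WeeklyGoal:
    behavior: str          # one of BEHAVIORS
    target_sessions: int   # e.g., practice relaxation five times this week
    completed: int = 0

@dataclass
class ProgressEntry:
    day: date
    behavior: str
    minutes: int
    symptom_rating: int    # 0-10 self-report, visible to the coach portal

@dataclass
class WeeklyPlan:
    week_start: date
    goals: list = field(default_factory=list)
    entries: list = field(default_factory=list)

    def completion(self, behavior: str) -> float:
        """Fraction of the weekly target met, as a coach might review it."""
        goal = next((g for g in self.goals if g.behavior == behavior), None)
        if goal is None or goal.target_sessions == 0:
            return 0.0
        done = sum(1 for e in self.entries if e.behavior == behavior)
        return min(1.0, done / goal.target_sessions)
```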

Image of a mobile device demonstrating the app

Renew is quick and easy to use: users download the mobile app from either the Google Play or iTunes app store and create an account. The app is developed for both iOS and Android mobile devices, making it widely accessible to the general public. 

A key benefit of Renew is that the app relays a user’s progress to a database accessed through a secure web portal. The portal connects users with an assigned University of Michigan health coach, who has access to the information they enter into the app. Coaches are drawn from a pool of qualified health coaches at Michigan Medicine – all of whom have scleroderma themselves. A coach can view their mentees’ progress within the mobile app, provide feedback, and prepare for their one-on-one meetings. 

One main consideration in the design process was to ensure that the app is physically easy for users to interact with. Most people with scleroderma have limited hand function, so the team consulted directly with users on where to put navigation buttons, how big the buttons needed to be, and how information should be entered into the app to reduce fatigue.

Susan Murphy was the faculty member for the development team consisting of Sara ‘Dari’ Eskandari, Daniel Vincenz, and Sean Petty. Development was supported by the LiveWell App Factory to Support Health and Function of People with Disabilities, funded by a grant from the National Institute on Disability, Independent Living, and Rehabilitation Research in the U.S. Department of Health and Human Services. With a working prototype completed and piloted with patients, future iterations of the app have been handed off to Atomic Object, a custom software development and design company local to Ann Arbor.

Video of the Mobile App Preview

Recruiting Unity VR programmers to Evaluate Sound Customization Toolkit for Virtual Reality Applications

Participate in a study by the EECS Accessibility Lab

The EECS Accessibility Lab needs your help evaluating a new Sound Accessibility toolkit for Virtual Reality!

Our research team is studying how sound customization tools, like modulating frequency or dynamically adjusting volume, can enhance the VR experience for deaf and hard-of-hearing (DHH) people. We are recruiting adult (18 or older) participants who have at least one year of experience working with Unity VR and at least two previous projects with sound into which our toolkit can be added.
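For illustration only, here is a minimal sketch, in plain Python/NumPy rather than the lab’s Unity toolkit, of the two kinds of adjustments mentioned above: scaling volume by a gain in decibels, and crudely shifting frequency by resampling. All parameter values are illustrative.

```python
import numpy as np

def adjust_volume(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a mono audio buffer by a gain in decibels, clipping to [-1, 1]."""
    gain = 10.0 ** (gain_db / 20.0)
    return np.clip(samples * gain, -1.0, 1.0)

def shift_frequency(samples: np.ndarray, ratio: float) -> np.ndarray:
    """Crude frequency shift by resampling (also changes duration).

    ratio < 1 stretches the buffer (lower pitch at playback);
    ratio > 1 shrinks it (higher pitch at playback).
    """
    idx = np.arange(0, len(samples), ratio)
    return np.interp(idx, np.arange(len(samples)), samples)

# Illustrative use: boost a quiet cue by 6 dB and lower it by an octave.
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
cue = 0.1 * np.sin(2 * np.pi * 880.0 * t)      # quiet 880 Hz tone
louder = adjust_volume(cue, gain_db=6.0)
lowered = shift_frequency(cue, ratio=0.5)      # ~440 Hz when played back at sr
```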

This study will be self-paced, remote, and asynchronous. It will take around 60–90 minutes.

In this study, we will collect some demographic information about you (e.g., age, gender) and ask about your experience working with Unity VR. We will then introduce our Sound Customization Toolkit and ask you to apply it to your own project, recording your screen and voice during the implementation process. You will also complete a form during the study to provide feedback on our toolkit.

After the study, we will compensate you $30 in the form of an Amazon Gift Card for your time.

If you are interested in participating, please fill out this Google Form. For more information, feel free to reach out to Xinyun Cao: [email protected].

For more details on our work, see our lab’s webpage.

Scientific Visualization of Pain

XR at the Headache & Orofacial Pain Effort (HOPE) Lab

Dr. Alexandre DaSilva is an Associate Professor in the School of Dentistry, an Adjunct Associate Professor of Psychology in the College of Literature, Science & Arts, and a neuroscientist in the Molecular and Behavioral Neuroscience Institute.  Dr. DaSilva and his associates study pain – not only its cause, but also its diagnosis and treatment – in his Headache & Orofacial Pain Effort (HOPE) Lab, located in the 300 N. Ingalls Building.

Dr. Alex DaSilva slices through a PET scan of a “migraine brain” in the MIDEN, to find areas of heightened μ-opioid activity.

Virtual and augmented reality have been important tools in this endeavor, and Dr. DaSilva has brought several projects to the Digital Media Commons (DMC) in the Duderstadt Center over the years.

In one line of research, Dr. DaSilva has obtained positron emission tomography (PET) scans of patients in the throes of migraine headaches.  The raw data obtained from these scans are three-dimensional arrays of numbers that encode the activation levels of dopamine or μ-opioid in small “finite element” volumes of the brain.  As such, they’re incomprehensible.  But, we bring the data to life through DMC-developed software that maps the numbers into a blue-to-red color gradient and renders the elements in stereoscopic 3D virtual reality (VR) – in the Michigan Immersive Digital Experience Nexus (MIDEN), or in head-mounted displays such as the Oculus Rift.  In VR, the user can effortlessly slide section planes through the volumes of data, at any angle or offset, to hunt for the red areas where the dopamine or μ-opioid signals are strongest.  Understanding how migraine headaches affect the brain may help in devising more focused and effective treatments.
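As a sketch of the kind of mapping described above (not the DMC’s actual renderer), the snippet below normalizes voxel activation values onto a blue-to-red gradient and extracts a single axis-aligned section plane from a synthetic 3D array; in the real application the section planes can be at any angle and the values come from PET scans.

```python
import numpy as np

def blue_to_red(values: np.ndarray) -> np.ndarray:
    """Map scalars to RGB: low activation -> blue, high activation -> red."""
    lo, hi = values.min(), values.max()
    t = (values - lo) / (hi - lo + 1e-9)          # normalize to [0, 1]
    rgb = np.zeros(values.shape + (3,))
    rgb[..., 0] = t                               # red grows with activation
    rgb[..., 2] = 1.0 - t                         # blue fades out
    return rgb

def section_plane(volume: np.ndarray, axis: int, offset: int) -> np.ndarray:
    """Return one axis-aligned slice of the volume."""
    return np.take(volume, offset, axis=axis)

# Synthetic stand-in for a PET activation volume (e.g., mu-opioid signal).
volume = np.random.default_rng(0).random((64, 64, 64))
slice_colors = blue_to_red(section_plane(volume, axis=2, offset=32))
print(slice_colors.shape)   # (64, 64, 3) -> ready to texture onto a cut plane
```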

Dr. Alex DaSilva’s associate, Hassan Jassar, demonstrates the real-time fNIRS-to-AR brain activation visualization, as seen through a HoloLens, as well as the tablet-based app for painting pain sensations on an image of a head. [Photo credit: Hour Detroit magazine, March 28, 2017. https://www.hourdetroit.com/health/virtual-reality-check/]

In another line of research, Dr. DaSilva employs functional near-infrared spectroscopy (fNIRS) to directly observe brain activity associated with pain in “real time”, as the patient experiences it.  As Wikipedia describes it: “Using fNIRS, brain activity is measured by using near-infrared light to estimate cortical hemodynamic activity which occur in response to neural activity.”  [https://en.wikipedia.org/wiki/Functional_near-infrared_spectroscopy]  The study participant wears an elastic skullcap fitted with dozens of fNIRS sensors wired to a control box, which digitizes the signal inputs and sends the numeric data to a personal computer running a MATLAB script.  From there, a two-part software development by the DMC enables neuroscientists to visualize the data in augmented reality (AR).  The first part is a MATLAB function that opens a Wi-Fi connection to a Microsoft HoloLens and streams the numeric data out to it.  The second part is a HoloLens app that receives that data stream and renders it as blobs of light that change hue and size to represent the ± polarity and intensity of each signal.  The translucent nature of HoloLens AR rendering allows the neuroscientist to overlay this real-time data visualization on the actual patient.  Being able to directly observe neural activity associated with pain may enable a more objective scale, versus asking a patient to verbally rate their pain, for example “on a scale of 1 to 5”.  Moreover, it may be especially helpful for diagnosing or empathizing with patients who are unable to express their sensations verbally at all, whether due to simple language barriers or due to other complicating factors such as autism, dementia, or stroke.
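The essential plumbing here is streaming an array of per-channel values to the headset several times a second. The sketch below illustrates that idea in Python rather than the DMC’s actual MATLAB function; the headset address, port, channel count, packet layout, and update rate are all assumptions.

```python
import socket
import struct
import time
import random

HOLOLENS_ADDR = ("192.168.1.50", 9000)   # hypothetical headset IP and port
N_CHANNELS = 24                          # hypothetical fNIRS channel count

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_fnirs_frame() -> list:
    """Stand-in for one frame of signed channel intensities (-1 .. +1)."""
    return [random.uniform(-1.0, 1.0) for _ in range(N_CHANNELS)]

for _ in range(100):                     # stream 100 frames for this demo
    frame = read_fnirs_frame()
    # One float per channel; sign -> blob hue, magnitude -> blob size on the AR side.
    packet = struct.pack(f"<{N_CHANNELS}f", *frame)
    sock.sendto(packet, HOLOLENS_ADDR)
    time.sleep(0.1)                      # ~10 Hz update, illustrative only

sock.close()
```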

Yet another DMC software development, the “PainTrek” mobile application, also initiated by Dr. DaSilva, allows patients to “paint their pain” on an image of a manikin head that can be rotated freely on the screen, offering a more convenient and intuitive reporting mechanism than filling out a standard questionnaire.

PainTrek app allows users to “paint” regions of the body experiencing pain to indicate and track pain intensity.

Architectural Lighting Scenarios Envisioned in the MIDEN

ARCH 535 & ARCH 545, Winter 2022

Mojtaba Navvab, Ted Hall


Prof. Mojtaba Navvab teaches environmental technology in the Taubman College of Architecture and Urban Planning, with particular interests in lighting and acoustics.  He is a regular user of the Duderstadt Center’s MIDEN (Michigan Immersive Digital Experience Nexus) – in teaching as well as sponsored research.

On April 7, 2022, he brought a combined class of ARCH 535 and ARCH 545 students to the MIDEN to see, and in some cases hear, their projects in full-scale virtual reality.

Recreating the sight and sound of the 18-story atrium space of the Hyatt Regency Louisville, where the Kentucky All State Choir gathers to sing the National Anthem.

ARCH 535: To understand environmental technology design techniques through case studies and compliance with building standards.  VR applications are used to view the design solutions.

ARCH 545: To apply the theory, principles, and lighting design techniques using a virtual reality laboratory.

“The objectives are to bring whatever you imagine to reality in a multimodal perception; in the MIDEN environment, whatever you create becomes a reality.  This aims toward simulation, visualization, and perception of light and sound in a virtual environment.”

Recreating and experiencing one of the artworks by James Turrell.

“Human visual perception is psychophysical because any attempt to understand it necessarily draws upon the disciplines of physics, physiology, and psychology.  A ‘Perceptionist’ is a person concerned with the total visual environment as interpreted in the human mind.”

“Imagine if you witnessed or viewed a concert hall or a choir performance in a cathedral.  You could describe the stimulus generated by the architectural space by considering each of the senses independently as a set of unimodal stimuli.  For example, your eyes would be stimulated with patterns of light energy bouncing off the simulated interior surfaces or luminous environment while you listen to an orchestra playing or choir singing with a correct auralized room acoustics.”

A few selected images photographed in the MIDEN are included in this article.  For the user wearing the stereoscopic glasses, the double images resolve into an immersive 3D visual experience that they can step into, with 270° of peripheral vision.

Students explore a daylight design solution for a library.

Revolutionizing 3D Rotational Angiograms with Microsoft HoloLens

A NEW WAY TO VISUALIZE THE HEART

Stephanie O’Malley


Just prior to the release of the Microsoft HoloLens 2, the Visualization Studio was approached by Dr. Arash Salavitabar of the U-M C.S. Mott Children’s Hospital with an innovative idea: using XR to improve the evaluation of patient scans produced by 3D rotational angiography. 

Rotational angiography is an X-ray-based medical imaging technique that allows clinicians to acquire CT-like 3D volumes during hybrid surgery or a catheter intervention. It is performed by injecting contrast into the pulmonary artery and then rapidly rotating a cardiac C-arm. Clinicians can then view the resulting data on a computer monitor, manipulating images of the patient’s vasculature to evaluate how a procedure should move forward and to help communicate the plan to the patient’s family.

With augmented reality devices like the Hololens 2, new possibilities for displaying and manipulating patient data have emerged, along with the potential for collaborative interactions with patient data among clinicians.

What if, instead of viewing a patient’s vasculature as a series of 2D images displayed on a computer monitor, you and your fellow doctors could view it more like a tangible 3D object placed on the table in front of you? What if you could share in the interaction with this 3D model — rotating and scaling the model, viewing cross sections, or taking measurements, to plan a procedure and explain it to the patient’s family?

This has now been made possible with a Faith’s Angels grant awarded to Dr. Salavitabar, intended to explore innovative ways of addressing congenital heart disease. The funding for this grant was generously provided by a family impacted by congenital heart disease, who unfortunately had lost a child to the disease at a very young age.

The Visualization Studio consulted with Dr. Salavitabar on essential features and priorities to realize his vision, using the latest version of the Visualization Studio’s Jugular software.

This video was spliced from two separate streams recorded concurrently from two collaborating HoloLens users. Each user has a view of the other, as well as their own individual perspectives of the shared holographic model.

JUGULAR

The angiography system in the Mott clinic produces digital surface models of the vasculature in STL format.

That format is typically used for 3D printing, but the process of queuing and printing a physical 3D model often takes at least several hours or even days, and the model is ultimately physical waste that must be properly disposed of after its brief use.

Jugular offers the alternative of viewing a virtual 3D model in devices such as the Microsoft HoloLens, loaded from the same STL format, with a lead time under an hour.  The time is determined mostly by how long the angiography software takes to produce the STL file.  Once the file is ready, it takes only minutes to upload and view on a HoloLens.  Jugular’s network module allows several HoloLens users to share a virtual scene over Wi-Fi.  The HoloLens provides a “spatial anchor” capability that ties hologram locations to a physical space.  Users can collaboratively view, walk around, and manipulate shared holograms relative to their shared physical space.  The holograms can be moved, scaled, sliced, and marked using hand gestures and voice commands.
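For context, binary STL is a simple format: an 80-byte header, a 4-byte triangle count, and then 50 bytes per triangle (a normal, three vertices, and an attribute word). The sketch below, which is not Jugular’s loader, reads such a file into NumPy arrays; the file name is hypothetical.

```python
import struct
import numpy as np

def read_binary_stl(path: str):
    """Return (normals, vertices) arrays parsed from a binary STL file."""
    with open(path, "rb") as f:
        f.read(80)                                   # header, usually ignored
        (count,) = struct.unpack("<I", f.read(4))    # number of triangles
        normals = np.empty((count, 3), dtype=np.float32)
        verts = np.empty((count, 3, 3), dtype=np.float32)
        for i in range(count):
            # 12 little-endian floats (normal + 3 vertices) + 2-byte attribute
            data = struct.unpack("<12fH", f.read(50))
            normals[i] = data[0:3]
            verts[i] = np.array(data[3:12]).reshape(3, 3)
    return normals, verts

# Hypothetical usage with an exported vasculature model:
# normals, verts = read_binary_stl("vasculature.stl")
# print(verts.shape)   # (n_triangles, 3 vertices, xyz)
```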

This innovation is not confined to medical purposes.  Jugular is a general-purpose extended-reality program with applications in a broad range of fields.  The developers analyze specific project requirements in terms of general XR capabilities.  Project-specific requirements are usually met through easily-editable configuration files rather than “hard coding.”

Robots Who Goof: Can We Trust Them?

EVERYONE MAKES MISTAKES

The human-like, android robot used in the virtual experimental task of handling boxes.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that certain approaches directed at human co-workers are better than others and often are impacted by how the robots look.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For their study, published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).

The mechanical arm robot used in the virtual experiment.

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic or more humanlike, the other more mechanical in appearance.

Sara Eskandari and Stephanie O’Malley of the Emerging Technology Group at U-M’s James and Anne Duderstadt Center helped develop the experimental virtual platform.

The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).
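As a small illustrative sketch (not the study’s actual experiment code), the snippet below scripts a robot that picks mostly correct boxes, makes a few intentional errors, and issues the repair statement for its assigned condition. The box counts are hypothetical; the statements are quoted from the study conditions above.

```python
import random

# Repair statements quoted from the four experimental conditions.
REPAIR_STATEMENTS = {
    "apology":     "I'm sorry I got the wrong box",
    "denial":      "I picked the correct box so something else must have gone wrong",
    "explanation": "I see that was the wrong serial number",
    "promise":     "I'll do better next time and get the right box",
}

def pick_boxes(n_target: int = 10, n_errors: int = 3) -> list:
    """Scripted trial: mostly correct picks plus a few intentional errors."""
    picks = ["correct"] * n_target
    for slot in random.sample(range(n_target), n_errors):
        picks[slot] = "wrong"
    return picks

def robot_response(pick: str, condition: str):
    """After a wrong pick, the robot issues its condition's repair statement."""
    return REPAIR_STATEMENTS[condition] if pick == "wrong" else None

for pick in pick_boxes():
    msg = robot_response(pick, condition="explanation")
    if msg:
        print("Robot:", msg)
```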

Previous studies have examined apologies, denials and promises as factors in trust or trustworthiness but this is the first to look at explanations as a repair strategy, and it had the highest impact on integrity, regardless of the robot’s appearance.

When the robot was more humanlike, trust was even easier to restore for integrity when explanations were given and for benevolence when apologies, denials and explanations were offered.

As in the previous research, apologies from robots produced higher integrity and benevolence than denials. Promises outpaced apologies and denials when it came to measures of benevolence and integrity.

Esterwood said this study is ongoing with more research ahead involving other combinations of trust repairs in different contexts, with other violations.

“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” Esterwood said. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

This originally appeared on Michigan News.

Using Mobile VR to Assess Claustrophobia During an MRI

new methods for exposure therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Recognizing that patients undergoing MRI scans often experience claustrophobia, they wanted to use VR simulations to introduce prospective patients to what being inside an MRI machine might feel like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While 360° videos exist that convey what an MRI might look like, they fail to address the major factor contributing to claustrophobia: the perceived confined space within the bore. 360° videos tend to skew the environment, making it seem farther away than it would be in reality, and thereby fail to induce the feelings of claustrophobia the actual bore would produce. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

To simulate the experience of an MRI accurately, a CGI MRI machine was constructed and imported into the Unity game engine. A customizable avatar representing the viewer’s body was also added to give viewers a sense of self. Wearing a VR headset, the viewer sees their avatar body and the true proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said throughout a real MRI, with the intimidating boom of the machine sounding as the simulated scan proceeds.
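As a rough sketch of how such a session might be sequenced (the actual app is built in Unity; the cue names and timings here are hypothetical), the snippet below plays back a timeline of verbal instructions and scanner-noise events against elapsed simulation time.

```python
import time

# Hypothetical cue schedule: (seconds from start, cue description)
SCAN_TIMELINE = [
    (0,  "instruction: please hold still, the table will now move"),
    (10, "table: begin sliding the patient into the bore"),
    (25, "instruction: the scan is starting, you will hear loud knocking"),
    (30, "audio: scanner boom loop on"),
    (90, "audio: scanner boom loop off"),
    (95, "instruction: the scan is complete, the table will move out"),
]

def run_simulation(timeline, speedup: float = 10.0):
    """Play back cues in order, time-compressed by `speedup` for this demo."""
    start = time.time()
    for at, cue in timeline:
        while (time.time() - start) * speedup < at:
            time.sleep(0.01)
        print(f"[{at:3d}s] {cue}")

run_simulation(SCAN_TIMELINE)
```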

Two modes are provided within the app, feet-first and head-first, to accommodate the most common scanning procedures that have been shown to induce claustrophobia.

To make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Learning Jaw Surgery with Virtual Reality

Jaw surgery can be complex, and many factors contribute to how a procedure is done. From routine corrective surgery to reconstructive surgery, the traditional means of teaching these scenarios has gone unchanged for years. In an age of ubiquitous computing and growing interest in virtual reality, students still find themselves moving paper cut-outs of their patients around on a tabletop to explore different surgical methods.

Dr. Hera Kim-Berman was inspired to change this. Working with the Duderstadt Center’s 3D artist and programmers, she created a more immersive and comprehensive learning experience. Kim-Berman provided the Duderstadt Center with patient DICOM data sets, consisting of series of two-dimensional MRI images, which were converted into 3D models and segmented just as they would be during a surgical procedure. The segments were then joined to a model of the patient’s skin, allowing movement of the various bones to drive real-time changes to the person’s facial structure, now visible from any angle.
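A common way to go from a stack of two-dimensional DICOM slices to a surface mesh is to assemble the slices into a voxel volume and extract an isosurface with marching cubes. The sketch below, using pydicom and scikit-image, shows that general idea; it is not necessarily the Duderstadt Center’s pipeline, and the folder path and intensity threshold are assumptions.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

def dicom_folder_to_mesh(folder: str, threshold: float = 300.0):
    """Stack DICOM slices into a volume and extract an isosurface mesh."""
    slices = [pydicom.dcmread(p) for p in sorted(glob.glob(f"{folder}/*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))  # order by z
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])

    # Marching cubes at an illustrative intensity threshold.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold)
    return verts, faces, normals

# Hypothetical usage:
# verts, faces, normals = dicom_folder_to_mesh("patient_scan/")
# print(len(faces), "triangles ready for segmentation and engine import")
```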

This was done for several common practice scenarios (such as correcting an extreme overbite or underbite, or a jaw misalignment), which were then brought into the Oculus Rift, where hand-tracking controls allow students to “grab” the bones and adjust them in 3D.

Before re-positioning the jaw segments, the jaw has a shallow profile.

After re-positioning of the jaw segments, the jaw is more pronounced.

As a result, students can gain a more thorough understanding of the spatial movement of the bones, and more complex scenarios, such as extensive reconstructive surgery, can be practiced well in advance of seeing a patient for a scheduled surgery.