Multi-Sensing the Universe

Envisioning a Toroidal Universe

Robert Alexander teamed with Danielle Battaglia, a senior in Art & Design, to compose and integrate audio effects into her conceptual formal model of the Toroidal Universe. Danielle combined Plato’s notion of the universe as a dodecahedron with modern notions of black holes, wormholes, and child universes. Their multi-sensory multiverse came together in the MIDEN and was exhibited there as part of the Art & Design senior integrative art exhibition.

Interested in using the MIDEN to do something similar? Contact us.

Revolutionizing 3D Rotational Angiograms with Microsoft Hololens

Angiography with HoloLens augmented reality

A NEW WAY TO VISUALIZE THE HEART

Stephanie O’Malley


Just prior to the release of the Microsoft HoloLens 2, the Visualization Studio was approached by Dr. Arash Salavitabar of U-M’s C.S. Mott Children’s Hospital with an innovative idea: to use XR to improve the evaluation of patient scans produced by 3D rotational angiography.

Rotational angiography is an X-ray-based medical imaging technique that allows clinicians to acquire CT-like 3D volumes during hybrid surgery or during a catheter intervention. The technique is performed by injecting contrast into the pulmonary artery and then rapidly rotating a cardiac C-arm. Clinicians can then view the resulting data on a computer monitor, manipulating images of the patient’s vasculature. This is used to evaluate how a procedure should move forward and to aid in communicating that plan to the patient’s family.

With augmented reality devices like the HoloLens 2, new possibilities for displaying and manipulating patient data have emerged, along with the potential for collaborative interactions with patient data among clinicians.

What if, instead of viewing a patient’s vasculature as a series of 2D images displayed on a computer monitor, you and your fellow doctors could view it more like a tangible 3D object placed on the table in front of you? What if you could share in the interaction with this 3D model — rotating and scaling the model, viewing cross sections, or taking measurements, to plan a procedure and explain it to the patient’s family?

This has now been made possible with a Faith’s Angels grant awarded to Dr. Salavitabar, intended to explore innovative ways of addressing congenital heart disease. The funding for this grant was generously provided by a family impacted by congenital heart disease, who unfortunately had lost a child to the disease at a very young age.

The Visualization Studio consulted with Dr. Salavitabar on essential features and priorities to realize his vision, using the latest version of the Visualization Studio’s Jugular software.

This video was spliced from two separate streams recorded concurrently from two collaborating HoloLens users. Each user has a view of the other, as well as their own individual perspectives of the shared holographic model.

JUGULAR

The angiography system in the Mott clinic produces digital surface models of the vasculature in STL format.

That format is typically used for 3D printing, but the process of queuing and printing a physical 3D model often takes several hours or even days, and the model is ultimately physical waste that must be properly disposed of after its brief use.

Jugular offers the alternative of viewing a virtual 3D model, loaded from the same STL format, in devices such as the Microsoft HoloLens, with a lead time under an hour. Most of that time is spent by the angiography software producing the STL file; once the file is ready, it takes only minutes to upload and view on a HoloLens. Jugular’s network module allows several HoloLens users to share a virtual scene over Wi-Fi. The HoloLens provides a “spatial anchor” capability that ties hologram locations to a physical space, so users can collaboratively view, walk around, and manipulate shared holograms relative to their shared physical space. The holograms can be moved, scaled, sliced, and marked using hand gestures and voice commands.
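
For readers curious what such an STL file actually contains, the sketch below is not Jugular’s own loader; it is a minimal illustration, assuming the common binary STL variant and a hypothetical file name, of the same triangle mesh that both a 3D printer and a headset viewer ultimately consume: a flat list of triangles, each with a normal and three vertices.

# Minimal sketch: read a binary STL surface model (hypothetical file name).
# Binary STL layout: an 80-byte header, a uint32 triangle count, then, per
# triangle, 12 float32 values (normal + 3 vertices) and a 2-byte attribute.
import struct

def read_binary_stl(path):
    triangles = []
    with open(path, "rb") as f:
        f.read(80)                                  # header (ignored)
        (count,) = struct.unpack("<I", f.read(4))   # number of triangles
        for _ in range(count):
            data = struct.unpack("<12fH", f.read(50))
            normal = data[0:3]
            vertices = (data[3:6], data[6:9], data[9:12])
            triangles.append((normal, vertices))
    return triangles

if __name__ == "__main__":
    tris = read_binary_stl("vasculature.stl")       # hypothetical scan export
    print(f"Loaded {len(tris)} triangles")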

This innovation is not confined to medical purposes. Jugular is a general-purpose extended-reality program with applications in a broad range of fields. The developers analyze specific project requirements in terms of general XR capabilities, and project-specific requirements are usually met through easily editable configuration files rather than “hard coding.”

Robots Who Goof: Can We Trust Them?

Robotics in Unreal Engine

EVERYONE MAKES MISTAKES

The humanlike android robot used in the virtual experimental task of handling boxes.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that certain approaches directed at human co-workers are better than others and often are impacted by how the robots look.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For their study, published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).

The mechanical arm robot used in the virtual experiment.

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic, or more humanlike; the other was more mechanical in appearance.

Sara Eskandari and Stephanie O’Malley of the Emerging Technology Group at U-M’s James and Anne Duderstadt Center helped develop the experimental virtual platform.

The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).

Previous studies have examined apologies, denials and promises as factors in trust or trustworthiness, but this is the first to look at explanations as a repair strategy, and it had the highest impact on integrity, regardless of the robot’s appearance.

When the robot was more humanlike, trust was even easier to restore for integrity when explanations were given and for benevolence when apologies, denials and explanations were offered.

As in the previous research, apologies from robots produced higher integrity and benevolence than denials. Promises outpaced apologies and denials when it came to measures of benevolence and integrity.

Esterwood said this study is ongoing with more research ahead involving other combinations of trust repairs in different contexts, with other violations.

“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” Esterwood said. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

This originally appeared on Michigan News.

Behind the Scenes: Re-creating Citizen Kane in VR

inside a classic

Stephanie O’Malley


Students in Matthew Solomon’s classes are used to critically analyzing film. Now they get the chance to be the director for arguably one of the most influential films ever produced: Citizen Kane.

Using an application developed at the Duderstadt Center with grant funding provided by LSA Technology Services, students are placed in the role of the film’s director and can record a prominent scene from the movie using a virtual camera. The film set, which no longer exists, has been meticulously re-created in black-and-white CGI using reference photographs from the original set, with a CGI Orson Welles acting out the scene on repeat. His actions were performed by motion capture actor Matthew Henerson, carefully chosen for his likeness to Orson Welles, and the Orson avatar was generated from a photogrammetry scan of Matthew.

Top-down view of the CGI re-creation of the film set for Citizen Kane

To determine a best estimate for the scale of the set when 3D modeling, the original film footage was analyzed: doorways were measured, actor heights compared, and footsteps counted. With feedback from Citizen Kane expert Harlan Lebo, fine details, down to the topics of the books on the bookshelves, could be determined.

Archival photograph provided by Vincent Longo of the original film set

Motion capture actor Matthew Henerson was flown in to play the role of the digital Orson Welles. In a carefully choreographed session directed by Matthew Solomon’s Ph.D. student Vincent Longo, the iconic scene from Citizen Kane was re-enacted while the original footage played on an 80″ TV in the background, ensuring every step aligned perfectly with the original.

Actor Matthew Henerson in full mocap attire amidst the makeshift set for Citizen Kane – Props constructed using PVC. Photo provided by Shawn Jackson.

The boundaries of the set were taped on the floor so the data could be aligned to the digitally re-created set. Eight Vicon motion capture cameras, the same kind used throughout Hollywood for films like Lord of the Rings and Planet of the Apes, formed a circle around the makeshift set. These cameras track the actor’s motion using infrared light reflected off tiny markers affixed to the motion capture suit. Props used during the recording were carefully constructed out of cardboard and PVC (later to be 3D modeled) so as not to obstruct his movements. The three minutes of footage being re-created took three days to capture, comprising over 100 individual mocap takes and several hours of footage, which were then compared for accuracy and stitched together to cover the full route Orson travels through the environment.

Matthew Henerson
Orson Welles

Matthew Henerson then swapped his motion capture suit for an actual suit, similar to that worn by Orson in the film, and underwent 3D scanning using the Duderstadt Center’s photogrammetry resources.

Actor Matthew Henerson wears asymmetrical markers to assist the scanning process

Photogrammetry is a method of scanning existing objects or people, commonly used in Hollywood and throughout the video game industry to create CGI likenesses of famous actors. The technology has been used in films like Star Wars (an actress similar in appearance to Carrie Fisher was scanned and then further sculpted to create a more youthful Princess Leia), and entire studios are now devoted to photogrammetry scanning. The process relies on several digital cameras surrounding the subject and taking simultaneous photographs.

Matthew Henerson being processed for Photogrammetry

The photos are submitted to software that analyzes them on a per-pixel basis, looking for similar features across multiple photos. When a feature is recognized, it is triangulated using the focal length of the camera and its position relative to other identified features, allowing millions of tracking points to be generated. From these an accurate 3D model can be produced, with the original digital photos mapped to its surface to preserve photo-realistic color. These models can be further manipulated: sometimes they are sculpted by an artist, or, with the addition of a digital “skeleton,” they can be driven by motion data to become a fully articulated digital character.
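
The specific photogrammetry software used at the studio is not named above, but the core idea it implements can be illustrated with a short sketch. The example below uses OpenCV to match features between just two photos and triangulate them into 3D points; it assumes the two cameras’ 3×4 projection matrices (P1, P2) are already known, whereas a full pipeline also estimates those and works across dozens of photos at once.

# Minimal sketch of the photogrammetry principle: match features across two
# photos, then triangulate the matches into 3D points. Assumes calibrated
# cameras whose projection matrices P1 and P2 (3x4 numpy arrays) are known.
import cv2
import numpy as np

def sparse_points_from_pair(img_path1, img_path2, P1, P2):
    img1 = cv2.imread(img_path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path2, cv2.IMREAD_GRAYSCALE)

    # Detect and describe distinctive features in each photo.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match features between the photos, keeping only confident matches.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).T   # 2 x N pixels
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).T

    # Triangulate each matched feature into a 3D point (homogeneous coords).
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T                            # N x 3 points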

The 3D-modeled scene and scanned actor model were joined with mocap data and brought into the Unity game engine to develop the functionality students would need to film within the 3D set. A virtual camera was developed with all of the same settings you would find on a film camera from that era. When viewed in a virtual reality headset like the Oculus Rift, Matthew Solomon’s students can pick up the camera and physically move around to position it at different locations in the CGI environment, often capturing shots that would be difficult to achieve on a conventional film set. The footage students film within the app can be exported as MP4 video and then edited in their editing software of choice, just like any other camera footage.
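
The Unity implementation of that virtual camera is not reproduced here, but one relationship any such camera relies on can be shown directly: a film camera’s focal length and film-gate size determine the field of view a game-engine camera must use. The sketch below is illustrative only; the lens value is an example and the gate size is the approximate 35 mm Academy aperture of Citizen Kane’s era, not a list of the settings actually exposed in the app.

# Sketch: convert film-camera settings (focal length and film-gate size) into
# the field-of-view angle a game-engine camera expects.
import math

def field_of_view_deg(focal_length_mm, gate_size_mm):
    """Angle of view for a given focal length and film-gate dimension."""
    return math.degrees(2.0 * math.atan(gate_size_mm / (2.0 * focal_length_mm)))

# Example: a 25 mm lens on a 35 mm Academy gate (~21.95 mm x 16.00 mm).
horizontal_fov = field_of_view_deg(25.0, 21.95)
vertical_fov = field_of_view_deg(25.0, 16.00)
print(f"Horizontal FOV: {horizontal_fov:.1f} deg, vertical FOV: {vertical_fov:.1f} deg")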

After Matthew Solomon used the application in his course in the winter of 2020, the project with the Duderstadt Center was on display as part of iLRN’s 2020 Immersive Learning Project Showcase & Competition. With COVID-19 making the conference a remote experience, conference attendees were able to experience the Citizen Kane project in virtual reality using the FrameVR platform. Attendees from around the world learned about the project, which highlights innovative ways of teaching with VR technologies, and watched student edits made using the application.

Citizen Kane on display for iLRN’s 2020 Immersive Learning Project Showcase & Competition using Frame VR

Novels in VR – Experiencing Uncle Tom’s Cabin

A Unique Perspective

Stephanie O’Malley


This past semester, English Professor Sara Blair taught a course at the University titled “The Novel and Virtual Realities.” The purpose of the course was to expose students to different methods of analyzing novels and to ways of understanding them from different perspectives by utilizing platforms like VR and AR.

Designed as a hybrid course, her class was split between a traditional classroom environment and an XR lab, providing a comparison between traditional learning methods and more hands-on, experiential lessons through the use of immersive, interactive VR and AR simulations.

As part of her class curriculum, students were exposed to a variety of experiential XR content. Using the Visualization Studio’s Oculus Rifts, her class was able to view Dr. Courtney Cogburn’s “1000 Cut Journey” installation – a VR experience that puts viewers in the shoes of a Black American man, allowing them to see firsthand how racism affects every facet of his life. They also had the opportunity to view Asad J. Malik’s “Terminal 3” using augmented reality devices like the Microsoft HoloLens. Students engaging with Terminal 3 see how Muslim identities in the U.S. are approached through the lens of an airport interrogation.

Wanting to create a similar experience for her students at the University of Michigan, Sara approached the Duderstadt Center about the possibility of turning another novel into a VR experience: Uncle Tom’s Cabin.

She wanted her students to understand the novel from the perspective of one of its lead characters, Eliza, during the pivotal moment when, as a slave, she tries to escape her captors and reach freedom. But she also wanted to give her students the perspective of the slave owner and other slaves tasked with her pursuit, as well as the perspective of an innocent bystander watching the scene unfold.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin

Using Unreal Engine, the Duderstadt Center was able to make this a reality. An expansive winter environment was created based on imagery detailed in the novel, and CGI characters for Eliza and her captors were produced and then paired with motion capture data to drive their movements. When students put on the Oculus Rift headset, they can choose to experience the moment of escape through Eliza’s perspective, her captors’, or a bystander’s. To better evaluate what components contributed to students’ feelings during the simulation, versions of these scenarios were provided with and without sound. With sound enabled as Eliza, you hear footsteps in the snow gaining on you, the crack of the ice beneath your feet as you leap across a tumultuous river, and the barking of a vicious dog at your heels – all adding to the tension of the moment. While viewers are able to freely look around the environment, they are passive observers: they have no control over the choices Eliza makes or where she can go.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin – Freedom for Eliza lies on the other side of the frozen Ohio river.

The scene ends with Eliza reaching freedom on the opposite side of the Ohio River and leaving her pursuers behind. What followed the students’ experience with the VR version of the novel was a deep class discussion on how the scene felt in VR versus how it felt reading the same passage in the book. Some students wondered what it might feel like to control the situation and decide where Eliza goes, or, as a bystander, to move freely through the environment as the scene plays out, deciding which party (Eliza or her pursuers) was of most interest to follow in that moment.

While Sara’s class has concluded for the semester, you can still try this experience for yourself – Uncle Tom’s Cabin is available to demo on all Visualization Studio workstations equipped with an Oculus Rift.

Using Mobile VR to Assess Claustrophobia During an MRI

new methods for exposure therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Recognizing that many patients undergoing MRI scans experience claustrophobia, they wanted to use VR simulations to introduce prospective patients to what being inside an MRI machine might feel like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While there are 360 videos that convey what an MRI might look like, these fail to address the major factor contributing to claustrophobia: the perceived confined space within the bore. 360 videos tend to skew the environment, making the bore seem farther away than it would be in reality and thereby failing to induce the feelings of claustrophobia a real MRI bore would produce. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

In order to simulate the experience of an MRI accurately, a CGI MRI machine was constructed and brought into the Unity game engine. A customizable avatar representing the viewer’s body was also added to give viewers a sense of self. When a VR headset is worn, the viewer’s perspective allows them to see their avatar body and the real proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said throughout the course of a real MRI, with the intimidating boom of the machine occurring as the simulated scan proceeds.

Two modes are provided within the app, feet first or head first, to accommodate the most common scanning procedures that have been shown to induce claustrophobia.

In order to make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Students Learn 3D Modeling for Virtual Reality

making tiny worlds

Stephanie O’Malley


ArtDes240 is a course offered by the Stamps School of Art & Design and taught by Stephanie O’Malley that teaches students 3D modeling and animation. As one of only a few 3D digital classes offered at the University of Michigan, AD240 sees student interest from several schools across campus, with students looking to gain a better understanding of 3D art as it pertains to the video game industry.

The students in AD240 are given a crash course in 3D modeling in 3D Studio Max and level creation within the Unreal Editor. It is within Unreal that all of their objects are positioned, terrain is sculpted, and atmospheric effects such as time of day, weather, or fog are added.

“Candyland” – Elise Haadsma & Heidi Liu, developed using 3D Studio Max and Unreal Engine

With just five weeks to model an entire environment, bring it into Unreal, package it as an executable, and test it in the MIDEN (or on the Oculus Rift), the students produced truly impressive projects. Art & Design students Elise Haadsma and Heidi Liu took inspiration from the classic board game “Candyland” to create a life-size game board environment in Unreal consisting of a lollipop forest, mountains of Hershey’s Kisses, and even a gingerbread house and chocolate river.

Lindsay Balaka, from the School of Music, Theatre & Dance, chose to create her scene using the Duderstadt Center’s in-house rendering software “Jugular” instead of Unreal Engine. Her creation, “Galaxy Cakes,” is a highly stylized cupcake shop (reminiscent of an episode of the 1960s cartoon The Jetsons), complete with spatial audio emanating from the corner jukebox.

Lindsay Balaka’s “Galaxy Cakes” environment
An abandoned school, created by Vicki Liu in 3D Studio Max and Unreal Engine

Vicki Liu, also of Art & Design, created a realistic horror scene using Unreal. After navigating down a poorly lit hallway of an abandoned nursery school, you find yourself in a run-down classroom inhabited by some kind of madman. A tally of days passed has been scratched into the walls, an eerie message is scrawled on the chalkboard, and furniture haphazardly barricades the windows.

While the goal of the final project was to create a traversable environment for virtual reality, some students took it a step further.

Art & Design student Gus Schissler created an environment composed of neurons in Unreal, intended for viewing within the Oculus Rift. He then integrated data from an Epoch headset (a device capable of reading brain waves) to allow the viewer to interact with the environment by thought. The viewer’s mood, when picked up by the Epoch, not only changed the way the environment looked by adjusting the intensities of the light emitted by the neurons, but also allowed the viewer to think specific commands (push, pull, etc.) in order to navigate past various obstacles in the environment.
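
Gus’s actual integration code is not shown here, but the interaction described above can be sketched in a purely hypothetical form: a normalized mood value drives the neurons’ light intensity, and trained “mental command” scores above a threshold trigger navigation. The function names, value ranges, and thresholds below are invented for illustration; the headset’s real API is not represented.

# Hypothetical sketch of the mapping described above. read_mood() and the
# command scores would come from whatever the headset's SDK provides; here
# they are just placeholder values.

def neuron_light_intensity(mood, min_lumens=50.0, max_lumens=800.0):
    """Map a mood/arousal value in [0, 1] onto a light-intensity range."""
    mood = max(0.0, min(1.0, mood))          # clamp defensively
    return min_lumens + mood * (max_lumens - min_lumens)

def navigation_command(push_score, pull_score, threshold=0.7):
    """Return the mental command to act on, if either score clears the bar."""
    if push_score >= threshold and push_score >= pull_score:
        return "push"
    if pull_score >= threshold:
        return "pull"
    return None

# Example frame of the interaction loop (values are placeholders):
intensity = neuron_light_intensity(mood=0.4)                    # dimmer scene when calm
command = navigation_command(push_score=0.82, pull_score=0.1)   # -> "push"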

Students spend the last two weeks of the semester scheduling time with Ted Hall and Sean Petty to test their scenes and ensure everything runs and looks correct on the day of their presentations. This was a class that not only introduced students to the design process but also allowed them to get hands-on experience with emerging technologies as virtual reality continues to expand in the game and film industries.

Student Gus Schissler demonstrates his neuron environment for the Oculus Rift, which uses input from an Epoch headset for interaction.

Passion & Violence

Anna Galeotti’s MIDEN Installation

Ph.D. Fulbright Scholar Anna Galeotti (Winter 2014) explored the concept of “foam” or “bubbles” as a possible model for audiovisual design elements and their relationships. Her art installation, “Passion and Violence in Brazil,” was displayed in the Duderstadt Center’s MIDEN.

Interested in using the MIDEN to do something similar? Contact us.