Delving into the art (instead of science) of anatomy

New XR Course for Fall 2024

The first thing the students saw was the bones.

There were more than a hundred of them, stacked neatly in plastic bins on a long table in the front of classroom 2060. Some were long and slender, others bulbous and asymmetrical. All had the same glossy sheen.

From far away, they resembled delicate china figurines. Up close, it was easier to tell that they were 3D-printed versions of the same bones you’d find in a human pelvis or mouth or arm.

The students in the “Art of Anatomy” mini-course rummaged through the bone-like objects, serious expressions on their faces as they deliberated which to choose.

Their assignment that day was to create a sculptural arrangement. It could be anatomically correct; it could resemble nothing that you’d typically find in a skeleton. Then they had to take a picture of their designs and draw the shadows they’d made, using graphite pencils or charcoal.

The activity was intended to explore the body from a different perspective, to discover the angles and shapes its parts could make, and to ask the question that lay at the core of every session in this course: How does interacting with models of our anatomy, which try to approximate the experience of real human bodies, compare to encountering the real thing?

The next hour or so was nearly silent, except for the clunking of the tiny bones and the scratching of the pencils. Maya Moufawad, a pre-dental and art major, had chosen two halves of a jaw, complete with teeth. She fit them together and then affixed them to two smaller bones, making it look like the Flintstones had gotten their hands on a dental mold and decided to display it as art, using bones as the frame.

Movement science student Abby Kramer went with a thoracic vertebra, a lumbar vertebra, and a sacral bone. She liked being able to hold the bones, to turn them around and flip them upside down to better understand their structure and proportion.

She connected the lumbar vertebral bone directly with the sacrum, which would have been in the appropriate location anatomically. But, as she noted later: “There were still a lot of unknowns.” You couldn’t fully understand the body by looking at these bones. They were, both literally and figuratively, missing connective tissue.

“We’re trying to get them to understand that even the most factual anatomical model is still a fiction,” says Jennifer Gear, an art history and movement science lecturer who co-designed and taught the course. “It’s still removed from the body. In what ways and for what reasons? How do you stop thinking about these things as objective truths — but rather, to see them as believable fictions?”

***

When movement science student Regan Lee walked into the Capuchin Crypt in Rome, she, too, was fascinated by the bones.

In this case, they were real human bones from deceased Catholic friars, used to adorn a mausoleum that is like few others on Earth. The crypt is literally decorated with human remains — skulls framing archways, tibias and femurs arranged in elaborate crosses and mandalas on the walls and ceilings.

At the time, Lee was on a day trip during movement science associate professor Melissa Gross’ class, “Art and Anatomy in the Italian Renaissance,” for which students travel to Italy and use the classical statues and paintings of the Renaissance era as a guide to learning about anatomical structures.

As she walked away from the unique crypt, Lee was “nerding out.”

“I think everyone should see this,” she told Gross.

Gross had a different idea.

“What if we made this a class?” she mused. “Let’s have students make their own art with the bones they’re used to looking at. We could 3D-print bones that the students could think critically about.”

“That’s crazy,” Lee responded. “Are you serious?”

***

Gross was indeed. She’d 3D-printed a small number of bones for previous anatomy courses, so she knew it could be done. And she’d spent her career creating innovative interdisciplinary courses designed to engage students and help them learn in ways that worked better for them.

Together with Gear, she’d applied that paradigm of thinking to create the “Art and Anatomy in the Italian Renaissance” course Lee was so enjoying. She thought she and Gear could build off that successful partnership and come up with a class that challenged students to revisit their preconceptions about both art and the body.

The pair started brainstorming. They wanted to teach a projects-based class — one with no tests, plenty of guest speakers, and lots of hands-on activities. They wanted to take students to different locations: the Hatcher Library’s Special Collections Research Center to look at Renaissance-era anatomy books, the Taubman Health Sciences Library to examine digital cadavers via the interactive Anatomage table, the Visualization Studio at the James and Anne Duderstadt Center on North Campus to play around with bones in virtual and augmented reality.

“I think of the classroom as a sandbox,” Gear says, “and I’m going to bring my best toys. Because I’ve got to be there with the students every day, too, and I don’t want to be bored. So I try to think about what would be fun to do, and this was a class that could lend itself to fun things.”

They wanted to ground the course in an arts-based approach, using critical thinking to respectfully challenge assumptions and foster dialogue that valued different perspectives. To do so, they planned to advertise in different schools on campus to attract students with varying backgrounds.

“Our goal was to open the students’ minds to other ways of seeing, of moving, of experiencing,” Gross says.

***

Coincidentally, the U-M Arts Initiative was looking for proposals for its Arts & Curriculum grant, which promotes the integration of arts into course development and teaching. In November 2022, the initiative gave its approval — and $19,611 worth of funds — to support Gross and Gear’s seven-week-long mini-course.

The pair used some of the grant money to pay Lee, who began the arduous task of printing the bones. Even the smallest ones took hours, and the printers often malfunctioned. Lee stuffed the ones that failed to print in her bag, and they clanked around as she walked.

“Even my apartment had bones everywhere,” Lee says.

Eventually, most of the bones made their way to SKB’s classroom 2060, as did 20 students — some from Kinesiology, some from Engineering, some from the Stamps School of Art and Design.

The students drew the bones, sometimes asking those who specialized in art to help the others portray the structures accurately.

Maya Moufawad drawing her 3D-printed bone sculpture

They paged through 16th-century books full of woodcut illustrations of bodies and bones, their faces full of wonder at the opportunity.

Ariana Ravitz looking at ancient anatomy books

They manipulated a digital cadaver on the Anatomage table, working as a group to make decisions about which bones and muscles and tissues to look at first and how to explore them. In that case, the Kines and biomedical engineering students often took the lead in explaining the names of the bones and where they were located.

They dissected five real animal carcasses and bones that Gross had gotten from generous butchers at Plum Market; one student, who disliked the smell of meat, was able to overcome her discomfort enough to participate with the support of her fellow classmates.

They talked about the ethics of using bones and bodies for research or education. In their reflection for that class session, students discussed whether they would donate their bodies to science given what they’d learned, noting that it was rare for them to feel this comfortable talking about such a difficult topic in class.

It began to feel like a kind of alchemy was taking place on Fridays from noon to 2 p.m.

“Every single class, I found myself being encouraged to think deeper, within my own knowledge and with the help of my peers,” one student wrote in a reflection. “The class’ emphasis on helping each other to understand is something I value so much. In fact, these discussions were so interesting to me that I always called my mom about them afterwards, because I was so excited to continue the conversation.”

“To see them bring their authentic selves to the challenges we’re setting every week, for them to treat it so seriously,” Gross says, “it feels like we’ve touched something important.”

***

On the final day of class, the students had one last opportunity to see the bones in a new way.

The Emerging Technologies Group at the Duderstadt Center had taken the digital files used to 3D print the bones and uploaded them to their visualization platforms, including virtual and augmented reality set-ups.

Movement science student Gordon Luo held a controller in one hand, using his index finger to press a button that grabbed the bone on his computer screen and moved it around. Then he found a way to digitally measure the bone.

“That’s so cool,” he says.

He was so immersed in the experience that he nearly tripped over the desk, more attuned to the virtual world of the bones than to his physical surroundings.

“It’s cool to realize this is where we’re at with technology,” he says.

Art student Summer Pengelly and biomedical engineering student Angel Rose Sajan were wearing HoloLens headsets that projected the bones hologram-style onto their surroundings.

“We’re building an elephant,” Pengelly tells me. “Or placing the bones so they’re shaped like an elephant head. I wish I could take a photo so I could show you. Oh, I just did.”

The photo was still contained in the software, so Pengelly picked up a piece of paper and started drawing the arrangement they’d made.


She and Sajan both agreed that they liked the HoloLens better than the VR headsets.

“It’s easier to manipulate the bones,” Pengelly says. “Using your hands as controllers gives you more access.”

“I kept turning the controller to figure out how to hold it,” Sajan says.

In the back of the visualization studio lay yet another digital environment to explore. Called the MIDEN, for Michigan Immersive Digital Experience Nexus, it projects images onto the walls and floor of a room. Users wear headsets that place them within the created environment and give them tools to manipulate the objects in the space. In this case, students were able to slice a cadaver into different planes.

Cece Crowther and another student explore the MIDEN in the Duderstadt Center.

“MIDEN might be my favorite [of the technologies],” says Cece Crowther, a biomedical engineering student. “The Anatomage Table had the same energy as medical school. This felt more artistic.”

“But I could call three different [sessions] my favorite in this class,” she says. “Every class has been unique.”

***

A cake with an artistic pattern made from repeating bone patterns

When the mixed reality class wound down, everyone gathered to eat celebratory cake. The top of the cake had an artistic design, made by creating a repeating pattern of one of the bone sculptures a student had designed early in the course.

“We’ve touched, looked at, manipulated, and drawn bones,” Gross says to the group. “Now we’re eating bones to wrap it all up.”

As students ate their cake, they reflected on the course, sharing feedback like, “I will not stop recommending this class to people” and “I made my schedule around this class.” Several mentioned that they’d gained so much from working alongside folks with different backgrounds.

“I appreciate this class so much because it normalizes the idea of art and science working together,” Moufawad, the art and pre-dental major, tells me. “Whenever I tell people what I’m studying, they always think it’s random, but it’s really not. There’s so much at the intersection of these two topics, and I love that this class celebrates that.”

A few weeks later, after the students have written their final reflections, I meet Gross in her first-floor office. She’s giddy over the success of the course. Her eyes light up and her tone becomes reverential as she talks about what she and Gear, with the help of some committed students, have managed to achieve.

“This experience we spent so many hours designing and thinking about, it actually worked,” Gross says. “Some important vein got exposed, and we’re not sure what’s flowing. It’ll take some time to unpack what was so empowering for so many students, but it’s a big fulfillment for us as teachers.”

“Delight,” she says, “is too soft a word.”

The Art of Anatomy course was made possible by a grant from the Arts Initiative at the University of Michigan to recipient Melissa Gross. Gross and Gear plan to offer the course again in fall 2024.

Full Article from the University of Michigan School of Kinesiology:

https://www.kines.umich.edu/news-events/news/delving-art-instead-science-anatomy

Recruiting Unity VR programmers to Evaluate Sound Customization Toolkit for Virtual Reality Applications

Participate in a study by the EECS Accessibility Lab

The EECS Accessibility Lab needs your help evaluating a new Sound Accessibility toolkit for Virtual Reality!

Our research team is studying how sound customization tools, such as modulating frequency or dynamically adjusting volume, can enhance VR experiences for deaf and hard of hearing (DHH) people. We are recruiting adult (18 or older) participants who have at least one year of experience working with Unity VR and at least two previous projects with sound into which our toolkit could be added.
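To make those terms concrete, here is a minimal, purely illustrative Python sketch of the kinds of adjustments described above: a static gain change, a dynamic volume envelope, and a naive frequency (pitch) shift. It is an invented example for readers, not code from the toolkit being evaluated.

```python
import numpy as np

def adjust_gain(samples, gain_db):
    """Statically scale a mono float buffer by a gain in decibels."""
    return samples * (10.0 ** (gain_db / 20.0))

def dynamic_volume(samples, envelope):
    """Apply a time-varying gain envelope (one gain value per sample)."""
    return samples * envelope

def naive_pitch_shift(samples, semitones):
    """Shift pitch by resampling; simple to read, but it changes duration."""
    factor = 2.0 ** (semitones / 12.0)
    src_idx = np.arange(0, len(samples) - 1, factor)
    return np.interp(src_idx, np.arange(len(samples)), samples)

# Example: a one-second 440 Hz test tone, boosted 6 dB,
# faded out over its length, then raised a fifth (7 semitones).
rate = 48_000
t = np.linspace(0.0, 1.0, rate, endpoint=False)
tone = 0.25 * np.sin(2 * np.pi * 440.0 * t)
louder = adjust_gain(tone, 6.0)
faded = dynamic_volume(louder, np.linspace(1.0, 0.0, len(louder)))
shifted = naive_pitch_shift(faded, 7)
```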

This study will be self-paced, remote, and asynchronous. It will take around 60 – 90 minutes.

In this study, we will collect some demographic information about you (e.g., age, gender) and ask about your experience working with Unity VR. We will then introduce our Sound Customization Toolkit and ask you to apply it to one of your own projects. We will ask you to record your screen and voice during this implementation process, and to complete a form during the study to provide feedback on our toolkit.

After the study, we will compensate you $30 in the form of an Amazon Gift Card for your time.

If you are interested in participating, please fill out this Google Form. For more information, feel free to reach out to Xinyun Cao: [email protected].

For more details on our work, see our lab’s webpage.

Fall 2023 XR Classes

Looking for classes that incorporate XR?

EECS 498 – Extended Reality & Society


Credits: 4
More Info Here
Contact with Questions:
Austin Yarger
[email protected]

From pediatric medical care, advanced manufacturing, and commerce to film analysis, first-responder training, and unconscious bias training, the fledgling, immersive field of extended reality may take us far beyond traditional video games and entertainment and into diverse social impact.

“EECS 498 : Extended Reality and Society” is a programming-intensive senior capstone / MDE course that empowers students with the knowledge and experience to…

    • Implement medium-sized virtual and augmented reality experiences using industry-standard techniques and technologies.
        ◦ Game engines (Unreal Engine / Unity), design patterns, basic graphics programming, etc.
    • Design socially conscious, empowering user experiences that engage diverse audiences.
    • Contribute to cultural discourse on the hopes, concerns, and implications of an XR-oriented future.
        ◦ Privacy / security concerns, XR film review (The Matrix, Black Mirror, etc.)
    • Carry out user testing and employ feedback after analysis.
        ◦ Requirements + customer analysis, iterative design process, weekly testing, analytics, etc.
    • Work efficiently in teams of 2-4 using agile production methods and software.
        ◦ Project management software (Jira), version control (Git), burndown charting and resource allocation, sprints, etc.

Students will conclude the course with at least three significant, socially-focused XR projects in their public portfolios.

 

ENTR 390 – Intro to Entrepreneurial Design, VR Lab


Credits: 3
More Info Here
Contact with Questions:
Sara ‘Dari’ Eskandari
[email protected]

In this lab, you’ll learn how to develop virtual reality content for immersive experiences in the Oculus Rift and the MIDEN, or for virtual production, using Unreal Engine and 3D modeling software. You’ll also be introduced to asset creation and scene assembly by bringing assets into the Unreal Engine and creating interactive experiences. At the end of the class you’ll be capable of developing virtual reality experiences, simulations, and tools to address real-world problems.

Students will gain an understanding of how to generate digital content for virtual reality platforms; become knowledgeable about versatile file formats, content pipelines, hardware platforms, and industry standards; understand methods of iterative design and the creation of functional prototypes in this medium; apply what is learned in the lecture section of the course to determine what is possible, what is marketable, and what distribution methods are available for this platform; and become familiar with documenting their design process, pitching their ideas to others, and receiving and providing quality feedback.

 

FTVM 307 – Film Analysis for Filmmakers


Credits: 3
More Info Here
Contact with Questions:
Matthew Solomon
[email protected]

Filmmakers learn about filmmaking by watching films. This course reverse engineers movies to understand how they were produced. The goal is to learn from a finished film how the scenes were produced in front of the camera and microphone and how the captured material was edited. Students in this class use VR to reimagine classic film scenes, giving them the ability to record and edit footage from a virtual set.

 

UARTS 260 / EIPC FEAST – Empathy in Pointclouds


Credits: 1-5
More Info Here
Contact with Questions:
Dawn Gilpin
[email protected]

Empathy In Point Clouds: Spatializing Design Ideas and Storytelling through Immersive Technologies integrates LiDAR scanning, photogrammetry, and Unreal Engine into education, expanding the possible methodologies and processes of architectural design. Entering our third year of the FEAST program, we turn our attention to storytelling and worldbuilding using site-specific point cloud models as the context for our narratives. This year the team will produce 1-2 spatial narratives for the three immersive technology platforms we are working with: VR headset, MIDEN/VR CAVE, and the LED stage.

 

 

ARTDES 217 – Bits and Atoms


Credits: 3
More Info Here
Contact with Questions:
Sophia Brueckner
[email protected]

This is an introduction to digital fabrication within the context of art and design. Students learn about the numerous types of software and tools available and develop proficiency with the specific software and tools at Stamps. Students discuss the role of digital fabrication in creative fields.

 

ARTDES 420 – Sci-Fi Prototyping


Credits: 3
More Info Here
Contact with Questions:
Sophia Brueckner
[email protected]

This course ties science fiction with speculative/critical design as a means to encourage the ethical and thoughtful design of new technologies. With a focus on the creation of functional prototypes, this course combines the analysis of science fiction with physical fabrication or code-based interpretations of the technologies they depict.

 

SI 559 – Introduction to AR/VR Application Design

Credits: 3
More Info Here
Contact with Questions:
Michael Nebeling
[email protected]

This course will introduce students to Augmented Reality (AR) and Virtual Reality (VR) interfaces. This course covers basic concepts; students will create two mini-projects, one focused on AR and one on VR, using prototyping tools. The course requires neither special background nor programming experience.

 

FTVM 394 – Digital Media Production, Virtual Reality

Credits: 4
More Info Here
Contact with Questions:
Yvette Granata
[email protected]

This course provides an introduction to key software tools, techniques, and fundamental concepts supporting digital media arts production and design. Students will learn and apply the fundamentals of design and digital media production using software applications and web-based coding techniques, and will study the principles of design that translate across multiple forms of media production.

Learning to Develop for Virtual Reality – The ENTR 390 “VR Lab”

XR Prototyping

For the past several years, students enrolled in the Center for Entrepreneurship’s Intro to Entrepreneurial Design VR Lab have been introduced to programming and content creation pipelines for XR development using a variety of Visualization Studio resources. Their goal? Create innovative applications for XR. From creating video games to changing the way class material is accessed with XR-capable textbooks, if you have an interest in learning how to make your own app for the Oculus Rift, the MIDEN, or even a smartphone, this might be a class to enroll in. Students interested in this course are not required to have any prior programming or 3D modeling knowledge, and if you’ve never used a VR headset, that’s OK too. This course will teach you everything you need to know.

Henry Duhaime presents his VR game for Oculus Rift, in which players explore the surface of Mars in search of a missing NASA rover.
Michael Meadows prototypes AR capable textbooks using a mobile phone and Apple’s ARKit.

Revolutionizing 3D Rotational Angiograms with Microsoft HoloLens

A NEW WAY TO VISUALIZE THE HEART

Stephanie O’Malley


Just prior to the release of the Microsoft HoloLens 2, the Visualization Studio was approached by Dr. Arash Salavitabar of the U-M C.S. Mott Children’s Hospital with an innovative idea: to use XR to improve the evaluation of patient scans stemming from 3D rotational angiography.

Rotational angiography is an X-ray-based medical imaging technique that allows clinicians to acquire CT-like 3D volumes during hybrid surgery or a catheter intervention. It is performed by injecting contrast into the pulmonary artery and then rapidly rotating a cardiac C-arm. Clinicians can then view the resulting data on a computer monitor, manipulating images of the patient’s vasculature. This is used to evaluate how a procedure should move forward and to aid in communicating that plan to the patient’s family.

With augmented reality devices like the HoloLens 2, new possibilities for displaying and manipulating patient data have emerged, along with the potential for collaborative interactions with patient data among clinicians.

What if, instead of viewing a patient’s vasculature as a series of 2D images displayed on a computer monitor, you and your fellow doctors could view it more like a tangible 3D object placed on the table in front of you? What if you could share in the interaction with this 3D model — rotating and scaling the model, viewing cross sections, or taking measurements, to plan a procedure and explain it to the patient’s family?

This has now been made possible with a Faith’s Angels grant awarded to Dr. Salavitabar, intended to explore innovative ways of addressing congenital heart disease. The funding for this grant was generously provided by a family impacted by congenital heart disease, who unfortunately had lost a child to the disease at a very young age.

The Visualization Studio consulted with Dr. Salavitabar on essential features and priorities to realize his vision, using the latest version of the Visualization Studio’s Jugular software.

This video was spliced from two separate streams recorded concurrently from two collaborating HoloLens users. Each user has a view of the other, as well as their own individual perspectives of the shared holographic model.

JUGULAR

The angiography system in the Mott clinic produces digital surface models of the vasculature in STL format.

That format is typically used for 3D printing, but the process of queuing and printing a physical 3D model often takes at least several hours or even days, and the model is ultimately physical waste that must be properly disposed of after its brief use.

Jugular offers the alternative of viewing a virtual 3D model in devices such as the Microsoft HoloLens, loaded from the same STL format, with a lead time of under an hour. The time is determined mostly by how long the angiography software takes to produce the STL file. Once the file is ready, it takes only minutes to upload and view on a HoloLens. Jugular’s network module allows several HoloLens users to share a virtual scene over Wi-Fi. The HoloLens provides a “spatial anchor” capability that ties hologram locations to a physical space. Users can collaboratively view, walk around, and manipulate shared holograms relative to their shared physical space. The holograms can be moved, scaled, sliced, and marked using hand gestures and voice commands.
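For the curious, a binary STL file is just a flat list of triangles, which is part of why this pipeline can be so fast. The Python sketch below shows one plausible way to read such a file; it is illustrative only, not Jugular’s actual loader, and the file name is hypothetical.

```python
import struct
import numpy as np

def load_binary_stl(path):
    """Parse a binary STL file into per-triangle normals and vertices.

    Binary STL layout: an 80-byte header, a uint32 triangle count, then
    one 50-byte record per triangle: a 3-float normal, three 3-float
    vertices, and a 2-byte attribute field.
    """
    with open(path, "rb") as f:
        f.read(80)                                  # skip the header
        (n_tris,) = struct.unpack("<I", f.read(4))  # little-endian count
        raw = np.frombuffer(f.read(n_tris * 50), dtype=np.uint8)
    records = raw.reshape(n_tris, 50)
    # The first 48 bytes of each record are 12 little-endian float32s.
    floats = records[:, :48].copy().view("<f4").reshape(n_tris, 4, 3)
    normals, vertices = floats[:, 0, :], floats[:, 1:, :]
    return normals, vertices

# Example: load a (hypothetical) vasculature model and report its extent,
# the sort of information needed to center and scale a hologram.
normals, verts = load_binary_stl("vasculature.stl")
print(len(verts), "triangles")
print("bounding box:", verts.min(axis=(0, 1)), "to", verts.max(axis=(0, 1)))
```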

This innovation is not confined to medical purposes. Jugular is a general-purpose extended reality program with applications in a broad range of fields. The developers analyze specific project requirements in terms of general XR capabilities, and project-specific requirements are usually met through easily editable configuration files rather than hard coding.

Robots Who Goof: Can We Trust Them?

EVERYONE MAKES MISTAKES

The humanlike android robot used in the virtual experimental task of handling boxes.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that some approaches directed at human co-workers work better than others, and that their effectiveness often depends on how the robot looks.

“Robots are definitely a technology, but their interactions with humans are social, and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For their study, published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).

The mechanical arm robot used in the virtual experiment.

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic or more humanlike, the other more mechanical in appearance.

Sara Eskandari and Stephanie O’Malley of the Emerging Technology Group at U-M’s James and Anne Duderstadt Center helped develop the experimental virtual platform.

The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).
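To make the design concrete, the sketch below lays out the study’s two factors (robot appearance and repair strategy) in Python, with the four statements quoted above; the between-subjects assignment shown is a hypothetical illustration, not the authors’ actual procedure.

```python
import itertools
import random

# The four trust repair statements reported in the study.
REPAIR_STATEMENTS = {
    "apology":     "I'm sorry I got the wrong box",
    "denial":      "I picked the correct box so something else must have gone wrong",
    "explanation": "I see that was the wrong serial number",
    "promise":     "I'll do better next time and get the right box",
}

# Robot appearance was the other factor.
APPEARANCES = ["anthropomorphic", "mechanical"]

# Hypothetical balanced assignment of the 164 participants to the
# eight cells of the 2 x 4 design (appearance x repair strategy).
conditions = list(itertools.product(APPEARANCES, REPAIR_STATEMENTS))
participants = list(range(164))
random.shuffle(participants)
assignment = {p: conditions[i % len(conditions)]
              for i, p in enumerate(participants)}

appearance, strategy = assignment[0]
print(f"Participant 0 sees the {appearance} robot, which says: "
      f"\"{REPAIR_STATEMENTS[strategy]}\" ({strategy})")
```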

Previous studies have examined apologies, denials and promises as factors in trust or trustworthiness, but this is the first to look at explanations as a repair strategy, and explanations had the highest impact on integrity, regardless of the robot’s appearance.

When the robot was more humanlike, trust was easier to restore for integrity when explanations were given, and for benevolence when apologies, denials and explanations were offered.

As in previous research, apologies from robots produced higher integrity and benevolence than denials. Promises outpaced apologies and denials when it came to measures of benevolence and integrity.

Esterwood said this study is ongoing with more research ahead involving other combinations of trust repairs in different contexts, with other violations.

“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” Esterwood said. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

This originally appeared on Michigan News.

Behind the Scenes: Re-creating Citizen Kane in VR

inside a classic

Stephanie O’Malley


Students in Matthew Solomon’s classes are used to critically analyzing film. Now they get the chance to be the director of arguably one of the most influential films ever produced: Citizen Kane.

Using an application developed at the Duderstadt Center with grant funding provided by LSA Technology Services, students are placed in the role of the film’s director and can record a prominent scene from the movie using a virtual camera. The film set, which no longer exists, has been meticulously re-created in black-and-white CGI from reference photographs of the original set, with a CGI Orson Welles acting out the scene on repeat. His actions were performed by motion capture actor Matthew Henerson, chosen for his likeness to Welles, and the Orson avatar was generated from a photogrammetry scan of Henerson.

Top-down view of the CGI re-creation of the film set for Citizen Kane

Analyzing the original film footage, the team measured doorways, compared actor heights, and counted footsteps to arrive at a best estimate for the scale of the set when 3D modeling it. With feedback from Citizen Kane expert Harlan Lebo, fine details, down to the topics of the books on the bookshelves, could be determined.

Archival photograph provided by Vincent Longo of the original film set

Motion capture actor Matthew Henerson was flown in to play the role of the digital Orson Welles. In a carefully choreographed session directed by Solomon’s Ph.D. student, Vincent Longo, the iconic scene from Citizen Kane was re-enacted while the original footage played on an 80-inch TV in the background, ensuring every step aligned perfectly with the original.

Actor Matthew Henerson in full mocap attire amidst the makeshift set for Citizen Kane – Props constructed using PVC. Photo provided by Shawn Jackson.

The boundaries of the set were taped on the floor so the data could be aligned to the digitally re-created set. Eight Vicon motion capture cameras, the same kind used throughout Hollywood for films like Lord of the Rings and Planet of the Apes, formed a circle around the makeshift set. These cameras rely on infrared light reflected off tiny balls affixed to the motion capture suit to track the actor’s motion. Any props used during the recording were carefully constructed out of cardboard and PVC (later to be 3D modeled) so as not to obstruct his movements. The three minutes of footage being re-created took three days to capture, comprising over 100 individual mocap takes and several hours of footage, which were then compared for accuracy and stitched together to cover the full route Orson travels through the environment.

Matthew Henerson
Orson Welles

Matthew Henerson then swapped his motion capture suit for an actual suit, similar to the one worn by Orson in the film, and underwent 3D scanning using the Duderstadt Center’s photogrammetry resources.

Actor Matthew Henerson wears asymmetrical markers to assist the scanning process

Photogrammetry is a method of scanning existing objects or people, commonly used in Hollywood and throughout the video game industry to create CGI likenesses of famous actors. This technology has been used in films like Star Wars (an actress similar in appearance to Carrie Fisher was scanned and then further sculpted to create a more youthful Princess Leia), with entire studios now devoted to photogrammetry scanning. The process relies on several digital cameras surrounding the subject and taking simultaneous photographs.

Matthew Henerson being processed for Photogrammetry

The photos are submitted to software that analyzes them on a per-pixel basis, looking for similar features across multiple photos. When a feature is recognized, it is triangulated using the focal length of the camera and its position relative to other identified features, allowing millions of tracking points to be generated. From this an accurate 3D model can be produced, with the original digital photos mapped to its surface to preserve photo-realistic color. These models can be further manipulated: sometimes they are sculpted by an artist, or, with the addition of a digital “skeleton,” they can be driven by motion data to become a fully articulated digital character.
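As a worked illustration of that triangulation step, the sketch below implements the standard direct linear transform for one matched feature seen by two calibrated cameras. The projection matrices and pixel coordinates are toy values; this is the textbook method, not the studio’s actual software.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two pixel observations.

    P1, P2: 3x4 camera projection matrices (intrinsics times extrinsics).
    x1, x2: (u, v) pixel coordinates of the same feature in each image.
    Each view contributes two rows to a homogeneous system A X = 0,
    which is solved in the least-squares sense by SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy check: two cameras 10 cm apart observing a point 2 m away.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0, 1.0])
x1h, x2h = P1 @ X_true, P2 @ X_true
x1, x2 = x1h[:2] / x1h[2], x2h[:2] / x2h[2]
print(triangulate(P1, P2, x1, x2))  # ~ [0.2, 0.1, 2.0]
```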

The 3D-modeled scene and scanned actor model were joined with mocap data and brought into the Unity game engine to develop the functionality students would need to film within the 3D set. A virtual camera was developed with all of the same settings you would find on a film camera from that era. When viewed in a virtual reality headset like the Oculus Rift, Solomon’s students can pick up the camera and physically move around to position it at different locations in the CGI environment, often capturing shots that would otherwise be difficult to achieve on a conventional film set. The footage students film within the app can be exported as MP4 video and then edited in their editing software of choice, just like any other camera footage.

After Solomon used the application in his course in the winter of 2020, his project with the Duderstadt Center was displayed as part of iLRN’s 2020 Immersive Learning Project Showcase & Competition. With COVID-19 making the conference a remote experience, the Citizen Kane project could be experienced in virtual reality by conference attendees using the FrameVR platform. Highlighting innovative ways of teaching with VR technologies, the showcase let attendees from around the world learn about the project and watch student edits made using the application.

Citizen Kane on display for iLRN’s 2020 Immersive Learning Project Showcase & Competition using Frame VR

Passion & Violence: Anna Galeotti’s MIDEN Installation

Anna Galeotti, a Ph.D. Fulbright Scholar (Winter 2014), explored the concept of “foam” or “bubbles” as a possible model for audiovisual design elements and their relationships. Her art installation, “Passion and Violence in Brazil,” was displayed in the Duderstadt Center’s MIDEN.

Interested in using the MIDEN to do something similar? Contact us.

Extended Reality: changing the face of learning, teaching, and research

Written by Laurel Thomas, Michigan News

Students in a film course can evoke new emotions in an Orson Welles classic by virtually changing the camera angles in a dramatic scene.

Any one of us could take a smartphone, laptop, paper, Play-doh and an app developed at U-M, and with a little direction become a mixed reality designer. 

A patient worried about an upcoming MRI may be able to put fears aside after virtually experiencing the procedure in advance.

Dr. Jadranka Stojanovska, one of the collaborators on the virtual MRI, tries on the device

This is XR—Extended Reality—and the University of Michigan is making a major investment in how to utilize the technology to shape the future of learning. 

Recently, Provost Martin Philbert announced a three-year funded initiative, led by the Center for Academic Innovation, to advance XR, a term used to encompass augmented reality, virtual reality, mixed reality and other variations of computer-generated real and virtual environments and human-machine interactions.

The initiative will explore how XR technologies can strengthen the quality of a Michigan education, cultivate interdisciplinary practice, and enhance a national network of academic innovation.

Throughout the next three years, the campus community will explore new ways to integrate XR technologies in support of residential and online learning and will seek to develop innovative partnerships with external organizations around XR in education.

“Michigan combines its public mission with a commitment to research excellence and innovation in education to explore how XR will change the way we teach and learn from the university of the future,” said Jeremy Nelson, new director of the XR Initiative in the Center for Academic Innovation.

Current Use of XR

Applications of the technology are already changing the learning experience across the university in classrooms and research labs with practical application for patients in health care settings. 

In January 2018, a group of students created the Alternative Reality Initiative to provide a community for hosting development workshops, discussing industry news, and connecting students in the greater XR ecosystem.

In Matthew Solomon‘s film course, students can alter a scene in Orson Welles’ classic “Citizen Kane.” U-M is home to one of the largest Orson Welles collections in the world.

Solomon’s concept for developers was to take a clip from the movie and model a scene to look like a virtual reality setting—almost like a video game. The goal was to bring a virtual camera in the space so students could choose shot angles to change the look and feel of the scene. 

This VR tool will be used fully next semester to help students talk about filmmaker style, meaning and choice.

“We can look at clips in class and be analytical but a tool like this can bring these lessons home a little more vividly,” said Solomon, associate professor in the Department of Film, Television and Media.

A scene from Orson Welles’ “Citizen Kane” from the point of view of a virtual camera that allows students to alter the action.

Sara Eskandari, who just graduated with a Bachelor of Arts from the Penny Stamps School of Art and a minor in Computer Science, helped develop the tool for Solomon’s class as a member of the Visualization Studio team.

“I hope students can enter an application like ‘Citizen Kane’ and feel comfortable experimenting, iterating, practicing, and learning in a low-stress environment,” Eskandari said. “Not only does this give students the feeling of being behind an old-school camera, and supplies them with practice footage to edit, but the recording experience itself removes any boundaries of reality. 

“Students can float to the ceiling to take a dramatic overhead shot with the press of a few buttons, and a moment later record an extreme close up with entirely different lighting.”

Sara Blair, vice provost for academic and faculty affairs, and the Patricia S. Yaeger Collegiate Professor of English Language and Literature, hopes to see more projects like “Citizen Kane.”

“An important part of this project, which will set it apart from experiments with XR on many other campuses, is our interest in humanities-centered perspectives to shape innovations in teaching and learning at a great liberal arts institution,” Blair said. “How can we use XR tools and platforms to help our students develop historical imagination or to help students consider the value and limits of empathy, and the way we produce knowledge of other lives than our own? 

“We hope that arts and humanities colleagues won’t just participate in this [initiative] but lead in developing deeper understandings of what we can do with XR technologies, as we think critically about our engagements with them.”

UM Faculty Embracing XR

Mark W. Newman, professor of information and of electrical engineering, and chair of the Augmented and Virtual Reality Steering Committee, said indications are that many faculty are thinking about ways to use the technology in teaching and research, as evidenced by progress on an Interdisciplinary Graduate Certificate Program in Augmented and Virtual Reality.

Newman chaired the group that pulled together a number of faculty working in XR to identify the scope of the work under way on campus, and to recommend ways to encourage more interdisciplinary collaboration. He’s now working with deans and others to move forward with the certificate program that would allow experiential learning and research collaborations on XR projects.

“Based on this, I can say that there is great enthusiasm across campus for increased engagement in XR and particularly in providing opportunities for students to gain experience employing these technologies in their own academic work,” Newman said, addressing the impact the technology can have on education and research.

“With a well-designed XR experience, users feel fully present in the virtual environment, and this allows them to engage their senses and bodies in ways that are difficult if not impossible to achieve with conventional screen-based interactive experiences. We’ve seen examples of how this kind of immersion can dramatically aid the communication and comprehension of otherwise challenging concepts, but so far we’re only scratching the surface in terms of understanding exactly how XR impacts users and how best to design experiences that deliver the effects intended by experience creators.”

Experimentation for All

Encouraging everyone to explore the possibilities of mixed reality (MR) is a goal of Michael Nebeling, assistant professor in the School of Information, who has developed unique tools that can turn just about anyone into an augmented reality designer using his ProtoAR or 360proto software.

Most AR projects begin with a two-dimensional design on paper that is then made into a 3D model, typically by a team of experienced 3D artists and programmers.

Michael Nebeling’s mixed reality app for everyone.

With Nebeling’s ProtoAR app, content can be sketched on paper or molded with Play-Doh; the designer then either moves the camera around the object or spins the piece in front of the lens to create motion. ProtoAR then blends the physical and digital content to come up with various AR applications.
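As a rough analogy for that blending step, the sketch below composites the dark “ink” of a photographed paper sketch onto a camera frame using simple thresholding. It is a hypothetical Python/OpenCV illustration (file names and threshold values invented), not ProtoAR’s actual implementation.

```python
import cv2
import numpy as np

# Hypothetical illustration of sketch-over-camera blending: lift a paper
# sketch off its white background and composite it onto a camera frame.
sketch = cv2.imread("sketch.jpg")        # hypothetical photographed sketch
frame = cv2.imread("camera_frame.jpg")   # stand-in for a live camera frame

# Match the sketch to the frame size, then find the drawn content:
# pixels darker than the paper are treated as ink.
sketch = cv2.resize(sketch, (frame.shape[1], frame.shape[0]))
gray = cv2.cvtColor(sketch, cv2.COLOR_BGR2GRAY)
_, ink_mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)

# Composite: camera pixels where there is no ink, sketch pixels where there is.
mask3 = cv2.merge([ink_mask] * 3)
composite = np.where(mask3 > 0, sketch, frame)
cv2.imwrite("composite.jpg", composite)
```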

Using his latest tool, 360proto, they can even make the paper sketches interactive so that users can experience the AR app live on smartphones and headsets, without spending hours and hours on refining and implementing the design in code.

These are the kinds of technologies that not only allow his students to learn about AR/VR in his courses, but also have practical applications. For example, people can experience their dream kitchen at home, rather than having to use their imaginations when clicking things together on home improvement sites. He also is working on getting many solutions directly into future web browsers so that people can access AR/VR modes when visiting home improvement sites, cooking a recipe in the kitchen, planning a weekend trip with museum or gallery visits, or when reading articles on Wikipedia or the news.

Nebeling is committed to “making mixed reality a thing that designers do and users want.”

“As a researcher, I can see that mixed reality has the potential to fundamentally change the way designers create interfaces and users interact with information,” he said. “As a user of current AR/VR applications, however, it’s difficult to see that potential even for me.”

He wants to enable a future in which “mixed reality will be mainstream, available and accessible to anyone, at the snap of a finger. Where everybody will be able to ‘play,’ be it as consumer or producer.”

 

XR and the Patient Experience

A team in the Department of Radiology, in collaboration with the Duderstadt Center Visualization Studio, has developed a Virtual Reality tool to simulate an MRI, with the goal of reducing last-minute cancellations due to claustrophobia, which occur in an estimated 4-14% of patients. The clinical trial is currently enrolling patients.
VR MRI Machine

“The collaboration with the Duderstadt team has enabled us to develop a cutting-edge tool that allows patients to truly experience an MRI before having a scan,” said Dr. Richard K.J. Brown, professor of radiology. The patient puts on a headset and is ‘virtually transported’ into an MRI tube. A calming voice explains the MRI exam, as the patient hears the realistic sounds of the magnet in motion, simulating an exam experience. 

The team also is developing an Augmented Reality tool to improve the safety of CT-guided biopsies.

Team members include doctors Brown, Jadranka Stojanovska, Matt Davenport, Ella Kazerooni and Elaine Caoili from Radiology, and Dan Fessahazion, Sean Petty, Stephanie O’Malley, Theodore Hall and several students from the Visualization Studio.

“The Duderstadt Center and the Visualization Studio exist to foster exactly these kinds of collaborations,” said Daniel Fessahazion, the center’s associate director for emerging technologies. “We have a deep understanding of the technology and collaborate with faculty to explore its applicability to create unique solutions.”

Dr. Elaine Caoili, Saroja Adusumilli Collegiate Professor of Radiology, demonstrates an Augmented Reality tool under development that will improve the safety of CT-guided biopsies.

The Center for Academic Innovation’s Nelson said the first step of this new initiative is to assess and convene early innovators in XR from across the university to shape how this technology may best serve residential and online learning.

“We have a unique opportunity with this campus-wide initiative to build upon the efforts of engaged students, world-class faculty, and our diverse alumni network to impact the future of learning,” he said.