Engineering Grants for XR

The Enhancing Engineering Education Grants Program is designed to support innovative strategies for engaging and supporting all learners in Michigan Engineering undergraduate courses. This program springs from a collaboration among ADUE, ADGPE, CAEN, CRLT-Engin, and Nexus. Proposals will be invited across the range of innovations in engineering education, including instructional practices, course design and content, and instructional technology.

As part of the initial Enhancing Education using Technology (EET) proposal to the College to support the instructional needs of faculty, grants were offered to fund the implementation of instructors' innovative ideas. The first year of the grants program was FY23, and all grant money was awarded to faculty. It included three major grants of $50K each on the topics of XR, DEI, and Tandem. Additional smaller grants were also awarded to faculty. At the completion of this first year, the team used the past year's knowledge to propose improvements and changes to the program.

For AY 2024-2025, there are three grants available to support instructional faculty members:

Education Innovation Grants

Grants of up to $10K are available to COE faculty & staff

About the Grant

Grants of up to $10K are available to individual or small groups of Michigan Engineering instructional faculty and staff members seeking to implement innovative teaching methods and/or tools.

Group 2 applications are now being accepted. This call for proposals is open to all eligible applicants and does not require a prior Group 1 submission.


Proposal Evaluation Criteria
  • Applies a novel method or tool to achieve objectives
  • Reflects innovation in teaching methods or approaches
  • For online courses, utilizes the Quality Matters framework in partnership with Nexus
  • Involves partnering with Nexus or CRLT-Engin to co-teach a new faculty development workshop
  • Builds upon evidence-based best practices for enhancing student learning
  • Promotes equitable instruction for all learners
  • Implements practices or tools that have the potential for great impact (either a large impact on a small population, or something that could be applied to a larger population at Michigan Engineering)
  • Addresses work for which no other funding opportunities exist
  • Achieves synergy with goals, strengths, and ongoing work of the College of Engineering, especially as it relates to Michigan Engineering’s Strategic Vision

Group 2 applications close Wednesday, May 1, 2024

Project Statement:

  • Clearly describe the proposed project
  • Explain the value of the project
  • Identify the specific innovation and its relation to evidence-based practices
  • Explain how the project supports equitable instruction and enhanced student learning
  • Discuss the project’s potential for application in broader contexts

Project Evaluation Plan:
Explain how the success of this project will be evaluated, documented, and disseminated. Approaches might include midterm course assessments, focus groups, and surveys, among others.

Budget Request:

  • Graduate or undergraduate student salaries
  • Materials and supplies
  • Project evaluation expenses
  • Travel and registration fees for teaching-related conferences, seminars or workshops
  • Faculty member summer salary (up to $2K of the project’s total budget)

Timeline:
Submissions will be accepted until Wednesday, May 1, 2024, with funding decisions announced in late May.

Strategic Technology Grants

COE Projects Focused on XR, Online/Hybrid Learning, and/or Generative Artificial Intelligence

About the Grant

Grants of up to $50,000 are available to teams of at least three Michigan Engineering instructional faculty and staff members to implement innovative teaching methods and/or tools that require an investment of time/resources and collaboration for deployment that is larger than what is available via Education Innovation Grants. Projects should focus on strategic themes of XR, online/hybrid learning and/or generative artificial intelligence.


Proposal Evaluation Criteria
  • Applies a novel method, modality or tool to achieve objectives
  • Reflects innovation in teaching methods or approaches
  • Builds upon evidence-based best practices for enhancing student learning
  • Promotes equitable instruction for all learners
  • If online, leverages the Quality Matters rubric and best practices with online course design and development
  • Implements practices or tools that have the potential for great impact (either a large impact on a small population, or something that could be applied to a larger population at Michigan Engineering)
  • Achieves synergy with goals, strengths, and ongoing work of ADUE, ADGPE, CAEN, CRLT-Engin, Nexus, and/or the broader College of Engineering, especially as it relates to Michigan Engineering’s Strategic Vision

Applications closed March 4, 2024

Identify your proposal’s strategic theme:

  • Online/hybrid learning
  • Generative artificial intelligence
  • XR

Project Statement:

  • Clearly describe the proposed project
  • Explain the value of the project
  • Identify the specific innovation and its relation to evidence-based practices
  • Explain how the project supports equitable instruction and enhanced student learning
  • If online, describe how the course design and development effort will leverage the Quality Matters rubric
  • Discuss the project’s potential for great impact
  • Describe your goals for collaboration with at least one E3 grant sponsor (ADUE, ADGPE, CAEN, CRLT-Engin, and/or Nexus)

Project Evaluation Plan:
Explain how the success of this project will be evaluated, documented, and disseminated.

Budget Request:

  • Graduate or undergraduate student salaries
  • Instructional software and classroom technology
  • Materials and supplies
  • Project evaluation expenses
  • Travel and registration fees for teaching-related conferences, seminars or workshops
  • Faculty member summer salary (up to $2K of the project’s total budget)

Team Roster:
Provide a list of all team members, with descriptions of their respective roles and very brief bios.

Timeline:

Submissions were due on Monday, March 4, 2024, with funding decisions announced in April.

Software Pilot Grants

Grant funding of up to $10K for COE faculty & staff seeking to pilot instructional software

About the Grant

Grants of up to $10K are available to instructional faculty and staff members seeking to pilot innovative and results-oriented instructional software that has the potential to improve teaching and learning in Michigan Engineering. Proposals may be submitted by individuals requesting software for a specific class or a team of faculty members requesting software for a group of classes.

In the spirit of innovation, all ideas are welcome. Proposals that call for the use of collaborative teaching and learning strategies are encouraged. Priority will be given to projects that, if proven successful, can be replicated throughout the College.

Please note that there are many routes for procuring software licenses at the University of Michigan. We encourage you to reach out to our team at e3grants.engin@umich.edu to help determine if this grant program is appropriate for your request before submitting a proposal.


REQUIRED DELIVERABLES
  • Presentation to the Michigan Engineering faculty community of a case study covering your application of the software and how it impacted your students’ learning objectives
  • Engagement with CAEN on evaluation of software for possible college adoption
  • Acting as a faculty advocate for this software and sharing how you are using it in your class

Applications for Fall 2024 close April 1, 2024

Course Information:
Logistical course details, including how frequently the course is taught, an enrollment summary, etc.

Learning Gaps:
Describe the learning gap(s) you have identified in your lesson/module/unit/course.

Teaching Intervention (Pedagogical Support):
Explain the teaching and technology intervention(s) that will close the stated learning gaps. Identify the evidence-based practices that support the efficacy of the proposed software solution.

Comparative Tool Evaluation:

  • Identify 3-4 comparable software tools (including your proposed tool) that could fill the established learning gaps.
  • List the criteria you will use to evaluate the 3-4 comparable tools to inform your decision making.
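One common way to structure such a comparison is a weighted decision matrix. The Python sketch below is purely illustrative; the tool names, criteria, and weights are hypothetical placeholders for those you would define in your proposal:

```python
# Illustrative weighted decision matrix for comparing candidate tools.
# Tool names, criteria, and weights are hypothetical examples only.

criteria_weights = {
    "closes learning gap": 0.4,
    "ease of adoption": 0.2,
    "accessibility": 0.2,
    "cost per license": 0.2,
}

# Scores on a 1-5 scale for each criterion, gathered during your review.
tool_scores = {
    "Tool A": {"closes learning gap": 5, "ease of adoption": 3,
               "accessibility": 4, "cost per license": 2},
    "Tool B": {"closes learning gap": 4, "ease of adoption": 4,
               "accessibility": 3, "cost per license": 4},
    "Tool C": {"closes learning gap": 3, "ease of adoption": 5,
               "accessibility": 4, "cost per license": 5},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Sum each criterion score multiplied by its weight."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank the candidate tools from highest to lowest weighted score.
ranked = sorted(tool_scores,
                key=lambda t: weighted_score(tool_scores[t], criteria_weights),
                reverse=True)
for tool in ranked:
    print(f"{tool}: {weighted_score(tool_scores[tool], criteria_weights):.2f}")
```

Presenting the same table in the proposal itself (criteria as rows, tools as columns) makes the decision process easy for reviewers to follow.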

Project Evaluation Plan:

  • Explain how the success of this software will be evaluated, documented, and disseminated. Approaches might include midterm course assessments, focus groups, and surveys, among others.
  • Explain how you will evaluate whether the software met your needs and those of your students. How will you determine whether it has improved the educational experience?

Budget Request:
Provide the number of licenses, estimated cost per license, and estimated total cost for this software.

Timeline:
To use the software for instruction in the Fall 2024 term, proposals must be submitted by April 1, 2024.

Fall 2024 XR Classes


Looking for Classes that incorporate XR?

EECS 440 – Extended Reality for Social Impact (Capstone / MDE)

More Info Here
Contact with Questions:
Austin Yarger
ayarger@umich.edu

Extended Reality for Social Impact — Design, development, and application of virtual and augmented reality software for social impact. Topics include: virtual reality, augmented reality, game engines, ethics / accessibility, interaction design patterns, agile project management, stakeholder outreach, XR history / culture, and portfolio construction. Student teams develop and exhibit socially impactful new VR / AR applications.


ENTR 390.005 & 390.010 – Intro to Entrepreneurial Design, VR Lab

More Info Here
Contact with Questions:
Sara ‘Dari’ Eskandari
seskanda@umich.edu

In this lab, you’ll learn how to develop virtual reality content for immersive experiences in the Meta Quest, MIDEN, or for Virtual Production using Unreal Engine and 3D modeling software. You’ll also be introduced to asset creation and scene assembly by bringing assets into the Unreal Engine and creating interactive experiences. At the end of the class you’ll be capable of developing virtual reality experiences, simulations, and tools to address real-world problems.

Students will gain an understanding of how to generate digital content for Virtual Reality platforms; become knowledgeable about versatile file formats, content pipelines, hardware platforms, and industry standards; understand methods of iterative design and the creation of functional prototypes in this medium; apply what is learned in the lecture section of the course to determine what is possible, what is marketable, and what distribution methods are available for this platform; and become familiar with documenting their design process and pitching their ideas to others, both receiving and providing quality feedback.


UARTS 260 – Empathy in Pointclouds

More Info Here
Contact with Questions:
Dawn Gilpin
dgilpin@umich.edu

Empathy In Point Clouds: Spatializing Design Ideas and Storytelling through Immersive Technologies integrates LiDAR scanning, photogrammetry, and Unreal Engine into education, expanding the possible methodologies and processes of architectural design. Entering our third year of the FEAST program, we turn our attention to storytelling and worldbuilding using site-specific point cloud models as the context for our narratives. This year the team will produce 1-2 spatial narratives for the three immersive technology platforms we are working with: the Meta Quest VR headset, the MIDEN VR CAVE, and the LED stage.


ARTDES 217 – Bits and Atoms

More Info Here
Contact with Questions:
Sophia Brueckner
sbrueckn@umich.edu

This is an introduction to digital fabrication within the context of art and design. Students learn about the numerous types of software and tools available and develop proficiency with the specific software and tools at Stamps. Students discuss the role of digital fabrication in creative fields.


ARTDES 420 – Sci-Fi Prototyping

More Info Here
Contact with Questions:
Sophia Brueckner
sbrueckn@umich.edu

This course ties science fiction with speculative/critical design as a means to encourage the ethical and thoughtful design of new technologies. With a focus on the creation of functional prototypes, this course combines the analysis of science fiction with physical fabrication or code-based interpretations of the technologies they depict.


SI 559 – Introduction to AR/VR Application Design

More Info Here
Contact with Questions:
Michael Nebeling
nebeling@umich.edu

This course will introduce students to Augmented Reality (AR) and Virtual Reality (VR) interfaces. This course covers basic concepts; students will create two mini-projects, one focused on AR and one on VR, using prototyping tools. The course requires neither special background nor programming experience.


FTVM 394 / DIGITAL 394 – Topics in Digital Media Production, Virtual Reality

More Info Here
Contact with Questions:
Yvette Granata
ygranata@umich.edu

This course provides an introduction to key software tools, techniques, and fundamental concepts supporting digital media arts production and design. Students will learn and apply the fundamentals of design and digital media production with software applications and web-based coding techniques, and study the principles of design that translate across multiple forms of media production.


UARTS 260/360/460/560 – THE BIG CITY: Lost & Found in XR

More Info Here
Contact with Questions:
Matthew Solomon & Sara Eskandari
mpsolo@umich.edu / seskanda@umich.edu

No copies are known to exist of the 1928 lost film THE BIG CITY; only still photographs, a cutting continuity, and a detailed scenario of the film survive. This is truly a shame, because the film featured a critical mass of Black performers, something extremely uncommon at the time. Using Unreal Engine, detailed 3D model renderings, and live performance, students will take users back in time into the fictional Harlem Black Bottom cabaret and clubs shown in the film. Students will experience working in a small game development team to create a high-fidelity historical recreation of the sets using 3D modeling, 2D texturing, level design, and game development pipelines. They will work in a unique media pipeline of game design for live performance and cutting-edge virtual production. The project will also dedicate focus to detailed documentation, honoring the preservation work on THE BIG CITY that allows us to attempt this endeavor and the Black history that fuels it.


MOVESCI 313 – The Art of Anatomy

Contact with Questions:
Melissa Gross & Jenny Gear
mgross@umich.edu / gearj@umich.edu

Learn about human anatomy and how it has been taught throughout history, across a variety of mediums, including the recent adoption of XR tools. Students will get hands-on experience integrating and prototyping AR and VR visualization technologies for medical and anatomical study.


ARCH 565 – Research in Environmental Technology

Contact with Questions:
Mojtaba Navvab
moji@umich.edu

The focus of this course is an introduction to research methods in environmental technology. Qualitative and quantitative research results are studied with regard to their impact on architectural design. Each course participant undertakes an investigation in a selected area of environmental technology. The experimental approach may use physical modeling, computer simulation, or other appropriate methods, including VR.


FTVM 455.004 – Topics in Film: Eco Imaginations
WGS 412.001 – Fem Art Practices

Contact with Questions:
Petra Kuppers
petra@umich.edu

These courses will include orientations to XR technologies and sessions leveraging Unreal Engine and Quixel 3d assets to create immersive virtual reality environments.

Security Robots Study


Using XR to conduct studies in robotics

Maredith Byrd


Xin Ye is a master’s student at the University of Michigan School of Information. She approached the Duderstadt Center with her master’s thesis defense project to test the favorability of humanoid robots. Stephanie O’Malley at the Visualization Studio helped Xin develop a simulation featuring three types of security robots with varying features, to see whether a more humanoid robot is viewed more favorably.

Panoramic of Umich Hallway

The simulation’s goal is to make participants feel as though they are interacting with a real robot standing in front of them, so the MIDEN was the perfect tool for this experiment. The MIDEN (Michigan Immersive Digital Experience Nexus) is a 10 x 10 x 10-foot cube that relies on projections, letting the user walk naturally through a virtual environment. A highly detailed environment is constructed in Unreal Engine and projected into the MIDEN, allowing the user to see their own physical body within the projected digital world.

Panoramic of the MIDEN

Users step into the MIDEN and, by wearing 3D glasses, are immersed in a digital environment that recreates common campus locations, such as a university hallway/commons area or an outdoor parking lot. After a short while, the participant attracts the attention of the security robot, which approaches to question them.

Setting up the MIDEN

Xin Ye then triggers the appropriate response so users believe the robot is responding intelligently. Each robot was configured with different triggerable answers that Xin Ye could initiate from behind the curtains of the MIDEN. This technique is referred to in studies as “Wizard of Oz”: the participant thinks the projected robot possesses artificial intelligence, just as a real robot in this situation would, when in reality a human is choosing the appropriate response.
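In software terms, a Wizard-of-Oz setup can be as simple as a lookup from operator keypresses to scripted responses. The Python sketch below is a minimal illustration, not the study’s actual implementation (which drove an Unreal Engine scene in the MIDEN); the key bindings and response lines are hypothetical:

```python
# Minimal "Wizard of Oz" operator console: a hidden human picks which
# pre-scripted robot response to trigger, so the participant believes
# the robot is responding intelligently on its own.
# Key bindings and response lines are hypothetical examples.

CANNED_RESPONSES = {
    "1": "Hello. May I see your MCard, please?",
    "2": "Please put on a face mask.",
    "3": "Have you seen anything suspicious nearby?",
    "4": "Thank you for your cooperation. Have a good day.",
}

def trigger_response(key: str) -> str:
    """Return the scripted line for the operator's keypress.

    In the real study this would instead fire an animation/audio cue
    in the projected Unreal Engine scene; unknown keys do nothing.
    """
    return CANNED_RESPONSES.get(key, "")

# Example: the operator presses "1" as a participant approaches.
print(trigger_response("1"))
```

Keeping every response pre-scripted is what makes the deception consistent across participants, which matters when comparing conditions.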

Knightscope
Ramsee
Pepper

This project aimed to evaluate human perception of different types of security robots (some more humanoid than others) to see if a more humanoid robot was viewed more favorably. Three types of robots were used: Knightscope, Ramsee, and Pepper. Knightscope is a cone-shaped robot that lacks any humanoid features. Ramsee is a little more humanoid, with simple facial features, while Pepper is the most humanoid, with more complex features as well as arms and legs.

Participants interacted with one of the three robot types. The robot approached the participant in the MIDEN and questioned them, asking them to present an MCard, to put on a face mask, or whether they had witnessed anything suspicious. To give each robot a fair chance, all three used the same “Microsoft David” automated male voice. Once the dialogue chain was complete, the robot thanked the participant and moved away. The participant then removed the 3D glasses and was taken to another location in the building for an exit interview about their interactions with the robot. Any participant who realized a human was controlling the robot was disqualified from the study.

Knightscope in Hallway
Ramsee in Hallway

Xin Ye presented her findings in a paper titled, “Human Security Robot Interaction and Anthropomorphism: An Examination of Pepper, RAMSEE, and Knightscope Robots” at the 32nd IEEE International Conference on Robot & Human Interactive Communication in Busan, South Korea.

Fall 2023 XR Classes


Looking for Classes that incorporate XR?

EECS 498 – Extended Reality & Society


Credits : 4
More Info Here
Contact with Questions:
Austin Yarger
ayarger@umich.edu

From pediatric medical care, advanced manufacturing, and commerce to film analysis, first-responder training, and unconscious bias training, the fledgling, immersive field of extended reality may take us far beyond the realm of traditional video games and entertainment, and into the realm of diverse social impact.

“EECS 498 : Extended Reality and Society” is a programming-intensive senior capstone / MDE course that empowers students with the knowledge and experience to…

    • Implement medium-sized virtual and augmented reality experiences using industry-standard techniques and technologies.
        • Game Engines (Unreal Engine / Unity), Design Patterns, Basic Graphics Programming, etc.
    • Design socially-conscious, empowering user experiences that engage diverse audiences.
    • Contribute to cultural discourse on the hopes, concerns, and implications of an XR-oriented future.
        • Privacy / security concerns, XR film review (The Matrix, Black Mirror, etc.)
    • Carry out user testing and employ feedback after analysis.
        • Requirements + Customer Analysis, Iterative Design Process, Weekly Testing, Analytics, etc.
    • Work efficiently in teams of 2-4 using agile production methods and software.
        • Project Management Software (Jira), Version Control (Git), Burndown Charting and Resource Allocation, Sprints, etc.

Students will conclude the course with at least three significant, socially-focused XR projects in their public portfolios.

 

ENTR 390 – Intro to Entrepreneurial Design, VR Lab


Credits : 3
More Info Here
Contact with Questions:
Sara ‘Dari’ Eskandari
seskanda@umich.edu

In this lab, you’ll learn how to develop virtual reality content for immersive experiences in the Oculus Rift, MIDEN, or for Virtual Production using Unreal Engine and 3D modeling software. You’ll also be introduced to asset creation and scene assembly by bringing assets into the Unreal Engine and creating interactive experiences. At the end of the class you’ll be capable of developing virtual reality experiences, simulations, and tools to address real-world problems.

Students will gain an understanding of how to generate digital content for Virtual Reality platforms; become knowledgeable about versatile file formats, content pipelines, hardware platforms, and industry standards; understand methods of iterative design and the creation of functional prototypes in this medium; apply what is learned in the lecture section of the course to determine what is possible, what is marketable, and what distribution methods are available for this platform; and become familiar with documenting their design process and pitching their ideas to others, both receiving and providing quality feedback.

 

FTVM 307 – Film Analysis for Filmmakers


Credits : 3
More Info Here
Contact with Questions:
Matthew Solomon
mpsolo@umich.edu

Filmmakers learn about filmmaking by watching films. This course reverse-engineers movies to understand how they were produced. The goal is to learn from a finished film how the scenes were produced in front of the camera and microphone, and how the captured material was edited. Students in this class use VR to reimagine classic film scenes, giving them the ability to record and edit footage on a virtual set.

 

UARTS 260 / EIPC FEAST – Empathy in Pointclouds


Credits: 1-5
More Info Here
Contact with Questions:
Dawn Gilpin
dgilpin@umich.edu

Empathy In Point Clouds: Spatializing Design Ideas and Storytelling through Immersive Technologies integrates LiDAR scanning, photogrammetry, and Unreal Engine into education, expanding the possible methodologies and processes of architectural design. Entering our third year of the FEAST program, we turn our attention to storytelling and worldbuilding using site-specific point cloud models as the context for our narratives. This year the team will produce 1-2 spatial narratives for the three immersive technology platforms we are working with: VR headset, MIDEN VR CAVE, and the LED stage.

 

 

ARTDES 217 – Bits and Atoms


Credits: 3
More Info Here
Contact with Questions:
Sophia Brueckner
sbrueckn@umich.edu

This is an introduction to digital fabrication within the context of art and design. Students learn about the numerous types of software and tools available and develop proficiency with the specific software and tools at Stamps. Students discuss the role of digital fabrication in creative fields.

 

ARTDES 420 – Sci-Fi Prototyping


Credits: 3
More Info Here
Contact with Questions:
Sophia Brueckner
sbrueckn@umich.edu

This course ties science fiction with speculative/critical design as a means to encourage the ethical and thoughtful design of new technologies. With a focus on the creation of functional prototypes, this course combines the analysis of science fiction with physical fabrication or code-based interpretations of the technologies they depict.

 

SI 559 – Introduction to AR/VR Application Design

Credits: 3
More Info Here
Contact with Questions:
Michael Nebeling
nebeling@umich.edu

This course will introduce students to Augmented Reality (AR) and Virtual Reality (VR) interfaces. This course covers basic concepts; students will create two mini-projects, one focused on AR and one on VR, using prototyping tools. The course requires neither special background nor programming experience.

 

FTVM 394 – Digital Media Production, Virtual Reality

Credits: 4
More Info Here
Contact with Questions:
Yvette Granata
ygranata@umich.edu

This course provides an introduction to key software tools, techniques, and fundamental concepts supporting digital media arts production and design. Students will learn and apply the fundamentals of design and digital media production with software applications and web-based coding techniques, and study the principles of design that translate across multiple forms of media production.

Learning to Develop for Mixed Reality – The ENTR 390 “VR Lab”


XR Prototyping

For the past several years, students enrolled in the Center for Entrepreneurship’s Intro to Entrepreneurial Design Virtual Reality course have been introduced to programming and content-creation pipelines for XR development using a variety of Visualization Studio resources. Their goal? Create innovative applications for XR. From creating video games to changing the way class material is accessed with XR-capable textbooks, if you have an interest in learning how to make your own app for the Oculus Rift, MIDEN, or even a smartphone, this might be a class to enroll in. Students interested in this course are not required to have any prior programming or 3D modeling knowledge, and if you’ve never used a VR headset, that’s OK too. This course will teach you everything you need to know.

Henry Duhaime presents his VR game for Oculus Rift, in which players explore the surface of Mars in search of a missing NASA rover.
Michael Meadows prototypes AR capable textbooks using a mobile phone and Apple’s ARKit.

Robots Who Goof: Can We Trust Them?

Robotics in Unreal Engine


EVERYONE MAKES MISTAKES

Laurel Thomas
ltgnagey@umich.edu


The human-like, android robot used in the virtual experimental task of handling boxes.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that certain approaches directed at human co-workers are better than others and often are impacted by how the robots look.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For their study, published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).

The mechanical arm robot used in the virtual experiment.

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic or more humanlike, the other more mechanical in appearance.

Sara Eskandari and Stephanie O’Malley of the Emerging Technology Group at U-M’s James and Anne Duderstadt Center helped develop the experimental virtual platform.

The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).

Previous studies have examined apologies, denials, and promises as factors in trust or trustworthiness, but this is the first to look at explanations as a repair strategy. Explanations had the highest impact on integrity, regardless of the robot’s appearance.

When the robot was more humanlike, trust was even easier to restore for integrity when explanations were given and for benevolence when apologies, denials and explanations were offered.

As in the previous research, apologies from robots produced higher integrity and benevolence than denials. Promises outpaced apologies and denials when it came to measures of benevolence and integrity.

Esterwood said this study is ongoing with more research ahead involving other combinations of trust repairs in different contexts, with other violations.

“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” Esterwood said. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

This originally appeared on Michigan News.


Behind the Scenes: Re-creating Citizen Kane in VR


inside a classic

Stephanie O’Malley


Students in Matthew Solomon’s classes are used to critically analyzing film. Now they get the chance to be the director for arguably one of the most influential films ever produced: Citizen Kane.

Using an application developed at the Duderstadt Center with grant funding provided by LSA Technology Services, students are placed in the role of the film’s director and can record a prominent scene from the movie with a virtual camera. The film set, which no longer exists, has been meticulously re-created in black-and-white CGI using reference photographs of the original set, with a CGI Orson Welles acting out the scene on repeat. His actions were performed by motion-capture actor Matthew Henerson, chosen for his likeness to Welles, and the Orson avatar was generated from a photogrammetry scan of Henerson.

Top down view of the CGI re-creation of the film set for Citizen Kane

To determine a best estimate of the set’s scale for 3D modeling, the team analyzed the original film footage: doorways were measured, actor heights compared, and footsteps counted. With feedback from Citizen Kane expert Harlan Lebo, fine details, down to the topics of the books on the bookshelves, could be determined.

Archival photograph provided by Vincent Longo of the original film set

Motion capture actor Matthew Henerson was flown in to play the role of the digital Orson Welles. In a carefully choreographed session directed by Matthew Solomon’s PhD student, Vincent Longo, the iconic scene from Citizen Kane was re-enacted while the original footage played on an 80″ TV in the background, ensuring every step aligned perfectly with the original.

Actor Matthew Henerson in full mocap attire amidst the makeshift set for Citizen Kane – Props constructed using PVC. Photo provided by Shawn Jackson.

The boundaries of the set were taped on the floor so the data could be aligned to the digitally re-created set. Eight Vicon motion capture cameras, the same kind used throughout Hollywood for films like Lord of the Rings and Planet of the Apes, formed a circle around the makeshift set. These cameras rely on infrared light reflected off tiny balls affixed to the motion capture suit to track the actor’s motion. Any props used during the recording were carefully constructed out of cardboard and PVC (later to be 3D modeled) so as not to obstruct his movements. Re-creating the scene’s three minutes of footage took three days, comprising over 100 individual mocap takes and several hours of footage, which were then compared for accuracy and stitched together to complete the full route Orson travels through the environment.

Matthew Henerson
Orson Welles

Matthew Henerson then swapped his motion capture suit for an actual suit, similar to that worn by Orson in the film, and underwent 3D scanning using the Duderstadt Center’s photogrammetry resources.

Actor Matthew Henerson wears asymmetrical markers to assist the scanning process

Photogrammetry is a method of scanning existing objects or people, commonly used in Hollywood and throughout the video game industry to create CGI likenesses of famous actors. The technique has been used in films like Star Wars (an actress similar in appearance to Carrie Fisher was scanned and then further sculpted to create a more youthful Princess Leia), and entire studios are now devoted to photogrammetry scanning. The process relies on several digital cameras surrounding the subject and taking simultaneous photographs.

Matthew Henerson being processed for Photogrammetry

The photos are submitted to software that analyzes them on a per-pixel basis, looking for similar features across multiple photos. When a feature is recognized, it is triangulated using the focal length of the camera and its position relative to other identified features, allowing millions of tracking points to be generated. From these an accurate 3D model can be produced, with the original digital photos mapped to its surface to preserve photo-realistic color. These models can be further manipulated: sometimes they are sculpted by an artist, or, with the addition of a digital “skeleton,” they can be driven by motion data to become a fully articulated digital character.
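The triangulation step described above can be sketched in a few lines. The following is a minimal illustration using the standard direct linear transform (DLT), not the actual software’s pipeline; the camera matrices and the point are toy values invented for the example.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from a feature matched in two camera views (DLT).

    P1, P2 : 3x4 camera projection matrices (intrinsics times pose)
    x1, x2 : (u, v) pixel coordinates of the matched feature in each view
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (least squares via SVD)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along X
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views to fake a matched feature
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

print(triangulate_point(P1, P2, x1, x2))  # recovers approximately [0.5, 0.2, 4.0]
```

A real reconstruction repeats this for millions of matched features across dozens of photos, which is what yields the dense point cloud the model is built from.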

The 3D-modeled scene and scanned actor model were joined with mocap data and brought into the Unity game engine to develop the functionality students would need to film within the 3D set. A virtual camera was developed with all of the same settings you would find on a film camera from that era. When viewed in a virtual reality headset like the Oculus Rift, Matthew Solomon’s students can pick up the camera and physically move around to position it at different locations in the CGI environment, often capturing shots that would be difficult to achieve on a conventional film set. The footage students film within the app can be exported as MP4 video and then edited in their editing software of choice, just like any other camera footage.

After Matthew Solomon used the application in his Winter 2020 course, his project with the Duderstadt Center was on display as part of iLRN’s 2020 Immersive Learning Project Showcase & Competition. With Covid-19 making the conference a remote experience, the Citizen Kane project could be experienced in virtual reality by conference attendees using the FrameVR platform. Highlighting innovative ways of teaching with VR technologies, attendees from around the world were able to learn about the project and watch student edits made using the application.

Citizen Kane on display for iLRN’s 2020 Immersive Learning Project Showcase & Competition using Frame VR

Novels in VR – Experiencing Uncle Tom’s Cabin


A Unique Perspective

Stephanie O’Malley


This past semester, English Professor Sara Blair taught a course at the University titled “The Novel and Virtual Realities.” The purpose of this course was to expose students to different methods of analyzing novels and to ways of understanding them from different perspectives by utilizing platforms like VR and AR.

Designed as a hybrid course, her class was split between a traditional classroom environment and an XR lab, providing a comparison between traditional learning methods and more hands-on, experiential lessons through the use of immersive, interactive VR and AR simulations.

As part of her class curriculum, students were exposed to a variety of experiential XR content. Using the Visualization Studio’s Oculus Rifts, her class was able to view Dr. Courtney Cogburn’s “1000 Cut Journey” installation, a VR experience that puts viewers in the shoes of a Black American man and lets them see firsthand how racism affects every facet of his life. They also had the opportunity to view Asad J. Malik’s “Terminal 3” using augmented reality devices like the Microsoft HoloLens. Students engaging with Terminal 3 see how Muslim identities in the U.S. are approached through the lens of an airport interrogation.

Wanting to create a similar experience for her students at the University of Michigan, Sara approached the Duderstadt Center about the possibility of turning another novel into a VR experience: Uncle Tom’s Cabin.

She wanted her students to understand the novel from the perspective of its lead character, Eliza, during the pivotal moment where, as a slave, she tries to escape her captors and reach freedom. But she also wanted to give her students the perspectives of the slave owner and the other slaves tasked with her pursuit, as well as that of an innocent bystander watching the scene unfold.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin

Using Unreal Engine, the Duderstadt Center was able to make this a reality. An expansive winter environment was created based on imagery detailed in the novel, and CGI characters for Eliza and her captors were produced and then paired with motion capture data to drive their movements. When students put on the Oculus Rift headset, they can choose to experience the moment of escape through Eliza’s perspective, her captors’, or a bystander’s. To better evaluate which components contributed to students’ feelings during the simulation, versions of these scenarios were provided with and without sound. With sound enabled as Eliza, you hear footsteps in the snow gaining on you, the crack of the ice beneath your feet as you leap across a tumultuous river, and the barking of a vicious dog at your heels, all adding to the tension of the moment. While viewers are able to freely look around the environment, they are passive observers: they have no control over the choices Eliza makes or where she can go.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin – Freedom for Eliza lies on the other side of the frozen Ohio river.

The scene ends with Eliza reaching freedom on the opposite side of the Ohio River, leaving her pursuers behind. The students’ experience with the VR version of the novel was followed by a deep class discussion of how the scene felt in VR versus how it felt reading the same passage in the book. Some students wondered what it might feel like to control the situation and decide where Eliza goes, or, as a bystander, to move freely through the environment as the scene plays out, choosing which party (Eliza or her pursuers) was of most interest to follow in that moment.

While Sara’s class has concluded for the semester, you can still try the experience for yourself: Uncle Tom’s Cabin is available to demo on all Visualization Studio workstations equipped with an Oculus Rift.

Using Mobile VR to Assess Claustrophobia During an MRI


new methods for exposure therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Recognizing that patients undergoing MRI scans often experience claustrophobia, they wanted to use VR simulations to introduce prospective patients to what being inside an MRI machine might feel like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While there are 360 videos that convey what an MRI might look like, these fail to address the major factor contributing to claustrophobia: the perceived confined space within the bore. 360 videos tend to skew the environment, making it seem farther away than it would be in reality, and thereby fail to induce the feelings of claustrophobia the MRI bore would actually produce. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

In order to simulate the effects of an MRI accurately, a CGI MRI machine was constructed and ported to the Unity game engine. A customizable avatar representing the viewer’s body was added to give viewers a sense of self. When a VR headset is worn, the viewer’s perspective allows them to see their avatar body and the real proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said throughout the course of a real MRI, with the intimidating boom of the machine occurring as the simulated scan proceeds.

Two modes are provided within the app, feet-first and head-first, to accommodate the most common scanning procedures that have been shown to induce claustrophobia.

In order to make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Customer Discovery Using 360 Video


Year after year, students in Professor Dawn White’s Entrepreneurship 411 course are tasked with “customer discovery”: a process in which students interested in creating a business interview professionals in a given field to assess their needs, how products the students develop could address those needs, and how they could alleviate some of the difficulties the professionals encounter on a daily basis.

Often when given this assignment, students would defer to their peers for feedback instead of reaching out to strangers working in the fields of interest. Because this demographic is so similar to the students themselves, the result was a fairly biased outcome that didn’t truly get to the root issue of why someone might want or need a specific product. Looking for an alternative approach, Dawn teamed up with her longtime friend Professor Alison Bailey, who teaches DEI at the University, and with Aileen Huang-Saad from Biomedical Engineering, and approached the Duderstadt Center with their idea: what if students could interact with a simulated, more diverse professional to conduct their customer discovery?

After exploring the many routes development could take, including motion capture-driven CGI avatars, 360 video was chosen as the platform for the simulation. 360 video viewed within an Oculus Rift VR headset ultimately gave the highest sense of realism and immersion when conducting an interview, which was important for making the interview process feel authentic.

Up until this point, 360 videos were largely passive experiences: they did not allow users to tailor the experience based on their choices or interact with the scene in any way. This customer discovery project required the 360 videos to be responsive; when a student asked a recognized customer discovery question, the appropriate video response needed to be triggered. Achieving this required both programming logic to trigger different videos and integrated voice recognition, so students could ask a question out loud and have their speech recognized within the application.

Dawn and Alison sourced three professionals to serve as their simulated actors for this project:

Fritz discusses his career as an IT professional

Fritz – Fritz is a young black man with a career as an IT professional


Cristina – Cristina is a middle-aged woman with a noticeable accent, working in education


Charles – Charles is a white adult man employed as a barista

These actors were chosen for their authenticity and diversity, having qualities that might lead interviewers to make certain assumptions or expose biases in their interactions. With the help of talented students at the Visualization Studio, the professionals were filmed responding to various customer discovery questions using the Ricoh Theta 360 camera and a spatial microphone (which allows for spatial audio in VR, so the sound feels like it is coming from the direction where the actor is sitting). For footage of one response to blend with the next, the actors had to remember to return their hands and face to the same pose between responses so the footage could be aligned. They were also filmed giving generic responses to any unplanned questions that might be asked, as well as twiddling their thumbs and patiently waiting – footage that could be looped to fill any idle time between questions.

Once the footage was acquired, the frame ranges for each response were noted and passed off to programmers to implement in the Duderstadt Center’s in-house VR rendering software, Jugular. As an initial prototype of the concept, the application was intended to run as a proctored simulation: students would wear an Oculus Rift and ask their questions out loud, with a proctor listening in and triggering the appropriate actor response using keyboard controls. For a more natural feel, Dawn was interested in exploring voice recognition to make the process more automated.

Within Jugular, students view an interactive 360 video in which they are seated across from one of three professionals available for interviewing. Using the embedded microphone in the Oculus Rift, they ask questions that are recognized using Dialogflow, which in turn triggers the appropriate video response, allowing students to conduct mock interviews.

With Dawn employing some computer science students to tackle the voice recognition element over the summer, the team was able to integrate this feature into Jugular using a Dialogflow agent and Python scripts. Students could now be immersed in an Oculus Rift, speaking to an actor filmed in 360 video, and have their voice interpreted as they asked their questions out loud using the embedded microphone on the Rift.
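The core of the triggering logic, mapping a recognized question to the frame range of a filmed response, can be sketched roughly as follows. This is a hypothetical illustration: the intent names, frame ranges, and the simple keyword matcher standing in for the real Dialogflow agent are all invented for the example, not taken from Jugular.

```python
# Frame ranges noted for each filmed response (hypothetical values,
# analogous to the ranges programmers logged from the 360 footage)
RESPONSES = {
    "ask_role":       (120, 540),   # "What do you do day to day?"
    "ask_pain_point": (541, 980),   # "What's the hardest part of your job?"
    "fallback":       (981, 1100),  # generic response to unplanned questions
    "idle":           (0, 119),     # patiently-waiting loop between questions
}

# Toy keyword lists; a real agent would use trained intent matching instead
KEYWORDS = {
    "ask_role": ["do", "role", "day"],
    "ask_pain_point": ["hardest", "difficult", "problem"],
}

def detect_intent(utterance):
    """Stand-in for a Dialogflow detect-intent call on transcribed speech."""
    words = utterance.lower().split()
    for intent, keys in KEYWORDS.items():
        if any(k in words for k in keys):
            return intent
    return "fallback"  # unplanned questions get a generic filmed response

def frames_for(utterance):
    """Return the video frame range to play for a spoken question."""
    return RESPONSES[detect_intent(utterance)]

print(frames_for("What is the hardest part of your job?"))  # (541, 980)
```

The same structure works whether the intent comes from a proctor’s keypress or from speech recognition, which is what made it straightforward to swap the proctored prototype for the automated version.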

Upon its completion, the Customer Discovery application was piloted in the Visualization Studio with Dawn’s students during the Winter 2019 semester.