Engineering Grants for XR

The Enhancing Engineering Education Grants Program is designed to support innovative strategies for engaging and supporting all learners in Michigan Engineering undergraduate courses. This program springs from a collaboration among ADUE, ADGPE, CAEN, CRLT-Engin, and Nexus. Proposals will be invited across the range of innovations in engineering education, including instructional practices, course design and content, and instructional technology.

As part of the initial Enhancing Education using Technology (EET) proposal to the College to support the instructional needs of faculty, grants were offered to fund innovative ideas that instructors could not implement without additional support. The first year of the grants program was FY23, and all grant money was awarded to faculty. It included three major grants of $50K each on the topics of XR, DEI, and Tandem, along with additional smaller grants to faculty. At the completion of this first year, the team used what it had learned to propose improvements and changes to the program.

For AY 2024-2025, there are three grants available to support instructional faculty members:

Education Innovation Grants

Grants of up to $10K are available to COE faculty & staff

About the Grant

Grants of up to $10K are available to individual or small groups of Michigan Engineering instructional faculty and staff members seeking to implement innovative teaching methods and/or tools.

Group 2 applications are now being accepted. This call for proposals is open to all eligible applicants and does not require a prior Group 1 submission.


Proposal Evaluation Criteria
  • Applies a novel method or tool to achieve objectives
  • Reflects innovation in teaching methods or approaches
  • For online courses, utilizes the Quality Matters framework in partnership with Nexus
  • Involves partnering with Nexus or CRLT-Engin to co-teach a new faculty development workshop
  • Builds upon evidence-based best practices for enhancing student learning
  • Promotes equitable instruction for all learners
  • Implements practices or tools that have the potential for great impact (either a large impact on a small population, or something that could be applied to a larger population at Michigan Engineering)
  • Addresses work for which other funding opportunities do not exist
  • Achieves synergy with goals, strengths, and ongoing work of the College of Engineering, especially as it relates to Michigan Engineering’s Strategic Vision

Group 2 applications close Wednesday, May 1, 2024

Project Statement:

  • Clearly describe the proposed project
  • Explain the value of the project
  • Identify the specific innovation and its relation to evidence-based practices
  • Explain how the project supports equitable instruction and enhanced student learning
  • Discuss the project’s potential for application in broader contexts

Project Evaluation Plan:
Explain how the success of this project will be evaluated, documented, and disseminated. Approaches might include midterm course assessments, focus groups, and surveys, among others.

Budget Request:

  • Graduate or undergraduate student salaries
  • Materials and supplies
  • Project evaluation expenses
  • Travel and registration fees for teaching-related conferences, seminars or workshops
  • Faculty member summer salary (up to $2K of the project’s total budget)

Timeline:
Submissions will be accepted until Wednesday, May 1, 2024, with funding decisions announced in late May.

Strategic Technology Grants

COE Projects Focused on XR, online/hybrid learning, and/or generative artificial intelligence

About the Grant

Grants of up to $50,000 are available to teams of at least three Michigan Engineering instructional faculty and staff members to implement innovative teaching methods and/or tools that require a larger investment of time, resources, and collaboration to deploy than is available via Education Innovation Grants. Projects should focus on the strategic themes of XR, online/hybrid learning, and/or generative artificial intelligence.


Proposal Evaluation Criteria
  • Applies a novel method, modality or tool to achieve objectives
  • Reflects innovation in teaching methods or approaches
  • Builds upon evidence-based best practices for enhancing student learning
  • Promotes equitable instruction for all learners
  • If online, leverages the Quality Matters rubric and best practices with online course design and development
  • Implements practices or tools that have the potential for great impact (either a large impact on a small population, or something that could be applied to a larger population at Michigan Engineering)
  • Achieves synergy with goals, strengths, and ongoing work of ADUE, ADGPE, CAEN, CRLT-Engin, Nexus, and/or the broader College of Engineering, especially as it relates to Michigan Engineering’s Strategic Vision

Applications closed March 4, 2024

Identify your proposal’s strategic theme:

  • Online/hybrid learning
  • Generative artificial intelligence
  • XR

Project Statement:

  • Clearly describe the proposed project
  • Explain the value of the project
  • Identify the specific innovation and its relation to evidence-based practices
  • Explain how the project supports equitable instruction and enhanced student learning
  • If online, describe how the course design and development effort will leverage the Quality Matters rubric
  • Discuss the project’s potential for great impact
  • Describe your goals for collaboration with at least one E3 grant sponsor (ADUE, ADGPE, CAEN, CRLT-Engin, and/or Nexus)

Project Evaluation Plan:
Explain how the success of this project will be evaluated, documented, and disseminated.

Budget Request:

  • Graduate or undergraduate student salaries
  • Instructional software and classroom technology
  • Materials and supplies
  • Project evaluation expenses
  • Travel and registration fees for teaching-related conferences, seminars or workshops
  • Faculty member summer salary (up to $2K of the project’s total budget)

Team Roster:
Provide a list of all team members, with descriptions of their respective roles and very brief bios.

Timeline:

Submissions are due on Monday, March 4, 2024, with funding decisions announced in April.

Software Pilot Grants

Grant funding up to $10K for COE faculty & staff seeking to pilot instructional software

About the Grant

Grants of up to $10K are available to instructional faculty and staff members seeking to pilot innovative and results-oriented instructional software that has the potential to improve teaching and learning in Michigan Engineering. Proposals may be submitted by individuals requesting software for a specific class or a team of faculty members requesting software for a group of classes.

In the spirit of innovation, all ideas are welcome. Proposals that call for the use of collaborative teaching and learning strategies are encouraged. Priority will be given to projects that, if proven successful, can be replicated throughout the College.

Please note that there are many routes for procuring software licenses at the University of Michigan. We encourage you to reach out to our team at e3grants.engin@umich.edu to help determine if this grant program is appropriate for your request before submitting a proposal.


REQUIRED DELIVERABLES
  • Presentation to the Michigan Engineering faculty community of a case study describing your application of the software and how it impacted your students’ learning objectives
  • Engagement with CAEN on evaluation of software for possible college adoption
  • Acting as a faculty advocate for this software and sharing how you are using it in your class

Applications for Fall 2024 close April 1, 2024

Course Information:
Logistical course details, including how frequently the course is taught, an enrollment summary, etc.

Learning Gaps:
Describe the learning gap(s) you have identified in your lesson/module/unit/course.

Teaching Intervention (Pedagogical Support):
Explain the teaching and technology intervention(s) that will close the stated learning gaps. Identify the evidence-based practices that support the efficacy of the proposed software solution.

Comparative Tool Evaluation:

  • Identify 3-4 comparable software tools (including your proposed tool) that could fill the established learning gaps.
  • List the criteria you will use to evaluate the 3-4 comparable tools to inform your decision making.

Project Evaluation Plan:

  • Explain how the success of this software will be evaluated, documented, and disseminated. Approaches might include midterm course assessments, focus groups, and surveys, among others.
  • Explain how you will evaluate whether this software met your needs and those of your students. How will you determine whether it has improved the educational experience?

Budget Request:
Provide the number of licenses, estimated cost per license, and estimated total cost for this software.

Timeline:
To use the software for instruction in the Fall 2024 term, proposals must be submitted by April 1, 2024.

Recruiting Unity VR programmers to Evaluate Sound Customization Toolkit for Virtual Reality Applications


Participate in a study by the EECS Accessibility Lab

The EECS Accessibility Lab needs your help evaluating a new Sound Accessibility toolkit for Virtual Reality!

Our research team is studying how sound customization tools, like modulating frequency or dynamically adjusting volume, can enhance the VR experience for deaf and hard-of-hearing (DHH) people. We are recruiting adult participants (18 or older) who have at least 1 year of experience working with Unity VR and at least 2 previous projects with sound into which our toolkit could be added.
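
To make the idea concrete, here is a minimal, hypothetical Unity (C#) sketch of the two adjustments described above, a low-pass frequency shift and distance-based dynamic volume, applied to a single audio source. This is an illustration only, not the lab’s actual toolkit API; the component name, default values, and preference fields are assumptions.

```csharp
using UnityEngine;

// Illustrative sketch only (not the EECS Accessibility Lab's actual toolkit API):
// frequency modulation and dynamic volume adjustment on one Unity AudioSource.
[RequireComponent(typeof(AudioSource), typeof(AudioLowPassFilter))]
public class AccessibleSoundSource : MonoBehaviour
{
    [Range(0f, 2f)] public float volumeBoost = 1.5f;   // hypothetical per-user preference
    public float lowPassCutoffHz = 4000f;               // push emphasis toward lower frequencies
    public Transform listener;                           // e.g. the VR camera rig

    private AudioSource source;
    private AudioLowPassFilter lowPass;

    void Start()
    {
        source = GetComponent<AudioSource>();
        lowPass = GetComponent<AudioLowPassFilter>();
        lowPass.cutoffFrequency = lowPassCutoffHz;       // simple frequency customization
    }

    void Update()
    {
        // Dynamic volume: boost quiet, distant cues so they are not lost, while
        // clamping so nearby sounds never become uncomfortably loud.
        float distance = Vector3.Distance(listener.position, transform.position);
        source.volume = Mathf.Clamp01(volumeBoost / Mathf.Max(1f, distance));
    }
}
```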

This study will be self-paced, remote, and asynchronous. It will take around 60 – 90 minutes.

In this study, we will collect some demographic information about you (e.g., age, gender) and ask about your experience working with Unity VR. We will then introduce our Sound Customization Toolkit and ask you to apply it to your own project, recording your screen and voice during the implementation process. We will also ask you to complete a form during the study to provide feedback on our toolkit.

After the study, we will compensate you $30 in the form of an Amazon Gift Card for your time.

If you are interested in participating, please fill out this Google Form. For more information, feel free to reach out to Xinyun Cao: xinyunc@umich.edu.

For more details on our work, see our lab’s webpage.

Fall 2023 XR Classes


Looking for Classes that incorporate XR?

EECS 498 – Extended Reality & Society


Credits: 4
More Info Here
Contact with Questions:
Austin Yarger
ayarger@umich.edu

From pediatric medical care, advanced manufacturing, and commerce to film analysis, first-responder training, and unconscious bias training, the fledgling, immersive field of extended reality may take us far beyond the realm of traditional video games and entertainment, and into the realm of diverse social impact.

“EECS 498: Extended Reality and Society” is a programming-intensive senior capstone / MDE course that empowers students with the knowledge and experience to…

    • Implement medium-sized virtual and augmented reality experiences using industry-standard techniques and technologies.
        • Game Engines (Unreal Engine / Unity), Design Patterns, Basic Graphics Programming, etc.
    • Design socially-conscious, empowering user experiences that engage diverse audiences.
    • Contribute to cultural discourse on the hopes, concerns, and implications of an XR-oriented future.
        • Privacy / security concerns, XR film review (The Matrix, Black Mirror, etc.)
    • Carry out user testing and employ feedback after analysis.
        • Requirements + Customer Analysis, Iterative Design Process, Weekly Testing, Analytics, etc.
    • Work efficiently in teams of 2-4 using agile production methods and software.
        • Project Management Software (Jira), Version Control (Git), Burndown Charting and Resource Allocation, Sprints, etc.

Students will conclude the course with at least three significant, socially-focused XR projects in their public portfolios.

 

ENTR 390 – Intro to Entrepreneurial Design, VR Lab


Credits: 3
More Info Here
Contact with Questions:
Sara ‘Dari’ Eskandari
seskanda@umich.edu

In this lab, you’ll learn how to develop virtual reality content for immersive experiences in the Oculus Rift, the MIDEN, or for virtual production, using Unreal Engine and 3D modeling software. You’ll also be introduced to asset creation and scene assembly by bringing assets into the Unreal Engine and creating interactive experiences. At the end of the class you’ll be capable of developing virtual reality experiences, simulations, and tools to address real-world problems.

Students will gain an understanding of how to generate digital content for Virtual Reality platforms; become knowledgeable about versatile file formats, content pipelines, hardware platforms, and industry standards; understand methods of iterative design and the creation of functional prototypes in this medium; apply what is learned in the lecture section of this course to determine what is possible, what is marketable, and which distribution methods are available within this platform; and become familiar with documenting their design process and pitching their ideas to others, receiving and providing quality feedback.

 

FTVM 307 – Film Analysis for Filmmakers


Credits: 3
More Info Here
Contact with Questions:
Matthew Solomon
mpsolo@umich.edu

 Filmmakers learn about filmmaking by watching films. This course reverse engineers movies to understand how they were produced. The goal is to learn from a finished film how the scenes were produced in front of the camera and microphone and how the captured material was edited. Students in this class use VR to reimagine classic film scenes – giving them the ability to record and edit footage from a virtual set.

 

UARTS 260 / EIPC FEAST – Empathy in Pointclouds


Credits: 1-5
More Info Here
Contact with Questions:
Dawn Gilpin
dgilpin@umich.edu

Empathy In Point Clouds: Spatializing Design Ideas and Storytelling through Immersive Technologies integrates LiDAR scanning, photogrammetry, and Unreal Engine into education, expanding the possible methodologies and processes of architectural design. Entering our third year of the FEAST program, we turn our attention to storytelling and worldbuilding using site-specific point cloud models as the context for our narratives. This year the team will produce 1-2 spatial narratives for the three immersive technology platforms we are working with: the VR headset, the MIDEN (VR CAVE), and the LED stage.

 

 

ARTDES 217 – Bits and Atoms


Credits: 3
More Info Here
Contact with Questions:
Sophia Brueckner
sbrueckn@umich.edu

This is an introduction to digital fabrication within the context of art and design. Students learn about the numerous types of software and tools available and develop proficiency with the specific software and tools at Stamps. Students discuss the role of digital fabrication in creative fields.

 

ARTDES 420 – Sci-Fi Prototyping


Credits: 3
More Info Here
Contact with Questions:
Sophia Brueckner
sbrueckn@umich.edu

This course ties science fiction with speculative/critical design as a means to encourage the ethical and thoughtful design of new technologies. With a focus on the creation of functional prototypes, this course combines the analysis of science fiction with physical fabrication or code-based interpretations of the technologies they depict.

 

SI 559 – Introduction to AR/VR Application Design

Credits: 3
More Info Here
Contact with Questions:
Michael Nebeling
nebeling@umich.edu

This course introduces students to Augmented Reality (AR) and Virtual Reality (VR) interfaces. It covers basic concepts; students will create two mini-projects, one focused on AR and one on VR, using prototyping tools. The course requires neither special background nor programming experience.

 

FTVM 394 – Digital Media Production, Virtual Reality

Credits: 4
More Info Here
Contact with Questions:
Yvette Granata
ygranata@umich.edu

This course provides an introduction to key software tools, techniques, and fundamental concepts supporting digital media arts production and design. Students will learn and apply the fundamentals of design and digital media production using software applications and web-based coding techniques, and will study the principles of design that translate across multiple forms of media production.

Learning to Develop for Virtual Reality – The ENTR 390 “VR Lab”

XR Prototyping

For the past several years, students enrolled in the Center for Entrepreneurship’s Intro to Entrepreneurial Design Virtual Reality course have been introduced to programming and content creation pipelines for XR development using a variety of Visualization Studio resources. Their goal? Create innovative applications for XR. From creating video games to changing the way class material is accessed with XR-capable textbooks, if you have an interest in learning how to make your own app for the Oculus Rift, the MIDEN, or even a smartphone, this might be a class to enroll in. Students interested in this course are not required to have any prior programming or 3D modeling knowledge, and if you’ve never used a VR headset, that’s OK too. This course will teach you everything you need to know.

Henry Duhaime presents his VR game for Oculus Rift, in which players explore the surface of Mars in search of a missing NASA rover.
Michael Meadows prototypes AR capable textbooks using a mobile phone and Apple’s ARKit.

Revolutionizing 3D Rotational Angiograms with Microsoft HoloLens

Angiography with HoloLens augmented reality


A NEW WAY TO VISUALIZE THE HEART

Stephanie O’Malley


Just prior to the release of the Microsoft HoloLens 2, the Visualization Studio was approached by Dr. Arash Salavitabar of the U-M C.S. Mott Children’s Hospital with an innovative idea: to use XR to improve the evaluation of patient scans stemming from 3D rotational angiography.

Rotational angiography is an X-ray-based medical imaging technique that allows clinicians to acquire CT-like 3D volumes during hybrid surgery or a catheter intervention. The technique is performed by injecting contrast into the pulmonary artery and then rapidly rotating a cardiac C-arm. Clinicians are then able to view the resulting data on a computer monitor, manipulating images of the patient’s vasculature. This is used to evaluate how a procedure should move forward and to aid in communicating that plan with the patient’s family.

With augmented reality devices like the HoloLens 2, new possibilities for displaying and manipulating patient data have emerged, along with the potential for collaborative interactions with patient data among clinicians.

What if, instead of viewing a patient’s vasculature as a series of 2D images displayed on a computer monitor, you and your fellow doctors could view it more like a tangible 3D object placed on the table in front of you? What if you could share in the interaction with this 3D model — rotating and scaling the model, viewing cross sections, or taking measurements, to plan a procedure and explain it to the patient’s family?

This has now been made possible with a Faith’s Angels grant awarded to Dr. Salavitabar, intended to explore innovative ways of addressing congenital heart disease. The funding for this grant was generously provided by a family impacted by congenital heart disease, who unfortunately had lost a child to the disease at a very young age.

The Visualization Studio consulted with Dr. Salavitabar on essential features and priorities to realize his vision, using the latest version of the Visualization Studio’s Jugular software.

This video was spliced from two separate streams recorded concurrently from two collaborating HoloLens users. Each user has a view of the other, as well as their own individual perspectives of the shared holographic model.

JUGULAR

The angiography system in the Mott clinic produces digital surface models of the vasculature in STL format.

That format is typically used for 3D printing, but the process of queuing and printing a physical 3D model often takes at least several hours or even days, and the model is ultimately physical waste that must be properly disposed of after its brief use.

Jugular offers the alternative of viewing a virtual 3D model in devices such as the Microsoft HoloLens, loaded from the same STL format, with a lead time under an hour. Most of that time is spent by the angiography software producing the STL file; once the file is ready, it takes only minutes to upload and view on a HoloLens. Jugular’s network module allows several HoloLens users to share a virtual scene over Wi-Fi, and the HoloLens provides a “spatial anchor” capability that ties hologram locations to a physical space. Users can collaboratively view, walk around, and manipulate shared holograms relative to their shared physical space. The holograms can be moved, scaled, sliced, and marked using hand gestures and voice commands.
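
As a rough illustration of the STL-to-hologram step, here is a minimal Unity (C#) sketch that reads a standard binary STL file into a Unity Mesh. It is not Jugular’s actual loading code; the class name, the choice to recompute normals, and the 32-bit index format are assumptions for the example.

```csharp
using System.IO;
using UnityEngine;

// Minimal sketch (not Jugular's actual code) of turning a binary STL surface model,
// like the ones exported by the angiography system, into a Unity Mesh a headset can
// render. Binary STL layout: 80-byte header, uint32 triangle count, then 50 bytes
// per triangle (facet normal plus three vertices as floats, plus a 2-byte attribute).
public static class StlMeshLoader
{
    public static Mesh Load(string path)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            reader.ReadBytes(80);                 // skip the header
            uint triCount = reader.ReadUInt32();

            var verts = new Vector3[triCount * 3];
            var tris = new int[triCount * 3];

            for (int t = 0; t < triCount; t++)
            {
                ReadVector(reader);               // facet normal (recomputed below instead)
                for (int v = 0; v < 3; v++)
                {
                    int i = t * 3 + v;
                    verts[i] = ReadVector(reader);
                    tris[i] = i;
                }
                reader.ReadUInt16();              // attribute byte count (unused)
            }

            var mesh = new Mesh
            {
                // Vascular models can easily exceed 65k vertices, so use 32-bit indices.
                indexFormat = UnityEngine.Rendering.IndexFormat.UInt32,
                vertices = verts,
                triangles = tris
            };
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();
            return mesh;
        }
    }

    static Vector3 ReadVector(BinaryReader r)
    {
        return new Vector3(r.ReadSingle(), r.ReadSingle(), r.ReadSingle());
    }
}
```

In practice the resulting mesh would be assigned to a MeshFilter on a hologram object, and the STL’s scale and axis conventions would likely need adjustment.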

This innovation is not confined to medical purposes. Jugular is a general-purpose extended-reality program with applications in a broad range of fields. The developers analyze specific project requirements in terms of general XR capabilities. Project-specific requirements are usually met through easily editable configuration files rather than “hard coding.”

Robots Who Goof: Can We Trust Them?

Robotics in Unreal Engine


EVERYONE MAKES MISTAKES

Laurel Thomas
ltgnagey@umich.edu


The human-like android robot used in the virtual experimental task of handling boxes.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that certain approaches directed at human co-workers are better than others and often are impacted by how the robots look.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For their study, published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).

The mechanical arm robot used in the virtual experiment.

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic or more humanlike, the other more mechanical in appearance.

Sara Eskandari and Stephanie O’Malley of the Emerging Technology Group at U-M’s James and Anne Duderstadt Center helped develop the experimental virtual platform.

The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).

Previous studies have examined apologies, denials and promises as factors in trust or trustworthiness but this is the first to look at explanations as a repair strategy, and it had the highest impact on integrity, regardless of the robot’s appearance.

When the robot was more humanlike, trust was even easier to restore for integrity when explanations were given and for benevolence when apologies, denials and explanations were offered.

As in the previous research, apologies from robots produced higher integrity and benevolence than denials. Promises outpaced apologies and denials when it came to measures of benevolence and integrity.

Esterwood said this study is ongoing with more research ahead involving other combinations of trust repairs in different contexts, with other violations.

“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” Esterwood said. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

This originally appeared on Michigan News.


Behind the Scenes: Re-creating Citizen Kane in VR


inside a classic

Stephanie O’Malley


Students in Matthew Solomon’s classes are used to critically analyzing film. Now they get the chance to be the director for arguably one of the most influential films ever produced: Citizen Kane.

Using an application developed at the Duderstadt Center with grant funding provided by LSA Technology Services, students are placed in the role of the film’s director and can record a prominent scene from the movie using a virtual camera. The film set, which no longer exists, has been meticulously re-created in black-and-white CGI using reference photographs of the original set, with a CGI Orson Welles acting out the scene on repeat. His performance was captured from motion capture actor Matthew Henerson, carefully chosen for his likeness to Orson Welles, and the Orson avatar was generated from a photogrammetry scan of Henerson.

Top down view of the CGI re-creation of the film set for Citizen Kane

To determine a best estimate of the set’s scale for 3D modeling, the team analyzed the original film footage: doorways were measured, actor heights compared, and footsteps counted. With feedback from Citizen Kane expert Harlan Lebo, fine details could be determined, down to the topics of the books on the bookshelves.

Archival photograph provided by Vincent Longo of the original film set

Motion capture actor Matthew Henerson was flown in to play the role of the digital Orson Welles. In a carefully choreographed session directed by Solomon’s PhD student, Vincent Longo, the iconic scene from Citizen Kane was re-enacted while the original footage played on an 80″ TV in the background, ensuring every step aligned perfectly with the original footage.

Actor Matthew Henerson in full mocap attire amidst the makeshift set for Citizen Kane – Props constructed using PVC. Photo provided by Shawn Jackson.

The boundaries of the set were taped on the floor so the data could be aligned to the digitally re-created set. Eight Vicon motion capture cameras, the same kind used throughout Hollywood for films like Lord of the Rings or Planet of the Apes, formed a circle around the makeshift set. These cameras rely on infrared light reflected off of tiny balls affixed to the motion capture suit to track the actor’s motion. Any props used during the motion capture recording were carefully constructed out of cardboard and PVC (later to be 3D modeled) so as not to obstruct his movements. The three minutes of footage being re-created took three days to capture, comprising over 100 individual mocap takes and several hours of footage, which were then compared for accuracy and stitched together to cover the full route Orson travels through the environment.

Matthew Henerson and Orson Welles

  Matthew Henerson then swapped his motion capture suit for an actual suit, similar to that worn by Orson in the film, and underwent 3D scanning using the Duderstadt Center’s photogrammetry resources. 

Actor Matthew Henerson wears asymmetrical markers to assist the scanning process

Photogrammetry is a method of scanning existing objects or people, commonly used in Hollywood and throughout the video game industry to create CGI likenesses of famous actors. This technology has been used in films like Star Wars (an actress similar in appearance to Carrie Fisher was scanned and then further sculpted to create a more youthful Princess Leia), with entire studios now devoted to photogrammetry scanning. The process relies on several digital cameras surrounding the subject and taking simultaneous photographs.

Matthew Henerson being processed for Photogrammetry

The photos are fed into software that analyzes them on a per-pixel basis, looking for similar features across multiple photos. When a feature is recognized, it is triangulated using the focal length of the camera and its position relative to other identified features, allowing millions of tracking points to be generated. From this, an accurate 3D model can be produced, with the original digital photos mapped to its surface to preserve photo-realistic color. These models can be further manipulated: sometimes they are sculpted by an artist, or, with the addition of a digital “skeleton,” they can be driven by motion data to become a fully articulated digital character.
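
As a simplified illustration of that triangulation step, assume the camera positions and the per-feature ray directions (derived from pixel coordinates and focal length) are already known; the 3D point for one matched feature can then be estimated where two such rays pass closest to each other. The sketch below, in Unity-style C#, uses hypothetical names and is not the studio’s actual pipeline.

```csharp
using UnityEngine;

// Simplified illustration of triangulation: each camera that sees a matched feature
// defines a ray from its position through that feature; the feature's 3D position is
// estimated at the midpoint of the closest approach between two such rays.
public static class FeatureTriangulation
{
    // p1, p2: camera positions; d1, d2: normalized ray directions toward the matched
    // feature (derived from the feature's pixel coordinates and the camera focal length).
    public static Vector3 Triangulate(Vector3 p1, Vector3 d1, Vector3 p2, Vector3 d2)
    {
        Vector3 w0 = p1 - p2;
        float a = Vector3.Dot(d1, d1);
        float b = Vector3.Dot(d1, d2);
        float c = Vector3.Dot(d2, d2);
        float d = Vector3.Dot(d1, w0);
        float e = Vector3.Dot(d2, w0);
        float denom = a * c - b * b;        // approaches zero when the rays are parallel

        if (Mathf.Abs(denom) < 1e-6f)
            return p1;                      // degenerate case: no reliable intersection

        float t1 = (b * e - c * d) / denom; // distance along ray 1 to its closest point
        float t2 = (a * e - b * d) / denom; // distance along ray 2 to its closest point

        // The midpoint between the two closest points is the triangulated feature position.
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) * 0.5f;
    }
}
```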

The 3D-modeled scene and scanned actor model were joined with the mocap data and brought into the Unity game engine to develop the functionality students would need to film within the 3D set. A virtual camera was developed with all of the same settings you would find on a film camera from that era. When viewed in a virtual reality headset like the Oculus Rift, Solomon’s students can pick up the camera and physically move around to position it at different locations in the CGI environment, often capturing shots that would otherwise be difficult to achieve on a conventional film set. The footage students film within the app can be exported as MP4 video and then edited in their editing software of choice, just like any other camera footage.
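
A hedged sketch of how such a hand-held virtual camera could be wired up in Unity follows: a Camera renders into a RenderTexture shown on a viewfinder quad while the rig follows a tracked controller. This is not the Duderstadt Center’s actual implementation; the component name, output resolution, and focal length value are assumptions.

```csharp
using UnityEngine;

// Hedged sketch of a hand-held virtual film camera: a Unity Camera renders into a
// RenderTexture displayed on a small viewfinder quad, and the whole rig follows a
// tracked VR controller so shots are framed by physically moving one's hand.
public class VirtualFilmCamera : MonoBehaviour
{
    public Camera filmCamera;            // the in-world "lens"
    public Renderer viewfinder;          // small quad mounted on the camera body
    public Transform trackedController;  // e.g. the right-hand controller transform
    public float focalLengthMm = 50f;    // period-style lens setting (assumed value)

    private RenderTexture output;

    void Start()
    {
        // Render the virtual lens into an offscreen texture instead of the headset view.
        output = new RenderTexture(1920, 1080, 24);
        filmCamera.targetTexture = output;
        filmCamera.usePhysicalProperties = true;   // lets field of view follow focal length
        filmCamera.focalLength = focalLengthMm;
        viewfinder.material.mainTexture = output;
    }

    void LateUpdate()
    {
        // The camera body simply follows the player's hand, so positioning a shot
        // means physically moving the controller around the CGI set.
        transform.SetPositionAndRotation(trackedController.position,
                                         trackedController.rotation);
    }
}
```

The MP4 export mentioned above would be a separate recording step that encodes the frames this camera produces.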

After Matthew Solomon used the application in his Winter 2020 course, his project with the Duderstadt Center was recently on display as part of iLRN’s 2020 Immersive Learning Project Showcase & Competition. With COVID-19 making the conference a remote experience, attendees were able to experience the Citizen Kane project in virtual reality using the FrameVR platform. Highlighting innovative ways of teaching with VR technologies, attendees from around the world were able to learn about the project and watch student edits made using the application.

Citizen Kane on display for iLRN’s 2020 Immersive Learning Project Showcase & Competition using Frame VR

Passion & Violence: Anna Galeotti’s MIDEN Installation


Ph.D. Fulbright Scholar Anna Galeotti (Winter 2014) explored the concept of “foam” or “bubbles” as a possible model for audiovisual design elements and their relationships. Her art installation, “Passion and Violence in Brazil,” was displayed in the Duderstadt Center’s MIDEN.

Interested in using the MIDEN to do something similar? Contact us.

Extended Reality: changing the face of learning, teaching, and research


Written by Laurel Thomas, Michigan News

Students in a film course can evoke new emotions in an Orson Welles classic by virtually changing the camera angles in a dramatic scene.

Any one of us could take a smartphone, laptop, paper, Play-doh and an app developed at U-M, and with a little direction become a mixed reality designer. 

A patient worried about an upcoming MRI may be able to put fears aside after virtually experiencing the procedure in advance.

Dr. Jadranka Stojanovska, one of the collaborators on the virtual MRI, tries on the device

This is XR—Extended Reality—and the University of Michigan is making a major investment in how to utilize the technology to shape the future of learning. 

Recently, Provost Martin Philbert announced a three-year funded initiative, led by the Center for Academic Innovation, to support XR, a term used to encompass augmented reality, virtual reality, mixed reality and other variations of computer-generated real and virtual environments and human-machine interactions.

The Initiative will explore how XR technologies can strengthen the quality of a Michigan education, cultivate interdisciplinary practice, and enhance a national network of academic innovation. 

Throughout the next three years, the campus community will explore new ways to integrate XR technologies in support of residential and online learning and will seek to develop innovative partnerships with external organizations around XR in education.

“Michigan combines its public mission with a commitment to research excellence and innovation in education to explore how XR will change the way we teach and learn from the university of the future,” said Jeremy Nelson, new director of the XR Initiative in the Center for Academic Innovation.

Current Use of XR

 
Applications of the technology are already changing the learning experience across the university in classrooms and research labs with practical application for patients in health care settings. 

In January 2018, a group of students created the Alternative Reality Initiative to provide a community for hosting development workshops, discussing industry news, and connecting students in the greater XR ecosystem.

In Matthew Solomon‘s film course, students can alter a scene in Orson Welles’ classic “Citizen Kane.” U-M is home to one of the largest Orson Welles collections in the world.

Solomon’s concept for developers was to take a clip from the movie and model a scene to look like a virtual reality setting—almost like a video game. The goal was to bring a virtual camera into the space so students could choose shot angles to change the look and feel of the scene.

This VR tool will be used fully next semester to help students talk about filmmaker style, meaning and choice.

“We can look at clips in class and be analytical but a tool like this can bring these lessons home a little more vividly,” said Solomon, associate professor in the Department of Film, Television and Media.

A scene from Orson Welles’ “Citizen Kane” from the point of view of a virtual camera that allows students to alter the action.

Sara Eskandari, who just graduated with a Bachelor of Arts from the Penny W. Stamps School of Art & Design and a minor in Computer Science, helped develop the tool for Solomon’s class as a member of the Visualization Studio team.

“I hope students can enter an application like ‘Citizen Kane’ and feel comfortable experimenting, iterating, practicing, and learning in a low-stress environment,” Eskandari said. “Not only does this give students the feeling of being behind an old-school camera, and supplies them with practice footage to edit, but the recording experience itself removes any boundaries of reality. 

“Students can float to the ceiling to take a dramatic overhead shot with the press of a few buttons, and a moment later record an extreme close up with entirely different lighting.”

Sara Blair, vice provost for academic and faculty affairs, and the Patricia S. Yaeger Collegiate Professor of English Language and Literature, hopes to see more projects like “Citizen Kane.”

“An important part of this project, which will set it apart from experiments with XR on many other campuses, is our interest in humanities-centered perspectives to shape innovations in teaching and learning at a great liberal arts institution,” Blair said. “How can we use XR tools and platforms to help our students develop historical imagination or to help students consider the value and limits of empathy, and the way we produce knowledge of other lives than our own? 

“We hope that arts and humanities colleagues won’t just participate in this [initiative] but lead in developing deeper understandings of what we can do with XR technologies, as we think critically about our engagements with them.”

U-M Faculty Embracing XR

 

Mark W. Newman, professor of information and of electrical engineering, and chair of the Augmented and Virtual Reality Steering Committee, said indications are that many faculty are thinking about ways to use the technology in teaching and research, as evidenced by progress on an Interdisciplinary Graduate Certificate Program in Augmented and Virtual Reality.

Newman chaired the group that pulled together a number of faculty working in XR to identify the scope of the work under way on campus, and to recommend ways to encourage more interdisciplinary collaboration. He’s now working with deans and others to move forward with the certificate program that would allow experiential learning and research collaborations on XR projects.

“Based on this, I can say that there is great enthusiasm across campus for increased engagement in XR and particularly in providing opportunities for students to gain experience employing these technologies in their own academic work,” Newman said, addressing the impact the technology can have on education and research.

“With a well-designed XR experience, users feel fully present in the virtual environment, and this allows them to engage their senses and bodies in ways that are difficult if not impossible to achieve with conventional screen-based interactive experiences. We’ve seen examples of how this kind of immersion can dramatically aid the communication and comprehension of otherwise challenging concepts, but so far we’re only scratching the surface in terms of understanding exactly how XR impacts users and how best to design experiences that deliver the effects intended by experience creators.”

Experimentation for All

 

Encouraging everyone to explore the possibilities of mixed reality (MR) is a goal of Michael Nebeling, assistant professor in the School of Information, who has developed unique tools that can turn just about anyone into an augmented reality designer using his ProtoAR or 360proto software.

Most AR projects begin with a two-dimensional design on paper that is then made into a 3D model, typically by a team of experienced 3D artists and programmers.

Michael Nebeling’s mixed reality app for everyone.

With Nebeling’s ProtoAR app, content can be sketched on paper or molded with Play-Doh; the designer then either moves the camera around the object or spins the piece in front of the lens to create motion. ProtoAR blends the physical and digital content to produce various AR applications.

Using his latest tool, 360proto, designers can even make the paper sketches interactive, so that users can experience the AR app live on smartphones and headsets without spending hours refining and implementing the design in code.

These are the kinds of technologies that not only let his students learn about AR/VR in his courses but also have practical applications. For example, people can experience their dream kitchen at home rather than having to use their imaginations when clicking things together on home improvement sites. He is also working on getting many solutions directly into future web browsers, so that people can access AR/VR modes when visiting home improvement sites, cooking a recipe in the kitchen, planning a weekend trip with museum or gallery visits, or reading articles on Wikipedia or the news.

Nebeling is committed to “making mixed reality a thing that designers do and users want.”

“As a researcher, I can see that mixed reality has the potential to fundamentally change the way designers create interfaces and users interact with information,” he said. “As a user of current AR/VR applications, however, it’s difficult to see that potential even for me.”

He wants to enable a future in which “mixed reality will be mainstream, available and accessible to anyone, at the snap of a finger. Where everybody will be able to ‘play,’ be it as consumer or producer.”

 

XR and the Patient Experience

A team in the Department of Radiology, in collaboration with the Duderstadt Center Visualization Studio, has developed a Virtual Reality tool to simulate an MRI, with the goal of reducing last minute cancellations due to claustrophobia that occur in an estimated 4-14% of patients. The clinical trial is currently enrolling patients. 
VR MRI Machine

“The collaboration with the Duderstadt team has enabled us to develop a cutting-edge tool that allows patients to truly experience an MRI before having a scan,” said Dr. Richard K.J. Brown, professor of radiology. The patient puts on a headset and is ‘virtually transported’ into an MRI tube. A calming voice explains the MRI exam, as the patient hears the realistic sounds of the magnet in motion, simulating an exam experience. 

The team also is developing an Augmented Reality tool to improve the safety of CT-guided biopsies.

Team members include Drs. Brown, Jadranka Stojanovska, Matt Davenport, Ella Kazerooni, and Elaine Caoili from Radiology, along with Dan Fessahazion, Sean Petty, Stephanie O’Malley, Theodore Hall, and several students from the Visualization Studio.

“The Duderstadt Center and the Visualization Studio exist to foster exactly these kinds of collaborations,” said Daniel Fessahazion, the center’s associate director for emerging technologies. “We have a deep understanding of the technology and collaborate with faculty to explore its applicability to create unique solutions.”

Dr. Elaine Caoili, Saroja Adusumilli Collegiate Professor of Radiology, demonstrates an Augmented Reality tool under development to improve the safety of CT-guided biopsies.

Academic Innovation’s Nelson said the first step of this new initiative is to assess and convene early innovators in XR from across the university to shape how this technology may best serve residential and online learning.

“We have a unique opportunity with this campus-wide initiative to build upon the efforts of engaged students, world-class faculty, and our diverse alumni network to impact the future of learning,” he said.