Planting Disabled Futures – A call for artists to collaborate


Open Call for Artist Collaborators

Author


Petra Kuppers is a disability culture activist and a community performance artist. She creates participatory community performance environments that think/feel into public space, tenderness, site-specific art, access and experimentation. Petra grounds herself in disability culture methods, and uses ecosomatics, performance, and speculative writing to engage audiences toward more socially just and enjoyable futures.


Her latest project, Planting Disabled Futures, is funded by a Just Tech fellowship.

In the Planting Disabled Futures project, Petra aims to use live performance approaches and virtual reality (and other) technologies to share energy, liveliness, ongoingness, crip joy and experiences of pain. 

In the development of the Virtual Reality (VR) components of the project, we will ask: How can VR allow us to celebrate difference, rather than engage in hyper-mobile fantasies of overcoming and of disembodied life? How can our disabled bodymindspirits develop non-extractive intimacies, in energetic touch, using VR as a tool toward connecting with plants, with the world, even in pain, in climate emergency, in our ongoing COVID world?

A watercolor mock-up of the Crip Cave, with Moira Williams’ Stim Tent, two VR stations, a potential sound bed, and a table for drawing/writing.

Petra envisions a sensory art installation equipped with a VR experience, a stimming tent, a soundbed, and a drawing and writing table. The VR experience would be supplemented by actors offering opportunities to engage with unique taste, touch, and smell sensations as the environment is navigated.

A cyanotype (blue) and watercolor mock-up of what the VR app might look like: a violet leaf with sensation hubs, little white ink portals, that might lead to an audio dream journey

The VR experience in the Crip Cave is expected to be a tree-like environment that allows participants to select either a visual or an auditory experience. Participants can travel down to the roots and experience earth critters, or up to the branches and into the leafy canopy. In both locations, “sensory hubs” would take participants on a journey to other worlds – worlds potentially populated with content produced by fellow artists.

A cyanotype/watercolor mock-up of little critters that might accompany you on your journey through the environment.

Artist collaborators are welcome to contribute their talents by generating 3D worlds in Unreal Engine, reciting poetry, animating, or composing music to create a dream journey in virtual reality. Artists generating digital content they would like considered for inclusion in this unique art installation can reach out to: [email protected]


To learn more about Planting Disabled Futures, visit:
https://www.petrakuppers.com/planting-disabled-futures

Engineering Grants for XR

The Enhancing Engineering Education Grants Program is designed to support innovative strategies for engaging and supporting all learners in Michigan Engineering undergraduate courses. This program springs from a collaboration among ADUE, ADGPE, CAEN, CRLT-Engin, and Nexus. Proposals will be invited across the range of innovations in engineering education, including instructional practices, course design and content, and instructional technology.

As part of the initial Enhancing Education using Technology (EET) proposal to the College to support the instructional needs of faculty, grants were offered to fund the implementation of innovative ideas that instructors otherwise lacked the money to pursue. The first year of the grants program was FY23, and all grant money was awarded to faculty. It included three major grants of $50K each on the topics of XR, DEI, and Tandem, along with additional smaller grants. At the completion of this first year, the team used the past year’s knowledge to propose improvements and changes to the program.

For AY 2024-2025, there are three grants available to support instructional faculty members:

Education Innovation Grants

Grants of up to $10K are available to COE faculty & staff

About the Grant

Grants of up to $10K are available to individuals or small groups of Michigan Engineering instructional faculty and staff members seeking to implement innovative teaching methods and/or tools.

Group 2 applications are now being accepted. This call for proposals is open to all eligible applicants and does not require a prior Group 1 proposal or submission.


Proposal Evaluation Criteria
  • Applies a novel method or tool to achieve objectives
  • Reflects innovation in teaching methods or approaches
  • For online courses, utilizes the Quality Matters framework, working with Nexus to do so
  • Involves partnering with Nexus or CRLT-E to co-teach a new faculty development workshop
  • Builds upon evidence-based best practices for enhancing student learning
  • Promotes equitable instruction for all learners
  • Implements practices or tools that have the potential for great impact (either a large impact on a small population, or something that could be applied to a larger population at Michigan Engineering)
  • Addresses work for which no other funding opportunities exist
  • Achieves synergy with goals, strengths, and ongoing work of the College of Engineering, especially as it relates to Michigan Engineering’s Strategic Vision

Group 2 applications close Wednesday, May 1, 2024

Project Statement:

  • Clearly describe the proposed project
  • Explain the value of the project
  • Identify the specific innovation and its relation to evidence-based practices
  • Explain how the project supports equitable instruction and enhanced student learning
  • Discuss the project’s potential for application in broader contexts

Project Evaluation Plan:
Explain how the success of this project will be evaluated, documented, and disseminated. Approaches might include midterm course assessments, focus groups, and surveys, among others.

Budget Request:

  • Graduate or undergraduate student salaries
  • Materials and supplies
  • Project evaluation expenses
  • Travel and registration fees for teaching-related conferences, seminars or workshops
  • Faculty member summer salary (up to $2K of the project’s total budget)

Timeline:
Submissions will be accepted until Wednesday, May 1, 2024, with funding decisions announced in late May.

Strategic Technology Grants

COE Projects Focused on XR, online/hybrid learning, and/or generative artificial intelligence

About the Grant

Grants of up to $50,000 are available to teams of at least three Michigan Engineering instructional faculty and staff members to implement innovative teaching methods and/or tools that require a larger investment of time, resources, and collaboration than is available via Education Innovation Grants. Projects should focus on the strategic themes of XR, online/hybrid learning, and/or generative artificial intelligence.


Proposal Evaluation Criteria
  • Applies a novel method, modality or tool to achieve objectives
  • Reflects innovation in teaching methods or approaches
  • Builds upon evidence-based best practices for enhancing student learning
  • Promotes equitable instruction for all learners
  • If online, leverages the Quality Matters rubric and best practices with online course design and development
  • Implements practices or tools that have the potential for great impact (either a large impact on a small population, or something that could be applied to a larger population at Michigan Engineering)
  • Achieves synergy with goals, strengths, and ongoing work of ADUE, ADGPE, CAEN, CRLT-Engin, Nexus, and/or the broader College of Engineering, especially as it relates to Michigan Engineering’s Strategic Vision

Applications closed March 4, 2024

Identify your proposal’s strategic theme:

  • Online/hybrid learning
  • Generative artificial intelligence
  • XR

Project Statement:

  • Clearly describe the proposed project
  • Explain the value of the project
  • Identify the specific innovation and its relation to evidence-based practices
  • Explain how the project supports equitable instruction and enhanced student learning
  • If online, describe how the course design and development effort will leverage the Quality Matters rubric
  • Discuss the project’s potential for great impact
  • Describe your goals for collaboration with at least one E3 grant sponsor (ADUE, ADGPE, CAEN, CRLT-Engin, and/or Nexus)

Project Evaluation Plan:
Explain how the success of this project will be evaluated, documented, and disseminated.

Budget Request:

  • Graduate or undergraduate student salaries
  • Instructional software and classroom technology
  • Materials and supplies
  • Project evaluation expenses
  • Travel and registration fees for teaching-related conferences, seminars or workshops
  • Faculty member summer salary (up to $2K of the project’s total budget)

Team Roster:
Provide a list of all team members, with descriptions of their respective roles and very brief bios.

Timeline:

Submissions are due on Monday, March 4, 2024, with funding decisions announced in April.

Software Pilot Grants

Grant Funding up to $10K for COE Faculty & Staff Seeking to Pilot Instructional Software

About the Grant

Grants of up to $10K are available to instructional faculty and staff members seeking to pilot innovative and results-oriented instructional software that has the potential to improve teaching and learning in Michigan Engineering. Proposals may be submitted by individuals requesting software for a specific class or a team of faculty members requesting software for a group of classes.

In the spirit of innovation, all ideas are welcome. Proposals that call for the use of collaborative teaching and learning strategies are encouraged. Priority will be given to projects that, if proven successful, can be replicated throughout the College.

Please note that there are many routes for procuring software licenses at the University of Michigan. We encourage you to reach out to our team at [email protected] to help determine if this grant program is appropriate for your request before submitting a proposal.


REQUIRED DELIVERABLES
  • Presentation to the Michigan Engineering faculty community of a case study covering your application of the software and its impact on your students’ learning objectives
  • Engagement with CAEN on evaluation of software for possible college adoption
  • Acting as a faculty advocate for this software and sharing how you are using it in your class

Applications for Fall 2024 close April 1, 2024

Course Information:
Logistical course details, including how frequently the course is taught, an enrollment summary, etc.

Learning Gaps:
Describe the learning gap(s) you have identified in your lesson/module/unit/course.

Teaching Intervention (Pedagogical Support):
Explain the teaching and technology intervention(s) that will close the stated learning gaps. Identify the evidence-based practices that support the efficacy of the proposed software solution.

Comparative Tool Evaluation:

  • Identify 3-4 comparable software tools (including your proposed tool) that could fill the established learning gaps.
  • List the criteria you will use to evaluate the 3-4 comparable tools to inform your decision making.

Project Evaluation Plan:

  • Explain how the success of this software will be evaluated, documented, and disseminated; approaches might include midterm course assessments, focus groups, and surveys, among others.
  • Explain how you will evaluate whether this software met the needs of you and your students. How will you determine whether it has improved the educational experience?

Budget Request:
Provide the number of licenses, estimated cost per license, and estimated total cost for this software.

Timeline:
To use the software for instruction in the Fall 2024 term, proposals must be submitted by April 1, 2024.

Fall 2024 XR Classes


Looking for Classes that incorporate XR?

EECS 440 – Extended Reality for Social Impact (Capstone / MDE)

More Info Here
Contact with Questions:
Austin Yarger
[email protected]

Extended Reality for Social Impact — Design, development, and application of virtual and augmented reality software for social impact. Topics include: virtual reality, augmented reality, game engines, ethics / accessibility, interaction design patterns, agile project management, stakeholder outreach, XR history / culture, and portfolio construction. Student teams develop and exhibit socially impactful new VR / AR applications.


ENTR 390.005 & 390.010 – Intro to Entrepreneurial Design, VR Lab

More Info Here
Contact with Questions:
Sara ‘Dari’ Eskandari
[email protected]

In this lab, you’ll learn how to develop virtual reality content for immersive experiences in the Meta Quest or MIDEN, or for Virtual Production, using Unreal Engine and 3D modeling software. You’ll also be introduced to asset creation and scene assembly by bringing assets into Unreal Engine and creating interactive experiences. At the end of the class you’ll be capable of developing virtual reality experiences, simulations, and tools to address real-world problems.

Students will gain an understanding of how to generate digital content for virtual reality platforms; become knowledgeable about versatile file formats, content pipelines, hardware platforms, and industry standards; understand methods of iterative design and the creation of functional prototypes in this medium; employ what is learned in the lecture section of this course to determine what is possible, what is marketable, and which distribution methods are available within this platform; and become familiar with documenting their design process and pitching their ideas to others, receiving and providing quality feedback.


UARTS 260 – Empathy in Pointclouds

More Info Here
Contact with Questions:
Dawn Gilpin
[email protected]

Empathy In Point Clouds: Spatializing Design Ideas and Storytelling through Immersive Technologies integrates LiDAR scanning, photogrammetry, and Unreal Engine into education, expanding the possible methodologies and processes of architectural design. Entering our third year of the FEAST program, we turn our attention to storytelling and worldbuilding using site-specific point cloud models as the context for our narratives. This year the team will produce 1-2 spatial narratives for the three immersive technology platforms we are working with: the Meta Quest VR headset, the MIDEN/VR CAVE, and the LED stage.


ARTDES 217 – Bits and Atoms

More Info Here
Contact with Questions:
Sophia Brueckner
[email protected]

This is an introduction to digital fabrication within the context of art and design. Students learn about the numerous types of software and tools available and develop proficiency with the specific software and tools at Stamps. Students discuss the role of digital fabrication in creative fields.


ARTDES 420 – Sci-Fi Prototyping

More Info Here
Contact with Questions:
Sophia Brueckner
[email protected]

This course ties science fiction with speculative/critical design as a means to encourage the ethical and thoughtful design of new technologies. With a focus on the creation of functional prototypes, this course combines the analysis of science fiction with physical fabrication or code-based interpretations of the technologies they depict.


SI 559 – Introduction to AR/VR Application Design

More Info Here
Contact with Questions:
Michael Nebeling
[email protected]

This course will introduce students to Augmented Reality (AR) and Virtual Reality (VR) interfaces. This course covers basic concepts; students will create two mini-projects, one focused on AR and one on VR, using prototyping tools. The course requires neither special background nor programming experience.


FTVM 394 / DIGITAL 394 – Topics in Digital Media Production, Virtual Reality

More Info Here
Contact with Questions:
Yvette Granata
[email protected]

This course provides an introduction to key software tools, techniques, and fundamental concepts supporting digital media arts production and design. Students will learn and apply the fundamentals of design and digital media production using software applications and web-based coding techniques, and will study the principles of design that translate across multiple forms of media production.


UARTS 260/360/460/560 – THE BIG CITY: Lost & Found in XR

More Info Here
Contact with Questions:
Matthew Solomon & Sara Eskandari
[email protected] / [email protected]

No copies are known to exist of the lost 1928 film THE BIG CITY – only still photographs, a cutting continuity, and a detailed scenario of the film survive. This is truly a shame, because the film featured a critical mass of Black performers – something extremely uncommon at the time. Using Unreal Engine, detailed 3D model renderings, and live performance, students will take users back in time into the fictional Harlem Black Bottom cabaret and clubs shown in the film. Students will experience working in a small game development team to create a high-fidelity historical recreation of the sets using 3D modeling, 2D texturing skills, level design, and game development pipelines. They will experience a unique media pipeline of game design for live performance and cutting-edge virtual production. The project will also dedicate focus to detailed documentation, honoring the preservation of THE BIG CITY that allows us to attempt this endeavor and the Black history that fuels it.


MOVESCI 313 – The Art of Anatomy

Contact with Questions:
Melissa Gross & Jenny Gear
[email protected] / [email protected]

Learn about human anatomy and how it has been taught throughout history, covering a variety of mediums, including the recent adoption of XR tools. Students will get hands-on experience with integrating and prototyping AR and VR visualization technologies for medical and anatomical study.


ARCH 565 – Research in Environmental Technology

Contact with Questions:
Mojtaba Navvab
[email protected]

The focus of this course is the introduction to research methods in environmental technology. Qualitative and quantitative research results are studied with regard to their impact on architectural design. Each course participant undertakes an investigation in a selected area of environmental technology. The experimental approach may use physical modeling, computer simulation, or other appropriate methods, including VR.


FTVM 455.004 – Topics in Film: Eco Imaginations
WGS 412.001 – Fem Art Practices

Contact with Questions:
Petra Kuppers
[email protected]

These courses will include orientations to XR technologies and sessions leveraging Unreal Engine and Quixel 3D assets to create immersive virtual reality environments.

Multi-Sensing the Universe


Envisioning a Toroidal Universe

Robert Alexander teamed with Danielle Battaglia, a senior in Art & Design, to compose and integrate audio effects into her conceptual formal model of the Toroidal Universe. Danielle combined Plato’s notion of the universe as a dodecahedron with modern notions of black holes, wormholes, and child universes. Their multi-sensory multiverse came together in the MIDEN and was exhibited there as part of the Art & Design senior integrative art exhibition.

Interested in using the MIDEN to do something similar? Contact us.

Novels in VR – Experiencing Uncle Tom’s Cabin


A Unique Perspective

Stephanie O’Malley


This past semester, English Professor Sara Blair taught a course at the University titled “The Novel and Virtual Realities.” The purpose of this course was to expose students to different methods of analyzing novels, and to ways of understanding them from different perspectives, by utilizing platforms like VR and AR.

Designed as a hybrid course, her class was split between a traditional classroom environment and an XR lab, providing a comparison between traditional learning methods and more hands-on, experiential lessons through the use of immersive, interactive VR and AR simulations.

As part of her class curriculum, students were exposed to a variety of experiential XR content. Using the Visualization Studio’s Oculus Rifts, her class was able to view Dr. Courtney Cogburn’s “1000 Cut Journey” installation – a VR experience that puts viewers in the shoes of a Black American man growing up in the time of segregation, allowing viewers to see firsthand how racism affects every facet of his life. They also had the opportunity to view Asad J. Malik’s “Terminal 3” using augmented reality devices like the Microsoft HoloLens. Students engaging with Terminal 3 see how Muslim identities in the U.S. are approached through the lens of an airport interrogation.

Wanting to create a similar experience for her students at the University of Michigan, Sara approached the Duderstadt Center about the possibility of turning another novel into a VR experience: Uncle Tom’s Cabin.

She wanted her students to understand the novel from the perspective of its lead character, Eliza, during the pivotal moment when, as an enslaved woman, she tries to escape her captors and reach freedom. But she also wanted to give her students the perspective of the slave owner and other slaves tasked with her pursuit, as well as the perspective of an innocent bystander watching the scene unfold.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin

Using Unreal Engine, the Duderstadt Center was able to make this a reality. An expansive winter environment was created based on imagery detailed in the novel, and CGI characters for Eliza and her captors were produced and then paired with motion capture data to drive their movements. When students put on the Oculus Rift headset, they can choose to experience the moment of escape through the perspective of Eliza, of her captors, or of a bystander. And to better evaluate which components contributed to students’ feelings during the simulation, versions of these scenarios were provided with and without sound. With sound enabled as Eliza, you hear footsteps in the snow gaining on you, the crack of the ice beneath your feet as you leap across a tumultuous river, and the barking of a vicious dog on your heels – all adding to the tension of the moment. While viewers are able to freely look around the environment, they are passive observers: they have no control over the choices Eliza makes or where she can go.
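The variant matrix described above – three viewpoints, each offered with and without sound – can be enumerated in a few lines. This is an illustrative sketch only; the names are placeholders, not identifiers from the actual Unreal project:

```python
# Hypothetical sketch: enumerate the experience variants described above --
# three viewpoints (Eliza, a captor, a bystander), each offered with and
# without sound. All names here are illustrative assumptions.
from itertools import product

PERSPECTIVES = ["eliza", "captor", "bystander"]

def scenario_variants():
    """Return every (perspective, sound_enabled) combination."""
    return [
        {"perspective": p, "sound": s}
        for p, s in product(PERSPECTIVES, (True, False))
    ]

print(len(scenario_variants()))  # -> 6
```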

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin – Freedom for Eliza lies on the other side of the frozen Ohio river.

The scene ends with Eliza reaching freedom on the opposite side of the Ohio river and leaving her pursuers behind. What followed the students’ experience with the VR version of the novel was a deep class discussion on how the scene felt in VR versus how it felt reading the same passage in the book. Some students wondered what it might feel like to instead control the situation and where Eliza goes, or, as a bystander, to move freely through the environment as the scene plays out, deciding which party (Eliza or her pursuers) was of most interest to follow in that moment.

While Sara’s class has concluded for the semester, you can still try this experience for yourself – Uncle Tom’s Cabin is available to demo on all Visualization Studio workstations equipped with an Oculus Rift.

Using Mobile VR to Assess Claustrophobia During an MRI


new methods for exposure therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Recognizing that patients undergoing MRI scans often experience claustrophobia, they wanted to use VR simulations to introduce prospective patients to what being inside an MRI machine might feel like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While there are 360 videos that convey what an MRI might look like, these fail to address the major factor contributing to claustrophobia: the perceived confined space within the bore. 360 videos tend to skew the environment, making it seem farther away than it would be in reality, and thereby fail to induce the same feelings of claustrophobia that the MRI bore would produce. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

In order to simulate the effects of an MRI accurately, a CGI MRI machine was constructed and ported to the Unity game engine. A customizable avatar representing the viewer’s body was also added to give viewers a sense of self. When a VR headset is worn, the viewer’s perspective allows them to see their avatar body and the real proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said throughout the course of a real MRI, with the intimidating boom of the machine occurring as the simulated scan proceeds.

Two modes are provided within the app, feet first or head first, to accommodate the most common scanning procedures that have been shown to induce claustrophobia.
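The slow transport into the bore can be sketched as a tiny timing model. This is an illustrative sketch only: the real app was built in Unity, and the bore depth and table speed below are assumed example values, not measurements from the simulator.

```python
# Illustrative sketch of the simulated table travel (not the actual Unity
# code). BORE_DEPTH_M and TABLE_SPEED_M_S are assumed example values.

BORE_DEPTH_M = 1.5      # assumed travel distance into the bore, meters
TABLE_SPEED_M_S = 0.05  # assumed slow, constant table speed, meters/second

def table_progress(t_seconds, mode="feet_first"):
    """Fraction of the travel into the bore completed after t seconds.

    The two modes mirror the app's options; they reach the same depth and
    differ only in which end of the avatar enters the bore first.
    """
    if mode not in ("feet_first", "head_first"):
        raise ValueError("mode must be 'feet_first' or 'head_first'")
    travelled = min(TABLE_SPEED_M_S * t_seconds, BORE_DEPTH_M)
    return travelled / BORE_DEPTH_M

print(table_progress(30))  # -> 1.0 (fully inside the bore after 30 s)
```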

In order to make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Students Learn 3D Modeling for Virtual Reality


making tiny worlds

Stephanie O’Malley


ArtDes240 is a course offered by the Stamps School of Art & Design, taught by Stephanie O’Malley, that teaches students 3D modeling and animation. As one of only a few 3D digital classes offered at the University of Michigan, AD240 sees student interest from several schools across campus, with students looking to gain a better understanding of 3D art as it pertains to the video game industry.

The students in AD240 are given a crash-course in 3D modeling in 3D Studio Max and level creation within the Unreal Editor. It is then within Unreal that all of their objects are positioned, terrain is sculpted, and atmospheric effects such as time of day, weather, or fog can be added.

“Candyland” – Elise Haadsma & Heidi Liu, developed using 3D Studio Max and Unreal Engine

With just 5 weeks to model their entire environment, bring it into Unreal, package it as an executable, and test it in the MIDEN (or on the Oculus Rift), the students produced truly impressive projects. Art & Design students Elise Haadsma and Heidi Liu took inspiration from the classic board game Candyland to create a life-size game board environment in Unreal consisting of a lollipop forest, mountains of Hershey’s kisses, even a gingerbread house and chocolate river.

Lindsay Balaka, from the School of Music, Theater & Dance, chose to create her scene using the Duderstadt Center’s in-house rendering software “Jugular” instead of Unreal Engine. Her creation, “Galaxy Cakes”, is a highly stylized cupcake shop (reminiscent of an episode of the 1960s cartoon The Jetsons), complete with spatial audio emanating from the corner jukebox.

Lindsay Balaka’s “Galaxy Cakes” environment
An abandoned school, created by Vicki Liu in 3D Studio Max and Unreal Engine

Vicki Liu, also of Art & Design, created a realistic horror scene using Unreal. After navigating down a poorly lit hallway of an abandoned nursery school, you will find yourself in a run-down classroom inhabited by some kind of madman. A tally of days passed has been scratched into the walls, an eerie message scrawled onto the chalkboard, and furniture haphazardly barricades the windows.

While the goal of the final project was to create a traversable environment for virtual reality, some students took it a step further.

Art & Design student Gus Schissler created an environment composed of neurons in Unreal, intended for viewing within the Oculus Rift. He then integrated data from an Epoch headset (a device capable of reading brain waves) to allow the viewer to interact with the environment through thought alone. The viewer’s mood, when picked up by the Epoch, not only changed the way the environment looked by adjusting the intensities of the light emitted by the neurons, but also allowed the viewer to think specific commands (push, pull, etc.) in order to navigate past various obstacles in the environment.

Students spent the last two weeks of the semester scheduling time with Ted Hall and Sean Petty to test their scenes and ensure everything ran and looked correct on the day of their presentations. This was a class that not only introduced students to the design process, but also gave them hands-on experience with emerging technologies as virtual reality continues to expand in the game and film industries.

Student Gus Schissler demonstrates his neuron environment for Oculus Rift, which uses input from an Epoch headset for interaction.

Student Uses Photogrammetry to Miniaturize Herself


Stamps student Annie Turpin came to the Duderstadt Center with an idea for her sophomore studio project: she wanted to create a hologram system, similar to the “Pepper’s Pyramid” or “Pepper’s Ghost” display, that would allow her to project a miniaturized version of herself into a pinhole camera.

Pepper’s Ghost relied on carefully placed mirrors to give the illusion of a transparent figure

The concept of Pepper’s Pyramid is derived from an illusion technique popularized by John Henry Pepper in 1862. Originally coined “Pepper’s Ghost”, the trick initially relied on a large pane of glass to reflect an illuminated room or person that was hidden from view. This gave the impression of a “ghost” and became a technique frequently used in theatre to create a phantasmagoria. Similar methods are still used today, often substituting Mylar foil in place of glass and using CG content (such as the 2012 Coachella performance, in which a “holographic” Tupac was resurrected to sing alongside Dr. Dre).

Pepper’s Pyramid takes the concept of Pepper’s Ghost, and gives it 3 dimensions using a pyramid of Plexiglas instead of mirrors.

“Pepper’s Pyramid” is a similar concept. Instead of a single pane of glass reflecting a single angle, a video is duplicated four times and projected downward onto a pyramid of Plexiglas, allowing the illusion to be viewed from multiple angles and the content to be animated.
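The four-way duplication lends itself to a small layout computation. The sketch below is hypothetical: the coordinates and rotation angles are assumptions for a generic phone screen laid flat under the pyramid, not the layout from Annie’s actual video file.

```python
# Hypothetical sketch of a four-way "pyramid hologram" video layout: the
# same square clip is placed above, below, left, and right of the screen
# center, each copy rotated so the matching pyramid face reflects an
# upright image. Positions are top-left (x, y) pixel coordinates.

def pyramid_layout(screen_w, screen_h, clip):
    """Return (x, y, rotation_degrees) for the four copies of the clip."""
    cx, cy = screen_w // 2, screen_h // 2
    h = clip // 2
    return [
        (cx - h, cy - clip, 180),   # above center, upside-down
        (cx - h, cy, 0),            # below center, upright
        (cx - clip, cy - h, 90),    # left of center
        (cx, cy - h, -90),          # right of center
    ]

# Example: a 1080x1920 phone screen with a 400px clip.
for placement in pyramid_layout(1080, 1920, 400):
    print(placement)
```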

For Annie’s project, she re-created a small version of Pepper’s Pyramid to fit inside a pinhole camera that she had constructed, and used a mobile phone to display the video instead of a monitor. She then had herself 3D scanned using the Duderstadt Center’s Photogrammetry rig to generate a realistic 3D model of herself, which was animated and then exported as an MP4 video.

Annie’s pinhole camera

The process of Photogrammetry allows an existing object or person to be converted into a full-color, highly detailed 3D model. This is done using a series of digital photographs captured 360 degrees around the subject. While Photogrammetry can be done at home for most static subjects, the Duderstadt Center’s Photogrammetry resources are set up to allow moving subjects like people to be scanned as well. The process uses surface detail on the subject to plot points in 3D space and construct a 3D model. For scans of people, these models can even have a digital skeleton created to drive their motion, and be animated as CGI characters. Annie’s resulting scan was animated to rotate in place, and projected into the Plexiglas pyramid as a “hologram” for viewing through her pinhole camera.
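The point-plotting step rests on triangulation: a surface feature seen in two photographs taken from known camera positions fixes a point in 3D. Below is a minimal toy version with two idealized rays and the midpoint method, an illustration of the general principle rather than the software the rig actually uses:

```python
# Toy triangulation sketch (illustrative, not the actual photogrammetry
# software): find the point closest to two sighting rays, one per camera,
# using the midpoint method.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def mul(a, s): return [x * s for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """Closest point between rays c1 + t*d1 and c2 + s*d2."""
    r = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(c1, mul(d1, t))  # closest point on ray 1
    p2 = add(c2, mul(d2, s))  # closest point on ray 2
    return mul(add(p1, p2), 0.5)

# Two cameras one meter apart, both sighting the point (0, 0, 2):
print(triangulate([-0.5, 0, 0], [0.5, 0, 2], [0.5, 0, 0], [-0.5, 0, 2]))
# -> [0.0, 0.0, 2.0]
```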

The result of 3D printing Annie’s photogrammetry scan

  Annie would make use of Photogrammetry again the following year, this time having herself 3D scanned for the purpose of 3D printing the resulting model for a diorama. In this instance, she was scanned in what is referred to as a “T-pose” – a pose where the subject stands with their arms and legs apart, so the limbs can be articulated into a different position later. After Annie’s model was generated, it was posed to have her sitting in a computer chair and working on a laptop. This model was sent to the Duderstadt Center’s J750 3D color printer to produce a 6″ high 3D printed model.
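Shrinking a life-size scan to a 6″ print is a single uniform scale factor. A quick sketch of the arithmetic (the subject height here is an assumed placeholder, not Annie's actual measurement, and a seated pose would be scaled against its own bounding height rather than standing height):

```python
# Assumed numbers for illustration only.
SCAN_HEIGHT_M = 1.65      # standing height of the scanned subject, metres
PRINT_HEIGHT_IN = 6.0     # target height of the printed model, inches

# Convert both to millimetres and take the ratio.
scale = (PRINT_HEIGHT_IN * 25.4) / (SCAN_HEIGHT_M * 1000.0)
print(f"uniform scale factor: {scale:.4f}")  # roughly 0.09, i.e. ~1:11
```

Applying this factor on all three axes before export keeps the model's proportions intact on the printer.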

  This printer allows for full spectrum color and encases the model in a support structure that must be carefully removed, but allows for more intricate features and overhangs on the model.

Annie carefully removes the support structure from her 3D printed model

A duplicate print of Annie’s creation can now be viewed in the display case within the Duderstadt Center’s Fabrication Studio.

Learning Jaw Surgery with Virtual Reality


Jaw surgery can be complex and there are many factors that contribute to how a procedure is done. From routine corrective surgery to reconstructive surgery, the traditional means of teaching these scenarios has been unchanged for years. In an age populated with computers and the growing popularity of virtual reality, students still find themselves moving paper cut-outs of their patients around on a table top to explore different surgical methods.

Dr. Hera Kim-Berman was inspired to change this. Working with the Duderstadt Center’s 3D artists and programmers, a more immersive and comprehensive learning experience was achieved. Hera provided the Duderstadt Center with patient DICOM data. These data sets consisted of a series of two-dimensional MRI images, which were converted into 3D models and then segmented just as they would be during a surgical procedure. The segments were then joined to a model of the patient’s skin, allowing the movement of the various bones to drive real-time changes to the person’s facial structure, now visible from any angle.
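The first step of that conversion is stacking the 2D slices into a 3D volume and isolating the tissue of interest. A minimal NumPy sketch on synthetic data (illustrated here with a CT-style Hounsfield-unit threshold; segmenting MRI intensities uses different criteria, and a real pipeline would read the series with a DICOM library and extract a mesh from the mask, e.g. with marching cubes):

```python
import numpy as np

BONE_HU = 300  # approximate lower Hounsfield-unit bound for bone (CT convention)

def slices_to_bone_mask(slices):
    """Stack 2D slices (in anatomical order) into a 3D volume and
    return a boolean mask of the voxels dense enough to be bone."""
    volume = np.stack(slices, axis=0)   # shape: (n_slices, rows, cols)
    return volume >= BONE_HU

# Tiny synthetic series: four 8x8 slices of 'soft tissue' (~40 HU)
# with a small 'bone' block (~1000 HU) in the middle two slices.
slices = [np.full((8, 8), 40.0) for _ in range(4)]
for s in slices[1:3]:
    s[3:5, 3:5] = 1000.0

mask = slices_to_bone_mask(slices)
print(mask.shape, int(mask.sum()))  # (4, 8, 8) 8
```

Cutting that mask into separate labeled regions is what produces the individually movable bone segments described above.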

This was done for several common practice scenarios (such as correcting an extreme overbite, underbite, or jaw misalignment) and then imported into the Oculus Rift, where hand-tracking controls were developed to allow students to “grab” the bones and adjust them in 3D.

Before re-positioning the jaw segments, the jaw has a shallow profile.

After re-positioning of the jaw segments, the jaw is more pronounced.

As a result, students are now able to gain a more thorough understanding of the spatial movement of bones, and more complex scenarios, such as extensive reconstructive surgery, can be practiced well in advance of seeing a patient for a scheduled surgery.
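The "grab" interaction described above boils down to a simple per-frame rule. This is an illustrative sketch of that logic, not the project's actual implementation (the grab radius is an assumed value): while the trigger is held within reach of a bone segment, the segment follows the controller.

```python
import math

GRAB_RADIUS = 0.08  # metres; assumed reach threshold, not from the project

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

class BoneSegment:
    def __init__(self, position):
        self.position = list(position)
        self.held = False

def update_grab(segment, controller_pos, trigger_pressed):
    """One frame of a minimal grab interaction: grabbing starts when the
    trigger is pressed near the segment, and the segment tracks the
    controller until the trigger is released."""
    if trigger_pressed and (segment.held or
                            dist(segment.position, controller_pos) <= GRAB_RADIUS):
        segment.held = True
        segment.position = list(controller_pos)
    else:
        segment.held = False

# Usage: grab a jaw segment, drag it, then release it in place.
jaw = BoneSegment((0.0, 0.0, 0.0))
update_grab(jaw, (0.05, 0.0, 0.0), trigger_pressed=True)   # within reach: grabbed
update_grab(jaw, (0.50, 0.0, 0.0), trigger_pressed=True)   # still held: follows hand
update_grab(jaw, (0.50, 0.0, 0.0), trigger_pressed=False)  # released
```

In the application this runs per segment per frame, with the skin mesh re-deforming around whichever bones have moved.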

Customer Discovery Using 360 Video


Year after year, students in Professor Dawn White’s Entrepreneurship 411 course are tasked with doing a “customer discovery” – a process in which students interested in creating a business interview professionals in a given field to assess their needs and how products the students develop could address those needs and alleviate some of the difficulties the professionals encounter on a daily basis.

Often when given this assignment, students would defer to their peers for feedback instead of reaching out to strangers working in these fields of interest. Because this demographic is so similar to the students themselves, the results were fairly biased and didn’t truly get to the root of why someone might want or need a specific product. Looking for an alternative approach, Dawn teamed up with her long-time friend, Professor Alison Bailey, who teaches DEI at the University, and Aileen Huang-Saad from Biomedical Engineering, and approached the Duderstadt Center with their idea: What if students could interact with a simulated and more diverse professional to conduct their customer discovery?

After exploring the many routes this could take for development, including options like motion capture-driven CGI avatars, 360 video was chosen as the platform for the simulation. 360 video viewed within an Oculus Rift VR headset ultimately gave the highest sense of realism and immersion when conducting an interview, which was important for making the interview process feel authentic.

Up until this point, 360 videos were largely passive experiences. They did not allow users to tailor the experience based on their choices or interact with the scene in any way. This Customer Discovery project required the 360 videos to be responsive – when a student asked a recognized customer discovery question, the appropriate video response needed to be triggered to play. Development therefore required both programming logic to trigger different videos and integrated voice recognition software, so students could ask a question out loud and have their speech recognized within the application.
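The triggering logic can be sketched as a small lookup: each recognized question maps to a frame range in the footage, anything unrecognized falls back to a generic response, and an idle loop fills the time in between. This is an illustrative stand-in for the behavior described, not the project's Jugular code; the intent names and frame numbers are invented.

```python
# Frame ranges within the 360 footage (invented for the example).
IDLE_CLIP = (0, 120)  # looping "patiently waiting" footage

RESPONSES = {
    "describe_typical_day": (121, 1450),
    "biggest_frustration":  (1451, 2980),
    "tools_used":           (2981, 4100),
}
FALLBACK = (4101, 4700)  # generic response filmed for unplanned questions

def clip_for(intent):
    """Return the (start, end) frame range to play: the matching response
    for a recognized intent, the generic fallback for anything else, and
    the idle loop when no question is pending."""
    if intent is None:
        return IDLE_CLIP
    return RESPONSES.get(intent, FALLBACK)

print(clip_for("tools_used"))  # (2981, 4100)
print(clip_for("weather"))     # falls back: (4101, 4700)
```

Whether the intent comes from a proctor's keypress or from speech recognition, this mapping is the piece that turns a question into the right stretch of video.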

Dawn and Alison sourced three professionals to serve as their simulated actors for this project:

Fritz discusses his career as an IT professional

Fritz – Fritz is a young black man with a career as an IT professional


Cristina – Cristina is a middle aged woman with a noticeable accent, working in education


Charles – Charles is a white adult man employed as a barista

These actors were chosen for their authenticity and diversity, having qualities that might lead interviewers to make certain assumptions or expose biases in their interactions. With the help of talented students at the Visualization Studio, these professionals were filmed responding to various customer discovery questions using the Ricoh Theta 360 camera and a spatial microphone (this allows for spatial audio in VR, so the sound feels like it is coming from the specific direction where the actor is sitting). For footage of one response to blend with the next, the actors had to remember to return their hands and face to the same pose between responses so the footage could be aligned. They were also filmed giving generic responses to any unplanned questions that might be asked, as well as twiddling their thumbs and patiently waiting: footage that could be looped to fill any idle time between questions.

  Once the footage was acquired, the frame ranges for each response were noted and handed off to programmers to implement in the Duderstadt Center’s in-house VR rendering software, Jugular. As an initial prototype of the concept, the application was originally intended to run as a proctored simulation – students engaging in the simulation would wear an Oculus Rift and ask their questions out loud, with the proctor listening in and triggering the appropriate actor response using keyboard controls. For a more natural feel, Dawn was interested in exploring voice recognition to make the process more automated.

Within Jugular, students view an interactive 360 video in which they are seated across from one of three professionals available for interviewing. Using the embedded microphone in the Oculus Rift, they are able to ask questions that are recognized using Dialogflow, which in turn triggers the appropriate video response, allowing students to conduct mock interviews.

With Dawn employing some computer science students to tackle the voice recognition element over the summer, they were able to integrate this feature into Jugular using a Dialogflow agent with Python scripts. Students could now be immersed in an Oculus Rift, speaking to a 360-video-filmed actor, and have their voice interpreted as they asked their questions out loud, using the embedded microphone on the Rift.

Upon its completion, the Customer Discovery application was piloted in the Visualization Studio with Dawn’s students for the Winter 2019 semester.