Behind the Scenes: Re-creating Citizen Kane in VR

Inside a Classic

Stephanie O’Malley


Students in Matthew Solomon’s classes are used to critically analyzing film. Now they get the chance to be the director of arguably one of the most influential films ever made: Citizen Kane.

Using an application developed at the Duderstadt Center with grant funding from LSA Technology Services, students take on the role of the film’s director and record a prominent scene from the movie with a virtual camera. The film set, which no longer exists, has been meticulously re-created in black-and-white CGI from reference photographs of the original. A CGI Orson Welles acts out the scene on repeat; his movements were performed by motion capture actor Matthew Henerson, chosen for his likeness to Welles, and the Welles avatar itself was generated from a photogrammetry scan of Henerson.

Top down view of the CGI re-creation of the film set for Citizen Kane

To estimate the scale of the set for 3D modeling, the team analyzed the original film footage: doorways were measured, actor heights compared, and footsteps counted. With feedback from Citizen Kane expert Harlan Lebo, fine details could be pinned down, right down to the topics of the books on the bookshelves.

Archival photograph provided by Vincent Longo of the original film set

Motion capture actor Matthew Henerson was flown in to play the role of the digital Orson Welles. In a carefully choreographed session directed by Solomon’s PhD student, Vincent Longo, the iconic scene from Citizen Kane was re-enacted while the original footage played on an 80″ TV in the background, ensuring every step aligned perfectly with the original footage.

Actor Matthew Henerson in full mocap attire amidst the makeshift set for Citizen Kane – Props constructed using PVC. Photo provided by Shawn Jackson.

The boundaries of the set were taped on the floor so the data could be aligned to the digitally re-created set. Eight Vicon motion capture cameras, the same type used throughout Hollywood for films like Lord of the Rings and Planet of the Apes, formed a circle around the makeshift set. These cameras track the actor’s motion using infrared light reflected off small markers affixed to the motion capture suit. Any props used during the recording were constructed out of cardboard and PVC (to be 3D modeled later) so as not to obstruct his movements. The three minutes of footage being re-created took three days to capture, comprising over 100 individual mocap takes and several hours of footage, which were then compared for accuracy and stitched together to complete the full route Orson travels through the environment.

Matthew Henerson
Orson Welles

Matthew Henerson then swapped his motion capture suit for an actual suit, similar to the one worn by Orson in the film, and underwent 3D scanning using the Duderstadt Center’s photogrammetry resources.

Actor Matthew Henerson wears asymmetrical markers to assist the scanning process

Photogrammetry is a method of scanning existing objects or people, commonly used in Hollywood and throughout the video game industry to create CGI likenesses of famous actors. The technology has been used in films like Star Wars (an actress similar in appearance to Carrie Fisher was scanned and then further sculpted to create a more youthful Princess Leia), and entire studios are now devoted to photogrammetry scanning. The process relies on several digital cameras surrounding the subject and taking simultaneous photographs.

Matthew Henerson being processed for Photogrammetry

The photos are fed into software that analyzes them on a per-pixel basis, looking for features that appear in multiple photos. When a feature is recognized, it is triangulated using the camera’s focal length and its position relative to other identified features, allowing millions of tracking points to be generated. From these, an accurate 3D model can be produced, with the original digital photos mapped to its surface to preserve photo-realistic color. These models can be further manipulated: sometimes they are sculpted by an artist, or, with the addition of a digital “skeleton,” they can be driven by motion data to become a fully articulated digital character.
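
As a rough illustration of that feature-matching and triangulation step, the sketch below uses Python with OpenCV to match features between two photos and triangulate them into 3D points. It is a minimal two-view example, not the Duderstadt Center’s production pipeline, and the file names and camera matrices are assumed placeholders.

```python
# Minimal two-view triangulation sketch (illustrative only; real photogrammetry
# pipelines match features across dozens of cameras and bundle-adjust the result).
import cv2
import numpy as np

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input photos
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in each photo.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors: "the same feature seen in multiple photos."
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN pixel coordinates
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T

# P1, P2 are 3x4 projection matrices (intrinsics + pose) that would normally
# come from camera calibration; identity-style values are assumed here.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])

# Triangulate each matched feature into a 3D point (homogeneous coordinates).
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_h[:3] / points_h[3]).T  # Nx3 point cloud
print(points_3d.shape)
```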

The 3D-modeled scene and scanned actor were joined with the mocap data and brought into the Unity game engine, where the functionality students would need to film within the 3D set was developed. A virtual camera was built with the same settings you would find on a film camera of that era. Viewing the scene in a virtual reality headset like the Oculus Rift, Solomon’s students can pick up the camera and physically move around to position it anywhere in the CGI environment, often capturing shots that would be difficult to achieve on a conventional film set. The footage students capture within the app can be exported as MP4 video and edited in their editing software of choice, just like any other camera footage.

After Matthew Solomon used the application in his course in Winter 2020, his project with the Duderstadt Center was recently on display as part of the iLRN 2020 Immersive Learning Project Showcase & Competition. With COVID-19 making the conference a remote experience, attendees could experience the Citizen Kane project in virtual reality through the FrameVR platform. Highlighting innovative ways of teaching with VR technologies, the showcase let attendees from around the world learn about the project and watch student edits made with the application.

Citizen Kane on display for iLRN’s 2020 Immersive Learning Project Showcase & Competition using Frame VR

Novels in VR – Experiencing Uncle Tom’s Cabin

A Unique Perspective

Stephanie O’Malley


This past semester, English professor Sara Blair taught a course at the University titled “The Novel and Virtual Realities.” The course was designed to expose students to different methods of analyzing novels and ways of understanding them from new perspectives using platforms like VR and AR.

Designed as a hybrid course, her class was split between a traditional classroom environment and an XR lab, providing a comparison between traditional learning methods and more hands-on, experiential lessons delivered through immersive, interactive VR and AR simulations.

As part of her class curriculum, students were exposed to a variety of experiential XR content. Using the Visualization Studio’s Oculus Rifts, her class was able to view Dr. Courtney Cogburn’s “1000 Cut Journey” installation, a VR experience that puts viewers in the shoes of a Black American man growing up in the time of segregation, allowing them to see firsthand how racism affects every facet of his life. They also had the opportunity to view Asad J. Malik’s “Terminal 3” using augmented reality devices like the Microsoft HoloLens. Students engaging with Terminal 3 see how Muslim identities in the U.S. are approached through the lens of an airport interrogation.

Wanting to create a similar experience for her students at the University of Michigan, Sara approached the Duderstadt Center about the possibility of turning another novel into a VR experience: Uncle Tom’s Cabin.

She wanted her students to understand the novel from the perspective of its lead character, Eliza, during the pivotal moment when, as a slave, she tries to escape her captors and reach freedom. But she also wanted to give her students the perspective of the slave owner and the other slaves tasked with her pursuit, as well as that of an innocent bystander watching the scene unfold.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin

Using Unreal Engine, the Duderstadt Center was able to make this a reality. An expansive winter environment was created based on imagery detailed in the novel, and CGI characters for Eliza and her captors were produced and paired with motion capture data to drive their movements. When students put on the Oculus Rift headset, they can choose to experience the moment of escape through Eliza’s perspective, through that of her captors, or as a bystander. To better evaluate which components contributed to students’ feelings during the simulation, versions of these scenarios were provided with and without sound. With sound enabled as Eliza, you hear footsteps in the snow gaining on you, the crack of the ice beneath your feet as you leap across a tumultuous river, and the barking of a vicious dog at your heels, all adding to the tension of the moment. While viewers are able to look freely around the environment, they are passive observers: they have no control over the choices Eliza makes or where she goes.

Adapted for VR by the Duderstadt Center: Uncle Tom’s Cabin – Freedom for Eliza lies on the other side of the frozen Ohio river.

The scene ends with Eliza reaching freedom on the opposite side of the Ohio River, leaving her pursuers behind. The students’ experience with the VR version of the novel was followed by a deep class discussion on how the scene felt in VR versus how it felt reading the same passage in the book. Some students wondered what it might feel like to control the situation and decide where Eliza goes, or, as a bystander, to move freely through the environment as the scene plays out, choosing which party (Eliza or her pursuers) was most interesting to follow in the moment.

While Sara’s class has concluded for the semester, you can still try this experience for yourself – Uncle Tom’s Cabin is available to demo on all Visualization Studio workstations equipped with an Oculus Rift.

Using Mobile VR to Assess Claustrophobia During an MRI

New Methods for Exposure Therapy

Stephanie O’Malley


Dr. Richard Brown and his colleague Dr. Jadranka Stojanovska had an idea for how VR could be used in a clinical setting. Recognizing that many patients undergoing MRI scans experience claustrophobia, they wanted to use VR simulations to introduce prospective patients to what being inside an MRI machine feels like.

Duderstadt Center programmer Sean Petty and director Dan Fessahazion alongside Dr. Richard Brown

Claustrophobia in this situation is a surprisingly common problem. While 360-degree videos exist that convey what an MRI might look like, they fail to address the major factor contributing to claustrophobia: the perceived confined space within the bore. 360-degree video tends to skew the environment, making the bore seem farther away than it actually is, and so fails to induce the feelings of claustrophobia the real machine would. With funding from the Patient Education Award Committee, Dr. Brown approached the Duderstadt Center to see if a better solution could be produced.

VR MRI: Character customization
A patient enters feet-first into the bore of the MRI machine.

In order to simulate the experience of an MRI accurately, a CGI MRI machine was constructed and ported to the Unity game engine. A customizable avatar representing the viewer’s body was added to give viewers a sense of self. Wearing a VR headset, the viewer sees their avatar body and the true proportions of the MRI machine as they are slowly transported into the bore. Verbal instructions mimic what would be said throughout the course of a real MRI, with the intimidating boom of the machine sounding as the simulated scan proceeds.

Two modes are provided within the app, feet first and head first, to accommodate the most common scanning procedures that have been shown to induce claustrophobia.

To make this accessible to patients, the MRI app was developed with mobile VR in mind, allowing anyone (patients or clinicians) with a VR-capable phone to download the app and use it with a budget-friendly headset like Google Daydream or Cardboard.

Dr. Brown’s VR simulator was recently featured as the cover story in the September edition of Tomography magazine.

Learning Jaw Surgery with Virtual Reality

Jaw surgery can be complex, and many factors contribute to how a procedure is done. From routine corrective surgery to reconstructive surgery, the traditional means of teaching these scenarios has gone unchanged for years. In an age of ubiquitous computing and growing interest in virtual reality, students still find themselves moving paper cut-outs of their patients around on a tabletop to explore different surgical approaches.

Dr. Hera Kim-Berman was inspired to change this. Working with the Duderstadt Center’s 3D artist and programmers, she developed a more immersive and comprehensive learning experience. Hera provided the Duderstadt Center with patient DICOM data: sets of two-dimensional MRI images that were converted into 3D models and then segmented just as they would be during a surgical procedure. These segments were then joined to a model of the patient’s skin, allowing the movement of the various bones to drive real-time changes to the person’s facial structure, now visible from any angle.
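
For readers curious how 2D medical slices become a 3D mesh, here is a hedged sketch of one common approach in Python (pydicom plus scikit-image): stack the slices into a volume, apply a simple intensity threshold, and extract a surface with marching cubes. The file paths and threshold are assumptions, and the Duderstadt Center’s actual tool chain may differ.

```python
# Illustrative DICOM-to-mesh sketch; thresholds and file paths are assumptions.
import glob
import numpy as np
import pydicom
from skimage import measure

# Load a scan series: one 2D slice per file, sorted by slice position.
files = sorted(glob.glob("patient_scan/*.dcm"),
               key=lambda f: float(pydicom.dcmread(f).ImagePositionPatient[2]))
volume = np.stack([pydicom.dcmread(f).pixel_array for f in files])  # (slices, rows, cols)

# Segment bone with a simple intensity threshold (real segmentation is more involved
# and depends on the imaging modality and scanner).
BONE_THRESHOLD = 300  # assumed value
verts, faces, normals, _ = measure.marching_cubes(volume.astype(np.float32),
                                                  level=BONE_THRESHOLD)

# 'verts' and 'faces' define a triangle mesh that can be exported (e.g. as OBJ)
# and imported into a game engine for interactive manipulation.
with open("jaw_segment.obj", "w") as obj:
    for v in verts:
        obj.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for f in faces:
        obj.write(f"f {f[0] + 1} {f[1] + 1} {f[2] + 1}\n")
```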

This was done for several common practice scenarios (such as correcting an extreme overbite or underbite, or a jaw misalignment) and then brought to the Oculus Rift, where hand-tracking controls were developed to allow students to “grab” the bone segments and adjust them in 3D.

Before re-positioning the jaw segments, the jaw has a shallow profile.

After re-positioning of the jaw segments, the jaw is more pronounced.

As a result, students are now able to gain a more thorough understanding of the spatial movement of the bones, and more complex scenarios, such as extensive reconstructive surgery, can be practiced well in advance of seeing a patient for a scheduled surgery.

Unreal Engine in Stereoscopic Virtual Reality

Until now, the Oculus Rift has been the go-to system for gamers seeking an immersive experience, offering stereo compatibility with game engines like Unreal and Unity 3D. Recently, the Duderstadt Center pushed this experience even further, with graphics programmer Sean Petty adapting the Unreal Engine to work within the Duderstadt Center’s M.I.D.E.N., a fully immersive, stereoscopic 3D virtual reality environment.

Visitors entering the MIDEN are equipped with a pair of stereo glasses and a game controller, both outfitted with reflective markers that are tracked by a series of Vicon cameras positioned around the top perimeter of the space. The existing capabilities of the MIDEN allow viewers not only to explore a space beyond the confines of the 10’x10′ room, but also to interact with objects using the provided game controller.

The services of the Duderstadt Center are open to all departments within the University, making visualization services, professional studio spaces, and exploratory technology accessible to artists, engineers, architects, and more. The diverse atmosphere of the Digital Media Commons generates a multitude of cross-curricular collaborative projects each year, from live performances featuring orchestras manipulated via brain waves to exploring the anatomy of a digital cadaver in virtual reality.

In the past, the Duderstadt Center’s MIDEN has been used to prototype architectural spaces, host artistic installations, assess human behavior, and simulate training scenarios. Incorporating the Unreal Engine into a space like the MIDEN allows visitors to experience a level of realism never before achieved in this sort of environment, opening new doors not just for gamers, but for those seeking high-quality visualizations for research and exploration. Unreal Engine brings a wide range of materials and visual effects to any scene, from realistic water, foliage, and glass to effects like fire and transitions in the time of day.

Sean Petty, graphics programmer of the Duderstadt Center, explains his process for getting Unreal to operate from within the MIDEN:

The MIDEN requires us to render a different view of the scene to each of the four walls from the perspective of the user. To do this, we must track the location and orientation of the user’s eyes, which is accomplished by motion tracking a pair of glasses worn by the user. In the MIDEN, a dedicated computer performs the necessary calculations, so the first step to enabling MIDEN support in Unreal is to modify the engine to interface with this computer.

Visitors to the MIDEN are motion tracked within the space via reflective markers placed around a pair of stereo glasses and a handheld game controller. These markers are monitored by eight Vicon cameras located along the perimeter of the MIDEN.

Once the location of the user has been determined, we must project the user’s view onto each of the four walls. When rendering a scene in a standard desktop environment, the camera is positioned at the center of the screen. A centered camera only requires a symmetric frustum projection, which is the native transformation supported by Unreal. In the MIDEN, the camera may be anywhere within the space and will often not be centered on a screen. This requires an asymmetric frustum projection, functionality that had to be added to the engine.
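
For the curious, an asymmetric (off-axis) frustum can be built from the tracked eye position and the corners of a wall, in the style of Kooima’s generalized perspective projection. The numpy sketch below is illustrative only; the wall corners and eye position are made-up values, and this is not the actual engine modification.

```python
# Off-axis projection sketch: build a projection matrix for one MIDEN wall
# given the tracked eye position and the wall's corner positions (illustrative).
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """pa, pb, pc: lower-left, lower-right, upper-left wall corners (world space).
    pe: tracked eye position. Returns a 4x4 OpenGL-style projection matrix."""
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal (toward viewer)

    va, vb, vc = pa - pe, pb - pe, pc - pe           # eye-to-corner vectors
    d = -np.dot(va, vn)                              # eye-to-screen distance

    # Asymmetric frustum extents at the near plane.
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d

    P = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    # A full implementation also rotates into the screen's basis and translates
    # the eye to the origin; those steps are omitted here for brevity.
    return P

# Example: a front wall with the eye slightly off-center (assumed numbers, in feet).
wall_ll = np.array([-5.0, 0.0, -5.0])
wall_lr = np.array([ 5.0, 0.0, -5.0])
wall_ul = np.array([-5.0, 10.0, -5.0])
eye     = np.array([ 1.0, 5.5, 0.0])
print(off_axis_projection(wall_ll, wall_lr, wall_ul, eye, near=0.1, far=100.0))
```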

Images for each wall are projected through a corresponding projector located behind the walls of the MIDEN. The floor is projected using a mirror located at the top of the space.

Unreal has native support for stereo by rendering the left and right views next to each other in a single image. This setup is used for devices such as the Oculus Rift, where the images for the left and right eyes are displayed at the same time. The MIDEN uses a technology called “active stereo,” in which the displayed image flickers rapidly between the left and right images. This requires a modification to the engine so the left and right images are rendered to two separate buffers rather than to two sides of a single image.
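
Conceptually, rendering for active stereo follows the quad-buffer pattern sketched below (PyOpenGL, assuming a GPU and display that expose separate left and right back buffers). Unreal’s actual modification lives inside the engine’s rendering pipeline rather than in raw OpenGL calls.

```python
# Quad-buffered ("active") stereo sketch: the left and right eye views go to
# separate back buffers, and the display flickers between them in sync with
# shutter glasses. Illustrative only; draw_scene() is a placeholder.
from OpenGL.GL import (glDrawBuffer, glClear, GL_BACK_LEFT, GL_BACK_RIGHT,
                       GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT)

def render_stereo_frame(draw_scene, left_view, right_view):
    glDrawBuffer(GL_BACK_LEFT)                       # target the left-eye buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    draw_scene(left_view)                            # render with the left-eye camera

    glDrawBuffer(GL_BACK_RIGHT)                      # target the right-eye buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    draw_scene(right_view)                           # render with the right-eye camera
    # A buffer swap would then present both, synced to the glasses.
```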

Unreal Engine as seen from within the Duderstadt Center’s virtual reality MIDEN. The MIDEN is a 10’x10′ room comprising five walls with stereoscopic projection. Visitors are tracked using Vicon cameras, allowing them to travel beyond the confines of the physical space.

The final step for displaying Unreal scenes in the MIDEN is to get the four rendering computers communicating with each other. This ensures that when the user moves, all the screens update appropriately to give a consistent view of the scene. The networking is accomplished using Unreal‘s built-in network replication functionality, which is designed for use in multiplayer games.

With this latest development, researchers across all disciplines can now use this technology to reproduce lifelike environments for their studies, giving subjects a deeply immersive experience. It is hoped that the higher level of immersion offered by the Unreal Engine will have a dramatic impact on studies involving human behavior and environmental effects.

In addition to incorporating Unreal, the MIDEN also continues to operate using an in-house engine developed by Ted Hall & Sean Petty, called “Jugular,” which provides support for a broad range of models, materials, and interactivity. While Unreal offers finer elements of photo-realism for mesh-based geometry, Jugular supports easier import of a wider range of file types from a variety of sources, including not only meshes but also solid volumes and informatics graphs.

Virtual Reality 3-D Brain Helps Scientists Understand Migraine Pain

Dr. Alex DaSilva Photo Credit: Scott Soderberg, Michigan Photography

From U-M News:

ANN ARBOR—Wielding a joystick and wearing special glasses, pain researcher Alexandre DaSilva rotates and slices apart a large, colorful, 3-D brain floating in space before him.

Despite the white lab coat, it appears DaSilva’s playing the world’s most advanced virtual video game.  The University of Michigan dentistry professor is actually hoping to better understand how our brains make their own pain-killing chemicals during a migraine attack.

The 3-D brain is a novel way to examine data from images taken during a patient’s actual migraine attack, says DaSilva, who heads the Headache and Orofacial Pain Effort at the U-M School of Dentistry and the Molecular and Behavioral Neuroscience Institute.

Different colors in the 3-D brain give clues about chemical processes happening during a patient’s migraine attack, based on data from a PET scan, or positron emission tomography, a type of medical imaging.

“This high level of immersion (in 3-D) effectively places our investigators inside the actual patient’s brain image,” DaSilva said.

Through some innovative work done by Dr. Alexandre DaSilva and his team in the School of Dentistry, the Duderstadt Center was presented with exciting new data showing activation in the brain *during* a migraine attack; most existing data captures the brain only before or after an attack. Sean Petty and Ted Hall worked closely with Dr. DaSilva to interpret the data and add new tools to Jugular, our in-house 3D engine, for exploring volumetric data such as fMRI and CT scans. Dr. DaSilva can now explore the brain data by simply walking around it and interactively cutting through it.
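
Interactive cutting of this kind amounts to resampling the volume along a moving plane. A minimal sketch of an axis-aligned cut through a brain volume, in Python with numpy and matplotlib, is shown below; the file name and array layout are assumptions, and Jugular’s own implementation is not shown here.

```python
# Minimal cut-plane sketch: pull a 2D slice out of a 3D brain volume at an
# arbitrary depth along one axis (illustrative; real tools cut at any angle).
import numpy as np
import matplotlib.pyplot as plt

volume = np.load("pet_brain_volume.npy")   # hypothetical (z, y, x) activation volume

def axial_slice(vol, fraction):
    """Return the axial slice at a fractional depth (0.0 = bottom, 1.0 = top)."""
    z = int(fraction * (vol.shape[0] - 1))
    return vol[z]

plt.imshow(axial_slice(volume, 0.5), cmap="inferno")  # color maps activation levels
plt.title("Axial cut at mid-depth")
plt.show()
```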

Article: Measurable Domain for Colour Differences within a Virtual Environment

Light & Engineering (vol. 20, no. 3, 2012) | Светотехника (2 • 2012)

Professor Moji Navvab has published another article on his lighting analysis of virtual reality: “Measurable Domain for Colour Differences within a Virtual Environment” (“Область Поддающихся Измерению Цветовых Различий в Виртуальной Среде”), in Светотехника (Light & Engineering).

UROP Summer Symposium 2012 (Kinect, Virtual Reality, and iOS)

Rachael Miller and Rob Soltesz presented their summer work on Kinect development, natural user interfaces, and capturing emotive qualities of users at the 2012 UROP Symposium for MSTEM. Rachael won a Blue Ribbon at the event for her poster, and they are the first (that we know of) to have successfully used multiple Kinects in an immersive virtual reality space for virtual physical presence.

Rachael focused on creating a natural user interface for immersive 3D environments by combining multiple Kinects to produce a more robust skeleton. This stable and predictable skeleton allowed her to wrap virtual (invisible) objects around the user’s limbs and torso, effectively letting people interact with virtual objects without markers or special tracking devices. Beyond simple interaction with virtual objects, she developed several gestures for navigating virtual reality.
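
In spirit, fusing skeletons from multiple Kinects looks like the sketch below: each sensor’s joints are transformed into a shared room coordinate frame using a calibration transform found beforehand, then averaged. The function names and matrices are hypothetical; Rachael’s actual implementation ran inside the lab’s rendering environment.

```python
# Hypothetical multi-Kinect skeleton fusion: average each joint across sensors
# after moving every skeleton into one shared "room" coordinate frame.
import numpy as np

def to_room_frame(joints_xyz, sensor_to_room):
    """joints_xyz: (J, 3) joint positions from one Kinect.
    sensor_to_room: 4x4 calibration matrix for that Kinect."""
    homogeneous = np.hstack([joints_xyz, np.ones((joints_xyz.shape[0], 1))])
    return (homogeneous @ sensor_to_room.T)[:, :3]

def fuse_skeletons(per_sensor_joints, calibrations):
    """Average corresponding joints from several Kinects into one skeleton."""
    aligned = [to_room_frame(j, T) for j, T in zip(per_sensor_joints, calibrations)]
    return np.mean(aligned, axis=0)   # (J, 3) fused joint positions

# A fused skeleton like this can then be wrapped with invisible collision
# shapes so the user's limbs push against virtual objects.
```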

Rob worked with Rachael on aspects of her project but also looked into using the Kinect’s multiple microphones and internal voice recognition capabilities to extract emotive qualities from the user inside a virtual reality space.

Andrew Janke also presented at a second UROP symposium on his work with iOS connectivity for a variety of applications. Getting data off of an iOS device is not always trivial, and formatting that data into a PDF and sending it via email to a specific individual can be a challenge. Andrew developed a process that allows arbitrary iOS applications to send data over simple sockets, after which it can be formatted and sent via email. This functionality was required by a few of our applications in development and proved extremely useful.
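
A rough sketch of that kind of relay is shown below, under assumptions about the wire format and mail server (all names here are hypothetical, and the PDF formatting step is omitted): a small Python service accepts data from the iOS app over a plain TCP socket and forwards it by email.

```python
# Hypothetical relay: accept data from an iOS app over a plain TCP socket,
# then forward it to a recipient by email (PDF formatting step omitted).
import json
import socket
import smtplib
from email.message import EmailMessage

HOST, PORT = "0.0.0.0", 9000          # assumed listening address
SMTP_HOST = "smtp.example.edu"        # assumed mail server

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind((HOST, PORT))
    server.listen(1)
    conn, _ = server.accept()
    with conn:
        payload = b""
        while chunk := conn.recv(4096):   # read until the client closes
            payload += chunk

record = json.loads(payload)              # JSON payload sent by the iOS app

msg = EmailMessage()
msg["Subject"] = "Submission from iOS app"
msg["From"] = "relay@example.edu"
msg["To"] = record.get("recipient", "staff@example.edu")
msg.set_content(json.dumps(record, indent=2))

with smtplib.SMTP(SMTP_HOST) as smtp:
    smtp.send_message(msg)
```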

All students did a great job over the summer and we’re excited to be a part of the UROP program at the University of Michigan.

Kinect in Virtual Reality – M.I.D.E.N. Test


The Kinect exploded onto the gaming and natural user interface scene. People had it hacked within a few days, and a collective desire to see how a depth-sensing camera could be used was born. Caught up in the same energy, the Duderstadt Center started experimenting with the early hacks and seeing how they could be combined with other technology. After some initial tests, and the release of the official SDK from Microsoft, we dove into deeper development with the device.

In an effort to improve interactivity in the MIDEN, the Kinect has been applied as a way of representing the physical body in a virtual space. By analyzing the data received from the Kinect, the Duderstadt Center’s rendering engine can create a digital model of the body. This body represents an avatar that corresponds to the user’s location in space, allowing them to interact with virtual objects. Because the MIDEN offers the user perspective and depth perception, interaction feels more natural than maneuvering an avatar on a screen; the user can reach out and directly “touch” objects.

Virtual Jet Ski Driving Simulator

The Virtual Jet Ski Driving Simulator allows a user to drive a jet ski (or personal watercraft) through a lake environment presented in the immersive virtual reality MIDEN system. The user sits on a jet ski mockup and controls the ride via handlebar and throttle. While the mockup itself is stationary, the environment changes dynamically in response to handlebar and throttle operation, creating a very convincing feeling of driving a jet ski. The virtual reality system provides head-referenced stereo viewing and a realistic, full-scale representation of the environment.
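
At its core, such a simulator runs a simple update loop that turns handlebar and throttle input into motion each frame. The toy Python version below uses made-up constants and is not the USCG simulator’s actual dynamics model.

```python
# Toy jet ski update loop: handlebar angle steers the heading, throttle drives
# speed, and the environment/camera moves while the physical mockup stays still.
import math

class JetSkiState:
    def __init__(self):
        self.x = self.y = 0.0      # position on the lake (meters)
        self.heading = 0.0         # radians
        self.speed = 0.0           # meters per second

def update(state, handlebar, throttle, dt):
    """handlebar in [-1, 1], throttle in [0, 1], dt in seconds (assumed ranges)."""
    MAX_TURN_RATE = 1.2            # rad/s at full lock (made-up constant)
    MAX_ACCEL = 3.0                # m/s^2 at full throttle (made-up constant)
    DRAG = 0.4                     # water resistance coefficient (made-up constant)

    state.speed += (MAX_ACCEL * throttle - DRAG * state.speed) * dt
    # Steering authority grows with speed, like a real watercraft.
    state.heading += MAX_TURN_RATE * handlebar * min(state.speed / 5.0, 1.0) * dt
    state.x += math.cos(state.heading) * state.speed * dt
    state.y += math.sin(state.heading) * state.speed * dt
    return state
```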

The simulator was developed to study human risk factors related to the operation of a personal watercraft (PWC). In recreational boating, PWCs are involved in a disproportionate number of accidents. Using the simulator, accident scenarios can be simulated and the reactions of PWC operators in specific situations can be studied. The simulator provides a cost-effective analysis tool for regulators and equipment designers as well as a training device for PWC operators, enforcers, and educators.

The simulator was developed for the U.S. Coast Guard (USCG) by the University of Michigan Virtual Reality Laboratory and the Research Triangle Institute. It is now in the process of being revived with help from the Undergraduate Research Opportunity Program (UROP).