Students Learn 3D Modeling for Virtual Reality

Making Tiny Worlds

Stephanie O’Malley


ArtDes240 is a course offered by the Stamps School of Art & Design and taught by Stephanie O’Malley that teaches students 3D modeling and animation. As one of only a few 3D digital classes offered at the University of Michigan, AD240 draws students from several schools across campus looking to gain a better understanding of 3D art as it pertains to the video game industry.

The students in AD240 are given a crash course in 3D modeling in 3D Studio Max and level creation within the Unreal Editor. It is within Unreal that their objects are positioned, terrain is sculpted, and atmospheric effects such as time of day, weather, and fog are added.
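
To give a flavor of that last step, here is a minimal sketch of adding fog to a level with Unreal’s optional Python editor scripting (assuming the Python Editor Script Plugin is enabled; exact API names vary by engine version). In the course this is done interactively in the editor; the snippet only mirrors that step in script form, and the density value is an arbitrary example.

```python
# Minimal sketch: adding an atmospheric fog actor to an Unreal level via the
# editor's Python scripting plugin. AD240 students do this interactively in
# the Unreal Editor; this only illustrates the same step programmatically.
import unreal

# Spawn an Exponential Height Fog actor at the world origin.
fog_actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.ExponentialHeightFog,
    unreal.Vector(0.0, 0.0, 0.0),
)

# Thicken the fog slightly to suggest an early-morning atmosphere
# (0.05 is an arbitrary example value).
fog_component = fog_actor.get_components_by_class(unreal.ExponentialHeightFogComponent)[0]
fog_component.set_editor_property("fog_density", 0.05)
```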

“Candyland” – Elise Haadsma & Heidi Liu, developed using 3D Studio Max and Unreal Engine

With just five weeks to model an entire environment, bring it into Unreal, package it as an executable, and test it in the MIDEN (or on the Oculus Rift), the students produced truly impressive projects. Art & Design students Elise Haadsma and Heidi Liu took inspiration from the classic board game Candyland to create a life-size game-board environment in Unreal, complete with a lollipop forest, mountains of Hershey’s Kisses, a gingerbread house, and a chocolate river.

Lindsay Balaka, from the School of Music, Theatre & Dance, chose to create her scene using the Duderstadt Center’s in-house rendering software, Jugular, instead of Unreal Engine. Her creation, “Galaxy Cakes,” is a highly stylized cupcake shop reminiscent of an episode of the 1960s cartoon The Jetsons, complete with spatial audio emanating from the corner jukebox.

Lindsay Balaka’s “Galaxy Cakes” environment
An abandoned school, created by Vicki Liu in 3D Studio Max and Unreal Engine

Vicki Liu, also of Art & Design, created a realistic horror scene in Unreal. After navigating down a poorly lit hallway of an abandoned nursery school, you find yourself in a run-down classroom inhabited by some kind of madman: a tally of days passed has been scratched into the walls, an eerie message is scrawled on the chalkboard, and furniture haphazardly barricades the windows.

While the goal of the final project was to create a traversable environment for virtual reality, some students took it a step further.

Art & Design student Gus Schissler created an environment composed of neurons in Unreal, intended for viewing within the Oculus Rift. He then integrated data from an Epoch headset (a device capable of reading brain waves) to allow the viewer to interact with the environment using thought alone. The viewer’s mood, as picked up by the Epoch, changed the way the environment looked by adjusting the intensity of the light emitted by the neurons, and the viewer could also think specific commands (push, pull, etc.) to navigate past various obstacles in the environment.
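
The interaction can be pictured as a simple mapping from headset readings to scene parameters. The sketch below is purely illustrative and not Gus’s actual code: read_mood() and read_command() are hypothetical stand-ins for whatever values the Epoch’s software exposes.

```python
# Illustrative sketch only: read_mood() and read_command() are hypothetical
# stand-ins for the headset's real outputs. The idea is simply to map a mood
# reading onto neuron light intensity and a thought command onto movement.

def update_scene(scene, headset):
    mood = headset.read_mood()          # hypothetical: 0.0 (calm) .. 1.0 (excited)
    command = headset.read_command()    # hypothetical: "push", "pull", or None

    # Brighter neurons when the viewer is more excited.
    scene.set_neuron_emissive_intensity(1.0 + 4.0 * mood)

    # Nudge the viewer forward or backward past obstacles.
    if command == "push":
        scene.move_viewer(forward=+0.5)
    elif command == "pull":
        scene.move_viewer(forward=-0.5)
```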

Students spent the last two weeks of the semester scheduling time with Ted Hall and Sean Petty to test their scenes and ensure everything ran and looked correct on the day of their presentations. The class not only introduced students to the design process but also gave them hands-on experience with emerging technologies as virtual reality continues to expand in the game and film industries.

Student Gus Schissler demonstrates his neuron environment for the Oculus Rift, which uses input from an Epoch headset for interaction.

Stamps Student Uses Photogrammetry to Miniaturize Herself

Stamps student Annie Turpin came to the Duderstadt Center with an idea for her sophomore studio project: she wanted to create a hologram system, similar to a “Pepper’s Pyramid” or “Pepper’s Ghost” display, that would allow her to project a miniaturized version of herself inside a pinhole camera.

Pepper’s Ghost relied on a carefully placed pane of glass to give the illusion of a transparent figure

The concept of Pepper’s Pyramid is derived from an illusion technique created by John Henry Pepper in 1862. Originally dubbed “Pepper’s Ghost,” the trick relied on a large pane of glass to reflect an illuminated room or person hidden from view. This gave the impression of a “ghost” and became a technique frequently used in theatre to create phantasmagoria. Similar methods are still used today, often substituting Mylar foil for glass and using CG content (as in the 2012 Coachella performance in which a “holographic” Tupac was resurrected to perform alongside Dr. Dre).

Pepper’s Pyramid takes the concept of Pepper’s Ghost and gives it three dimensions, using a pyramid of Plexiglas instead of mirrors.

“Pepper’s Pyramid” is a similar concept. Instead of a single pane of glass reflecting a single angle, a video is duplicated four times and projected downward onto a pyramid of Plexiglas, allowing the illusion to be viewed from multiple angles and the content to be animated.
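
The four-way duplication can be pictured as tiling one frame around the center of a square video, with one copy facing each side of the pyramid. Below is a rough sketch of that layout using OpenCV and NumPy; the file names, tile size, and rotation choices are placeholders.

```python
# Rough sketch of the four-way "pyramid hologram" video layout: one frame is
# copied four times and arranged around the centre of a black canvas, so each
# face of the Plexiglas pyramid reflects its own copy.
import cv2
import numpy as np

frame = cv2.imread("scan_frame.png")             # one frame of the turntable render
frame = cv2.resize(frame, (300, 300))
size = 3 * 300                                   # square canvas, three tiles wide
canvas = np.zeros((size, size, 3), dtype=np.uint8)

# Top, bottom, left and right copies, each rotated to face its pyramid side.
canvas[0:300, 300:600]   = cv2.rotate(frame, cv2.ROTATE_180)
canvas[600:900, 300:600] = frame
canvas[300:600, 0:300]   = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
canvas[300:600, 600:900] = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)

cv2.imwrite("pyramid_layout.png", canvas)
```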

For her project, Annie re-created a small version of Pepper’s Pyramid to fit inside a pinhole camera she had constructed, using a mobile phone in place of a monitor to play the video. She then had herself 3D scanned using the Duderstadt Center’s photogrammetry rig to generate a realistic 3D model of herself, which was animated and exported as an MP4 video.

Annie’s pinhole camera

The process of photogrammetry allows an existing object or person to be converted into a full-color, highly detailed 3D model. This is done using a series of digital photographs captured 360 degrees around the subject. While photogrammetry can be done at home for most static subjects, the Duderstadt Center’s photogrammetry resources are set up to allow moving subjects, like people, to be scanned as well. The process uses surface detail on the subject to plot points in 3D space and construct a 3D model. Scans of people can even be given a digital skeleton to drive their motion and be animated as CG characters. Annie’s resulting scan was animated to rotate in place and projected onto the Plexiglas pyramid as a “hologram” for viewing through her pinhole camera.
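
Conceptually, the point-plotting step works by matching the same surface details across photos taken from different angles and triangulating each match into a 3D position. The sketch below shows that core idea for just two views using OpenCV; the camera intrinsics are assumed values, and a real pipeline (including the rig’s software) uses many photos and far more robust processing.

```python
# Conceptual sketch of the core photogrammetry step: match surface detail
# between two photos of the subject and triangulate matched points into 3D.
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[2000.0, 0, 960], [0, 2000.0, 540], [0, 0, 1]])  # assumed intrinsics

# Detect and match distinctive surface features between the two views.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the relative camera pose, then triangulate matches into 3D points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (points_h[:3] / points_h[3]).T   # sparse point cloud of the subject
```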

The result of 3D printing Annie’s photogrammetry scan

Annie made use of photogrammetry again the following year, when she had herself 3D scanned once more, this time for the purpose of 3D printing the resulting model for a diorama. In this instance, she was scanned in what is referred to as a “T-pose,” a pose in which the subject stands with their arms and legs apart so that their limbs can be articulated into a different position later. After Annie’s model was generated, it was posed to show her sitting in a computer chair working on a laptop. This model was sent to the Duderstadt Center’s J750 color 3D printer to produce a 3D printed model 6″ high.
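
As a small illustration of prepping such a scan for printing, the sketch below scales a posed mesh so it stands about 6″ tall, assuming the trimesh library, a placeholder file name, and millimeter units; it is not the actual production workflow used for Annie’s print.

```python
# Sketch: scale a posed photogrammetry scan so it prints about 6 inches tall.
import trimesh

mesh = trimesh.load("posed_scan.obj")        # placeholder file name

# Current height of the model along the Z axis (bounding-box extent, in mm).
height_mm = mesh.extents[2]

# Scale uniformly so the printed figure stands ~6 inches (152.4 mm) tall.
target_mm = 6 * 25.4
mesh.apply_scale(target_mm / height_mm)

mesh.export("posed_scan_6in.stl")            # ready for the printer's software
```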

This printer allows for full-spectrum color and encases the model in a support structure that must be carefully removed, but it allows for more intricate features and overhangs on the model.

Annie carefully removes the support structure from her 3D printed model

A duplicate print of Annie’s creation can now be viewed in the display case within the Duderstadt Center’s Fabrication Studio.

Learning Jaw Surgery with Virtual Reality

Jaw surgery can be complex, and many factors contribute to how a procedure is done. From routine corrective surgery to reconstructive surgery, the traditional means of teaching these scenarios have remained unchanged for years. In an age of ubiquitous computing and growing interest in virtual reality, students still find themselves moving paper cut-outs of their patients around on a tabletop to explore different surgical approaches.

Dr. Hera Kim-Berman was inspired to change this. Working with the Duderstadt Center’s 3D artist and programmers, she developed a more immersive and comprehensive learning experience. Hera provided the Duderstadt Center with patient DICOM data: sets of two-dimensional MRI images that were converted into 3D models and then segmented just as they would be during a surgical procedure. These segments were then joined to a model of the patient’s skin, allowing the movement of the various bones to drive real-time changes to the person’s facial structure, now visible from any angle.
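
The general idea of turning a stack of 2D DICOM slices into a 3D surface can be sketched as below, using pydicom and scikit-image’s marching cubes; the paths and the bone-intensity threshold are placeholders, and the project itself relied on dedicated tools such as Magics and ZBrush rather than a script like this.

```python
# Hedged sketch: stack 2D DICOM slices into a volume, then extract a bone
# surface as a triangle mesh. Placeholder paths and threshold value.
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure

# Load every slice in the series and stack them into a 3D volume.
slices = [pydicom.dcmread(p) for p in sorted(Path("dicom_series").glob("*.dcm"))]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices], axis=0)

# Extract an isosurface at an intensity that roughly separates bone from soft
# tissue, producing vertices and triangle faces for a 3D model.
verts, faces, normals, _ = measure.marching_cubes(volume, level=300)
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
```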

This was done for several common practice scenarios (such as correcting an extreme overbite, underbite, or jaw misalignment), which were then brought into the Oculus Rift, where hand-tracking controls were developed to allow students to “grab” the bones and adjust them in 3D.

Before re-positioning the jaw segments, the jaw has a shallow profile.

After re-positioning of the jaw segments, the jaw is more pronounced.

As a result, students are now able to gain a more thorough understanding of the spatial movement of the bones, and more complex scenarios, such as extensive reconstructive surgery, can be practiced well in advance of a scheduled procedure.

Steel Structures – Collaborative Learning with Oculus Rift

Civil & Environmental Engineering: Design of Metal Structures (CEE413) uses a cluster of Oculus Rift head-mounted displays to visualize buckling metal columns in virtual reality. The cluster is configured in the Duderstadt Center’s Jugular software so that the instructor leads a guided tour using a joystick while three students follow along. This configuration allows the instructor to control movement around the virtual structure while the students are free to look around on their own.
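
The leader/follower arrangement can be thought of as the instructor’s station repeatedly sharing its navigation pose with the student stations, which apply that movement while each student’s head orientation stays local. The sketch below illustrates that idea with a simple UDP broadcast; it is not Jugular’s actual protocol, and the addresses and message format are assumptions.

```python
# Illustrative sketch of a leader/follower cluster: the instructor's station
# periodically sends its viewpoint so follower stations can apply the same
# navigation. Addresses and message format are made up for the example.
import json
import socket
import time

FOLLOWERS = [("10.0.0.11", 5005), ("10.0.0.12", 5005), ("10.0.0.13", 5005)]
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast_pose(position, yaw):
    """Send the instructor's current navigation pose to every follower."""
    message = json.dumps({"pos": position, "yaw": yaw, "t": time.time()}).encode()
    for address in FOLLOWERS:
        sock.sendto(message, address)

# Example: the joystick moved the instructor to x=1.2 m, z=3.4 m, facing 90 degrees.
broadcast_pose([1.2, 0.0, 3.4], 90.0)
```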

Developed in collaboration with the Visualization Studio using the Duderstadt Center’s Jugular software, the simulation can run both on an Oculus Rift and within the MIDEN.

Art Students Model With Photogrammetry

The Stamps School of Art and Design features a fabrication class called Bits and Atoms. This course is taught by Sophia Brueckner and it focuses on detailed and accurate modeling for 3D digital fabrication and manufacturing.

Sophia brought her students to the Duderstadt Center to use our new photogrammetry rig. The rig features three cameras that photograph a subject placed on a rotating platform, with each photograph capturing a different angle of the subject. When the photos are imported into photogrammetry software, the program tracks shared reference points across them to construct a 3D model of the subject. This process is called photogrammetry.

The art students created digital models of themselves by sitting on the rotating platform. Their 3D models were then manipulated using Rhino and Meshmixer.

Robert Alexander’s “Audification Explained” Featured on BBC World Service

Sonification is the conversion of data sets into sound. Robert Alexander II is a sonification specialist working with NASA who uses satellite recordings of the sun’s emissions to discover new solar phenomena. The Duderstadt Center worked with Robert to produce a short video explaining the concept of data audification.
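
Audification, in its simplest form, treats the data samples themselves as an audio waveform played back fast enough to reach the audible range. The sketch below shows that idea with NumPy and SciPy; the input is synthetic placeholder data, not actual solar measurements.

```python
# Minimal audification sketch: treat a data series as audio samples and play
# it back at an audio sample rate so slow variations become audible.
import numpy as np
from scipy.io import wavfile

# Placeholder "measurements": a slow oscillation with noise.
data = np.sin(np.linspace(0, 2000 * np.pi, 10**6)) + 0.1 * np.random.randn(10**6)

# Normalize to the 16-bit range and write it out at 44.1 kHz, so a long,
# slowly varying signal becomes a few seconds of audible sound.
samples = np.int16(32767 * data / np.max(np.abs(data)))
wavfile.write("audified.wav", 44100, samples)
```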

Recently Robert was featured in a BBC World Service clip along with his video about making music from the sun: http://www.bbc.co.uk/programmes/p03crzsv

Lia Min: RAW, April 7–8

Research fellow Lia Min will be exhibiting “RAW” in the 3D Lab’s MIDEN April 7 & 8 from 4 to 6 pm; all are welcome to attend. The exhibit is an intersection of art and science, assembled through her training as a neuroscientist. Her data set, commonly referred to as a “Brainbow,” focuses on a lobe of a fruit fly brain at the base of an antenna. The visualization scales microns to centimeters, enlarging the specimen to an overall visual volume of about 1.8 x 1.8 x 0.4 meters.

An Application for Greek Transcription

Practice is the only way to learn a new language. However, when learning ancient languages such as Greek, it can be difficult to get immediate, reliable feedback on practice work. That is why Professor Pablo Alvarez, who works in papyrology, is working with Duderstadt Center student programmer Edward Wijaya to create an app that lets students practice transcribing ancient Greek manuscripts into digital text.

The app is divided into three modes: professor/curator mode, student mode, and discovery mode. Professor/curator mode allows the curator to upload a picture of a manuscript and post a line-by-line digital transcription of the document; these transcriptions serve as the “answers” for that document. In student mode, students transcribe the manuscripts themselves, and when they click the check button they are given a line-by-line comparison against the curator’s answers. Finally, discovery mode allows individuals with no Greek training to learn about the letters and read descriptions of the notations used.
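
The line-by-line check can be pictured as a plain text comparison between the student’s lines and the curator’s. The sketch below illustrates the idea with Python’s difflib; it is not the app’s actual implementation, and the sample strings are placeholders.

```python
# Sketch of a line-by-line transcription check against the curator's answers.
import difflib

curator_lines = ["εν αρχη ην ο λογος", "και ο λογος ην προς τον θεον"]
student_lines = ["εν αρχη ην ο λογος", "και ο λογος ην προς τον θεων"]

for i, (answer, attempt) in enumerate(zip(curator_lines, student_lines), start=1):
    if attempt == answer:
        print(f"line {i}: correct")
    else:
        # Show exactly which characters differ from the curator's transcription.
        diff = difflib.ndiff(answer, attempt)
        changes = [d for d in diff if d.startswith(("-", "+"))]
        print(f"line {i}: check {changes}")
```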

A wide variety of fragile manuscripts, which are often inaccessible to students, are available in the app, allowing students to gain experience with diverse handwriting and histories.

Surgical Planning for Dentistry: Digital Manipulation of the Jaw

CT data was brought into ZBrush and Topogun to be segmented and re-topologized. Influence was then added to the skin mesh, allowing it to deform as the bones were manipulated.

Hera Kim-Berman is a Clinical Assistant Professor with the University of Michigan School of Dentistry. She recently approached the Duderstadt Center with an idea that would allow surgeons to prototype jaw surgery using patient-specific data extracted from CT scans. Hera’s concept involved digitally manipulating portions of the skull in virtual reality, just as surgeons would when physically working with a patient, allowing them to preview different scenarios and evaluate how effective a procedure might be before surgery.

Before re-positioning the jaw segments, the jaw has a shallow profile.

After receiving the CT scan data, Shawn O’Grady extracted 3D meshes of the patient’s skull and skin using Magics. From there, Stephanie O’Malley worked with the models to make them interactive and suitable for real-time platforms. This involved bringing the skull into software such as ZBrush and creating slices in the mesh corresponding to areas identified by Hera as places where the skull would potentially be segmented during surgery. The mesh was also optimized to run at a higher frame rate in real-time engines, and the skin mesh was “re-topologized,” a process that allows it to deform more smoothly. From there, the segmented pieces of the skull were re-assembled and assigned influence over areas of the skin in a process called “rigging.” This allows areas of the skin to move with selected bones as they are separated and shifted by a surgeon in 3D space.
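
The “influence” assigned during rigging amounts to a weighted blend: each skin vertex follows the transforms of the bones that influence it, in proportion to its weights. The toy sketch below shows that blend with NumPy; the numbers are made up for illustration, and the real deformation is handled inside the modeling and engine software.

```python
# Toy sketch of linear blend skinning, the idea behind "influence"/rigging:
# a skin vertex moves as a weighted sum of its bones' transforms.
import numpy as np

def skin_vertex(vertex, bone_transforms, weights):
    """Linear blend skinning: v' = sum_i w_i * (T_i @ v), with sum(w_i) = 1."""
    v = np.append(vertex, 1.0)                      # homogeneous coordinates
    blended = sum(w * (T @ v) for T, w in zip(bone_transforms, weights))
    return blended[:3]

# One jaw segment translated 5 mm forward (+Y); the skull segment stays put.
jaw_T = np.eye(4);  jaw_T[1, 3] = 5.0
skull_T = np.eye(4)

# A chin vertex influenced 80% by the jaw segment and 20% by the skull.
chin = np.array([0.0, 60.0, -30.0])
print(skin_vertex(chin, [jaw_T, skull_T], [0.8, 0.2]))   # moves ~4 mm forward
```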

After re-positioning of the jaw segments, the jaw is more pronounced.

Once a working model was achieved, it was passed off to Ted Hall and student programmer Zachary Kiekover to be implemented in the Duderstadt Center’s Jugular engine, allowing the demo to run at large scale in stereoscopic 3D within the virtual-reality MIDEN as well as on smaller head-mounted displays like the Oculus Rift. More intuitive user controls were also added, allowing easier selection of the various bones with a game controller or with motion-tracked hand gestures via the Leap Motion. This meant surgeons could not only view the procedure from all angles in stereoscopic 3D, but also physically grab the bones they wanted to manipulate and transpose them in 3D space.
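
The grab interaction can be summarized as: when a pinch is detected near a bone, that bone follows the tracked hand until the pinch releases. The sketch below is illustrative only, not Jugular’s code; hand.pinch_strength and hand.position stand in for whatever the Leap Motion or controller reports.

```python
# Illustrative sketch of a "grab a bone" interaction: pinch near a bone to
# pick it up, move it with the hand, release the pinch to drop it.
PINCH_THRESHOLD = 0.8    # 0.0 = open hand, 1.0 = full pinch
GRAB_RADIUS_MM = 40.0

def update_grab(hand, bones, grabbed):
    """Return the bone currently grabbed by the hand (or None)."""
    if hand.pinch_strength < PINCH_THRESHOLD:
        return None                            # pinch released: drop the bone
    if grabbed is None:
        # Start a grab on the nearest bone within reach of the fingertips.
        nearest = min(bones, key=lambda b: b.distance_to(hand.position))
        if nearest.distance_to(hand.position) <= GRAB_RADIUS_MM:
            grabbed = nearest
    if grabbed is not None:
        grabbed.move_to(hand.position)         # bone follows the hand
    return grabbed
```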

Zachary demonstrates the ability to manipulate the model using the Leap Motion.

Tour the Michigan Ion Beam Laboratory in 3D

3D Model of the Michigan Ion Beam Laboratory

The Michigan Ion Beam Laboratory (MIBL) was established in 1986 as part of the Department of Nuclear Engineering and Radiological Sciences in the College of Engineering. Located on the University of Michigan’s North Campus, the MIBL provides unique and extensive facilities to support research and development. Recently, Professor Gary Was, Director of the MIBL, reached out to the Duderstadt Center for assistance with developing content for the MIBL website to better introduce users to the capabilities of the lab as construction of a new particle accelerator neared completion.

Gary’s group provided the Duderstadt Center with a scale model of the Ion Beam Laboratory generated in Inventor, along with a detailed synopsis of the various components and the experiments that can be run on them. From there, Stephanie O’Malley of the Duderstadt Center optimized and beautified the provided model, adding corresponding materials, labels, and lighting. A series of fly-throughs, zoom-ins, and experiment animations were generated from this model to introduce visitors to the various capabilities of the lab.

These interactive animations were then integrated into the MIBL’s WordPress site by student programmer Yun-Tzu Chang. Visitors to the MIBL website are now able to compare the simplified digital replica of the space with photos of the actual equipment, as well as run various experiments to better understand how each component functions. To learn more about the Michigan Ion Beam Laboratory and explore the space yourself, visit mibl.engin.umich.edu.