The M.I.D.E.N. transports you into Nova’s Reality

Using Virtual Production Techniques to Film a Music Video

Authors

Emma Powell and Eilene Koo


Photo Credit: Eilene Koo

The Michigan Immersive Digital Experience Nexus (MIDEN) doesn’t seem very large at first: a 10 x 10 x 10 foot space, a blank canvas ready for its next project. For “Nova’s Reality,” though, the MIDEN transformed into an expansive galactic environment, placing electronic jazz musician Nova Zaii in the middle of a long subway car traveling through space.

The technological centerpiece of Zaii’s performance was the Nova Portals, a cone-shaped instrument of his own invention that creates sound from motion, without requiring direct contact. Zaii created the Nova Portals while a student at the University of Michigan, where he majored in Performing Arts Technology and Jazz Studies.

Several members of the U-M community helped bring Zaii’s performance to life inside the MIDEN. Akash Dewan, a recently graduated senior and the project’s director, first met Nova as a freshman. “I first met Nova in my freshman year in the spring, when I decided to shoot Masimba Hwati’s opening ceremony for his piece at the UMMA,” Dewan said. Masimba Hwati is a Zimbabwean sculptor and musician who made the Ngoromera, a sculpture assembled from many different objects and musical instruments. When Hwati performed at the opening ceremony, his drummer was Nova Zaii.

“I took photos throughout the performance and took some particularly cool photos of Nova, so I decided to chat with him after the event and show him the photos I took,” Dewan said. “Since then, I have been following his journey on Instagram as a jazz musician part of the Juju Exchange [a musical partnership between Zaii and two childhood friends] and inventor of the Nova Portals, which I’ve always taken a fascination towards due to my interest in the intersection of technology and art.”

Following major hardware upgrades, the MIDEN’s image refresh rate has doubled and its projectors output brighter light, supporting stereoscopic immersion for two simultaneous users: four perspective projections for four eyes, each refreshed 60 times per second.

Sean Petty, senior software developer at the DMC’s Emerging Technologies Group, assisted the “Nova’s Reality” team inside the MIDEN and explained how the space was used. 

“This project is using the MIDEN as a virtual production space, where the MIDEN screens will deliver a perspective correct background that will make it appear as if the actor is actually in the virtual environment from the perspective of the camera,” Petty said. “To accommodate this, we modified the MIDEN to only display one image per frame, rather than the four that would be required for the usual two user VR experience. We also reconfigured the motion tracking to track the motion of the camera, rather than the motion of the VR glasses.”

The projectors’ high-speed refresh rate of 240 scans per second allowed the video camera to record without flicker.
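Rendering a perspective-correct background for a tracked camera is commonly done with an “off-axis” (generalized) perspective projection: each display wall is a fixed plane in space, and the view frustum is recomputed every frame from the camera’s tracked position. A minimal sketch of that computation in Python follows; the screen-corner coordinates and near-plane value are illustrative, not the MIDEN’s actual calibration:

```python
import numpy as np

def off_axis_frustum(pa, pb, pc, pe, near):
    """Asymmetric frustum extents (left, right, bottom, top at the near
    plane) for a screen defined by corners pa (lower-left), pb
    (lower-right), pc (upper-left), viewed from tracked position pe."""
    pa, pb, pc, pe = (np.asarray(p, float) for p in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal (toward viewer)
    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# A centered camera yields a symmetric frustum...
print(off_axis_frustum([-1, -1, 0], [1, -1, 0], [-1, 1, 0], [0, 0, 1], 0.1))
# ...while a camera tracked off to one side yields an asymmetric one.
print(off_axis_frustum([-1, -1, 0], [1, -1, 0], [-1, 1, 0], [0.5, 0, 1], 0.1))
```

In a real engine, these four extents feed directly into an asymmetric projection matrix, recomputed every frame from the motion-tracking data.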

“This entire process was extremely inspiring for all of us involved, and maintains my strong drive to continue to find new, fresh interdisciplinary approaches to visualizing music,” said Dewan. After this project and their graduation, each member will also be continuing individual creative work. 

The full project will be published on novazaii.com and akashdewan.com.

Photo Credit: Eilene Koo


Find more of their creative works through their Instagram profiles:

Performer & Inventor of the Nova Portals: Nova Zaii, @novazaii

Director, Co-Director of Photography, Editor, 3D Graphics Developer: Akash Dewan, @akashdewann

Audio Engineer/Technician: Adithya Sastry, @adithyasastry

3D Graphics Developer: Elvis Xiang

Co-Director of Photography: Gokul Madathil, @madlight.productions

BTS Photographer: Randall Xiao, @randysfoto

MIDEN Staff + Technical Support: Sean Petty and Theodore Hall

F’25 Brings New Course on 3D Modeling, Animation & Game Dev for Everyone

EECS 298 : 3D Technical Art and Animation


Coming in Fall 2025 and running yearly, EECS 298 : 3D Technical Art and Animation is a new, open-to-everyone course teaching students how to create 3D characters, objects, environments, materials, armatures, animations, and more in Blender before bringing them to life in a game engine. Per the course syllabus, students will not only learn how to create these 3D art assets, but also learn how to integrate them into technical ecosystems such as the Unity game engine, achieving special effects and interactivity (dynamic hair, flowing lava, layered animations, particles, etc.). No background in programming or art is assumed, and there are no prerequisite course requirements.

Students will conclude the course with a portfolio of 30+ low-poly models, along with a functional, shareable 3D platforming video game (think Super Mario 3D World) featuring student-made (and student-integrated) playable characters, environments, NPCs, and objects. The course staff will provide some of the programming; students produce the 3D assets and game engine integration that brings everything together (and to life!).

Created by Austin Yarger in collaboration with Evan Marcus, the course will run once yearly (in fall semesters) with a capacity of approximately 50 students. The course will provide 4 credits of varying types depending on the department of the student (for example, CS students will earn FlexTech credit, while STAMPS students will earn elective credit). The course is designed to complement a growing suite of game development and XR courses on campus, including EECS 494 : Game Design and Development, EECS 440 : Extended Reality for Social Impact, EECS 498.007 : Game Engine Architecture, SI 311 : Games and UX, PAT 305 : Video Game Music, EDUC 333 : Games and Learning, etc. 

Planned topics include:

3D content authoring, basic animation techniques, 3D topology, open source tools (Blender), basic computational geometry, 3D asset formats and representations, asset optimization, armature design, UV mapping and textures, basic materials and lighting / shader logic, asset-to-engine pipelines, 3D printing and photogrammetry, etc.

Stamps Student Uses Photogrammetry to Miniaturize Herself

Stamps student Annie Turpin came to the Duderstadt Center with an idea for her sophomore studio project: she wanted to create a hologram system, similar to the “Pepper’s Pyramid” or “Pepper’s Ghost” display, that would allow her to project a miniaturized version of herself into a pinhole camera.

Pepper’s Ghost relied on carefully placed mirrors to give the illusion of a transparent figure

The concept of Pepper’s Pyramid is derived from an illusion technique created by John Henry Pepper in 1862. Originally coined “Pepper’s Ghost”, the trick initially relied on a large pane of glass to reflect an illuminated room or person that was hidden from view. This gave the impression of a “ghost” and became a technique frequently used in theatre to create a phantasmagoria. Similar methods are still used today, often substituting Mylar foil in place of glass and using CG content (such as the 2012 Coachella performance, in which a “holographic” Tupac was resurrected to sing alongside Dr. Dre).

Pepper’s Pyramid takes the concept of Pepper’s Ghost, and gives it 3 dimensions using a pyramid of Plexiglas instead of mirrors.

“Pepper’s Pyramid” is a similar concept. Instead of a single pane of glass reflecting a single angle, a video is duplicated four times and projected downward onto a pyramid of Plexiglas, allowing the illusion to be viewed from multiple angles and for the content to be animated.
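The four-way duplication can be sketched in code: each video frame is copied four times, each copy rotated 90 degrees from its neighbor, and arranged in a cross-shaped layout so each face of the pyramid reflects an upright image. A minimal NumPy sketch, assuming one common cross layout (a real display would need the rotations matched to the pyramid’s actual geometry):

```python
import numpy as np

def pyramid_layout(frame):
    """Tile a square frame into a cross-shaped 3x3 layout for a
    four-sided 'Pepper's Pyramid' display: one copy per pyramid face,
    each rotated so its reflection appears upright from that side."""
    h, w = frame.shape[:2]
    assert h == w, "expects a square frame"
    canvas = np.zeros((3 * h, 3 * w) + frame.shape[2:], dtype=frame.dtype)
    canvas[0:h,     w:2*w]   = np.rot90(frame, 2)   # top face (flipped 180°)
    canvas[2*h:3*h, w:2*w]   = frame                # bottom face (as-is)
    canvas[h:2*h,   0:w]     = np.rot90(frame, -1)  # left face (90° clockwise)
    canvas[h:2*h,   2*w:3*w] = np.rot90(frame, 1)   # right face (90° counter-clockwise)
    return canvas
```

Applying `pyramid_layout` to every frame of a video, then playing the result on a screen laid flat above the pyramid, produces the multi-angle illusion described above.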

For her project, Annie re-created a small version of Pepper’s Pyramid to fit inside a pinhole camera she had constructed, using a mobile phone instead of a monitor to play the video. She then had herself 3D scanned using the Duderstadt Center’s Photogrammetry rig to generate a realistic 3D model of herself, which was animated and then exported as an MP4 video.

Annie’s pinhole camera

The process of Photogrammetry allows an existing object or person to be converted into a full-color, highly detailed 3D model. This is done using a series of digital photographs captured 360 degrees around the subject. While Photogrammetry can be done at home for most static subjects, the Duderstadt Center’s Photogrammetry resources are set up to allow moving subjects, like people, to be scanned as well. The process uses surface detail on the subject to plot points in 3D space and construct a 3D model. For scans of people, these models can even have a digital skeleton created to drive their motion, and be animated as CGI characters. Annie’s resulting scan was animated to rotate in place, and projected into the Plexiglas pyramid as a “hologram” for viewing through her pinhole camera.
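At the heart of plotting those points is triangulation: once the same surface feature has been matched across photos taken from known (or recovered) camera positions, its 3D location is the point whose projections agree with all the observations. A minimal two-view sketch with NumPy, using the standard direct linear transform; the camera intrinsics and poses here are made-up illustrative values, not the Duderstadt Center rig’s calibration:

```python
import numpy as np

# Hypothetical pinhole intrinsics and two camera poses (illustrative).
K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])   # camera 2, shifted baseline

def project(P, X):
    """Project a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Direct linear transform: the homogeneous 3D point is the
    null vector of the stacked constraint matrix A."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

X_true = np.array([0.2, -0.1, 5.0])              # a feature point on the subject
x1, x2 = project(P1, X_true), project(P2, X_true)
X_rec = triangulate(P1, x1, P2, x2)              # recovers X_true (up to numerics)
```

A full photogrammetry pipeline repeats this for thousands of matched features across dozens of photos, then fits a surface and texture to the resulting point cloud.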

The result of 3D printing Annie’s photogrammetry scan

Annie would make use of Photogrammetry again the following year, when she had herself 3D scanned again, this time for the purpose of 3D printing the resulting model for a diorama. In this instance, she was scanned in what is referred to as a “T-pose”: a pose where the subject stands with their arms and legs apart, so their limbs can be articulated into a different position later. After Annie’s model was generated, it was posed to have her sitting in a computer chair and working on a laptop. This model was sent to the Duderstadt Center’s J750 color 3D printer to produce a 6″-high 3D printed model.

This printer offers full-spectrum color and encases the model in a support structure that must be carefully removed, allowing for more intricate features and overhangs on the model.

Annie carefully removes the support structure from her 3D printed model

A duplicate print of Annie’s creation can now be viewed in the display case within the Duderstadt Center’s Fabrication Studio.

Creating Cave-Like Digital Structures with Photogrammetry

Students in Professor Matias Del Campo’s Architecture Thesis class have been exploring organic, cave-like structures for use in a real-world underground architectural space.

His students were tasked with constructing textured surfaces reminiscent of cave interiors, such as stalactites and stalagmites, rocky surfaces, and erosion, using a variety of media, from spray foam to poured concrete.

These creations were then scanned at the Duderstadt Center using the process of Photogrammetry to convert them to digital form. The resulting digital models could then be altered (retouched, scaled, or mirrored, for example) by the students for design purposes when incorporating the forms into the planned space.
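Alterations like mirroring are simple transforms on the scanned mesh data: negate one coordinate axis of every vertex, then reverse each face’s vertex winding so the surface normals still point outward. A minimal NumPy sketch, using a single toy triangle in place of a real scan (which has the same vertex/face structure, just far larger):

```python
import numpy as np

def mirror_mesh(vertices, faces, axis=0):
    """Mirror a triangle mesh across the plane where the given axis is zero.
    Negating a coordinate flips face orientation, so each face's vertex
    order is reversed to keep normals pointing outward."""
    mirrored = vertices.copy()
    mirrored[:, axis] *= -1.0
    return mirrored, faces[:, ::-1].copy()

# A single triangle as a toy "scan": Nx3 vertices, Mx3 face indices.
verts = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
m_verts, m_faces = mirror_mesh(verts, faces, axis=0)
```

Uniform scaling is even simpler (multiply all vertices by a factor), which is why photogrammetry output is so convenient to adapt when designing a larger architectural form.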