Transforming Sculptures into Stereoscopic Images

Artwork by Sean Darby

Sean Darby, a local Ann Arbor artist, wanted to turn his relief sculptures into 3D images for his senior presentation to emphasize the depth of his color reliefs. His relief sculptures were first scanned with the HandyScan laser scanner, and the resulting depth information was used to generate stereo pairs. Because this method was fairly time consuming, a second approach to depth was tried: Sean digitally painted each image in black, greys, and white. When viewed as a stereoscopic projection, the white areas appear to stand in front of the grey, which in turn stands in front of the black, creating the illusion of depth. The stereoscopic images were displayed in a traditional View-Master and looped on a screen alongside the final sculptures.
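
The grayscale approach amounts to treating the painted image as a depth map and shifting pixels horizontally in proportion to their brightness, with one offset per eye. The following is a minimal sketch of that idea in Python, not the tool used for Sean's pieces; the filenames and the maximum shift value are placeholders.

    import numpy as np
    from PIL import Image

    def stereo_pair(color_path, depth_path, max_shift=12):
        """Build left/right views by shifting pixels along x in proportion
        to a grayscale depth map (white = near, black = far)."""
        color = np.asarray(Image.open(color_path).convert("RGB"))
        depth = np.asarray(Image.open(depth_path).convert("L")) / 255.0
        h, w, _ = color.shape
        left = np.zeros_like(color)
        right = np.zeros_like(color)
        for y in range(h):
            for x in range(w):
                shift = int(depth[y, x] * max_shift)
                left[y, min(x + shift, w - 1)] = color[y, x]   # near pixels slide right for the left eye
                right[y, max(x - shift, 0)] = color[y, x]      # and slide left for the right eye
        return Image.fromarray(left), Image.fromarray(right)

    # Example usage (hypothetical filenames):
    # l, r = stereo_pair("relief_color.png", "relief_depth.png")
    # l.save("relief_left.png"); r.save("relief_right.png")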

Sean Darby’s original sculptures.

Massive Lighting in Sponza to Develop Global Illumination

A person in the MIDEN exploring Sponza.

Real light is a complicated phenomenon that not only acts upon objects but interacts with them: light bounces from one object to another, so the entire scene contributes to how any one surface looks. In graphical applications, however, each surface is usually lit directly, without taking the other objects in the scene into account. Ray tracing is sometimes used to generate realistic lighting by tracing the paths of light through a scene and the objects they encounter. While this produces accurate, realistic lighting, it is so slow that it is impractical for real-time applications like video games or simulations.

To create realistic lighting in real time, graphics engineer Sean Petty and staff at the Duderstadt Center have been experimenting with Sponza, a publicly available and commonly used test scene, to develop their global illumination techniques. The Sponza Atrium is a model of an actual building in Croatia with dramatic lighting, and the experiments there have helped the lab build a more realistic global illumination system. Spherical harmonic (SH) lighting produces a convincing rendering by using volumes to approximate how light should behave. While this method is not perfectly accurate the way ray tracing is, algorithms determine which rays intersect each object and calculate the intensity of light arriving at it and emitted from it. That information is stored in a 3D volume covering the virtual environment, and the same algorithms can then be applied to other scenes. Realistic lighting is vital to a user becoming psychologically immersed in a scene.
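
At the core of SH lighting is projecting incoming light onto a small set of spherical harmonic basis functions, so that a handful of coefficients stands in for a full sphere of directions. The sketch below illustrates that projection for the first-order (four-coefficient) basis in Python; it is a minimal illustration, not the Duderstadt Center's implementation, and the sampling scheme and test light are invented for the example.

    import numpy as np

    # Real spherical harmonic basis, bands 0 and 1 (4 coefficients).
    def sh_basis(d):
        x, y, z = d
        return np.array([
            0.282095,        # Y_0^0  (constant term)
            0.488603 * y,    # Y_1^-1
            0.488603 * z,    # Y_1^0
            0.488603 * x,    # Y_1^1
        ])

    def project_to_sh(radiance_fn, n_samples=4096, seed=0):
        """Monte Carlo projection of a radiance function over the sphere
        onto the 4-term SH basis."""
        rng = np.random.default_rng(seed)
        coeffs = np.zeros(4)
        for _ in range(n_samples):
            v = rng.normal(size=3)
            v /= np.linalg.norm(v)          # uniform direction on the unit sphere
            coeffs += radiance_fn(v) * sh_basis(v)
        return coeffs * (4.0 * np.pi / n_samples)

    def eval_sh(coeffs, d):
        """Reconstruct approximate incoming light from a direction."""
        return float(coeffs @ sh_basis(d))

    # Illustrative light: bright toward +z (like a skylight), dark below.
    sky = lambda d: max(d[2], 0.0)
    coeffs = project_to_sh(sky)
    print(eval_sh(coeffs, np.array([0.0, 0.0, 1.0])))   # largest value: facing the light
    print(eval_sh(coeffs, np.array([0.0, 0.0, -1.0])))  # smallest: facing away (low-order ringing can dip below zero)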

The Sponza Atrium is a model of an actual building in Croatia.

Out of Body In Molly Dierks Art

Molly Dierks Art Installation, “Postmodern Venus”

Molly Dierks, as an MFA candidate at the Penny W. Stamps School of Art & Design, used resources at the Duderstadt Center to create an installation piece called “Postmodern Venus.” Shawn O’Grady scanned her body with the HandyScan laser scanner to create a 3D model. The model was then textured to look like ancient marble and presented in the MIDEN as a life-size replication of herself.

“Postmodern Venus” plays with modern conceptions of objectivity and objectification by allowing the viewer to interact with the accurately scanned body of Molly Dierks, touching and moving through it. On her website she notes, “Experience with disability fuels my work, which probes the divide between our projected selves as they relate to the trappings of our real and perceived bodies. This work questions if there is a difference between what is real with relation to our bodies and our identities, and what is constructed, reflected or projected.” To read more about this and other work, visit Molly Dierks’ website: http://www.mollyvdierks.com/#Postmodern-Venus

A Configurable iOS Controller for a Virtual Reality Environment

James examining a volumetric brain in the MIDEN with an iPod controller

Traditionally, users navigate 3D virtual environments with game controllers; however, game controllers are littered with ambiguously labeled buttons. While excellent for gaming, this setup makes navigating 3D space unnecessarily complicated for the average user. James Cheng, a sophomore in Computer Science and Engineering, has been working to resolve this headache by using the touch screens found in mobile devices instead of game controllers. Using the Jugular Engine in development at the Duderstadt Center, he has been building a scalable UI system that can be used for a wide range of immersive simulations. Want to cut through a volumetric brain? Select the “slice” button and start dragging. Want to fly through an environment instead of walking? Switch to “Fly” mode and take off. Because every experience is different, the system aims to be highly configurable.

Initial development is being done for the iOS platform because of its consistent hardware and options for scalable user interfaces. James aims to make immersive experiences more intuitive and to give developers more options for communicating with the user. You can say good-bye to memorizing what buttons “X” and “Y” do in each simulation, and instead use clearly labeled, simulation-specific controls.
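
One way to picture such a system is a thin touch client that sends mode-tagged commands to the simulation, which enables only the modes that make sense for that experience. The sketch below is a hypothetical illustration in Python, not the Jugular Engine's actual API; the mode names, message format, host, and port are invented for the example.

    import json
    import socket

    # Modes a particular simulation chooses to expose (hypothetical names).
    MODES = {"walk", "fly", "slice"}

    def send_command(sock, mode, action, **params):
        """Send one mode-tagged command as a JSON line to the simulation."""
        if mode not in MODES:
            raise ValueError(f"mode {mode!r} not enabled for this simulation")
        msg = {"mode": mode, "action": action, "params": params}
        sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

    # Example usage (hypothetical host/port for the running simulation):
    # sock = socket.create_connection(("miden-host.local", 9000))
    # send_command(sock, "slice", "drag", dx=0.12, dy=-0.03)  # cut through the volume
    # send_command(sock, "fly", "move", forward=1.0)          # take off in fly mode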

Duderstadt Center Collaboration on NASA Proposals

Cover Graphics for the Armada Proposal for NASA

Over the years the Duderstadt Center has provided visualization services for a variety of NASA proposals. Submitting a proposal requires a packet of information and visual aids that follow a strict format and series of guidelines.

Illustration from NASA proposal, MARRSI

Most recently, the Duderstadt Center assisted with the Mars Radar and Radiometry Subsurface Investigation (MARRSI) proposal, which was submitted in December 2013 and is currently awaiting a response. The proposal aims to implement new ways of tracing evidence of water in the Martian soil by utilizing the antenna of the existing Mars rovers. The antenna would detect signals from Earth that are reflected off the surface of Mars, thereby probing the soil for indications of water. The Duderstadt Center worked with the professor involved, as well as NASA’s Jet Propulsion Laboratory, to design a proposal cover, diagrams, and CDs for submission that adhered to the requested format.

Satellite render for NASA proposal, AERIE

The Duderstadt Center was also involved in the Trace Gas Microwave Radiometer (TGMR) proposal, which centered on detecting the processes that produce and destroy methane gas on the surface of Mars. Together, the two proposals seek evidence of methane and water on Mars, which may lead to discovering signs of bacterial life there.

In the past, the Duderstadt Center designed mission logos and a cover for the Armada proposal. This proposal concerned documenting atmospheric events on Earth using cube satellites.

Technology Interventions for Health, $5M Center Award from Department of Education (UMHS, CoE, SI, Library)

Recently, the University of Michigan received a prestigious $5 million center grant, awarded by the National Institute on Disability and Rehabilitation Research (NIDRR), part of the Department of Education.

The funds from this award will primarily be used to pursue several development, research, and training projects and studies involving technology interventions for self-management of health behaviors. The newly formed center, led by Michelle Meade (PI, Rehab Medicine), will be an interdisciplinary endeavor involving clinicians, researchers, and engineers from multiple departments on campus. This will allow U-M researchers to continue studying how technology (including smartphone and tablet applications and video games) can benefit individuals with spinal cord or neuro-developmental disabilities.

SCI Hard – an educational mobile game teaching independence to young adults with spinal cord injuries

For the past three years, the Duderstadt Center has been developing SCI Hard, a transformative game that builds skills and promotes the abilities of individuals with spinal cord injuries (SCI). Through game-play, SCI Hard teaches players how to manage their health and interact more readily in home, health care, and community environments. Combining practical teaching methods with the element of play, SCI Hard aims to give autonomy and confidence back to people who find their world drastically altered after a spinal cord injury, particularly young men (ages 15-25) with a recent SCI.

Players navigate the game by wheelchair, facing real-world challenges: juggling doctors’ appointments, attending therapy sessions to build muscle, and learning to drive a wheelchair-accessible vehicle. Even banal tasks such as waiting in line at the DMV are covered in a way that exposes the new obstacles individuals with an SCI may face. SCI Hard tackles this difficult subject matter with optimism and an earnest sense of humor. (The player’s quest is ultimately to stop the evil Dr. Schrync from taking over the world with zombie animals.)

Funds from this grant will be used to study how playing games like SCI Hard can directly benefit the health or alter the behaviors of individuals with an SCI, an effort that has been supported and well received by the accessibility advocacy, gamification, and health science communities. Receiving the center grant allows the Duderstadt Center to continue developing SCI Hard and other projects, adding Android support, more health and configuration options, voice acting throughout for greater immersion, and leaderboards to help track progress.

To learn more about how the grant will be used and which University of Michigan departments are involved, read The Record’s write-up on this accomplishment. For a sneak peek at SCI Hard and what it entails, check out the video below.

Low-Cost Dynamic and Immersive Gaze Tracking

From touch-screen computers to the Kinect’s full-body motion sensor, interacting with your computer is as simple as a tap on the screen or a wave of the hand. But what if you could control your computer simply by looking at it? Gaze tracking is a dynamic and immersive input method with the potential to revolutionize modern technology.

Realizing this potential, Rachael Havens, a UROP student working at the Duderstadt Center, investigated ways of integrating an efficient and economical gaze tracker into the lab’s system. Because this powerful input method is still largely overlooked, the task proved to be quite a challenge. Professional gaze-tracking systems are highly specialized and require buyers to spend tens of thousands of dollars on a single setup, and the open-source alternatives are not much better, sacrificing quality for availability. Since none of these options was ideal, a custom design was pursued.

Inspired by the EyeWriter Project, we hacked a Sony PS Eye: we replaced the infrared-filtering lens and lens mount, added a visible-light filter, and installed our own 3D-printed lens mount. With little expense, we transformed a $30 webcam into an infrared, head-mounted gaze tracker. The Duderstadt Center didn’t stop there, however; we integrated the gaze tracker’s software with Jugular, an in-house interactive 3D engine. Now a glance from the user doesn’t just move the cursor on a desktop, it selects objects in a 3D virtual environment of our own design.
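
In broad strokes, a tracker like this finds the pupil in each infrared camera frame and maps its position to a point of regard through a short calibration. The sketch below shows one common approach (dark-pupil thresholding with OpenCV) as a minimal Python illustration; it is not the EyeWriter or Jugular code, and the threshold value, calibration mapping, and camera index are placeholders.

    import cv2
    import numpy as np

    def find_pupil(gray, threshold=40):
        """Locate the pupil as the centroid of the largest dark blob
        in an IR eye image (dark-pupil method)."""
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
        # OpenCV 4.x return signature: (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels

    def fit_calibration(pupil_pts, screen_pts):
        """Least-squares affine map from pupil coordinates to screen coordinates,
        fitted from a few known calibration targets."""
        A = np.hstack([np.asarray(pupil_pts, dtype=float), np.ones((len(pupil_pts), 1))])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_pts, dtype=float), rcond=None)
        return coeffs  # 3x2 matrix

    def gaze_to_screen(coeffs, pupil_xy):
        return np.array([pupil_xy[0], pupil_xy[1], 1.0]) @ coeffs

    # Example usage (hypothetical camera index for the modified PS Eye):
    # cap = cv2.VideoCapture(0)
    # ok, frame = cap.read()
    # if ok:
    #     pupil = find_pupil(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))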

The 3D Production Pipeline: Teleporter Troubles Animation

In the fall, students were invited to participate in an immersive year-long course on 3D animation called The 3D Production Pipeline. The course was co-taught by Elona Van Gent (Stamps School of Art & Design) and the Duderstadt Center’s Eric Maslowski, Steffen Heise, and Stephanie O’Malley. Students with varying levels of experience developed their concept, characters, storyboards, and renderings, working together tirelessly to create a short animated feature called Teleporter Troubles, which follows the (mis)adventures of Wesley, a shy, smart blogger who is convinced he can use a teleporter to make an important date: meeting his girlfriend’s parents for the first time.

Concept art of Wesley, star of Teleporter Troubles

By combining the talents and resources of the Stamps School of Art & Design and the Duderstadt Center, students were able to create high-quality work in an innovative, collaborative space using state-of-the-art technology. To begin, students drew concept art (what they imagined their characters, sets, and props would look like), many using Wacom tablets to capture the gesture and style of hand-drawing. From there, they used Maya to model the individual components, ZBrush to add detail to the models, and 3D Studio Max to put it all into motion. In 3D Studio Max, students adjusted the ‘rigs’ of their components to make them move and behave as intended. A rig is the skeletal structure of an animated object (much like the connected parts of a marionette puppet) that animators manipulate to create the desired postures or facial expressions of their characters. Because the class required copious amounts of teamwork to create one animation, students and professors used TeamLab, an online resource for file sharing and messaging, to upload their work and discuss their ideas in one convenient place online. This software let the students animate professionally and communicate efficiently. (For more details on the teamwork involved and the exhausting creative process of animating, visit Play Gallery’s post on the project.)

Teleporter Troubles was created over the course of a year by the following team of students:
Zoe Allen-Wickler, Ashley Marie Allis, J’Vion Armstrong, Ashley Boudrie, Stephanie Boxold, Anna Jonetta Brown, Jaclyn Caris, Emily Cedar, Annie Cheng, John Foley, Paris London Glickman, Molly Lester, Rich Liverance, Lonny Marino, Olivia Meadows, Thabiso O Mhlaba, Maggie Miller, Kaisa Ryding and Sarah Schwendeman.

Finalized models of architectural elements.

Floor Plans to Future Plans: 3D Modeling Cabins

Initial floor plan provided by Professor Nathan Niemi

Nathan Niemi, associate professor of Earth and Environmental Sciences, approached the 3D Lab with a series of floor plans he had designed in Adobe Illustrator. Nathan and his colleagues (who research neotectonics and structural geology) are planning cabins to be built at their field station in Teton County, Wyoming. The current cabins at the field station are small, and new cabins would allow more student researchers to work in the area. Nathan’s group wanted to show alumni and prospective donors the plans for the cabins so they could pledge financial support to the project, and Nathan was curious how he could translate his floor plans into a more complete model of the architecture.

Working with Nathan and his colleagues, the Duderstadt Center took his floor plans and created splines (the lines used in 3D modeling) in 3D Studio Max. From these splines, accurate 3D models of the cabins were built to scale. When the models were shown to several people in Nathan’s group, it was noticed that the slope of the cabin roofs would not meet Teton County’s building codes for snow load in that region. By viewing their models in 3D, the group was able to revise and review their plans to accommodate these restrictions. The plans are currently being shown to investors and others interested in the project.
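
Conceptually, the conversion is straightforward: trace the floor-plan outlines as closed splines, then extrude them to wall height at real-world scale. The sketch below illustrates that extrusion step in Python on a plain list of footprint points; it is a generic illustration, not the 3D Studio Max scene setup used for the cabins, and the footprint coordinates and wall height are made up.

    import numpy as np

    def extrude_footprint(footprint, height):
        """Turn a closed 2D footprint (in meters) into wall quads of a given height.
        Returns (vertices, quads), where each quad indexes four vertices."""
        pts = np.asarray(footprint, dtype=float)
        n = len(pts)
        bottom = np.hstack([pts, np.zeros((n, 1))])              # z = 0
        top = np.hstack([pts, np.full((n, 1), float(height))])   # z = wall height
        vertices = np.vstack([bottom, top])
        quads = []
        for i in range(n):
            j = (i + 1) % n  # wrap around to close the outline
            quads.append((i, j, n + j, n + i))
        return vertices, quads

    # Hypothetical 6 m x 4 m cabin footprint with 2.5 m walls.
    verts, quads = extrude_footprint([(0, 0), (6, 0), (6, 4), (0, 4)], height=2.5)
    print(len(verts), "vertices,", len(quads), "wall faces")  # 8 vertices, 4 wall faces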

U-M Future of Visualization Committee Issues Report

The U-M Future of Visualization Committee* issued a report early this month focusing on the role visualization plays at the University of Michigan, as well as steps for addressing growing needs on campus. The report concluded that two “visualization hubs” should be created on campus to make visualization computing services more accessible to our campus research community. “The hubs envisioned by the committee would leverage existing resources and consist of advanced workstations, high bandwidth connectivity, and collaborative learning spaces, with a support model based on that of the Duderstadt Center and Flux. The hardware and software would be configured to allow departments or individuals to purchase their own resources in a way that would reduce fragmentation and allow for efficient support, training, and maintenance.” (Text courtesy of Dan Miesler and ARC)

The following excerpts from the executive summary of the report highlight the importance and educational value of visualization services:

“The University of Michigan has seen incredible growth and change over the years. The growth will continue as we innovate and adapt. How we teach, conduct research, facilitate student learning, push technological boundaries, and collaborate with our peers will create demand for new tools and infrastructure. One such need is visualization because of the imperative role it plays in facilitating innovation. When one considers the vast quantities of data currently being generated from disparate domains, methods that facilitate discovery, exploration, and integration become necessary to ensure those data are understood and effectively used.

There is a great opportunity to change the way research and education has been done but to also allow for a seamless transition between the two through advancements in connectivity, mobility, and visualization. The opportunity here is tremendous, complex, and in no way trivial. Support for a responsive and organized visualization program and its cyberinfrastructure needs is necessary to leverage the opportunities currently present at the University of Michigan.”

A full copy of the report is available here.

*The committee was created by Dan Atkins with the charge of evaluating existing visualization technologies and methods on campus; developing an action plan for addressing deficiencies in visualization needs; establishing a group of visualization leaders; and communicating with the community on visualization topics. It is composed of faculty members and staff from ARC, University Libraries, Dentistry, LSA, the Medical School, ITS, Architecture and Urban Planning, Atmospheric and Oceanic and Space Sciences, and the College of Engineering. (Text courtesy of Dan Miesler and ARC)