User Story: Rachael Miller and Carlos Garcia

Rachael Miller and Carlos Garcia discuss how their individual experiences with the Digital Media Commons (DMC) shaped their projects and ambitions. Rachael, an undergraduate in computer science, expanded her horizons by working on virtual reality projects in the Duderstadt Center. She gained vital knowledge of motion capture by working in the MIDEN with the Kinect, and continues to apply those skills to projects and internships today.

Carlos Garcia worked to combine technology and art in the form of projection mapping for his senior thesis, Out of the Box. To approach the project, he began by searching for resources and found the DMC to be the perfect fit. By establishing connections with staff in the 3D Lab, Groundworks, the Video Studio, and many others, he was able to complete his project and go on to teach others the process as well. For a more behind-the-scenes look at both Carlos Garcia's and Rachael Miller's projects and process, please watch the video above!

User Story: Robert Alexander and Sonification of Data

Robert Alexander, a graduate student at the University of Michigan, represents what students can do in the Digital Media Commons (DMC), a service of the Library, if they take the time to embrace their ideas and use the resources available to them. In the video above, he talks about the projects, culture, and resources available through the Library. In particular, he mentions time spent pursuing the sonification of data for NASA research, art installations, and musical performances.
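The core idea behind sonification is simple to demonstrate: data values are mapped onto properties of sound such as pitch. The sketch below is purely illustrative, not Alexander's actual NASA pipeline; it maps a 1D data series onto sine-tone pitches and writes the result to a WAV file.

```python
# A minimal sonification sketch, purely illustrative (not Robert
# Alexander's actual NASA pipeline): map each sample of a 1D data series
# to the pitch of a short sine tone and write the result to a WAV file.
import numpy as np
from scipy.io import wavfile

RATE = 44100       # audio sample rate, Hz
TONE_LEN = 0.05    # seconds of audio per data point

def sonify(data, f_lo=220.0, f_hi=880.0):
    """Map a 1D array onto pitches between f_lo and f_hi (Hz)."""
    span = float(data.max() - data.min()) or 1.0
    norm = (data - data.min()) / span                  # scale to [0, 1]
    t = np.arange(int(RATE * TONE_LEN)) / RATE
    tones = [np.sin(2 * np.pi * (f_lo + x * (f_hi - f_lo)) * t)
             for x in norm]
    return np.concatenate(tones).astype(np.float32)

# Toy example: a noisy sine wave standing in for real measurements.
data = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * np.random.randn(200)
wavfile.write("sonification.wav", RATE, sonify(data))
```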

Virtual Cadaver – Supercomputing

The Virtual Cadaver is a visualization of data provided by the Visible Human Project of the National Library of Medicine. This project aimed to create a digital image dataset of complete human male and female cadavers.

Volumetric anatomy data from the Visible Human Project

The male dataset originates from Joseph Paul Jernigan, a 38-year-old convicted Texas murderer who was executed by lethal injection; he donated his body for scientific research in 1993. The female cadaver remains anonymous and has been described as a 59-year-old Maryland housewife who died of a heart attack. Her specimen contains several pathologies, including cardiovascular disease and diverticulitis.

Both cadavers were encased in a gelatin-and-water mixture and frozen to produce the fine slices that comprise the data. The male dataset consists of 1,871 slices taken at 1-millimeter intervals; the female dataset comprises 5,164 slices taken at 0.33-millimeter intervals.

The Duderstadt Center was directed to the female dataset in December 2013. To load the data into the virtual-reality MIDEN (a fully immersive, multi-screen, head-tracked CAVE environment) and a variety of other display environments, the images were pre-processed into JPEGs at 1024×608 pixels. Every tenth slice is loaded, so the figure is formed from 517 slices at 3.3 mm spacing. A generic image-stack loader was written so that a 3D volume model can be produced from any stack of images, not just the Visible Human data. It can therefore be configured to load a denser sample of slices over a shorter range should a subset of the model need to be viewed in higher detail.
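The Center's loader itself is not shown here, but the general idea can be sketched in a few lines. The example below assumes slices stored as numbered JPEGs (the file pattern and step size are hypothetical) and stacks every tenth image into a 3D volume array.

```python
# Minimal sketch of a generic image-stack volume loader, in the spirit
# of the one described above (the MIDEN's own loader is not shown here).
# Assumes slices are numbered JPEGs, e.g. slices/slice_0000.jpg ...
import glob
import numpy as np
from PIL import Image

def load_volume(pattern="slices/slice_*.jpg", step=10):
    """Stack every `step`-th slice into a (depth, height, width) volume."""
    paths = sorted(glob.glob(pattern))[::step]
    slices = [np.asarray(Image.open(p).convert("L")) for p in paths]
    return np.stack(slices, axis=0)

volume = load_volume(step=10)   # e.g. 5,164 slices -> ~517 kept
print(volume.shape)             # (depth, height, width)
```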

Users can navigate around their data in passive cinema-style stereoscopic projection. In the case of the Virtual Cadaver, the body appears just as it would to a surgeon, revealing the various bones, organs and tissues. Using a game controller, users can arbitrarily position sectional planes to view a cross-section of the subject. This allows for cuts to be made that would otherwise be very difficult to produce in a traditional anatomy lab. The system can accommodate markerless motion-tracking through devices like the Microsoft Kinect and can also allow for multiple simultaneous users interacting with a shared scene from remote locations across a network.
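Continuing the sketch above, an arbitrary sectional plane can be sampled from such a volume by generating a grid of 3D points on the plane and resampling with trilinear interpolation. This is only an illustration of the concept; the plane's origin and axis vectors are hypothetical inputs.

```python
# Sketch of an arbitrary sectional plane through the volume above:
# build a grid of 3D sample points on the plane and resample the volume
# with trilinear interpolation (scipy's map_coordinates).
import numpy as np
from scipy.ndimage import map_coordinates

def cross_section(volume, origin, u_axis, v_axis, size=(512, 512)):
    """Sample `volume` on the plane origin + s*u_axis + t*v_axis."""
    s = np.linspace(-0.5, 0.5, size[0])
    t = np.linspace(-0.5, 0.5, size[1])
    ss, tt = np.meshgrid(s, t, indexing="ij")
    pts = (origin[:, None, None]
           + ss * u_axis[:, None, None]
           + tt * v_axis[:, None, None])          # (3, H, W) z,y,x coords
    return map_coordinates(volume, pts, order=1)  # trilinear sampling

# Example: an oblique cut through the volume's center (axes made up).
center = np.array(volume.shape) / 2.0
section = cross_section(volume, center,
                        u_axis=np.array([0.0, 400.0, 100.0]),
                        v_axis=np.array([300.0, 0.0, 0.0]))
```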

GIS Data As Seen In The (Immersive) Environment

Viewing Vector Data of Ann Arbor in the MIDEN.

Geographical Information Systems (GIS) are used for mapping and analysis of data pertaining to geographic locations. The location data may consist of vectors, rasters, or points.

Vector data are typically used to represent boundaries of discrete political entities, zoning, or land use categories.

Raster data are often used to represent geographic properties that vary continuously over a 2D area, such as terrain elevation. Each raster cell holds a small rectangular element of information projected onto a regular 2D grid, so it is simple to construct a triangulated mesh from such data.
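As an illustration of how simple the raster case is, the sketch below (not the Center's actual loader) turns a heightmap grid into vertices and triangles, splitting each grid cell into two triangles.

```python
# Sketch of the raster-to-mesh construction: treat each grid cell of a
# heightmap as two triangles. Returns vertex positions and face indices.
import numpy as np

def heightmap_to_mesh(z, cell=1.0):
    """z: (rows, cols) array of elevations on a regular grid."""
    rows, cols = z.shape
    xs, ys = np.meshgrid(np.arange(cols) * cell, np.arange(rows) * cell)
    verts = np.column_stack([xs.ravel(), ys.ravel(), z.ravel()])
    idx = np.arange(rows * cols).reshape(rows, cols)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    # Two triangles per cell: (a, b, c) and (b, d, c).
    faces = np.concatenate([np.column_stack([a, b, c]),
                            np.column_stack([b, d, c])])
    return verts, faces

verts, faces = heightmap_to_mesh(np.random.rand(64, 64))  # toy terrain
```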

Unstructured point clouds are often acquired by LIDAR or other scanning techniques. Dense clouds of points can create a fuzzy visual impression of 3D surfaces of terrain, vegetation, and structures. Unlike raster data, point clouds can represent concave, undercut surfaces, but it’s harder to construct a triangulated mesh from such data.
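The MIDEN tests render such clouds directly as points. If a mesh is nonetheless wanted, one common approach, sketched below with the Open3D library (the file names are hypothetical, and this is not necessarily what the Center uses), is Poisson surface reconstruction from estimated normals.

```python
# Illustrative point-cloud meshing via Poisson surface reconstruction,
# using the Open3D library. File names are hypothetical.
import open3d as o3d

pcd = o3d.io.read_point_cloud("fault_scan.ply")   # hypothetical LIDAR file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
o3d.io.write_triangle_mesh("fault_mesh.ply", mesh)
```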

The MIDEN demo collection includes test cases for generic loaders of all three of these types. The vector tests include boundaries of roads and wooded areas in Ann Arbor projected onto a 2D map, and national boundaries projected onto the surface of a globe. The raster test is a 3D terrain mesh for a section of the Grand Canyon. The point test is a LIDAR scan of a fault line in the Grand Tetons.

Exploring the globe with GIS data.

Transforming Sculptures into Stereoscopic Images

Artwork by Sean Darby

Sean Darby, a local Ann Arbor artist, wanted to turn his relief sculptures into 3D images for his senior presentation, to emphasize the depth of his color reliefs. His sculptures were first scanned with the HandyScan laser scanner, and accurate depth information was extracted from the scans to generate stereo pairs. Because this method was fairly time-consuming, the team tried another way to create depth: Sean digitally painted an image in blacks, greys, and whites. In stereoscopic projection, white appears in front of grey, which appears in front of black, creating the illusion of depth. The stereoscopic images were displayed in a traditional View-Master and looped on a screen alongside the final sculptures.
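The painted-depth trick is easy to sketch in code. The example below, a simplified illustration rather than the actual process used for Sean's show, treats pixel brightness as depth and shifts each pixel horizontally by a proportional disparity to synthesize a left/right stereo pair.

```python
# Sketch of the luminance-as-depth trick: treat pixel brightness as
# depth and shift each pixel horizontally by a proportional disparity
# to produce a left/right stereo pair. Naive: occlusion holes stay
# black; real pipelines would inpaint them.
import numpy as np
from PIL import Image

def stereo_pair(gray, max_disp=8):
    """gray: 2D uint8 array; brighter pixels appear closer to the viewer."""
    h, w = gray.shape
    disp = (gray.astype(float) / 255.0 * max_disp).astype(int)
    cols = np.arange(w)
    left, right = np.zeros_like(gray), np.zeros_like(gray)
    for y in range(h):
        left[y, np.clip(cols + disp[y], 0, w - 1)] = gray[y]
        right[y, np.clip(cols - disp[y], 0, w - 1)] = gray[y]
    return left, right

img = np.asarray(Image.open("painted_depth.png").convert("L"))  # hypothetical
L, R = stereo_pair(img)
Image.fromarray(np.hstack([L, R])).save("stereo_pair.png")
```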

Sean Darby’s original sculptures.

Massive Lighting in Sponza to Develop Global Illumination

A person in the MIDEN exploring Sponza.

Real light is a complicated phenomenon: it not only acts upon objects but interacts with them, bouncing from one object to another so that the entire scene is implicated. In many graphics applications, however, each surface is lit in isolation, without taking the other objects in the scene into account. Ray tracing is sometimes used to generate realistic lighting effects by tracing the paths of light through a scene and the objects they encounter. While this creates accurate, realistic lighting, the technique is so slow that it is impractical for real-time applications like video games and simulations.

To create convincing lighting effects in real time, graphics engineer Sean Petty and staff at the Duderstadt Center have been experimenting with Sponza, a publicly available and commonly used test scene, to develop global-illumination techniques. The Sponza Atrium is a model of an actual building in Croatia with dramatic lighting, and the experiments there have helped the lab develop more realistic global illumination. Spherical harmonic (SH) lighting produces a realistic rendering by using volumes to approximate how light should behave. While this method isn't perfectly accurate the way ray tracing is, its algorithms determine which rays intersect an object and calculate the intensity of light arriving at, and emitted from, it. This information is inserted into the 3D volume and the overall virtual environment, and the same algorithms can then be applied to other scenes. Realistic lighting is vital to a user becoming psychologically immersed in a scene.
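The flavor of SH lighting can be sketched briefly: incoming light over the sphere is projected onto a handful of smooth basis functions, and lighting in any direction is reconstructed from those few coefficients. The code below is a generic illustration of that projection (not the Duderstadt Center's implementation), using the standard first nine real SH basis functions.

```python
# Illustrative spherical-harmonic lighting sketch: project radiance
# sampled over the sphere onto the first 9 real SH basis functions
# (bands 0-2), enough to reconstruct smooth diffuse lighting.
import numpy as np

def sh_basis(d):
    """First 9 real SH basis values for a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,                                   # Y_00
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # band 1
        1.092548 * x * y, 1.092548 * y * z,         # band 2
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def project_radiance(dirs, radiance):
    """Monte Carlo projection: dirs (N,3) uniform on sphere, radiance (N,)."""
    basis = np.array([sh_basis(d) for d in dirs])       # (N, 9)
    return 4 * np.pi / len(dirs) * basis.T @ radiance   # 9 coefficients

def eval_sh(coeffs, d):
    """Reconstruct lighting in direction d from the 9 coefficients."""
    return coeffs @ sh_basis(d)

# Toy usage: a "sky brighter than ground" radiance field.
dirs = np.random.randn(10000, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
coeffs = project_radiance(dirs, np.clip(dirs[:, 2], 0, None))
```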

The Sponza Atrium is a model of an actual building in Croatia.

Out of Body in Molly Dierks' Art

Molly Dierks' art installation, “Postmodern Venus”

As an MFA candidate at the Penny W. Stamps School of Art & Design, Molly Dierks used resources at the Duderstadt Center to create an installation piece called “Postmodern Venus.” Shawn O’Grady scanned her body with the HandyScan laser scanner to create a 3D model, which was then textured to look like ancient marble and presented in the MIDEN as a life-size replication of herself.

“Postmodern Venus” plays with modern conceptions of objectivity and objectification by allowing the viewer to interact with the accurately scanned body of Molly Dierks, touching and moving through it. On her website she notes, “Experience with disability fuels my work, which probes the divide between our projected selves as they relate to the trappings of our real and perceived bodies. This work questions if there is a difference between what is real with relation to our bodies and our identities, and what is constructed, reflected or projected.” To read more about this and other work, visit Molly Dierks’ website: http://www.mollyvdierks.com/#Postmodern-Venus

Floor Plans to Future Plans: 3D Modeling Cabins

Initial floor plan provided by Professor Nathan Niemi

Nathan Niemi, an associate professor of Earth & Environmental Sciences, approached the 3D Lab with a series of floor plans he had designed in Adobe Illustrator. Nathan and his colleagues, who research neotectonics and structural geology, are working on cabins to be built at their field station in Teton County, Wyoming. The current cabins at the field station are small, and new cabins would let more student researchers work in the area. Nathan's group wanted to show alumni and potential donors the plans for the cabins so they could pledge financial support to the project, and Nathan was curious how he could translate his floor plans into a more complete model of the architecture.

Working with Nathan and his colleagues, the Duderstadt Center took his floor plans and created splines (lines used in 3D modeling) from them in 3D Studio Max. From these splines, accurate, to-scale 3D models of the cabins were created. When the models were shown to several people in Nathan's group, Teton County noticed that the slope of the cabins' roofs would not meet the building codes for snow load in that region. By viewing the models in 3D, the group was able to revise and review their plans to accommodate these restrictions. The plans are currently being shown to investors and others interested in the project.
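The modeling itself was done with splines in 3D Studio Max, but the core step, extruding a 2D footprint into walls, can be sketched generically. The code below is an illustration only, with a made-up rectangular footprint, not the actual cabin geometry.

```python
# Generic sketch of the spline-to-walls step (done in 3D Studio Max for
# the cabins): extrude a closed 2D footprint polygon upward into a set
# of vertical wall quads.
import numpy as np

def extrude_footprint(footprint, height=2.7):
    """footprint: (N, 2) closed polygon in meters; returns verts, quads."""
    n = len(footprint)
    base = np.column_stack([footprint, np.zeros(n)])
    top = np.column_stack([footprint, np.full(n, height)])
    verts = np.vstack([base, top])
    # One quad per wall segment: base_i, base_j, top_j, top_i.
    quads = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return verts, quads

# Example: a 6 m x 4 m rectangular cabin footprint (hypothetical).
verts, quads = extrude_footprint(
    np.array([[0, 0], [6, 0], [6, 4], [0, 4]], dtype=float))
```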

U-M Future of Visualization Committee Issues Report

The U-M Future of Visualization Committee* issued a report early this month focusing on the role visualization plays at the University of Michigan, as well as on steps for addressing growing needs on campus. The report concluded that two “visualization hubs” should be created on campus to make visualization computing services more accessible to the campus research community. “The hubs envisioned by the committee would leverage existing resources and consist of advanced workstations, high bandwidth connectivity, and collaborative learning spaces, with a support model based on that of the Duderstadt Center and Flux. The hardware and software would be configured to allow departments or individuals to purchase their own resources in a way that would reduce fragmentation and allow for efficient support, training, and maintenance.” (Text courtesy of Dan Miesler and ARC)

The following excerpts from the executive summary of the report highlight the importance and educational value of visualization services:

“The University of Michigan has seen incredible growth and change over the years. The growth will continue as we innovate and adapt. How we teach, conduct research, facilitate student learning, push technological boundaries, and collaborate with our peers will create demand for new tools and infrastructure. One such need is visualization because of the imperative role it plays in facilitating innovation. When one considers the vast quantities of data currently being generated from disparate domains, methods that facilitate discovery, exploration, and integration become necessary to ensure those data are understood and effectively used.

There is a great opportunity to change the way research and education has been done but to also allow for a seamless transition between the two through advancements in connectivity, mobility, and visualization. The opportunity here is tremendous, complex, and in no way trivial. Support for a responsive and organized visualization program and its cyberinfrastructure needs is necessary to leverage the opportunities currently present at the University of Michigan.”

A full copy of the report is available here.

*The committee was created by Dan Atkins with the charge of evaluating existing visualization technologies and methods on campus; developing an action plan for addressing deficiencies in visualization needs; establishing a group of visualization leaders; and communicating with the community on visualization topics. It is composed of faculty members and staff from ARC, University Libraries, Dentistry, LSA, the Medical School, ITS, Architecture and Urban Planning, Atmospheric and Oceanic and Space Sciences, and the College of Engineering. (Text courtesy of Dan Miesler and ARC)

Using the MIDEN for Hospital Room Visualization

How can doctors and nurses walk around a hospital room that hasn’t been built yet? It may seem like an impossible riddle, but the Duderstadt Center is making it possible!

Working with the University of Michigan Hospital and a team of architects, healthcare professionals are able to preview full-scale redesigns of hospital rooms in the MIDEN. The MIDEN, or Michigan Immersive Digital Experience Nexus, is an advanced audio-visual system for virtual reality. It gives users the convincing illusion of being fully immersed in a computer-generated, three-dimensional world, presented as life-size stereoscopic projections on four surfaces that together fill the visual field, accompanied by 4.1 surround sound with attenuation and the Doppler effect.

Architects and nursing staff are using the MIDEN to preview patient room upgrades in the Trauma Burn Unit of the University Hospital. Of particular interest is the placement of an adjustable wall-mounted workstation monitor and keyboard. The MIDEN offers full-scale immersive visualization of clearances and sight-lines for the workstation with respect to the walls, cabinets, and patient bed. The design is being revised based on these visualizations before any actual construction occurs, avoiding time-consuming and costly renovations later.