Sonar Visualized in EECS

Original point cloud data brought into Autodesk Recap

Professor Kamal Sarabandi of Electrical Engineering and Computer Science and student Samuel Cook were investigating the accuracy of sonar equipment and came to the 3D Lab for assistance with visualizing their data. Their goal was to generate an accurate, to-scale 3D model of the EECS atrium that could be used to align their data to a physical space.

Gaps in point cloud data indicate an obstruction encountered by the sonar.

The Duderstadt Center’s Stephanie O’Malley and student consultant, Maggie Miller, used precise measurements and photo reference provided by Sam to re-create the atrium in 3D Studio Max. The point cloud data produced by their sonar was then exported as a *.PTS file, and brought into Autodesk Recap to quickly determine if everything appeared correct. When viewing point cloud data from the sonar, any significant gaps in the cloud indicate an obstruction, such as furniture, plants, or people.
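The *.PTS data described above can be inspected with a short script. A minimal Python sketch, assuming the common Leica-style layout (point count on the first line, then "x y z intensity" per line; the exact columns in the sonar export may differ), with a rough grid-occupancy check for the kind of gaps an obstruction leaves:

```python
import numpy as np

def load_pts(path):
    """Load a Leica-style .PTS point cloud.

    Assumes the common layout: first line is the point count,
    then one point per line as "x y z intensity [r g b]".
    """
    with open(path) as f:
        count = int(f.readline())
        pts = np.loadtxt(f, max_rows=count)
    return pts[:, :3]  # keep just the xyz positions

def gap_fraction(xyz, cell=0.25):
    """Rough occlusion check: fraction of empty XY grid cells.

    Large empty regions of the grid correspond to obstructions
    (furniture, plants, people) that blocked the sonar.
    """
    ij = np.floor(xyz[:, :2] / cell).astype(int)
    occupied = len({tuple(p) for p in ij})
    i_span = ij[:, 0].max() - ij[:, 0].min() + 1
    j_span = ij[:, 1].max() - ij[:, 1].min() + 1
    return 1.0 - occupied / (i_span * j_span)
```

A high `gap_fraction` over a region of the floor plan flags an area worth comparing against the modeled atrium geometry.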

Using the origin of the sonar device positioned on the second-floor balcony, the data was aligned to the scene and colored appropriately. When the images produced by the sonar were aligned with the re-created EECS atrium, the team could see the sonar picking up large objects such as benches or posts, because those areas did not produce data points. Professor Sarabandi's research encompasses a wide range of topics in applied electromagnetics, and the Duderstadt Center's visualization work furthered that research by helping to improve the accuracy of their radar.

Sonar data aligned to a model of the EECS atrium

Museum of Natural History – Planetarium Shows

Shredding Stars: Stars are consumed by a black hole
The Museum of Natural History will soon host several animations produced by the Duderstadt Center covering an array of space-related subjects. From the behavior of black holes to demonstrations of the life cycle of stars, Stephanie O'Malley, a digital artist at the Duderstadt Center, created the animations in collaboration with Matthew Linke, the planetarium director; Lydia Bieri, professor of mathematics; and Kayhan Gultekin, an assistant researcher in astronomy.
Kicked-out black holes: Gravitational pull can cause multiple black holes to merge; the spin of the merged black hole can then kick it out of orbit.
The Museum of Natural History houses a vast collection of natural history objects, ranging from local bird species, to larger mammals, to the skeletons of mammoths. The museum is located on campus and provides educational opportunities and exhibits open to both the campus and the wider community. The planetarium is located on the top floor of the museum and has put on informative shows about astronomy for visitors since 1958. A full-dome screen is used to immerse guests in the night sky, and throughout the year staff give seasonal star talks using the dome to visualize what the sky looks like at that time of the year.
 
The collaboration between visualization artists and scientists produced well-researched visualizations on an array of astronomy topics. These animations are unique in that much of what is visualized stems from raw data. Nobody has ever photographed these events actually occurring in space, and some are largely hypothetical. The animations are scheduled to be projected on the museum's full-dome screen and used as a tool in classes to better acquaint students with concepts discussed in class. They are also being featured for a short time in a separate exhibit outside of the planetarium space.
 
Those familiar with Saturday Morning Physics lessons may recognize some of the animations, as they were shown recently during Lydia Bieri’s spot discussing gravitational lensing and gravity waves (Click here for the link to the video).
Gravitational Lensing: A gravitational lens is a distribution of matter (such as a cluster of galaxies) between a distant source and an observer that is capable of bending the light from the source as it travels toward the observer.

The animations were each part of National Science Foundation-funded grants. They were created in After Effects and 3D Studio Max using a special plugin (the Domemaster 3D camera shader) for full-dome planetarium warping, which gives single frames of an animation the correct distortion to be projected onto the planetarium's curved ceiling. Frames were then rendered at 1200 to 4K pixel resolution to accommodate even very large planetariums looking to feature these animations.
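The distortion such a shader produces can be illustrated with the standard angular-fisheye mapping used for dome masters. A minimal Python sketch (an approximation of the projection geometry, not the plugin's actual code):

```python
import numpy as np

def domemaster_ray(u, v, size, fov_deg=180.0):
    """Direction vector for pixel (u, v) of a square domemaster frame.

    Angular fisheye: distance from the frame center maps linearly to
    zenith angle, so the flat image projects correctly onto a
    hemispherical dome ceiling.
    """
    # normalized coordinates in [-1, 1], frame center at (0, 0)
    x = 2.0 * (u + 0.5) / size - 1.0
    y = 2.0 * (v + 0.5) / size - 1.0
    r = np.hypot(x, y)
    if r > 1.0:
        return None                           # outside the circular image
    theta = r * np.radians(fov_deg) / 2.0     # zenith angle from dome apex
    phi = np.arctan2(y, x)                    # azimuth around the dome
    return (np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta))                    # +z points at the dome apex
```

Rendering traces one such ray per pixel; the corners of the square frame fall outside the image circle and stay black.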

The Kelsey Museum – Visualizing Lost Cylinder Seals

2D illustration of one of the seal imprints used to generate a 3D model

The Kelsey Museum houses a collection of more than 100,000 ancient and medieval objects from the civilizations of the Mediterranean and the Near East. Margaret Root, curator of the Greek and Near Eastern Collections at the Kelsey Museum, came to the Duderstadt Center with the impressions of several ancient cylinder seals. A cylinder seal is a small cylindrical tool, about one inch long, used in ancient times to engrave symbols or marks. When rolled in wet clay, the seal would leave an impression equivalent to a person's "signature." These signatures were commonly used to sign for goods when trading. Some of the earliest cylinder seals were found in the Mesopotamian region.

The Kelsey Museum wanted to re-create these seals from the impressions to generate 3D prototypes or for use in a digital exhibit. These exhibits would allow visitors to the Kelsey to experience the cylinder seal tradition first-hand by providing seals and clay to roll their own impressions. The problem was that these seals tend to get lost over time, so the museum did not have the original seals, only the imprints.

To recover a seal's three-dimensional form, Margaret Root provided the Duderstadt Center with an outline of the imprints in Adobe Illustrator. From the outline, Stephanie O'Malley of the Duderstadt Center added varying amounts of grey to generate a depth map, where the darkest areas were the most inset and the lightest areas the most protruding. With a depth map in place, she was then able to inset areas on a cylindrical mesh in ZBrush (a 3D sculpting program) to re-create what the cylinder seal (the example seal is the "queen's seal") would have looked like. Shawn O'Grady has already printed one of these seals.
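The depth-map technique above, wrapping a grayscale image around a cylinder and insetting the dark areas, can be sketched in a few lines. An illustrative Python version (the actual work was done in ZBrush; dimensions and inset depth here are made up):

```python
import numpy as np

def seal_from_depth_map(depth, radius=1.0, height=2.54, max_inset=0.15):
    """Re-create a cylinder-seal surface from a grayscale depth map.

    depth: 2D array in [0, 1], where dark (0) is the deepest
    engraving and light (1) protrudes most, matching the convention
    described above. Returns a grid of 3D vertices for the
    displaced cylinder wall.
    """
    rows, cols = depth.shape
    verts = np.empty((rows, cols, 3))
    for i in range(rows):
        for j in range(cols):
            # wrap the map's columns around the cylinder's circumference
            angle = 2.0 * np.pi * j / cols
            # inset the radius where the map is dark
            r = radius - max_inset * (1.0 - depth[i, j])
            verts[i, j] = (r * np.cos(angle),
                           r * np.sin(angle),
                           height * i / max(rows - 1, 1))
    return verts
```

Rolling the resulting cylinder in (virtual or real) clay reproduces the original imprint, since insets and protrusions are exchanged.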

A 3D render of the re-created cylinder seal.

The Duderstadt Center has since obtained the new Projet 3D printer, and plans are underway to print one of these seals on the Projet, since it has a much higher print resolution and the seals are typically quite small.

To see more from the Kelsey Museum, click here.

Architectural Visualization of Renovated Space for the Department of Pathology

The University of Michigan Health System, Department of Pathology has recently started making preparations to move to the North Campus Research Complex.  Previously, the Department of Pathology had labs dispersed around the campus.  Now there is a proposed $160 million effort to centralize the labs of the Department of Pathology and other health system branches in a space that will be more flexible and adept at accommodating future research and developments in technology.

Tsoi/Kobus and Associates, an architecture firm based in Cambridge, Massachusetts, was chosen to design the new labs. The firm specializes in architecture and interior design for technology and science, university, and healthcare projects.

The Duderstadt Center played host to the design review, led by Christine Baker of the UMH Facilities Projects and Corrie Pennington-Block of CW Mott Administration. Department of Pathology staff were invited and asked to give feedback on the designs. The meetings continued for a week, April 20-24, 2015, with various participating sub-groups. Sessions consisted of an introduction and orientation to the design using standard hard-copy architectural floor plans, with computer-generated walk-through videos and Google Earth maps shown on the large Tiled Display to assist in visualization.

Pathology staff viewing Google Earth layout of the proposed site.

The designers wanted the staff's opinion on questions of space utilization and adjacencies. To assist in visualizing this, the designs were loaded into the MIDEN as FBX files, a format exported from Autodesk Revit. In the MIDEN, Pathology staff could walk through a full-scale virtual replication of the architects' floor plan, allowing participants to explore the proposed layout and give more in-depth feedback. Based in part on that feedback, the architects revised the design and ran another review session on May 14.

Experiencing the new labs in the MIDEN

Click here to read The Ann Arbor News article about the proposed lab space.

Breaking Ground at Taubman College

The Taubman College of Architecture at the University of Michigan is adding a new wing. Located on North Campus, the College of Architecture currently shares its 240,000-square-foot building with the Penny Stamps School of Art and Design. The architecture studios themselves, located on the third floor of the building, occupy around 30,000 square feet, making them the largest academic studio space in the world.

After a recent gift of $12.5 million made by Alfred Taubman, the college plans on building a new addition which will be called the A. Alfred Taubman wing.  The completed wing will be 36,000 square feet and will have new studios, new offices for faculty and new classrooms.

On April 25, 2015, the University of Michigan's President Mark Schlissel, along with Taubman College's Dean Monica Ponce de Leon and donor A. Alfred Taubman, broke ground at the site of the new wing. The ceremonial shoveling was performed by Taubman College's Kuka robot, which was designed for architectural fabrication research but was modified for the day to assist in the ceremony.

The Duderstadt Center helped program the robot for the ceremony by filming a person shoveling in the lab. The motion was recorded by motion-capture cameras, and a program was developed for the robot to mimic the motion.
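One plausible step in such a capture-to-robot pipeline is resampling the recorded trajectory into sparse, evenly spaced waypoints a controller can interpolate between. A hypothetical Python sketch (the actual Kuka programming is not described here; function and parameter names are illustrative):

```python
import numpy as np

def resample_waypoints(path, n=20):
    """Resample a motion-captured trajectory into evenly spaced waypoints.

    path: (N, 3) array of positions recorded by motion-capture
    cameras (e.g. the hands during the shoveling motion). Returns
    n waypoints at uniform arc-length spacing, the kind of sparse
    target list a robot controller can interpolate between.
    """
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # arc length at each sample
    targets = np.linspace(0.0, s[-1], n)          # uniform spacing along the path
    out = np.empty((n, 3))
    for k in range(3):
        out[:, k] = np.interp(targets, s, path[:, k])
    return out
```

Arc-length resampling keeps the robot's motion smooth even when the capture samples cluster where the human moved slowly.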

Integrated Design Solutions, a firm based in Troy, Michigan, along with architect Preston Scott Cohen are in charge of the design for the new college.  The building is scheduled to be completed in 2017.

Click here to read an article about the ceremony released on Taubman College’s website.

Two Minute Tech: Audification Explained

 

Robert Alexander is a Design Science Ph.D. graduate and a member of the Solar and Heliospheric Research Group. Working with NASA, Robert uses data audification to learn something new about the Sun's solar wind, coupling mixed media with unique interaction methods to pull viewers into the experience. The Duderstadt Center worked with Robert to put his research into video form:
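Audification, unlike sonification's mapping of data to musical parameters, plays the raw signal directly as a waveform, time-compressed into the audible range. A minimal Python sketch of the idea (not Robert's actual tooling), writing a data series to a mono 16-bit WAV:

```python
import wave
import numpy as np

def audify(samples, out_path, rate=44100):
    """Audify a data series: play the samples directly as a waveform.

    Choosing the playback rate time-compresses the signal, so that,
    for example, days of solar-wind measurements could be heard in
    seconds. Writes a mono 16-bit WAV file.
    """
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                       # remove any DC offset
    peak = np.abs(x).max() or 1.0
    pcm = (x / peak * 32767).astype(np.int16)   # normalize to 16-bit range
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)                  # mono
        w.setsampwidth(2)                  # 16-bit samples
        w.setframerate(rate)               # playback speed = time compression
        w.writeframes(pcm.tobytes())
```

Periodic structure in the data becomes an audible pitch; bursts and discontinuities become clicks and swells the ear picks out easily.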

 

User Story: Rachael Miller and Carlos Garcia

Rachael Miller and Carlos Garcia discuss how their individual experiences with the Digital Media Commons (DMC) shaped their projects and ambitions. Rachael, an undergraduate in computer science, was able to expand her horizons by working in the Duderstadt Center on projects which dealt with virtual reality. She gained vital knowledge about motion capture by working in the MIDEN with the Kinect, and continues to apply her new skills to projects and internships today.

Carlos Garcia worked to combine technology and art in the form of projection mapping for his senior thesis Out of the Box. To approach the project, he began by searching for resources and found the DMC to be the perfect fit. By establishing connections to staff in the 3D Lab, Groundworks, the Video Studio and many others, he was able to complete his project and go on to teach others the process as well. For a more behind-the-scenes look at both Carlos Garcia and Rachael Miller's projects and process, please watch the video above!

 

User Story: Robert Alexander and Sonification of Data

Robert Alexander, a graduate student at the University of Michigan, represents what students can do in the Digital Media Commons (DMC), a service of the Library, if they take the time to embrace their ideas and use the resources available to them. In the video above, he talks about the projects, culture, and resources available through the Library. In particular, he mentions time spent pursuing the sonification of data for NASA research, art installations, and musical performances.

 

Virtual Cadaver – Supercomputing

The Virtual Cadaver is a visualization of data provided by the Visible Human Project of the National Library of Medicine. This project aimed to create a digital image dataset of complete human male and female cadavers.

Volumetric anatomy data from the Visible Human Project

The male dataset originates from Joseph Paul Jernigan, a 38-year-old convicted Texas murderer who was executed by lethal injection; he donated his body for scientific research in 1993. The female cadaver remains anonymous and has been described as a 59-year-old Maryland housewife who died of a heart attack. Her specimen contains several pathologies, including cardiovascular disease and diverticulitis.

Both cadavers were encased in a gelatin-and-water mixture and frozen to produce the fine slices that comprise the data. The male dataset consists of 1,871 slices produced at 1-millimeter intervals. The female dataset consists of 5,164 slices.

The Duderstadt Center was directed to the dataset for the female subject in December of 2013. To load the data into the virtual reality MIDEN (a fully-immersive multi-screen head-tracked CAVE environment) and a variety of other display environments, the images were pre-processed into JPEGs at 1024×608 pixels. Every tenth slice is loaded, allowing the figure to be formed out of 517 slices at 3.3mm spacing per slice. A generic image-stack loader was written to allow for a 3D volume model to be produced from any stack of images, not specific to the Visible Human data. In this way, it can be configured to load a denser sample of slices over a shorter range should a subset of the model need to be viewed in higher detail.
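A generic image-stack loader of the kind described can be sketched briefly. An illustrative Python version (not the MIDEN's actual loader; the glob pattern and reader function are placeholders), where `stride=10` mirrors the every-tenth-slice loading and a smaller stride over a start/end subrange samples a short section of the body in higher detail:

```python
import glob
import numpy as np

def load_volume(pattern, read_image, stride=10, start=0, end=None):
    """Build a 3D volume from a stack of image slices.

    pattern: glob for the slice files, assumed to sort into
    anatomical order by filename. read_image: any function mapping
    a filename to an array (e.g. a PIL- or imageio-based reader),
    so the loader is not specific to one image format or dataset.
    Returns an array of shape (slices, height, width[, channels]).
    """
    files = sorted(glob.glob(pattern))[start:end:stride]
    return np.stack([read_image(f) for f in files])
```

Stacking every tenth 0.33 mm slice of the female dataset this way yields the 517-slice volume at 3.3 mm spacing mentioned above.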

Users can navigate around their data in passive cinema-style stereoscopic projection. In the case of the Virtual Cadaver, the body appears just as it would to a surgeon, revealing the various bones, organs and tissues. Using a game controller, users can arbitrarily position sectional planes to view a cross-section of the subject. This allows for cuts to be made that would otherwise be very difficult to produce in a traditional anatomy lab. The system can accommodate markerless motion-tracking through devices like the Microsoft Kinect and can also allow for multiple simultaneous users interacting with a shared scene from remote locations across a network.
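The arbitrarily positioned sectional planes amount to a signed-distance test against the volume. A simplified Python sketch (not the MIDEN's actual implementation) that extracts the points lying on a thin slab around a cutting plane:

```python
import numpy as np

def cross_section(points, plane_point, plane_normal, thickness=0.002):
    """Select the points lying on an arbitrary sectional plane.

    points: (N, 3) positions (e.g. voxel centers of the cadaver
    volume). Keeps the points whose signed distance to the plane
    is within half the slab thickness, a simple way to extract the
    kind of arbitrary cut described above.
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)                       # unit plane normal
    pts = np.asarray(points, dtype=float)
    d = (pts - plane_point) @ n                  # signed distance per point
    return pts[np.abs(d) <= thickness / 2.0]
```

Because the plane is defined by just a point and a normal, a game controller can reposition it continuously, producing cuts at angles no physical saw could manage.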

GIS Data As Seen In The (Immersive) Environment

Viewing Vector Data of Ann Arbor in the MIDEN.

Geographical Information Systems (GIS) are used for mapping and analysis of data pertaining to geographic locations. The location data may consist of vectors, rasters, or points.

Vector data are typically used to represent boundaries of discrete political entities, zoning, or land use categories.

Raster data are often used to represent geographic properties that vary continuously over a 2D area, such as terrain elevation. Each raster represents a small rectangular finite element of information projected onto a regular 2D grid. It’s simple to construct a triangulated mesh from such data.

Unstructured point clouds are often acquired by LIDAR or other scanning techniques. Dense clouds of points can create a fuzzy visual impression of 3D surfaces of terrain, vegetation, and structures. Unlike raster data, point clouds can represent concave, undercut surfaces, but it’s harder to construct a triangulated mesh from such data.
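The contrast drawn above, raster grids triangulate trivially while point clouds do not, can be made concrete. A short Python sketch of the standard construction, splitting each grid cell of an elevation raster into two triangles:

```python
import numpy as np

def raster_to_mesh(heights):
    """Triangulate a raster elevation grid into a mesh.

    heights: (rows, cols) array of terrain elevations on a regular
    2D grid. Each grid cell becomes two triangles, which is why a
    mesh is simple to build from raster data (and hard to build
    from an unstructured point cloud, where no grid adjacency exists).
    Returns (vertices, triangles), with triangles as vertex indices.
    """
    rows, cols = heights.shape
    # one vertex per raster sample: (x, y, elevation)
    ys, xs = np.mgrid[0:rows, 0:cols]
    verts = np.column_stack([xs.ravel(), ys.ravel(), heights.ravel()])
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j            # indices of the cell's four corners
            b, c, d = a + 1, a + cols, a + cols + 1
            tris.append((a, b, c))      # split the cell into two triangles
            tris.append((b, d, c))
    return verts, np.array(tris)
```

Applied to a terrain raster like the Grand Canyon elevation data, this yields the 3D mesh rendered in the MIDEN.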

The MIDEN demo collection includes test cases for generic loaders of all three of these types. The vector tests include boundaries of roads and wooded areas in Ann Arbor projected onto a 2D map, and national boundaries projected onto the surface of a globe. The raster test is a 3D terrain mesh for a section of the Grand Canyon. The point test is a LIDAR scan of a fault line in the Grand Tetons.

Exploring the globe with GIS data.