User Story: Robert Alexander and Sonification of Data

Robert Alexander, a graduate student at the University of Michigan, represents what students can do in the Digital Media Commons (DMC), a service of the Library, if they take the time to embrace their ideas and use the resources available to them. In the video above, he talks about the projects, culture, and resources available through the Library. In particular, he mentions time spent pursuing the sonification of data for NASA research, art installations, and musical performances.

Transforming Sculptures into Stereoscopic Images

Artwork by Sean Darby

Sean Darby, a local Ann Arbor artist, wanted to turn his relief sculptures into 3D images for his senior presentation to emphasize the depth of his color reliefs. Sean’s reliefs were first scanned with the HandyScan laser scanner, and accurate depth information was extracted from the scans to generate stereo pairs. Because this method was fairly time consuming, a second approach was tried: Sean digitally painted each image in black, greys, and white. In a stereoscopic projection, the white areas appear to stand in front of the grey, which stands in front of the black, creating the illusion of depth. The stereoscopic images were displayed in a traditional View-Master and looped on a screen alongside the final sculptures.
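The depth-painting trick can be sketched in a few lines of Python. The sketch below is only an illustration of the idea, not the workflow used for Sean’s show: it assumes a color image and a matching grayscale depth painting (the file names are made up), and it shifts each pixel horizontally in proportion to its brightness to produce a naive left/right pair.

```python
# Minimal sketch: turn a hand-painted grayscale depth image into a stereo pair.
# Assumes "relief_color.png" (the color artwork) and "relief_depth.png" (the
# black/grey/white depth painting) are the same size; the filenames are illustrative.
import numpy as np
from PIL import Image

color = np.asarray(Image.open("relief_color.png").convert("RGB"))
depth = np.asarray(Image.open("relief_depth.png").convert("L")) / 255.0  # 0 = far, 1 = near

MAX_SHIFT = 12  # largest horizontal offset, in pixels, for the nearest (white) areas

height, width, _ = color.shape
left = np.zeros_like(color)
right = np.zeros_like(color)

for y in range(height):
    for x in range(width):
        shift = int(depth[y, x] * MAX_SHIFT)
        # Nearer pixels are pushed apart more, creating parallax between the two views.
        left[y, min(x + shift, width - 1)] = color[y, x]
        right[y, max(x - shift, 0)] = color[y, x]

# This naive forward warp leaves small gaps where pixels were pushed aside;
# a production tool would fill those holes.
Image.fromarray(left).save("relief_left.png")
Image.fromarray(right).save("relief_right.png")
```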

Sean Darby’s original sculptures.

Out of Body In Molly Dierks Art

Molly Dierks Art Installation, “Postmodern Venus”

Molly Dierks, an MFA candidate at the Penny W. Stamps School of Art & Design, used resources at the Duderstadt Center to create an installation piece called “Postmodern Venus.” Shawn O’Grady scanned her body with the HandyScan laser scanner to create a 3D model, which was then textured to look like ancient marble and presented in the MIDEN as a life-size replication of herself.

“Postmodern Venus” plays with modern conceptions of objectivity and objectification by allowing the viewer to interact with the accurately scanned body of Molly Dierks, touching and moving through it. On her website she notes, “Experience with disability fuels my work, which probes the divide between our projected selves as they relate to the trappings of our real and perceived bodies. This work questions if there is a difference between what is real with relation to our bodies and our identities, and what is constructed, reflected or projected.” To read more about this and other work, visit Molly Dierks’ website: http://www.mollyvdierks.com/#Postmodern-Venus

Low-Cost Dynamic and Immersive Gaze Tracking

From touch-screen computers to the Kinect’s full-body motion sensor, interacting with your computer is as simple as a tap on the screen or a wave of the hand. But what if you could control your computer simply by looking at it? Gaze tracking is a dynamic and immersive input system with the potential to revolutionize modern technology.

Realizing this potential, Rachael Havens, a UROP student working with the Duderstadt Center, investigated ways of integrating an efficient and economical gaze tracker into our system. The task proved to be a challenge: professional gaze-tracking tools are highly specialized and cost tens of thousands of dollars for a single system, while the open-source alternatives tend to sacrifice quality for availability. Since neither option was ideal, a custom design was pursued.

Inspired by the EyeWriter Project, we hacked a Sony PS Eye: the stock lens and lens mount, which block infrared light, were removed and replaced with a visible-light filter and a 3D-printed lens mount of our own. With little expense, a $30 webcam became an infrared, head-mounted gaze tracker. The Duderstadt Center didn’t stop there; we integrated the gaze tracker’s software with Jugular, our in-house interactive 3D engine. Now a glance from the user doesn’t just move a cursor on a desktop, it selects objects in a 3D virtual environment of our own design.
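The story doesn’t go into the tracking software itself, but the usual first step for a head-mounted IR eye camera like this one is locating the dark pupil in each frame. Below is a minimal OpenCV sketch of that idea; the camera index and threshold value are placeholders, and this is not the EyeWriter or Jugular code.

```python
# Minimal sketch of dark-pupil detection from an IR eye camera (illustrative only;
# camera index and threshold are placeholders, not the actual Jugular pipeline).
import cv2

cap = cv2.VideoCapture(0)  # the hacked PS Eye enumerates as an ordinary webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    # Under IR illumination the pupil is the darkest blob in the image.
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        pupil = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(pupil)
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
        # After calibration, (x, y) would be mapped to screen or 3D-scene coordinates.
    cv2.imshow("eye", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```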

Floor Plans to Future Plans: 3D Modeling Cabins

Initial floor plan provided by Professor Nathan Niemi

Nathan Niemi, an associate professor of Earth & Environmental Sciences, approached the 3D Lab with a series of floor plans he had designed in Adobe Illustrator. Nathan and his colleagues, who research neotectonics and structural geology, are planning cabins to be built at their field station in Teton County, Wyoming. The current cabins at the station are small, and new ones would allow more student researchers to work in the area. Nathan’s group wanted to show alumni and prospective donors the plans for the cabins so they could pledge financial support to the project, and Nathan was curious how his floor plans could be translated into a more complete model of the architecture.

Working with Nathan and his colleagues, the Duderstadt Center took his floor plans and created splines (the line curves used in 3D modeling) from them in 3D Studio Max. From these splines, accurate, to-scale 3D models of the cabins were built. The models were then shown to several people in Nathan’s group, at which point Teton County noted that the slope of the cabin roofs would not meet the building codes for snow load in that region. By viewing their plans in 3D, the group was able to review and revise them to accommodate these restrictions. The plans are currently being shown to investors and others interested in the project.
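The modeling itself was done by hand in 3D Studio Max, but the core step of turning a 2D outline into walls amounts to extruding the floor-plan splines upward. Here is a small illustrative Python sketch of that operation; the cabin footprint and wall height are invented numbers, not Nathan’s actual plans.

```python
# Illustrative sketch of extruding a 2D floor-plan outline into 3D walls as an OBJ file.
# (The real work was done in 3D Studio Max; the footprint and height here are made up.)

# Cabin footprint in metres, as a closed loop of (x, y) corners.
footprint = [(0, 0), (8, 0), (8, 6), (0, 6)]
WALL_HEIGHT = 2.7

vertices = []
faces = []

for i, (x, y) in enumerate(footprint):
    nx, ny = footprint[(i + 1) % len(footprint)]
    base = len(vertices)
    # Each wall segment becomes a quad: two bottom corners and two top corners.
    vertices += [(x, y, 0), (nx, ny, 0), (nx, ny, WALL_HEIGHT), (x, y, WALL_HEIGHT)]
    faces.append((base + 1, base + 2, base + 3, base + 4))  # OBJ indices are 1-based

with open("cabin_walls.obj", "w") as f:
    for vx, vy, vz in vertices:
        f.write(f"v {vx} {vy} {vz}\n")
    for a, b, c, d in faces:
        f.write(f"f {a} {b} {c} {d}\n")
```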

Hybrid Force-Active Structures and Visualization

Tom Bessai demonstrates using a Microscribe

Tom Bessai is a Canadian architect currently teaching at the University of Michigan Taubman College of Architecture & Urban Planning. This past year Tom has been on sabbatical working with Sean Ahlquist to research hybrid force-active structures, that is, structures that work under the force of tension. Much like a bungee cord, these structures have two forms: slack and taut. Sean and Tom have been researching material options and constraints for these structures, experimenting with rope, mesh, nylon, and elastic in various forms.

While these structures borrow from techniques seen in gridshell construction, they are entirely new in that they actuate the material as well as the geometry of the design. The structures are first designed in computer-aided design (CAD) software and then physically built. After building a scale model, Tom uses a Microscribe to plot the vertices of the model in 3D space. These points appear in Rhino, producing a CAD model based on the actual physical structure, which Tom can then compare to his simulated model. Comparing the measurements of the two structures reveals the relationship between the tension in the structure and the material used, so the material properties can be defined more precisely and larger or smaller structures can be designed more accurately.
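One simple way to quantify that comparison is to measure how far each Microscribe point lies from the nearest vertex of the CAD model. The sketch below shows that calculation; the file names and formats are assumptions rather than Tom’s actual workflow.

```python
# Illustrative sketch of comparing Microscribe-digitized points against the CAD
# model's vertices (file names and formats are assumptions, not the actual workflow).
import numpy as np
from scipy.spatial import cKDTree

# Each file: one "x y z" point per line, in the same units and coordinate frame.
measured = np.loadtxt("microscribe_points.txt")   # points plotted on the built model
designed = np.loadtxt("cad_vertices.txt")         # vertices exported from Rhino

tree = cKDTree(designed)
distances, _ = tree.query(measured)  # distance from each measured point to the nearest CAD vertex

print(f"mean deviation: {distances.mean():.2f}")
print(f"max deviation : {distances.max():.2f}")
# Large deviations mark where tension pulled the built structure away from the design.
```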

These structures are not only complex and beautiful; Tom imagines they could have a practical application as well. Hybrid force-active structures could be used to control architectural acoustics, create intimate or open environments, or define interior and exterior spaces.

Using Motion Capture To Test Robot Movement

Student analyzing movement of his group’s robot

At the end of every year, seniors in the College of Engineering work hard to finish their capstone design projects, which are guided by a professor but built entirely by students. Keteki Saoji, a mechanical engineer focusing on manufacturing, took inspiration from Professor Revzen, who studies legged locomotion in both insects and robots. Earlier in the year, Professor Revzen published the results of experiments in which cockroaches were tripped; the results indicated that insects can use their body mechanics and momentum to stabilize their motion, rather than relying on their nervous system to interpret the environment and send electrical messages to the muscles. The study predicts that robots which similarly lack feedback can be designed to be remarkably stable while running.

Saoji and her three teammates took on the challenge of creating a robot that would maintain such stability on very rough terrain. They worked with a hexapedal robot, designed at the University of Pennsylvania, that had been shown to follow the same mechanically stabilizing dynamics as cockroaches. The team had to design new legs with sensors that let the robot detect when its feet hit the ground. The changes in motion introduced by sensing were so subtle that special equipment was needed to see them. Using the Duderstadt Center’s eight-camera Motion Capture system, the team tracked the intricacies of how the robot moved both when sensory information was used and when it was not. The data collected from the Motion Capture sessions let them follow how the robot moved with their mechanical and programming revisions, establishing that ground-contact sensing allows the robot’s motion to adapt more effectively to rougher ground.
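The story doesn’t describe the team’s analysis code, but one rough way to compare the two conditions from motion capture data is to look at how much the robot’s body bounces in each run. The sketch below illustrates that idea; the CSV layout and column names are invented, not the team’s actual data format.

```python
# Rough sketch of one way the mocap trials could be compared (the CSV layout and
# column names are assumptions, not the team's actual data format).
import pandas as pd

def height_variability(csv_path):
    """Standard deviation of the body marker's vertical position across a run."""
    frames = pd.read_csv(csv_path)  # columns assumed: frame, body_x, body_y, body_z
    return frames["body_z"].std()

with_sensing = height_variability("robot_with_ground_sensing.csv")
without_sensing = height_variability("robot_without_ground_sensing.csv")

# A smaller value means the body bounced around less over the rough terrain.
print(f"height variability with sensing   : {with_sensing:.2f}")
print(f"height variability without sensing: {without_sensing:.2f}")
```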

A student’s robot covered in sensors.

Dialogue of the Senses: Different Eyes

Three guests experiencing Alex Surdu’s exhibit

“Dialogue of the Senses” was the theme of an exhibit of student work from the Department of Performing Arts Technology, School of Music, Theatre & Dance (May 2013). Alex Surdu titled his piece “Different Eyes / I’m Standing in a Vroom” and designed it for exhibition in the MIDEN, for aural as well as visual immersion. In Alex’s words:

With each passing day, we find ourselves gaining perspective from the places we go, the people we meet, and the world that we experience. We are ultimately, however, alone in our individual universes of experience. With this piece, I attempt to bridge this gap by immersing participants in an abstract virtual universe that utilizes isochronic pulses to stimulate different states of consciousness. If art was a device created by man to communicate perspective, then works of this nature are the next logical step in realizing art’s purpose: providing not just something to look at, but a way with which to look at it.
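The “isochronic pulses” Alex mentions are simply a tone switched on and off at a regular rate. As a rough illustration (the carrier frequency and pulse rate below are arbitrary, not taken from the piece), such a pulse can be generated in a few lines:

```python
# Minimal sketch of an isochronic pulse: a tone gated on and off at a steady rate.
# The 440 Hz carrier and 8 Hz pulse rate are illustrative, not values from the piece.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
DURATION = 10.0     # seconds
CARRIER_HZ = 440.0  # audible tone
PULSE_HZ = 8.0      # on/off pulses per second

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
gate = (np.sin(2 * np.pi * PULSE_HZ * t) > 0).astype(float)  # square on/off envelope

signal = (carrier * gate * 0.5 * 32767).astype(np.int16)
wavfile.write("isochronic.wav", SAMPLE_RATE, signal)
```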

Motion Capture and Kinects Analyze Movement in Tandem

As part of their research under the Dynamic Project Management (DPM) group, PhD candidates Joon Oh Seo and SangUk Han, with UROP student Drew Nikolai, used the Motion Capture system to study the ergonomics and biomechanics of climbing a ladder. The team, advised by Professor SangHyun Lee, is analyzing the movements of construction workers to identify behaviors that may lead to injury or undue stress on the body. With motion capture the team can collect data on joint movement, and with the Kinect they can collect depth information. By comparing the two data sets of Nikolai climbing and descending the ladder, Seo and Han can evaluate the accuracy of each system, and potentially use Kinects to collect information at actual construction sites.
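A straightforward way to make that comparison is to resample a joint trajectory from the Kinect onto the motion capture timestamps and compute the position error between the two tracks. The sketch below illustrates the idea; the file layout and joint name are assumptions, not the DPM group’s actual data.

```python
# Illustrative comparison of one joint trajectory captured by both systems
# (file layout and joint name are assumptions, not the DPM group's actual data).
import numpy as np
import pandas as pd

mocap = pd.read_csv("mocap_right_knee.csv")    # columns assumed: time, x, y, z
kinect = pd.read_csv("kinect_right_knee.csv")  # columns assumed: time, x, y, z

# Resample the Kinect track onto the mocap timestamps so the two can be compared.
aligned = np.column_stack([
    np.interp(mocap["time"], kinect["time"], kinect[axis]) for axis in ("x", "y", "z")
])

errors = np.linalg.norm(mocap[["x", "y", "z"]].to_numpy() - aligned, axis=1)
print(f"RMS joint-position error: {np.sqrt((errors ** 2).mean()):.2f}")
```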

PainTrek Released on iTunes!

Get the App!

Ever have a headache or facial pain that seemingly comes and goes without warning? Ever been diagnosed with migraines, TMD or facial neuralgias but feel that your ability to explain your pain is limited?

PainTrek is a novel app that was developed to make it easier to track, analyze, and talk about pain. Using an innovative “paint your pain” interface, users can easily enter the intensity and area of pain by simply dragging over a 3D head model. Pain information can be entered as often as desired, can be viewed over time, and even analyzed to provide deeper understanding of your pain.

The PainTrek application measures pain area and progression using a unique and accurate anatomical 3D system. The 3D head model is based on a square grid with vertical and horizontal coordinates anchored to anatomical landmarks. Each quadrangle frames a well-defined craniofacial area, giving a real-time, quantifiable indication of precise pain location and intensity. This is combined with essential sensory and biopsychosocial questionnaires about previous and ongoing treatments and their rates of success or failure, integrating and displaying the information in an intuitive way.
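Conceptually, the grid described above can be thought of as a map from quadrangle IDs to pain intensities recorded over time. The toy sketch below illustrates that structure; the region names, intensity scale, and helper functions are invented for illustration and are not PainTrek’s actual data model.

```python
# Toy sketch of the "paint your pain" grid idea: each quadrangle of the 3D head
# model keeps an intensity per entry date. Region names, grid size, and the 0-10
# scale are illustrative; this is not PainTrek's actual data model.
from datetime import date

pain_log = {}  # (entry_date, region_id) -> intensity on a 0-10 scale

def paint(entry_date, region_id, intensity):
    pain_log[(entry_date, region_id)] = intensity

def summary(entry_date):
    """Number of painted regions (a proxy for pain area) and mean intensity for one day."""
    values = [v for (d, _), v in pain_log.items() if d == entry_date]
    return len(values), (sum(values) / len(values) if values else 0.0)

paint(date(2014, 3, 1), "left_temporal_3", 7)
paint(date(2014, 3, 1), "left_maxillary_1", 4)

area, mean_intensity = summary(date(2014, 3, 1))
print(f"regions painted: {area}, mean intensity: {mean_intensity:.1f}")
```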