Planting Disabled Futures – A call for artists to collaborate

Planting Disabled Futures

Open Call for Artist Collaborators

Author


Petra Kuppers is a disability culture activist and a community performance artist. She creates participatory community performance environments that think/feel into public space, tenderness, site-specific art, access and experimentation. Petra grounds herself in disability culture methods, and uses ecosomatics, performance, and speculative writing to engage audiences toward more socially just and enjoyable futures.


Her latest project, Planting Disabled Futures, is funded by a Just Tech fellowship.

In the Planting Disabled Futures project, Petra aims to use live performance approaches and virtual reality (and other) technologies to share energy, liveliness, ongoingness, crip joy and experiences of pain. 

In the development of the Virtual Reality (VR) components of the project, we will ask: How can VR allow us to celebrate difference, rather than engage in hyper-mobile fantasies of overcoming and of disembodied life? How can our disabled bodymindspirits develop non-extractive intimacies, in energetic touch, using VR as a tool toward connecting with plants, with the world, even in pain, in climate emergency, in our ongoing COVID world?

A watercolor mock-up of the Crip Cave, with Moira Williams’ Stim Tent, two VR stations, a potential sound bed, and a table for drawing/writing.

Petra envisions a sensory art installation equipped with a VR experience, a stimming tent, a soundbed, and a drawing and writing table. The VR experience would be supplemented by actors offering unique taste, touch, and smell sensations as participants navigate the environment.

A cyanotype (blue) and watercolor mock-up of what the VR app might look like: a violet leaf with sensation hubs, little white ink portals, that might lead to an audio dream journey

The VR experience housed in the Crip Cave is expected to be a tree-like environment that allows participants to select either a visual or an auditory experience. Participants can travel down to the roots and encounter earth critters, or up into the branches and the leafy canopy. In both locations, “sensory hubs” would take participants on a journey to other worlds – worlds potentially populated with content produced by fellow artists.

A cyanotype/watercolor mock-up of little critters that might accompany you on your journey through the environment.

Artist collaborators are welcome to contribute their talents by generating 3D worlds in Unreal Engine, reciting poetry, animating, or composing music to create a dream journey in virtual reality. Artists generating digital content they would like considered for inclusion in this unique art installation can reach out to: [email protected]


To learn more about Planting Disabled Futures, visit:
https://www.petrakuppers.com/planting-disabled-futures

Security Robots Study

Security Robots

Using XR to conduct studies in robotics

Maredith Byrd


Xin Ye is a Master’s student at the University of Michigan School of Information. She approached the Duderstadt Center with her Master’s thesis project: testing how favorably people respond to humanoid security robots. Stephanie O’Malley at the Visualization Studio helped Xin develop a simulation featuring three types of security robots with varying features to see whether a more humanoid robot is viewed more favorably.

Panoramic of Umich Hallway

The simulation’s goal was to make participants feel as though they were interacting with a real robot standing in front of them, which made the MIDEN the ideal tool for the experiment. The MIDEN (Michigan Immersive Digital Experience Nexus) is a 10 x 10 x 10 foot cube that relies on projected imagery, so the user can walk naturally through a virtual environment. An environment is constructed in Unreal Engine and projected into the MIDEN in high detail, allowing the user to see their own physical body within the digital world.

Panoramic of the MIDEN

Users step into the MIDEN and, by wearing 3D glasses, are immersed in a digital environment that recreates common locations on a college campus, such as a university hallway/commons area or an outdoor parking lot. After a short while, the participant gains the attention of the security robot, which approaches to question them.

Setting up the MIDEN

Xin Ye then triggers the appropriate response so users think the robot is responding intelligently. Each robot was configured with a set of triggerable answers that Xin Ye could initiate from behind the curtains of the MIDEN. This technique is referred to in studies as “Wizard of Oz”: the participant believes the robotic projection possesses artificial intelligence, just as a real robot in this situation would, when in reality a human is deciding the appropriate response.
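As a rough illustration of that “Wizard of Oz” setup (not the study’s actual code), the sketch below shows how an operator console might forward canned response IDs to a listener inside the simulation. The UDP port, message names, and listener are assumptions; the real triggers were built into the Unreal Engine project.

```python
# Hypothetical Wizard-of-Oz console: the operator presses a key and a response ID
# is sent over UDP to a listener inside the simulation. The address, port, and
# response names are assumptions, not the project's actual configuration.
import socket

SIM_ADDRESS = ("127.0.0.1", 7777)  # assumed address of the simulation's listener

RESPONSES = {
    "1": "greet_participant",
    "2": "ask_for_mcard",
    "3": "ask_about_mask",
    "4": "ask_about_suspicious_activity",
    "5": "thank_and_depart",
}

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    print("Keys:", ", ".join(f"{k}={v}" for k, v in RESPONSES.items()))
    while True:
        key = input("Trigger response (q to quit): ").strip()
        if key == "q":
            break
        if key in RESPONSES:
            sock.sendto(RESPONSES[key].encode("utf-8"), SIM_ADDRESS)
            print("Sent:", RESPONSES[key])

if __name__ == "__main__":
    main()
```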

Knightscope
Ramsee
Pepper

This project aimed to evaluate human perception of different types of security robots – some more humanoid than others – to see whether a more humanoid robot was viewed more favorably. Three different robots were used: Knightscope, Ramsee, and Pepper. Knightscope is a cone-shaped robot that lacks any humanoid features. Ramsee is slightly more humanoid, with simple facial features, while Pepper is the most humanoid, with more complex facial features as well as arms.

Participants interacted with one of the three robot types. The robot would approach the participant in the MIDEN and question them – asking them to present an MCard, to put on a face mask, or whether they had witnessed anything suspicious. To give each robot a fair chance, all three used the same automated male “Microsoft David” voice. Once the dialogue chain was complete, the robot thanked the participant and moved away. The participant then removed the 3D glasses and was taken to another location in the building for an exit interview about their interaction with the robot. Any participant who realized that a human was controlling the robot was disqualified from the study.
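A minimal sketch of that scripted question chain is below, using the pyttsx3 text-to-speech library to approximate the shared “Microsoft David” voice (available through SAPI5 on Windows). The exact prompts and pacing are assumptions, not the study’s real dialogue.

```python
# Minimal sketch of the scripted question chain, not the study's actual code.
# pyttsx3 exposes the system voices; on Windows "Microsoft David" is typically present.
import pyttsx3

DIALOGUE_CHAIN = [
    "Hello. Could you please present your MCard?",
    "Thank you. Please put on a face mask.",
    "Have you witnessed anything suspicious in this area?",
    "Thank you for your cooperation. Have a nice day.",
]

engine = pyttsx3.init()
for voice in engine.getProperty("voices"):
    if "David" in voice.name:          # pick the shared male voice if available
        engine.setProperty("voice", voice.id)
        break

for line in DIALOGUE_CHAIN:
    input("Press Enter to speak the next line...")  # operator-paced, Wizard-of-Oz style
    engine.say(line)
    engine.runAndWait()
```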

Knightscope in Hallway
Ramsee in Hallway

Xin Ye presented her findings in a paper titled, “Human Security Robot Interaction and Anthropomorphism: An Examination of Pepper, RAMSEE, and Knightscope Robots” at the 32nd IEEE International Conference on Robot & Human Interactive Communication in Busan, South Korea.

Customer Discovery Using 360 Video


Year after year, students in Professor Dawn White’s Entrepreneurship 411 course are tasked with doing “customer discovery” – a process in which students interested in creating a business interview professionals in a given field to assess their needs, how the products they develop could address those needs, and how they might alleviate some of the difficulties those professionals encounter on a daily basis.

Often when given this assignment, students would defer to their peers for feedback instead of reaching out to strangers working in the fields of interest. Because this demographic is so similar to the students themselves, the results were fairly biased and didn’t truly get to the root of why someone might want or need a specific product. Looking for an alternative approach, Dawn teamed up with her longtime friend Professor Alison Bailey, who teaches DEI at the University, and Aileen Huang-Saad from Biomedical Engineering, and approached the Duderstadt Center with their idea: What if students could interact with a simulated and more diverse professional to conduct their customer discovery?

After exploring the many routes development could take, including motion-capture-driven CGI avatars, 360 video was chosen as the platform for the simulation. 360 video viewed within an Oculus Rift VR headset ultimately gave the highest sense of realism and immersion when conducting an interview, which was important for making the interview process feel authentic.

Up until this point, 360 videos were largely passive experiences: they did not allow users to tailor the experience based on their choices or interact with the scene in any way. The Customer Discovery project required the 360 videos to be responsive – when a student asked a recognized customer discovery question, the appropriate video response needed to be triggered. Doing this required both programming logic to trigger different videos and integrated voice recognition software, so students could ask a question out loud and have their speech recognized within the application.

Dawn and Alison sourced three professionals to serve as their simulated actors for this project:

Fritz discusses his career as an IT professional

Fritz – Fritz is a young black man with a career as an IT professional


Cristina – Cristina is a middle aged woman with a noticeable accent, working in education


Charles – Charles is a white adult man employed as a barista

These actors were chosen for their authenticity and diversity, having qualities that might lead interviewers to make certain assumptions or expose biases in their interactions. With the help of talented students at the Visualization Studio, the professionals were filmed responding to various customer discovery questions using a Ricoh Theta 360 camera and a spatial microphone (which allows for spatial audio in VR, so the sound seems to come from the direction where the actor is sitting). So that footage of one response could be blended with the next, the actors had to remember to return their hands and face to the same pose between responses, allowing the clips to be aligned. They were also filmed giving generic responses to any unplanned questions that might be asked, as well as twiddling their thumbs and waiting patiently – footage that could be looped to fill any idle time between questions.

Once the footage was acquired, the frame ranges for each response were noted and passed off to programmers to implement in the Duderstadt Center’s in-house VR rendering software, Jugular. As an initial prototype of the concept, the application was intended to run as a proctored simulation: students would wear an Oculus Rift and ask their questions out loud, with a proctor listening in and triggering the appropriate actor response using keyboard controls. For a more natural feel, Dawn was interested in exploring voice recognition to make the process more automated.
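At its core, the proctored prototype is a lookup from question to frame range. The sketch below illustrates that idea with made-up keys, clip names, and frame numbers; the real logic lives inside Jugular rather than in a standalone script like this.

```python
# Illustrative only: map proctor key presses to the (start, end) frame ranges
# noted during filming. Keys, clip names, and frame numbers are placeholders.
RESPONSE_CLIPS = {
    "1": ("describe_typical_day",    (1200, 1890)),
    "2": ("biggest_daily_challenge", (1950, 2740)),
    "3": ("tools_currently_used",    (2800, 3510)),
    "0": ("generic_fallback",        (90, 480)),   # for unplanned questions
    "i": ("idle_waiting",            (0, 89)),     # thumb-twiddling filler loop
}

def handle_keypress(key: str):
    """Return the frame range the video player should jump to for this proctor key."""
    name, (start, end) = RESPONSE_CLIPS.get(key, RESPONSE_CLIPS["i"])
    print(f"Playing '{name}' frames {start}-{end}")
    return start, end

if __name__ == "__main__":
    while (key := input("Proctor key (q quits): ").strip()) != "q":
        handle_keypress(key)
```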

Within Jugular, students view an interactive 360 video in which they are seated across from one of three professionals available for interviewing. Using the embedded microphone in the Oculus Rift, they can ask questions that are recognized using Dialogue Flow, which in turn triggers the appropriate video response, allowing students to conduct mock interviews.

With Dawn employing computer science students over the summer to tackle the voice recognition element, the team was able to integrate this feature into Jugular using a Dialogue Flow agent and Python scripts. Students could now be immersed in an Oculus Rift, speaking to a 360-video-filmed actor, and have their voice interpreted as they asked their questions out loud using the embedded microphone on the Rift.
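For a sense of how a Dialogue Flow (Google Dialogflow) agent can be queried from Python, here is a minimal sketch using the google-cloud-dialogflow client. The project ID, intent names, and clip mapping are placeholders, and the actual Jugular integration is not shown.

```python
# Minimal sketch: send a transcribed question to a Dialogflow agent and map the
# matched intent to a video clip. Project ID, intent names, and the clip mapping
# are placeholders, not the project's actual configuration.
from google.cloud import dialogflow

INTENT_TO_CLIP = {
    "DescribeTypicalDay": "fritz_typical_day.mp4",
    "BiggestChallenge": "fritz_biggest_challenge.mp4",
    "Default Fallback Intent": "fritz_generic_response.mp4",
}

def match_question(project_id: str, session_id: str, question: str) -> str:
    """Return the video clip associated with the intent Dialogflow matches."""
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=question, language_code="en-US")
    )
    result = client.detect_intent(
        request={"session": session, "query_input": query_input}
    ).query_result
    return INTENT_TO_CLIP.get(result.intent.display_name,
                              "fritz_generic_response.mp4")

# Example: clip = match_question("my-gcp-project", "student-42",
#                                "What does a typical day look like for you?")
```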

Upon its completion, the Customer Discovery application was piloted in the Visualization Studio with Dawn’s students during the Winter 2019 semester.

S.C.I Hard Available in App Store


Those with spinal cord injuries (SCI) encounter a drastically different world when they are released from the hospital. With varying degrees of disability, mobility, and function, the world around them becomes a collection of physical and mental challenges – a complete departure from their previous lifestyles. Whether they use crutches or manual or powered wheelchairs, they need to relearn mobility, scheduling, and social tasks.

Players in S.C.I Hard must navigate a chaotic club scene to wrangle escaped tarsier monkeys

S.C.I Hard is a mobile game developed by the Duderstadt Center and designed by Dr. Michelle Meade for the Center for Technology & Independence (TIKTOC RERC) with funding from a NIDRR Field Initiated Development Grant.

Its purpose is to assist persons with spinal cord injury in developing and applying the skills necessary to keep their bodies healthy while managing the many aspects of SCI care, serving as a fun and engaging manual for individuals with spinal cord injuries learning independence. Tasks such as scheduling, mobility, and social interaction are all integrated subtly into the game. Players engage in goofy quests, from befriending roid-raging girl scouts in the park to collecting tarsier monkeys running rampant at a night club. The goal of S.C.I Hard is to be different from most medically oriented games, so players don’t feel like they’re being lectured or bombarded with boring medical jargon, and instead learn the important concepts of their condition in a more light-hearted and engaging way.

Players shop for a handicap accessible vehicle to take their road test as they learn independence

With more than 30 different scenarios and mini-games, a full cast of odd characters to talk with, and dozens of collectible items and weapons, only you can save the town from impending doom. S.C.I Hard puts you, the player, in the chair of someone with a spinal cord injury, introducing you to new challenges and obstacles all while trying to save the world from legions of mutated animals. Join the fight and kick a** while sitting down!

S.C.I Hard is now available for free on Apple and Android devices through their respective app stores, but playing requires participation in the subsequent study or feedback group:

Apple Devices: https://itunes.apple.com/us/app/sci-hard/id1050205395?mt=8

Android Devices: https://play.google.com/store/apps/details?id=edu.umich.mobile.SciHard&hl=en

To learn more about the subsequent study or to participate in the study involving S.C.I Hard, visit:
http://cthi.medicine.umich.edu/projects/tiktoc-rerc/projects/r2

Steel Structures – Collaborative Learning with Oculus Rift


Civil & Environmental Engineering: Design of Metal Structures (CEE413) uses a cluster of Oculus Rift head-mounted displays to visualize buckling metal columns in virtual reality. The cluster is configured in the Duderstadt Center’s Jugular software so that the instructor leads a guided tour using a joystick while three students follow the instructor’s navigation. This configuration allows the instructor to control movement around the virtual object while students are only able to look around.
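As a rough sketch of that leader/follower idea (not Jugular’s actual networking code), an instructor station could broadcast its navigation pose each frame while follower stations apply it, keeping only their own head-tracking local. The port, message format, and callback names below are assumptions.

```python
# Rough sketch of a leader/follower navigation cluster, not Jugular's actual code.
# The leader broadcasts its navigation pose over UDP each frame; followers apply
# the shared pose for movement but keep local head-tracking for viewing.
import json
import socket
import time

BROADCAST = ("255.255.255.255", 9050)   # assumed cluster port

def run_leader(get_joystick_pose):
    """Broadcast the instructor's navigation pose (position + yaw) ~60x per second."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        pose = get_joystick_pose()        # e.g. {"pos": [x, y, z], "yaw": degrees}
        sock.sendto(json.dumps(pose).encode(), BROADCAST)
        time.sleep(1 / 60)

def run_follower(apply_navigation_pose):
    """Receive the leader's pose and apply it; local head orientation stays untouched."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", BROADCAST[1]))
    while True:
        data, _ = sock.recvfrom(1024)
        apply_navigation_pose(json.loads(data))
```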

Developed in collaboration with the Visualization Studio using the Duderstadt Center’s Jugular software, this simulation can run both on an Oculus Rift and within the MIDEN.

Virtual Cadaver Featured in Proto Magazine


Proto Magazine features articles on biomedicine and health care, targeting physicians, researchers and policy makers.

Proto is a natural science magazine produced by Massachusetts General Hospital in collaboration with Time Inc. Content Solutions. Launched in 2005, the magazine covers topics in biomedicine and health care, targeting physicians, researchers, and policy makers. In June, Proto featured an article, “Mortal Remains,” that discusses alternatives to using real cadavers in the study of medicine.

Preserving human remains for use as a cadaver over a school semester carries tremendous costs. The article in Proto magazine discusses options for revolutionizing this area of study, from older techniques such as 17th-century anatomically correct wax models and plastination (the process of removing fluids from the body and replacing them with a polymer) to new technology utilizing the Visible Human data, with a specific mention of the Duderstadt Center’s Virtual Cadaver.

To learn more, the full article from Proto Magazine can be found here.

Sean Petty manipulates cross-sections of the Virtual Cadaver from within the 3D Lab’s virtual reality environment, the MIDEN.

Exploring Human Anatomy with the Anatomage Table

The Anatomage table is a technologically advanced anatomy visualization system that allows users to explore the complex anatomy of the human body in digital form, eliminating the need for a human cadaver. The table presents a human figure at 1:1 scale and utilizes data from the Visible Human effort, with the additional capability of loading real patient data (CT, MRI, etc.), making it a great resource for research, collaborative discovery, and the study of surgical procedures. Funding to obtain the table was a collaborative effort between the schools of Dentistry, Movement Science, and Nursing, although utilization is expected to expand to include Biology. Currently on display in the Duderstadt Center for exploration, the Anatomage table will relocate to its more permanent home inside the Taubman Health Library in early July.

The Anatomage table allows users to explore the complex anatomy of the human body.

User Story: Rachael Miller and Carlos Garcia


Rachael Miller and Carlos Garcia discuss how their individual experiences with the Digital Media Commons (DMC) shaped their projects and ambitions. Rachael, an undergraduate in computer science, was able to expand her horizons by working in the Duderstadt Center on projects which dealt with virtual reality. She gained vital knowledge about motion capture by working in the MIDEN with the Kinect, and continues to apply her new skills to projects and internships today.

Carlos Garcia worked to combine technology and art in the form of projection mapping for his senior thesis Out of the Box. To approach the project, he began by searching for resources and found the DMC to be the perfect fit. By establishing connections to staff in the 3D Lab, Groundworks, the Video Studio and many others, he was able to complete his project and go on to teach others the process as well. For a more behind-the-scenes look at both Carlos Garcia and Rachael Miller’s projects and process, please watch the video above!

 

User Story: Robert Alexander and Sonification of Data


Robert Alexander, a graduate student at the University of Michigan, represents what students can do in the Digital Media Commons (DMC), a service of the Library, if they take the time to embrace their ideas and use the resources available to them. In the video above, he talks about the projects, culture, and resources available through the Library. In particular, he mentions time spent pursuing the sonification of data for NASA research, art installations, and musical performances.

 

Virtual Cadaver – Supercomputing


The Virtual Cadaver is a visualization of data provided by the Visible Human Project of the National Library of Medicine. This project aimed to create a digital image dataset of complete human male and female cadavers.

Volumetric anatomy data from the Visible Human Project

The male dataset originates from Joseph Paul Jernigan, a 38-year-old convicted Texas murderer who was executed by lethal injection in 1993; he had donated his body for scientific research. The female cadaver remains anonymous and has been described as a 59-year-old Maryland housewife who died of a heart attack. Her specimen contains several pathologies, including cardiovascular disease and diverticulitis.

Both cadavers were encased in a gelatin and water mixture and frozen to produce the fine slices that comprise the data. The male dataset consists of 1,871 slices taken at 1-millimeter intervals; the female dataset comprises 5,164 slices.

The Duderstadt Center was directed to the female dataset in December of 2013. To load the data into the virtual reality MIDEN (a fully immersive, multi-screen, head-tracked CAVE environment) and a variety of other display environments, the images were pre-processed into JPEGs at 1024×608 pixels. Every tenth slice is loaded, forming the figure from 517 slices at 3.3 mm spacing. A generic image-stack loader was written so that a 3D volume model could be produced from any stack of images, not just the Visible Human data. In this way, it can be configured to load a denser sample of slices over a shorter range should a subset of the model need to be viewed in higher detail.
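A minimal sketch of that kind of image-stack loader is below, using NumPy and Pillow; the file pattern, stride, and slice spacing are placeholders rather than the loader actually built into Jugular.

```python
# Minimal image-stack loader sketch (not the Jugular loader itself): read every
# Nth slice image from a directory into a 3D NumPy volume. File pattern, stride,
# and slice spacing are placeholders.
from pathlib import Path

import numpy as np
from PIL import Image

def load_volume(slice_dir: str, stride: int = 10, spacing_mm: float = 0.33):
    """Stack every `stride`-th slice into a volume; returns (volume, z_spacing_mm)."""
    paths = sorted(Path(slice_dir).glob("*.jpg"))[::stride]
    slices = [np.asarray(Image.open(p).convert("RGB")) for p in paths]
    volume = np.stack(slices, axis=0)        # shape: (num_slices, height, width, 3)
    return volume, spacing_mm * stride

# Example: vol, dz = load_volume("visible_human_female/", stride=10)
# -> roughly 517 slices at 3.3 mm spacing when every tenth slice is sampled.
```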

Users can navigate around the data with passive, cinema-style stereoscopic projection. In the case of the Virtual Cadaver, the body appears just as it would to a surgeon, revealing the various bones, organs, and tissues. Using a game controller, users can position sectional planes arbitrarily to view a cross-section of the subject, allowing cuts to be made that would otherwise be very difficult to produce in a traditional anatomy lab. The system can accommodate markerless motion tracking through devices like the Microsoft Kinect and can also allow multiple simultaneous users to interact with a shared scene from remote locations across a network.
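For a sense of what positioning an arbitrary sectional plane involves, the sketch below resamples a grayscale volume (like one produced by the loader above) along a user-defined cutting plane using SciPy. The plane parameters are illustrative; this is not the MIDEN’s rendering code.

```python
# Illustrative only: sample an arbitrary cross-section through a grayscale volume
# by evaluating it along a plane defined by an origin and two in-plane axes.
import numpy as np
from scipy.ndimage import map_coordinates

def plane_cross_section(volume, origin, u_axis, v_axis, size=(256, 256)):
    """Return a 2D slice of `volume` sampled on the plane origin + s*u + t*v."""
    u = np.asarray(u_axis, dtype=float); u /= np.linalg.norm(u)
    v = np.asarray(v_axis, dtype=float); v /= np.linalg.norm(v)
    s = np.arange(size[0]) - size[0] / 2
    t = np.arange(size[1]) - size[1] / 2
    ss, tt = np.meshgrid(s, t, indexing="ij")
    # Volume-space coordinates of every pixel on the cutting plane, shape (3, H, W)
    coords = (np.asarray(origin, dtype=float)[:, None, None]
              + u[:, None, None] * ss + v[:, None, None] * tt)
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Example: an oblique cut through the middle of a (Z, Y, X) grayscale volume
# section = plane_cross_section(vol_gray, origin=(258, 300, 512),
#                               u_axis=(0, 1, 0), v_axis=(1, 0, 1))
```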