Customer Discovery Using 360 Video

Year after year, students in Professor Dawn White’s Entrepreneurship 411 course are tasked with conducting a “customer discovery” – a process in which students interested in creating a business interview professionals in a given field to assess their needs, how the products they develop could address those needs, and how they might alleviate some of the difficulties those professionals encounter on a daily basis.

When given this assignment, students would often defer to their peers for feedback instead of reaching out to strangers working in their fields of interest. Because this demographic is so similar to the students themselves, the outcome was fairly biased and didn’t truly get to the root of why someone might want or need a specific product. Looking for an alternative approach, Dawn teamed up with her longtime friend Professor Alison Bailey, who teaches DEI at the University, and Aileen Huang-Saad from Biomedical Engineering, and approached the Duderstadt Center with their idea: What if students could interact with a simulated and more diverse professional to conduct their customer discovery?

After exploring the many routes development could take, including motion capture-driven CGI avatars, 360 video was chosen as the platform for the simulation. Viewed within an Oculus Rift VR headset, 360 video ultimately gave the highest sense of realism and immersion, which was important for making the interview process feel authentic.

Up until this point, 360 videos were largely passive experiences: they did not allow users to tailor the experience based on their choices or interact with the scene in any way. This Customer Discovery project required the 360 videos to be responsive – when a student asked a recognized customer discovery question, the appropriate video response needed to be triggered. Doing this required both programming logic to trigger different videos and integrated voice recognition software, so students could ask a question out loud and have their speech recognized within the application.

Dawn and Alison sourced three professionals to serve as their simulated actors for this project:

Fritz discusses his career as an IT professional

Fritz – a young black man with a career as an IT professional

Cristina – a middle-aged woman with a noticeable accent, working in education

Charles – a white adult man employed as a barista

These actors were chosen for their authenticity and diversity, having qualities that might lead interviewers to make certain assumptions or expose biases in their interactions. With the help of talented students at the Visualization Studio, these professionals were filmed responding to various customer discovery questions using a Ricoh Theta 360 camera and a spatial microphone (this allows for spatial audio in VR, so the sound feels like it comes from the direction where the actor is sitting). So that footage of one response could be blended with the next, the actors had to remember to return their hands and face to the same pose between responses, allowing the footage to be aligned. They were also filmed giving generic responses to any unplanned questions that might be asked, as well as twiddling their thumbs and patiently waiting – footage that could be looped to fill any idle time between questions.

Once the footage was acquired, the frame ranges for each response were noted and passed off to programmers to implement in the Duderstadt Center’s in-house VR rendering software, Jugular. As an initial prototype of the concept, the application was intended to run as a proctored simulation: students would wear an Oculus Rift and ask their questions out loud, with a proctor listening in and triggering the appropriate actor response using keyboard controls. For a more natural feel, Dawn was interested in exploring voice recognition to make the process more automated.

Within Jugular, students view an interactive 360 video in which they are seated across from one of three professionals available for interviewing. Using the embedded microphone in the Oculus Rift, they can ask questions that are recognized by Dialogflow, which in turn triggers the appropriate video response, allowing students to conduct mock interviews.

With Dawn employing some computer science students to tackle the voice recognition element over the summer, this feature was integrated into Jugular using a Dialogflow agent and Python scripts. Students could now be immersed in an Oculus Rift, speaking to an actor filmed in 360 video, and have their questions interpreted as they spoke out loud using the Rift’s embedded microphone.
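The core of the response-triggering logic can be sketched in Python. The intent names, keywords, and frame ranges below are hypothetical, and the keyword matcher is only a stand-in: the actual system resolved spoken questions to intents with a Dialogflow agent.

```python
# Sketch of the question -> video-response logic (names and frame
# ranges are hypothetical; the real system used a Dialogflow agent
# rather than keyword matching).

# Each recognized intent maps to a (start_frame, end_frame) clip
# noted while editing the 360 footage.
CLIPS = {
    "ask_daily_tasks":   (120, 480),
    "ask_pain_points":   (481, 900),
    "ask_current_tools": (901, 1320),
}

IDLE_CLIP = (0, 119)  # looping "patiently waiting" footage

# Stand-in for Dialogflow intent detection: match on keywords.
KEYWORDS = {
    "ask_daily_tasks":   ["typical day", "daily"],
    "ask_pain_points":   ["frustrat", "difficult", "problem"],
    "ask_current_tools": ["tools", "software"],
}

def detect_intent(utterance: str):
    """Return the intent name for an utterance, or None if unrecognized."""
    text = utterance.lower()
    for intent, words in KEYWORDS.items():
        if any(w in text for w in words):
            return intent
    return None

def clip_for(utterance: str):
    """Return the frame range to play for a spoken question,
    falling back to the idle loop when nothing is recognized."""
    return CLIPS.get(detect_intent(utterance), IDLE_CLIP)
```

An unrecognized question falls back to the idle loop (or, in the real application, one of the generic filmed responses), so the actor never appears frozen between questions.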

Upon its completion, the Customer Discovery application was piloted in the Visualization Studio with Dawn’s students during the Winter 2019 semester.

Steel Structures – Collaborative Learning with Oculus Rift

Civil & Environmental Engineering: Design of Metal Structures (CEE413) uses a cluster of Oculus Rift head-mounted displays to visualize buckling metal columns in virtual reality. The cluster is configured in the Duderstadt Center’s Jugular software so that the instructor leads a guided tour using a joystick while three students follow his navigation. This configuration allows the instructor to control movement around the virtual object while students are only able to look around.

Developed in collaboration with the Visualization Studio using the Duderstadt Center’s Jugular software, this simulation can run on either an Oculus Rift or within the MIDEN.
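The guided-tour configuration described above can be sketched as a simple pose rule: the instructor’s joystick drives a shared position, while each follower keeps their own head orientation. The names below are illustrative; the real configuration lives inside Jugular.

```python
# Sketch of the instructor-led "guided tour" camera model
# (names hypothetical; the actual setup is configured in Jugular).
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple        # shared point in the scene, driven by the instructor
    orientation: tuple     # per-user head rotation (yaw, pitch, roll)

def follower_pose(instructor_position, follower_head_orientation):
    """Followers inherit the instructor's position in the scene but
    keep their own headset orientation, so they can look around
    freely while only the instructor controls where the group moves."""
    return Pose(position=instructor_position,
                orientation=follower_head_orientation)
```

This split is what lets three students look in different directions at the buckling column while staying locked to the instructor’s navigation.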

Tour the Michigan Ion Beam Laboratory in 3D

3D Model of the Michigan Ion Beam Laboratory

The Michigan Ion Beam Laboratory (MIBL) was established in 1986 as part of the Department of Nuclear Engineering and Radiological Sciences in the College of Engineering. Located on the University of Michigan’s North Campus, the MIBL provides unique and extensive facilities to support research and development. Recently, Professor Gary Was, Director of the MIBL, reached out to the Duderstadt Center for assistance with developing content for the MIBL website to better introduce users to the capabilities of the lab as construction on a new particle accelerator reached completion.

Gary’s group provided the Duderstadt Center with a scale model of the Ion Beam Laboratory generated in Inventor, along with a detailed synopsis of the various components and executable experiments. From there, Stephanie O’Malley of the Duderstadt Center optimized and beautified the provided model, adding corresponding materials, labels, and lighting. A series of fly-throughs, zoom-ins, and experiment animations were generated from this model to introduce visitors to the lab’s various capabilities.

These interactive animations were then integrated into the MIBL’s WordPress platform by student programmer Yun-Tzu Chang. Visitors to the MIBL website are now able to compare the simplified digital replica of the space with actual photos of the equipment, as well as run various experiments to better understand how each component functions. To learn more about the Michigan Ion Beam Laboratory and to explore the space yourself, visit their website at mibl.engin.umich.edu.

Sonar Visualized in EECS

Original point cloud data brought into Autodesk Recap

Professor Kamal Sarabandi of Electrical Engineering and Computer Science and student Samuel Cook were investigating the accuracy of sonar equipment and came to the 3D Lab for assistance with visualizing their data. Their goal was to generate an accurate, to-scale 3D model of the EECS atrium that could be used to align their data to a physical space.

Gaps in point cloud data indicate an obstruction encountered by the sonar.

The Duderstadt Center’s Stephanie O’Malley and student consultant Maggie Miller used precise measurements and photo references provided by Sam to re-create the atrium in 3D Studio Max. The point cloud data produced by the sonar was then exported as a *.PTS file and brought into Autodesk Recap to quickly confirm that everything appeared correct. When viewing point cloud data from the sonar, any significant gaps in the cloud indicate an obstruction, such as furniture, plants, or people.

Using the origin of the sonar device, positioned on the second-floor balcony, the data was aligned to the scene and colored appropriately. When the images produced by the sonar were aligned with the re-created EECS atrium, the team could see the sonar picking up large objects such as benches or posts, because those areas did not produce data points. Professor Sarabandi’s research encompasses a wide range of topics in applied electromagnetics, and the Duderstadt Center’s visualization efforts furthered this research by helping to improve the accuracy of the sonar.
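The export-and-align step can be illustrated with a short Python sketch. The *.PTS handling here is simplified (a header line with the point count, then one “x y z …” record per line, extra columns ignored), and the function names are illustrative rather than taken from the actual pipeline.

```python
# Sketch: load a simplified *.PTS point cloud and translate it so
# the sonar device's position (the second-floor balcony) lands at
# the origin of the re-created atrium model's coordinate frame.

def load_pts(text: str):
    """Parse a minimal PTS export: first line is the point count,
    each following line starts with 'x y z' (extra columns ignored)."""
    lines = text.strip().splitlines()
    count = int(lines[0])
    points = []
    for line in lines[1:1 + count]:
        x, y, z = (float(v) for v in line.split()[:3])
        points.append((x, y, z))
    return points

def align_to_origin(points, sonar_origin):
    """Translate the cloud so the sonar sits at (0, 0, 0)."""
    ox, oy, oz = sonar_origin
    return [(x - ox, y - oy, z - oz) for x, y, z in points]
```

With the cloud expressed relative to the sonar’s position, it can be dropped into the modeled atrium and the gaps left by benches, posts, and people line up with the corresponding geometry.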

Sonar data aligned to a model of the EECS atrium

User Story: Rachael Miller and Carlos Garcia

Rachael Miller and Carlos Garcia discuss how their individual experiences with the Digital Media Commons (DMC) shaped their projects and ambitions. Rachael, an undergraduate in computer science, was able to expand her horizons by working in the Duderstadt Center on virtual reality projects. She gained vital knowledge about motion capture by working in the MIDEN with the Kinect, and continues to apply her new skills to projects and internships today.

Carlos Garcia worked to combine technology and art in the form of projection mapping for his senior thesis, Out of the Box. To approach the project, he began by searching for resources and found the DMC to be the perfect fit. By establishing connections with staff in the 3D Lab, Groundworks, the Video Studio, and many others, he was able to complete his project and go on to teach others the process as well. For a more behind-the-scenes look at both Carlos Garcia’s and Rachael Miller’s projects and process, please watch the video above!

A Configurable iOS Controller for a Virtual Reality Environment

James examining a volumetric brain in the MIDEN with an iPod controller

Traditionally, users navigate through 3D virtual environments via game controllers; however, game controllers are littered with ambiguously labeled buttons. And while excellent for gaming, this setup makes navigating through 3D space unnecessarily complicated for the average user. James Cheng, a sophomore in Computer Science and Engineering, has been working to resolve this headache by using touch screens, such as those found in mobile devices, instead of game controllers. Using the Jugular engine in development at the Duderstadt Center, he has been developing a scalable UI system that can be used for a wide range of immersive simulations. Want to cut through a volumetric brain? Select the “slice” button and start dragging. Want to fly through an environment instead of walking? Switch to “Fly” mode and take off. The system aims to be highly configurable, since every experience is different.
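The mode-based idea can be sketched as a small controller class: the same drag gesture maps to different actions depending on the selected mode. The mode and action names below are hypothetical; the real UI system is part of Jugular.

```python
# Sketch of a mode-based touch controller (mode and action names
# are hypothetical; the actual system is built on Jugular).

class TouchController:
    """Maps one drag gesture to different actions depending on the
    currently selected mode, replacing ambiguous gamepad buttons
    with clearly labeled, simulation-specific controls."""

    MODES = {"walk", "fly", "slice"}

    def __init__(self):
        self.mode = "walk"

    def set_mode(self, mode: str):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def on_drag(self, dx, dy) -> str:
        # The gesture is identical; only the active mode decides
        # what it means inside the simulation.
        if self.mode == "walk":
            return f"move({dx}, {dy})"
        if self.mode == "fly":
            return f"fly({dx}, {dy})"
        return f"slice_plane({dx}, {dy})"  # "slice" mode
```

Because the mapping lives in data rather than in fixed hardware buttons, each simulation can expose only the modes that make sense for it.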

Initial development is being done for the iOS platform due to its consistent hardware and options for scalable user interfaces. James aims to make immersive experiences more intuitive and give developers more options for communicating with the user. You can now say “good-bye!” to memorizing what the “X” and “Y” buttons do in each simulation, and instead use clearly defined, simulation-specific buttons.

Xbox Kinect used to scan surfaces in wind tunnel

At the Gorguze Family Laboratory here on North Campus, Alexander Pankonien scanned a test wing in the 5-by-7-foot wind tunnel. Aerospace engineers test their prototypes or parts in wind tunnels to see how they fare, scanning the objects to measure how they were affected and determine whether they will be safe to launch.

Usually, the aerospace engineers scan with a laser scanner, which picks up a single point at a time. In this instance, Alexander used the Xbox 360 Kinect, which captures entire surfaces, to scan the wing. The Kinect can scan from behind glass or acrylic screening, so it doesn’t disturb the wind patterns. Though the Kinect is less accurate than the laser scanner, it can scan more of the object than a single laser beam. And if it proves accurate enough to make measurements worthwhile, this will be a great, fast solution for scanning objects in wind tunnels.
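The surface-versus-single-point distinction comes from the Kinect producing a full depth image, which can be back-projected into a point cloud with a pinhole camera model. A minimal sketch, assuming approximate intrinsics for the Xbox 360 Kinect depth camera:

```python
# Sketch: back-project a Kinect depth image into a 3D surface
# point cloud using a pinhole camera model. Intrinsics are
# approximate values for the Xbox 360 Kinect depth camera.
FX = FY = 594.0          # focal lengths in pixels (approximate)
CX, CY = 320.0, 240.0    # principal point for a 640x480 depth image

def depth_to_points(depth, fx=FX, fy=FY, cx=CX, cy=CY):
    """Convert a depth image (rows of depths in meters, 0 = no
    reading) to a list of (x, y, z) points. Unlike a single-point
    laser scan, every valid pixel yields a surface sample."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip pixels with no depth return
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
    return points
```

A single 640×480 frame can therefore yield up to ~300,000 surface samples at once, which is what makes the Kinect attractive for capturing a whole wing despite its lower per-point accuracy.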

Autonomous Control of Helicopters

Under the guidance of Professor Girard of Aerospace Engineering, and with the help of the University of Michigan Duderstadt Center, two students developed novel algorithms to give autonomous motion control to miniature helicopters. Using the Duderstadt Center’s motion capture system, they were able to determine each helicopter’s precise location in 3D space. With this information, their algorithm determined the proper throttle and heading for the helicopter to reach its goal. One classic example of their work involved two helicopters that were told to “defend” one of the creators (Zahid). Zahid wore a helmet with markers on it so the computer, and the helicopters, knew where he was. Then, as he walked around the room, the two helicopters followed beside him, protecting him from any potential aggressors.
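The position-to-command step can be sketched as a simple proportional controller: given the helicopter’s mocap position and a goal (for example, a point beside the person being “defended”), compute a heading and a distance-scaled throttle. The gains and function names below are illustrative, not taken from the students’ actual algorithm.

```python
# Sketch of position-based control: mocap position in, heading and
# throttle out. Gains are illustrative, not the actual algorithm's.
import math

def control(heli_pos, goal_pos, k_throttle=0.5, max_throttle=1.0):
    """Return (heading_radians, throttle) steering toward goal_pos.
    Heading is the bearing in the horizontal plane; throttle grows
    proportionally with distance and saturates at max_throttle."""
    dx = goal_pos[0] - heli_pos[0]
    dy = goal_pos[1] - heli_pos[1]
    heading = math.atan2(dy, dx)          # bearing toward the goal
    distance = math.hypot(dx, dy)
    throttle = min(max_throttle, k_throttle * distance)
    return heading, throttle
```

Run in a loop against the motion capture feed, a controller of this shape keeps each helicopter converging on a goal point that moves along with the tracked helmet.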