New MIDEN – Unveiling of the upgraded MIDEN for Fall ‘23

New dual view capabilities

Maredith Byrd


We have upgraded the MIDEN! Four new Christie Digital M 4K25 RGB laser projectors deliver much brighter images at much higher resolution: each 10′ x 10′ screen is now rendered at 2160×2160, double the previous resolution. The laser light sources also last far longer than the old lamps, which were rated for just 1,250 hours and forced us to limit how often and how long the MIDEN was run. The new projectors are rated for 25,000 hours at 100% brightness and 50,000 hours at 50% brightness. The upgrade also adds dual-view capability: two people can now experience the MIDEN at once, seeing the same virtual content aligned to each of their unique perspectives while simultaneously interacting with it.

In a typical setup, 3D stereoscopic content (like what you would experience in a 3D movie) is projected onto three walls and the floor and stitched seamlessly together. Users wear a set of motion-tracked glasses that allow their perspective to be updated depending on where they are standing or looking, and use a motion-tracked video game controller to navigate beyond the confines of the 10’x10’ room. To the user wearing the 3D glasses, the projected content appears entirely to scale and has realistic depth – they can look underneath tables that appear to be situated in front of them, despite the table being projected onto one of the walls.
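
Under the hood, this head-tracked rendering works by computing an asymmetric (off-axis) viewing frustum for each screen from the tracked glasses' position. The snippet below is not the MIDEN's actual rendering code, just a minimal NumPy sketch of the standard technique (Kooima's generalized perspective projection), assuming the screen corners and eye position are given as float vectors in the same world coordinates:

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Projection * view matrix for one wall of a CAVE-style display.

    pa, pb, pc: screen's lower-left, lower-right, upper-left corners (3-vectors)
    pe:         tracked eye position (3-vector)
    near, far:  clipping plane distances
    """
    # Orthonormal basis of the screen plane
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal

    # Vectors from the eye to the screen corners
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                 # eye-to-screen-plane distance

    # Frustum extents on the near plane (asymmetric when the eye is off-center)
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard OpenGL-style frustum matrix
    P = np.array([[2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
                  [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
                  [0.0,          0.0,         -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0.0,          0.0,         -1.0,                    0.0]])

    # Rotate the world into screen space, then translate the eye to the origin
    M = np.eye(4); M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn
    T = np.eye(4); T[:3, 3] = -pe
    return P @ M @ T

# Front wall of a 10-foot (3.048 m) cube, eye tracked slightly off-center:
wall = [np.array(c, float) for c in [(-1.524, -1.524, -1.524),
                                     ( 1.524, -1.524, -1.524),
                                     (-1.524,  1.524, -1.524)]]
mvp = off_axis_projection(*wall, np.array([0.2, 0.1, 0.0]), near=0.1, far=100.0)
```

Recomputing this matrix every frame, per wall and per eye, is what makes a projected table appear to sit in front of the viewer no matter where they stand.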

The MIDEN supports 3D model formats exported by the most popular modeling software: Blender, 3ds Max, Maya, SketchUp, Rhino, Revit, etc. Models can be exported as OBJ, FBX, STL, or VRML files and then imported into our “Jugular” software. The MIDEN can also display Unreal Engine scenes, using the nDisplay plugin to split the scene into four cameras corresponding to the four projectors in the MIDEN.
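
Getting a Blender scene ready for Jugular, for example, is a one-line export. This is a hypothetical snippet using Blender's Python API; the same operators are available from the File ▸ Export menu:

```python
import bpy

# Export the whole scene to OBJ for import into Jugular.
# (Blender 3.x operator; Blender 4.x renamed it to bpy.ops.wm.obj_export.)
bpy.ops.export_scene.obj(filepath="//my_scene.obj")

# FBX works the same way and preserves more material and animation data:
bpy.ops.export_scene.fbx(filepath="//my_scene.fbx")
```

(The `//` prefix is Blender's convention for paths relative to the open .blend file.)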

MIDEN users experience immersion in a virtual environment without losing sight of themselves or their surroundings, as happens with a VR headset. Since VR “CAVE” is a trademarked term, ours is called the MIDEN, which stands for Michigan Immersive Digital Experience Nexus. The MIDEN also takes traditional “CAVE” technology much further: it is driven by our in-house rendering engine, which affords more flexibility than a typical “CAVE” setup.

The MIDEN is also more accessible than VR headsets: it takes less time to set up and begin using, and the controller is a standard Xbox-style gamepad familiar to most gamers. Because the real world remains visible, users do not have to worry about trip hazards or becoming disoriented, and they see their real bodies rather than the virtual avatar a headset would substitute. This also results in less motion sickness.

The MIDEN can be used for architectural review, data analysis, art installations, learning 3D modeling, and much more. From seeing the true scale of a structure in relation to the body to sensory experiences with unique visuals and spatialized audio, the MIDEN can carry these projects to a new level.

The MIDEN is available to anyone for a project, class exercise, or tour by request; contact emergingtech@umich.edu to arrange a session. Use of the MIDEN does require staff to run it, and we recommend that anyone looking to view custom content in the MIDEN schedule a few sessions ahead of their event to test that content and ensure their scene is configured properly.

Two individuals in the MIDEN point to the same virtual image with different views.
This is how the MIDEN is configured.

Security Robots Study

Using XR to conduct studies in robotics

Maredith Byrd


Xin Ye is a master’s student at the University of Michigan School of Information. She approached the Duderstadt Center with her master’s thesis defense project, which tests the favorability of humanoid robots. Stephanie O’Malley at the Visualization Studio helped Xin develop a simulation featuring three types of security robots with varying features, to see whether a more humanoid robot is viewed more favorably.

Panoramic of a UMich hallway

The simulation’s goal was to make participants feel as though they were interacting with a real robot standing in front of them, so the MIDEN was the perfect tool for this experiment. The MIDEN (Michigan Immersive Digital Experience Nexus) is a 10′ x 10′ x 10′ cube that relies on projections, allowing the user to walk naturally through a virtual environment. An environment built in Unreal Engine is projected into the MIDEN, letting the user see their physical body within a highly detailed digital world.

Panoramic of the MIDEN

Users step into the MIDEN and, by wearing 3D glasses, are immersed in a digital environment that recreates common locations on a college campus, such as a university hallway/commons area or an outdoor parking lot. After a short while, the participant gains the attention of the security robot, which approaches to question them.

Setting up the MIDEN

The robots were all configured with different triggerable answers that Xin Ye could initiate from behind the curtains of the MIDEN, so users would think the robot was responding intelligently. This technique is referred to in studies as “Wizard of Oz”: the participant believes the projected robot possesses the artificial intelligence a real robot in this situation would have, when in reality a human is choosing the appropriate response.
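
A Wizard-of-Oz rig can be as simple as a hidden operator console that maps keys to canned responses. The sketch below is hypothetical (the study’s robots were driven inside Unreal Engine, and the dialogue lines here are paraphrased); it uses the pyttsx3 text-to-speech library, which on Windows can select the “Microsoft David” voice the study used:

```python
import pyttsx3

# Canned robot responses (paraphrased for illustration).
RESPONSES = {
    "1": "Hello. May I see your MCard, please?",
    "2": "Could you please put on a face mask?",
    "3": "Have you witnessed anything suspicious in this area?",
    "4": "Thank you for your cooperation. Have a nice day.",
}

engine = pyttsx3.init()
# Use the same "Microsoft David" voice for every robot, as the study did.
for voice in engine.getProperty("voices"):
    if "David" in voice.name:
        engine.setProperty("voice", voice.id)
        break

# The hidden operator presses a number to fire the appropriate response.
while True:
    key = input("Trigger response (1-4, q to quit): ").strip()
    if key == "q":
        break
    if key in RESPONSES:
        engine.say(RESPONSES[key])
        engine.runAndWait()
```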

Knightscope
Ramsee
Pepper

This project aimed to evaluate human perception of different types of security robots, some more humanoid than others, to see whether a more humanoid robot is viewed more favorably. Three robots were used: Knightscope, Ramsee, and Pepper. Knightscope is a cone-shaped robot that lacks any humanoid features. Ramsee is somewhat more humanoid, with simple facial features, while Pepper is the most humanoid, with more complex facial features as well as arms.

Participants interacted with one of the three robot types. The robot would approach the participant in the MIDEN and question them, asking them to present an MCard, to put on a face mask, or whether they had witnessed anything suspicious. To give the robots a fair chance, each used the same “Microsoft David” automated male voice. Once the dialogue chain was complete, the robot thanked the participant and moved away. The participant then removed the 3D glasses and was taken to another location in the building for an exit interview about their interactions with the robot. Any participant who realized a human was controlling the robot was disqualified from the study.

Knightscope in Hallway
Ramsee in Hallway

Xin Ye presented her findings in a paper titled “Human Security Robot Interaction and Anthropomorphism: An Examination of Pepper, RAMSEE, and Knightscope Robots” at the 32nd IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2023) in Busan, South Korea.