<?xml version="1.0" encoding="UTF-8" standalone="yes"?><oembed><version><![CDATA[1.0]]></version><provider_name><![CDATA[TECTON 3D]]></provider_name><provider_url><![CDATA[https://tecton3d.wordpress.com]]></provider_url><author_name><![CDATA[castroecosta]]></author_name><author_url><![CDATA[https://tecton3d.wordpress.com/author/castroecosta/]]></author_url><title><![CDATA[T3: Exploring Virtual&nbsp;Mockup]]></title><type><![CDATA[link]]></type><html><![CDATA[<p>This task will research interaction techniques for visualizing 3D virtual mockups, considering two scenarios useful for design review. In the first scenario, the user visits a building at one-to-one scale in an immersive VR experience, while the second provides exploration of a 3D model from a &#8220;god-like&#8221; view, as when looking at a physical mockup. Because the scale of the model relative to the user differs between scenarios, each suits different design review tasks, such as urban planning or building accessibility analysis. However, to bring these virtual representations closer to their real counterparts, i.e. visiting a constructed building or analyzing a physical mockup, interaction techniques for navigation and model exploration need to be adapted. Luis Bruno, whose PhD is entitled Walk-In Place and Locomotion Techniques for Immersive Virtual Environments, and the BIM1 research fellow will conduct the field research on this topic with Bruno Araujo, regarding the mimicking of architectural physical mockups. The task is organized into the following four subtasks.</p>
<p>Subtask 3.1: 3D Immersive Authoring Framework for Virtual Mockup<br />
This subtask will extend the existing open-source framework Open5 (FIVE: Framework for Virtual Environments) with components to support user data such as 2D plans, maps, and terrain descriptions. This will enable the information to be overlaid on the 3D environment so that it can be used by the interaction and modeling tasks of the project. In addition, it will allow grammar-based generated 3D models to be visualized using a head-mounted display or stereoscopic multitouch tabletops.</p>
<p>Subtask 3.2: Using Body Gestures and Multisensory Devices for Virtual Locomotion<br />
This subtask will enable the use of body gestures and multisensory devices to explore 3D models through a head-mounted display. Locomotion techniques will be researched to let users navigate a one-to-one scale virtual environment using walking-in-place techniques, enriched with sensors so that navigation can be controlled naturally. This allows architects to experience a building and assess possible design problems during the conception phase using virtual reality.</p>
<p>Subtask 3.3: Virtual Mockup Exploration on Stereoscopic Multitouch Displays<br />
This subtask will handle virtual mockup exploration that mimics working with traditional architectural physical mockups. For this scenario we rely on a stereoscopic multitouch display, where the user can visualize 3D models as if they were lying on top of the tabletop, collocating the user's working space with the virtual content. Combining multitouch-based interaction with gestures above the surface allows devising interaction techniques similar to interacting with physical objects. Bimanual interaction techniques need further exploration in such an environment due to the continuity between the surface and the space above it, and 3D manipulations need adaptation to increase the usability of virtual models in free space, given the lack of haptic feedback.</p>
<p>Subtask 3.4: Demonstrator for 3D Model Inspection and 3D Scene Composition<br />
The final subtask of this work package is a demonstrator of the interaction techniques, delivered as two prototypes customized to architectural needs. The first will provide 3D model inspection to assess building accessibility in a virtual visit environment, where the user wears a head-mounted display and navigates using body gestures. The second is the virtual mockup tabletop scenario, used to assemble 3D models and compose 3D architectural scenes.</p>
]]></html></oembed>