Virtual Training Studio: A Virtual Environment-Based Training System for Mechanical Assembly Operations


Main Participants: Satyandra K. Gupta, D.K. Anand, J.E. Brough, M. Schwartz, and A. Thakur

Sponsors: This project is sponsored by Naval Surface Warfare Center at Indian Head, Maryland, Center for Energetic Concepts Development at the University of Maryland, and National Science Foundation.

Keywords: Virtual Environments, Virtual Reality, Training, Assembly Instruction Generation, and Assembly Planning.


Motivation

The workforce in most industries requires continual training and updating. Current training methods, for the most part, involve a combination of paper-based manuals, DVD/video-based instructions, and/or hands-on master-apprentice training. Due to the rapid influx of new and changing technologies and their associated complexities, accelerated training is a necessity in order to maintain an advanced and educated workforce. We believe that existing training methods can be further improved in terms of cost, effectiveness, time expenditure, and quality through the use of digital technologies such as virtual environments (VE). The advent of personal virtual environments offers many new possibilities for creating accelerated training technologies.

Our exploratory studies indicated that people prefer to use the virtual environment differently for training, depending on the task at hand and their individual training styles. We found that it is sometimes useful to get 3D visual cues from 3D animation and sometimes useful to see images of real parts. Practicing assembly tasks in the virtual environment can also facilitate training and help transfer that knowledge to real life. To meet these needs, we developed a system that supports three different training modes. Developing these modes and providing the ability to seamlessly switch between them required us to develop several new features.

The virtual environment-based training system we have developed is called Virtual Training Studio (VTS). The VTS aims to improve existing training methods through a VE-based multimedia training infrastructure that allows users to learn using different modes of instruction presentation, focusing mainly on the cognitive aspects of training as opposed to highly realistic physics-based simulations. The VTS is composed of three modules: Virtual Workspace, Virtual Author, and Virtual Mentor. Virtual Workspace provides the underlying VE multi-modal infrastructure; it is the platform on which the other two modules function and integrates the hardware and software into a cohesive package. Virtual Author allows non-programmers to quickly create new tutorials. Virtual Mentor, which runs on top of the Virtual Workspace, checks for user errors, assists users in the training process, and provides additional details to further clarify the action required.


Main Results and Their Anticipated Impact

Overview of virtual training studio: The VTS was designed to be an affordable Personal Virtual Environment (PVE) for training. We developed a low-cost wand design and use an off-the-shelf head-mounted display (HMD). The level of physics-based modeling that has been implemented, as well as the hardware selected, reflects this design decision.

The user interacts with the tutorial using the HMD and a wireless wand. Four optical trackers (infrared cameras) and two gyroscopes track the position and orientation of the user and the wand. The wand consists of an off-the-shelf wireless presenter, an infrared LED, and a wireless gyroscope. Inside the virtual reality environment, the user can manipulate parts and buttons using a virtual laser pointer controlled by the wireless wand. A wireless gyroscope and another infrared LED are mounted on the HMD. The cameras track the two LEDs and use triangulation to recover their x, y, z positions. Haptics and gloves were avoided in order to keep the cost of the system down. After user testing with a glove-based version of the system using two 5DT DataGloves, we decided to build a wand-based system instead because of the complications of the glove user interface and the simplicity and user-friendliness of the wand interface. The glove-based interface, when integrated with our system, forced users to memorize gestures and caused excessive arm and body movement. These problems could have been overcome with a more expensive glove, but we decided against it to reduce the system cost.
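The triangulation step can be illustrated with a minimal sketch. The actual VTS camera calibration and solver are not described above, so the geometry below, which reduces each camera to an origin point and a unit-length ray toward an LED, is an illustrative assumption rather than the real implementation:

```python
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Estimate a 3D point from two camera rays.

    Each ray is a camera origin plus a unit direction toward the LED.
    Returns the midpoint of the shortest segment between the two rays,
    which approximates the LED position when the rays nearly intersect.
    """
    origin_a, dir_a = np.asarray(origin_a, float), np.asarray(dir_a, float)
    origin_b, dir_b = np.asarray(origin_b, float), np.asarray(dir_b, float)
    w = origin_a - origin_b
    a, b, c = dir_a @ dir_a, dir_a @ dir_b, dir_b @ dir_b
    d, e = dir_a @ w, dir_b @ w
    denom = a * c - b * b          # approaches 0 when the rays are parallel
    s = (b * e - c * d) / denom    # parameter of closest point on ray A
    t = (a * e - b * d) / denom    # parameter of closest point on ray B
    p_a = origin_a + s * dir_a
    p_b = origin_b + t * dir_b
    return (p_a + p_b) / 2.0

# Two cameras one metre apart, both sighting an LED at (0.5, 0.5, 2.0)
led = np.array([0.5, 0.5, 2.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = (led - o1) / np.linalg.norm(led - o1)
d2 = (led - o2) / np.linalg.norm(led - o2)
print(triangulate(o1, d1, o2, d2))   # ≈ [0.5 0.5 2.]
```

Using the closest-point midpoint rather than an exact intersection makes the estimate robust to the small ray misalignments that real camera noise produces.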

The software infrastructure of the VTS was built using a combination of programming languages and APIs: C/C++, Python, and OpenGL. Additionally, a number of libraries were used: WorldViz's Vizard for general-purpose loading and transformation of VRML models, ColDet for collision detection, the GNU Triangulated Surface Library (GTS) for segmentation, and wxPython for the graphical user interface.

Virtual Workspace: Virtual Workspace houses the framework for manipulating objects, detecting collisions, and executing animations, and it integrates the hardware with the software to give the user an intuitive, easy-to-use interface to the virtual environment. Virtual Workspace also acts as the platform for the Virtual Author and the Virtual Mentor. A major new feature of the Virtual Workspace is dynamic generation of animations. The current version of the Virtual Workspace places the user in a furnished room with a table at the center and a projector screen on one of the walls. Parts used in the tutorial are placed on the table, while video and text instructions are displayed on the projector screen. The user interacts with the VE using a single wand, represented in the VE as a virtual laser pointer, to pick up, move, and rotate objects and to click buttons on the control panel at the front of the room. The Virtual Workspace also offers the option of interacting with the VE through a desktop personal computer (PC) interface. Virtual Workspace offers three primary modes of training: 3D Animation Mode, which lets users view the entire assembly via animations; Interactive Simulation Mode, a fully user-driven mode in which users manually perform the assembly tasks; and Video Mode, which lets users view the entire assembly via video clips. Trainees can switch between these modes at any time with the click of a button.
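Dynamically generating an animation amounts to producing intermediate poses between a part's current location and its assembled location. The actual Vizard-based implementation is not shown above, so the keyframe scheme below is an assumption: a minimal sketch that interpolates positions linearly and orientations by quaternion slerp.

```python
import math

def slerp(q0, q1, t):
    """Spherical interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in q))
        return [c / n for c in q]
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

def animation_frames(p0, q0, p1, q1, n):
    """Generate n (position, orientation) keyframes from a start pose to
    an end pose: positions interpolated linearly, orientations by slerp."""
    frames = []
    for i in range(n):
        t = i / (n - 1)
        pos = [a + t * (b - a) for a, b in zip(p0, p1)]
        frames.append((pos, slerp(q0, q1, t)))
    return frames

# A part moving 10 cm along x while rotating 90 degrees about z
frames = animation_frames(
    [0, 0, 0], [1, 0, 0, 0],
    [0.1, 0, 0], [math.cos(math.pi / 4), 0, 0, math.sin(math.pi / 4)],
    5)
```

Slerp keeps the angular speed constant across frames, which is why it is the usual choice over naive per-component interpolation of rotations.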

Virtual Author: The goal of the Virtual Author is to enable the instructor to quickly create multi-media training instructions for use in the Virtual Workspace without writing any code. The Virtual Author package includes a ProEngineer (ProE) plug-in written in ProE Toolkit, which allows an engineer to load an assembly into ProE and export it to the file formats used in the VTS – VRML and STL. We decided to use VRML and STL formats to ensure that the VTS system can work with a wide variety of CAD systems.

The instructor begins the authoring process by loading a set of VRML and STL CAD models into a tool called Part Loader, where the instructor declares a tutorial-specific dictionary for Virtual Author. The dictionary is created by giving names to parts and selected features. The instructor also uses the tool to specify the initial arrangement of the CAD models on the virtual table. At the end of the dictionary declaration process, the tool generates a data file that is loaded into Virtual Author at startup. The instructor then steps into the virtual environment and performs a virtual demonstration, which the system monitors and records.
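The layout of the data file produced by Part Loader is not specified above, so the structure below is purely hypothetical; it only illustrates the kind of information such a dictionary has to carry (part names, model files, initial table positions, and named features):

```python
import json

# Hypothetical dictionary layout; all field names are illustrative,
# not the actual VTS file format.
dictionary = {
    "parts": [
        {"name": "end cap", "vrml": "end_cap.wrl", "stl": "end_cap.stl",
         "table_position": [0.10, 0.00, 0.05],
         "features": {"threaded hole": {"type": "cylindrical"}}},
        {"name": "housing", "vrml": "housing.wrl", "stl": "housing.stl",
         "table_position": [-0.10, 0.00, 0.05],
         "features": {"mating face": {"type": "planar"}}},
    ]
}

# Serialize and round-trip, as Virtual Author would at startup
data = json.dumps(dictionary, indent=2)
print(json.loads(data)["parts"][0]["name"])   # end cap
```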

During the virtual demonstration, the instructor picks up one part or subassembly with a single virtual laser pointer and inserts it into another part or subassembly. Hence, there is always a moving subassembly and a receiving subassembly, which remains stationary. After the instructor carries out a particular assembly step inside the virtual environment, Virtual Author performs motion smoothening by calculating the final assembly path, calculating the insertion point, and more precisely realigning the held assembly with the receiving assembly. Motion smoothening is necessary because the system does not prevent one object from passing through another upon collision, and because highly precise placement and alignment of parts may not be possible inside the virtual environment. Preventing the parts from intersecting at all during the motion would have required computationally expensive constraint management techniques that could slow down the scene refresh rate, so we allow CAD models to intersect with each other during virtual demonstrations and eliminate most such intersections from the training instructions during motion smoothening. This lets Virtual Author tolerate the minor placement and orientation errors that result from the absence of non-penetration constraints and haptic feedback.
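One way to picture motion smoothening is as replacing the noisy tail of the recorded path with a clean straight-line insertion along the mating axis. The sketch below is a simplified stand-in for the actual algorithm; the snap radius and the assumption that the insertion axis is already known are illustrative choices, not published details:

```python
import numpy as np

def smooth_final_approach(path, insertion_point, insertion_axis,
                          snap_radius=0.02):
    """Clean up the end of a recorded demonstration path.

    path: recorded 3D positions of the held part.
    Points within snap_radius of the insertion point are discarded and
    replaced by a straight segment along the insertion axis that ends
    exactly at insertion_point, removing jitter and small penetrations.
    """
    path = [np.asarray(p, float) for p in path]
    axis = np.asarray(insertion_axis, float)
    axis = axis / np.linalg.norm(axis)
    target = np.asarray(insertion_point, float)
    # find where the recorded motion first enters the snap region
    cut = len(path)
    for i, p in enumerate(path):
        if np.linalg.norm(p - target) < snap_radius:
            cut = i
            break
    approach = target + axis * snap_radius   # clean entry point on the axis
    # keep the free motion, then insert straight along the axis
    return path[:cut] + [approach, target]

# Noisy recorded descent toward an insertion point at the origin
path = [(0, 0, 0.20), (0, 0, 0.10), (0.001, 0.002, 0.015),
        (0.002, -0.001, 0.003)]
smoothed = smooth_final_approach(path, (0, 0, 0), (0, 0, 1))
```

The smoothed path ends exactly at the insertion point, so the generated animation shows a precise mating even though the instructor's hand motion did not.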

For each step that the instructor demonstrates in the virtual environment, the instructor also declares symmetries and specifies the symmetry types. The part symmetry information is later used by Virtual Workspace to allow trainees to assemble parts using alternate insertion locations and orientations. For each step, highly detailed text instructions are generated automatically by combining data about collision detection, part motion, and alignment constraints with the dictionary declared by the instructor. Text instructions enable trainees to refresh their memories about the assembly process on the shop floor, where VE installations are not available. Automated generation of text instructions reduces the instruction-authoring time and ensures that no steps are missing from the text instructions. In addition to the text instructions, Virtual Author automatically generates data for dynamic animations and interactive simulation for later use in Virtual Workspace.
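The declared symmetries can be exploited at training time roughly as follows: for an n-fold rotationally symmetric part, any orientation that matches the nominal one after a rotation of 2πk/n about the symmetry axis is acceptable. This single-axis sketch is illustrative only; the tolerance value and the reduction of orientation to one angle are assumptions:

```python
import math

def symmetry_orientations(n_fold):
    """Equivalent rotation angles (radians) about a part's symmetry axis
    for n-fold rotational symmetry."""
    return [2 * math.pi * k / n_fold for k in range(n_fold)]

def orientation_ok(angle, nominal, n_fold, tol=math.radians(10)):
    """True if 'angle' matches 'nominal' modulo the part's n-fold
    symmetry, within an angular tolerance."""
    for offset in symmetry_orientations(n_fold):
        # signed angular difference wrapped into (-pi, pi]
        diff = (angle - nominal - offset + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) < tol:
            return True
    return False
```

For example, with a 4-fold symmetric part a trainee who inserts it rotated by 90 degrees has still performed the step correctly, while a 45-degree rotation would be rejected.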

During the final phase, the instructor also has the option of loading video clips (.avi files) and audio (.wav files) and associating them with each step. Both the motion smoothening technique and the automatic text-from-motion generation depend heavily on the extraction of alignment constraints from polygonal models. We developed a simple, heuristics-based method for extracting planar and cylindrical surfaces, and their characteristics, from triangulated polyhedral models.
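For the planar case, a heuristic of this kind can be sketched as grouping triangles whose supporting planes coincide within a tolerance. This is one simple realization of such a heuristic, not necessarily the method used in VTS, and the cylindrical-surface case is omitted:

```python
import numpy as np

def planar_groups(vertices, triangles, tol=1e-3):
    """Group a mesh's triangles into planar patches.

    Two triangles belong to the same patch if their unit normals and
    plane offsets agree within tol. This recovers planar faces from a
    triangulated STL/VRML model that has lost its surface information.
    """
    groups = {}
    for tri in triangles:
        a, b, c = (np.asarray(vertices[i], float) for i in tri)
        n = np.cross(b - a, c - a)       # triangle normal
        n = n / np.linalg.norm(n)
        d = n @ a                        # plane offset along the normal
        # quantize (normal, offset) so nearly coplanar triangles share a key
        key = tuple(np.round(np.append(n, d) / tol).astype(int))
        groups.setdefault(key, []).append(tri)
    return list(groups.values())

# A unit square split into two triangles lies on one plane -> one group
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
groups = planar_groups(verts, tris)
print(len(groups))   # 1
```

Quantizing the plane parameters is a cheap substitute for region growing; a production implementation would also merge keys that fall on opposite sides of a quantization boundary.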

Virtual Mentor: The goal of the Virtual Mentor is to simulate the classical master-apprentice training model by monitoring the actions of the user in the Virtual Workspace and assisting the user at appropriate times to enhance the trainee’s understanding of the assembly/disassembly process. If users make repeated errors, then the system will attempt to clarify instructions by adaptively changing the level of detail and inserting targeted training sessions. The instruction level of detail will be changed by regulating the detail of text/audio instructions and regulating the detail level of visual aids such as arrows, highlights, and animations. The current version of the Virtual Mentor performs the following tasks:

In the most interactive mode, called Interactive Simulation, the user first positions and orients a part so that the interfaces align and the components can be assembled. The user can then click a "Complete" button. If the part is positioned and oriented correctly near the insertion marker, allowing for a certain margin of error, the assembly of the part is completed via animation. If the orientation or position of the part is incorrect, an error message is shown and the user must realign the part so that assembly can be completed. In this manual mode, Virtual Mentor must check for alternate orientations and insertion positions based on the symmetries that were specified in Virtual Author.
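The margin-of-error check behind the "Complete" button can be sketched as a distance threshold on position plus an angular threshold on orientation. The tolerance values and the single-angle orientation model below are illustrative assumptions, not the actual VTS thresholds:

```python
import math

POS_TOL = 0.02                 # metres from the insertion marker (assumed)
ANG_TOL = math.radians(15)     # allowed orientation error (assumed)

def can_complete(part_pos, part_angle, marker_pos, marker_angle):
    """Decide whether a manually placed part is close enough to the
    insertion marker for the step to be finished via animation.

    Returns (ok, message); the message mimics the mentor's feedback.
    """
    dist = math.dist(part_pos, marker_pos)
    # signed angular difference wrapped into (-pi, pi]
    ang = abs((part_angle - marker_angle + math.pi) % (2 * math.pi) - math.pi)
    if dist > POS_TOL:
        return False, "Move the part closer to the highlighted location."
    if ang > ANG_TOL:
        return False, "Rotate the part to match the shown orientation."
    return True, "Completing assembly step..."
```

In the full system this check would be repeated for every symmetry-equivalent orientation and insertion position before reporting an error.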

The extensive logging that the Virtual Mentor currently performs is the first step toward an adaptive Virtual Mentor that adjusts the level of detail and provides dynamic, performance-based hints. Currently, instructions are adapted, and ambiguous instructions annotated, by analyzing the logs interactively. Ongoing work, however, aims to achieve a higher level of automation in this area.

Case Study: We conducted a detailed user study involving 30 subjects and two tutorials to assess the performance of our system. The thirty subjects were drawn from three groups: ten undergraduate engineering students, ten graduate engineering students, and ten working engineers. The purpose of this study was to gather large amounts of data from each user and mine that data to better understand how people were training in the VTS. We were also interested in which features and training modes were most used, how long people trained, and user responses to pre- and post-training questionnaires.

The main findings of the study were as follows:


Related Publications

The following papers provide more details on the above-described results.


Contact

For additional information, please contact:

Dr. Satyandra K. Gupta
Department of Mechanical Engineering and Institute for Systems Research
2135 Martin Hall
University of Maryland
College Park, MD 20742
Phone: 301-405-5306
FAX: 301-314-9477

WWW: http://www.glue.umd.edu/~skgupta/