Abstract

When a physical task must be accomplished in an environment that is too remote or hostile for human beings (e.g., in orbit or inside nuclear reactors), robotic devices can be highly beneficial. Rather than relying on fully autonomous artificial intelligence, however, my thesis focuses on "telerobotics": robots that are remotely commanded by humans. This combines the advantages of both sides: mechanical strength and precision with human intelligence and control. By providing users with video feeds and force feedback on appropriate haptic devices, a sense of "telepresence" can be created, so that users feel as though they are physically present at the remote location and can manipulate objects to accomplish the task quickly and efficiently.

This approach, however, becomes challenging when communication delays exist between the user and the remote robot, e.g., over the Internet or in space robotics. Indeed, for delays on the order of seconds or more, a teleoperation system with direct force feedback can become not only unstable but also difficult to use from the standpoint of human perception, owing to the discrepancy between the user's actions and the environment's responses.
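To make the stability issue concrete, the sketch below simulates a one-degree-of-freedom master device whose contact force against a stiff remote wall is reflected back through a round-trip delay. The masses, stiffnesses, and delay values are illustrative assumptions, not parameters from the dissertation; the point is only that a contact which settles smoothly with no delay begins to oscillate with growing amplitude once the reflected force lags the motion by a second or more.

```python
def simulate(delay_s, k_env=5000.0, t_end=5.0, dt=0.001):
    """Simulate a 1-DOF master whose contact force is reflected back
    to the user only after a round-trip delay (all values illustrative)."""
    m, b_user, k_user = 1.0, 5.0, 50.0   # master mass; user's arm damping/stiffness
    x_target = 0.05                       # user tries to push 5 cm past a wall at x = 0
    n_delay = int(round(delay_s / dt))    # round-trip delay in samples

    x, v = 0.0, 0.0
    f_history = [0.0] * (n_delay + 1)     # past environment forces awaiting reflection
    peak = 0.0
    for _ in range(int(t_end / dt)):
        # Environment: stiff spring wall at x = 0, driven by the commanded position.
        f_env = -k_env * x if x > 0.0 else 0.0
        f_history.append(f_env)
        # Direct force feedback: the user feels the contact one round trip late.
        f_reflected = f_history[-(n_delay + 1)]

        # Master dynamics under the user's effort plus the delayed reflected force.
        f_user = k_user * (x_target - x) - b_user * v
        v += (f_user + f_reflected) / m * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Settles to millimetre-scale contact with no delay, but oscillates with
# rapidly growing amplitude once the reflected force is one second late.
print("peak |x|, no delay      :", simulate(0.0))
print("peak |x|, 1 s round trip:", simulate(1.0))
```

In this toy setup the delay-free contact stays within about a millimetre of the wall, while the same parameters with a one-second round trip produce excursions that grow without bound; that unbounded growth is the instability the model-mediated approach described next is designed to avoid.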

The central theme of this thesis is to make telerobotics possible under such large delays by feeding back virtual models of the remote environment instead of raw force and video. As the remote robot interacts with its environment, sensor information and estimation techniques are used to build physical models of the key properties of the objects in that environment, such as their locations and geometric sizes. These models are then sent through the delayed communication channel to the human user, where they are displayed both visually (using computer graphics and a delayed video stream) and haptically (using force-feedback robots). In this manner, the user interacts only with an intuitive, virtually augmented local environment that is free of delay. The virtual world mimics the remote environment and is updated whenever the latter changes, preventing errors from accumulating over time. The user can thus produce useful force and motion trajectories, which are transmitted to the slave as commands for accomplishing the task at hand. As an added feature, the system can further exploit human intelligence by involving users in the model-estimation process, e.g., by identifying geometries from delayed video feeds.
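As a rough illustration of this data flow (not code from the dissertation), the sketch below uses a one-degree-of-freedom contact model: the slave side fits a wall position and stiffness to its recent force/position samples, ships only those few parameters through a delayed channel, and the master side renders forces against its local copy of the model immediately, with no round trip inside the haptic loop. The names WallModel, estimate_wall, and DelayedChannel, and all numerical values, are invented for illustration.

```python
import collections

class WallModel:
    """Minimal 1-DOF environment model: a spring wall occupying x >= x_wall."""
    def __init__(self, x_wall=float("inf"), stiffness=0.0):
        self.x_wall = x_wall
        self.stiffness = stiffness

    def force(self, x):
        # Local haptic rendering: force comes from the *local* model copy,
        # so the user feels contact without waiting on the communication delay.
        depth = x - self.x_wall
        return -self.stiffness * depth if depth > 0.0 else 0.0

def estimate_wall(samples):
    """Slave-side estimation: least-squares fit of f = -k * (x - x_wall)
    from recent in-contact (position, force) samples."""
    xs = [x for x, _ in samples]
    fs = [f for _, f in samples]
    n = len(samples)
    mean_x, mean_f = sum(xs) / n, sum(fs) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxf = sum((x - mean_x) * (f - mean_f) for x, f in zip(xs, fs))
    k = -sxf / sxx                   # fitted contact stiffness
    x_wall = mean_x + mean_f / k     # position where the fitted force is zero
    return WallModel(x_wall, k)

class DelayedChannel:
    """FIFO channel that releases messages only after a fixed delay."""
    def __init__(self, delay_steps):
        self.delay_steps = delay_steps
        self.queue = collections.deque()

    def send(self, t, msg):
        self.queue.append((t, msg))

    def receive(self, t):
        out = []
        while self.queue and t - self.queue[0][0] >= self.delay_steps:
            out.append(self.queue.popleft()[1])
        return out

# --- toy run: the slave discovers a wall, the master feels it two seconds later ---
channel = DelayedChannel(delay_steps=2000)      # 2 s one way at a 1 kHz rate
local_model = WallModel()                       # master starts out assuming free space
contact_samples = [(0.101, -0.5), (0.102, -1.0), (0.103, -1.5)]  # slave-side sensing
channel.send(t=0, msg=estimate_wall(contact_samples))

for t in range(4000):
    for model in channel.receive(t):            # the model update arrives once, late
        local_model = model
    f = local_model.force(0.1005)               # but rendering itself has no delay

print("estimated wall at", round(local_model.x_wall, 4),
      "m, stiffness", round(local_model.stiffness, 1), "N/m")
print("rendered force at x = 0.1005 m:", round(f, 3), "N")
```

Only the few numbers inside WallModel cross the delayed channel; the force the user feels is always computed from the local copy, so the haptic loop stays delay-free even though the model itself arrives two seconds late. The dissertation's actual estimators, multi-dimensional geometries, and the human-assisted identification mentioned above are of course richer than this toy fit.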

In this dissertation, I present the architecture and implementation of such a system on actual robotic devices. Starting from a simple one-degree-of-freedom example, key challenges in each aspect of the master and slave interfaces are identified and addressed before being extended to higher dimensions. Experimental data are presented demonstrating a complete, working teleoperation system with four seconds of round-trip communication delay and successful execution of motion and force manipulation commands in a planar, two-degree-of-freedom, high-stiffness environment. This work represents a proof-of-concept demonstration of the philosophy of model-mediated teleoperation and, hopefully, a foundation from which future development of the field can progress.

Details

Title: Model mediation for time-delayed teleoperation
Author: Mitra, Probal
Year: 2008
Publisher: ProQuest Dissertations & Theses
ISBN: 978-0-549-85128-8
Source type: Dissertation or Thesis
Language of publication: English
ProQuest document ID: 304470913