Teleoperated Robotics

Effect of Link and Joint Flexibility in a Teleoperated Robot

In applications such as space and surgical robotics, the use of thin, lightweight manipulators and cable-driven end-effectors introduces flexibility into the links and joints of the manipulator. In bilateral teleoperation, however, any such flexibility reduces the effective stiffness of the slave and, with it, the transparency of teleoperation.
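
As a minimal illustration of why this is the case (the symbols below are ours, not the project's): if the slave controller renders a stiffness $k_c$ at the motor side and the flexible joint or link contributes a stiffness $k_f$, the stiffness presented at the tip is the series combination

\[
k_{\mathrm{eff}} = \left(\frac{1}{k_c} + \frac{1}{k_f}\right)^{-1} = \frac{k_c\,k_f}{k_c + k_f} \le \min(k_c,\, k_f),
\]

so no matter how stiff the controller is made, the slave's apparent stiffness remains capped by the compliance of the transmission.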

In this research, we analyzed master-slave teleoperation transparency under slave robot joint and link flexibility, and evaluated the added benefits of using extra sensors at the end-effector of the flexible robot.

Velocity (or position) feedback from the tip of the flexible robot improves free-space position tracking performance, which in the absence of such feedback is hampered by the system's anti-resonance. Maintaining a good position tracking bandwidth is of practical interest because it enables accurate and fast manipulation.

Additionally, when interaction forces with an environment are measured by a force sensor and fed back to the user's hand, tip velocity feedback increases the bandwidth of hard-contact force tracking. This is important because a bandlimited force tracking response cannot accurately render high-frequency haptic phenomena such as the edges or surface texture of an object.

Moreover, unless tip velocity feedback is used, the flexibility of the joint or link is transmitted to the user during a hard-contact task. This result is significant because robot flexibility felt by the user limits the perception of hitting a hard object (such as bone) and makes it harder to exploit haptic cues for soft-tissue stiffness discrimination. This has direct consequences, for example, in tissue palpation as a means to detect cancerous tissue, which has a different stiffness than healthy tissue.
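
The role of tip velocity feedback can be illustrated with a standard two-mass model of a flexible joint: a motor inertia coupled to a link (tip) inertia through a compliant element. The following Python sketch is only an illustration with assumed parameter values and gains (J_m, J_l, k, b, Kp, Kd, and K_tip are ours, not taken from this work); it compares the tip's step response under motor-side PD control alone and with an added link-side velocity term, and prints the anti-resonance frequency sqrt(k/J_l) that limits motor-feedback-only tracking.

# Illustrative sketch (assumed parameters, not the controller studied here):
# a two-mass flexible-joint slave under motor-side PD control, with and
# without tip (link-side) velocity feedback.
import numpy as np

J_m, J_l = 0.01, 0.02      # motor and link inertias [kg m^2] (assumed)
k, b     = 5.0, 0.01       # joint stiffness [N m/rad] and damping (assumed)
Kp, Kd   = 20.0, 0.5       # motor-side PD gains (assumed)

def simulate(K_tip, T=3.0, dt=1e-4, r=1.0):
    """Step response of the tip angle; K_tip is the tip-velocity feedback gain."""
    th_m = th_l = w_m = w_l = 0.0
    tip = []
    for _ in range(int(T / dt)):
        # Control torque: PD on the motor angle, plus optional tip velocity damping.
        u = Kp * (r - th_m) - Kd * w_m - K_tip * w_l
        # Torque transmitted through the flexible element.
        tau = k * (th_m - th_l) + b * (w_m - w_l)
        # Semi-implicit Euler integration of the two rigid bodies.
        w_m += dt * (u - tau) / J_m
        w_l += dt * tau / J_l
        th_m += dt * w_m
        th_l += dt * w_l
        tip.append(th_l)
    return np.array(tip)

no_tip   = simulate(K_tip=0.0)   # motor feedback only: lightly damped tip oscillation
with_tip = simulate(K_tip=0.5)   # added tip velocity feedback damps the flexible mode

print("anti-resonance of the motor response ~ %.1f rad/s" % np.sqrt(k / J_l))
print("peak tip overshoot without tip feedback: %.2f" % no_tip.max())
print("peak tip overshoot with tip feedback:    %.2f" % with_tip.max())

With these assumed values, the flexible mode is barely damped when only motor-side feedback is used, whereas the added tip velocity term substantially damps it, consistent with the improved tracking and force-rendering behavior described above.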

Researchers: Mahdi Tavakoli

Sponsors:

World Modeling by Tele-Manipulation

Teleoperation is the ideal method for performing sophisticated tasks in unknown environments. In this control mode, all of the operator's actions are based on his/her interpretation of the camera images. In many cases, however, humans are poor at quantification and cannot interpret this visual information precisely, which prevents them from devising the correct task strategy.

The proposed solution is to create a modeling system that uses the sensor information already acquired for control purposes (i.e., machine perception) to determine the properties of the objects and the environment as the task progresses. One practical application is a teleoperator assistant that provides information to the operator in real time during task execution. This includes quantitative measurements such as the size, shape, and orientation of objects; such properties can, for example, help determine appropriate insertion strategies in a teleoperated assembly task.
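
As one illustration of how such quantitative measurements might be obtained (this is not the authors' method; the point data and object dimensions below are synthetic assumptions), the following Python sketch estimates an object's centroid, principal-axis orientation, and approximate dimensions from a cloud of sensed surface points using principal component analysis:

# Illustrative sketch: estimate object size and orientation from sensed points.
# The "sensed" points are synthetic; a real system would use contact/vision data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic measurements inside a 40 x 20 x 10 mm box rotated 30 deg about z (assumed).
half_extents = np.array([20.0, 10.0, 5.0])                     # mm
angle = np.radians(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
local = rng.uniform(-1.0, 1.0, size=(500, 3)) * half_extents   # body-frame points
points = local @ R_true.T + np.array([100.0, 50.0, 30.0])      # world frame + offset

# PCA: eigenvectors of the covariance give the principal axes of the object,
# and the spread of the points along each axis estimates its dimensions.
centroid = points.mean(axis=0)
cov = np.cov((points - centroid).T)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # longest axis first
axes = eigvecs[:, order]
projected = (points - centroid) @ axes
size_estimate = projected.max(axis=0) - projected.min(axis=0)

print("estimated centroid [mm]:", np.round(centroid, 1))
print("estimated size     [mm]:", np.round(size_estimate, 1))
print("estimated long-axis direction (sign ambiguous):", np.round(axes[:, 0], 2))

An assistant of this kind could report such estimates to the operator as the task progresses, for example to suggest an insertion direction aligned with the object's long axis.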

Researchers: Pierre Dupont