Completely sensor-based motions

Activities for sensor-based motion are provided by implementations of GoalMotionInterface and HoldMotionInterface. Both interfaces create Activities that track a target which may change over time. The difference between 'Goal' and 'Hold' is as follows: 'Goal' motions use online planning to follow the target with respect to the robot's kinematics and dynamics. They will not violate physical constraints such as the robot's velocity and acceleration limits. This may, however, result in a temporal and spatial delay between the target's motion and the robot's motion. In contrast, 'Hold' motions try to follow the target at all cost, without respecting any constraints. This implies that the target should coincide with the robot's motion center at the start of a 'Hold' motion; otherwise, the robot will try to 'jump' to the target instantly. 'Hold' motions should therefore only be used when the temporal behavior of the target's motion is known to some extent (i.e., it is known not to violate the robot's dynamics) or in combination with appropriate error handling.
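The behavioral difference between the two tracking strategies can be illustrated with a small one-dimensional simulation. The classes below are not part of the Robotics API; they are a self-contained sketch in which the 'Goal' follower limits each step to a velocity bound (and therefore lags behind a distant target), while the 'Hold' follower copies the target position directly (and therefore 'jumps').

```java
// Illustrative sketch only: these classes are NOT part of the Robotics API.
// They model the conceptual difference between 'Goal' and 'Hold' tracking in
// one dimension, under assumed values for the control cycle and velocity limit.
public class GoalVsHoldSketch {
    static final double DT = 0.01;      // assumed control cycle in seconds
    static final double MAX_VEL = 1.0;  // assumed robot velocity limit in m/s

    // 'Goal' semantics: move toward the target, but never faster than MAX_VEL.
    static double goalStep(double current, double target) {
        double maxStep = MAX_VEL * DT;
        double diff = target - current;
        if (Math.abs(diff) <= maxStep) return target;
        return current + Math.signum(diff) * maxStep;
    }

    // 'Hold' semantics: track the target exactly, regardless of dynamics.
    static double holdStep(double current, double target) {
        return target;
    }

    public static void main(String[] args) {
        double target = 0.5; // target suddenly 0.5 m away from the robot
        // The goal follower advances by at most MAX_VEL * DT per cycle,
        // while the hold follower reaches the target instantly.
        System.out.println("goal after one cycle: " + goalStep(0.0, target));
        System.out.println("hold after one cycle: " + holdStep(0.0, target));
    }
}
```

The bounded step of `goalStep` is what produces the temporal and spatial delay described above, while the unbounded step of `holdStep` is safe only if the target already coincides with the robot's motion center.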

The following parts first explain Activities for sensor-based motion in joint space, and then introduce Activities for sensor-based motion in Cartesian space.

Joint-space sensor-based motion

Sensor-based motions in joint space are currently only supported by implementations of GoalMotionInterface, which provides the following methods:


Robotics API support for: Joint-space sensor-based motion

Provided by: GoalMotionInterface
Also available in:

followJointGoal(DoubleSensor[], DeviceParameters...)

Valid DeviceParameters: OverrideParameter, MotionCenterParameter, RobotToolParameter, CartesianParameters, ControllerParameter, RedundancyDeviceParameter, AlphaParameter

Cartesian-space sensor-based motion