High-Performance Vision Guidance

VSTeach

Primitive Description and Usage

  • Description: This primitive collects multiple samples of image feature data or object poses from the vision system and outputs their averaged values.

  • Example Usage: Use this primitive to record the features of the target image or the pose of the target object after moving the robot to the target position. You can store the values of this primitive's output parameters in variables and then pass them to the primitives [HPImageBasedVS] or [HPPoseBasedVS], as sketched below.
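
A minimal sketch of the teach step is shown below. It assumes a hypothetical robot.execute_primitive(name, inputs) helper that runs a primitive until its default transition condition is met and then returns the output parameters as a dictionary; the helper, the robot object, and the object name are illustrative only, while the parameter names follow the tables below.

```python
# Illustrative sketch only: the robot object and execute_primitive helper are
# assumed stand-ins for whatever API your controller exposes for running
# primitives and reading their parameters; they are not part of this manual.

def teach_target(robot):
    """Run [VSTeach] at the taught target position and keep its outputs."""
    # Assumed helper: runs the primitive until its default transition
    # condition (teachFinished = 1) is met, then returns the output
    # parameters as a dict keyed by parameter name.
    out = robot.execute_primitive("VSTeach", {
        "objName": "fpcSocket",  # illustrative object name
        "objIndex": 0,
        "collectTimes": 30,      # average over 30 samples (allowed range 10..200)
    })

    # Store the outputs in variables for later use by the servoing primitives.
    return {
        "targetFeaturePts2D": out["taughtFeaturePts2D"],  # -> [HPImageBasedVS] targetFeaturePts
        "alignObjPts2D": out["taughtAlignPts2D"],         # -> [HPImageBasedVS] alignObjPts
        "featureNoise2D": out["taughtFeaturePtsNoise"],   # -> [HPImageBasedVS] visualDetectNoise
        "targetFeaturePts3D": out["taughtFeaturePts3D"],  # -> [HPPoseBasedVS] targetFeaturePts
        "targetObjPose": out["taughtObjPose"],            # -> [HPPoseBasedVS] targetObjPose
    }
```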

Primitive Input Parameters

| Input Parameter | Description | Type | Unit | Default Value & Range |
| --- | --- | --- | --- | --- |
| objName* | Object name used by visual servoing | OBJNAME | none | |
| objIndex | Object index used by visual servoing | INT | none | 0 [0 1] |
| collectTimes | Number of times image feature samples are collected | INT | none | 30 [10 200] |

*Parameters marked with an asterisk must be assigned a value prior to executing the primitive.

Primitive State Parameters

| State Parameter | Description | Type | Unit |
| --- | --- | --- | --- |
| terminated | Flag to indicate whether the primitive is terminated. It is set to TRUE if the primitive is terminated. | BOOL | none |
| timePeriod | The time spent on running the current primitive. | DOUBLE | s |
| teachFinished | Flag to indicate whether the teaching data collection is complete | BOOL | none |

Primitive Output Parameters

| Output Parameter | Description | Type | Unit |
| --- | --- | --- | --- |
| tcpPoseOut | The TCP pose when the primitive is terminated. It is represented in the world coordinate system. | COORD | m-deg |
| taughtFeaturePts2D | Positions of multiple taught feature points in the 2D camera coordinate system. Its value can be used as the input parameter [targetFeaturePts] for the primitive [HPImageBasedVS]. | ARRAY_VEC_2d | none |
| taughtAlignPts2D | Positions of the feature points of the aligned object in the 2D camera coordinate system during teaching. Its value can be used as the input parameter [alignObjPts] for the primitive [HPImageBasedVS]. | ARRAY_VEC_2d | none |
| taughtFeaturePtsNoise | Noise of the taught 2D feature points. Its value can be used as the input parameter [visualDetectNoise] for the primitive [HPImageBasedVS]. | ARRAY_VEC_2d | none |
| taughtFeaturePts3D | Positions of multiple taught feature points in the 3D camera coordinate system. Its value can be used as the input parameter [targetFeaturePts] for the primitive [HPPoseBasedVS]. | ARRAY_VEC_3d | m |
| taughtAlignPts3D | Positions of the feature points of the aligned object in the 3D camera coordinate system during teaching. Its value can be used as the input parameter [alignObjPts] for the primitive [HPPoseBasedVS]. | ARRAY_VEC_3d | m |
| taughtObjPose | Pose of the target object taught in the 3D camera coordinate system. Its value can be used as the input parameter [targetObjPose] for the primitive [HPPoseBasedVS]. | POSE | mm-deg |
| taughtAlignPose | Pose of the aligned object in the 3D camera coordinate system during teaching. Its value can be used as the input parameter [alignObjPose] for the primitive [HPPoseBasedVS]. | POSE | mm-deg |

Default Transition Condition

| State Parameter | Condition | Value |
| --- | --- | --- |
| teachFinished | = | 1 |

HPImageBasedVS

Primitive Description and Usage

  • Description: This primitive uses Image-Based Visual Servoing (IBVS), leveraging real-time visual feedback to continuously control the robot to track objects whose positions are random and unpredictable, achieving high-precision, highly responsive, quasi-static tracking. During this process, the robot moves along the visual servoing axes to align the feature points on the target object with the target feature points.

  • Example Usage: Use this primitive to control the robot to locate an object whose position on the plane is uncertain or changing. It can also be used to align two objects, such as aligning an FPC cable with a socket, as sketched below.
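
A companion sketch for the basic locate-and-track case is shown below. It reuses the hypothetical robot.execute_primitive helper and the taught values from the [VSTeach] sketch above; the object name and depth value are assumptions, while the input parameter names follow the tables below.

```python
# Illustrative sketch only: continues the [VSTeach] sketch above; 'taught' is
# the dictionary returned by teach_target(), and the execute_primitive
# interface, object name, and depth value are assumptions.

def servo_to_object(robot, taught):
    """Track an object whose position on the plane is uncertain or changing."""
    out = robot.execute_primitive("HPImageBasedVS", {
        "objName": "fpcSocket",                            # illustrative object name
        "objIndex": 0,
        "targetFeaturePts": taught["targetFeaturePts2D"],  # taughtFeaturePts2D from [VSTeach]
        "targetDepth": [0.30],                             # assumed feature depth in m
        "vsAxis": [1, 1, 1, 0, 0, 0],                      # translation only (default)
        "velScale": 10.0,
        "maxVel": 0.05,
        "imageConvToler": 5.0,
        "targetConvTimes": 4,                              # objAligned after 4 consecutive convergences
        "visualDetectNoise": taught["featureNoise2D"],     # taughtFeaturePtsNoise from [VSTeach]
    })
    # tcpPoseOut is the TCP pose (world coordinate system) at termination.
    return out["tcpPoseOut"]
```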

Primitive Input Parameters

| Input Parameter | Description | Type | Unit | Default Value & Range |
| --- | --- | --- | --- | --- |
| objName* | Object name used by visual servoing | OBJNAME | none | |
| objIndex | Object index used by visual servoing | INT | none | 0 [0 1] |
| targetFeaturePts* | Positions of the target feature points in the 2D camera coordinate system. Each point is represented by two pixel values (X, Y). | ARRAY_VEC_2d | none | |
| targetDepth* | Depth of the target feature points in the camera coordinate system | ARRAY_DOUBLE | m | [0.01 1.2] |
| vsAxis | Axes that are allowed to move in the camera coordinate system | VEC_6i | none | 1 1 1 0 0 0 [0 0 0 0 0 0 1 1 1 0 0 0] |
| velScale | Velocity scale of visual servoing. Caution: setting this value too large may cause robot vibration. | DOUBLE | none | 10 [1.0 100.0] |
| maxVel | Maximum linear velocity of robot movement | DOUBLE | m/s | 0.05 [0.001 0.5] |
| imageConvToler | Tolerance for determining whether the image has converged. When the error between the captured image features and the target features is within this tolerance, convergence is considered to have occurred. | DOUBLE | none | 5.0 [1 50] |
| targetConvTimes | Number of consecutive times the target must simultaneously meet the image convergence tolerance. When the number of consecutive convergences reaches this value, [objAligned] is set to TRUE; 0 means [objAligned] will never be set to TRUE. | INT | none | 4 [0 20] |
| timeoutTime | Maximum time interval allowed for receiving data. A fault is triggered if the robot fails to receive visual feedback data within this interval. | DOUBLE | s | 3 [0.1 10] |
| enableObjAlign | Flag to enable the object alignment function, which can be used to align the feature points of two objects. | BOOL | none | 0 [0 / 1] |
| alignObjPts | Positions of the feature points of the aligned object in the 2D camera coordinate system. Each point is represented by two pixel values (X, Y). The aligned object is held in the robot hand and moved by the robot. | ARRAY_VEC_2d | none | |
| optVelScale | Extra velocity scale applied to visual servoing in the Z and Rz directions. It can help optimize the robot’s Cartesian motion trajectory. | VEC_2d | none | 0 0 [0 0 100 0] |
| visualDetectNoise | Noise of the visual detection result of each feature point | ARRAY_VEC_2d | none | [0 0 20 20] |

*Parameters marked with an asterisk must be assigned a value prior to executing the primitive.
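
For the object-alignment use case, a hedged sketch of a typical input set is shown below. It enables [enableObjAlign] and passes the in-hand object's taught feature points as [alignObjPts]; as before, the robot interface, object name, and depth value are assumptions rather than part of this primitive's definition.

```python
# Illustrative sketch only: object-alignment use case of [HPImageBasedVS],
# e.g. aligning an FPC cable held by the robot with a fixed socket. 'taught'
# is the dictionary from the [VSTeach] sketch; other values are assumptions.

def align_objects(robot, taught):
    """Align the in-hand object's feature points with the taught target points."""
    return robot.execute_primitive("HPImageBasedVS", {
        "objName": "fpcSocket",
        "targetFeaturePts": taught["targetFeaturePts2D"],  # fixed object (socket) features
        "targetDepth": [0.30],                             # assumed feature depth in m
        "enableObjAlign": 1,                               # align two objects instead of tracking one
        "alignObjPts": taught["alignObjPts2D"],            # in-hand object (FPC) features
        "vsAxis": [1, 1, 1, 0, 0, 0],
        "visualDetectNoise": taught["featureNoise2D"],
    })
```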

Primitive State Parameters

| State Parameter | Description | Type | Unit |
| --- | --- | --- | --- |
| terminated | Flag to indicate whether the primitive is terminated. It is set to TRUE if the primitive is terminated. | BOOL | none |
| timePeriod | The time spent on running the current primitive. | DOUBLE | s |
| objAligned | Flag to indicate whether visual servoing has successfully aligned the target object | BOOL | none |

Primitive Output Parameters

| Output Parameter | Description | Type | Unit |
| --- | --- | --- | --- |
| tcpPoseOut | The TCP pose when the primitive is terminated. It is represented in the world coordinate system. | COORD | m-deg |

Default Transition Condition

| State Parameter | Condition | Value |
| --- | --- | --- |
| objAligned | = | 1 |
