halcondotnet
Represents an instance of a bar code reader.
Represents a generic instance of a handle.
Represents an uninitialized handle instance.
Relinquishes ownership of the managed handle.
The caller must ensure that the handle is destroyed at some point
after this instance is no longer used.
Returns true if the handle has been initialized.
A handle is uninitialized when it has been created with the
no-argument constructor or after Dispose() has been called.
Releases the resources used by this handle object.
Invalidates the handle but keeps the HALCON handle alive, which
only makes sense if the handle is used externally and cleared later,
e.g., by an HOperatorSet-based module or another language interface.
Provides a simple string representation of the handle ID as a
hexadecimal number, which is mainly useful for debug output (to
check whether two handles are identical).
Casting to IntPtr returns the HALCON ID of the handle resources.
The caller must ensure that the input object is kept alive.
Returns the HALCON ID for the handle.
The caller must ensure that the input handle is kept alive.
Serialize the object to a binary stream in HALCON format.
Deserialize the object from a binary stream in HALCON format.
Clear the content of a handle.
Instance represents: Handle to clear.
Deserialize a serialized item.
Modified instance represents: Handle containing the deserialized item.
Handle containing the serialized item to be deserialized.
Serialize the content of a handle.
Instance represents: Handle that should be serialized.
Handle containing the serialized item.
Test if a tuple is serializable.
Instance represents: Tuple to check for serializability.
Boolean value indicating if the input can be serialized.
Check if a handle is valid.
Instance represents: The handle to check for validity.
The validity of the handle, 1 or 0.
Return the semantic type of a tuple.
Instance represents: Input tuple.
Semantic type of the input tuple as a string.
Read a bar code model from a file and create a new model.
Modified instance represents: Handle of the bar code model.
Name of the bar code model file. Default: "bar_code_model.bcm"
Create a model of a bar code reader.
Modified instance represents: Handle for using and accessing the bar code model.
Names of the generic parameters that can be adjusted for the bar code model. Default: []
Values of the generic parameters that can be adjusted for the bar code model. Default: []
Create a model of a bar code reader.
Modified instance represents: Handle for using and accessing the bar code model.
Names of the generic parameters that can be adjusted for the bar code model. Default: []
Values of the generic parameters that can be adjusted for the bar code model. Default: []
Serialize the object to a binary stream in HALCON format.
Deserialize the object from a binary stream in HALCON format.
Deserialize a bar code model.
Modified instance represents: Handle of the bar code model.
Handle of the serialized item.
Serialize a bar code model.
Instance represents: Handle of the bar code model.
Handle of the serialized item.
Read a bar code model from a file and create a new model.
Modified instance represents: Handle of the bar code model.
Name of the bar code model file. Default: "bar_code_model.bcm"
Write a bar code model to a file.
Instance represents: Handle of the bar code model.
Name of the bar code model file. Default: "bar_code_model.bcm"
Access iconic objects that were created during the search or decoding of bar code symbols.
Instance represents: Handle of the bar code model.
Bar code results or candidates for which the data is required. Default: "all"
Name of the iconic object to return. Default: "candidate_regions"
Objects that are created as intermediate results during the detection or evaluation of bar codes.
Access iconic objects that were created during the search or decoding of bar code symbols.
Instance represents: Handle of the bar code model.
Bar code results or candidates for which the data is required. Default: "all"
Name of the iconic object to return. Default: "candidate_regions"
Objects that are created as intermediate results during the detection or evaluation of bar codes.
Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
Instance represents: Handle of the bar code model.
Bar code results or candidates for which the data is required. Default: "all"
Names of the resulting data to return. Default: "decoded_types"
List with the results.
Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
Instance represents: Handle of the bar code model.
Bar code results or candidates for which the data is required. Default: "all"
Names of the resulting data to return. Default: "decoded_types"
List with the results.
Decode bar code symbols within a rectangle.
Instance represents: Handle of the bar code model.
Input image.
Type of the searched bar code. Default: "EAN-13"
Row index of the center. Default: 50.0
Column index of the center. Default: 100.0
Orientation of rectangle in radians. Default: 0.0
Half of the length of the rectangle along the reading direction of the bar code. Default: 200.0
Half of the length of the rectangle perpendicular to the reading direction of the bar code. Default: 100.0
Data strings of all successfully decoded bar codes.
Decode bar code symbols within a rectangle.
Instance represents: Handle of the bar code model.
Input image.
Type of the searched bar code. Default: "EAN-13"
Row index of the center. Default: 50.0
Column index of the center. Default: 100.0
Orientation of rectangle in radians. Default: 0.0
Half of the length of the rectangle along the reading direction of the bar code. Default: 200.0
Half of the length of the rectangle perpendicular to the reading direction of the bar code. Default: 100.0
Data strings of all successfully decoded bar codes.
Detect and read bar code symbols in an image.
Instance represents: Handle of the bar code model.
Input image. If the image has a reduced domain, the bar code search is restricted to that domain. This usually reduces the runtime of the operator. However, if the bar code is not fully inside the domain, it cannot be decoded correctly.
Type of the searched bar code. Default: "auto"
Data strings of all successfully decoded bar codes.
Regions of the successfully decoded bar code symbols.
Detect and read bar code symbols in an image.
Instance represents: Handle of the bar code model.
Input image. If the image has a reduced domain, the bar code search is restricted to that domain. This usually reduces the runtime of the operator. However, if the bar code is not fully inside the domain, it cannot be decoded correctly.
Type of the searched bar code. Default: "auto"
Data strings of all successfully decoded bar codes.
Regions of the successfully decoded bar code symbols.
Get the names of the parameters that can be used in the set_bar_code* and get_bar_code* operators for a given bar code model.
Instance represents: Handle of the bar code model.
Properties of the parameters. Default: "trained_general"
Names of the generic parameters.
Get parameters that are used by the bar code reader when processing a specific bar code type.
Instance represents: Handle of the bar code model.
Names of the bar code types for which parameters should be queried. Default: "EAN-13"
Names of the generic parameters that are to be queried for the bar code model. Default: "check_char"
Values of the generic parameters.
Get parameters that are used by the bar code reader when processing a specific bar code type.
Instance represents: Handle of the bar code model.
Names of the bar code types for which parameters should be queried. Default: "EAN-13"
Names of the generic parameters that are to be queried for the bar code model. Default: "check_char"
Values of the generic parameters.
Get one or several parameters that describe the bar code model.
Instance represents: Handle of the bar code model.
Names of the generic parameters that are to be queried for the bar code model. Default: "element_size_min"
Values of the generic parameters.
Get one or several parameters that describe the bar code model.
Instance represents: Handle of the bar code model.
Names of the generic parameters that are to be queried for the bar code model. Default: "element_size_min"
Values of the generic parameters.
Set selected parameters of the bar code model for selected bar code types.
Instance represents: Handle of the bar code model.
Names of the bar code types for which parameters should be set. Default: "EAN-13"
Names of the generic parameters that shall be adjusted for finding and decoding bar codes. Default: "check_char"
Values of the generic parameters that are adjusted for finding and decoding bar codes. Default: "absent"
Set selected parameters of the bar code model for selected bar code types.
Instance represents: Handle of the bar code model.
Names of the bar code types for which parameters should be set. Default: "EAN-13"
Names of the generic parameters that shall be adjusted for finding and decoding bar codes. Default: "check_char"
Values of the generic parameters that are adjusted for finding and decoding bar codes. Default: "absent"
Set selected parameters of the bar code model.
Instance represents: Handle of the bar code model.
Names of the generic parameters that shall be adjusted for finding and decoding bar codes. Default: "element_size_min"
Values of the generic parameters that are adjusted for finding and decoding bar codes. Default: 8
Set selected parameters of the bar code model.
Instance represents: Handle of the bar code model.
Names of the generic parameters that shall be adjusted for finding and decoding bar codes. Default: "element_size_min"
Values of the generic parameters that are adjusted for finding and decoding bar codes. Default: 8
Delete a bar code model and free the allocated memory.
Handle of the bar code model.
Delete a bar code model and free the allocated memory.
Instance represents: Handle of the bar code model.
Create a model of a bar code reader.
Modified instance represents: Handle for using and accessing the bar code model.
Names of the generic parameters that can be adjusted for the bar code model. Default: []
Values of the generic parameters that can be adjusted for the bar code model. Default: []
Create a model of a bar code reader.
Modified instance represents: Handle for using and accessing the bar code model.
Names of the generic parameters that can be adjusted for the bar code model. Default: []
Values of the generic parameters that can be adjusted for the bar code model. Default: []
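Taken together, the bar code operators above form a create / configure / find / query / clear cycle. A minimal HDevelop-style sketch of that workflow (the image file name and the element-size value are placeholder assumptions; see the operator reference for exact signatures):

```
* Create a bar code model with default generic parameters.
create_bar_code_model ([], [], BarCodeHandle)
* Optionally adjust a generic parameter, e.g. the minimum element size.
set_bar_code_param (BarCodeHandle, 'element_size_min', 8)
* Detect and decode bar codes of any supported type ('auto').
read_image (Image, 'barcode_image')
find_bar_code (Image, SymbolRegions, BarCodeHandle, 'auto', DecodedDataStrings)
* Query results accumulated during decoding, e.g. the decoded types.
get_bar_code_result (BarCodeHandle, 'all', 'decoded_types', BarCodeResults)
* Delete the model and free the allocated memory.
clear_bar_code_model (BarCodeHandle)
```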
Represents an instance of a barrier synchronization object.
Create a barrier synchronization object.
Modified instance represents: Barrier synchronization object.
Barrier attribute. Default: []
Barrier attribute value. Default: []
Barrier team size. Default: 1
Create a barrier synchronization object.
Modified instance represents: Barrier synchronization object.
Barrier attribute. Default: []
Barrier attribute value. Default: []
Barrier team size. Default: 1
Destroy a barrier synchronization object.
Instance represents: Barrier synchronization object.
Wait on the release of a barrier synchronization object.
Instance represents: Barrier synchronization object.
Create a barrier synchronization object.
Modified instance represents: Barrier synchronization object.
Barrier attribute. Default: []
Barrier attribute value. Default: []
Barrier team size. Default: 1
Create a barrier synchronization object.
Modified instance represents: Barrier synchronization object.
Barrier attribute. Default: []
Barrier attribute value. Default: []
Barrier team size. Default: 1
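The barrier entries above can be combined into a simple team synchronization pattern. A hedged HDevelop-style sketch, assuming the operator names create_barrier, wait_barrier, and clear_barrier correspond to the constructor, wait, and destroy entries listed here, and using an example team size of 4:

```
* Create a barrier for a team of four workers (attributes left at defaults).
create_barrier ([], [], 4, BarrierHandle)
* Each of the four team threads calls wait_barrier; all threads are
* released together once the last team member has arrived.
wait_barrier (BarrierHandle)
* Destroy the synchronization object when it is no longer needed.
clear_barrier (BarrierHandle)
```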
Represents an instance of the data structure used to inspect beads.
Create a model to inspect beads or adhesive in images.
Modified instance represents: Handle for using and accessing the bead inspection model.
XLD contour specifying the expected bead's shape and position.
Optimal bead thickness. Default: 50
Tolerance of the bead's thickness with respect to TargetThickness. Default: 15
Tolerance of the bead's center position. Default: 15
The bead's polarity. Default: "light"
Names of the generic parameters that can be adjusted for the bead inspection model. Default: []
Values of the generic parameters that can be adjusted for the bead inspection model. Default: []
Create a model to inspect beads or adhesive in images.
Modified instance represents: Handle for using and accessing the bead inspection model.
XLD contour specifying the expected bead's shape and position.
Optimal bead thickness. Default: 50
Tolerance of the bead's thickness with respect to TargetThickness. Default: 15
Tolerance of the bead's center position. Default: 15
The bead's polarity. Default: "light"
Names of the generic parameters that can be adjusted for the bead inspection model. Default: []
Values of the generic parameters that can be adjusted for the bead inspection model. Default: []
Get the value of a parameter in a specific bead inspection model.
Instance represents: Handle of the bead inspection model.
Name of the model parameter that is queried. Default: "target_thickness"
Value of the queried model parameter.
Get the value of a parameter in a specific bead inspection model.
Instance represents: Handle of the bead inspection model.
Name of the model parameter that is queried. Default: "target_thickness"
Value of the queried model parameter.
Set parameters of the bead inspection model.
Instance represents: Handle of the bead inspection model.
Name of the model parameter that shall be adjusted for the specified bead inspection model. Default: "target_thickness"
Value of the model parameter that shall be adjusted for the specified bead inspection model. Default: 40
Set parameters of the bead inspection model.
Instance represents: Handle of the bead inspection model.
Name of the model parameter that shall be adjusted for the specified bead inspection model. Default: "target_thickness"
Value of the model parameter that shall be adjusted for the specified bead inspection model. Default: 40
Inspect beads in an image, as defined by the bead inspection model.
Instance represents: Handle of the bead inspection model to be used.
Image to apply bead inspection on.
The detected right contour of the beads.
Detected error segments.
Types of detected errors.
The detected left contour of the beads.
Delete the bead inspection model and free the allocated memory.
Instance represents: Handle of the bead inspection model.
Create a model to inspect beads or adhesive in images.
Modified instance represents: Handle for using and accessing the bead inspection model.
XLD contour specifying the expected bead's shape and position.
Optimal bead thickness. Default: 50
Tolerance of the bead's thickness with respect to TargetThickness. Default: 15
Tolerance of the bead's center position. Default: 15
The bead's polarity. Default: "light"
Names of the generic parameters that can be adjusted for the bead inspection model. Default: []
Values of the generic parameters that can be adjusted for the bead inspection model. Default: []
Create a model to inspect beads or adhesive in images.
Modified instance represents: Handle for using and accessing the bead inspection model.
XLD contour specifying the expected bead's shape and position.
Optimal bead thickness. Default: 50
Tolerance of bead's thickness with respect to TargetThickness. Default: 15
Tolerance of the bead's center position. Default: 15
The bead's polarity. Default: "light"
Names of the generic parameters that can be adjusted for the bead inspection model. Default: []
Values of the generic parameters that can be adjusted for the bead inspection model. Default: []
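The bead-inspection operators above follow the same model lifecycle. An HDevelop-style sketch using the default values documented here (ReferenceContour is a placeholder for an XLD contour describing the expected bead path; the argument order follows the HDevelop convention of iconic parameters first, so consult the reference for exact signatures):

```
* Create the model from the expected bead contour and tolerances.
create_bead_inspection_model (ReferenceContour, 50, 15, 15, 'light', [], [], BeadInspectionModel)
* Parameters can still be adjusted on the model afterwards.
set_bead_inspection_param (BeadInspectionModel, 'target_thickness', 40)
* Inspect an image; error segments are returned with their error types.
apply_bead_inspection_model (Image, LeftContour, RightContour, ErrorSegment, BeadInspectionModel, ErrorType)
* Delete the model and free the allocated memory.
clear_bead_inspection_model (BeadInspectionModel)
```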
Represents an instance of a background estimator.
Generate and initialize a data set for the background estimation.
Modified instance represents: ID of the BgEsti data set.
Initialization image.
First system matrix parameter. Default: 0.7
Second system matrix parameter. Default: 0.7
Gain type. Default: "fixed"
Kalman gain / foreground adaptation time. Default: 0.002
Kalman gain / background adaptation time. Default: 0.02
Threshold adaptation. Default: "on"
Foreground/background threshold. Default: 7.0
Number of statistic data sets. Default: 10
Confidence constant. Default: 3.25
Constant for decay time. Default: 15.0
Delete the background estimation data set.
Instance represents: ID of the BgEsti data set.
Return the estimated background image.
Instance represents: ID of the BgEsti data set.
Estimated background image of the current data set.
Change the estimated background image.
Instance represents: ID of the BgEsti data set.
Current image.
Region describing areas to change.
Estimate the background and return the foreground region.
Instance represents: ID of the BgEsti data set.
Current image.
Region of the detected foreground.
Return the parameters of the data set.
Instance represents: ID of the BgEsti data set.
Second system matrix parameter.
Gain type.
Kalman gain / foreground adaptation time.
Kalman gain / background adaptation time.
Threshold adaptation.
Foreground/background threshold.
Number of statistic data sets.
Confidence constant.
Constant for decay time.
First system matrix parameter.
Change the parameters of the data set.
Instance represents: ID of the BgEsti data set.
First system matrix parameter. Default: 0.7
Second system matrix parameter. Default: 0.7
Gain type. Default: "fixed"
Kalman gain / foreground adaptation time. Default: 0.002
Kalman gain / background adaptation time. Default: 0.02
Threshold adaptation. Default: "on"
Foreground/background threshold. Default: 7.0
Number of statistic data sets. Default: 10
Confidence constant. Default: 3.25
Constant for decay time. Default: 15.0
Generate and initialize a data set for the background estimation.
Modified instance represents: ID of the BgEsti data set.
Initialization image.
First system matrix parameter. Default: 0.7
Second system matrix parameter. Default: 0.7
Gain type. Default: "fixed"
Kalman gain / foreground adaptation time. Default: 0.002
Kalman gain / background adaptation time. Default: 0.02
Threshold adaptation. Default: "on"
Foreground/background threshold. Default: 7.0
Number of statistic data sets. Default: 10
Confidence constant. Default: 3.25
Constant for decay time. Default: 15.0
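A background-estimation data set is typically created once and then updated in a loop over an image sequence. An HDevelop-style sketch using the default parameter values listed above (the image file names and sequence length are placeholder assumptions):

```
* Initialize the data set from the first image of the sequence.
read_image (InitImage, 'sequence_0')
create_bg_esti (InitImage, 0.7, 0.7, 'fixed', 0.002, 0.02, 'on', 7.0, 10, 3.25, 15.0, BgEstiHandle)
for I := 1 to NumImages - 1 by 1
    read_image (Image, 'sequence_' + I)
    * Estimate the background and return the moving foreground region.
    run_bg_esti (Image, ForegroundRegion, BgEstiHandle)
endfor
* Inspect the current background estimate, then release the data set.
give_bg_esti (BackgroundImage, BgEstiHandle)
close_bg_esti (BgEstiHandle)
```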
Represents an instance of a camera calibration model.
Restore a calibration data model from a file.
Modified instance represents: Handle of a calibration data model.
The path and file name of the model file.
Create a HALCON calibration data model.
Modified instance represents: Handle of the created calibration data model.
Type of the calibration setup. Default: "calibration_object"
Number of cameras in the calibration setup. Default: 1
Number of calibration objects. Default: 1
Serialize the object to a binary stream in HALCON format.
Deserialize the object from a binary stream in HALCON format.
Free the memory of a calibration data model.
Instance represents: Handle of a calibration data model.
Deserialize a serialized calibration data model.
Modified instance represents: Handle of a calibration data model.
Handle of the serialized item.
Serialize a calibration data model.
Instance represents: Handle of a calibration data model.
Handle of the serialized item.
Restore a calibration data model from a file.
Modified instance represents: Handle of a calibration data model.
The path and file name of the model file.
Store a calibration data model into a file.
Instance represents: Handle of a calibration data model.
The file name of the model to be saved.
Perform a hand-eye calibration.
Instance represents: Handle of a calibration data model.
Average residual error of the optimization.
Determine all camera parameters by a simultaneous minimization process.
Instance represents: Handle of a calibration data model.
Back projection root mean square error (RMSE) of the optimization.
Remove a data set from a calibration data model.
Instance represents: Handle of a calibration data model.
Type of the calibration data item. Default: "tool"
Index of the affected item. Default: 0
Remove a data set from a calibration data model.
Instance represents: Handle of a calibration data model.
Type of the calibration data item. Default: "tool"
Index of the affected item. Default: 0
Set data in a calibration data model.
Instance represents: Handle of a calibration data model.
Type of calibration data item. Default: "model"
Index of the affected item (depending on the selected ItemType). Default: "general"
Parameter(s) to set. Default: "reference_camera"
New value(s). Default: 0
Set data in a calibration data model.
Instance represents: Handle of a calibration data model.
Type of calibration data item. Default: "model"
Index of the affected item (depending on the selected ItemType). Default: "general"
Parameter(s) to set. Default: "reference_camera"
New value(s). Default: 0
Find the HALCON calibration plate and set the extracted points and contours in a calibration data model.
Instance represents: Handle of a calibration data model.
Input image.
Index of the observing camera. Default: 0
Index of the calibration object. Default: 0
Index of the observed calibration object. Default: 0
Names of the generic parameters to be set. Default: []
Values of the generic parameters to be set. Default: []
Remove observation data from a calibration data model.
Instance represents: Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Get contour-based observation data from a calibration data model.
Instance represents: Handle of a calibration data model.
Name of contour objects to be returned. Default: "marks"
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Contour-based result(s).
Get observed calibration object poses from a calibration data model.
Instance represents: Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Stored observed calibration object pose relative to the observing camera.
Set observed calibration object poses in a calibration data model.
Instance represents: Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the calibration object. Default: 0
Index of the observed calibration object. Default: 0
Pose of the observed calibration object relative to the observing camera.
Get point-based observation data from a calibration data model.
Instance represents: Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Row coordinates of the detected points.
Column coordinates of the detected points.
Correspondence of the detected points to the points of the observed calibration object.
Roughly estimated pose of the observed calibration object relative to the observing camera.
Set point-based observation data in a calibration data model.
Instance represents: Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the calibration object. Default: 0
Index of the observed calibration object. Default: 0
Row coordinates of the extracted points.
Column coordinates of the extracted points.
Correspondence of the extracted points to the calibration marks of the observed calibration object. Default: "all"
Roughly estimated pose of the observed calibration object relative to the observing camera.
Query information about the relations between cameras, calibration objects, and calibration object poses.
Instance represents: Handle of a calibration data model.
Kind of referred object. Default: "camera"
Camera index or calibration object index (depending on the selected ItemType). Default: 0
Calibration object numbers.
List of calibration object indices or list of camera indices (depending on ItemType).
Query data stored or computed in a calibration data model.
Instance represents: Handle of a calibration data model.
Type of calibration data item. Default: "camera"
Index of the affected item (depending on the selected ItemType). Default: 0
The name of the inspected data. Default: "params"
Requested data.
Query data stored or computed in a calibration data model.
Instance represents: Handle of a calibration data model.
Type of calibration data item. Default: "camera"
Index of the affected item (depending on the selected ItemType). Default: 0
The name of the inspected data. Default: "params"
Requested data.
Define a calibration object in a calibration model.
Instance represents: Handle of a calibration data model.
Calibration object index. Default: 0
3D point coordinates or a description file name.
Define a calibration object in a calibration model.
Instance represents: Handle of a calibration data model.
Calibration object index. Default: 0
3D point coordinates or a description file name.
Set type and initial parameters of a camera in a calibration data model.
Instance represents: Handle of a calibration data model.
Camera index. Default: 0
Type of the camera. Default: []
Initial camera internal parameters.
Set type and initial parameters of a camera in a calibration data model.
Instance represents: Handle of a calibration data model.
Camera index. Default: 0
Type of the camera. Default: []
Initial camera internal parameters.
Create a HALCON calibration data model.
Modified instance represents: Handle of the created calibration data model.
Type of the calibration setup. Default: "calibration_object"
Number of cameras in the calibration setup. Default: 1
Number of calibration objects. Default: 1
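The calibration-data operators above follow a create / fill / calibrate / query cycle. An HDevelop-style sketch for one camera observing one calibration object (StartCamParam, the description file name, the image names, and the number of calibration images are placeholder assumptions):

```
* One camera, one calibration object.
create_calib_data ('calibration_object', 1, 1, CalibDataID)
* Set the camera type and initial internal parameters.
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
* Describe the calibration object, e.g. via a description file.
set_calib_data_calib_object (CalibDataID, 0, 'calplate.cpd')
* Extract calibration marks and rough poses from each calibration image.
for I := 0 to NumImages - 1 by 1
    read_image (Image, 'calib_image_' + I)
    find_calib_object (Image, CalibDataID, 0, 0, I, [], [])
endfor
* Optimize all camera parameters simultaneously and query the result.
calibrate_cameras (CalibDataID, Error)
get_calib_data (CalibDataID, 'camera', 0, 'params', CameraParam)
```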
Represents an instance of a camera setup model.
Restore a camera setup model from a file.
Modified instance represents: Handle to the camera setup model.
The path and file name of the model file.
Create a model for a setup of calibrated cameras.
Modified instance represents: Handle to the camera setup model.
Number of cameras in the setup. Default: 2
Serialize the object to a binary stream in HALCON format.
Deserialize the object from a binary stream in HALCON format.
Create a HALCON stereo model.
Instance represents: Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Handle of the stereo model.
Create a HALCON stereo model.
Instance represents: Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Handle of the stereo model.
Free the memory of a calibration setup model.
Instance represents: Handle of the camera setup model.
Serialize a camera setup model.
Instance represents: Handle to the camera setup model.
Handle of the serialized item.
Deserialize a serialized camera setup model.
Modified instance represents: Handle to the camera setup model.
Handle of the serialized item.
Store a camera setup model into a file.
Instance represents: Handle to the camera setup model.
The file name of the model to be saved.
Restore a camera setup model from a file.
Modified instance represents: Handle to the camera setup model.
The path and file name of the model file.
Get generic camera setup model parameters.
Instance represents: Handle to the camera setup model.
Index of the camera in the setup. Default: 0
Names of the generic parameters to be queried.
Values of the generic parameters to be queried.
Get generic camera setup model parameters.
Instance represents: Handle to the camera setup model.
Index of the camera in the setup. Default: 0
Names of the generic parameters to be queried.
Values of the generic parameters to be queried.
Set generic camera setup model parameters.
Instance represents: Handle to the camera setup model.
Unique index of the camera in the setup. Default: 0
Names of the generic parameters to be set.
Values of the generic parameters to be set.
Set generic camera setup model parameters.
Instance represents: Handle to the camera setup model.
Unique index of the camera in the setup. Default: 0
Names of the generic parameters to be set.
Values of the generic parameters to be set.
Define type, parameters, and relative pose of a camera in a camera setup model.
Instance represents: Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Internal camera parameters.
Pose of the camera relative to the setup's coordinate system.
Define type, parameters, and relative pose of a camera in a camera setup model.
Instance represents: Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Internal camera parameters.
Pose of the camera relative to the setup's coordinate system.
Create a model for a setup of calibrated cameras.
Modified instance represents: Handle to the camera setup model.
Number of cameras in the setup. Default: 2
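A camera setup model is filled per camera and can then be persisted or used to create a stereo model, as the entries above indicate. A short HDevelop-style sketch (the camera parameters, poses, and file name are placeholders):

```
* Setup with two calibrated cameras.
create_camera_setup_model (2, CameraSetupModelID)
* Define type, internal parameters, and relative pose for each camera.
set_camera_setup_cam_param (CameraSetupModelID, 0, [], CamParam0, Pose0)
set_camera_setup_cam_param (CameraSetupModelID, 1, [], CamParam1, Pose1)
* Persist the model, or use it as the basis of a stereo model.
write_camera_setup_model (CameraSetupModelID, 'setup.csm')
create_stereo_model (CameraSetupModelID, 'surface_pairwise', [], [], StereoModelID)
```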
Represents internal camera parameters.
Provides access to the internally used tuple data.
Provides access to the value at the specified index.
Create an uninitialized instance.
Serialize the object to a binary stream in HALCON format.
Deserialize the object from a binary stream in HALCON format.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Instance represents: Internal camera parameters of the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Distance image.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Instance represents: Internal camera parameters of the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Distance image.
Compute the distance values for a rectified stereo image pair using multigrid methods.
Instance represents: Internal camera parameters of the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Distance image.
Compute the distance values for a rectified stereo image pair using multigrid methods.
Instance represents: Internal camera parameters of the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Distance image.
Compute the fundamental matrix from the relative orientation of two cameras.
Instance represents: Parameters of the first camera.
Relative orientation of the cameras (3D pose).
6x6 covariance matrix of relative pose. Default: []
Parameters of the second camera.
9x9 covariance matrix of the fundamental matrix.
Computed fundamental matrix.
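For reference, the relation behind this operator is the standard epipolar identity: with a relative pose expressed as X2 = R·X1 + t and camera matrices K1, K2, the fundamental matrix is F = K2^(-T) [t]× R K1^(-1), and corresponding pixels satisfy p2ᵀ F p1 = 0. A minimal numpy sketch of that relation (illustrative only, not the halcondotnet API; the pose convention and all names are assumptions):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_rel_pose(R, t, K1, K2):
    """F = K2^-T [t]_x R K1^-1 for the convention X2 = R @ X1 + t."""
    E = skew(t) @ R
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

# Synthetic check: project one 3D point into both cameras.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])          # pure horizontal baseline
X1 = np.array([0.2, -0.1, 2.0])        # point in camera-1 coordinates
X2 = R @ X1 + t
p1 = K @ X1; p1 /= p1[2]
p2 = K @ X2; p2 /= p2[2]
F = fundamental_from_rel_pose(R, t, K, K)
residual = p2 @ F @ p1                  # epipolar constraint, ~0
```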
Compute the relative orientation between two cameras given image point correspondences and known camera parameters and reconstruct 3D space points.
Instance represents: Camera parameters of the 1st camera.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera parameters of the 2nd camera.
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
6x6 covariance matrix of the relative camera orientation.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Computed relative orientation of the cameras (3D pose).
Compute the relative orientation between two cameras given image point correspondences and known camera parameters and reconstruct 3D space points.
Instance represents: Camera parameters of the 1st camera.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera parameters of the 2nd camera.
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
6x6 covariance matrix of the relative camera orientation.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Computed relative orientation of the cameras (3D pose).
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Instance represents: Parameters of the 1st camera.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
6x6 covariance matrix of the relative orientation.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed relative orientation of the cameras (3D pose).
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Instance represents: Parameters of the 1st camera.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
6x6 covariance matrix of the relative orientation.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed relative orientation of the cameras (3D pose).
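The DistanceThreshold parameter above rejects a candidate correspondence when the point in image 2 lies too far from the epipolar line induced by its partner in image 1. That point-to-line distance can be sketched as follows (hypothetical helper, homogeneous pixel coordinates; not the operator's implementation):

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (in pixels) of point p2 from the epipolar line F @ p1.
    p1 and p2 are homogeneous pixel coordinates (u, v, 1)."""
    l = F @ p1                           # epipolar line in image 2
    return abs(p2 @ l) / np.hypot(l[0], l[1])
```

During the RANSAC iterations, correspondences whose epipolar distance exceeds the threshold are treated as outliers.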
Compute the distance values for a rectified stereo image pair using correlation techniques.
Instance represents: Internal camera parameters of the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Distance image.
Compute the distance values for a rectified stereo image pair using correlation techniques.
Instance represents: Internal camera parameters of the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Distance image.
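The correlation variant slides a window along the epipolar line (an image row, since the pair is rectified) and keeps the disparity with the best match score. A brute-force numpy sketch of the "ncc" metric with winner-takes-all selection (illustrative only; the real operator additionally offers sub-pixel interpolation, image pyramids and downstream filters):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def disparity_ncc(img1, img2, win, min_disp, max_disp):
    """Winner-takes-all NCC disparity per pixel of a rectified pair
    (img2 is the right image, so matches are shifted left by d)."""
    h, w = img1.shape
    r = win // 2
    disp = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = img1[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = -2.0, 0
            for d in range(min_disp, max_disp + 1):
                xs = x - d
                if xs - r < 0 or xs + r >= w:
                    continue
                s = ncc(ref, img2[y - r:y + r + 1, xs - r:xs + r + 1])
                if s > best:
                    best, best_d = s, d
            disp[y, x] = best_d
    return disp
```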
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Instance represents: Internal parameters of the projective camera 1.
Internal parameters of the projective camera 2.
Point transformation from camera 2 to camera 1.
Row coordinate of a point in image 1.
Column coordinate of a point in image 1.
Row coordinate of the corresponding point in image 2.
Column coordinate of the corresponding point in image 2.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Distance of the 3D point to the lines of sight.
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Instance represents: Internal parameters of the projective camera 1.
Internal parameters of the projective camera 2.
Point transformation from camera 2 to camera 1.
Row coordinate of a point in image 1.
Column coordinate of a point in image 1.
Row coordinate of the corresponding point in image 2.
Column coordinate of the corresponding point in image 2.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Distance of the 3D point to the lines of sight.
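The Dist output measures how closely the two lines of sight pass each other (zero for an exact intersection). The underlying geometry, closest-point (midpoint) triangulation of two 3D lines, can be sketched as follows (illustrative numpy code, not the operator's implementation):

```python
import numpy as np

def intersect_rays(p1, d1, p2, d2):
    """Closest point between two 3D lines p_i + s_i * d_i (midpoint method).
    Returns the midpoint and the gap between the two lines."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for s1, s2 minimizing |(p1 + s1*d1) - (p2 + s2*d2)|^2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    s1, s2 = np.linalg.solve(A, b)
    q1 = p1 + s1 * d1
    q2 = p2 + s2 * d2
    return (q1 + q2) / 2, np.linalg.norm(q1 - q2)
```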
Transform a disparity image into 3D points in a rectified stereo system.
Instance represents: Internal camera parameters of the rectified camera 1.
Disparity image.
Y coordinates of the points in the rectified camera system 1.
Z coordinates of the points in the rectified camera system 1.
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
X coordinates of the points in the rectified camera system 1.
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Instance represents: Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Row coordinate of a point in the rectified image 1.
Column coordinate of a point in the rectified image 1.
Disparity of the images of the world point.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Instance represents: Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Row coordinate of a point in the rectified image 1.
Column coordinate of a point in the rectified image 1.
Disparity of the images of the world point.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
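For a rectified pair of identical cameras, the back-projection reduces to closed-form expressions: Z = f·b/d, X = (c − cx)·Z/f, Y = (r − cy)·Z/f, with focal length f in pixels, principal point (cx, cy) and baseline b. A sketch under those simplifying assumptions (HALCON's camera parameters are more general):

```python
def disparity_to_point3d(row, col, disp, f, cx, cy, b):
    """Back-project a rectified image point and its disparity to 3D.
    Assumes identical pinhole cameras: focal length f in pixels,
    principal point (cx, cy), baseline b."""
    Z = f * b / disp
    X = (col - cx) * Z / f
    Y = (row - cy) * Z / f
    return X, Y, Z
```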
Transform a disparity value into a distance value in a rectified binocular stereo system.
Instance represents: Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Disparity between the images of the world point.
Distance of a world point to the rectified camera system.
Transform a disparity value into a distance value in a rectified binocular stereo system.
Instance represents: Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Disparity between the images of the world point.
Distance of a world point to the rectified camera system.
Transform a distance value into a disparity in a rectified stereo system.
Instance represents: Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Distance of a world point to camera 1.
Disparity between the images of the point.
Transform a distance value into a disparity in a rectified stereo system.
Instance represents: Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Distance of a world point to camera 1.
Disparity between the images of the point.
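Both conversions are two sides of the same relation: for parallel rectified cameras with baseline b and focal length f in pixels, distance and disparity are linked by Z = f·b/d. A minimal sketch of the pair (simplified model; the operators also handle more general rectified geometries):

```python
def disparity_to_distance(d, f, b):
    """Z = f*b/d for a rectified pair with parallel optical axes."""
    return f * b / d

def distance_to_disparity(Z, f, b):
    """Inverse relation: d = f*b/Z."""
    return f * b / Z
```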
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common rectified image plane.
Instance represents: Internal parameters of camera 1.
Image containing the mapping data of camera 2.
Internal parameters of camera 2.
Point transformation from camera 2 to camera 1.
Subsampling factor. Default: 1.0
Type of rectification. Default: "viewing_direction"
Type of mapping. Default: "bilinear"
Rectified internal parameters of camera 1.
Rectified internal parameters of camera 2.
Point transformation from the rectified camera 1 to the original camera 1.
Point transformation from the rectified camera 2 to the original camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Image containing the mapping data of camera 1.
Determine all camera parameters of a binocular stereo system.
Instance represents: Initial values for the internal parameters of camera 1.
Ordered tuple with all X-coordinates of the calibration marks (in meters).
Ordered tuple with all Y-coordinates of the calibration marks (in meters).
Ordered tuple with all Z-coordinates of the calibration marks (in meters).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Initial values for the internal parameters of camera 2.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Camera parameters to be estimated. Default: "all"
Internal parameters of camera 2.
Ordered tuple with all poses of the calibration model in relation to camera 1.
Ordered tuple with all poses of the calibration model in relation to camera 2.
Pose of camera 2 in relation to camera 1.
Average error distances in pixels.
Internal parameters of camera 1.
Determine all camera parameters of a binocular stereo system.
Instance represents: Initial values for the internal parameters of camera 1.
Ordered tuple with all X-coordinates of the calibration marks (in meters).
Ordered tuple with all Y-coordinates of the calibration marks (in meters).
Ordered tuple with all Z-coordinates of the calibration marks (in meters).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Initial values for the internal parameters of camera 2.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Camera parameters to be estimated. Default: "all"
Internal parameters of camera 2.
Ordered tuple with all poses of the calibration model in relation to camera 1.
Ordered tuple with all poses of the calibration model in relation to camera 2.
Pose of camera 2 in relation to camera 1.
Average error distances in pixels.
Internal parameters of camera 1.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Instance represents: Camera parameter (inner orientation) obtained from camera calibration.
Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximal number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
3D pose of the object.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Instance represents: Camera parameter (inner orientation) obtained from camera calibration.
Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximal number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
3D pose of the object.
Create a descriptor model for calibrated perspective matching.
Instance represents: The parameters of the internal orientation of the camera.
Input image whose domain will be used to create the model.
The reference pose of the object in the reference image.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
The handle to the descriptor model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Instance represents: The parameters of the internal orientation of the camera.
Input contours that will be used to create the model.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Handle of the model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Instance represents: The parameters of the internal orientation of the camera.
Input contours that will be used to create the model.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Handle of the model.
Create a deformable model for calibrated perspective matching.
Instance represents: The parameters of the internal orientation of the camera.
Input image whose domain will be used to create the model.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Create a deformable model for calibrated perspective matching.
Instance represents: The parameters of the internal orientation of the camera.
Input image whose domain will be used to create the model.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Project the edges of a 3D shape model into image coordinates.
Instance represents: Internal camera parameters.
Handle of the 3D shape model.
3D pose of the 3D shape model in the world coordinate system.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Contour representation of the model view.
Project the edges of a 3D shape model into image coordinates.
Instance represents: Internal camera parameters.
Handle of the 3D shape model.
3D pose of the 3D shape model in the world coordinate system.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Contour representation of the model view.
Prepare a 3D object model for matching.
Instance represents: Internal camera parameters.
Handle of the 3D object model.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handle of the 3D shape model.
Prepare a 3D object model for matching.
Instance represents: Internal camera parameters.
Handle of the 3D object model.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handle of the 3D shape model.
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Instance represents: Internal camera parameters.
Region in the image plane.
Handle of the 3D object model.
3D pose of the world coordinate system in camera coordinates.
Handle of the reduced 3D object model.
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Instance represents: Internal camera parameters.
Region in the image plane.
Handle of the 3D object model.
3D pose of the world coordinate system in camera coordinates.
Handle of the reduced 3D object model.
Render 3D object models to get an image.
Instance represents: Camera parameters of the scene.
Handles of the 3D object models.
3D poses of the objects.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Rendered scene.
Render 3D object models to get an image.
Instance represents: Camera parameters of the scene.
Handles of the 3D object models.
3D poses of the objects.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Rendered scene.
Display 3D object models.
Instance represents: Camera parameters of the scene.
Window handle.
Handles of the 3D object models.
3D poses of the objects. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Display 3D object models.
Instance represents: Camera parameters of the scene.
Window handle.
Handles of the 3D object models.
3D poses of the objects. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Transform 3D points from a 3D object model to images.
Instance represents: Camera parameters.
Image with the Y-Coordinates of the 3D points.
Image with the Z-Coordinates of the 3D points.
Handle of the 3D object model.
Type of the conversion. Default: "cartesian"
Pose of the 3D object model.
Image with the X-Coordinates of the 3D points.
Project a 3D object model into image coordinates.
Instance represents: Internal camera parameters.
Handle of the 3D object model.
3D pose of the world coordinate system in camera coordinates.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Projected model contours.
Project a 3D object model into image coordinates.
Instance represents: Internal camera parameters.
Handle of the 3D object model.
3D pose of the world coordinate system in camera coordinates.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Projected model contours.
Add a camera to a 3D scene.
Instance represents: Parameters of the new camera.
Handle of the 3D scene.
Index of the new camera in the 3D scene.
Compute the calibrated scene flow between two stereo image pairs.
Instance represents: Internal camera parameters of the rectified camera 1.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Handle of the 3D object model.
Compute the calibrated scene flow between two stereo image pairs.
Instance represents: Internal camera parameters of the rectified camera 1.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Handle of the 3D object model.
Compute an absolute pose out of point correspondences between world and image coordinates.
Instance represents: The inner camera parameters from camera calibration.
X-Component of world coordinates.
Y-Component of world coordinates.
Z-Component of world coordinates.
Row-Component of image coordinates.
Column-Component of image coordinates.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose quality.
Pose.
Compute an absolute pose out of point correspondences between world and image coordinates.
Instance represents: The inner camera parameters from camera calibration.
X-Component of world coordinates.
Y-Component of world coordinates.
Z-Component of world coordinates.
Row-Component of image coordinates.
Column-Component of image coordinates.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose quality.
Pose.
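When QualityType is "error", the quality measure is conceptually a reprojection error: project the world points with the estimated pose and compare against the observed image points. A simplified distortion-free pinhole sketch of that measure (illustrative numpy code with assumed names, not the operator's implementation):

```python
import numpy as np

def reprojection_rms(Xw, rows_cols, R, t, f, cx, cy):
    """RMS pixel distance between observed image points (row, col) and the
    projection of world points Xw (Nx3) under pose (R, t), pinhole camera
    with focal length f in pixels and principal point (cx, cy)."""
    Xc = (R @ Xw.T).T + t                # world -> camera coordinates
    u = f * Xc[:, 0] / Xc[:, 2] + cx     # projected column
    v = f * Xc[:, 1] / Xc[:, 2] + cy     # projected row
    err = np.column_stack([v, u]) - rows_cols
    return np.sqrt((err ** 2).sum(axis=1).mean())
```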
Calibrate the radial distortion.
Modified instance represents: Internal camera parameters.
Contours that are available for the calibration.
Width of the images from which the contours were extracted. Default: 640
Height of the images from which the contours were extracted. Default: 480
Threshold for the classification of outliers. Default: 0.05
Seed value for the random number generator. Default: 42
Determines the distortion model. Default: "division"
Determines how the distortion center will be estimated. Default: "variable"
Controls the deviation of the distortion center from the image center; larger values allow larger deviations from the image center; 0 switches the penalty term off. Default: 0.0
Contours that were used for the calibration.
Compute a camera matrix from internal camera parameters.
Instance represents: Internal camera parameters.
Width of the images that correspond to CameraMatrix.
Height of the images that correspond to CameraMatrix.
3x3 projective camera matrix that corresponds to CameraParam.
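For an area-scan camera without distortion (Kappa = 0), the 3x3 camera matrix assembles the internal parameters as K = [[f/sx, 0, cx], [0, f/sy, cy], [0, 0, 1]], with f the focal length in meters and sx, sy the pixel pitch in meters per pixel. A sketch of that assembly (simplified model, illustrative names):

```python
import numpy as np

def camera_matrix(f, sx, sy, cx, cy):
    """3x3 pinhole camera matrix from focal length f (meters), pixel sizes
    sx, sy (meters/pixel) and principal point (cx, cy) in pixels.
    Assumes zero radial distortion (kappa = 0)."""
    return np.array([[f / sx, 0.0, cx],
                     [0.0, f / sy, cy],
                     [0.0, 0.0, 1.0]])
```

Projecting a camera-coordinate point (X, Y, Z) is then the homogeneous product K·(X, Y, Z) followed by division by the third component.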
Compute the internal camera parameters from a camera matrix.
Modified instance represents: Internal camera parameters.
3x3 projective camera matrix that determines the internal camera parameters.
Kappa.
Width of the images that correspond to CameraMatrix.
Height of the images that correspond to CameraMatrix.
Determine the 3D pose of a rectangle from its perspective 2D projection.
Instance represents: Internal camera parameters.
Contour(s) to be examined.
Width of the rectangle in meters.
Height of the rectangle in meters.
Weighting mode for the optimization phase. Default: "nonweighted"
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 3.0 for 'tukey'). Default: 2.0
Covariances of the pose values.
Root-mean-square value of the final residual error.
3D pose of the rectangle.
Determine the 3D pose of a rectangle from its perspective 2D projection.
Instance represents: Internal camera parameters.
Contour(s) to be examined.
Width of the rectangle in meters.
Height of the rectangle in meters.
Weighting mode for the optimization phase. Default: "nonweighted"
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 3.0 for 'tukey'). Default: 2.0
Covariances of the pose values.
Root-mean-square value of the final residual error.
3D pose of the rectangle.
Determine the 3D pose of a circle from its perspective 2D projection.
Instance represents: Internal camera parameters.
Contours to be examined.
Radius of the circle in object space.
Type of output parameters. Default: "pose"
3D pose of the second circle.
3D pose of the first circle.
Determine the 3D pose of a circle from its perspective 2D projection.
Instance represents: Internal camera parameters.
Contours to be examined.
Radius of the circle in object space.
Type of output parameters. Default: "pose"
3D pose of the second circle.
3D pose of the first circle.
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
Instance represents: Old camera parameters.
New camera parameters.
Type of the mapping. Default: "bilinear"
Image containing the mapping data.
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Instance represents: Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Image containing the mapping data.
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Instance represents: Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Image containing the mapping data.
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Instance represents: Internal camera parameters.
Input image.
3D pose of the world coordinate system in camera coordinates.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transformed image.
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Instance represents: Internal camera parameters.
Input image.
3D pose of the world coordinate system in camera coordinates.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transformed image.
Transform image points into the plane z=0 of a world coordinate system.
Instance represents: Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Row coordinates of the points to be transformed. Default: 100.0
Column coordinates of the points to be transformed. Default: 100.0
Scale or dimension. Default: "m"
X coordinates of the points in the world coordinate system.
Y coordinates of the points in the world coordinate system.
Transform image points into the plane z=0 of a world coordinate system.
Instance represents: Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Row coordinates of the points to be transformed. Default: 100.0
Column coordinates of the points to be transformed. Default: 100.0
Scale or dimension. Default: "m"
X coordinates of the points in the world coordinate system.
Y coordinates of the points in the world coordinate system.
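The geometry of this transformation can be sketched as intersecting the pixel's line of sight with the world plane z=0. A minimal sketch, assuming an ideal distortion-free camera and a pose (R, t) that maps world into camera coordinates; all names are illustrative, not the halcondotnet API:

```python
# Illustrative math only -- not the halcondotnet API.
def image_point_to_world_plane(f_px, cx, cy, R, t, row, col):
    """Intersect the line of sight of pixel (row, col) with the world
    plane z = 0 and return world (x, y). f_px is the focal length in
    pixels and (cx, cy) the principal point."""
    # Direction of the line of sight in camera coordinates.
    d_c = [(col - cx) / f_px, (row - cy) / f_px, 1.0]
    # Move the ray into world coordinates: x_w = R^T (x_c - t).
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]   # R transposed
    center = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    d_w = [sum(Rt[i][j] * d_c[j] for j in range(3)) for i in range(3)]
    lam = -center[2] / d_w[2]      # ray parameter where it hits z = 0
    return center[0] + lam * d_w[0], center[1] + lam * d_w[1]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Camera 2 m above the plane looking straight down: the principal
# point projects to the world origin.
x, y = image_point_to_world_plane(1000.0, 320.0, 240.0, I3,
                                  [0.0, 0.0, 2.0], 240.0, 320.0)
```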
Perform a hand-eye calibration.
Instance represents: Internal camera parameters.
Linear list containing all the x coordinates of the calibration points (in the order of the images).
Linear list containing all the y coordinates of the calibration points (in the order of the images).
Linear list containing all the z coordinates of the calibration points (in the order of the images).
Linear list containing all row coordinates of the calibration points (in the order of the images).
Linear list containing all the column coordinates of the calibration points (in the order of the images).
Number of the calibration points for each image.
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary camera: robot tool in robot base coordinates).
Method of hand-eye calibration. Default: "nonlinear"
Type of quality assessment. Default: "error_pose"
Computed 3D pose of the calibration points in robot base coordinates (moving camera) or in robot tool coordinates (stationary camera), respectively.
Quality assessment of the result.
Computed relative camera pose: 3D pose of the robot tool (moving camera) or robot base (stationary camera), respectively, in camera coordinates.
Perform a hand-eye calibration.
Instance represents: Internal camera parameters.
Linear list containing all the x coordinates of the calibration points (in the order of the images).
Linear list containing all the y coordinates of the calibration points (in the order of the images).
Linear list containing all the z coordinates of the calibration points (in the order of the images).
Linear list containing all row coordinates of the calibration points (in the order of the images).
Linear list containing all the column coordinates of the calibration points (in the order of the images).
Number of the calibration points for each image.
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary camera: robot tool in robot base coordinates).
Method of hand-eye calibration. Default: "nonlinear"
Type of quality assessment. Default: "error_pose"
Computed 3D pose of the calibration points in robot base coordinates (moving camera) or in robot tool coordinates (stationary camera), respectively.
Quality assessment of the result.
Computed relative camera pose: 3D pose of the robot tool (moving camera) or robot base (stationary camera), respectively, in camera coordinates.
Change the radial distortion of contours.
Instance represents: Internal camera parameter for Contours.
Original contours.
Internal camera parameter for ContoursRectified.
Resulting contours with modified radial distortion.
Change the radial distortion of pixel coordinates.
Instance represents: The inner camera parameters of the camera used to create the input pixel coordinates.
Original row component of pixel coordinates.
Original column component of pixel coordinates.
The inner camera parameters of a camera.
Row component of pixel coordinates after changing the radial distortion.
Column component of pixel coordinates after changing the radial distortion.
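Changing the radial distortion amounts to undistorting with the old distortion coefficient and re-distorting with the new one. The sketch below uses the division model in metric image-plane coordinates; it illustrates the math only, with made-up parameter values, and is not the halcondotnet API:

```python
import math

# Illustrative sketch of a division distortion model -- not the
# halcondotnet API. Coordinates are metric image-plane coordinates.

def undistort(u_t, v_t, kappa):
    """Map distorted plane coordinates to undistorted ones."""
    s = 1.0 + kappa * (u_t * u_t + v_t * v_t)
    return u_t / s, v_t / s

def distort(u, v, kappa):
    """Inverse mapping: undistorted to distorted plane coordinates."""
    r2 = u * u + v * v
    s = 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * kappa * r2))
    return u * s, v * s

# Round trip with an illustrative kappa recovers the input point.
u, v = undistort(0.004, 0.003, kappa=-800.0)
u2, v2 = distort(u, v, kappa=-800.0)
```

Changing the distortion from `kappa_old` to `kappa_new` would chain `undistort(..., kappa_old)` with `distort(..., kappa_new)`.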
Change the radial distortion of an image.
Instance represents: Internal camera parameter for Image.
Original image.
Region of interest in ImageRectified.
Internal camera parameter for Image.
Resulting image with modified radial distortion.
Determine new camera parameters in accordance with the specified radial distortion.
Instance represents: Internal camera parameters (original).
Mode. Default: "adaptive"
Desired radial distortions. Default: 0.0
Internal camera parameters (modified).
Determine new camera parameters in accordance with the specified radial distortion.
Instance represents: Internal camera parameters (original).
Mode. Default: "adaptive"
Desired radial distortions. Default: 0.0
Internal camera parameters (modified).
Compute the line of sight corresponding to a point in the image.
Instance represents: Internal camera parameters.
Row coordinate of the pixel.
Column coordinate of the pixel.
X coordinate of the first point on the line of sight in the camera coordinate system.
Y coordinate of the first point on the line of sight in the camera coordinate system.
Z coordinate of the first point on the line of sight in the camera coordinate system.
X coordinate of the second point on the line of sight in the camera coordinate system.
Y coordinate of the second point on the line of sight in the camera coordinate system.
Z coordinate of the second point on the line of sight in the camera coordinate system.
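For an ideal pinhole camera, two convenient points on the line of sight are the projection center and the point where the ray pierces the image plane. A hedged sketch of that geometry (illustrative names and parameters, not the halcondotnet API, and ignoring lens distortion):

```python
# Illustrative pinhole sketch (no distortion) -- not the halcondotnet API.
def line_of_sight(f, sx, sy, cx, cy, row, col):
    """Return two points on the line of sight of pixel (row, col) in
    camera coordinates: the projection center and the point where the
    ray crosses the image plane at z = f."""
    p1 = (0.0, 0.0, 0.0)                          # projection center
    p2 = ((col - cx) * sx, (row - cy) * sy, f)    # point on image plane
    return p1, p2

# The principal point's line of sight is the optical axis.
p1, p2 = line_of_sight(0.008, 8e-6, 8e-6, 320.0, 240.0, 240.0, 320.0)
```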
Project 3D points into (sub-)pixel image coordinates.
Instance represents: Internal camera parameters.
X coordinates of the 3D points to be projected in the camera coordinate system.
Y coordinates of the 3D points to be projected in the camera coordinate system.
Z coordinates of the 3D points to be projected in the camera coordinate system.
Row coordinates of the projected points (in pixels).
Column coordinates of the projected points (in pixels).
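The projection itself reduces, for a distortion-free camera, to the pinhole equations u = f·x/z, v = f·y/z followed by the conversion to pixel coordinates. A minimal sketch under that assumption (not the halcondotnet API):

```python
# Illustrative pinhole projection -- not the halcondotnet API.
def project_point(f, sx, sy, cx, cy, x, y, z):
    """Project a 3D point given in camera coordinates to (row, col)
    pixel coordinates with an ideal pinhole camera."""
    u = f * x / z                      # image-plane x in meters
    v = f * y / z                      # image-plane y in meters
    return v / sy + cy, u / sx + cx    # (row, col) in pixels

# A point 10 cm right and 5 cm below the optical axis at 1 m distance.
row, col = project_point(0.008, 8e-6, 8e-6, 320.0, 240.0, 0.1, 0.05, 1.0)
```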
Convert internal camera parameters and a 3D pose into a 3x4 projection matrix.
Instance represents: Internal camera parameters.
3D pose.
3x4 projection matrix.
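For a distortion-free camera this conversion is the classic composition P = K·[R | t], where K is the 3x3 camera matrix and (R, t) the pose mapping world into camera coordinates. A hedged sketch of that composition (illustrative names, not the halcondotnet API):

```python
# Illustrative math only -- not the halcondotnet API.
def projection_matrix(K, R, t):
    """Compose the 3x4 projection matrix P = K * [R | t] from a 3x3
    camera matrix and a pose that maps world into camera coordinates."""
    Rt = [R[i] + [t[i]] for i in range(3)]        # 3x4 block [R | t]
    return [[sum(K[i][k] * Rt[k][j] for k in range(3))
             for j in range(4)] for i in range(3)]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# With identity K and R, P is just [I | t].
P = projection_matrix(I3, I3, [1.0, 2.0, 3.0])
```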
Deserialize the serialized internal camera parameters.
Modified instance represents: Internal camera parameters.
Handle of the serialized item.
Serialize the internal camera parameters.
Instance represents: Internal camera parameters.
Handle of the serialized item.
Read internal camera parameters from a file.
Modified instance represents: Internal camera parameters.
File name of internal camera parameters. Default: "campar.dat"
Write internal camera parameters into a file.
Instance represents: Internal camera parameters.
File name of internal camera parameters. Default: "campar.dat"
Simulate an image with calibration plate.
Instance represents: Internal camera parameters.
File name of the calibration plate description. Default: "calplate_320mm.cpd"
External camera parameters (3D pose of the calibration plate in camera coordinates).
Gray value of image background. Default: 128
Gray value of calibration plate. Default: 80
Gray value of calibration marks. Default: 224
Scaling factor to reduce oversampling. Default: 1.0
Simulated calibration image.
Project and visualize the 3D model of the calibration plate in the image.
Instance represents: Internal camera parameters.
Window in which the calibration plate should be visualized.
File name of the calibration plate description. Default: "calplate_320.cpd"
External camera parameters (3D pose of the calibration plate in camera coordinates).
Scaling factor for the visualization. Default: 1.0
Determine all camera parameters by a simultaneous minimization process.
Instance represents: Initial values for the internal camera parameters.
Ordered tuple with all x coordinates of the calibration marks (in meters).
Ordered tuple with all y coordinates of the calibration marks (in meters).
Ordered tuple with all z coordinates of the calibration marks (in meters).
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all initial values for the external camera parameters.
Camera parameters to be estimated. Default: "all"
Ordered tuple with all external camera parameters.
Average error distance in pixels.
Internal camera parameters.
Determine all camera parameters by a simultaneous minimization process.
Instance represents: Initial values for the internal camera parameters.
Ordered tuple with all x coordinates of the calibration marks (in meters).
Ordered tuple with all y coordinates of the calibration marks (in meters).
Ordered tuple with all z coordinates of the calibration marks (in meters).
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all initial values for the external camera parameters.
Camera parameters to be estimated. Default: "all"
Ordered tuple with all external camera parameters.
Average error distance in pixels.
Internal camera parameters.
Extract rectangularly arranged 2D calibration marks from the image and calculate initial values for the external camera parameters.
Instance represents: Initial values for the internal camera parameters.
Input image.
Region of the calibration plate.
File name of the calibration plate description. Default: "caltab_100.descr"
Initial threshold value for contour detection. Default: 128
Loop value for successive reduction of StartThresh. Default: 10
Minimum threshold for contour detection. Default: 18
Filter parameter for contour detection, see edges_image. Default: 0.9
Minimum length of the contours of the marks. Default: 15.0
Maximum expected diameter of the marks. Default: 100.0
Tuple with column coordinates of the detected marks.
Estimation for the external camera parameters.
Tuple with row coordinates of the detected marks.
Define type, parameters, and relative pose of a camera in a camera setup model.
Instance represents: Internal camera parameters.
Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Pose of the camera relative to the setup's coordinate system.
Define type, parameters, and relative pose of a camera in a camera setup model.
Instance represents: Internal camera parameters.
Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Pose of the camera relative to the setup's coordinate system.
Set type and initial parameters of a camera in a calibration data model.
Instance represents: Initial camera internal parameters.
Handle of a calibration data model.
Camera index. Default: 0
Type of the camera. Default: []
Set type and initial parameters of a camera in a calibration data model.
Instance represents: Initial camera internal parameters.
Handle of a calibration data model.
Camera index. Default: 0
Type of the camera. Default: []
Represents an instance of a classifier.
Create a new classifier.
Modified instance represents: Handle of the classifier.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Train a classifier using a multi-channel image.
Instance represents: Handle of the classifier.
Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Multi-channel training image.
Classify pixels using hyper-cuboids.
Instance represents: Handle of the classifier.
Multi-channel input image.
Classification result.
Deserialize a serialized classifier.
Instance represents: Handle of the classifier.
Handle of the serialized item.
Serialize a classifier.
Instance represents: Handle of the classifier.
Handle of the serialized item.
Save a classifier in a file.
Instance represents: Handle of the classifier.
Name of the file which contains the written data.
Set system parameters for classification.
Instance represents: Handle of the classifier.
Name of the wanted parameter. Default: "split_error"
Value of the parameter. Default: 0.1
Set system parameters for classification.
Instance represents: Handle of the classifier.
Name of the wanted parameter. Default: "split_error"
Value of the parameter. Default: 0.1
Read a classifier from a file.
Instance represents: Handle of the classifier.
Filename of the classifier.
Train the classifier with one data set.
Instance represents: Handle of the classifier.
Number of the data set to train.
Name of the protocol file. Default: "training_prot"
Number of arrays of attributes to learn. Default: 500
Classification error for termination. Default: 0.05
Error during the assignment. Default: 100
Train the classifier.
Instance represents: Handle of the classifier.
Array of attributes to learn. Default: [1.0,1.5,2.0]
Class to which the array has to be assigned. Default: 1
Get information about the current parameter.
Instance represents: Handle of the classifier.
Name of the system parameter. Default: "split_error"
Value of the system parameter.
Destroy the classifier.
Instance represents: Handle of the classifier.
Create a new classifier.
Modified instance represents: Handle of the classifier.
Describe the classes of a box classifier.
Instance represents: Handle of the classifier.
Highest dimension for output. Default: 3
Indices of the boxes.
Lower bounds of the boxes (for each dimension).
Higher bounds of the boxes (for each dimension).
Number of training samples that were used to define this box (for each dimension).
Number of training samples that were assigned incorrectly to the box.
Indices of the classes.
Describe the classes of a box classifier.
Instance represents: Handle of the classifier.
Highest dimension for output. Default: 3
Indices of the boxes.
Lower bounds of the boxes (for each dimension).
Higher bounds of the boxes (for each dimension).
Number of training samples that were used to define this box (for each dimension).
Number of training samples that were assigned incorrectly to the box.
Indices of the classes.
Classify a set of arrays.
Instance represents: Handle of the classifier.
Key of the test data.
Error during the assignment.
Classify a tuple of attributes with rejection class.
Instance represents: Handle of the classifier.
Array of attributes which has to be classified. Default: 1.0
Number of the class to which the array of attributes has been assigned, or -1 for the rejection class.
Classify a tuple of attributes.
Instance represents: Handle of the classifier.
Array of attributes which has to be classified. Default: 1.0
Number of the class to which the array of attributes has been assigned.
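The hyper-cuboid idea behind the box classifier can be sketched as a containment test against axis-aligned boxes, with a rejection class when no box contains the feature vector. The box bounds below are made up for illustration; nothing here is the halcondotnet API or a trained classifier state:

```python
# Hedged sketch of hyper-cuboid (box) classification -- illustrative
# bounds, not the halcondotnet API.
def classify_box(features, boxes):
    """Return the class id of the first axis-aligned box that contains
    the feature vector, or -1 for the rejection class."""
    for class_id, low, high in boxes:
        if all(lo <= f <= hi for f, lo, hi in zip(features, low, high)):
            return class_id
    return -1

# Two toy 2-D boxes.
boxes = [(0, [0.0, 0.0], [1.0, 1.0]),
         (1, [1.5, 1.5], [2.5, 2.5])]
label = classify_box([0.5, 0.5], boxes)
```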
Represents an instance of a Gaussian mixture model.
Read a Gaussian Mixture Model from a file.
Modified instance represents: GMM handle.
File name.
Create a Gaussian Mixture Model for classification.
Modified instance represents: GMM handle.
Number of dimensions of the feature space. Default: 3
Number of classes of the GMM. Default: 5
Number of centers per class. Default: 1
Type of the covariance matrices. Default: "spherical"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the GMM with random values. Default: 42
Create a Gaussian Mixture Model for classification.
Modified instance represents: GMM handle.
Number of dimensions of the feature space. Default: 3
Number of classes of the GMM. Default: 5
Number of centers per class. Default: 1
Type of the covariance matrices. Default: "spherical"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the GMM with random values. Default: 42
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Classify an image with a Gaussian Mixture Model.
Instance represents: GMM handle.
Input image.
Threshold for the rejection of the classification. Default: 0.5
Segmented classes.
Add training samples from an image to the training data of a Gaussian Mixture Model.
Instance represents: GMM handle.
Training image.
Regions of the classes to be trained.
Standard deviation of the Gaussian noise added to the training data. Default: 0.0
Get the training data of a Gaussian Mixture Model (GMM).
Instance represents: Handle of a GMM that contains training data.
Handle of the training data of the classifier.
Add training data to a Gaussian Mixture Model (GMM).
Instance represents: Handle of a GMM which receives the training data.
Handle of training data for a classifier.
Selects an optimal combination from a set of features to classify the provided data.
Modified instance represents: A trained GMM classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the classifier. Default: []
Values of generic parameters to configure the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, contains indices or names.
Selects an optimal combination from a set of features to classify the provided data.
Modified instance represents: A trained GMM classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the classifier. Default: []
Values of generic parameters to configure the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, contains indices or names.
Create a look-up table using a Gaussian mixture model to classify byte images.
Instance represents: GMM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Clear a Gaussian Mixture Model.
GMM handle.
Clear a Gaussian Mixture Model.
Instance represents: GMM handle.
Clear the training data of a Gaussian Mixture Model.
GMM handle.
Clear the training data of a Gaussian Mixture Model.
Instance represents: GMM handle.
Deserialize a serialized Gaussian Mixture Model.
Modified instance represents: GMM handle.
Handle of the serialized item.
Serialize a Gaussian Mixture Model (GMM).
Instance represents: GMM handle.
Handle of the serialized item.
Read a Gaussian Mixture Model from a file.
Modified instance represents: GMM handle.
File name.
Write a Gaussian Mixture Model to a file.
Instance represents: GMM handle.
File name.
Read the training data of a Gaussian Mixture Model from a file.
Instance represents: GMM handle.
File name.
Write the training data of a Gaussian Mixture Model to a file.
Instance represents: GMM handle.
File name.
Calculate the class of a feature vector by a Gaussian Mixture Model.
Instance represents: GMM handle.
Feature vector.
Number of best classes to determine. Default: 1
A-posteriori probability of the classes.
Probability density of the feature vector.
Normalized k-sigma-probability for the feature vector.
Result of classifying the feature vector with the GMM.
Evaluate a feature vector by a Gaussian Mixture Model.
Instance represents: GMM handle.
Feature vector.
Probability density of the feature vector.
Normalized k-sigma-probability for the feature vector.
A-posteriori probability of the classes.
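The quantities returned here can be sketched for a toy one-dimensional mixture: each class contributes a weighted Gaussian density, the densities sum to the overall probability density, and normalizing them yields the a-posteriori class probabilities. All parameters below are made up for illustration; this is not a trained HALCON GMM or the halcondotnet API:

```python
import math

# Toy 1-D Gaussian mixture, one center per class -- illustrative only.
def evaluate_gmm(x, classes):
    """Return (per-class weighted densities, total density,
    a-posteriori class probabilities) for feature value x."""
    dens = [prior * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi))
            for prior, mu, sigma in classes]
    total = sum(dens)
    return dens, total, [d / total for d in dens]

# Two classes: (prior, mean, standard deviation).
classes = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]
_, density, posterior = evaluate_gmm(1.0, classes)
# Classification picks the class with the highest posterior.
best_class = max(range(len(posterior)), key=posterior.__getitem__)
```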
Train a Gaussian Mixture Model.
Instance represents: GMM handle.
Maximum number of iterations of the expectation maximization algorithm. Default: 100
Threshold for relative change of the error for the expectation maximization algorithm to terminate. Default: 0.001
Mode to determine the a-priori probabilities of the classes. Default: "training"
Regularization value for preventing covariance matrix singularity. Default: 0.0001
Number of executed iterations per class.
Number of found centers per class.
Compute the information content of the preprocessed feature vectors of a GMM.
Instance represents: GMM handle.
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
Instance represents: GMM handle.
Number of stored training samples.
Return a training sample from the training data of a Gaussian Mixture Model (GMM).
Instance represents: GMM handle.
Index of the stored training sample.
Class of the training sample.
Feature vector of the training sample.
Add a training sample to the training data of a Gaussian Mixture Model.
Instance represents: GMM handle.
Feature vector of the training sample to be stored.
Class of the training sample to be stored.
Standard deviation of the Gaussian noise added to the training data. Default: 0.0
Return the parameters of a Gaussian Mixture Model.
Instance represents: GMM handle.
Number of classes of the GMM.
Minimum number of centers per GMM class.
Maximum number of centers per GMM class.
Type of the covariance matrices.
Number of dimensions of the feature space.
Create a Gaussian Mixture Model for classification.
Modified instance represents: GMM handle.
Number of dimensions of the feature space. Default: 3
Number of classes of the GMM. Default: 5
Number of centers per class. Default: 1
Type of the covariance matrices. Default: "spherical"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the GMM with random values. Default: 42
Create a Gaussian Mixture Model for classification.
Modified instance represents: GMM handle.
Number of dimensions of the feature space. Default: 3
Number of classes of the GMM. Default: 5
Number of centers per class. Default: 1
Type of the covariance matrices. Default: "spherical"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the GMM with random values. Default: 42
Represents an instance of a k-NearestNeighbor classifier.
Read the k-NN classifier from a file.
Modified instance represents: Handle of the k-NN classifier.
File name of the classifier.
Create a k-nearest neighbors (k-NN) classifier.
Modified instance represents: Handle of the k-NN classifier.
Number of dimensions of the feature. Default: 10
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Classify an image with a k-Nearest-Neighbor classifier.
Instance represents: Handle of the k-NN classifier.
Input image.
Distance of the pixel's nearest neighbor.
Threshold for the rejection of the classification. Default: 0.5
Segmented classes.
Add training samples from an image to the training data of a k-Nearest-Neighbor classifier.
Instance represents: Handle of the k-NN classifier.
Training image.
Regions of the classes to be trained.
Get the training data of a k-nearest neighbors (k-NN) classifier.
Instance represents: Handle of the k-NN classifier that contains training data.
Handle of the training data of the classifier.
Add training data to a k-nearest neighbors (k-NN) classifier.
Instance represents: Handle of a k-NN which receives the training data.
Training data for a classifier.
Selects an optimal subset from a set of features to solve a certain classification problem.
Modified instance represents: A trained k-NN classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, contains indices or names.
Selects an optimal subset from a set of features to solve a certain classification problem.
Modified instance represents: A trained k-NN classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, contains indices or names.
Clear a k-NN classifier.
Instance represents: Handle of the k-NN classifier.
Return the number of training samples stored in the training data of a k-nearest neighbors (k-NN) classifier.
Instance represents: Handle of the k-NN classifier.
Number of stored training samples.
Return a training sample from the training data of a k-nearest neighbors (k-NN) classifier.
Instance represents: Handle of the k-NN classifier.
Index of the training sample.
Class of the training sample.
Feature vector of the training sample.
Deserialize a serialized k-NN classifier.
Modified instance represents: Handle of the k-NN classifier.
Handle of the serialized item.
Serialize a k-NN classifier.
Instance represents: Handle of the k-NN classifier.
Handle of the serialized item.
Read the k-NN classifier from a file.
Modified instance represents: Handle of the k-NN classifier.
File name of the classifier.
Save the k-NN classifier in a file.
Instance represents: Handle of the k-NN classifier.
Name of the file in which the classifier will be written.
Get parameters of a k-NN classification.
Instance represents: Handle of the k-NN classifier.
Names of the parameters that can be read from the k-NN classifier. Default: ["method","k"]
Values of the selected parameters.
Set parameters for k-NN classification.
Instance represents: Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the k-NN classifier. Default: ["method","k","max_num_classes"]
Values of the generic parameters that can be adjusted for the k-NN classifier. Default: ["classes_distance",5,1]
Search for the next neighbors for a given feature vector.
Instance represents: Handle of the k-NN classifier.
Features that should be classified.
A rating for the results. This value contains either a distance, a frequency, or a weighted frequency.
The classification result, either class IDs or sample indices.
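The core of a nearest-neighbor search can be sketched as a brute-force scan that sorts the training samples by distance to the query and keeps the k closest. This is only the underlying idea (real implementations use search trees, as the next operator's description notes); the names and data are illustrative, not the halcondotnet API:

```python
# Brute-force k-NN sketch -- illustrative only, not the halcondotnet API.
def find_neighbors(query, samples, k):
    """Return (squared distance, class id) of the k training samples
    nearest to the query feature vector."""
    dists = sorted(
        (sum((q - s) ** 2 for q, s in zip(query, vec)), cls)
        for vec, cls in samples)
    return dists[:k]

# Three toy 2-D training samples with their class ids.
samples = [([0.0, 0.0], 0), ([1.0, 1.0], 1), ([5.0, 5.0], 2)]
nearest = find_neighbors([0.9, 0.9], samples, k=2)
```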
Creates the search trees for a k-NN classifier.
Instance represents: Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Values of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Add a sample to a k-nearest neighbors (k-NN) classifier.
Instance represents: Handle of the k-NN classifier.
List of features to add.
Class IDs of the features.
Add a sample to a k-nearest neighbors (k-NN) classifier.
Instance represents: Handle of the k-NN classifier.
List of features to add.
Class IDs of the features.
Create a k-nearest neighbors (k-NN) classifier.
Modified instance represents: Handle of the k-NN classifier.
Number of dimensions of the feature. Default: 10
Create a look-up table using a k-nearest neighbors classifier (k-NN) to classify byte images.
Instance represents: Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Represents an instance of a classification lookup table
Create a look-up table using a k-nearest neighbors classifier (k-NN) to classify byte images.
Modified instance represents: Handle of the LUT classifier.
Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Create a look-up table using a gaussian mixture model to classify byte images.
Modified instance represents: Handle of the LUT classifier.
GMM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Create a look-up table using a Support-Vector-Machine to classify byte images.
Modified instance represents: Handle of the LUT classifier.
SVM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Create a look-up table using a multi-layer perceptron to classify byte images.
Modified instance represents: Handle of the LUT classifier.
MLP handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Classify a byte image using a look-up table.
Instance represents: Handle of the LUT classifier.
Input image.
Segmented classes.
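The look-up-table idea behind this classification can be sketched in Python: evaluate the (expensive) classifier once for every possible pixel value, then classify images by indexing. The toy brightness classifier and the 4-bit depth below are stand-ins, not part of the HALCON interface:

```python
def build_class_lut(classify, bits=4):
    """Precompute a class for every possible (r, g, b) pixel value.
    classify -- any pixel classifier (in HALCON: a trained k-NN, GMM,
    SVM or MLP); bits -- bit depth per channel, 8 for byte images,
    kept at 4 here so the pure-Python sketch stays fast."""
    n = 1 << bits
    return [classify((r, g, b))
            for r in range(n) for g in range(n) for b in range(n)]

def classify_image_lut(lut, pixels, bits=4):
    """Classify pixels by table lookup instead of re-running the
    classifier on each pixel."""
    n = 1 << bits
    return [lut[(r * n + g) * n + b] for r, g, b in pixels]

# toy classifier: 'bright' (1) vs 'dark' (0) by summed intensity
lut = build_class_lut(lambda p: int(sum(p) > 24), bits=4)
classes = classify_image_lut(lut, [(1, 2, 3), (15, 15, 15)], bits=4)
# classes == [0, 1]
```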
Clear a look-up table classifier.
Handle of the LUT classifier.
Clear a look-up table classifier.
Instance represents: Handle of the LUT classifier.
Create a look-up table using a k-nearest neighbors classifier (k-NN) to classify byte images.
Modified instance represents: Handle of the LUT classifier.
Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Create a look-up table using a gaussian mixture model to classify byte images.
Modified instance represents: Handle of the LUT classifier.
GMM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Create a look-up table using a Support-Vector-Machine to classify byte images.
Modified instance represents: Handle of the LUT classifier.
SVM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Create a look-up table using a multi-layer perceptron to classify byte images.
Modified instance represents: Handle of the LUT classifier.
MLP handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Represents an instance of a multilayer perceptron.
Read a multilayer perceptron from a file.
Modified instance represents: MLP handle.
File name.
Create a multilayer perceptron for classification or regression.
Modified instance represents: MLP handle.
Number of input variables (features) of the MLP. Default: 20
Number of hidden units of the MLP. Default: 10
Number of output variables (classes) of the MLP. Default: 5
Type of the activation function in the output layer of the MLP. Default: "softmax"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
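A forward pass through such an MLP can be sketched as follows. The random weights merely stand in for trained values; tanh hidden units are per the operator reference, while the rest is a generic single-hidden-layer sketch, not the halcondotnet API:

```python
import math, random

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a single-hidden-layer MLP with
    OutputFunction = 'softmax': tanh hidden units, then a softmax
    that turns the output activations into class probabilities."""
    hidden = [math.tanh(sum(w * v for w, v in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    act = [sum(w * h for w, h in zip(row, hidden)) + b
           for row, b in zip(w_out, b_out)]
    m = max(act)                         # subtract max for numeric stability
    e = [math.exp(a - m) for a in act]
    s = sum(e)
    return [v / s for v in e]

random.seed(42)                          # mirrors the RandSeed parameter
n_in, n_hidden, n_out = 3, 4, 2         # NumInput, NumHidden, NumOutput
w_h = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_o = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
probs = mlp_forward([0.2, -0.5, 1.0], w_h, [0.0] * n_hidden, w_o, [0.0] * n_out)
# probs sums to 1; training replaces the random weights
```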
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Classify an image with a multilayer perceptron.
Instance represents: MLP handle.
Input image.
Threshold for the rejection of the classification. Default: 0.5
Segmented classes.
Add training samples from an image to the training data of a multilayer perceptron.
Instance represents: MLP handle.
Training image.
Regions of the classes to be trained.
Get the training data of a multilayer perceptron (MLP).
Instance represents: Handle of an MLP that contains training data.
Handle of the training data of the classifier.
Add training data to a multilayer perceptron (MLP).
Instance represents: MLP handle which receives the training data.
Training data for a classifier.
Selects an optimal combination of features to classify the provided data.
Modified instance represents: A trained MLP classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, given as indices referring to the input features.
Selects an optimal combination of features to classify the provided data.
Modified instance represents: A trained MLP classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, given as indices referring to the input features.
Create a look-up table using a multi-layer perceptron to classify byte images.
Instance represents: MLP handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Clear a multilayer perceptron.
MLP handle.
Clear a multilayer perceptron.
Instance represents: MLP handle.
Clear the training data of a multilayer perceptron.
MLP handle.
Clear the training data of a multilayer perceptron.
Instance represents: MLP handle.
Deserialize a serialized multilayer perceptron.
Modified instance represents: MLP handle.
Handle of the serialized item.
Serialize a multilayer perceptron (MLP).
Instance represents: MLP handle.
Handle of the serialized item.
Read a multilayer perceptron from a file.
Modified instance represents: MLP handle.
File name.
Write a multilayer perceptron to a file.
Instance represents: MLP handle.
File name.
Read the training data of a multilayer perceptron from a file.
Instance represents: MLP handle.
File name.
Write the training data of a multilayer perceptron to a file.
Instance represents: MLP handle.
File name.
Calculate the class of a feature vector by a multilayer perceptron.
Instance represents: MLP handle.
Feature vector.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the feature vector.
Result of classifying the feature vector with the MLP.
Calculate the class of a feature vector by a multilayer perceptron.
Instance represents: MLP handle.
Feature vector.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the feature vector.
Result of classifying the feature vector with the MLP.
Calculate the evaluation of a feature vector by a multilayer perceptron.
Instance represents: MLP handle.
Feature vector.
Result of evaluating the feature vector with the MLP.
Train a multilayer perceptron.
Instance represents: MLP handle.
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Mean error of the MLP on the training data.
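The three stopping criteria (MaxIterations, WeightTolerance, ErrorTolerance) interact as in the generic optimization loop below, shown on a toy 1-D error function; this is a sketch of the loop structure, not HALCON's actual optimizer:

```python
def train_sketch(error, grad, w, max_iterations=200,
                 weight_tolerance=1e-3, error_tolerance=1e-4, lr=0.1):
    """Optimization loop with the same three stopping criteria as the
    training call: an iteration limit, the change of the weights
    between iterations, and the change of the mean error between
    iterations.  Returns the final weight and the per-iteration error
    log (analogous to the ErrorLog / Error outputs)."""
    errors = [error(w)]
    for _ in range(max_iterations):
        w_next = w - lr * grad(w)                    # one descent step
        errors.append(error(w_next))
        converged = (abs(w_next - w) < weight_tolerance
                     or abs(errors[-2] - errors[-1]) < error_tolerance)
        w = w_next
        if converged:
            break
    return w, errors

# toy 1-D 'training': minimize (w - 3)^2 starting from w = 0
w, log = train_sketch(lambda w: (w - 3.0) ** 2,
                      lambda w: 2.0 * (w - 3.0), w=0.0)
```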
Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
Instance represents: MLP handle.
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
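For Preprocessing = 'principal_components', a standard way to obtain such outputs is from the eigenvalue spectrum of the feature covariance matrix; that HALCON computes it exactly this way is an assumption, but the shape of the two outputs matches:

```python
def information_content(eigenvalues):
    """Relative and cumulative information content of the transformed
    features, computed from covariance eigenvalues (one per principal
    component): each component's share of the total variance, plus the
    running sum of those shares."""
    total = sum(eigenvalues)
    rel = [ev / total for ev in eigenvalues]
    cum, running = [], 0.0
    for r in rel:
        running += r
        cum.append(running)
    return cum, rel

cum, rel = information_content([4.0, 3.0, 2.0, 1.0])
# rel == [0.4, 0.3, 0.2, 0.1]; cum rises monotonically to 1.0
```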
Return the number of training samples stored in the training data of a multilayer perceptron.
Instance represents: MLP handle.
Number of stored training samples.
Return a training sample from the training data of a multilayer perceptron.
Instance represents: MLP handle.
Number of the stored training sample.
Target vector of the training sample.
Feature vector of the training sample.
Get the parameters of a rejection class.
Instance represents: MLP handle.
Names of the generic parameters to return. Default: "sampling_strategy"
Values of the generic parameters.
Get the parameters of a rejection class.
Instance represents: MLP handle.
Names of the generic parameters to return. Default: "sampling_strategy"
Values of the generic parameters.
Set the parameters of a rejection class.
Instance represents: MLP handle.
Names of the generic parameters. Default: "sampling_strategy"
Values of the generic parameters. Default: "hyperbox_around_all_classes"
Set the parameters of a rejection class.
Instance represents: MLP handle.
Names of the generic parameters. Default: "sampling_strategy"
Values of the generic parameters. Default: "hyperbox_around_all_classes"
Add a training sample to the training data of a multilayer perceptron.
Instance represents: MLP handle.
Feature vector of the training sample to be stored.
Class or target vector of the training sample to be stored.
Add a training sample to the training data of a multilayer perceptron.
Instance represents: MLP handle.
Feature vector of the training sample to be stored.
Class or target vector of the training sample to be stored.
Return the regularization parameters of a multilayer perceptron.
Instance represents: MLP handle.
Name of the regularization parameter to return. Default: "weight_prior"
Value of the regularization parameter.
Set the regularization parameters of a multilayer perceptron.
Instance represents: MLP handle.
Name of the regularization parameter to set. Default: "weight_prior"
Value of the regularization parameter. Default: 1.0
Set the regularization parameters of a multilayer perceptron.
Instance represents: MLP handle.
Name of the regularization parameter to set. Default: "weight_prior"
Value of the regularization parameter. Default: 1.0
Return the parameters of a multilayer perceptron.
Instance represents: MLP handle.
Number of hidden units of the MLP.
Number of output variables (classes) of the MLP.
Type of the activation function in the output layer of the MLP.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features.
Number of input variables (features) of the MLP.
Create a multilayer perceptron for classification or regression.
Modified instance represents: MLP handle.
Number of input variables (features) of the MLP. Default: 20
Number of hidden units of the MLP. Default: 10
Number of output variables (classes) of the MLP. Default: 5
Type of the activation function in the output layer of the MLP. Default: "softmax"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
Represents an instance of a support vector machine.
Read a support vector machine from a file.
Modified instance represents: SVM handle.
File name.
Create a support vector machine for pattern classification.
Modified instance represents: SVM handle.
Number of input variables (features) of the SVM. Default: 10
The kernel type. Default: "rbf"
Additional parameter for the kernel function; for the RBF kernel this is the value of gamma. Default: 0.02
Regularization constant of the SVM. Default: 0.05
Number of classes. Default: 5
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
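The 'rbf' kernel named above has a simple closed form; this sketch shows what KernelParam (gamma) controls:

```python
import math

def rbf_kernel(x, y, gamma=0.02):
    """RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    KernelParam supplies gamma; a larger gamma makes the kernel
    respond only to very close feature vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

same = rbf_kernel([1.0, 2.0], [1.0, 2.0])    # identical vectors give 1.0
far = rbf_kernel([0.0, 0.0], [10.0, 10.0])   # similarity decays with distance
```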
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Classify an image with a support vector machine.
Instance represents: SVM handle.
Input image.
Segmented classes.
Add training samples from an image to the training data of a support vector machine.
Instance represents: SVM handle.
Training image.
Regions of the classes to be trained.
Get the training data of a support vector machine (SVM).
Instance represents: Handle of an SVM that contains training data.
Handle of the training data of the classifier.
Add training data to a support vector machine (SVM).
Instance represents: Handle of an SVM which receives the training data.
Training data for a classifier.
Selects an optimal combination of features to classify the provided data.
Modified instance represents: A trained SVM classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, given as indices.
Selects an optimal combination of features to classify the provided data.
Modified instance represents: A trained SVM classifier using only the selected features.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The achieved score using two-fold cross-validation.
The selected feature set, given as indices.
Create a look-up table using a Support-Vector-Machine to classify byte images.
Instance represents: SVM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Clear a support vector machine.
SVM handle.
Clear a support vector machine.
Instance represents: SVM handle.
Clear the training data of a support vector machine.
SVM handle.
Clear the training data of a support vector machine.
Instance represents: SVM handle.
Deserialize a serialized support vector machine (SVM).
Modified instance represents: SVM handle.
Handle of the serialized item.
Serialize a support vector machine (SVM).
Instance represents: SVM handle.
Handle of the serialized item.
Read a support vector machine from a file.
Modified instance represents: SVM handle.
File name.
Write a support vector machine to a file.
Instance represents: SVM handle.
File name.
Read the training data of a support vector machine from a file.
Instance represents: SVM handle.
File name.
Write the training data of a support vector machine to a file.
Instance represents: SVM handle.
File name.
Evaluate a feature vector by a support vector machine.
Instance represents: SVM handle.
Feature vector.
Result of evaluating the feature vector with the SVM.
Classify a feature vector by a support vector machine.
Instance represents: SVM handle.
Feature vector.
Number of best classes to determine. Default: 1
Result of classifying the feature vector with the SVM.
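For a two-class SVM, the evaluation behind this classification is a kernel expansion over the stored support vectors; 'one-versus-one' multi-class mode votes over such pairwise decisions. The support vectors and coefficients below are made-up placeholders, not output of the halcondotnet API:

```python
import math

def svm_decision(x, support_vectors, coeffs, bias, gamma=0.02):
    """Two-class SVM evaluation: f(x) = sum_i c_i * k(sv_i, x) + b,
    where c_i = alpha_i * y_i for support vector i; the sign of f
    gives the class."""
    def k(a, b):
        return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))
    return sum(c * k(sv, x) for sv, c in zip(support_vectors, coeffs)) + bias

svs = [(0.0, 0.0), (4.0, 4.0)]       # made-up support vectors
coeffs = [-1.0, 1.0]                 # made-up alpha_i * y_i values
cls = 1 if svm_decision((3.5, 3.9), svs, coeffs, bias=0.0, gamma=0.5) > 0 else 0
# cls == 1: the point lies near the positive support vector
```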
Approximate a trained support vector machine by a reduced support vector machine for faster classification.
Instance represents: Original SVM handle.
Type of postprocessing to reduce number of SV. Default: "bottom_up"
Minimum number of remaining SVs. Default: 2
Maximum allowed error of reduction. Default: 0.001
SVMHandle of reduced SVM.
Train a support vector machine.
Instance represents: SVM handle.
Stop parameter for training. Default: 0.001
Mode of training. For normal operation: 'default'. If SVs already included in the SVM should be used for training: 'add_sv_to_train_set'. For alpha seeding: the respective SVM handle. Default: "default"
Train a support vector machine.
Instance represents: SVM handle.
Stop parameter for training. Default: 0.001
Mode of training. For normal operation: 'default'. If SVs already included in the SVM should be used for training: 'add_sv_to_train_set'. For alpha seeding: the respective SVM handle. Default: "default"
Compute the information content of the preprocessed feature vectors of a support vector machine.
Instance represents: SVM handle.
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
Return the number of support vectors of a support vector machine.
Instance represents: SVM handle.
Number of SV of each sub-SVM.
Total number of support vectors.
Return the index of a support vector from a trained support vector machine.
Instance represents: SVM handle.
Index of the stored support vector.
Index of the support vector in the training set.
Return the number of training samples stored in the training data of a support vector machine.
Instance represents: SVM handle.
Number of stored training samples.
Return a training sample from the training data of a support vector machine.
Instance represents: SVM handle.
Number of the stored training sample.
Target vector of the training sample.
Feature vector of the training sample.
Add a training sample to the training data of a support vector machine.
Instance represents: SVM handle.
Feature vector of the training sample to be stored.
Class of the training sample to be stored.
Add a training sample to the training data of a support vector machine.
Instance represents: SVM handle.
Feature vector of the training sample to be stored.
Class of the training sample to be stored.
Return the parameters of a support vector machine.
Instance represents: SVM handle.
The kernel type.
Additional parameter for the kernel.
Regularization constant of the SVM.
Number of classes of the test data.
The mode of the SVM.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').
Number of input variables (features) of the SVM.
Create a support vector machine for pattern classification.
Modified instance represents: SVM handle.
Number of input variables (features) of the SVM. Default: 10
The kernel type. Default: "rbf"
Additional parameter for the kernel function; for the RBF kernel this is the value of gamma. Default: 0.02
Regularization constant of the SVM. Default: 0.05
Number of classes. Default: 5
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Represents an instance of a training data management class.
Read the training data for classifiers from a file.
Modified instance represents: Handle of the training data.
File name of the training data.
Create a handle for training data for classifiers.
Modified instance represents: Handle of the training data.
Number of dimensions of the feature vector. Default: 10
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Deserialize serialized training data for classifiers.
Modified instance represents: Handle of the training data.
Handle of the serialized item.
Serialize training data for classifiers.
Instance represents: Handle of the training data.
Handle of the serialized item.
Read the training data for classifiers from a file.
Modified instance represents: Handle of the training data.
File name of the training data.
Save the training data for classifiers in a file.
Instance represents: Handle of the training data.
Name of the file in which the training data will be written.
Select certain features from training data to create training data containing less features.
Instance represents: Handle of the training data.
Indices or names to select the subfeatures or columns.
Handle of the reduced training data.
Define subfeatures in training data.
Instance represents: Handle of the training data that should be partitioned into subfeatures.
Length of the subfeatures.
Names of the subfeatures.
Get the training data of a Gaussian Mixture Model (GMM).
Modified instance represents: Handle of the training data of the classifier.
Handle of a GMM that contains training data.
Add training data to a Gaussian Mixture Model (GMM).
Instance represents: Handle of training data for a classifier.
Handle of a GMM which receives the training data.
Get the training data of a multilayer perceptron (MLP).
Modified instance represents: Handle of the training data of the classifier.
Handle of an MLP that contains training data.
Add training data to a multilayer perceptron (MLP).
Instance represents: Training data for a classifier.
MLP handle which receives the training data.
Get the training data of a k-nearest neighbors (k-NN) classifier.
Modified instance represents: Handle of the training data of the classifier.
Handle of the k-NN classifier that contains training data.
Add training data to a k-nearest neighbors (k-NN) classifier.
Instance represents: Training data for a classifier.
Handle of a k-NN which receives the training data.
Get the training data of a support vector machine (SVM).
Modified instance represents: Handle of the training data of the classifier.
Handle of an SVM that contains training data.
Add training data to a support vector machine (SVM).
Instance represents: Training data for a classifier.
Handle of an SVM which receives the training data.
Return the number of training samples stored in the training data.
Instance represents: Handle of training data.
Number of stored training samples.
Return a training sample from training data.
Instance represents: Handle of training data for a classifier.
Number of the stored training sample.
Class of the training sample.
Feature vector of the training sample.
Clears training data for classifiers.
Instance represents: Handle of training data for a classifier.
Add a training sample to training data.
Instance represents: Handle of the training data.
The order of the feature vector. Default: "row"
Feature vector of the training sample.
Class of the training sample.
Create a handle for training data for classifiers.
Modified instance represents: Handle of the training data.
Number of dimensions of the feature vector. Default: 10
Selects an optimal combination of features to classify the provided data.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The selected feature set, given as indices referring to the input features.
The achieved score using two-fold cross-validation.
A trained MLP classifier using only the selected features.
Selects an optimal combination of features to classify the provided data.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The selected feature set, given as indices referring to the input features.
The achieved score using two-fold cross-validation.
A trained MLP classifier using only the selected features.
Selects an optimal combination of features to classify the provided data.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The selected feature set, given as indices.
The achieved score using two-fold cross-validation.
A trained SVM classifier using only the selected features.
Selects an optimal combination of features to classify the provided data.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The selected feature set, given as indices.
The achieved score using two-fold cross-validation.
A trained SVM classifier using only the selected features.
Selects an optimal combination from a set of features to classify the provided data.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the classifier. Default: []
Values of generic parameters to configure the classifier. Default: []
The selected feature set, given as indices or names.
The achieved score using two-fold cross-validation.
A trained GMM classifier using only the selected features.
Selects an optimal combination from a set of features to classify the provided data.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the classifier. Default: []
Values of generic parameters to configure the classifier. Default: []
The selected feature set, given as indices or names.
The achieved score using two-fold cross-validation.
A trained GMM classifier using only the selected features.
Selects an optimal subset from a set of features to solve a certain classification problem.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The selected feature set, given as indices or names.
The achieved score using two-fold cross-validation.
A trained k-NN classifier using only the selected features.
Selects an optimal subset from a set of features to solve a certain classification problem.
Instance represents: Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
The selected feature set, given as indices or names.
The achieved score using two-fold cross-validation.
A trained k-NN classifier using only the selected features.
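All of the select-feature-set variants above share the same outer loop when Method is 'greedy'. In the sketch below, the `score` callback stands in for training the respective classifier and evaluating it with two-fold cross-validation; the toy per-feature values are made up:

```python
def greedy_select(features, score):
    """Greedy forward selection: repeatedly add the single feature
    that most improves the score, and stop when no addition helps."""
    selected, best = [], score([])
    remaining = list(features)
    while remaining:
        top, f = max((score(selected + [f]), f) for f in remaining)
        if top <= best:
            break                       # no remaining feature improves the score
        selected.append(f)
        remaining.remove(f)
        best = top
    return selected, best

# toy score: features 'a' and 'c' are informative, 'b' is noise
value = {"a": 0.4, "b": 0.0, "c": 0.3}
selected, score_ = greedy_select(["a", "b", "c"],
                                 lambda s: sum(value[f] for f in s))
# selected == ['a', 'c']; 'b' never improves the score
```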
Represents an instance of a color space transformation lookup table.
Creates the look-up-table for transformation of an image from the RGB color space to an arbitrary color space.
Modified instance represents: Handle of the look-up-table for color space transformation.
Color space of the output image. Default: "hsv"
Direction of color space transformation. Default: "from_rgb"
Number of bits of the input image. Default: 8
Release the look-up-table needed for color space transformation.
Instance represents: Handle of the look-up-table for the color space transformation.
Color space transformation using pre-generated look-up-table.
Instance represents: Handle of the look-up-table for the color space transformation.
Input image (channel 1).
Input image (channel 2).
Input image (channel 3).
Color-transformed output image (channel 2).
Color-transformed output image (channel 3).
Color-transformed output image (channel 1).
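The create/apply pair above can be sketched with Python's colorsys standing in for the transformation ('hsv', 'from_rgb'); HALCON's internal table layout is an assumption here:

```python
import colorsys

def create_color_trans_lut(bits=8):
    """Build the RGB -> HSV look-up-table once.  NumBits = 8 gives
    256**3 entries; the sketch uses a plain list of tuples."""
    n = 1 << bits
    scale = n - 1
    return [colorsys.rgb_to_hsv(r / scale, g / scale, b / scale)
            for r in range(n) for g in range(n) for b in range(n)]

def apply_color_trans_lut(lut, pixels, bits=8):
    """Transform pixels by lookup: one index computation per pixel
    instead of one full color space conversion per pixel."""
    n = 1 << bits
    return [lut[(r * n + g) * n + b] for r, g, b in pixels]

lut = create_color_trans_lut(bits=4)         # small table for the demo
h, s, v = apply_color_trans_lut(lut, [(15, 0, 0)], bits=4)[0]
# pure red: hue 0.0, full saturation and value
```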
Creates the look-up-table for transformation of an image from the RGB color space to an arbitrary color space.
Modified instance represents: Handle of the look-up-table for color space transformation.
Color space of the output image. Default: "hsv"
Direction of color space transformation. Default: "from_rgb"
Number of bits of the input image. Default: 8
Represents an instance of a model for the component-based matching.
Read a component model from a file.
Modified instance represents: Handle of the component model.
File name.
Prepare a component model for matching based on explicitly specified components and relations.
Modified instance represents: Handle of the component model.
Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on explicitly specified components and relations.
Modified instance represents: Handle of the component model.
Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on trained components.
Modified instance represents: Handle of the component model.
Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on trained components.
Modified instance represents: Handle of the component model.
Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Return the components of a found instance of a component model.
Instance represents: Handle of the component model.
Start index of each found instance of the component model in the tuples describing the component matches.
End index of each found instance of the component model in the tuples describing the component matches.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Index of the found instance of the component model to be returned.
Mark the orientation of the components. Default: "false"
Row coordinate of all components of the selected model instance.
Column coordinate of all components of the selected model instance.
Rotation angle of all components of the selected model instance.
Score of all components of the selected model instance.
Found components of the selected component model instance.
Return the components of a found instance of a component model.
Instance represents: Handle of the component model.
Start index of each found instance of the component model in the tuples describing the component matches.
End index of each found instance of the component model in the tuples describing the component matches.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Index of the found instance of the component model to be returned.
Mark the orientation of the components. Default: "false"
Row coordinate of all components of the selected model instance.
Column coordinate of all components of the selected model instance.
Rotation angle of all components of the selected model instance.
Score of all components of the selected model instance.
Found components of the selected component model instance.
Find the best matches of a component model in an image.
Instance represents: Handle of the component model.
Input image in which the component model should be found.
Index of the root component.
Smallest rotation of the root component. Default: -0.39
Extent of the rotation of the root component. Default: 0.79
Minimum score of the instances of the component model to be found. Default: 0.5
Number of instances of the component model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the component models to be found. Default: 0.5
Behavior if the root component is missing. Default: "stop_search"
Behavior if a component is missing. Default: "prune_branch"
Pose prediction of components that are not found. Default: "none"
Minimum score of the instances of the components to be found. Default: 0.5
Subpixel accuracy of the component poses if not equal to 'none'. Default: "least_squares"
Number of pyramid levels for the components used in the matching (and lowest pyramid level to use if $|NumLevelsComp| = 2$). Default: 0
"Greediness" of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
End index of each found instance of the component model in the tuples describing the component matches.
Score of the found instances of the component model.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Start index of each found instance of the component model in the tuples describing the component matches.
Find the best matches of a component model in an image.
Instance represents: Handle of the component model.
Input image in which the component model should be found.
Index of the root component.
Smallest rotation of the root component. Default: -0.39
Extent of the rotation of the root component. Default: 0.79
Minimum score of the instances of the component model to be found. Default: 0.5
Number of instances of the component model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the component models to be found. Default: 0.5
Behavior if the root component is missing. Default: "stop_search"
Behavior if a component is missing. Default: "prune_branch"
Pose prediction of components that are not found. Default: "none"
Minimum score of the instances of the components to be found. Default: 0.5
Subpixel accuracy of the component poses if not equal to 'none'. Default: "least_squares"
Number of pyramid levels for the components used in the matching (and lowest pyramid level to use if $|NumLevelsComp| = 2$). Default: 0
"Greediness" of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
End index of each found instance of the component model in the tuples describing the component matches.
Score of the found instances of the component model.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Start index of each found instance of the component model in the tuples describing the component matches.
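The matching flow described above (create a component model from explicitly specified components, search for it, free it) can be sketched as follows. This is a hedged illustration only: the HOperatorSet overloads and their exact parameter order are assumptions inferred from the parameter lists in this reference, the thresholding step is a placeholder for real component segmentation, and "model.png"/"search.png" are placeholder file names.

```csharp
// Hedged sketch: create_component_model -> find_component_model -> clear_component_model.
// Parameter order is assumed from the parameter lists above; verify against your HALCON version.
using HalconDotNet;

class ComponentMatchingExample
{
    static void Main()
    {
        HOperatorSet.ReadImage(out HObject modelImage, "model.png");          // placeholder image
        // Placeholder segmentation: one connected region per rigid component.
        HOperatorSet.Threshold(modelImage, out HObject bright, 128, 255);
        HOperatorSet.Connection(bright, out HObject componentRegions);

        // Explicitly specified components; variations of +/-20 px and +/-0.1 rad (assumed values).
        HOperatorSet.CreateComponentModel(modelImage, componentRegions,
            20, 20, 0.1,                          // VariationRow, VariationColumn, VariationAngle
            -0.39, 0.79,                          // AngleStart, AngleExtent
            "auto", "auto", "auto", "auto", 0.5,  // contrast/size/score defaults from above
            "auto", "auto", "auto",               // NumLevelsComp, AngleStepComp, OptimizationComp
            "use_polarity", "false",
            out HTuple modelID, out HTuple rootRanking);

        HOperatorSet.ReadImage(out HObject searchImage, "search.png");        // placeholder image
        HOperatorSet.FindComponentModel(searchImage, modelID,
            rootRanking.TupleSelect(0),           // best-ranked root component
            -0.39, 0.79, 0.5, 1, 0.5,
            "stop_search", "prune_branch", "none",
            0.5, "least_squares", 0, 0.9,
            out HTuple modelStart, out HTuple modelEnd, out HTuple score,
            out HTuple rowComp, out HTuple columnComp, out HTuple angleComp,
            out HTuple scoreComp, out HTuple modelComp);

        // modelStart/modelEnd index the per-component tuples for each found model instance.
        HOperatorSet.ClearComponentModel(modelID);
    }
}
```

Each found instance i of the component model occupies the index range modelStart[i]..modelEnd[i] in the per-component output tuples (rowComp, columnComp, angleComp, scoreComp, modelComp).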
Free the memory of a component model.
Instance represents: Handle of the component model.
Return the search tree of a component model.
Instance represents: Handle of the component model.
Relations of components that are connected in the search tree.
Index of the root component.
Image for which the tree is to be returned. Default: "model_image"
Component index of the start node of an arc in the search tree.
Component index of the end node of an arc in the search tree.
Row coordinate of the center of the rectangle representing the relation.
Column coordinate of the center of the rectangle representing the relation.
Orientation of the rectangle representing the relation (radians).
First radius (half length) of the rectangle representing the relation.
Second radius (half width) of the rectangle representing the relation.
Smallest relative orientation angle.
Extent of the relative orientation angle.
Search tree.
Return the search tree of a component model.
Instance represents: Handle of the component model.
Relations of components that are connected in the search tree.
Index of the root component.
Image for which the tree is to be returned. Default: "model_image"
Component index of the start node of an arc in the search tree.
Component index of the end node of an arc in the search tree.
Row coordinate of the center of the rectangle representing the relation.
Column coordinate of the center of the rectangle representing the relation.
Orientation of the rectangle representing the relation (radians).
First radius (half length) of the rectangle representing the relation.
Second radius (half width) of the rectangle representing the relation.
Smallest relative orientation angle.
Extent of the relative orientation angle.
Search tree.
Return the parameters of a component model.
Instance represents: Handle of the component model.
Ranking of the model components expressing their suitability to act as root component.
Handles of the shape models of the individual model components.
Minimum score of the instances of the components to be found.
Return the parameters of a component model.
Instance represents: Handle of the component model.
Ranking of the model components expressing their suitability to act as root component.
Handles of the shape models of the individual model components.
Minimum score of the instances of the components to be found.
Deserialize a serialized component model.
Modified instance represents: Handle of the component model.
Handle of the serialized item.
Serialize a component model.
Instance represents: Handle of the component model.
Handle of the serialized item.
Read a component model from a file.
Modified instance represents: Handle of the component model.
File name.
Write a component model to a file.
Instance represents: Handle of the component model.
File name.
Prepare a component model for matching based on explicitly specified components and relations.
Modified instance represents: Handle of the component model.
Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on explicitly specified components and relations.
Modified instance represents: Handle of the component model.
Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on trained components.
Modified instance represents: Handle of the component model.
Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on trained components.
Modified instance represents: Handle of the component model.
Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Represents an instance of a training result for the component-based matching.
Train components and relations for the component-based matching.
Modified instance represents: Handle of the training result.
Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Contour regions of rigid model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Train components and relations for the component-based matching.
Modified instance represents: Handle of the training result.
Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Contour regions of rigid model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Prepare a component model for matching based on trained components.
Instance represents: Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Handle of the component model.
Prepare a component model for matching based on trained components.
Instance represents: Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Handle of the component model.
Free the memory of a component training result.
Instance represents: Handle of the training result.
Return the relations between the model components that are contained in a training result.
Instance represents: Handle of the training result.
Index of reference component.
Image for which the component relations are to be returned. Default: "model_image"
Row coordinate of the center of the rectangle representing the relation.
Column coordinate of the center of the rectangle representing the relation.
Orientation of the rectangle representing the relation (radians).
First radius (half length) of the rectangle representing the relation.
Second radius (half width) of the rectangle representing the relation.
Smallest relative orientation angle.
Extent of the relative orientation angles.
Region representation of the relations.
Return the relations between the model components that are contained in a training result.
Instance represents: Handle of the training result.
Index of reference component.
Image for which the component relations are to be returned. Default: "model_image"
Row coordinate of the center of the rectangle representing the relation.
Column coordinate of the center of the rectangle representing the relation.
Orientation of the rectangle representing the relation (radians).
First radius (half length) of the rectangle representing the relation.
Second radius (half width) of the rectangle representing the relation.
Smallest relative orientation angle.
Extent of the relative orientation angles.
Region representation of the relations.
Return the initial or model components in a certain image.
Instance represents: Handle of the training result.
Type of returned components or index of an initial component. Default: "model_components"
Image for which the components are to be returned. Default: "model_image"
Mark the orientation of the components. Default: "false"
Row coordinate of the found instances of all initial components or model components.
Column coordinate of the found instances of all initial components or model components.
Rotation angle of the found instances of all components.
Score of the found instances of all components.
Contour regions of the initial components or of the model components.
Return the initial or model components in a certain image.
Instance represents: Handle of the training result.
Type of returned components or index of an initial component. Default: "model_components"
Image for which the components are to be returned. Default: "model_image"
Mark the orientation of the components. Default: "false"
Row coordinate of the found instances of all initial components or model components.
Column coordinate of the found instances of all initial components or model components.
Rotation angle of the found instances of all components.
Score of the found instances of all components.
Contour regions of the initial components or of the model components.
Modify the relations within a training result.
Instance represents: Handle of the training result.
Model component(s) relative to which the movement(s) should be modified. Default: "all"
Model component(s) of which the relative movement(s) should be modified. Default: "all"
Change of the position relation in pixels.
Change of the orientation relation in radians.
Modify the relations within a training result.
Instance represents: Handle of the training result.
Model component(s) relative to which the movement(s) should be modified. Default: "all"
Model component(s) of which the relative movement(s) should be modified. Default: "all"
Change of the position relation in pixels.
Change of the orientation relation in radians.
Deserialize a component training result.
Modified instance represents: Handle of the training result.
Handle of the serialized item.
Serialize a component training result.
Instance represents: Handle of the training result.
Handle of the serialized item.
Read a component training result from a file.
Modified instance represents: Handle of the training result.
File name.
Write a component training result to a file.
Instance represents: Handle of the training result.
File name.
Adopt new parameters that are used to create the model components into the training result.
Instance represents: Handle of the training result.
Training images that were used for training the model components.
Criterion for solving the ambiguities. Default: "rigidity"
Maximum contour overlap of the found initial components. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Contour regions of rigid model components.
Inspect the rigid model components obtained from the training.
Instance represents: Handle of the training result.
Criterion for solving the ambiguities. Default: "rigidity"
Maximum contour overlap of the found initial components. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Contour regions of rigid model components.
Train components and relations for the component-based matching.
Modified instance represents: Handle of the training result.
Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Contour regions of rigid model components.
Train components and relations for the component-based matching.
Modified instance represents: Handle of the training result.
Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Contour regions of rigid model components.
Class representing a compute device handle.
Open a compute device.
Modified instance represents: Compute device handle.
Compute device identifier.
Query compute device parameters.
Instance represents: Compute device handle.
Name of the parameter to query. Default: "buffer_cache_capacity"
Value of the parameter.
Set parameters of a compute device.
Instance represents: Compute device handle.
Name of the parameter to set. Default: "buffer_cache_capacity"
New parameter value.
Set parameters of a compute device.
Instance represents: Compute device handle.
Name of the parameter to set. Default: "buffer_cache_capacity"
New parameter value.
Close all compute devices.
Close a compute device.
Instance represents: Compute device handle.
Deactivate all compute devices.
Deactivate a compute device.
Instance represents: Compute device handle.
Activate a compute device.
Instance represents: Compute device handle.
Initialize a compute device.
Instance represents: Compute device handle.
List of operators to prepare. Default: "all"
Open a compute device.
Modified instance represents: Compute device handle.
Compute device identifier.
Get information on a compute device.
Compute device handle.
Name of information to query. Default: "name"
Returned information.
Get the list of available compute devices.
List of available compute devices.
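The compute-device lifecycle listed above (query available devices, open, query info, initialize, activate, deactivate) can be sketched as below. This is a hedged illustration: the HOperatorSet wrapper names are assumed to follow the usual operator-to-PascalCase mapping, and device availability depends on the local hardware and driver.

```csharp
// Hedged sketch of the compute-device lifecycle; assumes HOperatorSet wrappers
// for the operators documented above. Requires a supported device and driver.
using HalconDotNet;

class ComputeDeviceExample
{
    static void Main()
    {
        HOperatorSet.QueryAvailableComputeDevices(out HTuple deviceIdentifiers);
        if (deviceIdentifiers.Length == 0)
            return;  // no compute device available on this machine

        HTuple firstDevice = deviceIdentifiers.TupleSelect(0);
        HOperatorSet.GetComputeDeviceInfo(firstDevice, "name", out HTuple name);
        System.Console.WriteLine("Using compute device: " + name.S);

        HOperatorSet.OpenComputeDevice(firstDevice, out HTuple deviceHandle);
        // Prepare all supported operators up front, then route execution to the device.
        HOperatorSet.InitComputeDevice(deviceHandle, "all");
        HOperatorSet.ActivateComputeDevice(deviceHandle);

        // ... run device-accelerated filter operators here ...

        HOperatorSet.DeactivateComputeDevice(deviceHandle);
    }
}
```

Initializing with "all" trades startup time for predictable runtime behavior; initializing only the operators actually used would start faster.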
Represents an instance of a condition synchronization object.
Create a condition variable synchronization object.
Modified instance represents: Condition synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Create a condition variable synchronization object.
Modified instance represents: Condition synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Destroy a condition synchronization object.
Instance represents: Condition synchronization object.
Signal a condition synchronization object.
Instance represents: Condition synchronization object.
Signal a condition synchronization object.
Instance represents: Condition synchronization object.
Bounded wait on the signal of a condition synchronization object.
Instance represents: Condition synchronization object.
Mutex synchronization object.
Timeout in microseconds.
Wait on the signal of a condition synchronization object.
Instance represents: Condition synchronization object.
Mutex synchronization object.
Create a condition variable synchronization object.
Modified instance represents: Condition synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Create a condition variable synchronization object.
Modified instance represents: Condition synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Represents an instance of a 2D data code reader.
Read a 2D data code model from a file and create a new model.
Modified instance represents: Handle of the created 2D data code model.
Name of the 2D data code model file. Default: "data_code_model.dcm"
Create a model of a 2D data code class.
Modified instance represents: Handle for using and accessing the 2D data code model.
Type of the 2D data code. Default: "Data Matrix ECC 200"
Names of the generic parameters that can be adjusted for the 2D data code model. Default: []
Values of the generic parameters that can be adjusted for the 2D data code model. Default: []
Create a model of a 2D data code class.
Modified instance represents: Handle for using and accessing the 2D data code model.
Type of the 2D data code. Default: "Data Matrix ECC 200"
Names of the generic parameters that can be adjusted for the 2D data code model. Default: []
Values of the generic parameters that can be adjusted for the 2D data code model. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Access iconic objects that were created during the search for 2D data code symbols.
Instance represents: Handle of the 2D data code model.
Handle of the 2D data code candidate. Either an integer (usually the ResultHandle of find_data_code_2d) or a string representing a group of candidates. Default: "all_candidates"
Name of the iconic object to return. Default: "candidate_xld"
Objects that are created as intermediate results during the detection or evaluation of 2D data codes.
Access iconic objects that were created during the search for 2D data code symbols.
Instance represents: Handle of the 2D data code model.
Handle of the 2D data code candidate. Either an integer (usually the ResultHandle of find_data_code_2d) or a string representing a group of candidates. Default: "all_candidates"
Name of the iconic object to return. Default: "candidate_xld"
Objects that are created as intermediate results during the detection or evaluation of 2D data codes.
Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
Instance represents: Handle of the 2D data code model.
Handle of the 2D data code candidate. Either an integer (usually the ResultHandle of find_data_code_2d) or a string representing a group of candidates. Default: "all_candidates"
Names of the results of the 2D data code to return. Default: "status"
List with the results.
Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
Instance represents: Handle of the 2D data code model.
Handle of the 2D data code candidate. Either an integer (usually the ResultHandle of find_data_code_2d) or a string representing a group of candidates. Default: "all_candidates"
Names of the results of the 2D data code to return. Default: "status"
List with the results.
Detect and read 2D data code symbols in an image or train the 2D data code model.
Instance represents: Handle of the 2D data code model.
Input image. If the image has a reduced domain, the data code search is restricted to that domain, which usually reduces the runtime of the operator. However, if the data code is not fully inside the domain, it might not be found correctly. In rare cases, data codes may be found outside the domain; if such results are undesirable, they have to be eliminated subsequently.
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handles of all successfully decoded 2D data code symbols.
Decoded data strings of all detected 2D data code symbols in the image.
XLD contours that surround the successfully decoded data code symbols. The order of the contour points reflects the orientation of the detected symbols. The contours begin in the top left corner (see 'orientation' at get_data_code_2d_results) and continue clockwise. [Figure: order of points of SymbolXLDs]
Detect and read 2D data code symbols in an image or train the 2D data code model.
Instance represents: Handle of the 2D data code model.
Input image. If the image has a reduced domain, the data code search is restricted to that domain, which usually reduces the runtime of the operator. However, if the data code is not fully inside the domain, it might not be found correctly. In rare cases, data codes may be found outside the domain; if such results are undesirable, they must be eliminated afterwards.
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handles of all successfully decoded 2D data code symbols.
Decoded data strings of all detected 2D data code symbols in the image.
XLD contours that surround the successfully decoded data code symbols. The order of the contour points reflects the orientation of the detected symbols. The contours begin in the top left corner (see 'orientation' at get_data_code_2d_results) and continue clockwise. [Figure: order of points of SymbolXLDs]
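Taken together, the operators above form the standard decoding sequence: create a model, search an image, then query per-symbol results. A minimal HDevelop sketch (the image path is one of the standard HALCON example images and is illustrative only):

```
* Create an ECC 200 model with default parameters and search an image.
read_image (Image, 'datacode/ecc200/ecc200_cpu_001')
create_data_code_2d_model (DataCodeHandle, 'Data Matrix ECC 200', [], [])
find_data_code_2d (Image, SymbolXLDs, DataCodeHandle, [], [], \
                   ResultHandles, DecodedDataStrings)
* Query a result for the first decoded symbol, e.g. its row count.
get_data_code_2d_results (DataCodeHandle, ResultHandles[0], \
                          'symbol_rows', SymbolRows)
clear_data_code_2d_model (DataCodeHandle)
```

The decoded strings are already returned by find_data_code_2d; get_data_code_2d_results is only needed for additional per-symbol information such as geometry or status.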
Set selected parameters of the 2D data code model.
Instance represents: Handle of the 2D data code model.
Names of the generic parameters that shall be adjusted for the 2D data code. Default: "polarity"
Values of the generic parameters that are adjusted for the 2D data code. Default: "light_on_dark"
Set selected parameters of the 2D data code model.
Instance represents: Handle of the 2D data code model.
Names of the generic parameters that shall be adjusted for the 2D data code. Default: "polarity"
Values of the generic parameters that are adjusted for the 2D data code. Default: "light_on_dark"
Get one or several parameters that describe the 2D data code model.
Instance represents: Handle of the 2D data code model.
Names of the generic parameters that are to be queried for the 2D data code model. Default: "polarity"
Values of the generic parameters.
Get one or several parameters that describe the 2D data code model.
Instance represents: Handle of the 2D data code model.
Names of the generic parameters that are to be queried for the 2D data code model. Default: "polarity"
Values of the generic parameters.
Return, for a given 2D data code model, the names of the generic parameters or objects that can be used in the other 2D data code operators.
Instance represents: Handle of the 2D data code model.
Name of the parameter group. Default: "get_result_params"
List containing the names of the supported generic parameters.
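The query and get operators above can be combined to inspect a model before tuning it. A short HDevelop sketch using the documented defaults:

```
create_data_code_2d_model (DataCodeHandle, 'Data Matrix ECC 200', [], [])
* Which result names can be queried with get_data_code_2d_results?
query_data_code_2d_params (DataCodeHandle, 'get_result_params', ResultParams)
* Read back the current polarity setting of the model.
get_data_code_2d_param (DataCodeHandle, 'polarity', Polarity)
```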
Deserialize a serialized 2D data code model.
Modified instance represents: Handle of the 2D data code model.
Handle of the serialized item.
Serialize a 2D data code model.
Instance represents: Handle of the 2D data code model.
Handle of the serialized item.
Read a 2D data code model from a file and create a new model.
Modified instance represents: Handle of the created 2D data code model.
Name of the 2D data code model file. Default: "data_code_model.dcm"
Write a 2D data code model to a file.
Instance represents: Handle of the 2D data code model.
Name of the 2D data code model file. Default: "data_code_model.dcm"
Delete a 2D data code model and free the allocated memory.
Instance represents: Handle of the 2D data code model.
Create a model of a 2D data code class.
Modified instance represents: Handle for using and accessing the 2D data code model.
Type of the 2D data code. Default: "Data Matrix ECC 200"
Names of the generic parameters that can be adjusted for the 2D data code model. Default: []
Values of the generic parameters that can be adjusted for the 2D data code model. Default: []
Create a model of a 2D data code class.
Modified instance represents: Handle for using and accessing the 2D data code model.
Type of the 2D data code. Default: "Data Matrix ECC 200"
Names of the generic parameters that can be adjusted for the 2D data code model. Default: []
Values of the generic parameters that can be adjusted for the 2D data code model. Default: []
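Creating a model with non-default generic parameters and persisting it for later sessions might look as follows; the file name is illustrative:

```
* Create an ECC 200 model for bright symbols on a dark background.
create_data_code_2d_model (DataCodeHandle, 'Data Matrix ECC 200', \
                           'polarity', 'light_on_dark')
write_data_code_2d_model (DataCodeHandle, 'my_model.dcm')
* ... in a later session: restore the configured model.
read_data_code_2d_model ('my_model.dcm', DataCodeHandle2)
```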
Represents an instance of a deformable model for matching.
Read a deformable model from a file.
Modified instance represents: Handle of the model.
File name.
Prepare a deformable model for planar calibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar calibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
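XLD-based model creation typically starts from subpixel-accurate edges of a template image. A sketch using the defaults listed above (image name and edge-filter thresholds are illustrative):

```
read_image (TemplateImage, 'die/die_03')
edges_sub_pix (TemplateImage, Edges, 'canny', 1.0, 20, 40)
* Arguments follow the documented parameter order; unused parameters
* are passed as [].
create_planar_uncalib_deformable_model_xld (Edges, 'auto', [], [], \
    'auto', 1.0, [], 'auto', 1.0, [], 'auto', 'auto', \
    'ignore_local_polarity', 5, [], [], ModelID)
```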
Create a deformable model for calibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Create a deformable model for calibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Create a deformable model for uncalibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Create a deformable model for uncalibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Return the origin (reference point) of a deformable model.
Instance represents: Handle of the model.
Row coordinate of the origin of the deformable model.
Column coordinate of the origin of the deformable model.
Set the origin (reference point) of a deformable model.
Instance represents: Handle of the model.
Row coordinate of the origin of the deformable model.
Column coordinate of the origin of the deformable model.
Set selected parameters of the deformable model.
Instance represents: Handle of the model.
Parameter names.
Parameter values.
Return the parameters of a deformable model.
Instance represents: Handle of the model.
Names of the generic parameters that are to be queried for the deformable model. Default: "angle_start"
Values of the generic parameters.
Return the parameters of a deformable model.
Instance represents: Handle of the model.
Names of the generic parameters that are to be queried for the deformable model. Default: "angle_start"
Values of the generic parameters.
Return the contour representation of a deformable model.
Instance represents: Handle of the model.
Pyramid level for which the contour representation should be returned. Default: 1
Contour representation of the deformable model.
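A stored model can be inspected by reading it back and retrieving its origin and contour representation; the file name below is hypothetical:

```
read_deformable_model ('model.dfm', ModelID)
get_deformable_model_origin (ModelID, Row, Column)
* Contour representation at the finest pyramid level (1).
get_deformable_model_contours (ModelContours, ModelID, 1)
```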
Deserialize a deformable model.
Modified instance represents: Handle of the model.
Handle of the serialized item.
Serialize a deformable model.
Instance represents: Handle of a model to be saved.
Handle of the serialized item.
Read a deformable model from a file.
Modified instance represents: Handle of the model.
File name.
Write a deformable model to a file.
Instance represents: Handle of a model to be saved.
The path and filename of the model to be saved.
Free the memory of a deformable model.
Handle of the model.
Free the memory of a deformable model.
Instance represents: Handle of the model.
Find the best matches of a local deformable model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Vector field of the rectification transformation.
Contours of the found instances of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching. Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Switch for requested iconic result. Default: []
The general parameter names. Default: []
Values of the general parameters. Default: []
Scores of the found instances of the model.
Row coordinates of the found instances of the model.
Column coordinates of the found instances of the model.
Rectified image of the found model.
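A typical call requesting the optional iconic results looks as follows (angle range and match count are illustrative; ModelID is assumed to hold a previously created local deformable model):

```
* Search for up to 3 deformed instances with rotation in
* [-0.39, +0.40] rad; request the rectified image and the
* deformed contours via the ResultType parameter.
find_local_deformable_model (SearchImage, ImageRectified, VectorField, \
    DeformedContours, ModelID, -0.39, 0.79, 1.0, 1.0, 1.0, 1.0, \
    0.5, 3, 1.0, 0, 0.9, ['image_rectified','deformed_contours'], \
    [], [], Score, Row, Column)
```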
Find the best matches of a calibrated deformable model in an image and return their 3D pose.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the model.
Pose of the object.
Find the best matches of a calibrated deformable model in an image and return their 3D pose.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the model.
Pose of the object.
Find the best matches of a planar projective invariant deformable model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
Score of the found instances of the model.
Homographies between model and found instances.
Find the best matches of a planar projective invariant deformable model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
Score of the found instances of the model.
Homographies between model and found instances.
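The returned homography can be used directly to project the model contours onto the found instance, which is the usual way to visualize the match (ModelID is assumed to hold a previously created model):

```
* Find the best match and map the model contours into the image.
find_planar_uncalib_deformable_model (SearchImage, ModelID, -0.39, 0.78, \
    1.0, 1.0, 1.0, 1.0, 0.5, 1, 1.0, 0, 0.9, [], [], Score, HomMat2D)
get_deformable_model_contours (ModelContours, ModelID, 1)
projective_trans_contour_xld (ModelContours, ContoursProjTrans, HomMat2D)
```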
Set the metric of a local deformable model that was created from XLD contours.
Instance represents: Handle of the model.
Input image used for the determination of the polarity.
Vector field of the local deformation.
Match metric. Default: "use_polarity"
Set the metric of a planar calibrated deformable model that was created from XLD contours.
Instance represents: Handle of the model.
Input image used for the determination of the polarity.
Pose of the model in the image.
Match metric. Default: "use_polarity"
Set the metric of a planar uncalibrated deformable model that was created from XLD contours.
Instance represents: Handle of the model.
Input image used for the determination of the polarity.
Transformation matrix.
Match metric. Default: "use_polarity"
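Because XLD contours carry no polarity information, a model created from contours is typically completed with one of the metric operators above, using a real image of the object and a rough alignment (here the identity transformation, assuming the contours are already aligned with the image):

```
* ModelID: a deformable model previously created from XLD contours.
hom_mat2d_identity (HomMat2DIdentity)
set_planar_uncalib_deformable_model_metric (Image, ModelID, \
    HomMat2DIdentity, 'use_polarity')
```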
Prepare a deformable model for local deformable matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for local deformable matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar calibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar calibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Create a deformable model for local deformable matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Create a deformable model for local deformable matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Create a deformable model for calibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Create a deformable model for calibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Create a deformable model for uncalibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Create a deformable model for uncalibrated perspective matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Represents an instance of a deformable surface matching result.
Get details of a result from deformable surface based matching.
Instance represents: Handle of the deformable surface matching result.
Name of the result property. Default: "sampled_scene"
Index of the result property. Default: 0
Value of the result property.
Get details of a result from deformable surface based matching.
Instance represents: Handle of the deformable surface matching result.
Name of the result property. Default: "sampled_scene"
Index of the result property. Default: 0
Value of the result property.
Free the memory of a deformable surface matching result.
Handle of the deformable surface matching result.
Free the memory of a deformable surface matching result.
Instance represents: Handle of the deformable surface matching result.
Refine the position and deformation of a deformable surface model in a 3D scene.
Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the refined model.
Refine the position and deformation of a deformable surface model in a 3D scene.
Modified instance represents: Handle of the matching result.
Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the refined model.
Find the best match of a deformable surface model in a 3D scene.
Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the found instances of the surface model.
Find the best match of a deformable surface model in a 3D scene.
Modified instance represents: Handle of the matching result.
Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Represents an instance of a deformable surface model.
Read a deformable surface model from a file.
Modified instance represents: Handle of the read deformable surface model.
Name of the file to read.
Create the data structure needed to perform deformable surface-based matching.
Modified instance represents: Handle of the deformable surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Create the data structure needed to perform deformable surface-based matching.
Modified instance represents: Handle of the deformable surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Free the memory of a deformable surface model.
Handle of the deformable surface model.
Free the memory of a deformable surface model.
Instance represents: Handle of the deformable surface model.
Deserialize a deformable surface model.
Modified instance represents: Handle of the deformable surface model.
Handle of the serialized item.
Serialize a deformable surface model.
Instance represents: Handle of the deformable surface model.
Handle of the serialized item.
Read a deformable surface model from a file.
Modified instance represents: Handle of the read deformable surface model.
Name of the file to read.
Write a deformable surface model to a file.
Instance represents: Handle of the deformable surface model to write.
File name to write to.
Refine the position and deformation of a deformable surface model in a 3D scene.
Instance represents: Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the refined model.
Refine the position and deformation of a deformable surface model in a 3D scene.
Instance represents: Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the refined model.
Find the best match of a deformable surface model in a 3D scene.
Instance represents: Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the found instances of the surface model.
Find the best match of a deformable surface model in a 3D scene.
Instance represents: Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the found instances of the surface model.
Return the parameters and properties of a deformable surface model.
Instance represents: Handle of the deformable surface model.
Name of the parameter. Default: "sampled_model"
Value of the parameter.
Return the parameters and properties of a deformable surface model.
Instance represents: Handle of the deformable surface model.
Name of the parameter. Default: "sampled_model"
Value of the parameter.
Add a reference point to a deformable surface model.
Instance represents: Handle of the deformable surface model.
x-coordinates of a reference point.
y-coordinates of a reference point.
z-coordinates of a reference point.
Index of the new reference point.
Add a reference point to a deformable surface model.
Instance represents: Handle of the deformable surface model.
x-coordinates of a reference point.
y-coordinates of a reference point.
z-coordinates of a reference point.
Index of the new reference point.
Add a sample deformation to a deformable surface model
Instance represents: Handle of the deformable surface model.
Handle of the deformed 3D object model.
Add a sample deformation to a deformable surface model
Instance represents: Handle of the deformable surface model.
Handle of the deformed 3D object model.
Create the data structure needed to perform deformable surface-based matching.
Modified instance represents: Handle of the deformable surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Create the data structure needed to perform deformable surface-based matching.
Modified instance represents: Handle of the deformable surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
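The model-creation, find, and refine operators above form one workflow. A minimal C# sketch of that workflow follows; it requires the HALCON runtime and the halcondotnet assembly, and the exact overload signatures (argument order, use of `out` parameters) are assumptions that may differ between HALCON versions.

```csharp
// Hedged sketch: deformable surface-based matching with halcondotnet.
// Assumes HALCON is installed and licensed; signatures are approximations
// of CreateDeformableSurfaceModel / FindDeformableSurfaceModel.
using HalconDotNet;

class SurfaceMatchingSketch
{
    static void Run(HObjectModel3D reference, HObjectModel3D scene)
    {
        // Create the model from a 3D object model
        // (relative sampling distance 0.03, no generic parameters).
        HDeformableSurfaceModel model = new HDeformableSurfaceModel(
            reference, 0.03, new HTuple(), new HTuple());

        // Find the best match in the scene; the score of the found
        // instance is returned alongside the matching result handle.
        HTuple score;
        HDeformableSurfaceMatchingResult result =
            model.FindDeformableSurfaceModel(
                scene, 0.05, 0.2, new HTuple(), new HTuple(), out score);

        // The pose/deformation can then be refined via
        // RefineDeformableSurfaceModel, or queried from the result handle.
    }
}
```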
Represents an instance of a descriptor model.
Read a descriptor model from a file.
Modified instance represents: Handle of the model.
File name.
Create a descriptor model for calibrated perspective matching.
Modified instance represents: The handle to the descriptor model.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
Prepare a descriptor model for interest point matching.
Modified instance represents: The handle to the descriptor model.
Input image whose domain will be used to create the model.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Free the memory of a descriptor model.
Handle of the descriptor model.
Free the memory of a descriptor model.
Instance represents: Handle of the descriptor model.
Deserialize a descriptor model.
Modified instance represents: Handle of the model.
Handle of the serialized item.
Serialize a descriptor model.
Instance represents: Handle of a model to be saved.
Handle of the serialized item.
Read a descriptor model from a file.
Modified instance represents: Handle of the model.
File name.
Write a descriptor model to a file.
Instance represents: Handle of a model to be saved.
The path and filename of the model to be saved.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Instance represents: The handle to the descriptor model.
Input image where the model should be found.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximum number of found instances. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
3D pose of the object.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Instance represents: The handle to the descriptor model.
Input image where the model should be found.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximum number of found instances. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
3D pose of the object.
Find the best matches of a descriptor model in an image.
Instance represents: The handle to the descriptor model.
Input image where the model should be found.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximum number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
Homography between model and found instance.
Find the best matches of a descriptor model in an image.
Instance represents: The handle to the descriptor model.
Input image where the model should be found.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximum number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
Homography between model and found instance.
Query the interest points of the descriptor model or the last processed search image.
Instance represents: The handle to the descriptor model.
Set of interest points. Default: "model"
Subset of interest points. Default: "all"
Row coordinates of interest points.
Column coordinates of interest points.
Query the interest points of the descriptor model or the last processed search image.
Instance represents: The handle to the descriptor model.
Set of interest points. Default: "model"
Subset of interest points. Default: "all"
Row coordinates of interest points.
Column coordinates of interest points.
Return the parameters of a descriptor model.
Instance represents: The object handle to the descriptor model.
The detector's parameter names.
Values of the detector's parameters.
The descriptor's parameter names.
Values of the descriptor's parameters.
The type of the detector.
Create a descriptor model for calibrated perspective matching.
Modified instance represents: The handle to the descriptor model.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
Prepare a descriptor model for interest point matching.
Modified instance represents: The handle to the descriptor model.
Input image whose domain will be used to create the model.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
Query alphanumerical results that were accumulated during descriptor-based matching.
Instance represents: Handle of a descriptor model.
Handle of the object for which the results are queried. Default: "all"
Name of the results to be queried. Default: "num_points"
Returned results.
Query alphanumerical results that were accumulated during descriptor-based matching.
Instance represents: Handle of a descriptor model.
Handle of the object for which the results are queried. Default: "all"
Name of the results to be queried. Default: "num_points"
Returned results.
Return the origin of a descriptor model.
Instance represents: Handle of a descriptor model.
Position of origin in row direction.
Position of origin in column direction.
Return the origin of a descriptor model.
Instance represents: Handle of a descriptor model.
Position of origin in row direction.
Position of origin in column direction.
Set the origin of a descriptor model.
Instance represents: Handle of a descriptor model.
Translation of origin in row direction. Default: 0
Translation of origin in column direction. Default: 0
Set the origin of a descriptor model.
Instance represents: Handle of a descriptor model.
Translation of origin in row direction. Default: 0
Translation of origin in column direction. Default: 0
Represents an instance of a dictionary.
Create a new empty dictionary.
Modified instance represents: Handle of the newly created dictionary.
Read a dictionary from a file.
Modified instance represents: Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Read a dictionary from a file.
Modified instance represents: Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Copy a dictionary.
Instance represents: Dictionary handle.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Copied dictionary handle.
Copy a dictionary.
Instance represents: Dictionary handle.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Copied dictionary handle.
Create a new empty dictionary.
Modified instance represents: Handle of the newly created dictionary.
Retrieve an object associated with the key from the dictionary.
Instance represents: Dictionary handle.
Key string.
Object value retrieved from the dictionary.
Retrieve an object associated with the key from the dictionary.
Instance represents: Dictionary handle.
Key string.
Object value retrieved from the dictionary.
Query dictionary parameters or information about a dictionary.
Instance represents: Dictionary handle.
Names of the dictionary parameters or info queries. Default: "keys"
Dictionary keys the parameter/query should be applied to (empty for GenParamName = 'keys').
Values of the dictionary parameters or info queries.
Query dictionary parameters or information about a dictionary.
Instance represents: Dictionary handle.
Names of the dictionary parameters or info queries. Default: "keys"
Dictionary keys the parameter/query should be applied to (empty for GenParamName = 'keys').
Values of the dictionary parameters or info queries.
Retrieve a tuple associated with the key from the dictionary.
Instance represents: Dictionary handle.
Key string.
Tuple value retrieved from the dictionary.
Retrieve a tuple associated with the key from the dictionary.
Instance represents: Dictionary handle.
Key string.
Tuple value retrieved from the dictionary.
Read a dictionary from a file.
Modified instance represents: Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Read a dictionary from a file.
Modified instance represents: Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Remove keys from a dictionary.
Instance represents: Dictionary handle.
Key to remove.
Remove keys from a dictionary.
Instance represents: Dictionary handle.
Key to remove.
Add a key/object pair to the dictionary.
Instance represents: Dictionary handle.
Object to be associated with the key.
Key string.
Add a key/object pair to the dictionary.
Instance represents: Dictionary handle.
Object to be associated with the key.
Key string.
Add a key/tuple pair to the dictionary.
Instance represents: Dictionary handle.
Key string.
Tuple value to be associated with the key.
Add a key/tuple pair to the dictionary.
Instance represents: Dictionary handle.
Key string.
Tuple value to be associated with the key.
Write a dictionary to a file.
Instance represents: Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Write a dictionary to a file.
Instance represents: Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
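The dictionary operators above (create, set/get tuple, query parameters, remove key, write) map directly onto the HDict class. A minimal C# sketch, assuming the standard halcondotnet method names derived from the operators; it requires the HALCON runtime:

```csharp
// Hedged sketch: basic HDict usage with halcondotnet
// (create_dict / set_dict_tuple / get_dict_tuple / get_dict_param).
using HalconDotNet;

class DictSketch
{
    static void Run()
    {
        HDict dict = new HDict();                        // new empty dictionary
        dict.SetDictTuple("threshold", new HTuple(128)); // key/tuple pair
        dict.SetDictTuple("name", new HTuple("part_a"));

        // Query all keys; the Key argument is empty for the 'keys' query,
        // as noted in the parameter description above.
        HTuple keys = dict.GetDictParam("keys", new HTuple());

        HTuple thr = dict.GetDictTuple("threshold");     // retrieve a tuple
        dict.RemoveDictKey("name");                      // remove a key

        // dict.WriteDict("params.hdict", new HTuple(), new HTuple());
    }
}
```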
Represents an instance of a Deep Neural Network.
Read a deep-learning-based classifier from a file.
Modified instance represents: Handle of the deep learning classifier.
File name. Default: "pretrained_dl_classifier_compact.hdl"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Infer the class affiliations for a set of images using a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Tuple of input images.
Handle of the deep learning classification results.
Clear a deep-learning-based classifier.
Handle of the deep-learning-based classifier.
Clear a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Deserialize a deep-learning-based classifier.
Modified instance represents: Handle of the deep-learning-based classifier.
Handle of the serialized item.
Return the parameters of a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Name of the generic parameter. Default: "gpu"
Value of the generic parameter.
Return the parameters of a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Name of the generic parameter. Default: "gpu"
Value of the generic parameter.
Read a deep-learning-based classifier from a file.
Modified instance represents: Handle of the deep learning classifier.
File name. Default: "pretrained_dl_classifier_compact.hdl"
Serialize a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Handle of the serialized item.
Set the parameters of a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Name of the generic parameter. Default: "classes"
Value of the generic parameter. Default: ["class_1","class_2","class_3"]
Set the parameters of a deep-learning-based classifier.
Instance represents: Handle of the deep-learning-based classifier.
Name of the generic parameter. Default: "classes"
Value of the generic parameter. Default: ["class_1","class_2","class_3"]
Write a deep-learning-based classifier to a file.
Instance represents: Handle of the deep-learning-based classifier.
File name.
Represents an instance of a Deep Neural Network inference step result.
Infer the class affiliations for a set of images using a deep-learning-based classifier.
Modified instance represents: Handle of the deep learning classification results.
Tuple of input images.
Handle of the deep-learning-based classifier.
Clear a handle containing the results of the deep-learning-based classification.
Handle of the deep learning classification results.
Clear a handle containing the results of the deep-learning-based classification.
Instance represents: Handle of the deep learning classification results.
Retrieve classification results inferred by a deep-learning-based classifier.
Instance represents: Handle of the deep learning classification results.
Index of the image in the batch. Default: "all"
Name of the generic parameter. Default: "predicted_classes"
Value of the generic parameter: either the confidence values, the class names, or the class indices.
Retrieve classification results inferred by a deep-learning-based classifier.
Instance represents: Handle of the deep learning classification results.
Index of the image in the batch. Default: "all"
Name of the generic parameter. Default: "predicted_classes"
Value of the generic parameter: either the confidence values, the class names, or the class indices.
Represents an instance of a Deep Neural Network training step result.
Perform a training step of a deep-learning-based classifier on a batch of images.
Modified instance represents: Handle of the training results from the deep-learning-based classifier.
Images comprising the batch.
Handle of the deep-learning-based classifier.
Corresponding labels for each of the images. Default: []
Clear the handle of a deep-learning-based classifier training result.
Handle of the training results from the deep-learning-based classifier.
Clear the handle of a deep-learning-based classifier training result.
Instance represents: Handle of the training results from the deep-learning-based classifier.
Return the results for the single training step of a deep-learning-based classifier.
Instance represents: Handle of the training results from the deep-learning-based classifier.
Name of the generic parameter. Default: "loss"
Value of the generic parameter.
Return the results for the single training step of a deep-learning-based classifier.
Instance represents: Handle of the training results from the deep-learning-based classifier.
Name of the generic parameter. Default: "loss"
Value of the generic parameter.
Perform a training step of a deep-learning-based classifier on a batch of images.
Modified instance represents: Handle of the training results from the deep-learning-based classifier.
Images comprising the batch.
Handle of the deep-learning-based classifier.
Corresponding labels for each of the images. Default: []
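A training step and its result query, as documented above, can be sketched as follows in C#. This is a hedged sketch of the classifier-based deep learning API: the constructor-style call for the training step and the result-query method name are assumptions modeled on the operator names, and running it requires a HALCON deep-learning runtime.

```csharp
// Hedged sketch: one training step of a deep-learning-based classifier
// (train_dl_classifier_batch / get_dl_classifier_train_result).
using HalconDotNet;

class TrainStepSketch
{
    static void Run(HDlClassifier classifier, HImage batchImages, HTuple batchLabels)
    {
        // Perform a single training step on the batch; the resulting
        // handle accumulates the training results for this step.
        HDlClassifierTrainResult trainResult =
            new HDlClassifierTrainResult(batchImages, classifier, batchLabels);

        // Query the loss of this training step (default parameter "loss").
        HTuple loss = trainResult.GetDlClassifierTrainResult("loss");
    }
}
```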
Represents an instance of a Deep Neural Network.
Create a deep learning network for object detection.
Modified instance represents: Deep learning model for object detection.
Deep learning classifier, used as backbone network. Default: "pretrained_dl_classifier_compact.hdl"
Number of classes. Default: 3
Parameters for the object detection model. Default: []
Read a deep learning model from a file.
Modified instance represents: Handle of the deep learning model.
File name. Default: "pretrained_dl_segmentation_compact.hdl"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Apply a deep-learning-based network on a set of images for inference.
Instance represents: Handle of the deep learning model.
Input data.
Requested outputs. Default: []
Handle containing result data.
Clear a deep learning model.
Handle of the deep learning model.
Clear a deep learning model.
Instance represents: Handle of the deep learning model.
Create a deep learning network for object detection.
Modified instance represents: Deep learning model for object detection.
Deep learning classifier, used as backbone network. Default: "pretrained_dl_classifier_compact.hdl"
Number of classes. Default: 3
Parameters for the object detection model. Default: []
Deserialize a deep learning model.
Modified instance represents: Handle of the deep learning model.
Handle of the serialized item.
Return the parameters of a deep learning model.
Instance represents: Handle of the deep learning model.
Name of the generic parameter. Default: "batch_size"
Value of the generic parameter.
Read a deep learning model from a file.
Modified instance represents: Handle of the deep learning model.
File name. Default: "pretrained_dl_segmentation_compact.hdl"
Serialize a deep learning model.
Instance represents: Handle of the deep learning model.
Handle of the serialized item.
Set the parameters of a deep learning model.
Instance represents: Handle of the deep learning model.
Name of the generic parameter. Default: "learning_rate"
Value of the generic parameter. Default: 0.001
Set the parameters of a deep learning model.
Instance represents: Handle of the deep learning model.
Name of the generic parameter. Default: "learning_rate"
Value of the generic parameter. Default: 0.001
Train a deep learning model.
Instance represents: Deep learning model handle.
Tuple of Dictionaries with input images and corresponding information.
Dictionary with the train result data.
Write a deep learning model to a file.
Instance represents: Handle of the deep learning model.
File name.
Represents an instance of a drawing object.
Create a circle which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the center. Default: 100
Column coordinate of the center. Default: 100
Radius of the circle. Default: 80
Create a rectangle of any orientation which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the center. Default: 150
Column coordinate of the center. Default: 150
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 100
Create a rectangle parallel to the coordinate axis which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the upper left corner. Default: 100
Column coordinate of the upper left corner. Default: 100
Row coordinate of the lower right corner. Default: 200
Column coordinate of the lower right corner. Default: 200
Adds a callback for the resize event, that is, this callback is
executed whenever the user changes any of the dimensions of the draw
object.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the attach event, that is, this callback is
executed when a drawing object is attached to the window.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the detach event, that is, this callback is
executed when a drawing object is detached from the window.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the drag event, that is, this callback is
executed whenever the drawing object's position changes.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the select event, that is, this callback is
executed whenever the drawing object is selected.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the resize event, that is, this callback is
executed whenever the user changes any of the dimensions of the draw
object.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the drag event, that is, this callback is
executed whenever the drawing object's position changes.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the select event, that is, this callback is
executed whenever the drawing object is selected.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the attach event, that is, this callback is
executed when a drawing object is attached to the window.
Callback function with the signature defined by
HDrawingObjectCallback
Adds a callback for the detach event, that is, this callback is
executed when a drawing object is detached from the window.
Callback function with the signature defined by
HDrawingObjectCallback
Method to create drawing objects by explicitly specifying the type.
Type of the drawing object. Can be any of the values specified by
the enum type HDrawingObjectType.
List of parameters for the corresponding drawing object.
See the constructors listed in HOperatorSet for more details.
Add a callback function to a drawing object.
Instance represents: Handle of the drawing object.
Events to be captured.
Callback functions.
Add a callback function to a drawing object.
Instance represents: Handle of the drawing object.
Events to be captured.
Callback functions.
Detach the background image from a HALCON window.
Window handle.
Attach a background image to a HALCON window.
Background image.
Window handle.
Create a text object which can be moved interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the text position. Default: 12
Column coordinate of the text position. Default: 12
Character string to be displayed. Default: "Text"
Return the iconic object of a drawing object.
Instance represents: Handle of the drawing object.
Copy of the iconic object represented by the drawing object.
Delete drawing object.
Instance represents: Handle of the drawing object.
Set the parameters of a drawing object.
Instance represents: Handle of the drawing object.
Parameter names of the drawing object.
Parameter values.
Set the parameters of a drawing object.
Instance represents: Handle of the drawing object.
Parameter names of the drawing object.
Parameter values.
Get the parameters of a drawing object.
Instance represents: Handle of the drawing object.
Parameter names of the drawing object.
Parameter values.
Get the parameters of a drawing object.
Instance represents: Handle of the drawing object.
Parameter names of the drawing object.
Parameter values.
Set the contour of an interactive draw XLD.
Instance represents: Handle of the drawing object.
XLD contour.
Create an XLD contour which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinates of the polygon. Default: [100,200,200,100]
Column coordinates of the polygon. Default: [100,100,200,200]
Create a circle sector which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the center. Default: 100
Column coordinate of the center. Default: 100
Radius of the circle. Default: 80
Start angle of the arc. Default: 0
End angle of the arc. Default: 3.14159
Create an elliptic sector which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row index of the center. Default: 200
Column index of the center. Default: 200
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 60
Start angle of the arc. Default: 0
End angle of the arc. Default: 3.14159
Create a line which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row index of the first line point. Default: 100
Column index of the first line point. Default: 100
Row index of the second line point. Default: 200
Column index of the second line point. Default: 200
Create a circle which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the center. Default: 100
Column coordinate of the center. Default: 100
Radius of the circle. Default: 80
Create an ellipse which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row index of the center. Default: 200
Column index of the center. Default: 200
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 60
Create a rectangle of any orientation which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the center. Default: 150
Column coordinate of the center. Default: 150
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 100
Create a rectangle parallel to the coordinate axis which can be modified interactively.
Modified instance represents: Handle of the drawing object.
Row coordinate of the upper left corner. Default: 100
Column coordinate of the upper left corner. Default: 100
Row coordinate of the lower right corner. Default: 200
Column coordinate of the lower right corner. Default: 200
Send an event to a buffer window signaling a mouse double-click event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse double-click event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse down event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse down event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse drag event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse drag event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse up event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse up event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Returns the corresponding HALCON id.
Signature for the drawing object callbacks in managed code.
Signature for the drawing object callbacks in managed code.
Represents a dual quaternion.
Create an uninitialized instance.
Convert a 3D pose to a unit dual quaternion.
Modified instance represents: Unit dual quaternion.
3D pose.
Convert a screw into a dual quaternion.
Modified instance represents: Dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
Convert a screw into a dual quaternion.
Modified instance represents: Dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Composes two dual quaternions
Convert to pose
Convert to matrix
Conjugate a dual quaternion
Create a dual quaternion from eight double values
Create a dual quaternion from two quaternions
Deserialize a serialized dual quaternion.
Modified instance represents: Dual quaternion.
Handle of the serialized item.
Multiply two dual quaternions.
Left dual quaternion.
Right dual quaternion.
Product of the dual quaternions.
Multiply two dual quaternions.
Instance represents: Left dual quaternion.
Right dual quaternion.
Product of the dual quaternions.
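A dual quaternion is a pair (qr, qd) of ordinary quaternions, and composing two rigid 3D transformations multiplies their dual quaternions. The product can be sketched in plain Python; this illustrates the underlying math only, not the halcondotnet API, and all names here are hypothetical:

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dq_mul(a, b):
    """Product of dual quaternions a = (ar, ad), b = (br, bd):
    (ar + e*ad)(br + e*bd) = ar*br + e*(ar*bd + ad*br), since e^2 = 0."""
    (ar, ad), (br, bd) = a, b
    real = qmul(ar, br)
    dual = tuple(x + y for x, y in zip(qmul(ar, bd), qmul(ad, br)))
    return (real, dual)

# A pure translation t is encoded as ((1,0,0,0), (0, t/2)), so composing
# two translations adds them.
t1 = ((1.0, 0.0, 0.0, 0.0), (0.0, 0.5, 0.0, 0.0))   # translation (1, 0, 0)
t2 = ((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0))   # translation (0, 2, 0)
combined = dq_mul(t1, t2)                            # translation (1, 2, 0)
```

The operand order shown (left transformation applied after the right) is an assumption of this sketch.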
Conjugate a dual quaternion.
Dual quaternion.
Conjugate of the dual quaternion.
Conjugate a dual quaternion.
Instance represents: Dual quaternion.
Conjugate of the dual quaternion.
Interpolate two dual quaternions.
Instance represents: Dual quaternion as the start point of the interpolation.
Dual quaternion as the end point of the interpolation.
Interpolation parameter. Default: 0.5
Interpolated dual quaternion.
Interpolate two dual quaternions.
Instance represents: Dual quaternion as the start point of the interpolation.
Dual quaternion as the end point of the interpolation.
Interpolation parameter. Default: 0.5
Interpolated dual quaternion.
Normalize a dual quaternion.
Unit dual quaternion.
Normalized dual quaternion.
Normalize a dual quaternion.
Instance represents: Unit dual quaternion.
Normalized dual quaternion.
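A unit dual quaternion must satisfy two conditions: the real part qr has unit norm, and qr is orthogonal to the dual part qd. Normalization can therefore be sketched as rescaling by |qr| and projecting out the parallel component; this is an illustrative sketch with assumed names, not the halcondotnet API:

```python
import math

def dq_normalize(q):
    """Normalize a dual quaternion (qr, qd) so that |qr| = 1 and qr . qd = 0."""
    qr, qd = q
    s = math.sqrt(sum(c * c for c in qr))        # norm of the real part
    qr = tuple(c / s for c in qr)
    qd = tuple(c / s for c in qd)
    # Project out the dual-part component parallel to the (now unit) real part.
    dot = sum(r * d for r, d in zip(qr, qd))
    qd = tuple(d - dot * r for r, d in zip(qr, qd))
    return (qr, qd)

q = dq_normalize(((2.0, 0.0, 0.0, 0.0), (1.0, 1.0, 0.0, 0.0)))
```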
Convert a unit dual quaternion into a homogeneous transformation matrix.
Instance represents: Unit dual quaternion.
Transformation matrix.
Convert a dual quaternion to a 3D pose.
Unit dual quaternion.
3D pose.
Convert a dual quaternion to a 3D pose.
Instance represents: Unit dual quaternion.
3D pose.
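The pose conversions above rest on a simple identity: for a rotation quaternion qr and translation t, the unit dual quaternion is (qr, 0.5*(0,t)*qr), and the translation is recovered as (0,t) = 2*qd*conj(qr). A sketch of the round trip (the rotation is taken directly as a quaternion rather than as HALCON pose angles, and all names are assumptions):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rt_to_dq(qr, t):
    """Unit dual quaternion for rotation qr plus translation t."""
    qd = qmul((0.0, t[0], t[1], t[2]), qr)
    return (qr, tuple(0.5 * c for c in qd))

def dq_to_rt(dq):
    """Recover rotation quaternion and translation vector."""
    qr, qd = dq
    conj = (qr[0], -qr[1], -qr[2], -qr[3])
    tq = qmul(qd, conj)
    return qr, tuple(2.0 * c for c in tq[1:])

# Round trip: rotate 90 degrees about z, then translate by (1, 2, 3).
qr = (math.cos(math.pi / 4.0), 0.0, 0.0, math.sin(math.pi / 4.0))
dq = rt_to_dq(qr, (1.0, 2.0, 3.0))
qr_back, t_back = dq_to_rt(dq)
```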
Convert a unit dual quaternion into a screw.
Instance represents: Unit dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
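In "moment" format, a screw with axis direction l, moment m, rotation angle theta, and translation d along the axis maps to the dual quaternion qr = (cos(theta/2), sin(theta/2)*l), qd = (-(d/2)*sin(theta/2), sin(theta/2)*m + (d/2)*cos(theta/2)*l). A minimal sketch of that formula (illustrative only, not the halcondotnet API):

```python
import math

def screw_to_dq(l, m, theta, d):
    """Dual quaternion for a screw given in 'moment' format: axis direction l,
    moment m, rotation angle theta, translation d along the axis."""
    ch, sh = math.cos(theta / 2.0), math.sin(theta / 2.0)
    qr = (ch, sh * l[0], sh * l[1], sh * l[2])
    qd = (-(d / 2.0) * sh,
          sh * m[0] + (d / 2.0) * ch * l[0],
          sh * m[1] + (d / 2.0) * ch * l[1],
          sh * m[2] + (d / 2.0) * ch * l[2])
    return (qr, qd)

# Pure translation by 4 along z (theta = 0): matches ((1,0,0,0), (0, t/2)).
trans = screw_to_dq((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 0.0, 4.0)
# Pure rotation about z through the origin (m = 0, d = 0).
rot = screw_to_dq((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), math.pi / 2.0, 0.0)
```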
Transform a 3D line with a unit dual quaternion.
Instance represents: Unit dual quaternion representing the transformation.
Format of the line parameters. Default: "moment"
X component of the direction vector of the line.
Y component of the direction vector of the line.
Z component of the direction vector of the line.
X component of the moment vector or a point on the line.
Y component of the moment vector or a point on the line.
Z component of the moment vector or a point on the line.
X component of the direction vector of the transformed line.
Y component of the direction vector of the transformed line.
Z component of the direction vector of the transformed line.
X component of the moment vector or a point on the transformed line.
Y component of the moment vector or a point on the transformed line.
Z component of the moment vector or a point on the transformed line.
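In "moment" format a 3D line with direction l and moment m embeds as the dual pure quaternion L = (0,l) + e*(0,m), and transforming it with a unit dual quaternion q is the sandwich product L' = q L q~, where q~ conjugates the real and dual quaternion parts. A sketch of that computation (illustrative math only; names are assumptions, not the halcondotnet API):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dq_mul(a, b):
    """Dual quaternion product: (ar+e*ad)(br+e*bd) = ar*br + e*(ar*bd+ad*br)."""
    (ar, ad), (br, bd) = a, b
    return (qmul(ar, br),
            tuple(x + y for x, y in zip(qmul(ar, bd), qmul(ad, br))))

def dq_transform_line(q, l, m):
    """Transform the Plucker line (l, m) by the unit dual quaternion q."""
    conj = lambda p: (p[0], -p[1], -p[2], -p[3])
    L = ((0.0, l[0], l[1], l[2]), (0.0, m[0], m[1], m[2]))
    qbar = (conj(q[0]), conj(q[1]))
    real, dual = dq_mul(dq_mul(q, L), qbar)
    return real[1:], dual[1:]

# Rotating the x-axis line by 90 degrees about z yields the y-axis line.
c, s = math.cos(math.pi / 4.0), math.sin(math.pi / 4.0)
q = ((c, 0.0, 0.0, s), (0.0, 0.0, 0.0, 0.0))
l2, m2 = dq_transform_line(q, (1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```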
Transform a 3D line with a unit dual quaternion.
Instance represents: Unit dual quaternion representing the transformation.
Format of the line parameters. Default: "moment"
X component of the direction vector of the line.
Y component of the direction vector of the line.
Z component of the direction vector of the line.
X component of the moment vector or a point on the line.
Y component of the moment vector or a point on the line.
Z component of the moment vector or a point on the line.
X component of the direction vector of the transformed line.
Y component of the direction vector of the transformed line.
Z component of the direction vector of the transformed line.
X component of the moment vector or a point on the transformed line.
Y component of the moment vector or a point on the transformed line.
Z component of the moment vector or a point on the transformed line.
Convert a 3D pose to a unit dual quaternion.
3D pose.
Unit dual quaternion.
Convert a 3D pose to a unit dual quaternion.
Modified instance represents: Unit dual quaternion.
3D pose.
Convert a screw into a dual quaternion.
Modified instance represents: Dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
Convert a screw into a dual quaternion.
Modified instance represents: Dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
Serialize a dual quaternion.
Instance represents: Dual quaternion.
Handle of the serialized item.
Normal return value
TRUE
FALSE
Stop processing
Call failed
for internal use
operator was canceled for hdevengine
operator was generally canceled
Wrong type of control parameter: 1
Wrong type of control parameter: 2
Wrong type of control parameter: 3
Wrong type of control parameter: 4
Wrong type of control parameter: 5
Wrong type of control parameter: 6
Wrong type of control parameter: 7
Wrong type of control parameter: 8
Wrong type of control parameter: 9
Wrong type of control parameter: 10
Wrong type of control parameter: 11
Wrong type of control parameter: 12
Wrong type of control parameter: 13
Wrong type of control parameter: 14
Wrong type of control parameter: 15
Wrong type of control parameter: 16
Wrong type of control parameter: 17
Wrong type of control parameter: 18
Wrong type of control parameter: 19
Wrong type of control parameter: 20
Wrong value of control parameter: 1
Wrong value of control parameter: 2
Wrong value of control parameter: 3
Wrong value of control parameter: 4
Wrong value of control parameter: 5
Wrong value of control parameter: 6
Wrong value of control parameter: 7
Wrong value of control parameter: 8
Wrong value of control parameter: 9
Wrong value of control parameter: 10
Wrong value of control parameter: 11
Wrong value of control parameter: 12
Wrong value of control parameter: 13
Wrong value of control parameter: 14
Wrong value of control parameter: 15
Wrong value of control parameter: 16
Wrong value of control parameter: 17
Wrong value of control parameter: 18
Wrong value of control parameter: 19
Wrong value of control parameter: 20
Wrong value of component
Wrong value of gray value component
Wrong number of values of ctrl.par.: 1
Wrong number of values of ctrl.par.: 2
Wrong number of values of ctrl.par.: 3
Wrong number of values of ctrl.par.: 4
Wrong number of values of ctrl.par.: 5
Wrong number of values of ctrl.par.: 6
Wrong number of values of ctrl.par.: 7
Wrong number of values of ctrl.par.: 8
Wrong number of values of ctrl.par.: 9
Wrong number of values of ctrl.par.: 10
Wrong number of values of ctrl.par.: 11
Wrong number of values of ctrl.par.: 12
Wrong number of values of ctrl.par.: 13
Wrong number of values of ctrl.par.: 14
Wrong number of values of ctrl.par.: 15
Wrong number of values of ctrl.par.: 16
Wrong number of values of ctrl.par.: 17
Wrong number of values of ctrl.par.: 18
Wrong number of values of ctrl.par.: 19
Wrong number of values of ctrl.par.: 20
Number of input objects too big
Wrong number of values of object par.: 1
Wrong number of values of object par.: 2
Wrong number of values of object par.: 3
Wrong number of values of object par.: 4
Wrong number of values of object par.: 5
Wrong number of values of object par.: 6
Wrong number of values of object par.: 7
Wrong number of values of object par.: 8
Wrong number of values of object par.: 9
Number of output objects too big
Wrong specification of parameter (error in file: xxx.def)
Initialize HALCON: reset_obj_db(Width,Height,Components)
Used number of symbolic object names too big
No license found
Lost connection to license server
No modules in license (no VENDOR_STRING)
No license for this operator
Time zone offset from GMT is > 24 hours
Vendor keys do not support this platform
Bad vendor keys
Unknown vendor key type
malloc() call failed
Vendor keys have expired
Second call to lc_init() (multiple jobs), and vendor keys do not support multiple jobs
Vendor key data not supplied
lmclient.h/liblmgr.a version mismatch
Networking software not available on this machine
Old vendor keys supplied
License key in license file does not match other data in file
Encryption handshake with daemon failed
'key' structure is incorrect type, or feature == NULL, or num_licenses == 0
System clock has been set back
Version argument is invalid floating point format
License server busy starting another copy of itself
Cannot establish a connection with a license server
Feature is queued. lc_status will determine when it is available
Vendor keys do not support this function
Checkout request filtered by the vendor-defined filter routine
Checkout exceeds MAX specified in options file
All licenses in use
No license server specified for counted license
Can not find feature in the license file
Server has different license file than client - client's license has feature, but server's does not
License file does not support a version this new
This platform not authorized by license - running on platform not included in PLATFORMS list
License server busy
Could not find license.dat
Invalid license file syntax
Cannot connect to a license server
No TCP "license" service exists
No socket connection to license manager server
Invalid host
Feature has expired
Invalid date format in license file
Invalid returned data from license server
Cannot find SERVER hostname in network database
Cannot read data from license server
Cannot write data to license server
Error in select system call
Feature checkin failure detected at license
Users are queued for this feature
License server does not support this version of this feature
Request for more licenses than this feature supports
Cannot read /dev/kmem
Cannot read /vmunix
Cannot find ethernet device
Cannot read license file
Feature not yet available
No such attribute
Clock difference too large between client and server
Feature database corrupted in daemon
Duplicate selection mismatch for this feature
User/host on EXCLUDE list for feature
User/host not on INCLUDE list for feature
Feature was never checked out
Invalid FLEXlm key data supplied
Clock setting check not available in daemon
Date too late for binary format
FLEXlm not initialized
Server did not respond to message
Request rejected by vendor defined filter
No FEATURESET line present in license file
Incorrect FEATURESET line in license file
Cannot compute FEATURESET line
socket() call failed
setsockopt() failed
Message checksum failure
Cannot read license file from server
Not a license administrator
lmremove request too soon
Attempt to read beyond the end of LF path
SYS$SETIMR call failed
Internal FLEXlm Error - Please report to Globetrotter Software
FLEXadmin API functions not available
Invalid PACKAGE line in license file
Server FLEXlm version older than client's
Incorrect number of USERS/HOSTS INCLUDED in options file
Server doesn't support this request
This license object already in use
Future license file format
Feature removed during lmreread
This feature is available in a different license pool
Network connect to THIS_HOST failed
Server node is down or not responding
The desired vendor daemon is down
The decimal format license is typed incorrectly
All licenses are reserved for others
Terminal Server remote client not allowed
Cannot borrow that long
License server out of network connections
Dongle not attached, or can't read dongle
Missing dongle driver
FLEXlock checkouts attempted
SIGN= attribute required
CRO not supported for this platform
BORROW failed
BORROW period has expired
FLOAT_OK license must have exactly one dongle hostid
Unable to delete local borrow info
Returning borrowed license early not enabled
Returning borrowed license on server failed
Checkout just a PACKAGE failed
Composite Hostid not initialized
An item needed for Composite Hostid missing or invalid
Borrowed license doesn't match any known server license
Error enabling event log
Event logging is disabled
Error writing to event log
Timeout
Bad message command
Error writing to socket
Failed to generate version specific license
Vers.-specific signatures not supported
License template contains redundant signature specifiers
Invalid V71_LK signature
Invalid V71_SIGN signature
Invalid V80_LK signature
Invalid V80_SIGN signature
Invalid V81_LK signature
Invalid V81_SIGN signature
Invalid V81_SIGN2 signature
Invalid V84_LK signature
Invalid V84_SIGN signature
Invalid V84_SIGN2 signature
License key required but missing from the certificate
Bad AUTH={} signature
TS record invalid
Cannot open TS
Invalid Fulfillment record
Invalid activation request received
No fulfillment exists in trusted storage which matches the request
Invalid activation response received
Can't return the fulfillment
Return would exceed max count(s)
No repair count left
Specified operation is not allowed
User/host on EXCLUDE list for entitlement
User/host not in INCLUDE list for entitlement
Activation error
Invalid date format in trusted storage
Message encryption failed
Message decryption failed
Bad filter context
SUPERSEDE feature conflict
Invalid SUPERSEDE_SIGN syntax
SUPERSEDE_SIGN does not contain a feature name and license signature
ONE_TS_OK is not supported on this Windows platform
Internal error
Only one terminal server remote client checkout is allowed for this feature
Internal error
Internal error
Internal error
More than one ethernet hostid not supported in composite hostid definition
The number of characters in the license file paths exceeds the permissible limit
Invalid TZ keyword syntax
Invalid time zone override specification in the client
The time zone information could not be obtained
License client time zone not authorized for license rights
Invalid syntax for VM_PLATFORMS keyword
Feature can be checked out from physical machine only
Feature can be checked out from virtual machine only
Vendor keys do not support Virtualization feature
Checkout request denied as it exceeds the MAX limit specified in the options file
Binding agent API - Internal error
Binding agent communication error
Invalid Binding agent version
Failed to load ServerQuery request
Failed to generate ServerQuery response
Invalid IP address used while overriding
No CodeMeter Runtime installed
Installed CodeMeter Runtime is too old
License is for wrong HALCON edition
License contains unknown FLAGS
HALCON preview version expired
Error codes concerning the HALCON core, 2100..2199
Wrong index for output object parameter
Wrong index for input object parameter
Wrong index for image object
Wrong number of region/image components
Wrong relation name
Access to undefined gray value component
Wrong image width
Wrong image height
Undefined gray value component
Inconsistent data of database (typing)
Wrong index for input control parameter
Data of database not defined (internal error)
legacy: Number of operators too big
User extension not properly installed
legacy: Number of packages too large
No such package installed
incompatible HALCON versions
incompatible operator interface
wrong extension package id
wrong operator id
wrong operator information id
Wrong Hctuple array type
Wrong Hcpar type
Wrong Hctuple index
Wrong version of file
Wrong handle type
wrong vector type
wrong vector dimension
Wrong (unknown) HALCON handle
Wrong HALCON id, no data available
HALCON id out of range
Handle is NULL
Handle was cleared
Handle type does not serialize
hlibxpi: init function of an extension that was built with XPI was not called
hlib did not find the init function of the extension it is connecting to -> old extension without hlibxpi, or the function export failed
Unresolved function in hlibxpi
HALCON extension requires a HALCON version that is newer than the connected hlib
The (major) version of the hlibxpi used by the connecting extension is too small for hlib
The major version of the hlibxpi used by hlib is too small
The minor version of the hlibxpi used by hlib is too small
Wrong major version in symbol struct (internal: should not happen)
HLib version could not be detected
Wrong hardware information file format
Wrong hardware information file version
Error while reading the hardware knowledge
Error while writing the hardware knowledge
Tag not found
No CPU Info
No AOP Info
No AOP Info for this HALCON variant
No AOP Info for this HALCON architecture
No AOP Info for specified Operator found
undefined AOP model
wrong tag derivate
internal error
hw check was canceled
Wrong access to global variable
Used global variable does not exist
Used global variable not accessible via GLOBAL_ID
HALCON server to terminate is still working on a job
No such HALCON software agent
Hardware check for parallelization not possible on a single-processor machine
(Seq.) HALCON does not support parallel hardware check (use Parallel HALCON instead)
Initialization of agent failed
Termination of agent failed
Inconsistent hardware description file
Inconsistent agent information file
Inconsistent agent knowledge file
The file with the parallelization information does not match the current HALCON version/revision
The file with the parallelization information does not match the currently used machine
Inconsistent knowledge base of HALCON software agent
Unknown communication type
Unknown message type for HALCON software agent
Error while saving the parallelization knowledge
Wrong type of work information
Wrong type of application information
Wrong type of experience information
Unknown name of HALCON software agent
Unknown name and communication address of HALCON software agent
CPU representative (HALCON software agent) not reachable
CPU refuses work
Description of scheduling resource not found
Not accessible function of HALCON software agent
Wrong type: HALCON scheduling resource
Wrong state: HALCON scheduling resource
Unknown parameter type: HALCON scheduling resource
Unknown parameter value: HALCON scheduling resource
Wrong post processing of control parameter
Error while trying to get time
Error while trying to get the number of processors
Error while accessing temporary file
message queue wait operation canceled
message queue overflow
Threads still wait on message queue while clearing it.
Invalid file format for a message
Dict does not contain requested key
Incorrect tuple length in dict
Incorrect tuple type in dict
Error while forcing a context switch
Error while accessing cpu affinity
Error while setting cpu affinity
wrong synchronization object
wrong operator call object
input object not initialized
input control not initialized
output object not initialized
output control not initialized
Creation of pthread failed
pthread-detach failed
pthread-join failed
Initialization of mutex variable failed
Deletion of mutex variable failed
Lock of mutex variable failed
Unlock of mutex variable failed
Failed to signal pthread condition var.
Failed to wait for pthread cond. var.
Failed to init pthread condition var.
Failed to destroy pthread condition var.
Failed to signal event.
Failed to wait for event.
Failed to init event.
Failed to destroy event.
Failed to create a tsd key.
Failed to set a thread specific data key.
Failed to get a tsd key.
Failed to free a tsd key.
Aborted waiting at a barrier
'Free list' is empty while scheduling
Communication partner not checked in
The communication system can't be started while running
Communication partner not checked in
Initialization of Barrier failed
Waiting at a barrier failed
Destroying of a barrier failed
Region completely outside of the image domain
Region (partially) outside of the definition range of the image
Intersected definition range region/image empty
Image with empty definition range
No common image point of two images
Wrong region for image (first row < 0)
Wrong region for image (column in last row >= image width)
Number of images unequal in input pars.
Image height too small
Image width too small
Internal error: Multiple call of HRLInitSeg()
Internal error: HRLSeg() not initialized
Wrong size of filter for Gauss
Filter size exceeds image size
Filter size even
Filter size too big
Region is empty
Domains of the input images differ
Row value of a coordinate > 2^15-1 (XL: 2^31 - 1)
Row value of a coordinate < -2^15 (XL: -2^31)
Column value of a coordinate > 2^15-1 (XL: 2^31 - 1)
Column value of a coordinate < -2^15 (XL: -2^31)
Wrong segmentation threshold
Unknown feature
Unknown gray value feature
Internal error in HContCut
Error in HContToPol: distance of points too big
Error in HContToPol: contour too long
Too many rows (IPImageTransform)
Scaling factor = 0.0 (IPImageScale)
Wrong range in transformation matrix
Internal error in IPvvf: no element free
Number of input objects is zero
At least one input object has an empty region
Operation allowed for rectangular images 2**n only
Too many relevant points (IPHysterese)
Number of labels in image too big
No labels with negative values allowed
Wrong filter size (too small ?)
Images with different image size
Target image too wide or too far on the right
Target image too narrow or too far on the left
Target image too high or too far down
Target image too low or too far up
Number of channels in the input parameters are different
Wrong color filter array type
Wrong color filter array interpolation
Homogeneous matrix does not represent an affine transformation
Inpainting region too close to the image border
source and destination differ in size
Too many features
Reflection axis undefined
Co-occurrence matrix: Too few columns for quantization
Co-occurrence matrix: Too few rows for quantization
Wrong number of columns
Wrong number of rows
Number has too many digits
Matrix is not symmetric
Matrix is too big
Wrong structure of file
Less than 2 matrices
Not enough memory
Can not read the file
Can not open file for writing
Too many lookup table colors
Too many Hough points (lines)
Target image has wrong height (not big enough)
Wrong interpolation mode
Region not compact or not connected
Wrong filter index for filter size 3
Wrong filter index for filter size 5
Wrong filter index for filter size 7
Wrong filter size; only 3/5/7
Number of suitable pixels too small to reliably estimate the noise
Different number of entries/exits in HContCut
Reference to contour is missing
Wrong XLD type
Border point is set to FG
Maximum contour length exceeded
Maximum number of contours exceeded
Contour too short for fetch_angle_xld
Regression parameters of contours already computed
Regression parameters of contours not yet entered!
Database: XLD object has been deleted
Database: Object has no XLD ID
Wrong number of contour points allocated
Contour attribute not defined
Ellipse fitting failed
Circle fitting failed
All points classified as outliers (ClippingFactor too small or used points not similar to primitive)
Quadrangle fitting failed
No points for at least one rectangle side
A contour point lies outside of the image
Not enough points for model fitting
No ARC/INFO world file
No ARC/INFO generate file
Unexpected end of file while reading DXF file
Cannot read DXF-group code from file
Inconsistent number of attributes per point in DXF file
Inconsistent number of attributes and names in DXF file
Inconsistent number of global attributes and names in DXF file
Cannot read attributes from DXF file
Cannot read global attributes from DXF file
Cannot read attribute names from DXF file
Wrong generic parameter name
Internal DXF I/O error: Wrong data type
Isolated point while contour merging
Constraints cannot be fulfilled
No segment in contour
Only one or no point in template contour
Syntax error in file for training
Maximum number of attributes per example exceeded
Not possible to open file for training
Too many data sets for training
Too many examples for one data set for training
Too many classes
Maximum number of cuboids exceeded
Not possible to open the classifier file
Error while saving the classifier
Not possible to open protocol file
A classifier with this name already exists
Maximum number of classifiers exceeded
Classifier name is too long (>= 20 characters)
No classifier with this name exists
Current classifier is not defined
Wrong ID in classification file
The version of the classifier is not supported
Serialized item does not contain a valid classifier
Text model does not contain a classifier yet (use set_text_model_param)
Adding new features is not possible, because the dataset has been normalized during training. Please create a new classifier and add all training data again or disable normalization during training.
Invalid file format for GMM training samples
The version of the GMM training samples is not supported
Wrong training sample file format
Invalid file format for Gaussian Mixture Model (GMM)
The version of the Gaussian Mixture Model (GMM) is not supported
Unknown error when training GMM
Collapsed covariance matrix
No samples for at least one class
Too few samples for at least one class
GMM is not trained
GMM has no training data
Serialized item does not contain a valid Gaussian Mixture Model (GMM)
Unknown output function
Target not in 0-1 encoding
No training samples stored in the classifier
Invalid file format for MLP training samples
The version of the MLP training samples is not supported
Wrong training sample format
MLP is not a classifier
Invalid file format for multilayer perceptron (MLP)
The version of the multilayer perceptron (MLP) is not supported
Wrong number of channels
Wrong number of MLP parameters
Serialized item does not contain a valid multilayer perceptron (MLP)
The number of image channels and the number of dimensions of the look-up table do not match
A look-up table can be built for 2 or 3 channels only
Cannot create look-up table. Please choose a larger 'bit_depth' or select the 'fast' 'class_selection'.
No training samples stored in the classifier
Invalid file format for SVM training samples
The version of the SVM training samples is not supported
Wrong training sample format
Invalid file format for support vector machine (SVM)
The version of the support vector machine (SVM) is not supported
Wrong number of classes
Chosen nu is too big
SVM Training failed
SVMs do not fit together
No SV in SVM to add to training
Kernel must be RBF
Not all classes contained in training data
SVM not trained
Classifier not trained
Serialized item does not contain a valid support vector machine (SVM)
Wrong rotation number
Wrong letter for Golay element
Wrong reference point
Wrong number of iterations
Morphology: system error
Wrong type of boundary
Morphology: Wrong number of input obj.
Morphology: Wrong number of output obj.
Morphology: Wrong number of input control parameter
Morphology: Wrong number of output control parameter
Morphology: Struct. element is infinite
Morphology: Wrong name for struct. elem.
Wrong number of run length rows (chords): smaller than 0
Number of chords too big; increase 'current_runlength_number' using set_system
Run length row with negative length
Run length row >= image height
Run length row < 0
Run length column >= image width
Run length column < 0
For CHORD_TYPE: Number of row too big
For CHORD_TYPE: Number of row too small
For CHORD_TYPE: Number of column too big
Exceeding the maximum number of run lengths during automatic expansion
Region->compl is neither TRUE nor FALSE
Region->max_num < Region->num
Number of chords too big for num_max
Operator cannot be implemented for complemented regions
Image width < 0
Image width >= MAX_FORMAT
Image height <= 0
Image height >= MAX_FORMAT
Image width <= 0
Image height <= 0
Too many segments
INT8 images are available on 64 bit systems only
Point at infinity cannot be converted to a Euclidean point
Covariance matrix could not be determined
RANSAC algorithm didn't find enough point correspondences
RANSAC algorithm didn't find enough point correspondences
Internal diagnosis: fallback method had to be used
Projective transformation is singular
Mosaic is under-determined
Input covariance matrix is not positive definite
The number of input points is too large.
Inconsistent number of point correspondences.
No path from reference image to one or more images.
Image with specified index does not exist.
Matrix is not a camera matrix.
Skew is not zero.
Illegal focal length.
Kappa is not zero.
It is not possible to determine all parameters in the variable case.
No valid implementation selected.
Kappa can only be determined with the gold-standard method for fixed camera parameters.
Conflicting number of images and projection mode.
Error in projection: Point not in any cube map.
No solution found.
Tilt is not zero.
Illegal combination of parameters and estimation method.
No suitable contours found
No stable solution found
Unstable solution found
Not enough contours for calibration
Invalid file format for FFT optimization data
The version of the FFT optimization data is not supported
Optimization data was created with a different HALCON version (Standard HALCON / Parallel HALCON)
Storing of the optimization data failed
Serialized item does not contain valid FFT optimization data
Invalid disparity range for binocular_disparity_ms method
Epipoles are situated within the image domain
Fields of view of both cameras do not intersect each other
Rectification impossible
Wrong type of target_thickness parameter
Wrong type of thickness_tolerance parameter
Wrong type of position_tolerance parameter
Wrong type of sigma parameter
Wrong value of sigma parameter
Wrong type of threshold parameter
Wrong value of target_thickness parameter
Wrong value of thickness_tolerance parameter
Wrong value of position_tolerance parameter
Wrong value of threshold parameter
Wrong type of refinement parameter
Wrong value of refinement parameter
Wrong type of resolution parameter
Wrong type of resolution parameter
Wrong type of polarity parameter
Wrong type of polarity parameter
No sheet-of-light model available
Wrong input image size (width)
Wrong input image size (height)
Profile region does not fit the domain of definition of the input image
Calibration extent not set
Undefined disparity image
Undefined domain for disparity image
Undefined camera parameter
Undefined pose of the lightplane
Undefined pose of the camera coordinate system
Undefined transformation from the camera to the lightplane coordinate system
Undefined movement pose for xyz calibration
Wrong value of scale parameter
Wrong parameter name
Wrong type of parameter method
Wrong type of parameter ambiguity
Wrong type of parameter score
Wrong type of parameter calibration
Wrong type of parameter number_profiles
Wrong type of element in parameter camera_parameter
Wrong type of element in pose
Wrong value of parameter method
Wrong type of parameter min_gray
Wrong value of parameter ambiguity
Wrong value of parameter score_type
Wrong value of parameter calibration
Wrong value of parameter number_profiles
Wrong type of camera
Wrong number of values of parameter camera_parameter
Wrong number of values of pose
Calibration target not found
The calibration algorithm failed to find a valid solution.
Wrong type of parameter calibration_object
Invalid calibration object
No calibration object set
Invalid file format for sheet-of-light model
The version of the sheet-of-light model is not supported
Camera type not supported by calibrate_sheet_of_light_model
Parameter does not match the set 'calibration'
The gray values of the disparity image do not fit the height of the camera
Wrong texture inspection model type
Texture Model is not trained
Texture Model has no training data
Invalid file format for Texture inspection model
The version of the Texture inspection model is not supported
Wrong training sample file format
The version of the training sample file is not supported
At least one of the images is too small
The samples do not match the current texture model
No images within the texture model
The light source positions are linearly dependent
No sufficient image indication
Internal error: Function has equal signs in HZBrent
Kalman: Dimension n, m, or p has an undefined value
Kalman: File does not exist
Kalman: Error in file (row of dimension)
Kalman: Error in file (row of marking)
Error in file (value is not a float)
Kalman: Matrix A is missing in file
Kalman: Matrix C is missing in file
Kalman: Matrix Q is missing in file
Kalman: Matrix R is missing in file
Kalman: G or u is missing in file
Kalman: Covariance matrix is not symmetric
Kalman: Equation system is singular
Structured light model is not in persistent mode
The min_stripe_width is too large for the chosen pattern_width or pattern_height
The single_stripe_width is too large for the chosen pattern_width or pattern_height
min_stripe_width has to be smaller than single_stripe_width.
single_stripe_width is too small for min_stripe_width.
The SLM is not prepared for decoding.
The SLM does not contain the queried object.
The version of the structured light model is not supported
Invalid file format for a structured light model
Wrong pattern type
The SLM is not decoded.
Wrong model type
Object is an object tuple
Object has been deleted already
Wrong object ID
Object tuple has been deleted already
Wrong object tuple ID
Object tuple is an object
Object ID is NULL (0)
Object ID outside the valid range
Access to deleted image
Access to image with wrong key
Access to deleted region
Access to region with wrong key
Wrong value for image channel
Index too big
Index not defined
No OpenCL available
OpenCL error occurred
No compute devices available
No device implementation for this parameter
Out of device memory
Invalid work group shape
Invalid compute device
CUDA error occurred
cuDNN error occurred
cuBLAS error occurred
Set batch_size not supported
CUDA implementations not available
Unsupported version of cuDNN
Requested feature not supported by cuDNN
CUDA driver is out-of-date
Error occurred in HCPUDNN library
Training is unsupported with the selected runtime. Please switch to 'gpu' runtime.
CPU based inference is not supported on this platform
ACL error occurred
Internal visualization error
Unexpected color type
Number of color settings exceeded
Wrong (logical) window number
Error while opening the window
Wrong window coordinates
It is not possible to open another window
Device or operator not available
Unknown color
No window has been opened for desired action
Wrong filling mode for regions
Wrong gray value (0..255)
Wrong pixel value
Wrong line width
Wrong name of cursor
Wrong color table
Wrong representation mode
Wrong representation color
Wrong dither matrix
Wrong image transformation
Unsuitable image type for image transformation
Wrong zooming factor for image transformation
Wrong representation mode
Wrong code of device
Wrong number for father window
Wrong window size
Wrong window type
No current window has been set
Wrong color combination or range (RGB)
Wrong number of pixels set
Wrong value for comprise
set_fix with 1/4 image levels and static not valid
set_lut not valid in child windows
Number of concurrently used color tables is too big
Wrong device for window dump
Wrong window size for window dump
System variable DISPLAY not defined
Wrong thickness for window margin
System variable DISPLAY has been set wrong (<host>:0.0)
Too many fonts loaded
Wrong font name
No valid cursor position
Window is not a textual window
Window is not a image window
String too long or too high
Too little space on the right side of the window
Window is not suitable for the mouse
Only windows on the same machine are permitted here
Wrong mode while opening a window
Wrong window mode for operation
Operation not possible with fixed pixel
Color tables for 8 image levels only
Wrong mode for pseudo real colors
Wrong pixel value for LUT
Wrong image size for pseudo real colors
Error in procedure HRLUT
Wrong number of entries in color table for set_lut
Wrong values for image area
Wrong line pattern
Wrong number of parameters for line pattern
Wrong number of colors
Wrong value for mode of area creation
Spy window is not set (set_spy)
No file for spy has been set (set_spy)
Wrong parameter output depth (set_spy)
Wrong window size for window dump
Wrong color table: Wrong file name or query_lut()
Wrong color table: Empty string ?
With this hardware, only set_lut('default') is allowed
Error while calling online help
Row cannot be projected
Operation is unsuitable using a computer with fixed color table
Computer represents gray scales only
LUT of this display is full
Internal error: wrong color code
Wrong type for window attribute
Wrong name for window attribute
Negative height of area (or 0)
Negative width of area (or 0)
Window not completely visible
Font not allowed for this operation
Window was created in different thread
Drawing object already attached to another window
Internal error: only RGB-Mode
No more (image-)windows available
Depth was not stored with window
Object index was not stored with window
Operator does not support primitives without point coordinates
Maximum image size for Windows Remote Desktop exceeded
No OpenGL support available
No depth information available
OpenGL error
Required framebuffer object is unsupported
OpenGL accelerated hidden surface removal not supported on this machine
Invalid window parameter
Invalid value for window parameter
Unknown mode
No image attached
Invalid navigation mode
Internal file error
Error while file synchronisation
Insufficient rights
Bad file descriptor
File not found
Error while writing image data (sufficient memory ?)
Error while writing image descriptor (sufficient memory ?)
Error while reading image data (format of image too small ?)
Error while reading image data (format of image too big ?)
Error while reading image descriptor: File too small
Image matrices are different
Help file not found (setenv HALCONROOT)
Help index not found (setenv HALCONROOT)
File <standard_input> can not be closed
<standard_output/error> can not be closed
File can not be closed
Error while writing to file
Maximum number of files exceeded
Wrong file name
Error while opening the file
Wrong file mode
Wrong type for pixel (e.g. byte)
Wrong image width (too big ?)
Wrong image height (too big ?)
File already exhausted before reading an image
File exhausted before terminating the image
Wrong value for resolution (dpi)
Wrong output image size (width)
Wrong output image size (height)
Wrong number of parameter values: Format description
Wrong parameter name for operator
Wrong slot name for parameter
Operator class is missing in help file
Wrong or inconsistent help/ *.idx or help/ *.sta
File help/ *.idx not found
File help/ *.sta not found
Inconsistent file help/ *.sta
No explication file (.exp) found
No file found in known graphic format
Wrong graphic format
Inconsistent file halcon.num
File with extension 'tiff' is not a TIFF file
Wrong file format
gnuplot could not be started
Output file for gnuplot could not be opened
Not a valid gnuplot output stream
No PNM format
Inconsistent or old help file
Invalid file encoding
File not open
No files in use so far (none opened)
Invalid file format for regions
Error while reading region data: Format of region too big.
Encoding for binary files not allowed
Serial port not open
No serial port available
Could not open serial port
Could not close serial port
Could not get serial port attributes
Could not set serial port attributes
Wrong baud rate for serial connection
Wrong number of data bits for serial connection
Wrong flow control for serial connection
Could not flush serial port
Error during write to serial port
Error during read from serial port
Serialized item does not contain valid regions.
The version of the regions is not supported.
Serialized item does not contain valid images.
The version of the images is not supported.
Serialized item does not contain valid XLD objects.
The version of the XLD objects is not supported.
Serialized item does not contain valid objects.
The version of the objects is not supported.
XLD object data can only be read by HALCON XL
Unexpected object detected
File has not been opened in text format
File has not been opened in binary file format
Cannot create directory
Cannot remove directory
Cannot get current directory
Cannot set current directory
Need to call XInitThreads()
No image acquisition device opened
IA: wrong color depth
IA: wrong device
IA: determination of video format not possible
IA: no video signal
Unknown image acquisition device
IA: failed grabbing of an image
IA: wrong resolution chosen
IA: wrong image part chosen
IA: wrong pixel ratio chosen
IA: handle not valid
IA: instance not valid (already closed?)
Image acquisition device could not be initialized
IA: external triggering not supported
IA: wrong camera input line (multiplex)
IA: wrong color space
IA: wrong port
IA: wrong camera type
IA: maximum number of acquisition device classes exceeded
IA: device busy
IA: asynchronous grab not supported
IA: unsupported parameter
IA: timeout
IA: invalid gain
IA: invalid field
IA: invalid parameter type
IA: invalid parameter value
IA: function not supported
IA: incompatible interface version
IA: could not set parameter value
IA: could not query parameter setting
IA: parameter not available in current configuration
IA: device could not be closed properly
IA: camera configuration file could not be opened
IA: unsupported callback type
IA: device lost
IA: grab aborted
IO: timeout
IO: incompatible interface version
IO: handle not valid
IO: device busy
IO: insufficient user rights
IO: device or channel not found
IO: invalid parameter type
IO: invalid parameter value
IO: invalid parameter number
IO: unsupported parameter
IO: parameter not available in current configuration
IO: function not supported
IO: maximum number of dio classes exceeded
IO: driver of io device not available
IO: operation aborted
IO: invalid data type
IO: device lost
IO: could not set parameter value
IO: could not query parameter setting
IO: device could not be closed properly
Image type is not supported
Invalid pixel format passed to filter function
Internal JpegXR error.
Syntax error in output format string
Maximum number of channels exceeded
Unspecified error in JXR library
Bad magic number in JXR library
Feature not implemented in JXR library
File read/write error in JXR library
Bad file format in JXR library
Error while closing the image file
Error while opening the image file
Premature end of the image file
Image dimensions too large for this file format
Image too large for this HALCON version
Too many iconic objects for this file format
File format is unsupported
File is not a PCX file
Unknown encoding
More than 4 image planes
Wrong magic number in color table
Wrong number of bytes in span
Wrong number of bits per pixel
Wrong number of planes
File is not a GIF file
GIF: Wrong version
GIF: Wrong descriptor
GIF: Wrong color table
GIF: Premature end of file
GIF: Wrong number of images
GIF: Wrong image extension
GIF: Wrong left top width
GIF: Cyclic index of table
GIF: Wrong image data
File is not a Sun raster file
Wrong header
Wrong image width
Wrong image height
Wrong color map
Wrong image data
Wrong type of pixel
Wrong type of pixel
Wrong visual class
Wrong X10 header
Wrong X11 header
Wrong X10 colormap
Wrong X11 colormap
Wrong pixmap
Unknown version
Error while reading an image
Error while reading a file
Wrong colormap
Too many colors
Wrong photometric interpretation
Wrong photometric depth
Image is not a binary file
Unsupported TIFF format
Wrong file format specification
TIFF file is corrupt
Required TIFF tag is missing
File is not a BMP file
Premature end of file
Incomplete header
Unknown bitmap format
Unknown compression format
Wrong color table
Write error on output
File does not contain a binary image
Wrong number of components in image
Unknown error from libjpeg
Not implemented feature in libjpeg
File access error in libjpeg
Tmp file access error in libjpeg
Memory error in libjpeg
Error in input image
File is not a PNG file
Unknown interlace type
Unsupported color type
Image is not a binary file
Image size too big
File corrupt
Image precision too high
Error while encoding
Image size too big
File does not contain only images
Socket cannot be set to blocking mode
Socket cannot be set to non-blocking mode
Received data is no tuple
Received data is no image
Received data is no region
Received data is no xld object
Error while reading from socket
Error while writing to socket
Illegal number of bytes with get_rl
Buffer overflow in read_data
Socket can not be created
Bind on socket failed
Socket information is not available
Socket cannot listen for incoming connections
Connection could not be accepted
Connection request failed
Hostname could not be resolved
Unknown tuple type on socket
Timeout occurred on socket
No more sockets available
Socket is not initialized
Invalid socket
Socket is NULL
Received data type is too large
Wrong socket type.
Received data is not packed.
Socket parameter operation failed.
The data does not match the format specification.
Invalid format specification.
Received data is no serialized item
Too many contours/polygons for this file format
The version of the quaternion is not supported
Serialized item does not contain a valid quaternion
The version of the homogeneous matrix is not supported
Serialized item does not contain a valid homogeneous matrix
The version of the homogeneous 3D matrix is not supported
Serialized item does not contain a valid homogeneous 3D matrix
The version of the tuple is not supported
Serialized item does not contain a valid tuple
Tuple data can only be read on 64-bit systems
The version of the camera parameters (pose) is not supported
Serialized item does not contain valid camera parameters (pose)
The version of the internal camera parameters is not supported
Serialized item does not contain valid internal camera parameters
The version of the dual quaternion is not supported
Serialized item does not contain a valid dual quaternion
Access to undefined memory area
Not enough memory available
Memory partition on heap has been overwritten
HAlloc: 0 bytes requested
Tmp-memory management: Attempt to free memory although nothing had been allocated
Tmp-memory management: Null pointer while freeing
Tmp-memory management: Could not find memory element
Memory management: wrong memory type
Not enough video memory available
No memory block allocated at last
System parameter for memory-allocation inconsistent
Invalid alignment
Process creation failed
Wrong index for output control par.
Wrong number of values: Output control parameter
Wrong type: Output control parameter
Wrong data type for object key (input objects)
Integer range has been exceeded
Inconsistent HALCON version
Not enough memory for strings allocated
Internal error: Proc is NULL
Unknown symbolic object key (input obj.)
Wrong number of output object parameter
Output type <string> expected
Output type <long> expected
Output type <float> expected
Object parameter is a zero pointer
Tuple has been deleted; values are no longer valid
CNN: Internal data error
CNN: Invalid memory type
CNN: Invalid data serialization
CNN: Implementation not available
CNN: Wrong number of input data
CNN: Invalid implementation type
CNN: Training is not supported in the current environment.
For this operation a GPU with certain minimal requirements is required. See installation guide for details.
For this operation the CUDA library needs to be available. (See installation guide for details.)
OCR File: Error while reading classifier
Wrong generic parameter name
One of the parameters returns several values and has to be used exclusively
Wrong generic parameter name
Invalid labels.
OCR File: Wrong file version
Invalid classes: At least one class appears twice
For this operation the cuBLAS library needs to be available. (See installation guide for details.)
For this operation the CUDNN library needs to be available. (See installation guide for details.)
File 'find_text_support.hotc' not found (please place this file in the ocr subdirectory of the root directory of your HALCON installation or in the current working directory)
Training step failed. This might be caused by unsuitable training parameters
Weights in Graph have been overwritten previously and are lost.
New input size is too small to produce meaningful features
Result is not available.
New number of channels must be either 1 or 3.
New input number of channels cannot be set to 3 if the network is specified for 1 channel
Device batch size larger than batch size.
Invalid specification of a parameter.
Memory size exceeds maximal allowed value.
New batch size causes integer overflow
Invalid input image size for detection model
Invalid parameter value for current layer
Invalid parameter num for current layer
Invalid parameter type for current layer
Graph: Internal error
Graph: Invalid data serialization
Graph: Invalid index
HCNNGraph: Internal error
HCNNGraph: Invalid data serialization
HCNNGraph: Invalid layer specification
HCNNGraph: Graph not properly initialized
CNN-Graph: Invalid memory type
CNN-Graph: Invalid number of layers
CNN-Graph: Invalid index
CNN-Graph: Invalid specification status
CNN-Graph: Graph is not allowed to be changed after initialization
CNN-Graph: Missing preprocessing
CNN-Graph: Invalid vertex degree
CNN-Graph: Invalid output shape
CNN-Graph: Invalid specification
CNN-Graph: Invalid graph definition
CNN-Graph: Architecture not suitable for the adaptation of the number of output classes
CNN-Graph: Architecture not suitable for the adaptation of the image size
DL: Error writing file
DL: Error reading file
DL: Wrong file version
DL: Inputs missing in input dict
DL: Inputs have incorrect batch size
DL: Invalid layer name
DL: Duplicate layer name
DL: Invalid output layer
DL: Parameter is not available
DL: Tuple inputs have incorrect length
DL: Tuple inputs have incorrect type
DL: Some inputs have incorrect values
DL: Some class ids are not unique
DL: Some class ids are invalid
DL: Input data of class id conversion is invalid.
DL: Type already defined
DL: Cannot identify inference inputs.
DL: Some class ids overlap with ignore class ids.
DL: Wrong number of output layers
DL: Batch size multiplier needs to be greater than 0
DL: Inputs have incorrect batch size. The number of needed inputs is defined by batch_size * batch_size_multiplier
DL: Wrong scales during FPN creation
DL: Backbone unusable for FPN creation
DL: Backbone feature maps not divisible by 2
DL: Internal error using anchors
DL: Invalid detector parameter
DL: Invalid detector parameter value
DL: Invalid docking layer
DL: Invalid detection type
apply_dl_model: no default outputs allowed
DLModule is not loaded
Unknown operator name
register_comp_used is not activated
Unknown operator class
convol/mask: Error while opening file
convol/mask: Premature end of file
convol/mask: Conversion error
convol/mask: Wrong row-/column number
convol/mask: Mask size overflow
convol/mask: Too many elements entered
convol: Wrong margin type
convol: No mask object has got empty region
convol: Weight factor is 0
convol: Inconsistent number of weights
rank: Wrong rank value
convol/rank: Error while handling margin
Error while parsing filter mask file
Wrong number of coefficients for convolution (sigma too big?)
No valid ID for data set
No data set active (set_bg_esti)
ID already used for data set
No data set created (create_bg_esti)
Not possible to pass an object list
Image has a different size than the background image in the data set
Update region is bigger than the background image
Number of statistic data sets is too small
Wrong value for adapt mode
Wrong value for frame mode
Number of point correspondences too small
Invalid method
Maximum number of fonts exceeded
Wrong ID (Number) for font
OCR internal error: wrong ID
OCR not initialised: no font was read in
No font activated
OCR internal error: Wrong threshold in angle determination
OCR internal error: Wrong attribute
The version of the OCR classifier is not supported
OCR File: Inconsistent number of nodes
OCR File: File too short
OCR: Internal error 1
OCR: Internal error 2
Wrong type of OCR tool (no 'box' or 'net')
The version of the OCR training characters is not supported
Image too large for training file
Region too large for training file
Protected OCR training file
Protected OCR training file: wrong password
Serialized item does not contain a valid OCR classifier
OCR training file concatenation failed: identical input and output files
Invalid file format for MLP classifier
The version of the MLP classifier is not supported
Serialized item does not contain a valid MLP classifier
Invalid file format for SVM classifier
The version of the SVM classifier is not supported
Serialized item does not contain a valid SVM classifier
Invalid file format for k-NN classifier
Serialized item does not contain a valid k-NN classifier
Invalid file format for CNN classifier
The version of the CNN classifier is not supported
Serialized item does not contain a valid CNN classifier
Result name is not available for this mode
OCV system not initialized
The version of the OCV tool is not supported
Wrong name for an OCV object
Training has already been applied
No training has been applied
Serialized item does not contain a valid OCV tool
Wrong number of function points
List of values is not a function
Wrong ordering of values (not ascending)
Illegal distance of function points
Function is not monotonic.
Wrong function type.
The input points could not be arranged in a regular grid
Error while creating the output map
Auto rotation failed
Mark segmentation failed
Contour extraction failed
No finder pattern found
At least 3 calibration points have to be indicated
Inconsistent finder pattern positions
No calibration table found
Error while reading calibration table description file
Minimum threshold while searching for ellipses
Read error / format error in calibration table description file
Error in projection: s_x = 0 or s_y = 0 or z = 0
Error in inverse projection
Not possible to open camera parameter file
Format error in file: No colon
Format error in file: 2nd colon is missing
Format error in file: Semicolon is missing
Not possible to open camera parameter (pose) file
Format error in camera parameter (pose) file
Not possible to open calibration target description file
Not possible to open PostScript file of calibration target
Error while normalizing the vector
Fitting of calibration target failed
No next mark found
Normal equation system is not solvable
Average quadratic error is too large for 3D position of mark
Non-elliptic contour
Wrong parameter value in slvand()
Wrong function results in slvand()
Invalid distance of marks in calibration target description file
Specified flag for degree of freedom not valid
Error did not fall below the minimum threshold
Wrong type in Pose (rotation / translation)
Image size does not match the measurement in camera parameters
Point could not be projected into linescan image
Diameter of calibration marks could not be determined
Orientation of calibration plate could not be determined
Calibration plate does not lie completely inside the image
Wrong number of calibration marks extracted
Unknown name of parameter group
Focal length must be non-negative
Function not available for cameras with telecentric lenses
Function not available for line scan cameras
Ellipse is degenerated to a point
No orientation mark found
Camera calibration did not converge
Function not available for cameras with hypercentric lenses
Point cannot be distorted.
Wrong edge filter.
Pixel size must not be negative or zero
Tilt is in the wrong range
Rot is in the wrong range
Camera parameters are invalid
Focal length must be positive
Magnification must be positive
Illegal image plane distance
Model not optimized yet - no results available
Auxiliary model results not available
Setup not 'visibly' interconnected
Camera parameter mismatch
Camera type mismatch
Camera type not supported
Invalid camera ID
Invalid calibration object ID
Invalid calibration object instance ID
Undefined camera
Repeated observation index
Undefined calibration object description
Invalid file format for calibration data model
The version of the calibration data model is not supported
Zero motion in line scan camera
Multiple cameras and calibration objects not supported for all camera types
Data required for legacy calibration is incomplete
Invalid file format for camera setup model
The version of the camera setup model is not supported
Full HALCON calibration plate description required
Invalid observation ID
Serialized item does not contain a valid camera setup model
Serialized item does not contain a valid calibration data model
Invalid tool pose id
Undefined tool pose
Invalid calib data model type
The camera setup model contains an uninitialized camera
The hand-eye algorithm failed to find a solution.
Invalid observation pose
Not enough calibration object poses
Undefined camera type
No camera pair set by set_stereo_model_image_pairs
No reconstructed point is visible for coloring
No camera pair yields reconstructed points (please check parameters of disparity method or bounding box)
Partitioning of bounding box is too fine (please adapt the parameter 'resolution' or the bounding box)
Invalid disparity range for binocular_disparity_ms method
Invalid parameter for binocular method
Invalid stereo model type
Stereo model is not in persistent mode
Invalid bounding box
Stereo reconstruction: image sizes must correspond to camera setup
Bounding box is behind the baseline
Ambiguous calibration: Please recalibrate with improved input data!
Pose of calibration plate not determined
Calibration failed: Please check your input data and calibrate again!
No observation data supplied!
The calibration object has to be seen at least once by every camera, if less than four cameras are used.
Invalid file format for template
The version of the template is not supported
Error during changing the file mode
Inconsistent match file: Coordinates out of range
The image(s) is not a pyramid
Number of template points too small
Template data can only be read by HALCON XL
Serialized item does not contain a valid NCC model
Serialized item does not contain a valid template
Number of shape model points too small
Gray and color shape models mixed
Shape model data can only be read by HALCON XL
Shape model was not created from XLDs
Serialized item does not contain a valid shape model
Shape model contour too near to clutter region
Shape model does not contain clutter parameters
Shape models are not of the same clutter type
Shape model has an invalid clutter contrast
Initial components have different region types
Solution of ambiguous matches failed
Computation of the incomplete gamma function not converged
Too many nodes while computing the minimum spanning arborescence
Component training data can only be read by HALCON XL
Component model data can only be read by HALCON XL
Serialized item does not contain a valid component model
Serialized item does not contain a valid component training result
Size of the training image and the variation model differ
Variation model has not been prepared for segmentation
Invalid variation model training mode
Invalid file format for variation model
The version of the variation model is not supported
Training data has been cleared
Serialized item does not contain a valid variation model
No more measure objects available
Measure object is not initialized
Invalid measure object
Measure object is NULL
Measure object has wrong image size
Invalid file format for measure object
The version of the measure object is not supported
Measure object data can only be read by HALCON XL
Serialized item does not contain a valid measure object
Metrology model is not initialized
Invalid metrology object
Not enough valid measures for fitting the metrology object
Invalid file format for metrology model
The version of the metrology model is not supported
Fuzzy function is not set
Serialized item does not contain a valid metrology model
Camera parameters are not set
Pose of the measurement plane is not set
Mode of metrology model cannot be set since an object has already been added
If the pose of the metrology object has been set several times, the operator is no longer allowed
All objects of a metrology model must have the same world pose and camera parameters.
Input type of metrology model does not correspond with the current input type
Dynamic library could not be opened
Dynamic library could not be closed
Symbol not found in dynamic library
Interface library not available
Not enough information for rad. calib.
Wrong number of modules
Wrong number of elements
Unknown character (for this code)
Wrong name for attribute in barcode descriptor
Wrong thickness of element
No region found
Wrong type of bar code
Empty model list
Training cannot be done for multiple bar code types
Cannot get bar code type specific parameter with get_bar_code_param. Use get_bar_code_param_specific
Cannot get this object for multiple bar code types. Try again with a single bar code type
Wrong binary (file) format
Wrong version of binary file
The model must be in persistent mode to deliver the required object/result
Incorrect index of scanline's gray values
Neither find_bar_code nor decode_bar_code_rectangle2 has been called in 'persistent' mode on this model
Specified code type is not supported
Wrong foreground specified
Wrong matrix size specified
Wrong symbol shape specified
Wrong generic parameter name
Wrong generic parameter value
Wrong symbol printing mode
Symbol region too near to image border
No rectangular module boundaries found
Couldn't identify symbol finder
Symbol region with wrong dimension
Classification failed
Decoding failed
Reader programming not supported
General 2d data code error
Corrupt signature of 2d data code handle
Invalid 2d data code handle
List of 2d data code models is empty
Access to uninitialized (or not persistent) internal data
Invalid 'Candidate' parameter
It is not possible to return more than one parameter for several candidates
One of the parameters returns several values and has to be used exclusively for a single candidate
Parameter for default settings must be the first in the parameter list
Unexpected 2d data code error
Invalid parameter value
Unknown parameter name
Invalid 'polarity'
Invalid 'symbol_shape'
Invalid symbol size
Invalid module size
Invalid 'module_shape'
Invalid 'orientation'
Invalid 'contrast_min'
Invalid 'measure_thresh'
Invalid 'alt_measure_red'
Invalid 'slant_max'
Invalid 'L_dist_max'
Invalid 'L_length_min'
Invalid module gap
Invalid 'default_parameters'
Invalid 'back_texture'
Invalid 'mirrored'
Invalid 'classificator'
Invalid 'persistence'
Invalid model type
Invalid 'module_roi_part'
Invalid 'finder_pattern_tolerance'
Invalid 'mod_aspect_max'
Invalid 'small_modules_robustness'
Invalid 'contrast_tolerance'
Invalid header in 2d data code model file
Invalid code signature in 2d data code model file
Corrupted line in 2d data code model file
Invalid module aspect ratio
Wrong number of layers
Wrong data code model version
Serialized item does not contain a valid 2D data code model
Wrong binary (file) format
Invalid parameter value
Invalid 'num_levels'
Invalid 'optimization'
Invalid 'metric'
Invalid 'min_face_angle'
Invalid 'min_size'
Invalid 'model_tolerance'
Invalid 'fast_pose_refinement'
Invalid 'lowest_model_level'
Invalid 'part_size'
The projected model is too large (increase the value for DistMin or the image size in CamParam)
Invalid 'opengl_accuracy'
Invalid 'recompute_score'
Invalid 'longitude_min'
Invalid 'longitude_max'
Invalid 'latitude_min'
Invalid 'latitude_max'
Invalid 'cam_roll_min'
Invalid 'cam_roll_max'
Invalid 'dist_min'
Invalid 'dist_max'
Invalid 'num_matches'
Invalid 'max_overlap'
Invalid 'pose_refinement'
Invalid 'cov_pose_mode'
Invalid 'outlier_suppression'
Invalid 'border_model'
Pose is not well-defined
Invalid file format for 3D shape model
The version of the 3D shape model is not supported
3D shape model can only be read by HALCON XL
3D object model does not contain any faces
Serialized item does not contain a valid 3D shape model
Invalid 'union_adjacent_contours'
Invalid file format for descriptor model
The version of the descriptor model is not supported
Invalid 'radius'
Invalid 'check_neighbor'
Invalid 'min_check_neighbor_diff'
Invalid 'min_score'
Invalid 'sigma_grad'
Invalid 'sigma_smooth'
Invalid 'alpha'
Invalid 'threshold'
Invalid 'depth'
Invalid 'number_trees'
Invalid 'min_score_descr'
Invalid 'patch_size'
Invalid 'tilt'
Invalid 'guided_matching'
Invalid 'subpix'
Too few feature points can be found
Invalid 'min_rot'
Invalid 'max_rot'
Invalid 'min_scale'
Invalid 'max_scale'
Invalid 'mask_size_grd'
Invalid 'mask_size_smooth'
Model broken
Invalid 'descriptor_type'
Invalid 'matcher'
Too many point classes - cannot be written to file
Serialized item does not contain a valid descriptor model
Function not implemented on this machine
Image to process has wrong gray value type
Wrong image component
Undefined gray values
Wrong image format for operation (too big or too small)
Wrong number of image components for image output
String is too long (max. 1024 characters)
Wrong pixel type for this operation
Operation not realized yet for this pixel type
Image is no color image with three channels
Image acquisition devices are not supported in the demo version
Packages are not supported in the demo version
Internal Error: Unknown value
Wrong parameter for this operation
Image domain too small
Draw operator has been canceled
Error during matching of regular expression
Operator is not available in the student version of HALCON
Packages are not available in the student version of HALCON
The selected image acquisition device is not available in the student version of HALCON
No data points available
Object type is not supported.
Operator is disabled.
Too many unknown variables in linear equation system
No (unique) solution for the linear equation system
Too few equations in linear equation system
Points do not define a line
Matrix is not invertible
Singular value decomposition did not converge
Matrix has too few rows for singular value decomposition
Eigenvalue computation did not converge
Eigenvalue computation did not converge
Matrix is singular
Function matching did not converge
Input matrix undefined
Input matrix with wrong dimension
Input matrix is not quadratic
Matrix operation failed
Matrix is not positive definite
Matrix element division by 0
Matrix is not an upper triangular matrix
Matrix is not a lower triangular matrix
Matrix element is negative
Matrix file: Invalid character
Matrix file: matrix incomplete
Invalid file format for matrix
Resulting matrix has complex values
Wrong value in matrix of exponents
The version of the matrix is not supported
Serialized item does not contain a valid matrix
Internal Error: Wrong Node
Inconsistent red black tree
Internal error
Number of points too small
First 3 points are collinear
Identical points in triangulation
Array not allocated large enough
Triangle is degenerate
Inconsistent triangulation
Self-intersecting polygon
Inconsistent polygon data
Ambiguous great circle arc intersection
Ambiguous great circle arc
Illegal parameter
Not enough points for planar triangular meshing
The first three points of the triangular meshing are collinear
Planar triangular meshing contains identical input points
Invalid points for planar triangular meshing
Internal error: allocated array too small for planar triangular meshing
Internal error: planar triangular meshing inconsistent
Node index outside triangulation range
Local inconsistencies for all points with valid neighbors (parameters only allow few valid neighborhoods or point cloud not subsampled)
Eye point and reference point coincide
Real part of the dual quaternion has length 0
Timeout occurred
Invalid 'timeout'
Timeout occurred after cached transformations have been freed (internal error)
Invalid 'sub_object_size'
Invalid 'min_size'
Invalid number of least-squares iterations
Invalid 'angle_step'
Invalid 'scale_r_step'
Invalid 'scale_c_step'
Invalid 'max_angle_distortion'
Invalid 'max_aniso_scale_distortion'
Invalid 'min_size'
Invalid 'cov_pose_mode'
Model contains no calibration information
Generic parameter name does not exist
camera has different resolution than image
Invalid file format for deformable model
The version of the deformable model is not supported
Invalid 'deformation_smoothness'
Invalid 'expand_border'
Model origin outside of axis-aligned bounding rectangle of template region
Serialized item does not contain a valid deformable model
Object model has no points
Object model has no faces
Object model has no normals
Invalid file format for 3D surface model
The version of the 3D surface model is not supported
Serialized item does not contain a valid 3D surface model
Poses generate too many symmetries
Invalid 3D file
Invalid 3D Object Model
Unknown 3D file type
The version of the 3D object model is not supported
Required attribute is missing
Required attribute point_coord is missing
Required attribute point_normal is missing
Required attribute face_triangle is missing
Required attribute line_array is missing
Required attribute f_trineighb is missing
Required attribute face_polygon is missing
Required attribute xyz_mapping is missing
Required attribute o_primitive is missing
Required attribute shape_model is missing
Required extended attribute missing in 3D object model
Serialized item does not contain a valid 3D object model
Primitive in 3D object model has no extended data
Operation invalid, 3D object model already contains triangles
Operation invalid, 3D object model already contains lines
Operation invalid, 3D object model already contains faces or polygons
In a global registration an input object has no neighbors
All components of points must be set at once
All components of normals must be set at once
Number of values doesn't correspond to number of already existing points
Number of values doesn't correspond to number of already existing normals
Number of values doesn't correspond to already existing triangulation
Number of values doesn't correspond to length of already existing polygons
Number of values doesn't correspond to length of already existing polylines
Number of values doesn't correspond to already existing 2D mapping
Number of values doesn't correspond to already existing extended attribute
Per-face intensity is used with point attribute
Attribute is not (yet) supported
No point within bounding box
distance_in_front is smaller than the resolution
The minimum thickness is smaller than the surface tolerance
Triangles of the 3D object model are not suitable for this operator
Too few suitable 3D points in the 3D object model
Not a valid serialized item file
Serialized item: premature end of file
Invalid 'image_resize_method'
Invalid 'image_resize_value'
Invalid 'rating_method'
At least one type of image information must be used
Sample identifier does not contain color information
Sample identifier does not contain texture information
Sample image does not contain enough information
Sample identifier does not contain unprepared data (use add_sample_identifier_preparation_data)
Sample identifier has not been prepared yet (use prepare_sample_identifier)
Sample identifier does not contain untrained data (use add_sample_identifier_training_data)
Sample identifier has not been trained yet (use train_sample_identifier)
Sample identifier does not contain result data
Sample identifier must contain at least two training objects (use add_sample_identifier_training_data)
More than one user thread still uses HALCON resources during finalization
User defined error codes must be larger than this value
No license found
Lost connection to license server
No modules in license (no VENDOR_STRING)
No license for this operator
Time zone offset from GMT is > 24 hours
Vendor keys do not support this platform
Bad vendor keys
Unknown vendor key type
malloc() call failed
Vendor keys have expired
Second call to lc_init() (multiple jobs), and vendor keys do not support multiple jobs
Vendor key data not supplied
lmclient.h/liblmgr.a version mismatch
Networking software not available on this machine
Old vendor keys supplied
License key in license file does not match other data in file
Encryption handshake with daemon failed
'key' structure is incorrect type, or feature == NULL, or num_licenses == 0
System clock has been set back
Version argument is invalid floating point format
License server busy starting another copy of itself
Cannot establish a connection with a license server
Feature is queued. lc_status will determine when it is available
Vendor keys do not support this function
Checkout request filtered by the vendor-defined filter routine
Checkout exceeds MAX specified in options file
All licenses in use
No license server specified for counted license
Can not find feature in the license file
Server has different license file than client - client's license has feature, but server's does not
License file does not support a version this new
This platform not authorized by license - running on platform not included in PLATFORMS list
License server busy
Could not find license.dat
Invalid license file syntax
Cannot connect to a license server
No TCP "license" service exists
No socket connection to license manager server
Invalid host
Feature has expired
Invalid date format in license file
Invalid returned data from license server
Cannot find SERVER hostname in network database
Cannot read data from license server
Cannot write data to license server
Error in select system call
Feature checkin failure detected at license server
Users are queued for this feature
License server does not support this version of this feature
Request for more licenses than this feature supports
Cannot read /dev/kmem
Cannot read /vmunix
Cannot find ethernet device
Cannot read license file
Feature not yet available
No such attribute
Clock difference too large between client and server
Feature database corrupted in daemon
Duplicate selection mismatch for this feature
User/host on EXCLUDE list for feature
User/host not on INCLUDE list for feature
Feature was never checked out
Invalid FLEXlm key data supplied
Clock setting check not available in daemon
Date too late for binary format
FLEXlm not initialized
Server did not respond to message
Request rejected by vendor defined filter
No FEATURESET line present in license file
Incorrect FEATURESET line in license file
Cannot compute FEATURESET line
socket() call failed
setsockopt() failed
Message checksum failure
Cannot read license file from server
Not a license administrator
lmremove request too soon
Attempt to read beyond the end of LF path
SYS$SETIMR call failed
Internal FLEXlm Error - Please report to Globetrotter Software
FLEXadmin API functions not available
Invalid PACKAGE line in license file
Server FLEXlm version older than client's
Incorrect number of USERS/HOSTS INCLUDED in options file
Server doesn't support this request
This license object already in use
Future license file format
Feature removed during lmreread
This feature is available in a different license pool
Network connect to THIS_HOST failed
Server node is down or not responding
The desired vendor daemon is down
The decimal format license is typed incorrectly
All licenses are reserved for others
Terminal Server remote client not allowed
Cannot borrow that long
License server out of network connections
Dongle not attached, or can't read dongle
Missing dongle driver
FLEXlock checkouts attempted
SIGN= attribute required
CRO not supported for this platform
BORROW failed
BORROW period has expired
FLOAT_OK license must have exactly one dongle hostid
Unable to delete local borrow info
Returning borrowed license early not enabled
Returning borrowed license on server failed
Checkout just a PACKAGE failed
Composite Hostid not initialized
An item needed for Composite Hostid missing or invalid
Borrowed license doesn't match any known server license
Error enabling event log
Event logging is disabled
Error writing to event log
Timeout
Bad message command
Error writing to socket
Failed to generate version specific license
Vers.-specific signatures not supported
License template contains redundant signature specifiers
Invalid V71_LK signature
Invalid V71_SIGN signature
Invalid V80_LK signature
Invalid V80_SIGN signature
Invalid V81_LK signature
Invalid V81_SIGN signature
Invalid V81_SIGN2 signature
Invalid V84_LK signature
Invalid V84_SIGN signature
Invalid V84_SIGN2 signature
License key required but missing from the certificate
Bad AUTH={} signature
Trusted storage record invalid
Cannot open trusted storage
Invalid Fulfillment record
Invalid activation request received
No fulfillment exists in trusted storage which matches the request
Invalid activation response received
Can't return the fulfillment
Return would exceed max count(s)
No repair count left
Specified operation is not allowed
User/host on EXCLUDE list for entitlement
User/host not in INCLUDE list for entitlement
Activation error
Invalid date format in trusted storage
Message encryption failed
Message decryption failed
Bad filter context
SUPERSEDE feature conflict
Invalid SUPERSEDE_SIGN syntax
SUPERSEDE_SIGN does not contain a feature name and license signature
ONE_TS_OK is not supported in this Windows Platform
Internal error
Only one terminal server remote client checkout is allowed for this feature
Internal error
Internal error
Internal error
More than one ethernet hostid not supported in composite hostid definition
The number of characters in the license file paths exceeds the permissible limit
Invalid TZ keyword syntax
Invalid time zone override specification in the client
The time zone information could not be obtained
License client time zone not authorized for license rights
Invalid syntax for VM_PLATFORMS keyword
Feature can be checked out from physical machine only
Feature can be checked out from virtual machine only
Vendor keys do not support Virtualization feature
Checkout request denied as it exceeds the MAX limit specified in the options file
Binding agent API - Internal error
Binding agent communication error
Invalid Binding agent version
Failed to load ServerQuery request
Failed to generate ServerQuery response
Invalid IP address used while overriding
Represents an instance of an event synchronization object.
Create an event synchronization object.
Modified instance represents: Event synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Create an event synchronization object.
Modified instance represents: Event synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Clear the event synchronization object.
Instance represents: Event synchronization object.
Unlock an event synchronization object.
Instance represents: Event synchronization object.
Lock an event synchronization object only if it is unlocked.
Instance represents: Event synchronization object.
Whether the object was already locked.
Lock an event synchronization object.
Instance represents: Event synchronization object.
Create an event synchronization object.
Modified instance represents: Event synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Create an event synchronization object.
Modified instance represents: Event synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Represents an instance of a training used for the classifier.
Read a training data set from a file.
Modified instance represents: Identification of the data set to train.
Filename of the data set to train. Default: "sampset1"
Read a training data set from a file.
Modified instance represents: Identification of the data set to train.
Filename of the data set to train. Default: "sampset1"
Train the classifier with one data set.
Instance represents: Number of the data set to train.
Handle of the classifier.
Name of the protocol file. Default: "training_prot"
Number of arrays of attributes to learn. Default: 500
Classification error for termination. Default: 0.05
Error during the assignment. Default: 100
Free memory of a data set.
Instance represents: Number of the data set.
Classify a set of arrays.
Instance represents: Key of the test data.
Handle of the classifier.
Error during the assignment.
Represents an instance of a file.
Open a file in text or binary format.
Modified instance represents: File handle.
Name of file to be opened. Default: "standard"
Type of file access and, optionally, the string encoding. Default: "output"
Open a file in text or binary format.
Modified instance represents: File handle.
Name of file to be opened. Default: "standard"
Type of file access and, optionally, the string encoding. Default: "output"
Open a file in text or binary format.
Modified instance represents: File handle.
Name of file to be opened. Default: "standard"
Type of file access and, optionally, the string encoding. Default: "output"
Open a file in text or binary format.
Modified instance represents: File handle.
Name of file to be opened. Default: "standard"
Type of file access and, optionally, the string encoding. Default: "output"
Write strings and numbers into a text file.
Instance represents: File handle.
Values to be written into the file. Default: "hallo"
Write strings and numbers into a text file.
Instance represents: File handle.
Values to be written into the file. Default: "hallo"
Read a character line from a text file.
Instance represents: File handle.
Reached end of file before any character was read.
Read line.
Read a string from a text file.
Instance represents: File handle.
Reached end of file before any character was added to the output string.
Read character sequence.
Read one character from a text file.
Instance represents: File handle.
Read character, which can be multi-byte or the control string 'eof'.
Write a line break and clear the output buffer.
Instance represents: File handle.
Closing a text file.
File handle.
Closing a text file.
Instance represents: File handle.
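The file operations documented above can be sketched with the procedural HOperatorSet wrappers of HALCON/.NET. This is a minimal illustration only; the file name is hypothetical, and error handling (the HOperatorException thrown on failure) is omitted for brevity.

```csharp
using HalconDotNet;

class FileExample
{
    static void Main()
    {
        // Open a text file for writing ("output" access mode);
        // "example.txt" is an illustrative file name.
        HOperatorSet.OpenFile("example.txt", "output", out HTuple fileHandle);

        // Write a string, append a line break (flushing the output
        // buffer), and close the file.
        HOperatorSet.FwriteString(fileHandle, "hello");
        HOperatorSet.FnewLine(fileHandle);
        HOperatorSet.CloseFile(fileHandle);

        // Reopen the same file for reading ("input" access mode) and
        // read the line back; isEof reports end-of-file before any
        // character was read.
        HOperatorSet.OpenFile("example.txt", "input", out fileHandle);
        HOperatorSet.FreadLine(fileHandle, out HTuple line, out HTuple isEof);
        HOperatorSet.CloseFile(fileHandle);
    }
}
```

The same sequence is available through the object-oriented HFile class, whose constructor corresponds to the open operation and whose Dispose releases the handle.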
Represents an instance of an image acquisition device.
Open and configure an image acquisition device.
Modified instance represents: Handle of the opened image acquisition device.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Desired horizontal resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Desired vertical resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Width of desired image part (absolute value or 0 for HorizontalResolution - 2*StartColumn). Default: 0
Height of desired image part (absolute value or 0 for VerticalResolution - 2*StartRow). Default: 0
Line number of upper left corner of desired image part (or border height if ImageHeight = 0). Default: 0
Column number of upper left corner of desired image part (or border width if ImageWidth = 0). Default: 0
Desired half image or full image. Default: "default"
Number of transferred bits per pixel and image channel (-1: device-specific default value). Default: -1
Output color format of the grabbed images, typically 'gray' or 'raw' for single-channel or 'rgb' or 'yuv' for three-channel images ('default': device-specific default value). Default: "default"
Generic parameter with device-specific meaning. Default: -1
External triggering. Default: "default"
Type of used camera ('default': device-specific default value). Default: "default"
Device the image acquisition device is connected to ('default': device-specific default value). Default: "default"
Port the image acquisition device is connected to (-1: device-specific default value). Default: -1
Camera input line of multiplexer (-1: device-specific default value). Default: -1
Open and configure an image acquisition device.
Modified instance represents: Handle of the opened image acquisition device.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Desired horizontal resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Desired vertical resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Width of desired image part (absolute value or 0 for HorizontalResolution - 2*StartColumn). Default: 0
Height of desired image part (absolute value or 0 for VerticalResolution - 2*StartRow). Default: 0
Line number of upper left corner of desired image part (or border height if ImageHeight = 0). Default: 0
Column number of upper left corner of desired image part (or border width if ImageWidth = 0). Default: 0
Desired half image or full image. Default: "default"
Number of transferred bits per pixel and image channel (-1: device-specific default value). Default: -1
Output color format of the grabbed images, typically 'gray' or 'raw' for single-channel or 'rgb' or 'yuv' for three-channel images ('default': device-specific default value). Default: "default"
Generic parameter with device-specific meaning. Default: -1
External triggering. Default: "default"
Type of used camera ('default': device-specific default value). Default: "default"
Device the image acquisition device is connected to ('default': device-specific default value). Default: "default"
Port the image acquisition device is connected to (-1: device-specific default value). Default: -1
Camera input line of multiplexer (-1: device-specific default value). Default: -1
Query specific parameters of an image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Parameter of interest. Default: "revision"
Parameter value.
Query specific parameters of an image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Parameter of interest. Default: "revision"
Parameter value.
Set specific parameters of an image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Parameter name.
Parameter value to be set.
Set specific parameters of an image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Parameter name.
Parameter value to be set.
Query callback function of an image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Callback type. Default: "transfer_end"
Pointer to user-specific context data.
Pointer to the callback function.
Register a callback function for an image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Callback type. Default: "transfer_end"
Pointer to the callback function to be set.
Pointer to user-specific context data.
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Pre-processed image regions.
Pre-processed XLD contours.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Grabbed image data.
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Pre-processed image regions.
Pre-processed XLD contours.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Grabbed image data.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Preprocessed image regions.
Preprocessed XLD contours.
Preprocessed control data.
Grabbed image data.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Preprocessed image regions.
Preprocessed XLD contours.
Preprocessed control data.
Grabbed image data.
Asynchronous grab of an image from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Grabbed image.
Start an asynchronous grab from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
This parameter is obsolete and has no effect. Default: -1.0
Synchronous grab of an image from the specified image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Grabbed image.
Close specified image acquisition device.
Instance represents: Handle of the image acquisition device to be closed.
Open and configure an image acquisition device.
Modified instance represents: Handle of the opened image acquisition device.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Desired horizontal resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Desired vertical resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Width of desired image part (absolute value or 0 for HorizontalResolution - 2*StartColumn). Default: 0
Height of desired image part (absolute value or 0 for VerticalResolution - 2*StartRow). Default: 0
Line number of upper left corner of desired image part (or border height if ImageHeight = 0). Default: 0
Column number of upper left corner of desired image part (or border width if ImageWidth = 0). Default: 0
Desired half image or full image. Default: "default"
Number of transferred bits per pixel and image channel (-1: device-specific default value). Default: -1
Output color format of the grabbed images, typically 'gray' or 'raw' for single-channel or 'rgb' or 'yuv' for three-channel images ('default': device-specific default value). Default: "default"
Generic parameter with device-specific meaning. Default: -1
External triggering. Default: "default"
Type of used camera ('default': device-specific default value). Default: "default"
Device the image acquisition device is connected to ('default': device-specific default value). Default: "default"
Port the image acquisition device is connected to (-1: device-specific default value). Default: -1
Camera input line of multiplexer (-1: device-specific default value). Default: -1
Open and configure an image acquisition device.
Modified instance represents: Handle of the opened image acquisition device.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Desired horizontal resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Desired vertical resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Width of desired image part (absolute value or 0 for HorizontalResolution - 2*StartColumn). Default: 0
Height of desired image part (absolute value or 0 for VerticalResolution - 2*StartRow). Default: 0
Line number of upper left corner of desired image part (or border height if ImageHeight = 0). Default: 0
Column number of upper left corner of desired image part (or border width if ImageWidth = 0). Default: 0
Desired half image or full image. Default: "default"
Number of transferred bits per pixel and image channel (-1: device-specific default value). Default: -1
Output color format of the grabbed images, typically 'gray' or 'raw' for single-channel or 'rgb' or 'yuv' for three-channel images ('default': device-specific default value). Default: "default"
Generic parameter with device-specific meaning. Default: -1
External triggering. Default: "default"
Type of used camera ('default': device-specific default value). Default: "default"
Device the image acquisition device is connected to ('default': device-specific default value). Default: "default"
Port the image acquisition device is connected to (-1: device-specific default value). Default: -1
Camera input line of multiplexer (-1: device-specific default value). Default: -1
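The `ImageWidth`/`ImageHeight` defaults above encode a convention: a value of 0 means the image part is derived from the resolution minus a symmetric border given by `StartColumn`/`StartRow`. A minimal sketch of that rule (plain Python, not a HALCON call; the function name is only illustrative):

```python
def effective_image_part(h_res, v_res, image_width, image_height,
                         start_row, start_column):
    """Resolve the ImageWidth/ImageHeight defaults described above:
    0 means the remaining resolution minus a symmetric border."""
    width = image_width if image_width != 0 else h_res - 2 * start_column
    height = image_height if image_height != 0 else v_res - 2 * start_row
    return width, height

# With the documented defaults (all zeros) the full resolution is used:
print(effective_image_part(640, 480, 0, 0, 0, 0))   # (640, 480)
# A non-zero start offset shrinks the part on both sides:
print(effective_image_part(640, 480, 0, 0, 8, 16))  # (608, 464)
```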
Query look-up table of the image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Red level of the LUT entries.
Green level of the LUT entries.
Blue level of the LUT entries.
Set look-up table of the image acquisition device.
Instance represents: Handle of the acquisition device to be used.
Red level of the LUT entries.
Green level of the LUT entries.
Blue level of the LUT entries.
Represents an instance of a 1D function.
Create an uninitialized instance.
Create a function from a sequence of y-values.
Modified instance represents: Created function.
Y values for function points.
Create a function from a sequence of y-values.
Modified instance represents: Created function.
Y values for function points.
Create a function from a set of (x,y) pairs.
Modified instance represents: Created function.
X value for function points.
Y value for function points.
Create a function from a set of (x,y) pairs.
Modified instance represents: Created function.
X value for function points.
Y value for function points.
Adds a constant offset to the function's Y values.
Adds a constant offset to the function's Y values.
Subtracts a constant offset from the function's Y values.
Negates the Y values of the function.
Scales the function's Y values.
Scales the function's Y values.
Scales the function's Y values.
Composes two functions (not a pointwise multiplication).
Calculates the inverse of the function.
Plot a function using gnuplot.
Instance represents: Function to be plotted.
Identifier for the gnuplot output stream.
Compose two functions.
Instance represents: Input function 1.
Input function 2.
Border treatment for the input functions. Default: "constant"
Composed function.
Calculate the inverse of a function.
Instance represents: Input function.
Inverse of the input function.
Calculate the derivatives of a function.
Instance represents: Input function.
Type of derivative. Default: "first"
Derivative of the input function.
Calculate the local minimum and maximum points of a function.
Instance represents: Input function.
Handling of plateaus. Default: "strict_min_max"
Interpolation of the input function. Default: "true"
Minimum points of the input function.
Maximum points of the input function.
Calculate the zero crossings of a function.
Instance represents: Input function.
Zero crossings of the input function.
Multiplication and addition of the y values.
Instance represents: Input function.
Factor for scaling of the y values. Default: 2.0
Constant which is added to the y values. Default: 0.0
Transformed function.
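The transformation above maps every y value to `Mult * y + Add`, leaving the x values untouched. On a plain list of y values this is simply:

```python
def scale_y(y_values, mult=2.0, add=0.0):
    """Apply y' = mult * y + add to every y value, as described above
    (a language-agnostic sketch, not the HALCON call itself)."""
    return [mult * y + add for y in y_values]

print(scale_y([1.0, -2.0, 0.5], mult=2.0, add=1.0))  # [3.0, -3.0, 2.0]
```

The documented defaults (Mult = 2.0, Add = 0.0) therefore just double the function.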
Negation of the y values.
Instance represents: Input function.
Function with the negated y values.
Absolute value of the y values.
Instance represents: Input function.
Function with the absolute values of the y values.
Return the value of a function at an arbitrary position.
Instance represents: Input function.
X coordinate at which the function should be evaluated.
Border treatment for the input function. Default: "constant"
Y value at the given x value.
Return the value of a function at an arbitrary position.
Instance represents: Input function.
X coordinate at which the function should be evaluated.
Border treatment for the input function. Default: "constant"
Y value at the given x value.
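Evaluating a sampled function at an arbitrary x generally means interpolating between the two neighboring control points, with the border treatment deciding what happens outside the definition range. A sketch under those assumptions (linear interpolation; "constant" clamps to the first/last y value, a "zero" treatment returns 0.0; the helper name is hypothetical):

```python
from bisect import bisect_right

def get_y_value(xs, ys, x, border="constant"):
    """Evaluate a function given by control points (xs, ys) at x by
    linear interpolation; outside [xs[0], xs[-1]] the border treatment
    applies. Assumes xs is sorted ascending."""
    if x <= xs[0]:
        return ys[0] if border == "constant" else 0.0
    if x >= xs[-1]:
        return ys[-1] if border == "constant" else 0.0
    i = bisect_right(xs, x) - 1                 # segment containing x
    t = (x - xs[i]) / (xs[i + 1] - xs[i])       # position within segment
    return ys[i] + t * (ys[i + 1] - ys[i])

xs, ys = [0.0, 1.0, 2.0], [0.0, 10.0, 0.0]
print(get_y_value(xs, ys, 0.5))                 # 5.0 (interpolated)
print(get_y_value(xs, ys, 5.0))                 # 0.0 (clamped to last y)
```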
Access a function value using the index of the control points.
Instance represents: Input function.
Index of the control points.
X value at the given control points.
Y value at the given control points.
Access a function value using the index of the control points.
Instance represents: Input function.
Index of the control points.
X value at the given control points.
Y value at the given control points.
Number of control points of the function.
Instance represents: Input function.
Number of control points.
Smallest and largest y value of the function.
Instance represents: Input function.
Smallest y value.
Largest y value.
Smallest and largest x value of the function.
Instance represents: Input function.
Smallest x value.
Largest x value.
Access to the x/y values of a function.
Instance represents: Input function.
X values of the function.
Y values of the function.
Sample a function equidistantly in an interval.
Instance represents: Input function.
Minimum x value of the output function.
Maximum x value of the output function.
Distance of the samples.
Border treatment for the input function. Default: "constant"
Sampled function.
Sample a function equidistantly in an interval.
Instance represents: Input function.
Minimum x value of the output function.
Maximum x value of the output function.
Distance of the samples.
Border treatment for the input function. Default: "constant"
Sampled function.
Transform a function using given transformation parameters.
Instance represents: Input function.
Transformation parameters between the functions.
Transformed function.
Calculate transformation parameters between two functions.
Instance represents: Function 1.
Function 2.
Border treatment for function 2. Default: "constant"
Values of the parameters to remain constant. Default: [1.0,0.0,1.0,0.0]
Flags indicating whether each parameter should be adapted. Default: ["true","true","true","true"]
Quadratic error of the output function.
Covariance matrix of the transformation parameters.
Transformation parameters between the functions.
Compute the distance of two functions.
Instance represents: Input function 1.
Input function 2.
Modes of invariants. Default: "length"
Variance of the optional smoothing with a Gaussian filter. Default: 0.0
Distance of the functions.
Compute the distance of two functions.
Instance represents: Input function 1.
Input function 2.
Modes of invariants. Default: "length"
Variance of the optional smoothing with a Gaussian filter. Default: 0.0
Distance of the functions.
Smooth an equidistant 1D function with a Gaussian function.
Instance represents: Function to be smoothed.
Sigma of the Gaussian function for the smoothing. Default: 2.0
Smoothed function.
Compute the positive and negative areas of a function.
Instance represents: Input function.
Area under the negative part of the function.
Area under the positive part of the function.
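Splitting a function's integral into its positive and negative parts can be sketched by integrating each segment between control points and splitting segments at their zero crossing. This assumes piecewise-linear behavior between control points (an assumption about the representation, not a statement about HALCON's exact numerics):

```python
def pos_neg_areas(xs, ys):
    """Return (positive_area, negative_area) of a piecewise-linear
    function given by control points (xs, ys); segments that change
    sign are split at the interpolated root."""
    pos = neg = 0.0
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if y0 * y1 < 0:
            xz = x0 + (x1 - x0) * y0 / (y0 - y1)  # zero crossing
            parts = [(xz - x0) * y0 / 2.0, (x1 - xz) * y1 / 2.0]
        else:
            parts = [(x1 - x0) * (y0 + y1) / 2.0]  # plain trapezoid
        for a in parts:
            if a >= 0.0:
                pos += a
            else:
                neg += a
    return pos, abs(neg)

print(pos_neg_areas([0.0, 1.0, 2.0], [1.0, 1.0, -1.0]))  # (1.25, 0.25)
```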
Read a function from a file.
Modified instance represents: Function from the file.
Name of the file to be read.
Write a function to a file.
Instance represents: Function to be written.
Name of the file to be written.
Create a function from a sequence of y-values.
Modified instance represents: Created function.
Y values for function points.
Create a function from a sequence of y-values.
Modified instance represents: Created function.
Y values for function points.
Create a function from a set of (x,y) pairs.
Modified instance represents: Created function.
X value for function points.
Y value for function points.
Create a function from a set of (x,y) pairs.
Modified instance represents: Created function.
X value for function points.
Y value for function points.
Smooth an equidistant 1D function by averaging its values.
Instance represents: 1D function.
Size of the averaging mask. Default: 9
Number of iterations for the smoothing. Default: 3
Smoothed function.
Represents an instance of a connection to a gnuplot process.
Plot a function using gnuplot.
Instance represents: Identifier for the gnuplot output stream.
Function to be plotted.
Plot control values using gnuplot.
Instance represents: Identifier for the gnuplot output stream.
Control values to be plotted (y-values).
Visualize images using gnuplot.
Instance represents: Identifier for the gnuplot output stream.
Image to be plotted.
Number of samples in the x-direction. Default: 64
Number of samples in the y-direction. Default: 64
Rotation of the plot about the x-axis. Default: 60
Rotation of the plot about the z-axis. Default: 30
Plot the image with hidden surfaces removed. Default: "hidden3d"
Visualize images using gnuplot.
Instance represents: Identifier for the gnuplot output stream.
Image to be plotted.
Number of samples in the x-direction. Default: 64
Number of samples in the y-direction. Default: 64
Rotation of the plot about the x-axis. Default: 60
Rotation of the plot about the z-axis. Default: 30
Plot the image with hidden surfaces removed. Default: "hidden3d"
Close all open gnuplot files or terminate an active gnuplot sub-process.
Instance represents: Identifier for the gnuplot output stream.
Open a gnuplot file for visualization of images and control values.
Modified instance represents: Identifier for the gnuplot output stream.
Base name for control and data files.
Open a pipe to a gnuplot process for visualization of images and control values.
Modified instance represents: Identifier for the gnuplot output stream.
Represents a homogeneous 2D transformation matrix.
Generate the homogeneous transformation matrix of the identity 2D transformation.
Modified instance represents: Transformation matrix.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Read the geo coding from an ARC/INFO world file.
Modified instance represents: Transformation matrix from image to world coordinates.
Name of the ARC/INFO world file.
Apply a projective transformation to an XLD contour.
Instance represents: Homogeneous projective transformation matrix.
Input contours.
Output contours.
Apply an arbitrary affine transformation to XLD polygons.
Instance represents: Input transformation matrix.
Input XLD polygons.
Transformed XLD polygons.
Apply an arbitrary affine 2D transformation to XLD contours.
Instance represents: Input transformation matrix.
Input XLD contours.
Transformed XLD contours.
Deserialize a serialized homogeneous 2D transformation matrix.
Modified instance represents: Transformation matrix.
Handle of the serialized item.
Serialize a homogeneous 2D transformation matrix.
Instance represents: Transformation matrix.
Handle of the serialized item.
Perform a bundle adjustment of an image mosaic.
Number of different images that are used for the calibration.
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Transformation class to be used. Default: "projective"
Row coordinates of the points reconstructed by the bundle adjustment.
Column coordinates of the points reconstructed by the bundle adjustment.
Average error per reconstructed point.
Array of 3x3 projective transformation matrices that determine the position of the images in the mosaic.
Perform a bundle adjustment of an image mosaic.
Number of different images that are used for the calibration.
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Transformation class to be used. Default: "projective"
Row coordinates of the points reconstructed by the bundle adjustment.
Column coordinates of the points reconstructed by the bundle adjustment.
Average error per reconstructed point.
Array of 3x3 projective transformation matrices that determine the position of the images in the mosaic.
Compute a projective transformation matrix and the radial distortion coefficient between two images by finding correspondences between points based on known approximations of the projective transformation matrix and the radial distortion coefficient.
Instance represents: Approximation of the homogeneous projective transformation matrix between the two images.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Approximation of the radial distortion coefficient in the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed homogeneous projective transformation matrix.
Compute a projective transformation matrix and the radial distortion coefficient between two images by finding correspondences between points based on known approximations of the projective transformation matrix and the radial distortion coefficient.
Instance represents: Approximation of the homogeneous projective transformation matrix between the two images.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Approximation of the radial distortion coefficient in the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images and the radial distortion coefficient by automatically finding correspondences between points.
Modified instance represents: Computed homogeneous projective transformation matrix.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for the transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed radial distortion coefficient.
Compute a projective transformation matrix between two images and the radial distortion coefficient by automatically finding correspondences between points.
Modified instance represents: Computed homogeneous projective transformation matrix.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for the transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed radial distortion coefficient.
Compute a projective transformation matrix between two images by finding correspondences between points based on a known approximation of the projective transformation matrix.
Instance represents: Approximation of the homogeneous projective transformation matrix between the two images.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images by finding correspondences between points based on a known approximation of the projective transformation matrix.
Instance represents: Approximation of the homogeneous projective transformation matrix between the two images.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images by finding correspondences between points.
Modified instance represents: Homogeneous projective transformation matrix.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift. Default: 0
Average column coordinate shift. Default: 0
Half height of matching search window. Default: 256
Half width of matching search window. Default: 256
Range of rotation angles. Default: 0.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 2.
Indices of matched input points in image 1.
Compute a projective transformation matrix between two images by finding correspondences between points.
Modified instance represents: Homogeneous projective transformation matrix.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift. Default: 0
Average column coordinate shift. Default: 0
Half height of matching search window. Default: 256
Half width of matching search window. Default: 256
Range of rotation angles. Default: 0.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 2.
Indices of matched input points in image 1.
Compute a projective transformation matrix and the radial distortion coefficient using given image point correspondences.
Modified instance represents: Homogeneous projective transformation matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Estimation algorithm. Default: "gold_standard"
Root-Mean-Square transformation error.
Computed radial distortion coefficient.
Compute a homogeneous transformation matrix using given point correspondences.
Modified instance represents: Homogeneous projective transformation matrix.
Input points 1 (x coordinate).
Input points 1 (y coordinate).
Input points 1 (w coordinate).
Input points 2 (x coordinate).
Input points 2 (y coordinate).
Input points 2 (w coordinate).
Estimation algorithm. Default: "normalized_dlt"
Compute a projective transformation matrix using given point correspondences.
Modified instance represents: Homogeneous projective transformation matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Estimation algorithm. Default: "normalized_dlt"
Row coordinate variance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
9x9 covariance matrix of the projective transformation matrix.
Compute the affine transformation parameters from a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Scaling factor along the y direction.
Rotation angle.
Slant angle.
Translation along the x direction.
Translation along the y direction.
Scaling factor along the x direction.
Compute a rigid affine transformation from points and angles.
Modified instance represents: Output transformation matrix.
Row coordinate of the original point.
Column coordinate of the original point.
Angle of the original point.
Row coordinate of the transformed point.
Column coordinate of the transformed point.
Angle of the transformed point.
Compute a rigid affine transformation from points and angles.
Modified instance represents: Output transformation matrix.
Row coordinate of the original point.
Column coordinate of the original point.
Angle of the original point.
Row coordinate of the transformed point.
Column coordinate of the transformed point.
Angle of the transformed point.
Approximate an affine transformation from point-to-line correspondences.
Modified instance represents: Output transformation matrix.
Type of the transformation to compute. Default: "rigid"
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the first point on the corresponding line.
Y coordinates of the first point on the corresponding line.
X coordinates of the second point on the corresponding line.
Y coordinates of the second point on the corresponding line.
Approximate a rigid affine transformation from point correspondences.
Modified instance represents: Output transformation matrix.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Approximate a similarity transformation from point correspondences.
Modified instance represents: Output transformation matrix.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Approximate an anisotropic similarity transformation from point correspondences.
Modified instance represents: Output transformation matrix.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Approximate an affine transformation from point correspondences.
Modified instance represents: Output transformation matrix.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Project pixel coordinates using a homogeneous projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input pixel(s) (row coordinate). Default: 64
Input pixel(s) (column coordinate). Default: 64
Output pixel(s) (row coordinate).
Output pixel(s) (column coordinate).
Project pixel coordinates using a homogeneous projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input pixel(s) (row coordinate). Default: 64
Input pixel(s) (column coordinate). Default: 64
Output pixel(s) (row coordinate).
Output pixel(s) (column coordinate).
Project a homogeneous 2D point using a projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (w coordinate).
Output point (y coordinate).
Output point (w coordinate).
Output point (x coordinate).
Project a homogeneous 2D point using a projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (w coordinate).
Output point (y coordinate).
Output point (w coordinate).
Output point (x coordinate).
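The operation above is plain matrix–vector multiplication: a homogeneous 2D point (x, y, w) is mapped by the 3x3 projective matrix H to (x', y', w')ᵀ = H·(x, y, w)ᵀ. A minimal sketch of that math in plain Python (not the HALCON API itself; the identity matrix is used purely as an illustration):

```python
def project_hom_point_2d(H, x, y, w):
    """Multiply a 3x3 projective matrix H by the homogeneous point (x, y, w)."""
    px = H[0][0] * x + H[0][1] * y + H[0][2] * w
    py = H[1][0] * x + H[1][1] * y + H[1][2] * w
    pw = H[2][0] * x + H[2][1] * y + H[2][2] * w
    return px, py, pw

# The identity matrix leaves the point unchanged.
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(project_hom_point_2d(H, 3.0, 4.0, 1.0))  # (3.0, 4.0, 1.0)
```

The pixel-coordinate variants above additionally dehomogenize the result, i.e. return (x'/w', y'/w').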
Apply an arbitrary affine 2D transformation to pixel coordinates.
Instance represents: Input transformation matrix.
Input pixel(s) (row coordinate). Default: 64
Input pixel(s) (column coordinate). Default: 64
Output pixel(s) (row coordinate).
Output pixel(s) (column coordinate).
Apply an arbitrary affine 2D transformation to pixel coordinates.
Instance represents: Input transformation matrix.
Input pixel(s) (row coordinate). Default: 64
Input pixel(s) (column coordinate). Default: 64
Output pixel(s) (row coordinate).
Output pixel(s) (column coordinate).
Apply an arbitrary affine 2D transformation to points.
Instance represents: Input transformation matrix.
Input point(s) (x or row coordinate). Default: 64
Input point(s) (y or column coordinate). Default: 64
Output point(s) (y or column coordinate).
Output point(s) (x or row coordinate).
Apply an arbitrary affine 2D transformation to points.
Instance represents: Input transformation matrix.
Input point(s) (x or row coordinate). Default: 64
Input point(s) (y or column coordinate). Default: 64
Output point(s) (y or column coordinate).
Output point(s) (x or row coordinate).
Compute the determinant of a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Determinant of the input matrix.
Transpose a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Output transformation matrix.
Invert a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Output transformation matrix.
Multiply two homogeneous 2D transformation matrices.
Instance represents: Left input transformation matrix.
Right input transformation matrix.
Output transformation matrix.
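Composition and inversion of homogeneous 2D transformation matrices are ordinary 3x3 matrix operations. A small sketch of the underlying math in plain Python (illustrative only; a translation composed with its negative yields the identity):

```python
def matmul3(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    """Homogeneous 2D translation matrix."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

# Composing a translation with its inverse translation gives the identity.
I = matmul3(translation(5, -2), translation(-5, 2))
print(I)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```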
Add a reflection to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Point that defines the axis (x coordinate). Default: 16
Point that defines the axis (y coordinate). Default: 32
Output transformation matrix.
Add a reflection to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Point that defines the axis (x coordinate). Default: 16
Point that defines the axis (y coordinate). Default: 32
Output transformation matrix.
Add a reflection to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
First point of the axis (x coordinate). Default: 0
First point of the axis (y coordinate). Default: 0
Second point of the axis (x coordinate). Default: 16
Second point of the axis (y coordinate). Default: 32
Output transformation matrix.
Add a reflection to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
First point of the axis (x coordinate). Default: 0
First point of the axis (y coordinate). Default: 0
Second point of the axis (x coordinate). Default: 16
Second point of the axis (y coordinate). Default: 32
Output transformation matrix.
Add a slant to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Slant angle. Default: 0.78
Coordinate axis that is slanted. Default: "x"
Output transformation matrix.
Add a slant to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Slant angle. Default: 0.78
Coordinate axis that is slanted. Default: "x"
Output transformation matrix.
Add a slant to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Slant angle. Default: 0.78
Coordinate axis that is slanted. Default: "x"
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a slant to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Slant angle. Default: 0.78
Coordinate axis that is slanted. Default: "x"
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a rotation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Output transformation matrix.
Add a rotation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Output transformation matrix.
Add a rotation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a rotation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
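Rotation about a fixed point p is equivalent to translating p to the origin, rotating, and translating back, i.e. T(p)·R(φ)·T(−p). A sketch of that construction in plain Python (illustrative math only, not the HALCON call itself), verifying that the fixed point maps onto itself:

```python
import math

def matmul3(A, B):
    """Product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def rotation(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rotation_about(phi, px, py):
    # Translate the fixed point to the origin, rotate, translate back.
    return matmul3(translation(px, py),
                   matmul3(rotation(phi), translation(-px, -py)))

def apply_affine(H, x, y):
    return (H[0][0] * x + H[0][1] * y + H[0][2],
            H[1][0] * x + H[1][1] * y + H[1][2])

# The fixed point maps onto itself (up to floating-point error).
qx, qy = apply_affine(rotation_about(0.78, 10.0, 20.0), 10.0, 20.0)
```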
Add a scaling to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Output transformation matrix.
Add a scaling to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Output transformation matrix.
Add a scaling to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a scaling to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a translation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 2D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Output transformation matrix.
Generate the homogeneous transformation matrix of the 2D identity transformation.
Modified instance represents: Transformation matrix.
Compute the projective 3D reconstruction of points based on the fundamental matrix.
Instance represents: Fundamental matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
9x9 covariance matrix of the fundamental matrix. Default: []
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Covariance matrices of the reconstructed points.
Compute the projective 3D reconstruction of points based on the fundamental matrix.
Instance represents: Fundamental matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
9x9 covariance matrix of the fundamental matrix. Default: []
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Covariance matrices of the reconstructed points.
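The fundamental matrix F used by these reconstruction operators encodes the epipolar constraint: corresponding points satisfy x₂ᵀ F x₁ = 0. A plain-Python sketch of that residual (the homogeneous ordering (row, col, 1) is an assumption here, as is the example F, which encodes a rectified pair where corresponding points share the same row):

```python
def epipolar_residual(F, r1, c1, r2, c2):
    """x2^T * F * x1 for homogeneous pixel coordinates x = (row, col, 1)."""
    x1 = (r1, c1, 1.0)
    x2 = (r2, c2, 1.0)
    Fx1 = [sum(F[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

# Example F for a rectified pair: the residual reduces to r2 - r1,
# so it vanishes exactly when the two points lie in the same row.
F = [[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
print(epipolar_residual(F, 5.0, 10.0, 5.0, 80.0))  # 0.0
```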
Compute the projective rectification of weakly calibrated binocular stereo images.
Instance represents: Fundamental matrix.
Image coding the rectification of the second image.
9x9 covariance matrix of the fundamental matrix. Default: []
Width of the first image. Default: 512
Height of the first image. Default: 512
Width of the second image. Default: 512
Height of the second image. Default: 512
Subsampling factor. Default: 1
Type of mapping. Default: "no_map"
9x9 covariance matrix of the rectified fundamental matrix.
Projective transformation of the first image.
Projective transformation of the second image.
Image coding the rectification of the first image.
Compute the projective rectification of weakly calibrated binocular stereo images.
Instance represents: Fundamental matrix.
Image coding the rectification of the second image.
9x9 covariance matrix of the fundamental matrix. Default: []
Width of the first image. Default: 512
Height of the first image. Default: 512
Width of the second image. Default: 512
Height of the second image. Default: 512
Subsampling factor. Default: 1
Type of mapping. Default: "no_map"
9x9 covariance matrix of the rectified fundamental matrix.
Projective transformation of the first image.
Projective transformation of the second image.
Image coding the rectification of the first image.
Compute the fundamental matrix and the radial distortion coefficient given a set of image point correspondences and reconstruct 3D points.
Modified instance represents: Computed fundamental matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Estimation algorithm. Default: "gold_standard"
Root-Mean-Square epipolar distance error.
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Computed radial distortion coefficient.
Compute the fundamental matrix from the relative orientation of two cameras.
Modified instance represents: Computed fundamental matrix.
Relative orientation of the cameras (3D pose).
6x6 covariance matrix of relative pose. Default: []
Parameters of the first camera.
Parameters of the second camera.
9x9 covariance matrix of the fundamental matrix.
Compute the fundamental matrix from an essential matrix.
Instance represents: Essential matrix.
9x9 covariance matrix of the essential matrix. Default: []
Camera matrix of the first camera.
Camera matrix of the second camera.
9x9 covariance matrix of the fundamental matrix.
Computed fundamental matrix.
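The conversion above is the standard relation from multiple-view geometry: with K₁, K₂ the two camera matrices and E the essential matrix, the fundamental matrix is

```latex
F = K_2^{-\mathsf{T}} \, E \, K_1^{-1}
```

so that the pixel-space epipolar constraint x₂ᵀ F x₁ = 0 follows from the normalized-coordinate constraint on E.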
Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D points.
Instance represents: Camera matrix of the 1st camera.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera matrix of the 2nd camera.
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Computed essential matrix.
Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D points.
Instance represents: Camera matrix of the 1st camera.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera matrix of the 2nd camera.
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Computed essential matrix.
Compute the fundamental matrix given a set of image point correspondences and reconstruct 3D points.
Modified instance represents: Computed fundamental matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Estimation algorithm. Default: "normalized_dlt"
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Covariance matrices of the reconstructed 3D points.
9x9 covariance matrix of the fundamental matrix.
Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically finding correspondences between image points.
Modified instance represents: Computed fundamental matrix.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "gold_standard"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed radial distortion coefficient.
Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically finding correspondences between image points.
Modified instance represents: Computed fundamental matrix.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "gold_standard"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed radial distortion coefficient.
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Camera matrix of the 1st camera.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Camera matrix of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed essential matrix.
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Camera matrix of the 1st camera.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Camera matrix of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed essential matrix.
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between image points.
Modified instance represents: Computed fundamental matrix.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
9x9 covariance matrix of the fundamental matrix.
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between image points.
Modified instance represents: Computed fundamental matrix.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
9x9 covariance matrix of the fundamental matrix.
Apply a projective transformation to a region.
Instance represents: Homogeneous projective transformation matrix.
Input regions.
Interpolation method for the transformation. Default: "bilinear"
Output regions.
Apply an arbitrary affine 2D transformation to regions.
Instance represents: Input transformation matrix.
Region(s) to be rotated and scaled.
Interpolation method used for the transformation. Default: "nearest_neighbor"
Transformed output region(s).
Apply a projective transformation to an image and specify the output image size.
Instance represents: Homogeneous projective transformation matrix.
Input image.
Interpolation method for the transformation. Default: "bilinear"
Output image width.
Output image height.
Should the domain of the input image also be transformed? Default: "false"
Output image.
Apply a projective transformation to an image.
Instance represents: Homogeneous projective transformation matrix.
Input image.
Interpolation method for the transformation. Default: "bilinear"
Adapt the size of the output image automatically? Default: "false"
Should the domain of the input image also be transformed? Default: "false"
Output image.
Apply an arbitrary affine 2D transformation to an image and specify the output image size.
Instance represents: Input transformation matrix.
Input image.
Type of interpolation. Default: "constant"
Width of the output image. Default: 640
Height of the output image. Default: 480
Transformed image.
Apply an arbitrary affine 2D transformation to images.
Instance represents: Input transformation matrix.
Input image.
Type of interpolation. Default: "constant"
Adaptation of the size of the result image. Default: "false"
Transformed image.
Approximate an affine map from a displacement vector field.
Modified instance represents: Output transformation matrix.
Input image.
Compute a camera matrix from internal camera parameters.
Modified instance represents: 3x3 projective camera matrix that corresponds to CameraParam.
Internal camera parameters.
Width of the images that correspond to CameraMatrix.
Height of the images that correspond to CameraMatrix.
Compute the internal camera parameters from a camera matrix.
Instance represents: 3x3 projective camera matrix that determines the internal camera parameters.
Radial distortion coefficient (Kappa).
Width of the images that correspond to CameraMatrix.
Height of the images that correspond to CameraMatrix.
Internal camera parameters.
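A projective camera matrix of this kind is the standard upper-triangular pinhole form built from focal lengths, principal point, and skew; the exact parameter layout HALCON uses may differ, so the sketch below is illustrative math only:

```python
def camera_matrix(fx, fy, cx, cy, skew=0.0):
    """Upper-triangular 3x3 camera matrix in the standard pinhole form."""
    return [[fx, skew, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]

def project(K, X, Y, Z):
    """Project a camera-frame 3D point to pixel coordinates (no distortion)."""
    u = K[0][0] * X + K[0][1] * Y + K[0][2] * Z
    v = K[1][0] * X + K[1][1] * Y + K[1][2] * Z
    w = Z
    return u / w, v / w

# A point on the optical axis projects onto the principal point.
K = camera_matrix(800.0, 800.0, 320.0, 240.0)
print(project(K, 0.0, 0.0, 1.0))  # (320.0, 240.0)
```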
Perform a self-calibration of a stationary projective camera.
Number of different images that are used for the calibration.
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Estimation algorithm for the calibration. Default: "gold_standard"
Camera model to be used. Default: ["focus","principal_point"]
Are the camera parameters identical for all images? Default: "true"
Radial distortion of the camera.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
X-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Y-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Z-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Average error per reconstructed point if EstimationMethod = 'gold_standard' is used.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Perform a self-calibration of a stationary projective camera.
Number of different images that are used for the calibration.
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Estimation algorithm for the calibration. Default: "gold_standard"
Camera model to be used. Default: ["focus","principal_point"]
Are the camera parameters identical for all images? Default: "true"
Radial distortion of the camera.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
X-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Y-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Z-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Average error per reconstructed point if EstimationMethod = 'gold_standard' is used.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Represents a homogeneous 3D transformation matrix.
Generate the homogeneous transformation matrix of the 3D identity transformation.
Modified instance represents: Transformation matrix.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Deserialize a serialized homogeneous 3D transformation matrix.
Modified instance represents: Transformation matrix.
Handle of the serialized item.
Serialize a homogeneous 3D transformation matrix.
Instance represents: Transformation matrix.
Handle of the serialized item.
Project a homogeneous 3D point using a projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Input point (w coordinate).
Output point (y coordinate).
Output point (z coordinate).
Output point (w coordinate).
Output point (x coordinate).
Project a homogeneous 3D point using a projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Input point (w coordinate).
Output point (y coordinate).
Output point (z coordinate).
Output point (w coordinate).
Output point (x coordinate).
Project a 3D point using a projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Output point (y coordinate).
Output point (z coordinate).
Output point (x coordinate).
Project a 3D point using a projective transformation matrix.
Instance represents: Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Output point (y coordinate).
Output point (z coordinate).
Output point (x coordinate).
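The projective point operators above compute q = H · p in homogeneous coordinates; the non-homogeneous overloads additionally divide by the output w component. The math can be sketched with NumPy (an illustration of the computation only, not the halcondotnet API; the function names are hypothetical):

```python
import numpy as np

def projective_trans_point_3d(H, x, y, z, w=1.0):
    """Apply a 4x4 homogeneous projective matrix H to one 3D point.

    Illustrates the math of the operators above (not the halcondotnet
    API): q = H @ p in homogeneous coordinates.
    """
    qx, qy, qz, qw = np.asarray(H, float) @ np.array([x, y, z, w], float)
    return qx, qy, qz, qw

def projective_trans_point_3d_euclidean(H, x, y, z):
    # Non-homogeneous variant: divide by the output w component.
    qx, qy, qz, qw = projective_trans_point_3d(H, x, y, z, 1.0)
    return qx / qw, qy / qw, qz / qw
```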
Apply an arbitrary affine 3D transformation to points.
Instance represents: Input transformation matrix.
Input point(s) (x coordinate). Default: 64
Input point(s) (y coordinate). Default: 64
Input point(s) (z coordinate). Default: 64
Output point(s) (y coordinate).
Output point(s) (z coordinate).
Output point(s) (x coordinate).
Apply an arbitrary affine 3D transformation to points.
Instance represents: Input transformation matrix.
Input point(s) (x coordinate). Default: 64
Input point(s) (y coordinate). Default: 64
Input point(s) (z coordinate). Default: 64
Output point(s) (y coordinate).
Output point(s) (z coordinate).
Output point(s) (x coordinate).
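The affine overloads above accept tuples of coordinates and transform all points at once; since the last matrix row of an affine transform is [0, 0, 0, 1], no division by w is needed. A vectorized sketch of that math (illustration only, not the halcondotnet API):

```python
import numpy as np

def affine_trans_point_3d(M, px, py, pz):
    """Apply a 4x4 affine transform (last row [0,0,0,1]) to point arrays.

    Illustrates the math of the operators above, not the halcondotnet
    API.  px, py, pz may be scalars or equal-length sequences.
    """
    px = np.atleast_1d(np.asarray(px, float))
    P = np.vstack([px,
                   np.atleast_1d(np.asarray(py, float)),
                   np.atleast_1d(np.asarray(pz, float)),
                   np.ones_like(px)])
    Q = np.asarray(M, float) @ P   # affine: output w stays 1, no division
    return Q[0], Q[1], Q[2]
```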
Approximate a 3D transformation from point correspondences.
Modified instance represents: Output transformation matrix.
Type of the transformation to compute. Default: "rigid"
X coordinates of the original points.
Y coordinates of the original points.
Z coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Z coordinates of the transformed points.
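For the 'rigid' transformation type, a least-squares rotation and translation can be recovered from point correspondences with the Kabsch algorithm (SVD of the cross-covariance). This is a sketch of that standard technique, not necessarily HALCON's implementation:

```python
import numpy as np

def rigid_from_point_matches(P, Q):
    """Estimate R, t with Q ~= R @ P + t from 3D correspondences.

    Kabsch algorithm, sketching the 'rigid' case of the operator above
    (not the halcondotnet implementation).  P, Q: (N, 3) arrays of
    original / transformed points, N >= 3, points not collinear.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```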
Compute the determinant of a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Determinant of the input matrix.
Transpose a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Output transformation matrix.
Invert a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Output transformation matrix.
Multiply two homogeneous 3D transformation matrices.
Instance represents: Left input transformation matrix.
Right input transformation matrix.
Output transformation matrix.
Add a rotation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Axis to rotate around. Default: "x"
Output transformation matrix.
Add a rotation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Axis to rotate around. Default: "x"
Output transformation matrix.
Add a rotation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Axis to rotate around. Default: "x"
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Fixed point of the transformation (z coordinate). Default: 0
Output transformation matrix.
Add a rotation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Rotation angle. Default: 0.78
Axis to rotate around. Default: "x"
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Fixed point of the transformation (z coordinate). Default: 0
Output transformation matrix.
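A rotation about an axis through a fixed point p is the composition "translate p to the origin, rotate, translate back": T(p) · R · T(−p), which leaves p unchanged. A NumPy sketch of that math (the composition order with the input matrix is one common convention and may differ from HALCON's; not the halcondotnet API):

```python
import numpy as np

def rotate_about_point(M, phi, axis, px=0.0, py=0.0, pz=0.0):
    """Add a rotation by phi about an axis through (px, py, pz).

    Sketches the math behind the operators above; the added rotation is
    composed on the left here (applied after M), which is one common
    convention: M' = T(p) @ R @ T(-p) @ M.  Not the halcondotnet API.
    """
    c, s = np.cos(phi), np.sin(phi)
    R = np.eye(4)
    if axis == "x":
        R[1:3, 1:3] = [[c, -s], [s, c]]
    elif axis == "y":
        R[0, 0], R[0, 2], R[2, 0], R[2, 2] = c, s, -s, c
    else:  # "z"
        R[0:2, 0:2] = [[c, -s], [s, c]]
    T, Ti = np.eye(4), np.eye(4)
    T[:3, 3] = [px, py, pz]
    Ti[:3, 3] = [-px, -py, -pz]
    return T @ R @ Ti @ np.asarray(M, float)
```

The fixed point maps to itself, while every other point is rotated around it.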
Add a scaling to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Scale factor along the z-axis. Default: 2
Output transformation matrix.
Add a scaling to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Scale factor along the z-axis. Default: 2
Output transformation matrix.
Add a scaling to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Scale factor along the z-axis. Default: 2
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Fixed point of the transformation (z coordinate). Default: 0
Output transformation matrix.
Add a scaling to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Scale factor along the z-axis. Default: 2
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Fixed point of the transformation (z coordinate). Default: 0
Output transformation matrix.
Add a translation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Translation along the z-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Translation along the z-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Translation along the z-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 3D transformation matrix.
Instance represents: Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Translation along the z-axis. Default: 64
Output transformation matrix.
Generate the homogeneous transformation matrix of the identity 3D transformation.
Modified instance represents: Transformation matrix.
Project an affine 3D transformation matrix to a 2D projective transformation matrix.
Instance represents: 3x4 3D transformation matrix.
Row coordinate of the principal point. Default: 256
Column coordinate of the principal point. Default: 256
Focal length in pixels. Default: 256
Homogeneous projective transformation matrix.
Project an affine 3D transformation matrix to a 2D projective transformation matrix.
Instance represents: 3x4 3D transformation matrix.
Row coordinate of the principal point. Default: 256
Column coordinate of the principal point. Default: 256
Focal length in pixels. Default: 256
Homogeneous projective transformation matrix.
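For world points on the z = 0 plane, a pinhole camera with calibration matrix K turns the 3×4 pose [R|t] into the 3×3 homography H = K · [r1 r2 t] (the third column of R drops out). A sketch of that idea (the function name is hypothetical, and HALCON's exact row/column and sign conventions may differ):

```python
import numpy as np

def project_hom_mat3d(M, cy, cx, f):
    """Reduce a 4x4 affine 3D transform to a 3x3 2D homography.

    Pinhole model with principal point (cy, cx) and focal length f in
    pixels (assumed conventions).  For world points on z = 0, the third
    column of [R|t] drops out: H = K @ [r1 r2 t].
    """
    K = np.array([[f, 0.0, cx],
                  [0.0, f, cy],
                  [0.0, 0.0, 1.0]])
    Rt = np.asarray(M, float)[:3, :]   # 3x4 [R|t]
    return K @ Rt[:, [0, 1, 3]]
```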
Project a homogeneous 3D point using a 3x4 projection matrix.
Instance represents: 3x4 projection matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Input point (w coordinate).
Output point (y coordinate).
Output point (w coordinate).
Output point (x coordinate).
Project a homogeneous 3D point using a 3x4 projection matrix.
Instance represents: 3x4 projection matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Input point (w coordinate).
Output point (y coordinate).
Output point (w coordinate).
Output point (x coordinate).
Project a 3D point using a 3x4 projection matrix.
Instance represents: 3x4 projection matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Output point (y coordinate).
Output point (x coordinate).
Project a 3D point using a 3x4 projection matrix.
Instance represents: 3x4 projection matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Output point (y coordinate).
Output point (x coordinate).
Convert a homogeneous transformation matrix into a 3D pose.
Instance represents: Homogeneous transformation matrix.
Equivalent 3D pose.
Represents an instance of an image object(-array).
Represents an instance of an iconic object(-array). Base class for images, regions and XLDs
Represents an uninitialized HALCON object key
Returns true if the iconic object has been initialized.
An object will be uninitialized when created with the
no-argument constructor or after calling Dispose().
Returns a new HALCON ID referencing this iconic object, which will
remain valid even after this object is disposed (and vice versa).
This is only useful if the ID is to be used in another language
interface; the key must then be disposed externally, a feature
the .NET language interface itself does not offer.
Releases the resources used by this tool object
Returns the HALCON ID for this iconic object
Caller must ensure that object is kept alive
Create an uninitialized iconic object
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Convert an "integer number" into an iconic object.
Modified instance represents: Created objects.
Tuple of object surrogates.
Convert an "integer number" into an iconic object.
Modified instance represents: Created objects.
Tuple of object surrogates.
Convert an iconic object into an "integer number."
Instance represents: Objects for which the surrogates are to be returned.
Starting index of the surrogates to be returned. Default: 1
Number of surrogates to be returned. Default: -1
Tuple containing the surrogates.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Number of objects in a tuple.
Instance represents: Objects to be examined.
Number of objects in the tuple Objects.
Information about the components of an image object.
Instance represents: Image object to be examined.
Required information about object components. Default: "creator"
Components to be examined (0 for region/XLD). Default: 0
Requested information.
Information about the components of an image object.
Instance represents: Image object to be examined.
Required information about object components. Default: "creator"
Components to be examined (0 for region/XLD). Default: 0
Requested information.
Name of the class of an image object.
Instance represents: Image objects to be examined.
Name of class.
Create an empty object tuple.
Modified instance represents: No objects.
Displays image objects (image, region, XLD).
Instance represents: Image object to be displayed.
Window handle.
Read an iconic object.
Modified instance represents: Iconic object.
Name of file.
Write an iconic object.
Instance represents: Iconic object.
Name of file.
Deserialize a serialized iconic object.
Modified instance represents: Iconic object.
Handle of the serialized item.
Serialize an iconic object.
Instance represents: Iconic object.
Handle of the serialized item.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index to insert objects.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic input object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic input object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index
Create an uninitialized iconic object
Create an image from a pointer to the pixels.
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to first gray value.
Create an image with constant gray value.
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Read an image with different file formats.
Modified instance represents: Read image.
Name of the image to be read. Default: "printer_chip/printer_chip_01"
Read an image with different file formats.
Modified instance represents: Read image.
Name of the image to be read. Default: "printer_chip/printer_chip_01"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Inverts an image
Adds two images
Subtracts image2 from image1
Multiplies two images
Adds a constant gray value offset
Adds a constant gray value offset
Subtracts a constant gray value offset
Scales an image by the specified factor
Scales an image by the specified factor
Scales an image by the specified divisor
Segment image using dynamic threshold
Segment image using dynamic threshold
Segment image using constant threshold
Segment image using constant threshold
Segment image using constant threshold
Segment image using constant threshold
Reduces the domain of an image
Returns the domain of an image
Image restoration by Wiener filtering.
Instance represents: Corrupted image.
Impulse response (PSF) of the degradation (in the spatial domain).
Region for noise estimation.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Restored image.
Image restoration by Wiener filtering.
Instance represents: Corrupted image.
Impulse response (PSF) of the degradation (in the spatial domain).
Smoothed version of corrupted image.
Restored image.
Generate an impulse response of a (linear) motion blur.
Modified instance represents: Impulse response of motion-blur.
Width of impulse response image. Default: 256
Height of impulse response image. Default: 256
Degree of motion-blur. Default: 20.0
Angle between direction of motion and x-axis (anticlockwise). Default: 0
PSF prototype or type of motion. Default: 3
Simulate (linear) motion blur.
Instance represents: Image to be blurred.
Extent of blurring. Default: 20.0
Angle between direction of motion and x-axis (anticlockwise). Default: 0
Impulse response of the motion blur. Default: 3
Motion-blurred image.
Generate an impulse response of a uniform out-of-focus blurring.
Modified instance represents: Impulse response of uniform out-of-focus blurring.
Width of result image. Default: 256
Height of result image. Default: 256
Degree of blurring. Default: 5.0
Simulate a uniform out-of-focus blurring of an image.
Instance represents: Image to blur.
Degree of blurring. Default: 5.0
Blurred image.
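A uniform out-of-focus blur is commonly modeled as convolution with a normalized disk: every pixel inside the blur radius contributes equally and the kernel sums to one, so image energy is preserved. A sketch of generating such a PSF (illustration only, not the halcondotnet implementation; the function name is hypothetical):

```python
import numpy as np

def gen_psf_defocus(height, width, blurring):
    """Uniform out-of-focus PSF: a normalized disk of radius `blurring`.

    Sketches what the operators above describe (not the halcondotnet
    implementation); convolving an image with this kernel simulates
    the blur.
    """
    r = np.arange(height)[:, None] - height // 2
    c = np.arange(width)[None, :] - width // 2
    psf = (r * r + c * c <= blurring * blurring).astype(float)
    return psf / psf.sum()   # energy-preserving: kernel sums to 1
```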
Compare an image to a variation model.
Instance represents: Image of the object to be compared.
ID of the variation model.
Method used for comparing the variation model. Default: "absolute"
Region containing the points that differ substantially from the model.
Compare an image to a variation model.
Instance represents: Image of the object to be compared.
ID of the variation model.
Region containing the points that differ substantially from the model.
Train a variation model.
Instance represents: Images of the object to be trained.
ID of the variation model.
Compute a projective transformation matrix and the radial distortion coefficient between two images by finding correspondences between points based on known approximations of the projective transformation matrix and the radial distortion coefficient.
Instance represents: Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Approximation of the homogeneous projective transformation matrix between the two images.
Approximation of the radial distortion coefficient in the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed homogeneous projective transformation matrix.
Compute a projective transformation matrix and the radial distortion coefficient between two images by finding correspondences between points based on known approximations of the projective transformation matrix and the radial distortion coefficient.
Instance represents: Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Approximation of the homogeneous projective transformation matrix between the two images.
Approximation of the radial distortion coefficient in the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images and the radial distortion coefficient by automatically finding correspondences between points.
Instance represents: Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for the transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images and the radial distortion coefficient by automatically finding correspondences between points.
Instance represents: Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for the transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images by finding correspondences between points based on a known approximation of the projective transformation matrix.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Approximation of the homogeneous projective transformation matrix between the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images by finding correspondences between points based on a known approximation of the projective transformation matrix.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Approximation of the homogeneous projective transformation matrix between the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images by finding correspondences between points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift. Default: 0
Average column coordinate shift. Default: 0
Half height of matching search window. Default: 256
Half width of matching search window. Default: 256
Range of rotation angles. Default: 0.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Homogeneous projective transformation matrix.
Compute a projective transformation matrix between two images by finding correspondences between points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift. Default: 0
Average column coordinate shift. Default: 0
Half height of matching search window. Default: 256
Half width of matching search window. Default: 256
Range of rotation angles. Default: 0.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Homogeneous projective transformation matrix.
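Once point correspondences are matched, the homography itself comes from a linear system: the direct linear transform (DLT) stacks two equations per correspondence and takes the null vector via SVD. A minimal, unnormalized sketch of that step ('normalized_dlt' additionally conditions the coordinates, and RANSAC handles outliers; this is not the halcondotnet implementation):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: 3x3 H with dst ~ H @ src (homogeneous).

    Unnormalized DLT sketch of the estimation step named in the
    operators above.  src, dst: (N, 2) arrays, N >= 4, points in
    general position.
    """
    rows = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        # Two rows per correspondence from p' x (H p) = 0:
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 3)   # null vector = flattened H, up to scale
```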
Receive an image over a socket connection.
Modified instance represents: Received image.
Socket number.
Send an image over a socket connection.
Instance represents: Image to be sent.
Socket number.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Distance image.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Distance image.
Compute the disparities of a rectified stereo image pair using multi-scanline optimization.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Disparity map.
Compute the disparities of a rectified stereo image pair using multi-scanline optimization.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Disparity map.
Compute the distance values for a rectified stereo image pair using multigrid methods.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Distance image.
Compute the distance values for a rectified stereo image pair using multigrid methods.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Distance image.
Compute the disparities of a rectified stereo image pair using multigrid methods.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Disparity map.
Compute the disparities of a rectified stereo image pair using multigrid methods.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Disparity map.
Compute the projective rectification of weakly calibrated binocular stereo images.
Modified instance represents: Image coding the rectification of the first image.
Fundamental matrix.
9x9 covariance matrix of the fundamental matrix. Default: []
Width of the first image. Default: 512
Height of the first image. Default: 512
Width of the second image. Default: 512
Height of the second image. Default: 512
Subsampling factor. Default: 1
Type of mapping. Default: "no_map"
9x9 covariance matrix of the rectified fundamental matrix.
Projective transformation of the first image.
Projective transformation of the second image.
Image coding the rectification of the second image.
Compute the projective rectification of weakly calibrated binocular stereo images.
Modified instance represents: Image coding the rectification of the first image.
Fundamental matrix.
9x9 covariance matrix of the fundamental matrix. Default: []
Width of the first image. Default: 512
Height of the first image. Default: 512
Width of the second image. Default: 512
Height of the second image. Default: 512
Subsampling factor. Default: 1
Type of mapping. Default: "no_map"
9x9 covariance matrix of the rectified fundamental matrix.
Projective transformation of the first image.
Projective transformation of the second image.
Image coding the rectification of the second image.
Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "gold_standard"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed fundamental matrix.
Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "gold_standard"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Computed radial distortion coefficient.
Root-Mean-Square epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed fundamental matrix.
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 1st camera.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
6x6 covariance matrix of the relative orientation.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed relative orientation of the cameras (3D pose).
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 1st camera.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
6x6 covariance matrix of the relative orientation.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed relative orientation of the cameras (3D pose).
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Camera matrix of the 1st camera.
Camera matrix of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed essential matrix.
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Camera matrix of the 1st camera.
Camera matrix of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed essential matrix.
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
9x9 covariance matrix of the fundamental matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed fundamental matrix.
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between image points.
Instance represents: Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
9x9 covariance matrix of the fundamental matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Computed fundamental matrix.
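The "maximal deviation of a point from its epipolar line" and the Root-Mean-Square epipolar distance error reported above both rest on the same geometric quantity: the distance of a matched point in image 2 from the epipolar line that the fundamental matrix maps its partner in image 1 onto. A minimal pure-Python sketch of that distance (conceptual only, not the halcondotnet API):

```python
import math

def epipolar_distance(F, p1, p2):
    """Distance of p2 = (u2, v2) from the epipolar line F @ (u1, v1, 1)."""
    u1, v1 = p1
    u2, v2 = p2
    # Epipolar line l = F * x1 in homogeneous coordinates (a, b, c).
    a = F[0][0] * u1 + F[0][1] * v1 + F[0][2]
    b = F[1][0] * u1 + F[1][1] * v1 + F[1][2]
    c = F[2][0] * u1 + F[2][1] * v1 + F[2][2]
    # Normalized point-to-line distance |a*u2 + b*v2 + c| / sqrt(a^2 + b^2).
    return abs(a * u2 + b * v2 + c) / math.hypot(a, b)

# For an ideal rectified pair (pure horizontal baseline), epipolar lines
# are image rows, so the distance reduces to the row difference.
F_rect = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
```

With `F_rect`, a match whose rows differ by 3 pixels has an epipolar distance of exactly 3; RANSAC-style estimators reject matches whose distance exceeds the given threshold.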
Compute the distance values for a rectified stereo image pair using correlation techniques.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Distance image.
Compute the distance values for a rectified stereo image pair using correlation techniques.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Distance image.
Compute the disparities of a rectified image pair using correlation techniques.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of the disparity values.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.5
Downstream filters. Default: "none"
Subpixel interpolation of disparities. Default: "none"
Disparity map.
Compute the disparities of a rectified image pair using correlation techniques.
Instance represents: Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of the disparity values.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.5
Downstream filters. Default: "none"
Subpixel interpolation of disparities. Default: "none"
Disparity map.
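The correlation-based disparity operators above slide a correlation window along each epipolar line (image row, since the pair is rectified) and pick the disparity with the best match score within [MinDisparity, MaxDisparity]. A simplified single-scanline SSD sketch of that search (illustrative only; HALCON's implementation also supports NCC, pyramids, and subpixel interpolation):

```python
def disparity_ssd(left, right, min_disp, max_disp, half_win):
    """Per-pixel disparity on one scanline by SSD block matching.
    Convention assumed here: left[c] corresponds to right[c - d]."""
    n = len(left)
    disp = [None] * n
    for c in range(half_win, n - half_win):
        best_d, best_ssd = None, float("inf")
        for d in range(min_disp, max_disp + 1):
            if c - half_win - d < 0 or c + half_win - d >= n:
                continue  # window would leave the right scanline
            ssd = sum((left[c + k] - right[c - d + k]) ** 2
                      for k in range(-half_win, half_win + 1))
            if ssd < best_ssd:
                best_ssd, best_d = ssd, d
        disp[c] = best_d
    return disp

left = [0, 0, 0, 1, 5, 9, 5, 1, 0, 0]
right = left[2:] + [0, 0]   # same scanline shifted by a true disparity of 2
```

The window size trades off here exactly as MaskWidth/MaskHeight do in the operators: larger windows are more robust in low-texture areas but blur disparity edges.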
Transform a disparity image into 3D points in a rectified stereo system.
Instance represents: Disparity image.
Y coordinates of the points in the rectified camera system 1.
Z coordinates of the points in the rectified camera system 1.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
X coordinates of the points in the rectified camera system 1.
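Converting a disparity image to 3D coordinates uses the standard rectified-stereo back-projection: depth is inversely proportional to disparity via focal length and baseline. A textbook-model sketch (simplified pinhole parameters, not the HALCON camera parameter tuple):

```python
def disparity_to_xyz(row, col, disparity, f, baseline, cx, cy):
    """Back-project a pixel of a rectified stereo pair to 3D.
    f: focal length in pixels, baseline: camera separation,
    (cx, cy): principal point. Textbook model, not the HALCON call."""
    z = f * baseline / disparity          # depth from disparity
    x = (col - cx) * z / f                # lateral offset scaled by depth
    y = (row - cy) * z / f
    return x, y, z
```

A pixel at the principal point with disparity 50, focal length 1000 px and baseline 0.5 m lies straight ahead at depth 10 m; halving the disparity doubles the depth.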
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common rectified image plane.
Modified instance represents: Image containing the mapping data of camera 1.
Internal parameters of camera 1.
Internal parameters of camera 2.
Point transformation from camera 2 to camera 1.
Subsampling factor. Default: 1.0
Type of rectification. Default: "viewing_direction"
Type of mapping. Default: "bilinear"
Rectified internal parameters of camera 1.
Rectified internal parameters of camera 2.
Point transformation from the rectified camera 1 to the original camera 1.
Point transformation from the rectified camera 2 to the original camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Image containing the mapping data of camera 2.
Get the iconic results of a measurement performed with the sheet-of-light technique.
Modified instance represents: Desired measurement result.
Handle of the sheet-of-light model to be used.
Specify which result of the measurement shall be provided. Default: "disparity"
Get the iconic results of a measurement performed with the sheet-of-light technique.
Modified instance represents: Desired measurement result.
Handle of the sheet-of-light model to be used.
Specify which result of the measurement shall be provided. Default: "disparity"
Apply the calibration transformations to the input disparity image.
Instance represents: Height or range image to be calibrated.
Handle of the sheet-of-light model.
Set sheet of light profiles by measured disparities.
Instance represents: Disparity image that contains several profiles.
Handle of the sheet-of-light model.
Poses describing the movement of the scene under measurement between the previously processed profile image and the current profile image.
Process the profile image provided as input and store the resulting disparity to the sheet-of-light model.
Instance represents: Input image.
Handle of the sheet-of-light model.
Pose describing the movement of the scene under measurement between the previously processed profile image and the current profile image.
Shade a height field.
Instance represents: Height field to be shaded.
Angle between the light source and the positive z-axis (in degrees). Default: 0.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 0.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Should shadows be calculated? Default: "false"
Shaded image.
Shade a height field.
Instance represents: Height field to be shaded.
Angle between the light source and the positive z-axis (in degrees). Default: 0.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 0.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Should shadows be calculated? Default: "false"
Shaded image.
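Shading a height field combines the surface normal (derived from the height gradients) with the light direction given by slant and tilt, plus albedo and ambient terms. A conceptual Lambertian sketch of that model (unit grid spacing assumed; not the halcondotnet API, and without the optional shadow computation):

```python
import math

def shade(height, albedo, ambient, slant_deg, tilt_deg):
    """Lambertian shading of a height field h[r][c]:
    intensity = albedo * max(0, n . l) + ambient."""
    s, t = math.radians(slant_deg), math.radians(tilt_deg)
    # Light direction from slant (angle to +z) and tilt (angle in the xy-plane).
    lx = math.sin(s) * math.cos(t)
    ly = math.sin(s) * math.sin(t)
    lz = math.cos(s)
    rows, cols = len(height), len(height[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Forward differences give the gradients p = dh/dx, q = dh/dy;
            # the (unnormalized) surface normal is (-p, -q, 1).
            p = height[r][min(c + 1, cols - 1)] - height[r][c]
            q = height[min(r + 1, rows - 1)][c] - height[r][c]
            norm = math.sqrt(p * p + q * q + 1.0)
            n_dot_l = (-p * lx - q * ly + lz) / norm
            out[r][c] = albedo * max(0.0, n_dot_l) + ambient
    return out
```

For a flat height field lit from straight above (slant 0), every pixel receives albedo + ambient, which matches the defaults' intent (Albedo 1.0, Ambient 0.0 reproduce pure Lambertian shading).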
Estimate the albedo of a surface and the amount of ambient light.
Instance represents: Image for which albedo and ambient are to be estimated.
Amount of ambient light.
Amount of light reflected by the surface.
Estimate the albedo of a surface and the amount of ambient light.
Instance represents: Image for which albedo and ambient are to be estimated.
Amount of ambient light.
Amount of light reflected by the surface.
Estimate the slant of a light source and the albedo of a surface.
Instance represents: Image for which slant and albedo are to be estimated.
Amount of light reflected by the surface.
Angle between the light sources and the positive z-axis (in degrees).
Estimate the slant of a light source and the albedo of a surface.
Instance represents: Image for which slant and albedo are to be estimated.
Amount of light reflected by the surface.
Angle between the light sources and the positive z-axis (in degrees).
Estimate the slant of a light source and the albedo of a surface.
Instance represents: Image for which slant and albedo are to be estimated.
Amount of light reflected by the surface.
Angle between the light sources and the positive z-axis (in degrees).
Estimate the slant of a light source and the albedo of a surface.
Instance represents: Image for which slant and albedo are to be estimated.
Amount of light reflected by the surface.
Angle between the light sources and the positive z-axis (in degrees).
Estimate the tilt of a light source.
Instance represents: Image for which the tilt is to be estimated.
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Estimate the tilt of a light source.
Instance represents: Image for which the tilt is to be estimated.
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Reconstruct a surface from surface gradients.
Instance represents: The gradient field of the image.
Type of the reconstruction method. Default: "poisson"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Reconstructed height field.
Reconstruct a surface according to the photometric stereo technique.
Instance represents: Array with at least three input images with different directions of illumination.
The gradient field of the surface.
The albedo of the surface.
Angle between the camera and the direction of illumination (in degrees). Default: 45.0
Angle of the direction of illumination within the object plane (in degrees). Default: 45.0
Types of the requested results. Default: "all"
Type of the reconstruction method. Default: "poisson"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Reconstructed height field.
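Photometric stereo recovers, per pixel, the albedo-scaled normal g = albedo * n from at least three intensities I_i = albedo * (n . l_i) under known light directions l_i, by solving the linear system L g = I. A minimal three-light sketch using Cramer's rule (conceptual only; the HALCON operator additionally integrates the gradient field into a height field):

```python
def photometric_stereo_pixel(lights, intensities):
    """Recover albedo and surface normal at one pixel from three images:
    solve the 3x3 system L g = I, then albedo = |g|, n = g / |g|."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(lights)
    g = []
    for k in range(3):
        mk = [row[:] for row in lights]       # replace column k with I
        for i in range(3):
            mk[i][k] = intensities[i]
        g.append(det3(mk) / d)
    albedo = sum(v * v for v in g) ** 0.5
    normal = [v / albedo for v in g]
    return albedo, normal
```

This is why at least three images with linearly independent illumination directions are required: with fewer, the system for g is underdetermined.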
Reconstruct a surface from a gray value image.
Instance represents: Shaded input image.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstructed height field.
Reconstruct a surface from a gray value image.
Instance represents: Shaded input image.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstructed height field.
Reconstruct a surface from a gray value image.
Instance represents: Shaded input image.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstructed height field.
Reconstruct a surface from a gray value image.
Instance represents: Shaded input image.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstructed height field.
Reconstruct a surface from a gray value image.
Instance represents: Shaded input image.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstructed height field.
Reconstruct a surface from a gray value image.
Instance represents: Shaded input image.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstructed height field.
Find text in an image.
Instance represents: Input image.
Text model specifying the text to be segmented.
Result of the segmentation.
Classify a byte image using a look-up table.
Instance represents: Input image.
Handle of the LUT classifier.
Segmented classes.
Classify an image with a k-Nearest-Neighbor classifier.
Instance represents: Input image.
Distance of the pixel's nearest neighbor.
Handle of the k-NN classifier.
Threshold for the rejection of the classification. Default: 0.5
Segmented classes.
Add training samples from an image to the training data of a k-Nearest-Neighbor classifier.
Instance represents: Training image.
Regions of the classes to be trained.
Handle of the k-NN classifier.
Classify an image with a Gaussian Mixture Model.
Instance represents: Input image.
GMM handle.
Threshold for the rejection of the classification. Default: 0.5
Segmented classes.
Add training samples from an image to the training data of a Gaussian Mixture Model.
Instance represents: Training image.
Regions of the classes to be trained.
GMM handle.
Standard deviation of the Gaussian noise added to the training data. Default: 0.0
Classify an image with a support vector machine.
Instance represents: Input image.
SVM handle.
Segmented classes.
Add training samples from an image to the training data of a support vector machine.
Instance represents: Training image.
Regions of the classes to be trained.
SVM handle.
Classify an image with a multilayer perceptron.
Instance represents: Input image.
MLP handle.
Threshold for the rejection of the classification. Default: 0.5
Segmented classes.
Add training samples from an image to the training data of a multilayer perceptron.
Instance represents: Training image.
Regions of the classes to be trained.
MLP handle.
Construct classes for class_ndim_norm.
Instance represents: Multi-channel training image.
Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Metric to be used. Default: "euclid"
Maximum cluster radius. Default: 10.0
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than MinNumberPercent (otherwise the cluster is not output). Default: 0.01
Coordinates of all cluster centers.
Overlap of the rejection class with the classified objects (1: no overlap).
Cluster radii or half edge lengths.
Construct classes for class_ndim_norm.
Instance represents: Multi-channel training image.
Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Metric to be used. Default: "euclid"
Maximum cluster radius. Default: 10.0
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than MinNumberPercent (otherwise the cluster is not output). Default: 0.01
Coordinates of all cluster centers.
Overlap of the rejection class with the classified objects (1: no overlap).
Cluster radii or half edge lengths.
Train a classifier using a multi-channel image.
Instance represents: Multi-channel training image.
Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Handle of the classifier.
Classify pixels using hyper-cuboids.
Instance represents: Multi-channel input image.
Handle of the classifier.
Classification result.
Classify pixels using hyper-spheres or hyper-cubes.
Instance represents: Multi-channel input image.
Metric to be used. Default: "euclid"
Return a single region or one region per cluster. Default: "single"
Cluster radii or half edge lengths (returned by learn_ndim_norm).
Coordinates of the cluster centers (returned by learn_ndim_norm).
Classification result.
Classify pixels using hyper-spheres or hyper-cubes.
Instance represents: Multi-channel input image.
Metric to be used. Default: "euclid"
Return a single region or one region per cluster. Default: "single"
Cluster radii or half edge lengths (returned by learn_ndim_norm).
Coordinates of the cluster centers (returned by learn_ndim_norm).
Classification result.
Segment an image using two-dimensional pixel classification.
Instance represents: Input image (first channel).
Input image (second channel).
Region defining the feature space.
Classified regions.
Segment two images by clustering.
Instance represents: First input image.
Second input image.
Threshold (maximum distance to the cluster's center). Default: 15
Number of classes (cluster centers). Default: 5
Classification result.
Compare two images pixel by pixel.
Instance represents: Input image.
Comparison image.
Mode: return similar or different pixels. Default: "diff_outside"
Lower bound of the tolerated gray value difference. Default: -5
Upper bound of the tolerated gray value difference. Default: 5
Offset gray value subtracted from the input image. Default: 0
Row coordinate by which the comparison image is translated. Default: 0
Column coordinate by which the comparison image is translated. Default: 0
Points in which the two images are similar/different.
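The pixel-by-pixel comparison above selects pixels whose gray value difference (after subtracting the offset) falls outside, or inside, the tolerance band [lower, upper]. A conceptual sketch of that band test (translation of the comparison image omitted; not the halcondotnet API):

```python
def compare_pixels(image, pattern, lower, upper, gray_offset=0,
                   mode="diff_outside"):
    """Return (row, col) of pixels where (image - offset) - pattern is
    outside [lower, upper] ('diff_outside') or inside it ('diff_inside')."""
    selected = []
    for r, (row_a, row_b) in enumerate(zip(image, pattern)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            diff = (a - gray_offset) - b
            outside = diff < lower or diff > upper
            if outside == (mode == "diff_outside"):
                selected.append((r, c))
    return selected
```

With the default band [-5, 5], a difference of 18 is flagged as "different" while a difference of -2 is treated as similar.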
Perform a threshold segmentation for extracting characters.
Instance represents: Input image.
Region in which the histogram is computed.
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Percentage for the gray value difference. Default: 95
Calculated threshold.
Dark regions (characters).
Perform a threshold segmentation for extracting characters.
Instance represents: Input image.
Region in which the histogram is computed.
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Percentage for the gray value difference. Default: 95
Calculated threshold.
Dark regions (characters).
Extract regions with equal gray values from an image.
Instance represents: Label image.
Regions having a constant gray value.
Suppress non-maximum points on an edge.
Instance represents: Amplitude (gradient magnitude) image.
Select horizontal/vertical or undirected NMS. Default: "hvnms"
Image with thinned edge regions.
Suppress non-maximum points on an edge using a direction image.
Instance represents: Amplitude (gradient magnitude) image.
Direction image.
Select non-maximum-suppression or interpolating NMS. Default: "nms"
Image with thinned edge regions.
Perform a hysteresis threshold operation on an image.
Instance represents: Input image.
Lower threshold for the gray values. Default: 30
Upper threshold for the gray values. Default: 60
Maximum length of a path of "potential" points to reach a "secure" point. Default: 10
Segmented region.
Perform a hysteresis threshold operation on an image.
Instance represents: Input image.
Lower threshold for the gray values. Default: 30
Upper threshold for the gray values. Default: 60
Maximum length of a path of "potential" points to reach a "secure" point. Default: 10
Segmented region.
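Hysteresis thresholding accepts "secure" points (at or above the upper threshold) unconditionally, and "potential" points (between the thresholds) only if a sufficiently short path of potential points connects them to a secure point. A 1D sketch of that rule (conceptual only; the operator works on 2D regions):

```python
def hysteresis_1d(values, low, high, max_length):
    """Select secure points (>= high) plus potential points (>= low)
    reachable from a secure point within max_length steps."""
    n = len(values)
    secure = [v >= high for v in values]
    potential = [low <= v < high for v in values]
    selected = list(secure)
    # Grow outward from the secure points, at most max_length steps.
    frontier = [i for i in range(n) if secure[i]]
    for _ in range(max_length):
        nxt = []
        for i in frontier:
            for j in (i - 1, i + 1):
                if 0 <= j < n and potential[j] and not selected[j]:
                    selected[j] = True
                    nxt.append(j)
        frontier = nxt
    return selected
```

In the test below, with max_length = 1 the potential points adjacent to the secure peak are kept, while a potential point two steps away is rejected.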
Segment an image using binary thresholding.
Instance represents: Input image.
Segmentation method. Default: "max_separability"
Extract foreground or background? Default: "dark"
Used threshold.
Segmented output region.
Segment an image using binary thresholding.
Instance represents: Input image.
Segmentation method. Default: "max_separability"
Extract foreground or background? Default: "dark"
Used threshold.
Segmented output region.
Segment an image using local thresholding.
Instance represents: Input image.
Segmentation method. Default: "adapted_std_deviation"
Extract foreground or background? Default: "dark"
List of generic parameter names. Default: []
List of generic parameter values. Default: []
Segmented output region.
Segment an image using local thresholding.
Instance represents: Input image.
Segmentation method. Default: "adapted_std_deviation"
Extract foreground or background? Default: "dark"
List of generic parameter names. Default: []
List of generic parameter values. Default: []
Segmented output region.
Threshold an image by local mean and standard deviation analysis.
Instance represents: Input image.
Mask width for mean and deviation calculation. Default: 15
Mask height for mean and deviation calculation. Default: 15
Factor for the standard deviation of the gray values. Default: 0.2
Minimum gray value difference from the mean. Default: 2
Threshold type. Default: "dark"
Segmented regions.
Threshold an image by local mean and standard deviation analysis.
Instance represents: Input image.
Mask width for mean and deviation calculation. Default: 15
Mask height for mean and deviation calculation. Default: 15
Factor for the standard deviation of the gray values. Default: 0.2
Minimum gray value difference from the mean. Default: 2
Threshold type. Default: "dark"
Segmented regions.
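The local mean/standard-deviation threshold above selects a pixel when it deviates from the mean of its mask window by more than StdDevScale times the local standard deviation, with the absolute threshold acting as a lower bound on that limit (so that homogeneous windows, where the deviation is near zero, do not produce noise responses). A single-pixel sketch of that decision, under the stated max(scaled deviation, absolute threshold) interpretation:

```python
import statistics

def var_threshold_pixel(window, center, std_dev_scale, abs_threshold,
                        light_dark="dark"):
    """Decide one pixel: selected when it deviates from the local window
    mean by at least max(std_dev_scale * local_std, abs_threshold)."""
    mean = statistics.fmean(window)
    std = statistics.pstdev(window)
    limit = max(std_dev_scale * std, abs_threshold)
    diff = mean - center if light_dark == "dark" else center - mean
    return diff >= limit
```

A dark defect pixel in a bright window is selected; a perfectly uniform window selects nothing because the absolute threshold floor is never reached.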
Segment an image using a local threshold.
Instance represents: Input image.
Image containing the local thresholds.
Offset applied to ThresholdImage. Default: 5.0
Extract light, dark or similar areas? Default: "light"
Segmented regions.
Segment an image using a local threshold.
Instance represents: Input image.
Image containing the local thresholds.
Offset applied to ThresholdImage. Default: 5.0
Extract light, dark or similar areas? Default: "light"
Segmented regions.
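Dynamic thresholding compares each input pixel against a per-pixel threshold image (typically a smoothed copy of the input), shifted by the offset. A conceptual sketch of the three selection modes (not the halcondotnet API):

```python
def dyn_threshold(image, threshold_image, offset, light_dark="light"):
    """Pixel-wise comparison against a locally varying threshold image."""
    selected = []
    for r, (row_g, row_t) in enumerate(zip(image, threshold_image)):
        for c, (g, t) in enumerate(zip(row_g, row_t)):
            if light_dark == "light":
                keep = g >= t + offset        # brighter than local threshold
            elif light_dark == "dark":
                keep = g <= t - offset        # darker than local threshold
            else:  # 'equal': within the offset band around the threshold
                keep = t - offset <= g <= t + offset
            if keep:
                selected.append((r, c))
    return selected
```

Because the threshold adapts locally, this extracts light or dark structures even under uneven illumination, where a single global threshold fails.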
Segment an image using global threshold.
Instance represents: Input image.
Lower threshold for the gray values. Default: 128.0
Upper threshold for the gray values. Default: 255.0
Segmented region.
Segment an image using a global threshold.
Instance represents: Input image.
Lower threshold for the gray values. Default: 128.0
Upper threshold for the gray values. Default: 255.0
Segmented region.
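In halcondotnet the global threshold above is typically called as an HImage instance method. A minimal sketch, assuming `HImage.Threshold(minGray, maxGray)` and `HOperatorSet.AreaCenter` have the shapes shown and that the built-in example image name "fabrik" is available:

```csharp
using System;
using HalconDotNet;

class ThresholdSketch
{
    static void Main()
    {
        // Placeholder image; any single-channel byte image works here.
        HImage image = new HImage("fabrik");

        // Global threshold: keep pixels with 128.0 <= gray <= 255.0.
        HRegion region = image.Threshold(128.0, 255.0);

        // Inspect the result (area and center of gravity of the region).
        HOperatorSet.AreaCenter(region, out HTuple area,
                                out HTuple row, out HTuple col);
        Console.WriteLine($"area={area.D} center=({row.D},{col.D})");
    }
}
```

The two MinGray/MaxGray defaults listed above (128.0 and 255.0) are the classic "extract bright pixels of a byte image" setting.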
Extract level crossings from an image with subpixel accuracy.
Instance represents: Input image.
Threshold for the level crossings. Default: 128
Extracted level crossings.
Extract level crossings from an image with subpixel accuracy.
Instance represents: Input image.
Threshold for the level crossings. Default: 128
Extracted level crossings.
Segment an image using regiongrowing for multi-channel images.
Instance represents: Input image.
Metric for the distance of the feature vectors. Default: "2-norm"
Lower threshold for the features' distance. Default: 0.0
Upper threshold for the features' distance. Default: 20.0
Minimum size of the output regions. Default: 30
Segmented regions.
Segment an image using regiongrowing for multi-channel images.
Instance represents: Input image.
Metric for the distance of the feature vectors. Default: "2-norm"
Lower threshold for the features' distance. Default: 0.0
Upper threshold for the features' distance. Default: 20.0
Minimum size of the output regions. Default: 30
Segmented regions.
Segment an image using regiongrowing.
Instance represents: Input image.
Vertical distance between tested pixels (height of the raster). Default: 3
Horizontal distance between tested pixels (width of the raster). Default: 3
Points with a gray value difference less than or equal to tolerance are accumulated into the same object. Default: 6.0
Minimum size of the output regions. Default: 100
Segmented regions.
Segment an image using regiongrowing.
Instance represents: Input image.
Vertical distance between tested pixels (height of the raster). Default: 3
Horizontal distance between tested pixels (width of the raster). Default: 3
Points with a gray value difference less than or equal to tolerance are accumulated into the same object. Default: 6.0
Minimum size of the output regions. Default: 100
Segmented regions.
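The regiongrowing parameters above (raster distances, tolerance, minimum region size) map onto positional arguments. A sketch using the listed defaults, assuming an `HImage.Regiongrowing(row, column, tolerance, minSize)` overload with this shape:

```csharp
using HalconDotNet;

class RegiongrowingSketch
{
    static HRegion Segment(HImage image)
    {
        // Test pixels on a 3x3 raster; merge points whose gray values
        // differ by at most 6.0; suppress regions below 100 pixels.
        return image.Regiongrowing(3, 3, 6.0, 100);
    }
}
```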
Perform a regiongrowing using mean gray values.
Instance represents: Input image.
Row coordinates of the starting points. Default: []
Column coordinates of the starting points. Default: []
Maximum deviation from the mean. Default: 5.0
Minimum size of a region. Default: 100
Segmented regions.
Perform a regiongrowing using mean gray values.
Instance represents: Input image.
Row coordinates of the starting points. Default: []
Column coordinates of the starting points. Default: []
Maximum deviation from the mean. Default: 5.0
Minimum size of a region. Default: 100
Segmented regions.
Segment an image by "pouring water" over it.
Instance represents: Input image.
Mode of operation. Default: "all"
All gray values smaller than this threshold are disregarded. Default: 0
All gray values larger than this threshold are disregarded. Default: 255
Segmented regions.
Extract watershed basins from an image using a threshold.
Instance represents: Image to be segmented.
Threshold for the watersheds. Default: 10
Segments found (dark basins).
Extract watershed basins from an image using a threshold.
Instance represents: Image to be segmented.
Threshold for the watersheds. Default: 10
Segments found (dark basins).
Extract watersheds and basins from an image.
Instance represents: Input image.
Watersheds between the basins.
Segmented basins.
Extract zero crossings from an image.
Instance represents: Input image.
Zero crossings.
Extract zero crossings from an image with subpixel accuracy.
Instance represents: Input image.
Extracted zero crossings.
Threshold operator for signed images.
Instance represents: Input image.
Regions smaller than MinSize are suppressed. Default: 20
Regions whose maximum absolute gray value is smaller than MinGray are suppressed. Default: 5.0
Regions that have a gray value smaller than Threshold (or larger than -Threshold) are suppressed. Default: 2.0
Positive and negative regions.
Expand a region starting at a given line.
Instance represents: Input image.
Row or column coordinate. Default: 256
Stopping criterion. Default: "gradient"
Segmentation mode (row or column). Default: "row"
Threshold for the expansion. Default: 3.0
Extracted segments.
Expand a region starting at a given line.
Instance represents: Input image.
Row or column coordinate. Default: 256
Stopping criterion. Default: "gradient"
Segmentation mode (row or column). Default: "row"
Threshold for the expansion. Default: 3.0
Extracted segments.
Detect all local minima in an image.
Instance represents: Image to be processed.
Extracted local minima as regions.
Detect all gray value lowlands.
Instance represents: Image to be processed.
Extracted lowlands as regions (one region for each lowland).
Detect the centers of all gray value lowlands.
Instance represents: Image to be processed.
Centers of gravity of the extracted lowlands as regions (one region for each lowland).
Detect all local maxima in an image.
Instance represents: Input image.
Extracted local maxima as a region.
Detect all gray value plateaus.
Instance represents: Input image.
Extracted plateaus as regions (one region for each plateau).
Detect the centers of all gray value plateaus.
Instance represents: Input image.
Centers of gravity of the extracted plateaus as regions (one region for each plateau).
Segment an image using thresholds determined from its histogram.
Instance represents: Input image.
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Regions with gray values within the automatically determined intervals.
Segment an image using thresholds determined from its histogram.
Instance represents: Input image.
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Regions with gray values within the automatically determined intervals.
Segment an image using an automatically determined threshold.
Instance represents: Input image.
Dark regions of the image.
Fast thresholding of images using global thresholds.
Instance represents: Input image.
Lower threshold for the gray values. Default: 128
Upper threshold for the gray values. Default: 255.0
Minimum size of objects to be extracted. Default: 20
Segmented regions.
Fast thresholding of images using global thresholds.
Instance represents: Input image.
Lower threshold for the gray values. Default: 128
Upper threshold for the gray values. Default: 255.0
Minimum size of objects to be extracted. Default: 20
Segmented regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Image (possibly multi-channel) for gray value or color comparison.
Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Maximum difference between the gray value or color at the region's border and a candidate for expansion. Default: 32
Expanded or separated regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Image (possibly multi-channel) for gray value or color comparison.
Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Maximum difference between the gray value or color at the region's border and a candidate for expansion. Default: 32
Expanded or separated regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Image (possibly multi-channel) for gray value or color comparison.
Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Reference gray value or color for comparison. Default: 128
Maximum difference between the reference gray value or color and a candidate for expansion. Default: 32
Expanded or separated regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Image (possibly multi-channel) for gray value or color comparison.
Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Reference gray value or color for comparison. Default: 128
Maximum difference between the reference gray value or color and a candidate for expansion. Default: 32
Expanded or separated regions.
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Set single gray values in an image.
Instance represents: Image to be modified.
Row coordinates of the pixels to be modified. Default: 0
Column coordinates of the pixels to be modified. Default: 0
Gray values to be used. Default: 255.0
Set single gray values in an image.
Instance represents: Image to be modified.
Row coordinates of the pixels to be modified. Default: 0
Column coordinates of the pixels to be modified. Default: 0
Gray values to be used. Default: 255.0
Paint XLD objects into an image.
Instance represents: Image in which the XLD objects are to be painted.
XLD objects to be painted into the input image.
Desired gray value of the XLD object. Default: 255.0
Image containing the result.
Paint XLD objects into an image.
Instance represents: Image in which the XLD objects are to be painted.
XLD objects to be painted into the input image.
Desired gray value of the XLD object. Default: 255.0
Image containing the result.
Paint regions into an image.
Instance represents: Image in which the regions are to be painted.
Regions to be painted into the input image.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Image containing the result.
Paint regions into an image.
Instance represents: Image in which the regions are to be painted.
Regions to be painted into the input image.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Image containing the result.
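Per the listing above, the instance for paint_region is the image being painted into, and the region is passed as an argument. A sketch, assuming an `HImage.PaintRegion(region, grayval, type)` overload with this shape:

```csharp
using HalconDotNet;

class PaintRegionSketch
{
    static HImage Paint(HImage image, HRegion region)
    {
        // Paint the region filled with gray value 255.0; the input
        // image is left untouched, the result is a new image.
        // Use "margin" instead of "fill" to paint only the boundary.
        return image.PaintRegion(region, 255.0, "fill");
    }
}
```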
Overpaint regions in an image.
Instance represents: Image in which the regions are to be painted.
Regions to be painted into the input image.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Overpaint regions in an image.
Instance represents: Image in which the regions are to be painted.
Regions to be painted into the input image.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Create an image with a specified constant gray value.
Instance represents: Input image.
Gray value to be used for the output image. Default: 0
Image with constant gray value.
Create an image with a specified constant gray value.
Instance represents: Input image.
Gray value to be used for the output image. Default: 0
Image with constant gray value.
Paint the gray values of an image into another image.
Instance represents: Input image containing the desired gray values.
Input image to be painted over.
Result image.
Overpaint the gray values of an image.
Instance represents: Input image to be painted over.
Input image containing the desired gray values.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Copy an image and allocate new memory for it.
Instance represents: Image to be copied.
Copied image.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Create a three-channel image from a pointer to the interleaved pixels.
Modified instance represents: Created image with new image matrix.
Pointer to interleaved pixels.
Format of the input pixels. Default: "rgb"
Width of input image. Default: 512
Height of input image. Default: 512
Reserved.
Pixel type of output image. Default: "byte"
Width of output image. Default: 0
Height of output image. Default: 0
Line number of upper left corner of desired image part. Default: 0
Column number of upper left corner of desired image part. Default: 0
Number of used bits per pixel and channel of the output image (-1: All bits are used). Default: -1
Number of bits that the color values of the input pixels are shifted to the right (only uint2 images). Default: 0
Create an image from three pointers to the pixels (red/green/blue).
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to first red value (channel 1).
Pointer to first green value (channel 2).
Pointer to first blue value (channel 3).
Create an image from a pointer to the pixels.
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to first gray value.
Create an image with constant gray value.
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
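The gen_image_const entry above is a "modified instance" operator, which in halcondotnet surfaces as an HImage constructor: the freshly constructed object becomes the created image. A sketch with the listed defaults:

```csharp
using HalconDotNet;

class GenImageConstSketch
{
    static void Main()
    {
        // 512x512 byte image; all pixels start at gray value 0.
        HImage image = new HImage("byte", 512, 512);
    }
}
```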
Create a gray value ramp.
Modified instance represents: Created image with new image matrix.
Gradient in line direction. Default: 1.0
Gradient in column direction. Default: 1.0
Mean gray value. Default: 128
Line index of reference point. Default: 256
Column index of reference point. Default: 256
Width of image. Default: 512
Height of image. Default: 512
Create a three-channel image from three pointers to the pixels, with storage management.
Modified instance represents: Created HALCON image.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to the first gray value of the first channel.
Pointer to the first gray value of the second channel.
Pointer to the first gray value of the third channel.
Pointer to the procedure that releases the memory of the image when the object is deleted. Default: 0
Create an image from a pointer to the pixels, with storage management.
Modified instance represents: Created HALCON image.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to the first gray value.
Pointer to the procedure that releases the memory of the image when the object is deleted. Default: 0
Create an image with a rectangular domain from a pointer to the pixels (with storage management).
Modified instance represents: Created HALCON image.
Pointer to the first pixel.
Width of the image. Default: 512
Height of the image. Default: 512
Distance (in bytes) between pixel m in row n and pixel m in row n+1 of the 'input image'.
Distance between two neighboring pixels in bits. Default: 8
Number of used bits per pixel. Default: 8
Copy image data. Default: "false"
Pointer to the procedure releasing the memory of the image when deleting the object. Default: 0
Access to the image data pointer and the image data inside the smallest rectangle of the domain of the input image.
Instance represents: Input image (Himage).
Width of the output image.
Height of the output image.
Vertical pitch in bytes: Width(input image) * (HorizontalBitPitch / 8).
Distance between two neighboring pixels in bits.
Number of used bits per pixel.
Pointer to the image data.
Access the pointers of a colored image.
Instance represents: Input image.
Pointer to the pixels of the first channel.
Pointer to the pixels of the second channel.
Pointer to the pixels of the third channel.
Type of image.
Width of image.
Height of image.
Access the pointers of a colored image.
Instance represents: Input image.
Pointer to the pixels of the first channel.
Pointer to the pixels of the second channel.
Pointer to the pixels of the third channel.
Type of image.
Width of image.
Height of image.
Access the pointer of a channel.
Instance represents: Input image.
Type of image.
Width of image.
Height of image.
Pointer to the image data in the HALCON database.
Access the pointer of a channel.
Instance represents: Input image.
Type of image.
Width of image.
Height of image.
Pointer to the image data in the HALCON database.
Return the type of an image.
Instance represents: Input image.
Type of image.
Return the size of an image.
Instance represents: Input image.
Width of image.
Height of image.
Return the size of an image.
Instance represents: Input image.
Width of image.
Height of image.
Request time at which the image was created.
Instance represents: Input image.
Seconds (0..59).
Minutes (0..59).
Hours (0..23).
Day of the month (1..31).
Day of the year (1..366).
Month (1..12).
Year (xxxx).
Milliseconds (0..999).
Return gray values of an image at the positions given by tuples of rows and columns.
Instance represents: Image whose gray values are to be accessed.
Row coordinates of positions. Default: 0
Column coordinates of positions. Default: 0
Interpolation method. Default: "bilinear"
Gray values of the selected image coordinates.
Return gray values of an image at the positions given by tuples of rows and columns.
Instance represents: Image whose gray values are to be accessed.
Row coordinates of positions. Default: 0
Column coordinates of positions. Default: 0
Interpolation method. Default: "bilinear"
Gray values of the selected image coordinates.
Access the gray values of an image object.
Instance represents: Image whose gray value is to be accessed.
Row coordinates of pixels to be viewed. Default: 0
Column coordinates of pixels to be viewed. Default: 0
Gray values of indicated pixels.
Access the gray values of an image object.
Instance represents: Image whose gray value is to be accessed.
Row coordinates of pixels to be viewed. Default: 0
Column coordinates of pixels to be viewed. Default: 0
Gray values of indicated pixels.
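get_grayval takes parallel row/column tuples and returns one gray value per coordinate pair. A sketch, assuming an `HImage.GetGrayval(rows, cols)` overload with this shape and that the placeholder coordinates lie inside the image:

```csharp
using System;
using HalconDotNet;

class GetGrayvalSketch
{
    static void Dump(HImage image)
    {
        // Read the gray values at (10,30) and (20,40).
        HTuple rows = new HTuple(10, 20);
        HTuple cols = new HTuple(30, 40);
        HTuple grays = image.GetGrayval(rows, cols);
        Console.WriteLine(grays.ToString());
    }
}
```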
Verification of a pattern using an OCV tool.
Instance represents: Characters to be verified.
Handle of the OCV tool.
Name of the character. Default: "a"
Adaptation to vertical and horizontal translation. Default: "true"
Adaptation to vertical and horizontal scaling of the size. Default: "true"
Adaptation to changes of the orientation (not implemented). Default: "false"
Adaptation to additive and scaling gray value changes. Default: "true"
Minimum difference between objects. Default: 10
Evaluation of the character.
Verification of a pattern using an OCV tool.
Instance represents: Characters to be verified.
Handle of the OCV tool.
Name of the character. Default: "a"
Adaptation to vertical and horizontal translation. Default: "true"
Adaptation to vertical and horizontal scaling of the size. Default: "true"
Adaptation to changes of the orientation (not implemented). Default: "false"
Adaptation to additive and scaling gray value changes. Default: "true"
Minimum difference between objects. Default: 10
Evaluation of the character.
Training of an OCV tool.
Instance represents: Pattern to be trained.
Handle of the OCV tool to be trained.
Name(s) of the object(s) to analyse. Default: "a"
Mode for training (only one mode implemented). Default: "single"
Training of an OCV tool.
Instance represents: Pattern to be trained.
Handle of the OCV tool to be trained.
Name(s) of the object(s) to analyse. Default: "a"
Mode for training (only one mode implemented). Default: "single"
Compute the features of a character.
Instance represents: Input character.
Handle of the k-NN classifier.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Compute the features of a character.
Instance represents: Input character.
Handle of the OCR classifier.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Compute the features of a character.
Instance represents: Input character.
Handle of the OCR classifier.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Cut out an image area relative to the domain.
Instance represents: Input image.
Number of rows clipped at the top. Default: -1
Number of columns clipped at the left. Default: -1
Number of rows clipped at the bottom. Default: -1
Number of columns clipped at the right. Default: -1
Image area.
Access the features which correspond to a character.
Instance represents: Characters to be trained.
ID of the desired OCR classifier.
Feature vector.
Write characters into a training file.
Instance represents: Characters to be trained.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Write characters into a training file.
Instance represents: Characters to be trained.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Read training specific characters from files and convert to images.
Modified instance represents: Images read from file.
Names of the training files. Default: ""
Names of the characters to be extracted. Default: "0"
Names of the read characters.
Read training specific characters from files and convert to images.
Modified instance represents: Images read from file.
Names of the training files. Default: ""
Names of the characters to be extracted. Default: "0"
Names of the read characters.
Read training characters from files and convert to images.
Modified instance represents: Images read from file.
Names of the training files. Default: ""
Names of the read characters.
Read training characters from files and convert to images.
Modified instance represents: Images read from file.
Names of the training files. Default: ""
Names of the read characters.
Perform a gray value bottom hat transformation on an image.
Instance represents: Input image.
Structuring element.
Bottom hat image.
Perform a gray value top hat transformation on an image.
Instance represents: Input image.
Structuring element.
Top hat image.
Perform a gray value closing on an image.
Instance represents: Input image.
Structuring element.
Gray-closed image.
Perform a gray value opening on an image.
Instance represents: Input image.
Structuring element.
Gray-opened image.
Perform a gray value dilation on an image.
Instance represents: Input image.
Structuring element.
Gray-dilated image.
Perform a gray value erosion on an image.
Instance represents: Input image.
Structuring element.
Gray-eroded image.
Load a structuring element for gray morphology.
Modified instance represents: Generated structuring element.
Name of the file containing the structuring element.
Generate ellipsoidal structuring elements for gray morphology.
Modified instance represents: Generated structuring element.
Pixel type. Default: "byte"
Width of the structuring element. Default: 5
Height of the structuring element. Default: 5
Maximum gray value of the structuring element. Default: 0
Generate ellipsoidal structuring elements for gray morphology.
Modified instance represents: Generated structuring element.
Pixel type. Default: "byte"
Width of the structuring element. Default: 5
Height of the structuring element. Default: 5
Maximum gray value of the structuring element. Default: 0
Extract points with a particular gray value along a rectangle or an annular arc.
Instance represents: Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Threshold. Default: 128.0
Selection of points. Default: "all"
Row coordinates of points with threshold value.
Column coordinates of points with threshold value.
Distance between consecutive points.
Extract a gray value profile perpendicular to a rectangle or annular arc.
Instance represents: Input image.
Measure object handle.
Gray value profile.
Extract straight edge pairs perpendicular to a rectangle or an annular arc.
Instance represents: Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select the first gray value transition of the edge pairs. Default: "all"
Constraint of pairing. Default: "no_restriction"
Number of edge pairs. Default: 10
Row coordinate of the first edge.
Column coordinate of the first edge.
Edge amplitude of the first edge (with sign).
Row coordinate of the second edge.
Column coordinate of the second edge.
Edge amplitude of the second edge (with sign).
Row coordinate of the center of the edge pair.
Column coordinate of the center of the edge pair.
Fuzzy evaluation of the edge pair.
Distance between the edges of the edge pair.
Extract straight edge pairs perpendicular to a rectangle or an annular arc.
Instance represents: Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select the first gray value transition of the edge pairs. Default: "all"
Row coordinate of the first edge point.
Column coordinate of the first edge point.
Edge amplitude of the first edge (with sign).
Row coordinate of the second edge point.
Column coordinate of the second edge point.
Edge amplitude of the second edge (with sign).
Row coordinate of the center of the edge pair.
Column coordinate of the center of the edge pair.
Fuzzy evaluation of the edge pair.
Distance between edges of an edge pair.
Distance between consecutive edge pairs.
Extract straight edges perpendicular to a rectangle or an annular arc.
Instance represents: Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select light/dark or dark/light edges. Default: "all"
Row coordinate of the edge point.
Column coordinate of the edge point.
Edge amplitude of the edge (with sign).
Fuzzy evaluation of the edges.
Distance between consecutive edges.
Extract straight edge pairs perpendicular to a rectangle or annular arc.
Instance represents: Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Type of gray value transition that determines how edges are grouped to edge pairs. Default: "all"
Selection of edge pairs. Default: "all"
Row coordinate of the center of the first edge.
Column coordinate of the center of the first edge.
Edge amplitude of the first edge (with sign).
Row coordinate of the center of the second edge.
Column coordinate of the center of the second edge.
Edge amplitude of the second edge (with sign).
Distance between edges of an edge pair.
Distance between consecutive edge pairs.
Extract straight edges perpendicular to a rectangle or annular arc.
Instance represents: Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Light/dark or dark/light edge. Default: "all"
Selection of end points. Default: "all"
Row coordinate of the center of the edge.
Column coordinate of the center of the edge.
Edge amplitude of the edge (with sign).
Distance between consecutive edges.
Identify objects with a sample identifier.
Instance represents: Image showing the object to be identified.
Handle of the sample identifier.
Number of suggested object indices. Default: 1
Rating threshold. Default: 0.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Rating value of the identified object.
Index of the identified object.
Identify objects with a sample identifier.
Instance represents: Image showing the object to be identified.
Handle of the sample identifier.
Number of suggested object indices. Default: 1
Rating threshold. Default: 0.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Rating value of the identified object.
Index of the identified object.
Add training data to an existing sample identifier.
Instance represents: Image that shows an object.
Handle of the sample identifier.
Index of the object visible in the SampleImage.
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Add training data to an existing sample identifier.
Instance represents: Image that shows an object.
Handle of the sample identifier.
Index of the object visible in the SampleImage.
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Add preparation data to an existing sample identifier.
Instance represents: Image that shows an object.
Handle of the sample identifier.
Index of the object visible in the SampleImage. Default: "unknown"
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Add preparation data to an existing sample identifier.
Instance represents: Image that shows an object.
Handle of the sample identifier.
Index of the object visible in the SampleImage. Default: "unknown"
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Determine the parameters of a shape model.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Kind of optimization. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Parameters to be determined automatically. Default: "all"
Value of the automatically determined parameter.
Name of the automatically determined parameter.
Determine the parameters of a shape model.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Kind of optimization. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Parameters to be determined automatically. Default: "all"
Value of the automatically determined parameter.
Name of the automatically determined parameter.
Find the best matches of multiple anisotropically scaled shape models.
Instance represents: Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the models in the row direction. Default: 0.9
Maximum scale of the models in the row direction. Default: 1.1
Minimum scale of the models in the column direction. Default: 0.9
Maximum scale of the models in the column direction. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models in the row direction.
Scale of the found instances of the models in the column direction.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple anisotropically scaled shape models.
Instance represents: Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the models in the row direction. Default: 0.9
Maximum scale of the models in the row direction. Default: 1.1
Minimum scale of the models in the column direction. Default: 0.9
Maximum scale of the models in the column direction. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models in the row direction.
Scale of the found instances of the models in the column direction.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple isotropically scaled shape models.
Instance represents: Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the models. Default: 0.9
Maximum scale of the models. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple isotropically scaled shape models.
Instance represents: Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the models. Default: 0.9
Maximum scale of the models. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple shape models.
Instance represents: Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple shape models.
Instance represents: Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of an anisotropically scaled shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in the row direction. Default: 0.9
Maximum scale of the model in the row direction. Default: 1.1
Minimum scale of the model in the column direction. Default: 0.9
Maximum scale of the model in the column direction. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model in the row direction.
Scale of the found instances of the model in the column direction.
Score of the found instances of the model.
Find the best matches of an anisotropically scaled shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in the row direction. Default: 0.9
Maximum scale of the model in the row direction. Default: 1.1
Minimum scale of the model in the column direction. Default: 0.9
Maximum scale of the model in the column direction. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model in the row direction.
Scale of the found instances of the model in the column direction.
Score of the found instances of the model.
Find the best matches of an isotropically scaled shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model.
Score of the found instances of the model.
Find the best matches of an isotropically scaled shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model.
Score of the found instances of the model.
Find the best matches of a shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
Find the best matches of a shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
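The find_shape_model parameters above can be illustrated with a minimal HOperatorSet call sequence. This is a hedged sketch, not the library's documentation: the file names "model.shm" and "search.png" are hypothetical placeholders, and all parameter values are simply the defaults listed above.

```csharp
using HalconDotNet;

class FindShapeModelSketch
{
    static void Main()
    {
        // Illustrative inputs: a previously stored shape model and a search image.
        HOperatorSet.ReadShapeModel("model.shm", out HTuple modelID);
        HOperatorSet.ReadImage(out HObject image, "search.png");

        // Search with the documented defaults.
        HOperatorSet.FindShapeModel(image, modelID,
            -0.39, 0.79,      // AngleStart, AngleExtent
            0.5,              // MinScore
            1,                // NumMatches (0 = all matches)
            0.5,              // MaxOverlap
            "least_squares",  // SubPixel
            0,                // NumLevels (0 = as stored in the model)
            0.9,              // Greediness
            out HTuple row, out HTuple column, out HTuple angle, out HTuple score);

        HOperatorSet.ClearShapeModel(modelID);
        image.Dispose();
    }
}
```

Each element of `row`/`column`/`angle`/`score` describes one found instance; the tuples are empty if no match reaches `MinScore`.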
Set the metric of a shape model that was created from XLD contours.
Instance represents: Input image used for the determination of the polarity.
Handle of the model.
Transformation matrix.
Match metric. Default: "use_polarity"
Set selected parameters of the shape model.
Handle of the model.
Parameter names.
Parameter values.
Prepare an anisotropically scaled shape model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare an anisotropically scaled shape model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare an isotropically scaled shape model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare an isotropically scaled shape model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare a shape model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare a shape model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
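Model creation with the defaults listed above can be sketched as follows. This is an illustrative example only: "template.png", the rectangular ROI coordinates, and "model.shm" are hypothetical placeholders chosen for the sketch.

```csharp
using HalconDotNet;

class CreateShapeModelSketch
{
    static void Main()
    {
        // Illustrative template image and ROI; the model is built from the
        // reduced domain of the template.
        HOperatorSet.ReadImage(out HObject template, "template.png");
        HOperatorSet.GenRectangle1(out HObject roi, 100, 100, 300, 300);
        HOperatorSet.ReduceDomain(template, roi, out HObject templateReduced);

        // Defaults as documented: NumLevels, AngleStep, Optimization,
        // Contrast, and MinContrast all left to "auto".
        HOperatorSet.CreateShapeModel(templateReduced,
            "auto",           // NumLevels
            -0.39, 0.79,      // AngleStart, AngleExtent
            "auto",           // AngleStep
            "auto",           // Optimization
            "use_polarity",   // Metric
            "auto",           // Contrast
            "auto",           // MinContrast
            out HTuple modelID);

        HOperatorSet.WriteShapeModel(modelID, "model.shm");
        HOperatorSet.ClearShapeModel(modelID);
        template.Dispose();
    }
}
```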
Create the representation of a shape model.
Instance represents: Input image.
Model region pyramid.
Number of pyramid levels. Default: 4
Threshold or hysteresis thresholds for the contrast of the object in the image and optionally minimum size of the object parts. Default: 30
Image pyramid of the input image.
Create the representation of a shape model.
Instance represents: Input image.
Model region pyramid.
Number of pyramid levels. Default: 4
Threshold or hysteresis thresholds for the contrast of the object in the image and optionally minimum size of the object parts. Default: 30
Image pyramid of the input image.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Instance represents: Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the model to be found. Default: 0.2
Maximum number of found instances. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
3D pose of the object.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Instance represents: Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the model to be found. Default: 0.2
Maximum number of found instances. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
3D pose of the object.
Find the best matches of a descriptor model in an image.
Instance represents: Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the model to be found. Default: 0.2
Maximum number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
Homography between model and found instance.
Find the best matches of a descriptor model in an image.
Instance represents: Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the model to be found. Default: 0.2
Maximum number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
Homography between model and found instance.
Create a descriptor model for calibrated perspective matching.
Instance represents: Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
The handle to the descriptor model.
Prepare a descriptor model for interest point matching.
Instance represents: Input image whose domain will be used to create the model.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
The handle to the descriptor model.
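Creating an uncalibrated descriptor model and then matching it could look as follows. This is a hedged sketch using the documented defaults; "reference.png" and "search.png" are hypothetical file names, and the empty tuples stand for the default (empty) detector and descriptor parameter lists.

```csharp
using HalconDotNet;

class DescriptorModelSketch
{
    static void Main()
    {
        // Build the model from an illustrative reference image, using the
        // documented defaults ("lepetit" detector, seed 42).
        HOperatorSet.ReadImage(out HObject reference, "reference.png");
        HOperatorSet.CreateUncalibDescriptorModel(reference, "lepetit",
            new HTuple(), new HTuple(),   // detector parameter names/values
            new HTuple(), new HTuple(),   // descriptor parameter names/values
            42,                           // Seed
            out HTuple modelID);

        // Match in an illustrative search image.
        HOperatorSet.ReadImage(out HObject search, "search.png");
        HOperatorSet.FindUncalibDescriptorModel(search, modelID,
            new HTuple(), new HTuple(),   // detector parameter names/values
            new HTuple(), new HTuple(),   // descriptor parameter names/values
            0.2,                          // MinScore
            1,                            // NumMatches
            "num_points",                 // ScoreType
            out HTuple homMat2D, out HTuple score);

        // homMat2D holds the projective transformation (homography) between
        // the model and the found instance; score holds the match score.
        HOperatorSet.ClearDescriptorModel(modelID);
        reference.Dispose();
        search.Dispose();
    }
}
```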
Determine the parameters of a deformable model.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Kind of optimization. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The general parameter names. Default: []
Values of the general parameters. Default: []
Parameters to be determined automatically. Default: "all"
Value of the automatically determined parameter.
Name of the automatically determined parameter.
Determine the parameters of a deformable model.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Kind of optimization. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The general parameter names. Default: []
Values of the general parameters. Default: []
Parameters to be determined automatically. Default: "all"
Value of the automatically determined parameter.
Name of the automatically determined parameter.
Find the best matches of a local deformable model in an image.
Instance represents: Input image in which the model should be found.
Vector field of the rectification transformation.
Contours of the found instances of the model.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching. Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Switch for requested iconic result. Default: []
The general parameter names. Default: []
Values of the general parameters. Default: []
Scores of the found instances of the model.
Row coordinates of the found instances of the model.
Column coordinates of the found instances of the model.
Rectified image of the found model.
Find the best matches of a calibrated deformable model in an image and return their 3D pose.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the model.
Pose of the object.
Find the best matches of a calibrated deformable model in an image and return their 3D pose.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the model.
Pose of the object.
Find the best matches of a planar projective invariant deformable model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
Score of the found instances of the model.
Homographies between model and found instances.
Find the best matches of a planar projective invariant deformable model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
Score of the found instances of the model.
Homographies between model and found instances.
Set the metric of a local deformable model that was created from XLD contours.
Instance represents: Input image used for the determination of the polarity.
Vector field of the local deformation.
Handle of the model.
Match metric. Default: "use_polarity"
Set the metric of a planar calibrated deformable model that was created from XLD contours.
Instance represents: Input image used for the determination of the polarity.
Handle of the model.
Pose of the model in the image.
Match metric. Default: "use_polarity"
Set the metric of a planar uncalibrated deformable model that was created from XLD contours.
Instance represents: Input image used for the determination of the polarity.
Handle of the model.
Transformation matrix.
Match metric. Default: "use_polarity"
Create a deformable model for local, deformable matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Creates a deformable model for local, deformable matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
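The parameter order above can be sketched as a halcondotnet call; `template` is an assumed `HObject` whose domain marks the pattern, and the unused parameters are passed as empty tuples (their documented default `[]`):

```csharp
// Sketch only: create a model for local, deformable matching.
HTuple modelID;
HOperatorSet.CreateLocalDeformableModel(
    template,
    "auto",                      // maximum number of pyramid levels
    new HTuple(), new HTuple(),  // unused parameters
    "auto",                      // angle step
    1.0, new HTuple(), "auto",   // row scale: minimum, (unused), step
    1.0, new HTuple(), "auto",   // column scale: minimum, (unused), step
    "none",                      // optimization
    "use_polarity",              // match metric
    "auto",                      // contrast threshold(s)
    "auto",                      // minimum contrast in search images
    new HTuple(), new HTuple(),  // generic parameter names / values
    out modelID);
```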
Create a deformable model for calibrated perspective matching.
Instance represents: Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Create a deformable model for calibrated perspective matching.
Instance represents: Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Creates a deformable model for uncalibrated, perspective matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Creates a deformable model for uncalibrated, perspective matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Find the best matches of an NCC model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.8
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
Find the best matches of an NCC model in an image.
Instance represents: Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.8
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
Set selected parameters of the NCC model.
Handle of the model.
Parameter names.
Parameter values.
Prepare an NCC model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Handle of the model.
Prepare an NCC model for matching.
Instance represents: Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Handle of the model.
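Model creation and the NCC search described earlier are typically used as a pair. A minimal halcondotnet sketch; `templateImage` and `searchImage` are assumed `HObject` images:

```csharp
// Sketch only: train an NCC model from a template, then search for it.
HTuple modelID, row, column, angle, score;
HOperatorSet.CreateNccModel(templateImage,
    "auto",          // maximum number of pyramid levels
    -0.39, 0.79,     // smallest rotation, extent of rotation angles
    "auto",          // angle step
    "use_polarity",  // match metric
    out modelID);
HOperatorSet.FindNccModel(searchImage, modelID,
    -0.39, 0.79,     // rotation range to search
    0.8,             // minimum score
    1,               // number of matches (0 = all)
    0.5,             // maximum overlap
    "true",          // subpixel accuracy
    0,               // pyramid levels (0 = determine automatically)
    out row, out column, out angle, out score);
```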
Find the best matches of a component model in an image.
Instance represents: Input image in which the component model should be found.
Handle of the component model.
Index of the root component.
Smallest rotation of the root component. Default: -0.39
Extent of the rotation of the root component. Default: 0.79
Minimum score of the instances of the component model to be found. Default: 0.5
Number of instances of the component model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the component models to be found. Default: 0.5
Behavior if the root component is missing. Default: "stop_search"
Behavior if a component is missing. Default: "prune_branch"
Pose prediction of components that are not found. Default: "none"
Minimum score of the instances of the components to be found. Default: 0.5
Subpixel accuracy of the component poses if not equal to 'none'. Default: "least_squares"
Number of pyramid levels for the components used in the matching (and lowest pyramid level to use if |NumLevelsComp| = 2). Default: 0
"Greediness" of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
End index of each found instance of the component model in the tuples describing the component matches.
Score of the found instances of the component model.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Start index of each found instance of the component model in the tuples describing the component matches.
Find the best matches of a component model in an image.
Instance represents: Input image in which the component model should be found.
Handle of the component model.
Index of the root component.
Smallest rotation of the root component. Default: -0.39
Extent of the rotation of the root component. Default: 0.79
Minimum score of the instances of the component model to be found. Default: 0.5
Number of instances of the component model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the component models to be found. Default: 0.5
Behavior if the root component is missing. Default: "stop_search"
Behavior if a component is missing. Default: "prune_branch"
Pose prediction of components that are not found. Default: "none"
Minimum score of the instances of the components to be found. Default: 0.5
Subpixel accuracy of the component poses if not equal to 'none'. Default: "least_squares"
Number of pyramid levels for the components used in the matching (and lowest pyramid level to use if |NumLevelsComp| = 2). Default: 0
"Greediness" of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
End index of each found instance of the component model in the tuples describing the component matches.
Score of the found instances of the component model.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Start index of each found instance of the component model in the tuples describing the component matches.
Prepare a component model for matching based on explicitly specified components and relations.
Instance represents: Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Handle of the component model.
Prepare a component model for matching based on explicitly specified components and relations.
Instance represents: Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Ranking of the model components expressing the suitability to act as the root component.
Handle of the component model.
Adopt new parameters that are used to create the model components into the training result.
Instance represents: Training images that were used for training the model components.
Handle of the training result.
Criterion for solving the ambiguities. Default: "rigidity"
Maximum contour overlap of the found initial components. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Contour regions of rigid model components.
Train components and relations for the component-based matching.
Instance represents: Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Handle of the training result.
Contour regions of rigid model components.
Train components and relations for the component-based matching.
Instance represents: Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Handle of the training result.
Contour regions of rigid model components.
Extract the initial components of a component model.
Instance represents: Input image from which the initial components should be extracted.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of the initial components. Default: "auto"
Type of automatic segmentation. Default: "connection"
Names of optional control parameters. Default: []
Values of optional control parameters. Default: []
Contour regions of initial components.
Extract the initial components of a component model.
Instance represents: Input image from which the initial components should be extracted.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of the initial components. Default: "auto"
Type of automatic segmentation. Default: "connection"
Names of optional control parameters. Default: []
Values of optional control parameters. Default: []
Contour regions of initial components.
Find the best matches of a 3D shape model in an image.
Instance represents: Input image in which the model should be found.
Handle of the 3D shape model.
Minimum score of the instances of the model to be found. Default: 0.7
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the 3D shape model.
3D pose of the 3D shape model.
Convert one-channel images into a multi-channel image.
Instance represents: One-channel images to be combined into a multi-channel image.
Multi-channel image.
Convert a multi-channel image into one-channel images.
Instance represents: Multi-channel image to be decomposed.
Generated one-channel images.
Convert 7 images into a seven-channel image.
Instance represents: Input image 1.
Input image 2.
Input image 3.
Input image 4.
Input image 5.
Input image 6.
Input image 7.
Multi-channel image.
Convert 6 images into a six-channel image.
Instance represents: Input image 1.
Input image 2.
Input image 3.
Input image 4.
Input image 5.
Input image 6.
Multi-channel image.
Convert 5 images into a five-channel image.
Instance represents: Input image 1.
Input image 2.
Input image 3.
Input image 4.
Input image 5.
Multi-channel image.
Convert 4 images into a four-channel image.
Instance represents: Input image 1.
Input image 2.
Input image 3.
Input image 4.
Multi-channel image.
Convert 3 images into a three-channel image.
Instance represents: Input image 1.
Input image 2.
Input image 3.
Multi-channel image.
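Composing and decomposing channels are inverse operations. A minimal halcondotnet sketch; `red`, `green`, and `blue` are assumed one-channel `HObject` images:

```csharp
// Sketch only: build a three-channel (RGB) image from three planes,
// then split it back into its channels.
HObject rgb, r2, g2, b2;
HOperatorSet.Compose3(red, green, blue, out rgb);
HOperatorSet.Decompose3(rgb, out r2, out g2, out b2);
```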
Convert two images into a two-channel image.
Instance represents: Input image 1.
Input image 2.
Multi-channel image.
Convert a seven-channel image into seven images.
Instance represents: Multi-channel image.
Output image 2.
Output image 3.
Output image 4.
Output image 5.
Output image 6.
Output image 7.
Output image 1.
Convert a six-channel image into six images.
Instance represents: Multi-channel image.
Output image 2.
Output image 3.
Output image 4.
Output image 5.
Output image 6.
Output image 1.
Convert a five-channel image into five images.
Instance represents: Multi-channel image.
Output image 2.
Output image 3.
Output image 4.
Output image 5.
Output image 1.
Convert a four-channel image into four images.
Instance represents: Multi-channel image.
Output image 2.
Output image 3.
Output image 4.
Output image 1.
Convert a three-channel image into three images.
Instance represents: Multi-channel image.
Output image 2.
Output image 3.
Output image 1.
Convert a two-channel image into two images.
Instance represents: Multi-channel image.
Output image 2.
Output image 1.
Count channels of image.
Instance represents: One- or multi-channel image.
Number of channels.
Append additional matrices (channels) to the image.
Instance represents: Multi-channel image.
Image to be appended.
Multi-channel image with the channels of Image appended.
Access a channel of a multi-channel image.
Instance represents: Multi-channel image.
Index of channel to be accessed. Default: 1
One channel of MultiChannelImage.
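Counting and accessing channels are commonly combined to iterate over a multi-channel image. A minimal halcondotnet sketch; `multiChannelImage` is an assumed `HObject`:

```csharp
// Sketch only: process each channel of a multi-channel image in turn.
HTuple numChannels;
HOperatorSet.CountChannels(multiChannelImage, out numChannels);
for (int c = 1; c <= numChannels.I; c++)  // channel indices are 1-based
{
    HObject channel;
    HOperatorSet.AccessChannel(multiChannelImage, out channel, c);
    // ... process the single-channel image ...
    channel.Dispose();
}
```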
Tile multiple image objects into a large image with explicit positioning information.
Instance represents: Input images.
Row coordinate of the upper left corner of the input images in the output image. Default: 0
Column coordinate of the upper left corner of the input images in the output image. Default: 0
Row coordinate of the upper left corner of the copied part of the respective input image. Default: -1
Column coordinate of the upper left corner of the copied part of the respective input image. Default: -1
Row coordinate of the lower right corner of the copied part of the respective input image. Default: -1
Column coordinate of the lower right corner of the copied part of the respective input image. Default: -1
Width of the output image. Default: 512
Height of the output image. Default: 512
Tiled output image.
Tile multiple image objects into a large image with explicit positioning information.
Instance represents: Input images.
Row coordinate of the upper left corner of the input images in the output image. Default: 0
Column coordinate of the upper left corner of the input images in the output image. Default: 0
Row coordinate of the upper left corner of the copied part of the respective input image. Default: -1
Column coordinate of the upper left corner of the copied part of the respective input image. Default: -1
Row coordinate of the lower right corner of the copied part of the respective input image. Default: -1
Column coordinate of the lower right corner of the copied part of the respective input image. Default: -1
Width of the output image. Default: 512
Height of the output image. Default: 512
Tiled output image.
Tile multiple image objects into a large image.
Instance represents: Input images.
Number of columns to use for the output image. Default: 1
Order of the input images in the output image. Default: "vertical"
Tiled output image.
Tile multiple images into a large image.
Instance represents: Input image.
Number of columns to use for the output image. Default: 1
Order of the input images in the output image. Default: "vertical"
Tiled output image.
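A minimal halcondotnet sketch of the simple tiling variant; `images` is an assumed `HObject` holding several image objects (e.g. collected with `ConcatObj`):

```csharp
// Sketch only: arrange the input images in a grid with 4 columns,
// filling the tiles in vertical order.
HObject tiled;
HOperatorSet.TileImages(images, out tiled, 4, "vertical");
```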
Cut out the image part defined by the domain.
Instance represents: Input image.
Image area.
Cut out one or more rectangular image areas.
Instance represents: Input image.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Line index of lower right corner of image area. Default: 200
Column index of lower right corner of image area. Default: 200
Image area.
Cut out one or more rectangular image areas.
Instance represents: Input image.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Line index of lower right corner of image area. Default: 200
Column index of lower right corner of image area. Default: 200
Image area.
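A minimal halcondotnet sketch of the corner-based cropping variant; `image` is an assumed `HObject`:

```csharp
// Sketch only: cut out the rectangle with upper left corner (100, 100)
// and lower right corner (200, 200), given as (row, column) pairs.
HObject imagePart;
HOperatorSet.CropRectangle1(image, out imagePart, 100, 100, 200, 200);
```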
Cut out one or more rectangular image areas.
Instance represents: Input image.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Width of new image. Default: 128
Height of new image. Default: 128
Image area.
Cut out one or more rectangular image areas.
Instance represents: Input image.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Width of new image. Default: 128
Height of new image. Default: 128
Image area.
Change image size.
Instance represents: Input image.
Width of new image. Default: 512
Height of new image. Default: 512
Image with new format.
Change definition domain of an image.
Instance represents: Input image.
New definition domain.
Image with new definition domain.
Reduce the domain of an image to a rectangle.
Instance represents: Input image.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Line index of lower right corner of image area. Default: 200
Column index of lower right corner of image area. Default: 200
Image with reduced definition domain.
Reduce the domain of an image.
Instance represents: Input image.
New definition domain.
Image with reduced definition domain.
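Reducing the domain restricts subsequent processing to a region of interest without copying gray values. A minimal halcondotnet sketch; `image` and `region` are assumed `HObject`s:

```csharp
// Sketch only: later operators applied to 'reduced' only process
// pixels inside 'region'.
HObject reduced;
HOperatorSet.ReduceDomain(image, region, out reduced);
```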
Expand the domain of an image to maximum.
Instance represents: Input image.
Image with maximum definition domain.
Get the domain of an image.
Instance represents: Input images.
Definition domains of input images.
Detect lines in edge images with the help of the Hough transform using local gradient direction and return them in normal form.
Instance represents: Image containing the edge direction. The edges are described by the image domain.
Regions of the input image that contributed to the local maxima.
Uncertainty of edge direction (in degrees). Default: 2
Resolution in the angle area (in 1/degrees). Default: 4
Smoothing filter for the Hough image. Default: "mean"
Required smoothing filter size. Default: 5
Threshold value in the Hough image. Default: 100
Minimum distance of two maxima in the Hough image (direction: angle). Default: 5
Minimum distance of two maxima in the Hough image (direction: distance). Default: 5
Create line regions if 'true'. Default: "true"
Angles (in radians) of the detected lines' normal vectors.
Distance of the detected lines from the origin.
Hough transform.
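The parameter listing above can be sketched as a halcondotnet call; `imageDir` is an assumed edge-direction image whose domain marks the edges (e.g. produced by an edge operator such as `EdgesImage`), and the parameter order follows the listing:

```csharp
// Sketch only: detect lines in an edge-direction image via the
// Hough transform and return them in normal form.
HObject houghImage, lines;
HTuple angle, dist;
HOperatorSet.HoughLinesDir(imageDir, out houghImage, out lines,
    2,        // direction uncertainty (degrees)
    4,        // angle resolution (1/degrees)
    "mean", 5,// smoothing filter and filter size
    100,      // threshold in the Hough image
    5, 5,     // minimum distance of maxima (angle, distance)
    "true",   // generate line regions
    out angle, out dist);  // normal form: angle and distance per line
```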
Compute the Hough transform for lines using local gradient direction.
Instance represents: Image containing the edge direction. The edges must be described by the image domain.
Uncertainty of the edge direction (in degrees). Default: 2
Resolution in the angle area (in 1/degrees). Default: 4
Hough transform.
Segment the rectification grid region in the image.
Instance represents: Input image.
Minimum contrast. Default: 8.0
Radius of the circular structuring element. Default: 7.5
Output region containing the rectification grid.
Segment the rectification grid region in the image.
Instance represents: Input image.
Minimum contrast. Default: 8.0
Radius of the circular structuring element. Default: 7.5
Output region containing the rectification grid.
Establish connections between the grid points of the rectification grid.
Instance represents: Input image.
Row coordinates of the grid points.
Column coordinates of the grid points.
Size of the applied Gaussians. Default: 0.9
Maximum distance of the connecting lines from the grid points. Default: 5.5
Output contours.
Establish connections between the grid points of the rectification grid.
Instance represents: Input image.
Row coordinates of the grid points.
Column coordinates of the grid points.
Size of the applied Gaussians. Default: 0.9
Maximum distance of the connecting lines from the grid points. Default: 5.5
Output contours.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input image.
Input contours.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input image.
Input contours.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Calculates image coordinates for a point in a 3D plot window.
Instance represents: Displayed image.
Window handle.
Row coordinate in the window.
Column coordinate in the window.
Row coordinate in the image.
Column coordinate in the image.
Height value.
Calculates image coordinates for a point in a 3D plot window.
Instance represents: Displayed image.
Window handle.
Row coordinate in the window.
Column coordinate in the window.
Row coordinate in the image.
Column coordinate in the image.
Height value.
Write the window content in an image object.
Modified instance represents: Saved image.
Window handle.
Displays gray value images.
Instance represents: Gray value image to display.
Window handle.
Displays images with several channels.
Instance represents: Multichannel images to be displayed.
Window handle.
Number of the channel or the numbers of the RGB channels. Default: 1
Displays images with several channels.
Instance represents: Multichannel images to be displayed.
Window handle.
Number of the channel or the numbers of the RGB channels. Default: 1
Displays a color (RGB) image.
Instance represents: Color image to display.
Window handle.
Visualize images using gnuplot.
Instance represents: Image to be plotted.
Identifier for the gnuplot output stream.
Number of samples in the x-direction. Default: 64
Number of samples in the y-direction. Default: 64
Rotation of the plot about the x-axis. Default: 60
Rotation of the plot about the z-axis. Default: 30
Plot the image with hidden surfaces removed. Default: "hidden3d"
Visualize images using gnuplot.
Instance represents: Image to be plotted.
Identifier for the gnuplot output stream.
Number of samples in the x-direction. Default: 64
Number of samples in the y-direction. Default: 64
Rotation of the plot about the x-axis. Default: 60
Rotation of the plot about the z-axis. Default: 30
Plot the image with hidden surfaces removed. Default: "hidden3d"
Filter an image using a Laws texture filter.
Instance represents: Images to which the texture transformation is to be applied.
Desired filter. Default: "el"
Shift to reduce the gray value dynamics. Default: 2
Size of the filter kernel. Default: 5
Texture images.
Calculate the standard deviation of gray values within rectangular windows.
Instance represents: Image for which the standard deviation is to be calculated.
Width of the mask in which the standard deviation is calculated. Default: 11
Height of the mask in which the standard deviation is calculated. Default: 11
Image containing the standard deviation.
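The windowed standard deviation described above can be illustrated with a minimal pure-Python sketch. This is not the halcondotnet implementation: it assumes the population standard deviation over the mask and simply skips border pixels.

```python
import math

def deviation_window(image, width=3, height=3):
    """Standard deviation of gray values in a width x height mask
    around each inner pixel; border pixels are left at 0."""
    h, w = len(image), len(image[0])
    ry, rx = height // 2, width // 2
    out = [[0.0] * w for _ in range(h)]
    for r in range(ry, h - ry):
        for c in range(rx, w - rx):
            vals = [image[r + dr][c + dc]
                    for dr in range(-ry, ry + 1)
                    for dc in range(-rx, rx + 1)]
            mean = sum(vals) / len(vals)
            out[r][c] = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return out
```

A flat window yields 0; any gray value variation inside the mask yields a positive deviation.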
Calculate the entropy of gray values within a rectangular window.
Instance represents: Image for which the entropy is to be calculated.
Width of the mask in which the entropy is calculated. Default: 9
Height of the mask in which the entropy is calculated. Default: 9
Entropy image.
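The windowed entropy above can be sketched in pure Python. A hedged illustration only, assuming the Shannon entropy (base 2) of the gray value histogram inside the mask; borders are skipped.

```python
import math

def entropy_window(image, width=3, height=3):
    """Shannon entropy of the gray value distribution in a
    width x height mask around each inner pixel."""
    h, w = len(image), len(image[0])
    ry, rx = height // 2, width // 2
    out = [[0.0] * w for _ in range(h)]
    for r in range(ry, h - ry):
        for c in range(rx, w - rx):
            vals = [image[r + dr][c + dc]
                    for dr in range(-ry, ry + 1)
                    for dc in range(-rx, rx + 1)]
            n = len(vals)
            hist = {}
            for v in vals:
                hist[v] = hist.get(v, 0) + 1
            out[r][c] = -sum((k / n) * math.log2(k / n) for k in hist.values())
    return out
```

A constant window has entropy 0; a mix of gray values gives a positive entropy.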
Perform an isotropic diffusion of an image.
Instance represents: Input image.
Standard deviation of the Gauss distribution. Default: 1.0
Number of iterations. Default: 10
Output image.
Perform an anisotropic diffusion of an image.
Instance represents: Input image.
Diffusion coefficient as a function of the edge amplitude. Default: "weickert"
Contrast parameter. Default: 5.0
Time step. Default: 1.0
Number of iterations. Default: 10
Output image.
Smooth an image using various filters.
Instance represents: Image to be smoothed.
Filter. Default: "deriche2"
Filter parameter: small values cause strong smoothing (for 'gauss' the reverse holds). Default: 0.5
Smoothed image.
Non-linear smoothing with the sigma filter.
Instance represents: Image to be smoothed.
Height of the mask (number of lines). Default: 5
Width of the mask (number of columns). Default: 5
Max. deviation to the average. Default: 3
Smoothed image.
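The sigma filter above averages only those mask pixels whose gray value lies within the deviation bound of the center pixel, which smooths noise while preserving edges. A rough pure-Python sketch of that idea (not the halcondotnet implementation; borders untouched):

```python
def sigma_filter(image, mask_h=3, mask_w=3, sigma=3):
    """Average over mask pixels within +/- sigma of the center value."""
    h, w = len(image), len(image[0])
    ry, rx = mask_h // 2, mask_w // 2
    out = [row[:] for row in image]
    for r in range(ry, h - ry):
        for c in range(rx, w - rx):
            center = image[r][c]
            close = [image[r + dr][c + dc]
                     for dr in range(-ry, ry + 1)
                     for dc in range(-rx, rx + 1)
                     if abs(image[r + dr][c + dc] - center) <= sigma]
            out[r][c] = sum(close) / len(close)  # center always qualifies
    return out
```

An outlier neighbor (e.g. 100 among values near 10) is excluded from the average, so it does not bleed into the result.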
Calculate the average of maximum and minimum inside any mask.
Instance represents: Image to be filtered.
Filter mask.
Border treatment. Default: "mirrored"
Filtered image.
Calculate the average of maximum and minimum inside any mask.
Instance represents: Image to be filtered.
Filter mask.
Border treatment. Default: "mirrored"
Filtered image.
Smooth an image with an arbitrary rank mask.
Instance represents: Image to be filtered.
Image whose region serves as filter mask.
Number of averaged pixels. Typical value: Surface(Mask) / 2. Default: 5
Border treatment. Default: "mirrored"
Filtered output image.
Smooth an image with an arbitrary rank mask.
Instance represents: Image to be filtered.
Image whose region serves as filter mask.
Number of averaged pixels. Typical value: Surface(Mask) / 2. Default: 5
Border treatment. Default: "mirrored"
Filtered output image.
Separated median filtering with rectangle masks.
Instance represents: Image to be filtered.
Width of rank mask. Default: 25
Height of rank mask. Default: 25
Border treatment. Default: "mirrored"
Median filtered image.
Separated median filtering with rectangle masks.
Instance represents: Image to be filtered.
Width of rank mask. Default: 25
Height of rank mask. Default: 25
Border treatment. Default: "mirrored"
Median filtered image.
Compute a median filter with rectangular masks.
Instance represents: Image to be filtered.
Width of the filter mask. Default: 15
Height of the filter mask. Default: 15
Filtered image.
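The rectangular median filter above can be sketched directly: each inner pixel is replaced by the median of its mask. A minimal pure-Python illustration, not the halcondotnet implementation (no border treatment):

```python
def median_rect(image, mask_w=3, mask_h=3):
    """Replace each inner pixel by the median of its mask_w x mask_h mask."""
    h, w = len(image), len(image[0])
    ry, rx = mask_h // 2, mask_w // 2
    out = [row[:] for row in image]
    for r in range(ry, h - ry):
        for c in range(rx, w - rx):
            vals = sorted(image[r + dr][c + dc]
                          for dr in range(-ry, ry + 1)
                          for dc in range(-rx, rx + 1))
            out[r][c] = vals[len(vals) // 2]
    return out
```

A single bright spike inside an otherwise uniform mask is removed entirely, which is why the median is the standard choice against impulse noise.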
Compute a median filter with various masks.
Instance represents: Image to be filtered.
Filter mask type. Default: "circle"
Radius of the filter mask. Default: 1
Border treatment. Default: "mirrored"
Filtered image.
Compute a median filter with various masks.
Instance represents: Image to be filtered.
Filter mask type. Default: "circle"
Radius of the filter mask. Default: 1
Border treatment. Default: "mirrored"
Filtered image.
Weighted median filtering with different rank masks.
Instance represents: Image to be filtered.
Type of median mask. Default: "inner"
Mask size. Default: 3
Median filtered image.
Compute a rank filter with rectangular masks.
Instance represents: Image to be filtered.
Width of the filter mask. Default: 15
Height of the filter mask. Default: 15
Rank of the output gray value. Default: 5
Filtered image.
Compute a rank filter with arbitrary masks.
Instance represents: Image to be filtered.
Filter mask.
Rank of the output gray value. Default: 5
Border treatment. Default: "mirrored"
Filtered image.
Compute a rank filter with arbitrary masks.
Instance represents: Image to be filtered.
Filter mask.
Rank of the output gray value. Default: 5
Border treatment. Default: "mirrored"
Filtered image.
Opening, Median and Closing with circle or rectangle mask.
Instance represents: Image to be filtered.
Shape of the mask. Default: "circle"
Radius of the filter mask. Default: 1
Filter mode: 0 corresponds to a gray value opening, 50 to a median, and 100 to a gray value closing. Default: 10
Border treatment. Default: "mirrored"
Filtered Image.
Opening, Median and Closing with circle or rectangle mask.
Instance represents: Image to be filtered.
Shape of the mask. Default: "circle"
Radius of the filter mask. Default: 1
Filter mode: 0 corresponds to a gray value opening, 50 to a median, and 100 to a gray value closing. Default: 10
Border treatment. Default: "mirrored"
Filtered Image.
Smooth by averaging.
Instance represents: Image to be smoothed.
Width of filter mask. Default: 9
Height of filter mask. Default: 9
Smoothed image.
Smooth an image using the binomial filter.
Instance represents: Input image.
Filter width. Default: 5
Filter height. Default: 5
Smoothed image.
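The binomial filter above approximates Gaussian smoothing with integer binomial coefficients and is separable, so in 2-D the same 1-D kernel is applied along rows and then columns. A hedged 1-D sketch (not the halcondotnet implementation; borders left unchanged):

```python
from math import comb

def binomial_smooth_1d(signal, width=5):
    """1-D binomial smoothing with the kernel row of Pascal's triangle,
    e.g. width 5 -> [1, 4, 6, 4, 1] / 16."""
    kernel = [comb(width - 1, k) for k in range(width)]
    norm = 2 ** (width - 1)  # sum of the binomial coefficients
    r = width // 2
    out = list(signal)
    for i in range(r, len(signal) - r):
        out[i] = sum(kernel[j] * signal[i - r + j] for j in range(width)) / norm
    return out
```

A constant signal is reproduced exactly, and an impulse is spread according to the kernel weights.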
Smooth an image using discrete Gaussian functions.
Instance represents: Image to be smoothed.
Required filter size. Default: 5
Filtered image.
Smooth an image using discrete Gaussian functions.
Instance represents: Image to be smoothed.
Required filter size. Default: 5
Filtered image.
Smooth an image in the spatial domain to suppress noise.
Instance represents: Image to smooth.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Gap between local maximum/minimum and all other gray values of the neighborhood. Default: 1.0
Replacement rule (1 = next minimum/maximum, 2 = average, 3 = median). Default: 3
Smoothed image.
Interpolate two video half images.
Instance represents: Gray image consisting of two half images.
Instruction whether even or odd lines should be replaced/removed. Default: "odd"
Full image with interpolated/removed lines.
Return gray values with given rank from multiple channels.
Instance represents: Multichannel gray image.
Rank of the gray value images to return. Default: 2
Result of the rank function.
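The per-pixel rank selection across channels described above can be sketched as follows. A hedged illustration, not the halcondotnet implementation; it assumes the rank counts from 1 (1 = per-pixel minimum, number of channels = per-pixel maximum).

```python
def channels_rank(channels, rank=2):
    """Select, per pixel, the value with the given rank (1-based)
    among all channels of a multichannel image."""
    h, w = len(channels[0]), len(channels[0][0])
    return [[sorted(ch[r][c] for ch in channels)[rank - 1]
             for c in range(w)] for r in range(h)]
```

With three channels, rank 2 yields the per-pixel median of the channel values.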
Average gray values over several channels.
Instance represents: Multichannel gray image.
Result of averaging.
Replace values outside of thresholds with average value.
Instance represents: Input image.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Minimum gray value. Default: 1
Maximum gray value. Default: 254
Smoothed image.
Suppress salt and pepper noise.
Instance represents: Input image.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Minimum gray value. Default: 1
Maximum gray value. Default: 254
Smoothed image.
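One plausible reading of the two operators above: pixels outside [MinThresh, MaxThresh] are treated as salt/pepper outliers and replaced by the average of the in-range pixels in the mask. A hedged pure-Python sketch of that reading, not the halcondotnet implementation (borders untouched):

```python
def suppress_salt_pepper(image, mask_h=3, mask_w=3, lo=1, hi=254):
    """Replace out-of-range pixels by the mean of in-range mask pixels."""
    h, w = len(image), len(image[0])
    ry, rx = mask_h // 2, mask_w // 2
    out = [row[:] for row in image]
    for r in range(ry, h - ry):
        for c in range(rx, w - rx):
            if lo <= image[r][c] <= hi:
                continue  # plausible gray value, keep it
            good = [image[r + dr][c + dc]
                    for dr in range(-ry, ry + 1)
                    for dc in range(-rx, rx + 1)
                    if lo <= image[r + dr][c + dc] <= hi]
            if good:
                out[r][c] = sum(good) / len(good)
    return out
```

A 255 spike surrounded by values of 100 is pulled back to 100, while in-range pixels are left unchanged.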
Find corners using the Sojka operator.
Instance represents: Input image.
Required filter size. Default: 9
Sigma of the weight function according to the distance to the corner candidate. Default: 2.5
Sigma of the weight function for the distance to the ideal gray value edge. Default: 0.75
Threshold for the magnitude of the gradient. Default: 30.0
Threshold for Apparentness. Default: 90.0
Threshold for the direction change in a corner point (radians). Default: 0.5
Subpixel precise calculation of the corner points. Default: "false"
Row coordinates of the detected corner points.
Column coordinates of the detected corner points.
Find corners using the Sojka operator.
Instance represents: Input image.
Required filter size. Default: 9
Sigma of the weight function according to the distance to the corner candidate. Default: 2.5
Sigma of the weight function for the distance to the ideal gray value edge. Default: 0.75
Threshold for the magnitude of the gradient. Default: 30.0
Threshold for Apparentness. Default: 90.0
Threshold for the direction change in a corner point (radians). Default: 0.5
Subpixel precise calculation of the corner points. Default: "false"
Row coordinates of the detected corner points.
Column coordinates of the detected corner points.
Enhance circular dots in an image.
Instance represents: Input image.
Diameter of the dots to be enhanced. Default: 5
Enhance dark, light, or all dots. Default: "light"
Shift of the filter response. Default: 0
Output image.
Subpixel precise detection of local minima in an image.
Instance represents: Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected minima.
Column coordinates of the detected minima.
Subpixel precise detection of local maxima in an image.
Instance represents: Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected maxima.
Column coordinates of the detected maxima.
Subpixel precise detection of saddle points in an image.
Instance represents: Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected saddle points.
Column coordinates of the detected saddle points.
Subpixel precise detection of critical points in an image.
Instance represents: Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected minima.
Column coordinates of the detected minima.
Row coordinates of the detected maxima.
Column coordinates of the detected maxima.
Row coordinates of the detected saddle points.
Column coordinates of the detected saddle points.
Detect points of interest using the Harris operator.
Instance represents: Input image.
Amount of smoothing used for the calculation of the gradient. Default: 0.7
Amount of smoothing used for the integration of the gradients. Default: 2.0
Weight of the squared trace of the squared gradient matrix. Default: 0.08
Minimum filter response for the points. Default: 1000.0
Row coordinates of the detected points.
Column coordinates of the detected points.
Detect points of interest using the Harris operator.
Instance represents: Input image.
Amount of smoothing used for the calculation of the gradient. Default: 0.7
Amount of smoothing used for the integration of the gradients. Default: 2.0
Weight of the squared trace of the squared gradient matrix. Default: 0.08
Minimum filter response for the points. Default: 1000.0
Row coordinates of the detected points.
Column coordinates of the detected points.
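The Harris response underlying the operator above is R = det(M) - kappa * trace(M)^2, where M is the smoothed outer product of the image gradients and kappa is the trace weight (default 0.08). The sketch below is not the halcondotnet implementation: it replaces the Gaussian smoothing (SigmaGrad/SigmaSmooth) with central differences and a fixed 3x3 integration window, and omits thresholding, non-maximum suppression, and subpixel refinement.

```python
def harris_response(image, kappa=0.08):
    """Harris corner response with central-difference gradients and a
    3x3 box integration window; borders are left at 0."""
    h, w = len(image), len(image[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            ix[r][c] = (image[r][c + 1] - image[r][c - 1]) / 2.0
            iy[r][c] = (image[r + 1][c] - image[r - 1][c]) / 2.0
    resp = [[0.0] * w for _ in range(h)]
    for r in range(2, h - 2):
        for c in range(2, w - 2):
            sxx = syy = sxy = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    gx, gy = ix[r + dr][c + dc], iy[r + dr][c + dc]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            resp[r][c] = sxx * syy - sxy * sxy - kappa * (sxx + syy) ** 2
    return resp
```

At the corner of a bright square both gradient directions occur inside the window, so det(M) and hence R are large; in flat regions R is 0, and along straight edges R becomes negative.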
Detect points of interest using the binomial approximation of the Harris operator.
Instance represents: Input image.
Amount of binomial smoothing used for the calculation of the gradient. Default: 5
Amount of smoothing used for the integration of the gradients. Default: 15
Weight of the squared trace of the squared gradient matrix. Default: 0.08
Minimum filter response for the points. Default: 1000.0
Turn on or off subpixel refinement. Default: "on"
Row coordinates of the detected points.
Column coordinates of the detected points.
Detect points of interest using the binomial approximation of the Harris operator.
Instance represents: Input image.
Amount of binomial smoothing used for the calculation of the gradient. Default: 5
Amount of smoothing used for the integration of the gradients. Default: 15
Weight of the squared trace of the squared gradient matrix. Default: 0.08
Minimum filter response for the points. Default: 1000.0
Turn on or off subpixel refinement. Default: "on"
Row coordinates of the detected points.
Column coordinates of the detected points.
Detect points of interest using the Lepetit operator.
Instance represents: Input image.
Radius of the circle. Default: 3
Number of checked neighbors on the circle. Default: 1
Threshold on the gray value difference to each circle point. Default: 15
Threshold on the gray value difference to all circle points. Default: 30
Subpixel accuracy of point coordinates. Default: "interpolation"
Row coordinates of the detected points.
Column coordinates of the detected points.
Detect points of interest using the Foerstner operator.
Instance represents: Input image.
Amount of smoothing used for the calculation of the gradient. If Smoothing is 'mean', SigmaGrad is ignored. Default: 1.0
Amount of smoothing used for the integration of the gradients. Default: 2.0
Amount of smoothing used in the optimization functions. Default: 3.0
Threshold for the segmentation of inhomogeneous image areas. Default: 200
Threshold for the segmentation of point areas. Default: 0.3
Used smoothing method. Default: "gauss"
Elimination of multiply detected points. Default: "false"
Row coordinates of the detected junction points.
Column coordinates of the detected junction points.
Row part of the covariance matrix of the detected junction points.
Mixed part of the covariance matrix of the detected junction points.
Column part of the covariance matrix of the detected junction points.
Row coordinates of the detected area points.
Column coordinates of the detected area points.
Row part of the covariance matrix of the detected area points.
Mixed part of the covariance matrix of the detected area points.
Column part of the covariance matrix of the detected area points.
Detect points of interest using the Foerstner operator.
Instance represents: Input image.
Amount of smoothing used for the calculation of the gradient. If Smoothing is 'mean', SigmaGrad is ignored. Default: 1.0
Amount of smoothing used for the integration of the gradients. Default: 2.0
Amount of smoothing used in the optimization functions. Default: 3.0
Threshold for the segmentation of inhomogeneous image areas. Default: 200
Threshold for the segmentation of point areas. Default: 0.3
Used smoothing method. Default: "gauss"
Elimination of multiply detected points. Default: "false"
Row coordinates of the detected junction points.
Column coordinates of the detected junction points.
Row part of the covariance matrix of the detected junction points.
Mixed part of the covariance matrix of the detected junction points.
Column part of the covariance matrix of the detected junction points.
Row coordinates of the detected area points.
Column coordinates of the detected area points.
Row part of the covariance matrix of the detected area points.
Mixed part of the covariance matrix of the detected area points.
Column part of the covariance matrix of the detected area points.
Estimate the image noise from a single image.
Instance represents: Input image.
Method to estimate the image noise. Default: "foerstner"
Percentage of used image points. Default: 20
Standard deviation of the image noise.
Estimate the image noise from a single image.
Instance represents: Input image.
Method to estimate the image noise. Default: "foerstner"
Percentage of used image points. Default: 20
Standard deviation of the image noise.
Determine the noise distribution of an image.
Instance represents: Corresponding image.
Region from which the noise distribution is to be estimated.
Size of the mean filter. Default: 21
Noise distribution of all input regions.
Add noise to an image.
Instance represents: Input image.
Maximum noise amplitude. Default: 60.0
Noisy image.
Add noise to an image.
Instance represents: Input image.
Noise distribution.
Noisy image.
Calculate standard deviation over several channels.
Instance represents: Multichannel gray image.
Result of calculation.
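The per-pixel standard deviation over channels described above can be sketched in a few lines. A hedged illustration assuming the population standard deviation, not the halcondotnet implementation:

```python
import math

def channels_deviation(channels):
    """Per-pixel population standard deviation across the channels
    of a multichannel image."""
    n = len(channels)
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [ch[r][c] for ch in channels]
            mean = sum(vals) / n
            out[r][c] = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
    return out
```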
Perform an inpainting by texture propagation.
Instance represents: Input image.
Inpainting region.
Size of the inpainting blocks. Default: 9
Size of the search window. Default: 30
Influence of the edge amplitude on the inpainting order. Default: 1.0
Post-iteration for artifact reduction. Default: "none"
Gray value tolerance for post-iteration. Default: 1.0
Output image.
Perform an inpainting by coherence transport.
Instance represents: Input image.
Inpainting region.
Radius of the pixel neighborhood. Default: 5.0
Sharpness parameter in percent. Default: 25.0
Pre-smoothing parameter. Default: 1.41
Smoothing parameter for the direction estimation. Default: 4.0
Channel weights. Default: 1
Output image.
Perform an inpainting by coherence transport.
Instance represents: Input image.
Inpainting region.
Radius of the pixel neighborhood. Default: 5.0
Sharpness parameter in percent. Default: 25.0
Pre-smoothing parameter. Default: 1.41
Smoothing parameter for the direction estimation. Default: 4.0
Channel weights. Default: 1
Output image.
Perform an inpainting by smoothing of level lines.
Instance represents: Input image.
Inpainting region.
Smoothing for derivative operator. Default: 0.5
Time step. Default: 0.5
Number of iterations. Default: 10
Output image.
Perform an inpainting by coherence enhancing diffusion.
Instance represents: Input image.
Inpainting region.
Smoothing for derivative operator. Default: 0.5
Smoothing for diffusion coefficients. Default: 3.0
Time step. Default: 0.5
Number of iterations. Default: 10
Output image.
Perform an inpainting by anisotropic diffusion.
Instance represents: Input image.
Inpainting region.
Type of edge sharpening algorithm. Default: "weickert"
Contrast parameter. Default: 5.0
Step size. Default: 0.5
Number of iterations. Default: 10
Smoothing coefficient for edge information. Default: 3.0
Output image.
Perform a harmonic interpolation on an image region.
Instance represents: Input image.
Inpainting region.
Computational accuracy. Default: 0.001
Output image.
Expand the domain of an image and set the gray values in the expanded domain.
Instance represents: Input image with domain to be expanded.
Radius of the gray value expansion, measured in pixels. Default: 2
Output image with new gray values in the expanded domain.
Compute the topographic primal sketch of an image.
Instance represents: Image for which the topographic primal sketch is to be computed.
Label image containing the 11 classes.
Compute an affine transformation of the color values of a multichannel image.
Instance represents: Multichannel input image.
Transformation matrix for the color values.
Multichannel output image.
Compute the transformation matrix of the principal component analysis of multichannel images.
Instance represents: Multichannel input image.
Transformation matrix for the computation of the inverse PCA.
Mean gray value of the channels.
Covariance matrix of the channels.
Information content of the transformed channels.
Transformation matrix for the computation of the PCA.
Compute the principal components of multichannel images.
Instance represents: Multichannel input image.
Information content of each output channel.
Multichannel output image.
Determine the fuzzy entropy of regions.
Instance represents: Input image containing the fuzzy membership values.
Regions for which the fuzzy entropy is to be calculated.
Start of the fuzzy function. Default: 0
End of the fuzzy function. Default: 255
Fuzzy entropy of a region.
Calculate the fuzzy perimeter of a region.
Instance represents: Input image containing the fuzzy membership values.
Regions for which the fuzzy perimeter is to be calculated.
Start of the fuzzy function. Default: 0
End of the fuzzy function. Default: 255
Fuzzy perimeter of a region.
Perform a gray value closing with a selected mask.
Instance represents: Image for which the gray value closing is to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Gray value closed image.
Perform a gray value closing with a selected mask.
Instance represents: Image for which the gray value closing is to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Gray value closed image.
Perform a gray value opening with a selected mask.
Instance represents: Image for which the gray value opening is to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Gray value opened image.
Perform a gray value opening with a selected mask.
Instance represents: Image for which the gray value opening is to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Gray value opened image.
Determine the minimum gray value within a selected mask.
Instance represents: Image for which the minimum gray values are to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Image containing the minimum gray values.
Determine the minimum gray value within a selected mask.
Instance represents: Image for which the minimum gray values are to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Image containing the minimum gray values.
Determine the maximum gray value within a selected mask.
Instance represents: Image for which the maximum gray values are to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Image containing the maximum gray values.
Determine the maximum gray value within a selected mask.
Instance represents: Image for which the maximum gray values are to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Image containing the maximum gray values.
Determine the gray value range within a rectangle.
Instance represents: Image for which the gray value range is to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Image containing the gray value range.
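The gray value range above is simply the difference between the maximum and minimum inside the mask. A minimal pure-Python sketch, not the halcondotnet implementation (borders left at 0):

```python
def gray_range_rect(image, mask_h=3, mask_w=3):
    """Max minus min of the gray values in a mask_w x mask_h mask."""
    h, w = len(image), len(image[0])
    ry, rx = mask_h // 2, mask_w // 2
    out = [[0] * w for _ in range(h)]
    for r in range(ry, h - ry):
        for c in range(rx, w - rx):
            vals = [image[r + dr][c + dc]
                    for dr in range(-ry, ry + 1)
                    for dc in range(-rx, rx + 1)]
            out[r][c] = max(vals) - min(vals)
    return out
```

The result is 0 in flat regions and large near edges, which makes the operator useful as a simple local-contrast detector.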
Perform a gray value closing with a rectangular mask.
Instance represents: Input image.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Gray-closed image.
Perform a gray value opening with a rectangular mask.
Instance represents: Input image.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Gray-opened image.
Determine the minimum gray value within a rectangle.
Instance represents: Image for which the minimum gray values are to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Image containing the minimum gray values.
Determine the maximum gray value within a rectangle.
Instance represents: Image for which the maximum gray values are to be calculated.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Image containing the maximum gray values.
Thinning of gray value images.
Instance represents: Image to be thinned.
Thinned image.
Transform an image with a gray value look-up table.
Instance represents: Image whose gray values are to be transformed.
Table containing the transformation.
Transformed image.
Calculate the correlation between an image and an arbitrary filter mask.
Instance represents: Images for which the correlation will be calculated.
Filter mask as file name or tuple. Default: "sobel"
Border treatment. Default: "mirrored"
Result of the correlation.
Calculate the correlation between an image and an arbitrary filter mask.
Instance represents: Images for which the correlation will be calculated.
Filter mask as file name or tuple. Default: "sobel"
Border treatment. Default: "mirrored"
Result of the correlation.
Convert the type of an image.
Instance represents: Image whose image type is to be changed.
Desired image type (i.e., type of the gray values). Default: "byte"
Converted image.
Convert two real-valued images into a vector field image.
Instance represents: Vector component in the row direction.
Vector component in the column direction.
Semantic kind of the vector field. Default: "vector_field_relative"
Displacement vector field.
Convert a vector field image into two real-valued images.
Instance represents: Vector field.
Vector component in the column direction.
Vector component in the row direction.
Convert two real images into a complex image.
Instance represents: Real part.
Imaginary part.
Complex image.
Convert a complex image into two real images.
Instance represents: Complex image.
Imaginary part.
Real part.
Paint regions with their average gray value.
Instance represents: Original gray value image.
Input regions.
Result image with painted regions.
Calculate the lowest possible gray value on an arbitrary path to the image border for each point in the image.
Instance represents: Image being processed.
Result image.
Symmetry of gray values along a row.
Instance represents: Input image.
Extension of search area. Default: 40
Angle of test direction. Default: 0.0
Exponent for weighting. Default: 0.5
Symmetry image.
Selection of gray values of a multi-channel image using an index image.
Instance represents: Multi-channel gray value image.
Image, where pixel values are interpreted as channel index.
Resulting image.
Extract depth using multiple focus levels.
Instance represents: Multichannel gray image consisting of multiple focus levels.
Confidence of depth estimation.
Filter used to find sharp pixels. Default: "highpass"
Method used to find sharp pixels. Default: "next_maximum"
Depth image.
Extract depth using multiple focus levels.
Instance represents: Multichannel gray image consisting of multiple focus levels.
Confidence of depth estimation.
Filter used to find sharp pixels. Default: "highpass"
Method used to find sharp pixels. Default: "next_maximum"
Depth image.
Compute the uncalibrated scene flow between two stereo image pairs.
Instance represents: Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Estimated change in disparity.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Estimated optical flow.
Compute the uncalibrated scene flow between two stereo image pairs.
Instance represents: Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Estimated change in disparity.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Estimated optical flow.
Unwarp an image using a vector field.
Instance represents: Input image.
Input vector field.
Unwarped image.
Convolve a vector field with derivatives of the Gaussian.
Instance represents: Input vector field.
Sigma of the Gaussian. Default: 1.0
Component to be calculated. Default: "mean_curvature"
Filtered result images.
Convolve a vector field with derivatives of the Gaussian.
Instance represents: Input vector field.
Sigma of the Gaussian. Default: 1.0
Component to be calculated. Default: "mean_curvature"
Filtered result images.
Compute the length of the vectors of a vector field.
Instance represents: Input vector field
Mode for computing the length of the vectors. Default: "length"
Length of the vectors of the vector field.
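For the 'length' mode described above, the result is the per-pixel Euclidean length of the row and column components. A hedged sketch over two component images, not the halcondotnet implementation (other modes are not covered):

```python
import math

def vector_field_length(rows, cols):
    """Per-pixel Euclidean length of a vector field given as two
    component images (row components and column components)."""
    return [[math.hypot(rv, cv) for rv, cv in zip(rrow, crow)]
            for rrow, crow in zip(rows, cols)]
```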
Compute the optical flow between two images.
Instance represents: Input image 1.
Input image 2.
Algorithm for computing the optical flow. Default: "fdrig"
Standard deviation for initial Gaussian smoothing. Default: 0.8
Standard deviation of the integration filter. Default: 1.0
Weight of the smoothing term relative to the data term. Default: 20.0
Weight of the gradient constancy relative to the gray value constancy. Default: 5.0
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "accurate"
Optical flow.
Compute the optical flow between two images.
Instance represents: Input image 1.
Input image 2.
Algorithm for computing the optical flow. Default: "fdrig"
Standard deviation for initial Gaussian smoothing. Default: 0.8
Standard deviation of the integration filter. Default: 1.0
Weight of the smoothing term relative to the data term. Default: 20.0
Weight of the gradient constancy relative to the gray value constancy. Default: 5.0
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "accurate"
Optical flow.
Matching a template and an image in a resolution pyramid.
Instance represents: Input image.
The domain of this image will be matched with Image.
Desired matching criterion. Default: "dfd"
Start level in the resolution pyramid (highest resolution: level 0). Default: 1
Threshold to determine the "region of interest". Default: 30
Result image and result region: values of the matching criterion within the determined "region of interest".
Preparing a pattern for template matching with rotation.
Instance represents: Input image whose domain will be processed for the pattern matching.
Maximal number of pyramid levels. Default: 4
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Step rate (angle precision) of matching. Default: 0.0982
Kind of optimizing. Default: "sort"
Kind of gray values. Default: "original"
Template number.
Preparing a pattern for template matching.
Instance represents: Input image whose domain will be processed for the pattern matching.
Not yet in use. Default: 255
Maximal number of pyramid levels. Default: 4
Kind of optimizing. Default: "sort"
Kind of gray values. Default: "original"
Template number.
Adapting a template to the size of an image.
Instance represents: Image which determines the size of the later matching.
Template number.
Searching all good gray value matches in a pyramid.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Number of levels in the pyramid. Default: 3
All points which have an error below a certain threshold.
Searching all good gray value matches in a pyramid.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Number of levels in the pyramid. Default: 3
All points which have an error below a certain threshold.
Searching the best gray value matches in a pre-generated pyramid.
Instance represents: Image pyramid inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Resolution level up to which the method "best match" is used. Default: "original"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pre-generated pyramid.
Instance represents: Image pyramid inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Resolution level up to which the method "best match" is used. Default: "original"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pyramid.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Exactness in subpixels in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 4
Resolution level up to which the method "best match" is used. Default: 2
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pyramid.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Exactness in subpixels in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 4
Resolution level up to which the method "best match" is used. Default: 2
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
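The gray value matching operators above all rank candidate positions by the average difference between the template and an image window; the pyramid variants simply run this search on a coarse level first and refine the best candidates downwards. A minimal single-level sketch in Python follows; the function names and the exhaustive search are illustrative only, not the halcondotnet API.

```python
import numpy as np

def mean_abs_diff(img, tmpl, r, c):
    """Average gray value difference between the template and the window at (r, c)."""
    h, w = tmpl.shape
    window = img[r:r + h, c:c + w].astype(float)
    return float(np.mean(np.abs(window - tmpl.astype(float))))

def best_match(img, tmpl, max_error=30.0):
    """Position with the smallest mean gray value difference, or no match
    if even the best candidate exceeds max_error."""
    h, w = tmpl.shape
    best_r, best_c, best_err = -1, -1, float('inf')
    for r in range(img.shape[0] - h + 1):
        for c in range(img.shape[1] - w + 1):
            err = mean_abs_diff(img, tmpl, r, c)
            if err < best_err:
                best_r, best_c, best_err = r, c, err
    if best_err > max_error:
        return -1, -1, best_err
    return best_r, best_c, best_err

# Tiny demo: plant the template in a flat image and find it again.
tmpl = np.arange(9, dtype=np.uint8).reshape(3, 3) * 20
img = np.zeros((16, 16), dtype=np.uint8)
img[5:8, 7:10] = tmpl
row, col, err = best_match(img, tmpl)
```

Here max_error plays the same role as the "Maximal average difference of the gray values" parameter: candidates whose average difference exceeds it are rejected.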
Searching all good matches of a template and an image.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 20.0
All points whose error lies below a certain threshold.
Searching the best matching of a template and a pyramid with rotation.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 40.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and a pyramid with rotation.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 40.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image with rotation.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image with rotation.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximum average difference of the gray values. Default: 20.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image.
Instance represents: Input image inside of which the pattern has to be found.
Template number.
Maximum average difference of the gray values. Default: 20.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values of the best match.
Matching of a template and an image.
Instance represents: Input image.
Area to be searched in the input image.
This area will be "matched" by Image within the RegionOfInterest.
Desired matching criterion. Default: "dfd"
Result image: values of the matching criterion.
Searching corners in images.
Instance represents: Input image.
Desired filter size of the gray mask. Default: 3
Weighting. Default: 0.04
Result of the filtering.
Calculating a Gauss pyramid.
Instance represents: Input image.
Kind of filter mask. Default: "weighted"
Factor for scaling down. Default: 0.5
Output images.
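A Gauss pyramid repeatedly smooths with the "weighted" (binomial) mask and subsamples by the scale factor (0.5 by default). The numpy sketch below illustrates that idea under those assumptions; it is not the HALCON implementation, and the edge padding chosen here is arbitrary.

```python
import numpy as np

def gauss_pyramid(image, levels=4):
    """Smooth with a 3x3 binomial ('weighted') mask, then subsample by 0.5."""
    k = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k, k) / 16.0          # normalized binomial mask
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        padded = np.pad(img, 1, mode='edge')
        # Same-size convolution via shifted, weighted slices of the padded image.
        smooth = sum(kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3))
        pyramid.append(smooth[::2, ::2])    # Scale = 0.5
    return pyramid

pyr = gauss_pyramid(np.full((16, 16), 100.0), levels=3)
```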
Calculating the monotony operation.
Instance represents: Input image.
Result of the monotony operator.
Edge extraction using bandpass filters.
Instance represents: Input images.
Filter type: currently only 'lines' is supported. Default: "lines"
Bandpass-filtered images.
Detect color lines and their width.
Instance represents: Input image.
Amount of Gaussian smoothing to be applied. Default: 1.5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Should the line width be extracted? Default: "true"
Should junctions be added where they cannot be extracted? Default: "true"
Extracted lines.
Detect color lines and their width.
Instance represents: Input image.
Amount of Gaussian smoothing to be applied. Default: 1.5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Should the line width be extracted? Default: "true"
Should junctions be added where they cannot be extracted? Default: "true"
Extracted lines.
Detect lines and their width.
Instance represents: Input image.
Amount of Gaussian smoothing to be applied. Default: 1.5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Extract bright or dark lines. Default: "light"
Should the line width be extracted? Default: "true"
Line model used to correct the line position and width. Default: "bar-shaped"
Should junctions be added where they cannot be extracted? Default: "true"
Extracted lines.
Detect lines and their width.
Instance represents: Input image.
Amount of Gaussian smoothing to be applied. Default: 1.5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Extract bright or dark lines. Default: "light"
Should the line width be extracted? Default: "true"
Line model used to correct the line position and width. Default: "bar-shaped"
Should junctions be added where they cannot be extracted? Default: "true"
Extracted lines.
Detection of lines using the facet model.
Instance represents: Input image.
Size of the facet model mask. Default: 5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Extract bright or dark lines. Default: "light"
Extracted lines.
Detection of lines using the facet model.
Instance represents: Input image.
Size of the facet model mask. Default: 5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Extract bright or dark lines. Default: "light"
Extracted lines.
Store a filter mask in the spatial domain as a real-image.
Modified instance represents: Filter in the spatial domain.
Filter mask as file name or tuple. Default: "gauss"
Scaling factor. Default: 1.0
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Store a filter mask in the spatial domain as a real-image.
Modified instance represents: Filter in the spatial domain.
Filter mask as file name or tuple. Default: "gauss"
Scaling factor. Default: 1.0
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a mean filter in the frequency domain.
Modified instance represents: Mean filter as image in the frequency domain.
Shape of the filter mask in the spatial domain. Default: "ellipse"
Diameter of the mean filter in the principal direction of the filter in the spatial domain. Default: 11.0
Diameter of the mean filter perpendicular to the principal direction of the filter in the spatial domain. Default: 11.0
Principal direction of the filter in the spatial domain. Default: 0.0
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a Gaussian filter in the frequency domain.
Modified instance represents: Gaussian filter as image in the frequency domain.
Standard deviation of the Gaussian in the principal direction of the filter in the spatial domain. Default: 1.0
Standard deviation of the Gaussian perpendicular to the principal direction of the filter in the spatial domain. Default: 1.0
Principal direction of the filter in the spatial domain. Default: 0.0
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a derivative filter in the frequency domain.
Modified instance represents: Derivative filter as image in the frequency domain.
Derivative to be computed. Default: "x"
Exponent used in the reverse transform. Default: 1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a bandpass filter with Gaussian or sinusoidal shape.
Modified instance represents: Bandpass filter as image in the frequency domain.
Distance of the filter's maximum from the DC term. Default: 0.1
Bandwidth of the filter (standard deviation). Default: 0.01
Filter type. Default: "sin"
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a bandpass filter with sinusoidal shape.
Modified instance represents: Bandpass filter as image in the frequency domain.
Distance of the filter's maximum from the DC term. Default: 0.1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal band filter.
Modified instance represents: Band filter in the frequency domain.
Minimum frequency. Default: 0.1
Maximum frequency. Default: 0.2
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal bandpass filter.
Modified instance represents: Bandpass filter in the frequency domain.
Minimum frequency. Default: 0.1
Maximum frequency. Default: 0.2
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal lowpass filter.
Modified instance represents: Lowpass filter in the frequency domain.
Cutoff frequency. Default: 0.1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
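An ideal lowpass filter is simply a disk of ones around the DC term, zero outside. The sketch below assumes the "dc_center" layout and interprets Frequency as a fraction of the half image size; that interpretation is an assumption for illustration, not HALCON's exact convention.

```python
import numpy as np

def gen_lowpass(frequency=0.1, width=512, height=512):
    """Ideal lowpass: 1 inside a disk around the centered DC term, 0 outside."""
    y = np.arange(height) - height // 2
    x = np.arange(width) - width // 2
    yy, xx = np.meshgrid(y, x, indexing='ij')
    radius = frequency * (min(width, height) / 2.0)  # assumed scaling of Frequency
    return (np.hypot(yy, xx) <= radius).astype(float)

lp = gen_lowpass(frequency=0.25, width=64, height=64)
```

The matching ideal highpass is the complement, `1 - lp`.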
Generate an ideal highpass filter.
Modified instance represents: Highpass filter in the frequency domain.
Cutoff frequency. Default: 0.1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Return the power spectrum of a complex image.
Instance represents: Input image in frequency domain.
Power spectrum of the input image.
Return the power spectrum of a complex image.
Instance represents: Input image in frequency domain.
Power spectrum of the input image.
Return the power spectrum of a complex image.
Instance represents: Input image in frequency domain.
Power spectrum of the input image.
Return the phase of a complex image in degrees.
Instance represents: Input image in frequency domain.
Phase of the image in degrees.
Return the phase of a complex image in radians.
Instance represents: Input image in frequency domain.
Phase of the image in radians.
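For a complex frequency-domain image F, the power spectrum and phase operators above correspond to the standard definitions |F|^2 and arg(F). A short numpy illustration (HALCON's exact scaling of the power spectrum is not reproduced here):

```python
import numpy as np

F = np.fft.fft2(np.eye(4))         # an arbitrary complex frequency-domain image
power = np.abs(F) ** 2             # power spectrum, |F|^2
phase_rad = np.angle(F)            # phase in radians, in (-pi, pi]
phase_deg = np.degrees(phase_rad)  # phase in degrees
```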
Calculate the energy of a two-channel image.
Instance represents: 1st channel of input image (usually: Gabor image).
2nd channel of input image (usually: Hilbert image).
Image containing the local energy.
Convolve an image with a Gabor filter in the frequency domain.
Instance represents: Input image.
Gabor/Hilbert-Filter.
Result of the Hilbert filter.
Result of the Gabor filter.
Generate a Gabor filter.
Modified instance represents: Gabor and Hilbert filter.
Angle range, inversely proportional to the range of orientations. Default: 1.4
Distance of the center of the filter to the DC term. Default: 0.4
Bandwidth range, inversely proportional to the range of frequencies being passed. Default: 1.0
Angle of the principal orientation. Default: 1.5
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Compute the phase correlation of two images in the frequency domain.
Instance represents: Fourier-transformed input image 1.
Fourier-transformed input image 2.
Phase correlation of the input images in the frequency domain.
Compute the correlation of two images in the frequency domain.
Instance represents: Fourier-transformed input image 1.
Fourier-transformed input image 2.
Correlation of the input images in the frequency domain.
Convolve an image with a filter in the frequency domain.
Instance represents: Complex input image.
Filter in frequency domain.
Result of applying the filter.
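Convolution in the frequency domain is pointwise multiplication of the transformed image with the filter. A numpy sketch of that textbook definition; the operator's normalization and DC-term options are not modeled:

```python
import numpy as np

img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0
F = np.fft.fft2(img)                    # forward transform of the input
H = np.ones((8, 8))                     # identity filter: all frequencies pass
result = np.real(np.fft.ifft2(F * H))   # pointwise multiply, transform back
```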
Compute the real-valued fast Fourier transform of an image.
Instance represents: Input image.
Calculate forward or reverse transform. Default: "to_freq"
Normalizing factor of the transform. Default: "sqrt"
Image type of the output image. Default: "complex"
Width of the image for which the runtime should be optimized. Default: 512
Fourier-transformed image.
Compute the inverse fast Fourier transform of an image.
Instance represents: Input image.
Inverse-Fourier-transformed image.
Compute the fast Fourier transform of an image.
Instance represents: Input image.
Fourier-transformed image.
Compute the fast Fourier transform of an image.
Instance represents: Input image.
Calculate forward or reverse transform. Default: "to_freq"
Sign of the exponent. Default: -1
Normalizing factor of the transform. Default: "sqrt"
Location of the DC term in the frequency domain. Default: "dc_center"
Image type of the output image. Default: "complex"
Fourier-transformed image.
Apply a shock filter to an image.
Instance represents: Input image.
Time step. Default: 0.5
Number of iterations. Default: 10
Type of edge detector. Default: "canny"
Smoothing of edge detector. Default: 1.0
Output image.
Apply the mean curvature flow to an image.
Instance represents: Input image.
Smoothing parameter for derivative operator. Default: 0.5
Time step. Default: 0.5
Number of iterations. Default: 10
Output image.
Perform a coherence enhancing diffusion of an image.
Instance represents: Input image.
Smoothing for derivative operator. Default: 0.5
Smoothing for diffusion coefficients. Default: 3.0
Time step. Default: 0.5
Number of iterations. Default: 10
Output image.
Histogram linearization of images.
Instance represents: Image to be enhanced.
Image with linearized gray values.
Illuminate image.
Instance represents: Image to be enhanced.
Width of low pass mask. Default: 101
Height of low pass mask. Default: 101
Scales the "correction gray value" added to the original gray values. Default: 0.7
"Illuminated" image.
Enhance contrast of the image.
Instance represents: Image to be enhanced.
Width of low pass mask. Default: 7
Height of the low pass mask. Default: 7
Intensity of contrast emphasis. Default: 1.0
Contrast-enhanced image.
Maximum gray value spreading in the value range 0 to 255.
Instance represents: Image to be scaled.
Contrast-enhanced image.
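Maximum gray value spreading maps the image's minimum to 0 and its maximum to 255. A conceptual numpy sketch; rounding and clipping to the byte type are omitted:

```python
import numpy as np

def scale_image_max(img):
    # Map [min, max] linearly onto [0, 255].
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) * (255.0 / (hi - lo))

scaled = scale_image_max(np.array([[10.0, 20.0], [30.0, 50.0]]))
```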
Detect edges (amplitude and direction) using the Robinson operator.
Instance represents: Input image.
Edge direction image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Robinson operator.
Instance represents: Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Kirsch operator.
Instance represents: Input image.
Edge direction image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Kirsch operator.
Instance represents: Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Frei-Chen operator.
Instance represents: Input image.
Edge direction image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Frei-Chen operator.
Instance represents: Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Prewitt operator.
Instance represents: Input image.
Edge direction image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Prewitt operator.
Instance represents: Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Sobel operator.
Instance represents: Input image.
Filter type. Default: "sum_abs"
Size of filter mask. Default: 3
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Sobel operator.
Instance represents: Input image.
Filter type. Default: "sum_abs"
Size of filter mask. Default: 3
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Sobel operator.
Instance represents: Input image.
Edge direction image.
Filter type. Default: "sum_abs"
Size of filter mask. Default: 3
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Sobel operator.
Instance represents: Input image.
Edge direction image.
Filter type. Default: "sum_abs"
Size of filter mask. Default: 3
Edge amplitude (gradient magnitude) image.
Detect edges using the Roberts filter.
Instance represents: Input image.
Filter type. Default: "gradient_sum"
Roberts-filtered result images.
Calculate the Laplace operator by using finite differences.
Instance represents: Input image.
Type of the result image; for byte and uint2 the absolute value is used. Default: "absolute"
Size of filter mask. Default: 3
Filter mask used in the Laplace operator. Default: "n_4"
Laplace-filtered result image.
Calculate the Laplace operator by using finite differences.
Instance represents: Input image.
Type of the result image; for byte and uint2 the absolute value is used. Default: "absolute"
Size of filter mask. Default: 3
Filter mask used in the Laplace operator. Default: "n_4"
Laplace-filtered result image.
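With the "n_4" mask, the finite-difference Laplace is the four-neighbor sum minus four times the center pixel. A numpy sketch of that definition; the operator's border handling is omitted here:

```python
import numpy as np

def laplace_n4(img):
    """n_4 Laplace: four-neighbor sum minus 4x the center, interior pixels only."""
    f = img.astype(float)
    return (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
            - 4.0 * f[1:-1, 1:-1])

# A linear ramp has zero second derivative, so its Laplace vanishes.
ramp = np.tile(np.arange(6, dtype=float), (6, 1))
lap = laplace_n4(ramp)
```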
Extract high frequency components from an image.
Instance represents: Input image.
Width of the filter mask. Default: 9
Height of the filter mask. Default: 9
High-pass-filtered result image.
Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
Instance represents: Input image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Lower threshold for the hysteresis threshold operation. Default: 20
Upper threshold for the hysteresis threshold operation. Default: 40
Extracted edges.
Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
Instance represents: Input image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Lower threshold for the hysteresis threshold operation. Default: 20
Upper threshold for the hysteresis threshold operation. Default: 40
Extracted edges.
Extract color edges using Canny, Deriche, or Shen filters.
Instance represents: Input image.
Edge direction image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Non-maximum suppression ('none', if not desired). Default: "nms"
Lower threshold for the hysteresis threshold operation (negative if no thresholding is desired). Default: 20
Upper threshold for the hysteresis threshold operation (negative if no thresholding is desired). Default: 40
Edge amplitude (gradient magnitude) image.
Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
Instance represents: Input image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Lower threshold for the hysteresis threshold operation. Default: 20
Upper threshold for the hysteresis threshold operation. Default: 40
Extracted edges.
Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
Instance represents: Input image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Lower threshold for the hysteresis threshold operation. Default: 20
Upper threshold for the hysteresis threshold operation. Default: 40
Extracted edges.
Extract edges using Deriche, Lanser, Shen, or Canny filters.
Instance represents: Input image.
Edge direction image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Non-maximum suppression ('none', if not desired). Default: "nms"
Lower threshold for the hysteresis threshold operation (negative, if no thresholding is desired). Default: 20
Upper threshold for the hysteresis threshold operation (negative, if no thresholding is desired). Default: 40
Edge amplitude (gradient magnitude) image.
Extract edges using Deriche, Lanser, Shen, or Canny filters.
Instance represents: Input image.
Edge direction image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Non-maximum suppression ('none', if not desired). Default: "nms"
Lower threshold for the hysteresis threshold operation (negative, if no thresholding is desired). Default: 20
Upper threshold for the hysteresis threshold operation (negative, if no thresholding is desired). Default: 40
Edge amplitude (gradient magnitude) image.
Convolve an image with derivatives of the Gaussian.
Instance represents: Input images.
Sigma of the Gaussian. Default: 1.0
Derivative or feature to be calculated. Default: "x"
Filtered result images.
Convolve an image with derivatives of the Gaussian.
Instance represents: Input images.
Sigma of the Gaussian. Default: 1.0
Derivative or feature to be calculated. Default: "x"
Filtered result images.
LoG-Operator (Laplace of Gaussian).
Instance represents: Input image.
Smoothing parameter of the Gaussian. Default: 2.0
Laplace filtered image.
LoG-Operator (Laplace of Gaussian).
Instance represents: Input image.
Smoothing parameter of the Gaussian. Default: 2.0
Laplace filtered image.
Approximate the LoG operator (Laplace of Gaussian).
Instance represents: Input image.
Smoothing parameter of the Laplace operator to approximate. Default: 3.0
Ratio of the standard deviations used (Marr recommends 1.6). Default: 1.6
LoG image.
Detect straight edge segments.
Instance represents: Input image.
Mask size of the Sobel operator. Default: 5
Minimum edge strength. Default: 32
Maximum distance of the approximating line to its original edge. Default: 3
Minimum length of the resulting line segments. Default: 10
Row coordinate of the line segments' start points.
Column coordinate of the line segments' start points.
Row coordinate of the line segments' end points.
Column coordinate of the line segments' end points.
Release the look-up-table needed for color space transformation.
Handle of the look-up-table for the color space transformation.
Color space transformation using pre-generated look-up-table.
Instance represents: Input image (channel 1).
Input image (channel 2).
Input image (channel 3).
Color-transformed output image (channel 2).
Color-transformed output image (channel 3).
Handle of the look-up-table for the color space transformation.
Color-transformed output image (channel 1).
Creates the look-up-table for transformation of an image from the RGB color space to an arbitrary color space.
Color space of the output image. Default: "hsv"
Direction of color space transformation. Default: "from_rgb"
Number of bits of the input image. Default: 8
Handle of the look-up-table for color space transformation.
Convert a single-channel color filter array image into an RGB image.
Instance represents: Input image.
Color filter array type. Default: "bayer_gb"
Interpolation type. Default: "bilinear"
Output image.
Transform an RGB image into a gray scale image.
Instance represents: Three-channel RGB image.
Gray scale image.
Transform an RGB image to a gray scale image.
Instance represents: Input image (red channel).
Input image (green channel).
Input image (blue channel).
Gray scale image.
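Gray scale conversion is a luminance-weighted sum of the three channels. The sketch below uses the common ITU-R BT.601 weights 0.299/0.587/0.114 as an assumption; consult the operator reference for the exact coefficients used.

```python
import numpy as np

def rgb_to_gray(r, g, b):
    # BT.601 luma weights (assumed); they sum to 1, preserving the gray range.
    return 0.299 * r + 0.587 * g + 0.114 * b

white = np.full((2, 2), 255.0)
gray = rgb_to_gray(white, white, white)   # white stays at full scale
```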
Transform an image from the RGB color space to an arbitrary color space.
Instance represents: Input image (red channel).
Input image (green channel).
Input image (blue channel).
Color-transformed output image (channel 2).
Color-transformed output image (channel 3).
Color space of the output image. Default: "hsv"
Color-transformed output image (channel 1).
Transform an image from an arbitrary color space to the RGB color space.
Instance represents: Input image (channel 1).
Input image (channel 2).
Input image (channel 3).
Green channel.
Blue channel.
Color space of the input image. Default: "hsv"
Red channel.
Logical "AND" of each pixel using a bit mask.
Instance represents: Input image(s).
Bit field. Default: 128
Result image(s) by combination with mask.
Extract a bit from the pixels.
Instance represents: Input image(s).
Bit to be selected. Default: 8
Result image(s) by extraction.
Right shift of all pixels of the image.
Instance represents: Input image(s).
Shift value. Default: 3
Result image(s) by shift operation.
Left shift of all pixels of the image.
Instance represents: Input image(s).
Shift value. Default: 3
Result image(s) by shift operation.
Complement all bits of the pixels.
Instance represents: Input image(s).
Result image(s) by complement operation.
Bit-by-bit XOR of all pixels of the input images.
Instance represents: Input image(s) 1.
Input image(s) 2.
Result image(s) by XOR-operation.
Bit-by-bit OR of all pixels of the input images.
Instance represents: Input image(s) 1.
Input image(s) 2.
Result image(s) by OR-operation.
Bit-by-bit AND of all pixels of the input images.
Instance represents: Input image(s) 1.
Input image(s) 2.
Result image(s) by AND-operation.
Perform a gamma encoding or decoding of an image.
Instance represents: Input image.
Gamma coefficient of the exponential part of the transformation. Default: 0.416666666667
Offset of the exponential part of the transformation. Default: 0.055
Gray value for which the transformation switches from linear to exponential. Default: 0.0031308
Maximum gray value of the input image type. Default: 255.0
If 'true', perform a gamma encoding, otherwise a gamma decoding. Default: "true"
Output image.
Perform a gamma encoding or decoding of an image.
Instance represents: Input image.
Gamma coefficient of the exponential part of the transformation. Default: 0.416666666667
Offset of the exponential part of the transformation. Default: 0.055
Gray value for which the transformation switches from linear to exponential. Default: 0.0031308
Maximum gray value of the input image type. Default: 255.0
If 'true', perform a gamma encoding, otherwise a gamma decoding. Default: "true"
Output image.
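The listed defaults (Gamma 1/2.4, Offset 0.055, Threshold 0.0031308) are the sRGB encoding: linear below the threshold, exponential above it. A sketch of that piecewise transform; the linear slope 12.92 is the sRGB value and an assumption here, since it is not among the listed parameters:

```python
import numpy as np

def gamma_encode(img, gamma=1/2.4, offset=0.055, threshold=0.0031308, max_gray=255.0):
    x = img / max_gray                          # normalize to [0, 1]
    linear = 12.92 * x                          # linear segment below Threshold (sRGB slope)
    expo = (1 + offset) * x ** gamma - offset   # exponential segment above it
    return np.where(x <= threshold, linear, expo) * max_gray

out = gamma_encode(np.array([0.0, 255.0]))
```

Decoding inverts the same two segments, which is what the 'true'/'false' Encode parameter switches between.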
Raise an image to a power.
Instance represents: Input image.
Power to which the gray values are raised. Default: 2
Output image.
Raise an image to a power.
Instance represents: Input image.
Power to which the gray values are raised. Default: 2
Output image.
Calculate the exponentiation of an image.
Instance represents: Input image.
Base of the exponentiation. Default: "e"
Output image.
Calculate the exponentiation of an image.
Instance represents: Input image.
Base of the exponentiation. Default: "e"
Output image.
Calculate the logarithm of an image.
Instance represents: Input image.
Base of the logarithm. Default: "e"
Output image.
Calculate the logarithm of an image.
Instance represents: Input image.
Base of the logarithm. Default: "e"
Output image.
Calculate the arctangent of two images.
Instance represents: Input image 1.
Input image 2.
Output image.
Calculate the arctangent of an image.
Instance represents: Input image.
Output image.
Calculate the arccosine of an image.
Instance represents: Input image.
Output image.
Calculate the arcsine of an image.
Instance represents: Input image.
Output image.
Calculate the tangent of an image.
Instance represents: Input image.
Output image.
Calculate the cosine of an image.
Instance represents: Input image.
Output image.
Calculate the sine of an image.
Instance represents: Input image.
Output image.
Calculate the absolute difference of two images.
Instance represents: Input image 1.
Input image 2.
Scale factor. Default: 1.0
Absolute value of the difference of the input images.
Calculate the absolute difference of two images.
Instance represents: Input image 1.
Input image 2.
Scale factor. Default: 1.0
Absolute value of the difference of the input images.
Calculate the square root of an image.
Instance represents: Input image.
Output image.
Subtract two images.
Instance represents: Minuend(s).
Subtrahend(s).
Correction factor. Default: 1.0
Correction value. Default: 128.0
Result image(s) by the subtraction.
Subtract two images.
Instance represents: Minuend(s).
Subtrahend(s).
Correction factor. Default: 1.0
Correction value. Default: 128.0
Result image(s) by the subtraction.
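The subtraction follows g' = (g1 - g2) * Mult + Add; with the defaults, the offset of 128 shifts signed differences into the visible byte range. A numpy sketch of that formula, with clipping to the byte range omitted:

```python
import numpy as np

def sub_image(img1, img2, mult=1.0, add=128.0):
    # g' = (g1 - g2) * Mult + Add; Add = 128 centers signed differences.
    return (img1.astype(float) - img2.astype(float)) * mult + add

diff = sub_image(np.full((2, 2), 100.0), np.full((2, 2), 130.0))
```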
Scale the gray values of an image.
Instance represents: Image(s) whose gray values are to be scaled.
Scale factor. Default: 0.01
Offset. Default: 0
Result image(s) by the scale.
Scale the gray values of an image.
Instance represents: Image(s) whose gray values are to be scaled.
Scale factor. Default: 0.01
Offset. Default: 0
Result image(s) by the scale.
Divide two images.
Instance represents: Image(s) 1.
Image(s) 2.
Factor for gray range adaptation. Default: 255
Value for gray range adaptation. Default: 0
Result image(s) by the division.
Divide two images.
Instance represents: Image(s) 1.
Image(s) 2.
Factor for gray range adaptation. Default: 255
Value for gray range adaptation. Default: 0
Result image(s) by the division.
Multiply two images.
Instance represents: Image(s) 1.
Image(s) 2.
Factor for gray range adaptation. Default: 0.005
Value for gray range adaptation. Default: 0
Result image(s) of the multiplication.
Multiply two images.
Instance represents: Image(s) 1.
Image(s) 2.
Factor for gray range adaptation. Default: 0.005
Value for gray range adaptation. Default: 0
Result image(s) of the multiplication.
Add two images.
Instance represents: Image(s) 1.
Image(s) 2.
Factor for gray value adaptation. Default: 0.5
Value for gray value range adaptation. Default: 0
Result image(s) of the addition.
Add two images.
Instance represents: Image(s) 1.
Image(s) 2.
Factor for gray value adaptation. Default: 0.5
Value for gray value range adaptation. Default: 0
Result image(s) of the addition.
Calculate the absolute value (modulus) of an image.
Instance represents: Image(s) for which the absolute gray values are to be calculated.
Result image(s).
Calculate the minimum of two images pixel by pixel.
Instance represents: Image(s) 1.
Image(s) 2.
Result image(s) of the minimization.
Calculate the maximum of two images pixel by pixel.
Instance represents: Image(s) 1.
Image(s) 2.
Result image(s) of the maximization.
Invert an image.
Instance represents: Input image(s).
Image(s) with inverted gray values.
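The arithmetic operators above all follow the same pointwise pattern: combine the input gray values, multiply by a factor, add an offset, and clip to the output range. A minimal Python sketch of those formulas for byte images (the exact rounding and clipping behavior of HALCON is an assumption):

```python
def clip_byte(v):
    """Clip a float result to the byte gray value range [0, 255]."""
    return max(0, min(255, int(round(v))))

def scale_image(g, mult=0.01, add=0):
    # scale_image: g' = g * Mult + Add
    return [clip_byte(p * mult + add) for p in g]

def sub_image(g1, g2, mult=1.0, add=128.0):
    # sub_image: g' = (g1 - g2) * Mult + Add
    return [clip_byte((a - b) * mult + add) for a, b in zip(g1, g2)]

def abs_diff_image(g1, g2, mult=1.0):
    # abs_diff_image: g' = |g1 - g2| * Mult
    return [clip_byte(abs(a - b) * mult) for a, b in zip(g1, g2)]

def invert_image(g):
    # invert_image (byte images): g' = 255 - g
    return [255 - p for p in g]
```

The gray value lists stand in for image channels; HALCON applies the same formula to every pixel of the input domain.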
Apply an automatic color correction to panorama images.
Instance represents: Input images.
List of source images.
List of destination images.
Reference image.
Projective matrices.
Estimation algorithm for the correction. Default: "standard"
Parameters to be estimated. Default: ["mult_gray"]
Model of OECF to be used. Default: ["laguerre"]
Output images.
Apply an automatic color correction to panorama images.
Instance represents: Input images.
List of source images.
List of destination images.
Reference image.
Projective matrices.
Estimation algorithm for the correction. Default: "standard"
Parameters to be estimated. Default: ["mult_gray"]
Model of OECF to be used. Default: ["laguerre"]
Output images.
Create 6 cube map images of a spherical mosaic.
Instance represents: Input images.
Rear cube map.
Left cube map.
Right cube map.
Top cube map.
Bottom cube map.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
Width and height of the resulting cube maps. Default: 1000
Mode of adding the images to the mosaic image. Default: "voronoi"
Mode of image interpolation. Default: "bilinear"
Front cube map.
Create 6 cube map images of a spherical mosaic.
Instance represents: Input images.
Rear cube map.
Left cube map.
Right cube map.
Top cube map.
Bottom cube map.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
Width and height of the resulting cube maps. Default: 1000
Mode of adding the images to the mosaic image. Default: "voronoi"
Mode of image interpolation. Default: "bilinear"
Front cube map.
Create a spherical mosaic image.
Instance represents: Input images.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
Minimum latitude of points in the spherical mosaic image. Default: -90
Maximum latitude of points in the spherical mosaic image. Default: 90
Minimum longitude of points in the spherical mosaic image. Default: -180
Maximum longitude of points in the spherical mosaic image. Default: 180
Latitude and longitude angle step width. Default: 0.1
Mode of adding the images to the mosaic image. Default: "voronoi"
Mode of interpolation when creating the mosaic image. Default: "bilinear"
Output image.
Create a spherical mosaic image.
Instance represents: Input images.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
Minimum latitude of points in the spherical mosaic image. Default: -90
Maximum latitude of points in the spherical mosaic image. Default: 90
Minimum longitude of points in the spherical mosaic image. Default: -180
Maximum longitude of points in the spherical mosaic image. Default: 180
Latitude and longitude angle step width. Default: 0.1
Mode of adding the images to the mosaic image. Default: "voronoi"
Mode of interpolation when creating the mosaic image. Default: "bilinear"
Output image.
Combine multiple images into a mosaic image.
Instance represents: Input images.
Array of 3x3 projective transformation matrices.
Stacking order of the images in the mosaic. Default: "default"
Should the domains of the input images also be transformed? Default: "false"
3x3 projective transformation matrix that describes the translation that was necessary to transform all images completely into the output image.
Output image.
Combine multiple images into a mosaic image.
Instance represents: Input images.
Array of 3x3 projective transformation matrices.
Stacking order of the images in the mosaic. Default: "default"
Should the domains of the input images also be transformed? Default: "false"
3x3 projective transformation matrix that describes the translation that was necessary to transform all images completely into the output image.
Output image.
Combine multiple images into a mosaic image.
Instance represents: Input images.
Index of the central input image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Stacking order of the images in the mosaic. Default: "default"
Should the domains of the input images also be transformed? Default: "false"
Array of 3x3 projective transformation matrices that determine the position of the images in the mosaic.
Output image.
Combine multiple images into a mosaic image.
Instance represents: Input images.
Index of the central input image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Stacking order of the images in the mosaic. Default: "default"
Should the domains of the input images also be transformed? Default: "false"
Array of 3x3 projective transformation matrices that determine the position of the images in the mosaic.
Output image.
Apply a projective transformation to an image and specify the output image size.
Instance represents: Input image.
Homogeneous projective transformation matrix.
Interpolation method for the transformation. Default: "bilinear"
Output image width.
Output image height.
Should the domain of the input image also be transformed? Default: "false"
Output image.
Apply a projective transformation to an image.
Instance represents: Input image.
Homogeneous projective transformation matrix.
Interpolation method for the transformation. Default: "bilinear"
Adapt the size of the output image automatically? Default: "false"
Should the domain of the input image also be transformed? Default: "false"
Output image.
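projective_trans_image warps each pixel through a homogeneous 3x3 matrix. The underlying point transformation can be sketched in Python; the row-major matrix layout and the (row, col) ordering here are illustrative assumptions, not HALCON's internal storage format:

```python
def projective_trans_point(hom_mat, row, col):
    """Apply a 3x3 homography (list of 9 values, row-major) to a point.

    The point is treated as the homogeneous vector (row, col, 1); the
    result is dehomogenized by dividing by the third component.
    """
    h = hom_mat
    r = h[0] * row + h[1] * col + h[2]
    c = h[3] * row + h[4] * col + h[5]
    w = h[6] * row + h[7] * col + h[8]
    return r / w, c / w
```

With the last row equal to (0, 0, 1) this reduces to an affine map, which is the special case handled by affine_trans_image below.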
Apply an arbitrary affine 2D transformation to an image and specify the output image size.
Instance represents: Input image.
Input transformation matrix.
Type of interpolation. Default: "constant"
Width of the output image. Default: 640
Height of the output image. Default: 480
Transformed image.
Apply an arbitrary affine 2D transformation to images.
Instance represents: Input image.
Input transformation matrix.
Type of interpolation. Default: "constant"
Adaptation of the size of the result image. Default: "false"
Transformed image.
Zoom an image by a given factor.
Instance represents: Input image.
Scale factor for the width of the image. Default: 0.5
Scale factor for the height of the image. Default: 0.5
Type of interpolation. Default: "constant"
Scaled image.
Zoom an image to a given size.
Instance represents: Input image.
Width of the resulting image. Default: 512
Height of the resulting image. Default: 512
Type of interpolation. Default: "constant"
Scaled image.
Mirror an image.
Instance represents: Input image.
Axis of reflection. Default: "row"
Reflected image.
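The mirror axes of mirror_image correspond to simple index manipulations. A Python sketch on an image stored as a list of rows (which concrete flip each mode name denotes is an assumption here):

```python
def mirror_image(img, mode="row"):
    """Mirror a 2D image given as a list of rows.

    mode 'row'     : reverse the row order (flip top/bottom)
    mode 'column'  : reverse each row (flip left/right)
    mode 'diagonal': transpose (mirror about the main diagonal)
    """
    if mode == "row":
        return img[::-1]
    if mode == "column":
        return [row[::-1] for row in img]
    if mode == "diagonal":
        return [list(t) for t in zip(*img)]
    raise ValueError("unknown mode: " + mode)
```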
Rotate an image about its center.
Instance represents: Input image.
Rotation angle. Default: 90
Type of interpolation. Default: "constant"
Rotated image.
Rotate an image about its center.
Instance represents: Input image.
Rotation angle. Default: 90
Type of interpolation. Default: "constant"
Rotated image.
Transform an image in polar coordinates back to Cartesian coordinates.
Instance represents: Input image.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the first column of the input image to. Default: 0.0
Angle of the ray to map the last column of the input image to. Default: 6.2831853
Radius of the circle to map the first row of the input image to. Default: 0
Radius of the circle to map the last row of the input image to. Default: 100
Width of the output image. Default: 512
Height of the output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output image.
Transform an image in polar coordinates back to Cartesian coordinates.
Instance represents: Input image.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the first column of the input image to. Default: 0.0
Angle of the ray to map the last column of the input image to. Default: 6.2831853
Radius of the circle to map the first row of the input image to. Default: 0
Radius of the circle to map the last row of the input image to. Default: 100
Width of the output image. Default: 512
Height of the output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output image.
Transform an annular arc in an image to polar coordinates.
Instance represents: Input image.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to the first column of the output image. Default: 0.0
Angle of the ray to be mapped to the last column of the output image. Default: 6.2831853
Radius of the circle to be mapped to the first row of the output image. Default: 0
Radius of the circle to be mapped to the last row of the output image. Default: 100
Width of the output image. Default: 512
Height of the output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output image.
Transform an annular arc in an image to polar coordinates.
Instance represents: Input image.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to the first column of the output image. Default: 0.0
Angle of the ray to be mapped to the last column of the output image. Default: 6.2831853
Radius of the circle to be mapped to the first row of the output image. Default: 0
Radius of the circle to be mapped to the last row of the output image. Default: 100
Width of the output image. Default: 512
Height of the output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output image.
Transform an image to polar coordinates.
Instance represents: Input image in Cartesian coordinates.
Row coordinate of the center of the coordinate system. Default: 100
Column coordinate of the center of the coordinate system. Default: 100
Width of the result image. Default: 314
Height of the result image. Default: 200
Result image in polar coordinates.
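The polar transform operators above are defined by a mapping between output pixels and points on an annular arc around (Row, Column). A coordinate-level Python sketch of that mapping (the angle orientation and pixel-center conventions are assumptions; HALCON's exact sampling may differ):

```python
import math

def polar_target(row_center, col_center, angle_start, angle_end,
                 radius_start, radius_end, width, height, r_out, c_out):
    """Map output pixel (r_out, c_out) of the polar image back to input
    image coordinates (illustrative sketch only)."""
    # Columns sweep the angle range, rows sweep the radius range.
    angle = angle_start + (angle_end - angle_start) * c_out / width
    radius = radius_start + (radius_end - radius_start) * r_out / height
    # Assumed orientation: angle 0 points "up" (toward smaller rows).
    row = row_center - radius * math.cos(angle)
    col = col_center + radius * math.sin(angle)
    return row, col
```

The interpolation parameter then decides how the gray value at this generally non-integer input position is sampled ('nearest_neighbor' or 'bilinear').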
Approximate an affine map from a displacement vector field.
Instance represents: Input image.
Output transformation matrix.
Deserialize a serialized image object.
Modified instance represents: Image object.
Handle of the serialized item.
Serialize an image object.
Instance represents: Image object.
Handle of the serialized item.
Write images in graphic formats.
Instance represents: Input images.
Graphic format. Default: "tiff"
Fill gray value for pixels not belonging to the image domain (region). Default: 0
Name of image file.
Write images in graphic formats.
Instance represents: Input images.
Graphic format. Default: "tiff"
Fill gray value for pixels not belonging to the image domain (region). Default: 0
Name of image file.
Read images.
Modified instance represents: Image read.
Number of bytes for file header. Default: 0
Number of image columns of the image stored in the file. Default: 512
Number of image lines of the image stored in the file. Default: 512
Starting point of image area (line). Default: 0
Starting point of image area (column). Default: 0
Number of image columns of output image. Default: 512
Number of image lines of output image. Default: 512
Type of pixel values. Default: "byte"
Sequence of bits within one byte. Default: "MSBFirst"
Sequence of bytes within one 'short' unit. Default: "MSBFirst"
Data units within one image line (alignment). Default: "byte"
Number of images in the file. Default: 1
Name of input file.
Read an image with different file formats.
Modified instance represents: Read image.
Name of the image to be read. Default: "printer_chip/printer_chip_01"
Read an image with different file formats.
Modified instance represents: Read image.
Name of the image to be read. Default: "printer_chip/printer_chip_01"
Return gray values of an image at the positions of an XLD contour.
Instance represents: Image whose gray values are to be accessed.
Input XLD contour with the coordinates of the positions.
Interpolation method. Default: "nearest_neighbor"
Gray values of the selected image coordinates.
Calculate gray value moments and approximation by a first order surface (plane).
Instance represents: Corresponding gray values.
Regions to be checked.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Alpha of the approximating surface.
Calculate gray value moments and approximation by a first order surface (plane).
Instance represents: Corresponding gray values.
Regions to be checked.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Alpha of the approximating surface.
Calculate gray value moments and approximation by a second order surface.
Instance represents: Corresponding gray values.
Regions to be checked.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Delta of the approximating surface.
Parameter Epsilon of the approximating surface.
Parameter Zeta of the approximating surface.
Parameter Alpha of the approximating surface.
Calculate gray value moments and approximation by a second order surface.
Instance represents: Corresponding gray values.
Regions to be checked.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Delta of the approximating surface.
Parameter Epsilon of the approximating surface.
Parameter Zeta of the approximating surface.
Parameter Alpha of the approximating surface.
Create a curved gray surface with second order polynomial.
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
Second order coefficient in vertical direction. Default: 1.0
Second order coefficient in horizontal direction. Default: 1.0
Mixed second order coefficient. Default: 1.0
First order coefficient in vertical direction. Default: 1.0
First order coefficient in horizontal direction. Default: 1.0
Zero order coefficient. Default: 1.0
Row coordinate of the reference point of the surface. Default: 256.0
Column coordinate of the reference point of the surface. Default: 256.0
Width of image. Default: 512
Height of image. Default: 512
Create a tilted gray surface with first order polynomial.
Modified instance represents: Created image with new image matrix.
Pixel type. Default: "byte"
First order coefficient in vertical direction. Default: 1.0
First order coefficient in horizontal direction. Default: 1.0
Zero order coefficient. Default: 1.0
Row coordinate of the reference point of the surface. Default: 256.0
Column coordinate of the reference point of the surface. Default: 256.0
Width of image. Default: 512
Height of image. Default: 512
Determine the minimum and maximum gray values within regions.
Instance represents: Gray value image.
Regions, the features of which are to be calculated.
Percentage below (above) the absolute maximum (minimum). Default: 0
"Minimum" gray value.
"Maximum" gray value.
Difference between Max and Min.
Determine the minimum and maximum gray values within regions.
Instance represents: Gray value image.
Regions, the features of which are to be calculated.
Percentage below (above) the absolute maximum (minimum). Default: 0
"Minimum" gray value.
"Maximum" gray value.
Difference between Max and Min.
Calculate the mean and deviation of gray values.
Instance represents: Gray value image.
Regions in which the features are calculated.
Deviation of gray values within a region.
Mean gray value of a region.
Calculate the mean and deviation of gray values.
Instance represents: Gray value image.
Regions in which the features are calculated.
Deviation of gray values within a region.
Mean gray value of a region.
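The mean and deviation returned by the intensity operator are the ordinary first and second moments of the gray values inside the region. A Python sketch (assuming the population rather than the sample deviation):

```python
import math

def intensity(gray_values):
    """Mean and standard deviation of the gray values in a region."""
    n = len(gray_values)
    mean = sum(gray_values) / n
    deviation = math.sqrt(sum((g - mean) ** 2 for g in gray_values) / n)
    return mean, deviation
```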
Calculate the gray value distribution of a single channel image within a certain gray value range.
Instance represents: Input image.
Region in which the histogram is to be calculated.
Minimum gray value. Default: 0
Maximum gray value. Default: 255
Number of bins. Default: 256
Bin size.
Histogram to be calculated.
Calculate the gray value distribution of a single channel image within a certain gray value range.
Instance represents: Input image.
Region in which the histogram is to be calculated.
Minimum gray value. Default: 0
Maximum gray value. Default: 255
Number of bins. Default: 256
Bin size.
Histogram to be calculated.
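gray_histo_range partitions the range [Min, Max] into NumBins equally wide bins and counts the gray values falling into each. A Python sketch (the inclusive-range bin size and the skipping of out-of-range values are assumptions):

```python
def gray_histo_range(gray_values, min_gray=0, max_gray=255, num_bins=256):
    """Histogram of gray values over [min_gray, max_gray].

    Returns (bin_size, histogram). Gray values outside the range are
    ignored in this sketch.
    """
    bin_size = (max_gray - min_gray + 1) / num_bins  # assumed definition
    histo = [0] * num_bins
    for g in gray_values:
        if min_gray <= g <= max_gray:
            idx = min(int((g - min_gray) / bin_size), num_bins - 1)
            histo[idx] += 1
    return bin_size, histo
```

With the defaults (byte range, 256 bins) the bin size is 1 and the result is a plain per-gray-value frequency table.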
Calculate the histogram of two-channel gray value images.
Instance represents: Channel 1.
Region in which the histogram is to be calculated.
Channel 2.
Histogram to be calculated.
Calculate the gray value distribution.
Instance represents: Image whose gray value distribution is to be calculated.
Region in which the histogram is to be calculated.
Quantization of the gray values. Default: 1.0
Absolute frequencies of the gray values.
Calculate the gray value distribution.
Instance represents: Image whose gray value distribution is to be calculated.
Region in which the histogram is to be calculated.
Quantization of the gray values. Default: 1.0
Absolute frequencies of the gray values.
Calculate the gray value distribution.
Instance represents: Image whose gray value distribution is to be calculated.
Region in which the histogram is to be calculated.
Frequencies, normalized to the area of the region.
Absolute frequencies of the gray values.
Determine the entropy and anisotropy of images.
Instance represents: Gray value image.
Regions where the features are to be determined.
Measure of the symmetry of gray value distribution.
Information content (entropy) of the gray values.
Determine the entropy and anisotropy of images.
Instance represents: Gray value image.
Regions where the features are to be determined.
Measure of the symmetry of gray value distribution.
Information content (entropy) of the gray values.
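The entropy returned here is the Shannon entropy of the gray value distribution, in bits. A Python sketch (the anisotropy measure is omitted, since its exact definition is not reproduced in this summary):

```python
import math

def entropy_gray(gray_values):
    """Shannon entropy H = -sum_k p_k * log2(p_k) of the gray value
    distribution; zero-probability gray values contribute nothing."""
    n = len(gray_values)
    counts = {}
    for g in gray_values:
        counts[g] = counts.get(g, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant region has entropy 0; a region using 2^k gray values with equal frequency has entropy k.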
Calculate gray value features from a co-occurrence matrix.
Instance represents: Co-occurrence matrix.
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Homogeneity of the gray values.
Calculate a co-occurrence matrix and derive gray value features thereof.
Instance represents: Corresponding gray values.
Region to be examined.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction in which the matrix is to be calculated. Default: 0
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Gray value energy.
Calculate a co-occurrence matrix and derive gray value features thereof.
Instance represents: Corresponding gray values.
Region to be examined.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction in which the matrix is to be calculated. Default: 0
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Gray value energy.
Calculate the co-occurrence matrix of a region in an image.
Instance represents: Image providing the gray values.
Region to be checked.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction of neighbor relation. Default: 0
Co-occurrence matrix (matrices).
Calculate gray value moments and approximation by a plane.
Instance represents: Corresponding gray values.
Regions to be checked.
Mixed moments along a line.
Mixed moments along a column.
Parameter Alpha of the approximating plane.
Parameter Beta of the approximating plane.
Mean gray value.
Calculate gray value moments and approximation by a plane.
Instance represents: Corresponding gray values.
Regions to be checked.
Mixed moments along a line.
Mixed moments along a column.
Parameter Alpha of the approximating plane.
Parameter Beta of the approximating plane.
Mean gray value.
Calculate the deviation of the gray values from the approximating image plane.
Instance represents: Gray value image.
Regions, of which the plane deviation is to be calculated.
Deviation of the gray values within a region.
Compute the orientation and major axes of a region in a gray value image.
Instance represents: Gray value image.
Region(s) to be examined.
Minor axis of the region.
Angle enclosed by the major axis and the x-axis.
Major axis of the region.
Compute the orientation and major axes of a region in a gray value image.
Instance represents: Gray value image.
Region(s) to be examined.
Minor axis of the region.
Angle enclosed by the major axis and the x-axis.
Major axis of the region.
Compute the area and center of gravity of a region in a gray value image.
Instance represents: Gray value image.
Region(s) to be examined.
Row coordinate of the gray value center of gravity.
Column coordinate of the gray value center of gravity.
Gray value volume of the region.
Compute the area and center of gravity of a region in a gray value image.
Instance represents: Gray value image.
Region(s) to be examined.
Row coordinate of the gray value center of gravity.
Column coordinate of the gray value center of gravity.
Gray value volume of the region.
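area_center_gray treats the gray value of each pixel as a weight: the volume is the sum of gray values, and the center of gravity is the gray-value-weighted mean coordinate. A Python sketch over (row, col, gray) triples:

```python
def area_center_gray(pixels):
    """Gray value volume and center of gravity of a region.

    pixels: iterable of (row, col, gray) triples inside the region.
    """
    volume = sum(g for _, _, g in pixels)
    row = sum(r * g for r, _, g in pixels) / volume
    col = sum(c * g for _, c, g in pixels) / volume
    return volume, row, col
```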
Calculate horizontal and vertical gray-value projections.
Instance represents: Gray values for projections.
Region to be processed.
Method to compute the projections. Default: "simple"
Vertical projection.
Horizontal projection.
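For Mode = 'simple', the projections can be understood as per-row and per-column averages of the gray values inside the region. A Python sketch (whether HALCON averages or sums, and whether it iterates the region or its enclosing rectangle, are assumptions):

```python
def gray_projections(pixels, num_rows, num_cols):
    """Mean gray value per row (one value per row index) and per column
    (one value per column index) for pixels given as (row, col, gray)."""
    row_sum, row_cnt = [0.0] * num_rows, [0] * num_rows
    col_sum, col_cnt = [0.0] * num_cols, [0] * num_cols
    for r, c, g in pixels:
        row_sum[r] += g
        row_cnt[r] += 1
        col_sum[c] += g
        col_cnt[c] += 1
    per_row = [s / n if n else 0.0 for s, n in zip(row_sum, row_cnt)]
    per_col = [s / n if n else 0.0 for s, n in zip(col_sum, col_cnt)]
    return per_row, per_col
```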
Detect and read 2D data code symbols in an image or train the 2D data code model.
Instance represents: Input image. If the image has a reduced domain, the data code search is restricted to that domain, which usually reduces the runtime of the operator. However, if the data code is not fully inside the domain, it might not be found correctly. In rare cases, data codes may be found outside the domain; if such results are undesirable, they have to be eliminated afterwards.
Handle of the 2D data code model.
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handles of all successfully decoded 2D data code symbols.
Decoded data strings of all detected 2D data code symbols in the image.
XLD contours that surround the successfully decoded data code symbols. The order of the contour points reflects the orientation of the detected symbols. The contours begin in the top left corner (see 'orientation' at get_data_code_2d_results) and continue clockwise.
Detect and read 2D data code symbols in an image or train the 2D data code model.
Instance represents: Input image. If the image has a reduced domain, the data code search is restricted to that domain, which usually reduces the runtime of the operator. However, if the data code is not fully inside the domain, it might not be found correctly. In rare cases, data codes may be found outside the domain; if such results are undesirable, they have to be eliminated afterwards.
Handle of the 2D data code model.
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handles of all successfully decoded 2D data code symbols.
Decoded data strings of all detected 2D data code symbols in the image.
XLD contours that surround the successfully decoded data code symbols. The order of the contour points reflects the orientation of the detected symbols. The contours begin in the top left corner (see 'orientation' at get_data_code_2d_results) and continue clockwise.
Convert image maps into other map types.
Instance represents: Input map.
Type of MapConverted. Default: "coord_map_sub_pix"
Width of images to be mapped. Default: "map_width"
Converted map.
Convert image maps into other map types.
Instance represents: Input map.
Type of MapConverted. Default: "coord_map_sub_pix"
Width of images to be mapped. Default: "map_width"
Converted map.
Compute an absolute pose out of point correspondences between world and image coordinates.
X-Component of world coordinates.
Y-Component of world coordinates.
Z-Component of world coordinates.
Row-Component of image coordinates.
Column-Component of image coordinates.
The inner camera parameters from camera calibration.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose quality.
Pose.
Compute an absolute pose out of point correspondences between world and image coordinates.
X-Component of world coordinates.
Y-Component of world coordinates.
Z-Component of world coordinates.
Row-Component of image coordinates.
Column-Component of image coordinates.
The inner camera parameters from camera calibration.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose quality.
Pose.
Compute a pose out of a homography describing the relation between world and image coordinates.
The homography from world- to image coordinates.
The camera calibration matrix K.
Type of pose computation. Default: "decomposition"
Pose of the 2D object.
Perform a radiometric self-calibration of a camera.
Instance represents: Input images.
Ratio of the exposure energies of successive image pairs. Default: 0.5
Features that are used to compute the inverse response function of the camera. Default: "2d_histogram"
Type of the inverse response function of the camera. Default: "discrete"
Smoothness of the inverse response function of the camera. Default: 1.0
Degree of the polynomial if FunctionType = 'polynomial'. Default: 5
Inverse response function of the camera.
Perform a radiometric self-calibration of a camera.
Instance represents: Input images.
Ratio of the exposure energies of successive image pairs. Default: 0.5
Features that are used to compute the inverse response function of the camera. Default: "2d_histogram"
Type of the inverse response function of the camera. Default: "discrete"
Smoothness of the inverse response function of the camera. Default: 1.0
Degree of the polynomial if FunctionType = 'polynomial'. Default: 5
Inverse response function of the camera.
Apply a general transformation to an image.
Instance represents: Image to be mapped.
Image containing the mapping data.
Mapped image.
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
Modified instance represents: Image containing the mapping data.
Old camera parameters.
New camera parameters.
Type of the mapping. Default: "bilinear"
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Modified instance represents: Image containing the mapping data.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Modified instance represents: Image containing the mapping data.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Instance represents: Input image.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transformed image.
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Instance represents: Input image.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transformed image.
Change the radial distortion of an image.
Instance represents: Original image.
Region of interest in ImageRectified.
Internal camera parameters of the input image.
Internal camera parameters of the rectified image.
Resulting image with modified radial distortion.
Simulate an image with calibration plate.
Modified instance represents: Simulated calibration image.
File name of the calibration plate description. Default: "calplate_320mm.cpd"
Internal camera parameters.
External camera parameters (3D pose of the calibration plate in camera coordinates).
Gray value of image background. Default: 128
Gray value of calibration plate. Default: 80
Gray value of calibration marks. Default: 224
Scaling factor to reduce oversampling. Default: 1.0
Extract rectangularly arranged 2D calibration marks from the image and calculate initial values for the external camera parameters.
Instance represents: Input image.
Region of the calibration plate.
File name of the calibration plate description. Default: "caltab_100.descr"
Initial values for the internal camera parameters.
Initial threshold value for contour detection. Default: 128
Loop value for successive reduction of StartThresh. Default: 10
Minimum threshold for contour detection. Default: 18
Filter parameter for contour detection, see edges_image. Default: 0.9
Minimum length of the contours of the marks. Default: 15.0
Maximum expected diameter of the marks. Default: 100.0
Tuple with column coordinates of the detected marks.
Estimation for the external camera parameters.
Tuple with row coordinates of the detected marks.
Segment the region of a standard calibration plate with rectangularly arranged marks in the image.
Instance represents: Input image.
File name of the calibration plate description. Default: "caltab_100.descr"
Filter size of the Gaussian. Default: 3
Threshold value for mark extraction. Default: 112
Expected minimal diameter of the marks on the calibration plate. Default: 5
Output region.
Segment the region of a standard calibration plate with rectangularly arranged marks in the image.
Instance represents: Input image.
File name of the calibration plate description. Default: "caltab_100.descr"
Filter size of the Gaussian. Default: 3
Threshold value for mark extraction. Default: 112
Expected minimal diameter of the marks on the calibration plate. Default: 5
Output region.
Decode bar code symbols within a rectangle.
Instance represents: Input image.
Handle of the bar code model.
Type of the searched bar code. Default: "EAN-13"
Row index of the center. Default: 50.0
Column index of the center. Default: 100.0
Orientation of rectangle in radians. Default: 0.0
Half of the length of the rectangle along the reading direction of the bar code. Default: 200.0
Half of the length of the rectangle perpendicular to the reading direction of the bar code. Default: 100.0
Data strings of all successfully decoded bar codes.
Decode bar code symbols within a rectangle.
Instance represents: Input image.
Handle of the bar code model.
Type of the searched bar code. Default: "EAN-13"
Row index of the center. Default: 50.0
Column index of the center. Default: 100.0
Orientation of rectangle in radians. Default: 0.0
Half of the length of the rectangle along the reading direction of the bar code. Default: 200.0
Half of the length of the rectangle perpendicular to the reading direction of the bar code. Default: 100.0
Data strings of all successfully decoded bar codes.
Detect and read bar code symbols in an image.
Instance represents: Input image. If the image has a reduced domain, the bar code search is reduced to that domain. This usually reduces the runtime of the operator. However, if the bar code is not fully inside the domain, the bar code cannot be decoded correctly.
Handle of the bar code model.
Type of the searched bar code. Default: "auto"
Data strings of all successfully decoded bar codes.
Regions of the successfully decoded bar code symbols.
Detect and read bar code symbols in an image.
Instance represents: Input image. If the image has a reduced domain, the bar code search is reduced to that domain. This usually reduces the runtime of the operator. However, if the bar code is not fully inside the domain, the bar code cannot be decoded correctly.
Handle of the bar code model.
Type of the searched bar code. Default: "auto"
Data strings of all successfully decoded bar codes.
Regions of the successfully decoded bar code symbols.
Return the estimated background image.
Modified instance represents: Estimated background image of the current data set.
ID of the BgEsti data set.
Change the estimated background image.
Instance represents: Current image.
Region describing areas to change.
ID of the BgEsti data set.
Estimate the background and return the foreground region.
Instance represents: Current image.
ID of the BgEsti data set.
Region of the detected foreground.
Generate and initialize a data set for the background estimation.
Instance represents: Initialization image.
First system matrix parameter. Default: 0.7
Second system matrix parameter. Default: 0.7
Gain type. Default: "fixed"
Kalman gain / foreground adaptation time. Default: 0.002
Kalman gain / background adaptation time. Default: 0.02
Threshold adaptation. Default: "on"
Foreground/background threshold. Default: 7.0
Number of statistic data sets. Default: 10
Confidence constant. Default: 3.25
Constant for decay time. Default: 15.0
ID of the BgEsti data set.
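The adaptation parameters above (Kalman gains, foreground/background threshold) can be illustrated with a strongly simplified per-pixel update rule. The following NumPy sketch shows the general idea of gain-controlled background adaptation only; it is not the actual Kalman-based BgEsti implementation, and the names `gain_bg`, `gain_fg`, and `fg_thresh` are illustrative.

```python
import numpy as np

def update_background(background, frame, gain_bg=0.02, gain_fg=0.002, fg_thresh=7.0):
    """One step of a simplified adaptive background estimate.

    Pixels close to the current background (difference below fg_thresh)
    are adapted quickly with gain_bg; foreground pixels are adapted only
    slowly with gain_fg, so moving objects bleed into the background
    gradually rather than immediately.
    """
    diff = np.abs(frame - background)
    foreground = diff > fg_thresh
    gain = np.where(foreground, gain_fg, gain_bg)
    new_background = background + gain * (frame - background)
    return new_background, foreground
```

Typical use initializes the background with the first frame and then calls the update once per grabbed frame, analogous to run_bg_esti returning the foreground region.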
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Grabbed image data.
Pre-processed XLD contours.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Pre-processed image regions.
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Grabbed image data.
Pre-processed XLD contours.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Pre-processed image regions.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Grabbed image data.
Preprocessed XLD contours.
Handle of the acquisition device to be used.
Preprocessed control data.
Preprocessed image regions.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Grabbed image data.
Preprocessed XLD contours.
Handle of the acquisition device to be used.
Preprocessed control data.
Preprocessed image regions.
Asynchronous grab of an image from the specified image acquisition device.
Modified instance represents: Grabbed image.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Synchronous grab of an image from the specified image acquisition device.
Modified instance represents: Grabbed image.
Handle of the acquisition device to be used.
Add training images to the texture inspection model.
Instance represents: Image of flawless texture.
Handle of the texture inspection model.
Indices of the images that have been added to the texture inspection model.
Inspection of the texture within an image.
Instance represents: Image of the texture to be inspected.
Handle of the texture inspection model.
Handle of the inspection results.
Novelty regions.
Bilateral filtering of an image.
Instance represents: Image to be filtered.
Joint image.
Size of the Gaussian of the closeness function. Default: 3.0
Size of the Gaussian of the similarity function. Default: 20.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Filtered output image.
Bilateral filtering of an image.
Instance represents: Image to be filtered.
Joint image.
Size of the Gaussian of the closeness function. Default: 3.0
Size of the Gaussian of the similarity function. Default: 20.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Filtered output image.
Find the best matches of multiple NCC models.
Instance represents: Input image in which the model should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.8
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple NCC models.
Instance represents: Input image in which the model should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.8
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
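The score returned above is a normalized cross-correlation value in [-1, 1], which is invariant to linear brightness and contrast changes. A minimal NumPy sketch of computing that score at a single window position (illustrative only; the operator itself performs an efficient pyramid-based search):

```python
import numpy as np

def ncc_score(window, template):
    """Normalized cross-correlation of an image window with a template.

    Returns 1.0 for a perfect match up to brightness and contrast,
    which is why MinScore thresholds close to 1 are strict.
    """
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    if denom == 0.0:
        return 0.0  # constant window or template: correlation undefined
    return float((w * t).sum() / denom)
```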
Get the training images contained in a texture inspection model.
Modified instance represents: Training images contained in the texture inspection model.
Handle of the texture inspection model.
Guided filtering of an image.
Instance represents: Input image.
Guidance image.
Radius of the filtering operation. Default: 3
Controls the influence of edges on the smoothing. Default: 20.0
Output image.
Create an interleaved image from a multichannel image.
Instance represents: Input multichannel image.
Target format for InterleavedImage. Default: "rgba"
Number of bytes in a row of the output image. Default: "match"
Alpha value for three channel input images. Default: 255
Output interleaved image.
Create an interleaved image from a multichannel image.
Instance represents: Input multichannel image.
Target format for InterleavedImage. Default: "rgba"
Number of bytes in a row of the output image. Default: "match"
Alpha value for three channel input images. Default: 255
Output interleaved image.
Segment image using Maximally Stable Extremal Regions (MSER).
Instance represents: Input image.
Segmented light MSERs.
The polarity of the returned MSERs. Default: "both"
Minimal size of an MSER. Default: 10
Maximal size of an MSER. Default: []
Number of thresholds across which a region must remain stable. Default: 15
List of generic parameter names. Default: []
List of generic parameter values. Default: []
Segmented dark MSERs.
Segment image using Maximally Stable Extremal Regions (MSER).
Instance represents: Input image.
Segmented light MSERs.
The polarity of the returned MSERs. Default: "both"
Minimal size of an MSER. Default: 10
Maximal size of an MSER. Default: []
Number of thresholds across which a region must remain stable. Default: 15
List of generic parameter names. Default: []
List of generic parameter values. Default: []
Segmented dark MSERs.
Train a texture inspection model.
Handle of the texture inspection model.
Reconstruct a surface from several, differently illuminated images.
Instance represents: The Images.
The Gradient.
The Albedo.
The Result type. Default: "all"
The NormalField.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index to insert objects.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Get the clutter parameters of a shape model.
Handle of the model.
Parameter names. Default: "use_clutter"
Parameter values.
Transformation matrix.
Minimum contrast of clutter in the search images.
Region where no clutter should occur.
Get the clutter parameters of a shape model.
Handle of the model.
Parameter names. Default: "use_clutter"
Parameter values.
Transformation matrix.
Minimum contrast of clutter in the search images.
Region where no clutter should occur.
Set the clutter parameters of a shape model.
Region where no clutter should occur.
Handle of the model.
Transformation matrix.
Minimum contrast of clutter in the search images. Default: 128
Parameter names.
Parameter values.
Set the clutter parameters of a shape model.
Region where no clutter should occur.
Handle of the model.
Transformation matrix.
Minimum contrast of clutter in the search images. Default: 128
Parameter names.
Parameter values.
Returns the iconic object(s) at the specified index
Class grouping system information related functionality.
Query slots concerning information with relation to the operator get_operator_info.
Slotnames of the operator get_operator_info.
Query slots of the online-information concerning the operator get_param_info.
Slotnames for the operator get_param_info.
Get operators with the given string as a substring of their name.
Substring of the sought names (empty <=> all names). Default: "info"
Detected operator names.
Get default data type for the control parameters of a HALCON-operator.
Name of the operator. Default: "get_param_types"
Default type of the output control parameters.
Default type of the input control parameters.
Get number of the different parameter classes of a HALCON-operator.
Name of the operator. Default: "get_param_num"
Number of the input object parameters.
Number of the output object parameters.
Number of the input control parameters.
Number of the output control parameters.
System operator or user procedure.
Name of the called C-function.
Get the names of the parameters of a HALCON-operator.
Name of the operator. Default: "get_param_names"
Names of the output objects.
Names of the input control parameters.
Names of the output control parameters.
Names of the input objects.
Get information concerning a HALCON-operator.
Name of the operator on which more information is needed. Default: "get_operator_info"
Desired information. Default: "abstract"
Information (empty if no information is available).
Get information concerning the operator parameters.
Name of the operator on whose parameter more information is needed. Default: "get_param_info"
Name of the parameter on which more information is needed. Default: "Slot"
Desired information. Default: "description"
Information (empty in case there is no information available).
Search names of all operators assigned to one keyword.
Keyword for which corresponding operators are searched. Default: "Information"
Operators whose slot 'keyword' contains the keyword.
Get keywords which are assigned to operators.
Substring in the names of those operators for which keywords are needed. Default: "get_keywords"
Keywords for the operators.
Get information concerning the chapters on operators.
Operator class or subclass of interest. Default: ""
Operator classes (Chapter = ") or operator subclasses respectively operators.
Get information concerning the chapters on operators.
Operator class or subclass of interest. Default: ""
Operator classes (Chapter = ") or operator subclasses respectively operators.
Query all available window types.
Names of available window types.
Get the output treatment of an image matrix.
Window handle.
Display mode for images.
Query the region display modes.
Region display mode names.
Query the possible line widths.
Displayable minimum width.
Displayable maximum width.
Query the number of colors for color output.
Tuple of the possible numbers of colors.
Query information about the specified image acquisition interface.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Name of the chosen query. Default: "info_boards"
List of values (according to Query).
Textual information (according to Query).
Represents an instance of a channel of an I/O device.
Open and configure I/O channels.
Modified instance represents: Handles of the opened I/O channel.
Handle of the opened I/O device.
HALCON I/O channel names of the specified device.
Parameter names. Default: []
Parameter values. Default: []
Perform an action on I/O channels.
Handles of the opened I/O channels.
Name of the action to perform.
List of arguments for the action. Default: []
List of values returned by the action.
Perform an action on I/O channels.
Instance represents: Handles of the opened I/O channels.
Name of the action to perform.
List of arguments for the action. Default: []
List of values returned by the action.
Write a value to the specified I/O channels.
Handles of the opened I/O channels.
Write values.
Status of written values.
Write a value to the specified I/O channels.
Instance represents: Handles of the opened I/O channels.
Write values.
Status of written values.
Read a value from the specified I/O channels.
Handles of the opened I/O channels.
Status of read value.
Read value.
Read a value from the specified I/O channels.
Instance represents: Handles of the opened I/O channels.
Status of read value.
Read value.
Set specific parameters of I/O channels.
Handles of the opened I/O channels.
Parameter names. Default: []
Parameter values to set. Default: []
Set specific parameters of I/O channels.
Instance represents: Handles of the opened I/O channels.
Parameter names. Default: []
Parameter values to set. Default: []
Query specific parameters of I/O channels.
Handles of the opened I/O channels.
Parameter names. Default: "param_name"
Parameter values.
Query specific parameters of I/O channels.
Instance represents: Handles of the opened I/O channels.
Parameter names. Default: "param_name"
Parameter values.
Close I/O channels.
Handles of the opened I/O channels.
Close I/O channels.
Instance represents: Handles of the opened I/O channels.
Open and configure I/O channels.
Handle of the opened I/O device.
HALCON I/O channel names of the specified device.
Parameter names. Default: []
Parameter values. Default: []
Handles of the opened I/O channel.
Open and configure I/O channels.
Modified instance represents: Handles of the opened I/O channel.
Handle of the opened I/O device.
HALCON I/O channel names of the specified device.
Parameter names. Default: []
Parameter values. Default: []
Represents an instance of an I/O device.
Open and configure an I/O device.
Modified instance represents: Handle of the opened I/O device.
HALCON I/O interface name. Default: []
I/O device name. Default: []
Dynamic parameter names. Default: []
Dynamic parameter values. Default: []
Open and configure I/O channels.
Instance represents: Handle of the opened I/O device.
HALCON I/O channel names of the specified device.
Parameter names. Default: []
Parameter values. Default: []
Handles of the opened I/O channel.
Open and configure I/O channels.
Instance represents: Handle of the opened I/O device.
HALCON I/O channel names of the specified device.
Parameter names. Default: []
Parameter values. Default: []
Handles of the opened I/O channel.
Query information about channels of the specified I/O device.
Instance represents: Handle of the opened I/O device.
Channel names to query.
Name of the query. Default: "param_name"
List of values (according to Query).
Query information about channels of the specified I/O device.
Instance represents: Handle of the opened I/O device.
Channel names to query.
Name of the query. Default: "param_name"
List of values (according to Query).
Perform an action on the I/O device.
Instance represents: Handle of the opened I/O device.
Name of the action to perform.
List of arguments for the action. Default: []
List of result values returned by the action.
Perform an action on the I/O device.
Instance represents: Handle of the opened I/O device.
Name of the action to perform.
List of arguments for the action. Default: []
List of result values returned by the action.
Configure a specific I/O device instance.
Instance represents: Handle of the opened I/O device.
Parameter names. Default: []
Parameter values to set. Default: []
Configure a specific I/O device instance.
Instance represents: Handle of the opened I/O device.
Parameter names. Default: []
Parameter values to set. Default: []
Query settings of an I/O device instance.
Instance represents: Handle of the opened I/O device.
Parameter names. Default: "param_name"
Parameter values.
Query settings of an I/O device instance.
Instance represents: Handle of the opened I/O device.
Parameter names. Default: "param_name"
Parameter values.
Close the specified I/O device.
Instance represents: Handle of the opened I/O device.
Open and configure an I/O device.
Modified instance represents: Handle of the opened I/O device.
HALCON I/O interface name. Default: []
I/O device name. Default: []
Dynamic parameter names. Default: []
Dynamic parameter values. Default: []
Perform an action on the I/O interface.
HALCON I/O interface name. Default: []
Name of the action to perform.
List of arguments for the action. Default: []
List of results returned by the action.
Perform an action on the I/O interface.
HALCON I/O interface name. Default: []
Name of the action to perform.
List of arguments for the action. Default: []
List of results returned by the action.
Query information about the specified I/O device interface.
HALCON I/O interface name. Default: []
Parameter name of the query. Default: "io_device_names"
List of result values (according to Query).
Query information about the specified I/O device interface.
HALCON I/O interface name. Default: []
Parameter name of the query. Default: "io_device_names"
List of result values (according to Query).
Represents an instance of a lexicon.
Create a lexicon from a text file.
Modified instance represents: Handle of the lexicon.
Unique name for the new lexicon. Default: "lex1"
Name of a text file containing words for the new lexicon. Default: "words.txt"
Create a lexicon from a tuple of words.
Modified instance represents: Handle of the lexicon.
Unique name for the new lexicon. Default: "lex1"
Word list for the new lexicon. Default: ["word1","word2","word3"]
Clear a lexicon.
Instance represents: Handle of the lexicon.
Find a similar word in a lexicon.
Instance represents: Handle of the lexicon.
Word to be looked up. Default: "word"
Difference between the words in edit operations.
Most similar word found in the lexicon.
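The difference in edit operations reported above is an edit (Levenshtein) distance. A minimal pure-Python sketch of that metric, illustrative rather than the HALCON internals:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```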
Check if a word is contained in a lexicon.
Instance represents: Handle of the lexicon.
Word to be looked up. Default: "word"
Result of the search.
Query all words from a lexicon.
Instance represents: Handle of the lexicon.
List of all words.
Create a lexicon from a text file.
Modified instance represents: Handle of the lexicon.
Unique name for the new lexicon. Default: "lex1"
Name of a text file containing words for the new lexicon. Default: "words.txt"
Create a lexicon from a tuple of words.
Modified instance represents: Handle of the lexicon.
Unique name for the new lexicon. Default: "lex1"
Word list for the new lexicon. Default: ["word1","word2","word3"]
Represents an instance of a matrix.
Read a matrix from a file.
Modified instance represents: Matrix handle.
File name.
Create a matrix.
Modified instance represents: Matrix handle.
Number of rows of the matrix. Default: 3
Number of columns of the matrix. Default: 3
Values for initializing the elements of the matrix. Default: 0
Create a matrix.
Modified instance represents: Matrix handle.
Number of rows of the matrix. Default: 3
Number of columns of the matrix. Default: 3
Values for initializing the elements of the matrix. Default: 0
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Negate the matrix
Add two matrices
Subtract two matrices
Multiply two matrices
Scale a matrix
Scale a matrix
Solve linear system matrix2 * x = matrix1
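The convention matrix2 * x = matrix1 above can be checked with a small numeric example; a NumPy sketch (not the HALCON matrix API):

```python
import numpy as np

# Solve A * x = b, matching the "matrix2 * x = matrix1" convention:
# A plays the role of matrix2, b the role of matrix1.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([[5.0],
              [10.0]])
x = np.linalg.solve(A, b)  # solution of the linear system
```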
Deserialize a serialized matrix.
Modified instance represents: Matrix handle.
Handle of the serialized item.
Serialize a matrix.
Instance represents: Matrix handle.
Handle of the serialized item.
Read a matrix from a file.
Modified instance represents: Matrix handle.
File name.
Write a matrix to a file.
Instance represents: Matrix handle of the input matrix.
Format of the file. Default: "binary"
File name.
Perform an orthogonal decomposition of a matrix.
Instance represents: Matrix handle of the input matrix.
Method of decomposition. Default: "qr"
Type of output matrices. Default: "full"
Computation of the orthogonal matrix. Default: "true"
Matrix handle with the triangular part of the decomposed input matrix.
Matrix handle with the orthogonal part of the decomposed input matrix.
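For the default "qr" method above, the input factors into an orthogonal part Q and an upper-triangular part R with Q * R equal to the input. A NumPy sketch of the same factorization:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Reduced QR decomposition: Q has orthonormal columns, R is upper triangular.
Q, R = np.linalg.qr(M)
```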
Decompose a matrix.
Instance represents: Matrix handle of the input matrix.
Type of the input matrix. Default: "general"
Matrix handle with the output matrix 2.
Matrix handle with the output matrix 1.
Compute the singular value decomposition of a matrix.
Instance represents: Matrix handle of the input matrix.
Type of computation. Default: "full"
Computation of singular values. Default: "both"
Matrix handle with singular values.
Matrix handle with the right singular vectors.
Matrix handle with the left singular vectors.
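The decomposition above factors the input as U * S * V^T with non-negative singular values in descending order (the HALCON operator returns the three factors as separate matrix handles); a NumPy sketch:

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0]])
# full_matrices=False corresponds to a reduced ("compact") decomposition.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
reconstructed = U @ np.diag(S) @ Vt
```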
Compute the generalized eigenvalues and optionally the generalized eigenvectors of general matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Computation of the eigenvectors. Default: "none"
Matrix handle with the real parts of the eigenvalues.
Matrix handle with the imaginary parts of the eigenvalues.
Matrix handle with the real parts of the eigenvectors.
Matrix handle with the imaginary parts of the eigenvectors.
Compute the generalized eigenvalues and optionally generalized eigenvectors of symmetric input matrices.
Instance represents: Matrix handle of the symmetric input matrix A.
Matrix handle of the symmetric positive definite input matrix B.
Computation of the eigenvectors. Default: "false"
Matrix handle with the eigenvectors.
Matrix handle with the eigenvalues.
Compute the eigenvalues and optionally the eigenvectors of a general matrix.
Instance represents: Matrix handle of the input matrix.
Computation of the eigenvectors. Default: "none"
Matrix handle with the real parts of the eigenvalues.
Matrix handle with the imaginary parts of the eigenvalues.
Matrix handle with the real parts of the eigenvectors.
Matrix handle with the imaginary parts of the eigenvectors.
Compute the eigenvalues and optionally eigenvectors of a symmetric matrix.
Instance represents: Matrix handle of the input matrix.
Computation of the eigenvectors. Default: "false"
Matrix handle with the eigenvectors.
Matrix handle with the eigenvalues.
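For the symmetric case above, all eigenvalues are real and the eigenvectors are orthogonal; a NumPy sketch verifying the defining relation S * v = lambda * v for each eigenpair:

```python
import numpy as np

Sym = np.array([[2.0, 1.0],
                [1.0, 2.0]])
# eigh is specialized for symmetric matrices; eigenvalues come back
# sorted in ascending order, eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eigh(Sym)
```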
Compute the solution of a system of equations.
Instance represents: Matrix handle of the input matrix of the left hand side.
The type of the input matrix of the left hand side. Default: "general"
Type of solving and limitation to set singular values to be 0. Default: 0.0
Matrix handle of the input matrix of right hand side.
New matrix handle with the solution.
Compute the determinant of a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
Determinant of the input matrix.
Invert a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
Type of inversion (epsilon for the pseudo-inverse). Default: 0.0
Invert a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
Type of inversion (epsilon for the pseudo-inverse). Default: 0.0
Matrix handle with the inverse matrix.
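Determinant and inverse relate by A * A^-1 = I, with the inverse defined only when the determinant is nonzero; a NumPy sketch:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
det = np.linalg.det(A)      # 4*6 - 7*2 = 10, so A is invertible
A_inv = np.linalg.inv(A)    # satisfies A @ A_inv == identity
```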
Transpose a matrix.
Instance represents: Matrix handle of the input matrix.
Transpose a matrix.
Instance represents: Matrix handle of the input matrix.
Matrix handle with the transpose of the input matrix.
Returns the elementwise maximum of a matrix.
Instance represents: Matrix handle of the input matrix.
Type of maximum determination. Default: "columns"
Matrix handle with the maximum values of the input matrix.
Returns the elementwise minimum of a matrix.
Instance represents: Matrix handle of the input matrix.
Type of minimum determination. Default: "columns"
Matrix handle with the minimum values of the input matrix.
Compute the power functions of a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
The power. Default: 2.0
Compute the power functions of a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
The power. Default: 2.0
Compute the power functions of a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
The power. Default: 2.0
Matrix handle with the matrix raised to the given power.
Compute the power functions of a matrix.
Instance represents: Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
The power. Default: 2.0
Matrix handle with the matrix raised to the given power.
Compute the power functions of the elements of a matrix.
Instance represents: Matrix handle of the input matrix of the base.
Matrix handle of the input matrix with exponents.
Compute the power functions of the elements of a matrix.
Instance represents: Matrix handle of the input matrix of the base.
Matrix handle of the input matrix with exponents.
Matrix handle with the elementwise powers of the input matrix.
Compute the power functions of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
The power. Default: 2.0
Compute the power functions of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
The power. Default: 2.0
Compute the power functions of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
The power. Default: 2.0
Matrix handle with the elementwise powers of the input matrix.
Compute the power functions of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
The power. Default: 2.0
Matrix handle with the elementwise powers of the input matrix.
Compute the square root values of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Compute the square root values of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Matrix handle with the square root values of the input matrix.
Compute the absolute values of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Compute the absolute values of the elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Matrix handle with the absolute values of the input matrix.
Norm of a matrix.
Instance represents: Matrix handle of the input matrix.
Type of norm. Default: "2-norm"
Norm of the input matrix.
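The norm types selectable above correspond to standard matrix norms; a NumPy sketch comparing the 2-norm (largest singular value), the Frobenius norm, and the 1-norm (maximum absolute column sum):

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [3.0,  4.0]])
spectral = np.linalg.norm(M, 2)       # largest singular value ("2-norm")
frobenius = np.linalg.norm(M, 'fro')  # sqrt of the sum of squared entries
one_norm = np.linalg.norm(M, 1)       # maximum absolute column sum
```

The spectral norm never exceeds the Frobenius norm, which is a quick sanity check on any implementation.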
Returns the elementwise mean of a matrix.
Instance represents: Matrix handle of the input matrix.
Type of mean determination. Default: "columns"
Matrix handle with the mean values of the input matrix.
Returns the elementwise sum of a matrix.
Instance represents: Matrix handle of the input matrix.
Type of summation. Default: "columns"
Matrix handle with the sum of the input matrix.
Divide matrices element-by-element.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Divide matrices element-by-element.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the divided values of input matrices.
Multiply matrices element-by-element.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Multiply matrices element-by-element.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the multiplied values of the input matrices.
Scale a matrix.
Instance represents: Matrix handle of the input matrix.
Scale factor. Default: 2.0
Scale a matrix.
Instance represents: Matrix handle of the input matrix.
Scale factor. Default: 2.0
Scale a matrix.
Instance represents: Matrix handle of the input matrix.
Scale factor. Default: 2.0
Matrix handle with the scaled elements.
Scale a matrix.
Instance represents: Matrix handle of the input matrix.
Scale factor. Default: 2.0
Matrix handle with the scaled elements.
Subtract two matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Subtract two matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the difference of the input matrices.
Add two matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Add two matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the sum of the input matrices.
Multiply two matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Type of the input matrices. Default: "AB"
Multiply two matrices.
Instance represents: Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Type of the input matrices. Default: "AB"
Matrix handle of the multiplied matrices.
Get the size of a matrix.
Instance represents: Matrix handle of the input matrix.
Number of rows of the matrix.
Number of columns of the matrix.
Repeat a matrix.
Instance represents: Matrix handle of the input matrix.
Number of copies of input matrix in row direction. Default: 2
Number of copies of input matrix in column direction. Default: 2
Matrix handle of the repeated copied matrix.
Copy a matrix.
Instance represents: Matrix handle of the input matrix.
Matrix handle of the copied matrix.
Set the diagonal elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Matrix handle containing the diagonal elements to be set.
Position of the diagonal. Default: 0
Get the diagonal elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Number of the desired diagonal. Default: 0
Matrix handle containing the diagonal elements.
Set a sub-matrix of a matrix.
Instance represents: Matrix handle of the input matrix.
Matrix handle of the input sub-matrix.
Upper row position of the sub-matrix in the matrix. Default: 0
Left column position of the sub-matrix in the matrix. Default: 0
Get a sub-matrix of a matrix.
Instance represents: Matrix handle of the input matrix.
Upper row position of the sub-matrix in the input matrix. Default: 0
Left column position of the sub-matrix in the input matrix. Default: 0
Number of rows of the sub-matrix. Default: 1
Number of columns of the sub-matrix. Default: 1
Matrix handle of the sub-matrix.
Set all values of a matrix.
Instance represents: Matrix handle of the input matrix.
Values to be set.
Set all values of a matrix.
Instance represents: Matrix handle of the input matrix.
Values to be set.
Return all values of a matrix.
Instance represents: Matrix handle of the input matrix.
Values of the matrix elements.
Set one or more elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Row numbers of the matrix elements to be modified. Default: 0
Column numbers of the matrix elements to be modified. Default: 0
Values to be set in the indicated matrix elements. Default: 0
Set one or more elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Row numbers of the matrix elements to be modified. Default: 0
Column numbers of the matrix elements to be modified. Default: 0
Values to be set in the indicated matrix elements. Default: 0
Return one or more elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Row numbers of matrix elements to be returned. Default: 0
Column numbers of matrix elements to be returned. Default: 0
Values of indicated matrix elements.
Return one or more elements of a matrix.
Instance represents: Matrix handle of the input matrix.
Row numbers of matrix elements to be returned. Default: 0
Column numbers of matrix elements to be returned. Default: 0
Values of indicated matrix elements.
Free the memory of a matrix.
Matrix handle.
Free the memory of a matrix.
Instance represents: Matrix handle.
Create a matrix.
Modified instance represents: Matrix handle.
Number of rows of the matrix. Default: 3
Number of columns of the matrix. Default: 3
Values for initializing the elements of the matrix. Default: 0
Create a matrix.
Modified instance represents: Matrix handle.
Number of rows of the matrix. Default: 3
Number of columns of the matrix. Default: 3
Values for initializing the elements of the matrix. Default: 0
Indexer for accessing matrix elements
Get the number of rows
Get the number of columns
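The matrix operations above can be combined as in the following illustrative C# sketch. This is a hypothetical usage example: the exact halcondotnet constructor and method overloads (HMatrix(rows, cols, value), AddMatrix, MultMatrix, NormMatrix) are assumed from the operator descriptions, not verified against the SDK.

```csharp
// Hypothetical HMatrix sketch (halcondotnet); overloads are assumed.
using HalconDotNet;

class MatrixDemo
{
    static void Main()
    {
        // create_matrix: 3x3 matrices, elements initialized to 2.0 / 1.0
        HMatrix a = new HMatrix(3, 3, 2.0);
        HMatrix b = new HMatrix(3, 3, 1.0);

        HMatrix sum  = a.AddMatrix(b);          // elementwise sum
        HMatrix prod = a.MultMatrix(b, "AB");   // matrix product A*B
        HTuple norm  = a.NormMatrix("2-norm");  // 2-norm of A

        // Handles wrap native resources and should be disposed.
        prod.Dispose(); sum.Dispose(); b.Dispose(); a.Dispose();
    }
}
```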
Represents an instance of a tool to measure distances.
Prepare the extraction of straight edges perpendicular to an annular arc.
Modified instance represents: Measure object handle.
Row coordinate of the center of the arc. Default: 100.0
Column coordinate of the center of the arc. Default: 100.0
Radius of the arc. Default: 50.0
Start angle of the arc in radians. Default: 0.0
Angular extent of the arc in radians. Default: 6.28318
Radius (half width) of the annulus. Default: 10.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Prepare the extraction of straight edges perpendicular to an annular arc.
Modified instance represents: Measure object handle.
Row coordinate of the center of the arc. Default: 100.0
Column coordinate of the center of the arc. Default: 100.0
Radius of the arc. Default: 50.0
Start angle of the arc in radians. Default: 0.0
Angular extent of the arc in radians. Default: 6.28318
Radius (half width) of the annulus. Default: 10.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Prepare the extraction of straight edges perpendicular to a rectangle.
Modified instance represents: Measure object handle.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Angle of longitudinal axis of the rectangle to horizontal (radians). Default: 0.0
Half width of the rectangle. Default: 100.0
Half height of the rectangle. Default: 20.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Prepare the extraction of straight edges perpendicular to a rectangle.
Modified instance represents: Measure object handle.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Angle of longitudinal axis of the rectangle to horizontal (radians). Default: 0.0
Half width of the rectangle. Default: 100.0
Half height of the rectangle. Default: 20.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Serialize a measure object.
Instance represents: Measure object handle.
Handle of the serialized item.
Deserialize a serialized measure object.
Modified instance represents: Measure object handle.
Handle of the serialized item.
Write a measure object to a file.
Instance represents: Measure object handle.
File name.
Read a measure object from a file.
Modified instance represents: Measure object handle.
File name.
Extract points with a particular gray value along a rectangle or an annular arc.
Instance represents: Measure object handle.
Input image.
Sigma of Gaussian smoothing. Default: 1.0
Threshold. Default: 128.0
Selection of points. Default: "all"
Row coordinates of points with threshold value.
Column coordinates of points with threshold value.
Distance between consecutive points.
Delete a measure object.
Instance represents: Measure object handle.
Extract a gray value profile perpendicular to a rectangle or annular arc.
Instance represents: Measure object handle.
Input image.
Gray value profile.
Reset a fuzzy function.
Instance represents: Measure object handle.
Selection of the fuzzy set. Default: "contrast"
Specify a normalized fuzzy function for edge pairs.
Instance represents: Measure object handle.
Favored width of edge pairs. Default: 10.0
Selection of the fuzzy set. Default: "size_abs_diff"
Fuzzy function.
Specify a normalized fuzzy function for edge pairs.
Instance represents: Measure object handle.
Favored width of edge pairs. Default: 10.0
Selection of the fuzzy set. Default: "size_abs_diff"
Fuzzy function.
Specify a fuzzy function.
Instance represents: Measure object handle.
Selection of the fuzzy set. Default: "contrast"
Fuzzy function.
Extract straight edge pairs perpendicular to a rectangle or an annular arc.
Instance represents: Measure object handle.
Input image.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select the first gray value transition of the edge pairs. Default: "all"
Constraint of pairing. Default: "no_restriction"
Number of edge pairs. Default: 10
Row coordinate of the first edge.
Column coordinate of the first edge.
Edge amplitude of the first edge (with sign).
Row coordinate of the second edge.
Column coordinate of the second edge.
Edge amplitude of the second edge (with sign).
Row coordinate of the center of the edge pair.
Column coordinate of the center of the edge pair.
Fuzzy evaluation of the edge pair.
Distance between the edges of the edge pair.
Extract straight edge pairs perpendicular to a rectangle or an annular arc.
Instance represents: Measure object handle.
Input image.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select the first gray value transition of the edge pairs. Default: "all"
Row coordinate of the first edge point.
Column coordinate of the first edge point.
Edge amplitude of the first edge (with sign).
Row coordinate of the second edge point.
Column coordinate of the second edge point.
Edge amplitude of the second edge (with sign).
Row coordinate of the center of the edge pair.
Column coordinate of the center of the edge pair.
Fuzzy evaluation of the edge pair.
Distance between edges of an edge pair.
Distance between consecutive edge pairs.
Extract straight edges perpendicular to a rectangle or an annular arc.
Instance represents: Measure object handle.
Input image.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select light/dark or dark/light edges. Default: "all"
Row coordinate of the edge point.
Column coordinate of the edge point.
Edge amplitude of the edge (with sign).
Fuzzy evaluation of the edges.
Distance between consecutive edges.
Extract straight edge pairs perpendicular to a rectangle or annular arc.
Instance represents: Measure object handle.
Input image.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Type of gray value transition that determines how edges are grouped to edge pairs. Default: "all"
Selection of edge pairs. Default: "all"
Row coordinate of the center of the first edge.
Column coordinate of the center of the first edge.
Edge amplitude of the first edge (with sign).
Row coordinate of the center of the second edge.
Column coordinate of the center of the second edge.
Edge amplitude of the second edge (with sign).
Distance between edges of an edge pair.
Distance between consecutive edge pairs.
Extract straight edges perpendicular to a rectangle or annular arc.
Instance represents: Measure object handle.
Input image.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Light/dark or dark/light edge. Default: "all"
Selection of end points. Default: "all"
Row coordinate of the center of the edge.
Column coordinate of the center of the edge.
Edge amplitude of the edge (with sign).
Distance between consecutive edges.
Translate a measure object.
Instance represents: Measure object handle.
Row coordinate of the new reference point. Default: 50.0
Column coordinate of the new reference point. Default: 100.0
Translate a measure object.
Instance represents: Measure object handle.
Row coordinate of the new reference point. Default: 50.0
Column coordinate of the new reference point. Default: 100.0
Prepare the extraction of straight edges perpendicular to an annular arc.
Modified instance represents: Measure object handle.
Row coordinate of the center of the arc. Default: 100.0
Column coordinate of the center of the arc. Default: 100.0
Radius of the arc. Default: 50.0
Start angle of the arc in radians. Default: 0.0
Angular extent of the arc in radians. Default: 6.28318
Radius (half width) of the annulus. Default: 10.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Prepare the extraction of straight edges perpendicular to an annular arc.
Modified instance represents: Measure object handle.
Row coordinate of the center of the arc. Default: 100.0
Column coordinate of the center of the arc. Default: 100.0
Radius of the arc. Default: 50.0
Start angle of the arc in radians. Default: 0.0
Angular extent of the arc in radians. Default: 6.28318
Radius (half width) of the annulus. Default: 10.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Prepare the extraction of straight edges perpendicular to a rectangle.
Modified instance represents: Measure object handle.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Angle of longitudinal axis of the rectangle to horizontal (radians). Default: 0.0
Half width of the rectangle. Default: 100.0
Half height of the rectangle. Default: 20.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Prepare the extraction of straight edges perpendicular to a rectangle.
Modified instance represents: Measure object handle.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Angle of longitudinal axis of the rectangle to horizontal (radians). Default: 0.0
Half width of the rectangle. Default: 100.0
Half height of the rectangle. Default: 20.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
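A typical use of the measure tool, following the parameter descriptions above (gen_measure_rectangle2 followed by measure_pos), might look like the C# sketch below. The image name and the exact halcondotnet overloads are assumptions for illustration only.

```csharp
// Hypothetical HMeasure sketch; parameter order follows
// gen_measure_rectangle2 / measure_pos, but overloads are assumed.
using HalconDotNet;

class MeasureDemo
{
    static void Main()
    {
        HImage image = new HImage("fabrik");   // example image name, assumed

        // Measure ROI: rectangle centered at (300, 200), axis-aligned,
        // half width 100, half height 20, for a 512x512 image.
        HMeasure measure = new HMeasure(300.0, 200.0, 0.0, 100.0, 20.0,
                                        512, 512, "nearest_neighbor");

        // Extract straight edges perpendicular to the rectangle.
        HTuple row, col, amplitude, distance;
        measure.MeasurePos(image, 1.0, 30.0, "all", "all",
                           out row, out col, out amplitude, out distance);

        measure.Dispose();
        image.Dispose();
    }
}
```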
Represents an instance of a data container to be sent via message queues.
Create a new empty message.
Modified instance represents: Handle of the newly created message.
Read a message from a file.
Modified instance represents: Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Read a message from a file.
Modified instance represents: Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Query message parameters or information about the message.
Instance represents: Message handle.
Names of the message parameters or info queries. Default: "message_keys"
Message keys the parameter/query should be applied to.
Values of the message parameters or info queries.
Query message parameters or information about the message.
Instance represents: Message handle.
Names of the message parameters or info queries. Default: "message_keys"
Message keys the parameter/query should be applied to.
Values of the message parameters or info queries.
Set message parameter or invoke commands on the message.
Instance represents: Message handle.
Names of the message parameters or action commands. Default: "remove_key"
Message keys the parameter/command should be applied to.
Values of the message parameters or action commands.
Set message parameter or invoke commands on the message.
Instance represents: Message handle.
Names of the message parameters or action commands. Default: "remove_key"
Message keys the parameter/command should be applied to.
Values of the message parameters or action commands.
Retrieve an object associated with the key from the message.
Instance represents: Message handle.
Key string or integer.
Tuple value retrieved from the message.
Retrieve an object associated with the key from the message.
Instance represents: Message handle.
Key string or integer.
Tuple value retrieved from the message.
Add a key/object pair to the message.
Instance represents: Message handle.
Object to be associated with the key.
Key string or integer.
Add a key/object pair to the message.
Instance represents: Message handle.
Object to be associated with the key.
Key string or integer.
Retrieve a tuple associated with the key from the message.
Instance represents: Message handle.
Key string or integer.
Tuple value retrieved from the message.
Retrieve a tuple associated with the key from the message.
Instance represents: Message handle.
Key string or integer.
Tuple value retrieved from the message.
Add a key/tuple pair to the message.
Instance represents: Message handle.
Key string or integer.
Tuple value to be associated with the key.
Add a key/tuple pair to the message.
Instance represents: Message handle.
Key string or integer.
Tuple value to be associated with the key.
Close a message handle and release all associated resources.
Message handle(s) to be closed.
Close a message handle and release all associated resources.
Instance represents: Message handle(s) to be closed.
Create a new empty message.
Modified instance represents: Handle of the newly created message.
Read a message from a file.
Modified instance represents: Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Read a message from a file.
Modified instance represents: Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Write a message to a file.
Instance represents: Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Write a message to a file.
Instance represents: Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Represents an instance of a message queue for inter-thread communication.
Create a new empty message queue.
Modified instance represents: Handle of the newly created message queue.
Query message queue parameters or information about the queue.
Instance represents: Message queue handle.
Names of the queue parameters or info queries. Default: "max_message_num"
Values of the queue parameters or info queries.
Query message queue parameters or information about the queue.
Instance represents: Message queue handle.
Names of the queue parameters or info queries. Default: "max_message_num"
Values of the queue parameters or info queries.
Set message queue parameters or invoke commands on the queue.
Instance represents: Message queue handle.
Names of the queue parameters or action commands. Default: "max_message_num"
Values of the queue parameters or action commands. Default: 1
Set message queue parameters or invoke commands on the queue.
Instance represents: Message queue handle.
Names of the queue parameters or action commands. Default: "max_message_num"
Values of the queue parameters or action commands. Default: 1
Receive one or more messages from the message queue.
Instance represents: Message queue handle.
Names of optional generic parameters. Default: "timeout"
Values of optional generic parameters. Default: "infinite"
Handle(s) of the dequeued message(s).
Receive one or more messages from the message queue.
Instance represents: Message queue handle.
Names of optional generic parameters. Default: "timeout"
Values of optional generic parameters. Default: "infinite"
Handle(s) of the dequeued message(s).
Enqueue one or more messages to the message queue.
Instance represents: Message queue handle.
Handle(s) of message(s) to be enqueued.
Names of optional generic parameters.
Values of optional generic parameters.
Enqueue one or more messages to the message queue.
Instance represents: Message queue handle.
Handle(s) of message(s) to be enqueued.
Names of optional generic parameters.
Values of optional generic parameters.
Close a message queue handle and release all associated resources.
Message queue handle(s) to be closed.
Close a message queue handle and release all associated resources.
Instance represents: Message queue handle(s) to be closed.
Create a new empty message queue.
Modified instance represents: Handle of the newly created message queue.
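Putting the message and message-queue operators together, inter-thread communication might be sketched as below. The method names mirror the operators above (set_message_tuple, enqueue_message, dequeue_message), but the halcondotnet signatures, including the single-message return of DequeueMessage, are assumptions.

```csharp
// Hypothetical HMessage / HMessageQueue sketch; signatures are assumed.
using HalconDotNet;

class QueueDemo
{
    static void Main()
    {
        HMessageQueue queue = new HMessageQueue();

        HMessage msg = new HMessage();
        msg.SetMessageTuple("status", new HTuple("ok"));  // key/tuple pair

        // Enqueue with no generic parameters.
        queue.EnqueueMessage(msg, new HTuple(), new HTuple());

        // Block up to 1000 ms waiting for a message ("timeout" parameter);
        // dequeue_message may return several messages, one is assumed here.
        HMessage received = queue.DequeueMessage(
            new HTuple("timeout"), new HTuple(1000));
        HTuple status = received.GetMessageTuple("status");

        received.Dispose(); msg.Dispose(); queue.Dispose();
    }
}
```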
Represents an instance of a metrology model.
Read a metrology model from a file.
Modified instance represents: Handle of the metrology model.
File name.
Create the data structure that is needed to measure geometric shapes.
Modified instance represents: Handle of the metrology model.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Query the model contour of a metrology object in image coordinates.
Instance represents: Handle of the metrology model.
Index of the metrology object. Default: 0
Distance between neighboring contour points. Default: 1.5
Model contour.
Query the model contour of a metrology object in image coordinates.
Instance represents: Handle of the metrology model.
Index of the metrology object. Default: 0
Distance between neighboring contour points. Default: 1.5
Model contour.
Query the result contour of a metrology object.
Instance represents: Handle of the metrology model.
Index of the metrology object. Default: 0
Instance of the metrology object. Default: "all"
Distance between neighboring contour points. Default: 1.5
Result contour for the given metrology object.
Query the result contour of a metrology object.
Instance represents: Handle of the metrology model.
Index of the metrology object. Default: 0
Instance of the metrology object. Default: "all"
Distance between neighboring contour points. Default: 1.5
Result contour for the given metrology object.
Alignment of a metrology model.
Instance represents: Handle of the metrology model.
Row coordinate of the alignment. Default: 0
Column coordinate of the alignment. Default: 0
Rotation angle of the alignment. Default: 0
Alignment of a metrology model.
Instance represents: Handle of the metrology model.
Row coordinate of the alignment. Default: 0
Column coordinate of the alignment. Default: 0
Rotation angle of the alignment. Default: 0
Add a metrology object to a metrology model.
Instance represents: Handle of the metrology model.
Type of the metrology object to be added. Default: "circle"
Parameters of the metrology object to be added.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a metrology object to a metrology model.
Instance represents: Handle of the metrology model.
Type of the metrology object to be added. Default: "circle"
Parameters of the metrology object to be added.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Get parameters that are valid for the entire metrology model.
Instance represents: Handle of the metrology model.
Name of the generic parameter. Default: "camera_param"
Value of the generic parameter.
Set parameters that are valid for the entire metrology model.
Instance represents: Handle of the metrology model.
Name of the generic parameter. Default: "camera_param"
Value of the generic parameter. Default: []
Set parameters that are valid for the entire metrology model.
Instance represents: Handle of the metrology model.
Name of the generic parameter. Default: "camera_param"
Value of the generic parameter. Default: []
Deserialize a serialized metrology model.
Modified instance represents: Handle of the metrology model.
Handle of the serialized item.
Serialize a metrology model.
Instance represents: Handle of the metrology model.
Handle of the serialized item.
Transform metrology objects of a metrology model, e.g. for alignment.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Translation in row direction.
Translation in column direction.
Rotation angle.
Mode of the transformation. Default: "absolute"
Transform metrology objects of a metrology model, e.g. for alignment.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Translation in row direction.
Translation in column direction.
Rotation angle.
Mode of the transformation. Default: "absolute"
Write a metrology model to a file.
Instance represents: Handle of the metrology model.
File name.
Read a metrology model from a file.
Modified instance represents: Handle of the metrology model.
File name.
Copy a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Handle of the copied metrology model.
Copy a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Handle of the copied metrology model.
Copy metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Indices of the copied metrology objects.
Copy metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Indices of the copied metrology objects.
Get the number of instances of the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: 0
Number of instances of the metrology objects.
Get the number of instances of the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: 0
Number of instances of the metrology objects.
Get the results of the measurement of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology object. Default: 0
Instance of the metrology object. Default: "all"
Name of the generic parameter. Default: "result_type"
Value of the generic parameter. Default: "all_param"
Result values.
Get the results of the measurement of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology object. Default: 0
Instance of the metrology object. Default: "all"
Name of the generic parameter. Default: "result_type"
Value of the generic parameter. Default: "all_param"
Result values.
Get the measure regions and the results of the edge location for the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Select light/dark or dark/light edges. Default: "all"
Row coordinates of the measured edges.
Column coordinates of the measured edges.
Rectangular XLD contours of the measure regions.
Get the measure regions and the results of the edge location for the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Select light/dark or dark/light edges. Default: "all"
Row coordinates of the measured edges.
Column coordinates of the measured edges.
Rectangular XLD contours of the measure regions.
Measure and fit the geometric shapes of all metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Input image.
Get the indices of the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Indices of the metrology objects.
Reset all fuzzy parameters and fuzzy functions of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Reset all fuzzy parameters and fuzzy functions of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Reset all parameters of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Reset all parameters of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Get a fuzzy parameter of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "fuzzy_thresh"
Values of the generic parameters.
Get a fuzzy parameter of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "fuzzy_thresh"
Values of the generic parameters.
Get one or several parameters of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "num_measures"
Values of the generic parameters.
Get one or several parameters of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "num_measures"
Values of the generic parameters.
Set fuzzy parameters or fuzzy functions for a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "fuzzy_thresh"
Values of the generic parameters. Default: 0.5
Set fuzzy parameters or fuzzy functions for a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "fuzzy_thresh"
Values of the generic parameters. Default: 0.5
Set parameters for the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "num_instances"
Values of the generic parameters. Default: 1
Set parameters for the metrology objects of a metrology model.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "num_instances"
Values of the generic parameters. Default: 1
Add a rectangle to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the center of the rectangle.
Column (or X) coordinate of the center of the rectangle.
Orientation of the main axis [rad].
Length of the larger half edge of the rectangle.
Length of the smaller half edge of the rectangle.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a rectangle to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the center of the rectangle.
Column (or X) coordinate of the center of the rectangle.
Orientation of the main axis [rad].
Length of the larger half edge of the rectangle.
Length of the smaller half edge of the rectangle.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a line to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the start of the line.
Column (or X) coordinate of the start of the line.
Row (or Y) coordinate of the end of the line.
Column (or X) coordinate of the end of the line.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a line to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the start of the line.
Column (or X) coordinate of the start of the line.
Row (or Y) coordinate of the end of the line.
Column (or X) coordinate of the end of the line.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add an ellipse or an elliptic arc to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the center of the ellipse.
Column (or X) coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add an ellipse or an elliptic arc to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the center of the ellipse.
Column (or X) coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a circle or a circular arc to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the center of the circle or circular arc.
Column (or X) coordinate of the center of the circle or circular arc.
Radius of the circle or circular arc.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a circle or a circular arc to a metrology model.
Instance represents: Handle of the metrology model.
Row (or Y) coordinate of the center of the circle or circular arc.
Column (or X) coordinate of the center of the circle or circular arc.
Radius of the circle or circular arc.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Delete a metrology model and free the allocated memory.
Instance represents: Handle of the metrology model.
Delete metrology objects and free the allocated memory.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Delete metrology objects and free the allocated memory.
Instance represents: Handle of the metrology model.
Index of the metrology objects. Default: "all"
Set the size of the image of metrology objects.
Instance represents: Handle of the metrology model.
Width of the image to be processed. Default: 640
Height of the image to be processed. Default: 480
Create the data structure that is needed to measure geometric shapes.
Modified instance represents: Handle of the metrology model.
Class grouping methods belonging to no other HALCON class.
Write a tuple to a file.
Tuple with any kind of data.
Name of the file to be written.
Read a tuple from a file.
Name of the file to be read.
Tuple with any kind of data.
This operator is inoperable. It had the following function: Close all serial devices.
This operator is inoperable. It had the following function: Clear all OCV tools.
This operator is inoperable. It had the following function: Destroy all OCR classifiers.
Concatenate training files.
Names of the single training files. Default: ""
Name of the composed training file. Default: "all_characters"
Concatenate training files.
Names of the single training files. Default: ""
Name of the composed training file. Default: "all_characters"
Query which characters are stored in a (protected) training file.
Names of the training files. Default: ""
Passwords for protected training files.
Number of characters.
Names of the read characters.
Query which characters are stored in a (protected) training file.
Names of the training files. Default: ""
Passwords for protected training files.
Number of characters.
Names of the read characters.
Query which characters are stored in a training file.
Names of the training files. Default: ""
Number of characters.
Names of the read characters.
Query which characters are stored in a training file.
Names of the training files. Default: ""
Number of characters.
Names of the read characters.
This operator is inoperable. It had the following function: Delete all measure objects.
Convert spherical coordinates of a 3D point to Cartesian coordinates.
Longitude of the 3D point.
Latitude of the 3D point.
Radius of the 3D point.
Normal vector of the equatorial plane (points to the north pole). Default: "-y"
Coordinate axis in the equatorial plane that points to the zero meridian. Default: "-z"
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Convert spherical coordinates of a 3D point to Cartesian coordinates.
Longitude of the 3D point.
Latitude of the 3D point.
Radius of the 3D point.
Normal vector of the equatorial plane (points to the north pole). Default: "-y"
Coordinate axis in the equatorial plane that points to the zero meridian. Default: "-z"
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
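The conversion above amounts to standard spherical-to-Cartesian trigonometry. A minimal sketch assuming the conventional axis assignment (z axis to the north pole, x axis to the zero meridian); HALCON's configurable "-y"/"-z" defaults permute these components, so this is illustrative, not the HALCON implementation:

```python
import math

def spher_to_cart(longitude, latitude, radius):
    # Assumed convention: the z axis points to the north pole and the
    # x axis to the zero meridian; HALCON's EquatPlaneNormal and
    # ZeroMeridian parameters select a different axis assignment.
    x = radius * math.cos(latitude) * math.cos(longitude)
    y = radius * math.cos(latitude) * math.sin(longitude)
    z = radius * math.sin(latitude)
    return x, y, z
```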
Convert Cartesian coordinates of a 3D point to spherical coordinates.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Normal vector of the equatorial plane (points to the north pole). Default: "-y"
Coordinate axis in the equatorial plane that points to the zero meridian. Default: "-z"
Latitude of the 3D point.
Radius of the 3D point.
Longitude of the 3D point.
Convert Cartesian coordinates of a 3D point to spherical coordinates.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Normal vector of the equatorial plane (points to the north pole). Default: "-y"
Coordinate axis in the equatorial plane that points to the zero meridian. Default: "-z"
Latitude of the 3D point.
Radius of the 3D point.
Longitude of the 3D point.
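The inverse conversion can be sketched the same way, under the same assumed axis convention (not the HALCON implementation):

```python
import math

def cart_to_spher(x, y, z):
    # Inverse of the spherical-to-Cartesian conversion under the assumed
    # convention (z axis = north pole, x axis = zero meridian).
    radius = math.sqrt(x * x + y * y + z * z)
    longitude = math.atan2(y, x)
    latitude = math.asin(z / radius) if radius > 0.0 else 0.0
    return longitude, latitude, radius
```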
Read the description file of a Kalman filter.
Description file for a Kalman filter. Default: "kalman.init"
The lined up matrices A, C, Q, possibly G and u, and if necessary L stored in row-major order.
The matrix R stored in row-major order.
The matrix P0 (error covariance matrix of the initial state estimate) stored in row-major order and the initial state estimate x0 lined up.
The dimensions of the state vector, the measurement vector and the controller vector.
Read an update file of a Kalman filter.
Update file for a Kalman filter. Default: "kalman.updt"
The dimensions of the state vector, measurement vector and controller vector. Default: [3,1,0]
The lined up matrices A, C, Q, possibly G and u, and if necessary L, which all have been stored in row-major order. Default: [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
The matrix R stored in row-major order. Default: [1,2]
The lined up matrices A, C, Q, possibly G and u, and if necessary L, which all have been stored in row-major order.
The matrix R stored in row-major order.
The dimensions of the state vector, measurement vector and controller vector.
Estimate the current state of a system with the help of Kalman filtering.
The dimensions of the state vector, the measurement vector and the controller vector. Default: [3,1,0]
The lined up matrices A, C, Q, possibly G and u, and if necessary L, which have been stored in row-major order. Default: [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
The matrix R stored in row-major order and the measurement vector y lined up. Default: [1.2,1.0]
The matrix P* (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x* lined up. Default: [0.0,0.0,0.0,0.0,180.5,0.0,0.0,0.0,100.0,0.0,100.0,0.0]
The matrix P~ (the estimation-error covariances) stored in row-major order and the estimated state x~ lined up.
The matrix P* (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x* lined up.
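As a rough illustration of what one filter cycle computes, here is the scalar (1D) case sketched from the textbook Kalman equations; the operator itself works on the full matrix form packed into the parameter tuples described above, so every name here is illustrative:

```python
def kalman_step_1d(x_pred, p_pred, a, q, c, r, y):
    # One update/predict cycle of a scalar Kalman filter:
    #   gain:     K  = P* c / (c P* c + r)
    #   update:   x~ = x* + K (y - c x*),   P~ = (1 - K c) P*
    #   predict:  x* = a x~,                P* = a P~ a + q
    k = p_pred * c / (c * p_pred * c + r)
    x_est = x_pred + k * (y - c * x_pred)
    p_est = (1.0 - k * c) * p_pred
    x_next = a * x_est
    p_next = a * p_est * a + q
    return x_est, p_est, x_next, p_next
```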
Generate a PostScript file, which describes the rectification grid.
Width of the checkered pattern in meters (without the two frames). Default: 0.17
Number of squares per row and column. Default: 17
File name of the PostScript file. Default: "rectification_grid.ps"
Generate a projection map that describes the mapping between an arbitrarily distorted image and the rectified image.
Distance of the grid points in the rectified image.
Row coordinates of the grid points in the distorted image.
Column coordinates of the grid points in the distorted image.
Width of the point grid (number of grid points).
Width of the images to be rectified.
Height of the images to be rectified.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Calculate the projection of a point onto a line.
Row coordinate of the point.
Column coordinate of the point.
Row coordinate of the first point on the line.
Column coordinate of the first point on the line.
Row coordinate of the second point on the line.
Column coordinate of the second point on the line.
Row coordinate of the projected point.
Column coordinate of the projected point.
Calculate the projection of a point onto a line.
Row coordinate of the point.
Column coordinate of the point.
Row coordinate of the first point on the line.
Column coordinate of the first point on the line.
Row coordinate of the second point on the line.
Column coordinate of the second point on the line.
Row coordinate of the projected point.
Column coordinate of the projected point.
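The projected point is the foot of the perpendicular. A minimal sketch of the underlying vector arithmetic (not the HALCON API):

```python
def projection_pl(row, col, row1, col1, row2, col2):
    # Foot of the perpendicular from (row, col) onto the infinite line
    # through (row1, col1) and (row2, col2).
    dr, dc = row2 - row1, col2 - col1
    t = ((row - row1) * dr + (col - col1) * dc) / (dr * dr + dc * dc)
    return row1 + t * dr, col1 + t * dc
```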
Calculate a point of an ellipse corresponding to a specific angle.
Angle corresponding to the resulting point [rad]. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Row coordinate of the point on the ellipse.
Column coordinate of the point on the ellipse.
Calculate a point of an ellipse corresponding to a specific angle.
Angle corresponding to the resulting point [rad]. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Row coordinate of the point on the ellipse.
Column coordinate of the point on the ellipse.
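The parametric form behind this operator can be sketched as follows, assuming the usual image convention that the row axis points downward and angles run counterclockwise (HALCON's exact convention may differ in detail):

```python
import math

def point_on_ellipse(angle, row_c, col_c, phi, ra, rb):
    # Parametric ellipse point: angle is the parametric angle, phi the
    # orientation of the main axis, ra/rb the half axes.  The rotated
    # y component is subtracted from the row coordinate because the
    # row axis points downward.
    x = ra * math.cos(angle)
    y = rb * math.sin(angle)
    row = row_c - (x * math.sin(phi) + y * math.cos(phi))
    col = col_c + (x * math.cos(phi) - y * math.sin(phi))
    return row, col
```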
Calculate the intersection point of two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Row coordinate of the intersection point.
Column coordinate of the intersection point.
Are the two lines parallel?
Calculate the intersection point of two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Row coordinate of the intersection point.
Column coordinate of the intersection point.
Are the two lines parallel?
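The intersection and the parallel test both fall out of the same determinant. A sketch of the computation (illustrative, not the HALCON API):

```python
def intersection_lines(r11, c11, r12, c12, r21, c21, r22, c22, eps=1e-12):
    # Solve for the intersection of two infinite lines given by two
    # points each; a (near-)zero determinant means the lines are parallel.
    d1r, d1c = r12 - r11, c12 - c11
    d2r, d2c = r22 - r21, c22 - c21
    denom = d1r * d2c - d1c * d2r
    if abs(denom) < eps:
        return None, None, True          # parallel: no unique intersection
    t = ((r21 - r11) * d2c - (c21 - c11) * d2r) / denom
    return r11 + t * d1r, c11 + t * d1c, False
```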
Calculate the angle between one line and the horizontal axis.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Angle between the line and the horizontal axis [rad].
Calculate the angle between one line and the horizontal axis.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Angle between the line and the horizontal axis [rad].
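The angle can be sketched with a single atan2; the row difference is negated because row coordinates grow downward, so a counterclockwise angle comes out positive (the value range may differ from HALCON's normalization):

```python
import math

def angle_lx(row1, col1, row2, col2):
    # Angle between the directed line (P1 -> P2) and the horizontal
    # (column) axis, in radians.
    return math.atan2(-(row2 - row1), col2 - col1)
```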
Calculate the angle between two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Angle between the lines [rad].
Calculate the angle between two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Angle between the lines [rad].
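One way to sketch this: take the difference of the two lines' angles to the horizontal axis and normalize it (the normalization range is an assumption, not necessarily HALCON's):

```python
import math

def angle_ll(r11, c11, r12, c12, r21, c21, r22, c22):
    # Angle from the first line to the second, normalized to (-pi, pi].
    a1 = math.atan2(-(r12 - r11), c12 - c11)
    a2 = math.atan2(-(r22 - r21), c22 - c21)
    a = a2 - a1
    while a <= -math.pi:
        a += 2.0 * math.pi
    while a > math.pi:
        a -= 2.0 * math.pi
    return a
```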
Calculate the distances between a line segment and a line.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line segment and the line.
Maximum distance between the line segment and the line.
Calculate the distances between a line segment and a line.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line segment and the line.
Maximum distance between the line segment and the line.
Calculate the distances between two line segments.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line segments.
Maximum distance between the line segments.
Calculate the distances between two line segments.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line segments.
Maximum distance between the line segments.
Calculate the distances between a point and a line segment.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the point and the line segment.
Maximum distance between the point and the line segment.
Calculate the distances between a point and a line segment.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the point and the line segment.
Maximum distance between the point and the line segment.
Calculate the distance between one point and one line.
Row coordinate of the point.
Column coordinate of the point.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Distance between the point and the line.
Calculate the distance between one point and one line.
Row coordinate of the point.
Column coordinate of the point.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Distance between the point and the line.
Calculate the distance between two points.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the second point.
Column coordinate of the second point.
Distance between the points.
Calculate the distance between two points.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the second point.
Column coordinate of the second point.
Distance between the points.
Information on smoothing filter smooth_image.
Name of required filter. Default: "deriche2"
Filter parameter: small values cause strong smoothing (reversed in case of 'gauss'). Default: 0.5
In case of the Gauss filter: coefficients of the "positive" half of the 1D impulse response.
Width of the filter is approximately size x size pixels.
Generate a Gaussian noise distribution.
Standard deviation of the Gaussian noise distribution. Default: 2.0
Resulting Gaussian noise distribution.
Generate a salt-and-pepper noise distribution.
Percentage of salt (white noise pixels). Default: 5.0
Percentage of pepper (black noise pixels). Default: 5.0
Resulting noise distribution.
Generate a salt-and-pepper noise distribution.
Percentage of salt (white noise pixels). Default: 5.0
Percentage of pepper (black noise pixels). Default: 5.0
Resulting noise distribution.
Deserialize FFT speed optimization data.
Handle of the serialized item.
Serialize FFT speed optimization data.
Handle of the serialized item.
Load FFT speed optimization data from a file.
File name of the optimization data. Default: "fft_opt.dat"
Store FFT speed optimization data in a file.
File name of the optimization data. Default: "fft_opt.dat"
Optimize the runtime of the real-valued FFT.
Width of the image for which the runtime should be optimized. Default: 512
Height of the image for which the runtime should be optimized. Default: 512
Thoroughness of the search for the optimum runtime. Default: "standard"
Optimize the runtime of the FFT.
Width of the image for which the runtime should be optimized. Default: 512
Height of the image for which the runtime should be optimized. Default: 512
Thoroughness of the search for the optimum runtime. Default: "standard"
Return the filter coefficients of a filter in edges_image.
Name of the edge operator. Default: "lanser2"
1D edge filter ('edge') or 1D smoothing filter ('smooth'). Default: "edge"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 0.5
For Canny filters: Coefficients of the "positive" half of the 1D impulse response. All others: Coefficients of a corresponding non-recursive filter.
Filter width in pixels.
Copy a file to a new location.
File to be copied.
Target location.
Set the current working directory.
Name of current working directory to be set.
Get the current working directory.
Name of current working directory.
Delete an empty directory.
Name of directory to be deleted.
Make a directory.
Name of directory to be created.
List all files in a directory.
Name of directory to be listed.
Processing options. Default: "files"
Found files (and directories).
List all files in a directory.
Name of directory to be listed.
Processing options. Default: "files"
Found files (and directories).
Delete a file.
File to be deleted.
Check whether a file exists.
Name of file to be checked. Default: "/bin/cc"
Boolean value.
This operator is inoperable. It had the following function: Close all open files.
Select the longest input lines.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
(Maximum) desired number of output lines. Default: 10
Row coordinates of the starting points of the output lines.
Column coordinates of the starting points of the output lines.
Row coordinates of the ending points of the output lines.
Column coordinates of the ending points of the output lines.
Partition lines according to various criteria.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Features to be used for selection.
Desired combination of the features.
Lower limits of the features or 'min'. Default: "min"
Upper limits of the features or 'max'. Default: "max"
Row coordinates of the starting points of the lines fulfilling the conditions.
Column coordinates of the starting points of the lines fulfilling the conditions.
Row coordinates of the ending points of the lines fulfilling the conditions.
Column coordinates of the ending points of the lines fulfilling the conditions.
Row coordinates of the starting points of the lines not fulfilling the conditions.
Column coordinates of the starting points of the lines not fulfilling the conditions.
Row coordinates of the ending points of the lines not fulfilling the conditions.
Column coordinates of the ending points of the lines not fulfilling the conditions.
Partition lines according to various criteria.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Features to be used for selection.
Desired combination of the features.
Lower limits of the features or 'min'. Default: "min"
Upper limits of the features or 'max'. Default: "max"
Row coordinates of the starting points of the lines fulfilling the conditions.
Column coordinates of the starting points of the lines fulfilling the conditions.
Row coordinates of the ending points of the lines fulfilling the conditions.
Column coordinates of the ending points of the lines fulfilling the conditions.
Row coordinates of the starting points of the lines not fulfilling the conditions.
Column coordinates of the starting points of the lines not fulfilling the conditions.
Row coordinates of the ending points of the lines not fulfilling the conditions.
Column coordinates of the ending points of the lines not fulfilling the conditions.
Select lines according to various criteria.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Features to be used for selection. Default: "length"
Desired combination of the features. Default: "and"
Lower limits of the features or 'min'. Default: "min"
Upper limits of the features or 'max'. Default: "max"
Row coordinates of the starting points of the output lines.
Column coordinates of the starting points of the output lines.
Row coordinates of the ending points of the output lines.
Column coordinates of the ending points of the output lines.
Select lines according to various criteria.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Features to be used for selection. Default: "length"
Desired combination of the features. Default: "and"
Lower limits of the features or 'min'. Default: "min"
Upper limits of the features or 'max'. Default: "max"
Row coordinates of the starting points of the output lines.
Column coordinates of the starting points of the output lines.
Row coordinates of the ending points of the output lines.
Column coordinates of the ending points of the output lines.
Calculate the center of gravity, length, and orientation of a line.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Row coordinates of the centers of gravity of the input lines.
Column coordinates of the centers of gravity of the input lines.
Euclidean length of the input lines.
Orientation of the input lines.
Calculate the center of gravity, length, and orientation of a line.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Row coordinates of the centers of gravity of the input lines.
Column coordinates of the centers of gravity of the input lines.
Euclidean length of the input lines.
Orientation of the input lines.
Calculate the orientation of lines.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Orientation of the input lines.
Calculate the orientation of lines.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Orientation of the input lines.
Approximate a contour by arcs and lines.
Row of the contour. Default: 32
Column of the contour. Default: 32
Row of the center of an arc.
Column of the center of an arc.
Angle of an arc.
Row of the starting point of an arc.
Column of the starting point of an arc.
Row of the starting point of a line segment.
Column of the starting point of a line segment.
Row of the ending point of a line segment.
Column of the ending point of a line segment.
Sequence of line (value 0) and arc segments (value 1).
Approximate a contour by arcs and lines.
Row of the contour. Default: 32
Column of the contour. Default: 32
Minimum width of Gauss operator for coordinate smoothing (> 0.4). Default: 0.5
Maximum width of Gauss operator for coordinate smoothing (> 0.4). Default: 2.4
Minimum threshold value of the curvature for accepting a corner (relative to the largest curvature present). Default: 0.3
Maximum threshold value of the curvature for accepting a corner (relative to the largest curvature present). Default: 0.9
Step width for threshold increase. Default: 0.2
Minimum width of Gauss operator for smoothing the curvature function (> 0.4). Default: 0.5
Maximum width of Gauss operator for smoothing the curvature function. Default: 2.4
Minimum width of curve area for curvature determination (> 0.4). Default: 2
Maximum width of curve area for curvature determination. Default: 12
Weighting factor for approximation precision. Default: 1.0
Weighting factor for large segments. Default: 1.0
Weighting factor for small segments. Default: 1.0
Row of the center of an arc.
Column of the center of an arc.
Angle of an arc.
Row of the starting point of an arc.
Column of the starting point of an arc.
Row of the starting point of a line segment.
Column of the starting point of a line segment.
Row of the ending point of a line segment.
Column of the ending point of a line segment.
Sequence of line (value 0) and arc segments (value 1).
This operator is inoperable. It had the following function: Destroy all classifiers.
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with rectangularly arranged marks.
Number of marks in x direction. Default: 7
Number of marks in y direction. Default: 7
Distance of the marks in meters. Default: 0.0125
Ratio of the mark diameter to the mark distance. Default: 0.5
File name of the calibration plate description. Default: "caltab.descr"
File name of the PostScript file. Default: "caltab.ps"
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with hexagonally arranged marks.
Number of rows. Default: 27
Number of marks per row. Default: 31
Diameter of the marks. Default: 0.00258065
Row indices of the finder patterns. Default: [13,6,6,20,20]
Column indices of the finder patterns. Default: [15,6,24,6,24]
Polarity of the marks. Default: "light_on_dark"
File name of the calibration plate description. Default: "calplate.cpd"
File name of the PostScript file. Default: "calplate.ps"
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with hexagonally arranged marks.
Number of rows. Default: 27
Number of marks per row. Default: 31
Diameter of the marks. Default: 0.00258065
Row indices of the finder patterns. Default: [13,6,6,20,20]
Column indices of the finder patterns. Default: [15,6,24,6,24]
Polarity of the marks. Default: "light_on_dark"
File name of the calibration plate description. Default: "calplate.cpd"
File name of the PostScript file. Default: "calplate.ps"
Read the mark center points from the calibration plate description file.
File name of the calibration plate description. Default: "calplate_320mm.cpd"
X coordinates of the mark center points in the coordinate system of the calibration plate.
Y coordinates of the mark center points in the coordinate system of the calibration plate.
Z coordinates of the mark center points in the coordinate system of the calibration plate.
This operator is inoperable. It had the following function: Delete all background estimation data sets.
This operator is inoperable. It had the following function: Close all image acquisition devices.
Represents an instance of a mutex synchronization object.
Create a mutual exclusion synchronization object.
Modified instance represents: Mutex synchronization object.
Mutex attribute class. Default: []
Mutex attribute kind. Default: []
Create a mutual exclusion synchronization object.
Modified instance represents: Mutex synchronization object.
Mutex attribute class. Default: []
Mutex attribute kind. Default: []
Clear the mutex synchronization object.
Instance represents: Mutex synchronization object.
Unlock a mutex synchronization object.
Instance represents: Mutex synchronization object.
Lock a mutex synchronization object.
Instance represents: Mutex synchronization object.
Indicates whether the mutex was already locked.
Lock a mutex synchronization object.
Instance represents: Mutex synchronization object.
Create a mutual exclusion synchronization object.
Modified instance represents: Mutex synchronization object.
Mutex attribute class. Default: []
Mutex attribute kind. Default: []
Create a mutual exclusion synchronization object.
Modified instance represents: Mutex synchronization object.
Mutex attribute class. Default: []
Mutex attribute kind. Default: []
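The mutex operations above follow standard lock semantics. As an illustrative sketch (using Python's threading.Lock rather than the halcondotnet HMutex class), TryLockMutex corresponds to a non-blocking acquire whose result answers "was the mutex already locked?":

```python
import threading

# Sketch of the HMutex semantics with Python's threading.Lock
# (an analogue for illustration, not the halcondotnet API).
mutex = threading.Lock()                   # CreateMutex
mutex.acquire()                            # LockMutex: blocks until the lock is free
busy = not mutex.acquire(blocking=False)   # TryLockMutex: Busy is True if already locked
mutex.release()                            # UnlockMutex
```

Because the non-blocking acquire fails while the mutex is held, `busy` ends up True here; after the single release the mutex is free again.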
Represents an instance of an NCC model for matching.
Read an NCC model from a file.
Modified instance represents: Handle of the model.
File name.
Prepare an NCC model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Prepare an NCC model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Free the memory of an NCC model.
Instance represents: Handle of the model.
Deserialize an NCC model.
Modified instance represents: Handle of the model.
Handle of the serialized item.
Serialize an NCC model.
Instance represents: Handle of the model.
Handle of the serialized item.
Read an NCC model from a file.
Modified instance represents: Handle of the model.
File name.
Write an NCC model to a file.
Instance represents: Handle of the model.
File name.
Determine the parameters of an NCC model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Match metric. Default: "use_polarity"
Parameters to be determined automatically. Default: "all"
Value of the automatically determined parameter.
Name of the automatically determined parameter.
Determine the parameters of an NCC model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Match metric. Default: "use_polarity"
Parameters to be determined automatically. Default: "all"
Value of the automatically determined parameter.
Name of the automatically determined parameter.
Return the parameters of an NCC model.
Instance represents: Handle of the model.
Smallest rotation of the pattern.
Extent of the rotation angles.
Step length of the angles (resolution).
Match metric.
Number of pyramid levels.
Return the origin (reference point) of an NCC model.
Instance represents: Handle of the model.
Row coordinate of the origin of the NCC model.
Column coordinate of the origin of the NCC model.
Set the origin (reference point) of an NCC model.
Instance represents: Handle of the model.
Row coordinate of the origin of the NCC model.
Column coordinate of the origin of the NCC model.
Find the best matches of an NCC model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.8
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
Find the best matches of an NCC model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.8
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
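The score returned by the NCC matching operators is a normalized cross-correlation: 1.0 for a perfect match, invariant to linear brightness changes. A minimal pure-Python sketch of that score (over flattened pixel lists; the operator itself evaluates it over an image pyramid and rotation range):

```python
from math import sqrt

def ncc_score(template, window):
    """Normalized cross-correlation of two equally sized pixel lists.

    Sketch of the score behind NCC matching.  With the "use_polarity"
    metric the signed value is used; "ignore_global_polarity"
    effectively takes the absolute value.
    """
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = sqrt(sum((t - mt) ** 2 for t in template)
               * sum((w - mw) ** 2 for w in window))
    return num / den if den else 0.0
```

A window that is a scaled and shifted copy of the template scores 1.0, which is why the match is robust to global illumination changes.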
Set selected parameters of the NCC model.
Instance represents: Handle of the model.
Parameter names.
Parameter values.
Prepare an NCC model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Prepare an NCC model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Find the best matches of multiple NCC models.
Input image in which the model should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.8
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple NCC models.
Instance represents: Handle of the models.
Input image in which the model should be found.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.8
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Return the region used to create an NCC model.
Instance represents: Handle of the model.
Model region of the NCC model.
Represents an instance of a 3D object model.
Create an empty 3D object model.
Modified instance represents: Handle of the new 3D object model.
Create a 3D object model that represents a point cloud from a set of 3D points.
Modified instance represents: Handle of the resulting 3D object model.
The x-coordinates of the points in the 3D point cloud.
The y-coordinates of the points in the 3D point cloud.
The z-coordinates of the points in the 3D point cloud.
Create a 3D object model that represents a point cloud from a set of 3D points.
Modified instance represents: Handle of the resulting 3D object model.
The x-coordinates of the points in the 3D point cloud.
The y-coordinates of the points in the 3D point cloud.
The z-coordinates of the points in the 3D point cloud.
Transform 3D points from images to a 3D object model.
Modified instance represents: Handle of the 3D object model.
Image with the X-Coordinates and the ROI of the 3D points.
Image with the Y-Coordinates of the 3D points.
Image with the Z-Coordinates of the 3D points.
Read a 3D object model from a file.
Modified instance represents: Handle of the 3D object model.
Filename of the file to be read. Default: "mvtec_bunny_normals"
Scale of the data in the file. Default: "m"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Status information.
Read a 3D object model from a file.
Modified instance represents: Handle of the 3D object model.
Filename of the file to be read. Default: "mvtec_bunny_normals"
Scale of the data in the file. Default: "m"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Status information.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Get the result of a calibrated measurement performed with the sheet-of-light technique as a 3D object model.
Modified instance represents: Handle of the resulting 3D object model.
Handle for accessing the sheet-of-light model.
Fit 3D primitives into a set of 3D points.
Handle of the input 3D object model.
Names of the generic parameters.
Values of the generic parameters.
Handle of the output 3D object model.
Fit 3D primitives into a set of 3D points.
Instance represents: Handle of the input 3D object model.
Names of the generic parameters.
Values of the generic parameters.
Handle of the output 3D object model.
Segment a set of 3D points into subsets with similar characteristics.
Handle of the input 3D object model.
Names of the generic parameters.
Values of the generic parameters.
Handle of the output 3D object model.
Segment a set of 3D points into subsets with similar characteristics.
Instance represents: Handle of the input 3D object model.
Names of the generic parameters.
Values of the generic parameters.
Handle of the output 3D object model.
Calculate the 3D surface normals of a 3D object model.
Handle of the 3D object model containing 3D point data.
Normals calculation method. Default: "mls"
Names of generic smoothing parameters. Default: []
Values of generic smoothing parameters. Default: []
Handle of the 3D object model with calculated 3D normals.
Calculate the 3D surface normals of a 3D object model.
Instance represents: Handle of the 3D object model containing 3D point data.
Normals calculation method. Default: "mls"
Names of generic smoothing parameters. Default: []
Values of generic smoothing parameters. Default: []
Handle of the 3D object model with calculated 3D normals.
Smooth the 3D points of a 3D object model.
Handle of the 3D object model containing 3D point data.
Smoothing method. Default: "mls"
Names of generic smoothing parameters. Default: []
Values of generic smoothing parameters. Default: []
Handle of the 3D object model with the smoothed 3D point data.
Smooth the 3D points of a 3D object model.
Instance represents: Handle of the 3D object model containing 3D point data.
Smoothing method. Default: "mls"
Names of generic smoothing parameters. Default: []
Values of generic smoothing parameters. Default: []
Handle of the 3D object model with the smoothed 3D point data.
Create a surface triangulation for a 3D object model.
Handle of the 3D object model containing 3D point data.
Triangulation method. Default: "greedy"
Names of the generic triangulation parameters. Default: []
Values of the generic triangulation parameters. Default: []
Additional information about the triangulation process.
Handle of the 3D object model with the triangulated surface.
Create a surface triangulation for a 3D object model.
Instance represents: Handle of the 3D object model containing 3D point data.
Triangulation method. Default: "greedy"
Names of the generic triangulation parameters. Default: []
Values of the generic triangulation parameters. Default: []
Additional information about the triangulation process.
Handle of the 3D object model with the triangulated surface.
Reconstruct surface from calibrated multi-view stereo images.
Modified instance represents: Handle to the resulting surface.
An image array acquired by the camera setup associated with the stereo model.
Handle of the stereo model.
Refine the position and deformation of a deformable surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the deformable surface model.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the refined model.
Refine the position and deformation of a deformable surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the deformable surface model.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the refined model.
Find the best match of a deformable surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the deformable surface model.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the found instances of the surface model.
Find the best match of a deformable surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the deformable surface model.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the matching result.
Score of the found instances of the surface model.
Add a sample deformation to a deformable surface model.
Handle of the deformable surface model.
Handle of the deformed 3D object model.
Add a sample deformation to a deformable surface model.
Instance represents: Handle of the deformed 3D object model.
Handle of the deformable surface model.
Create the data structure needed to perform deformable surface-based matching.
Instance represents: Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the deformable surface model.
Create the data structure needed to perform deformable surface-based matching.
Instance represents: Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the deformable surface model.
Refine the pose of a surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the surface model.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the surface model.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the surface model.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene.
Instance represents: Handle of the 3D object model containing the scene.
Handle of the surface model.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Create the data structure needed to perform surface-based matching.
Instance represents: Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the surface model.
Create the data structure needed to perform surface-based matching.
Instance represents: Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the surface model.
Simplify a triangulated 3D object model.
Handle of the 3D object model that should be simplified.
Method that should be used for simplification. Default: "preserve_point_coordinates"
Degree of simplification (default: percentage of remaining model points).
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the simplified 3D object model.
Simplify a triangulated 3D object model.
Instance represents: Handle of the 3D object model that should be simplified.
Method that should be used for simplification. Default: "preserve_point_coordinates"
Degree of simplification (default: percentage of remaining model points).
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the simplified 3D object model.
Compute the distances of the points of one 3D object model to another 3D object model.
Instance represents: Handle of the source 3D object model.
Handle of the target 3D object model.
Pose of the source 3D object model in the target 3D object model. Default: []
Maximum distance of interest. Default: 0
Names of the generic input parameters. Default: []
Values of the generic input parameters. Default: []
Compute the distances of the points of one 3D object model to another 3D object model.
Instance represents: Handle of the source 3D object model.
Handle of the target 3D object model.
Pose of the source 3D object model in the target 3D object model. Default: []
Maximum distance of interest. Default: 0
Names of the generic input parameters. Default: []
Values of the generic input parameters. Default: []
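The core of the distance computation above is a nearest-neighbor query from each source point into the target model. A brute-force sketch of the point-to-point case (the operator also supports point-to-triangle distances, a pose transformation of the source, and a MaxDistance cutoff, none of which are shown here):

```python
from math import dist

def cloud_distances(source, target):
    """For each source point, the distance to its nearest target point.

    O(n*m) illustration of a point-to-point distance computation between
    two 3D point clouds; real implementations use spatial indexing.
    """
    return [min(dist(p, q) for q in target) for p in source]
```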
Combine several 3D object models into a new 3D object model.
Handle of input 3D object models.
Method used for the union. Default: "points_surface"
Handle of the resulting 3D object model.
Combine several 3D object models into a new 3D object model.
Instance represents: Handle of input 3D object models.
Method used for the union. Default: "points_surface"
Handle of the resulting 3D object model.
Set attributes of a 3D object model.
Instance represents: Handle of the 3D object model.
Name of the attributes.
Defines where extended attributes are attached to. Default: []
Attribute values.
Set attributes of a 3D object model.
Instance represents: Handle of the 3D object model.
Name of the attributes.
Defines where extended attributes are attached to. Default: []
Attribute values.
Set attributes of a 3D object model.
Instance represents: Handle of the input 3D object model.
Name of the attributes.
Defines where extended attributes are attached to. Default: []
Attribute values.
Handle of the resulting 3D object model.
Set attributes of a 3D object model.
Instance represents: Handle of the input 3D object model.
Name of the attributes.
Defines where extended attributes are attached to. Default: []
Attribute values.
Handle of the resulting 3D object model.
Create an empty 3D object model.
Modified instance represents: Handle of the new 3D object model.
Sample a 3D object model.
Handle of the 3D object model to be sampled.
Selects between the different subsampling methods. Default: "fast"
Sampling distance. Default: 0.05
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Handle of the 3D object model that contains the sampled points.
Sample a 3D object model.
Instance represents: Handle of the 3D object model to be sampled.
Selects between the different subsampling methods. Default: "fast"
Sampling distance. Default: 0.05
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Handle of the 3D object model that contains the sampled points.
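Subsampling a point cloud by a sampling distance can be pictured as keeping one representative point per cubic cell of that edge length. A stdlib-only sketch of that idea (the operator offers several methods with additional generic parameters; this voxel-grid variant is only an illustration):

```python
def voxel_downsample(points, size):
    """Keep the first point seen in each cubic cell of edge length `size`.

    Sketch of distance-based subsampling: the result has roughly one
    point per `size`-sized cell, enforcing a minimum point spacing.
    """
    seen = {}
    for p in points:
        cell = tuple(int(c // size) for c in p)
        seen.setdefault(cell, p)  # first point wins per cell
    return list(seen.values())
```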
Improve the relative transformations between 3D object models based on their overlaps.
Handles of several 3D object models.
Approximate relative transformations between the 3D object models.
Type of interpretation for the transformations. Default: "global"
Target indices of the transformations if From specifies the source indices, otherwise the parameter must be empty. Default: []
Names of the generic parameters that can be adjusted for the global 3D object model registration. Default: []
Values of the generic parameters that can be adjusted for the global 3D object model registration. Default: []
Number of overlapping neighbors for each 3D object model.
Resulting transformations.
Improve the relative transformations between 3D object models based on their overlaps.
Instance represents: Handles of several 3D object models.
Approximate relative transformations between the 3D object models.
Type of interpretation for the transformations. Default: "global"
Target indices of the transformations if From specifies the source indices, otherwise the parameter must be empty. Default: []
Names of the generic parameters that can be adjusted for the global 3D object model registration. Default: []
Values of the generic parameters that can be adjusted for the global 3D object model registration. Default: []
Number of overlapping neighbors for each 3D object model.
Resulting transformations.
Search for a transformation between two 3D object models.
Instance represents: Handle of the first 3D object model.
Handle of the second 3D object model.
Method for the registration. Default: "matching"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Overlapping of the two 3D object models.
Pose to transform ObjectModel3D1 in the reference frame of ObjectModel3D2.
Search for a transformation between two 3D object models.
Instance represents: Handle of the first 3D object model.
Handle of the second 3D object model.
Method for the registration. Default: "matching"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Overlapping of the two 3D object models.
Pose to transform ObjectModel3D1 in the reference frame of ObjectModel3D2.
Create a 3D object model that represents a point cloud from a set of 3D points.
Modified instance represents: Handle of the resulting 3D object model.
The x-coordinates of the points in the 3D point cloud.
The y-coordinates of the points in the 3D point cloud.
The z-coordinates of the points in the 3D point cloud.
Create a 3D object model that represents a point cloud from a set of 3D points.
Modified instance represents: Handle of the resulting 3D object model.
The x-coordinates of the points in the 3D point cloud.
The y-coordinates of the points in the 3D point cloud.
The z-coordinates of the points in the 3D point cloud.
Create a 3D object model that represents a box.
The pose that describes the position and orientation of the box. The pose has its origin in the center of the box.
The length of the box along the x-axis.
The length of the box along the y-axis.
The length of the box along the z-axis.
Handle of the resulting 3D object model.
Create a 3D object model that represents a box.
Modified instance represents: Handle of the resulting 3D object model.
The pose that describes the position and orientation of the box. The pose has its origin in the center of the box.
The length of the box along the x-axis.
The length of the box along the y-axis.
The length of the box along the z-axis.
Create a 3D object model that represents a plane.
Modified instance represents: Handle of the resulting 3D object model.
The center and the rotation of the plane.
x coordinates specifying the extent of the plane.
y coordinates specifying the extent of the plane.
Create a 3D object model that represents a plane.
Modified instance represents: Handle of the resulting 3D object model.
The center and the rotation of the plane.
x coordinates specifying the extent of the plane.
y coordinates specifying the extent of the plane.
Create a 3D object model that represents a sphere from x, y, z coordinates.
The x-coordinate of the center point of the sphere.
The y-coordinate of the center point of the sphere.
The z-coordinate of the center point of the sphere.
The radius of the sphere.
Handle of the resulting 3D object model.
Create a 3D object model that represents a sphere from x, y, z coordinates.
Modified instance represents: Handle of the resulting 3D object model.
The x-coordinate of the center point of the sphere.
The y-coordinate of the center point of the sphere.
The z-coordinate of the center point of the sphere.
The radius of the sphere.
Create a 3D object model that represents a sphere.
The pose that describes the position of the sphere.
The radius of the sphere.
Handle of the resulting 3D object model.
Create a 3D object model that represents a sphere.
Modified instance represents: Handle of the resulting 3D object model.
The pose that describes the position of the sphere.
The radius of the sphere.
Create a 3D object model that represents a cylinder.
The pose that describes the position and orientation of the cylinder.
The radius of the cylinder.
Lowest z-coordinate of the cylinder in the direction of the rotation axis.
Highest z-coordinate of the cylinder in the direction of the rotation axis.
Handle of the resulting 3D object model.
Create a 3D object model that represents a cylinder.
Modified instance represents: Handle of the resulting 3D object model.
The pose that describes the position and orientation of the cylinder.
The radius of the cylinder.
Lowest z-coordinate of the cylinder in the direction of the rotation axis.
Highest z-coordinate of the cylinder in the direction of the rotation axis.
Calculate the smallest bounding box around the points of a 3D object model.
Handle of the 3D object model.
The method that is used to estimate the smallest box. Default: "oriented"
The length of the longest side of the box.
The length of the second longest side of the box.
The length of the third longest side of the box.
The pose that describes the position and orientation of the box that is generated. The pose has its origin in the center of the box and is oriented such that the x-axis is aligned with the longest side of the box.
Calculate the smallest bounding box around the points of a 3D object model.
Instance represents: Handle of the 3D object model.
The method that is used to estimate the smallest box. Default: "oriented"
The length of the longest side of the box.
The length of the second longest side of the box.
The length of the third longest side of the box.
The pose that describes the position and orientation of the box that is generated. The pose has its origin in the center of the box and is oriented such that the x-axis is aligned with the longest side of the box.
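For intuition, the axis-aligned special case of the bounding box is easy to compute directly: per-axis minima and maxima give the side lengths and center. (The "oriented" method additionally searches over rotations, which this sketch does not attempt; side lengths are returned longest first, matching the parameter order above.)

```python
def axis_aligned_bbox(points):
    """Smallest axis-aligned box around a list of 3D points (sketch).

    Returns (lengths, center): side lengths sorted longest to shortest,
    and the box center.  Illustrates only the axis-aligned case, not
    the oriented-box search performed by the operator.
    """
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    lengths = sorted((maxs[i] - mins[i] for i in range(3)), reverse=True)
    center = [(mins[i] + maxs[i]) / 2 for i in range(3)]
    return lengths, center
```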
Calculate the smallest sphere around the points of a 3D object model.
Handle of the 3D object model.
The estimated radius of the sphere.
x-, y-, and z-coordinates describing the center point of the sphere.
Calculate the smallest sphere around the points of a 3D object model.
Instance represents: Handle of the 3D object model.
The estimated radius of the sphere.
x-, y-, and z-coordinates describing the center point of the sphere.
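A common way to approximate the smallest enclosing sphere is Ritter's algorithm: start from a far point pair, then grow the sphere to cover any outlier. A sketch under that assumption (the operator computes the minimal sphere; Ritter's result can be a few percent larger):

```python
from math import dist

def bounding_sphere(points):
    """Ritter's approximation of the smallest enclosing sphere.

    Returns (radius, center).  A sketch of the concept; not the exact
    minimal sphere the operator estimates.
    """
    # Seed with the two points farthest apart along a greedy probe.
    a = points[0]
    b = max(points, key=lambda p: dist(a, p))
    c = max(points, key=lambda p: dist(b, p))
    center = [(b[i] + c[i]) / 2 for i in range(3)]
    radius = dist(b, c) / 2
    # Grow the sphere to cover any point still outside it.
    for p in points:
        d = dist(center, p)
        if d > radius:
            radius = (radius + d) / 2
            shift = (d - radius) / d
            center = [center[i] + (p[i] - center[i]) * shift
                      for i in range(3)]
    return radius, center
```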
Intersect a 3D object model with a plane.
Handle of the 3D object model.
Pose of the plane. Default: [0,0,0,0,0,0,0]
Handle of the 3D object model that describes the intersection as a set of lines.
Intersect a 3D object model with a plane.
Instance represents: Handle of the 3D object model.
Pose of the plane. Default: [0,0,0,0,0,0,0]
Handle of the 3D object model that describes the intersection as a set of lines.
Calculate the convex hull of a 3D object model.
Handle of the 3D object model.
Handle of the 3D object model that describes the convex hull.
Calculate the convex hull of a 3D object model.
Instance represents: Handle of the 3D object model.
Handle of the 3D object model that describes the convex hull.
Select 3D object models from an array of 3D object models according to global features.
Handles of the available 3D object models to select.
List of features a test is performed on. Default: "has_triangles"
Logical operation to combine the features given in Feature. Default: "and"
Minimum value for the given feature. Default: 1
Maximum value for the given feature. Default: 1
A subset of ObjectModel3D fulfilling the given conditions.
Select 3D object models from an array of 3D object models according to global features.
Instance represents: Handles of the available 3D object models to select.
List of features a test is performed on. Default: "has_triangles"
Logical operation to combine the features given in Feature. Default: "and"
Minimum value for the given feature. Default: 1
Maximum value for the given feature. Default: 1
A subset of ObjectModel3D fulfilling the given conditions.
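The selection logic amounts to range tests on per-model feature values combined with "and"/"or"; a small sketch (the dictionary-based model representation is an assumption made for illustration):

```python
def select_models(models, features, operation, min_vals, max_vals):
    """Keep the models whose named features all ('and') or any ('or')
    lie inside the given [min, max] ranges."""
    selected = []
    for m in models:
        tests = [min_v <= m[f] <= max_v
                 for f, min_v, max_v in zip(features, min_vals, max_vals)]
        if all(tests) if operation == "and" else any(tests):
            selected.append(m)
    return selected

models = [{"num_points": 100, "has_triangles": 1},
          {"num_points": 5,   "has_triangles": 0}]
kept = select_models(models, ["has_triangles"], "and", [1], [1])
```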
Calculate the area of all faces of a 3D object model.
Handle of the 3D object model.
Calculated area.
Calculate the area of all faces of a 3D object model.
Instance represents: Handle of the 3D object model.
Calculated area.
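For a triangulated model, the total face area is the sum of the individual triangle areas, each half the magnitude of the cross product of two edge vectors; a self-contained sketch:

```python
import math

def mesh_area(points, triangles):
    """Sum the areas of all triangular faces of a mesh."""
    total = 0.0
    for i, j, k in triangles:
        ax, ay, az = (points[j][n] - points[i][n] for n in range(3))
        bx, by, bz = (points[k][n] - points[i][n] for n in range(3))
        # Cross product of the two edge vectors; half its length
        # is the triangle area.
        cx = ay * bz - az * by
        cy = az * bx - ax * bz
        cz = ax * by - ay * bx
        total += 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
    return total

# Unit square in the z=0 plane, split into two triangles:
area = mesh_area([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)],
                 [(0, 1, 2), (1, 3, 2)])
```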
Calculate the maximal diameter of a 3D object model.
Handle of the 3D object model.
Calculated diameter.
Calculate the maximal diameter of a 3D object model.
Instance represents: Handle of the 3D object model.
Calculated diameter.
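The maximal diameter is simply the largest pairwise point distance; a brute-force O(n²) sketch (HALCON presumably uses something faster, but the quantity is the same):

```python
import math
from itertools import combinations

def max_diameter(points):
    """Maximal distance between any two points of the model."""
    return max(math.dist(p, q) for p, q in combinations(points, 2))

d = max_diameter([(0, 0, 0), (3, 4, 0), (1, 1, 1)])
```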
Calculate the mean or the central moment of second order for a 3D object model.
Handle of the 3D object model.
Moment to calculate. Default: "mean_points"
Calculated moment.
Calculate the mean or the central moment of second order for a 3D object model.
Instance represents: Handle of the 3D object model.
Moment to calculate. Default: "mean_points"
Calculated moment.
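As a sketch of the two quantities involved, the mean is the centroid of the points, and the second-order central moment is the averaged outer product of the mean-centered coordinates (the "central_moment_2_points" name below is only an assumed parameter value for illustration):

```python
def moments(points, moment="mean_points"):
    """Centroid, or 3x3 second-order central moment matrix."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    if moment == "mean_points":
        return mean
    # Average outer product of mean-centered coordinates.
    m = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[i] - mean[i] for i in range(3)]
        for r in range(3):
            for c in range(3):
                m[r][c] += d[r] * d[c] / n
    return m

pts = [(0, 0, 0), (2, 0, 0)]
mean = moments(pts)
m2 = moments(pts, "central_moment_2_points")
```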
Calculate the volume of a 3D object model.
Handle of the 3D object model.
Pose of the plane. Default: [0,0,0,0,0,0,0]
Method to combine volumes lying above and below the reference plane. Default: "signed"
Decides whether the orientation of a face should affect the resulting sign of the underlying volume. Default: "true"
Absolute value of the calculated volume.
Calculate the volume of a 3D object model.
Instance represents: Handle of the 3D object model.
Pose of the plane. Default: [0,0,0,0,0,0,0]
Method to combine volumes lying above and below the reference plane. Default: "signed"
Decides whether the orientation of a face should affect the resulting sign of the underlying volume. Default: "true"
Absolute value of the calculated volume.
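For a closed, consistently oriented triangle mesh, the "signed" mode with the default reference plane through the origin can be illustrated by the divergence-theorem sum of signed tetrahedra spanned by each face and the origin (a conceptual sketch, not the HALCON implementation):

```python
def signed_mesh_volume(points, triangles):
    """Signed volume of a triangle mesh as the sum of signed
    tetrahedra between each face and the origin; the sign of each
    contribution follows the face orientation."""
    total = 0.0
    for i, j, k in triangles:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = \
            points[i], points[j], points[k]
        # Scalar triple product / 6 = signed tetrahedron volume.
        total += (x1 * (y2 * z3 - y3 * z2)
                  - x2 * (y1 * z3 - y3 * z1)
                  + x3 * (y1 * z2 - y2 * z1)) / 6.0
    return total

# Unit corner tetrahedron, volume 1/6:
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
v = signed_mesh_volume(pts, tris)
```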
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Region in the image plane.
Handle of the 3D object model.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Handle of the reduced 3D object model.
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Instance represents: Handle of the 3D object model.
Region in the image plane.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Handle of the reduced 3D object model.
Determine the connected components of the 3D object model.
Handle of the 3D object model.
Attribute used to calculate the connected components. Default: "distance_3d"
Maximum value for the distance between two connected components. Default: 1.0
Handle of the 3D object models that represent the connected components.
Determine the connected components of the 3D object model.
Instance represents: Handle of the 3D object model.
Attribute used to calculate the connected components. Default: "distance_3d"
Maximum value for the distance between two connected components. Default: 1.0
Handle of the 3D object models that represent the connected components.
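The "distance_3d" criterion can be sketched as single-linkage clustering with a union-find structure: two points belong to the same component when a chain of points, each at most the threshold apart, connects them:

```python
from math import dist

def connected_components(points, max_distance):
    """Group point indices into components; two points are connected
    when their distance is at most max_distance (union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= max_distance:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

comps = connected_components([(0, 0, 0), (0.5, 0, 0), (10, 0, 0)], 1.0)
```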
Apply a threshold to an attribute of 3D object models.
Handle of the 3D object models.
Attributes the threshold is applied to. Default: "point_coord_z"
Minimum value for the attributes specified by Attrib. Default: 0.5
Maximum value for the attributes specified by Attrib. Default: 1.0
Handle of the reduced 3D object models.
Apply a threshold to an attribute of 3D object models.
Instance represents: Handle of the 3D object models.
Attributes the threshold is applied to. Default: "point_coord_z"
Minimum value for the attributes specified by Attrib. Default: 0.5
Maximum value for the attributes specified by Attrib. Default: 1.0
Handle of the reduced 3D object models.
Get the depth or the index of a displayed 3D object model.
Window handle.
Row coordinates.
Column coordinates.
Information. Default: "depth"
Indices or the depth of the objects at (Row,Column).
Get the depth or the index of a displayed 3D object model.
Window handle.
Row coordinates.
Column coordinates.
Information. Default: "depth"
Indices or the depth of the objects at (Row,Column).
Render 3D object models to get an image.
Handles of the 3D object models.
Camera parameters of the scene.
3D poses of the objects.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Rendered scene.
Render 3D object models to get an image.
Instance represents: Handles of the 3D object models.
Camera parameters of the scene.
3D poses of the objects.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Rendered scene.
Display 3D object models.
Window handle.
Handles of the 3D object models.
Camera parameters of the scene. Default: []
3D poses of the objects. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Display 3D object models.
Instance represents: Handles of the 3D object models.
Window handle.
Camera parameters of the scene. Default: []
3D poses of the objects. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Copy a 3D object model.
Instance represents: Handle of the input 3D object model.
Attributes to be copied. Default: "all"
Handle of the copied 3D object model.
Copy a 3D object model.
Instance represents: Handle of the input 3D object model.
Attributes to be copied. Default: "all"
Handle of the copied 3D object model.
Prepare a 3D object model for a certain operation.
Handle of the 3D object model.
Purpose of the 3D object model. Default: "shape_based_matching_3d"
Specify if already existing data should be overwritten. Default: "true"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Prepare a 3D object model for a certain operation.
Instance represents: Handle of the 3D object model.
Purpose of the 3D object model. Default: "shape_based_matching_3d"
Specify if already existing data should be overwritten. Default: "true"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Transform 3D points from a 3D object model to images.
Instance represents: Handle of the 3D object model.
Image with the Y-Coordinates of the 3D points.
Image with the Z-Coordinates of the 3D points.
Type of the conversion. Default: "cartesian"
Camera parameters.
Pose of the 3D object model.
Image with the X-Coordinates of the 3D points.
Transform 3D points from images to a 3D object model.
Modified instance represents: Handle of the 3D object model.
Image with the X-Coordinates and the ROI of the 3D points.
Image with the Y-Coordinates of the 3D points.
Image with the Z-Coordinates of the 3D points.
Return attributes of 3D object models.
Handle of the 3D object model.
Names of the generic attributes that are queried for the 3D object model. Default: "num_points"
Values of the generic parameters.
Return attributes of 3D object models.
Instance represents: Handle of the 3D object model.
Names of the generic attributes that are queried for the 3D object model. Default: "num_points"
Values of the generic parameters.
Project a 3D object model into image coordinates.
Instance represents: Handle of the 3D object model.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Projected model contours.
Project a 3D object model into image coordinates.
Instance represents: Handle of the 3D object model.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Projected model contours.
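The core of any such projection is the pinhole camera model; a minimal sketch of the ideal (distortion-free) case, with a point already in camera coordinates (HALCON's internal camera parameters additionally model lens distortion, which is omitted here):

```python
def project_point(x, y, z, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates onto the image plane
    of an ideal pinhole camera with focal lengths (fx, fy) and
    principal point (cx, cy), both in pixels."""
    col = fx * x / z + cx
    row = fy * y / z + cy
    return row, col

row, col = project_point(0.5, 0.25, 2.0, 1000.0, 1000.0, 320.0, 240.0)
```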
Apply a rigid 3D transformation to 3D object models.
Handles of the 3D object models.
Poses.
Handles of the transformed 3D object models.
Apply a rigid 3D transformation to 3D object models.
Instance represents: Handles of the 3D object models.
Poses.
Handles of the transformed 3D object models.
Apply an arbitrary projective 3D transformation to 3D object models.
Handles of the 3D object models.
Homogeneous projective transformation matrix.
Handles of the transformed 3D object models.
Apply an arbitrary projective 3D transformation to 3D object models.
Instance represents: Handles of the 3D object models.
Homogeneous projective transformation matrix.
Handles of the transformed 3D object models.
Apply an arbitrary affine 3D transformation to 3D object models.
Handles of the 3D object models.
Transformation matrices.
Handles of the transformed 3D object models.
Apply an arbitrary affine 3D transformation to 3D object models.
Instance represents: Handles of the 3D object models.
Transformation matrices.
Handles of the transformed 3D object models.
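Applied to the model's points, an affine 3D transformation is a matrix-vector product plus a translation; a sketch using a 3x4 matrix [R | t]:

```python
def affine_trans_points(matrix, points):
    """Apply a 3x4 affine transformation matrix [R | t] to 3D points."""
    out = []
    for x, y, z in points:
        out.append(tuple(m[0] * x + m[1] * y + m[2] * z + m[3]
                         for m in matrix))
    return out

# Pure translation by (1, 2, 3):
m = [(1, 0, 0, 1),
     (0, 1, 0, 2),
     (0, 0, 1, 3)]
moved = affine_trans_points(m, [(0, 0, 0)])
```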
Free the memory of a 3D object model.
Handle of the 3D object model.
Free the memory of a 3D object model.
Instance represents: Handle of the 3D object model.
Serialize a 3D object model.
Instance represents: Handle of the 3D object model.
Handle of the serialized item.
Deserialize a serialized 3D object model.
Modified instance represents: Handle of the 3D object model.
Handle of the serialized item.
Write a 3D object model to a file.
Instance represents: Handle of the 3D object model.
Type of the file that is written. Default: "om3"
Name of the file that is written.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Write a 3D object model to a file.
Instance represents: Handle of the 3D object model.
Type of the file that is written. Default: "om3"
Name of the file that is written.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Read a 3D object model from a file.
Modified instance represents: Handle of the 3D object model.
Filename of the file to be read. Default: "mvtec_bunny_normals"
Scale of the data in the file. Default: "m"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Status information.
Read a 3D object model from a file.
Modified instance represents: Handle of the 3D object model.
Filename of the file to be read. Default: "mvtec_bunny_normals"
Scale of the data in the file. Default: "m"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Status information.
Compute the calibrated scene flow between two stereo image pairs.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Handle of the 3D object model.
Compute the calibrated scene flow between two stereo image pairs.
Modified instance represents: Handle of the 3D object model.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Find edges in a 3D object model.
Instance represents: Handle of the 3D object model whose edges should be computed.
Edge threshold.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D object model containing the edges.
Find edges in a 3D object model.
Instance represents: Handle of the 3D object model whose edges should be computed.
Edge threshold.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D object model containing the edges.
Find the best matches of a surface model in a 3D scene and images.
Instance represents: Handle of the 3D object model containing the scene.
Images of the scene.
Handle of the surface model.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene and images.
Instance represents: Handle of the 3D object model containing the scene.
Images of the scene.
Handle of the surface model.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene and in images.
Instance represents: Handle of the 3D object model containing the scene.
Images of the scene.
Handle of the surface model.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene and in images.
Instance represents: Handle of the 3D object model containing the scene.
Images of the scene.
Handle of the surface model.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Fuse 3D object models into a surface.
Handles of the 3D object models.
The two opposite corners of the bounding box.
Used resolution within the bounding box. Default: 1.0
Expected distance of noise from the surface. Default: 1.0
Minimum thickness of the object in direction of the surface normal. Default: 1.0
Weight factor for data fidelity. Default: 1.0
Direction of normals of the input models. Default: "inwards"
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Handle of the fused 3D object model.
Fuse 3D object models into a surface.
Instance represents: Handles of the 3D object models.
The two opposite corners of the bounding box.
Used resolution within the bounding box. Default: 1.0
Expected distance of noise from the surface. Default: 1.0
Minimum thickness of the object in direction of the surface normal. Default: 1.0
Weight factor for data fidelity. Default: 1.0
Direction of normals of the input models. Default: "inwards"
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Handle of the fused 3D object model.
Represents an instance of an OCR box classifier.
Read an OCR classifier from a file.
Modified instance represents: ID of the read OCR classifier.
Name of the OCR classifier file. Default: "testnet"
Create a new OCR-classifier.
Modified instance represents: ID of the created OCR classifier.
Width of the input layer of the network. Default: 8
Height of the input layer of the network. Default: 10
Interpolation mode concerning scaling of characters. Default: 1
Additional features. Default: "default"
All characters of a set. Default: ["a","b","c"]
Create a new OCR-classifier.
Modified instance represents: ID of the created OCR classifier.
Width of the input layer of the network. Default: 8
Height of the input layer of the network. Default: 10
Interpolation mode concerning scaling of characters. Default: 1
Additional features. Default: "default"
All characters of a set. Default: ["a","b","c"]
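The role of the width/height parameters above can be sketched as follows: each segmented character is zoomed to a fixed grid (8x10 by default), and the resulting gray values feed the classifier. A nearest-neighbor version of that resampling (illustrative only, not HALCON's actual interpolation):

```python
def zoom_to_grid(bitmap, width=8, height=10):
    """Resample a character bitmap (list of rows) to a fixed
    width x height grid by nearest-neighbor sampling; the flattened
    grid serves as the feature vector."""
    rows, cols = len(bitmap), len(bitmap[0])
    return [bitmap[r * rows // height][c * cols // width]
            for r in range(height) for c in range(width)]

vec = zoom_to_grid([[1] * 16 for _ in range(20)])
```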
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Serialize an OCR classifier.
Instance represents: ID of the OCR classifier.
Handle of the serialized item.
Deserialize a serialized OCR classifier.
Modified instance represents: ID of the OCR classifier.
Handle of the serialized item.
Write an OCR classifier to a file.
Instance represents: ID of the OCR classifier.
Name of the file for the OCR classifier (without extension). Default: "my_ocr"
Read an OCR classifier from a file.
Modified instance represents: ID of the read OCR classifier.
Name of the OCR classifier file. Default: "testnet"
Classify one character.
Instance represents: ID of the OCR classifier.
Character to be recognized.
Gray values of the characters.
Confidence values of the characters.
Classes (names) of the characters.
Classify characters.
Instance represents: ID of the OCR classifier.
Characters to be recognized.
Gray values for the characters.
Confidence values of the characters.
Class (name) of the characters.
Classify characters.
Instance represents: ID of the OCR classifier.
Characters to be recognized.
Gray values for the characters.
Confidence values of the characters.
Class (name) of the characters.
Get information about an OCR classifier.
Instance represents: ID of the OCR classifier.
Width of the scaled characters.
Height of the scaled characters.
Interpolation mode for scaling the characters.
Width of the largest trained character.
Height of the largest trained character.
Used features.
All characters of the set.
Create a new OCR-classifier.
Modified instance represents: ID of the created OCR classifier.
Width of the input layer of the network. Default: 8
Height of the input layer of the network. Default: 10
Interpolation mode concerning scaling of characters. Default: 1
Additional features. Default: "default"
All characters of a set. Default: ["a","b","c"]
Create a new OCR-classifier.
Modified instance represents: ID of the created OCR classifier.
Width of the input layer of the network. Default: 8
Height of the input layer of the network. Default: 10
Interpolation mode concerning scaling of characters. Default: 1
Additional features. Default: "default"
All characters of a set. Default: ["a","b","c"]
Train an OCR classifier by the input of regions.
Instance represents: ID of the desired OCR-classifier.
Characters to be trained.
Gray values for the characters.
Class (name) of the characters. Default: "a"
Average confidence during a re-classification of the trained characters.
Train an OCR classifier by the input of regions.
Instance represents: ID of the desired OCR-classifier.
Characters to be trained.
Gray values for the characters.
Class (name) of the characters. Default: "a"
Average confidence during a re-classification of the trained characters.
Train an OCR classifier with the help of a training file.
Instance represents: ID of the desired OCR-network.
Names of the training files. Default: "train_ocr"
Average confidence during a re-classification of the trained characters.
Train an OCR classifier with the help of a training file.
Instance represents: ID of the desired OCR-network.
Names of the training files. Default: "train_ocr"
Average confidence during a re-classification of the trained characters.
Define a new conversion table for the characters.
Instance represents: ID of the OCR-network to be changed.
New assignment of characters. Default: ["a","b","c"]
Deallocation of the memory of an OCR classifier.
Instance represents: ID of the OCR classifier to be deleted.
Test an OCR classifier.
Instance represents: ID of the desired OCR-classifier.
Characters to be tested.
Gray values for the characters.
Class (name) of the characters. Default: "a"
Confidence for the character to belong to the class.
Test an OCR classifier.
Instance represents: ID of the desired OCR-classifier.
Characters to be tested.
Gray values for the characters.
Class (name) of the characters. Default: "a"
Confidence for the character to belong to the class.
Access the features which correspond to a character.
Instance represents: ID of the desired OCR-classifier.
Characters to be trained.
Feature vector.
Represents an instance of a CNN OCR classifier.
Read a CNN-based OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name. Default: "Universal_Rej.occ"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Clear a CNN-based OCR classifier.
Handle of the OCR classifier.
Clear a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Deserialize a serialized CNN-based OCR classifier.
Modified instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Classify multiple characters with a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Confidence of the class of the characters.
Result of classifying the characters with the CNN.
Classify multiple characters with a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Confidence of the class of the characters.
Result of classifying the characters with the CNN.
Classify a single character with a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Character to be recognized.
Gray values of the character.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the CNN.
Classify a single character with a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Character to be recognized.
Gray values of the character.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the CNN.
Classify a related group of characters with a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the CNN.
Classify a related group of characters with a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the CNN.
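The interplay of the parameters above (per-character candidate classes, an allowed word structure, and a cap on corrections) can be sketched with a brute-force search; this is a simplified illustration of the idea, not HALCON's correction algorithm:

```python
import re
from itertools import product

def correct_word(candidates, expression, num_correction=2):
    """Pick the highest-total-confidence character combination that
    matches the allowed word structure, changing at most
    num_correction characters from the top per-character choice.
    candidates: per character, a list of (class, confidence) pairs
    sorted by descending confidence."""
    top = "".join(c[0][0] for c in candidates)
    best_word, best_score = None, -1.0
    for combo in product(*candidates):
        word = "".join(ch for ch, _ in combo)
        changed = sum(a != b for a, b in zip(word, top))
        if changed > num_correction or not re.fullmatch(expression, word):
            continue
        score = sum(conf for _, conf in combo)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# "12" violates the letter-digit structure; correcting one
# character yields "l2":
word = correct_word([[("1", 0.9), ("l", 0.4)], [("2", 0.95)]],
                    r"[a-z][0-9]")
```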
Return the parameters of a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
A tuple of generic parameter names. Default: "characters"
A tuple of generic parameter values.
Return the parameters of a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
A tuple of generic parameter names. Default: "characters"
A tuple of generic parameter values.
Get the names of the parameters that can be used in get_params_ocr_class_cnn for a given CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the generic parameters.
Read a CNN-based OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name. Default: "Universal_Rej.occ"
Serialize a CNN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Represents an instance of a k-NearestNeighbor OCR classifier.
Read an OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name.
Create an OCR classifier using a k-Nearest Neighbor (k-NN) classifier.
Modified instance represents: Handle of the k-NN classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
This parameter is not yet supported. Default: []
This parameter is not yet supported. Default: []
Create an OCR classifier using a k-Nearest Neighbor (k-NN) classifier.
Modified instance represents: Handle of the k-NN classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
This parameter is not yet supported. Default: []
This parameter is not yet supported. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Classify a related group of characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the k-NN.
Classify a related group of characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the k-NN.
Deserialize a serialized k-NN-based OCR classifier.
Modified instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Serialize a k-NN-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Read an OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name.
Write a k-NN classifier for an OCR task to a file.
Instance represents: Handle of the k-NN classifier for an OCR task.
File name.
Clear an OCR classifier.
Instance represents: Handle of the OCR classifier.
Create an OCR classifier using a k-Nearest Neighbor (k-NN) classifier.
Modified instance represents: Handle of the k-NN classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
This parameter is not yet supported. Default: []
This parameter is not yet supported. Default: []
Create an OCR classifier using a k-Nearest Neighbor (k-NN) classifier.
Modified instance represents: Handle of the k-NN classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
This parameter is not yet supported. Default: []
This parameter is not yet supported. Default: []
Train a k-NN classifier for an OCR task.
Instance represents: Handle of the k-NN classifier.
Names of the training files. Default: "ocr.trf"
Names of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Values of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Train a k-NN classifier for an OCR task.
Instance represents: Handle of the k-NN classifier.
Names of the training files. Default: "ocr.trf"
Names of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Values of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Compute the features of a character.
Instance represents: Handle of the k-NN classifier.
Input character.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Return the parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
Type of preprocessing used to transform the feature vectors.
Number of different trees used during the classification.
Return the parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
Type of preprocessing used to transform the feature vectors.
Number of different trees used during the classification.
Classify multiple characters with a k-NN classifier.
Instance represents: Handle of the k-NN classifier.
Characters to be recognized.
Gray values of the characters.
Confidence of the class of the characters.
Result of classifying the characters with the k-NN.
Classify multiple characters with a k-NN classifier.
Instance represents: Handle of the k-NN classifier.
Characters to be recognized.
Gray values of the characters.
Confidence of the class of the characters.
Result of classifying the characters with the k-NN.
Classify a single character with an OCR classifier.
Instance represents: Handle of the k-NN classifier.
Character to be recognized.
Gray values of the character.
Maximum number of classes to determine. Default: 1
Number of neighbors to consider. Default: 1
Confidence(s) of the class(es) of the character.
Results of classifying the character with the k-NN.
Classify a single character with an OCR classifier.
Instance represents: Handle of the k-NN classifier.
Character to be recognized.
Gray values of the character.
Maximum number of classes to determine. Default: 1
Number of neighbors to consider. Default: 1
Confidence(s) of the class(es) of the character.
Results of classifying the character with the k-NN.
Select an optimal combination of features to classify OCR data.
Modified instance represents: Trained OCR-k-NN classifier.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data.
Modified instance represents: Trained OCR-k-NN classifier.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Represents an instance of an MLP OCR classifier.
Read an OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name.
Create an OCR classifier using a multilayer perceptron.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
Number of hidden units of the MLP. Default: 80
Type of preprocessing used to transform the feature vectors. Default: "none"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
Create an OCR classifier using a multilayer perceptron.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
Number of hidden units of the MLP. Default: 80
Type of preprocessing used to transform the feature vectors. Default: "none"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
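The constructor parameters above map one-to-one onto HALCON's create_ocr_class_mlp operator. A minimal sketch using the procedural HOperatorSet API with the documented default values (untested here; it assumes the halcondotnet assembly is referenced):

```csharp
using HalconDotNet;

// Sketch: create an MLP-based OCR classifier for digits.
// Parameter order follows create_ocr_class_mlp; all values are the
// documented defaults, with Characters spelled out explicitly.
HTuple ocrHandle;
HOperatorSet.CreateOcrClassMlp(
    8,            // WidthCharacter: zoomed character width
    10,           // HeightCharacter: zoomed character height
    "constant",   // Interpolation mode for zooming
    "default",    // Features used for classification
    new HTuple(new string[] { "0","1","2","3","4","5","6","7","8","9" }),
    80,           // NumHidden: hidden units of the MLP
    "none",       // Preprocessing of the feature vectors
    10,           // NumComponents (ignored for "none")
    42,           // RandSeed for weight initialization
    out ocrHandle);
```

The HOCRMlp constructor overloads documented here wrap the same operator call, so the argument order and defaults carry over.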
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Select an optimal combination of features to classify OCR data from a (protected) training file.
Modified instance represents: Trained OCR-MLP classifier.
Names of the training files. Default: ""
Passwords for protected training files.
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data from a (protected) training file.
Modified instance represents: Trained OCR-MLP classifier.
Names of the training files. Default: ""
Passwords for protected training files.
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data.
Modified instance represents: Trained OCR-MLP classifier.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data.
Modified instance represents: Trained OCR-MLP classifier.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Clear an OCR classifier.
Handle of the OCR classifier.
Clear an OCR classifier.
Instance represents: Handle of the OCR classifier.
Deserialize a serialized MLP-based OCR classifier.
Modified instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Serialize an MLP-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Read an OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name.
Write an OCR classifier to a file.
Instance represents: Handle of the OCR classifier.
File name.
Compute the features of a character.
Instance represents: Handle of the OCR classifier.
Input character.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Classify a related group of characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the MLP.
Classify a related group of characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the MLP.
Classify multiple characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Confidence of the class of the characters.
Result of classifying the characters with the MLP.
Classify multiple characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Confidence of the class of the characters.
Result of classifying the characters with the MLP.
Classify a single character with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Character to be recognized.
Gray values of the character.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the MLP.
Classify a single character with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Character to be recognized.
Gray values of the character.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the MLP.
Train an OCR classifier with data from a (protected) training file.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Passwords for protected training files.
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Mean error of the MLP on the training data.
Train an OCR classifier with data from a (protected) training file.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Passwords for protected training files.
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Mean error of the MLP on the training data.
Train an OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Mean error of the MLP on the training data.
Train an OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Mean error of the MLP on the training data.
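The training parameters above correspond to HALCON's trainf_ocr_class_mlp operator. A hedged sketch of training from the default training-file name and persisting the result (untested here; "digits.omc" is a placeholder file name, and the output order Error/ErrorLog is assumed to match the operator reference):

```csharp
using HalconDotNet;

// Sketch: train an existing MLP OCR classifier from a training file.
// "ocr.trf" is the documented default name and is assumed to exist.
HTuple error, errorLog;
HOperatorSet.TrainfOcrClassMlp(
    ocrHandle,    // handle from CreateOcrClassMlp
    "ocr.trf",    // TrainingFile
    200,          // MaxIterations of the optimization algorithm
    1.0,          // WeightTolerance between two iterations
    0.01,         // ErrorTolerance of the mean error
    out error,    // mean error of the MLP on the training data
    out errorLog  // mean error per iteration
);
// Persist the trained classifier for later ReadOcrClassMlp calls.
HOperatorSet.WriteOcrClassMlp(ocrHandle, "digits.omc");
```

The overloads that omit the password parameter call the plain trainf variant; the (protected) variants additionally pass the training-file passwords.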
Compute the information content of the preprocessed feature vectors of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
Compute the information content of the preprocessed feature vectors of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
Return the rejection class parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the general parameter. Default: "sampling_strategy"
Value of the general parameter.
Return the rejection class parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the general parameter. Default: "sampling_strategy"
Value of the general parameter.
Set the rejection class parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the general parameter. Default: "sampling_strategy"
Value of the general parameter. Default: "hyperbox_around_all_classes"
Set the rejection class parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the general parameter. Default: "sampling_strategy"
Value of the general parameter. Default: "hyperbox_around_all_classes"
Return the regularization parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the regularization parameter to return. Default: "weight_prior"
Value of the regularization parameter.
Set the regularization parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the regularization parameter to set. Default: "weight_prior"
Value of the regularization parameter. Default: 1.0
Set the regularization parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Name of the regularization parameter to set. Default: "weight_prior"
Value of the regularization parameter. Default: 1.0
Return the parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
Number of hidden units of the MLP.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features.
Return the parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
Number of hidden units of the MLP.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features.
Create an OCR classifier using a multilayer perceptron.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
Number of hidden units of the MLP. Default: 80
Type of preprocessing used to transform the feature vectors. Default: "none"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
Create an OCR classifier using a multilayer perceptron.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
Number of hidden units of the MLP. Default: 80
Type of preprocessing used to transform the feature vectors. Default: "none"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
Represents an instance of an SVM OCR classifier.
Read an SVM-based OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name.
Create an OCR classifier using a support vector machine.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
The kernel type. Default: "rbf"
Additional parameter for the kernel function. Default: 0.02
Regularization constant of the SVM. Default: 0.05
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Create an OCR classifier using a support vector machine.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
The kernel type. Default: "rbf"
Additional parameter for the kernel function. Default: 0.02
Regularization constant of the SVM. Default: 0.05
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
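These constructor parameters mirror HALCON's create_ocr_class_svm operator. A minimal sketch with the documented defaults via the procedural API (untested here; it assumes the halcondotnet assembly is referenced):

```csharp
using HalconDotNet;

// Sketch: create an SVM-based OCR classifier for digits using the
// documented defaults (rbf kernel, one-versus-one mode).
HTuple svmOcrHandle;
HOperatorSet.CreateOcrClassSvm(
    8, 10,            // WidthCharacter, HeightCharacter
    "constant",       // Interpolation mode for zooming
    "default",        // Features used for classification
    new HTuple(new string[] { "0","1","2","3","4","5","6","7","8","9" }),
    "rbf",            // KernelType
    0.02,             // KernelParam for the kernel function
    0.05,             // Nu: regularization constant of the SVM
    "one-versus-one", // Mode of the SVM
    "normalization",  // Preprocessing of the feature vectors
    10,               // NumComponents (ignored for "normalization")
    out svmOcrHandle);
```

After training, TrainfOcrClassSvm/ReduceOcrClassSvm (documented below) can shrink the support-vector set to speed up classification.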
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Select an optimal combination of features to classify OCR data from a (protected) training file.
Modified instance represents: Trained OCR-SVM Classifier.
Names of the training files. Default: ""
Passwords for protected training files.
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data from a (protected) training file.
Modified instance represents: Trained OCR-SVM Classifier.
Names of the training files. Default: ""
Passwords for protected training files.
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data.
Modified instance represents: Trained OCR-SVM Classifier.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Select an optimal combination of features to classify OCR data.
Modified instance represents: Trained OCR-SVM Classifier.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Achieved score using two-fold cross-validation.
Selected feature set; contains only entries from FeatureList.
Clear an SVM-based OCR classifier.
Handle of the OCR classifier.
Clear an SVM-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Deserialize a serialized SVM-based OCR classifier.
Modified instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Serialize an SVM-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Handle of the serialized item.
Read an SVM-based OCR classifier from a file.
Modified instance represents: Handle of the OCR classifier.
File name.
Write an OCR classifier to a file.
Instance represents: Handle of the OCR classifier.
File name.
Compute the features of a character.
Instance represents: Handle of the OCR classifier.
Input character.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Classify a related group of characters with an OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the SVM.
Classify multiple characters with an SVM-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Characters to be recognized.
Gray values of the characters.
Result of classifying the characters with the SVM.
Classify a single character with an SVM-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Character to be recognized.
Gray values of the character.
Number of best classes to determine. Default: 1
Result of classifying the character with the SVM.
Approximate a trained SVM-based OCR classifier by a reduced SVM.
Instance represents: Original handle of SVM-based OCR-classifier.
Type of postprocessing to reduce number of SVs. Default: "bottom_up"
Minimum number of remaining SVs. Default: 2
Maximum allowed error of reduction. Default: 0.001
SVMHandle of reduced OCR classifier.
Train an OCR classifier with data from a (protected) training file.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Passwords for protected training files.
Stop parameter for training. Default: 0.001
Mode of training. Default: "default"
Train an OCR classifier with data from a (protected) training file.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Passwords for protected training files.
Stop parameter for training. Default: 0.001
Mode of training. Default: "default"
Train an OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Stop parameter for training. Default: 0.001
Mode of training. Default: "default"
Train an OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Stop parameter for training. Default: 0.001
Mode of training. Default: "default"
Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
Instance represents: Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Cumulative information content of the transformed feature vectors.
Relative information content of the transformed feature vectors.
Return the number of support vectors of an OCR classifier.
Instance represents: OCR handle.
Number of support vectors of each sub-SVM.
Total number of support vectors.
Return the index of a support vector from a trained OCR classifier that is based on support vector machines.
Instance represents: OCR handle.
Number of stored support vectors.
Index of the support vector in the training set.
Return the parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
The kernel type.
Additional parameters for the kernel function.
Regularization constant of the SVM.
The mode of the SVM.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').
Return the parameters of an OCR classifier.
Instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
The kernel type.
Additional parameters for the kernel function.
Regularization constant of the SVM.
The mode of the SVM.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').
Create an OCR classifier using a support vector machine.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
The kernel type. Default: "rbf"
Additional parameter for the kernel function. Default: 0.02
Regularization constant of the SVM. Default: 0.05
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Create an OCR classifier using a support vector machine.
Modified instance represents: Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
The kernel type. Default: "rbf"
Additional parameter for the kernel function. Default: 0.02
Regularization constant of the SVM. Default: 0.05
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
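The classifier parameters above (Width, Height, Interpolation, Features) describe how each segmented character is normalized before classification: its gray values are zoomed to a fixed Width x Height raster whose pixels form (part of) the feature vector. A minimal Python sketch of this idea, with plain nearest-neighbor subsampling standing in for the real interpolation modes; this is illustrative only, not the halcondotnet API:

```python
import numpy as np

def pixel_features(char_img, width=8, height=10):
    """Zoom a character image to a fixed width x height grid by
    nearest-neighbor subsampling and return the gray values as a
    feature vector -- a rough sketch of the 'pixel' feature."""
    img = np.asarray(char_img, float)
    h, w = img.shape
    rows = (np.arange(height) * h) // height
    cols = (np.arange(width) * w) // width
    zoomed = img[np.ix_(rows, cols)]   # height x width sample grid
    return zoomed.ravel()
```

The real operators additionally support interpolation modes such as "constant" and many feature types beyond raw pixels (moments, projections, etc.).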
Represents an instance of a tool for optical character verification.
Read an OCV tool from a file.
Modified instance represents: Handle of the read OCV tool.
Name of the file to be read. Default: "test_ocv"
Create a new OCV tool based on gray value projections.
Modified instance represents: Handle of the created OCV tool.
List of names for patterns to be trained. Default: "a"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Verification of a pattern using an OCV tool.
Instance represents: Handle of the OCV tool.
Characters to be verified.
Name of the character. Default: "a"
Adaptation to vertical and horizontal translation. Default: "true"
Adaptation to vertical and horizontal scaling of the size. Default: "true"
Adaptation to changes of the orientation (not implemented). Default: "false"
Adaptation to additive and scaling gray value changes. Default: "true"
Minimum difference between objects. Default: 10
Evaluation of the character.
Verification of a pattern using an OCV tool.
Instance represents: Handle of the OCV tool.
Characters to be verified.
Name of the character. Default: "a"
Adaptation to vertical and horizontal translation. Default: "true"
Adaptation to vertical and horizontal scaling of the size. Default: "true"
Adaptation to changes of the orientation (not implemented). Default: "false"
Adaptation to additive and scaling gray value changes. Default: "true"
Minimum difference between objects. Default: 10
Evaluation of the character.
Training of an OCV tool.
Instance represents: Handle of the OCV tool to be trained.
Pattern to be trained.
Name(s) of the object(s) to analyse. Default: "a"
Mode for training (only one mode implemented). Default: "single"
Training of an OCV tool.
Instance represents: Handle of the OCV tool to be trained.
Pattern to be trained.
Name(s) of the object(s) to analyse. Default: "a"
Mode for training (only one mode implemented). Default: "single"
Deserialize a serialized OCV tool.
Modified instance represents: Handle of the OCV tool.
Handle of the serialized item.
Serialize an OCV tool.
Instance represents: Handle of the OCV tool.
Handle of the serialized item.
Read an OCV tool from a file.
Modified instance represents: Handle of the read OCV tool.
Name of the file to be read. Default: "test_ocv"
Save an OCV tool to a file.
Instance represents: Handle of the OCV tool to be written.
Name of the file in which the tool is to be saved. Default: "test_ocv"
Clear an OCV tool.
Instance represents: Handle of the OCV tool which has to be freed.
Create a new OCV tool based on gray value projections.
Modified instance represents: Handle of the created OCV tool.
List of names for patterns to be trained. Default: "a"
Create a new OCV tool based on gray value projections.
Modified instance represents: Handle of the created OCV tool.
List of names for patterns to be trained. Default: "a"
Class grouping all HALCON operators.
Compute the union of cotangential contours.
Input XLD contours.
Output XLD contours.
Length of the part of a contour to skip for the determination of tangents. Default: 0.0
Length of the part of a contour to use for the determination of tangents. Default: 30.0
Maximum angle difference between two contours' tangents. Default: 0.78539816
Maximum distance of the contours' end points. Default: 25.0
Maximum distance of the contours' end points perpendicular to their tangents. Default: 10.0
Maximum overlap of two contours. Default: 2.0
Mode describing the treatment of the contours' attributes. Default: "attr_forget"
Transform a contour in polar coordinates back to Cartesian coordinates.
Input contour.
Output contour.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the column coordinate 0 of PolarContour to. Default: 0.0
Angle of the ray to map the column coordinate WidthIn-1 of PolarContour to. Default: 6.2831853
Radius of the circle to map the row coordinate 0 of PolarContour to. Default: 0
Radius of the circle to map the row coordinate HeightIn-1 of PolarContour to. Default: 100
Width of the virtual input image. Default: 512
Height of the virtual input image. Default: 512
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
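The parameters above define a linear mapping from the polar contour's row/column coordinates to a radius/angle pair around the given center. A rough Python sketch of that back-transformation; the angle convention (angle 0 along the +column axis, counterclockwise positive) is an assumption for illustration, not a statement about the halcondotnet implementation:

```python
import math

def polar_contour_to_cartesian(rows, cols, center_row, center_col,
                               angle_start, angle_end,
                               radius_start, radius_end,
                               width_in, height_in):
    """Map polar-contour points (col ~ angle, row ~ radius) back to
    Cartesian coordinates around (center_row, center_col)."""
    out_rows, out_cols = [], []
    for r, c in zip(rows, cols):
        angle = angle_start + c / (width_in - 1) * (angle_end - angle_start)
        radius = radius_start + r / (height_in - 1) * (radius_end - radius_start)
        # Illustrative convention: angle 0 points along +column,
        # positive angles rotate toward -row (counterclockwise on screen).
        out_rows.append(center_row - radius * math.sin(angle))
        out_cols.append(center_col + radius * math.cos(angle))
    return out_rows, out_cols
```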
Transform a contour in an annular arc to polar coordinates.
Input contour.
Output contour.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to the column coordinate 0 of PolarTransContour. Default: 0.0
Angle of the ray to be mapped to the column coordinate Width-1 of PolarTransContour. Default: 6.2831853
Radius of the circle to be mapped to the row coordinate 0 of PolarTransContour. Default: 0
Radius of the circle to be mapped to the row coordinate Height-1 of PolarTransContour. Default: 100
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Create control data of a NURBS curve that interpolates given points.
Row coordinates of input point list.
Column coordinates of input point list.
Tangents at first and last point. Default: []
Order of the output curve. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
The knot vector of the output curve.
Transform a NURBS curve into an XLD contour.
The contour that approximates the NURBS curve.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
The knot vector u. Default: "auto"
The weight vector w. Default: "auto"
The degree p of the NURBS curve. Default: 3
Maximum distance between the NURBS curve and its approximation. Default: 1.0
Maximum distance between two subsequent Contour points. Default: 5.0
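Once control points, knots, and weights are given, the contour is obtained by sampling the curve at many parameter values. For the unweighted (plain B-spline) case, the standard evaluation is de Boor's algorithm; a self-contained sketch, assuming a valid knot vector with no zero-length span at the evaluation point (illustrative, not the halcondotnet implementation):

```python
import numpy as np

def de_boor(x, knots, ctrl, p):
    """Evaluate a degree-p B-spline at parameter x via de Boor's
    algorithm. ctrl is a list of (row, col) control points."""
    t = knots
    # Find the knot span k with t[k] <= x < t[k+1].
    k = p
    while k + 1 < len(ctrl) and x >= t[k + 1]:
        k += 1
    # Local control points for the span, refined p times.
    d = [np.asarray(ctrl[j + k - p], float) for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```

With a clamped knot vector such as [0,0,0,0,1,1,1,1], the cubic B-spline reduces to a Bezier curve over its four control points.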
Compute the union of closed polygons.
Polygons enclosing the first region.
Polygons enclosing the second region.
Polygons enclosing the union.
Compute the union of closed contours.
Contours enclosing the first region.
Contours enclosing the second region.
Contours enclosing the union.
Compute the symmetric difference of closed polygons.
Polygons enclosing the first region.
Polygons enclosing the second region.
Polygons enclosing the symmetric difference.
Compute the symmetric difference of closed contours.
Contours enclosing the first region.
Contours enclosing the second region.
Contours enclosing the symmetric difference.
Compute the difference of closed polygons.
Polygons enclosing the region from which the second region is subtracted.
Polygons enclosing the region that is subtracted from the first region.
Polygons enclosing the difference.
Compute the difference of closed contours.
Contours enclosing the region from which the second region is subtracted.
Contours enclosing the region that is subtracted from the first region.
Contours enclosing the difference.
Intersect closed polygons.
Polygons enclosing the first region to be intersected.
Polygons enclosing the second region to be intersected.
Polygons enclosing the intersection.
Intersect closed contours.
Contours enclosing the first region to be intersected.
Contours enclosing the second region to be intersected.
Contours enclosing the intersection.
Compute the union of contours that belong to the same circle.
Contours to be merged.
Merged contours.
Maximum angular distance of two circular arcs. Default: 0.5
Maximum overlap of two circular arcs. Default: 0.1
Maximum angle between the connecting line and the tangents of circular arcs. Default: 0.2
Maximum length of the gap between two circular arcs in pixels. Default: 30
Maximum radius difference of the circles fitted to two arcs. Default: 10
Maximum center distance of the circles fitted to two arcs. Default: 10
Determine whether small contours without fitted circles should also be merged. Default: "true"
Number of iterations. Default: 1
Crop an XLD contour.
Input contours.
Output contours.
Upper border of the cropping rectangle. Default: 0
Left border of the cropping rectangle. Default: 0
Lower border of the cropping rectangle. Default: 512
Right border of the cropping rectangle. Default: 512
Should closed contours produce closed output contours? Default: "true"
Generate one XLD contour in the shape of a cross for each input point.
Generated XLD contours.
Row coordinates of the input points.
Column coordinates of the input points.
Length of the cross bars. Default: 6.0
Orientation of the crosses. Default: 0.785398
Sort contours with respect to their relative position.
Contours to be sorted.
Sorted contours.
Kind of sorting. Default: "upper_left"
Increasing or decreasing sorting order. Default: "true"
Sorting first with respect to row, then to column. Default: "row"
Merge XLD contours from successive line scan images.
Current input contours.
Merged contours from the previous iteration.
Current contours, merged with old ones where applicable.
Contours from the previous iteration which could not be merged with the current ones.
Height of the line scan images. Default: 512
Maximum distance of contours from the image border. Default: 0.0
Image line of the current image, which touches the previous image. Default: "top"
Maximum number of images covered by one contour. Default: 3
Read XLD polygons from a file in ARC/INFO generate format.
Read XLD polygons.
Name of the ARC/INFO file.
Write XLD polygons to a file in ARC/INFO generate format.
XLD polygons to be written.
Name of the ARC/INFO file.
Read XLD contours from a file in ARC/INFO generate format.
Read XLD contours.
Name of the ARC/INFO file.
Write XLD contours to a file in ARC/INFO generate format.
XLD contours to be written.
Name of the ARC/INFO file.
Read the geocoding from an ARC/INFO world file.
Name of the ARC/INFO world file.
Transformation matrix from image to world coordinates.
Compute the parallel contour of an XLD contour.
Contours to be transformed.
Parallel contours.
Mode with which the direction information is computed. Default: "regression_normal"
Distance of the parallel contour. Default: 1
Create an XLD contour in the shape of a rectangle.
Rectangle contour.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Orientation of the main axis of the rectangle [rad]. Default: 0.0
First radius (half length) of the rectangle. Default: 100.5
Second radius (half width) of the rectangle. Default: 20.5
Compute the distances of all contour points to a rectangle.
Input contour.
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the rectangle.
Column coordinate of the center of the rectangle.
Orientation of the main axis of the rectangle [rad].
First radius (half length) of the rectangle.
Second radius (half width) of the rectangle.
Distances of the contour points to the rectangle.
Fit rectangles to XLD contours.
Input contours.
Algorithm for fitting the rectangles. Default: "regression"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as closed. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations (not used for 'regression'). Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 2.0 for 'tukey'). Default: 2.0
Row coordinate of the center of the rectangle.
Column coordinate of the center of the rectangle.
Orientation of the main axis of the rectangle [rad].
First radius (half length) of the rectangle.
Second radius (half width) of the rectangle.
Point order of the contour.
Segment XLD contour parts whose local attributes fulfill given conditions.
Contour to be segmented.
Segmented contour parts.
Contour attributes to be checked. Default: "distance"
Linkage type of the individual attributes. Default: "and"
Lower limits of the attribute values. Default: 150.0
Upper limits of the attribute values. Default: 99999.0
Segment XLD contours into line segments and circular or elliptic arcs.
Contours to be segmented.
Segmented contours.
Mode for the segmentation of the contours. Default: "lines_circles"
Number of points used for smoothing the contours. Default: 5
Maximum distance between a contour and the approximating line (first iteration). Default: 4.0
Maximum distance between a contour and the approximating line (second iteration). Default: 2.0
Approximate XLD contours by circles.
Input contours.
Algorithm for the fitting of circles. Default: "algebraic"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as 'closed'. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations for the robust weighted fitting. Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for Huber and 2.0 for Tukey). Default: 2.0
Row coordinate of the center of the circle.
Column coordinate of the center of the circle.
Radius of circle.
Angle of the start point [rad].
Angle of the end point [rad].
Point order along the boundary.
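The 'algebraic' fitting algorithm minimizes an algebraic rather than geometric error, which reduces circle fitting to a linear least-squares problem. A Kåsa-style sketch in Python (illustrative of the technique only, not the halcondotnet API):

```python
import math
import numpy as np

def fit_circle_algebraic(rows, cols):
    """Algebraic (Kasa-style) circle fit: from (r-r0)^2 + (c-c0)^2 = R^2
    one gets the linear model r^2 + c^2 = 2*r0*r + 2*c0*c + k with
    k = R^2 - r0^2 - c0^2, solved by least squares."""
    r = np.asarray(rows, float)
    c = np.asarray(cols, float)
    A = np.column_stack([2 * r, 2 * c, np.ones_like(r)])
    b = r * r + c * c
    (r0, c0, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = math.sqrt(k + r0 * r0 + c0 * c0)
    return r0, c0, radius
```

The robust variants ('geohuber', 'geotukey') instead iterate a geometric fit with outlier down-weighting, which is what the Iterations and ClippingFactor parameters control.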
Approximate XLD contours by line segments.
Input contours.
Algorithm for the fitting of lines. Default: "tukey"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 'drop' and 2.0 for 'tukey'). Default: 2.0
Row coordinates of the starting points of the line segments.
Column coordinates of the starting points of the line segments.
Row coordinates of the end points of the line segments.
Column coordinates of the end points of the line segments.
Line parameter: Row coordinate of the normal vector.
Line parameter: Column coordinate of the normal vector.
Line parameter: Distance of the line from the origin.
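The line parameters (Nr, Nc, Dist) describe the fitted line in Hessian normal form: Nr*row + Nc*col = Dist with a unit normal (Nr, Nc). For the plain regression case without outlier weighting, this is a total-least-squares fit obtainable from the eigenvectors of the point covariance; a sketch (illustrative only, not the halcondotnet API):

```python
import numpy as np

def fit_line_tls(rows, cols):
    """Total-least-squares line fit. Returns the unit normal (nr, nc)
    and the signed distance d such that nr*row + nc*col = d."""
    r = np.asarray(rows, float)
    c = np.asarray(cols, float)
    mr, mc = r.mean(), c.mean()
    cov = np.cov(np.vstack([r - mr, c - mc]))
    w, v = np.linalg.eigh(cov)       # ascending eigenvalues
    nr, nc = v[:, 0]                 # normal = eigenvector of smaller one
    d = nr * mr + nc * mc            # line passes through the centroid
    return nr, nc, d
```

The robust algorithms ('huber', 'tukey', 'drop') reweight or discard points far from the current line and refit, which is what MaxIterations and ClippingFactor govern.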
Compute the distances of all contour points to an ellipse.
Input contours.
Mode for unsigned or signed distance values. Default: "unsigned"
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Distances of the contour points to the ellipse.
Compute the distance of contours to an ellipse.
Input contours.
Method for the determination of the distances. Default: "geometric"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Minimum distance.
Maximum distance.
Mean distance.
Standard deviation of the distance.
Approximate XLD contours by ellipses or elliptic arcs.
Input contours.
Algorithm for the fitting of ellipses. Default: "fitzgibbon"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as 'closed'. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Number of circular segments used for the Voss approach. Default: 200
Maximum number of iterations for the robust weighted fitting. Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for '*huber' and 2.0 for '*tukey'). Default: 2.0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Angle of the start point [rad].
Angle of the end point [rad].
Point order along the boundary.
Create XLD contours corresponding to circles or circular arcs.
Resulting contours.
Row coordinate of the center of the circles or circular arcs. Default: 200.0
Column coordinate of the center of the circles or circular arcs. Default: 200.0
Radius of the circles or circular arcs. Default: 100.0
Angle of the start points of the circles or circular arcs [rad]. Default: 0.0
Angle of the end points of the circles or circular arcs [rad]. Default: 6.28318
Point order along the circles or circular arcs. Default: "positive"
Distance between neighboring contour points. Default: 1.0
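The Resolution parameter controls the sampling density: the arc is sampled so that neighboring contour points are at most about Resolution pixels apart. A simplified sketch for positive point order; the row-axis convention (rows grow downward, counterclockwise positive) is an assumption for illustration:

```python
import math

def gen_circle_contour(row, col, radius, start, end, resolution):
    """Sample a circular arc from angle `start` to `end` so that
    neighboring points are at most about `resolution` pixels apart."""
    n = max(2, int(math.ceil((end - start) * radius / resolution)) + 1)
    phis = [start + (end - start) * i / (n - 1) for i in range(n)]
    rows = [row - radius * math.sin(p) for p in phis]
    cols = [col + radius * math.cos(p) for p in phis]
    return rows, cols
```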
Create an XLD contour that corresponds to an elliptic arc.
Resulting contour.
Row coordinate of the center of the ellipse. Default: 200.0
Column coordinate of the center of the ellipse. Default: 200.0
Orientation of the main axis [rad]. Default: 0.0
Length of the larger half axis. Default: 100.0
Length of the smaller half axis. Default: 50.0
Angle of the start point on the smallest surrounding circle [rad]. Default: 0.0
Angle of the end point on the smallest surrounding circle [rad]. Default: 6.28318
Point order along the boundary. Default: "positive"
Resolution: Maximum distance between neighboring contour points. Default: 1.5
Add noise to XLD contours.
Original contours.
Noisy contours.
Number of points used to calculate the regression line. Default: 5
Maximum amplitude of the added noise (equally distributed in [-Amp,Amp]). Default: 1.0
Combine road hypotheses from two resolution levels.
XLD polygons to be examined.
Modified parallels obtained from EdgePolygons.
Extended parallels obtained from EdgePolygons.
Road-center-line polygons to be examined.
Roadsides found.
Maximum angle between two parallel line segments. Default: 0.523598775598
Maximum angle between two collinear line segments. Default: 0.261799387799
Maximum distance between two parallel line segments. Default: 40
Maximum distance between two collinear line segments. Default: 40
Join modified XLD parallels lying on the same polygon.
Extended XLD parallels.
Maximally extended parallels.
Extract parallel XLD polygons enclosing a homogeneous area.
Input XLD parallels.
Corresponding gray value image.
Modified XLD parallels.
Extended XLD parallels.
Minimum quality factor (measure of parallelism). Default: 0.4
Minimum mean gray value. Default: 160
Maximum mean gray value. Default: 220
Maximum allowed standard deviation. Default: 10.0
Return information about the gray values of the area enclosed by XLD parallels.
Input XLD Parallels.
Corresponding gray value image.
Minimum quality factor.
Maximum quality factor.
Minimum mean gray value.
Maximum mean gray value.
Minimum standard deviation.
Maximum standard deviation.
Return an XLD parallel's data (as lines).
Input XLD parallels.
Row coordinates of the points on polygon P1.
Column coordinates of the points on polygon P1.
Lengths of the line segments on polygon P1.
Angles of the line segments on polygon P1.
Row coordinates of the points on polygon P2.
Column coordinates of the points on polygon P2.
Lengths of the line segments on polygon P2.
Angles of the line segments on polygon P2.
Extract parallel XLD polygons.
Input polygons.
Parallel polygons.
Minimum length of the individual polygon segments. Default: 10.0
Maximum distance between the polygon segments. Default: 30.0
Maximum angle difference of the polygon segments. Default: 0.15
Should adjacent parallel relations be merged? Default: "true"
Return an XLD polygon's data (as lines).
Input XLD polygons.
Row coordinates of the lines' start points.
Column coordinates of the lines' start points.
Row coordinates of the lines' end points.
Column coordinates of the lines' end points.
Lengths of the line segments.
Angles of the line segments.
Return an XLD polygon's data.
Input XLD polygon.
Row coordinates of the polygons' points.
Column coordinates of the polygons' points.
Lengths of the line segments.
Angles of the line segments.
Approximate XLD contours by polygons.
Contours to be approximated.
Approximating polygons.
Type of approximation. Default: "ramer"
Threshold for the approximation. Default: 2.0
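'ramer' refers to the Ramer-Douglas-Peucker scheme: keep the two end points, find the interior point farthest from their chord, and recurse on both halves if that distance exceeds the threshold. A compact sketch (illustrative only, not the halcondotnet API):

```python
import math

def ramer_approx(points, tol):
    """Ramer-Douglas-Peucker polygon approximation of a point list
    [(row, col), ...] with distance threshold tol."""
    if len(points) < 3:
        return list(points)
    (r0, c0), (r1, c1) = points[0], points[-1]
    dr, dc = r1 - r0, c1 - c0
    norm = math.hypot(dr, dc) or 1.0   # guard closed contours
    best_i, best_d = 0, -1.0
    for i in range(1, len(points) - 1):
        r, c = points[i]
        d = abs(dr * (c - c0) - dc * (r - r0)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tol:
        return [points[0], points[-1]]
    left = ramer_approx(points[:best_i + 1], tol)
    right = ramer_approx(points[best_i:], tol)
    return left[:-1] + right           # drop the duplicated split point
```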
Split XLD contours at dominant points.
Polygons for which the corresponding contours are to be split.
Split contours.
Mode for the splitting of the contours. Default: "polygon"
Weight for the sensitiveness. Default: 1
Width of the smoothing mask. Default: 5
Apply a projective transformation to an XLD contour.
Input contours.
Output contours.
Homogeneous projective transformation matrix.
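Applying the 3x3 homogeneous matrix to each contour point and dehomogenizing is the whole transformation; a sketch using (row, col, 1) coordinates (the coordinate ordering is an assumption for illustration, not the halcondotnet convention):

```python
def projective_trans_contour(rows, cols, H):
    """Apply a 3x3 homogeneous matrix H to contour points (row, col, 1)
    and dehomogenize by the resulting w component."""
    out_r, out_c = [], []
    for r, c in zip(rows, cols):
        x = H[0][0] * r + H[0][1] * c + H[0][2]
        y = H[1][0] * r + H[1][1] * c + H[1][2]
        w = H[2][0] * r + H[2][1] * c + H[2][2]
        out_r.append(x / w)
        out_c.append(y / w)
    return out_r, out_c
```

The affine variants below are the special case with last row (0, 0, 1), so no division is needed.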
Apply an arbitrary affine transformation to XLD polygons.
Input XLD polygons.
Transformed XLD polygons.
Input transformation matrix.
Apply an arbitrary affine 2D transformation to XLD contours.
Input XLD contours.
Transformed XLD contours.
Input transformation matrix.
Close an XLD contour.
Contours to be closed.
Closed contours.
Clip the end points of an XLD contour.
Input contour.
Clipped contour.
Clipping mode. Default: "num_points"
Clipping length in pixels (Mode = 'length') or number of points (Mode = 'num_points'). Default: 3
Clip an XLD contour.
Contours to be clipped.
Clipped contours.
Row coordinate of the upper left corner of the clip rectangle. Default: 0
Column coordinate of the upper left corner of the clip rectangle. Default: 0
Row coordinate of the lower right corner of the clip rectangle. Default: 512
Column coordinate of the lower right corner of the clip rectangle. Default: 512
Select XLD contours with a local maximum of gray values.
XLD contours to be examined.
Corresponding gray value image.
Selected contours.
Minimum percentage of maximum points. Default: 70
Minimum amount by which the gray value at the maximum must be larger than in the profile. Default: 15
Maximum width of profile used to check for maxima. Default: 4
Compute the union of neighboring straight contours that have a similar distance from a given line.
Input XLD contours.
Output XLD contours.
Output XLD contours.
y coordinate of the starting point of the reference line. Default: 0
x coordinate of the starting point of the reference line. Default: 0
y coordinate of the endpoint of the reference line. Default: 0
x coordinate of the endpoint of the reference line. Default: 0
Maximum distance. Default: 1
Maximum width between two minima. Default: 1
Size of the smoothing filter. Default: 1
Output values of the histogram.
Compute the union of neighboring straight contours that have a similar direction.
Input XLD contours.
Output XLD contours.
Maximum distance of the contours' endpoints. Default: 5.0
Maximum difference in direction. Default: 0.5
Weighting factor for the two selection criteria. Default: 50.0
Should parallel contours be taken into account? Default: "noparallel"
Number of iterations or 'maximum'. Default: "maximum"
Compute the union of collinear contours (operator with extended functionality).
Input XLD contours.
Output XLD contours.
Maximum distance of the contours' end points in the direction of the reference regression line. Default: 10.0
Maximum distance of the contours' end points in the direction of the reference regression line in relation to the length of the contour which is to be elongated. Default: 1.0
Maximum distance of the contour from the reference regression line (i.e., perpendicular to the line). Default: 2.0
Maximum angle difference between the two contours. Default: 0.1
Maximum range of the overlap. Default: 0.0
Maximum regression error of the resulting contours (NOT USED). Default: -1.0
Threshold for reducing the total costs of unification. Default: 1.0
Influence of the distance in the line direction on the total costs. Default: 1.0
Influence of the distance from the regression line on the total costs. Default: 1.0
Influence of the angle difference on the total costs. Default: 1.0
Influence of the line disturbance by the linking segment (overlap and angle difference) on the total costs. Default: 1.0
Influence of the regression error on the total costs (NOT USED). Default: 0.0
Mode describing the treatment of the contours' attributes. Default: "attr_keep"
Unite approximately collinear contours.
Input XLD contours.
Output XLD contours.
Maximum length of the gap between two contours, measured along the regression line of the reference contour. Default: 10.0
Maximum length of the gap between two contours, relative to the length of the reference contour, both measured along the regression line of the reference contour. Default: 1.0
Maximum distance of the second contour from the regression line of the reference contour. Default: 2.0
Maximum angle between the regression lines of two contours. Default: 0.1
Mode that defines the treatment of contour attributes, i.e., if the contour attributes are kept or discarded. Default: "attr_keep"
Compute the union of contours whose end points are close together.
Input XLD contours.
Output XLD contours.
Maximum distance of the contours' end points. Default: 10.0
Maximum distance of the contours' end points in relation to the length of the longer contour. Default: 1.0
Mode describing the treatment of the contours' attributes. Default: "attr_keep"
Select XLD contours according to several features.
Input XLD contours.
Output XLD contours.
Feature to select contours with. Default: "contour_length"
Lower threshold. Default: 0.5
Upper threshold. Default: 200.0
Lower threshold. Default: -0.5
Upper threshold. Default: 0.5
Return XLD contour parameters.
Input XLD contours.
Number of contour points.
X-coordinate of the normal vector of the regression line.
Y-coordinate of the normal vector of the regression line.
Distance of the regression line from the origin.
X-coordinate of the projection of the start point of the contour onto the regression line.
Y-coordinate of the projection of the start point of the contour onto the regression line.
X-coordinate of the projection of the end point of the contour onto the regression line.
Y-coordinate of the projection of the end point of the contour onto the regression line.
Mean distance of the contour points from the regression line.
Standard deviation of the distances from the regression line.
Calculate the parameters of a regression line to an XLD contour.
Input XLD contours.
Resulting XLD contours.
Type of outlier treatment. Default: "no"
Number of iterations for the outlier treatment. Default: 1
Calculate the direction of an XLD contour for each contour point.
Input contour.
Return type of the angles. Default: "abs"
Method for computing the angles. Default: "range"
Number of points to take into account. Default: 3
Direction of the tangent to the contour points.
Smooth an XLD contour.
Contour to be smoothed.
Smoothed contour.
Number of points used to calculate the regression line. Default: 5
Return the number of points in an XLD contour.
Input XLD contour.
Number of contour points.
Return the names of the defined global attributes of an XLD contour.
Input contour.
List of the defined global contour attributes.
Return global attributes values of an XLD contour.
Input XLD contour.
Name of the attribute. Default: "regr_norm_row"
Attribute values.
Return the names of the defined attributes of an XLD contour.
Input contour.
List of the defined contour attributes.
Return point attribute values of an XLD contour.
Input XLD contour.
Name of the attribute. Default: "angle"
Attribute values.
Return the coordinates of an XLD contour.
Input XLD contour.
Row coordinate of the contour's points.
Column coordinate of the contour's points.
Generate XLD contours from regions.
Input regions.
Resulting contours.
Mode of contour generation. Default: "border"
Generate an XLD contour with rounded corners from a polygon (given as tuples).
Resulting contour.
Row coordinates of the polygon. Default: [20,80,80,20,20]
Column coordinates of the polygon. Default: [20,20,80,80,20]
Radii of the rounded corners. Default: [20,20,20,20,20]
Distance of the samples. Default: 1.0
Generate an XLD contour from a polygon (given as tuples).
Resulting contour.
Row coordinates of the polygon. Default: [0,1,2,2,2]
Column coordinates of the polygon. Default: [0,0,0,1,2]
Convert a skeleton into XLD contours.
Skeleton of which the contours are to be determined.
Resulting contours.
Minimum number of points a contour has to have. Default: 1
Contour filter mode. Default: "filter"
Display an XLD object.
XLD object to display.
Window handle.
Image restoration by Wiener filtering.
Corrupted image.
Impulse response (PSF) of the degradation (in the spatial domain).
Region for noise estimation.
Restored image.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Image restoration by Wiener filtering.
Corrupted image.
Impulse response (PSF) of the degradation (in the spatial domain).
Smoothed version of corrupted image.
Restored image.
Generate an impulse response of a (linear) motion blurring.
Impulse response of motion-blur.
Width of impulse response image. Default: 256
Height of impulse response image. Default: 256
Degree of motion-blur. Default: 20.0
Angle between direction of motion and x-axis (anticlockwise). Default: 0
PSF prototype, i.e., the type of motion. Default: 3
Simulation of (linear) motion blur.
Image to be blurred.
Motion-blurred image.
Extent of blurring. Default: 20.0
Angle between direction of motion and x-axis (anticlockwise). Default: 0
Impulse response of the motion blur. Default: 3
Generate an impulse response of a uniform out-of-focus blurring.
Impulse response of uniform out-of-focus blurring.
Width of result image. Default: 256
Height of result image. Default: 256
Degree of blurring. Default: 5.0
Simulate a uniform out-of-focus blurring of an image.
Image to blur.
Blurred image.
Degree of blurring. Default: 5.0
Deserialize a variation model.
Handle of the serialized item.
ID of the variation model.
Serialize a variation model.
ID of the variation model.
Handle of the serialized item.
Read a variation model from a file.
File name.
ID of the variation model.
Write a variation model to a file.
ID of the variation model.
File name.
Return the threshold images used for image comparison by a variation model.
Threshold image for the lower threshold.
Threshold image for the upper threshold.
ID of the variation model.
Return the images used for image comparison by a variation model.
Image of the trained object.
Variation image of the trained object.
ID of the variation model.
Compare an image to a variation model.
Image of the object to be compared.
Region containing the points that differ substantially from the model.
ID of the variation model.
Method used for comparing the variation model. Default: "absolute"
Compare an image to a variation model.
Image of the object to be compared.
Region containing the points that differ substantially from the model.
ID of the variation model.
Prepare a variation model for comparison with an image.
Reference image of the object.
Variation image of the object.
ID of the variation model.
Absolute minimum threshold for the differences between the image and the variation model. Default: 10
Threshold for the differences based on the variation of the variation model. Default: 2
Prepare a variation model for comparison with an image.
ID of the variation model.
Absolute minimum threshold for the differences between the image and the variation model. Default: 10
Threshold for the differences based on the variation of the variation model. Default: 2
Train a variation model.
Images of the object to be trained.
ID of the variation model.
This operator is inoperable. It had the following function: Free the memory of all variation models.
Free the memory of a variation model.
ID of the variation model.
Free the memory of the training data of a variation model.
ID of the variation model.
Create a variation model for image comparison.
Width of the images to be compared. Default: 640
Height of the images to be compared. Default: 480
Type of the images to be compared. Default: "byte"
Method used for computing the variation model. Default: "standard"
ID of the variation model.
Compute the union set of two input tuples.
Input tuple.
Input tuple.
The union set of two input tuples.
Compute the intersection set of two input tuples.
Input tuple.
Input tuple.
The intersection set of two input tuples.
Compute the difference set of two input tuples.
Input tuple.
Input tuple.
The difference set of two input tuples.
Compute the symmetric difference set of two input tuples.
Input tuple.
Input tuple.
The symmetric difference set of two input tuples.
Test whether the types of the elements of a tuple are of type string.
Input tuple.
Are the elements of the input tuple of type string?
Test whether the types of the elements of a tuple are of type real.
Input tuple.
Are the elements of the input tuple of type real?
Test whether the types of the elements of a tuple are of type integer.
Input tuple.
Are the elements of the input tuple of type integer?
Return the types of the elements of a tuple.
Input tuple.
Types of the elements of the input tuple as integer values.
Test whether a tuple is of type mixed.
Input tuple.
Is the input tuple of type mixed?
Test if the internal representation of a tuple is of type string.
Input tuple.
Is the input tuple of type string?
Test if the internal representation of a tuple is of type real.
Input tuple.
Is the input tuple of type real?
Test if the internal representation of a tuple is of type integer.
Input tuple.
Is the input tuple of type integer?
Return the type of a tuple.
Input tuple.
Type of the input tuple as an integer number.
Calculate the value distribution of a tuple within a certain value range.
Input tuple.
Minimum value.
Maximum value.
Number of bins.
Histogram to be calculated.
Bin size.
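The value-distribution operator above can be sketched in plain Python (an illustration of the semantics, not the halcondotnet API; that out-of-range values are ignored is an assumption):

```python
def tuple_histo_range(values, vmin, vmax, num_bins):
    # Split [vmin, vmax] into num_bins equal-width bins.
    bin_size = (vmax - vmin) / num_bins
    histo = [0] * num_bins
    for v in values:
        if vmin <= v <= vmax:  # assumed: out-of-range values are ignored
            # clamp v == vmax into the last bin
            idx = min(int((v - vmin) / bin_size), num_bins - 1)
            histo[idx] += 1
    return histo, bin_size
```

For example, five values 0..4 over the range [0, 4] with two bins yield a bin size of 2.0 and counts [2, 3].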
Select tuple elements matching a regular expression.
Input strings to match.
Regular expression. Default: ".*"
Matching strings.
Test if a string matches a regular expression.
Input strings to match.
Regular expression. Default: ".*"
Number of matching strings.
Replace a substring using regular expressions.
Input strings to process.
Regular expression. Default: ".*"
Replacement expression.
Processed strings.
Extract substrings using regular expressions.
Input strings to match.
Regular expression. Default: ".*"
Found matches.
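The selection and replacement variants can be sketched with Python's `re` module as a stand-in (a sketch of the semantics only; HALCON's regular-expression dialect and its default replace behavior may differ, and the search-anywhere and replace-first conventions here are assumptions):

```python
import re

def regexp_select(strings, pattern):
    # Keep only the strings in which the pattern is found anywhere.
    return [s for s in strings if re.search(pattern, s)]

def regexp_replace(strings, pattern, replacement):
    # Replace the first occurrence of the pattern in each string
    # (assumed default; replace-all would use count=0).
    return [re.sub(pattern, replacement, s, count=1) for s in strings]
```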
Return a tuple of random numbers between 0 and 1.
Length of tuple to generate.
Tuple of random numbers.
Return the number of elements of a tuple.
Input tuple.
Number of elements of input tuple.
Calculate the sign of a tuple.
Input tuple.
Signs of the input tuple as integer numbers.
Calculate the elementwise maximum of two tuples.
Input tuple 1.
Input tuple 2.
Elementwise maximum of the input tuples.
Calculate the elementwise minimum of two tuples.
Input tuple 1.
Input tuple 2.
Elementwise minimum of the input tuples.
Return the maximal element of a tuple.
Input tuple.
Maximal element of the input tuple elements.
Return the minimal element of a tuple.
Input tuple.
Minimal element of the input tuple elements.
Calculate the cumulative sums of a tuple.
Input tuple.
Cumulative sum of the corresponding tuple elements.
Select the element of rank n of a tuple.
Input tuple.
Rank of the element to select.
Selected tuple element.
Return the median of the elements of a tuple.
Input tuple.
Median of the tuple elements.
Return the sum of all elements of a tuple.
Input tuple.
Sum of tuple elements.
Return the mean value of a tuple of numbers.
Input tuple.
Mean value of tuple elements.
Return the standard deviation of the elements of a tuple.
Input tuple.
Standard deviation of tuple elements.
Discard all but one of successive identical elements of a tuple.
Input tuple.
Tuple without successive identical elements.
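Discarding successive identical elements can be sketched in one line of plain Python (semantics only, not the halcondotnet API):

```python
def tuple_uniq(t):
    # Keep an element only if it differs from its immediate predecessor;
    # sort the input first if a globally unique set is wanted.
    return [x for i, x in enumerate(t) if i == 0 or t[i - 1] != x]
```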
Return the index of the last occurrence of a tuple within another tuple.
Input tuple to examine.
Input tuple with values to find.
Index of the last occurrence of the values to find.
Return the index of the first occurrence of a tuple within another tuple.
Input tuple to examine.
Input tuple with values to find.
Index of the first occurrence of the values to find.
Return the indices of all occurrences of a tuple within another tuple.
Input tuple to examine.
Input tuple with values to find.
Indices of the occurrences of the values to find in the tuple to examine.
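Finding all occurrences of one tuple inside another is a sliding-window comparison; a plain-Python sketch (the empty-list convention for "not found" is a simplification of this sketch, not necessarily HALCON's convention):

```python
def tuple_find_all(haystack, needle):
    # Indices i at which the whole needle tuple occurs in the haystack.
    n, m = len(haystack), len(needle)
    return [i for i in range(n - m + 1) if haystack[i:i + m] == needle]
```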
Sort the elements of a tuple and return the indices of the sorted tuple.
Input tuple.
Sorted tuple.
Sort the elements of a tuple in ascending order.
Input tuple.
Sorted tuple.
Invert a tuple.
Input tuple.
Inverted input tuple.
Concatenate two tuples to a new one.
Input tuple 1.
Input tuple 2.
Concatenation of input tuples.
Select several elements of a tuple.
Input tuple.
Index of first element to select.
Index of last element to select.
Selected tuple elements.
Select all elements from index "n" to the end of a tuple.
Input tuple.
Index of the first element to select.
Selected tuple elements.
Select the first elements of a tuple up to the index "n".
Input tuple.
Index of the last element to select.
Selected tuple elements.
Insert one or more elements into a tuple at the given index.
Input tuple.
Start index of elements to be inserted.
Element(s) to insert at index.
Tuple with inserted elements.
Replace one or more elements of a tuple.
Input tuple.
Index/Indices of elements to be replaced.
Element(s) to replace.
Tuple with replaced elements.
Remove elements from a tuple.
Input tuple.
Indices of the elements to remove.
Reduced tuple.
Select the elements of a tuple that are specified by a mask.
Input tuple.
Mask; entries greater than 0 specify the elements to select.
Selected tuple elements.
Select single elements of a tuple.
Input tuple.
Indices of the elements to select.
Selected tuple element.
Select single character or bit from a tuple.
Input tuple.
Position of character or bit to select.
Tuple containing the selected characters and bits.
Generate a tuple with a sequence of equidistant values.
Start value of the tuple.
Maximum value for the last entry.
Increment value.
The resulting sequence.
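Generating a sequence of equidistant values can be sketched in plain Python (semantics only; positive increments assumed):

```python
def tuple_gen_sequence(start, end, step):
    # start, start+step, ... up to at most end (positive step assumed).
    seq = []
    v = start
    while v <= end:
        seq.append(v)
        v += step
    return seq
```

Note that `end` is only an upper bound for the last entry, e.g. a sequence from 0 to 10 in steps of 3 ends at 9.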
Generate a tuple of a specific length and initialize its elements.
Length of tuple to generate.
Constant for initializing the tuple elements.
New tuple.
Read one or more environment variables.
Tuple containing name(s) of the environment variable(s).
Content of the environment variable(s).
Split strings into substrings using predefined separator symbol(s).
Input tuple with string(s) to split.
Input tuple with separator symbol(s).
Substrings after splitting the input strings.
Cut characters from position "n1" through "n2" out of a string tuple.
Input tuple with string(s) to examine.
Input tuple with start position(s) "n1".
Input tuple with end position(s) "n2".
Characters of the string(s) from position "n1" to "n2".
Cut all characters starting at position "n" out of a string tuple.
Input tuple with string(s) to examine.
Input tuple with position(s) "n".
The last characters of the string(s) starting at position "n".
Cut the first characters up to position "n" out of a string tuple.
Input tuple with string(s) to examine.
Input tuple with position(s) "n".
The first characters of the string(s) up to position "n".
Backward search for characters within a string tuple.
Input tuple with string(s) to examine.
Input tuple with character(s) to search.
Position of searched character(s) within the string(s).
Forward search for characters within a string tuple.
Input tuple with string(s) to examine.
Input tuple with character(s) to search.
Position of searched character(s) within the string(s).
Backward search for strings within a string tuple.
Input tuple with string(s) to examine.
Input tuple with string(s) to search.
Position of searched string(s) within the examined string(s).
Forward search for strings within a string tuple.
Input tuple with string(s) to examine.
Input tuple with string(s) to search.
Position of searched string(s) within the examined string(s).
Determine the length of every string within a tuple of strings.
Input tuple.
Lengths of the single strings of the input tuple.
Test whether a tuple is elementwise less or equal to another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is elementwise less than another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is elementwise greater or equal to another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is elementwise greater than another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are elementwise not equal.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are elementwise equal.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is less or equal to another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is less than another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is greater or equal to another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is greater than another tuple.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are not equal.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are equal.
Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Compute the logical not of a tuple.
Input tuple.
Logical not of the input tuple.
Compute the logical exclusive or of two tuples.
Input tuple 1.
Input tuple 2.
Logical exclusive or of the input tuples.
Compute the logical or of two tuples.
Input tuple 1.
Input tuple 2.
Logical or of the input tuples.
Compute the logical and of two tuples.
Input tuple 1.
Input tuple 2.
Logical and of the input tuples.
Compute the bitwise not of a tuple.
Input tuple.
Binary not of the input tuple.
Compute the bitwise exclusive or of two tuples.
Input tuple 1.
Input tuple 2.
Binary exclusive or of the input tuples.
Compute the bitwise or of two tuples.
Input tuple 1.
Input tuple 2.
Binary or of the input tuples.
Compute the bitwise and of two tuples.
Input tuple 1.
Input tuple 2.
Binary and of the input tuples.
Shift a tuple bitwise to the right.
Input tuple.
Number of places to shift the input tuple.
Shifted input tuple.
Shift a tuple bitwise to the left.
Input tuple.
Number of places to shift the input tuple.
Shifted input tuple.
Convert a tuple of integer numbers into strings.
Input tuple with integer numbers.
Output tuple with strings that are separated by the number 0.
Convert a tuple of strings into a tuple of integer numbers.
Input tuple with strings.
Output tuple with the Unicode character codes or ANSI codes of the input string.
Convert a tuple of integer numbers into strings.
Input tuple with Unicode character codes or ANSI codes.
Output tuple with strings built from the character codes in the input tuple.
Convert a tuple of strings of length 1 into a tuple of integer numbers.
Input tuple with strings of length 1.
Output tuple with Unicode character codes or ANSI codes of the characters passed in the input tuple.
Convert a tuple into a tuple of strings.
Input tuple.
Format string.
Input tuple converted to strings.
Check a tuple (of strings) whether it represents numbers.
Input tuple.
Tuple with boolean numbers.
Convert a tuple (of strings) into a tuple of numbers.
Input tuple.
Input tuple as numbers.
Convert a tuple into a tuple of integer numbers.
Input tuple.
Result of the rounding.
Convert a tuple into a tuple of integer numbers.
Input tuple.
Result of the conversion into integer numbers.
Convert a tuple into a tuple of floating point numbers.
Input tuple.
Input tuple as floating point numbers.
Calculate the ldexp function of two tuples.
Input tuple 1.
Input tuple 2.
Ldexp function of the input tuples.
Calculate the remainder of the floating point division of two tuples.
Input tuple 1.
Input tuple 2.
Remainder of the division of the input tuples.
Calculate the remainder of the integer division of two tuples.
Input tuple 1.
Input tuple 2.
Remainder of the division of the input tuples.
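The two remainder operators above can be sketched in plain Python; that the remainder carries the sign of the dividend (C-style truncating division, unlike Python's `%`) is an assumption of this sketch:

```python
import math

def int_mod(a, b):
    # Truncating division, as in C: the remainder takes the sign
    # of the dividend (so int_mod(-7, 3) is -1, while Python's -7 % 3 is 2).
    return a - b * int(a / b)
```

The floating-point variant corresponds to C's `fmod`, which Python exposes as `math.fmod`.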
Compute the ceiling function of a tuple.
Input tuple.
Ceiling function of the input tuple.
Compute the floor function of a tuple.
Input tuple.
Floor function of the input tuple.
Calculate the power function of two tuples.
Input tuple 1.
Input tuple 2.
Power function of the input tuples.
Compute the base 10 logarithm of a tuple.
Input tuple.
Base 10 logarithm of the input tuple.
Compute the natural logarithm of a tuple.
Input tuple.
Natural logarithm of the input tuple.
Compute the exponential of a tuple.
Input tuple.
Exponential of the input tuple.
Compute the hyperbolic tangent of a tuple.
Input tuple.
Hyperbolic tangent of the input tuple.
Compute the hyperbolic cosine of a tuple.
Input tuple.
Hyperbolic cosine of the input tuple.
Compute the hyperbolic sine of a tuple.
Input tuple.
Hyperbolic sine of the input tuple.
Convert a tuple from degrees to radians.
Input tuple.
Input tuple in radians.
Convert a tuple from radians to degrees.
Input tuple.
Input tuple in degrees.
Compute the arctangent of a tuple for all four quadrants.
Input tuple of the y-values.
Input tuple of the x-values.
Arctangent of the input tuple.
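The four-quadrant arctangent takes the y-value first and uses the signs of both arguments to resolve the quadrant, which a plain `atan` cannot do; `math.atan2` has the same convention:

```python
import math

# atan2(y, x) resolves the quadrant from the signs of both arguments,
# covering the full range (-pi, pi].
angle_q1 = math.atan2(1.0, 1.0)    # point in the first quadrant:  pi/4
angle_q2 = math.atan2(1.0, -1.0)   # point in the second quadrant: 3*pi/4
```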
Compute the arctangent of a tuple.
Input tuple.
Arctangent of the input tuple.
Compute the arccosine of a tuple.
Input tuple.
Arccosine of the input tuple.
Compute the arcsine of a tuple.
Input tuple.
Arcsine of the input tuple.
Compute the tangent of a tuple.
Input tuple.
Tangent of the input tuple.
Compute the cosine of a tuple.
Input tuple.
Cosine of the input tuple.
Compute the sine of a tuple.
Input tuple.
Sine of the input tuple.
Compute the absolute value of a tuple (as floating point numbers).
Input tuple.
Absolute value of the input tuple.
Compute the square root of a tuple.
Input tuple.
Square root of the input tuple.
Compute the absolute value of a tuple.
Input tuple.
Absolute value of the input tuple.
Negate a tuple.
Input tuple.
Negation of the input tuple.
Divide two tuples.
Input tuple 1.
Input tuple 2.
Quotient of the input tuples.
Multiply two tuples.
Input tuple 1.
Input tuple 2.
Product of the input tuples.
Subtract two tuples.
Input tuple 1.
Input tuple 2.
Difference of the input tuples.
Add two tuples.
Input tuple 1.
Input tuple 2.
Sum of the input tuples.
Deserialize a serialized tuple.
Handle of the serialized item.
Tuple.
Serialize a tuple.
Tuple.
Handle of the serialized item.
Write a tuple to a file.
Tuple with any kind of data.
Name of the file to be written.
Read a tuple from a file.
Name of the file to be read.
Tuple with any kind of data.
Compute the average of a set of poses.
Set of poses of which the average is computed.
Empty tuple, or one weight per pose. Default: []
Averaging mode. Default: "iterative"
Weight of the translation. Default: "auto"
Weight of the rotation. Default: "auto"
Weighted mean of the poses.
Deviation of the mean from the input poses.
Perform a rotation by a unit quaternion.
Rotation quaternion.
X coordinate of the point to be rotated.
Y coordinate of the point to be rotated.
Z coordinate of the point to be rotated.
X coordinate of the rotated point.
Y coordinate of the rotated point.
Z coordinate of the rotated point.
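Rotating a point by a unit quaternion computes p' = q (0, p) q*, where q* is the conjugate; a plain-Python sketch of that formula (semantics only, not the halcondotnet API; quaternions are written (w, x, y, z)):

```python
def quat_mult(q, r):
    # Hamilton product of quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate_point(q, px, py, pz):
    # p' = q * (0, p) * conj(q) for a unit quaternion q.
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mult(quat_mult(q, (0.0, px, py, pz)), conj)
    return rx, ry, rz
```

For example, the quaternion for a 90-degree rotation about the z-axis maps (1, 0, 0) to (0, 1, 0).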
Generate the conjugation of a quaternion.
Input quaternion.
Conjugated quaternion.
Normalize a quaternion.
Input quaternion.
Normalized quaternion.
Create a rotation quaternion.
X component of the rotation axis.
Y component of the rotation axis.
Z component of the rotation axis.
Rotation angle in radians.
Rotation quaternion.
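Building a rotation quaternion from an axis and an angle follows the standard formula q = (cos(a/2), sin(a/2) n) with n the normalized axis; a plain-Python sketch (semantics only, not the halcondotnet API):

```python
import math

def axis_angle_to_quat(ax, ay, az, angle):
    # Unit quaternion (cos(a/2), sin(a/2) * normalized axis) as (w, x, y, z).
    n = math.sqrt(ax*ax + ay*ay + az*az)
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), ax * s, ay * s, az * s)
```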
Convert a quaternion into the corresponding 3D pose.
Rotation quaternion.
3D pose.
Invert each pose in a tuple of 3D poses.
Tuple of 3D poses.
Tuple of inverted 3D poses.
Combine 3D poses given in two tuples.
Tuple containing the left poses.
Tuple containing the right poses.
Tuple containing the returned poses.
Convert a quaternion into the corresponding rotation matrix.
Rotation quaternion.
Rotation matrix.
Convert the rotational part of a 3D pose to a quaternion.
3D pose.
Rotation quaternion.
Interpolation of two quaternions.
Start quaternion.
End quaternion.
Interpolation parameter. Default: 0.5
Interpolated quaternion.
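Quaternion interpolation is conventionally done with slerp (spherical linear interpolation); a plain-Python sketch of the standard formula (semantics only; the shorter-arc sign flip and the near-parallel fallback are standard practice but assumptions about this operator):

```python
import math

def slerp(q1, q2, t):
    # Spherical linear interpolation between unit quaternions (w, x, y, z).
    dot = sum(a * b for a, b in zip(q1, q2))
    if dot < 0.0:                  # take the shorter arc
        q2 = tuple(-c for c in q2)
        dot = -dot
    if dot > 0.9995:               # nearly parallel: fall back to lerp
        out = tuple(a + t * (b - a) for a, b in zip(q1, q2))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s1 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s2 = math.sin(t * theta) / math.sin(theta)
    return tuple(s1 * a + s2 * b for a, b in zip(q1, q2))
```

Halfway between the identity and a 180-degree rotation about z, slerp yields the 90-degree rotation about z.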
Multiply two quaternions.
Left quaternion.
Right quaternion.
Product of the input quaternions.
Deserialize a serialized homogeneous 3D transformation matrix.
Handle of the serialized item.
Transformation matrix.
Serialize a homogeneous 3D transformation matrix.
Transformation matrix.
Handle of the serialized item.
Deserialize a serialized homogeneous 2D transformation matrix.
Handle of the serialized item.
Transformation matrix.
Serialize a homogeneous 2D transformation matrix.
Transformation matrix.
Handle of the serialized item.
Deserialize a serialized quaternion.
Handle of the serialized item.
Quaternion.
Serialize a quaternion.
Quaternion.
Handle of the serialized item.
Project a homogeneous 3D point using a projective transformation matrix.
Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Input point (w coordinate).
Output point (x coordinate).
Output point (y coordinate).
Output point (z coordinate).
Output point (w coordinate).
Project a 3D point using a projective transformation matrix.
Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Output point (x coordinate).
Output point (y coordinate).
Output point (z coordinate).
Apply an arbitrary affine 3D transformation to points.
Input transformation matrix.
Input point(s) (x coordinate). Default: 64
Input point(s) (y coordinate). Default: 64
Input point(s) (z coordinate). Default: 64
Output point(s) (x coordinate).
Output point(s) (y coordinate).
Output point(s) (z coordinate).
Approximate a 3D transformation from point correspondences.
Type of the transformation to compute. Default: "rigid"
X coordinates of the original points.
Y coordinates of the original points.
Z coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Z coordinates of the transformed points.
Output transformation matrix.
Compute the determinant of a homogeneous 3D transformation matrix.
Input transformation matrix.
Determinant of the input matrix.
Transpose a homogeneous 3D transformation matrix.
Input transformation matrix.
Output transformation matrix.
Invert a homogeneous 3D transformation matrix.
Input transformation matrix.
Output transformation matrix.
Multiply two homogeneous 3D transformation matrices.
Left input transformation matrix.
Right input transformation matrix.
Output transformation matrix.
Add a rotation to a homogeneous 3D transformation matrix.
Input transformation matrix.
Rotation angle. Default: 0.78
Axis to rotate around. Default: "x"
Output transformation matrix.
Add a rotation to a homogeneous 3D transformation matrix.
Input transformation matrix.
Rotation angle. Default: 0.78
Axis to rotate around. Default: "x"
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Fixed point of the transformation (z coordinate). Default: 0
Output transformation matrix.
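A rotation about a fixed point other than the origin is the composition T(p) R T(-p): translate the fixed point to the origin, rotate, translate back. A plain-Python sketch with 4x4 homogeneous matrices (semantics only, not the halcondotnet API; x-axis rotation used as the example):

```python
import math

def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rotate_about_point(angle, px, py, pz):
    # Move the fixed point to the origin, rotate about the x-axis, move back.
    return matmul4(translate(px, py, pz),
                   matmul4(rot_x(angle), translate(-px, -py, -pz)))

def apply(m, x, y, z):
    p = (x, y, z, 1.0)
    return tuple(sum(m[i][j] * p[j] for j in range(4)) for i in range(3))
```

By construction, the fixed point itself is left unchanged by the resulting transformation.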
Add a scaling to a homogeneous 3D transformation matrix.
Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Scale factor along the z-axis. Default: 2
Output transformation matrix.
Add a scaling to a homogeneous 3D transformation matrix.
Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Scale factor along the z-axis. Default: 2
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Fixed point of the transformation (z coordinate). Default: 0
Output transformation matrix.
Add a translation to a homogeneous 3D transformation matrix.
Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Translation along the z-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 3D transformation matrix.
Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Translation along the z-axis. Default: 64
Output transformation matrix.
Generate the homogeneous transformation matrix of the identity 3D transformation.
Transformation matrix.
Project an affine 3D transformation matrix to a 2D projective transformation matrix.
3x4 3D transformation matrix.
Row coordinate of the principal point. Default: 256
Column coordinate of the principal point. Default: 256
Focal length in pixels. Default: 256
Homogeneous projective transformation matrix.
Perform a bundle adjustment of an image mosaic.
Number of different images that are used for the calibration.
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Transformation class to be used. Default: "projective"
Array of 3x3 projective transformation matrices that determine the position of the images in the mosaic.
Row coordinates of the points reconstructed by the bundle adjustment.
Column coordinates of the points reconstructed by the bundle adjustment.
Average error per reconstructed point.
Compute a projective transformation matrix and the radial distortion coefficient between two images by finding correspondences between points based on known approximations of the projective transformation matrix and the radial distortion coefficient.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Approximation of the homogeneous projective transformation matrix between the two images.
Approximation of the radial distortion coefficient in the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed homogeneous projective transformation matrix.
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute a projective transformation matrix between two images and the radial distortion coefficient by automatically finding correspondences between points.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the projective transformation matrix. Default: "gold_standard"
Threshold for the transformation consistency check. Default: 1
Seed for the random number generator. Default: 0
Computed homogeneous projective transformation matrix.
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute a projective transformation matrix between two images by finding correspondences between points based on a known approximation of the projective transformation matrix.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Approximation of the homogeneous projective transformation matrix between the two images.
Tolerance for the matching search window. Default: 20.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Homogeneous projective transformation matrix.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute a projective transformation matrix between two images by finding correspondences between points.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift. Default: 0
Average column coordinate shift. Default: 0
Half height of matching search window. Default: 256
Half width of matching search window. Default: 256
Range of rotation angles. Default: 0.0
Threshold for gray value matching. Default: 10
Transformation matrix estimation algorithm. Default: "normalized_dlt"
Threshold for transformation consistency check. Default: 0.2
Seed for the random number generator. Default: 0
Homogeneous projective transformation matrix.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute a projective transformation matrix and the radial distortion coefficient using given image point correspondences.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Estimation algorithm. Default: "gold_standard"
Homogeneous projective transformation matrix.
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Compute a homogeneous transformation matrix using given point correspondences.
Input points 1 (x coordinate).
Input points 1 (y coordinate).
Input points 1 (w coordinate).
Input points 2 (x coordinate).
Input points 2 (y coordinate).
Input points 2 (w coordinate).
Estimation algorithm. Default: "normalized_dlt"
Homogeneous projective transformation matrix.
Compute a projective transformation matrix using given point correspondences.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Estimation algorithm. Default: "normalized_dlt"
Row coordinate variance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Homogeneous projective transformation matrix.
9x9 covariance matrix of the projective transformation matrix.
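The "normalized_dlt" estimation named above can be sketched in plain numpy: normalize both point sets, build the direct-linear-transform design matrix from the correspondences, and take its null vector via SVD. This is illustrative math only, not halcondotnet code; the HALCON operator additionally handles covariances and the "gold_standard" refinement, and the function names here are invented for the sketch.

```python
import numpy as np

def normalize(pts):
    """Similarity transform moving pts to zero mean, average distance sqrt(2)."""
    mean = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
    return np.array([[scale, 0.0, -scale * mean[0]],
                     [0.0, scale, -scale * mean[1]],
                     [0.0, 0.0, 1.0]])

def dlt_homography(p1, p2):
    """p1, p2: (N,2) corresponding points, N >= 4; returns 3x3 H with p2 ~ H p1."""
    T1, T2 = normalize(p1), normalize(p2)
    h1 = (T1 @ np.column_stack([p1, np.ones(len(p1))]).T).T
    h2 = (T2 @ np.column_stack([p2, np.ones(len(p2))]).T).T
    rows = []
    for (x, y, w), (u, v, s) in zip(h1, h2):
        # two rows of the cross-product constraint p2 x (H p1) = 0
        rows.append([0, 0, 0, -s*x, -s*y, -s*w, v*x, v*y, v*w])
        rows.append([s*x, s*y, s*w, 0, 0, 0, -u*x, -u*y, -u*w])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    Hn = Vt[-1].reshape(3, 3)           # null vector = flattened H (normalized frame)
    H = np.linalg.inv(T2) @ Hn @ T1     # undo the normalization
    return H / H[2, 2]
```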
Compute the affine transformation parameters from a homogeneous 2D transformation matrix.
Input transformation matrix.
Scaling factor along the x direction.
Scaling factor along the y direction.
Rotation angle.
Slant angle.
Translation along the x direction.
Translation along the y direction.
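The decomposition above can be sketched for the slant-free case: with zero slant the 2x3 matrix has the form [[sx·cos φ, -sy·sin φ, tx], [sx·sin φ, sy·cos φ, ty]], so the parameters fall out of the column norms and an arctangent. A numpy sketch under that assumption (the HALCON operator also recovers the slant angle; the function name is invented):

```python
import numpy as np

def affine_params(m):
    """Recover (sx, sy, phi, tx, ty) from a 2x3 affine matrix, assuming zero slant."""
    sx = np.hypot(m[0, 0], m[1, 0])      # norm of the first column
    sy = np.hypot(m[0, 1], m[1, 1])      # norm of the second column
    phi = np.arctan2(m[1, 0], m[0, 0])   # rotation angle
    return sx, sy, phi, m[0, 2], m[1, 2]
```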
Compute a rigid affine transformation from points and angles.
Row coordinate of the original point.
Column coordinate of the original point.
Angle of the original point.
Row coordinate of the transformed point.
Column coordinate of the transformed point.
Angle of the transformed point.
Output transformation matrix.
Approximate an affine transformation from point-to-line correspondences.
Type of the transformation to compute. Default: "rigid"
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the first point on the corresponding line.
Y coordinates of the first point on the corresponding line.
X coordinates of the second point on the corresponding line.
Y coordinates of the second point on the corresponding line.
Output transformation matrix.
Approximate a rigid affine transformation from point correspondences.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Output transformation matrix.
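A least-squares rigid fit of the kind described above is commonly computed Procrustes/Kabsch-style: center both point sets, take the SVD of their cross-covariance, and read off the rotation. A minimal numpy sketch of that idea (not the halcondotnet API; no outlier handling):

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """src, dst: (N,2) points; returns 2x3 matrix M with dst ~ M @ [x, y, 1]."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # 2x2 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return np.column_stack([R, t])
```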
Approximate a similarity transformation from point correspondences.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Output transformation matrix.
Approximate an anisotropic similarity transformation from point correspondences.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Output transformation matrix.
Approximate an affine transformation from point correspondences.
X coordinates of the original points.
Y coordinates of the original points.
X coordinates of the transformed points.
Y coordinates of the transformed points.
Output transformation matrix.
Project pixel coordinates using a homogeneous projective transformation matrix.
Homogeneous projective transformation matrix.
Input pixel(s) (row coordinate). Default: 64
Input pixel(s) (column coordinate). Default: 64
Output pixel(s) (row coordinate).
Output pixel(s) (column coordinate).
Project a homogeneous 2D point using a projective transformation matrix.
Homogeneous projective transformation matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (w coordinate).
Output point (x coordinate).
Output point (y coordinate).
Output point (w coordinate).
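The projection performed by these two operators is a single matrix-vector product: multiply the homogeneous input point by the 3x3 matrix, and (for the pixel variant) divide by the resulting w component. A one-function numpy sketch (illustrative only, not halcondotnet code):

```python
import numpy as np

def projective_trans_pixel(hom_mat2d, row, col):
    """Map a pixel (row, col) through a 3x3 projective matrix."""
    x, y, w = hom_mat2d @ np.array([row, col, 1.0])
    return x / w, y / w   # dehomogenize
```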
Apply an arbitrary affine 2D transformation to pixel coordinates.
Input transformation matrix.
Input pixel(s) (row coordinate). Default: 64
Input pixel(s) (column coordinate). Default: 64
Output pixel(s) (row coordinate).
Output pixel(s) (column coordinate).
Apply an arbitrary affine 2D transformation to points.
Input transformation matrix.
Input point(s) (x or row coordinate). Default: 64
Input point(s) (y or column coordinate). Default: 64
Output point(s) (x or row coordinate).
Output point(s) (y or column coordinate).
Compute the determinant of a homogeneous 2D transformation matrix.
Input transformation matrix.
Determinant of the input matrix.
Transpose a homogeneous 2D transformation matrix.
Input transformation matrix.
Output transformation matrix.
Invert a homogeneous 2D transformation matrix.
Input transformation matrix.
Output transformation matrix.
Multiply two homogeneous 2D transformation matrices.
Left input transformation matrix.
Right input transformation matrix.
Output transformation matrix.
Add a reflection to a homogeneous 2D transformation matrix.
Input transformation matrix.
Point that defines the axis (x coordinate). Default: 16
Point that defines the axis (y coordinate). Default: 32
Output transformation matrix.
Add a reflection to a homogeneous 2D transformation matrix.
Input transformation matrix.
First point of the axis (x coordinate). Default: 0
First point of the axis (y coordinate). Default: 0
Second point of the axis (x coordinate). Default: 16
Second point of the axis (y coordinate). Default: 32
Output transformation matrix.
Add a slant to a homogeneous 2D transformation matrix.
Input transformation matrix.
Slant angle. Default: 0.78
Coordinate axis that is slanted. Default: "x"
Output transformation matrix.
Add a slant to a homogeneous 2D transformation matrix.
Input transformation matrix.
Slant angle. Default: 0.78
Coordinate axis that is slanted. Default: "x"
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a rotation to a homogeneous 2D transformation matrix.
Input transformation matrix.
Rotation angle. Default: 0.78
Output transformation matrix.
Add a rotation to a homogeneous 2D transformation matrix.
Input transformation matrix.
Rotation angle. Default: 0.78
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a scaling to a homogeneous 2D transformation matrix.
Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Output transformation matrix.
Add a scaling to a homogeneous 2D transformation matrix.
Input transformation matrix.
Scale factor along the x-axis. Default: 2
Scale factor along the y-axis. Default: 2
Fixed point of the transformation (x coordinate). Default: 0
Fixed point of the transformation (y coordinate). Default: 0
Output transformation matrix.
Add a translation to a homogeneous 2D transformation matrix.
Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Output transformation matrix.
Add a translation to a homogeneous 2D transformation matrix.
Input transformation matrix.
Translation along the x-axis. Default: 64
Translation along the y-axis. Default: 64
Output transformation matrix.
Generate the homogeneous transformation matrix of the 2D identity transformation.
Transformation matrix.
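The hom_mat2d_* operators above all follow one pattern: start from the identity matrix and successively "add" elementary transformations, optionally about a fixed point. A numpy sketch of that composition (illustrative math, not halcondotnet code; it assumes HALCON's convention that a newly added transformation is applied after the existing one, i.e. left-multiplied):

```python
import numpy as np

def hom_mat2d_identity():
    return np.eye(3)

def hom_mat2d_rotate(m, phi, px=0.0, py=0.0):
    """Add a rotation by phi about the fixed point (px, py): T(p) R T(-p)."""
    c, s = np.cos(phi), np.sin(phi)
    rot = np.array([[c, -s, px - c*px + s*py],
                    [s,  c, py - s*px - c*py],
                    [0.0, 0.0, 1.0]])
    return rot @ m

def hom_mat2d_translate(m, tx, ty):
    """Add a translation by (tx, ty)."""
    tr = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
    return tr @ m
```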
This operator is inoperable. It had the following function: Clear all scattered data interpolators.
Clear a scattered data interpolator.
Handle of the scattered data interpolator.
Interpolation of scattered data using a scattered data interpolator.
Handle of the scattered data interpolator.
Row coordinates of points to be interpolated.
Column coordinates of points to be interpolated.
Values of interpolated points.
Create an interpolator for the interpolation of scattered data.
Method for the interpolation. Default: "thin_plate_splines"
Row coordinates of the points used for the interpolation.
Column coordinates of the points used for the interpolation.
Values of the points used for the interpolation.
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Handle of the scattered data interpolator.
Create an image from the interpolation of scattered data.
Interpolated image.
Method for the interpolation. Default: "thin_plate_splines"
Row coordinates of the points used for the interpolation.
Column coordinates of the points used for the interpolation.
Values of the points used for the interpolation.
Width of the interpolated image. Default: 640
Height of the interpolated image. Default: 480
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Interpolation of an image.
Image to interpolate.
Region to interpolate.
Interpolated image.
Method for the interpolation. Default: "thin_plate_splines"
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Read out the system time.
Milliseconds (0..999).
Seconds (0..59).
Minutes (0..59).
Hours (0..23).
Day of the month (1..31).
Day of the year (1..366).
Month (1..12).
Year (xxxx).
Query compute device parameters.
Compute device handle.
Name of the parameter to query. Default: "buffer_cache_capacity"
Value of the parameter.
Set parameters of a compute device.
Compute device handle.
Name of the parameter to set. Default: "buffer_cache_capacity"
New parameter value.
Close all compute devices.
Close a compute device.
Compute device handle.
Deactivate all compute devices.
Deactivate a compute device.
Compute device handle.
Activate a compute device.
Compute device handle.
Initialize a compute device.
Compute device handle.
List of operators to prepare. Default: "all"
Open a compute device.
Compute device identifier.
Compute device handle.
Get information on a compute device.
Compute device handle.
Name of the information to query. Default: "name"
Returned information.
Get the list of available compute devices.
List of available compute devices.
Clear the buffer of a serial connection.
Serial interface handle.
Buffer to be cleared. Default: "input"
Write to a serial connection.
Serial interface handle.
Characters to write (as tuple of integers).
Read from a serial device.
Serial interface handle.
Number of characters to read. Default: 1
Read characters (as tuple of integers).
Get the parameters of a serial device.
Serial interface handle.
Speed of the serial interface.
Number of data bits of the serial interface.
Type of flow control of the serial interface.
Parity of the serial interface.
Number of stop bits of the serial interface.
Total timeout of the serial interface in ms.
Inter-character timeout of the serial interface in ms.
Set the parameters of a serial device.
Serial interface handle.
Speed of the serial interface. Default: "unchanged"
Number of data bits of the serial interface. Default: "unchanged"
Type of flow control of the serial interface. Default: "unchanged"
Parity of the serial interface. Default: "unchanged"
Number of stop bits of the serial interface. Default: "unchanged"
Total timeout of the serial interface in ms. Default: "unchanged"
Inter-character timeout of the serial interface in ms. Default: "unchanged"
This operator is inoperable. It had the following function: Close all serial devices.
Close a serial device.
Serial interface handle.
Open a serial device.
Name of the serial port. Default: "COM1"
Serial interface handle.
Delay the execution of the program.
Number of seconds by which the execution of the program will be delayed. Default: 10
Execute a system command.
Command to be called by the system. Default: "ls"
Set HALCON system parameters.
Name of the system parameter to be changed. Default: "init_new_image"
New value of the system parameter. Default: "true"
Activate and deactivate HALCON control modes.
Desired control mode. Default: "default"
Initialize the HALCON system.
Default image width (in pixels). Default: 128
Default image height (in pixels). Default: 128
Usual number of channels. Default: 0
Get current value of HALCON system parameters.
Desired system parameter. Default: "init_new_image"
Current value of the system parameter.
State of the HALCON control modes.
Tuple of the currently activated control modes.
Query the error text of a HALCON error number.
HALCON error code.
Corresponding error message.
Elapsed time.
Process time since the program start.
Number of entries in the HALCON database.
Relation of interest of the HALCON database. Default: "object"
Number of tuples in the relation.
Receive an image over a socket connection.
Received image.
Socket number.
Send an image over a socket connection.
Image to be sent.
Socket number.
Receive regions over a socket connection.
Received regions.
Socket number.
Send regions over a socket connection.
Regions to be sent.
Socket number.
Receive an XLD object over a socket connection.
Received XLD object.
Socket number.
Send an XLD object over a socket connection.
XLD object to be sent.
Socket number.
Receive a tuple over a socket connection.
Socket number.
Received tuple.
Send a tuple over a socket connection.
Socket number.
Tuple to be sent.
Receive arbitrary data from external devices or applications using a generic socket connection.
Socket number.
Specification of how to convert the data to tuples. Default: "z"
Value (or tuple of values) holding the received and converted data.
IP address or hostname and network port of the communication partner.
Send arbitrary data to external devices or applications using a generic socket connection.
Socket number.
Specification of how to convert the data. Default: "z"
Value (or tuple of values) holding the data to send.
IP address or hostname and network port of the communication partner. Default: []
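The idea behind these generic send/receive operators — raw bytes on a socket, converted to and from tuple values by a format specification — can be sketched with Python's struct format strings over a local socket pair. This is an analogy only: HALCON's conversion language (e.g. "z" for strings) differs from struct syntax, and these helper names are invented.

```python
import socket
import struct

def send_values(sock, fmt, *values):
    """Pack values according to fmt and send them as raw bytes."""
    sock.sendall(struct.pack(fmt, *values))

def recv_values(sock, fmt):
    """Receive exactly the bytes fmt describes and convert them back to a tuple."""
    data = sock.recv(struct.calcsize(fmt))
    return struct.unpack(fmt, data)
```

For example, '<id' describes a little-endian 32-bit integer followed by a double, so both ends agree on the wire layout.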
Get the value of a socket parameter.
Socket number.
Name of the socket parameter.
Value of the socket parameter.
Set a socket parameter.
Socket number.
Name of the socket parameter.
Value of the socket parameter. Default: "on"
Determine the HALCON data type of the next socket data.
Socket number.
Data type of next HALCON data.
Get the socket descriptor of a socket used by the operating system.
Socket number.
Socket descriptor used by the operating system.
This operator is inoperable. It had the following function: Close all opened sockets.
Close a socket.
Socket number.
Accept a connection request on a listening socket of the protocol type 'HALCON' or 'TCP'/'TCP4'/'TCP6'.
Socket number of the accepting socket.
Should the operator wait until a connection request arrives? Default: "auto"
Socket number.
Open a socket and connect it to an accepting socket.
Hostname of the computer to connect to. Default: "localhost"
Port number.
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Socket number.
Open a socket that accepts connection requests.
Port number. Default: 3000
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Socket number.
Returns the extended error information for the calling thread's last HALCON error.
Operator that set the error code.
Extended error code.
Extended error message.
Query of used modules and the module key.
Names of used modules.
Key for license manager.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Rectified image of camera 1.
Rectified image of camera 2.
Distance image.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Compute the disparities of a rectified stereo image pair using multi-scanline optimization.
Rectified image of camera 1.
Rectified image of camera 2.
Disparity map.
Score of the calculated disparity.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Compute the distance values for a rectified stereo image pair using multigrid methods.
Rectified image of camera 1.
Rectified image of camera 2.
Distance image.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Compute the disparities of a rectified stereo image pair using multigrid methods.
Rectified image of camera 1.
Rectified image of camera 2.
Disparity map.
Score of the calculated disparity if CalculateScore is set to 'true'.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Compute the projective 3D reconstruction of points based on the fundamental matrix.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Fundamental matrix.
9x9 covariance matrix of the fundamental matrix. Default: []
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Covariance matrices of the reconstructed points.
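The triangulation core of such a reconstruction is linear: given two 3x4 projection matrices and a pair of corresponding image points, each point contributes two rows of the form x·P[2] - P[0] = 0, and the homogeneous 3D point is the null vector of the stacked system. A numpy sketch with the projection matrices assumed given (the HALCON operator derives projective cameras from the fundamental matrix internally):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: homogeneous X = (X, Y, Z, W) with x1 ~ P1 X, x2 ~ P2 X."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]   # null vector, defined up to scale
```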
Compute the projective rectification of weakly calibrated binocular stereo images.
Image coding the rectification of the first image.
Image coding the rectification of the second image.
Fundamental matrix.
9x9 covariance matrix of the fundamental matrix. Default: []
Width of the first image. Default: 512
Height of the first image. Default: 512
Width of the second image. Default: 512
Height of the second image. Default: 512
Subsampling factor. Default: 1
Type of mapping. Default: "no_map"
9x9 covariance matrix of the rectified fundamental matrix.
Projective transformation of the first image.
Projective transformation of the second image.
Compute the fundamental matrix and the radial distortion coefficient given a set of image point correspondences and reconstruct 3D points.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Estimation algorithm. Default: "gold_standard"
Computed fundamental matrix.
Computed radial distortion coefficient.
Root-Mean-Square epipolar distance error.
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Compute the fundamental matrix from the relative orientation of two cameras.
Relative orientation of the cameras (3D pose).
6x6 covariance matrix of relative pose. Default: []
Parameters of the first camera.
Parameters of the second camera.
Computed fundamental matrix.
9x9 covariance matrix of the fundamental matrix.
Compute the fundamental matrix from an essential matrix.
Essential matrix.
9x9 covariance matrix of the essential matrix. Default: []
Camera matrix of the first camera.
Camera matrix of the second camera.
Computed fundamental matrix.
9x9 covariance matrix of the fundamental matrix.
Compute the relative orientation between two cameras given image point correspondences and known camera parameters and reconstruct 3D space points.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera parameters of the 1st camera.
Camera parameters of the 2nd camera.
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Computed relative orientation of the cameras (3D pose).
6x6 covariance matrix of the relative camera orientation.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D points.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera matrix of the 1st camera.
Camera matrix of the 2nd camera.
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
Computed essential matrix.
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Compute the fundamental matrix given a set of image point correspondences and reconstruct 3D points.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Estimation algorithm. Default: "normalized_dlt"
Computed fundamental matrix.
9x9 covariance matrix of the fundamental matrix.
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed points in projective 3D space.
Y coordinates of the reconstructed points in projective 3D space.
Z coordinates of the reconstructed points in projective 3D space.
W coordinates of the reconstructed points in projective 3D space.
Covariance matrices of the reconstructed 3D points.
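The linear core of this estimation is the classic 8-point algorithm: each correspondence contributes one row of the epipolar constraint p2ᵀ F p1 = 0, the null vector of the stacked system gives F, and the rank-2 constraint is enforced by zeroing the smallest singular value. A numpy sketch (illustrative only; it omits the coordinate normalization and covariance handling the operator performs):

```python
import numpy as np

def eight_point(p1, p2):
    """p1, p2: (N,2) corresponding points, N >= 8; returns rank-2 fundamental matrix."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector = flattened F
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                      # enforce det(F) = 0
    return U @ np.diag(S) @ Vt
```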
Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically finding correspondences between image points.
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Gray value match metric. Default: "ncc"
Size of gray value masks. Default: 10
Average row coordinate offset of corresponding points. Default: 0
Average column coordinate offset of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative rotation of the second image with respect to the first image. Default: 0.0
Threshold for gray value matching. Default: 0.7
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "gold_standard"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Computed fundamental matrix.
Computed radial distortion coefficient.
Root-Mean-Square epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 1st camera.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Computed relative orientation of the cameras (3D pose).
6x6 covariance matrix of the relative orientation.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image points.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Camera matrix of the 1st camera.
Camera matrix of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the essential matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Computed essential matrix.
9x9 covariance matrix of the essential matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between image points.
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the fundamental matrix and for special camera orientations. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Computed fundamental matrix.
9x9 covariance matrix of the fundamental matrix.
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Compute the distance values for a rectified stereo image pair using correlation techniques.
Rectified image of camera 1.
Rectified image of camera 2.
Distance image.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Compute the disparities of a rectified image pair using correlation techniques.
Rectified image of camera 1.
Rectified image of camera 2.
Disparity map.
Evaluation of the disparity values.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.5
Downstream filters. Default: "none"
Subpixel interpolation of disparities. Default: "none"
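The disparity search behind these parameters can be illustrated by brute-force matching along a rectified scanline: for each reference window, try every disparity in the expected range and keep the one with the best correlation score. This is a minimal Python sketch of the idea (here with NCC on a single row), not the HALCON implementation; the function names are hypothetical.

```python
# Sketch of disparity search by normalized cross-correlation (NCC) along a
# rectified scanline; illustrative only, not the HALCON implementation.

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da > 0 and db > 0 else 0.0

def best_disparity(row1, row2, col, half, min_disp, max_disp, score_thresh=0.5):
    """Disparity in [min_disp, max_disp] maximizing NCC; None if below threshold."""
    ref = row1[col - half:col + half + 1]
    best, best_d = score_thresh, None
    for d in range(min_disp, max_disp + 1):
        c2 = col - d                      # corresponding column in image 2
        if c2 - half < 0 or c2 + half + 1 > len(row2):
            continue
        s = ncc(ref, row2[c2 - half:c2 + half + 1])
        if s > best:
            best, best_d = s, d
    return best_d
```

A score threshold (compare the "Threshold of the correlation function" parameter) rejects matches in untextured areas where the correlation peak is unreliable.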
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Internal parameters of the projective camera 1.
Internal parameters of the projective camera 2.
Point transformation from camera 2 to camera 1.
Row coordinate of a point in image 1.
Column coordinate of a point in image 1.
Row coordinate of the corresponding point in image 2.
Column coordinate of the corresponding point in image 2.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Distance of the 3D point to the lines of sight.
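Because the two lines of sight are generally skew, the reconstructed 3D point is commonly taken as the midpoint of their common perpendicular, and the reported distance is the gap to either line. A minimal Python sketch of that geometric step (assumed rays already in a common coordinate system; not the HALCON implementation):

```python
# Sketch of intersecting two lines of sight: the 3D point is the midpoint of
# the shortest segment between the two (generally skew) rays. Illustrative
# only; HALCON additionally derives the rays from the camera geometry.

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add_scaled(p, t, d): return [x + t * y for x, y in zip(p, d)]

def closest_point(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of rays p1+t*d1 and p2+s*d2,
    plus the distance of that point to either line."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add_scaled(p1, t, d1)       # closest point on ray 1
    q2 = add_scaled(p2, s, d2)       # closest point on ray 2
    mid = [(x + y) / 2 for x, y in zip(q1, q2)]
    gap = dot(sub(q1, q2), sub(q1, q2)) ** 0.5
    return mid, gap / 2
```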
Transform a disparity image into 3D points in a rectified stereo system.
Disparity image.
X coordinates of the points in the rectified camera system 1.
Y coordinates of the points in the rectified camera system 1.
Z coordinates of the points in the rectified camera system 1.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Row coordinate of a point in the rectified image 1.
Column coordinate of a point in the rectified image 1.
Disparity of the images of the world point.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
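For an idealized rectified pair the conversion follows the classic stereo relation Z = f·b/d, after which the image coordinates are back-projected with the pinhole model. A Python sketch under those assumptions (focal length f in pixels, baseline b, shared principal point; illustrative only, not halcondotnet code):

```python
# Sketch of turning a rectified image point plus disparity into a 3D point.
# Assumes an idealized rectified pair: focal length f in pixels, baseline b,
# shared principal point (cx, cy). Illustrative only.

def disparity_to_xyz(row, col, disparity, f, b, cx, cy):
    """Back-project (row, col, disparity) into camera-1 coordinates."""
    z = f * b / disparity          # depth from the classic stereo relation
    x = (col - cx) * z / f         # pinhole back-projection in x
    y = (row - cy) * z / f         # pinhole back-projection in y
    return x, y, z
```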
Transform a disparity value into a distance value in a rectified binocular stereo system.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Disparity between the images of the world point.
Distance of a world point to the rectified camera system.
Transform a distance value into a disparity in a rectified stereo system.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Distance of a world point to camera 1.
Disparity between the images of the point.
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common rectified image plane.
Image containing the mapping data of camera 1.
Image containing the mapping data of camera 2.
Internal parameters of camera 1.
Internal parameters of camera 2.
Point transformation from camera 2 to camera 1.
Subsampling factor. Default: 1.0
Type of rectification. Default: "viewing_direction"
Type of mapping. Default: "bilinear"
Rectified internal parameters of camera 1.
Rectified internal parameters of camera 2.
Point transformation from the rectified camera 1 to the original camera 1.
Point transformation from the rectified camera 2 to the original camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Determine all camera parameters of a binocular stereo system.
Ordered tuple with all X-coordinates of the calibration marks (in meters).
Ordered tuple with all Y-coordinates of the calibration marks (in meters).
Ordered tuple with all Z-coordinates of the calibration marks (in meters).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Initial values for the internal parameters of camera 1.
Initial values for the internal parameters of camera 2.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Camera parameters to be estimated. Default: "all"
Internal parameters of camera 1.
Internal parameters of camera 2.
Ordered tuple with all poses of the calibration model in relation to camera 1.
Ordered tuple with all poses of the calibration model in relation to camera 2.
Pose of camera 2 in relation to camera 1.
Average error distances in pixels.
Inquiring for possible settings of the HALCON debugging tool.
Available control modes (see also set_spy).
Corresponding state of the control modes.
Control of the HALCON debugging tool.
Control mode. Default: "mode"
State of the control mode to be set. Default: "on"
Current configuration of the HALCON debugging tool.
Control mode. Default: "mode"
State of the control mode.
Read a sheet-of-light model from a file and create a new model.
Name of the sheet-of-light model file. Default: "sheet_of_light_model.solm"
Handle of the sheet-of-light model.
Write a sheet-of-light model to a file.
Handle of the sheet-of-light model.
Name of the sheet-of-light model file. Default: "sheet_of_light_model.solm"
Deserialize a sheet-of-light model.
Handle of the serialized item.
Handle of the sheet-of-light model.
Serialize a sheet-of-light model.
Handle of the sheet-of-light model.
Handle of the serialized item.
Create a calibration object for sheet-of-light calibration.
Width of the object. Default: 0.1
Length of the object. Default: 0.15
Minimum height of the ramp. Default: 0.005
Maximum height of the ramp. Default: 0.04
Filename of the model of the calibration object. Default: "calib_object.dxf"
Calibrate a sheet-of-light setup with a 3D calibration object.
Handle of the sheet-of-light model.
Average back projection error of the optimization.
Get the result of a calibrated measurement performed with the sheet-of-light technique as a 3D object model.
Handle for accessing the sheet-of-light model.
Handle of the resulting 3D object model.
Get the iconic results of a measurement performed with the sheet-of-light technique.
Desired measurement result.
Handle of the sheet-of-light model to be used.
Specify which result of the measurement shall be provided. Default: "disparity"
Apply the calibration transformations to the input disparity image.
Height or range image to be calibrated.
Handle of the sheet-of-light model.
Set sheet-of-light profiles by measured disparities.
Disparity image that contains several profiles.
Handle of the sheet-of-light model.
Poses describing the movement of the scene under measurement between the previously processed profile image and the current profile image.
Process the profile image provided as input and store the resulting disparity to the sheet-of-light model.
Input image.
Handle of the sheet-of-light model.
Pose describing the movement of the scene under measurement between the previously processed profile image and the current profile image.
Set selected parameters of the sheet-of-light model.
Handle of the sheet-of-light model.
Name of the model parameter that shall be adjusted for the sheet-of-light model. Default: "method"
Value of the model parameter that shall be adjusted for the sheet-of-light model. Default: "center_of_gravity"
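The default "center_of_gravity" extraction method can be sketched as follows: per image column, the subpixel row of the laser line is the intensity-weighted mean of all rows whose gray value exceeds a minimum. A minimal Python illustration (hypothetical function name, not the HALCON implementation):

```python
# Sketch of the "center_of_gravity" profile extraction: per image column, the
# laser line's subpixel row is the intensity-weighted mean of the rows whose
# gray value reaches a minimum gray value. Illustrative only.

def profile_center_of_gravity(column, min_gray):
    """column: list of gray values indexed by row. Returns the subpixel row
    of the laser peak, or None if no pixel reaches min_gray."""
    num = den = 0.0
    for row, g in enumerate(column):
        if g >= min_gray:
            num += row * g
            den += g
    return num / den if den > 0 else None
```

This is why the "min_gray" generic parameter matters: it excludes background pixels from the weighted mean.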
Get the value of a parameter, which has been set in a sheet-of-light model.
Handle of the sheet-of-light model.
Name of the generic parameter that shall be queried. Default: "method"
Value of the model parameter that shall be queried.
For a given sheet-of-light model get the names of the generic iconic or control parameters that can be used in the different sheet-of-light operators.
Handle of the sheet-of-light model.
Name of the parameter group. Default: "create_model_params"
List containing the names of the supported generic parameters.
Reset a sheet-of-light model.
Handle of the sheet-of-light model.
This operator is inoperable. It had the following function: Delete all sheet-of-light models and free the allocated memory.
Delete a sheet-of-light model and free the allocated memory.
Handle of the sheet-of-light model.
Create a model to perform 3D-measurements using the sheet-of-light technique.
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Handle for using and accessing the sheet-of-light model.
Shade a height field.
Height field to be shaded.
Shaded image.
Angle between the light source and the positive z-axis (in degrees). Default: 0.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 0.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Should shadows be calculated? Default: "false"
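The shading model behind these parameters is Lambertian: intensity = ambient + albedo · max(0, n·l), with the surface normal derived from the height gradients and the light direction from the slant/tilt angles. A per-pixel Python sketch under those assumptions (illustrative only, not the HALCON implementation):

```python
import math

# Sketch of Lambertian shading of a height field: intensity is
# ambient + albedo * max(0, n·l). The normal comes from the height gradients
# (p, q) = (dz/dx, dz/dy); the light direction from slant/tilt in degrees.
# Illustrative only.

def shade(p, q, slant_deg, tilt_deg, albedo=1.0, ambient=0.0):
    s, t = math.radians(slant_deg), math.radians(tilt_deg)
    lx = math.sin(s) * math.cos(t)       # light direction from slant/tilt
    ly = math.sin(s) * math.sin(t)
    lz = math.cos(s)
    nx, ny, nz = -p, -q, 1.0             # unnormalized surface normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return ambient + albedo * max(0.0, (nx * lx + ny * ly + nz * lz) / norm)
```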
Estimate the albedo of a surface and the amount of ambient light.
Image for which albedo and ambient are to be estimated.
Amount of light reflected by the surface.
Amount of ambient light.
Estimate the slant of a light source and the albedo of a surface.
Image for which slant and albedo are to be estimated.
Angle between the light sources and the positive z-axis (in degrees).
Amount of light reflected by the surface.
Estimate the slant of a light source and the albedo of a surface.
Image for which slant and albedo are to be estimated.
Angle between the light sources and the positive z-axis (in degrees).
Amount of light reflected by the surface.
Estimate the tilt of a light source.
Image for which the tilt is to be estimated.
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Estimate the tilt of a light source.
Image for which the tilt is to be estimated.
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Reconstruct a surface from surface gradients.
The gradient field of the image.
Reconstructed height field.
Type of the reconstruction method. Default: "poisson"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Reconstruct a surface according to the photometric stereo technique.
Array with at least three input images with different directions of illumination.
Reconstructed height field.
The gradient field of the surface.
The albedo of the surface.
Angle between the camera and the direction of illumination (in degrees). Default: 45.0
Angle of the direction of illumination within the object plane (in degrees). Default: 45.0
Types of the requested results. Default: "all"
Type of the reconstruction method. Default: "poisson"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
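With exactly three known light directions, the photometric-stereo step at each pixel reduces to a 3x3 linear system: stacking the light directions into L, the intensities satisfy I = L·g with g = albedo·n, so the albedo is |g| and the normal g/|g|. A Python sketch using Cramer's rule (illustrative only; HALCON accepts more than three images and solves in a least-squares sense):

```python
# Sketch of photometric stereo with exactly three lights: solve L g = I for
# g = albedo * n per pixel; albedo = |g|, normal = g / |g|. Illustrative only.

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def photometric_stereo(L, I):
    """L: three light directions (rows); I: three intensities at one pixel.
    Returns (albedo, unit normal)."""
    D = det3(L)
    g = []
    for col in range(3):                      # Cramer's rule, column by column
        M = [row[:] for row in L]
        for r in range(3):
            M[r][col] = I[r]
        g.append(det3(M) / D)
    albedo = sum(x * x for x in g) ** 0.5
    return albedo, [x / albedo for x in g]
```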
Reconstruct a surface from a gray value image.
Shaded input image.
Reconstructed height field.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstruct a surface from a gray value image.
Shaded input image.
Reconstructed height field.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Reconstruct a surface from a gray value image.
Shaded input image.
Reconstructed height field.
Angle between the light source and the positive z-axis (in degrees). Default: 45.0
Angle between the light source and the x-axis after projection into the xy-plane (in degrees). Default: 45.0
Amount of light reflected by the surface. Default: 1.0
Amount of ambient light. Default: 0.0
Receive a serialized item over a socket connection.
Socket number.
Handle of the serialized item.
Send a serialized item over a socket connection.
Socket number.
Handle of the serialized item.
Write a serialized item to a file.
File handle.
Handle of the serialized item.
Read a serialized item from a file.
File handle.
Handle of the serialized item.
This operator is inoperable. It had the following function: Delete all current existing serialized items.
Delete a serialized item.
Handle of the serialized item.
Access the data pointer of a serialized item.
Handle of the serialized item.
Data pointer of the serialized item.
Size of the serialized item.
Create a serialized item.
Data pointer of the serialized item.
Size of the serialized item.
Copy mode of the serialized item. Default: "true"
Handle of the serialized item.
Fit 3D primitives into a set of 3D points.
Handle of the input 3D object model.
Names of the generic parameters.
Values of the generic parameters.
Handle of the output 3D object model.
Segment a set of 3D points into sub-sets with similar characteristics.
Handle of the input 3D object model.
Names of the generic parameters.
Values of the generic parameters.
Handle of the output 3D object model.
This operator is inoperable. It had the following function: Clear all text results.
Clear a text result.
Text result to be cleared.
Query an iconic value of a text segmentation result.
Returned result.
Text result.
Name of the result to be returned. Default: "all_lines"
Query a control value of a text segmentation result.
Text result.
Name of the result to be returned. Default: "class"
Value of ResultName.
Find text in an image.
Input image.
Text model specifying the text to be segmented.
Result of the segmentation.
Query parameters of a text model.
Text model.
Parameters to be queried. Default: "min_contrast"
Values of Parameters.
Set parameters of a text model.
Text model.
Names of the parameters to be set. Default: "min_contrast"
Values of the parameters to be set. Default: 10
This operator is inoperable. It had the following function: Clear all text models.
Clear a text model.
Text model to be cleared.
Create a text model.
The Mode of the text model. Default: "auto"
OCR Classifier. Default: "Universal_Rej.occ"
New text model.
Create a text model.
New text model.
Select characters from a given region.
Region of text lines in which to select the characters.
Selected characters.
Should dot print characters be detected? Default: "false"
Stroke width of a character. Default: "medium"
Width of a character. Default: 25
Height of a character. Default: 25
Add punctuation? Default: "false"
Do diacritic marks exist? Default: "false"
Method to partition neighboring characters. Default: "none"
Should lines be partitioned? Default: "false"
Distance of fragments. Default: "medium"
Connect fragments? Default: "false"
Maximum size of clutter. Default: 0
Stop execution after this step. Default: "completion"
Segment characters in a given region of an image.
Area in the image where the text lines are located.
Input image.
Image used for the segmentation.
Region of characters.
Method to segment the characters. Default: "local_auto_shape"
Eliminate horizontal and vertical lines? Default: "false"
Should dot print characters be detected? Default: "false"
Stroke width of a character. Default: "medium"
Width of a character. Default: 25
Height of a character. Default: 25
Value to adjust the segmentation. Default: 0
Minimum gray value difference between text and background. Default: 10
Threshold used to segment the characters.
Determine the slant of characters of a text line or paragraph.
Area of text lines.
Input image.
Height of the text lines. Default: 25
Minimum slant of the characters. Default: -0.523599
Maximum slant of the characters. Default: 0.523599
Calculated slant of the characters in the region.
Determine the orientation of a text line or paragraph.
Area of text lines.
Input image.
Height of the text lines. Default: 25
Minimum rotation of the text lines. Default: -0.523599
Maximum rotation of the text lines. Default: 0.523599
Calculated rotation angle of the text lines.
Classify a byte image using a look-up table.
Input image.
Segmented classes.
Handle of the LUT classifier.
Classify an image with a k-Nearest-Neighbor classifier.
Input image.
Segmented classes.
Distance of the pixel's nearest neighbor.
Handle of the k-NN classifier.
Threshold for the rejection of the classification. Default: 0.5
Add training samples from an image to the training data of a k-Nearest-Neighbor classifier.
Training image.
Regions of the classes to be trained.
Handle of the k-NN classifier.
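The classify/reject mechanism of these pixel classifiers can be sketched with a plain nearest-neighbor rule: a pixel's feature vector (e.g. its channel values) gets the class of the closest training sample, and samples beyond a rejection threshold get no class. A minimal Python illustration (k = 1; hypothetical names, not halcondotnet code):

```python
# Sketch of nearest-neighbor pixel classification with rejection: assign the
# class of the closest training sample, or None if everything is farther than
# max_dist. Illustrative only (k = 1).

def classify_pixel(feature, samples, max_dist):
    """samples: list of (feature_vector, class_id). Returns class_id or None."""
    best_d, best_c = max_dist, None
    for vec, cls in samples:
        d = sum((x - y) ** 2 for x, y in zip(feature, vec)) ** 0.5
        if d < best_d:
            best_d, best_c = d, cls
    return best_c
```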
Classify an image with a Gaussian Mixture Model.
Input image.
Segmented classes.
GMM handle.
Threshold for the rejection of the classification. Default: 0.5
Add training samples from an image to the training data of a Gaussian Mixture Model.
Training image.
Regions of the classes to be trained.
GMM handle.
Standard deviation of the Gaussian noise added to the training data. Default: 0.0
Classify an image with a support vector machine.
Input image.
Segmented classes.
SVM handle.
Add training samples from an image to the training data of a support vector machine.
Training image.
Regions of the classes to be trained.
SVM handle.
Classify an image with a multilayer perceptron.
Input image.
Segmented classes.
MLP handle.
Threshold for the rejection of the classification. Default: 0.5
Add training samples from an image to the training data of a multilayer perceptron.
Training image.
Regions of the classes to be trained.
MLP handle.
Construct classes for class_ndim_norm.
Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Multi-channel training image.
Metric to be used. Default: "euclid"
Maximum cluster radius. Default: 10.0
Minimum percentage of the total number of pixels that a cluster must contain in order to be output. Default: 0.01
Cluster radii or half edge lengths.
Coordinates of all cluster centers.
Overlap of the rejection class with the classified objects (1: no overlap).
Train a classifier using a multi-channel image.
Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Multi-channel training image.
Handle of the classifier.
Classify pixels using hyper-cuboids.
Multi-channel input image.
Classification result.
Handle of the classifier.
Classify pixels using hyper-spheres or hyper-cubes.
Multi-channel input image.
Classification result.
Metric to be used. Default: "euclid"
Return a single region or one region per cluster. Default: "single"
Cluster radii or half edge lengths (returned by learn_ndim_norm).
Coordinates of the cluster centers (returned by learn_ndim_norm).
Segment an image using two-dimensional pixel classification.
Input image (first channel).
Input image (second channel).
Region defining the feature space.
Classified regions.
Segment two images by clustering.
First input image.
Second input image.
Classification result.
Threshold (maximum distance to the cluster's center). Default: 15
Number of classes (cluster centers). Default: 5
Compare two images pixel by pixel.
Input image.
Comparison image.
Points in which the two images are similar/different.
Mode: return similar or different pixels. Default: "diff_outside"
Lower bound of the tolerated gray value difference. Default: -5
Upper bound of the tolerated gray value difference. Default: 5
Offset gray value subtracted from the input image. Default: 0
Row coordinate by which the comparison image is translated. Default: 0
Column coordinate by which the comparison image is translated. Default: 0
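The tolerance band can be sketched per pixel: a pixel counts as "different" when the gray value difference falls outside [lower, upper]. A minimal Python illustration (ignoring the offset and translation parameters; not the HALCON implementation):

```python
# Sketch of per-pixel comparison with a tolerance band: pixels whose gray
# value difference lies outside [tol_low, tol_high] are reported as
# "different". Illustrative only; the operator also supports a gray value
# offset and a translation of the comparison image.

def diff_pixels(img1, img2, tol_low, tol_high):
    """img1, img2: equal-size 2D lists. Returns (row, col) of differing pixels."""
    out = []
    for r, (row1, row2) in enumerate(zip(img1, img2)):
        for c, (g1, g2) in enumerate(zip(row1, row2)):
            d = g1 - g2
            if d < tol_low or d > tol_high:
                out.append((r, c))
    return out
```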
Perform a threshold segmentation for extracting characters.
Input image.
Region in which the histogram is computed.
Dark regions (characters).
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Percentage for the gray value difference. Default: 95
Calculated threshold.
Extract regions with equal gray values from an image.
Label image.
Regions having a constant gray value.
Suppress non-maximum points on an edge.
Amplitude (gradient magnitude) image.
Image with thinned edge regions.
Select horizontal/vertical or undirected NMS. Default: "hvnms"
Suppress non-maximum points on an edge using a direction image.
Amplitude (gradient magnitude) image.
Direction image.
Image with thinned edge regions.
Select non-maximum-suppression or interpolating NMS. Default: "nms"
Perform a hysteresis threshold operation on an image.
Input image.
Segmented region.
Lower threshold for the gray values. Default: 30
Upper threshold for the gray values. Default: 60
Maximum length of a path of "potential" points to reach a "secure" point. Default: 10
Segment an image using binary thresholding.
Input image.
Segmented output region.
Segmentation method. Default: "max_separability"
Extract foreground or background? Default: "dark"
Used threshold.
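The "max_separability" method chooses the threshold that maximizes the between-class variance of the gray value histogram, i.e. Otsu's method. A Python sketch of that histogram analysis (illustrative only, not the HALCON implementation):

```python
# Sketch of automatic binary thresholding in the spirit of "max_separability"
# (Otsu's method): pick the threshold maximizing the between-class variance
# of the gray value histogram. Illustrative only.

def otsu_threshold(values, levels=256):
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                     # pixels in the "dark" class (<= t)
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2    # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```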
Segment an image using local thresholding.
Input image.
Segmented output region.
Segmentation method. Default: "adapted_std_deviation"
Extract foreground or background? Default: "dark"
List of generic parameter names. Default: []
List of generic parameter values. Default: []
Threshold an image by local mean and standard deviation analysis.
Input image.
Segmented regions.
Mask width for mean and deviation calculation. Default: 15
Mask height for mean and deviation calculation. Default: 15
Factor for the standard deviation of the gray values. Default: 0.2
Minimum gray value difference from the mean. Default: 2
Threshold type. Default: "dark"
Segment an image using a local threshold.
Input image.
Image containing the local thresholds.
Segmented regions.
Offset applied to ThresholdImage. Default: 5.0
Extract light, dark or similar areas? Default: "light"
Segment an image using global threshold.
Input image.
Segmented region.
Lower threshold for the gray values. Default: 128.0
Upper threshold for the gray values. Default: 255.0
Extract level crossings from an image with subpixel accuracy.
Input image.
Extracted level crossings.
Threshold for the level crossings. Default: 128
Segment an image using regiongrowing for multi-channel images.
Input image.
Segmented regions.
Metric for the distance of the feature vectors. Default: "2-norm"
Lower threshold for the features' distance. Default: 0.0
Upper threshold for the features' distance. Default: 20.0
Minimum size of the output regions. Default: 30
Segment an image using regiongrowing.
Input image.
Segmented regions.
Vertical distance between tested pixels (height of the raster). Default: 3
Horizontal distance between tested pixels (width of the raster). Default: 3
Points with a gray value difference less than or equal to tolerance are accumulated into the same object. Default: 6.0
Minimum size of the output regions. Default: 100
Perform a regiongrowing using mean gray values.
Input image.
Segmented regions.
Row coordinates of the starting points. Default: []
Column coordinates of the starting points. Default: []
Maximum deviation from the mean. Default: 5.0
Minimum size of a region. Default: 100
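The growing criterion can be sketched as a breadth-first expansion from a seed: a neighbor joins the region while its gray value stays within the tolerance of the region's running mean. A minimal Python illustration on a tiny 2D list (4-connectivity; illustrative only, not the HALCON implementation):

```python
from collections import deque

# Sketch of region growing from a seed pixel: 4-neighbors are added while
# their gray value stays within `tolerance` of the region's running mean.
# Illustrative only.

def grow_region(img, seed, tolerance):
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in region:
                if abs(img[rr][cc] - total / len(region)) <= tolerance:
                    region.add((rr, cc))
                    total += img[rr][cc]   # keep the running mean up to date
                    q.append((rr, cc))
    return region
```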
Segment an image by "pouring water" over it.
Input image.
Segmented regions.
Mode of operation. Default: "all"
All gray values smaller than this threshold are disregarded. Default: 0
All gray values larger than this threshold are disregarded. Default: 255
Extract watershed basins from an image using a threshold.
Image to be segmented.
Segments found (dark basins).
Threshold for the watersheds. Default: 10
Extract watersheds and basins from an image.
Input image.
Segmented basins.
Watersheds between the basins.
Extract zero crossings from an image.
Input image.
Zero crossings.
Extract zero crossings from an image with subpixel accuracy.
Input image.
Extracted zero crossings.
Threshold operator for signed images.
Input image.
Positive and negative regions.
Regions smaller than MinSize are suppressed. Default: 20
Regions whose maximum absolute gray value is smaller than MinGray are suppressed. Default: 5.0
Regions that have a gray value smaller than Threshold (or larger than -Threshold) are suppressed. Default: 2.0
Expand a region starting at a given line.
Input image.
Extracted segments.
Row or column coordinate. Default: 256
Stopping criterion. Default: "gradient"
Segmentation mode (row or column). Default: "row"
Threshold for the expansion. Default: 3.0
Detect all local minima in an image.
Image to be processed.
Extracted local minima as regions.
Detect all gray value lowlands.
Image to be processed.
Extracted lowlands as regions (one region for each lowland).
Detect the centers of all gray value lowlands.
Image to be processed.
Centers of gravity of the extracted lowlands as regions (one region for each lowland).
Detect all local maxima in an image.
Input image.
Extracted local maxima as a region.
Detect all gray value plateaus.
Input image.
Extracted plateaus as regions (one region for each plateau).
Detect the centers of all gray value plateaus.
Input image.
Centers of gravity of the extracted plateaus as regions (one region for each plateau).
Determine gray value thresholds from a histogram.
Gray value histogram.
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Minimum thresholds.
Maximum thresholds.
Segment an image using thresholds determined from its histogram.
Input image.
Regions with gray values within the automatically determined intervals.
Sigma for the Gaussian smoothing of the histogram. Default: 2.0
Segment an image using an automatically determined threshold.
Input image.
Dark regions of the image.
Fast thresholding of images using global thresholds.
Input image.
Segmented regions.
Lower threshold for the gray values. Default: 128
Upper threshold for the gray values. Default: 255
Minimum size of objects to be extracted. Default: 20
Transform a region in polar coordinates back to cartesian coordinates.
Input region.
Output region.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the column coordinate 0 of PolarRegion to. Default: 0.0
Angle of the ray to map the column coordinate WidthIn-1 of PolarRegion to. Default: 6.2831853
Radius of the circle to map the row coordinate 0 of PolarRegion to. Default: 0
Radius of the circle to map the row coordinate HeightIn-1 of PolarRegion to. Default: 100
Width of the virtual input image. Default: 512
Height of the virtual input image. Default: 512
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Transform a region within an annular arc to polar coordinates.
Input region.
Output region.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to column coordinate 0 of PolarTransRegion. Default: 0.0
Angle of the ray to be mapped to column coordinate Width-1 of PolarTransRegion. Default: 6.2831853
Radius of the circle to be mapped to row coordinate 0 of PolarTransRegion. Default: 0
Radius of the circle to be mapped to row coordinate Height-1 of PolarTransRegion. Default: 100
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Merge regions from line scan images.
Current input regions.
Merged regions from the previous iteration.
Current regions, merged with old ones where applicable.
Regions from the previous iteration which could not be merged with the current ones.
Height of the line scan images. Default: 512
Image line of the current image, which touches the previous image. Default: "top"
Maximum number of images for a single region. Default: 3
Partition a region into rectangles of approximately equal size.
Region to be partitioned.
Partitioned region.
Width of the individual rectangles.
Height of the individual rectangles.
Partition a region horizontally at positions of small vertical extent.
Region to be partitioned.
Partitioned region.
Approximate width of the resulting region parts.
Maximum percentage shift of the split position. Default: 20
Convert regions to a label image.
Regions to be converted.
Result image of dimension Width * Height containing the converted regions.
Pixel type of the result image. Default: "int2"
Width of the image to be generated. Default: 512
Height of the image to be generated. Default: 512
Convert a region into a binary byte-image.
Regions to be converted.
Result image of dimension Width * Height containing the converted regions.
Gray value in which the regions are displayed. Default: 255
Gray value in which the background is displayed. Default: 0
Width of the image to be generated. Default: 512
Height of the image to be generated. Default: 512
Return the union of two regions.
Region for which the union with all regions in Region2 is to be computed.
Regions which should be added to Region1.
Resulting regions.
Return the union of all input regions.
Regions of which the union is to be computed.
Union of all input regions.
Compute the closest-point transformation of a region.
Region for which the distance to the border is computed.
Image containing the distance information.
Image containing the coordinates of the closest points.
Type of metric to be used for the closest-point transformation. Default: "city-block"
Compute the distance for pixels inside (true) or outside (false) the input region. Default: "true"
Mode in which the coordinates of the closest points are returned. Default: "absolute"
Width of the output images. Default: 640
Height of the output images. Default: 480
Compute the distance transformation of a region.
Region for which the distance to the border is computed.
Image containing the distance information.
Type of metric to be used for the distance transformation. Default: "city-block"
Compute the distance for pixels inside (true) or outside (false) the input region. Default: "true"
Width of the output image. Default: 640
Height of the output image. Default: 480
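The "city-block" metric admits a classic two-pass computation: a forward raster pass propagates distances from the top-left neighbors, a backward pass from the bottom-right ones. A Python sketch on a 0/1 mask (illustrative only, not the HALCON implementation):

```python
# Sketch of the city-block distance transform: two raster passes propagate
# the distance to the nearest background (0) pixel. Illustrative only.

INF = 10 ** 9

def distance_transform(mask):
    """mask: 2D list of 0/1; returns city-block distance to the nearest 0."""
    h, w = len(mask), len(mask[0])
    d = [[0 if mask[r][c] == 0 else INF for c in range(w)] for r in range(h)]
    for r in range(h):                      # forward pass: top/left neighbors
        for c in range(w):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    for r in range(h - 1, -1, -1):          # backward pass: bottom/right
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < w - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d
```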
Compute the skeleton of a region.
Region to be thinned.
Resulting skeleton.
Apply a projective transformation to a region.
Input regions.
Output regions.
Homogeneous projective transformation matrix.
Interpolation method for the transformation. Default: "bilinear"
Apply an arbitrary affine 2D transformation to regions.
Region(s) to be rotated and scaled.
Transformed output region(s).
Input transformation matrix.
Should the transformation be done using interpolation? Default: "nearest_neighbor"
Reflect a region about an axis.
Region(s) to be reflected.
Reflected region(s).
Axis of symmetry. Default: "row"
Twice the coordinate of the axis of symmetry. Default: 512
Zoom a region.
Region(s) to be zoomed.
Zoomed region(s).
Scale factor in x-direction. Default: 2.0
Scale factor in y-direction. Default: 2.0
Translate a region.
Region(s) to be moved.
Translated region(s).
Row coordinate of the vector by which the region is to be moved. Default: 30
Column coordinate of the vector by which the region is to be moved. Default: 30
Find junctions and end points in a skeleton.
Input skeletons.
Extracted end points.
Extracted junctions.
Calculate the intersection of two regions.
Regions to be intersected with all regions in Region2.
Regions with which Region1 is intersected.
Result of the intersection.
Partition the image plane using given regions.
Regions for which the separating lines are to be determined.
Output region containing the separating lines.
Mode of operation. Default: "mixed"
Fill up holes in regions.
Input regions containing holes.
Regions without holes.
Fill up holes in regions having given shape features.
Input region(s).
Output region(s) with filled holes.
Shape feature used. Default: "area"
Minimum value for Feature. Default: 1.0
Maximum value for Feature. Default: 100.0
Fill gaps between regions or split overlapping regions.
Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Expanded or separated regions.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Clip a region relative to its smallest surrounding rectangle.
Regions to be clipped.
Clipped regions.
Number of rows clipped at the top. Default: 1
Number of rows clipped at the bottom. Default: 1
Number of columns clipped at the left. Default: 1
Number of columns clipped at the right. Default: 1
Clip a region to a rectangle.
Region to be clipped.
Clipped regions.
Row coordinate of the upper left corner of the rectangle. Default: 0
Column coordinate of the upper left corner of the rectangle. Default: 0
Row coordinate of the lower right corner of the rectangle. Default: 256
Column coordinate of the lower right corner of the rectangle. Default: 256
Rank operator for regions.
Region(s) to be transformed.
Resulting region(s).
Width of the filter mask. Default: 15
Height of the filter mask. Default: 15
Minimum number of points lying within the filter mask. Default: 70
Compute connected components of a region.
Input region.
Connected components.
Calculate the symmetric difference of two regions.
Input region 1.
Input region 2.
Resulting region.
Calculate the difference of two regions.
Regions to be processed.
The union of these regions is subtracted from Region.
Resulting region.
Return the complement of a region.
Input region(s).
Complemented regions.
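Viewed as sets of (row, col) pixel coordinates, the region operators above (union, intersection, difference, symmetric difference) behave like plain set algebra. A conceptual Python sketch — HALCON regions are runlength-encoded internally, so this is illustrative only:

```python
# Two small regions as pixel-coordinate sets.
r1 = {(0, 0), (0, 1), (1, 0)}
r2 = {(0, 1), (1, 1)}

union          = r1 | r2   # union of two regions
intersection   = r1 & r2   # intersection
difference     = r1 - r2   # difference (r2 subtracted from r1)
sym_difference = r1 ^ r2   # symmetric difference
```

The complement is the same idea relative to a bounding universe of pixels.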
Determine the connected components of the background of given regions.
Input regions.
Connected components of the background.
Generate a region having a given Hamming distance.
Region to be modified.
Regions having the required Hamming distance.
Width of the region to be changed. Default: 100
Height of the region to be changed. Default: 100
Hamming distance between the old and new regions. Default: 1000
Remove noise from a region.
Regions to be modified.
Less noisy regions.
Mode of noise removal. Default: "n_4"
Transform the shape of a region.
Regions to be transformed.
Transformed regions.
Type of transformation. Default: "convex"
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Regions for which the gaps are to be closed, or which are to be separated.
Image (possibly multi-channel) for gray value or color comparison.
Regions in which no expansion takes place.
Expanded or separated regions.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Maximum difference between the gray value or color at the region's border and a candidate for expansion. Default: 32
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Regions for which the gaps are to be closed, or which are to be separated.
Image (possibly multi-channel) for gray value or color comparison.
Regions in which no expansion takes place.
Expanded or separated regions.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Reference gray value or color for comparison. Default: 128
Maximum difference between the reference gray value or color and a candidate for expansion. Default: 32
Split lines represented by one pixel wide, non-branching lines.
Input lines (represented by 1 pixel wide, non-branching regions).
Maximum distance of the line points to the line segment connecting both end points. Default: 3
Row coordinates of the start points of the output lines.
Column coordinates of the start points of the output lines.
Row coordinates of the end points of the output lines.
Column coordinates of the end points of the output lines.
Split lines represented by one pixel wide, non-branching regions.
Input lines (represented by 1 pixel wide, non-branching regions).
Split lines.
Maximum distance of the line points to the line segment connecting both end points. Default: 3
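The splitting criterion above — maximum distance of the line points to the chord connecting both end points — is the same idea as in Ramer-Douglas-Peucker simplification: find the point farthest from the chord and split there if it exceeds the threshold, then recurse. A minimal sketch of that concept (not the HALCON implementation):

```python
import math

def split_line(points, max_dist):
    """points: ordered (row, col) samples of a non-branching line.
    Returns the end-point pairs of the resulting segments."""
    (r1, c1), (r2, c2) = points[0], points[-1]
    length = math.hypot(r2 - r1, c2 - c1)
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        r, c = points[i]
        if length == 0:
            d = math.hypot(r - r1, c - c1)
        else:
            # Perpendicular distance of (r, c) to the chord (r1,c1)-(r2,c2).
            d = abs((r2 - r1) * (c1 - c) - (r1 - r) * (c2 - c1)) / length
        if d > best_d:
            best_i, best_d = i, d
    if best_d > max_dist:
        return (split_line(points[:best_i + 1], max_dist)
                + split_line(points[best_i:], max_dist))
    return [(points[0], points[-1])]
```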
Convert a histogram into a region.
Region containing the histogram.
Input histogram.
Row coordinate of the center of the histogram. Default: 255
Column coordinate of the center of the histogram. Default: 255
Scale factor for the histogram. Default: 1
Eliminate runs of a given length.
Region to be clipped.
Clipped regions.
All runs which are shorter are eliminated. Default: 3
All runs which are longer are eliminated. Default: 1000
Calculate the 3D surface normals of a 3D object model.
Handle of the 3D object model containing 3D point data.
Normals calculation method. Default: "mls"
Names of generic smoothing parameters. Default: []
Values of generic smoothing parameters. Default: []
Handle of the 3D object model with calculated 3D normals.
Smooth the 3D points of a 3D object model.
Handle of the 3D object model containing 3D point data.
Smoothing method. Default: "mls"
Names of generic smoothing parameters. Default: []
Values of generic smoothing parameters. Default: []
Handle of the 3D object model with the smoothed 3D point data.
Create a surface triangulation for a 3D object model.
Handle of the 3D object model containing 3D point data.
Triangulation method. Default: "greedy"
Names of the generic triangulation parameters. Default: []
Values of the generic triangulation parameters. Default: []
Handle of the 3D object model with the triangulated surface.
Additional information about the triangulation process.
This operator is inoperable. It had the following function: Free the memory of all stereo models.
Free the memory of a stereo model.
Handle of the stereo model.
Reconstruct 3D points from calibrated multi-view stereo images.
Handle of the stereo model.
Row coordinates of the detected points.
Column coordinates of the detected points.
Covariance matrices of the detected points. Default: []
Indices of the observing cameras.
Indices of the observed world points.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Indices of the reconstructed 3D points.
Reconstruct surface from calibrated multi-view stereo images.
An image array acquired by the camera setup associated with the stereo model.
Handle of the stereo model.
Handle to the resulting surface.
Get intermediate iconic results of a stereo reconstruction.
Iconic result.
Handle of the stereo model.
Camera indices of the pair ([From, To]).
Name of the iconic result to be returned.
Return the list of image pairs set in a stereo model.
Handle of the stereo model.
Camera indices for the from cameras in the image pairs.
Camera indices for the to cameras in the image pairs.
Specify image pairs to be used for surface stereo reconstruction.
Handle of the stereo model.
Camera indices for the from cameras in the image pairs.
Camera indices for the to cameras in the image pairs.
Get stereo model parameters.
Handle of the stereo model.
Names of the parameters to be set.
Values of the parameters to be set.
Set stereo model parameters.
Handle of the stereo model.
Names of the parameters to be set.
Values of the parameters to be set.
Create a HALCON stereo model.
Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Handle of the stereo model.
Query message queue parameters or information about the queue.
Message queue handle.
Names of the queue parameters or info queries. Default: "max_message_num"
Values of the queue parameters or info queries.
Set message queue parameters or invoke commands on the queue.
Message queue handle.
Names of the queue parameters or action commands. Default: "max_message_num"
Values of the queue parameters or action commands. Default: 1
Receive one or more messages from the message queue.
Message queue handle.
Names of optional generic parameters. Default: "timeout"
Values of optional generic parameters. Default: "infinite"
Handle(s) of the dequeued message(s).
Enqueue one or more messages to the message queue.
Message queue handle.
Handle(s) of message(s) to be enqueued.
Names of optional generic parameters.
Values of optional generic parameters.
Close a message queue handle and release all associated resources.
Message queue handle(s) to be closed.
Create a new empty message queue.
Handle of the newly created message queue.
Query message parameters or information about the message.
Message handle.
Names of the message parameters or info queries. Default: "message_keys"
Message keys the parameter/query should be applied to.
Values of the message parameters or info queries.
Set message parameter or invoke commands on the message.
Message handle.
Names of the message parameters or action commands. Default: "remove_key"
Message keys the parameter/command should be applied to.
Values of the message parameters or action commands.
Retrieve an object associated with the key from the message.
Tuple value retrieved from the message.
Message handle.
Key string or integer.
Add a key/object pair to the message.
Object to be associated with the key.
Message handle.
Key string or integer.
Retrieve a tuple associated with the key from the message.
Message handle.
Key string or integer.
Tuple value retrieved from the message.
Add a key/tuple pair to the message.
Message handle.
Key string or integer.
Tuple value to be associated with the key.
Close a message handle and release all associated resources.
Message handle(s) to be closed.
Create a new empty message.
Handle of the newly created message.
This operator is inoperable. It had the following function: Destroy all condition synchronization objects.
Destroy a condition synchronization object.
Condition synchronization object.
Signal a condition synchronization object.
Condition synchronization object.
Signal a condition synchronization object.
Condition synchronization object.
Bounded wait on the signal of a condition synchronization object.
Condition synchronization object.
Mutex synchronization object.
Timeout in microseconds.
Wait on the signal of a condition synchronization object.
Condition synchronization object.
Mutex synchronization object.
Create a condition variable synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Condition synchronization object.
This operator is inoperable. It had the following function: Destroy all barrier synchronization objects.
Destroy a barrier synchronization object.
Barrier synchronization object.
Wait on the release of a barrier synchronization object.
Barrier synchronization object.
Create a barrier synchronization object.
Barrier attribute. Default: []
Barrier attribute value. Default: []
Barrier team size. Default: 1
Barrier synchronization object.
This operator is inoperable. It had the following function: Clear all event synchronization objects.
Clear the event synchronization object.
Event synchronization object.
Unlock an event synchronization object.
Event synchronization object.
Lock an event synchronization object only if it is unlocked.
Event synchronization object.
Object already locked?
Lock an event synchronization object.
Event synchronization object.
Create an event synchronization object.
Mutex attribute. Default: []
Mutex attribute value. Default: []
Event synchronization object.
This operator is inoperable. It had the following function: Clear all mutex synchronization objects.
Clear the mutex synchronization object.
Mutex synchronization object.
Unlock a mutex synchronization object.
Mutex synchronization object.
Lock a mutex synchronization object.
Mutex synchronization object.
Mutex already locked?
Lock a mutex synchronization object.
Mutex synchronization object.
Create a mutual exclusion synchronization object.
Mutex attribute class. Default: []
Mutex attribute kind. Default: []
Mutex synchronization object.
Query the attributes of a threading / synchronization object.
Threading object.
Class name of threading object.
Name of an attribute.
Value of the attribute.
Set AOP information for operators.
Operator to set information for. Default: ""
Further specific index. Default: ""
Further specific address. Default: ""
Scope of information. Default: "max_threads"
AOP information value.
Return AOP information for operators.
Operator to get information for.
Further index stages. Default: ["iconic_type","parameter:0"]
Further index values. Default: ["byte",""]
Scope of information. Default: "max_threads"
Value of information.
Query indexing structure of AOP information for operators.
Operator to get information for. Default: ""
Further specific index. Default: ""
Further specific address. Default: ""
Name of next index stage.
Values of next index stage.
Check hardware regarding its potential for automatic operator parallelization.
Operators to check. Default: ""
Iconic object types to check. Default: ""
Knowledge file name. Default: ""
Parameter name. Default: "none"
Parameter value. Default: "none"
Write knowledge about hardware-dependent behavior of automatic operator parallelization to file.
Name of knowledge file. Default: ""
Parameter name. Default: "none"
Parameter value. Default: "none"
Load knowledge about hardware-dependent behavior of automatic operator parallelization.
Name of knowledge file. Default: ""
Parameter name. Default: "none"
Parameter value. Default: "none"
Knowledge attributes.
Updated operators.
Calculate the difference of two object tuples.
Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Set single gray values in an image.
Image to be modified.
Row coordinates of the pixels to be modified. Default: 0
Column coordinates of the pixels to be modified. Default: 0
Gray values to be used. Default: 255.0
Paint XLD objects into an image.
XLD objects to be painted into the input image.
Image in which the XLD objects are to be painted.
Image containing the result.
Desired gray value of the XLD object. Default: 255.0
Paint regions into an image.
Regions to be painted into the input image.
Image in which the regions are to be painted.
Image containing the result.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Overpaint regions in an image.
Image in which the regions are to be painted.
Regions to be painted into the input image.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Create an image with a specified constant gray value.
Input image.
Image with constant gray value.
Gray value to be used for the output image. Default: 0
Paint the gray values of an image into another image.
Input image containing the desired gray values.
Input image to be painted over.
Result image.
Overpaint the gray values of an image.
Input image to be painted over.
Input image containing the desired gray values.
Convert an "integer number" into an iconic object.
Created objects.
Tuple of object surrogates.
Convert an iconic object into an "integer number."
Objects for which the surrogates are to be returned.
Starting index of the surrogates to be returned. Default: 1
Number of surrogates to be returned. Default: -1
Tuple containing the surrogates.
Copy an iconic object in the HALCON database.
Objects to be copied.
Copied objects.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Concatenate two iconic object tuples.
Object tuple 1.
Object tuple 2.
Concatenated objects.
Delete an iconic object from the HALCON database.
Objects to be deleted.
Copy an image and allocate new memory for it.
Image to be copied.
Copied image.
Select objects from an object tuple.
Input objects.
Selected objects.
Indices of the objects to be selected. Default: 1
Compare iconic objects regarding equality.
Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Test whether a region is contained in another region.
Test region.
Region for comparison.
Is Region1 contained in Region2?
Test whether the regions of two objects are identical.
Test regions.
Comparative regions.
Boolean result value.
Compare image objects regarding equality.
Test objects.
Comparative objects.
Boolean result value.
Number of objects in a tuple.
Objects to be examined.
Number of objects in the tuple Objects.
Information about the components of an image object.
Image object to be examined.
Required information about object components. Default: "creator"
Components to be examined (0 for region/XLD). Default: 0
Requested information.
Name of the class of an image object.
Image objects to be examined.
Name of class.
Create a three-channel image from a pointer to the interleaved pixels.
Created image with new image matrix.
Pointer to interleaved pixels.
Format of the input pixels. Default: "rgb"
Width of input image. Default: 512
Height of input image. Default: 512
Reserved.
Pixel type of output image. Default: "byte"
Width of output image. Default: 0
Height of output image. Default: 0
Line number of upper left corner of desired image part. Default: 0
Column number of upper left corner of desired image part. Default: 0
Number of used bits per pixel and channel of the output image (-1: All bits are used). Default: -1
Number of bits that the color values of the input pixels are shifted to the right (only uint2 images). Default: 0
Create a region from an XLD polygon.
Input polygon(s).
Created region(s).
Fill mode of the region(s). Default: "filled"
Create a region from an XLD contour.
Input contour(s).
Created region(s).
Fill mode of the region(s). Default: "filled"
Store a polygon as a "filled" region.
Created region.
Line indices of the base points of the region contour. Default: 100
Column indices of the base points of the region contour. Default: 100
Store a polygon as a region.
Created region.
Line indices of the base points of the region contour. Default: 100
Column indices of the base points of the region contour. Default: 100
Store individual pixels as image region.
Created region.
Lines of the pixels in the region. Default: 100
Columns of the pixels in the region. Default: 100
Create a region from a runlength coding.
Created region.
Lines of the runs. Default: 100
Columns of the starting points of the runs. Default: 50
Columns of the ending points of the runs. Default: 200
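A runlength coding describes a region as one chord per entry: a row plus the begin and end columns of a horizontal run. A small Python sketch of the expansion and re-encoding (illustrative only, not HALCON's internal representation):

```python
def runs_to_pixels(rows, col_begin, col_end):
    """Expand a runlength coding (one chord per entry) into a pixel set."""
    return {(r, c)
            for r, cb, ce in zip(rows, col_begin, col_end)
            for c in range(cb, ce + 1)}

def pixels_to_runs(pixels):
    """Re-encode a pixel set as sorted, maximal horizontal runs."""
    rows, col_begin, col_end = [], [], []
    for r, c in sorted(pixels):
        if rows and rows[-1] == r and col_end[-1] == c - 1:
            col_end[-1] = c          # extend the current run
        else:                        # start a new run
            rows.append(r)
            col_begin.append(c)
            col_end.append(c)
    return rows, col_begin, col_end
```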
Create a rectangle of any orientation.
Created rectangle.
Line index of the center. Default: 300.0
Column index of the center. Default: 200.0
Angle of the first edge to the horizontal (in radians). Default: 0.0
Half width. Default: 100.0
Half height. Default: 20.0
Create a rectangle parallel to the coordinate axes.
Created rectangle.
Line of upper left corner point. Default: 30.0
Column of upper left corner point. Default: 20.0
Line of lower right corner point. Default: 100.0
Column of lower right corner point. Default: 200.0
Create a random region.
Created random region with expansion Width x Height.
Maximum horizontal expansion of random region. Default: 128
Maximum vertical expansion of random region. Default: 128
Create an image from three pointers to the pixels (red/green/blue).
Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to first red value (channel 1).
Pointer to first green value (channel 2).
Pointer to first blue value (channel 3).
Create an image from a pointer to the pixels.
Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to first gray value.
Create an image with constant gray value.
Created image with new image matrix.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Create an ellipse sector.
Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Start angle of the sector. Default: 0.0
End angle of the sector. Default: 3.14159
Create an ellipse.
Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Create a circle sector.
Generated circle sector.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Start angle of the circle sector. Default: 0.0
End angle of the circle sector. Default: 3.14159
Create a circle.
Generated circle.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Create a checkered region.
Created checkerboard region.
Largest occurring x value of the region. Default: 511
Largest occurring y value of the region. Default: 511
Width of a field of the checkerboard. Default: 64
Height of a field of the checkerboard. Default: 64
Create a region from lines or pixels.
Created lines/pixel region.
Step width in line direction or zero. Default: 10
Step width in column direction or zero. Default: 10
Type of created pattern. Default: "lines"
Maximum width of pattern. Default: 512
Maximum height of pattern. Default: 512
Create random regions like circles, rectangles and ellipses.
Created regions.
Type of regions to be created. Default: "circle"
Minimum width of the region. Default: 10.0
Maximum width of the region. Default: 20.0
Minimum height of the region. Default: 10.0
Maximum height of the region. Default: 30.0
Minimum rotation angle of the region. Default: -0.7854
Maximum rotation angle of the region. Default: 0.7854
Number of regions. Default: 100
Maximum horizontal expansion. Default: 512
Maximum vertical expansion. Default: 512
Store input lines described in Hesse normal form as regions.
Created regions (one for every line), clipped to maximum image format.
Orientation of the normal vector in radians. Default: 0.0
Distance from the line to the coordinate origin (0,0). Default: 200
Store input lines as regions.
Created regions.
Line coordinates of the starting points of the input lines. Default: 100
Column coordinates of the starting points of the input lines. Default: 50
Line coordinates of the ending points of the input lines. Default: 150
Column coordinates of the ending points of the input lines. Default: 250
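Rasterizing a line segment into a pixel region is classically done with Bresenham's algorithm. A sketch of that concept (the operator above may differ in details such as clipping to the image format):

```python
def line_pixels(r1, c1, r2, c2):
    """Bresenham rasterization of the segment from (r1, c1) to (r2, c2)."""
    pixels = []
    dr, dc = abs(r2 - r1), abs(c2 - c1)
    sr = 1 if r2 >= r1 else -1
    sc = 1 if c2 >= c1 else -1
    err = dc - dr
    r, c = r1, c1
    while True:
        pixels.append((r, c))
        if (r, c) == (r2, c2):
            break
        e2 = 2 * err
        if e2 > -dr:   # step in column direction
            err -= dr
            c += sc
        if e2 < dc:    # step in row direction
            err += dc
            r += sr
    return pixels
```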
Create an empty object tuple.
No objects.
Create an empty region.
Empty region (no pixels).
Create a gray value ramp.
Created image with new image matrix.
Gradient in line direction. Default: 1.0
Gradient in column direction. Default: 1.0
Mean gray value. Default: 128
Line index of reference point. Default: 256
Column index of reference point. Default: 256
Width of image. Default: 512
Height of image. Default: 512
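The ramp above assigns each pixel the mean gray value plus the two gradients weighted by the pixel's offset from the reference point. A sketch of that formula, assuming byte output and clipping to [0, 255]:

```python
def gray_ramp(grad_row, grad_col, mean, ref_row, ref_col, width, height):
    """Gray value at (r, c) = mean + (r - ref_row) * grad_row
                                   + (c - ref_col) * grad_col,
    rounded and clipped to the byte range (assumption for this sketch)."""
    return [[max(0, min(255, round(mean + (r - ref_row) * grad_row
                                        + (c - ref_col) * grad_col)))
             for c in range(width)]
            for r in range(height)]
```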
Create a three-channel image from three pointers on the pixels with storage management.
Created HALCON image.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to the first gray value of the first channel.
Pointer to the first gray value of the second channel.
Pointer to the first gray value of the third channel.
Pointer to the procedure releasing the memory of the image when deleting the object. Default: 0
Create an image from a pointer on the pixels with storage management.
Created HALCON image.
Pixel type. Default: "byte"
Width of image. Default: 512
Height of image. Default: 512
Pointer to the first gray value.
Pointer to the procedure releasing the memory of the image when deleting the object. Default: 0
Create an image with a rectangular domain from a pointer on the pixels (with storage management).
Created HALCON image.
Pointer to the first pixel.
Width of the image. Default: 512
Height of the image. Default: 512
Distance (in bytes) between pixel m in row n and pixel m in row n+1 of the 'input image'.
Distance between two neighboring pixels in bits. Default: 8
Number of used bits per pixel. Default: 8
Copy image data. Default: "false"
Pointer to the procedure releasing the memory of the image when deleting the object. Default: 0
Access to the image data pointer and the image data inside the smallest rectangle of the domain of the input image.
Input image (Himage).
Pointer to the image data.
Width of the output image.
Height of the output image.
Width of the input image * (HorizontalBitPitch / 8).
Distance between two neighboring pixels in bits.
Number of used bits per pixel.
Access the pointers of a colored image.
Input image.
Pointer to the pixels of the first channel.
Pointer to the pixels of the second channel.
Pointer to the pixels of the third channel.
Type of image.
Width of image.
Height of image.
Access the pointer of a channel.
Input image.
Pointer to the image data in the HALCON database.
Type of image.
Width of image.
Height of image.
Return the type of an image.
Input image.
Type of image.
Return the size of an image.
Input image.
Width of image.
Height of image.
Request time at which the image was created.
Input image.
Milliseconds (0..999).
Seconds (0..59).
Minutes (0..59).
Hours (0..23).
Day of the month (1..31).
Day of the year (1..366).
Month (1..12).
Year (xxxx).
Return gray values of an image at the positions given by tuples of rows and columns.
Image whose gray values are to be accessed.
Row coordinates of positions. Default: 0
Column coordinates of positions. Default: 0
Interpolation method. Default: "bilinear"
Gray values of the selected image coordinates.
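For the "bilinear" interpolation mode, the gray value at a subpixel position is the area-weighted mean of the four surrounding pixels. A sketch assuming the position lies inside the image:

```python
def bilinear(img, row, col):
    """img: 2D list of gray values; (row, col) a subpixel position inside it."""
    r0, c0 = int(row), int(col)
    fr, fc = row - r0, col - c0          # fractional offsets
    r1 = min(r0 + 1, len(img) - 1)       # clamp at the lower/right border
    c1 = min(c0 + 1, len(img[0]) - 1)
    return ((1 - fr) * (1 - fc) * img[r0][c0]
            + (1 - fr) * fc * img[r0][c1]
            + fr * (1 - fc) * img[r1][c0]
            + fr * fc * img[r1][c1])
```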
Access the gray values of an image object.
Image whose gray value is to be accessed.
Row coordinates of pixels to be viewed. Default: 0
Column coordinates of pixels to be viewed. Default: 0
Gray values of indicated pixels.
Access the thickness of a region along the main axis.
Region to be analysed.
Thickness of the region along its main axis.
Histogram of the thickness of the region along its main axis.
Polygon approximation of a region.
Region to be approximated.
Maximum distance between the polygon and the edge of the region. Default: 5.0
Line numbers of the base points of the contour.
Column numbers of the base points of the contour.
Access the pixels of a region.
This region is accessed.
Line numbers of the pixels in the region.
Column numbers of the pixels in the region.
Access the contour of an object.
Output region.
Line numbers of the contour pixels.
Column numbers of the contour pixels.
Access the runlength coding of a region.
Output region.
Line numbers of the chords.
Column numbers of the starting points of the chords.
Column numbers of the ending points of the chords.
Contour of an object as chain code.
Region to be transformed.
Line of starting point.
Column of starting point.
Direction code of the contour (from starting point).
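A chain code stores a contour as a start pixel plus a sequence of direction codes. The sketch below assumes the common 8-direction convention (0 = east, counting counterclockwise, with image rows growing downward) — check the operator reference for the exact coding used:

```python
# Direction code -> (row delta, col delta); 0 = east, counterclockwise.
MOVES = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def decode_chain(start_row, start_col, codes):
    """Expand a chain code back into the contour's pixel coordinates."""
    r, c = start_row, start_col
    path = [(r, c)]
    for code in codes:
        dr, dc = MOVES[code]
        r, c = r + dr, c + dc
        path.append((r, c))
    return path
```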
Access convex hull as contour.
Output region.
Line numbers of contour pixels.
Column numbers of the contour pixels.
Verification of a pattern using an OCV tool.
Characters to be verified.
Handle of the OCV tool.
Name of the character. Default: "a"
Adaption to vertical and horizontal translation. Default: "true"
Adaption to vertical and horizontal scaling of the size. Default: "true"
Adaption to changes of the orientation (not implemented). Default: "false"
Adaption to additive and scaling gray value changes. Default: "true"
Minimum difference between objects. Default: 10
Evaluation of the character.
Training of an OCV tool.
Pattern to be trained.
Handle of the OCV tool to be trained.
Name(s) of the object(s) to analyse. Default: "a"
Mode for training (only one mode implemented). Default: "single"
Deserialize a serialized OCV tool.
Handle of the serialized item.
Handle of the OCV tool.
Serialize an OCV tool.
Handle of the OCV tool.
Handle of the serialized item.
Reading an OCV tool from file.
Name of the file which has to be read. Default: "test_ocv"
Handle of read OCV tool.
Saving an OCV tool to file.
Handle of the OCV tool to be written.
Name of the file where the tool has to be saved. Default: "test_ocv"
This operator is inoperable. It had the following function: Clear all OCV tools.
Clear an OCV tool.
Handle of the OCV tool which has to be freed.
Create a new OCV tool based on gray value projections.
List of names for patterns to be trained. Default: "a"
Handle of the created OCV tool.
Classify a related group of characters with an OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Result of classifying the characters with the k-NN.
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Deserialize a serialized k-NN-based OCR classifier.
Handle of the serialized item.
Handle of the OCR classifier.
Serialize a k-NN-based OCR classifier.
Handle of the OCR classifier.
Handle of the serialized item.
Read an OCR classifier from a file.
File name.
Handle of the OCR classifier.
Write a k-NN classifier for an OCR task to a file.
Handle of the k-NN classifier for an OCR task.
File name.
This operator is inoperable. It had the following function: Clear all OCR classifiers.
Clear an OCR classifier.
Handle of the OCR classifier.
Create an OCR classifier using a k-Nearest Neighbor (k-NN) classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
This parameter is not yet supported. Default: []
This parameter is not yet supported. Default: []
Handle of the k-NN classifier.
Train a k-NN classifier for an OCR task.
Handle of the k-NN classifier.
Names of the training files. Default: "ocr.trf"
Names of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Values of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Compute the features of a character.
Input character.
Handle of the k-NN classifier.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Return the parameters of an OCR classifier.
Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
Type of preprocessing used to transform the feature vectors.
Number of different trees used during the classification.
Classify multiple characters with a k-NN classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the k-NN classifier.
Result of classifying the characters with the k-NN.
Confidence of the class of the characters.
Classify a single character with an OCR classifier.
Character to be recognized.
Gray values of the character.
Handle of the k-NN classifier.
Number of maximal classes to determine. Default: 1
Number of neighbors to consider. Default: 1
Results of classifying the character with the k-NN.
Confidence(s) of the class(es) of the character.
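The Num/NumNeighbors parameters above follow the usual k-nearest-neighbor voting scheme, sketched below. Illustrative only: HALCON's distance metric and confidence definition may differ; the squared-Euclidean distance and vote-fraction confidence here are assumptions.

```python
from collections import Counter

def knn_classify(samples, query, k=1, num=1):
    """Vote among the k nearest training samples; the confidence of a
    class is the fraction of those k neighbors voting for it. `samples`
    is a list of (feature_vector, class_name) pairs."""
    nearest = sorted(
        samples,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], query)))[:k]
    votes = Counter(label for _, label in nearest)
    return [(cls, n / k) for cls, n in votes.most_common(num)]
```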
Select an optimal combination of features to classify OCR data.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Trained OCR-k-NN classifier.
Selected feature set, contains only entries from FeatureList.
Achieved score using two-fold cross-validation.
Select an optimal combination of features to classify OCR data from a (protected) training file.
Names of the training files. Default: ""
Passwords for protected training files.
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Trained OCR-MLP classifier.
Selected feature set, contains only entries from FeatureList.
Achieved score using two-fold cross-validation.
Select an optimal combination of features to classify OCR data.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Trained OCR-MLP classifier.
Selected feature set, contains only entries from FeatureList.
Achieved score using two-fold cross-validation.
Select an optimal combination of features to classify OCR data from a (protected) training file.
Names of the training files. Default: ""
Passwords for protected training files.
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Trained OCR-SVM Classifier.
Selected feature set, contains only entries from FeatureList.
Achieved score using two-fold cross-validation.
Select an optimal combination of features to classify OCR data.
Names of the training files. Default: ""
List of features that should be considered for selection. Default: ["zoom_factor","ratio","width","height","foreground","foreground_grid_9","foreground_grid_16","anisometry","compactness","convexity","moments_region_2nd_invar","moments_region_2nd_rel_invar","moments_region_3rd_invar","moments_central","phi","num_connect","num_holes","projection_horizontal","projection_vertical","projection_horizontal_invar","projection_vertical_invar","chord_histo","num_runs","pixel","pixel_invar","pixel_binary","gradient_8dir","cooc","moments_gray_plane"]
Method to perform the selection. Default: "greedy"
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 15
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 16
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
Trained OCR-SVM Classifier.
Selected feature set, contains only entries from FeatureList.
Achieved score using two-fold cross-validation.
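The "greedy" selection method named above is commonly a forward search: repeatedly add the single feature that most improves the cross-validation score until no candidate helps. A minimal sketch, with `score` standing in for a two-fold cross-validation run (the scoring function and stopping rule are assumptions, not HALCON's exact procedure):

```python
def greedy_select(features, score):
    """Greedy forward feature selection: grow the selected set one
    feature at a time, always taking the candidate with the best
    score(selected + [candidate]); stop when no candidate improves."""
    selected, best = [], score([])
    remaining = list(features)
    while remaining:
        cand, cand_score = max(
            ((f, score(selected + [f])) for f in remaining),
            key=lambda t: t[1])
        if cand_score <= best:
            break
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected, best
```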
This operator is inoperable. It had the following function: Clear all lexica.
Clear a lexicon.
Handle of the lexicon.
Find a similar word in a lexicon.
Handle of the lexicon.
Word to be looked up. Default: "word"
Most similar word found in the lexicon.
Difference between the words in edit operations.
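The "difference in edit operations" returned above is the classic edit (Levenshtein) distance: the minimum number of insertions, deletions, and substitutions turning one word into the other. A standard dynamic-programming sketch (illustrative; HALCON's lexicon search strategy itself is not shown):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn `a` into `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]
```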
Check if a word is contained in a lexicon.
Handle of the lexicon.
Word to be looked up. Default: "word"
Result of the search.
Query all words from a lexicon.
Handle of the lexicon.
List of all words.
Create a lexicon from a text file.
Unique name for the new lexicon. Default: "lex1"
Name of a text file containing words for the new lexicon. Default: "words.txt"
Handle of the lexicon.
Create a lexicon from a tuple of words.
Unique name for the new lexicon. Default: "lex1"
Word list for the new lexicon. Default: ["word1","word2","word3"]
Handle of the lexicon.
This operator is inoperable. It had the following function: Clear all SVM based OCR classifiers.
Clear an SVM-based OCR classifier.
Handle of the OCR classifier.
Deserialize a serialized SVM-based OCR classifier.
Handle of the serialized item.
Handle of the OCR classifier.
Serialize an SVM-based OCR classifier.
Handle of the OCR classifier.
Handle of the serialized item.
Read an SVM-based OCR classifier from a file.
File name.
Handle of the OCR classifier.
Write an OCR classifier to a file.
Handle of the OCR classifier.
File name.
Compute the features of a character.
Input character.
Handle of the OCR classifier.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Classify a related group of characters with an OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Result of classifying the characters with the SVM.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Classify multiple characters with an SVM-based OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Result of classifying the characters with the SVM.
Classify a single character with an SVM-based OCR classifier.
Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Result of classifying the character with the SVM.
Approximate a trained SVM-based OCR classifier by a reduced SVM.
Original handle of SVM-based OCR-classifier.
Type of postprocessing to reduce number of SVs. Default: "bottom_up"
Minimum number of remaining SVs. Default: 2
Maximum allowed error of reduction. Default: 0.001
SVMHandle of reduced OCR classifier.
Train an OCR classifier with data from a (protected) training file.
Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Passwords for protected training files.
Stop parameter for training. Default: 0.001
Mode of training. Default: "default"
Train an OCR classifier.
Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Stop parameter for training. Default: 0.001
Mode of training. Default: "default"
Compute the information content of the preprocessed feature vectors of an SVM-based OCR classifier.
Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Relative information content of the transformed feature vectors.
Cumulative information content of the transformed feature vectors.
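For 'principal_components' preprocessing, the relative information content of a transformed feature is typically the fraction of the total variance (eigenvalue) it carries, and the cumulative content is the running sum. A sketch of that arithmetic, assuming the component variances are already known (how HALCON estimates them from the training file is not shown):

```python
def information_content(eigenvalues):
    """Relative and cumulative variance fractions of the principal
    components, given their variances (eigenvalues) in decreasing order."""
    total = sum(eigenvalues)
    rel = [v / total for v in eigenvalues]
    cum, s = [], 0.0
    for r in rel:
        s += r
        cum.append(s)
    return rel, cum
```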
Return the number of support vectors of an OCR classifier.
OCR handle.
Total number of support vectors.
Number of SV of each sub-SVM.
Return the index of a support vector from a trained OCR classifier that is based on support vector machines.
OCR handle.
Number of the stored support vector.
Index of the support vector in the training set.
Return the parameters of an OCR classifier.
Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
The kernel type.
Additional parameters for the kernel function.
Regularization constant of the SVM.
The mode of the SVM.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').
Create an OCR classifier using a support vector machine.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
The kernel type. Default: "rbf"
Additional parameter for the kernel function. Default: 0.02
Regularization constant of the SVM. Default: 0.05
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Handle of the OCR classifier.
This operator is inoperable. It had the following function: Clear all OCR classifiers.
Clear an OCR classifier.
Handle of the OCR classifier.
Deserialize a serialized MLP-based OCR classifier.
Handle of the serialized item.
Handle of the OCR classifier.
Serialize an MLP-based OCR classifier.
Handle of the OCR classifier.
Handle of the serialized item.
Read an OCR classifier from a file.
File name.
Handle of the OCR classifier.
Write an OCR classifier to a file.
Handle of the OCR classifier.
File name.
Compute the features of a character.
Input character.
Handle of the OCR classifier.
Should the feature vector be transformed with the preprocessing? Default: "true"
Feature vector of the character.
Classify a related group of characters with an OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Result of classifying the characters with the MLP.
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Classify multiple characters with an OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Result of classifying the characters with the MLP.
Confidence of the class of the characters.
Classify a single character with an OCR classifier.
Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Result of classifying the character with the MLP.
Confidence(s) of the class(es) of the character.
Train an OCR classifier with data from a (protected) training file.
Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Passwords for protected training files.
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data.
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Train an OCR classifier.
Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data.
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Compute the information content of the preprocessed feature vectors of an OCR classifier.
Handle of the OCR classifier.
Names of the training files. Default: "ocr.trf"
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Relative information content of the transformed feature vectors.
Cumulative information content of the transformed feature vectors.
Return the rejection class parameters of an OCR classifier.
Handle of the OCR classifier.
Name of the general parameter. Default: "sampling_strategy"
Value of the general parameter.
Set the rejection class parameters of an OCR classifier.
Handle of the OCR classifier.
Name of the general parameter. Default: "sampling_strategy"
Value of the general parameter. Default: "hyperbox_around_all_classes"
Return the regularization parameters of an OCR classifier.
Handle of the OCR classifier.
Name of the regularization parameter to return. Default: "weight_prior"
Value of the regularization parameter.
Set the regularization parameters of an OCR classifier.
Handle of the OCR classifier.
Name of the regularization parameter to set. Default: "weight_prior"
Value of the regularization parameter. Default: 1.0
Return the parameters of an OCR classifier.
Handle of the OCR classifier.
Width of the rectangle to which the gray values of the segmented character are zoomed.
Height of the rectangle to which the gray values of the segmented character are zoomed.
Interpolation mode for the zooming of the characters.
Features to be used for classification.
Characters of the character set to be read.
Number of hidden units of the MLP.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features.
Create an OCR classifier using a multilayer perceptron.
Width of the rectangle to which the gray values of the segmented character are zoomed. Default: 8
Height of the rectangle to which the gray values of the segmented character are zoomed. Default: 10
Interpolation mode for the zooming of the characters. Default: "constant"
Features to be used for classification. Default: "default"
All characters of the character set to be read. Default: ["0","1","2","3","4","5","6","7","8","9"]
Number of hidden units of the MLP. Default: 80
Type of preprocessing used to transform the feature vectors. Default: "none"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
Handle of the OCR classifier.
Serialize an OCR classifier.
ID of the OCR classifier.
Handle of the serialized item.
Deserialize a serialized OCR classifier.
Handle of the serialized item.
ID of the OCR classifier.
Write an OCR classifier to a file.
ID of the OCR classifier.
Name of the file for the OCR classifier (without extension). Default: "my_ocr"
Read an OCR classifier from a file.
Name of the OCR classifier file. Default: "testnet"
ID of the read OCR classifier.
Classify one character.
Character to be recognized.
Gray values of the character.
ID of the OCR classifier.
Classes (names) of the characters.
Confidence values of the characters.
Classify characters.
Characters to be recognized.
Gray values for the characters.
ID of the OCR classifier.
Class (name) of the characters.
Confidence values of the characters.
Get information about an OCR classifier.
ID of the OCR classifier.
Width of the scaled characters.
Height of the scaled characters.
Interpolation mode for scaling the characters.
Width of the largest trained character.
Height of the largest trained character.
Used features.
All characters of the set.
Create a new OCR-classifier.
Width of the input layer of the network. Default: 8
Height of the input layer of the network. Default: 10
Interpolation mode for scaling the characters. Default: 1
Additional features. Default: "default"
All characters of a set. Default: ["a","b","c"]
ID of the created OCR classifier.
Train an OCR classifier by the input of regions.
Characters to be trained.
Gray values for the characters.
ID of the desired OCR-classifier.
Class (name) of the characters. Default: "a"
Average confidence during a re-classification of the trained characters.
Train an OCR classifier with the help of a training file.
ID of the desired OCR-network.
Names of the training files. Default: "train_ocr"
Average confidence during a re-classification of the trained characters.
Protection of training data.
Names of the training files. Default: ""
Passwords for protecting the training files.
Names of the protected training files.
Store training characters in a file.
Characters to be trained.
Gray values of the characters.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Define a new conversion table for the characters.
ID of the OCR-network to be changed.
New assignment of characters. Default: ["a","b","c"]
Deallocation of the memory of an OCR classifier.
ID of the OCR classifier to be deleted.
Sorting of regions with respect to their relative position.
Regions to be sorted.
Sorted regions.
Kind of sorting. Default: "first_point"
Increasing or decreasing sorting order. Default: "true"
Sorting first with respect to row, then to column. Default: "row"
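The 'first_point' sorting mode above can be sketched on regions represented as sets of (row, column) pixels: the sort key is each region's first pixel in scan order, compared row-first or column-first. Illustrative only; HALCON offers further modes (e.g. 'character') not shown here.

```python
def sort_by_first_point(regions, ascending=True, primary="row"):
    """Sort regions (sets of (row, col) pixels) by their first point in
    scan order. primary="row" compares row before column, "column" the
    other way around; ascending=False reverses the order."""
    def key(region):
        r, c = min(region)  # lexicographic min = first point, row-major
        return (r, c) if primary == "row" else (c, r)
    return sorted(regions, key=key, reverse=not ascending)
```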
This operator is inoperable. It had the following function: Destroy all OCR classifiers.
Test an OCR classifier.
Characters to be tested.
Gray values for the characters.
ID of the desired OCR-classifier.
Class (name) of the characters. Default: "a"
Confidence for the character to belong to the class.
Cut out an image area relative to the domain.
Input image.
Image area.
Number of rows clipped at the top. Default: -1
Number of columns clipped at the left. Default: -1
Number of rows clipped at the bottom. Default: -1
Number of columns clipped at the right. Default: -1
Access the features which correspond to a character.
Characters to be trained.
ID of the desired OCR-classifier.
Feature vector.
Concatenate training files.
Names of the single training files. Default: ""
Name of the composed training file. Default: "all_characters"
Write characters into a training file.
Characters to be trained.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Add characters to a training file.
Characters to be trained.
Gray values of the characters.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Query which characters are stored in a (protected) training file.
Names of the training files. Default: ""
Passwords for protected training files.
Names of the read characters.
Number of characters.
Query which characters are stored in a training file.
Names of the training files. Default: ""
Names of the read characters.
Number of characters.
Read training specific characters from files and convert to images.
Images read from file.
Names of the training files. Default: ""
Names of the characters to be extracted. Default: "0"
Names of the read characters.
Read training characters from files and convert to images.
Images read from file.
Names of the training files. Default: ""
Names of the read characters.
Prune the branches of a region.
Regions to be processed.
Result of the pruning operation.
Length of the branches to be removed. Default: 2
Reduce a region to its boundary.
Regions for which the boundary is to be computed.
Resulting boundaries.
Boundary type. Default: "inner"
Perform a closing after an opening with multiple structuring elements.
Regions to be processed.
Structuring elements.
Fitted regions.
Generate standard structuring elements.
Generated structuring elements.
Type of structuring element to generate. Default: "noise"
Row coordinate of the reference point. Default: 1
Column coordinate of the reference point. Default: 1
Reflect a region about a point.
Region to be reflected.
Transposed region.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Remove the result of a hit-or-miss operation from a region (sequential).
Regions to be processed.
Result of the thinning operator.
Structuring element from the Golay alphabet. Default: "l"
Number of iterations. For 'f', 'f2', 'h' and 'i' the only useful value is 1. Default: 20
Remove the result of a hit-or-miss operation from a region (using a Golay structuring element).
Regions to be processed.
Result of the thinning operator.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Remove the result of a hit-or-miss operation from a region.
Regions to be processed.
Structuring element for the foreground.
Structuring element for the background.
Result of the thinning operator.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Add the result of a hit-or-miss operation to a region (sequential).
Regions to be processed.
Result of the thickening operator.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Add the result of a hit-or-miss operation to a region (using a Golay structuring element).
Regions to be processed.
Result of the thickening operator.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Add the result of a hit-or-miss operation to a region.
Regions to be processed.
Structuring element for the foreground.
Structuring element for the background.
Result of the thickening operator.
Row coordinate of the reference point. Default: 16
Column coordinate of the reference point. Default: 16
Number of iterations. Default: 1
Hit-or-miss operation for regions using the Golay alphabet (sequential).
Regions to be processed.
Result of the hit-or-miss operation.
Structuring element from the Golay alphabet. Default: "h"
Hit-or-miss operation for regions using the Golay alphabet.
Regions to be processed.
Result of the hit-or-miss operation.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Hit-or-miss operation for regions.
Regions to be processed.
Erosion mask for the input regions.
Erosion mask for the complements of the input regions.
Result of the hit-or-miss operation.
Row coordinate of the reference point. Default: 16
Column coordinate of the reference point. Default: 16
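The hit-or-miss operation above keeps exactly those pixels where the foreground structuring element fits inside the region and the background element fits inside its complement. On pixel-set regions this is a one-liner sketch (reference-point handling and HALCON's region representation are omitted):

```python
def hit_or_miss(region, se_fg, se_bg):
    """Hit-or-miss: keep pixel p iff every se_fg offset from p lies in the
    region and every se_bg offset lies outside it. Regions are sets of
    (row, col) pixels; SEs are sets of offsets relative to p."""
    return {(r, c) for r, c in region
            if all((r + dr, c + dc) in region for dr, dc in se_fg)
            and all((r + dr, c + dc) not in region for dr, dc in se_bg)}
```

For instance, a foreground element {(0,0),(0,-1)} with background element {(0,1)} detects the right endpoint of a horizontal line.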
Generate the structuring elements of the Golay alphabet.
Structuring element for the foreground.
Structuring element for the background.
Name of the structuring element. Default: "l"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Row coordinate of the reference point. Default: 16
Column coordinate of the reference point. Default: 16
Thinning of a region.
Regions to be thinned.
Result of the skiz operator.
Number of iterations for the sequential thinning with the element 'l' of the Golay alphabet. Default: 100
Number of iterations for the sequential thinning with the element 'e' of the Golay alphabet. Default: 1
Compute the morphological skeleton of a region.
Regions to be processed.
Resulting morphological skeleton.
Compute the union of bottom_hat and top_hat.
Regions to be processed.
Structuring element (position-invariant).
Union of top hat and bottom hat.
Compute the bottom hat of regions.
Regions to be processed.
Structuring element (position independent).
Result of the bottom hat operator.
Compute the top hat of regions.
Regions to be processed.
Structuring element (position independent).
Result of the top hat operator.
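The top hat above is the region minus its opening (it isolates details too small for the structuring element to fit); the bottom hat, conversely, is the closing minus the region. A pixel-set sketch (illustrative, not HALCON's implementation; assumes a symmetric SE containing the origin):

```python
def top_hat(region, se):
    """Top hat = region minus its opening. Regions are sets of
    (row, col) pixels, the SE a set of offsets containing (0, 0)."""
    def erode(R):
        return {p for p in R
                if all((p[0] + dr, p[1] + dc) in R for dr, dc in se)}
    def dilate(R):
        return {(r + dr, c + dc) for r, c in R for dr, dc in se}
    return region - dilate(erode(region))
```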
Erode a region (using a reference point).
Regions to be eroded.
Structuring element.
Eroded regions.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Erode a region.
Regions to be eroded.
Structuring element.
Eroded regions.
Number of iterations. Default: 1
Dilate a region (using a reference point).
Regions to be dilated.
Structuring element.
Dilated regions.
Row coordinate of the reference point.
Column coordinate of the reference point.
Number of iterations. Default: 1
Perform a Minkowski addition on a region.
Regions to be dilated.
Structuring element.
Dilated regions.
Number of iterations. Default: 1
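The erosion and dilation entries above reduce to two set operations on pixel-set regions: erosion keeps a pixel only if the translated structuring element fits entirely inside the region, dilation unions the region's translates over all SE offsets. A sketch (reference points and HALCON's run-length region encoding omitted):

```python
def erode(region, se):
    """Binary erosion: keep pixel p iff the SE translated to p lies
    wholly inside the region."""
    return {(r, c) for r, c in region
            if all((r + dr, c + dc) in region for dr, dc in se)}

def dilate(region, se):
    """Binary dilation: union of the region translated by each SE
    offset (Minkowski addition for a symmetric SE)."""
    return {(r + dr, c + dc) for r, c in region for dr, dc in se}
```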
Close a region with a rectangular structuring element.
Regions to be closed.
Closed regions.
Width of the structuring rectangle. Default: 10
Height of the structuring rectangle. Default: 10
Close a region with an element from the Golay alphabet.
Regions to be closed.
Closed regions.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Close a region with a circular structuring element.
Regions to be closed.
Closed regions.
Radius of the circular structuring element. Default: 3.5
Close a region.
Regions to be closed.
Structuring element (position-invariant).
Closed regions.
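Closing, as used above, is a dilation followed by an erosion: it fills gaps and holes smaller than the structuring element while roughly preserving the outline (opening is the same two steps in the opposite order and removes small protrusions instead). A pixel-set sketch for a symmetric SE; HALCON additionally handles the image border and reflected SEs:

```python
def closing(region, se):
    """Closing = dilation then erosion, sketched on sets of (row, col)
    pixels with a symmetric SE (set of offsets containing (0, 0))."""
    dil = {(r + dr, c + dc) for r, c in region for dr, dc in se}
    return {p for p in dil
            if all((p[0] + dr, p[1] + dc) in dil for dr, dc in se)}
```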
Separate overlapping regions.
Regions to be opened.
Structuring element (position-invariant).
Opened regions.
Open a region with an element from the Golay alphabet.
Regions to be opened.
Opened regions.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Open a region with a rectangular structuring element.
Regions to be opened.
Opened regions.
Width of the structuring rectangle. Default: 10
Height of the structuring rectangle. Default: 10
Open a region with a circular structuring element.
Regions to be opened.
Opened regions.
Radius of the circular structuring element. Default: 3.5
Open a region.
Regions to be opened.
Structuring element (position-invariant).
Opened regions.
Erode a region sequentially.
Regions to be eroded.
Eroded regions.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Erode a region with an element from the Golay alphabet.
Regions to be eroded.
Eroded regions.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Erode a region with a rectangular structuring element.
Regions to be eroded.
Eroded regions.
Width of the structuring rectangle. Default: 11
Height of the structuring rectangle. Default: 11
Erode a region with a circular structuring element.
Regions to be eroded.
Eroded regions.
Radius of the circular structuring element. Default: 3.5
Erode a region (using a reference point).
Regions to be eroded.
Structuring element.
Eroded regions.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Erode a region.
Regions to be eroded.
Structuring element.
Eroded regions.
Number of iterations. Default: 1
Dilate a region sequentially.
Regions to be dilated.
Dilated regions.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Dilate a region with an element from the Golay alphabet.
Regions to be dilated.
Dilated regions.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Dilate a region with a rectangular structuring element.
Regions to be dilated.
Dilated regions.
Width of the structuring rectangle. Default: 11
Height of the structuring rectangle. Default: 11
Dilate a region with a circular structuring element.
Regions to be dilated.
Dilated regions.
Radius of the circular structuring element. Default: 3.5
Dilate a region (using a reference point).
Regions to be dilated.
Structuring element.
Dilated regions.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Dilate a region.
Regions to be dilated.
Structuring element.
Dilated regions.
Number of iterations. Default: 1
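The region erosion and dilation operators above combine a region with a structuring element; as a rough illustration of their semantics (not the HALCON API, which stores regions runlength-encoded), a minimal Python sketch using a set-of-pixels representation:

```python
# Minimal sketch of region erosion/dilation: a region is a set of
# (row, col) pixels, the structuring element a set of offsets relative
# to its reference point. Illustration only; HALCON itself works on
# runlength-encoded regions.

def dilate(region, se):
    # A pixel belongs to the dilation if some SE offset maps a region
    # pixel onto it.
    return {(r + dr, c + dc) for (r, c) in region for (dr, dc) in se}

def erode(region, se):
    # A pixel survives erosion only if the SE placed at it lies
    # completely inside the region.
    return {(r, c) for (r, c) in region
            if all((r + dr, c + dc) in region for (dr, dc) in se)}

se = {(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)}  # 3x3 square
square = {(r, c) for r in range(5) for c in range(5)}      # 5x5 region
print(len(erode(square, se)))   # 9: the 3x3 interior remains
print(len(dilate(square, se)))  # 49: grown to a 7x7 block
```

The iteration parameters of the operators above simply apply this step repeatedly.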
Perform a gray value bottom hat transformation on an image.
Input image.
Structuring element.
Bottom hat image.
Perform a gray value top hat transformation on an image.
Input image.
Structuring element.
Top hat image.
Perform a gray value closing on an image.
Input image.
Structuring element.
Gray-closed image.
Perform a gray value opening on an image.
Input image.
Structuring element.
Gray-opened image.
Perform a gray value dilation on an image.
Input image.
Structuring element.
Gray-dilated image.
Perform a gray value erosion on an image.
Input image.
Structuring element.
Gray-eroded image.
Load a structuring element for gray morphology.
Generated structuring element.
Name of the file containing the structuring element.
Generate ellipsoidal structuring elements for gray morphology.
Generated structuring element.
Pixel type. Default: "byte"
Width of the structuring element. Default: 5
Height of the structuring element. Default: 5
Maximum gray value of the structuring element. Default: 0
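The gray-value morphology operators above can be sketched for a flat 1D structuring element of radius k (the 2D case is analogous): gray erosion is a running minimum, gray dilation a running maximum, and the top hat (image minus opening) isolates narrow bright structures. This is an illustration of the definitions, not the HALCON implementation:

```python
# Gray-value morphology sketch with a flat 1D structuring element of
# radius k. Erosion = running minimum, dilation = running maximum,
# top hat = signal minus its opening.

def gray_erode(sig, k):
    n = len(sig)
    return [min(sig[max(0, i - k):i + k + 1]) for i in range(n)]

def gray_dilate(sig, k):
    n = len(sig)
    return [max(sig[max(0, i - k):i + k + 1]) for i in range(n)]

def top_hat(sig, k):
    opened = gray_dilate(gray_erode(sig, k), k)  # opening = erosion, then dilation
    return [s - o for s, o in zip(sig, opened)]

sig = [10, 10, 10, 50, 10, 10, 10]   # narrow peak on a flat background
print(top_hat(sig, 1))               # [0, 0, 0, 40, 0, 0, 0]
```

The peak is kept and the background suppressed; the bottom hat (closing minus image) does the same for narrow dark structures.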
Query the model contour of a metrology object in image coordinates.
Model contour.
Handle of the metrology model.
Index of the metrology object. Default: 0
Distance between neighboring contour points. Default: 1.5
Query the result contour of a metrology object.
Result contour for the given metrology object.
Handle of the metrology model.
Index of the metrology object. Default: 0
Instance of the metrology object. Default: "all"
Distance between neighboring contour points. Default: 1.5
Alignment of a metrology model.
Handle of the metrology model.
Row coordinate of the alignment. Default: 0
Column coordinate of the alignment. Default: 0
Rotation angle of the alignment. Default: 0
Add a metrology object to a metrology model.
Handle of the metrology model.
Type of the metrology object to be added. Default: "circle"
Parameters of the metrology object to be added.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Get parameters that are valid for the entire metrology model.
Handle of the metrology model.
Name of the generic parameter. Default: "camera_param"
Value of the generic parameter.
Set parameters that are valid for the entire metrology model.
Handle of the metrology model.
Name of the generic parameter. Default: "camera_param"
Value of the generic parameter. Default: []
Deserialize a serialized metrology model.
Handle of the serialized item.
Handle of the metrology model.
Serialize a metrology model.
Handle of the metrology model.
Handle of the serialized item.
Transform metrology objects of a metrology model, e.g. for alignment.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Translation in row direction.
Translation in column direction.
Rotation angle.
Mode of the transformation. Default: "absolute"
Write a metrology model to a file.
Handle of the metrology model.
File name.
Read a metrology model from a file.
File name.
Handle of the metrology model.
Copy a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Handle of the copied metrology model.
Copy metrology objects of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Indices of the copied metrology objects.
Get the number of instances of the metrology objects of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: 0
Number of instances of the metrology objects.
Get the results of the measurement of a metrology model.
Handle of the metrology model.
Index of the metrology object. Default: 0
Instance of the metrology object. Default: "all"
Name of the generic parameter. Default: "result_type"
Value of the generic parameter. Default: "all_param"
Result values.
Get the measure regions and the results of the edge location for the metrology objects of a metrology model.
Rectangular XLD Contours of measure regions.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Select light/dark or dark/light edges. Default: "all"
Row coordinates of the measured edges.
Column coordinates of the measured edges.
Measure and fit the geometric shapes of all metrology objects of a metrology model.
Input image.
Handle of the metrology model.
Get the indices of the metrology objects of a metrology model.
Handle of the metrology model.
Indices of the metrology objects.
Reset all fuzzy parameters and fuzzy functions of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Reset all parameters of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Get a fuzzy parameter of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "fuzzy_thresh"
Values of the generic parameters.
Get one or several parameters of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "num_measures"
Values of the generic parameters.
Set fuzzy parameters or fuzzy functions for a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "fuzzy_thresh"
Values of the generic parameters. Default: 0.5
Set parameters for the metrology objects of a metrology model.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Names of the generic parameters. Default: "num_instances"
Values of the generic parameters. Default: 1
Add a rectangle to a metrology model.
Handle of the metrology model.
Row (or Y) coordinate of the center of the rectangle.
Column (or X) coordinate of the center of the rectangle.
Orientation of the main axis [rad].
Length of the larger half edge of the rectangle.
Length of the smaller half edge of the rectangle.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a line to a metrology model.
Handle of the metrology model.
Row (or Y) coordinate of the start of the line.
Column (or X) coordinate of the start of the line.
Row (or Y) coordinate of the end of the line.
Column (or X) coordinate of the end of the line.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add an ellipse or an elliptic arc to a metrology model.
Handle of the metrology model.
Row (or Y) coordinate of the center of the ellipse.
Column (or X) coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
Add a circle or a circular arc to a metrology model.
Handle of the metrology model.
Row (or Y) coordinate of the center of the circle or circular arc.
Column (or X) coordinate of the center of the circle or circular arc.
Radius of the circle or circular arc.
Half length of the measure regions perpendicular to the boundary. Default: 20.0
Half length of the measure regions tangential to the boundary. Default: 5.0
Sigma of the Gaussian function for the smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Index of the created metrology object.
This operator is inoperable. It had the following function: Delete all metrology models and free the allocated memory.
Delete a metrology model and free the allocated memory.
Handle of the metrology model.
Delete metrology objects and free the allocated memory.
Handle of the metrology model.
Index of the metrology objects. Default: "all"
Set the size of the image of metrology objects.
Handle of the metrology model.
Width of the image to be processed. Default: 640
Height of the image to be processed. Default: 480
Create the data structure that is needed to measure geometric shapes.
Handle of the metrology model.
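The metrology operators above fit geometric shapes to edge points measured in the measure regions. As a rough sketch of the fitting step for a circle object, an algebraic (Kasa) least-squares fit is shown below; HALCON's internal fitting is more elaborate (e.g. robust weighting of outlier edges), so this only illustrates the idea:

```python
import math

def fit_circle(points):
    # Algebraic (Kasa) least-squares circle fit in centroid coordinates:
    # solve the 2x2 normal equations for the center offset, then recover
    # the radius from the mean squared distance to the centroid.
    n = len(points)
    xm = sum(p[0] for p in points) / n
    ym = sum(p[1] for p in points) / n
    u = [p[0] - xm for p in points]
    v = [p[1] - ym for p in points]
    suu = sum(a * a for a in u)
    svv = sum(b * b for b in v)
    suv = sum(a * b for a, b in zip(u, v))
    rhs1 = (sum(a ** 3 for a in u) + sum(a * b * b for a, b in zip(u, v))) / 2.0
    rhs2 = (sum(b ** 3 for b in v) + sum(b * a * a for a, b in zip(u, v))) / 2.0
    det = suu * svv - suv * suv
    uc = (rhs1 * svv - rhs2 * suv) / det
    vc = (rhs2 * suu - rhs1 * suv) / det
    r = math.sqrt(uc * uc + vc * vc + (suu + svv) / n)
    return xm + uc, ym + vc, r

# Noise-free points on a circle recover its parameters exactly.
pts = [(10 + 5 * math.cos(k * math.pi / 8), 20 + 5 * math.sin(k * math.pi / 8))
       for k in range(16)]
cx, cy, r = fit_circle(pts)
print(round(cx, 6), round(cy, 6), round(r, 6))  # 10.0 20.0 5.0
```

The results of the actual fit are queried with the get_metrology_object_result operator described above.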
Serialize a measure object.
Measure object handle.
Handle of the serialized item.
Deserialize a serialized measure object.
Handle of the serialized item.
Measure object handle.
Write a measure object to a file.
Measure object handle.
File name.
Read a measure object from a file.
File name.
Measure object handle.
Extract points with a particular gray value along a rectangle or an annular arc.
Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Threshold. Default: 128.0
Selection of points. Default: "all"
Row coordinates of points with threshold value.
Column coordinates of points with threshold value.
Distance between consecutive points.
This operator is inoperable. It had the following function: Delete all measure objects.
Delete a measure object.
Measure object handle.
Extract a gray value profile perpendicular to a rectangle or annular arc.
Input image.
Measure object handle.
Gray value profile.
Reset a fuzzy function.
Measure object handle.
Selection of the fuzzy set. Default: "contrast"
Specify a normalized fuzzy function for edge pairs.
Measure object handle.
Favored width of edge pairs. Default: 10.0
Selection of the fuzzy set. Default: "size_abs_diff"
Fuzzy function.
Specify a fuzzy function.
Measure object handle.
Selection of the fuzzy set. Default: "contrast"
Fuzzy function.
Extract straight edge pairs perpendicular to a rectangle or an annular arc.
Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select the first gray value transition of the edge pairs. Default: "all"
Constraint of pairing. Default: "no_restriction"
Number of edge pairs. Default: 10
Row coordinate of the first edge.
Column coordinate of the first edge.
Edge amplitude of the first edge (with sign).
Row coordinate of the second edge.
Column coordinate of the second edge.
Edge amplitude of the second edge (with sign).
Row coordinate of the center of the edge pair.
Column coordinate of the center of the edge pair.
Fuzzy evaluation of the edge pair.
Distance between the edges of the edge pair.
Extract straight edge pairs perpendicular to a rectangle or an annular arc.
Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select the first gray value transition of the edge pairs. Default: "all"
Row coordinate of the first edge point.
Column coordinate of the first edge point.
Edge amplitude of the first edge (with sign).
Row coordinate of the second edge point.
Column coordinate of the second edge point.
Edge amplitude of the second edge (with sign).
Row coordinate of the center of the edge pair.
Column coordinate of the center of the edge pair.
Fuzzy evaluation of the edge pair.
Distance between edges of an edge pair.
Distance between consecutive edge pairs.
Extract straight edges perpendicular to a rectangle or an annular arc.
Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Minimum fuzzy value. Default: 0.5
Select light/dark or dark/light edges. Default: "all"
Row coordinate of the edge point.
Column coordinate of the edge point.
Edge amplitude of the edge (with sign).
Fuzzy evaluation of the edges.
Distance between consecutive edges.
Extract straight edge pairs perpendicular to a rectangle or annular arc.
Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Type of gray value transition that determines how edges are grouped to edge pairs. Default: "all"
Selection of edge pairs. Default: "all"
Row coordinate of the center of the first edge.
Column coordinate of the center of the first edge.
Edge amplitude of the first edge (with sign).
Row coordinate of the center of the second edge.
Column coordinate of the center of the second edge.
Edge amplitude of the second edge (with sign).
Distance between edges of an edge pair.
Distance between consecutive edge pairs.
Extract straight edges perpendicular to a rectangle or annular arc.
Input image.
Measure object handle.
Sigma of Gaussian smoothing. Default: 1.0
Minimum edge amplitude. Default: 30.0
Light/dark or dark/light edge. Default: "all"
Selection of end points. Default: "all"
Row coordinate of the center of the edge.
Column coordinate of the center of the edge.
Edge amplitude of the edge (with sign).
Distance between consecutive edges.
Translate a measure object.
Measure object handle.
Row coordinate of the new reference point. Default: 50.0
Column coordinate of the new reference point. Default: 100.0
Prepare the extraction of straight edges perpendicular to an annular arc.
Row coordinate of the center of the arc. Default: 100.0
Column coordinate of the center of the arc. Default: 100.0
Radius of the arc. Default: 50.0
Start angle of the arc in radians. Default: 0.0
Angular extent of the arc in radians. Default: 6.28318
Radius (half width) of the annulus. Default: 10.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Measure object handle.
Prepare the extraction of straight edges perpendicular to a rectangle.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Angle of the longitudinal axis of the rectangle with respect to the horizontal (radians). Default: 0.0
Half width of the rectangle. Default: 100.0
Half height of the rectangle. Default: 20.0
Width of the image to be processed subsequently. Default: 512
Height of the image to be processed subsequently. Default: 512
Type of interpolation to be used. Default: "nearest_neighbor"
Measure object handle.
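The measure operators above locate edges in the 1D gray value profile of the measure region: the profile is convolved with a derivative-of-Gaussian kernel, local extrema of the response above a minimum amplitude become edges, and the position is refined to subpixel accuracy. A minimal sketch of that idea (the defaults and the parabolic refinement here are illustrative assumptions, not HALCON's exact algorithm):

```python
import math

# 1D edge extraction sketch: derivative-of-Gaussian response, amplitude
# threshold, parabolic subpixel refinement of each local extremum.

def find_edges(profile, sigma=1.0, threshold=5.0):
    k = max(1, int(3 * sigma))
    kern = [-x / (sigma ** 2) * math.exp(-x * x / (2 * sigma ** 2))
            for x in range(-k, k + 1)]
    n = len(profile)
    resp = [sum(kern[j + k] * profile[min(max(i - j, 0), n - 1)]
                for j in range(-k, k + 1)) for i in range(n)]
    edges = []
    for i in range(1, n - 1):
        a, b, c = abs(resp[i - 1]), abs(resp[i]), abs(resp[i + 1])
        if b >= threshold and b > a and b >= c:
            denom = a - 2 * b + c
            off = 0.5 * (a - c) / denom if denom != 0 else 0.0
            edges.append((i + off, resp[i]))  # (subpixel position, signed amplitude)
    return edges

profile = [10] * 10 + [100] * 10                     # one dark-to-light step
print([(p, a > 0) for p, a in find_edges(profile)])  # [(9.5, True)]
```

The sign of the amplitude distinguishes dark-to-light from light-to-dark transitions, which is what the Transition parameters of the operators above select on.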
Deserialize a serialized matrix.
Handle of the serialized item.
Matrix handle.
Serialize a matrix.
Matrix handle.
Handle of the serialized item.
Read a matrix from a file.
File name.
Matrix handle.
Write a matrix to a file.
Matrix handle of the input matrix.
Format of the file. Default: "binary"
File name.
Perform an orthogonal decomposition of a matrix.
Matrix handle of the input matrix.
Method of decomposition. Default: "qr"
Type of output matrices. Default: "full"
Computation of the orthogonal matrix. Default: "true"
Matrix handle with the orthogonal part of the decomposed input matrix.
Matrix handle with the triangular part of the decomposed input matrix.
Decompose a matrix.
Matrix handle of the input matrix.
Type of the input matrix. Default: "general"
Matrix handle with the output matrix 1.
Matrix handle with the output matrix 2.
Compute the singular value decomposition of a matrix.
Matrix handle of the input matrix.
Type of computation. Default: "full"
Computation of singular values. Default: "both"
Matrix handle with the left singular vectors.
Matrix handle with singular values.
Matrix handle with the right singular vectors.
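The decompositions above correspond to standard linear algebra routines; a NumPy sketch of a reduced singular value decomposition (NumPy is used here purely as an illustration, not as the HALCON matrix API):

```python
import numpy as np

# Reduced SVD: A = U @ diag(S) @ Vt, with singular values sorted in
# descending order and non-negative.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
U, S, Vt = np.linalg.svd(A, full_matrices=False)   # 'reduced' computation
print(bool(S[0] >= S[1] >= 0))                     # True: sorted, non-negative
print(bool(np.allclose(U @ np.diag(S) @ Vt, A)))   # True: factors reconstruct A
```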
Compute the generalized eigenvalues and optionally the generalized eigenvectors of general matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Computation of the eigenvectors. Default: "none"
Matrix handle with the real parts of the eigenvalues.
Matrix handle with the imaginary parts of the eigenvalues.
Matrix handle with the real parts of the eigenvectors.
Matrix handle with the imaginary parts of the eigenvectors.
Compute the generalized eigenvalues and optionally the generalized eigenvectors of symmetric input matrices.
Matrix handle of the symmetric input matrix A.
Matrix handle of the symmetric positive definite input matrix B.
Computation of the eigenvectors. Default: "false"
Matrix handle with the eigenvalues.
Matrix handle with the eigenvectors.
Compute the eigenvalues and optionally the eigenvectors of a general matrix.
Matrix handle of the input matrix.
Computation of the eigenvectors. Default: "none"
Matrix handle with the real parts of the eigenvalues.
Matrix handle with the imaginary parts of the eigenvalues.
Matrix handle with the real parts of the eigenvectors.
Matrix handle with the imaginary parts of the eigenvectors.
Compute the eigenvalues and optionally the eigenvectors of a symmetric matrix.
Matrix handle of the input matrix.
Computation of the eigenvectors. Default: "false"
Matrix handle with the eigenvalues.
Matrix handle with the eigenvectors.
Compute the solution of a system of equations.
Matrix handle of the input matrix of the left hand side.
The type of the input matrix of the left hand side. Default: "general"
Type of solving, or threshold below which singular values are set to 0. Default: 0.0
Matrix handle of the input matrix of the right hand side.
New matrix handle with the solution.
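Solving a linear system as described above maps directly onto standard routines; a NumPy sketch for a 'general' square left-hand side (NumPy stands in for the HALCON matrix API; for rank-deficient systems, a least-squares solve with a singular-value cutoff plays the role of the threshold parameter):

```python
import numpy as np

# Solve A X = B for a general square left-hand side.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
B = np.array([[9.0],
              [8.0]])
X = np.linalg.solve(A, B)
print(bool(np.allclose(X.ravel(), [2.0, 3.0])))  # True: x = 2, y = 3
print(bool(np.allclose(A @ X, B)))               # True: residual is zero
```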
Compute the determinant of a matrix.
Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
Determinant of the input matrix.
Invert a matrix.
Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
Type of inversion. Default: 0.0
Invert a matrix.
Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
Type of inversion. Default: 0.0
Matrix handle with the inverse matrix.
Transpose a matrix.
Matrix handle of the input matrix.
Transpose a matrix.
Matrix handle of the input matrix.
Matrix handle with the transpose of the input matrix.
Returns the elementwise maximum of a matrix.
Matrix handle of the input matrix.
Type of maximum determination. Default: "columns"
Matrix handle with the maximum values of the input matrix.
Returns the elementwise minimum of a matrix.
Matrix handle of the input matrix.
Type of minimum determination. Default: "columns"
Matrix handle with the minimum values of the input matrix.
Compute the power functions of a matrix.
Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
The power. Default: 2.0
Compute the power functions of a matrix.
Matrix handle of the input matrix.
The type of the input matrix. Default: "general"
The power. Default: 2.0
Matrix handle with the input matrix raised to the given power.
Compute the power functions of the elements of a matrix.
Matrix handle of the input matrix of the base.
Matrix handle of the input matrix with exponents.
Compute the power functions of the elements of a matrix.
Matrix handle of the input matrix of the base.
Matrix handle of the input matrix with exponents.
Matrix handle with the elementwise powers of the input matrix.
Compute the power functions of the elements of a matrix.
Matrix handle of the input matrix.
The power. Default: 2.0
Compute the power functions of the elements of a matrix.
Matrix handle of the input matrix.
The power. Default: 2.0
Matrix handle with the elementwise powers of the input matrix.
Compute the square root values of the elements of a matrix.
Matrix handle of the input matrix.
Compute the square root values of the elements of a matrix.
Matrix handle of the input matrix.
Matrix handle with the square root values of the input matrix.
Compute the absolute values of the elements of a matrix.
Matrix handle of the input matrix.
Compute the absolute values of the elements of a matrix.
Matrix handle of the input matrix.
Matrix handle with the absolute values of the input matrix.
Norm of a matrix.
Matrix handle of the input matrix.
Type of norm. Default: "2-norm"
Norm of the input matrix.
Returns the elementwise mean of a matrix.
Matrix handle of the input matrix.
Type of mean determination. Default: "columns"
Matrix handle with the mean values of the input matrix.
Returns the elementwise sum of a matrix.
Matrix handle of the input matrix.
Type of summation. Default: "columns"
Matrix handle with the sum of the input matrix.
Divide matrices element-by-element.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Divide matrices element-by-element.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the divided values of input matrices.
Multiply matrices element-by-element.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Multiply matrices element-by-element.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the multiplied values of the input matrices.
Scale a matrix.
Matrix handle of the input matrix.
Scale factor. Default: 2.0
Scale a matrix.
Matrix handle of the input matrix.
Scale factor. Default: 2.0
Matrix handle with the scaled elements.
Subtract two matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Subtract two matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the difference of the input matrices.
Add two matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Add two matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Matrix handle with the sum of the input matrices.
Multiply two matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Type of the input matrices. Default: "AB"
Multiply two matrices.
Matrix handle of the input matrix A.
Matrix handle of the input matrix B.
Type of the input matrices. Default: "AB"
Matrix handle of the multiplied matrices.
Get the size of a matrix.
Matrix handle of the input matrix.
Number of rows of the matrix.
Number of columns of the matrix.
Repeat a matrix.
Matrix handle of the input matrix.
Number of copies of input matrix in row direction. Default: 2
Number of copies of input matrix in column direction. Default: 2
Matrix handle of the repeated matrix.
Copy a matrix.
Matrix handle of the input matrix.
Matrix handle of the copied matrix.
Set the diagonal elements of a matrix.
Matrix handle of the input matrix.
Matrix handle containing the diagonal elements to be set.
Position of the diagonal. Default: 0
Get the diagonal elements of a matrix.
Matrix handle of the input matrix.
Number of the desired diagonal. Default: 0
Matrix handle containing the diagonal elements.
Set a sub-matrix of a matrix.
Matrix handle of the input matrix.
Matrix handle of the input sub-matrix.
Upper row position of the sub-matrix in the matrix. Default: 0
Left column position of the sub-matrix in the matrix. Default: 0
Get a sub-matrix of a matrix.
Matrix handle of the input matrix.
Upper row position of the sub-matrix in the input matrix. Default: 0
Left column position of the sub-matrix in the input matrix. Default: 0
Number of rows of the sub-matrix. Default: 1
Number of columns of the sub-matrix. Default: 1
Matrix handle of the sub-matrix.
Set all values of a matrix.
Matrix handle of the input matrix.
Values to be set.
Return all values of a matrix.
Matrix handle of the input matrix.
Values of the matrix elements.
Set one or more elements of a matrix.
Matrix handle of the input matrix.
Row numbers of the matrix elements to be modified. Default: 0
Column numbers of the matrix elements to be modified. Default: 0
Values to be set in the indicated matrix elements. Default: 0
Return one or more elements of a matrix.
Matrix handle of the input matrix.
Row numbers of matrix elements to be returned. Default: 0
Column numbers of matrix elements to be returned. Default: 0
Values of indicated matrix elements.
This operator is inoperable. It had the following function: Clear all matrices from memory.
Free the memory of a matrix.
Matrix handle.
Create a matrix.
Number of rows of the matrix. Default: 3
Number of columns of the matrix. Default: 3
Values for initializing the elements of the matrix. Default: 0
Matrix handle.
This operator is inoperable. It had the following function: Free the memory of all sample identifiers.
Free the memory of a sample identifier.
Handle of the sample identifier.
Deserialize a serialized sample identifier.
Handle of the serialized item.
Handle of the sample identifier.
Read a sample identifier from a file.
File name.
Handle of the sample identifier.
Serialize a sample identifier.
Handle of the sample identifier.
Handle of the serialized item.
Write a sample identifier to a file.
Handle of the sample identifier.
File name.
Identify objects with a sample identifier.
Image showing the object to be identified.
Handle of the sample identifier.
Number of suggested object indices. Default: 1
Rating threshold. Default: 0.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the identified object.
Rating value of the identified object.
Get selected parameters of a sample identifier.
Handle of the sample identifier.
Parameter name. Default: "rating_method"
Parameter value.
Set selected parameters of a sample identifier.
Handle of the sample identifier.
Parameter name. Default: "rating_method"
Parameter value. Default: "score_single"
Retrieve information about an object of a sample identifier.
Handle of the sample identifier.
Index of the object for which information is retrieved.
Defines which kind of object information is retrieved. Default: "num_training_objects"
Information about the object.
Define a name or a description for an object of a sample identifier.
Handle of the sample identifier.
Index of the object for which information is set.
Defines which kind of object information is set. Default: "training_object_name"
Information about the object.
Remove training data from a sample identifier.
Handle of the sample identifier.
Index of the training object from which samples should be removed.
Index of the training sample that should be removed.
Remove preparation data from a sample identifier.
Handle of the sample identifier.
Index of the preparation object from which samples should be removed.
Index of the preparation sample that should be removed.
Train a sample identifier.
Handle of the sample identifier.
Parameter name. Default: []
Parameter value. Default: []
Add training data to an existing sample identifier.
Image that shows an object.
Handle of the sample identifier.
Index of the object visible in the SampleImage.
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Adapt the internal data structure of a sample identifier to the objects to be identified.
Handle of the sample identifier.
Indicates if the preparation data should be removed. Default: "true"
Generic parameter name. Default: []
Generic parameter value. Default: []
Add preparation data to an existing sample identifier.
Image that shows an object.
Handle of the sample identifier.
Index of the object visible in the SampleImage. Default: "unknown"
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Create a new sample identifier.
Parameter name. Default: []
Parameter value. Default: []
Handle of the sample identifier.
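The identification workflow above (train on object samples, then rate a query against all trained objects and return the best NumResults above a rating threshold) can be sketched with a toy model in which each object is a feature vector and the rating is cosine similarity. The real identifier works on image features; the vectors and parameter names below are illustrative only:

```python
import math

# Toy analogue of sample identification: rate a query feature vector
# against every trained object by cosine similarity and return the best
# num_results indices whose rating exceeds rating_threshold.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def identify(trained, query, num_results=1, rating_threshold=0.0):
    rated = sorted(((cosine(query, f), idx) for idx, f in trained.items()),
                   reverse=True)
    return [(idx, r) for r, idx in rated if r >= rating_threshold][:num_results]

trained = {0: [1.0, 0.0, 0.0], 1: [0.0, 1.0, 0.0], 2: [0.7, 0.7, 0.0]}
result = identify(trained, [0.9, 0.1, 0.0], num_results=2)
print([idx for idx, _ in result])  # [0, 2]
```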
Deserialize a serialized shape model.
Handle of the serialized item.
Handle of the model.
Read a shape model from a file.
File name.
Handle of the model.
Serialize a shape model.
Handle of the model.
Handle of the serialized item.
Write a shape model to a file.
Handle of the model.
File name.
This operator is inoperable. It had the following function: Free the memory of all shape models.
Free the memory of a shape model.
Handle of the model.
Return the contour representation of a shape model.
Contour representation of the shape model.
Handle of the model.
Pyramid level for which the contour representation should be returned. Default: 1
Determine the parameters of a shape model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Kind of optimization. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Parameters to be determined automatically. Default: "all"
Name of the automatically determined parameter.
Value of the automatically determined parameter.
Return the parameters of a shape model.
Handle of the model.
Number of pyramid levels.
Smallest rotation of the pattern.
Extent of the rotation angles.
Step length of the angles (resolution).
Minimum scale of the pattern.
Maximum scale of the pattern.
Scale step length (resolution).
Match metric.
Minimum contrast of the objects in the search images.
Return the origin (reference point) of a shape model.
Handle of the model.
Row coordinate of the origin of the shape model.
Column coordinate of the origin of the shape model.
Set the origin (reference point) of a shape model.
Handle of the model.
Row coordinate of the origin of the shape model.
Column coordinate of the origin of the shape model.
Find the best matches of multiple anisotropically scaled shape models.
Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the models in the row direction. Default: 0.9
Maximum scale of the models in the row direction. Default: 1.1
Minimum scale of the models in the column direction. Default: 0.9
Maximum scale of the models in the column direction. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models in the row direction.
Scale of the found instances of the models in the column direction.
Score of the found instances of the models.
Index of the found instances of the models.
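The find_*_shape_model(s) operators above return row, column, angle, scale, and score for each match. As a rough, much-simplified analogue, the sketch below slides a template over an image and reports the best position and normalized cross-correlation score; real shape-based matching instead correlates gradient directions over an image pyramid and prunes the search via the Greediness parameter, so this only mirrors the shape of the result:

```python
# Brute-force template search: best NCC score and top-left position.

def ncc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def find_template(image, templ):
    th, tw = len(templ), len(templ[0])
    flat_t = [v for row in templ for v in row]
    best = (-2.0, 0, 0)                       # (score, row, col)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            win = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            best = max(best, (ncc(win, flat_t), r, c))
    return best

image = [[0] * 6 for _ in range(6)]
image[3][2] = 9                               # bright spot at (3, 2)
templ = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]     # model: centered bright pixel
score, row, col = find_template(image, templ)
print(round(score, 6), row, col)              # 1.0 2 1 (top-left of best window)
```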
Find the best matches of multiple isotropically scaled shape models.
Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the models. Default: 0.9
Maximum scale of the models. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple shape models.
Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of an anisotropically scaled shape model in an image.
Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in the row direction. Default: 0.9
Maximum scale of the model in the row direction. Default: 1.1
Minimum scale of the model in the column direction. Default: 0.9
Maximum scale of the model in the column direction. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model in the row direction.
Scale of the found instances of the model in the column direction.
Score of the found instances of the model.
Find the best matches of an isotropically scaled shape model in an image.
Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model.
Score of the found instances of the model.
Find the best matches of a shape model in an image.
Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
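The parameter listing above maps directly onto the HOperatorSet call for this operator. A minimal, illustrative C# sketch using the listed defaults (the `image` and `modelID` variables are assumed to have been obtained elsewhere; this is not a complete program):

```csharp
// Illustrative sketch only: assumes `image` (HObject) and `modelID` (HTuple)
// were obtained elsewhere, e.g. from a prior shape model creation call.
HTuple row, column, angle, score;
HOperatorSet.FindShapeModel(image, modelID,
    -0.39, 0.79,          // AngleStart, AngleExtent
    0.5,                  // MinScore
    1,                    // NumMatches (0 = all matches)
    0.5,                  // MaxOverlap
    "least_squares",      // SubPixel
    0,                    // NumLevels
    0.9,                  // Greediness
    out row, out column, out angle, out score);
```

Each output tuple holds one entry per found instance, in the order described above.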
Set the metric of a shape model that was created from XLD contours.
Input image used for the determination of the polarity.
Handle of the model.
Transformation matrix.
Match metric. Default: "use_polarity"
Set selected parameters of the shape model.
Handle of the model.
Parameter names.
Parameter values.
Prepare an anisotropically scaled shape model for matching from XLD contours.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare an isotropically scaled shape model for matching from XLD contours.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare a shape model for matching from XLD contours.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare an anisotropically scaled shape model for matching.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare an isotropically scaled shape model for matching.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
Prepare a shape model for matching.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Handle of the model.
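The same parameter order can be seen in an HOperatorSet call. An illustrative C# sketch, assuming `template` is an HObject image whose domain defines the model region:

```csharp
// Illustrative sketch: create a shape model from the domain of `template`
// (an HObject image); parameter order follows the listing above.
HTuple modelID;
HOperatorSet.CreateShapeModel(template,
    "auto",               // NumLevels
    -0.39, 0.79, "auto",  // AngleStart, AngleExtent, AngleStep
    "auto",               // Optimization
    "use_polarity",       // Metric
    "auto", "auto",       // Contrast, MinContrast
    out modelID);
```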
Create the representation of a shape model.
Input image.
Image pyramid of the input image.
Model region pyramid.
Number of pyramid levels. Default: 4
Threshold or hysteresis thresholds for the contrast of the object in the image and optionally minimum size of the object parts. Default: 30
This operator is inoperable. It had the following function: Free the memory of all descriptor models in RAM.
Free the memory of a descriptor model.
Handle of the descriptor model.
Deserialize a descriptor model.
Handle of the serialized item.
Handle of the model.
Serialize a descriptor model.
Handle of a model to be saved.
Handle of the serialized item.
Read a descriptor model from a file.
File name.
Handle of the model.
Write a descriptor model to a file.
Handle of a model to be saved.
The path and filename of the model to be saved.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximal number of found instances. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
3D pose of the object.
Score of the found instances according to the ScoreType input.
Find the best matches of a descriptor model in an image.
Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximal number of found instances. Default: 1
Score type to be evaluated in Score. Default: "num_points"
Homography between model and found instance.
Score of the found instances according to the ScoreType input.
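An illustrative C# sketch of this search via HOperatorSet, following the parameter listing above (`image` and `modelID` are assumed to exist; empty tuples leave the detector and descriptor parameters at their defaults):

```csharp
// Illustrative sketch: search `image` with a descriptor model `modelID`;
// empty HTuples leave detector/descriptor parameters at their defaults.
HTuple homMat2D, score;
HOperatorSet.FindUncalibDescriptorModel(image, modelID,
    new HTuple(), new HTuple(),  // DetectorParamName, DetectorParamValue
    new HTuple(), new HTuple(),  // DescriptorParamName, DescriptorParamValue
    0.2,                         // MinScore
    1,                           // NumMatches
    "num_points",                // ScoreType
    out homMat2D, out score);
```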
Query the interest points of the descriptor model or the last processed search image.
The handle to the descriptor model.
Set of interest points. Default: "model"
Subset of interest points. Default: "all"
Row coordinates of interest points.
Column coordinates of interest points.
Return the parameters of a descriptor model.
The object handle to the descriptor model.
The type of the detector.
The detector's parameter names.
Values of the detector's parameters.
The descriptor's parameter names.
Values of the descriptor's parameters.
Create a descriptor model for calibrated perspective matching.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
The handle to the descriptor model.
Prepare a descriptor model for interest point matching.
Input image whose domain will be used to create the model.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
The handle to the descriptor model.
Query alphanumerical results that were accumulated during descriptor-based matching.
Handle of a descriptor model.
Handle of the object for which the results are queried. Default: "all"
Name of the results to be queried. Default: "num_points"
Returned results.
Return the origin of a descriptor model.
Handle of a descriptor model.
Position of origin in row direction.
Position of origin in column direction.
Sets the origin of a descriptor model.
Handle of a descriptor model.
Translation of origin in row direction. Default: 0
Translation of origin in column direction. Default: 0
Return the origin (reference point) of a deformable model.
Handle of the model.
Row coordinate of the origin of the deformable model.
Column coordinate of the origin of the deformable model.
Set the origin (reference point) of a deformable model.
Handle of the model.
Row coordinate of the origin of the deformable model.
Column coordinate of the origin of the deformable model.
Set selected parameters of the deformable model.
Handle of the model.
Parameter names.
Parameter values.
Return the parameters of a deformable model.
Handle of the model.
Names of the generic parameters that are to be queried for the deformable model. Default: "angle_start"
Values of the generic parameters.
Return the contour representation of a deformable model.
Contour representation of the deformable model.
Handle of the model.
Pyramid level for which the contour representation should be returned. Default: 1
Determine the parameters of a deformable model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Kind of optimization. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The general parameter names. Default: []
Values of the general parameters. Default: []
Parameters to be determined automatically. Default: "all"
Name of the automatically determined parameter.
Value of the automatically determined parameter.
Deserialize a deformable model.
Handle of the serialized item.
Handle of the model.
Serialize a deformable model.
Handle of a model to be saved.
Handle of the serialized item.
Read a deformable model from a file.
File name.
Handle of the model.
Write a deformable model to a file.
Handle of a model to be saved.
The path and filename of the model to be saved.
This operator is inoperable. It had the following function: Free the memory of all deformable models.
Free the memory of a deformable model.
Handle of the model.
Find the best matches of a local deformable model in an image.
Input image in which the model should be found.
Rectified image of the found model.
Vector field of the rectification transformation.
Contours of the found instances of the model.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching. Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Switch for requested iconic result. Default: []
The general parameter names. Default: []
Values of the general parameters. Default: []
Scores of the found instances of the model.
Row coordinates of the found instances of the model.
Column coordinates of the found instances of the model.
Find the best matches of a calibrated deformable model in an image and return their 3D pose.
Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
Pose of the object.
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the model.
Find the best matches of a planar projective invariant deformable model in an image.
Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model in row direction. Default: 1.0
Maximum scale of the model in row direction. Default: 1.0
Minimum scale of the model in column direction. Default: 1.0
Maximum scale of the model in column direction. Default: 1.0
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 1.0
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
The general parameter names. Default: []
Values of the general parameters. Default: []
Homographies between model and found instances.
Score of the found instances of the model.
Set the metric of a local deformable model that was created from XLD contours.
Input image used for the determination of the polarity.
Vector field of the local deformation.
Handle of the model.
Match metric. Default: "use_polarity"
Set the metric of a planar calibrated deformable model that was created from XLD contours.
Input image used for the determination of the polarity.
Handle of the model.
Pose of the model in the image.
Match metric. Default: "use_polarity"
Set the metric of a planar uncalibrated deformable model that was created from XLD contours.
Input image used for the determination of the polarity.
Handle of the model.
Transformation matrix.
Match metric. Default: "use_polarity"
Prepare a deformable model for local deformable matching from XLD contours.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Creates a deformable model for local, deformable matching.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Create a deformable model for calibrated perspective matching.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object in the reference image.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Creates a deformable model for uncalibrated, perspective matching.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
This operator is inoperable. It had the following function: Free the memory of all NCC models.
Free the memory of an NCC model.
Handle of the model.
Deserialize an NCC model.
Handle of the serialized item.
Handle of the model.
Serialize an NCC model.
Handle of the model.
Handle of the serialized item.
Read an NCC model from a file.
File name.
Handle of the model.
Write an NCC model to a file.
Handle of the model.
File name.
Determine the parameters of an NCC model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Match metric. Default: "use_polarity"
Parameters to be determined automatically. Default: "all"
Name of the automatically determined parameter.
Value of the automatically determined parameter.
Return the parameters of an NCC model.
Handle of the model.
Number of pyramid levels.
Smallest rotation of the pattern.
Extent of the rotation angles.
Step length of the angles (resolution).
Match metric.
Return the origin (reference point) of an NCC model.
Handle of the model.
Row coordinate of the origin of the NCC model.
Column coordinate of the origin of the NCC model.
Set the origin (reference point) of an NCC model.
Handle of the model.
Row coordinate of the origin of the NCC model.
Column coordinate of the origin of the NCC model.
Find the best matches of an NCC model in an image.
Input image in which the model should be found.
Handle of the model.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.8
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
Set selected parameters of the NCC model.
Handle of the model.
Parameter names.
Parameter values.
Prepare an NCC model for matching.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Match metric. Default: "use_polarity"
Handle of the model.
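An illustrative C# sketch combining NCC model creation and search via HOperatorSet, following the two parameter listings above (`template` and `searchImage` are assumed HObject images):

```csharp
// Illustrative sketch: prepare an NCC model from `template` and search for it
// in `searchImage`; parameter order follows the listings above.
HTuple modelID;
HOperatorSet.CreateNccModel(template,
    "auto", -0.39, 0.79, "auto",  // NumLevels, AngleStart, AngleExtent, AngleStep
    "use_polarity",               // Metric
    out modelID);

HTuple row, column, angle, score;
HOperatorSet.FindNccModel(searchImage, modelID,
    -0.39, 0.79,   // AngleStart, AngleExtent
    0.8,           // MinScore
    1,             // NumMatches
    0.5,           // MaxOverlap
    "true",        // SubPixel
    0,             // NumLevels
    out row, out column, out angle, out score);
```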
Return the components of a found instance of a component model.
Found components of the selected component model instance.
Handle of the component model.
Start index of each found instance of the component model in the tuples describing the component matches.
End index of each found instance of the component model in the tuples describing the component matches.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
Index of the found instance of the component model to be returned.
Mark the orientation of the components. Default: "false"
Row coordinate of all components of the selected model instance.
Column coordinate of all components of the selected model instance.
Rotation angle of all components of the selected model instance.
Score of all components of the selected model instance.
Find the best matches of a component model in an image.
Input image in which the component model should be found.
Handle of the component model.
Index of the root component.
Smallest rotation of the root component. Default: -0.39
Extent of the rotation of the root component. Default: 0.79
Minimum score of the instances of the component model to be found. Default: 0.5
Number of instances of the component model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the component models to be found. Default: 0.5
Behavior if the root component is missing. Default: "stop_search"
Behavior if a component is missing. Default: "prune_branch"
Pose prediction of components that are not found. Default: "none"
Minimum score of the instances of the components to be found. Default: 0.5
Subpixel accuracy of the component poses if not equal to 'none'. Default: "least_squares"
Number of pyramid levels for the components used in the matching (and lowest pyramid level to use if $|NumLevelsComp| = 2n$). Default: 0
"Greediness" of the search heuristic for the components (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Start index of each found instance of the component model in the tuples describing the component matches.
End index of each found instance of the component model in the tuples describing the component matches.
Score of the found instances of the component model.
Row coordinate of the found component matches.
Column coordinate of the found component matches.
Rotation angle of the found component matches.
Score of the found component matches.
Index of the found components.
This operator is inoperable. It had the following function: Free the memory of all component models.
Free the memory of a component model.
Handle of the component model.
Return the search tree of a component model.
Search tree.
Relations of components that are connected in the search tree.
Handle of the component model.
Index of the root component.
Image for which the tree is to be returned. Default: "model_image"
Component index of the start node of an arc in the search tree.
Component index of the end node of an arc in the search tree.
Row coordinate of the center of the rectangle representing the relation.
Column coordinate of the center of the rectangle representing the relation.
Orientation of the rectangle representing the relation (radians).
First radius (half length) of the rectangle representing the relation.
Second radius (half width) of the rectangle representing the relation.
Smallest relative orientation angle.
Extent of the relative orientation angle.
Return the parameters of a component model.
Handle of the component model.
Minimum score of the instances of the components to be found.
Ranking of the model components expressing their suitability to act as root component.
Handles of the shape models of the individual model components.
Deserialize a serialized component model.
Handle of the serialized item.
Handle of the component model.
Serialize a component model.
Handle of the component model.
Handle of the serialized item.
Read a component model from a file.
File name.
Handle of the component model.
Write a component model to a file.
Handle of the component model.
File name.
Prepare a component model for matching based on explicitly specified components and relations.
Input image from which the shape models of the model components should be created.
Input regions from which the shape models of the model components should be created.
Variation of the model components in row direction.
Variation of the model components in column direction.
Angle variation of the model components.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Lower hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Upper hysteresis threshold for the contrast of the components in the model image. Default: "auto"
Minimum size of the contour regions in the model. Default: "auto"
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Handle of the component model.
Ranking of the model components expressing the suitability to act as the root component.
Prepare a component model for matching based on trained components.
Handle of the training result.
Smallest rotation of the component model. Default: -0.39
Extent of the rotation of the component model. Default: 0.79
Minimum contrast of the components in the search images. Default: "auto"
Minimum score of the instances of the components to be found. Default: 0.5
Maximum number of pyramid levels for the components. Default: "auto"
Step length of the angles (resolution) for the components. Default: "auto"
Kind of optimization for the components. Default: "auto"
Match metric used for the components. Default: "use_polarity"
Complete pregeneration of the shape models for the components if equal to 'true'. Default: "false"
Handle of the component model.
Ranking of the model components expressing the suitability to act as the root component.
This operator is inoperable. It had the following function: Free the memory of all component training results.
Free the memory of a component training result.
Handle of the training result.
Return the relations between the model components that are contained in a training result.
Region representation of the relations.
Handle of the training result.
Index of the reference component.
Image for which the component relations are to be returned. Default: "model_image"
Row coordinate of the center of the rectangle representing the relation.
Column coordinate of the center of the rectangle representing the relation.
Orientation of the rectangle representing the relation (radians).
First radius (half length) of the rectangle representing the relation.
Second radius (half width) of the rectangle representing the relation.
Smallest relative orientation angle.
Extent of the relative orientation angle.
Return the initial or model components in a certain image.
Contour regions of the initial components or of the model components.
Handle of the training result.
Type of returned components or index of an initial component. Default: "model_components"
Image for which the components are to be returned. Default: "model_image"
Mark the orientation of the components. Default: "false"
Row coordinate of the found instances of all initial components or model components.
Column coordinate of the found instances of all initial components or model components.
Rotation angle of the found instances of all components.
Score of the found instances of all components.
Modify the relations within a training result.
Handle of the training result.
Model component(s) relative to which the movement(s) should be modified. Default: "all"
Model component(s) of which the relative movement(s) should be modified. Default: "all"
Change of the position relation in pixels.
Change of the orientation relation in radians.
Deserialize a component training result.
Handle of the serialized item.
Handle of the training result.
Serialize a component training result.
Handle of the training result.
Handle of the serialized item.
Read a component training result from a file.
File name.
Handle of the training result.
Write a component training result to a file.
Handle of the training result.
File name.
Adopt new parameters that are used to create the model components into the training result.
Training images that were used for training the model components.
Contour regions of rigid model components.
Handle of the training result.
Criterion for solving the ambiguities. Default: "rigidity"
Maximum contour overlap of the found initial components. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Inspect the rigid model components obtained from the training.
Contour regions of rigid model components.
Handle of the training result.
Criterion for solving the ambiguities. Default: "rigidity"
Maximum contour overlap of the found initial components. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Train components and relations for the component-based matching.
Input image from which the shape models of the initial components should be created.
Contour regions or enclosing regions of the initial components.
Training images that are used for training the model components.
Contour regions of rigid model components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of connected contour regions. Default: "auto"
Minimum score of the instances of the initial components to be found. Default: 0.5
Search tolerance in row direction. Default: -1
Search tolerance in column direction. Default: -1
Angle search tolerance. Default: -1
Decision whether the training emphasis should lie on a fast computation or on a high robustness. Default: "speed"
Criterion for solving ambiguous matches of the initial components in the training images. Default: "rigidity"
Maximum contour overlap of the found initial components in a training image. Default: 0.2
Threshold for clustering the initial components. Default: 0.5
Handle of the training result.
Extract the initial components of a component model.
Input image from which the initial components should be extracted.
Contour regions of initial components.
Lower hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Upper hysteresis threshold for the contrast of the initial components in the image. Default: "auto"
Minimum size of the initial components. Default: "auto"
Type of automatic segmentation. Default: "connection"
Names of optional control parameters. Default: []
Values of optional control parameters. Default: []
Get details of a result from deformable surface based matching.
Handle of the deformable surface matching result.
Name of the result property. Default: "sampled_scene"
Index of the result property. Default: 0
Value of the result property.
Free the memory of a deformable surface matching result.
Handle of the deformable surface matching result.
Free the memory of a deformable surface model.
Handle of the deformable surface model.
Deserialize a deformable surface model.
Handle of the serialized item.
Handle of the deformable surface model.
Serialize a deformable surface model.
Handle of the deformable surface model.
Handle of the serialized item.
Read a deformable surface model from a file.
Name of the file to read.
Handle of the read deformable surface model.
Write a deformable surface model to a file.
Handle of the deformable surface model to write.
File name to write to.
Refine the position and deformation of a deformable surface model in a 3D scene.
Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Relative sampling distance of the scene. Default: 0.05
Initial deformation of the 3D object model.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the refined model.
Handle of the matching result.
Find the best match of a deformable surface model in a 3D scene.
Handle of the deformable surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Minimum score of the returned match. Default: 0
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result.
Return the parameters and properties of a deformable surface model.
Handle of the deformable surface model.
Name of the parameter. Default: "sampled_model"
Value of the parameter.
Add a reference point to a deformable surface model.
Handle of the deformable surface model.
x-coordinates of a reference point.
y-coordinates of a reference point.
z-coordinates of a reference point.
Index of the new reference point.
Add a sample deformation to a deformable surface model.
Handle of the deformable surface model.
Handle of the deformed 3D object model.
Create the data structure needed to perform deformable surface-based matching.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.05
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the deformable surface model.
Get details of a result from surface based matching.
Handle of the surface matching result.
Name of the result property. Default: "pose"
Index of the matching result, starting with 0. Default: 0
Value of the result property.
This operator is inoperable. It had the following function: Free the memory of all surface matching results.
Free the memory of a surface matching result.
Handle of the surface matching result.
This operator is inoperable. It had the following function: Free the memory of all surface models.
Free the memory of a surface model.
Handle of the surface model.
Deserialize a surface model.
Handle of the serialized item.
Handle of the surface model.
Serialize a surface model.
Handle of the surface model.
Handle of the serialized item.
Read a surface model from a file.
Name of the SFM file.
Handle of the read surface model.
Write a surface model to a file.
Handle of the surface model.
File name.
Refine the pose of a surface model in a 3D scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D pose of the surface model in the scene.
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
Find the best matches of a surface model in a 3D scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D pose of the surface model in the scene.
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
Return the parameters and properties of a surface model.
Handle of the surface model.
Name of the parameter. Default: "diameter"
Value of the parameter.
Create the data structure needed to perform surface-based matching.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the surface model.
Create a 3D camera pose from camera center and viewing direction.
X coordinate of the optical center of the camera.
Y coordinate of the optical center of the camera.
Z coordinate of the optical center of the camera.
X coordinate of the 3D point to which the camera is directed.
Y coordinate of the 3D point to which the camera is directed.
Z coordinate of the 3D point to which the camera is directed.
Normal vector of the reference plane (points up). Default: "-y"
Camera roll angle. Default: 0
3D camera pose.
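The pose construction above amounts to building a rotation whose z-axis points from the camera center toward the target point, disambiguated by an up-vector hint. A minimal, library-independent Python sketch (function names and the up-vector default are illustrative assumptions, not the HALCON API; the roll angle is omitted for brevity):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at_rotation(cam_center, look_at, up=(0.0, -1.0, 0.0)):
    """Rotation matrix (rows = camera x/y/z axes in world coordinates)
    whose z-axis points from the camera center toward the target.
    up=(0,-1,0) mirrors the "-y" reference-plane default, since image
    y grows downward; degenerate if the view direction is parallel to up."""
    z = normalize(tuple(t - c for t, c in zip(look_at, cam_center)))
    x = normalize(cross(up, z))   # camera x-axis, orthogonal to the up hint
    y = cross(z, x)               # completes the right-handed frame
    return (x, y, z)
```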
Convert spherical coordinates of a 3D point to Cartesian coordinates.
Longitude of the 3D point.
Latitude of the 3D point.
Radius of the 3D point.
Normal vector of the equatorial plane (points to the north pole). Default: "-y"
Coordinate axis in the equatorial plane that points to the zero meridian. Default: "-z"
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Convert Cartesian coordinates of a 3D point to spherical coordinates.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Normal vector of the equatorial plane (points to the north pole). Default: "-y"
Coordinate axis in the equatorial plane that points to the zero meridian. Default: "-z"
Longitude of the 3D point.
Latitude of the 3D point.
Radius of the 3D point.
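Both conversions above are plain trigonometry. The sketch below uses the standard mathematical convention (z-axis toward the north pole, x-axis toward the zero meridian); HALCON's defaults ("-y" pole, "-z" meridian) are axis permutations of this. Function names are illustrative, not the halcondotnet API:

```python
import math

def spherical_to_cartesian(lon, lat, radius):
    # Standard convention: pole on +z, zero meridian on +x.
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    radius = math.sqrt(x * x + y * y + z * z)
    lat = math.asin(z / radius)     # latitude in [-pi/2, pi/2]
    lon = math.atan2(y, x)          # longitude in (-pi, pi]
    return lon, lat, radius
```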
This operator is inoperable. It had the following function: Free the memory of all 3D shape models.
Free the memory of a 3D shape model.
Handle of the 3D shape model.
Deserialize a serialized 3D shape model.
Handle of the serialized item.
Handle of the 3D shape model.
Serialize a 3D shape model.
Handle of the 3D shape model.
Handle of the serialized item.
Read a 3D shape model from a file.
File name.
Handle of the 3D shape model.
Write a 3D shape model to a file.
Handle of the 3D shape model.
File name.
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference coordinate system of a 3D shape model and vice versa.
Handle of the 3D shape model.
Pose to be transformed in the source system.
Direction of the transformation. Default: "ref_to_model"
Transformed 3D pose in the target system.
Project the edges of a 3D shape model into image coordinates.
Contour representation of the model view.
Handle of the 3D shape model.
Internal camera parameters.
3D pose of the 3D shape model in the world coordinate system.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Return the contour representation of a 3D shape model view.
Contour representation of the model view.
Handle of the 3D shape model.
Pyramid level for which the contour representation should be returned. Default: 1
View for which the contour representation should be returned. Default: 1
3D pose of the 3D shape model at the current view.
Return the parameters of a 3D shape model.
Handle of the 3D shape model.
Names of the generic parameters that are to be queried for the 3D shape model. Default: "num_levels_max"
Values of the generic parameters.
Find the best matches of a 3D shape model in an image.
Input image in which the model should be found.
Handle of the 3D shape model.
Minimum score of the instances of the model to be found. Default: 0.7
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
3D pose of the 3D shape model.
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the 3D shape model.
Prepare a 3D object model for matching.
Handle of the 3D object model.
Internal camera parameters.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handle of the 3D shape model.
Simplify a triangulated 3D object model.
Handle of the 3D object model that should be simplified.
Method that should be used for simplification. Default: "preserve_point_coordinates"
Degree of simplification (default: percentage of remaining model points).
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the simplified 3D object model.
Compute the distances of the points of one 3D object model to another 3D object model.
Handle of the source 3D object model.
Handle of the target 3D object model.
Pose of the source 3D object model in the target 3D object model. Default: []
Maximum distance of interest. Default: 0
Names of the generic input parameters. Default: []
Values of the generic input parameters. Default: []
Combine several 3D object models to a new 3D object model.
Handle of input 3D object models.
Method used for the union. Default: "points_surface"
Handle of the resulting 3D object model.
Set attributes of a 3D object model.
Handle of the 3D object model.
Name of the attributes.
Defines where extended attributes are attached to. Default: []
Attribute values.
Set attributes of a 3D object model.
Handle of the input 3D object model.
Name of the attributes.
Defines where extended attributes are attached to. Default: []
Attribute values.
Handle of the resulting 3D object model.
Create an empty 3D object model.
Handle of the new 3D object model.
Sample a 3D object model.
Handle of the 3D object model to be sampled.
Selects between the different subsampling methods. Default: "fast"
Sampling distance. Default: 0.05
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Handle of the 3D object model that contains the sampled points.
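Conceptually, subsampling at a fixed sampling distance can be pictured as a voxel grid that keeps one point per cell. An illustrative Python sketch (not the HALCON implementation; a relative sampling distance would first be multiplied by the model diameter to obtain the absolute cell size used here):

```python
def sample_points(points, sample_distance):
    """Greedy voxel-grid subsampling: keep at most one point per cubic
    cell of edge length sample_distance."""
    seen = set()
    kept = []
    for x, y, z in points:
        cell = (int(x // sample_distance),
                int(y // sample_distance),
                int(z // sample_distance))
        if cell not in seen:
            seen.add(cell)
            kept.append((x, y, z))
    return kept
```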
Improve the relative transformations between 3D object models based on their overlaps.
Handles of several 3D object models.
Approximate relative transformations between the 3D object models.
Type of interpretation for the transformations. Default: "global"
Target indices of the transformations if From specifies the source indices, otherwise the parameter must be empty. Default: []
Names of the generic parameters that can be adjusted for the global 3D object model registration. Default: []
Values of the generic parameters that can be adjusted for the global 3D object model registration. Default: []
Resulting transformations.
Number of overlapping neighbors for each 3D object model.
Search for a transformation between two 3D object models.
Handle of the first 3D object model.
Handle of the second 3D object model.
Method for the registration. Default: "matching"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Pose to transform ObjectModel3D1 in the reference frame of ObjectModel3D2.
Overlapping of the two 3D object models.
Create a 3D object model that represents a point cloud from a set of 3D points.
The x-coordinates of the points in the 3D point cloud.
The y-coordinates of the points in the 3D point cloud.
The z-coordinates of the points in the 3D point cloud.
Handle of the resulting 3D object model.
Create a 3D object model that represents a box.
The pose that describes the position and orientation of the box. The pose has its origin in the center of the box.
The length of the box along the x-axis.
The length of the box along the y-axis.
The length of the box along the z-axis.
Handle of the resulting 3D object model.
Create a 3D object model that represents a plane.
The center and the rotation of the plane.
x coordinates specifying the extent of the plane.
y coordinates specifying the extent of the plane.
Handle of the resulting 3D object model.
Create a 3D object model that represents a sphere from x,y,z coordinates.
The x-coordinate of the center point of the sphere.
The y-coordinate of the center point of the sphere.
The z-coordinate of the center point of the sphere.
The radius of the sphere.
Handle of the resulting 3D object model.
Create a 3D object model that represents a sphere.
The pose that describes the position of the sphere.
The radius of the sphere.
Handle of the resulting 3D object model.
Create a 3D object model that represents a cylinder.
The pose that describes the position and orientation of the cylinder.
The radius of the cylinder.
Lowest z-coordinate of the cylinder in the direction of the rotation axis.
Highest z-coordinate of the cylinder in the direction of the rotation axis.
Handle of the resulting 3D object model.
Calculate the smallest bounding box around the points of a 3D object model.
Handle of the 3D object model.
The method that is used to estimate the smallest box. Default: "oriented"
The pose that describes the position and orientation of the box that is generated. The pose has its origin in the center of the box and is oriented such that the x-axis is aligned with the longest side of the box.
The length of the longest side of the box.
The length of the second longest side of the box.
The length of the third longest side of the box.
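In the axis-aligned case the computation reduces to per-axis minima and maxima; the 'oriented' method additionally searches over rotations. A sketch of the axis-aligned variant only (illustrative, not the HALCON implementation):

```python
def smallest_aligned_box(points):
    """Axis-aligned bounding box of a 3D point set: center plus side
    lengths sorted from longest to shortest."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    center = tuple((lo + hi) / 2 for lo, hi in zip(mins, maxs))
    lengths = sorted((hi - lo for lo, hi in zip(mins, maxs)), reverse=True)
    return center, lengths
```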
Calculate the smallest sphere around the points of a 3D object model.
Handle of the 3D object model.
x-, y-, and z-coordinates describing the center point of the sphere.
The estimated radius of the sphere.
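A rough sketch of a bounding sphere: centroid as center, maximum point distance as radius. This only yields an upper bound on the true smallest enclosing sphere (which requires e.g. Welzl's algorithm) and is shown purely for illustration:

```python
import math

def bounding_sphere(points):
    """Approximate enclosing sphere: centroid center, max-distance radius."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    r = max(math.dist((cx, cy, cz), p) for p in points)
    return (cx, cy, cz), r
```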
Intersect a 3D object model with a plane.
Handle of the 3D object model.
Pose of the plane. Default: [0,0,0,0,0,0,0]
Handle of the 3D object model that describes the intersection as a set of lines.
Calculate the convex hull of a 3D object model.
Handle of the 3D object model.
Handle of the 3D object model that describes the convex hull.
Select 3D object models from an array of 3D object models according to global features.
Handles of the available 3D object models to select.
List of features a test is performed on. Default: "has_triangles"
Logical operation to combine the features given in Feature. Default: "and"
Minimum value for the given feature. Default: 1
Maximum value for the given feature. Default: 1
A subset of ObjectModel3D fulfilling the given conditions.
Calculate the area of all faces of a 3D object model.
Handle of the 3D object model.
Calculated area.
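The area of a triangulated model is the sum of triangle areas, each half the magnitude of an edge cross product. Illustrative sketch over an indexed triangle list (an assumed mesh representation, not the HALCON data structure):

```python
def mesh_area(vertices, triangles):
    """Total area of a triangle mesh: 0.5 * |(B-A) x (C-A)| per face."""
    total = 0.0
    for i, j, k in triangles:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = vertices[i], vertices[j], vertices[k]
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        total += 0.5 * (nx * nx + ny * ny + nz * nz) ** 0.5
    return total
```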
Calculate the maximal diameter of a 3D object model.
Handle of the 3D object model.
Calculated diameter.
Calculates the mean or the central moment of second order for a 3D object model.
Handle of the 3D object model.
Moment to calculate. Default: "mean_points"
Calculated moment.
Calculate the volume of a 3D object model.
Handle of the 3D object model.
Pose of the plane. Default: [0,0,0,0,0,0,0]
Method to combine volumes laying above and below the reference plane. Default: "signed"
Decides whether the orientation of a face should affect the resulting sign of the underlying volume. Default: "true"
Absolute value of the calculated volume.
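For a closed triangle mesh, the signed volume follows from the divergence theorem: sum the signed tetrahedron volumes spanned by the origin and each face. Illustrative sketch assuming consistent outward winding (not the HALCON implementation, which additionally handles the reference plane and the mode options above):

```python
def mesh_volume(vertices, triangles):
    """Signed volume of a closed triangle mesh: sum of det(A, B, C) / 6
    over all faces; outward-facing winding yields a positive result."""
    vol = 0.0
    for i, j, k in triangles:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = vertices[i], vertices[j], vertices[k]
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx)) / 6.0
    return vol
```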
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Region in the image plane.
Handle of the 3D object model.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Handle of the reduced 3D object model.
Determine the connected components of the 3D object model.
Handle of the 3D object model.
Attribute used to calculate the connected components. Default: "distance_3d"
Maximum value for the distance between two connected components. Default: 1.0
Handle of the 3D object models that represent the connected components.
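Connected components under a 3D-distance criterion amount to clustering points whose pairwise distance stays below the threshold. A simple union-find sketch (O(n^2), illustrative only; a practical implementation would use spatial indexing):

```python
import math

def connect_by_distance(points, max_distance):
    """Group point indices into components: two points are connected
    if their Euclidean distance is at most max_distance."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= max_distance:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```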
Apply a threshold to an attribute of 3D object models.
Handle of the 3D object models.
Attributes the threshold is applied to. Default: "point_coord_z"
Minimum value for the attributes specified by Attrib. Default: 0.5
Maximum value for the attributes specified by Attrib. Default: 1.0
Handle of the reduced 3D object models.
Get the depth or the index of a displayed 3D object model.
Window handle.
Row coordinates.
Column coordinates.
Information. Default: "depth"
Indices or the depth of the objects at (Row,Column).
Render 3D object models to get an image.
Rendered scene.
Handles of the 3D object models.
Camera parameters of the scene.
3D poses of the objects.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Display 3D object models.
Window handle.
Handles of the 3D object models.
Camera parameters of the scene. Default: []
3D poses of the objects. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Copy a 3D object model.
Handle of the input 3D object model.
Attributes to be copied. Default: "all"
Handle of the copied 3D object model.
Prepare a 3D object model for a certain operation.
Handle of the 3D object model.
Purpose of the 3D object model. Default: "shape_based_matching_3d"
Specify if already existing data should be overwritten. Default: "true"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Transform 3D points from a 3D object model to images.
Image with the x-coordinates of the 3D points.
Image with the y-coordinates of the 3D points.
Image with the z-coordinates of the 3D points.
Handle of the 3D object model.
Type of the conversion. Default: "cartesian"
Camera parameters.
Pose of the 3D object model.
Transform 3D points from images to a 3D object model.
Image with the x-coordinates and the ROI of the 3D points.
Image with the y-coordinates of the 3D points.
Image with the z-coordinates of the 3D points.
Handle of the 3D object model.
Return attributes of 3D object models.
Handle of the 3D object model.
Names of the generic attributes that are queried for the 3D object model. Default: "num_points"
Values of the generic parameters.
Project a 3D object model into image coordinates.
Projected model contours.
Handle of the 3D object model.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Apply a rigid 3D transformation to 3D object models.
Handles of the 3D object models.
Poses.
Handles of the transformed 3D object models.
Apply an arbitrary projective 3D transformation to 3D object models.
Handles of the 3D object models.
Homogeneous projective transformation matrix.
Handles of the transformed 3D object models.
Apply an arbitrary affine 3D transformation to 3D object models.
Handles of the 3D object models.
Transformation matrices.
Handles of the transformed 3D object models.
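Applying an affine 3D transformation is a matrix-vector product per point. Illustrative sketch using a 3x4 [R | t] row layout (an assumption for this example; HALCON stores its own HomMat3D representation):

```python
def affine_trans_points(matrix, points):
    """Apply a 3x4 affine transformation [R | t] to a list of 3D points.
    matrix: three rows of four coefficients (rotation/scale plus translation)."""
    out = []
    for x, y, z in points:
        out.append(tuple(m[0] * x + m[1] * y + m[2] * z + m[3] for m in matrix))
    return out
```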
This operator is inoperable. It had the following function: Free the memory of all 3D object models.
Free the memory of a 3D object model.
Handle of the 3D object model.
Serialize a 3D object model.
Handle of the 3D object model.
Handle of the serialized item.
Deserialize a serialized 3D object model.
Handle of the serialized item.
Handle of the 3D object model.
Writes a 3D object model to a file.
Handle of the 3D object model.
Type of the file that is written. Default: "om3"
Name of the file that is written.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Read a 3D object model from a file.
Filename of the file to be read. Default: "mvtec_bunny_normals"
Scale of the data in the file. Default: "m"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Handle of the 3D object model.
Status information.
Read the description file of a Kalman filter.
Description file for a Kalman filter. Default: "kalman.init"
The dimensions of the state vector, the measurement vector and the controller vector.
The lined up matrices A, C, Q, possibly G and u, and if necessary L stored in row-major order.
The matrix R stored in row-major order.
The matrix P0 (error covariance matrix of the initial state estimate) stored in row-major order and the initial state estimate x0 lined up.
Read an update file of a Kalman filter.
Update file for a Kalman filter. Default: "kalman.updt"
The dimensions of the state vector, measurement vector and controller vector. Default: [3,1,0]
The lined up matrices A, C, Q, possibly G and u, and if necessary L, all stored in row-major order. Default: [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
The matrix R stored in row-major order. Default: [1,2]
The dimensions of the state vector, measurement vector and controller vector.
The lined up matrices A, C, Q, possibly G and u, and if necessary L, all stored in row-major order.
The matrix R stored in row-major order.
Estimate the current state of a system with the help of the Kalman filtering.
The dimensions of the state vector, the measurement and the controller vector. Default: [3,1,0]
The lined up matrices A,C,Q, possibly G and u, and if necessary L which have been stored in row-major order. Default: [1.0,1.0,0.5,0.0,1.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,54.3,37.9,48.0,37.9,34.3,42.5,48.0,42.5,43.7]
The matrix R stored in row-major order and the measurement vector y lined up. Default: [1.2,1.0]
The matrix P* (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x* lined up. Default: [0.0,0.0,0.0,0.0,180.5,0.0,0.0,0.0,100.0,0.0,100.0,0.0]
The matrix P* (the extrapolation-error covariances) stored in row-major order and the extrapolation vector x* lined up.
The matrix P~ (the estimation-error covariances) stored in row-major order and the estimated state x~ lined up.
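For reference, these operators implement the standard discrete-time Kalman filter recursion. With the system model x_{k+1} = A x_k + G u_k + v_k (system noise covariance Q) and the measurement model y_k = C x_k + w_k (measurement noise covariance R), the extrapolation (P*, x*) and estimation (P~, x~) quantities named above are updated as follows. This is the textbook form; the optional coupling matrix L accepted by the operators is omitted here.

```latex
\begin{aligned}
  &\text{Extrapolation:} &
  x^{*}_{k} &= A\,\tilde{x}_{k-1} + G\,u_{k-1}, &
  P^{*}_{k} &= A\,\tilde{P}_{k-1}A^{\top} + Q,\\
  &\text{Gain:} &
  K_{k} &= P^{*}_{k}C^{\top}\bigl(C\,P^{*}_{k}C^{\top} + R\bigr)^{-1},\\
  &\text{Estimation:} &
  \tilde{x}_{k} &= x^{*}_{k} + K_{k}\bigl(y_{k} - C\,x^{*}_{k}\bigr), &
  \tilde{P}_{k} &= \bigl(I - K_{k}C\bigr)P^{*}_{k}.
\end{aligned}
```

The recursion starts from the initial state estimate x0 and its error covariance P0 as read by the description file.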
Query the information slots available for the operator get_operator_info.
Slot names for the operator get_operator_info.
Query the slots of the online information available via the operator get_param_info.
Slot names for the operator get_param_info.
Get operators with the given string as a substring of their name.
Substring of the sought names (an empty string matches all names). Default: "info"
Detected operator names.
Get the default data types for the control parameters of a HALCON operator.
Name of the operator. Default: "get_param_types"
Default type of the input control parameters.
Default type of the output control parameters.
Get the number of the different parameter classes of a HALCON operator.
Name of the operator. Default: "get_param_num"
Name of the called C-function.
Number of the input object parameters.
Number of the output object parameters.
Number of the input control parameters.
Number of the output control parameters.
System operator or user procedure.
Get the names of the parameters of a HALCON operator.
Name of the operator. Default: "get_param_names"
Names of the input objects.
Names of the output objects.
Names of the input control parameters.
Names of the output control parameters.
Get information concerning a HALCON operator.
Name of the operator on which more information is needed. Default: "get_operator_info"
Desired information. Default: "abstract"
Information (empty if no information is available).
Get information concerning the operator parameters.
Name of the operator on whose parameter more information is needed. Default: "get_param_info"
Name of the parameter on which more information is needed. Default: "Slot"
Desired information. Default: "description"
Information (empty in case there is no information available).
Search names of all operators assigned to one keyword.
Keyword for which corresponding operators are searched. Default: "Information"
Operators whose slot 'keyword' contains the keyword.
Get keywords which are assigned to operators.
Substring in the names of those operators for which keywords are needed. Default: "get_keywords"
Keywords for the operators.
Get information concerning the chapters on operators.
Operator class or subclass of interest. Default: ""
Operator classes (if Chapter = ""), or operator subclasses or operators, respectively.
Convert one-channel images into a multi-channel image.
One-channel images to be combined into a multi-channel image.
Multi-channel image.
Convert a multi-channel image into one-channel images.
Multi-channel image to be decomposed.
Generated one-channel images.
Convert 7 images into a seven-channel image.
Input image 1.
Input image 2.
Input image 3.
Input image 4.
Input image 5.
Input image 6.
Input image 7.
Multi-channel image.
Convert 6 images into a six-channel image.
Input image 1.
Input image 2.
Input image 3.
Input image 4.
Input image 5.
Input image 6.
Multi-channel image.
Convert 5 images into a five-channel image.
Input image 1.
Input image 2.
Input image 3.
Input image 4.
Input image 5.
Multi-channel image.
Convert 4 images into a four-channel image.
Input image 1.
Input image 2.
Input image 3.
Input image 4.
Multi-channel image.
Convert 3 images into a three-channel image.
Input image 1.
Input image 2.
Input image 3.
Multi-channel image.
Convert two images into a two-channel image.
Input image 1.
Input image 2.
Multi-channel image.
Convert a seven-channel image into seven images.
Multi-channel image.
Output image 1.
Output image 2.
Output image 3.
Output image 4.
Output image 5.
Output image 6.
Output image 7.
Convert a six-channel image into six images.
Multi-channel image.
Output image 1.
Output image 2.
Output image 3.
Output image 4.
Output image 5.
Output image 6.
Convert a five-channel image into five images.
Multi-channel image.
Output image 1.
Output image 2.
Output image 3.
Output image 4.
Output image 5.
Convert a four-channel image into four images.
Multi-channel image.
Output image 1.
Output image 2.
Output image 3.
Output image 4.
Convert a three-channel image into three images.
Multi-channel image.
Output image 1.
Output image 2.
Output image 3.
Convert a two-channel image into two images.
Multi-channel image.
Output image 1.
Output image 2.
Count channels of image.
One- or multi-channel image.
Number of channels.
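The compose/decompose/count operators above are pure channel packing and unpacking. The principle can be sketched outside HALCON, with plain Python lists standing in for image channels (an illustration only, not the halcondotnet API):

```python
# Illustrative sketch: HALCON's compose*/decompose* operators pack and
# unpack channels; here each "channel" is a 2D list of gray values.

def compose(*channels):
    """Combine one-channel 'images' into one multi-channel 'image'."""
    h, w = len(channels[0]), len(channels[0][0])
    assert all(len(c) == h and len(c[0]) == w for c in channels), \
        "all channels must share the same size"
    # multi[row][col] is a tuple holding one value per channel
    return [[tuple(c[r][col] for c in channels) for col in range(w)]
            for r in range(h)]

def decompose(multi):
    """Split a multi-channel 'image' back into one-channel 'images'."""
    n = len(multi[0][0])  # channel count (cf. count_channels)
    return [[[px[k] for px in row] for row in multi] for k in range(n)]

red   = [[10, 20], [30, 40]]
green = [[ 1,  2], [ 3,  4]]
blue  = [[ 9,  8], [ 7,  6]]

rgb = compose(red, green, blue)   # compose3 analogue
r, g, b = decompose(rgb)          # decompose3 analogue
assert (r, g, b) == (red, green, blue)
```

Compose followed by decompose is lossless, which is why the operators come in matched pairs for 2 to 7 channels.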
Append additional matrices (channels) to the image.
Multi-channel image.
Image to be appended.
Multi-channel image extended by the channels of Image.
Access a channel of a multi-channel image.
Multi-channel image.
One channel of MultiChannelImage.
Index of channel to be accessed. Default: 1
Tile multiple image objects into a large image with explicit positioning information.
Input images.
Tiled output image.
Row coordinate of the upper left corner of the input images in the output image. Default: 0
Column coordinate of the upper left corner of the input images in the output image. Default: 0
Row coordinate of the upper left corner of the copied part of the respective input image. Default: -1
Column coordinate of the upper left corner of the copied part of the respective input image. Default: -1
Row coordinate of the lower right corner of the copied part of the respective input image. Default: -1
Column coordinate of the lower right corner of the copied part of the respective input image. Default: -1
Width of the output image. Default: 512
Height of the output image. Default: 512
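tile_images_offset places each input image at an explicitly given position, while plain tile_images derives a regular grid of such offsets itself. A sketch of that grid computation for equally sized images, assuming row-by-row placement (an assumption; not the halcondotnet API):

```python
def tile_offsets(num_images, num_cols, width, height):
    """Upper-left (row, col) offsets for tiling equally sized images
    into a grid, filled row by row -- a sketch of the placement that a
    grid tiling operator computes internally (assumed behavior)."""
    return [((i // num_cols) * height, (i % num_cols) * width)
            for i in range(num_images)]

# Six 640x480 images tiled into a 2-row, 3-column mosaic.
offsets = tile_offsets(6, 3, 640, 480)
```

Each offset here corresponds to the explicit OffsetRow/OffsetCol pair that would be passed to the offset-based tiling operator.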
Tile multiple image objects into a large image.
Input images.
Tiled output image.
Number of columns to use for the output image. Default: 1
Order of the input images in the output image. Default: "vertical"
Tile multiple images into a large image.
Input image.
Tiled output image.
Number of columns to use for the output image. Default: 1
Order of the input images in the output image. Default: "vertical"
Cut out the image part containing defined gray values.
Input image.
Image area.
Cut out one or more rectangular image areas.
Input image.
Image area.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Line index of lower right corner of image area. Default: 200
Column index of lower right corner of image area. Default: 200
Cut out one or more rectangular image areas.
Input image.
Image area.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Width of new image. Default: 128
Height of new image. Default: 128
Change image size.
Input image.
Image with new format.
Width of new image. Default: 512
Height of new image. Default: 512
Change definition domain of an image.
Input image.
New definition domain.
Image with new definition domain.
Add gray values to regions.
Input regions (without pixel values).
Input image with pixel values for regions.
Output image(s) with regions and pixel values (one image per input region).
Reduce the domain of an image to a rectangle.
Input image.
Image with reduced definition domain.
Line index of upper left corner of image area. Default: 100
Column index of upper left corner of image area. Default: 100
Line index of lower right corner of image area. Default: 200
Column index of lower right corner of image area. Default: 200
Reduce the domain of an image.
Input image.
New definition domain.
Image with reduced definition domain.
Expand the domain of an image to maximum.
Input image.
Image with maximum definition domain.
Get the domain of an image.
Input images.
Definition domains of input images.
Centers of circles with a specific radius.
Binary edge image in which the circles are to be detected.
Centers of those circles of which (approximately) Percent percent lies within the edge image.
Radius of the circle to be searched in the image. Default: 12
Indicates the percentage (approximately) of the (ideal) circle which must be present in the edge image RegionIn. Default: 60
The mode defines the position of the circle in question: 0 - the radius corresponds to the outer border of the set pixels; 1 - the radius corresponds to the centers of the circle lines' pixels; 2 - both 0 and 1 (a little fuzzier, but more reliable for circles set slightly differently; necessitates 50
Return the Hough-Transform for circles with a given radius.
Binary edge image in which the circles are to be detected.
Hough transform for circles with a given radius.
Radius of the circle to be searched in the image. Default: 12
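The idea behind the circle Hough transform used by these operators can be illustrated with a minimal accumulator: each edge pixel votes for every candidate center lying at distance Radius from it, so the true center collects votes from all edge pixels of the circle. This is a didactic sketch of the principle only, not HALCON's implementation:

```python
import math

def hough_circles(edge_points, radius, width, height):
    """Accumulator for circle centers: every edge pixel votes for all
    candidate centers at distance `radius` from it.  Didactic sketch
    only, not HALCON's algorithm."""
    acc = [[0] * width for _ in range(height)]
    steps = max(8, int(2 * math.pi * radius))  # ~1 vote per pixel of arc
    for (er, ec) in edge_points:
        for i in range(steps):
            a = 2 * math.pi * i / steps
            r = int(round(er - radius * math.sin(a)))
            c = int(round(ec - radius * math.cos(a)))
            if 0 <= r < height and 0 <= c < width:
                acc[r][c] += 1
    return acc

# Synthetic edge image: a full circle of radius 5 around (20, 20).
edges = {(int(round(20 + 5 * math.sin(2 * math.pi * t / 100))),
          int(round(20 + 5 * math.cos(2 * math.pi * t / 100))))
         for t in range(100)}
acc = hough_circles(sorted(edges), 5, 40, 40)
peak = max(((r, c) for r in range(40) for c in range(40)),
           key=lambda rc: acc[rc[0]][rc[1]])
```

Thresholding the accumulator at Percent percent of the maximum possible vote count then yields the reported centers.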
Detect lines in edge images with the help of the Hough transform using local gradient direction and return them in normal form.
Image containing the edge direction. The edges are described by the image domain.
Hough transform.
Regions of the input image that contributed to the local maxima.
Uncertainty of edge direction (in degrees). Default: 2
Resolution in the angle area (in 1/degrees). Default: 4
Smoothing filter for hough image. Default: "mean"
Required smoothing filter size. Default: 5
Threshold value in the Hough image. Default: 100
Minimum distance of two maxima in the Hough image (direction: angle). Default: 5
Minimum distance of two maxima in the Hough image (direction: distance). Default: 5
Create line regions if 'true'. Default: "true"
Angles (in radians) of the detected lines' normal vectors.
Distance of the detected lines from the origin.
Compute the Hough transform for lines using local gradient direction.
Image containing the edge direction. The edges must be described by the image domain.
Hough transform.
Uncertainty of the edge direction (in degrees). Default: 2
Resolution in the angle area (in 1/degrees). Default: 4
Detect lines in edge images with the help of the Hough transform and return them in HNF (Hessian normal form).
Binary edge image in which the lines are to be detected.
Adjusting the resolution in the angle area. Default: 4
Threshold value in the Hough image. Default: 100
Minimal distance of two maxima in the Hough image (direction: angle). Default: 5
Minimal distance of two maxima in the Hough image (direction: distance). Default: 5
Angles (in radians) of the detected lines' normal vectors.
Distance of the detected lines from the origin.
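The angle/distance pairs returned by these operators describe each line in Hessian normal form (HNF). A small sketch of that representation — the row·cos(angle) + col·sin(angle) = distance convention is assumed here; consult the operator reference for HALCON's exact axis convention:

```python
import math

def on_line(row, col, angle, dist, tol=0.5):
    """True if (row, col) lies on the line given in Hessian normal form.
    Convention assumed here: dist = row*cos(angle) + col*sin(angle);
    HALCON's exact axis convention may differ."""
    return abs(row * math.cos(angle) + col * math.sin(angle) - dist) <= tol

# Under the assumed convention, the horizontal pixel row `row == 10`
# has normal angle 0 and distance 10 from the origin.
assert all(on_line(10, col, 0.0, 10.0) for col in range(50))
assert not on_line(12, 0, 0.0, 10.0)
```

The HNF is convenient here because every line, including vertical ones, has a finite parameter pair, which is what makes it suitable as a Hough parameter space.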
Produce the Hough transform for lines within regions.
Binary edge image in which lines are to be detected.
Hough transform for lines.
Adjusting the resolution in the angle area. Default: 4
Select those lines from a set of lines (in HNF) which fit best into a region.
Region in which the lines are to be matched.
Region array containing the matched lines.
Angles (in radians) of the normal vectors of the input lines.
Distances of the input lines from the origin.
Widths of the lines. Default: 7
Threshold value for the number of line points in the region. Default: 100
Angles (in radians) of the normal vectors of the selected lines.
Distances of the selected lines from the origin.
Segment the rectification grid region in the image.
Input image.
Output region containing the rectification grid.
Minimum contrast. Default: 8.0
Radius of the circular structuring element. Default: 7.5
Generate a PostScript file, which describes the rectification grid.
Width of the checkered pattern in meters (without the two frames). Default: 0.17
Number of squares per row and column. Default: 17
File name of the PostScript file. Default: "rectification_grid.ps"
Establish connections between the grid points of the rectification grid.
Input image.
Output contours.
Row coordinates of the grid points.
Column coordinates of the grid points.
Size of the applied Gaussians. Default: 0.9
Maximum distance of the connecting lines from the grid points. Default: 5.5
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Input image.
Input contours.
Image containing the mapping data.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Generate a projection map that describes the mapping between an arbitrarily distorted image and the rectified image.
Image containing the mapping data.
Distance of the grid points in the rectified image.
Row coordinates of the grid points in the distorted image.
Column coordinates of the grid points in the distorted image.
Width of the point grid (number of grid points).
Width of the images to be rectified.
Height of the images to be rectified.
Type of mapping. Default: "bilinear"
Gets a copy of the background image of the HALCON window.
Copy of the background image.
Window handle.
Add a callback function to a drawing object.
Handle of the drawing object.
Events to be captured.
Callback functions.
Detach the background image from a HALCON window.
Window handle.
Attach a background image to a HALCON window.
Background image.
Window handle.
Detach an existing drawing object from a HALCON window.
Window Handle.
Handle of the drawing object.
Attach an existing drawing object to a HALCON window.
Window handle.
Handle of the drawing object.
Modify the pose of a 3D plot.
Window handle.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the second point.
Column coordinate of the second point.
Navigation mode. Default: "rotate"
Calculates image coordinates for a point in a 3D plot window.
Displayed image.
Window handle.
Row coordinate in the window.
Column coordinate in the window.
Row coordinate in the image.
Column coordinate in the image.
Height value.
Get the operating system window handle.
Window handle.
Operating system window handle.
Operating system display handle (under Unix-like systems only).
Set the device context of a virtual graphics window (Windows NT).
Window handle.
Device context of WINHWnd.
Create a virtual graphics window under Windows.
Windows window handle of a previously created window.
Row coordinate of upper left corner. Default: 0
Column coordinate of upper left corner. Default: 0
Width of the window. Default: 512
Height of the window. Default: 512
Window handle.
Interactive output from two window buffers.
Source window handle of the upper window.
Source window handle of the lower window.
Output window handle.
Specify a window type.
Name of the window type which has to be set. Default: "X-Window"
Modify position and size of a window.
Window handle.
Row index of upper left corner in target position. Default: 0
Column index of upper left corner in target position. Default: 0
Width of the window. Default: 512
Height of the window. Default: 512
Get window characteristics.
Name of the attribute that should be returned.
Attribute value.
Set window characteristics.
Name of the attribute that should be modified.
Value of the attribute that should be set.
Query all available window types.
Names of available window types.
Open a graphics window.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Width of the window. Default: 256
Height of the window. Default: 256
Logical number of the father window. To specify the display as father you may enter 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which you want to open the window. Otherwise the empty string. Default: ""
Window handle.
Open a textual window.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Window's width. Default: 256
Window's height. Default: 256
Window border's width. Default: 2
Window border's color. Default: "white"
Background color. Default: "black"
Logical number of the father window. For the display as father you may specify 'root' or 0. Default: 0
Window mode. Default: "visible"
Computer name, where the window has to be opened or empty string. Default: ""
Window handle.
Copy inside an output window.
Window handle.
Row index of upper left corner of the source rectangle. Default: 0
Column index of upper left corner of the source rectangle. Default: 0
Row index of lower right corner of the source rectangle. Default: 64
Column index of lower right corner of the source rectangle. Default: 64
Row index of upper left corner of the target position. Default: 64
Column index of upper left corner of the target position. Default: 64
Get the window type.
Window handle.
Window type.
Access to a window's pixel data.
Window handle.
Pointer to the red channel of the pixel data.
Pointer to the green channel of the pixel data.
Pointer to the blue channel of the pixel data.
Length of an image line.
Number of image lines.
Information about a window's size and position.
Window handle.
Row index of upper left corner of the window.
Column index of upper left corner of the window.
Window width.
Window height.
Write the window content in an image object.
Saved image.
Window handle.
Write the window content to a file.
Window handle.
Name of the target device or of the graphic format. Default: "postscript"
File name (without extension). Default: "halcon_dump"
Copy all pixels within rectangles between output windows.
Source window handle.
Destination window handle.
Row index of upper left corner in the source window. Default: 0
Column index of upper left corner in the source window. Default: 0
Row index of lower right corner in the source window. Default: 128
Column index of lower right corner in the source window. Default: 128
Row index of upper left corner in the target window. Default: 0
Column index of upper left corner in the target window. Default: 0
Close an output window.
Window handle.
Delete the contents of an output window.
Window handle.
Delete a rectangle on the output window.
Window handle.
Line index of upper left corner. Default: 10
Column index of upper left corner. Default: 10
Row index of lower right corner. Default: 118
Column index of lower right corner. Default: 118
Print text in a window.
Window handle.
Tuple of output values (all types). Default: "hello"
Set the shape of the text cursor.
Window handle.
Name of cursor shape. Default: "invisible"
Set the position of the text cursor.
Window handle.
Row index of text cursor position. Default: 24
Column index of text cursor position. Default: 12
Read a string in a text window.
Window handle.
Default string (visible before input). Default: ""
Maximum number of characters. Default: 32
Read string.
Read a character from a text window.
Window handle.
Input character (if it is not a control character).
Code for input character.
Set the position of the text cursor to the beginning of the next line.
Window handle.
Get the shape of the text cursor.
Window handle.
Name of the current text cursor.
Get cursor position.
Window handle.
Row index of text cursor position.
Column index of text cursor position.
Get the maximum size of all characters of a font.
Window handle.
Maximum height above baseline.
Maximum extension below baseline.
Maximum character width.
Maximum character height.
Get the spatial size of a string.
Window handle.
Values to consider. Default: "test_string"
Maximum height above baseline.
Maximum extension below baseline.
Text width.
Text height.
Query the available fonts.
Window handle.
Tuple with available font names.
Query all shapes available for text cursors.
Window handle.
Names of the available text cursors.
Set the font used for text output.
Window handle.
Name of new font.
Get the current font.
Window handle.
Name of the current font.
Get the depth or the index of instances in a displayed 3D scene.
Window handle.
Handle of the 3D scene.
Row coordinates.
Column coordinates.
Information. Default: "depth"
Indices or the depth of the objects at (Row,Column).
Set the pose of a 3D scene.
Handle of the 3D scene.
New pose of the 3D scene.
Set parameters of a 3D scene.
Handle of the 3D scene.
Names of the generic parameters. Default: "quality"
Values of the generic parameters. Default: "high"
Set parameters of a light in a 3D scene.
Handle of the 3D scene.
Index of the light source.
Names of the generic parameters. Default: "ambient"
Values of the generic parameters. Default: [0.2,0.2,0.2]
Set the pose of an instance in a 3D scene.
Handle of the 3D scene.
Index of the instance.
New pose of the instance.
Set parameters of an instance in a 3D scene.
Handle of the 3D scene.
Index of the instance.
Names of the generic parameters. Default: "color"
Values of the generic parameters. Default: "green"
Set the pose of a camera in a 3D scene.
Handle of the 3D scene.
Index of the camera.
New pose of the camera.
Render an image of a 3D scene.
Rendered 3D scene.
Handle of the 3D scene.
Index of the camera used to display the scene.
Remove a light from a 3D scene.
Handle of the 3D scene.
Light to remove.
Remove an object instance from a 3D scene.
Handle of the 3D scene.
Index of the instance to remove.
Remove a camera from a 3D scene.
Handle of the 3D scene.
Index of the camera to remove.
Display a 3D scene.
Window handle.
Handle of the 3D scene.
Index of the camera used to display the scene.
Add a light source to a 3D scene.
Handle of the 3D scene.
Position of the new light source. Default: [-100.0,-100.0,0.0]
Type of the new light source. Default: "point_light"
Index of the new light source in the 3D scene.
Add an instance of a 3D object model to a 3D scene.
Handle of the 3D scene.
Handle of the 3D object model.
Pose of the 3D object model.
Index of the new instance in the 3D scene.
Add a camera to a 3D scene.
Handle of the 3D scene.
Parameters of the new camera.
Index of the new camera in the 3D scene.
Delete a 3D scene and free all allocated memory.
Handle of the 3D scene.
Create the data structure that is needed to visualize collections of 3D objects.
Handle of the 3D scene.
Get window parameters.
Window handle.
Name of the parameter. Default: "flush"
Value of the parameter.
Set window parameters.
Window handle.
Name of the parameter. Default: "flush"
Value to be set. Default: "false"
Define the region output shape.
Window handle.
Region output mode. Default: "original"
Set the color definition via RGB values.
Window handle.
Red component of the color. Default: 255
Green component of the color. Default: 0
Blue component of the color. Default: 0
Define a color lookup table index.
Window handle.
Color lookup table index. Default: 128
Define an interpolation method for gray value output.
Window handle.
Interpolation method for image output: 0 (fast, low quality) to 2 (slow, high quality). Default: 0
Modify the displayed image part.
Window handle.
Row of the upper left corner of the chosen image part. Default: 0
Column of the upper left corner of the chosen image part. Default: 0
Row of the lower right corner of the chosen image part. Default: -1
Column of the lower right corner of the chosen image part. Default: -1
Define the gray value output mode.
Window handle.
Output mode. Additional parameters possible. Default: "default"
Define the line width for region contour output.
Window handle.
Line width for region output in contour mode. Default: 1.0
Define a contour output pattern.
Window handle.
Contour pattern. Default: []
Define the approximation error for contour display.
Window handle.
Maximum deviation from the original contour. Default: 0
Define the pixel output function.
Window handle.
Name of the display function. Default: "copy"
Define output colors (HSI-coded).
Window handle.
Hue for region output. Default: 30
Saturation for region output. Default: 255
Intensity for region output. Default: 84
Define gray values for region output.
Window handle.
Gray values for region output. Default: 255
Define the region fill mode.
Window handle.
Fill mode for region output. Default: "fill"
Define the image matrix output clipping.
Window handle.
Clipping mode for gray value output. Default: "object"
Set multiple output colors.
Window handle.
Number of output colors. Default: 12
Set output color.
Window handle.
Output color names. Default: "white"
Get the current region output shape.
Window handle.
Current region output shape.
Get the current color in RGB-coding.
Window handle.
The current color's red value.
The current color's green value.
The current color's blue value.
Get the current color lookup table index.
Window handle.
Index of the current color look-up table.
Get the current interpolation mode for gray value display.
Window handle.
Interpolation mode for image display: 0 (fast, low quality) to 2 (slow, high quality).
Get the image part.
Window handle.
Row index of the image part's upper left corner.
Column index of the image part's upper left corner.
Row index of the image part's lower right corner.
Column index of the image part's lower right corner.
Get the current display mode for gray values.
Window handle.
Name and parameter values of the current display mode.
Get the current line width for contour display.
Window handle.
Current line width for contour display.
Get the current graphic mode for contours.
Window handle.
Template for contour display.
Get the current approximation error for contour display.
Window handle.
Current approximation error for contour display.
Get the current display mode.
Window handle.
Display mode.
Get the HSI coding of the current color.
Window handle.
Hue (color value) of the current color.
Saturation of the current color.
Intensity of the current color.
Get the current region fill mode.
Window handle.
Current region fill mode.
Get the output treatment of an image matrix.
Window handle.
Display mode for images.
Query the region display modes.
Region display mode names.
Query the gray value display modes.
Window handle.
Gray value display mode names.
Query the possible line widths.
Displayable minimum width.
Displayable maximum width.
Query the possible graphic modes.
Window handle.
Display function name.
Query the displayable gray values.
Window handle.
Tuple of all displayable gray values.
Query the number of colors for color output.
Tuple of the possible numbers of colors.
Query all color names.
Window handle.
Color names.
Query all color names displayable in the window.
Window handle.
Color names.
Query the icon for region output.
Icon for the region's center of gravity.
Window handle.
Icon definition for region output.
Icon for center of gravity.
Window handle.
Displays regions in a window.
Regions to display.
Window handle.
Displays arbitrarily oriented rectangles.
Window handle.
Row index of the center. Default: 48
Column index of the center. Default: 64
Orientation of rectangle in radians. Default: 0.0
Half of the length of the longer side. Default: 48
Half of the length of the shorter side. Default: 32
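The rectangle parameters used above (center, orientation, half edge lengths) determine the four corner points. A sketch of that geometry — the axis convention below (rows growing downward, Phi measured counterclockwise from the column axis) is an assumption; check the operator reference:

```python
import math

def rect2_corners(row, col, phi, length1, length2):
    """Corner points of an oriented rectangle given by center, orientation
    and half edge lengths.  Axis convention assumed: rows grow downward,
    phi is counterclockwise from the column axis, so the main-axis
    direction is (-sin(phi), cos(phi)).  Not the halcondotnet API."""
    dr1, dc1 = -math.sin(phi), math.cos(phi)   # along the longer side
    dr2, dc2 = math.cos(phi), math.sin(phi)    # along the shorter side
    return [(row + s1 * length1 * dr1 + s2 * length2 * dr2,
             col + s1 * length1 * dc1 + s2 * length2 * dc2)
            for s1, s2 in ((-1, -1), (-1, 1), (1, 1), (1, -1))]

# Axis-aligned case (phi = 0) using the defaults above: a 96 x 64
# rectangle centered at (48, 64).
corners = rect2_corners(48, 64, 0.0, 48, 32)
```

For phi = 0 this reduces to the axis-parallel rectangle of the following operator, which is specified by its corner indices directly.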
Display of rectangles aligned to the coordinate axes.
Window handle.
Row index of the upper left corner. Default: 16
Column index of the upper left corner. Default: 16
Row index of the lower right corner. Default: 48
Column index of the lower right corner. Default: 80
Displays a polyline.
Window handle.
Row indices. Default: [16,80,80]
Column indices. Default: [48,16,80]
Draws lines in a window.
Window handle.
Row index of the start. Default: 32.0
Column index of the start. Default: 32.0
Row index of the end. Default: 64.0
Column index of the end. Default: 64.0
Displays crosses in a window.
Window handle.
Row coordinate of the center. Default: 32.0
Column coordinate of the center. Default: 32.0
Length of the bars. Default: 6.0
Orientation. Default: 0.0
Displays gray value images.
Gray value image to display.
Window handle.
Displays images with several channels.
Multichannel images to be displayed.
Window handle.
Number of the channel, or the numbers of the RGB channels. Default: 1
Displays a color (RGB) image.
Color image to display.
Window handle.
Displays ellipses.
Window handle.
Row index of center. Default: 64
Column index of center. Default: 64
Orientation of the ellipse in radians. Default: 0.0
Radius of major axis. Default: 24.0
Radius of minor axis. Default: 14.0
Displays a noise distribution.
Window handle.
Gray value distribution (513 values).
Row index of center. Default: 256
Column index of center. Default: 256
Size of display. Default: 1
Displays circles in a window.
Window handle.
Row index of the center. Default: 64
Column index of the center. Default: 64
Radius of the circle. Default: 64
Displays arrows in a window.
Window handle.
Row index of the start. Default: 10.0
Column index of the start. Default: 10.0
Row index of the end. Default: 118.0
Column index of the end. Default: 118.0
Size of the arrowhead. Default: 1.0
Displays circular arcs in a window.
Window handle.
Row coordinate of center point. Default: 64
Column coordinate of center point. Default: 64
Angle between start and end of the arc (in radians). Default: 3.1415926
Row coordinate of the start of the arc. Default: 32
Column coordinate of the start of the arc. Default: 32
Displays image objects (image, region, XLD).
Image object to be displayed.
Window handle.
Set the current mouse pointer shape.
Window handle.
Mouse pointer name. Default: "arrow"
Query the current mouse pointer shape.
Window handle.
Mouse pointer name.
Query all available mouse pointer shapes.
Window handle.
Available mouse pointer names.
Query the subpixel mouse position.
Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed or 0.
Query the mouse position.
Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed or 0.
Wait until a mouse button is pressed and get the subpixel mouse position.
Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
Wait until a mouse button is pressed.
Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
Write a look-up table (LUT) to a file.
Window handle.
File name (of the file containing the look-up table). Default: "/tmp/lut"
Graphical view of a look-up table (LUT).
Window handle.
Row coordinate of the center of the graphic. Default: 128
Column coordinate of the center of the graphic. Default: 128
Scaling of the graphic. Default: 1
Query all available look-up tables (LUTs).
Window handle.
Names of the look-up tables.
Get the modification parameters of the look-up table (LUT).
Window handle.
Modification of the color value.
Modification of the saturation.
Modification of the intensity.
Change the look-up table (LUT).
Window handle.
Modification of the color value. Default: 0.0
Modification of the saturation. Default: 1.5
Modification of the intensity. Default: 1.5
Get the current look-up table (LUT).
Window handle.
Name of the look-up table or tuple of RGB values.
Set the look-up table (LUT).
Window handle.
Name of the look-up table, values of the look-up table (RGB), or a file name. Default: "default"
Get the fixing mode of the current look-up table (LUT).
Window handle.
Current fixing mode.
Set the fixing of the look-up table (LUT).
Window handle.
Fixing mode. Default: "true"
Get the fixing of the look-up table (LUT) for real-color images.
Window handle.
Fixing mode.
Fix the look-up table (LUT) for real-color images.
Window handle.
Fixing mode. Default: "true"
Plot a function using gnuplot.
Identifier for the gnuplot output stream.
Function to be plotted.
Plot control values using gnuplot.
Identifier for the gnuplot output stream.
Control values to be plotted (y-values).
Visualize images using gnuplot.
Image to be plotted.
Identifier for the gnuplot output stream.
Number of samples in the x-direction. Default: 64
Number of samples in the y-direction. Default: 64
Rotation of the plot about the x-axis. Default: 60
Rotation of the plot about the z-axis. Default: 30
Plot the image with hidden surfaces removed. Default: "hidden3d"
Close all open gnuplot files or terminate an active gnuplot sub-process.
Identifier for the gnuplot output stream.
Open a gnuplot file for visualization of images and control values.
Base name for control and data files.
Identifier for the gnuplot output stream.
Open a pipe to a gnuplot process for visualization of images and control values.
Identifier for the gnuplot output stream.
Create a text object which can be moved interactively.
Row coordinate of the text position. Default: 12
Column coordinate of the text position. Default: 12
Character string to be displayed. Default: "Text"
Handle of the drawing object.
Return the iconic object of a drawing object.
Copy of the iconic object represented by the drawing object.
Handle of the drawing object.
Delete drawing object.
Handle of the drawing object.
Set the parameters of a drawing object.
Handle of the drawing object.
Parameter names of the drawing object.
Parameter values.
Get the parameters of a drawing object.
Handle of the drawing object.
Parameter names of the drawing object.
Parameter values.
Set the contour of an interactive draw XLD.
XLD contour.
Handle of the drawing object.
Create an XLD contour which can be modified interactively.
Row coordinates of the polygon. Default: [100,200,200,100]
Column coordinates of the polygon. Default: [100,100,200,200]
Handle of the drawing object.
Create a circle sector which can be modified interactively.
Row coordinate of the center. Default: 100
Column coordinate of the center. Default: 100
Radius of the circle. Default: 80
Start angle of the arc. Default: 0
End angle of the arc. Default: 3.14159
Handle of the drawing object.
Create an elliptic sector which can be modified interactively.
Row index of the center. Default: 200
Column index of the center. Default: 200
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 60
Start angle of the arc. Default: 0
End angle of the arc. Default: 3.14159
Handle of the drawing object.
Create a line which can be modified interactively.
Row index of the first line point. Default: 100
Column index of the first line point. Default: 100
Row index of the second line point. Default: 200
Column index of the second line point. Default: 200
Handle of the drawing object.
Create a circle which can be modified interactively.
Row coordinate of the center. Default: 100
Column coordinate of the center. Default: 100
Radius of the circle. Default: 80
Handle of the drawing object.
Create an ellipse which can be modified interactively.
Row index of the center. Default: 200
Column index of the center. Default: 200
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 60
Handle of the drawing object.
Create a rectangle of any orientation which can be modified interactively.
Row coordinate of the center. Default: 150
Column coordinate of the center. Default: 150
Orientation of the first half axis in radians. Default: 0
First half axis. Default: 100
Second half axis. Default: 100
Handle of the drawing object.
Create a rectangle parallel to the coordinate axis which can be modified interactively.
Row coordinate of the upper left corner. Default: 100
Column coordinate of the upper left corner. Default: 100
Row coordinate of the lower right corner. Default: 200
Column coordinate of the lower right corner. Default: 200
Handle of the drawing object.
Interactive movement of a region with restriction of positions.
Regions to move.
Points on which it is allowed for a region to move.
Moved regions.
Window handle.
Row index of the reference point. Default: 100
Column index of the reference point. Default: 100
Interactive movement of a region with fixpoint specification.
Regions to move.
Moved regions.
Window handle.
Row index of the reference point. Default: 100
Column index of the reference point. Default: 100
Interactive moving of a region.
Regions to move.
Moved regions.
Window handle.
Interactive modification of a NURBS curve using interpolation.
Contour of the modified curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 5. Default: 3
Row coordinates of the input interpolation points.
Column coordinates of the input interpolation points.
Input tangents.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Knot vector.
Row coordinates of the points specified by the user.
Column coordinates of the points specified by the user.
Tangents specified by the user.
Interactive drawing of a NURBS curve using interpolation.
Contour of the curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 5. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Knot vector.
Row coordinates of the points specified by the user.
Column coordinates of the points specified by the user.
Tangents specified by the user.
Interactive modification of a NURBS curve.
Contour of the modified curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 25. Default: 3
Row coordinates of the input control polygon.
Column coordinates of the input control polygon.
Input weight vector.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Weight vector.
Interactive drawing of a NURBS curve.
Contour approximating the NURBS curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 25. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Weight vector.
Interactive modification of a contour.
Input contour.
Modified contour.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
Interactive drawing of a contour.
Modified contour.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Interactive drawing of a rectangle of any orientation.
Window handle.
Row index of the center.
Column index of the center.
Orientation of the larger half axis in radians.
Larger half axis.
Smaller half axis.
Row index of the center.
Column index of the center.
Orientation of the larger half axis in radians.
Larger half axis.
Smaller half axis.
Interactive drawing of a rectangle of any orientation.
Window handle.
Row index of the center.
Column index of the center.
Orientation of the larger half axis in radians.
Larger half axis.
Smaller half axis.
Draw a rectangle parallel to the coordinate axis.
Window handle.
Row index of the upper left corner.
Column index of the upper left corner.
Row index of the lower right corner.
Column index of the lower right corner.
Row index of the upper left corner.
Column index of the upper left corner.
Row index of the lower right corner.
Column index of the lower right corner.
Draw a rectangle parallel to the coordinate axis.
Window handle.
Row index of the upper left corner.
Column index of the upper left corner.
Row index of the lower right corner.
Column index of the lower right corner.
Draw a point.
Window handle.
Row index of the point.
Column index of the point.
Row index of the point.
Column index of the point.
Draw a point.
Window handle.
Row index of the point.
Column index of the point.
Draw a line.
Window handle.
Row index of the first point of the line.
Column index of the first point of the line.
Row index of the second point of the line.
Column index of the second point of the line.
Row index of the first point of the line.
Column index of the first point of the line.
Row index of the second point of the line.
Column index of the second point of the line.
Draw a line.
Window handle.
Row index of the first point of the line.
Column index of the first point of the line.
Row index of the second point of the line.
Column index of the second point of the line.
Interactive drawing of an ellipse.
Window handle.
Row index of the center.
Column index of the center.
Orientation of the larger half axis in radians.
Larger half axis.
Smaller half axis.
Row index of the center.
Column index of the center.
Orientation of the first half axis in radians.
First half axis.
Second half axis.
Interactive drawing of an ellipse.
Window handle.
Row index of the center.
Column index of the center.
Orientation of the first half axis in radians.
First half axis.
Second half axis.
Interactive drawing of a circle.
Window handle.
Row index of the center.
Column index of the center.
Radius of the circle.
Row index of the center.
Column index of the center.
Radius of the circle.
Interactive drawing of a circle.
Window handle.
Row index of the center.
Column index of the center.
Radius of the circle.
Interactive drawing of a closed region.
Interactive created region.
Window handle.
Interactive drawing of a polygon row.
Region that encompasses all drawn points.
Window handle.
Calculate the projection of a point onto a line.
Row coordinate of the point.
Column coordinate of the point.
Row coordinate of the first point on the line.
Column coordinate of the first point on the line.
Row coordinate of the second point on the line.
Column coordinate of the second point on the line.
Row coordinate of the projected point.
Column coordinate of the projected point.
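The projection above is plain vector arithmetic; a minimal, library-independent Python sketch of the same geometry (the function name and signature are illustrative, not part of halcondotnet):

```python
def project_point_onto_line(pr, pc, l1r, l1c, l2r, l2c):
    """Orthogonally project point (pr, pc) onto the infinite line through
    (l1r, l1c) and (l2r, l2c); coordinates are (row, column)."""
    dr, dc = l2r - l1r, l2c - l1c          # direction vector of the line
    # parameter of the foot of the perpendicular along the direction vector
    t = ((pr - l1r) * dr + (pc - l1c) * dc) / (dr * dr + dc * dc)
    return l1r + t * dr, l1c + t * dc
```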
Calculate a point of an ellipse corresponding to a specific angle.
Angle corresponding to the resulting point [rad]. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Row coordinate of the point on the ellipse.
Column coordinate of the point on the ellipse.
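The point for a given angle follows from the parametric ellipse equation; a sketch in ordinary x/y coordinates (HALCON's row/column and angle sign conventions may differ, so treat this as the textbook formula only):

```python
import math

def ellipse_point(t, cx, cy, phi, ra, rb):
    """Parametric point of an ellipse with center (cx, cy), main-axis
    orientation phi and half axes ra >= rb, angle parameter t in radians."""
    x = cx + ra * math.cos(t) * math.cos(phi) - rb * math.sin(t) * math.sin(phi)
    y = cy + ra * math.cos(t) * math.sin(phi) + rb * math.sin(t) * math.cos(phi)
    return x, y
```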
Calculate the intersection point of two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Row coordinate of the intersection point.
Column coordinate of the intersection point.
Are the two lines parallel?
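The intersection point and the parallel test both reduce to a 2x2 determinant; an illustrative Python sketch, independent of HALCON:

```python
def intersect_lines(r1a, c1a, r1b, c1b, r2a, c2a, r2b, c2b, eps=1e-12):
    """Intersection of two infinite lines, each given by two (row, column)
    points. Returns (row, column, is_parallel)."""
    d1r, d1c = r1b - r1a, c1b - c1a
    d2r, d2c = r2b - r2a, c2b - c2a
    det = d1r * d2c - d1c * d2r            # zero <=> parallel directions
    if abs(det) < eps:
        return None, None, True
    t = ((r2a - r1a) * d2c - (c2a - c1a) * d2r) / det
    return r1a + t * d1r, c1a + t * d1c, False
```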
Calculate the intersection points of two XLD contours.
First XLD contour.
Second XLD contour.
Intersection points to be returned. Default: "all"
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Does a part of one contour lie on top of another contour part?
Calculate the intersection points of a circle or circular arc and an XLD contour.
XLD contour.
Row coordinate of the center of the circle or circular arc.
Column coordinate of the center of the circle or circular arc.
Radius of the circle or circular arc.
Angle of the start point of the circle or circular arc [rad]. Default: 0.0
Angle of the end point of the circle or circular arc [rad]. Default: 6.28318
Point order along the circle or circular arc. Default: "positive"
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Calculate the intersection points of two circles or circular arcs.
Row coordinate of the center of the first circle or circular arc.
Column coordinate of the center of the first circle or circular arc.
Radius of the first circle or circular arc.
Angle of the start point of the first circle or circular arc [rad]. Default: 0.0
Angle of the end point of the first circle or circular arc [rad]. Default: 6.28318
Point order along the first circle or circular arc. Default: "positive"
Row coordinate of the center of the second circle or circular arc.
Column coordinate of the center of the second circle or circular arc.
Radius of the second circle or circular arc.
Angle of the start point of the second circle or circular arc [rad]. Default: 0.0
Angle of the end point of the second circle or circular arc [rad]. Default: 6.28318
Point order along the second circle or circular arc. Default: "positive"
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Do both circles or circular arcs have a part in common?
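For full circles (deliberately ignoring the start/end angles and point order listed above), the intersection points follow from the classic two-circle construction; an illustrative Python sketch:

```python
import math

def intersect_circles(r1, c1, rad1, r2, c2, rad2):
    """Intersection points of two full circles given as
    (row, column, radius); arc limits are not handled in this sketch."""
    dr, dc = r2 - r1, c2 - c1
    d = math.hypot(dr, dc)
    if d == 0 or d > rad1 + rad2 or d < abs(rad1 - rad2):
        return []                              # no proper intersection points
    a = (rad1**2 - rad2**2 + d**2) / (2 * d)   # distance to the chord
    h = math.sqrt(max(rad1**2 - a**2, 0.0))    # half chord length
    mr, mc = r1 + a * dr / d, c1 + a * dc / d  # foot point on the center line
    if h == 0.0:
        return [(mr, mc)]                      # circles touch in one point
    return [(mr + h * dc / d, mc - h * dr / d),
            (mr - h * dc / d, mc + h * dr / d)]
```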
Calculate the intersection points of a line and an XLD contour.
XLD contour.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Does a part of the XLD contour lie on the line?
Calculate the intersection points of a line and a circle or circular arc.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Row coordinate of the center of the circle or circular arc.
Column coordinate of the center of the circle or circular arc.
Radius of the circle or circular arc.
Angle of the start point of the circle or circular arc [rad]. Default: 0.0
Angle of the end point of the circle or circular arc [rad]. Default: 6.28318
Point order along the circle or circular arc. Default: "positive"
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Calculate the intersection point of two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Row coordinate of the intersection point.
Column coordinate of the intersection point.
Are both lines identical?
Calculate the intersection points of a segment and an XLD contour.
XLD contour.
Row coordinate of the first point of the segment.
Column coordinate of the first point of the segment.
Row coordinate of the second point of the segment.
Column coordinate of the second point of the segment.
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Do the segment and the XLD contour have a part in common?
Calculate the intersection points of a segment and a circle or circular arc.
Row coordinate of the first point of the segment.
Column coordinate of the first point of the segment.
Row coordinate of the second point of the segment.
Column coordinate of the second point of the segment.
Row coordinate of the center of the circle or circular arc.
Column coordinate of the center of the circle or circular arc.
Radius of the circle or circular arc.
Angle of the start point of the circle or circular arc [rad]. Default: 0.0
Angle of the end point of the circle or circular arc [rad]. Default: 6.28318
Point order along the circle or circular arc. Default: "positive"
Row coordinates of the intersection points.
Column coordinates of the intersection points.
Calculate the intersection point of a segment and a line.
Row coordinate of the first point of the segment.
Column coordinate of the first point of the segment.
Row coordinate of the second point of the segment.
Column coordinate of the second point of the segment.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Row coordinate of the intersection point.
Column coordinate of the intersection point.
Do the segment and the line have a part in common?
Calculate the intersection point of two line segments.
Row coordinate of the first point of the first segment.
Column coordinate of the first point of the first segment.
Row coordinate of the second point of the first segment.
Column coordinate of the second point of the first segment.
Row coordinate of the first point of the second segment.
Column coordinate of the first point of the second segment.
Row coordinate of the second point of the second segment.
Column coordinate of the second point of the second segment.
Row coordinate of the intersection point.
Column coordinate of the intersection point.
Do both segments have a part in common?
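Segment-segment intersection adds two range checks to the line-line determinant solution; a simplified Python sketch that does not resolve collinear overlap:

```python
def intersect_segments(r1a, c1a, r1b, c1b, r2a, c2a, r2b, c2b, eps=1e-12):
    """Intersection point of two line segments given by their (row, column)
    end points, or None if they do not cross."""
    d1r, d1c = r1b - r1a, c1b - c1a
    d2r, d2c = r2b - r2a, c2b - c2a
    det = d1r * d2c - d1c * d2r
    if abs(det) < eps:                     # parallel or collinear: not resolved
        return None
    t = ((r2a - r1a) * d2c - (c2a - c1a) * d2r) / det
    u = ((r2a - r1a) * d1c - (c2a - c1a) * d1r) / det
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:    # inside both segments
        return r1a + t * d1r, c1a + t * d1c
    return None
```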
Clear an XLD distance transform.
Handle of the XLD distance transform.
Determine the pointwise distance of two contours using an XLD distance transform.
Contour(s) for whose points the distances are calculated.
Copy of Contour containing the distances as an attribute.
Handle of the XLD distance transform of the reference contour.
Read an XLD distance transform from a file.
Name of the file.
Handle of the XLD distance transform.
Deserialize an XLD distance transform.
Handle of the serialized XLD distance transform.
Handle of the deserialized XLD distance transform.
Serialize an XLD distance transform.
Handle of the XLD distance transform.
Handle of the serialized XLD distance transform.
Write an XLD distance transform into a file.
Handle of the XLD distance transform.
Name of the file.
Set new parameters for an XLD distance transform.
Handle of the XLD distance transform.
Names of the generic parameters. Default: "mode"
Values of the generic parameters. Default: "point_to_point"
Get the parameters used to build an XLD distance transform.
Handle of the XLD distance transform.
Names of the generic parameters. Default: "mode"
Values of the generic parameters.
Get the reference contour used to build the XLD distance transform.
Reference contour.
Handle of the XLD distance transform.
Create the XLD distance transform.
Reference contour(s).
Compute the distance to points ('point_to_point') or entire segments ('point_to_segment'). Default: "point_to_point"
Maximum distance of interest. Default: 20.0
Handle of the XLD distance transform.
Calculate the pointwise distance from one contour to another.
Contours for whose points the distances are calculated.
Contours to which the distances are calculated.
Copy of ContourFrom containing the distances as an attribute.
Compute the distance to points ('point_to_point') or to entire segments ('point_to_segment'). Default: "point_to_point"
Calculate the minimum distance between two contours.
First input contour.
Second input contour.
Distance calculation mode. Default: "fast_point_to_segment"
Minimum distance between the two contours.
Calculate the distance between two contours.
First input contour.
Second input contour.
Distance calculation mode. Default: "point_to_point"
Minimum distance between both contours.
Maximum distance between both contours.
Calculate the distance between a line segment and one contour.
Input contour.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the line segment and the contour.
Maximum distance between the line segment and the contour.
Calculate the distance between a line and one contour.
Input contour.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line and the contour.
Maximum distance between the line and the contour.
Calculate the distance between a point and one contour.
Input contour.
Row coordinate of the point.
Column coordinate of the point.
Minimum distance between the point and the contour.
Maximum distance between the point and the contour.
Calculate the distance between a line segment and one region.
Input region.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the line segment and the region.
Maximum distance between the line segment and the region.
Calculate the distance between a line and a region.
Input region.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line and the region.
Maximum distance between the line and the region.
Calculate the distance between a point and a region.
Input region.
Row coordinate of the point.
Column coordinate of the point.
Minimum distance between the point and the region.
Maximum distance between the point and the region.
Calculate the angle between one line and the horizontal axis.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Angle between the line and the horizontal axis [rad].
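One plausible reading of this computation in image coordinates (rows grow downward, the horizontal axis is the column axis), with the result mapped to (-pi/2, pi/2]; this sketches the geometry only and is not guaranteed to match HALCON's exact definition:

```python
import math

def line_angle(r1, c1, r2, c2):
    """Angle between the line through (r1, c1), (r2, c2) and the horizontal
    axis, in radians; the row difference is negated because rows grow down."""
    a = math.atan2(-(r2 - r1), c2 - c1)
    # fold the result into (-pi/2, pi/2], since a line has no direction
    if a <= -math.pi / 2:
        a += math.pi
    elif a > math.pi / 2:
        a -= math.pi
    return a
```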
Calculate the angle between two lines.
Row coordinate of the first point of the first line.
Column coordinate of the first point of the first line.
Row coordinate of the second point of the first line.
Column coordinate of the second point of the first line.
Row coordinate of the first point of the second line.
Column coordinate of the first point of the second line.
Row coordinate of the second point of the second line.
Column coordinate of the second point of the second line.
Angle between the lines [rad].
Calculate the distances between a line segment and a line.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line segment and the line.
Maximum distance between the line segment and the line.
Calculate the distances between two line segments.
Row coordinate of the first point of the first line segment.
Column coordinate of the first point of the first line segment.
Row coordinate of the second point of the first line segment.
Column coordinate of the second point of the first line segment.
Row coordinate of the first point of the second line segment.
Column coordinate of the first point of the second line segment.
Row coordinate of the second point of the second line segment.
Column coordinate of the second point of the second line segment.
Minimum distance between the line segments.
Maximum distance between the line segments.
Calculate the distances between a point and a line segment.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the point and the line segment.
Maximum distance between the point and the line segment.
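The minimum distance comes from clamping the orthogonal projection onto the segment, the maximum from the two end points; a small illustrative Python sketch (names are made up, not halcondotnet API):

```python
import math

def distance_ps(pr, pc, r1, c1, r2, c2):
    """Minimum and maximum distance between point (pr, pc) and the segment
    from (r1, c1) to (r2, c2)."""
    dr, dc = r2 - r1, c2 - c1
    L2 = dr * dr + dc * dc
    # projection parameter clamped to the segment; degenerate segment -> 0
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((pr - r1) * dr + (pc - c1) * dc) / L2))
    dmin = math.hypot(pr - (r1 + t * dr), pc - (c1 + t * dc))
    dmax = max(math.hypot(pr - r1, pc - c1), math.hypot(pr - r2, pc - c2))
    return dmin, dmax
```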
Calculate the distance between one point and one line.
Row coordinate of the point.
Column coordinate of the point.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Distance between the point and the line.
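The point-to-line distance is the standard cross-product-over-length formula; a small illustrative sketch:

```python
import math

def distance_pl(pr, pc, r1, c1, r2, c2):
    """Perpendicular distance from point (pr, pc) to the infinite line
    through (r1, c1) and (r2, c2)."""
    # |cross product of the line direction and the point offset| ...
    num = abs((r2 - r1) * (c1 - pc) - (c2 - c1) * (r1 - pr))
    # ... divided by the length of the line direction
    return num / math.hypot(r2 - r1, c2 - c1)
```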
Calculate the distance between two points.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the second point.
Column coordinate of the second point.
Distance between the points.
Compose two functions.
Input function 1.
Input function 2.
Border treatment for the input functions. Default: "constant"
Composed function.
Calculate the inverse of a function.
Input function.
Inverse of the input function.
Calculate the derivatives of a function.
Input function.
Type of derivative. Default: "first"
Derivative of the input function.
Calculate the local minimum and maximum points of a function.
Input function.
Handling of plateaus. Default: "strict_min_max"
Interpolation of the input function. Default: "true"
Minimum points of the input function.
Maximum points of the input function.
Calculate the zero crossings of a function.
Input function.
Zero crossings of the input function.
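For a sampled function, zero crossings can be located by sign changes with linear interpolation; a sketch under that assumption (HALCON's exact rules for zeros that coincide with sample points may differ):

```python
def zeros_funct_1d(xs, ys):
    """Linearly interpolated zero crossings of a sampled 1D function,
    given as parallel lists of x and y values."""
    zeros = []
    for i in range(len(ys) - 1):
        y0, y1 = ys[i], ys[i + 1]
        if y0 == 0.0:
            zeros.append(xs[i])             # exact zero at a sample point
        elif y0 * y1 < 0.0:                 # sign change inside the interval
            t = y0 / (y0 - y1)              # linear interpolation factor
            zeros.append(xs[i] + t * (xs[i + 1] - xs[i]))
    if ys and ys[-1] == 0.0:
        zeros.append(xs[-1])
    return zeros
```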
Multiplication and addition of the y values.
Input function.
Factor for scaling of the y values. Default: 2.0
Constant which is added to the y values. Default: 0.0
Transformed function.
Negation of the y values.
Input function.
Function with the negated y values.
Absolute value of the y values.
Input function.
Function with the absolute values of the y values.
Return the value of a function at an arbitrary position.
Input function.
X coordinate at which the function should be evaluated.
Border treatment for the input function. Default: "constant"
Y value at the given x value.
Access a function value using the index of the control points.
Input function.
Index of the control points.
X value at the given control points.
Y value at the given control points.
Number of control points of the function.
Input function.
Number of control points.
Smallest and largest y value of the function.
Input function.
Smallest y value.
Largest y value.
Smallest and largest x value of the function.
Input function.
Smallest x value.
Largest x value.
Access to the x/y values of a function.
Input function.
X values of the function.
Y values of the function.
Sample a function equidistantly in an interval.
Input function.
Minimum x value of the output function.
Maximum x value of the output function.
Distance of the samples.
Border treatment for the input function. Default: "constant"
Sampled function.
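Equidistant sampling of a piecewise linear function can be sketched as follows; the `border` argument here is a simplification (one constant value outside the original x range), not HALCON's full set of border treatments:

```python
def sample_funct_1d(xs, ys, xmin, xmax, dist, border=0.0):
    """Resample a piecewise linear function (parallel x/y lists) at
    xmin, xmin + dist, ..., up to xmax."""
    out_x, out_y = [], []
    x = xmin
    while x <= xmax + 1e-12:
        if x < xs[0] or x > xs[-1]:
            y = border                      # outside the original range
        else:
            # index of the interval [xs[i], xs[i+1]] containing x
            i = min(max(j for j in range(len(xs)) if xs[j] <= x), len(xs) - 2)
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            y = ys[i] + t * (ys[i + 1] - ys[i])
        out_x.append(x)
        out_y.append(y)
        x += dist
    return out_x, out_y
```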
Transform a function using given transformation parameters.
Input function.
Transformation parameters between the functions.
Transformed function.
Calculate transformation parameters between two functions.
Function 1.
Function 2.
Border treatment for function 2. Default: "constant"
Values of the parameters to remain constant. Default: [1.0,0.0,1.0,0.0]
Should the respective parameter be adapted? Default: ["true","true","true","true"]
Transformation parameters between the functions.
Quadratic error of the output function.
Covariance matrix of the transformation parameters.
Compute the distance of two functions.
Input function 1.
Input function 2.
Modes of invariants. Default: "length"
Variance of the optional smoothing with a Gaussian filter. Default: 0.0
Distance of the functions.
Smooth an equidistant 1D function with a Gaussian function.
Function to be smoothed.
Sigma of the Gaussian function for the smoothing. Default: 2.0
Smoothed function.
Compute the positive and negative areas of a function.
Input function.
Area under the positive part of the function.
Area under the negative part of the function.
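The two areas can be computed with trapezoids, splitting any interval that crosses zero; an illustrative sketch assuming a piecewise linear function:

```python
def funct_1d_area(xs, ys):
    """Area under the positive part and (absolute) area under the negative
    part of a piecewise linear function given as parallel x/y lists."""
    pos = neg = 0.0
    for i in range(len(ys) - 1):
        x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
        if y0 * y1 < 0.0:                   # split the interval at its zero
            xz = x0 + (x1 - x0) * y0 / (y0 - y1)
            pieces = [(x0, xz, y0, 0.0), (xz, x1, 0.0, y1)]
        else:
            pieces = [(x0, x1, y0, y1)]
        for a, b, ya, yb in pieces:
            area = 0.5 * (ya + yb) * (b - a)    # signed trapezoid area
            if area >= 0.0:
                pos += area
            else:
                neg -= area
    return pos, neg
```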
Read a function from a file.
Name of the file to be read.
Function from the file.
Write a function to a file.
Function to be written.
Name of the file to be written.
Create a function from a sequence of y-values.
Y values of the function points.
Created function.
Create a function from a set of (x,y) pairs.
X values of the function points.
Y values of the function points.
Created function.
Smooth an equidistant 1D function by averaging its values.
1D function.
Size of the averaging mask. Default: 9
Number of iterations for the smoothing. Default: 3
Smoothed function.
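Iterated averaging with a small mask approximates Gaussian smoothing; a sketch that replicates the border values (one plausible border rule, not necessarily HALCON's):

```python
def smooth_funct_1d_mean(ys, width, iterations):
    """Iterated moving average of a 1D value sequence with an odd mask
    width; the border is padded by replicating the first/last value."""
    half = width // 2
    for _ in range(iterations):
        padded = [ys[0]] * half + list(ys) + [ys[-1]] * half
        ys = [sum(padded[i:i + width]) / width for i in range(len(ys))]
    return ys
```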
Filter an image using a Laws texture filter.
Images to which the texture transformation is to be applied.
Texture images.
Desired filter. Default: "el"
Shift to reduce the gray value dynamics. Default: 2
Size of the filter kernel. Default: 5
Calculate the standard deviation of gray values within rectangular windows.
Image for which the standard deviation is to be calculated.
Image containing the standard deviation.
Width of the mask in which the standard deviation is calculated. Default: 11
Height of the mask in which the standard deviation is calculated. Default: 11
Calculate the entropy of gray values within a rectangular window.
Image for which the entropy is to be calculated.
Entropy image.
Width of the mask in which the entropy is calculated. Default: 9
Height of the mask in which the entropy is calculated. Default: 9
Perform an isotropic diffusion of an image.
Input image.
Output image.
Standard deviation of the Gauss distribution. Default: 1.0
Number of iterations. Default: 10
Perform an anisotropic diffusion of an image.
Input image.
Output image.
Diffusion coefficient as a function of the edge amplitude. Default: "weickert"
Contrast parameter. Default: 5.0
Time step. Default: 1.0
Number of iterations. Default: 10
Smooth an image using various filters.
Image to be smoothed.
Smoothed image.
Filter. Default: "deriche2"
Filter parameter: small values cause strong smoothing (the reverse holds for 'gauss'). Default: 0.5
Non-linear smoothing with the sigma filter.
Image to be smoothed.
Smoothed image.
Height of the mask (number of lines). Default: 5
Width of the mask (number of columns). Default: 5
Max. deviation to the average. Default: 3
Calculate the average of maximum and minimum inside any mask.
Image to be filtered.
Filter mask.
Filtered image.
Border treatment. Default: "mirrored"
Smooth an image with an arbitrary rank mask.
Image to be filtered.
Image whose region serves as filter mask.
Filtered output image.
Number of averaged pixels. Typical value: Surface(Mask) / 2. Default: 5
Border treatment. Default: "mirrored"
Separated median filtering with rectangle masks.
Image to be filtered.
Median filtered image.
Width of rank mask. Default: 25
Height of rank mask. Default: 25
Border treatment. Default: "mirrored"
Compute a median filter with rectangular masks.
Image to be filtered.
Filtered image.
Width of the filter mask. Default: 15
Height of the filter mask. Default: 15
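A median filter replaces each pixel by the median of its rectangular neighborhood; a naive Python sketch that clamps coordinates at the image border (HALCON's border handling may differ):

```python
import statistics

def median_image(img, mask_w, mask_h):
    """Median filter with a mask_w x mask_h rectangular mask over a 2D
    list-of-lists image; border pixels use clamped coordinates."""
    h, w = len(img), len(img[0])
    hw, hh = mask_w // 2, mask_h // 2
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # gather the neighborhood, clamping to the image bounds
            window = [img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                      for dr in range(-hh, hh + 1)
                      for dc in range(-hw, hw + 1)]
            out[r][c] = statistics.median(window)
    return out
```

Salt-and-pepper outliers smaller than half the mask area are removed completely, which is the classic property of the median filter.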
Compute a median filter with various masks.
Image to be filtered.
Filtered image.
Filter mask type. Default: "circle"
Radius of the filter mask. Default: 1
Border treatment. Default: "mirrored"
Weighted median filtering with different rank masks.
Image to be filtered.
Median filtered image.
Type of median mask. Default: "inner"
Mask size. Default: 3
Compute a rank filter with rectangular masks.
Image to be filtered.
Filtered image.
Width of the filter mask. Default: 15
Height of the filter mask. Default: 15
Rank of the output gray value. Default: 5
Compute a rank filter with arbitrary masks.
Image to be filtered.
Filter mask.
Filtered image.
Rank of the output gray value. Default: 5
Border treatment. Default: "mirrored"
Opening, Median and Closing with circle or rectangle mask.
Image to be filtered.
Filtered Image.
Shape of the mask. Default: "circle"
Radius of the filter mask. Default: 1
Filter mode: 0 corresponds to a gray value opening, 50 to a median, and 100 to a gray value closing. Default: 10
Border treatment. Default: "mirrored"
Smooth by averaging.
Image to be smoothed.
Smoothed image.
Width of filter mask. Default: 9
Height of filter mask. Default: 9
Information on smoothing filter smooth_image.
Name of required filter. Default: "deriche2"
Filter parameter: small values cause strong smoothing (reversed in the case of 'gauss'). Default: 0.5
Width of the filter is approximately Size x Size pixels.
In the case of the Gaussian filter: coefficients of the "positive" half of the 1D impulse response.
Smooth an image using the binomial filter.
Input image.
Smoothed image.
Filter width. Default: 5
Filter height. Default: 5
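A binomial filter approximates Gaussian smoothing with integer-friendly coefficients taken from Pascal's triangle, applied separably per row and column. A 1D sketch with mirrored border treatment (names and details chosen here, not taken from the HALCON API):

```python
def binomial_kernel(n):
    """Row n-1 of Pascal's triangle, normalized; approximates a Gaussian."""
    k = [1.0]
    for _ in range(n - 1):
        k = [a + b for a, b in zip([0.0] + k, k + [0.0])]
    s = sum(k)
    return [v / s for v in k]

def binomial_smooth_row(row, n=5):
    """1D binomial smoothing with mirrored border treatment."""
    k = binomial_kernel(n)
    r = n // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, kv in enumerate(k):
            idx = i + j - r
            if idx < 0:
                idx = -idx                     # mirror at the left border
            if idx >= len(row):
                idx = 2 * len(row) - idx - 2   # mirror at the right border
            acc += kv * row[idx]
        out.append(acc)
    return out
```

A full 2D filter would run this pass over every row and then over every column, which is why separate width and height parameters suffice.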
Smooth an image using discrete Gaussian functions.
Image to be smoothed.
Filtered image.
Required filter size. Default: 5
Smooth an image using discrete Gaussian functions.
Image to be smoothed.
Filtered image.
Required filter size. Default: 5
Smooth an image in the spatial domain to suppress noise.
Image to smooth.
Smoothed image.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Gap between local maximum/minimum and all other gray values of the neighborhood. Default: 1.0
Replacement rule (1 = next minimum/maximum, 2 = average, 3 = median). Default: 3
Interpolate 2 video half images.
Gray image consisting of two half images.
Full image with interpolated/removed lines.
Instruction whether even or odd lines should be replaced/removed. Default: "odd"
Return gray values with given rank from multiple channels.
Multichannel gray image.
Result of the rank function.
Rank of the gray value images to return. Default: 2
Average gray values over several channels.
Multichannel gray image.
Result of averaging.
Replace values outside of thresholds with average value.
Input image.
Smoothed image.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Minimum gray value. Default: 1
Maximum gray value. Default: 254
Suppress salt and pepper noise.
Input image.
Smoothed image.
Width of filter mask. Default: 3
Height of filter mask. Default: 3
Minimum gray value. Default: 1
Maximum gray value. Default: 254
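The two operators above repair pixels whose gray value falls outside a valid range (classic salt-and-pepper damage) while leaving valid pixels untouched. A minimal sketch of the replace-with-average variant (pure Python, names hypothetical):

```python
def suppress_salt_pepper(img, mask=3, min_gray=1, max_gray=254):
    """Replace pixels outside [min_gray, max_gray] by the mean of their
    in-range neighbors; pixels inside the range are left unchanged."""
    h, w = len(img), len(img[0])
    r = mask // 2
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if min_gray <= img[i][j] <= max_gray:
                continue
            vals = [img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1))
                    if min_gray <= img[y][x] <= max_gray]
            if vals:
                out[i][j] = sum(vals) // len(vals)
    return out
```

Unlike a global median filter, this only touches pixels flagged as defective, so fine image detail in valid regions is fully preserved.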
Find corners using the Sojka operator.
Input image.
Required filter size. Default: 9
Sigma of the weight function according to the distance to the corner candidate. Default: 2.5
Sigma of the weight function for the distance to the ideal gray value edge. Default: 0.75
Threshold for the magnitude of the gradient. Default: 30.0
Threshold for Apparentness. Default: 90.0
Threshold for the direction change in a corner point (radians). Default: 0.5
Subpixel precise calculation of the corner points. Default: "false"
Row coordinates of the detected corner points.
Column coordinates of the detected corner points.
Enhance circular dots in an image.
Input image.
Output image.
Diameter of the dots to be enhanced. Default: 5
Enhance dark, light, or all dots. Default: "light"
Shift of the filter response. Default: 0
Subpixel precise detection of local minima in an image.
Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected minima.
Column coordinates of the detected minima.
Subpixel precise detection of local maxima in an image.
Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected maxima.
Column coordinates of the detected maxima.
Subpixel precise detection of saddle points in an image.
Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected saddle points.
Column coordinates of the detected saddle points.
Subpixel precise detection of critical points in an image.
Input image.
Method for the calculation of the partial derivatives. Default: "facet"
Sigma of the Gaussian. If Filter is 'facet', Sigma may be 0.0 to avoid the smoothing of the input image.
Minimum absolute value of the eigenvalues of the Hessian matrix. Default: 5.0
Row coordinates of the detected minima.
Column coordinates of the detected minima.
Row coordinates of the detected maxima.
Column coordinates of the detected maxima.
Row coordinates of the detected saddle points.
Column coordinates of the detected saddle points.
Detect points of interest using the Harris operator.
Input image.
Amount of smoothing used for the calculation of the gradient. Default: 0.7
Amount of smoothing used for the integration of the gradients. Default: 2.0
Weight of the squared trace of the squared gradient matrix. Default: 0.08
Minimum filter response for the points. Default: 1000.0
Row coordinates of the detected points.
Column coordinates of the detected points.
Detect points of interest using the binomial approximation of the Harris operator.
Input image.
Amount of binomial smoothing used for the calculation of the gradient. Default: 5
Amount of smoothing used for the integration of the gradients. Default: 15
Weight of the squared trace of the squared gradient matrix. Default: 0.08
Minimum filter response for the points. Default: 1000.0
Turn on or off subpixel refinement. Default: "on"
Row coordinates of the detected points.
Column coordinates of the detected points.
Detect points of interest using the Lepetit operator.
Input image.
Radius of the circle. Default: 3
Number of checked neighbors on the circle. Default: 1
Threshold of the gray value difference to each circle point. Default: 15
Threshold of the gray value difference to all circle points. Default: 30
Subpixel accuracy of point coordinates. Default: "interpolation"
Row-coordinates of the detected points.
Column-coordinates of the detected points.
Detect points of interest using the Foerstner operator.
Input image.
Amount of smoothing used for the calculation of the gradient. If Smoothing is 'mean', SigmaGrad is ignored. Default: 1.0
Amount of smoothing used for the integration of the gradients. Default: 2.0
Amount of smoothing used in the optimization functions. Default: 3.0
Threshold for the segmentation of inhomogeneous image areas. Default: 200
Threshold for the segmentation of point areas. Default: 0.3
Used smoothing method. Default: "gauss"
Elimination of multiply detected points. Default: "false"
Row coordinates of the detected junction points.
Column coordinates of the detected junction points.
Row part of the covariance matrix of the detected junction points.
Mixed part of the covariance matrix of the detected junction points.
Column part of the covariance matrix of the detected junction points.
Row coordinates of the detected area points.
Column coordinates of the detected area points.
Row part of the covariance matrix of the detected area points.
Mixed part of the covariance matrix of the detected area points.
Column part of the covariance matrix of the detected area points.
Estimate the image noise from a single image.
Input image.
Method to estimate the image noise. Default: "foerstner"
Percentage of used image points. Default: 20
Standard deviation of the image noise.
Determine the noise distribution of an image.
Region from which the noise distribution is to be estimated.
Corresponding image.
Size of the mean filter. Default: 21
Noise distribution of all input regions.
Add noise to an image.
Input image.
Noisy image.
Maximum noise amplitude. Default: 60.0
Add noise to an image.
Input image.
Noisy image.
Noise distribution.
Generate a Gaussian noise distribution.
Standard deviation of the Gaussian noise distribution. Default: 2.0
Resulting Gaussian noise distribution.
Generate a salt-and-pepper noise distribution.
Percentage of salt (white noise pixels). Default: 5.0
Percentage of pepper (black noise pixels). Default: 5.0
Resulting noise distribution.
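A salt-and-pepper distribution sets a given percentage of pixels to white (salt) and another percentage to black (pepper). A hedged sketch of applying such a distribution to a byte image (this helper is illustrative, not the HALCON `add_noise_distribution` operator):

```python
import random

def add_salt_pepper(img, salt_pct=5.0, pepper_pct=5.0, seed=0):
    """Set roughly salt_pct% of pixels to 255 and pepper_pct% to 0."""
    rng = random.Random(seed)
    out = [row[:] for row in img]
    for i in range(len(img)):
        for j in range(len(img[0])):
            u = rng.random() * 100.0
            if u < salt_pct:
                out[i][j] = 255
            elif u < salt_pct + pepper_pct:
                out[i][j] = 0
    return out
```

Images corrupted this way are the natural test input for the median and salt-and-pepper suppression filters documented earlier in this section.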
Calculate standard deviation over several channels.
Multichannel gray image.
Result of calculation.
Perform an inpainting by texture propagation.
Input image.
Inpainting region.
Output image.
Size of the inpainting blocks. Default: 9
Size of the search window. Default: 30
Influence of the edge amplitude on the inpainting order. Default: 1.0
Post-iteration for artifact reduction. Default: "none"
Gray value tolerance for post-iteration. Default: 1.0
Perform an inpainting by coherence transport.
Input image.
Inpainting region.
Output image.
Radius of the pixel neighborhood. Default: 5.0
Sharpness parameter in percent. Default: 25.0
Pre-smoothing parameter. Default: 1.41
Smoothing parameter for the direction estimation. Default: 4.0
Channel weights. Default: 1
Perform an inpainting by smoothing of level lines.
Input image.
Inpainting region.
Output image.
Smoothing for derivative operator. Default: 0.5
Time step. Default: 0.5
Number of iterations. Default: 10
Perform an inpainting by coherence enhancing diffusion.
Input image.
Inpainting region.
Output image.
Smoothing for derivative operator. Default: 0.5
Smoothing for diffusion coefficients. Default: 3.0
Time step. Default: 0.5
Number of iterations. Default: 10
Perform an inpainting by anisotropic diffusion.
Input image.
Inpainting region.
Output image.
Type of edge sharpening algorithm. Default: "weickert"
Contrast parameter. Default: 5.0
Step size. Default: 0.5
Number of iterations. Default: 10
Smoothing coefficient for edge information. Default: 3.0
Perform a harmonic interpolation on an image region.
Input image.
Inpainting region.
Output image.
Computational accuracy. Default: 0.001
Expand the domain of an image and set the gray values in the expanded domain.
Input image with domain to be expanded.
Output image with new gray values in the expanded domain.
Radius of the gray value expansion, measured in pixels. Default: 2
Compute the topographic primal sketch of an image.
Image for which the topographic primal sketch is to be computed.
Label image containing the 11 classes.
Compute an affine transformation of the color values of a multichannel image.
Multichannel input image.
Multichannel output image.
Transformation matrix for the color values.
Compute the transformation matrix of the principal component analysis of multichannel images.
Multichannel input image.
Transformation matrix for the computation of the PCA.
Transformation matrix for the computation of the inverse PCA.
Mean gray value of the channels.
Covariance matrix of the channels.
Information content of the transformed channels.
Compute the principal components of multichannel images.
Multichannel input image.
Multichannel output image.
Information content of each output channel.
Determine the fuzzy entropy of regions.
Regions for which the fuzzy entropy is to be calculated.
Input image containing the fuzzy membership values.
Start of the fuzzy function. Default: 0
End of the fuzzy function. Default: 255
Fuzzy entropy of a region.
Calculate the fuzzy perimeter of a region.
Regions for which the fuzzy perimeter is to be calculated.
Input image containing the fuzzy membership values.
Start of the fuzzy function. Default: 0
End of the fuzzy function. Default: 255
Fuzzy perimeter of a region.
Perform a gray value closing with a selected mask.
Input image.
Gray-closed image.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Perform a gray value opening with a selected mask.
Input image.
Gray-opened image.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Determine the minimum gray value within a selected mask.
Image for which the minimum gray values are to be calculated.
Image containing the minimum gray values.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Determine the maximum gray value within a selected mask.
Image for which the maximum gray values are to be calculated.
Image containing the maximum gray values.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Shape of the mask. Default: "octagon"
Determine the gray value range within a rectangle.
Image for which the gray value range is to be calculated.
Image containing the gray value range.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Perform a gray value closing with a rectangular mask.
Input image.
Gray-closed image.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Perform a gray value opening with a rectangular mask.
Input image.
Gray-opened image.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Determine the minimum gray value within a rectangle.
Image for which the minimum gray values are to be calculated.
Image containing the minimum gray values.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Determine the maximum gray value within a rectangle.
Image for which the maximum gray values are to be calculated.
Image containing the maximum gray values.
Height of the filter mask. Default: 11
Width of the filter mask. Default: 11
Thinning of gray value images.
Image to be thinned.
Thinned image.
Transform an image with a gray-value look-up table.
Image whose gray values are to be transformed.
Transformed image.
Table containing the transformation.
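A gray-value look-up table maps every input gray value g to lut[g], so any point transformation (inversion, gamma, thresholding) becomes a single table lookup per pixel. A minimal sketch:

```python
def lut_trans(img, lut):
    """Map every gray value through a look-up table (lut[g] for g in 0..255)."""
    return [[lut[g] for g in row] for row in img]

# Example LUT: invert a byte image via g -> 255 - g.
invert = [255 - g for g in range(256)]
```

The table has one entry per possible gray value of the input type, which is why LUT transforms are typically applied to byte images.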
Calculate the correlation between an image and an arbitrary filter mask.
Images for which the correlation will be calculated.
Result of the correlation.
Filter mask as file name or tuple. Default: "sobel"
Border treatment. Default: "mirrored"
Convert the type of an image.
Image whose image type is to be changed.
Converted image.
Desired image type (i.e., type of the gray values). Default: "byte"
Convert two real-valued images into a vector field image.
Vector component in the row direction.
Vector component in the column direction.
Displacement vector field.
Semantic kind of the vector field. Default: "vector_field_relative"
Convert a vector field image into two real-valued images.
Vector field.
Vector component in the row direction.
Vector component in the column direction.
Convert two real images into a complex image.
Real part.
Imaginary part.
Complex image.
Convert a complex image into two real images.
Complex image.
Real part.
Imaginary part.
Paint regions with their average gray value.
Input regions.
Original gray-value image.
Result image with painted regions.
Calculate the lowest possible gray value on an arbitrary path to the image border for each point in the image.
Image being processed.
Result image.
Symmetry of gray values along a row.
Input image.
Symmetry image.
Extension of search area. Default: 40
Angle of test direction. Default: 0.0
Exponent for weighting. Default: 0.5
Selection of gray values of a multi-channel image using an index image.
Multi-channel gray value image.
Image, where pixel values are interpreted as channel index.
Resulting image.
Extract depth using multiple focus levels.
Multichannel gray image consisting of multiple focus levels.
Depth image.
Confidence of depth estimation.
Filter used to find sharp pixels. Default: "highpass"
Method used to find sharp pixels. Default: "next_maximum"
Compute the calibrated scene flow between two stereo image pairs.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Pose of the rectified camera 2 in relation to the rectified camera 1.
Handle of the 3D object model.
Compute the uncalibrated scene flow between two stereo image pairs.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Estimated optical flow.
Estimated change in disparity.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Unwarp an image using a vector field.
Input image.
Input vector field.
Unwarped image.
Convolve a vector field with derivatives of the Gaussian.
Input vector field.
Filtered result images.
Sigma of the Gaussian. Default: 1.0
Component to be calculated. Default: "mean_curvature"
Compute the length of the vectors of a vector field.
Input vector field.
Length of the vectors of the vector field.
Mode for computing the length of the vectors. Default: "length"
Compute the optical flow between two images.
Input image 1.
Input image 2.
Optical flow.
Algorithm for computing the optical flow. Default: "fdrig"
Standard deviation for initial Gaussian smoothing. Default: 0.8
Standard deviation of the integration filter. Default: 1.0
Weight of the smoothing term relative to the data term. Default: 20.0
Weight of the gradient constancy relative to the gray value constancy. Default: 5.0
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "accurate"
Matching a template and an image in a resolution pyramid.
Input image.
The domain of this image will be matched with Image.
Result image and result region: values of the matching criterion within the determined "region of interest".
Desired matching criterion. Default: "dfd"
Start level in the resolution pyramid (highest resolution: level 0). Default: 1
Threshold to determine the "region of interest". Default: 30
Preparing a pattern for template matching with rotation.
Input image whose domain will be processed for the pattern matching.
Maximal number of pyramid levels. Default: 4
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Step rate (angle precision) of the matching. Default: 0.0982
Kind of optimization. Default: "sort"
Kind of gray values. Default: "original"
Template number.
Preparing a pattern for template matching.
Input image whose domain will be processed for the pattern matching.
Not yet in use. Default: 255
Maximal number of pyramid levels. Default: 4
Kind of optimization. Default: "sort"
Kind of gray values. Default: "original"
Template number.
Serialize a template.
Handle of the template.
Handle of the serialized item.
Deserialize a serialized template.
Handle of the serialized item.
Template number.
Writing a template to file.
Template number.
File name.
Reading a template from file.
File name.
Template number.
This operator is inoperable. It had the following function: Deallocation of the memory of all templates.
Deallocation of the memory of a template.
Template number.
Gray value offset for template.
Template number.
Offset of gray values. Default: 0
Define reference position for a matching template.
Template number.
Reference position of template (row).
Reference position of template (column).
Adapting a template to the size of an image.
Image which determines the size of the later matching.
Template number.
Searching all good gray value matches in a pyramid.
Input image inside of which the pattern has to be found.
All points which have an error below a certain threshold.
Template number.
Maximal average difference of the gray values. Default: 30.0
Number of levels in the pyramid. Default: 3
Searching the best gray value matches in a pre-generated pyramid.
Image pyramid inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Exactness in subpixels in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Resolution level up to which the method "best match" is used. Default: "original"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pyramid.
Input image inside of which the pattern has to be found.
Template number.
Maximal average difference of the gray values. Default: 30.0
Exactness in subpixels in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 4
Resolution level up to which the method "best match" is used. Default: 2
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching all good matches of a template and an image.
Input image inside of which the pattern has to be found.
All points whose error lies below a certain threshold.
Template number.
Maximal average difference of the gray values. Default: 20.0
Searching the best matching of a template and a pyramid with rotation.
Input image inside of which the pattern has to be found.
Template number.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 40.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image with rotation.
Input image inside of which the pattern has to be found.
Template number.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image.
Input image inside of which the pattern has to be found.
Template number.
Maximum average difference of the gray values. Default: 20.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values of the best match.
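The matching operators above score a candidate position by the average gray value difference between template and image, and report the position with the lowest score (subject to a maximal error). A minimal pure-Python sketch of that criterion, without pyramids, rotation, or subpixel refinement:

```python
def best_match(img, templ, max_error=20.0):
    """Slide the template over the image; the score is the mean absolute
    gray-value difference. Returns (row, col, error) of the best position,
    or None if even the best error exceeds max_error."""
    ih, iw = len(img), len(img[0])
    th, tw = len(templ), len(templ[0])
    best = None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            err = sum(abs(img[r + i][c + j] - templ[i][j])
                      for i in range(th) for j in range(tw)) / (th * tw)
            if best is None or err < best[2]:
                best = (r, c, err)
    return best if best is not None and best[2] <= max_error else None
```

The pyramid variants documented above run this search on coarse levels first and refine only promising candidates, which is what makes the approach tractable on full-size images.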
Matching of a template and an image.
Input image.
Area to be searched in the input image.
This area will be "matched" by Image within the RegionOfInterest.
Result image: values of the matching criterion.
Desired matching criterion. Default: "dfd"
Searching corners in images.
Input image.
Result of the filtering.
Desired filter size of the gray value mask. Default: 3
Weighting. Default: 0.04
Calculating a Gauss pyramid.
Input image.
Output images.
Kind of filter mask. Default: "weighted"
Factor for scaling down. Default: 0.5
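A Gauss pyramid repeatedly smooths and subsamples the image, halving the resolution at each level. A deliberately simple sketch using equal 2x2 weights for the smoothing step (HALCON's "weighted" mode uses a Gaussian-like mask instead; this corresponds more closely to an unweighted mean):

```python
def shrink_level(img):
    """One pyramid level: average each 2x2 block, subsampling by 0.5."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) // 4
             for j in range(w)] for i in range(h)]

def gauss_pyramid(img, levels=3):
    """Return [level0, level1, ...] with level0 the original image."""
    pyr = [img]
    for _ in range(levels - 1):
        img = shrink_level(img)
        pyr.append(img)
    return pyr
```

Such pyramids are the data structure behind the coarse-to-fine template matching operators in this section.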
Calculating the monotony operation.
Input image.
Result of the monotony operator.
Edge extraction using bandpass filters.
Input images.
Bandpass-filtered images.
Filter type: currently only 'lines' is supported. Default: "lines"
Detect color lines and their width.
Input image.
Extracted lines.
Amount of Gaussian smoothing to be applied. Default: 1.5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Should the line width be extracted? Default: "true"
Should junctions be added where they cannot be extracted? Default: "true"
Detect lines and their width.
Input image.
Extracted lines.
Amount of Gaussian smoothing to be applied. Default: 1.5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Extract bright or dark lines. Default: "light"
Should the line width be extracted? Default: "true"
Line model used to correct the line position and width. Default: "bar-shaped"
Should junctions be added where they cannot be extracted? Default: "true"
Detection of lines using the facet model.
Input image.
Extracted lines.
Size of the facet model mask. Default: 5
Lower threshold for the hysteresis threshold operation. Default: 3
Upper threshold for the hysteresis threshold operation. Default: 8
Extract bright or dark lines. Default: "light"
Store a filter mask in the spatial domain as a real-image.
Filter in the spatial domain.
Filter mask as file name or tuple. Default: "gauss"
Scaling factor. Default: 1.0
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a mean filter in the frequency domain.
Mean filter as image in the frequency domain.
Shape of the filter mask in the spatial domain. Default: "ellipse"
Diameter of the mean filter in the principal direction of the filter in the spatial domain. Default: 11.0
Diameter of the mean filter perpendicular to the principal direction of the filter in the spatial domain. Default: 11.0
Principal direction of the filter in the spatial domain. Default: 0.0
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a Gaussian filter in the frequency domain.
Gaussian filter as image in the frequency domain.
Standard deviation of the Gaussian in the principal direction of the filter in the spatial domain. Default: 1.0
Standard deviation of the Gaussian perpendicular to the principal direction of the filter in the spatial domain. Default: 1.0
Principal direction of the filter in the spatial domain. Default: 0.0
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a derivative filter in the frequency domain.
Derivative filter as image in the frequency domain.
Derivative to be computed. Default: "x"
Exponent used in the reverse transform. Default: 1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate a bandpass filter with Gaussian or sinusoidal shape.
Bandpass filter as image in the frequency domain.
Distance of the filter's maximum from the DC term. Default: 0.1
Bandwidth of the filter (standard deviation). Default: 0.01
Filter type. Default: "sin"
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
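A frequency-domain bandpass is just an image whose pixel value at distance d from the DC term gives the attenuation for that frequency. The sketch below builds a sin-shaped bandpass with the DC term at the image center; the response rises from 0 at the DC term to 1 at distance `frequency` (measured relative to the half image diagonal) and falls back to 0 at twice that distance. This is one plausible reading of the "sin" shape, not the exact HALCON formula:

```python
import math

def gen_bandpass_sin(frequency=0.1, width=64, height=64):
    """Sin-shaped bandpass filter image with the DC term at the center.
    Normalization of the frequency axis is an assumption made here."""
    cy, cx = height / 2.0, width / 2.0
    rmax = math.hypot(cy, cx)
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            d = math.hypot(y - cy, x - cx) / rmax
            row.append(math.sin(math.pi * d / (2 * frequency))
                       if d <= 2 * frequency else 0.0)
        img.append(row)
    return img
```

Multiplying a Fourier-transformed image by such a filter image (see `convol_fft` below) and transforming back realizes the bandpass in the spatial domain.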
Generate a bandpass filter with sinusoidal shape.
Bandpass filter as image in the frequency domain.
Distance of the filter's maximum from the DC term. Default: 0.1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal band filter.
Band filter in the frequency domain.
Minimum frequency. Default: 0.1
Maximum frequency. Default: 0.2
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal bandpass filter.
Bandpass filter in the frequency domain.
Minimum frequency. Default: 0.1
Maximum frequency. Default: 0.2
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal lowpass filter.
Lowpass filter in the frequency domain.
Cutoff frequency. Default: 0.1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Generate an ideal highpass filter.
Highpass filter in the frequency domain.
Cutoff frequency. Default: 0.1
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Return the power spectrum of a complex image.
Input image in frequency domain.
Power spectrum of the input image.
Return the power spectrum of a complex image.
Input image in frequency domain.
Power spectrum of the input image.
Return the power spectrum of a complex image.
Input image in frequency domain.
Power spectrum of the input image.
Return the phase of a complex image in degrees.
Input image in frequency domain.
Phase of the image in degrees.
Return the phase of a complex image in radians.
Input image in frequency domain.
Phase of the image in radians.
Calculate the energy of a two-channel image.
1st channel of input image (usually: Gabor image).
2nd channel of input image (usually: Hilbert image).
Image containing the local energy.
Convolve an image with a Gabor filter in the frequency domain.
Input image.
Gabor/Hilbert filter.
Result of the Gabor filter.
Result of the Hilbert filter.
Generate a Gabor filter.
Gabor and Hilbert filter.
Angle range, inversely proportional to the range of orientations. Default: 1.4
Distance of the center of the filter to the DC term. Default: 0.4
Bandwidth range, inversely proportional to the range of frequencies being passed. Default: 1.0
Angle of the principal orientation. Default: 1.5
Normalizing factor of the filter. Default: "none"
Location of the DC term in the frequency domain. Default: "dc_center"
Width of the image (filter). Default: 512
Height of the image (filter). Default: 512
Compute the phase correlation of two images in the frequency domain.
Fourier-transformed input image 1.
Fourier-transformed input image 2.
Phase correlation of the input images in the frequency domain.
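Phase correlation normalizes the cross-power spectrum of two Fourier-transformed images to unit magnitude, so the inverse transform yields a sharp peak at the translation between them. A 1D sketch with a naive O(n^2) DFT (helper names chosen here; HALCON works on 2D complex images and uses the FFT):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def phase_correlation_shift(a, b):
    """If b is a circularly shifted by s samples, return s."""
    n = len(a)
    A, B = dft(a), dft(b)
    cross = []
    for x, y in zip(A, B):
        c = y * x.conjugate()          # cross-power spectrum
        m = abs(c)
        cross.append(c / m if m > 1e-12 else 0j)  # keep phase only
    corr = idft(cross)
    return max(range(n), key=lambda t: corr[t].real)
```

Discarding the magnitudes is what distinguishes phase correlation from plain correlation (the next operator): the peak stays sharp even when the two images differ in contrast or illumination.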
Compute the correlation of two images in the frequency domain.
Fourier-transformed input image 1.
Fourier-transformed input image 2.
Correlation of the input images in the frequency domain.
Convolve an image with a filter in the frequency domain.
Complex input image.
Filter in frequency domain.
Result of applying the filter.
Deserialize FFT speed optimization data.
Handle of the serialized item.
Serialize FFT speed optimization data.
Handle of the serialized item.
Load FFT speed optimization data from a file.
File name of the optimization data. Default: "fft_opt.dat"
Store FFT speed optimization data in a file.
File name of the optimization data. Default: "fft_opt.dat"
Optimize the runtime of the real-valued FFT.
Width of the image for which the runtime should be optimized. Default: 512
Height of the image for which the runtime should be optimized. Default: 512
Thoroughness of the search for the optimum runtime. Default: "standard"
Optimize the runtime of the FFT.
Width of the image for which the runtime should be optimized. Default: 512
Height of the image for which the runtime should be optimized. Default: 512
Thoroughness of the search for the optimum runtime. Default: "standard"
Compute the real-valued fast Fourier transform of an image.
Input image.
Fourier-transformed image.
Calculate forward or reverse transform. Default: "to_freq"
Normalizing factor of the transform. Default: "sqrt"
Image type of the output image. Default: "complex"
Width of the image for which the runtime should be optimized. Default: 512
Compute the inverse fast Fourier transform of an image.
Input image.
Inverse-Fourier-transformed image.
Compute the fast Fourier transform of an image.
Input image.
Fourier-transformed image.
Compute the fast Fourier transform of an image.
Input image.
Fourier-transformed image.
Calculate forward or reverse transform. Default: "to_freq"
Sign of the exponent. Default: -1
Normalizing factor of the transform. Default: "sqrt"
Location of the DC term in the frequency domain. Default: "dc_center"
Image type of the output image. Default: "complex"
Apply a shock filter to an image.
Input image.
Output image.
Time step. Default: 0.5
Number of iterations. Default: 10
Type of edge detector. Default: "canny"
Smoothing of edge detector. Default: 1.0
Apply the mean curvature flow to an image.
Input image.
Output image.
Smoothing parameter for derivative operator. Default: 0.5
Time step. Default: 0.5
Number of iterations. Default: 10
Perform a coherence enhancing diffusion of an image.
Input image.
Output image.
Smoothing for derivative operator. Default: 0.5
Smoothing for diffusion coefficients. Default: 3.0
Time step. Default: 0.5
Number of iterations. Default: 10
Histogram linearization of images.
Image to be enhanced.
Image with linearized gray values.
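Histogram linearization remaps the gray values through the normalized cumulative histogram, flattening the output histogram. An illustrative NumPy version for byte images (not HALCON's exact implementation):

```python
import numpy as np

def equalize_histogram(img):
    """Remap gray values through the cumulative histogram so the
    output histogram is roughly flat (illustrative sketch)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size          # in [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(2)
# A low-contrast image confined to gray values 100..149 ...
img = rng.integers(100, 150, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
# ... is spread over (nearly) the full 0..255 range.
```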
Illuminate image.
Image to be enhanced.
"Illuminated" image.
Width of low pass mask. Default: 101
Height of low pass mask. Default: 101
Scales the "correction gray value" added to the original gray values. Default: 0.7
Enhance contrast of the image.
Image to be enhanced.
Contrast-enhanced image.
Width of low pass mask. Default: 7
Height of the low pass mask. Default: 7
Intensity of contrast emphasis. Default: 1.0
Maximum gray value spreading in the value range 0 to 255.
Image to be scaled.
Contrast-enhanced image.
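Maximum gray value spreading stretches the occupied gray range linearly onto 0..255. A minimal NumPy sketch (illustrative; HALCON may round or clip differently):

```python
import numpy as np

def scale_image_max(img):
    """Spread the gray values linearly to the full range 0..255."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.zeros_like(img, dtype=np.uint8)
    return np.round((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

img = np.array([[100, 110], [120, 150]], dtype=np.uint8)
out = scale_image_max(img)   # 100 -> 0, 150 -> 255
```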
Detect edges (amplitude and direction) using the Robinson operator.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Detect edges (amplitude) using the Robinson operator.
Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Kirsch operator.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Detect edges (amplitude) using the Kirsch operator.
Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Frei-Chen operator.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Detect edges (amplitude) using the Frei-Chen operator.
Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude and direction) using the Prewitt operator.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Detect edges (amplitude) using the Prewitt operator.
Input image.
Edge amplitude (gradient magnitude) image.
Detect edges (amplitude) using the Sobel operator.
Input image.
Edge amplitude (gradient magnitude) image.
Filter type. Default: "sum_abs"
Size of filter mask. Default: 3
Detect edges (amplitude and direction) using the Sobel operator.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Filter type. Default: "sum_abs"
Size of filter mask. Default: 3
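The Sobel amplitude and direction can be sketched with the classic 3x3 masks; the `|gx| + |gy|` amplitude shown here is one common reading of a 'sum_abs' filter type, and the sign convention of the masks is illustrative rather than HALCON's exact one:

```python
import numpy as np

# 3x3 Sobel masks (one common sign convention).
SOBEL_COL = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_ROW = SOBEL_COL.T

def correlate_same(img, kernel):
    """Minimal 'same'-size correlation with zero padding (no SciPy)."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * kernel)
    return out

def sobel_amp_dir(img):
    gc = correlate_same(img, SOBEL_COL)    # gradient along columns
    gr = correlate_same(img, SOBEL_ROW)    # gradient along rows
    amp = np.abs(gc) + np.abs(gr)          # 'sum_abs'-style amplitude
    direction = np.arctan2(gr, gc)         # edge direction in radians
    return amp, direction

# A vertical step edge yields a strong column gradient, direction 0.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
amp, direction = sobel_amp_dir(img)
```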
Detect edges using the Roberts filter.
Input image.
Roberts-filtered result images.
Filter type. Default: "gradient_sum"
Calculate the Laplace operator by using finite differences.
Input image.
Laplace-filtered result image.
Type of the result image; for byte and uint2, the absolute value is used. Default: "absolute"
Size of filter mask. Default: 3
Filter mask used in the Laplace operator. Default: "n_4"
Extract high frequency components from an image.
Input image.
High-pass-filtered result image.
Width of the filter mask. Default: 9
Height of the filter mask. Default: 9
Return the filter coefficients of a filter in edges_image.
Name of the edge operator. Default: "lanser2"
1D edge filter ('edge') or 1D smoothing filter ('smooth'). Default: "edge"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 0.5
Filter width in pixels.
For Canny filters: Coefficients of the "positive" half of the 1D impulse response. All others: Coefficients of a corresponding non-recursive filter.
Extract subpixel precise color edges using Deriche, Shen, or Canny filters.
Input image.
Extracted edges.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Lower threshold for the hysteresis threshold operation. Default: 20
Upper threshold for the hysteresis threshold operation. Default: 40
Extract color edges using Canny, Deriche, or Shen filters.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Non-maximum suppression ('none', if not desired). Default: "nms"
Lower threshold for the hysteresis threshold operation (negative if no thresholding is desired). Default: 20
Upper threshold for the hysteresis threshold operation (negative if no thresholding is desired). Default: 40
Extract sub-pixel precise edges using Deriche, Lanser, Shen, or Canny filters.
Input image.
Extracted edges.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Lower threshold for the hysteresis threshold operation. Default: 20
Upper threshold for the hysteresis threshold operation. Default: 40
Extract edges using Deriche, Lanser, Shen, or Canny filters.
Input image.
Edge amplitude (gradient magnitude) image.
Edge direction image.
Edge operator to be applied. Default: "canny"
Filter parameter: small values result in strong smoothing, and thus less detail (opposite for 'canny'). Default: 1.0
Non-maximum suppression ('none', if not desired). Default: "nms"
Lower threshold for the hysteresis threshold operation (negative, if no thresholding is desired). Default: 20
Upper threshold for the hysteresis threshold operation (negative, if no thresholding is desired). Default: 40
Convolve an image with derivatives of the Gaussian.
Input images.
Filtered result images.
Sigma of the Gaussian. Default: 1.0
Derivative or feature to be calculated. Default: "x"
LoG operator (Laplace of Gaussian).
Input image.
Laplace filtered image.
Smoothing parameter of the Gaussian. Default: 2.0
Approximate the LoG operator (Laplace of Gaussian).
Input image.
LoG image.
Smoothing parameter of the Laplace operator to approximate. Default: 3.0
Ratio of the standard deviations used (Marr recommends 1.6). Default: 1.6
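The approximation works because a difference of two Gaussians (DoG) with a standard-deviation ratio near Marr's 1.6 closely matches the shape of the LoG: a central extremum flanked by opposite-signed lobes and zero crossings. A 1D NumPy illustration:

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Difference of two Gaussians with the Marr ratio 1.6.
x = np.linspace(-10, 10, 401)
sigma = 2.0
dog = gaussian(x, sigma) - gaussian(x, 1.6 * sigma)

# Like the (negated, scaled) LoG: symmetric, a positive central peak,
# negative lobes further out.
```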
Close edge gaps using the edge amplitude image.
Region containing one pixel thick edges.
Edge amplitude (gradient) image.
Region containing closed edges.
Minimum edge amplitude. Default: 16
Maximal number of points by which edges are extended. Default: 3
Close edge gaps using the edge amplitude image.
Region containing one pixel thick edges.
Edge amplitude (gradient) image.
Region containing closed edges.
Minimum edge amplitude. Default: 16
Detect straight edge segments.
Input image.
Mask size of the Sobel operator. Default: 5
Minimum edge strength. Default: 32
Maximum distance of the approximating line to its original edge. Default: 3
Minimum length of the resulting line segments. Default: 10
Row coordinate of the line segments' start points.
Column coordinate of the line segments' start points.
Row coordinate of the line segments' end points.
Column coordinate of the line segments' end points.
This operator is inoperable. It had the following function: Delete all look-up-tables of the color space transformation.
Release the look-up-table needed for color space transformation.
Handle of the look-up-table handle for the color space transformation.
Color space transformation using pre-generated look-up-table.
Input image (channel 1).
Input image (channel 2).
Input image (channel 3).
Color-transformed output image (channel 1).
Color-transformed output image (channel 2).
Color-transformed output image (channel 3).
Handle of the look-up-table for the color space transformation.
Creates the look-up-table for transformation of an image from the RGB color space to an arbitrary color space.
Color space of the output image. Default: "hsv"
Direction of color space transformation. Default: "from_rgb"
Number of bits of the input image. Default: 8
Handle of the look-up-table for color space transformation.
Convert a single-channel color filter array image into an RGB image.
Input image.
Output image.
Color filter array type. Default: "bayer_gb"
Interpolation type. Default: "bilinear"
Transform an RGB image into a gray scale image.
Three-channel RGB image.
Gray scale image.
Transform an RGB image to a gray scale image.
Input image (red channel).
Input image (green channel).
Input image (blue channel).
Gray scale image.
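A weighted-channel gray conversion can be sketched as below; the ITU-R BT.601 luma weights shown are the common convention for this kind of conversion (presented as an illustration, not as a guarantee of HALCON's internal coefficients):

```python
import numpy as np

def rgb_to_gray(r, g, b):
    """Luma-weighted gray conversion (BT.601 weights; illustrative)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

r = np.full((2, 2), 255.0)
g = np.zeros((2, 2))
b = np.zeros((2, 2))
gray = rgb_to_gray(r, g, b)        # pure red -> 0.299 * 255 = 76.245
white = rgb_to_gray(255.0, 255.0, 255.0)   # weights sum to 1 -> 255
```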
Transform an image from the RGB color space to an arbitrary color space.
Input image (red channel).
Input image (green channel).
Input image (blue channel).
Color-transformed output image (channel 1).
Color-transformed output image (channel 2).
Color-transformed output image (channel 3).
Color space of the output image. Default: "hsv"
Transform an image from an arbitrary color space to the RGB color space.
Input image (channel 1).
Input image (channel 2).
Input image (channel 3).
Red channel.
Green channel.
Blue channel.
Color space of the input image. Default: "hsv"
Logical "AND" of each pixel using a bit mask.
Input image(s).
Result image(s) by combination with mask.
Bit field. Default: 128
Extract a bit from the pixels.
Input image(s).
Result image(s) by extraction.
Bit to be selected. Default: 8
Right shift of all pixels of the image.
Input image(s).
Result image(s) by shift operation.
Shift value. Default: 3
Left shift of all pixels of the image.
Input image(s).
Result image(s) by shift operation.
Shift value. Default: 3
Complement all bits of the pixels.
Input image(s).
Result image(s) by complement operation.
Bit-by-bit XOR of all pixels of the input images.
Input image(s) 1.
Input image(s) 2.
Result image(s) by XOR-operation.
Bit-by-bit OR of all pixels of the input images.
Input image(s) 1.
Input image(s) 2.
Result image(s) by OR-operation.
Bit-by-bit AND of all pixels of the input images.
Input image(s) 1.
Input image(s) 2.
Result image(s) by AND-operation.
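These bitwise operators map directly onto integer bit operations. A NumPy sketch on a single-pixel byte image (the numbering that makes bit 8 the most significant bit of a byte is an illustrative assumption):

```python
import numpy as np

img = np.array([[0b10110100]], dtype=np.uint8)   # gray value 180

masked = img & np.uint8(128)     # AND with bit mask: keep the top bit
bit8 = (img >> 7) & 1            # extract bit 8 (here: the MSB) as 0/1
shifted = img >> 3               # right shift of all pixels
inverted = ~img                  # complement all bits (255 - 180 = 75)
```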
Perform a gamma encoding or decoding of an image.
Input image.
Output image.
Gamma coefficient of the exponential part of the transformation. Default: 0.416666666667
Offset of the exponential part of the transformation. Default: 0.055
Gray value for which the transformation switches from linear to exponential. Default: 0.0031308
Maximum gray value of the input image type. Default: 255.0
If 'true', perform a gamma encoding, otherwise a gamma decoding. Default: "true"
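With the defaults listed above (gamma 1/2.4, offset 0.055, threshold 0.0031308) the encoding is the sRGB transfer curve: linear near black, exponential above the threshold. A sketch in NumPy, with the linear-segment slope derived so the curve is continuous at the threshold (about 12.92 for these defaults); clipping and rounding details are illustrative:

```python
import numpy as np

def gamma_encode(v, gamma=1/2.4, offset=0.055, threshold=0.0031308,
                 max_gray=255.0):
    """Piecewise gamma encoding (sRGB-style with the defaults)."""
    x = np.asarray(v, dtype=float) / max_gray
    # Slope of the linear segment chosen for continuity at Threshold.
    slope = ((1 + offset) * threshold ** gamma - offset) / threshold
    enc = np.where(x <= threshold,
                   slope * x,
                   (1 + offset) * x ** gamma - offset)
    return enc * max_gray

mid = float(gamma_encode(128))   # linear mid-gray brightens to ~188
top = float(gamma_encode(255))   # full scale stays at full scale
```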
Raise an image to a power.
Input image.
Output image.
Power to which the gray values are raised. Default: 2
Calculate the exponentiation of an image.
Input image.
Output image.
Base of the exponentiation. Default: "e"
Calculate the logarithm of an image.
Input image.
Output image.
Base of the logarithm. Default: "e"
Calculate the arctangent of two images.
Input image 1.
Input image 2.
Output image.
Calculate the arctangent of an image.
Input image.
Output image.
Calculate the arccosine of an image.
Input image.
Output image.
Calculate the arcsine of an image.
Input image.
Output image.
Calculate the tangent of an image.
Input image.
Output image.
Calculate the cosine of an image.
Input image.
Output image.
Calculate the sine of an image.
Input image.
Output image.
Calculate the absolute difference of two images.
Input image 1.
Input image 2.
Absolute value of the difference of the input images.
Scale factor. Default: 1.0
Calculate the square root of an image.
Input image.
Output image.
Subtract two images.
Minuend(s).
Subtrahend(s).
Result image(s) by the subtraction.
Correction factor. Default: 1.0
Correction value. Default: 128.0
Scale the gray values of an image.
Image(s) whose gray values are to be scaled.
Result image(s) by the scale.
Scale factor. Default: 0.01
Offset. Default: 0
Divide two images.
Image(s) 1.
Image(s) 2.
Result image(s) by the division.
Factor for gray range adaption. Default: 255
Value for gray range adaption. Default: 0
Multiply two images.
Image(s) 1.
Image(s) 2.
Result image(s) by the product.
Factor for gray range adaption. Default: 0.005
Value for gray range adaption. Default: 0
Add two images.
Image(s) 1.
Image(s) 2.
Result image(s) by the addition.
Factor for gray value adaption. Default: 0.5
Value for gray value range adaption. Default: 0
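As the factor/offset defaults above suggest, these arithmetic operators scale their raw result back into the output gray range, i.e. g' = (g1 ± g2) · Mult + Add. A NumPy sketch for byte images (the clipping and rounding behavior shown is illustrative):

```python
import numpy as np

def add_image(img1, img2, mult=0.5, add=0.0):
    """g' = (g1 + g2) * Mult + Add, clipped to the byte range."""
    res = (img1.astype(float) + img2.astype(float)) * mult + add
    return np.clip(np.round(res), 0, 255).astype(np.uint8)

def sub_image(img1, img2, mult=1.0, add=128.0):
    """g' = (g1 - g2) * Mult + Add, clipped to the byte range."""
    res = (img1.astype(float) - img2.astype(float)) * mult + add
    return np.clip(np.round(res), 0, 255).astype(np.uint8)

a = np.full((2, 2), 200, dtype=np.uint8)
b = np.full((2, 2), 100, dtype=np.uint8)
summed = add_image(a, b)    # (200 + 100) * 0.5     = 150
diff = sub_image(a, b)      # (200 - 100) * 1 + 128 = 228
```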
Calculate the absolute value (modulus) of an image.
Image(s) for which the absolute gray values are to be calculated.
Result image(s).
Calculate the minimum of two images pixel by pixel.
Image(s) 1.
Image(s) 2.
Result image(s) by the minimization.
Calculate the maximum of two images pixel by pixel.
Image(s) 1.
Image(s) 2.
Result image(s) by the maximization.
Invert an image.
Input image(s).
Image(s) with inverted gray values.
Apply an automatic color correction to panorama images.
Input images.
Output images.
List of source images.
List of destination images.
Reference image.
Projective matrices.
Estimation algorithm for the correction. Default: "standard"
Parameters to be estimated. Default: ["mult_gray"]
Model of OECF to be used. Default: ["laguerre"]
Create 6 cube map images of a spherical mosaic.
Input images.
Front cube map.
Rear cube map.
Left cube map.
Right cube map.
Top cube map.
Bottom cube map.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
Width and height of the resulting cube maps. Default: 1000
Mode of adding the images to the mosaic image. Default: "voronoi"
Mode of image interpolation. Default: "bilinear"
Create a spherical mosaic image.
Input images.
Output image.
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
Minimum latitude of points in the spherical mosaic image. Default: -90
Maximum latitude of points in the spherical mosaic image. Default: 90
Minimum longitude of points in the spherical mosaic image. Default: -180
Maximum longitude of points in the spherical mosaic image. Default: 180
Latitude and longitude angle step width. Default: 0.1
Mode of adding the images to the mosaic image. Default: "voronoi"
Mode of interpolation when creating the mosaic image. Default: "bilinear"
Combine multiple images into a mosaic image.
Input images.
Output image.
Array of 3x3 projective transformation matrices.
Stacking order of the images in the mosaic. Default: "default"
Should the domains of the input images also be transformed? Default: "false"
3x3 projective transformation matrix that describes the translation that was necessary to transform all images completely into the output image.
Combine multiple images into a mosaic image.
Input images.
Output image.
Index of the central input image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Stacking order of the images in the mosaic. Default: "default"
Should the domains of the input images also be transformed? Default: "false"
Array of 3x3 projective transformation matrices that determine the position of the images in the mosaic.
Apply a projective transformation to an image and specify the output image size.
Input image.
Output image.
Homogeneous projective transformation matrix.
Interpolation method for the transformation. Default: "bilinear"
Output image width.
Output image height.
Should the domain of the input image also be transformed? Default: "false"
Apply a projective transformation to an image.
Input image.
Output image.
Homogeneous projective transformation matrix.
Interpolation method for the transformation. Default: "bilinear"
Adapt the size of the output image automatically? Default: "false"
Should the domain of the input image also be transformed? Default: "false"
Apply an arbitrary affine 2D transformation to an image and specify the output image size.
Input image.
Transformed image.
Input transformation matrix.
Type of interpolation. Default: "constant"
Width of the output image. Default: 640
Height of the output image. Default: 480
Apply an arbitrary affine 2D transformation to images.
Input image.
Transformed image.
Input transformation matrix.
Type of interpolation. Default: "constant"
Adaption of size of result image. Default: "false"
Zoom an image by a given factor.
Input image.
Scaled image.
Scale factor for the width of the image. Default: 0.5
Scale factor for the height of the image. Default: 0.5
Type of interpolation. Default: "constant"
Zoom an image to a given size.
Input image.
Scaled image.
Width of the resulting image. Default: 512
Height of the resulting image. Default: 512
Type of interpolation. Default: "constant"
Mirror an image.
Input image.
Reflected image.
Axis of reflection. Default: "row"
Rotate an image about its center.
Input image.
Rotated image.
Rotation angle. Default: 90
Type of interpolation. Default: "constant"
Transform an image in polar coordinates back to Cartesian coordinates.
Input image.
Output image.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the first column of the input image to. Default: 0.0
Angle of the ray to map the last column of the input image to. Default: 6.2831853
Radius of the circle to map the first row of the input image to. Default: 0
Radius of the circle to map the last row of the input image to. Default: 100
Width of the output image. Default: 512
Height of the output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Transform an annular arc in an image to polar coordinates.
Input image.
Output image.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to the first column of the output image. Default: 0.0
Angle of the ray to be mapped to the last column of the output image. Default: 6.2831853
Radius of the circle to be mapped to the first row of the output image. Default: 0
Radius of the circle to be mapped to the last row of the output image. Default: 100
Width of the output image. Default: 512
Height of the output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
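The mapping can be sketched as follows: output rows sample radii from RadiusStart to RadiusEnd, output columns sample angles from AngleStart to AngleEnd, and each sample is looked up around the given center. The angle orientation used here is one plausible convention, not necessarily HALCON's exact one:

```python
import numpy as np

def polar_trans(img, center_row, center_col, angle_start, angle_end,
                radius_start, radius_end, width, height):
    """Nearest-neighbor polar transform sketch: output rows map to
    radii, output columns map to angles."""
    radius = radius_start + np.arange(height) * \
        (radius_end - radius_start) / max(height - 1, 1)
    angle = angle_start + np.arange(width) * \
        (angle_end - angle_start) / max(width - 1, 1)
    rr = center_row + radius[:, None] * np.sin(angle[None, :])
    cc = center_col + radius[:, None] * np.cos(angle[None, :])
    ri = np.clip(np.round(rr).astype(int), 0, img.shape[0] - 1)
    ci = np.clip(np.round(cc).astype(int), 0, img.shape[1] - 1)
    return img[ri, ci]

img = np.zeros((64, 64), dtype=np.uint8)
img[32, 32] = 5    # center pixel -> first output row (radius 0)
img[32, 42] = 9    # 10 pixels away along angle 0
polar = polar_trans(img, 32, 32, 0.0, 2 * np.pi, 0, 10,
                    width=8, height=11)
```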
Transform an image to polar coordinates.
Input image in Cartesian coordinates.
Result image in polar coordinates.
Row coordinate of the center of the coordinate system. Default: 100
Column coordinate of the center of the coordinate system. Default: 100
Width of the result image. Default: 314
Height of the result image. Default: 200
Approximate an affine map from a displacement vector field.
Input image.
Output transformation matrix.
Deserialize a serialized XLD object.
XLD object.
Handle of the serialized item.
Serialize an XLD object.
XLD object.
Handle of the serialized item.
Read XLD polygons from a DXF file.
Read XLD polygons.
Name of the DXF file.
Names of the generic parameters that can be adjusted for the DXF input. Default: []
Values of the generic parameters that can be adjusted for the DXF input. Default: []
Status information.
Write XLD polygons to a file in DXF format.
XLD polygons to be written.
Name of the DXF file.
Read XLD contours from a DXF file.
Read XLD contours.
Name of the DXF file.
Names of the generic parameters that can be adjusted for the DXF input. Default: []
Values of the generic parameters that can be adjusted for the DXF input. Default: []
Status information.
Write XLD contours to a file in DXF format.
XLD contours to be written.
Name of the DXF file.
Copy a file to a new location.
File to be copied.
Target location.
Set the current working directory.
Name of current working directory to be set.
Get the current working directory.
Name of current working directory.
Delete an empty directory.
Name of directory to be deleted.
Make a directory.
Name of directory to be created.
List all files in a directory.
Name of directory to be listed.
Processing options. Default: "files"
Found files (and directories).
Delete a file.
File to be deleted.
Check whether file exists.
Name of file to be checked. Default: "/bin/cc"
Boolean value.
Read an iconic object.
Iconic object.
Name of file.
Write an iconic object.
Iconic object.
Name of file.
Deserialize a serialized iconic object.
Iconic object.
Handle of the serialized item.
Serialize an iconic object.
Iconic object.
Handle of the serialized item.
Deserialize a serialized image object.
Image object.
Handle of the serialized item.
Serialize an image object.
Image object.
Handle of the serialized item.
Deserialize a serialized region.
Region.
Handle of the serialized item.
Serialize a region.
Region.
Handle of the serialized item.
Write regions to a file.
Region of the images which are returned.
Name of region file. Default: "region.hobj"
Write images in graphic formats.
Input images.
Graphic format. Default: "tiff"
Fill gray value for pixels not belonging to the image domain (region). Default: 0
Name of image file.
Read images.
Image read.
Number of bytes for file header. Default: 0
Number of image columns of the filed image. Default: 512
Number of image lines of the filed image. Default: 512
Starting point of image area (line). Default: 0
Starting point of image area (column). Default: 0
Number of image columns of output image. Default: 512
Number of image lines of output image. Default: 512
Type of pixel values. Default: "byte"
Sequence of bits within one byte. Default: "MSBFirst"
Sequence of bytes within one 'short' unit. Default: "MSBFirst"
Data units within one image line (alignment). Default: "byte"
Number of images in the file. Default: 1
Name of input file.
Read binary images or HALCON regions.
Read region.
Name of the region to be read.
Read an image with different file formats.
Read image.
Name of the image to be read. Default: "printer_chip/printer_chip_01"
Open a file in text or binary format.
Name of file to be opened. Default: "standard"
Type of file access and, optionally, the string encoding. Default: "output"
File handle.
Write strings and numbers into a text file.
File handle.
Values to be written into the file. Default: "hallo"
Read a character line from a text file.
File handle.
Read line.
Reached end of file before any character was read.
Read a string from a text file.
File handle.
Read character sequence.
Reached end of file before any character was added to the output string.
Read one character from a text file.
File handle.
Read character, which can be multi-byte or the control string 'eof'.
Write a line break and clear the output buffer.
File handle.
Close a text file.
File handle.
This operator is inoperable. It had the following function: Close all open files.
Test whether contours or polygons are closed.
Contours or polygons to be tested.
Tuple with boolean numbers.
Return gray values of an image at the positions of an XLD contour.
Image whose gray values are to be accessed.
Input XLD contour with the coordinates of the positions.
Interpolation method. Default: "nearest_neighbor"
Gray values of the selected image coordinates.
Arbitrary geometric moments of contours or polygons treated as point clouds.
Contours or polygons to be examined.
Computation mode. Default: "unnormalized"
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
First index P of the desired moments M[P,Q]. Default: 1
Second index Q of the desired moments M[P,Q]. Default: 1
The computed moments.
Anisometry of contours or polygons treated as point clouds.
Contours or polygons to be examined.
Anisometry of the contours or polygons.
Parameters of the equivalent ellipse of contours or polygons treated as point clouds.
Contours or polygons to be examined.
Major radius.
Minor radius.
Angle between the major axis and the column axis (radians).
Orientation of contours or polygons treated as point clouds.
Contours or polygons to be examined.
Orientation of the contours or polygons (radians).
Geometric moments M20, M02, and M11 of contours or polygons treated as point clouds.
Contours or polygons to be examined.
Mixed second order moment.
Second order moment along the row axis.
Second order moment along the column axis.
Area and center of gravity (centroid) of contours and polygons treated as point clouds.
Point clouds to be examined in the form of contours or polygons.
Area of the point cloud.
Row coordinate of the centroid.
Column coordinate of the centroid.
Test XLD contours or polygons for self intersection.
Input contours or polygons.
Should the input contours or polygons be closed first? Default: "true"
1 for contours or polygons with self intersection and 0 otherwise.
Choose all contours or polygons containing a given point.
Contours or polygons to be examined.
All contours or polygons containing the test point.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
Test whether one or more contours or polygons enclose the given point(s).
Contours or polygons to be tested.
Row coordinates of the points to be tested.
Column coordinates of the points to be tested.
Tuple with boolean numbers.
Select contours or polygons using shape features.
Contours or polygons to be examined.
Contours or polygons fulfilling the condition(s).
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Orientation of contours or polygons.
Contours or polygons to be examined.
Orientation of the contours or polygons (radians).
Shape features derived from the ellipse parameters of contours or polygons.
Contours or polygons to be examined.
Anisometry of the contours or polygons.
Bulkiness of the contours or polygons.
Structure factor of the contours or polygons.
Shape factor for the compactness of contours or polygons.
Contours or polygons to be examined.
Compactness of the input contours or polygons.
Maximum distance between two contour or polygon points.
Contours or polygons to be examined.
Row coordinate of the first extreme point of the contours or polygons.
Column coordinate of the first extreme point of the contours or polygons.
Row coordinate of the second extreme point of the contour or polygons.
Column coordinate of the second extreme point of the contours or polygons.
Distance of the two extreme points of the contours or polygons.
Shape factor for the convexity of contours or polygons.
Contours or polygons to be examined.
Convexity of the input contours or polygons.
Shape factor for the circularity (similarity to a circle) of contours or polygons.
Contours or polygons to be examined.
Roundness of the input contours or polygons.
Parameters of the equivalent ellipse of contours or polygons.
Contours or polygons to be examined.
Major radius.
Minor radius.
Angle between the major axis and the x axis (radians).
Smallest enclosing rectangle with arbitrary orientation of contours or polygons.
Contours or polygons to be examined.
Row coordinate of the center point of the enclosing rectangle.
Column coordinate of the center point of the enclosing rectangle.
Orientation of the enclosing rectangle (radians).
First radius (half length) of the enclosing rectangle.
Second radius (half width) of the enclosing rectangle.
Enclosing rectangle parallel to the coordinate axes of contours or polygons.
Contours or polygons to be examined.
Row coordinate of upper left corner point of the enclosing rectangle.
Column coordinate of upper left corner point of the enclosing rectangle.
Row coordinate of lower right corner point of the enclosing rectangle.
Column coordinate of lower right corner point of the enclosing rectangle.
Smallest enclosing circle of contours or polygons.
Contours or polygons to be examined.
Row coordinate of the center of the enclosing circle.
Column coordinate of the center of the enclosing circle.
Radius of the enclosing circle.
Transform the shape of contours or polygons.
Contours or polygons to be transformed.
Transformed contours or polygons.
Type of transformation. Default: "convex"
Length of contours or polygons.
Contours or polygons to be examined.
Length of the contour or polygon.
Arbitrary geometric moments of contours or polygons.
Contours or polygons to be examined.
Computation mode. Default: "unnormalized"
Point order along the boundary. Default: "positive"
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
First index P of the desired moments M[P,Q]. Default: 1
Second index Q of the desired moments M[P,Q]. Default: 1
The computed moments.
Geometric moments M20, M02, and M11 of contours or polygons.
Contours or polygons to be examined.
Mixed second order moment.
Second order moment along the row axis.
Second order moment along the column axis.
Area and center of gravity (centroid) of contours and polygons.
Contours or polygons to be examined.
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
Point order along the boundary ('positive' or 'negative').
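The signed area underlying these values comes from the shoelace formula: its sign encodes the point order along the boundary, and the centroid follows from the same cross terms. A NumPy sketch for a closed polygon given as (row, column) points:

```python
import numpy as np

def area_center_polygon(rows, cols):
    """Signed area and centroid of a closed polygon (shoelace formula).
    The sign of the area reflects the point order along the boundary."""
    r = np.asarray(rows, dtype=float)
    c = np.asarray(cols, dtype=float)
    r2, c2 = np.roll(r, -1), np.roll(c, -1)
    cross = c * r2 - c2 * r
    area = 0.5 * np.sum(cross)
    cr = np.sum((r + r2) * cross) / (6.0 * area)
    cc = np.sum((c + c2) * cross) / (6.0 * area)
    return area, cr, cc

# A 2x2 square with corners (0,0), (0,2), (2,2), (2,0) in (row, col):
area, cr, cc = area_center_polygon([0, 0, 2, 2], [0, 2, 2, 0])

# Reversing the point order flips the sign of the area.
area_rev, _, _ = area_center_polygon([2, 2, 0, 0], [0, 2, 2, 0])
```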
Geometric moments of regions.
Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Geometric moments of regions.
Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Moment of 3rd order.
Geometric moments of regions.
Regions to be examined.
Moment of 3rd order (line-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (line-dependent).
Geometric moments of regions.
Regions to be examined.
Moment of 3rd order (line-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (line-dependent).
Smallest surrounding rectangle with any orientation.
Regions to be examined.
Line index of the center.
Column index of the center.
Orientation of the surrounding rectangle (radians).
First radius (half length) of the surrounding rectangle.
Second radius (half width) of the surrounding rectangle.
Surrounding rectangle parallel to the coordinate axes.
Regions to be examined.
Line index of upper left corner point.
Column index of upper left corner point.
Line index of lower right corner point.
Column index of lower right corner point.
Smallest surrounding circle of a region.
Regions to be examined.
Line index of the center.
Column index of the center.
Radius of the surrounding circle.
Choose regions having a certain relation to each other.
Regions to be examined.
Region compared to Regions.
Regions fulfilling the condition.
Shape features to be checked. Default: "covers"
Lower border of feature. Default: 50.0
Upper border of the feature. Default: 100.0
Calculate shape features of regions.
Regions to be examined.
Shape features to be calculated. Default: "area"
The calculated features.
Choose regions with the aid of shape features.
Regions to be examined.
Regions fulfilling the condition.
Shape features to be checked. Default: "area"
Linkage type of the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Characteristic values for runlength coding of regions.
Regions to be examined.
Number of runs.
Storage factor in relation to a square.
Mean number of runs per line.
Mean length of runs.
Number of bytes necessary for coding the region.
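The runlength features above can be illustrated with a pure-Python sketch that encodes a region row by row and derives the run statistics (an assumption-laden illustration, not the halcondotnet API; the exact definitions of the derived factors in HALCON may differ):

```python
# Run-length statistics of a region: number of runs, mean number of
# runs per occupied line, and mean run length, computed from a set of
# (row, col) pixels.

def runlength_stats(pixels):
    rows = {}
    for r, c in sorted(pixels):
        rows.setdefault(r, []).append(c)
    runs = []
    for cols in rows.values():
        start = prev = cols[0]
        for c in cols[1:]:
            if c != prev + 1:          # gap: close the current run
                runs.append(prev - start + 1)
                start = c
            prev = c
        runs.append(prev - start + 1)
    num_runs = len(runs)
    mean_runs_per_line = num_runs / len(rows)
    mean_length = sum(runs) / num_runs
    return num_runs, mean_runs_per_line, mean_length

# Row 0 has runs [0..2] and [5..6]; row 1 has the single run [0..0].
pixels = [(0, 0), (0, 1), (0, 2), (0, 5), (0, 6), (1, 0)]
num_runs, runs_per_line, mean_len = runlength_stats(pixels)
```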
Search direct neighbors.
Starting regions.
Comparative regions.
Maximal distance of regions. Default: 1
Indices of the found regions from Regions1.
Indices of the found regions from Regions2.
Geometric moments of regions.
Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Geometric moments of regions.
Regions to be examined.
Product of inertia of the axes through the center parallel to the coordinate axes.
Moment of 2nd order (line-dependent).
Moment of 2nd order (column-dependent).
Calculate the geometric moments of regions.
Input regions.
Product of inertia of the axes through the center parallel to the coordinate axes.
Moment of 2nd order (row-dependent).
Moment of 2nd order (column-dependent).
Length of the major axis of the input region.
Length of the minor axis of the input region.
Minimum distance between the contour pixels of two regions each.
Regions to be examined.
Regions to be examined.
Minimum distance between contours of the regions.
Line index on contour in Regions1.
Column index on contour in Regions1.
Line index on contour in Regions2.
Column index on contour in Regions2.
Minimum distance between two regions with the help of dilation.
Regions to be examined.
Regions to be examined.
Minimum distances of the regions.
Maximal distance between two boundary points of a region.
Regions to be examined.
Row index of the first extreme point.
Column index of the first extreme point.
Row index of the second extreme point.
Column index of the second extreme point.
Distance of the two extreme points.
Test if the region contains a given point.
Region(s) to be examined.
Row index of the test pixel(s). Default: 100
Column index of the test pixel(s). Default: 100
Boolean result value.
Index of all regions containing a given pixel.
Regions to be examined.
Line index of the test pixel. Default: 100
Column index of the test pixel. Default: 100
Index of the regions containing the test pixel.
Choose all regions containing a given pixel.
Regions to be examined.
All regions containing the test pixel.
Line index of the test pixel. Default: 100
Column index of the test pixel. Default: 100
Select regions of a given shape.
Input regions to be selected.
Regions with desired shape.
Shape features to be checked. Default: "max_area"
Similarity measure. Default: 70.0
Hamming distance between two regions using normalization.
Regions to be examined.
Comparative regions.
Type of normalization. Default: "center"
Hamming distance of two regions.
Similarity of two regions.
Hamming distance between two regions.
Regions to be examined.
Comparative regions.
Hamming distance of two regions.
Similarity of two regions.
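The Hamming distance above is the number of pixels in which the two regions differ, i.e. the area of their symmetric difference. A pure-Python sketch (the similarity normalization shown is one plausible choice and may not match HALCON's exact formula):

```python
# Hamming distance of two regions given as sets of (row, col) pixels,
# plus a similarity value normalized by the combined area.

def hamming_distance(region1, region2):
    a, b = set(region1), set(region2)
    distance = len(a ^ b)          # pixels in exactly one of the regions
    similarity = 1.0 - distance / (len(a) + len(b))
    return distance, similarity

distance, similarity = hamming_distance([(0, 0), (0, 1)], [(0, 1), (0, 2)])
```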
Shape features derived from the ellipse parameters.
Region(s) to be examined.
Shape feature (in case of a circle = 1.0).
Calculated shape feature.
Calculated shape feature.
Calculate the Euler number.
Region(s) to be examined.
Calculated Euler number.
Orientation of a region.
Region(s) to be examined.
Orientation of region (arc measure).
Calculate the parameters of the equivalent ellipse.
Input regions.
Main radius (normalized to the area).
Secondary radius (normalized to the area).
Angle between main radius and x-axis in radians.
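The equivalent-ellipse parameters above follow from the normalized second-order central moments. A sketch of the standard formulas (illustrative only; HALCON's angle convention, with the row axis pointing downward, may flip the sign of the returned angle):

```python
import math

# Equivalent-ellipse radii and orientation from normalized
# second-order central moments mu20 (row), mu02 (column), mu11 (mixed).

def elliptic_axis(mu20, mu02, mu11):
    d = math.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
    ra = math.sqrt(2.0 * (mu20 + mu02 + d))           # major radius
    rb = math.sqrt(2.0 * max(mu20 + mu02 - d, 0.0))   # minor radius
    phi = 0.5 * math.atan2(2.0 * mu11, mu02 - mu20)   # vs. column axis
    return ra, rb, phi

# A horizontal 5-pixel segment has mu20 = 0, mu02 = 2, mu11 = 0:
ra, rb, phi = elliptic_axis(0.0, 2.0, 0.0)
```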
Pose relation of regions.
Starting regions.
Comparative regions.
Desired neighboring relation. Default: "left"
Indices in the input tuples (Regions1 or Regions2), respectively.
Indices in the input tuples (Regions1 or Regions2), respectively.
Pose relation of regions with regard to two imaginary cutting lines.
Starting regions.
Comparative regions.
Percentage of the area of the comparative region which must be located left/right or above/below the cutting lines. Default: 50
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
Horizontal pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
Vertical pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
Shape factor for the convexity of a region.
Region(s) to be examined.
Convexity of the input region(s).
Contour length of a region.
Region(s) to be examined.
Contour length of the input region(s).
Number of connected components and holes.
Region(s) to be examined.
Number of connected components of a region.
Number of holes of a region.
Shape factor for the rectangularity of a region.
Region(s) to be examined.
Rectangularity of the input region(s).
Shape factor for the compactness of a region.
Region(s) to be examined.
Compactness of the input region(s).
Shape factor for the circularity (similarity to a circle) of a region.
Region(s) to be examined.
Circularity of the input region(s).
Compute the area of holes of regions.
Region(s) to be examined.
Area(s) of holes of the region(s).
Area and center of regions.
Region(s) to be examined.
Area of the region.
Line index of the center.
Column index of the center.
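The area and center of gravity above are the pixel count and the mean row/column coordinates of the region. A minimal pure-Python sketch (hypothetical names, not the halcondotnet API):

```python
# Area (pixel count) and center of gravity of a region given as a
# list of (row, col) pixel coordinates.

def area_center(pixels):
    area = len(pixels)
    row = sum(r for r, _ in pixels) / area
    col = sum(c for _, c in pixels) / area
    return area, row, col

area, row, col = area_center([(0, 0), (0, 2), (2, 0), (2, 2)])
```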
Distribution of runs needed for runlength encoding of a region.
Region to be examined.
Length distribution of the region (foreground).
Length distribution of the background.
Shape factors from contour.
Region(s) to be examined.
Mean distance from the center.
Standard deviation of Distance.
Shape factor for roundness.
Number of polygon sides.
Largest inner rectangle of a region.
Region to be examined.
Row coordinate of the upper left corner point.
Column coordinate of the upper left corner point.
Row coordinate of the lower right corner point.
Column coordinate of the lower right corner point.
Largest inner circle of a region.
Regions to be examined.
Line index of the center.
Column index of the center.
Radius of the inner circle.
Select the longest input lines.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
(Maximum) desired number of output lines. Default: 10
Row coordinates of the starting points of the output lines.
Column coordinates of the starting points of the output lines.
Row coordinates of the ending points of the output lines.
Column coordinates of the ending points of the output lines.
Partition lines according to various criteria.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Features to be used for selection.
Desired combination of the features.
Lower limits of the features or 'min'. Default: "min"
Upper limits of the features or 'max'. Default: "max"
Row coordinates of the starting points of the lines fulfilling the conditions.
Column coordinates of the starting points of the lines fulfilling the conditions.
Row coordinates of the ending points of the lines fulfilling the conditions.
Column coordinates of the ending points of the lines fulfilling the conditions.
Row coordinates of the starting points of the lines not fulfilling the conditions.
Column coordinates of the starting points of the lines not fulfilling the conditions.
Row coordinates of the ending points of the lines not fulfilling the conditions.
Column coordinates of the ending points of the lines not fulfilling the conditions.
Select lines according to various criteria.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Features to be used for selection. Default: "length"
Desired combination of the features. Default: "and"
Lower limits of the features or 'min'. Default: "min"
Upper limits of the features or 'max'. Default: "max"
Row coordinates of the starting points of the output lines.
Column coordinates of the starting points of the output lines.
Row coordinates of the ending points of the output lines.
Column coordinates of the ending points of the output lines.
Calculate the center of gravity, length, and orientation of a line.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Row coordinates of the centers of gravity of the input lines.
Column coordinates of the centers of gravity of the input lines.
Euclidean length of the input lines.
Orientation of the input lines.
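The center of gravity, length, and orientation of a line segment above follow directly from its end points. A pure-Python sketch (illustrative; HALCON additionally normalizes orientations to a fixed interval, which is omitted here):

```python
import math

# Center, Euclidean length, and orientation of a line segment given
# by its start and end points in (row, col) image coordinates.

def line_position(row_a, col_a, row_b, col_b):
    row_center = (row_a + row_b) / 2.0
    col_center = (col_a + col_b) / 2.0
    length = math.hypot(row_b - row_a, col_b - col_a)
    # Rows grow downward, so negate the row difference to get a
    # mathematically positive angle against the column axis.
    phi = math.atan2(-(row_b - row_a), col_b - col_a)
    return row_center, col_center, length, phi

rc, cc, length, phi = line_position(0.0, 0.0, 0.0, 4.0)
```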
Calculate the orientation of lines.
Row coordinates of the starting points of the input lines.
Column coordinates of the starting points of the input lines.
Row coordinates of the ending points of the input lines.
Column coordinates of the ending points of the input lines.
Orientation of the input lines.
Approximate a contour by arcs and lines.
Row of the contour. Default: 32
Column of the contour. Default: 32
Row of the center of an arc.
Column of the center of an arc.
Angle of an arc.
Row of the starting point of an arc.
Column of the starting point of an arc.
Row of the starting point of a line segment.
Column of the starting point of a line segment.
Row of the ending point of a line segment.
Column of the ending point of a line segment.
Sequence of line (value 0) and arc segments (value 1).
Approximate a contour by arcs and lines.
Row of the contour. Default: 32
Column of the contour. Default: 32
Minimum width of Gauss operator for coordinate smoothing (> 0.4). Default: 0.5
Maximum width of Gauss operator for coordinate smoothing (> 0.4). Default: 2.4
Minimum threshold value of the curvature for accepting a corner (relative to the largest curvature present). Default: 0.3
Maximum threshold value of the curvature for accepting a corner (relative to the largest curvature present). Default: 0.9
Step width for threshold increase. Default: 0.2
Minimum width of Gauss operator for smoothing the curvature function (> 0.4). Default: 0.5
Maximum width of Gauss operator for smoothing the curvature function. Default: 2.4
Minimum width of curve area for curvature determination (> 0.4). Default: 2
Maximum width of curve area for curvature determination. Default: 12
Weighting factor for approximation precision. Default: 1.0
Weighting factor for large segments. Default: 1.0
Weighting factor for small segments. Default: 1.0
Row of the center of an arc.
Column of the center of an arc.
Angle of an arc.
Row of the starting point of an arc.
Column of the starting point of an arc.
Row of the starting point of a line segment.
Column of the starting point of a line segment.
Row of the ending point of a line segment.
Column of the ending point of a line segment.
Sequence of line (value 0) and arc segments (value 1).
Calculate gray value moments and approximation by a first order surface (plane).
Regions to be checked.
Corresponding gray values.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Alpha of the approximating surface.
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Calculate gray value moments and approximation by a second order surface.
Regions to be checked.
Corresponding gray values.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Alpha of the approximating surface.
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Delta of the approximating surface.
Parameter Epsilon of the approximating surface.
Parameter Zeta of the approximating surface.
Create a curved gray surface with second order polynomial.
Created image with new image matrix.
Pixel type. Default: "byte"
Second order coefficient in vertical direction. Default: 1.0
Second order coefficient in horizontal direction. Default: 1.0
Mixed second order coefficient. Default: 1.0
First order coefficient in vertical direction. Default: 1.0
First order coefficient in horizontal direction. Default: 1.0
Zero order coefficient. Default: 1.0
Row coordinate of the reference point of the surface. Default: 256.0
Column coordinate of the reference point of the surface. Default: 256.0
Width of image. Default: 512
Height of image. Default: 512
Create a tilted gray surface with first order polynomial.
Created image with new image matrix.
Pixel type. Default: "byte"
First order coefficient in vertical direction. Default: 1.0
First order coefficient in horizontal direction. Default: 1.0
Zero order coefficient. Default: 1.0
Row coordinate of the reference point of the surface. Default: 256.0
Column coordinate of the reference point of the surface. Default: 256.0
Width of image. Default: 512
Height of image. Default: 512
Determine a histogram of features along all threshold values.
Region in which the features are to be examined.
Gray value image.
Feature to be examined. Default: "convexity"
Row of the pixel which the region must contain. Default: 256
Column of the pixel which the region must contain. Default: 256
Absolute distribution of the feature.
Relative distribution of the feature.
Determine a histogram of features along all threshold values.
Region in which the features are to be examined.
Gray value image.
Feature to be examined. Default: "connected_components"
Absolute distribution of the feature.
Relative distribution of the feature.
Calculates gray value features for a set of regions.
Regions to be examined.
Gray value image.
Names of the features. Default: "mean"
Values of the features.
Select regions based on gray value features.
Regions to be examined.
Gray value image.
Regions having features within the limits.
Names of the features. Default: "mean"
Logical connection of features. Default: "and"
Lower limit(s) of features. Default: 128.0
Upper limit(s) of features. Default: 255.0
Determine the minimum and maximum gray values within regions.
Regions, the features of which are to be calculated.
Gray value image.
Percentage below (above) the absolute maximum (minimum). Default: 0
"Minimum" gray value.
"Maximum" gray value.
Difference between Max and Min.
Calculate the mean and deviation of gray values.
Regions in which the features are calculated.
Gray value image.
Mean gray value of a region.
Deviation of gray values within a region.
Calculate the gray value distribution of a single channel image within a certain gray value range.
Region in which the histogram is to be calculated.
Input image.
Minimum gray value. Default: 0
Maximum gray value. Default: 255
Number of bins. Default: 256
Histogram to be calculated.
Bin size.
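The ranged histogram above can be sketched in pure Python over a flat list of gray values inside the region of interest (illustrative; the bin-size convention shown assumes integer gray values and may differ in detail from HALCON's):

```python
# Gray-value histogram over [vmin, vmax] with a fixed number of bins.
# Values outside the range are ignored.

def gray_histo_range(values, vmin, vmax, num_bins):
    bin_size = (vmax - vmin + 1) / num_bins
    histo = [0] * num_bins
    for v in values:
        if vmin <= v <= vmax:
            idx = min(int((v - vmin) / bin_size), num_bins - 1)
            histo[idx] += 1
    return histo, bin_size

histo, bin_size = gray_histo_range([0, 1, 2, 3, 254, 255], 0, 255, 2)
```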
Calculate the histogram of two-channel gray value images.
Region in which the histogram is to be calculated.
Channel 1.
Channel 2.
Histogram to be calculated.
Calculate the gray value distribution.
Region in which the histogram is to be calculated.
Image the gray value distribution of which is to be calculated.
Quantization of the gray values. Default: 1.0
Absolute frequencies of the gray values.
Calculate the gray value distribution.
Region in which the histogram is to be calculated.
Image the gray value distribution of which is to be calculated.
Absolute frequencies of the gray values.
Frequencies, normalized to the area of the region.
Determine the entropy and anisotropy of images.
Regions where the features are to be determined.
Gray value image.
Information content (entropy) of the gray values.
Measure of the symmetry of gray value distribution.
Calculate gray value features from a co-occurrence matrix.
Co-occurrence matrix.
Homogeneity of the gray values.
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Calculate a co-occurrence matrix and derive gray value features thereof.
Region to be examined.
Corresponding gray values.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction in which the matrix is to be calculated. Default: 0
Gray value energy.
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Calculate the co-occurrence matrix of a region in an image.
Region to be checked.
Image providing the gray values.
Co-occurrence matrix (matrices).
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction of neighbor relation. Default: 0
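The co-occurrence matrix above counts how often pairs of quantized gray values occur at neighboring pixels in the chosen direction. A pure-Python sketch for direction 0 (horizontal neighbors), together with the derived energy feature (illustrative; symmetrization and normalization conventions may differ from HALCON's):

```python
# Symmetric, normalized co-occurrence matrix for horizontally
# adjacent pixels of a 2D list of quantized gray values.

def cooc_matrix(img, num_gray):
    m = [[0.0] * num_gray for _ in range(num_gray)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1          # count both orders so the
            m[b][a] += 1          # matrix stays symmetric
            pairs += 2
    for i in range(num_gray):
        for j in range(num_gray):
            m[i][j] /= pairs      # relative frequencies
    return m

def energy(m):
    # Gray value energy: sum of squared matrix entries.
    return sum(v * v for row in m for v in row)

img = [[0, 0, 1],
       [1, 1, 0]]
m = cooc_matrix(img, 2)
```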
Calculate gray value moments and approximation by a plane.
Regions to be checked.
Corresponding gray values.
Mixed moments along a line.
Mixed moments along a column.
Parameter Alpha of the approximating plane.
Parameter Beta of the approximating plane.
Mean gray value.
Calculate the deviation of the gray values from the approximating image plane.
Regions, of which the plane deviation is to be calculated.
Gray value image.
Deviation of the gray values within a region.
Compute the orientation and major axes of a region in a gray value image.
Region(s) to be examined.
Gray value image.
Major axis of the region.
Minor axis of the region.
Angle enclosed by the major axis and the x-axis.
Compute the area and center of gravity of a region in a gray value image.
Region(s) to be examined.
Gray value image.
Gray value volume of the region.
Row coordinate of the gray value center of gravity.
Column coordinate of the gray value center of gravity.
Calculate horizontal and vertical gray-value projections.
Region to be processed.
Gray values for projections.
Method to compute the projections. Default: "simple"
Horizontal projection.
Vertical projection.
Access iconic objects that were created during the search for 2D data code symbols.
Objects that are created as intermediate results during the detection or evaluation of 2D data codes.
Handle of the 2D data code model.
Handle of the 2D data code candidate. Either an integer (usually the ResultHandle of find_data_code_2d) or a string representing a group of candidates. Default: "all_candidates"
Name of the iconic object to return. Default: "candidate_xld"
Get the alphanumerical results that were accumulated during the search for 2D data code symbols.
Handle of the 2D data code model.
Handle of the 2D data code candidate. Either an integer (usually the ResultHandle of find_data_code_2d) or a string representing a group of candidates. Default: "all_candidates"
Names of the results of the 2D data code to return. Default: "status"
List with the results.
Detect and read 2D data code symbols in an image or train the 2D data code model.
Input image. If the image has a reduced domain, the data code search is reduced to that domain. This usually reduces the runtime of the operator. However, if the data code is not fully inside the domain, it might not be found correctly. In rare cases, data codes may be found outside the domain. If these results are undesirable, they have to be eliminated subsequently.
XLD contours that surround the successfully decoded data code symbols. The order of the contour points reflects the orientation of the detected symbols. The contours begin in the top left corner (see 'orientation' at get_data_code_2d_results) and continue clockwise.
Handle of the 2D data code model.
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Handles of all successfully decoded 2D data code symbols.
Decoded data strings of all detected 2D data code symbols in the image.
Set selected parameters of the 2D data code model.
Handle of the 2D data code model.
Names of the generic parameters that shall be adjusted for the 2D data code. Default: "polarity"
Values of the generic parameters that are adjusted for the 2D data code. Default: "light_on_dark"
Get one or several parameters that describe the 2D data code model.
Handle of the 2D data code model.
Names of the generic parameters that are to be queried for the 2D data code model. Default: "polarity"
Values of the generic parameters.
Get the names of the generic parameters or objects of a given 2D data code model that can be used in the other 2D data code operators.
Handle of the 2D data code model.
Name of the parameter group. Default: "get_result_params"
List containing the names of the supported generic parameters.
Deserialize a serialized 2D data code model.
Handle of the serialized item.
Handle of the 2D data code model.
Serialize a 2D data code model.
Handle of the 2D data code model.
Handle of the serialized item.
Read a 2D data code model from a file and create a new model.
Name of the 2D data code model file. Default: "data_code_model.dcm"
Handle of the created 2D data code model.
Writes a 2D data code model into a file.
Handle of the 2D data code model.
Name of the 2D data code model file. Default: "data_code_model.dcm"
This operator is inoperable. It had the following function: Delete all 2D data code models and free the allocated memory.
Delete a 2D data code model and free the allocated memory.
Handle of the 2D data code model.
Create a model of a 2D data code class.
Type of the 2D data code. Default: "Data Matrix ECC 200"
Names of the generic parameters that can be adjusted for the 2D data code model. Default: []
Values of the generic parameters that can be adjusted for the 2D data code model. Default: []
Handle for using and accessing the 2D data code model.
Deserialize serialized training data for classifiers.
Handle of the serialized item.
Handle of the training data.
Serialize training data for classifiers.
Handle of the training data.
Handle of the serialized item.
Read the training data for classifiers from a file.
File name of the training data.
Handle of the training data.
Save the training data for classifiers in a file.
Handle of the training data.
Name of the file in which the training data will be written.
Select certain features from training data to create training data containing less features.
Handle of the training data.
Indices or names to select the subfeatures or columns.
Handle of the reduced training data.
Define subfeatures in training data.
Handle of the training data that should be partitioned into subfeatures.
Length of the subfeatures.
Names of the subfeatures.
Get the training data of a Gaussian Mixture Model (GMM).
Handle of a GMM that contains training data.
Handle of the training data of the classifier.
Add training data to a Gaussian Mixture Model (GMM).
Handle of a GMM which receives the training data.
Handle of training data for a classifier.
Get the training data of a multilayer perceptron (MLP).
Handle of an MLP that contains training data.
Handle of the training data of the classifier.
Add training data to a multilayer perceptron (MLP).
MLP handle which receives the training data.
Training data for a classifier.
Get the training data of a k-nearest neighbors (k-NN) classifier.
Handle of the k-NN classifier that contains training data.
Handle of the training data of the classifier.
Add training data to a k-nearest neighbors (k-NN) classifier.
Handle of a k-NN which receives the training data.
Training data for a classifier.
Get the training data of a support vector machine (SVM).
Handle of an SVM that contains training data.
Handle of the training data of the classifier.
Add training data to a support vector machine (SVM).
Handle of an SVM which receives the training data.
Training data for a classifier.
Return the number of training samples stored in the training data.
Handle of training data.
Number of stored training samples.
Return a training sample from training data.
Handle of training data for a classifier.
Number of stored training sample.
Feature vector of the training sample.
Class of the training sample.
This operator is inoperable. It had the following function: Clear all training data for classifiers.
Clears training data for classifiers.
Handle of training data for a classifier.
Add a training sample to training data.
Handle of the training data.
The order of the feature vector. Default: "row"
Feature vector of the training sample.
Class of the training sample.
Create a handle for training data for classifiers.
Number of dimensions of the feature vector. Default: 10
Handle of the training data.
Selects an optimal combination of features to classify the provided data.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
A trained MLP classifier using only the selected features.
The selected feature set, contains indices or names.
The achieved score using two-fold cross-validation.
Selects an optimal combination of features to classify the provided data.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
A trained SVM classifier using only the selected features.
The selected feature set, contains indices.
The achieved score using two-fold cross-validation.
Selects an optimal combination from a set of features to classify the provided data.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the classifier. Default: []
Values of generic parameters to configure the classifier. Default: []
A trained GMM classifier using only the selected features.
The selected feature set, contains indices or names.
The achieved score using two-fold cross-validation.
Selects an optimal subset from a set of features to solve a certain classification problem.
Handle of the training data.
Method to perform the selection. Default: "greedy"
Names of generic parameters to configure the selection process and the classifier. Default: []
Values of generic parameters to configure the selection process and the classifier. Default: []
A trained k-NN classifier using only the selected features.
The selected feature set, contains indices or names.
The achieved score using two-fold cross-validation.
This operator is inoperable. It had the following function: Clear all k-NN classifiers.
Clear a k-NN classifier.
Handle of the k-NN classifier.
Return the number of training samples stored in the training data of a k-nearest neighbors (k-NN) classifier.
Handle of the k-NN classifier.
Number of stored training samples.
Return a training sample from the training data of a k-nearest neighbors (k-NN) classifier.
Handle of the k-NN classifier.
Index of the training sample.
Feature vector of the training sample.
Class of the training sample.
Deserialize a serialized k-NN classifier.
Handle of the serialized item.
Handle of the k-NN classifier.
Serialize a k-NN classifier.
Handle of the k-NN classifier.
Handle of the serialized item.
Read the k-NN classifier from a file.
File name of the classifier.
Handle of the k-NN classifier.
Save the k-NN classifier in a file.
Handle of the k-NN classifier.
Name of the file in which the classifier will be written.
Get parameters of a k-NN classification.
Handle of the k-NN classifier.
Names of the parameters that can be read from the k-NN classifier. Default: ["method","k"]
Values of the selected parameters.
Set parameters for k-NN classification.
Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the k-NN classifier. Default: ["method","k","max_num_classes"]
Values of the generic parameters that can be adjusted for the k-NN classifier. Default: ["classes_distance",5,1]
Search for the next neighbors for a given feature vector.
Handle of the k-NN classifier.
Features that should be classified.
The classification result, either class IDs or sample indices.
A rating for the results. This value contains either a distance, a frequency or a weighted frequency.
Creates the search trees for a k-NN classifier.
Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Values of the generic parameters that can be adjusted for the k-NN classifier creation. Default: []
Add a sample to a k-nearest neighbors (k-NN) classifier.
Handle of the k-NN classifier.
List of features to add.
Class IDs of the features.
Create a k-nearest neighbors (k-NN) classifier.
Number of dimensions of the feature. Default: 10
Handle of the k-NN classifier.
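The create/add/classify workflow of the k-NN operators above can be illustrated with a minimal pure-Python nearest-neighbor classifier (k = 1, Euclidean distance; class and method names are hypothetical, this is not the HALCON API):

```python
import math

# Tiny 1-nearest-neighbor classifier: stores labeled feature vectors
# and classifies a query by the class of its closest stored sample.

class TinyKnn:
    def __init__(self):
        self.samples = []                  # (feature_vector, class_id)

    def add_sample(self, features, class_id):
        self.samples.append((list(features), class_id))

    def classify(self, features):
        def dist(sample):
            return math.dist(sample[0], features)
        best = min(self.samples, key=dist)
        return best[1], dist(best)         # class id and distance rating

knn = TinyKnn()
knn.add_sample([0.0, 0.0], 0)
knn.add_sample([10.0, 10.0], 1)
class_id, rating = knn.classify([1.0, 1.0])
```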
This operator is inoperable. It had the following function: Clear all look-up table classifiers.
Clear a look-up table classifier.
Handle of the LUT classifier.
Create a look-up table using a k-nearest neighbors (k-NN) classifier to classify byte images.
Handle of the k-NN classifier.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Create a look-up table using a Gaussian Mixture Model to classify byte images.
GMM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Create a look-up table using a support vector machine (SVM) to classify byte images.
SVM handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
Create a look-up table using a multilayer perceptron (MLP) to classify byte images.
MLP handle.
Names of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Values of the generic parameters that can be adjusted for the LUT classifier creation. Default: []
Handle of the LUT classifier.
This operator is inoperable. It had the following function: Clear all Gaussian Mixture Models.
Clear a Gaussian Mixture Model.
GMM handle.
Clear the training data of a Gaussian Mixture Model.
GMM handle.
Deserialize a serialized Gaussian Mixture Model.
Handle of the serialized item.
GMM handle.
Serialize a Gaussian Mixture Model (GMM).
GMM handle.
Handle of the serialized item.
Read a Gaussian Mixture Model from a file.
File name.
GMM handle.
Write a Gaussian Mixture Model to a file.
GMM handle.
File name.
Read the training data of a Gaussian Mixture Model from a file.
GMM handle.
File name.
Write the training data of a Gaussian Mixture Model to a file.
GMM handle.
File name.
Calculate the class of a feature vector by a Gaussian Mixture Model.
GMM handle.
Feature vector.
Number of best classes to determine. Default: 1
Result of classifying the feature vector with the GMM.
A-posteriori probability of the classes.
Probability density of the feature vector.
Normalized k-sigma-probability for the feature vector.
Evaluate a feature vector by a Gaussian Mixture Model.
GMM handle.
Feature vector.
A-posteriori probability of the classes.
Probability density of the feature vector.
Normalized k-sigma-probability for the feature vector.
Train a Gaussian Mixture Model.
GMM handle.
Maximum number of iterations of the expectation maximization algorithm Default: 100
Threshold for relative change of the error for the expectation maximization algorithm to terminate. Default: 0.001
Mode to determine the a-priori probabilities of the classes Default: "training"
Regularization value for preventing covariance matrix singularity. Default: 0.0001
Number of found centers per class.
Number of executed iterations per class.
Compute the information content of the preprocessed feature vectors of a GMM.
GMM handle.
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Relative information content of the transformed feature vectors.
Cumulative information content of the transformed feature vectors.
Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
GMM handle.
Number of stored training samples.
Return a training sample from the training data of a Gaussian Mixture Model (GMM).
GMM handle.
Index of the stored training sample.
Feature vector of the training sample.
Class of the training sample.
Add a training sample to the training data of a Gaussian Mixture Model.
GMM handle.
Feature vector of the training sample to be stored.
Class of the training sample to be stored.
Standard deviation of the Gaussian noise added to the training data. Default: 0.0
Return the parameters of a Gaussian Mixture Model.
GMM handle.
Number of dimensions of the feature space.
Number of classes of the GMM.
Minimum number of centers per GMM class.
Maximum number of centers per GMM class.
Type of the covariance matrices.
Create a Gaussian Mixture Model for classification.
Number of dimensions of the feature space. Default: 3
Number of classes of the GMM. Default: 5
Number of centers per class. Default: 1
Type of the covariance matrices. Default: "spherical"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the GMM with random values. Default: 42
GMM handle.
This operator is inoperable. It had the following function: Clear all support vector machines.
Clear a support vector machine.
SVM handle.
Clear the training data of a support vector machine.
SVM handle.
Deserialize a serialized support vector machine (SVM).
Handle of the serialized item.
SVM handle.
Serialize a support vector machine (SVM).
SVM handle.
Handle of the serialized item.
Read a support vector machine from a file.
File name.
SVM handle.
Write a support vector machine to a file.
SVM handle.
File name.
Read the training data of a support vector machine from a file.
SVM handle.
File name.
Write the training data of a support vector machine to a file.
SVM handle.
File name.
Evaluate a feature vector by a support vector machine.
SVM handle.
Feature vector.
Result of evaluating the feature vector with the SVM.
Classify a feature vector by a support vector machine.
SVM handle.
Feature vector.
Number of best classes to determine. Default: 1
Result of classifying the feature vector with the SVM.
Approximate a trained support vector machine by a reduced support vector machine for faster classification.
Original SVM handle.
Type of postprocessing to reduce the number of SVs. Default: "bottom_up"
Minimum number of remaining SVs. Default: 2
Maximum allowed error of reduction. Default: 0.001
Handle of the reduced SVM.
Train a support vector machine.
SVM handle.
Stop parameter for training. Default: 0.001
Mode of training. For normal operation: 'default'. If SVs already included in the SVM should be used for training: 'add_sv_to_train_set'. For alpha seeding: the respective SVM handle. Default: "default"
Compute the information content of the preprocessed feature vectors of a support vector machine.
SVM handle.
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Relative information content of the transformed feature vectors.
Cumulative information content of the transformed feature vectors.
Return the number of support vectors of a support vector machine.
SVM handle.
Total number of support vectors.
Number of SVs of each sub-SVM.
Return the index of a support vector from a trained support vector machine.
SVM handle.
Index of the stored support vector.
Index of the support vector in the training set.
Return the number of training samples stored in the training data of a support vector machine.
SVM handle.
Number of stored training samples.
Return a training sample from the training data of a support vector machine.
SVM handle.
Number of the stored training sample.
Feature vector of the training sample.
Target vector of the training sample.
Add a training sample to the training data of a support vector machine.
SVM handle.
Feature vector of the training sample to be stored.
Class of the training sample to be stored.
Return the parameters of a support vector machine.
SVM handle.
Number of input variables (features) of the SVM.
The kernel type.
Additional parameter for the kernel.
Regularization constant of the SVM.
Number of classes of the test data.
The mode of the SVM.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').
Create a support vector machine for pattern classification.
Number of input variables (features) of the SVM. Default: 10
The kernel type. Default: "rbf"
Additional parameter for the kernel function; for the RBF kernel, the value of gamma. Default: 0.02
Regularization constant of the SVM. Default: 0.05
Number of classes. Default: 5
The mode of the SVM. Default: "one-versus-one"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
SVM handle.
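To make the roles of KernelType, KernelParam, and the support vectors concrete, here is a Python sketch of a two-class SVM decision function with an RBF kernel. This illustrates the underlying math only; it is not how halcondotnet evaluates an SVM, and all names are illustrative:

```python
import math

def rbf_svm_decision(x, support_vectors, coeffs, bias, gamma):
    """Two-class SVM decision value: sum_i coeff_i * K(x, sv_i) + bias,
    with the RBF kernel K(a, b) = exp(-gamma * ||a - b||^2).
    coeff_i = alpha_i * y_i for support vector i; gamma plays the role
    of KernelParam for the 'rbf' kernel."""
    total = bias
    for sv, c in zip(support_vectors, coeffs):
        sq = sum((a - b) ** 2 for a, b in zip(x, sv))
        total += c * math.exp(-gamma * sq)
    return total  # the sign gives the predicted class
```

For more than two classes, modes like 'one-versus-one' combine several such binary decision functions.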
This operator is inoperable. It had the following function: Clear all multilayer perceptrons.
Clear a multilayer perceptron.
MLP handle.
Clear the training data of a multilayer perceptron.
MLP handle.
Deserialize a serialized multilayer perceptron.
Handle of the serialized item.
MLP handle.
Serialize a multilayer perceptron (MLP).
MLP handle.
Handle of the serialized item.
Read a multilayer perceptron from a file.
File name.
MLP handle.
Write a multilayer perceptron to a file.
MLP handle.
File name.
Read the training data of a multilayer perceptron from a file.
MLP handle.
File name.
Write the training data of a multilayer perceptron to a file.
MLP handle.
File name.
Calculate the class of a feature vector by a multilayer perceptron.
MLP handle.
Feature vector.
Number of best classes to determine. Default: 1
Result of classifying the feature vector with the MLP.
Confidence(s) of the class(es) of the feature vector.
Calculate the evaluation of a feature vector by a multilayer perceptron.
MLP handle.
Feature vector.
Result of evaluating the feature vector with the MLP.
Train a multilayer perceptron.
MLP handle.
Maximum number of iterations of the optimization algorithm. Default: 200
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm. Default: 1.0
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the optimization algorithm. Default: 0.01
Mean error of the MLP on the training data.
Mean error of the MLP on the training data as a function of the number of iterations of the optimization algorithm.
Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
MLP handle.
Type of preprocessing used to transform the feature vectors. Default: "principal_components"
Relative information content of the transformed feature vectors.
Cumulative information content of the transformed feature vectors.
Return the number of training samples stored in the training data of a multilayer perceptron.
MLP handle.
Number of stored training samples.
Return a training sample from the training data of a multilayer perceptron.
MLP handle.
Number of the stored training sample.
Feature vector of the training sample.
Target vector of the training sample.
Get the parameters of a rejection class.
MLP handle.
Names of the generic parameters to return. Default: "sampling_strategy"
Values of the generic parameters.
Set the parameters of a rejection class.
MLP handle.
Names of the generic parameters. Default: "sampling_strategy"
Values of the generic parameters. Default: "hyperbox_around_all_classes"
Add a training sample to the training data of a multilayer perceptron.
MLP handle.
Feature vector of the training sample to be stored.
Class or target vector of the training sample to be stored.
Return the regularization parameters of a multilayer perceptron.
MLP handle.
Name of the regularization parameter to return. Default: "weight_prior"
Value of the regularization parameter.
Set the regularization parameters of a multilayer perceptron.
MLP handle.
Name of the regularization parameter to set. Default: "weight_prior"
Value of the regularization parameter. Default: 1.0
Return the parameters of a multilayer perceptron.
MLP handle.
Number of input variables (features) of the MLP.
Number of hidden units of the MLP.
Number of output variables (classes) of the MLP.
Type of the activation function in the output layer of the MLP.
Type of preprocessing used to transform the feature vectors.
Preprocessing parameter: Number of transformed features.
Create a multilayer perceptron for classification or regression.
Number of input variables (features) of the MLP. Default: 20
Number of hidden units of the MLP. Default: 10
Number of output variables (classes) of the MLP. Default: 5
Type of the activation function in the output layer of the MLP. Default: "softmax"
Type of preprocessing used to transform the feature vectors. Default: "normalization"
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization'). Default: 10
Seed value of the random number generator that is used to initialize the MLP with random values. Default: 42
MLP handle.
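The parameters NumInput, NumHidden, NumOutput, and OutputFunction = 'softmax' describe a one-hidden-layer network. A Python sketch of such a forward pass (the underlying math, not the halcondotnet API; weight layout is an assumption):

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP: tanh hidden units, softmax outputs.
    w1: NumHidden x NumInput weights, w2: NumOutput x NumHidden weights."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    act = [sum(w * h for w, h in zip(row, hidden)) + b
           for row, b in zip(w2, b2)]
    m = max(act)                       # subtract max for numerical stability
    exps = [math.exp(a - m) for a in act]
    s = sum(exps)
    return [e / s for e in exps]       # class confidences summing to 1
```

The softmax outputs are what classify_class_mlp reports as per-class confidences.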
Deserialize a serialized classifier.
Handle of the classifier.
Handle of the serialized item.
Serialize a classifier.
Handle of the classifier.
Handle of the serialized item.
Save a classifier in a file.
Handle of the classifier.
Name of the file which contains the written data.
Set system parameters for classification.
Handle of the classifier.
Name of the wanted parameter. Default: "split_error"
Value of the parameter. Default: 0.1
Read a training data set from a file.
Filename of the data set to train. Default: "sampset1"
Identification of the data set to train.
Read a classifier from a file.
Handle of the classifier.
Filename of the classifier.
Train the classifier with one data set.
Handle of the classifier.
Number of the data set to train.
Name of the protocol file. Default: "training_prot"
Number of arrays of attributes to learn. Default: 500
Classification error for termination. Default: 0.05
Error during the assignment. Default: 100
Train the classifier.
Handle of the classifier.
Array of attributes to learn. Default: [1.0,1.5,2.0]
Class to which the array has to be assigned. Default: 1
Get information about the current parameter.
Handle of the classifier.
Name of the system parameter. Default: "split_error"
Value of the system parameter.
Free memory of a data set.
Number of the data set.
Destroy the classifier.
Handle of the classifier.
Create a new classifier.
Handle of the classifier.
Describe the classes of a box classifier.
Handle of the classifier.
Highest dimension for output. Default: 3
Indices of the classes.
Indices of the boxes.
Lower bounds of the boxes (for each dimension).
Upper bounds of the boxes (for each dimension).
Number of training samples that were used to define this box (for each dimension).
Number of training samples that were assigned incorrectly to the box.
Classify a set of arrays.
Handle of the classifier.
Key of the test data.
Error during the assignment.
Classify a tuple of attributes with rejection class.
Handle of the classifier.
Array of attributes which has to be classified. Default: 1.0
Number of the class, to which the array of attributes had been assigned or -1 for the rejection class.
Classify a tuple of attributes.
Handle of the classifier.
Array of attributes which has to be classified. Default: 1.0
Number of the class to which the array of attributes had been assigned.
This operator is inoperable. It had the following function: Destroy all classifiers.
Convert image maps into other map types.
Input map.
Converted map.
Type of MapConverted. Default: "coord_map_sub_pix"
Width of images to be mapped. Default: "map_width"
Compute an absolute pose out of point correspondences between world and image coordinates.
X-Component of world coordinates.
Y-Component of world coordinates.
Z-Component of world coordinates.
Row-Component of image coordinates.
Column-Component of image coordinates.
The inner camera parameters from camera calibration.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose.
Pose quality.
Compute a pose out of a homography describing the relation between world and image coordinates.
The homography from world- to image coordinates.
The camera calibration matrix K.
Type of pose computation. Default: "decomposition"
Pose of the 2D object.
Calibrate the radial distortion.
Contours that are available for the calibration.
Contours that were used for the calibration.
Width of the images from which the contours were extracted. Default: 640
Height of the images from which the contours were extracted. Default: 480
Threshold for the classification of outliers. Default: 0.05
Seed value for the random number generator. Default: 42
Determines the distortion model. Default: "division"
Determines how the distortion center will be estimated. Default: "variable"
Controls the deviation of the distortion center from the image center; larger values allow larger deviations from the image center; 0 switches the penalty term off. Default: 0.0
Internal camera parameters.
Compute a camera matrix from internal camera parameters.
Internal camera parameters.
3x3 projective camera matrix that corresponds to CameraParam.
Width of the images that correspond to CameraMatrix.
Height of the images that correspond to CameraMatrix.
Compute the internal camera parameters from a camera matrix.
3x3 projective camera matrix that determines the internal camera parameters.
Kappa.
Width of the images that correspond to CameraMatrix.
Height of the images that correspond to CameraMatrix.
Internal camera parameters.
Perform a self-calibration of a stationary projective camera.
Number of different images that are used for the calibration.
Width of the images from which the points were extracted.
Height of the images from which the points were extracted.
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Estimation algorithm for the calibration. Default: "gold_standard"
Camera model to be used. Default: ["focus","principal_point"]
Are the camera parameters identical for all images? Default: "true"
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Radial distortion of the camera.
Array of 3x3 transformation matrices that determine rotation of the camera in the respective image.
X-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Y-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Z-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Average error per reconstructed point if EstimationMethod = 'gold_standard' is used.
Determine the 3D pose of a rectangle from its perspective 2D projection.
Contour(s) to be examined.
Internal camera parameters.
Width of the rectangle in meters.
Height of the rectangle in meters.
Weighting mode for the optimization phase. Default: "nonweighted"
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 3.0 for 'tukey'). Default: 2.0
3D pose of the rectangle.
Covariances of the pose values.
Root-mean-square value of the final residual error.
Determine the 3D pose of a circle from its perspective 2D projection.
Contours to be examined.
Internal camera parameters.
Radius of the circle in object space.
Type of output parameters. Default: "pose"
3D pose of the first circle.
3D pose of the second circle.
Perform a radiometric self-calibration of a camera.
Input images.
Ratio of the exposure energies of successive image pairs. Default: 0.5
Features that are used to compute the inverse response function of the camera. Default: "2d_histogram"
Type of the inverse response function of the camera. Default: "discrete"
Smoothness of the inverse response function of the camera. Default: 1.0
Degree of the polynomial if FunctionType = 'polynomial'. Default: 5
Inverse response function of the camera.
Apply a general transformation to an image.
Image to be mapped.
Image containing the mapping data.
Mapped image.
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
Image containing the mapping data.
Old camera parameters.
New camera parameters.
Type of the mapping. Default: "bilinear"
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Image containing the mapping data.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Input image.
Transformed image.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transform an XLD contour into the plane z=0 of a world coordinate system.
Input XLD contours to be transformed in image coordinates.
Transformed XLD contours in world coordinates.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Scale or dimension. Default: "m"
Transform image points into the plane z=0 of a world coordinate system.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Row coordinates of the points to be transformed. Default: 100.0
Column coordinates of the points to be transformed. Default: 100.0
Scale or dimension. Default: "m"
X coordinates of the points in the world coordinate system.
Y coordinates of the points in the world coordinate system.
Translate the origin of a 3D pose.
Original 3D pose.
Translation of the origin in x-direction. Default: 0
Translation of the origin in y-direction. Default: 0
Translation of the origin in z-direction. Default: 0
New 3D pose after applying the translation.
Perform a hand-eye calibration.
Linear list containing all the x coordinates of the calibration points (in the order of the images).
Linear list containing all the y coordinates of the calibration points (in the order of the images).
Linear list containing all the z coordinates of the calibration points (in the order of the images).
Linear list containing all row coordinates of the calibration points (in the order of the images).
Linear list containing all the column coordinates of the calibration points (in the order of the images).
Number of the calibration points for each image.
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary camera: robot tool in robot base coordinates).
Internal camera parameters.
Method of hand-eye calibration. Default: "nonlinear"
Type of quality assessment. Default: "error_pose"
Computed relative camera pose: 3D pose of the robot tool (moving camera) or robot base (stationary camera), respectively, in camera coordinates.
Computed 3D pose of the calibration points in robot base coordinates (moving camera) or in robot tool coordinates (stationary camera), respectively.
Quality assessment of the result.
Get the representation type of a 3D pose.
3D pose.
Order of rotation and translation.
Meaning of the rotation values.
View of transformation.
Change the representation type of a 3D pose.
Original 3D pose.
Order of rotation and translation. Default: "Rp+T"
Meaning of the rotation values. Default: "gba"
View of transformation. Default: "point"
3D transformation.
Create a 3D pose.
Translation along the x-axis (in [m]). Default: 0.1
Translation along the y-axis (in [m]). Default: 0.1
Translation along the z-axis (in [m]). Default: 0.1
Rotation around x-axis or x component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Rotation around y-axis or y component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Rotation around z-axis or z component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Order of rotation and translation. Default: "Rp+T"
Meaning of the rotation values. Default: "gba"
View of transformation. Default: "point"
3D pose.
Change the radial distortion of contours.
Original contours.
Resulting contours with modified radial distortion.
Internal camera parameter for Contours.
Internal camera parameter for ContoursRectified.
Change the radial distortion of pixel coordinates.
Original row component of pixel coordinates.
Original column component of pixel coordinates.
The inner camera parameters of the camera used to create the input pixel coordinates.
The inner camera parameters of a camera.
Row component of pixel coordinates after changing the radial distortion.
Column component of pixel coordinates after changing the radial distortion.
Change the radial distortion of an image.
Original image.
Region of interest in ImageRectified.
Resulting image with modified radial distortion.
Internal camera parameter for Image.
Internal camera parameter for Image.
Determine new camera parameters in accordance to the specified radial distortion.
Mode. Default: "adaptive"
Internal camera parameters (original).
Desired radial distortions. Default: 0.0
Internal camera parameters (modified).
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with rectangularly arranged marks.
Number of marks in x direction. Default: 7
Number of marks in y direction. Default: 7
Distance of the marks in meters. Default: 0.0125
Ratio of the mark diameter to the mark distance. Default: 0.5
File name of the calibration plate description. Default: "caltab.descr"
File name of the PostScript file. Default: "caltab.ps"
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with hexagonally arranged marks.
Number of rows. Default: 27
Number of marks per row. Default: 31
Diameter of the marks. Default: 0.00258065
Row indices of the finder patterns. Default: [13,6,6,20,20]
Column indices of the finder patterns. Default: [15,6,24,6,24]
Polarity of the marks Default: "light_on_dark"
File name of the calibration plate description. Default: "calplate.cpd"
File name of the PostScript file. Default: "calplate.ps"
Read the mark center points from the calibration plate description file.
File name of the calibration plate description. Default: "calplate_320mm.cpd"
X coordinates of the mark center points in the coordinate system of the calibration plate.
Y coordinates of the mark center points in the coordinate system of the calibration plate.
Z coordinates of the mark center points in the coordinate system of the calibration plate.
Compute the line of sight corresponding to a point in the image.
Row coordinate of the pixel.
Column coordinate of the pixel.
Internal camera parameters.
X coordinate of the first point on the line of sight in the camera coordinate system.
Y coordinate of the first point on the line of sight in the camera coordinate system.
Z coordinate of the first point on the line of sight in the camera coordinate system.
X coordinate of the second point on the line of sight in the camera coordinate system.
Y coordinate of the second point on the line of sight in the camera coordinate system.
Z coordinate of the second point on the line of sight in the camera coordinate system.
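For a distortion-free pinhole camera, the two returned points can be pictured as the projection center and the metric image-plane point of the pixel. A Python sketch of that simplified geometry (not the HALCON implementation; the parameter names and the assumption of zero distortion are illustrative):

```python
def line_of_sight(row, col, focus, sx, sy, cx, cy):
    """Two points on the optical ray through pixel (row, col) for a
    distortion-free pinhole camera: focal length focus (m), pixel sizes
    sx, sy (m), principal point (cx, cy) (pixels).
    The first point is the projection center in camera coordinates."""
    p1 = (0.0, 0.0, 0.0)               # projection center
    p2 = ((col - cx) * sx,             # pixel converted to metric image-plane coords
          (row - cy) * sy,
          focus)
    return p1, p2
```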
Project a homogeneous 3D point using a 3x4 projection matrix.
3x4 projection matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Input point (w coordinate).
Output point (x coordinate).
Output point (y coordinate).
Output point (w coordinate).
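The operation above is a plain matrix-vector product. A Python sketch of the math (not the halcondotnet call), with the 3x4 matrix stored as three rows of four:

```python
def project_hom_point(P, x, y, z, w):
    """Apply a 3x4 projection matrix to a homogeneous 3D point,
    yielding a homogeneous 2D point (qx, qy, qw)."""
    qx = P[0][0] * x + P[0][1] * y + P[0][2] * z + P[0][3] * w
    qy = P[1][0] * x + P[1][1] * y + P[1][2] * z + P[1][3] * w
    qw = P[2][0] * x + P[2][1] * y + P[2][2] * z + P[2][3] * w
    return qx, qy, qw  # inhomogeneous image coordinates are (qx/qw, qy/qw)
```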
Project a 3D point using a 3x4 projection matrix.
3x4 projection matrix.
Input point (x coordinate).
Input point (y coordinate).
Input point (z coordinate).
Output point (x coordinate).
Output point (y coordinate).
Project 3D points into (sub-)pixel image coordinates.
X coordinates of the 3D points to be projected in the camera coordinate system.
Y coordinates of the 3D points to be projected in the camera coordinate system.
Z coordinates of the 3D points to be projected in the camera coordinate system.
Internal camera parameters.
Row coordinates of the projected points (in pixels).
Column coordinates of the projected points (in pixels).
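Ignoring lens distortion, this projection is the pinhole model: u = f*x/z, v = f*y/z on the image plane, then a metric-to-pixel conversion via the pixel sizes and principal point. A Python sketch of that simplified math (not the HALCON operator, which also applies the distortion model):

```python
def project_points(xs, ys, zs, focus, sx, sy, cx, cy):
    """Project camera-frame 3D points to (sub-)pixel coordinates with a
    distortion-free pinhole model: focal length focus (m), pixel sizes
    sx, sy (m), principal point (cx, cy) (pixels)."""
    rows, cols = [], []
    for x, y, z in zip(xs, ys, zs):
        u = focus * x / z          # metric image-plane coordinates
        v = focus * y / z
        cols.append(u / sx + cx)   # metric -> pixel
        rows.append(v / sy + cy)
    return rows, cols
```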
Convert internal camera parameters and a 3D pose into a 3x4 projection matrix.
Internal camera parameters.
3D pose.
3x4 projection matrix.
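Conceptually, the 3x4 projection matrix combines the camera matrix K with the world-to-camera rigid transform: P = K [R | t]. A Python sketch of that composition (illustrative only; the real operator works from the camera parameter tuple and pose, not from K, R, t directly):

```python
def projection_matrix(K, R, t):
    """Build P = K * [R | t] from a 3x3 camera matrix K, a 3x3 rotation R,
    and a translation t (world -> camera). Matrices are lists of rows."""
    Rt = [[R[i][0], R[i][1], R[i][2], t[i]] for i in range(3)]   # [R | t]
    return [[sum(K[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
            for i in range(3)]
```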
Convert a homogeneous transformation matrix into a 3D pose.
Homogeneous transformation matrix.
Equivalent 3D pose.
Convert a 3D pose into a homogeneous transformation matrix.
3D pose.
Equivalent homogeneous transformation matrix.
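The conversion builds a 4x4 matrix from the pose's translation and rotation angles. A Python sketch of the math; note the rotation composition order used here (Rx * Ry * Rz) is an assumption for illustration, while the actual order depends on the pose's OrderOfTransform/OrderOfRotation settings:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def pose_to_hom_mat3d(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous matrix from translation (m) and rotation angles (deg).
    Rotation composed as Rx * Ry * Rz (assumed order, see note above)."""
    r = math.radians
    R = matmul(rot_x(r(rx)), matmul(rot_y(r(ry)), rot_z(r(rz))))
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0, 0, 0, 1]]
```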
Deserialize the serialized internal camera parameters.
Handle of the serialized item.
Internal camera parameters.
Serialize the internal camera parameters.
Internal camera parameters.
Handle of the serialized item.
Deserialize a serialized pose.
Handle of the serialized item.
3D pose.
Serialize a pose.
3D pose.
Handle of the serialized item.
Read a 3D pose from a text file.
File name of the external camera parameters. Default: "campose.dat"
3D pose.
Write a 3D pose to a text file.
3D pose.
File name of the external camera parameters. Default: "campose.dat"
Read internal camera parameters from a file.
File name of internal camera parameters. Default: "campar.dat"
Internal camera parameters.
Write internal camera parameters into a file.
Internal camera parameters.
File name of internal camera parameters. Default: "campar.dat"
Simulate an image with calibration plate.
Simulated calibration image.
File name of the calibration plate description. Default: "calplate_320mm.cpd"
Internal camera parameters.
External camera parameters (3D pose of the calibration plate in camera coordinates).
Gray value of image background. Default: 128
Gray value of calibration plate. Default: 80
Gray value of calibration marks. Default: 224
Scaling factor to reduce oversampling. Default: 1.0
Project and visualize the 3D model of the calibration plate in the image.
Window in which the calibration plate should be visualized.
File name of the calibration plate description. Default: "calplate_320mm.cpd"
Internal camera parameters.
External camera parameters (3D pose of the calibration plate in camera coordinates).
Scaling factor for the visualization. Default: 1.0
Determine all camera parameters by a simultaneous minimization process.
Ordered tuple with all x coordinates of the calibration marks (in meters).
Ordered tuple with all y coordinates of the calibration marks (in meters).
Ordered tuple with all z coordinates of the calibration marks (in meters).
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Initial values for the internal camera parameters.
Ordered tuple with all initial values for the external camera parameters.
Camera parameters to be estimated. Default: "all"
Internal camera parameters.
Ordered tuple with all external camera parameters.
Average error distance in pixels.
Extract rectangularly arranged 2D calibration marks from the image and calculate initial values for the external camera parameters.
Input image.
Region of the calibration plate.
File name of the calibration plate description. Default: "caltab_100.descr"
Initial values for the internal camera parameters.
Initial threshold value for contour detection. Default: 128
Loop value for successive reduction of StartThresh. Default: 10
Minimum threshold for contour detection. Default: 18
Filter parameter for contour detection, see edges_image. Default: 0.9
Minimum length of the contours of the marks. Default: 15.0
Maximum expected diameter of the marks. Default: 100.0
Tuple with row coordinates of the detected marks.
Tuple with column coordinates of the detected marks.
Estimation for the external camera parameters.
Segment the region of a standard calibration plate with rectangularly arranged marks in the image.
Input image.
Output region.
File name of the calibration plate description. Default: "caltab_100.descr"
Filter size of the Gaussian. Default: 3
Threshold value for mark extraction. Default: 112
Expected minimal diameter of the marks on the calibration plate. Default: 5
Free the memory of all camera setup models.
Free the memory of a calibration setup model.
Handle of the camera setup model.
Serialize a camera setup model.
Handle to the camera setup model.
Handle of the serialized item.
Deserialize a serialized camera setup model.
Handle of the serialized item.
Handle to the camera setup model.
Store a camera setup model into a file.
Handle to the camera setup model.
The file name of the model to be saved.
Restore a camera setup model from a file.
The path and file name of the model file.
Handle to the camera setup model.
Get generic camera setup model parameters.
Handle to the camera setup model.
Index of the camera in the setup. Default: 0
Names of the generic parameters to be queried.
Values of the generic parameters to be queried.
Set generic camera setup model parameters.
Handle to the camera setup model.
Unique index of the camera in the setup. Default: 0
Names of the generic parameters to be set.
Values of the generic parameters to be set.
Define type, parameters, and relative pose of a camera in a camera setup model.
Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Internal camera parameters.
Pose of the camera relative to the setup's coordinate system.
Create a model for a setup of calibrated cameras.
Number of cameras in the setup. Default: 2
Handle to the camera setup model.
Free the memory of all calibration data models.
Free the memory of a calibration data model.
Handle of a calibration data model.
Deserialize a serialized calibration data model.
Handle of the serialized item.
Handle of a calibration data model.
Serialize a calibration data model.
Handle of a calibration data model.
Handle of the serialized item.
Restore a calibration data model from a file.
The path and file name of the model file.
Handle of a calibration data model.
Store a calibration data model into a file.
Handle of a calibration data model.
The file name of the model to be saved.
Perform a hand-eye calibration.
Handle of a calibration data model.
Average residual error of the optimization.
Determine all camera parameters by a simultaneous minimization process.
Handle of a calibration data model.
Back projection root mean square error (RMSE) of the optimization.
Remove a data set from a calibration data model.
Handle of a calibration data model.
Type of the calibration data item. Default: "tool"
Index of the affected item. Default: 0
Set data in a calibration data model.
Handle of a calibration data model.
Type of calibration data item. Default: "model"
Index of the affected item (depending on the selected ItemType). Default: "general"
Parameter(s) to set. Default: "reference_camera"
New value(s). Default: 0
Find the HALCON calibration plate and set the extracted points and contours in a calibration data model.
Input image.
Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the calibration object. Default: 0
Index of the observed calibration object. Default: 0
Names of the generic parameters to be set. Default: []
Values of the generic parameters to be set. Default: []
Remove observation data from a calibration data model.
Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Get contour-based observation data from a calibration data model.
Contour-based result(s).
Handle of a calibration data model.
Name of contour objects to be returned. Default: "marks"
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Get observed calibration object poses from a calibration data model.
Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Stored observed calibration object pose relative to the observing camera.
Set observed calibration object poses in a calibration data model.
Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the calibration object. Default: 0
Index of the observed calibration object. Default: 0
Pose of the observed calibration object relative to the observing camera.
Get point-based observation data from a calibration data model.
Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the observed calibration object. Default: 0
Index of the observed calibration object pose. Default: 0
Row coordinates of the detected points.
Column coordinates of the detected points.
Correspondence of the detected points to the points of the observed calibration object.
Roughly estimated pose of the observed calibration object relative to the observing camera.
Set point-based observation data in a calibration data model.
Handle of a calibration data model.
Index of the observing camera. Default: 0
Index of the calibration object. Default: 0
Index of the observed calibration object. Default: 0
Row coordinates of the extracted points.
Column coordinates of the extracted points.
Correspondence of the extracted points to the calibration marks of the observed calibration object. Default: "all"
Roughly estimated pose of the observed calibration object relative to the observing camera.
Query information about the relations between cameras, calibration objects, and calibration object poses.
Handle of a calibration data model.
Kind of referred object. Default: "camera"
Camera index or calibration object index (depending on the selected ItemType). Default: 0
List of calibration object indices or list of camera indices (depending on ItemType).
Calibration object numbers.
Query data stored or computed in a calibration data model.
Handle of a calibration data model.
Type of calibration data item. Default: "camera"
Index of the affected item (depending on the selected ItemType). Default: 0
The name of the inspected data. Default: "params"
Requested data.
Define a calibration object in a calibration model.
Handle of a calibration data model.
Calibration object index. Default: 0
3D point coordinates or a description file name.
Set type and initial parameters of a camera in a calibration data model.
Handle of a calibration data model.
Camera index. Default: 0
Type of the camera. Default: []
Initial camera internal parameters.
Create a HALCON calibration data model.
Type of the calibration setup. Default: "calibration_object"
Number of cameras in the calibration setup. Default: 1
Number of calibration objects. Default: 1
Handle of the created calibration data model.
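The calibration data model above stores, per camera, internal parameters and observed calibration marks. As an illustrative sketch (plain Python, not a HALCON call; the function name and the simple pinhole model without distortion are assumptions), this shows how such internal parameters map a 3D calibration mark into image coordinates — the relation that the calibration optimizes:

```python
def project_point(x, y, z, focus, sx, sy, cx, cy):
    """Project a 3D point (camera coordinates, z > 0) to pixel
    coordinates with a simple pinhole model: focal length 'focus' [m],
    pixel pitch sx/sy [m/pixel], principal point (cx, cy) [pixels]."""
    u = focus * x / z   # image-plane coordinates in meters
    v = focus * y / z
    col = u / sx + cx   # convert to pixel coordinates
    row = v / sy + cy
    return row, col

# A calibration mark 0.1 m left of the optical axis, 0.5 m away:
row, col = project_point(-0.1, 0.0, 0.5, focus=0.008,
                         sx=8.3e-6, sy=8.3e-6, cx=320.0, cy=240.0)
```

The back projection RMSE reported by the calibration is the root mean square distance between such projected points and the actually extracted mark positions.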
Get the value of a parameter in a specific bead inspection model.
Handle of the bead inspection model.
Name of the model parameter that is queried. Default: "target_thickness"
Value of the queried model parameter.
Set parameters of the bead inspection model.
Handle of the bead inspection model.
Name of the model parameter that shall be adjusted for the specified bead inspection model. Default: "target_thickness"
Value of the model parameter that shall be adjusted for the specified bead inspection model. Default: 40
Inspect beads in an image, as defined by the bead inspection model.
Image to apply bead inspection on.
The detected left contour of the beads.
The detected right contour of the beads.
Detected error segments.
Handle of the bead inspection model to be used.
Types of detected errors.
Delete the bead inspection model and free the allocated memory.
Handle of the bead inspection model.
Create a model to inspect beads or adhesive in images.
XLD contour specifying the expected bead's shape and position.
Optimal bead thickness. Default: 50
Tolerance of bead's thickness with respect to TargetThickness. Default: 15
Tolerance of the bead's center position. Default: 15
The bead's polarity. Default: "light"
Names of the generic parameters that can be adjusted for the bead inspection model. Default: []
Values of the generic parameters that can be adjusted for the bead inspection model. Default: []
Handle for using and accessing the bead inspection model.
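The thickness and position tolerances above define which bead segments count as errors. A minimal sketch of the underlying check (the function and the error labels are hypothetical, chosen only to illustrate the target-thickness-plus-tolerance logic):

```python
def classify_bead(thicknesses, target, tolerance):
    """Label each measured bead width against target +/- tolerance,
    in the spirit of the error types a bead inspection reports
    (labels here are illustrative, not HALCON's)."""
    labels = []
    for t in thicknesses:
        if t < target - tolerance:
            labels.append("too_thin")
        elif t > target + tolerance:
            labels.append("too_thick")
        else:
            labels.append("ok")
    return labels

labels = classify_bead([50, 30, 70], target=50, tolerance=15)
```

With the defaults above (target 50, tolerance 15), widths outside [35, 65] are flagged.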
Deserialize a bar code model.
Handle of the serialized item.
Handle of the bar code model.
Serialize a bar code model.
Handle of the bar code model.
Handle of the serialized item.
Read a bar code model from a file and create a new model.
Name of the bar code model file. Default: "bar_code_model.bcm"
Handle of the bar code model.
Write a bar code model to a file.
Handle of the bar code model.
Name of the bar code model file. Default: "bar_code_model.bcm"
Access iconic objects that were created during the search or decoding of bar code symbols.
Objects that are created as intermediate results during the detection or evaluation of bar codes.
Handle of the bar code model.
Bar code results or candidates for which the data is required. Default: "all"
Name of the iconic object to return. Default: "candidate_regions"
Get the alphanumerical results that were accumulated during the decoding of bar code symbols.
Handle of the bar code model.
Bar code results or candidates for which the data is required. Default: "all"
Names of the resulting data to return. Default: "decoded_types"
List with the results.
Decode bar code symbols within a rectangle.
Input image.
Handle of the bar code model.
Type of the searched bar code. Default: "EAN-13"
Row index of the center. Default: 50.0
Column index of the center. Default: 100.0
Orientation of rectangle in radians. Default: 0.0
Half of the length of the rectangle along the reading direction of the bar code. Default: 200.0
Half of the length of the rectangle perpendicular to the reading direction of the bar code. Default: 100.0
Data strings of all successfully decoded bar codes.
Detect and read bar code symbols in an image.
Input image. If the image has a reduced domain, the bar code search is restricted to that domain. This usually reduces the runtime of the operator. However, if the bar code is not fully inside the domain, it cannot be decoded correctly.
Regions of the successfully decoded bar code symbols.
Handle of the bar code model.
Type of the searched bar code. Default: "auto"
Data strings of all successfully decoded bar codes.
Get the names of the parameters that can be used in set_bar_code* and get_bar_code* operators for a given bar code model.
Handle of the bar code model.
Properties of the parameters. Default: "trained_general"
Names of the generic parameters.
Get parameters that are used by the bar code reader when processing a specific bar code type.
Handle of the bar code model.
Names of the bar code types for which parameters should be queried. Default: "EAN-13"
Names of the generic parameters that are to be queried for the bar code model. Default: "check_char"
Values of the generic parameters.
Get one or several parameters that describe the bar code model.
Handle of the bar code model.
Names of the generic parameters that are to be queried for the bar code model. Default: "element_size_min"
Values of the generic parameters.
Set selected parameters of the bar code model for selected bar code types.
Handle of the bar code model.
Names of the bar code types for which parameters should be set. Default: "EAN-13"
Names of the generic parameters that shall be adjusted for finding and decoding bar codes. Default: "check_char"
Values of the generic parameters that are adjusted for finding and decoding bar codes. Default: "absent"
Set selected parameters of the bar code model.
Handle of the bar code model.
Names of the generic parameters that shall be adjusted for finding and decoding bar codes. Default: "element_size_min"
Values of the generic parameters that are adjusted for finding and decoding bar codes. Default: 8
This operator is inoperable. It had the following function: Delete all bar code models and free the allocated memory.
Delete a bar code model and free the allocated memory.
Handle of the bar code model.
Create a model of a bar code reader.
Names of the generic parameters that can be adjusted for the bar code model. Default: []
Values of the generic parameters that can be adjusted for the bar code model. Default: []
Handle for using and accessing the bar code model.
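The 'check_char' parameter mentioned above controls how the symbol's check character is handled during decoding. As background, the EAN-13 check digit is a standard weighted checksum and can be sketched in a few lines (plain Python, independent of HALCON):

```python
def ean13_check_digit(digits12):
    """Compute the EAN-13 check character from the first 12 digits:
    odd positions (1-indexed) weigh 1, even positions weigh 3, and
    the check digit pads the weighted sum to a multiple of 10."""
    s = sum(int(d) * (1 if i % 2 == 0 else 3)
            for i, d in enumerate(digits12))
    return (10 - s % 10) % 10

# Full code "4006381333931": the last digit is the check character.
check = ean13_check_digit("400638133393")   # -> 1
```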
Delete the background estimation data set.
ID of the BgEsti data set.
Return the estimated background image.
Estimated background image of the current data set.
ID of the BgEsti data set.
Change the estimated background image.
Current image.
Region describing areas to change.
ID of the BgEsti data set.
Estimate the background and return the foreground region.
Current image.
Region of the detected foreground.
ID of the BgEsti data set.
Return the parameters of the data set.
ID of the BgEsti data set.
First system matrix parameter.
Second system matrix parameter.
Gain type.
Kalman gain / foreground adaptation time.
Kalman gain / background adaptation time.
Threshold adaptation.
Foreground / background threshold.
Number of statistic data sets.
Confidence constant.
Constant for decay time.
Change the parameters of the data set.
ID of the BgEsti data set.
First system matrix parameter. Default: 0.7
Second system matrix parameter. Default: 0.7
Gain type. Default: "fixed"
Kalman gain / foreground adaptation time. Default: 0.002
Kalman gain / background adaptation time. Default: 0.02
Threshold adaptation. Default: "on"
Foreground/background threshold. Default: 7.0
Number of statistic data sets. Default: 10
Confidence constant. Default: 3.25
Constant for decay time. Default: 15.0
Generate and initialize a data set for the background estimation.
Initialization image.
First system matrix parameter. Default: 0.7
Second system matrix parameter. Default: 0.7
Gain type. Default: "fixed"
Kalman gain / foreground adaptation time. Default: 0.002
Kalman gain / background adaptation time. Default: 0.02
Threshold adaptation. Default: "on"
Foreground/background threshold. Default: 7.0
Number of statistic data sets. Default: 10
Confidence constant. Default: 3.25
Constant for decay time. Default: 15.0
ID of the BgEsti data set.
This operator is inoperable. It had the following function: Delete all background estimation data sets.
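The gain and threshold parameters above drive a per-pixel adaptive background estimate. A minimal sketch of the fixed-gain idea (plain Python on a 1D "image"; the exact HALCON update equations, statistics handling, and decay are not reproduced here):

```python
def update_background(bg, frame, gain_bg, gain_fg, threshold):
    """One fixed-gain update step: pixels close to the current
    background estimate adapt quickly (background gain), outliers
    are flagged as foreground and adapt only slowly."""
    new_bg, foreground = [], []
    for b, v in zip(bg, frame):
        if abs(v - b) > threshold:
            foreground.append(True)
            new_bg.append(b + gain_fg * (v - b))
        else:
            foreground.append(False)
            new_bg.append(b + gain_bg * (v - b))
    return new_bg, foreground

bg = [100.0, 100.0]
frame = [102.0, 180.0]   # small fluctuation vs. a new object
bg, fg = update_background(bg, frame, gain_bg=0.02, gain_fg=0.002,
                           threshold=7.0)
```

The second pixel deviates by more than the threshold, so it is reported as foreground and pulls the background estimate only marginally.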
Perform an action on I/O channels.
Handles of the opened I/O channels.
Name of the action to perform.
List of arguments for the action. Default: []
List of values returned by the action.
Write a value to the specified I/O channels.
Handles of the opened I/O channels.
Write values.
Status of written values.
Read a value from the specified I/O channels.
Handles of the opened I/O channels.
Read value.
Status of read value.
Set specific parameters of I/O channels.
Handles of the opened I/O channels.
Parameter names. Default: []
Parameter values to set. Default: []
Query specific parameters of I/O channels.
Handles of the opened I/O channels.
Parameter names. Default: "param_name"
Parameter values.
Close I/O channels.
Handles of the opened I/O channels.
Open and configure I/O channels.
Handle of the opened I/O device.
HALCON I/O channel names of the specified device.
Parameter names. Default: []
Parameter values. Default: []
Handles of the opened I/O channel.
Query information about channels of the specified I/O device.
Handle of the opened I/O device.
Channel names to query.
Name of the query. Default: "param_name"
List of values (according to Query).
Perform an action on the I/O device.
Handle of the opened I/O device.
Name of the action to perform.
List of arguments for the action. Default: []
List of result values returned by the action.
Configure a specific I/O device instance.
Handle of the opened I/O device.
Parameter names. Default: []
Parameter values to set. Default: []
Query settings of an I/O device instance.
Handle of the opened I/O device.
Parameter names. Default: "param_name"
Parameter values.
Close the specified I/O device.
Handle of the opened I/O device.
Open and configure an I/O device.
HALCON I/O interface name. Default: []
I/O device name. Default: []
Dynamic parameter names. Default: []
Dynamic parameter values. Default: []
Handle of the opened I/O device.
Perform an action on the I/O interface.
HALCON I/O interface name. Default: []
Name of the action to perform.
List of arguments for the action. Default: []
List of results returned by the action.
Query information about the specified I/O device interface.
HALCON I/O interface name. Default: []
Parameter name of the query. Default: "io_device_names"
List of result values (according to Query).
Query specific parameters of an image acquisition device.
Handle of the acquisition device to be used.
Parameter of interest. Default: "revision"
Parameter value.
Set specific parameters of an image acquisition device.
Handle of the acquisition device to be used.
Parameter name.
Parameter value to be set.
Query callback function of an image acquisition device.
Handle of the acquisition device to be used.
Callback type. Default: "transfer_end"
Pointer to the callback function.
Pointer to user-specific context data.
Register a callback function for an image acquisition device.
Handle of the acquisition device to be used.
Callback type. Default: "transfer_end"
Pointer to the callback function to be set.
Pointer to user-specific context data.
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Grabbed image data.
Pre-processed image regions.
Pre-processed XLD contours.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Grabbed image data.
Preprocessed image regions.
Preprocessed XLD contours.
Handle of the acquisition device to be used.
Preprocessed control data.
Asynchronous grab of an image from the specified image acquisition device.
Grabbed image.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Start an asynchronous grab from the specified image acquisition device.
Handle of the acquisition device to be used.
This parameter is obsolete and has no effect. Default: -1.0
Synchronous grab of an image from the specified image acquisition device.
Grabbed image.
Handle of the acquisition device to be used.
Query information about the specified image acquisition interface.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Name of the chosen query. Default: "info_boards"
Textual information (according to Query).
List of values (according to Query).
This operator is inoperable. It had the following function: Close all image acquisition devices.
Close specified image acquisition device.
Handle of the image acquisition device to be closed.
Open and configure an image acquisition device.
HALCON image acquisition interface name, i.e., name of the corresponding DLL (Windows) or shared library (Linux/macOS). Default: "File"
Desired horizontal resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Desired vertical resolution of image acquisition interface (absolute value or 1 for full resolution, 2 for half resolution, or 4 for quarter resolution). Default: 1
Width of desired image part (absolute value or 0 for HorizontalResolution - 2*StartColumn). Default: 0
Height of desired image part (absolute value or 0 for VerticalResolution - 2*StartRow). Default: 0
Line number of upper left corner of desired image part (or border height if ImageHeight = 0). Default: 0
Column number of upper left corner of desired image part (or border width if ImageWidth = 0). Default: 0
Desired half image or full image. Default: "default"
Number of transferred bits per pixel and image channel (-1: device-specific default value). Default: -1
Output color format of the grabbed images, typically 'gray' or 'raw' for single-channel or 'rgb' or 'yuv' for three-channel images ('default': device-specific default value). Default: "default"
Generic parameter with device-specific meaning. Default: -1
External triggering. Default: "default"
Type of used camera ('default': device-specific default value). Default: "default"
Device the image acquisition device is connected to ('default': device-specific default value). Default: "default"
Port the image acquisition device is connected to (-1: device-specific default value). Default: -1
Camera input line of multiplexer (-1: device-specific default value). Default: -1
Handle of the opened image acquisition device.
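The ImageWidth/ImageHeight = 0 convention described above ("resolution minus twice the border") can be sketched as a tiny helper (plain Python; the function name is illustrative):

```python
def effective_image_part(h_res, v_res, width, height,
                         start_row, start_col):
    """Resolve the ImageWidth/ImageHeight = 0 convention: a value of
    0 means 'resolution minus twice the border', as described for
    StartRow and StartColumn above."""
    w = width if width != 0 else h_res - 2 * start_col
    h = height if height != 0 else v_res - 2 * start_row
    return w, h

part = effective_image_part(640, 480, 0, 0, start_row=10,
                            start_col=20)   # -> (600, 460)
```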
Query look-up table of the image acquisition device.
Handle of the acquisition device to be used.
Red level of the LUT entries.
Green level of the LUT entries.
Blue level of the LUT entries.
Set look-up table of the image acquisition device.
Handle of the acquisition device to be used.
Red level of the LUT entries.
Green level of the LUT entries.
Blue level of the LUT entries.
Add a text label to a 3D scene.
Handle of the 3D scene.
Text of the label. Default: "label"
Point of reference of the label.
Position of the label. Default: "top"
Indicates fixed or relative positioning. Default: "point"
Index of the new label in the 3D scene.
Remove a text label from a 3D scene.
Handle of the 3D scene.
Index of the text label to remove.
Set parameters of a text label in a 3D scene.
Handle of the 3D scene.
Index of the text label.
Names of the generic parameters. Default: "color"
Values of the generic parameters. Default: "red"
Add training images to the texture inspection model.
Image of flawless texture.
Handle of the texture inspection model.
Indices of the images that have been added to the texture inspection model.
Inspection of the texture within an image.
Image of the texture to be inspected.
Novelty regions.
Handle of the texture inspection model.
Handle of the inspection results.
Bilateral filtering of an image.
Image to be filtered.
Joint image.
Filtered output image.
Size of the Gaussian of the closeness function. Default: 3.0
Size of the Gaussian of the similarity function. Default: 20.0
Generic parameter name. Default: []
Generic parameter value. Default: []
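The two sigma parameters above weight closeness (spatial distance) and similarity (gray-value distance). A minimal 1D sketch of the bilateral weighting (plain Python, not the HALCON implementation; no joint image, no border handling beyond clipping):

```python
import math

def bilateral_1d(signal, sigma_spatial, sigma_range, radius):
    """Edge-preserving smoothing: each neighbor's weight is the
    product of a spatial Gaussian (closeness) and a range Gaussian
    (similarity), so neighbors across an edge barely contribute."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius),
                       min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_spatial ** 2))
                 * math.exp(-((signal[j] - center) ** 2)
                            / (2 * sigma_range ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge survives; noise within each plateau is averaged:
smoothed = bilateral_1d([10, 11, 10, 200, 201, 200],
                        sigma_spatial=3.0, sigma_range=20.0, radius=2)
```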
Clear a CNN-based OCR classifier.
Handle of the OCR classifier.
Clear a texture inspection model and free the allocated memory.
Handle of the texture inspection model.
Clear a texture inspection result handle and free the allocated memory.
Handle of the texture inspection results.
Convert image coordinates to window coordinates.
Window handle.
Row in image coordinates.
Column in image coordinates.
Row (Y) in window coordinates.
Column (X) in window coordinates.
Convert window coordinates to image coordinates.
Window handle.
Row (Y) in window coordinates.
Column (X) in window coordinates.
Row in image coordinates.
Column in image coordinates.
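The two conversions above are inverse linear mappings between the visible image part and the window extent. A simplified sketch of the forward direction (plain Python; assumes the visible part is given as (row1, col1, row2, col2) and ignores aspect-ratio handling):

```python
def image_to_window(row, col, part, window_size):
    """Map image coordinates to window coordinates: a plain linear
    mapping from the visible image part onto the window extent."""
    row1, col1, row2, col2 = part
    width, height = window_size
    wy = (row - row1) / (row2 - row1 + 1) * height
    wx = (col - col1) / (col2 - col1 + 1) * width
    return wx, wy

# Center of a fully visible 640x480 image in a 320x240 window:
x, y = image_to_window(240, 320, part=(0, 0, 479, 639),
                       window_size=(320, 240))
```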
Create a texture inspection model.
The type of the created texture inspection model. Default: "basic"
Handle for using and accessing the texture inspection model.
Deserialize a serialized dual quaternion.
Handle of the serialized item.
Dual quaternion.
Deserialize a serialized CNN-based OCR classifier.
Handle of the serialized item.
Handle of the OCR classifier.
Deserialize a serialized texture inspection model.
Handle of the serialized item.
Handle of the texture inspection model.
Display text in a window.
Window handle.
A tuple of strings containing the text message to be displayed. Each value of the tuple will be displayed in a single line. Default: "hello"
If set to 'window', the text position is given with respect to the window coordinate system. If set to 'image', image coordinates are used (this may be useful in zoomed images). Default: "window"
The vertical text alignment or the row coordinate of the desired text position. Default: 12
The horizontal text alignment or the column coordinate of the desired text position. Default: 12
A tuple of strings defining the colors of the texts. Default: "black"
Generic parameter names. Default: []
Generic parameter values. Default: []
Classify multiple characters with a CNN-based OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Result of classifying the characters with the CNN.
Confidence of the class of the characters.
Classify a single character with a CNN-based OCR classifier.
Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Result of classifying the character with the CNN.
Confidence(s) of the class(es) of the character.
Classify a related group of characters with a CNN-based OCR classifier.
Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Result of classifying the characters with the CNN.
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Multiply two dual quaternions.
Left dual quaternion.
Right dual quaternion.
Product of the dual quaternions.
Conjugate a dual quaternion.
Dual quaternion.
Conjugate of the dual quaternion.
Interpolate two dual quaternions.
Dual quaternion as the start point of the interpolation.
Dual quaternion as the end point of the interpolation.
Interpolation parameter. Default: 0.5
Interpolated dual quaternion.
Normalize a dual quaternion.
Unit dual quaternion.
Normalized dual quaternion.
Convert a unit dual quaternion into a homogeneous transformation matrix.
Unit dual quaternion.
Transformation matrix.
Convert a dual quaternion to a 3D pose.
Unit dual quaternion.
3D pose.
Convert a unit dual quaternion into a screw.
Unit dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
Transform a 3D line with a unit dual quaternion.
Unit dual quaternion representing the transformation.
Format of the line parameters. Default: "moment"
X component of the direction vector of the line.
Y component of the direction vector of the line.
Z component of the direction vector of the line.
X component of the moment vector or a point on the line.
Y component of the moment vector or a point on the line.
Z component of the moment vector or a point on the line.
X component of the direction vector of the transformed line.
Y component of the direction vector of the transformed line.
Z component of the direction vector of the transformed line.
X component of the moment vector or a point on the transformed line.
Y component of the moment vector or a point on the transformed line.
Z component of the moment vector or a point on the transformed line.
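The dual quaternion operators above rest on a small algebra: a dual quaternion is a pair (real, dual) of ordinary quaternions, products distribute with the real parts multiplying and the dual part following the product rule, and conjugation conjugates both parts. A self-contained sketch of these basics (plain Python; not HALCON's internal representation):

```python
def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dq_mul(p, q):
    """Product of dual quaternions p = (real, dual), q = (real, dual):
    (pr*qr, pr*qd + pd*qr)."""
    (pr, pd), (qr, qd) = p, q
    real = qmul(pr, qr)
    dual = tuple(x + y for x, y in zip(qmul(pr, qd), qmul(pd, qr)))
    return (real, dual)

def dq_conjugate(dq):
    """Quaternion-conjugate both the real and the dual part."""
    (w, x, y, z), (dw, dx, dy, dz) = dq
    return ((w, -x, -y, -z), (dw, -dx, -dy, -dz))

identity = ((1.0, 0, 0, 0), (0.0, 0, 0, 0))
# Unit dual quaternion for a pure translation t = (1, 2, 3):
# real part 1, dual part (0, t/2).
t = ((1.0, 0, 0, 0), (0.0, 0.5, 1.0, 1.5))
```

Composing rigid transformations then reduces to `dq_mul`, which is what makes dual quaternions convenient for pose interpolation.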
Find edges in a 3D object model.
Handle of the 3D object model whose edges should be computed.
Edge threshold.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D object model containing the edges.
Find the best matches of multiple NCC models.
Input image in which the model should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.8
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "true"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
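The score returned above is a normalized cross-correlation value. The measure itself is standard and easy to sketch for two equally sized patches (plain Python; the HALCON implementation additionally runs over image pyramids and rotations):

```python
import math

def ncc(template, window):
    """Normalized cross-correlation of two equally sized patches:
    1.0 for identical patterns, -1.0 for inverted ones, and invariant
    to linear brightness changes."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template)
                    * sum((w - mw) ** 2 for w in window))
    return num / den

# Same pattern at a different brightness still scores 1.0:
score = ncc([1, 2, 3, 4], [11, 12, 13, 14])
```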
Find the best matches of a surface model in a 3D scene and images.
Images of the scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D pose of the surface model in the scene.
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
Flush the contents of a window.
Window handle.
Return the region used to create an NCC model.
Model region of the NCC model.
Handle of the model.
Return the parameters of a CNN-based OCR classifier.
Handle of the OCR classifier.
A tuple of generic parameter names. Default: "characters"
A tuple of generic parameter values.
Get the current color in RGBA-coding.
Window handle.
The current color's red value.
The current color's green value.
The current color's blue value.
The current color's alpha value.
Get intermediate 3D object model of a stereo reconstruction.
Handle of the stereo model.
Names of the model parameters.
Values of the model parameters.
Get the training images contained in a texture inspection model.
Training images contained in the texture inspection model.
Handle of the texture inspection model.
Query parameters of a texture inspection model.
Handle of the texture inspection model.
Name of the queried model parameter. Default: "novelty_threshold"
Value of the queried model parameter.
Query iconic results of a texture inspection.
Returned iconic object.
Handle of the texture inspection result.
Name of the iconic object to be returned. Default: "novelty_region"
Guided filtering of an image.
Input image.
Guidance image.
Output image.
Radius of the filtering operation. Default: 3
Controls the influence of edges on the smoothing. Default: 20.0
Create an interleaved image from a multichannel image.
Input multichannel image.
Output interleaved image.
Target format for InterleavedImage. Default: "rgba"
Number of bytes in a row of the output image. Default: "match"
Alpha value for three channel input images. Default: 255
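The conversion above flattens planar channels into one pixel-interleaved buffer, filling the alpha channel with a constant for three-channel input. A minimal sketch of that memory layout (plain Python on byte lists; row alignment/padding is ignored):

```python
def planar_to_interleaved_rgba(r, g, b, alpha=255):
    """Turn three planar channel sequences into one interleaved
    R,G,B,A byte string, filling alpha with a constant value."""
    out = []
    for rv, gv, bv in zip(r, g, b):
        out.extend((rv, gv, bv, alpha))
    return bytes(out)

# Two pixels: planar channels -> interleaved RGBA bytes.
pixels = planar_to_interleaved_rgba([10, 20], [30, 40], [50, 60])
```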
Convert a 3D pose to a unit dual quaternion.
3D pose.
Unit dual quaternion.
Get the names of the parameters that can be used in get_params_ocr_class_cnn for a given CNN-based OCR classifier.
Handle of OCR classifier.
Names of the generic parameters.
Read a CNN-based OCR classifier from a file.
File name. Default: "Universal_Rej.occ"
Handle of the OCR classifier.
Read a texture inspection model from a file.
File name.
Handle of the texture inspection model.
Refine the pose of a surface model in a 3D scene and in images.
Images of the scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
3D pose of the surface model in the scene.
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
Clear all or a user-defined subset of the images of a texture inspection model.
Handle of the texture inspection model.
Indices of the images to be deleted from the texture inspection model.
Indices of the images that remain in the texture inspection model.
Convert a screw into a dual quaternion.
Format of the screw parameters. Default: "moment"
X component of the direction vector of the screw axis.
Y component of the direction vector of the screw axis.
Z component of the direction vector of the screw axis.
X component of the moment vector or a point on the screw axis.
Y component of the moment vector or a point on the screw axis.
Z component of the moment vector or a point on the screw axis.
Rotation angle in radians.
Translation.
Dual quaternion.
Segment image using Maximally Stable Extremal Regions (MSER).
Input image.
Segmented dark MSERs.
Segmented light MSERs.
The polarity of the returned MSERs. Default: "both"
Minimal size of an MSER. Default: 10
Maximal size of an MSER. Default: []
Number of thresholds for which a region needs to be stable. Default: 15
List of generic parameter names. Default: []
List of generic parameter values. Default: []
Send an event to a buffer window signaling a mouse double click event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a window buffer signaling a mouse down event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse drag event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Send an event to a buffer window signaling a mouse up event.
Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true', if HALCON processed the event.
Serialize a dual quaternion.
Dual quaternion.
Handle of the serialized item.
Serialize a CNN-based OCR classifier.
Handle of the OCR classifier.
Handle of the serialized item.
Serialize a texture inspection model.
Handle of the texture inspection model.
Handle of the serialized item.
Sets the callback for content updates in buffer window.
Window handle.
Callback for content updates.
Parameter to CallbackFunction.
Set the color definition via RGBA values.
Window handle.
Red component of the color. Default: 255
Green component of the color. Default: 0
Blue component of the color. Default: 0
Alpha component of the color. Default: 255
Set parameters and properties of a surface model.
Handle of the surface model.
Name of the parameter. Default: "camera_parameter"
Value of the parameter.
Set parameters of a texture inspection model.
Handle of the texture inspection model.
Name of the model parameter to be adjusted. Default: "gen_result_handle"
New value of the model parameter. Default: "true"
Train a texture inspection model.
Handle of the texture inspection model.
Write a texture inspection model to a file.
Handle of the texture inspection model.
File name.
Reconstruct a surface from several, differently illuminated images.
Input images of the object under different illuminations.
Field of the surface normals.
Gradient field of the surface.
Albedo image of the surface.
Requested result type. Default: "all"
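The reconstruction above rests on the classical Lambertian photometric stereo model I_i = albedo * (L_i . n). As a conceptual sketch only (not the HALCON implementation), the per-pixel solve with three known, linearly independent light directions reduces to a 3x3 linear system, here solved by Cramer's rule:

```python
def photometric_stereo_pixel(lights, intensities):
    """Recover the surface normal and albedo at one pixel from three
    images under known light directions, assuming a Lambertian surface:
    I_i = albedo * dot(L_i, n).  Solves L @ g = I for g = albedo * n."""
    (a, b, c), (d, e, f), (g_, h, i) = lights
    I1, I2, I3 = intensities
    det = a * (e * i - f * h) - b * (d * i - f * g_) + c * (d * h - e * g_)
    # Cramer's rule: replace one column of L with I at a time
    gx = (I1 * (e * i - f * h) - b * (I2 * i - f * I3) + c * (I2 * h - e * I3)) / det
    gy = (a * (I2 * i - f * I3) - I1 * (d * i - f * g_) + c * (d * I3 - I2 * g_)) / det
    gz = (a * (e * I3 - I2 * h) - b * (d * I3 - I2 * g_) + I1 * (d * h - e * g_)) / det
    albedo = (gx * gx + gy * gy + gz * gz) ** 0.5
    normal = (gx / albedo, gy / albedo, gz / albedo)
    return normal, albedo
```

With more than three lights the same model is solved per pixel in a least-squares sense; the function name and argument layout here are purely illustrative.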
Infer the class affiliations for a set of images using a deep-learning-based classifier.
Tuple of input images.
Handle of the deep-learning-based classifier.
Handle of the deep learning classification results.
Clear a deep-learning-based classifier.
Handle of the deep-learning-based classifier.
Clear a handle containing the results of the deep-learning-based classification.
Handle of the deep learning classification results.
Clear the handle of a deep-learning-based classifier training result.
Handle of the training results from the deep-learning-based classifier.
Clear a structured light model and free the allocated memory.
Handle of the structured light model.
Create a structured light model.
The type of the created structured light model. Default: "deflectometry"
Handle for using and accessing the structured light model.
Decode the camera images acquired with a structured light setup.
Acquired camera images.
Handle of the structured light model.
Deserialize a deep-learning-based classifier.
Handle of the serialized item.
Handle of the deep-learning-based classifier.
Deserialize a structured light model.
Handle of the serialized item.
Handle of the structured light model.
Calculate the minimum distance between two contours and the points used for the calculation.
First input contour.
Second input contour.
Distance calculation mode. Default: "fast_point_to_segment"
Minimum distance between the two contours.
Row coordinate of the point on Contour1.
Column coordinate of the point on Contour1.
Row coordinate of the point on Contour2.
Column coordinate of the point on Contour2.
Fuse 3D object models into a surface.
Handles of the 3D object models.
The two opposite corners of the bounding box.
Used resolution within the bounding box. Default: 1.0
Distance of expected noise to surface. Default: 1.0
Minimum thickness of the object in direction of the surface normal. Default: 1.0
Weight factor for data fidelity. Default: 1.0
Direction of normals of the input models. Default: "inwards"
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Handle of the fused 3D object model.
Generate the pattern images to be displayed in a structured light setup.
Generated pattern images.
Handle of the structured light model.
Return the parameters of a deep-learning-based classifier.
Handle of the deep-learning-based classifier.
Name of the generic parameter. Default: "gpu"
Value of the generic parameter.
Retrieve classification results inferred by a deep-learning-based classifier.
Handle of the deep learning classification results.
Index of the image in the batch. Default: "all"
Name of the generic parameter. Default: "predicted_classes"
Value of the generic parameter, either the confidence values, the class names or class indices.
Return the results for the single training step of a deep-learning-based classifier.
Handle of the training results from the deep-learning-based classifier.
Name of the generic parameter. Default: "loss"
Value of the generic parameter.
Query parameters of a structured light model.
Handle of the structured light model.
Name of the queried model parameter. Default: "min_stripe_width"
Value of the queried model parameter.
Get (intermediate) iconic results of a structured light model.
Iconic result.
Handle of the structured light model.
Name of the iconic result to be returned. Default: "correspondence_image"
Compute the width, height, and aspect ratio of the surrounding rectangle parallel to the coordinate axes.
Regions to be examined.
Height of the surrounding rectangle of the region.
Width of the surrounding rectangle of the region.
Aspect ratio of the surrounding rectangle of the region.
Compute the width, height, and aspect ratio of the enclosing rectangle parallel to the coordinate axes of contours or polygons.
Contours or polygons to be examined.
Height of the enclosing rectangle.
Width of the enclosing rectangle.
Aspect ratio of the enclosing rectangle.
Insert objects into an iconic object tuple.
Input object tuple.
Object tuple to insert.
Extended object tuple.
Index to insert objects.
Read a deep-learning-based classifier from a file.
File name. Default: "pretrained_dl_classifier_compact.hdl"
Handle of the deep learning classifier.
Read a structured light model from a file.
File name.
Handle of the structured light model.
Remove objects from an iconic object tuple.
Input object tuple.
Remaining object tuple.
Indices of the objects to be removed.
Replaces one or more elements of an iconic object tuple.
Iconic input object.
Element(s) to replace.
Tuple with replaced elements.
Index/Indices of elements to be replaced.
Serialize a deep-learning-based classifier.
Handle of the deep-learning-based classifier.
Handle of the serialized item.
Serialize a structured light model.
Handle of the structured light model.
Handle of the serialized item.
Set the parameters of a deep-learning-based classifier.
Handle of the deep-learning-based classifier.
Name of the generic parameter. Default: "classes"
Value of the generic parameter. Default: ["class_1","class_2","class_3"]
Set a timeout for an operator.
Operator for which the timeout shall be set.
Timeout in seconds. Default: 1
Timeout mode to be set. Default: "cancel"
Set parameters of a structured light model.
Handle of the structured light model.
Name of the model parameter to be adjusted. Default: "min_stripe_width"
New value of the model parameter. Default: 32
Perform a training step of a deep-learning-based classifier on a batch of images.
Images comprising the batch.
Handle of the deep-learning-based classifier.
Corresponding labels for each of the images. Default: []
Handle of the training results from the deep-learning-based classifier.
Write a deep-learning-based classifier to a file.
Handle of the deep-learning-based classifier.
File name.
Write a structured light model to a file.
Handle of the structured light model.
File name.
Clear the content of a handle.
Handle to clear.
Deserialize a serialized item.
Handle containing the serialized item to be deserialized.
Handle containing the deserialized item.
Convert a handle into an integer.
The handle to be cast.
The handle cast to an integer value.
Convert an integer into a handle.
The handle as integer.
The converted handle.
Serialize the content of a handle.
Handle that should be serialized.
Handle containing the serialized item.
Test if the internal representation of a tuple is of type handle.
Input tuple.
Boolean value indicating if the input tuple is of type handle.
Test whether the elements of a tuple are of type handle.
Input tuple.
Boolean values indicating if the elements of the input tuple are of type handle.
Test if a tuple is serializable.
Tuple to check for serializability.
Boolean value indicating if the input can be serialized.
Test if the elements of a tuple are serializable.
Tuple to check for serializability.
Boolean value indicating if the input elements can be serialized.
Check if a handle is valid.
The handle to check for validity.
The validity of the handle, 1 or 0.
Return the semantic type of a tuple.
Input tuple.
Semantic type of the input tuple as a string.
Return the semantic type of the elements of a tuple.
Input tuple.
Semantic types of the elements of the input tuple as strings.
Apply a deep-learning-based network on a set of images for inference.
Handle of the deep learning model.
Input data.
Requested outputs. Default: []
Handle containing result data.
Clear a deep learning model.
Handle of the deep learning model.
Copy a dictionary.
Dictionary handle.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Copied dictionary handle.
Create a new empty dictionary.
Handle of the newly created dictionary.
Create a deep learning network for object detection.
Deep learning classifier, used as backbone network. Default: "pretrained_dl_classifier_compact.hdl"
Number of classes. Default: 3
Parameters for the object detection model. Default: []
Deep learning model for object detection.
Deserialize a deep learning model.
Handle of the serialized item.
Handle of the deep learning model.
Return the HALCON thread ID of the current thread.
ID representing the current thread.
Retrieve an object associated with the key from the dictionary.
Object value retrieved from the dictionary.
Dictionary handle.
Key string.
Query dictionary parameters or information about a dictionary.
Dictionary handle.
Names of the dictionary parameters or info queries. Default: "keys"
Dictionary keys the parameter/query should be applied to (empty for GenParamName = 'keys').
Values of the dictionary parameters or info queries.
Retrieve a tuple associated with the key from the dictionary.
Dictionary handle.
Key string.
Tuple value retrieved from the dictionary.
Return the parameters of a deep learning model.
Handle of the deep learning model.
Name of the generic parameter. Default: "batch_size"
Value of the generic parameter.
Retrieve an object associated with a key from a handle.
Iconic value of the key.
Handle of which to get the key.
Key to get.
Return information about a handle.
Handle of which to get the parameter.
Parameter to get. Default: "keys"
Optional key. Default: []
Returned value.
Retrieve a tuple associated with a key from a handle.
Handle of which to get the key.
Key to get.
Control value of the key.
Get current value of system information without requiring a license.
Desired system parameter. Default: "available_parameters"
Current value of the system parameter.
Attempt to interrupt an operator running in a different thread.
Thread that runs the operator to interrupt.
Interruption mode. Default: "cancel"
Read a dictionary from a file.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Dictionary handle.
Read a deep learning model from a file.
File name. Default: "pretrained_dl_segmentation_compact.hdl"
Handle of the deep learning model.
Read a message from a file.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Message handle.
Remove keys from a dictionary.
Dictionary handle.
Key to remove.
Serialize a deep learning model.
Handle of the deep learning model.
Handle of the serialized item.
Add a key/object pair to the dictionary.
Object to be associated with the key.
Dictionary handle.
Key string.
Add a key/tuple pair to the dictionary.
Dictionary handle.
Key string.
Tuple value to be associated with the key.
Set the parameters of a deep learning model.
Handle of the deep learning model.
Name of the generic parameter. Default: "learning_rate"
Value of the generic parameter. Default: 0.001
Train a deep learning model.
Deep learning model handle.
Tuple of dictionaries with input images and corresponding information.
Dictionary with the train result data.
Write a dictionary to a file.
Dictionary handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Write a deep learning model to a file.
Handle of the deep learning model.
File name.
Write a message to a file.
Message handle.
File name.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Calculate the intersection area of oriented rectangles.
Center row coordinate of the first rectangle.
Center column coordinate of the first rectangle.
Angle between the positive horizontal axis and the first edge of the first rectangle (in radians).
Half length of the first edge of the first rectangle.
Half length of the second edge of the first rectangle.
Center row coordinate of the second rectangle.
Center column coordinate of the second rectangle.
Angle between the positive horizontal axis and the first edge of the second rectangle (in radians).
Half length of the first edge of the second rectangle.
Half length of the second edge of the second rectangle.
Intersection area of the first rectangle with the second rectangle.
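The intersection area above can be computed by polygon clipping: build the corners of both oriented rectangles, clip one against the other (Sutherland-Hodgman), and take the shoelace area of the result. A self-contained sketch using the same (Row, Column, Phi, Length1, Length2) parameterization with half edge lengths; all function names are hypothetical:

```python
import math

def rect_corners(row, col, phi, l1, l2):
    """Counterclockwise corners of an oriented rectangle; phi is the
    angle between the horizontal (column) axis and the first edge."""
    c, s = math.cos(phi), math.sin(phi)
    return [(col + su * c * l1 - sv * s * l2, row + su * s * l1 + sv * c * l2)
            for su, sv in ((1, 1), (-1, 1), (-1, -1), (1, -1))]

def clip(poly, a, b):
    """Sutherland-Hodgman step: keep the part of poly left of edge a->b."""
    def inside(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def intersect(p, q):
        den = (a[0] - b[0]) * (p[1] - q[1]) - (a[1] - b[1]) * (p[0] - q[0])
        t = ((a[0] - p[0]) * (p[1] - q[1]) - (a[1] - p[1]) * (p[0] - q[0])) / den
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        if inside(q):
            if not inside(p):
                out.append(intersect(p, q))
            out.append(q)
        elif inside(p):
            out.append(intersect(p, q))
    return out

def intersection_area(rect1, rect2):
    """Area of the intersection of two oriented rectangles, each given
    as (row, col, phi, l1, l2)."""
    poly = rect_corners(*rect1)
    clipper = rect_corners(*rect2)
    for i in range(4):
        if not poly:
            return 0.0
        poly = clip(poly, clipper[i], clipper[(i + 1) % 4])
    # shoelace formula on the clipped (convex) polygon
    return abs(sum(p[0] * q[1] - q[0] * p[1]
                   for p, q in zip(poly, poly[1:] + poly[:1]))) / 2.0
```

Since both inputs are convex, the intersection is convex and four clipping passes suffice.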
Get the current contour display fill style.
Window handle.
Current contour fill style.
Get the clutter parameters of a shape model.
Region where no clutter should occur.
Handle of the model.
Parameter names. Default: "use_clutter"
Parameter values.
Transformation matrix.
Minimum contrast of clutter in the search images.
Define the contour display fill style.
Window handle.
Fill style of contour displays. Default: "stroke"
Set the clutter parameters of a shape model.
Region where no clutter should occur.
Handle of the model.
Transformation matrix.
Minimum contrast of clutter in the search images. Default: 128
Parameter names.
Parameter values.
Represents a rigid 3D transformation with 7 parameters (3 for the rotation, 3 for the translation, 1 for the representation type).
Create an uninitialized instance.
Create a 3D pose.
Modified instance represents: 3D pose.
Translation along the x-axis (in [m]). Default: 0.1
Translation along the y-axis (in [m]). Default: 0.1
Translation along the z-axis (in [m]). Default: 0.1
Rotation around x-axis or x component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Rotation around y-axis or y component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Rotation around z-axis or z component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Order of rotation and translation. Default: "Rp+T"
Meaning of the rotation values. Default: "gba"
View of transformation. Default: "point"
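With the defaults above ('Rp+T': rotate first, then translate), the pose corresponds to a homogeneous matrix H = [R | t]. The sketch below assumes the 'gba' rotation order composes as R = Rx(RotX) * Ry(RotY) * Rz(RotZ) with angles in degrees; verify the order against the operator reference before relying on it:

```python
import math

def pose_to_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous transform from pose parameters, assuming the
    'gba' convention R = Rx(rx) @ Ry(ry) @ Rz(rz), angles in degrees,
    and 'Rp+T' order (rotate the point, then translate)."""
    a, b, g = (math.radians(v) for v in (rx, ry, rz))
    ca, sa = math.cos(a), math.sin(a)
    cb, sb = math.cos(b), math.sin(b)
    cg, sg = math.cos(g), math.sin(g)
    # R = Rx(a) @ Ry(b) @ Rz(g), written out explicitly
    R = [[cb * cg,                -cb * sg,                sb],
         [sa * sb * cg + ca * sg, -sa * sb * sg + ca * cg, -sa * cb],
         [-ca * sb * cg + sa * sg, ca * sb * sg + sa * cg,  ca * cb]]
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0.0, 0.0, 0.0, 1.0]]
```

Applying the matrix to a homogeneous point (x, y, z, 1) then performs R*p + t in one step.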
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Convert to matrix
Compute the average of a set of poses.
Set of poses of which the average is computed.
Empty tuple, or one weight per pose. Default: []
Averaging mode. Default: "iterative"
Weight of the translation. Default: "auto"
Weight of the rotation. Default: "auto"
Deviation of the mean from the input poses.
Weighted mean of the poses.
Compute the average of a set of poses.
Set of poses of which the average is computed.
Empty tuple, or one weight per pose. Default: []
Averaging mode. Default: "iterative"
Weight of the translation. Default: "auto"
Weight of the rotation. Default: "auto"
Deviation of the mean from the input poses.
Weighted mean of the poses.
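A much-simplified stand-in for pose averaging (closer in spirit to a 'mean' mode than to the iterative mode above): average the translations component-wise, and average sign-aligned unit quaternions followed by renormalization. Representation and function name are illustrative only:

```python
def average_poses(quats, trans, weights=None):
    """Weighted average of rigid poses given as unit quaternions
    (w, x, y, z) plus translation 3-vectors.  Quaternions are flipped
    onto the hemisphere of the first pose before averaging, since q
    and -q describe the same rotation."""
    n = len(quats)
    w = weights or [1.0] * n
    tw = sum(w)
    t = tuple(sum(wi * ti[k] for wi, ti in zip(w, trans)) / tw
              for k in range(3))
    ref = quats[0]
    acc = [0.0] * 4
    for wi, q in zip(w, quats):
        s = 1.0 if sum(a * b for a, b in zip(q, ref)) >= 0 else -1.0
        for k in range(4):
            acc[k] += wi * s * q[k]
    norm = sum(a * a for a in acc) ** 0.5
    return tuple(a / norm for a in acc), t
```

This linear quaternion mean is a good approximation only when the input rotations are close together; widely spread rotations call for an iterative intrinsic mean.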
Invert each pose in a tuple of 3D poses.
Tuple of 3D poses.
Tuple of inverted 3D poses.
Invert each pose in a tuple of 3D poses.
Instance represents: Tuple of 3D poses.
Tuple of inverted 3D poses.
Combine 3D poses given in two tuples.
Tuple containing the left poses.
Tuple containing the right poses.
Tuple containing the returned poses.
Combine 3D poses given in two tuples.
Instance represents: Tuple containing the left poses.
Tuple containing the right poses.
Tuple containing the returned poses.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Distance image.
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Minimum of the expected disparities. Default: -30
Maximum of the expected disparities. Default: 30
Smoothing of surfaces. Default: 50
Smoothing of edges. Default: 50
Parameter name(s) for the multi-scanline algorithm. Default: []
Parameter value(s) for the multi-scanline algorithm. Default: []
Distance image.
Compute the distance values for a rectified stereo image pair using multigrid methods.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Distance image.
Compute the distance values for a rectified stereo image pair using multigrid methods.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Score of the calculated disparity if CalculateScore is set to 'true'.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Weight of the gray value constancy in the data term. Default: 1.0
Weight of the gradient constancy in the data term. Default: 30.0
Weight of the smoothness term in relation to the data term. Default: 5.0
Initial guess of the disparity. Default: 0.0
Should the quality measure be returned in Score? Default: "false"
Parameter name(s) for the multigrid algorithm. Default: "default_parameters"
Parameter value(s) for the multigrid algorithm. Default: "fast_accurate"
Distance image.
Compute the fundamental matrix from the relative orientation of two cameras.
Instance represents: Relative orientation of the cameras (3D pose).
6x6 covariance matrix of relative pose. Default: []
Parameters of the first camera.
Parameters of the second camera.
9x9 covariance matrix of the fundamental matrix.
Computed fundamental matrix.
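The underlying relationship is the classical one: with pinhole calibration matrices K1, K2 and a relative pose mapping camera-1 coordinates into the camera-2 frame (X2 = R X1 + t), the fundamental matrix is F = K2^(-T) [t]x R K1^(-1). A pure-Python sketch of exactly that formula (hypothetical helper; simple pinhole K only, no lens distortion and no covariance propagation):

```python
def fundamental_from_relative_pose(R, t, K1, K2):
    """F = K2^-T @ [t]x @ R @ K1^-1 for calibration matrices of the
    form [[fx,0,cx],[0,fy,cy],[0,0,1]] and a relative pose X2 = R X1 + t.
    Corresponding pixels then satisfy x2^T @ F @ x1 = 0."""
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    def k_inv(K):  # analytic inverse of an upper-triangular pinhole K
        fx, fy, cx, cy = K[0][0], K[1][1], K[0][2], K[1][2]
        return [[1 / fx, 0.0, -cx / fx], [0.0, 1 / fy, -cy / fy], [0.0, 0.0, 1.0]]
    tx = [[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]]
    E = matmul(tx, R)  # essential matrix
    K2_inv_T = [list(r) for r in zip(*k_inv(K2))]
    return matmul(K2_inv_T, matmul(E, k_inv(K1)))
```

A quick consistency check is to project one 3D point into both cameras and verify that the epipolar constraint x2^T F x1 vanishes.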
Compute the relative orientation between two cameras given image point correspondences and known camera parameters and reconstruct 3D space points.
Modified instance represents: Computed relative orientation of the cameras (3D pose).
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance between the row and column coordinates of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance between the row and column coordinates of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera parameters of the 1st camera.
Camera parameters of the 2nd camera.
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
6x6 covariance matrix of the relative camera orientation.
Compute the relative orientation between two cameras given image point correspondences and known camera parameters and reconstruct 3D space points.
Modified instance represents: Computed relative orientation of the cameras (3D pose).
Input points in image 1 (row coordinate).
Input points in image 1 (column coordinate).
Input points in image 2 (row coordinate).
Input points in image 2 (column coordinate).
Row coordinate variance of the points in image 1. Default: []
Covariance between the row and column coordinates of the points in image 1. Default: []
Column coordinate variance of the points in image 1. Default: []
Row coordinate variance of the points in image 2. Default: []
Covariance between the row and column coordinates of the points in image 2. Default: []
Column coordinate variance of the points in image 2. Default: []
Camera parameters of the 1st camera.
Camera parameters of the 2nd camera.
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Root-Mean-Square of the epipolar distance error.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
6x6 covariance matrix of the relative camera orientation.
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Modified instance represents: Computed relative orientation of the cameras (3D pose).
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 1st camera.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
6x6 covariance matrix of the relative orientation.
Compute the relative orientation between two cameras by automatically finding correspondences between image points.
Modified instance represents: Computed relative orientation of the cameras (3D pose).
Input image 1.
Input image 2.
Row coordinates of characteristic points in image 1.
Column coordinates of characteristic points in image 1.
Row coordinates of characteristic points in image 2.
Column coordinates of characteristic points in image 2.
Parameters of the 1st camera.
Parameters of the 2nd camera.
Gray value comparison metric. Default: "ssd"
Size of gray value masks. Default: 10
Average row coordinate shift of corresponding points. Default: 0
Average column coordinate shift of corresponding points. Default: 0
Half height of matching search window. Default: 200
Half width of matching search window. Default: 200
Estimate of the relative orientation of the right image with respect to the left image. Default: 0.0
Threshold for gray value matching. Default: 10
Algorithm for the computation of the relative pose and for special pose types. Default: "normalized_dlt"
Maximal deviation of a point from its epipolar line. Default: 1
Seed for the random number generator. Default: 0
Root-Mean-Square of the epipolar distance error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
6x6 covariance matrix of the relative orientation.
Compute the distance values for a rectified stereo image pair using correlation techniques.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Distance image.
Compute the distance values for a rectified stereo image pair using correlation techniques.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified image of camera 1.
Rectified image of camera 2.
Evaluation of a distance value.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Matching function. Default: "ncc"
Width of the correlation window. Default: 11
Height of the correlation window. Default: 11
Variance threshold of textured image regions. Default: 0.0
Minimum of the expected disparities. Default: 0
Maximum of the expected disparities. Default: 30
Number of pyramid levels. Default: 1
Threshold of the correlation function. Default: 0.0
Downstream filters. Default: "none"
Distance interpolation. Default: "none"
Distance image.
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Instance represents: Point transformation from camera 2 to camera 1.
Internal parameters of the projective camera 1.
Internal parameters of the projective camera 2.
Row coordinate of a point in image 1.
Column coordinate of a point in image 1.
Row coordinate of the corresponding point in image 2.
Column coordinate of the corresponding point in image 2.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Distance of the 3D point to the lines of sight.
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Instance represents: Point transformation from camera 2 to camera 1.
Internal parameters of the projective camera 1.
Internal parameters of the projective camera 2.
Row coordinate of a point in image 1.
Column coordinate of a point in image 1.
Row coordinate of the corresponding point in image 2.
Column coordinate of the corresponding point in image 2.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Distance of the 3D point to the lines of sight.
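Intersecting two lines of sight amounts to finding the shortest segment between two (generally skew) 3D lines; the reconstructed point is its midpoint, and the returned distance is its length. An illustrative sketch of the generic geometry, not the halcondotnet call:

```python
def intersect_lines_of_sight(p1, d1, p2, d2):
    """Closest approach of two 3D lines p + s*d: returns the midpoint of
    the shortest connecting segment and that segment's length."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    w = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b          # zero only for parallel lines
    s = (b * e - c * d) / den    # parameter of the closest point on line 1
    t = (a * e - b * d) / den    # parameter of the closest point on line 2
    q1 = [p + s * u for p, u in zip(p1, d1)]
    q2 = [p + t * u for p, u in zip(p2, d2)]
    diff = [x - y for x, y in zip(q1, q2)]
    return [(x + y) / 2.0 for x, y in zip(q1, q2)], dot(diff, diff) ** 0.5
```

In a stereo setup the two lines are the back-projected rays through the two image points; a large residual distance signals a poor correspondence or calibration.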
Transform a disparity image into 3D points in a rectified stereo system.
Instance represents: Pose of the rectified camera 2 in relation to the rectified camera 1.
Disparity image.
Y coordinates of the points in the rectified camera system 1.
Z coordinates of the points in the rectified camera system 1.
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
X coordinates of the points in the rectified camera system 1.
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Instance represents: Pose of the rectified camera 2 in relation to the rectified camera 1.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Row coordinate of a point in the rectified image 1.
Column coordinate of a point in the rectified image 1.
Disparity of the images of the world point.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Instance represents: Pose of the rectified camera 2 in relation to the rectified camera 1.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Row coordinate of a point in the rectified image 1.
Column coordinate of a point in the rectified image 1.
Disparity of the images of the world point.
X coordinate of the 3D point.
Y coordinate of the 3D point.
Z coordinate of the 3D point.
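For a rectified pair, this triangulation reduces to closed-form expressions: Z = f*b/d, X = (c - cx)*Z/f, Y = (r - cy)*Z/f, with disparity d and baseline b. The sketch below assumes both rectified cameras share the focal length f (in pixels) and principal point (cx, cy); function name and argument order are illustrative:

```python
def disparity_to_point_3d(f, b, row, col, disparity, cx, cy):
    """Triangulate a 3D point in the rectified camera-1 frame from a
    pixel (row, col) and its disparity d = col1 - col2, given focal
    length f (pixels), baseline b (meters), principal point (cx, cy)."""
    z = f * b / disparity            # depth from the disparity
    x = (col - cx) * z / f           # back-project the pixel at depth z
    y = (row - cy) * z / f
    return x, y, z
```

Note the singularity at zero disparity: points at infinity have d = 0, so small disparity errors translate into large depth errors for distant points.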
Transform a disparity value into a distance value in a rectified binocular stereo system.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Disparity between the images of the world point.
Distance of a world point to the rectified camera system.
Transform a disparity value into a distance value in a rectified binocular stereo system.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Disparity between the images of the world point.
Distance of a world point to the rectified camera system.
Transform a distance value into a disparity in a rectified stereo system.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Distance of a world point to camera 1.
Disparity between the images of the point.
Transform a distance value into a disparity in a rectified stereo system.
Instance represents: Point transformation from the rectified camera 2 to the rectified camera 1.
Rectified internal camera parameters of camera 1.
Rectified internal camera parameters of camera 2.
Distance of a world point to camera 1.
Disparity between the images of the point.
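Disparity and distance are reciprocal in a rectified setup: z = f*b/d and d = f*b/z. A minimal sketch under the same shared-focal-length assumption as above (names illustrative):

```python
def disparity_to_distance(f, b, d):
    """Distance z of a world point from the rectified camera system,
    given focal length f (pixels), baseline b, and disparity d."""
    return f * b / d

def distance_to_disparity(f, b, z):
    """Inverse mapping: disparity produced by a point at distance z."""
    return f * b / z
```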
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common rectified image plane.
Instance represents: Point transformation from camera 2 to camera 1.
Image containing the mapping data of camera 2.
Internal parameters of camera 1.
Internal parameters of camera 2.
Subsampling factor. Default: 1.0
Type of rectification. Default: "viewing_direction"
Type of mapping. Default: "bilinear"
Rectified internal parameters of camera 1.
Rectified internal parameters of camera 2.
Point transformation from the rectified camera 1 to the original camera 1.
Point transformation from the rectified camera 2 to the original camera 2.
Point transformation from the rectified camera 2 to the rectified camera 1.
Image containing the mapping data of camera 1.
Determine all camera parameters of a binocular stereo system.
Ordered tuple with all X-coordinates of the calibration marks (in meters).
Ordered tuple with all Y-coordinates of the calibration marks (in meters).
Ordered tuple with all Z-coordinates of the calibration marks (in meters).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Initial values for the internal parameters of camera 1.
Initial values for the internal parameters of camera 2.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Camera parameters to be estimated. Default: "all"
Internal parameters of camera 2.
Ordered tuple with all poses of the calibration model in relation to camera 1.
Ordered tuple with all poses of the calibration model in relation to camera 2.
Pose of camera 2 in relation to camera 1.
Average error distances in pixels.
Internal parameters of camera 1.
Determine all camera parameters of a binocular stereo system.
Instance represents: Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Ordered tuple with all X-coordinates of the calibration marks (in meters).
Ordered tuple with all Y-coordinates of the calibration marks (in meters).
Ordered tuple with all Z-coordinates of the calibration marks (in meters).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Ordered tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Ordered tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Initial values for the internal parameters of camera 1.
Initial values for the internal parameters of camera 2.
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Camera parameters to be estimated. Default: "all"
Internal parameters of camera 2.
Ordered tuple with all poses of the calibration model in relation to camera 1.
Ordered tuple with all poses of the calibration model in relation to camera 2.
Pose of camera 2 in relation to camera 1.
Average error distances in pixels.
Internal parameters of camera 1.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximum number of instances to be found. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
3D pose of the object.
Score of the found instances according to the ScoreType input.
Find the best matches of a calibrated descriptor model in an image and return their 3D pose.
Modified instance represents: 3D pose of the object.
Input image where the model should be found.
The handle to the descriptor model.
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
Minimum score of the instances of the models to be found. Default: 0.2
Maximum number of instances to be found. Default: 1
Camera parameter (inner orientation) obtained from camera calibration.
Score type to be evaluated in Score. Default: "num_points"
Score of the found instances according to the ScoreType input.
Create a descriptor model for calibrated perspective matching.
Instance represents: The reference pose of the object in the reference image.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
The type of the detector. Default: "lepetit"
The detector's parameter names. Default: []
Values of the detector's parameters. Default: []
The descriptor's parameter names. Default: []
Values of the descriptor's parameters. Default: []
The seed for the random number generator. Default: 42
The handle to the descriptor model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Instance represents: The reference pose of the object.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Handle of the model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Instance represents: The reference pose of the object.
Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Handle of the model.
Create a deformable model for calibrated perspective matching.
Instance represents: The reference pose of the object in the reference image.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Create a deformable model for calibrated perspective matching.
Instance represents: The reference pose of the object in the reference image.
Input image whose domain will be used to create the model.
The parameters of the internal orientation of the camera.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "none"
Match metric. Default: "use_polarity"
Thresholds or hysteresis thresholds for the contrast of the object in the template image. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
The parameter names. Default: []
Values of the parameters. Default: []
Handle of the model.
Create a 3D camera pose from camera center and viewing direction.
X coordinate of the optical center of the camera.
Y coordinate of the optical center of the camera.
Z coordinate of the optical center of the camera.
X coordinate of the 3D point to which the camera is directed.
Y coordinate of the 3D point to which the camera is directed.
Z coordinate of the 3D point to which the camera is directed.
Normal vector of the reference plane (points up). Default: "-y"
Camera roll angle. Default: 0
3D camera pose.
Create a 3D camera pose from camera center and viewing direction.
Modified instance represents: 3D camera pose.
X coordinate of the optical center of the camera.
Y coordinate of the optical center of the camera.
Z coordinate of the optical center of the camera.
X coordinate of the 3D point to which the camera is directed.
Y coordinate of the 3D point to which the camera is directed.
Z coordinate of the 3D point to which the camera is directed.
Normal vector of the reference plane (points up). Default: "-y"
Camera roll angle. Default: 0
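The pose built here is the classic "look-at" construction: the camera z axis points from the optical center toward the target point, and the reference-plane normal fixes the roll. A minimal sketch of the rotation part under pinhole conventions (the axis ordering and the default "-y" up vector are assumptions for illustration; HALCON's operator also applies the roll angle and returns a full pose):

```python
import numpy as np

def look_at_rotation(cam, target, up=(0.0, -1.0, 0.0)):
    """Rows of R are the camera axes (x, y, z) expressed in world
    coordinates, so R maps world directions into the camera frame.
    Assumes `up` is not parallel to the viewing direction."""
    z = np.asarray(target, float) - np.asarray(cam, float)
    z /= np.linalg.norm(z)          # viewing direction = camera z axis
    x = np.cross(up, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)              # completes a right-handed frame
    return np.stack([x, y, z])
```

A camera roll would then be a rotation about the resulting z axis applied on top of this frame.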
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference coordinate system of a 3D shape model and vice versa.
Instance represents: Pose to be transformed in the source system.
Handle of the 3D shape model.
Direction of the transformation. Default: "ref_to_model"
Transformed 3D pose in the target system.
Project the edges of a 3D shape model into image coordinates.
Instance represents: 3D pose of the 3D shape model in the world coordinate system.
Handle of the 3D shape model.
Internal camera parameters.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Contour representation of the model view.
Project the edges of a 3D shape model into image coordinates.
Instance represents: 3D pose of the 3D shape model in the world coordinate system.
Handle of the 3D shape model.
Internal camera parameters.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Contour representation of the model view.
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Region in the image plane.
Handle of the 3D object model.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Handle of the reduced 3D object model.
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given region.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Region in the image plane.
Handle of the 3D object model.
Internal camera parameters.
Handle of the reduced 3D object model.
Render 3D object models to get an image.
Handles of the 3D object models.
Camera parameters of the scene.
3D poses of the objects.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Rendered scene.
Render 3D object models to get an image.
Instance represents: 3D poses of the objects.
Handles of the 3D object models.
Camera parameters of the scene.
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Rendered scene.
Display 3D object models.
Window handle.
Handles of the 3D object models.
Camera parameters of the scene. Default: []
3D poses of the objects. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Display 3D object models.
Instance represents: 3D poses of the objects.
Window handle.
Handles of the 3D object models.
Camera parameters of the scene. Default: []
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Project a 3D object model into image coordinates.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Handle of the 3D object model.
Internal camera parameters.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Projected model contours.
Project a 3D object model into image coordinates.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Handle of the 3D object model.
Internal camera parameters.
Name of the generic parameter. Default: []
Value of the generic parameter. Default: []
Projected model contours.
Compute the calibrated scene flow between two stereo image pairs.
Instance represents: Pose of the rectified camera 2 in relation to the rectified camera 1.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Handle of the 3D object model.
Compute the calibrated scene flow between two stereo image pairs.
Instance represents: Pose of the rectified camera 2 in relation to the rectified camera 1.
Input image 1 at time t1.
Input image 2 at time t1.
Input image 1 at time t2.
Input image 2 at time t2.
Disparity between input images 1 and 2 at time t1.
Weight of the regularization term relative to the data term (derivatives of the optical flow). Default: 40.0
Weight of the regularization term relative to the data term (derivatives of the disparity change). Default: 40.0
Parameter name(s) for the algorithm. Default: "default_parameters"
Parameter value(s) for the algorithm. Default: "accurate"
Internal camera parameters of the rectified camera 1.
Internal camera parameters of the rectified camera 2.
Handle of the 3D object model.
Compute an absolute pose out of point correspondences between world and image coordinates.
Modified instance represents: Pose.
X component of the world coordinates.
Y component of the world coordinates.
Z component of the world coordinates.
Row component of the image coordinates.
Column component of the image coordinates.
The inner camera parameters from camera calibration.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose quality.
Compute an absolute pose out of point correspondences between world and image coordinates.
Modified instance represents: Pose.
X component of the world coordinates.
Y component of the world coordinates.
Z component of the world coordinates.
Row component of the image coordinates.
Column component of the image coordinates.
The inner camera parameters from camera calibration.
Kind of algorithm. Default: "iterative"
Type of pose quality to be returned in Quality. Default: "error"
Pose quality.
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Internal camera parameters.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Image containing the mapping data.
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world coordinate system.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Internal camera parameters.
Width of the images to be transformed.
Height of the images to be transformed.
Width of the resulting mapped images in pixels.
Height of the resulting mapped images in pixels.
Scale or unit. Default: "m"
Type of the mapping. Default: "bilinear"
Image containing the mapping data.
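The core of this image-to-world-plane mapping is back-projecting each pixel along its viewing ray and intersecting that ray with the plane z=0 of the world system. A minimal sketch for an undistorted pinhole camera (the parameter names and the simplified square-pixel intrinsics are assumptions; HALCON's operators additionally handle distortion and build a dense resampling map):

```python
import numpy as np

def pixel_to_world_plane(row, col, f_px, cy, cx, R, t):
    """Back-project a pixel onto the plane z=0 of the world system.
    R, t define the world-to-camera transform: x_cam = R @ x_world + t."""
    # viewing ray through the pixel, in camera coordinates
    ray_cam = np.array([(col - cx) / f_px, (row - cy) / f_px, 1.0])
    # ray origin (camera center) and direction in world coordinates
    origin = -R.T @ t
    direction = R.T @ ray_cam
    s = -origin[2] / direction[2]   # solve origin_z + s * dir_z = 0
    return origin + s * direction
```

Rectifying an image then amounts to evaluating this intersection for every output pixel and sampling the input image accordingly.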
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Input image.
Internal camera parameters.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transformed image.
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Input image.
Internal camera parameters.
Width of the resulting image in pixels.
Height of the resulting image in pixels.
Scale or unit. Default: "m"
Type of interpolation. Default: "bilinear"
Transformed image.
Transform an XLD contour into the plane z=0 of a world coordinate system.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Input XLD contours to be transformed in image coordinates.
Internal camera parameters.
Scale or dimension. Default: "m"
Transformed XLD contours in world coordinates.
Transform an XLD contour into the plane z=0 of a world coordinate system.
Instance represents: 3D pose of the world coordinate system in camera coordinates.
Input XLD contours to be transformed in image coordinates.
Internal camera parameters.
Scale or dimension. Default: "m"
Transformed XLD contours in world coordinates.
Translate the origin of a 3D pose.
Instance represents: Original 3D pose.
Translation of the origin in x-direction. Default: 0
Translation of the origin in y-direction. Default: 0
Translation of the origin in z-direction. Default: 0
New 3D pose after applying the translation.
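Since the translation is given in the pose's own coordinate system, the rotation stays unchanged and the offset must first be rotated into the target frame: with x_cam = R·x + t, moving the origin by d gives t' = t + R·d. A sketch under that convention (an assumption for illustration):

```python
import numpy as np

def translate_pose_origin(R, t, dx, dy, dz):
    """Shift the origin of a pose by (dx, dy, dz) expressed in the
    pose's own coordinate system; the rotation is unaffected."""
    return R, t + R @ np.array([dx, dy, dz], float)
```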
Perform a hand-eye calibration.
Linear list containing all the x coordinates of the calibration points (in the order of the images).
Linear list containing all the y coordinates of the calibration points (in the order of the images).
Linear list containing all the z coordinates of the calibration points (in the order of the images).
Linear list containing all row coordinates of the calibration points (in the order of the images).
Linear list containing all the column coordinates of the calibration points (in the order of the images).
Number of the calibration points for each image.
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary camera: robot tool in robot base coordinates).
Internal camera parameters.
Method of hand-eye calibration. Default: "nonlinear"
Type of quality assessment. Default: "error_pose"
Computed 3D pose of the calibration points in robot base coordinates (moving camera) or in robot tool coordinates (stationary camera), respectively.
Quality assessment of the result.
Computed relative camera pose: 3D pose of the robot tool (moving camera) or robot base (stationary camera), respectively, in camera coordinates.
Perform a hand-eye calibration.
Linear list containing all the x coordinates of the calibration points (in the order of the images).
Linear list containing all the y coordinates of the calibration points (in the order of the images).
Linear list containing all the z coordinates of the calibration points (in the order of the images).
Linear list containing all row coordinates of the calibration points (in the order of the images).
Linear list containing all the column coordinates of the calibration points (in the order of the images).
Number of the calibration points for each image.
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary camera: robot tool in robot base coordinates).
Internal camera parameters.
Method of hand-eye calibration. Default: "nonlinear"
Type of quality assessment. Default: "error_pose"
Computed 3D pose of the calibration points in robot base coordinates (moving camera) or in robot tool coordinates (stationary camera), respectively.
Quality assessment of the result.
Computed relative camera pose: 3D pose of the robot tool (moving camera) or robot base (stationary camera), respectively, in camera coordinates.
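Hand-eye calibration solves the classic AX = XB relation: a relative robot motion A and the camera motion B it induces are linked by the unknown hand-eye transform X. A synthetic sketch of that relation with hypothetical transforms (the calibration itself estimates X from many such pairs; this only verifies the algebra):

```python
import numpy as np

def hom(R, t):
    """Assemble a 4x4 homogeneous transform from R and t."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# hypothetical ground-truth hand-eye transform X
X = hom(rot_z(0.3), [0.1, 0.0, 0.05])
# a robot motion A induces the camera motion B = X^-1 @ A @ X,
# so A @ X == X @ B -- the equation the calibration solves for X
A = hom(rot_z(0.7), [0.0, 0.2, 0.0])
B = np.linalg.inv(X) @ A @ X
```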
Get the representation type of a 3D pose.
Instance represents: 3D pose.
Meaning of the rotation values.
View of transformation.
Order of rotation and translation.
Change the representation type of a 3D pose.
Instance represents: Original 3D pose.
Order of rotation and translation. Default: "Rp+T"
Meaning of the rotation values. Default: "gba"
View of transformation. Default: "point"
3D transformation.
Create a 3D pose.
Modified instance represents: 3D pose.
Translation along the x-axis (in [m]). Default: 0.1
Translation along the y-axis (in [m]). Default: 0.1
Translation along the z-axis (in [m]). Default: 0.1
Rotation around x-axis or x component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Rotation around y-axis or y component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Rotation around z-axis or z component of the Rodriguez vector (in [deg] or without unit). Default: 90.0
Order of rotation and translation. Default: "Rp+T"
Meaning of the rotation values. Default: "gba"
View of transformation. Default: "point"
Convert internal camera parameters and a 3D pose into a 3x4 projection matrix.
Instance represents: 3D pose.
Internal camera parameters.
3x4 projection matrix.
Convert a 3D pose into a homogeneous transformation matrix.
Instance represents: 3D pose.
Equivalent homogeneous transformation matrix.
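Both conversions are simple matrix assemblies: the homogeneous matrix stacks R and t into a 4x4 block, and the projection matrix is P = K·[R | t]. A sketch with a hypothetical pinhole intrinsic matrix K (the rotation/translation extraction from a HALCON pose tuple is omitted):

```python
import numpy as np

def pose_to_hom_mat(R, t):
    """4x4 homogeneous transformation equivalent to the pose."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

def cam_pose_to_proj_mat(K, R, t):
    """3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])

# hypothetical intrinsics: 500 px focal length, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P = cam_pose_to_proj_mat(K, np.eye(3), [0.0, 0.0, 2.0])
```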
Deserialize a serialized pose.
Modified instance represents: 3D pose.
Handle of the serialized item.
Serialize a pose.
Instance represents: 3D pose.
Handle of the serialized item.
Read a 3D pose from a text file.
Modified instance represents: 3D pose.
File name of the external camera parameters. Default: "campose.dat"
Write a 3D pose to a text file.
Instance represents: 3D pose.
File name of the external camera parameters. Default: "campose.dat"
Simulate an image with calibration plate.
Instance represents: External camera parameters (3D pose of the calibration plate in camera coordinates).
File name of the calibration plate description. Default: "calplate_320mm.cpd"
Internal camera parameters.
Gray value of image background. Default: 128
Gray value of calibration plate. Default: 80
Gray value of calibration marks. Default: 224
Scaling factor to reduce oversampling. Default: 1.0
Simulated calibration image.
Determine all camera parameters by a simultaneous minimization process.
Ordered tuple with all x coordinates of the calibration marks (in meters).
Ordered tuple with all y coordinates of the calibration marks (in meters).
Ordered tuple with all z coordinates of the calibration marks (in meters).
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Initial values for the internal camera parameters.
Ordered tuple with all initial values for the external camera parameters.
Camera parameters to be estimated. Default: "all"
Ordered tuple with all external camera parameters.
Average error distance in pixels.
Internal camera parameters.
Determine all camera parameters by a simultaneous minimization process.
Instance represents: Ordered tuple with all initial values for the external camera parameters.
Ordered tuple with all x coordinates of the calibration marks (in meters).
Ordered tuple with all y coordinates of the calibration marks (in meters).
Ordered tuple with all z coordinates of the calibration marks (in meters).
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Initial values for the internal camera parameters.
Camera parameters to be estimated. Default: "all"
Ordered tuple with all external camera parameters.
Average error distance in pixels.
Internal camera parameters.
Extract rectangularly arranged 2D calibration marks from the image and calculate initial values for the external camera parameters.
Modified instance represents: Estimation for the external camera parameters.
Input image.
Region of the calibration plate.
File name of the calibration plate description. Default: "caltab_100.descr"
Initial values for the internal camera parameters.
Initial threshold value for contour detection. Default: 128
Loop value for successive reduction of StartThresh. Default: 10
Minimum threshold for contour detection. Default: 18
Filter parameter for contour detection, see edges_image. Default: 0.9
Minimum length of the contours of the marks. Default: 15.0
Maximum expected diameter of the marks. Default: 100.0
Tuple with column coordinates of the detected marks.
Tuple with row coordinates of the detected marks.
Define type, parameters, and relative pose of a camera in a camera setup model.
Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Internal camera parameters.
Pose of the camera relative to the setup's coordinate system.
Define type, parameters, and relative pose of a camera in a camera setup model.
Handle to the camera setup model.
Index of the camera in the setup.
Type of the camera. Default: []
Internal camera parameters.
Pose of the camera relative to the setup's coordinate system.
Represents a quaternion.
Create an uninitialized instance.
Create a rotation quaternion.
Modified instance represents: Rotation quaternion.
X component of the rotation axis.
Y component of the rotation axis.
Z component of the rotation axis.
Rotation angle in radians.
Create a rotation quaternion.
Modified instance represents: Rotation quaternion.
X component of the rotation axis.
Y component of the rotation axis.
Z component of the rotation axis.
Rotation angle in radians.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Composes two quaternions
Convert to pose
Convert to matrix
Conjugate a quaternion
Perform a rotation by a unit quaternion.
Instance represents: Rotation quaternion.
X coordinate of the point to be rotated.
Y coordinate of the point to be rotated.
Z coordinate of the point to be rotated.
X coordinate of the rotated point.
Y coordinate of the rotated point.
Z coordinate of the rotated point.
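Rotating a point by a unit quaternion evaluates p' = q·(0, p)·q*, where q* is the conjugate. A self-contained sketch using the Hamilton product with (w, x, y, z) component order (an assumption for illustration):

```python
def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, p):
    """Rotate point p by unit quaternion q via p' = q * (0, p) * conj(q)."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mul(quat_mul(q, (0.0, *p)), conj)
    return (rx, ry, rz)
```

For example, a 90-degree rotation about the z axis, q = (cos 45°, 0, 0, sin 45°), maps (1, 0, 0) to (0, 1, 0).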
Generate the conjugation of a quaternion.
Instance represents: Input quaternion.
Conjugated quaternion.
Normalize a quaternion.
Instance represents: Input quaternion.
Normalized quaternion.
Create a rotation quaternion.
Modified instance represents: Rotation quaternion.
X component of the rotation axis.
Y component of the rotation axis.
Z component of the rotation axis.
Rotation angle in radians.
Create a rotation quaternion.
Modified instance represents: Rotation quaternion.
X component of the rotation axis.
Y component of the rotation axis.
Z component of the rotation axis.
Rotation angle in radians.
Convert a quaternion into the corresponding 3D pose.
Instance represents: Rotation quaternion.
3D Pose.
Convert a quaternion into the corresponding rotation matrix.
Instance represents: Rotation quaternion.
Rotation matrix.
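The quaternion-to-matrix conversion follows the standard closed form for a unit quaternion. A sketch assuming (w, x, y, z) component order:

```python
def quat_to_mat(q):
    """3x3 rotation matrix equivalent to unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]
```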
Convert the rotational part of a 3D pose to a quaternion.
3D Pose.
Rotation quaternion.
Convert the rotational part of a 3D pose to a quaternion.
Modified instance represents: Rotation quaternion.
3D Pose.
Interpolation of two quaternions.
Instance represents: Start quaternion.
End quaternion.
Interpolation parameter. Default: 0.5
Interpolated quaternion.
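Quaternion interpolation is usually implemented as slerp (spherical linear interpolation), which moves along the great-circle arc between the two unit quaternions at constant angular speed. A sketch assuming (w, x, y, z) tuples; whether HALCON takes the shorter arc as done here is an assumption:

```python
import math

def slerp(q1, q2, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = sum(a * b for a, b in zip(q1, q2))
    if dot < 0.0:                       # take the shorter arc
        q2, dot = tuple(-c for c in q2), -dot
    theta = math.acos(min(dot, 1.0))
    if theta < 1e-9:                    # nearly identical: lerp is fine
        return tuple((1 - t) * a + t * b for a, b in zip(q1, q2))
    s = math.sin(theta)
    w1, w2 = math.sin((1 - t) * theta) / s, math.sin(t * theta) / s
    return tuple(w1 * a + w2 * b for a, b in zip(q1, q2))
```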
Multiply two quaternions.
Instance represents: Left quaternion.
Right quaternion.
Product of the input quaternions.
Deserialize a serialized quaternion.
Modified instance represents: Quaternion.
Handle of the serialized item.
Serialize a quaternion.
Instance represents: Quaternion.
Handle of the serialized item.
Represents an instance of a region object (or an array of region objects).
Create an uninitialized iconic object
Create a rectangle parallel to the coordinate axes.
Modified instance represents: Created rectangle.
Line of upper left corner point. Default: 30.0
Column of upper left corner point. Default: 20.0
Line of lower right corner point. Default: 100.0
Column of lower right corner point. Default: 200.0
Create a rectangle parallel to the coordinate axes.
Modified instance represents: Created rectangle.
Line of upper left corner point. Default: 30.0
Column of upper left corner point. Default: 20.0
Line of lower right corner point. Default: 100.0
Column of lower right corner point. Default: 200.0
Create an ellipse sector.
Modified instance represents: Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Start angle of the sector. Default: 0.0
End angle of the sector. Default: 3.14159
Create an ellipse sector.
Modified instance represents: Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Start angle of the sector. Default: 0.0
End angle of the sector. Default: 3.14159
Create a circle sector.
Modified instance represents: Generated circle sector.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Start angle of the circle sector. Default: 0.0
End angle of the circle sector. Default: 3.14159
Create a circle sector.
Modified instance represents: Generated circle sector.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Start angle of the circle sector. Default: 0.0
End angle of the circle sector. Default: 3.14159
Create a circle.
Modified instance represents: Generated circle.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Create a circle.
Modified instance represents: Generated circle.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Returns the intersection of regions
Returns the union of regions
Returns the difference of regions
Returns the complement of the region. Note that the result
is not necessarily finite, so you might wish to intersect the
result with the image domain you are interested in.
Returns the intersection of the region and the image domain.
In particular, the result will not exceed the image bounds.
Test if one region is a subset of the other
Test if one region is a subset of the other
Returns the Minkowski addition of regions
Returns the Minkowski subtraction of regions
Dilates the region by the specified radius
Dilates the region by the specified radius
Erodes the region by the specified radius
Translates the region
Zooms the region
Zooms the region
Transposes the region
Converts an XLD contour to a filled region
Converts an XLD polygon to a filled region
Returns an XLD contour representing the region border
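The Minkowski addition above underlies dilation: every point of the region is shifted by every point of the structuring element and the results are united (for a symmetric structuring element, dilation and Minkowski addition coincide). A sketch on plain pixel sets, not on HALCON's run-length-encoded regions:

```python
def minkowski_add(region, struct_elem):
    """Minkowski addition of two pixel sets given as {(row, col)}."""
    return {(r + dr, c + dc)
            for (r, c) in region
            for (dr, dc) in struct_elem}

# a single pixel dilated by a 3x3 cross-shaped structuring element
cross = {(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)}
out = minkowski_add({(5, 5)}, cross)
```

Erosion / Minkowski subtraction keeps only those points where the shifted structuring element fits entirely inside the region.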
Generate XLD contours from regions.
Instance represents: Input regions.
Mode of contour generation. Default: "border"
Resulting contours.
Convert a skeleton into XLD contours.
Instance represents: Skeleton of which the contours are to be determined.
Minimum number of points a contour has to have. Default: 1
Contour filter mode. Default: "filter"
Resulting contours.
Receive regions over a socket connection.
Modified instance represents: Received regions.
Socket number.
Send regions over a socket connection.
Instance represents: Regions to be sent.
Socket number.
Create a model to perform 3D-measurements using the sheet-of-light technique.
Instance represents: Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Handle for using and accessing the sheet-of-light model.
Create a model to perform 3D-measurements using the sheet-of-light technique.
Instance represents: Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Handle for using and accessing the sheet-of-light model.
Selects characters from a given region.
Instance represents: Region of text lines in which to select the characters.
Should dot print characters be detected? Default: "false"
Stroke width of a character. Default: "medium"
Width of a character. Default: 25
Height of a character. Default: 25
Add punctuation? Default: "false"
Are diacritic marks present? Default: "false"
Method to partition neighboring characters. Default: "none"
Should lines be partitioned? Default: "false"
Distance of fragments. Default: "medium"
Connect fragments? Default: "false"
Maximum size of clutter. Default: 0
Stop execution after this step. Default: "completion"
Selected characters.
Segments characters in a given region of an image.
Instance represents: Area in the image where the text lines are located.
Input image.
Region of characters.
Method to segment the characters. Default: "local_auto_shape"
Eliminate horizontal and vertical lines? Default: "false"
Should dot print characters be detected? Default: "false"
Stroke width of a character. Default: "medium"
Width of a character. Default: 25
Height of a character. Default: 25
Value to adjust the segmentation. Default: 0
Minimum gray value difference between text and background. Default: 10
Threshold used to segment the characters.
Image used for the segmentation.
Segments characters in a given region of an image.
Instance represents: Area in the image where the text lines are located.
Input image.
Region of characters.
Method to segment the characters. Default: "local_auto_shape"
Eliminate horizontal and vertical lines? Default: "false"
Should dot print characters be detected? Default: "false"
Stroke width of a character. Default: "medium"
Width of a character. Default: 25
Height of a character. Default: 25
Value to adjust the segmentation. Default: 0
Minimum gray value difference between text and background. Default: 10
Threshold used to segment the characters.
Image used for the segmentation.
Determines the slant of characters of a text line or paragraph.
Instance represents: Area of text lines.
Input image.
Height of the text lines. Default: 25
Minimum slant of the characters. Default: -0.523599
Maximum slant of the characters. Default: 0.523599
Calculated slant of the characters in the region.
Determines the orientation of a text line or paragraph.
Instance represents: Area of text lines.
Input image.
Height of the text lines. Default: 25
Minimum rotation of the text lines. Default: -0.523599
Maximum rotation of the text lines. Default: 0.523599
Calculated rotation angle of the text lines.
Construct classes for class_ndim_norm.
Instance represents: Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Multi-channel training image.
Metric to be used. Default: "euclid"
Maximum cluster radius. Default: 10.0
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than MinNumberPercent (otherwise the cluster is not output). Default: 0.01
Coordinates of all cluster centers.
Overlap of the rejection class with the classified objects (1: no overlap).
Cluster radii or half edge lengths.
Construct classes for class_ndim_norm.
Instance represents: Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Multi-channel training image.
Metric to be used. Default: "euclid"
Maximum cluster radius. Default: 10.0
The ratio of the number of pixels in a cluster to the total number of pixels (in percent) must be larger than MinNumberPercent (otherwise the cluster is not output). Default: 0.01
Coordinates of all cluster centers.
Overlap of the rejection class with the classified objects (1: no overlap).
Cluster radii or half edge lengths.
Train a classifier using a multi-channel image.
Instance represents: Foreground pixels to be trained.
Background pixels to be trained (rejection class).
Multi-channel training image.
Handle of the classifier.
Transform a region in polar coordinates back to Cartesian coordinates.
Instance represents: Input region.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the column coordinate 0 of PolarRegion to. Default: 0.0
Angle of the ray to map the column coordinate WidthIn-1 of PolarRegion to. Default: 6.2831853
Radius of the circle to map the row coordinate 0 of PolarRegion to. Default: 0
Radius of the circle to map the row coordinate HeightIn-1 of PolarRegion to. Default: 100
Width of the virtual input image. Default: 512
Height of the virtual input image. Default: 512
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output region.
Transform a region in polar coordinates back to Cartesian coordinates.
Instance represents: Input region.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the column coordinate 0 of PolarRegion to. Default: 0.0
Angle of the ray to map the column coordinate WidthIn-1 of PolarRegion to. Default: 6.2831853
Radius of the circle to map the row coordinate 0 of PolarRegion to. Default: 0
Radius of the circle to map the row coordinate HeightIn-1 of PolarRegion to. Default: 100
Width of the virtual input image. Default: 512
Height of the virtual input image. Default: 512
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output region.
Transform a region within an annular arc to polar coordinates.
Instance represents: Input region.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to column coordinate 0 of PolarTransRegion. Default: 0.0
Angle of the ray to be mapped to column coordinate Width-1 of PolarTransRegion. Default: 6.2831853
Radius of the circle to be mapped to row coordinate 0 of PolarTransRegion. Default: 0
Radius of the circle to be mapped to row coordinate Height-1 of PolarTransRegion. Default: 100
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output region.
Transform a region within an annular arc to polar coordinates.
Instance represents: Input region.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to column coordinate 0 of PolarTransRegion. Default: 0.0
Angle of the ray to be mapped to column coordinate Width-1 of PolarTransRegion. Default: 6.2831853
Radius of the circle to be mapped to row coordinate 0 of PolarTransRegion. Default: 0
Radius of the circle to be mapped to row coordinate Height-1 of PolarTransRegion. Default: 100
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Interpolation method for the transformation. Default: "nearest_neighbor"
Output region.
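The annular-arc-to-polar mapping above can be sketched as an inverse nearest-neighbor lookup: each output pixel samples one (angle, radius) pair. The sign conventions (angles counter-clockwise, row axis pointing down) are assumptions, not taken from the reference:

```python
import math

def polar_trans(region, row0, col0, angle_start, angle_end,
                radius_start, radius_end, width, height):
    """Map an annular arc around (row0, col0) to a width x height
    polar image: columns sample angles, rows sample radii."""
    out = set()
    for r_out in range(height):
        radius = radius_start + (radius_end - radius_start) * r_out / (height - 1)
        for c_out in range(width):
            angle = angle_start + (angle_end - angle_start) * c_out / (width - 1)
            # nearest-neighbor lookup of the corresponding input pixel
            row = round(row0 - radius * math.sin(angle))
            col = round(col0 + radius * math.cos(angle))
            if (row, col) in region:
                out.add((r_out, c_out))
    return out
```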
Merge regions from line scan images.
Instance represents: Current input regions.
Merged regions from the previous iteration.
Regions from the previous iteration which could not be merged with the current ones.
Height of the line scan images. Default: 512
Image line of the current image, which touches the previous image. Default: "top"
Maximum number of images for a single region. Default: 3
Current regions, merged with old ones where applicable.
Partition a region into rectangles of approximately equal size.
Instance represents: Region to be partitioned.
Width of the individual rectangles.
Height of the individual rectangles.
Partitioned region.
Partition a region horizontally at positions of small vertical extent.
Instance represents: Region to be partitioned.
Approximate width of the resulting region parts.
Maximum shift of the split position, in percent. Default: 20
Partitioned region.
Convert regions to a label image.
Instance represents: Regions to be converted.
Pixel type of the result image. Default: "int2"
Width of the image to be generated. Default: 512
Height of the image to be generated. Default: 512
Result image of dimension Width * Height containing the converted regions.
Convert a region into a binary byte-image.
Instance represents: Regions to be converted.
Gray value in which the regions are displayed. Default: 255
Gray value in which the background is displayed. Default: 0
Width of the image to be generated. Default: 512
Height of the image to be generated. Default: 512
Result image of dimension Width * Height containing the converted regions.
Return the union of two regions.
Instance represents: Region for which the union with all regions in Region2 is to be computed.
Regions which should be added to Region1.
Resulting regions.
Return the union of all input regions.
Instance represents: Regions of which the union is to be computed.
Union of all input regions.
Compute the closest-point transformation of a region.
Instance represents: Region for which the distance to the border is computed.
Image containing the coordinates of the closest points.
Type of metric to be used for the closest-point transformation. Default: "city-block"
Compute the distance for pixels inside (true) or outside (false) the input region. Default: "true"
Mode in which the coordinates of the closest points are returned. Default: "absolute"
Width of the output images. Default: 640
Height of the output images. Default: 480
Image containing the distance information.
Compute the distance transformation of a region.
Instance represents: Region for which the distance to the border is computed.
Type of metric to be used for the distance transformation. Default: "city-block"
Compute the distance for pixels inside (true) or outside (false) the input region. Default: "true"
Width of the output image. Default: 640
Height of the output image. Default: 480
Image containing the distance information.
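For the city-block metric, the distance transform amounts to a multi-source breadth-first search from the background pixels — a sketch on pixel sets, not the library routine:

```python
from collections import deque

def distance_transform_cityblock(region, width, height):
    """City-block distance of every pixel inside `region` to the
    nearest background pixel; background pixels get 0."""
    INF = float("inf")
    dist = [[INF] * width for _ in range(height)]
    queue = deque()
    # seed the BFS with every pixel outside the region
    for r in range(height):
        for c in range(width):
            if (r, c) not in region:
                dist[r][c] = 0
                queue.append((r, c))
    # expand in 4-neighborhood; each step adds 1 to the distance
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < height and 0 <= nc < width and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist
```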
Compute the skeleton of a region.
Instance represents: Region to be thinned.
Resulting skeleton.
Apply a projective transformation to a region.
Instance represents: Input regions.
Homogeneous projective transformation matrix.
Interpolation method for the transformation. Default: "bilinear"
Output regions.
Apply an arbitrary affine 2D transformation to regions.
Instance represents: Region(s) to be rotated and scaled.
Input transformation matrix.
Interpolation method for the transformation. Default: "nearest_neighbor"
Transformed output region(s).
Reflect a region about an axis.
Instance represents: Region(s) to be reflected.
Axis of symmetry. Default: "row"
Twice the coordinate of the axis of symmetry. Default: 512
Reflected region(s).
Zoom a region.
Instance represents: Region(s) to be zoomed.
Scale factor in x-direction. Default: 2.0
Scale factor in y-direction. Default: 2.0
Zoomed region(s).
Translate a region.
Instance represents: Region(s) to be moved.
Row coordinate of the vector by which the region is to be moved. Default: 30
Column coordinate of the vector by which the region is to be moved. Default: 30
Translated region(s).
Find junctions and end points in a skeleton.
Instance represents: Input skeletons.
Extracted junctions.
Extracted end points.
Calculate the intersection of two regions.
Instance represents: Regions to be intersected with all regions in Region2.
Regions with which Region1 is intersected.
Result of the intersection.
Partition the image plane using given regions.
Instance represents: Regions for which the separating lines are to be determined.
Mode of operation. Default: "mixed"
Output region containing the separating lines.
Fill up holes in regions.
Instance represents: Input regions containing holes.
Regions without holes.
Fill up holes in regions having given shape features.
Instance represents: Input region(s).
Shape feature used. Default: "area"
Minimum value for Feature. Default: 1.0
Maximum value for Feature. Default: 100.0
Output region(s) with filled holes.
Fill up holes in regions having given shape features.
Instance represents: Input region(s).
Shape feature used. Default: "area"
Minimum value for Feature. Default: 1.0
Maximum value for Feature. Default: 100.0
Output region(s) with filled holes.
Fill gaps between regions or split overlapping regions.
Instance represents: Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Expanded or separated regions.
Fill gaps between regions or split overlapping regions.
Instance represents: Regions for which the gaps are to be closed, or which are to be separated.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Expanded or separated regions.
Clip a region relative to its smallest surrounding rectangle.
Instance represents: Regions to be clipped.
Number of rows clipped at the top. Default: 1
Number of rows clipped at the bottom. Default: 1
Number of columns clipped at the left. Default: 1
Number of columns clipped at the right. Default: 1
Clipped regions.
Clip a region to a rectangle.
Instance represents: Region to be clipped.
Row coordinate of the upper left corner of the rectangle. Default: 0
Column coordinate of the upper left corner of the rectangle. Default: 0
Row coordinate of the lower right corner of the rectangle. Default: 256
Column coordinate of the lower right corner of the rectangle. Default: 256
Clipped regions.
Rank operator for regions.
Instance represents: Region(s) to be transformed.
Width of the filter mask. Default: 15
Height of the filter mask. Default: 15
Minimum number of points lying within the filter mask. Default: 70
Resulting region(s).
Compute connected components of a region.
Instance represents: Input region.
Connected components.
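Computing connected components is a flood fill over the pixel set; this sketch assumes 8-connectivity:

```python
def connection(region):
    """Split a pixel-set region into 8-connected components."""
    remaining = set(region)
    components = []
    while remaining:
        seed = remaining.pop()
        stack = [seed]
        comp = {seed}
        while stack:
            r, c = stack.pop()
            # visit all 8 neighbors still awaiting a component
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in remaining:
                        remaining.remove(n)
                        comp.add(n)
                        stack.append(n)
        components.append(comp)
    return components
```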
Calculate the symmetric difference of two regions.
Instance represents: Input region 1.
Input region 2.
Resulting region.
Calculate the difference of two regions.
Instance represents: Regions to be processed.
The union of these regions is subtracted from Region.
Resulting region.
Return the complement of a region.
Instance represents: Input region(s).
Complemented regions.
Determine the connected components of the background of given regions.
Instance represents: Input regions.
Connected components of the background.
Generate a region having a given Hamming distance.
Instance represents: Region to be modified.
Width of the region to be changed. Default: 100
Height of the region to be changed. Default: 100
Hamming distance between the old and new regions. Default: 1000
Regions having the required Hamming distance.
Remove noise from a region.
Instance represents: Regions to be modified.
Mode of noise removal. Default: "n_4"
Less noisy regions.
Transform the shape of a region.
Instance represents: Regions to be transformed.
Type of transformation. Default: "convex"
Transformed regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Regions for which the gaps are to be closed, or which are to be separated.
Image (possibly multi-channel) for gray value or color comparison.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Maximum difference between the gray value or color at the region's border and a candidate for expansion. Default: 32
Expanded or separated regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Regions for which the gaps are to be closed, or which are to be separated.
Image (possibly multi-channel) for gray value or color comparison.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Maximum difference between the gray value or color at the region's border and a candidate for expansion. Default: 32
Expanded or separated regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Regions for which the gaps are to be closed, or which are to be separated.
Image (possibly multi-channel) for gray value or color comparison.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Reference gray value or color for comparison. Default: 128
Maximum difference between the reference gray value or color and a candidate for expansion. Default: 32
Expanded or separated regions.
Fill gaps between regions (depending on gray value or color) or split overlapping regions.
Instance represents: Regions for which the gaps are to be closed, or which are to be separated.
Image (possibly multi-channel) for gray value or color comparison.
Regions in which no expansion takes place.
Number of iterations. Default: "maximal"
Expansion mode. Default: "image"
Reference gray value or color for comparison. Default: 128
Maximum difference between the reference gray value or color and a candidate for expansion. Default: 32
Expanded or separated regions.
Split lines represented by one pixel wide, non-branching lines.
Instance represents: Input lines (represented by 1 pixel wide, non-branching regions).
Maximum distance of the line points to the line segment connecting both end points. Default: 3
Row coordinates of the start points of the output lines.
Column coordinates of the start points of the output lines.
Row coordinates of the end points of the output lines.
Column coordinates of the end points of the output lines.
Split lines represented by one pixel wide, non-branching regions.
Instance represents: Input lines (represented by 1 pixel wide, non-branching regions).
Maximum distance of the line points to the line segment connecting both end points. Default: 3
Split lines.
Convert a histogram into a region.
Modified instance represents: Region containing the histogram.
Input histogram.
Row coordinate of the center of the histogram. Default: 255
Column coordinate of the center of the histogram. Default: 255
Scale factor for the histogram. Default: 1
Eliminate runs of a given length.
Instance represents: Region to be clipped.
All runs shorter than this value are eliminated. Default: 3
All runs longer than this value are eliminated. Default: 1000
Clipped regions.
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Paint regions into an image.
Instance represents: Regions to be painted into the input image.
Image in which the regions are to be painted.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Image containing the result.
Paint regions into an image.
Instance represents: Regions to be painted into the input image.
Image in which the regions are to be painted.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Image containing the result.
Overpaint regions in an image.
Instance represents: Regions to be painted into the input image.
Image in which the regions are to be painted.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Overpaint regions in an image.
Instance represents: Regions to be painted into the input image.
Image in which the regions are to be painted.
Desired gray values of the regions. Default: 255.0
Paint regions filled or as boundaries. Default: "fill"
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Test whether a region is contained in another region.
Instance represents: Test region.
Region for comparison.
Is Region1 contained in Region2?
Test whether the regions of two objects are identical.
Instance represents: Test regions.
Comparative regions.
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Store a polygon as a "filled" region.
Modified instance represents: Created region.
Line indices of the base points of the region contour. Default: 100
Column indices of the base points of the region contour. Default: 100
Store a polygon as a region.
Modified instance represents: Created region.
Line indices of the base points of the region contour. Default: 100
Column indices of the base points of the region contour. Default: 100
Store individual pixels as image region.
Modified instance represents: Created region.
Lines of the pixels in the region. Default: 100
Columns of the pixels in the region. Default: 100
Store individual pixels as image region.
Modified instance represents: Created region.
Lines of the pixels in the region. Default: 100
Columns of the pixels in the region. Default: 100
Create a region from a runlength coding.
Modified instance represents: Created region.
Lines of the runs. Default: 100
Columns of the starting points of the runs. Default: 50
Columns of the ending points of the runs. Default: 200
Create a region from a runlength coding.
Modified instance represents: Created region.
Lines of the runs. Default: 100
Columns of the starting points of the runs. Default: 50
Columns of the ending points of the runs. Default: 200
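The runlength (chord) encoding above can be sketched in a few lines; end columns are treated as inclusive, matching the defaults (50..200):

```python
def region_from_runs(rows, col_begin, col_end):
    """Expand a runlength coding (one chord per entry, end column
    inclusive) into a set of (row, col) pixels."""
    return {(r, c)
            for r, cb, ce in zip(rows, col_begin, col_end)
            for c in range(cb, ce + 1)}

def runs_from_region(region):
    """Inverse: re-encode a pixel set as sorted chords."""
    rows, col_begin, col_end = [], [], []
    for r, c in sorted(region):
        if rows and rows[-1] == r and col_end[-1] == c - 1:
            col_end[-1] = c          # extend the current chord
        else:
            rows.append(r)
            col_begin.append(c)
            col_end.append(c)
    return rows, col_begin, col_end
```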
Create a rectangle of any orientation.
Modified instance represents: Created rectangle.
Line index of the center. Default: 300.0
Column index of the center. Default: 200.0
Angle of the first edge to the horizontal (in radians). Default: 0.0
Half width. Default: 100.0
Half height. Default: 20.0
Create a rectangle of any orientation.
Modified instance represents: Created rectangle.
Line index of the center. Default: 300.0
Column index of the center. Default: 200.0
Angle of the first edge to the horizontal (in radians). Default: 0.0
Half width. Default: 100.0
Half height. Default: 20.0
Create a rectangle parallel to the coordinate axes.
Modified instance represents: Created rectangle.
Line of upper left corner point. Default: 30.0
Column of upper left corner point. Default: 20.0
Line of lower right corner point. Default: 100.0
Column of lower right corner point. Default: 200.0
Create a rectangle parallel to the coordinate axes.
Modified instance represents: Created rectangle.
Line of upper left corner point. Default: 30.0
Column of upper left corner point. Default: 20.0
Line of lower right corner point. Default: 100.0
Column of lower right corner point. Default: 200.0
Create a random region.
Modified instance represents: Created random region with expansion Width x Height.
Maximum horizontal expansion of random region. Default: 128
Maximum vertical expansion of random region. Default: 128
Create an ellipse sector.
Modified instance represents: Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Start angle of the sector. Default: 0.0
End angle of the sector. Default: 3.14159
Create an ellipse sector.
Modified instance represents: Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Start angle of the sector. Default: 0.0
End angle of the sector. Default: 3.14159
Create an ellipse.
Modified instance represents: Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Create an ellipse.
Modified instance represents: Created ellipse(s).
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Orientation of the longer radius (Radius1). Default: 0.0
Longer radius. Default: 100.0
Shorter radius. Default: 60.0
Create a circle sector.
Modified instance represents: Generated circle sector.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Start angle of the circle sector. Default: 0.0
End angle of the circle sector. Default: 3.14159
Create a circle sector.
Modified instance represents: Generated circle sector.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Start angle of the circle sector. Default: 0.0
End angle of the circle sector. Default: 3.14159
Create a circle.
Modified instance represents: Generated circle.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
Create a circle.
Modified instance represents: Generated circle.
Line index of center. Default: 200.0
Column index of center. Default: 200.0
Radius of circle. Default: 100.5
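A circle region can be sketched as the set of pixels whose centers lie within the given radius of the center point; the fractional default radius (100.5) avoids ties exactly on the boundary:

```python
def gen_circle(row, col, radius):
    """Pixels within `radius` of (row, col), as a set of (row, col)."""
    r_int = int(radius) + 1
    return {(row + dr, col + dc)
            for dr in range(-r_int, r_int + 1)
            for dc in range(-r_int, r_int + 1)
            if dr * dr + dc * dc <= radius * radius}
```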
Create a checkered region.
Modified instance represents: Created checkerboard region.
Largest occurring x value of the region. Default: 511
Largest occurring y value of the region. Default: 511
Width of a field of the checkerboard. Default: 64
Height of a field of the checkerboard. Default: 64
Create a region from lines or pixels.
Modified instance represents: Created lines/pixel region.
Step width in line direction or zero. Default: 10
Step width in column direction or zero. Default: 10
Type of created pattern. Default: "lines"
Maximum width of pattern. Default: 512
Maximum height of pattern. Default: 512
Create a region from lines or pixels.
Modified instance represents: Created lines/pixel region.
Step width in line direction or zero. Default: 10
Step width in column direction or zero. Default: 10
Type of created pattern. Default: "lines"
Maximum width of pattern. Default: 512
Maximum height of pattern. Default: 512
Create random regions like circles, rectangles and ellipses.
Modified instance represents: Created regions.
Type of regions to be created. Default: "circle"
Minimum width of the region. Default: 10.0
Maximum width of the region. Default: 20.0
Minimum height of the region. Default: 10.0
Maximum height of the region. Default: 30.0
Minimum rotation angle of the region. Default: -0.7854
Maximum rotation angle of the region. Default: 0.7854
Number of regions. Default: 100
Maximum horizontal expansion. Default: 512
Maximum vertical expansion. Default: 512
Create random regions like circles, rectangles and ellipses.
Modified instance represents: Created regions.
Type of regions to be created. Default: "circle"
Minimum width of the region. Default: 10.0
Maximum width of the region. Default: 20.0
Minimum height of the region. Default: 10.0
Maximum height of the region. Default: 30.0
Minimum rotation angle of the region. Default: -0.7854
Maximum rotation angle of the region. Default: 0.7854
Number of regions. Default: 100
Maximum horizontal expansion. Default: 512
Maximum vertical expansion. Default: 512
Store input lines described in Hesse normal form as regions.
Modified instance represents: Created regions (one for every line), clipped to maximum image format.
Orientation of the normal vector in radians. Default: 0.0
Distance from the line to the origin (0,0). Default: 200
Store input lines described in Hesse normal form as regions.
Modified instance represents: Created regions (one for every line), clipped to maximum image format.
Orientation of the normal vector in radians. Default: 0.0
Distance from the line to the origin (0,0). Default: 200
Store input lines as regions.
Modified instance represents: Created regions.
Line coordinates of the starting points of the input lines. Default: 100
Column coordinates of the starting points of the input lines. Default: 50
Line coordinates of the ending points of the input lines. Default: 150
Column coordinates of the ending points of the input lines. Default: 250
Store input lines as regions.
Modified instance represents: Created regions.
Line coordinates of the starting points of the input lines. Default: 100
Column coordinates of the starting points of the input lines. Default: 50
Line coordinates of the ending points of the input lines. Default: 150
Column coordinates of the ending points of the input lines. Default: 250
Create an empty region.
Modified instance represents: Empty region (no pixels).
Access the thickness of a region along the main axis.
Instance represents: Region to be analysed.
Histogram of the thickness of the region along its main axis.
Thickness of the region along its main axis.
Polygon approximation of a region.
Instance represents: Region to be approximated.
Maximum distance between the polygon and the edge of the region. Default: 5.0
Line numbers of the base points of the contour.
Column numbers of the base points of the contour.
Polygon approximation of a region.
Instance represents: Region to be approximated.
Maximum distance between the polygon and the edge of the region. Default: 5.0
Line numbers of the base points of the contour.
Column numbers of the base points of the contour.
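Polygon approximation with a maximum-distance tolerance is classically done with the Ramer-Douglas-Peucker recursion; a sketch (whether HALCON uses exactly this algorithm is an assumption):

```python
import math

def approx_polygon(points, tolerance):
    """Keep a vertex wherever the contour departs from the chord
    between the first and last point by more than `tolerance`."""
    if len(points) < 3:
        return list(points)
    (r1, c1), (r2, c2) = points[0], points[-1]
    length = math.hypot(r2 - r1, c2 - c1)
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        r, c = points[i]
        if length == 0:
            d = math.hypot(r - r1, c - c1)
        else:
            # perpendicular distance of (r, c) to the chord, via cross product
            d = abs((r2 - r1) * (c1 - c) - (r1 - r) * (c2 - c1)) / length
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tolerance:
        return [points[0], points[-1]]
    # recurse on both halves around the farthest point
    left = approx_polygon(points[:best_i + 1], tolerance)
    right = approx_polygon(points[best_i:], tolerance)
    return left[:-1] + right
```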
Access the pixels of a region.
Instance represents: This region is accessed.
Line numbers of the pixels in the region.
Column numbers of the pixels in the region.
Access the contour of an object.
Instance represents: Output region.
Line numbers of the contour pixels.
Column numbers of the contour pixels.
Access the runlength coding of a region.
Instance represents: Output region.
Line numbers of the chords.
Column numbers of the starting points of the chords.
Column numbers of the ending points of the chords.
Contour of an object as chain code.
Instance represents: Region to be transformed.
Line of starting point.
Column of starting point.
Direction code of the contour (from starting point).
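A chain code stores the starting point plus one direction code per contour step. A sketch of an 8-direction Freeman chain code (the direction numbering here is illustrative; HALCON's convention may differ):

```python
# 8-neighborhood step directions, counterclockwise starting from "east".
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(contour):
    """contour: consecutive 8-connected (row, col) contour pixels."""
    code = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        code.append(DIRS.index((r1 - r0, c1 - c0)))
    return contour[0], code   # starting point plus direction codes
```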
Access convex hull as contour.
Instance represents: Output region.
Line numbers of contour pixels.
Column numbers of the contour pixels.
Classify a related group of characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the k-NN.
Classify a related group of characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the k-NN.
Classify multiple characters with a k-NN classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the k-NN classifier.
Confidence of the class of the characters.
Result of classifying the characters with the k-NN.
Classify multiple characters with a k-NN classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the k-NN classifier.
Confidence of the class of the characters.
Result of classifying the characters with the k-NN.
Classify a single character with an OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the k-NN classifier.
Maximum number of classes to determine. Default: 1
Number of neighbors to consider. Default: 1
Confidence(s) of the class(es) of the character.
Results of classifying the character with the k-NN.
Classify a single character with an OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the k-NN classifier.
Maximum number of classes to determine. Default: 1
Number of neighbors to consider. Default: 1
Confidence(s) of the class(es) of the character.
Results of classifying the character with the k-NN.
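The k-NN operators above classify a character by its nearest training samples and report a confidence. A generic Python sketch of the principle on plain feature vectors (the real classifier works on gray-value features of character regions; names here are illustrative):

```python
# Minimal k-nearest-neighbors classification with a vote-based confidence.
from collections import Counter

def knn_classify(samples, labels, query, k=1):
    """Return (best_class, confidence) for `query` among labeled samples."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(s, query)), lab)
        for s, lab in zip(samples, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    best, count = votes.most_common(1)[0]
    return best, count / k   # fraction of the k neighbors voting for best
```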
Classify a related group of characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the SVM.
Classify multiple characters with an SVM-based OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Result of classifying the characters with the SVM.
Classify a single character with an SVM-based OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Result of classifying the character with the SVM.
Classify a related group of characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the MLP.
Classify a related group of characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the MLP.
Classify multiple characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Confidence of the class of the characters.
Result of classifying the characters with the MLP.
Classify multiple characters with an OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Confidence of the class of the characters.
Result of classifying the characters with the MLP.
Classify a single character with an OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the MLP.
Classify a single character with an OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the MLP.
Classify one character.
Instance represents: Character to be recognized.
Gray values of the characters.
ID of the OCR classifier.
Confidence values of the characters.
Classes (names) of the characters.
Classify characters.
Instance represents: Characters to be recognized.
Gray values for the characters.
ID of the OCR classifier.
Confidence values of the characters.
Class (name) of the characters.
Classify characters.
Instance represents: Characters to be recognized.
Gray values for the characters.
ID of the OCR classifier.
Confidence values of the characters.
Class (name) of the characters.
Train an OCR classifier by the input of regions.
Instance represents: Characters to be trained.
Gray values for the characters.
ID of the desired OCR-classifier.
Class (name) of the characters. Default: "a"
Average confidence during a re-classification of the trained characters.
Train an OCR classifier by the input of regions.
Instance represents: Characters to be trained.
Gray values for the characters.
ID of the desired OCR-classifier.
Class (name) of the characters. Default: "a"
Average confidence during a re-classification of the trained characters.
Protection of training data.
Names of the training files. Default: ""
Passwords for protecting the training files.
Names of the protected training files.
Protection of training data.
Names of the training files. Default: ""
Passwords for protecting the training files.
Names of the protected training files.
Storing of training characters into a file.
Instance represents: Characters to be trained.
Gray values of the characters.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Storing of training characters into a file.
Instance represents: Characters to be trained.
Gray values of the characters.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Sorting of regions with respect to their relative position.
Instance represents: Regions to be sorted.
Kind of sorting. Default: "first_point"
Increasing or decreasing sorting order. Default: "true"
Sorting first with respect to row, then to column. Default: "row"
Sorted regions.
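One sorting mode above orders regions by their first (upper-left) point, first by row and then by column. A minimal Python sketch with regions given as pixel lists (illustrative only):

```python
# Sort regions by their lexicographically smallest (row, col) pixel.
def sort_regions(regions, ascending=True):
    """regions: list of pixel lists [(row, col), ...]."""
    def first_point(region):
        return min(region)   # smallest row, then smallest column
    return sorted(regions, key=first_point, reverse=not ascending)
```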
Test an OCR classifier.
Instance represents: Characters to be tested.
Gray values for the characters.
ID of the desired OCR-classifier.
Class (name) of the characters. Default: "a"
Confidence for the character to belong to the class.
Test an OCR classifier.
Instance represents: Characters to be tested.
Gray values for the characters.
ID of the desired OCR-classifier.
Class (name) of the characters. Default: "a"
Confidence for the character to belong to the class.
Add characters to a training file.
Instance represents: Characters to be trained.
Gray values of the characters.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Add characters to a training file.
Instance represents: Characters to be trained.
Gray values of the characters.
Class (name) of the characters.
Name of the training file. Default: "train_ocr"
Prune the branches of a region.
Instance represents: Regions to be processed.
Length of the branches to be removed. Default: 2
Result of the pruning operation.
Reduce a region to its boundary.
Instance represents: Regions for which the boundary is to be computed.
Boundary type. Default: "inner"
Resulting boundaries.
Perform a closing after an opening with multiple structuring elements.
Instance represents: Regions to be processed.
Structuring elements.
Fitted regions.
Generate standard structuring elements.
Modified instance represents: Generated structuring elements.
Type of structuring element to generate. Default: "noise"
Row coordinate of the reference point. Default: 1
Column coordinate of the reference point. Default: 1
Reflect a region about a point.
Instance represents: Region to be reflected.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Transposed region.
Remove the result of a hit-or-miss operation from a region (sequential).
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "l"
Number of iterations. For 'f', 'f2', 'h' and 'i' the only useful value is 1. Default: 20
Result of the thinning operator.
Remove the result of a hit-or-miss operation from a region (sequential).
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "l"
Number of iterations. For 'f', 'f2', 'h' and 'i' the only useful value is 1. Default: 20
Result of the thinning operator.
Remove the result of a hit-or-miss operation from a region (using a Golay structuring element).
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Result of the thinning operator.
Remove the result of a hit-or-miss operation from a region.
Instance represents: Regions to be processed.
Structuring element for the foreground.
Structuring element for the background.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Result of the thinning operator.
Add the result of a hit-or-miss operation to a region (sequential).
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Result of the thickening operator.
Add the result of a hit-or-miss operation to a region (using a Golay structuring element).
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Result of the thickening operator.
Add the result of a hit-or-miss operation to a region.
Instance represents: Regions to be processed.
Structuring element for the foreground.
Structuring element for the background.
Row coordinate of the reference point. Default: 16
Column coordinate of the reference point. Default: 16
Number of iterations. Default: 1
Result of the thickening operator.
Hit-or-miss operation for regions using the Golay alphabet (sequential).
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "h"
Result of the hit-or-miss operation.
Hit-or-miss operation for regions using the Golay alphabet.
Instance represents: Regions to be processed.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Result of the hit-or-miss operation.
Hit-or-miss operation for regions.
Instance represents: Regions to be processed.
Erosion mask for the input regions.
Erosion mask for the complements of the input regions.
Row coordinate of the reference point. Default: 16
Column coordinate of the reference point. Default: 16
Result of the hit-or-miss operation.
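The hit-or-miss transform keeps a position when the foreground element fits inside the region and the background element fits in its complement. A set-based Python sketch, scanning a bounding box for illustration (not the HALCON implementation):

```python
# Hit-or-miss transform on a region given as a set of (row, col) pixels.
def hit_or_miss(region, hit, miss, rows, cols):
    """hit/miss: structuring elements as offsets from the reference point."""
    result = set()
    for r in range(rows):
        for c in range(cols):
            ok_hit = all((r + dr, c + dc) in region for dr, dc in hit)
            ok_miss = all((r + dr, c + dc) not in region for dr, dc in miss)
            if ok_hit and ok_miss:
                result.add((r, c))
    return result
```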
Generate the structuring elements of the Golay alphabet.
Modified instance represents: Structuring element for the foreground.
Name of the structuring element. Default: "l"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Row coordinate of the reference point. Default: 16
Column coordinate of the reference point. Default: 16
Structuring element for the background.
Thinning of a region.
Instance represents: Regions to be thinned.
Number of iterations for the sequential thinning with the element 'l' of the Golay alphabet. Default: 100
Number of iterations for the sequential thinning with the element 'e' of the Golay alphabet. Default: 1
Result of the skiz operator.
Thinning of a region.
Instance represents: Regions to be thinned.
Number of iterations for the sequential thinning with the element 'l' of the Golay alphabet. Default: 100
Number of iterations for the sequential thinning with the element 'e' of the Golay alphabet. Default: 1
Result of the skiz operator.
Compute the morphological skeleton of a region.
Instance represents: Regions to be processed.
Resulting morphological skeleton.
Compute the union of bottom_hat and top_hat.
Instance represents: Regions to be processed.
Structuring element (position-invariant).
Union of top hat and bottom hat.
Compute the bottom hat of regions.
Instance represents: Regions to be processed.
Structuring element (position independent).
Result of the bottom hat operator.
Compute the top hat of regions.
Instance represents: Regions to be processed.
Structuring element (position independent).
Result of the top hat operator.
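For regions, the top hat is the set difference between the region and its opening. A set-based Python sketch assuming a symmetric structuring element (so the reflected element equals the element itself); illustrative only:

```python
# Region top hat as "region minus its opening", with set-based morphology.
def erode(region, se):
    return {(r, c) for r, c in region
            if all((r + dr, c + dc) in region for dr, dc in se)}

def dilate(region, se):
    return {(r + dr, c + dc) for r, c in region for dr, dc in se}

def top_hat(region, se):
    opening = dilate(erode(region, se), se)   # symmetric se assumed
    return region - opening                   # pixels removed by the opening
```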
Erode a region (using a reference point).
Instance represents: Regions to be eroded.
Structuring element.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Eroded regions.
Erode a region.
Instance represents: Regions to be eroded.
Structuring element.
Number of iterations. Default: 1
Eroded regions.
Dilate a region (using a reference point).
Instance represents: Regions to be dilated.
Structuring element.
Row coordinate of the reference point.
Column coordinate of the reference point.
Number of iterations. Default: 1
Dilated regions.
Perform a Minkowski addition on a region.
Instance represents: Regions to be dilated.
Structuring element.
Number of iterations. Default: 1
Dilated regions.
Close a region with a rectangular structuring element.
Instance represents: Regions to be closed.
Width of the structuring rectangle. Default: 10
Height of the structuring rectangle. Default: 10
Closed regions.
Close a region with an element from the Golay alphabet.
Instance represents: Regions to be closed.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Closed regions.
Close a region with a circular structuring element.
Instance represents: Regions to be closed.
Radius of the circular structuring element. Default: 3.5
Closed regions.
Close a region with a circular structuring element.
Instance represents: Regions to be closed.
Radius of the circular structuring element. Default: 3.5
Closed regions.
Close a region.
Instance represents: Regions to be closed.
Structuring element (position-invariant).
Closed regions.
Separate overlapping regions.
Instance represents: Regions to be opened.
Structuring element (position-invariant).
Opened regions.
Open a region with an element from the Golay alphabet.
Instance represents: Regions to be opened.
Structuring element from the Golay alphabet. Default: "h"
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Opened regions.
Open a region with a rectangular structuring element.
Instance represents: Regions to be opened.
Width of the structuring rectangle. Default: 10
Height of the structuring rectangle. Default: 10
Opened regions.
Open a region with a circular structuring element.
Instance represents: Regions to be opened.
Radius of the circular structuring element. Default: 3.5
Opened regions.
Open a region with a circular structuring element.
Instance represents: Regions to be opened.
Radius of the circular structuring element. Default: 3.5
Opened regions.
Open a region.
Instance represents: Regions to be opened.
Structuring element (position-invariant).
Opened regions.
Erode a region sequentially.
Instance represents: Regions to be eroded.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Eroded regions.
Erode a region with an element from the Golay alphabet.
Instance represents: Regions to be eroded.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Eroded regions.
Erode a region with a rectangular structuring element.
Instance represents: Regions to be eroded.
Width of the structuring rectangle. Default: 11
Height of the structuring rectangle. Default: 11
Eroded regions.
Erode a region with a circular structuring element.
Instance represents: Regions to be eroded.
Radius of the circular structuring element. Default: 3.5
Eroded regions.
Erode a region with a circular structuring element.
Instance represents: Regions to be eroded.
Radius of the circular structuring element. Default: 3.5
Eroded regions.
Erode a region (using a reference point).
Instance represents: Regions to be eroded.
Structuring element.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Eroded regions.
Erode a region.
Instance represents: Regions to be eroded.
Structuring element.
Number of iterations. Default: 1
Eroded regions.
Dilate a region sequentially.
Instance represents: Regions to be dilated.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Dilated regions.
Dilate a region with an element from the Golay alphabet.
Instance represents: Regions to be dilated.
Structuring element from the Golay alphabet. Default: "h"
Number of iterations. Default: 1
Rotation of the Golay element. Depending on the element, not all rotations are valid. Default: 0
Dilated regions.
Dilate a region with a rectangular structuring element.
Instance represents: Regions to be dilated.
Width of the structuring rectangle. Default: 11
Height of the structuring rectangle. Default: 11
Dilated regions.
Dilate a region with a circular structuring element.
Instance represents: Regions to be dilated.
Radius of the circular structuring element. Default: 3.5
Dilated regions.
Dilate a region with a circular structuring element.
Instance represents: Regions to be dilated.
Radius of the circular structuring element. Default: 3.5
Dilated regions.
Dilate a region (using a reference point).
Instance represents: Regions to be dilated.
Structuring element.
Row coordinate of the reference point. Default: 0
Column coordinate of the reference point. Default: 0
Number of iterations. Default: 1
Dilated regions.
Dilate a region.
Instance represents: Regions to be dilated.
Structuring element.
Number of iterations. Default: 1
Dilated regions.
Add gray values to regions.
Instance represents: Input regions (without pixel values).
Input image with pixel values for regions.
Output image(s) with regions and pixel values (one image per input region).
Centres of circles for a specific radius.
Instance represents: Binary edge image in which the circles are to be detected.
Radius of the circle to be searched in the image. Default: 12
Indicates the percentage (approximately) of the (ideal) circle which must be present in the edge image RegionIn. Default: 60
The mode defines the position of the circle in question: 0 - the radius is equivalent to the outer border of the set pixels; 1 - the radius is equivalent to the centres of the circle lines' pixels; 2 - both 0 and 1 (a little more fuzzy, but more reliable for slightly deviating circles; necessitates about 50% more runtime). Default: 0
Centres of those circles of which at least Percent percent are contained in the edge image.
Centres of circles for a specific radius.
Instance represents: Binary edge image in which the circles are to be detected.
Radius of the circle to be searched in the image. Default: 12
Indicates the percentage (approximately) of the (ideal) circle which must be present in the edge image RegionIn. Default: 60
The mode defines the position of the circle in question: 0 - the radius is equivalent to the outer border of the set pixels; 1 - the radius is equivalent to the centres of the circle lines' pixels; 2 - both 0 and 1 (a little more fuzzy, but more reliable for slightly deviating circles; necessitates about 50% more runtime). Default: 0
Centres of those circles of which at least Percent percent are contained in the edge image.
Return the Hough-Transform for circles with a given radius.
Instance represents: Binary edge image in which the circles are to be detected.
Radius of the circle to be searched in the image. Default: 12
Hough transform for circles with a given radius.
Return the Hough-Transform for circles with a given radius.
Instance represents: Binary edge image in which the circles are to be detected.
Radius of the circle to be searched in the image. Default: 12
Hough transform for circles with a given radius.
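In the circle Hough transform for a fixed radius, each edge pixel votes for all candidate centres at that radius; peaks in the accumulator mark circle centres. A coarse Python sketch (integer-rounded accumulator; parameters are illustrative):

```python
# Accumulator sketch of the circle Hough transform for a fixed radius.
import math
from collections import Counter

def hough_circles(edge_pixels, radius, n_angles=64):
    """edge_pixels: (row, col) coordinates; returns the vote accumulator."""
    acc = Counter()
    for r, c in edge_pixels:
        for i in range(n_angles):
            a = 2 * math.pi * i / n_angles
            center = (round(r + radius * math.sin(a)),
                      round(c + radius * math.cos(a)))
            acc[center] += 1   # each pixel votes on a circle of centres
    return acc                 # peak height ~ fraction of the circle present
```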
Detect lines in edge images with the help of the Hough transform and return them in HNF (Hessian normal form).
Instance represents: Binary edge image in which the lines are to be detected.
Adjustment of the resolution in the angle domain. Default: 4
Threshold value in the Hough image. Default: 100
Minimal distance of two maxima in the Hough image (direction: angle). Default: 5
Minimal distance of two maxima in the Hough image (direction: distance). Default: 5
Distance of the detected lines from the origin.
Angles (in radians) of the detected lines' normal vectors.
Produce the Hough transform for lines within regions.
Instance represents: Binary edge image in which lines are to be detected.
Adjustment of the resolution in the angle domain. Default: 4
Hough transform for lines.
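In the line Hough transform with Hessian normal form, each edge pixel (r, c) votes, for every sampled normal angle theta, for the distance d = c·cos(theta) + r·sin(theta); accumulator peaks give the (theta, d) of the lines. A minimal Python sketch (bin sizes are illustrative):

```python
# Accumulator sketch of the line Hough transform in Hessian normal form.
import math
from collections import Counter

def hough_lines(edge_pixels, n_angles=180):
    """Returns Counter over (angle_index, distance) bins."""
    acc = Counter()
    for r, c in edge_pixels:
        for i in range(n_angles):
            theta = math.pi * i / n_angles
            d = round(c * math.cos(theta) + r * math.sin(theta))
            acc[(i, d)] += 1
    return acc
```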
Select those lines from a set of lines (in HNF) which fit best into a region.
Instance represents: Region in which the lines are to be matched.
Angles (in radians) of the normal vectors of the input lines.
Distances of the input lines from the origin.
Widths of the lines. Default: 7
Threshold value for the number of line points in the region. Default: 100
Angles (in radians) of the normal vectors of the selected lines.
Distances of the selected lines from the origin.
Region array containing the matched lines.
Select those lines from a set of lines (in HNF) which fit best into a region.
Instance represents: Region in which the lines are to be matched.
Angles (in radians) of the normal vectors of the input lines.
Distances of the input lines from the origin.
Widths of the lines. Default: 7
Threshold value for the number of line points in the region. Default: 100
Angles (in radians) of the normal vectors of the selected lines.
Distances of the selected lines from the origin.
Region array containing the matched lines.
Query the icon for region output.
Modified instance represents: Icon for the region's center of gravity.
Window handle.
Icon definition for region output.
Instance represents: Icon for center of gravity.
Window handle.
Display regions in a window.
Instance represents: Regions to display.
Window handle.
Interactive movement of a region with restriction of positions.
Instance represents: Regions to move.
Points on which it is allowed for a region to move.
Window handle.
Row index of the reference point. Default: 100
Column index of the reference point. Default: 100
Moved regions.
Interactive movement of a region with fixpoint specification.
Instance represents: Regions to move.
Window handle.
Row index of the reference point. Default: 100
Column index of the reference point. Default: 100
Moved regions.
Interactive moving of a region.
Instance represents: Regions to move.
Window handle.
Moved regions.
Interactive drawing of a closed region.
Modified instance represents: Interactively created region.
Window handle.
Interactive drawing of a polygon row.
Modified instance represents: Region encompassing all painted points.
Window handle.
Calculate the distance between a line segment and one region.
Instance represents: Input region.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the line segment and the region.
Maximum distance between the line segment and the region.
Calculate the distance between a line segment and one region.
Instance represents: Input region.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the line segment and the region.
Maximum distance between the line segment and the region.
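The minimum distance to a line segment requires clamping the projection onto the segment. A Python sketch of the minimum over all region pixels (illustrative; the maximum works analogously):

```python
# Minimum distance between a region and a line segment (Row1,Col1)-(Row2,Col2).
import math

def point_segment_dist(r, c, r1, c1, r2, c2):
    vr, vc = r2 - r1, c2 - c1
    wr, wc = r - r1, c - c1
    # Projection parameter clamped to [0, 1] so we stay on the segment.
    t = 0.0 if (vr == 0 and vc == 0) else \
        max(0.0, min(1.0, (wr * vr + wc * vc) / (vr * vr + vc * vc)))
    return math.hypot(wr - t * vr, wc - t * vc)

def distance_sr_min(region, r1, c1, r2, c2):
    return min(point_segment_dist(r, c, r1, c1, r2, c2) for r, c in region)
```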
Calculate the distance between a line and a region.
Instance represents: Input region.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line and the region.
Maximum distance between the line and the region.
Calculate the distance between a line and a region.
Instance represents: Input region.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line and the region.
Maximum distance between the line and the region.
Calculate the distance between a point and a region.
Instance represents: Input region.
Row coordinate of the point.
Column coordinate of the point.
Minimum distance between the point and the region.
Maximum distance between the point and the region.
Calculate the distance between a point and a region.
Instance represents: Input region.
Row coordinate of the point.
Column coordinate of the point.
Minimum distance between the point and the region.
Maximum distance between the point and the region.
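The point-to-region distance pair above is simply the minimum and maximum Euclidean distance over the region's pixels. A minimal Python sketch:

```python
# Min/max distance between a point (row, col) and a region of pixels.
import math

def distance_pr(region, row, col):
    dists = [math.hypot(r - row, c - col) for r, c in region]
    return min(dists), max(dists)
```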
Determine the noise distribution of an image.
Instance represents: Region from which the noise distribution is to be estimated.
Corresponding image.
Size of the mean filter. Default: 21
Noise distribution of all input regions.
Determine the fuzzy entropy of regions.
Instance represents: Regions for which the fuzzy entropy is to be calculated.
Input image containing the fuzzy membership values.
Start of the fuzzy function. Default: 0
End of the fuzzy function. Default: 255
Fuzzy entropy of a region.
Calculate the fuzzy perimeter of a region.
Instance represents: Regions for which the fuzzy perimeter is to be calculated.
Input image containing the fuzzy membership values.
Start of the fuzzy function. Default: 0
End of the fuzzy function. Default: 255
Fuzzy perimeter of a region.
Paint regions with their average gray value.
Instance represents: Input regions.
Original gray-value image.
Result image with painted regions.
Close edge gaps using the edge amplitude image.
Instance represents: Region containing one-pixel-thick edges.
Edge amplitude (gradient) image.
Minimum edge amplitude. Default: 16
Maximal number of points by which edges are extended. Default: 3
Region containing closed edges.
Close edge gaps using the edge amplitude image.
Instance represents: Region containing one-pixel-thick edges.
Edge amplitude (gradient) image.
Minimum edge amplitude. Default: 16
Region containing closed edges.
Deserialize a serialized region.
Modified instance represents: Region.
Handle of the serialized item.
Serialize a region.
Instance represents: Region.
Handle of the serialized item.
Write regions to a file.
Instance represents: Regions to be written to the file.
Name of region file. Default: "region.hobj"
Read binary images or HALCON regions.
Modified instance represents: Read region.
Name of the region to be read.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Moment of 2nd order.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Moment of 3rd order.
Moment of 2nd order.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Moment of 3rd order.
Moment of 2nd order.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 3rd order (column-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (line-dependent).
Moment of 3rd order (line-dependent).
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 3rd order (column-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (line-dependent).
Moment of 3rd order (line-dependent).
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 3rd order (column-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (line-dependent).
Moment of 3rd order (line-dependent).
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 3rd order (column-dependent).
Moment of 3rd order (column-dependent).
Moment of 3rd order (line-dependent).
Moment of 3rd order (line-dependent).
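Geometric moments of a region are sums of coordinate powers over its pixels; the second-order central moments are taken about the center of gravity. A Python sketch normalized by area (HALCON's exact normalization may differ):

```python
# Second-order central moments of a pixel region about its centroid.
def central_moments_2nd(region):
    n = len(region)
    mr = sum(r for r, _ in region) / n          # row centroid
    mc = sum(c for _, c in region) / n          # column centroid
    mrr = sum((r - mr) ** 2 for r, _ in region) / n
    mcc = sum((c - mc) ** 2 for _, c in region) / n
    mrc = sum((r - mr) * (c - mc) for r, c in region) / n
    return mrr, mrc, mcc
```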
Smallest surrounding rectangle with any orientation.
Instance represents: Regions to be examined.
Line index of the center.
Column index of the center.
Orientation of the surrounding rectangle (in radians).
First radius (half length) of the surrounding rectangle.
Second radius (half width) of the surrounding rectangle.
Smallest surrounding rectangle with any orientation.
Instance represents: Regions to be examined.
Line index of the center.
Column index of the center.
Orientation of the surrounding rectangle (in radians).
First radius (half length) of the surrounding rectangle.
Second radius (half width) of the surrounding rectangle.
Surrounding rectangle parallel to the coordinate axes.
Instance represents: Regions to be examined.
Line index of upper left corner point.
Column index of upper left corner point.
Line index of lower right corner point.
Column index of lower right corner point.
Surrounding rectangle parallel to the coordinate axes.
Instance represents: Regions to be examined.
Line index of upper left corner point.
Column index of upper left corner point.
Line index of lower right corner point.
Column index of lower right corner point.
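To make the geometry concrete, here is a minimal Python sketch of what the axis-parallel surrounding rectangle amounts to; the region is modeled as a set of (row, col) pixel tuples, and the function name is an illustrative assumption, not the halcondotnet API:

```python
def smallest_rectangle1(region):
    """Axis-aligned bounding box: returns (row1, col1, row2, col2),
    the upper left and lower right corner points of the region."""
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    return min(rows), min(cols), max(rows), max(cols)
```

For example, smallest_rectangle1({(2, 3), (2, 4), (5, 7)}) yields (2, 3, 5, 7).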
Smallest surrounding circle of a region.
Instance represents: Regions to be examined.
Line index of the center.
Column index of the center.
Radius of the surrounding circle.
Smallest surrounding circle of a region.
Instance represents: Regions to be examined.
Line index of the center.
Column index of the center.
Radius of the surrounding circle.
Choose regions having a certain relation to each other.
Instance represents: Regions to be examined.
Region compared to Regions.
Shape features to be checked. Default: "covers"
Lower border of feature. Default: 50.0
Upper border of the feature. Default: 100.0
Regions fulfilling the condition.
Choose regions having a certain relation to each other.
Instance represents: Regions to be examined.
Region compared to Regions.
Shape features to be checked. Default: "covers"
Lower border of feature. Default: 50.0
Upper border of the feature. Default: 100.0
Regions fulfilling the condition.
Calculate shape features of regions.
Instance represents: Regions to be examined.
Shape features to be calculated. Default: "area"
The calculated features.
Calculate shape features of regions.
Instance represents: Regions to be examined.
Shape features to be calculated. Default: "area"
The calculated features.
Choose regions with the aid of shape features.
Instance represents: Regions to be examined.
Shape features to be checked. Default: "area"
Linkage type of the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Regions fulfilling the condition.
Choose regions with the aid of shape features.
Instance represents: Regions to be examined.
Shape features to be checked. Default: "area"
Linkage type of the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Regions fulfilling the condition.
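The linkage semantics ("and" vs. "or") can be illustrated with a small Python sketch that filters precomputed feature values; the function name and the dict-based data model are assumptions for illustration, not the halcondotnet API:

```python
def select_by_features(features, names, operation, mins, maxs):
    """Keep the indices of regions whose features lie within [min, max];
    'and' requires every named feature to pass, 'or' at least one."""
    kept = []
    for i, feat in enumerate(features):
        tests = [lo <= feat[n] <= hi for n, lo, hi in zip(names, mins, maxs)]
        if all(tests) if operation == "and" else any(tests):
            kept.append(i)
    return kept
```

With "and", a region must satisfy every limit pair; with "or", a single passing feature suffices.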
Characteristic values for runlength coding of regions.
Instance represents: Regions to be examined.
Storing factor in relation to a square.
Mean number of runs per line.
Mean length of runs.
Number of bytes necessary for coding the region.
Number of runs.
Characteristic values for runlength coding of regions.
Instance represents: Regions to be examined.
Storing factor in relation to a square.
Mean number of runs per line.
Mean length of runs.
Number of bytes necessary for coding the region.
Number of runs.
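The runlength coding these characteristics describe decomposes a region into maximal horizontal runs; a minimal Python sketch under the illustrative set-of-pixels model (names assumed, not the halcondotnet API):

```python
def runlength_runs(region):
    """Decompose a region into maximal horizontal runs, each encoded as
    (row, col_begin, col_end); the number of runs drives the features above."""
    runs = []
    for r, c in sorted(region):
        if runs and runs[-1][0] == r and runs[-1][2] == c - 1:
            runs[-1][2] = c  # extend the current run
        else:
            runs.append([r, c, c])  # start a new run
    return [tuple(run) for run in runs]
```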
Search direct neighbors.
Instance represents: Starting regions.
Comparative regions.
Maximal distance of regions. Default: 1
Indices of the found regions from Regions2.
Indices of the found regions from Regions1.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order.
Moment of 2nd order.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order (line-dependent).
Moment of 2nd order (column-dependent).
Product of inertia of the axes through the center parallel to the coordinate axes.
Geometric moments of regions.
Instance represents: Regions to be examined.
Moment of 2nd order (line-dependent).
Moment of 2nd order (column-dependent).
Product of inertia of the axes through the center parallel to the coordinate axes.
Calculate the geometric moments of regions.
Instance represents: Input regions.
Moment of 2nd order (row-dependent).
Moment of 2nd order (column-dependent).
Length of the major axis of the input region.
Length of the minor axis of the input region.
Product of inertia of the axes through the center parallel to the coordinate axes.
Calculate the geometric moments of regions.
Instance represents: Input regions.
Moment of 2nd order (row-dependent).
Moment of 2nd order (column-dependent).
Length of the major axis of the input region.
Length of the minor axis of the input region.
Product of inertia of the axes through the center parallel to the coordinate axes.
Minimum distance between the contour pixels of two regions each.
Instance represents: Regions to be examined.
Regions to be examined.
Line index on contour in Regions1.
Column index on contour in Regions1.
Line index on contour in Regions2.
Column index on contour in Regions2.
Minimum distance between contours of the regions.
Minimum distance between the contour pixels of two regions each.
Instance represents: Regions to be examined.
Regions to be examined.
Line index on contour in Regions1.
Column index on contour in Regions1.
Line index on contour in Regions2.
Column index on contour in Regions2.
Minimum distance between contours of the regions.
Minimum distance between two regions with the help of dilation.
Instance represents: Regions to be examined.
Regions to be examined.
Minimum distances of the regions.
Maximal distance between two boundary points of a region.
Instance represents: Regions to be examined.
Row index of the first extreme point.
Column index of the first extreme point.
Row index of the second extreme point.
Column index of the second extreme point.
Distance of the two extreme points.
Maximal distance between two boundary points of a region.
Instance represents: Regions to be examined.
Row index of the first extreme point.
Column index of the first extreme point.
Row index of the second extreme point.
Column index of the second extreme point.
Distance of the two extreme points.
Test if the region contains a given point.
Instance represents: Region(s) to be examined.
Row index of the test pixel(s). Default: 100
Column index of the test pixel(s). Default: 100
Boolean result value.
Test if the region contains a given point.
Instance represents: Region(s) to be examined.
Row index of the test pixel(s). Default: 100
Column index of the test pixel(s). Default: 100
Boolean result value.
Index of all regions containing a given pixel.
Instance represents: Regions to be examined.
Line index of the test pixel. Default: 100
Column index of the test pixel. Default: 100
Index of the regions containing the test pixel.
Choose all regions containing a given pixel.
Instance represents: Regions to be examined.
Line index of the test pixel. Default: 100
Column index of the test pixel. Default: 100
All regions containing the test pixel.
Select regions of a given shape.
Instance represents: Input regions to be selected.
Shape features to be checked. Default: "max_area"
Similarity measure. Default: 70.0
Regions with desired shape.
Hamming distance between two regions using normalization.
Instance represents: Regions to be examined.
Comparative regions.
Type of normalization. Default: "center"
Similarity of two regions.
Hamming distance of two regions.
Hamming distance between two regions using normalization.
Instance represents: Regions to be examined.
Comparative regions.
Type of normalization. Default: "center"
Similarity of two regions.
Hamming distance of two regions.
Hamming distance between two regions.
Instance represents: Regions to be examined.
Comparative regions.
Similarity of two regions.
Hamming distance of two regions.
Hamming distance between two regions.
Instance represents: Regions to be examined.
Comparative regions.
Similarity of two regions.
Hamming distance of two regions.
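The Hamming distance of two regions is the number of pixels in which they differ, i.e. the size of their symmetric difference. A Python sketch under the set-of-pixels model used for illustration (the exact normalization of the similarity measure is not reproduced here):

```python
def hamming_distance(region1, region2):
    """Number of pixels contained in exactly one of the two regions."""
    return len(region1 ^ region2)  # symmetric difference of pixel sets
```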
Shape features derived from the ellipse parameters.
Instance represents: Region(s) to be examined.
Calculated shape feature.
Calculated shape feature.
Shape feature (in case of a circle = 1.0).
Shape features derived from the ellipse parameters.
Instance represents: Region(s) to be examined.
Calculated shape feature.
Calculated shape feature.
Shape feature (in case of a circle = 1.0).
Calculate the Euler number.
Instance represents: Region(s) to be examined.
Calculated Euler number.
Orientation of a region.
Instance represents: Region(s) to be examined.
Orientation of region (arc measure).
Calculate the parameters of the equivalent ellipse.
Instance represents: Input regions.
Secondary radius (normalized to the area).
Angle between main radius and x-axis in radians.
Main radius (normalized to the area).
Calculate the parameters of the equivalent ellipse.
Instance represents: Input regions.
Secondary radius (normalized to the area).
Angle between main radius and x-axis in radians.
Main radius (normalized to the area).
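The equivalent ellipse follows from the region's normalized second-order central moments via the standard image-moment formulas; a Python sketch for illustration (HALCON's sign and axis conventions for the angle may differ):

```python
import math

def elliptic_axis(region):
    """Semi-axes (ra, rb) and orientation phi of the equivalent ellipse,
    derived from the normalized second-order central moments."""
    n = len(region)
    rm = sum(r for r, _ in region) / n
    cm = sum(c for _, c in region) / n
    m20 = sum((r - rm) ** 2 for r, _ in region) / n  # row variance
    m02 = sum((c - cm) ** 2 for _, c in region) / n  # column variance
    m11 = sum((r - rm) * (c - cm) for r, c in region) / n
    d = math.sqrt((m20 - m02) ** 2 + 4.0 * m11 ** 2)
    ra = math.sqrt(2.0 * (m20 + m02 + d))  # main radius
    rb = math.sqrt(2.0 * (m20 + m02 - d))  # secondary radius
    phi = 0.5 * math.atan2(2.0 * m11, m02 - m20)  # angle to the column axis
    return ra, rb, phi
```

For an axis-parallel rectangle the angle comes out as 0 and the radii scale with the side lengths.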
Pose relation of regions.
Instance represents: Starting regions.
Comparative regions.
Desired neighboring relation. Default: "left"
Indices in the input tuples (Regions1 or Regions2), respectively.
Indices in the input tuples (Regions1 or Regions2), respectively.
Pose relation of regions with regard to the coordinate axes.
Instance represents: Starting regions.
Comparative regions.
Percentage of the area of the comparative region which must be located left/right or above/below. Default: 50
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
Horizontal pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
Vertical pose relation in which RegionIndex2[n] stands with RegionIndex1[n].
Indices of the regions in the tuple of the input regions which fulfill the pose relation.
Shape factor for the convexity of a region.
Instance represents: Region(s) to be examined.
Convexity of the input region(s).
Contour length of a region.
Instance represents: Region(s) to be examined.
Contour length of the input region(s).
Number of connected components and holes.
Instance represents: Region(s) to be examined.
Number of holes of a region.
Number of connected components of a region.
Number of connected components and holes.
Instance represents: Region(s) to be examined.
Number of holes of a region.
Number of connected components of a region.
Shape factor for the rectangularity of a region.
Instance represents: Region(s) to be examined.
Rectangularity of the input region(s).
Shape factor for the compactness of a region.
Instance represents: Region(s) to be examined.
Compactness of the input region(s).
Shape factor for the circularity (similarity to a circle) of a region.
Instance represents: Region(s) to be examined.
Circularity of the input region(s).
Compute the area of holes of regions.
Instance represents: Region(s) to be examined.
Area(s) of holes of the region(s).
Area and center of regions.
Instance represents: Region(s) to be examined.
Line index of the center.
Column index of the center.
Area of the region.
Area and center of regions.
Instance represents: Region(s) to be examined.
Line index of the center.
Column index of the center.
Area of the region.
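Area and center reduce to a pixel count and the mean of the pixel coordinates; a Python sketch under the illustrative set-of-pixels model (names assumed, not the halcondotnet API):

```python
def area_center(region):
    """Area (pixel count) plus row/column center of gravity of a region."""
    area = len(region)
    row = sum(r for r, _ in region) / area
    col = sum(c for _, c in region) / area
    return area, row, col
```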
Distribution of runs needed for runlength encoding of a region.
Instance represents: Region to be examined.
Length distribution of the background.
Length distribution of the region (foreground).
Shape factors from contour.
Instance represents: Region(s) to be examined.
Standard deviation of Distance.
Shape factor for roundness.
Number of polygon sides.
Mean distance from the center.
Shape factors from contour.
Instance represents: Region(s) to be examined.
Standard deviation of Distance.
Shape factor for roundness.
Number of polygon sides.
Mean distance from the center.
Largest inner rectangle of a region.
Instance represents: Region to be examined.
Row coordinate of the upper left corner point.
Column coordinate of the upper left corner point.
Row coordinate of the lower right corner point.
Column coordinate of the lower right corner point.
Largest inner rectangle of a region.
Instance represents: Region to be examined.
Row coordinate of the upper left corner point.
Column coordinate of the upper left corner point.
Row coordinate of the lower right corner point.
Column coordinate of the lower right corner point.
Largest inner circle of a region.
Instance represents: Regions to be examined.
Line index of the center.
Column index of the center.
Radius of the inner circle.
Largest inner circle of a region.
Instance represents: Regions to be examined.
Line index of the center.
Column index of the center.
Radius of the inner circle.
Calculate gray value moments and approximation by a first order surface (plane).
Instance represents: Regions to be checked.
Corresponding gray values.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Alpha of the approximating surface.
Calculate gray value moments and approximation by a first order surface (plane).
Instance represents: Regions to be checked.
Corresponding gray values.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Alpha of the approximating surface.
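For the "regression" case, the first-order fit is an ordinary least-squares plane g(row, col) ≈ Alpha + Beta·row + Gamma·col. The Python sketch below uses a closed form that is exact when row and column coordinates are uncorrelated over the region (e.g. a full rectangle); it is illustrative only and reproduces neither HALCON's exact parameterization nor the outlier-clipping iterations:

```python
def fit_plane(region, image):
    """Least-squares plane g(r, c) ~ alpha + beta*r + gamma*c over a region;
    closed form valid when rows and columns are uncorrelated (e.g. rectangles)."""
    n = len(region)
    rm = sum(r for r, _ in region) / n
    cm = sum(c for _, c in region) / n
    gm = sum(image[r][c] for r, c in region) / n
    srr = sum((r - rm) ** 2 for r, _ in region)   # row scatter
    scc = sum((c - cm) ** 2 for _, c in region)   # column scatter
    srg = sum((r - rm) * (image[r][c] - gm) for r, c in region)
    scg = sum((c - cm) * (image[r][c] - gm) for r, c in region)
    beta = srg / srr
    gamma = scg / scc
    alpha = gm - beta * rm - gamma * cm
    return alpha, beta, gamma
```

Fitting an exactly planar gray ramp recovers its coefficients.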
Calculate gray value moments and approximation by a second order surface.
Instance represents: Regions to be checked.
Corresponding gray values.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Delta of the approximating surface.
Parameter Epsilon of the approximating surface.
Parameter Zeta of the approximating surface.
Parameter Alpha of the approximating surface.
Calculate gray value moments and approximation by a second order surface.
Instance represents: Regions to be checked.
Corresponding gray values.
Algorithm for the fitting. Default: "regression"
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers. Default: 2.0
Parameter Beta of the approximating surface.
Parameter Gamma of the approximating surface.
Parameter Delta of the approximating surface.
Parameter Epsilon of the approximating surface.
Parameter Zeta of the approximating surface.
Parameter Alpha of the approximating surface.
Determine a histogram of features along all threshold values.
Instance represents: Region in which the features are to be examined.
Gray value image.
Feature to be examined. Default: "convexity"
Row of the pixel which the region must contain. Default: 256
Column of the pixel which the region must contain. Default: 256
Relative distribution of the feature.
Absolute distribution of the feature.
Determine a histogram of features along all threshold values.
Instance represents: Region in which the features are to be examined.
Gray value image.
Feature to be examined. Default: "connected_components"
Relative distribution of the feature.
Absolute distribution of the feature.
Calculates gray value features for a set of regions.
Instance represents: Regions to be examined.
Gray value image.
Names of the features. Default: "mean"
Values of the features.
Calculates gray value features for a set of regions.
Instance represents: Regions to be examined.
Gray value image.
Names of the features. Default: "mean"
Values of the features.
Select regions based on gray value features.
Instance represents: Regions to be examined.
Gray value image.
Names of the features. Default: "mean"
Logical connection of features. Default: "and"
Lower limit(s) of features. Default: 128.0
Upper limit(s) of features. Default: 255.0
Regions having features within the limits.
Select regions based on gray value features.
Instance represents: Regions to be examined.
Gray value image.
Names of the features. Default: "mean"
Logical connection of features. Default: "and"
Lower limit(s) of features. Default: 128.0
Upper limit(s) of features. Default: 255.0
Regions having features within the limits.
Determine the minimum and maximum gray values within regions.
Instance represents: Regions whose features are to be calculated.
Gray value image.
Percentage below (above) the absolute maximum (minimum). Default: 0
"Minimum" gray value.
"Maximum" gray value.
Difference between Max and Min.
Determine the minimum and maximum gray values within regions.
Instance represents: Regions whose features are to be calculated.
Gray value image.
Percentage below (above) the absolute maximum (minimum). Default: 0
"Minimum" gray value.
"Maximum" gray value.
Difference between Max and Min.
Calculate the mean and deviation of gray values.
Instance represents: Regions in which the features are calculated.
Gray value image.
Deviation of gray values within a region.
Mean gray value of a region.
Calculate the mean and deviation of gray values.
Instance represents: Regions in which the features are calculated.
Gray value image.
Deviation of gray values within a region.
Mean gray value of a region.
Calculate the gray value distribution of a single channel image within a certain gray value range.
Instance represents: Region in which the histogram is to be calculated.
Input image.
Minimum gray value. Default: 0
Maximum gray value. Default: 255
Number of bins. Default: 256
Bin size.
Histogram to be calculated.
Calculate the gray value distribution of a single channel image within a certain gray value range.
Instance represents: Region in which the histogram is to be calculated.
Input image.
Minimum gray value. Default: 0
Maximum gray value. Default: 255
Number of bins. Default: 256
Bin size.
Histogram to be calculated.
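Binning a restricted gray value range can be sketched as follows in Python; the bin-size convention (Max - Min + 1) / Num used here is one plausible reading for integer images, and the names are illustrative assumptions, not the halcondotnet API:

```python
def gray_histo_range(region, image, minimum, maximum, num_bins):
    """Histogram of the gray values inside the region, restricted to
    [minimum, maximum] and quantized into num_bins equally wide bins."""
    bin_size = (maximum - minimum + 1) / num_bins
    histo = [0] * num_bins
    for r, c in region:
        g = image[r][c]
        if minimum <= g <= maximum:
            b = min(int((g - minimum) / bin_size), num_bins - 1)
            histo[b] += 1
    return bin_size, histo
```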
Calculate the histogram of two-channel gray value images.
Instance represents: Region in which the histogram is to be calculated.
Channel 1.
Channel 2.
Histogram to be calculated.
Calculate the gray value distribution.
Instance represents: Region in which the histogram is to be calculated.
Image whose gray value distribution is to be calculated.
Quantization of the gray values. Default: 1.0
Absolute frequencies of the gray values.
Calculate the gray value distribution.
Instance represents: Region in which the histogram is to be calculated.
Image whose gray value distribution is to be calculated.
Quantization of the gray values. Default: 1.0
Absolute frequencies of the gray values.
Calculate the gray value distribution.
Instance represents: Region in which the histogram is to be calculated.
Image whose gray value distribution is to be calculated.
Frequencies, normalized to the area of the region.
Absolute frequencies of the gray values.
Determine the entropy and anisotropy of images.
Instance represents: Regions where the features are to be determined.
Gray value image.
Measure of the symmetry of gray value distribution.
Information content (entropy) of the gray values.
Determine the entropy and anisotropy of images.
Instance represents: Regions where the features are to be determined.
Gray value image.
Measure of the symmetry of gray value distribution.
Information content (entropy) of the gray values.
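The entropy part is the Shannon entropy (in bits) of the gray value distribution inside the region; the anisotropy measure is not sketched here. An illustrative Python version under the assumed set-of-pixels model:

```python
import math

def entropy_gray(region, image):
    """Shannon entropy (bits) of the gray values inside a region."""
    counts = {}
    for r, c in region:
        g = image[r][c]
        counts[g] = counts.get(g, 0) + 1
    n = len(region)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())
```

Four pixels with four distinct gray values give an entropy of exactly 2 bits.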
Calculate a co-occurrence matrix and derive gray value features thereof.
Instance represents: Region to be examined.
Corresponding gray values.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction in which the matrix is to be calculated. Default: 0
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Gray value energy.
Calculate a co-occurrence matrix and derive gray value features thereof.
Instance represents: Region to be examined.
Corresponding gray values.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction in which the matrix is to be calculated. Default: 0
Correlation of gray values.
Local homogeneity of gray values.
Gray value contrast.
Gray value energy.
Calculate the co-occurrence matrix of a region in an image.
Instance represents: Region to be checked.
Image providing the gray values.
Number of gray values to be distinguished (2^LdGray). Default: 6
Direction of neighbor relation. Default: 0
Co-occurrence matrix (matrices).
Calculate gray value moments and approximation by a plane.
Instance represents: Regions to be checked.
Corresponding gray values.
Mixed moments along a line.
Mixed moments along a column.
Parameter Alpha of the approximating plane.
Parameter Beta of the approximating plane.
Mean gray value.
Calculate gray value moments and approximation by a plane.
Instance represents: Regions to be checked.
Corresponding gray values.
Mixed moments along a line.
Mixed moments along a column.
Parameter Alpha of the approximating plane.
Parameter Beta of the approximating plane.
Mean gray value.
Calculate the deviation of the gray values from the approximating image plane.
Instance represents: Regions for which the plane deviation is to be calculated.
Gray value image.
Deviation of the gray values within a region.
Compute the orientation and major axes of a region in a gray value image.
Instance represents: Region(s) to be examined.
Gray value image.
Minor axis of the region.
Angle enclosed by the major axis and the x-axis.
Major axis of the region.
Compute the orientation and major axes of a region in a gray value image.
Instance represents: Region(s) to be examined.
Gray value image.
Minor axis of the region.
Angle enclosed by the major axis and the x-axis.
Major axis of the region.
Compute the area and center of gravity of a region in a gray value image.
Instance represents: Region(s) to be examined.
Gray value image.
Row coordinate of the gray value center of gravity.
Column coordinate of the gray value center of gravity.
Gray value volume of the region.
Compute the area and center of gravity of a region in a gray value image.
Instance represents: Region(s) to be examined.
Gray value image.
Row coordinate of the gray value center of gravity.
Column coordinate of the gray value center of gravity.
Gray value volume of the region.
Calculate horizontal and vertical gray-value projections.
Instance represents: Region to be processed.
Gray values for projections.
Method to compute the projections. Default: "simple"
Vertical projection.
Horizontal projection.
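A projection sums (or averages) gray values along one axis. The Python sketch below uses plain per-row and per-column sums, which is one plausible convention and not necessarily HALCON's "simple" normalization; names and data model are illustrative assumptions:

```python
def gray_projections(region, image):
    """Per-row and per-column sums of the gray values inside a region:
    the horizontal projection collapses columns, the vertical one rows."""
    row_sums, col_sums = {}, {}
    for r, c in region:
        g = image[r][c]
        row_sums[r] = row_sums.get(r, 0) + g
        col_sums[c] = col_sums.get(c, 0) + g
    horizontal = [row_sums[r] for r in sorted(row_sums)]
    vertical = [col_sums[c] for c in sorted(col_sums)]
    return horizontal, vertical
```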
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Pre-processed image regions.
Pre-processed XLD contours.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Grabbed image data.
Asynchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Pre-processed image regions.
Pre-processed XLD contours.
Handle of the acquisition device to be used.
Maximum tolerated delay between the start of the asynchronous grab and the delivery of the image [ms]. Default: -1.0
Pre-processed control data.
Grabbed image data.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Preprocessed image regions.
Preprocessed XLD contours.
Handle of the acquisition device to be used.
Preprocessed control data.
Grabbed image data.
Synchronous grab of images and preprocessed image data from the specified image acquisition device.
Modified instance represents: Preprocessed image regions.
Preprocessed XLD contours.
Handle of the acquisition device to be used.
Preprocessed control data.
Grabbed image data.
Classify multiple characters with a CNN-based OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Confidence of the class of the characters.
Result of classifying the characters with the CNN.
Classify multiple characters with a CNN-based OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Confidence of the class of the characters.
Result of classifying the characters with the CNN.
Classify a single character with a CNN-based OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the CNN.
Classify a single character with a CNN-based OCR classifier.
Instance represents: Character to be recognized.
Gray values of the character.
Handle of the OCR classifier.
Number of best classes to determine. Default: 1
Confidence(s) of the class(es) of the character.
Result of classifying the character with the CNN.
Classify a related group of characters with a CNN-based OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the CNN.
Classify a related group of characters with a CNN-based OCR classifier.
Instance represents: Characters to be recognized.
Gray values of the characters.
Handle of the OCR classifier.
Expression describing the allowed word structure.
Number of classes per character considered for the internal word correction. Default: 3
Maximum number of corrected characters. Default: 2
Confidence of the class of the characters.
Word text after classification and correction.
Measure of similarity between corrected word and uncorrected classification results.
Result of classifying the characters with the CNN.
Compute the width, height, and aspect ratio of the surrounding rectangle parallel to the coordinate axes.
Instance represents: Regions to be examined.
Width of the surrounding rectangle of the region.
Aspect ratio of the surrounding rectangle of the region.
Height of the surrounding rectangle of the region.
Compute the width, height, and aspect ratio of the surrounding rectangle parallel to the coordinate axes.
Instance represents: Regions to be examined.
Width of the surrounding rectangle of the region.
Aspect ratio of the surrounding rectangle of the region.
Height of the surrounding rectangle of the region.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index at which to insert the objects.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic input object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic input object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index
The area of the region
The center row of the region
The center column of the region
Represents an instance of a sample identifier.
Read a sample identifier from a file.
Modified instance represents: Handle of the sample identifier.
File name.
Create a new sample identifier.
Modified instance represents: Handle of the sample identifier.
Parameter name. Default: []
Parameter value. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Free the memory of a sample identifier.
Instance represents: Handle of the sample identifier.
Deserialize a serialized sample identifier.
Modified instance represents: Handle of the sample identifier.
Handle of the serialized item.
Read a sample identifier from a file.
Modified instance represents: Handle of the sample identifier.
File name.
Serialize a sample identifier.
Instance represents: Handle of the sample identifier.
Handle of the serialized item.
Write a sample identifier to a file.
Instance represents: Handle of the sample identifier.
File name.
Identify objects with a sample identifier.
Instance represents: Handle of the sample identifier.
Image showing the object to be identified.
Number of suggested object indices. Default: 1
Rating threshold. Default: 0.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Rating value of the identified object.
Index of the identified object.
Identify objects with a sample identifier.
Instance represents: Handle of the sample identifier.
Image showing the object to be identified.
Number of suggested object indices. Default: 1
Rating threshold. Default: 0.0
Generic parameter name. Default: []
Generic parameter value. Default: []
Rating value of the identified object.
Index of the identified object.
Get selected parameters of a sample identifier.
Instance represents: Handle of the sample identifier.
Parameter name. Default: "rating_method"
Parameter value.
Set selected parameters of a sample identifier.
Instance represents: Handle of the sample identifier.
Parameter name. Default: "rating_method"
Parameter value. Default: "score_single"
Set selected parameters of a sample identifier.
Instance represents: Handle of the sample identifier.
Parameter name. Default: "rating_method"
Parameter value. Default: "score_single"
Retrieve information about an object of a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the object for which information is retrieved.
Defines which kind of object information is retrieved. Default: "num_training_objects"
Information about the object.
Retrieve information about an object of a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the object for which information is retrieved.
Defines which kind of object information is retrieved. Default: "num_training_objects"
Information about the object.
Define a name or a description for an object of a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the object for which information is set.
Defines which kind of object information is set. Default: "training_object_name"
Information about the object.
Define a name or a description for an object of a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the object for which information is set.
Defines which kind of object information is set. Default: "training_object_name"
Information about the object.
Remove training data from a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the training object from which samples should be removed.
Index of the training sample that should be removed.
Remove training data from a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the training object from which samples should be removed.
Index of the training sample that should be removed.
Remove preparation data from a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the preparation object from which samples should be removed.
Index of the preparation sample that should be removed.
Remove preparation data from a sample identifier.
Instance represents: Handle of the sample identifier.
Index of the preparation object from which samples should be removed.
Index of the preparation sample that should be removed.
Train a sample identifier.
Instance represents: Handle of the sample identifier.
Parameter name. Default: []
Parameter value. Default: []
Add training data to an existing sample identifier.
Instance represents: Handle of the sample identifier.
Image that shows an object.
Index of the object visible in the SampleImage.
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Add training data to an existing sample identifier.
Instance represents: Handle of the sample identifier.
Image that shows an object.
Index of the object visible in the SampleImage.
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Adapt the internal data structure of a sample identifier to the objects to be identified.
Instance represents: Handle of the sample identifier.
Indicates if the preparation data should be removed. Default: "true"
Generic parameter name. Default: []
Generic parameter value. Default: []
Add preparation data to an existing sample identifier.
Instance represents: Handle of the sample identifier.
Image that shows an object.
Index of the object visible in the SampleImage. Default: "unknown"
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Add preparation data to an existing sample identifier.
Instance represents: Handle of the sample identifier.
Image that shows an object.
Index of the object visible in the SampleImage. Default: "unknown"
Generic parameter name. Default: []
Generic parameter value. Default: []
Index of the object sample.
Create a new sample identifier.
Modified instance represents: Handle of the sample identifier.
Parameter name. Default: []
Parameter value. Default: []
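The operators above form the typical sample identifier workflow: create the identifier, add preparation data, prepare it, add training data, then train. A minimal C# sketch via the HOperatorSet interface (file names, the object index, and the exact overloads are assumptions; they may differ between HALCON versions):

```csharp
using HalconDotNet;

// Sketch of the sample identifier workflow documented above.
// "object_0.png" is a placeholder image file.
HOperatorSet.CreateSampleIdentifier(new HTuple(), new HTuple(),
                                    out HTuple sampleIdentifier);

HOperatorSet.ReadImage(out HObject sampleImage, "object_0.png");

// Preparation data: the object index may still be "unknown" here.
HOperatorSet.AddSampleIdentifierPreparationData(sampleImage,
    sampleIdentifier, new HTuple("unknown"), new HTuple(), new HTuple(),
    out HTuple prepSampleIdx);

// Adapt the internal data structures; "true" removes the preparation data.
HOperatorSet.PrepareSampleIdentifier(sampleIdentifier, "true",
                                     new HTuple(), new HTuple());

// Training data: now the object index identifies the object in the image.
HOperatorSet.AddSampleIdentifierTrainingData(sampleImage, sampleIdentifier,
    0, new HTuple(), new HTuple(), out HTuple trainSampleIdx);

HOperatorSet.TrainSampleIdentifier(sampleIdentifier,
                                   new HTuple(), new HTuple());
```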
Represents an instance of a scattered data interpolator.
Creates an interpolator for the interpolation of scattered data.
Modified instance represents: Handle of the scattered data interpolator.
Method for the interpolation. Default: "thin_plate_splines"
Row coordinates of the points used for the interpolation.
Column coordinates of the points used for the interpolation.
Values of the points used for the interpolation.
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
Clear a scattered data interpolator.
Handle of the scattered data interpolator.
Clear a scattered data interpolator.
Instance represents: Handle of the scattered data interpolator.
Interpolation of scattered data using a scattered data interpolator.
Instance represents: Handle of the scattered data interpolator.
Row coordinates of points to be interpolated.
Column coordinates of points to be interpolated.
Values of interpolated points.
Interpolation of scattered data using a scattered data interpolator.
Instance represents: Handle of the scattered data interpolator.
Row coordinates of points to be interpolated.
Column coordinates of points to be interpolated.
Values of interpolated points.
Creates an interpolator for the interpolation of scattered data.
Modified instance represents: Handle of the scattered data interpolator.
Method for the interpolation. Default: "thin_plate_splines"
Row coordinates of the points used for the interpolation.
Column coordinates of the points used for the interpolation.
Values of the points used for the interpolation.
Names of the generic parameters that can be adjusted. Default: []
Values of the generic parameters that can be adjusted. Default: []
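The interpolator is created once from the known points and can then be queried at arbitrary coordinates. A minimal C# sketch (the sample coordinates and values are made up for illustration; exact overloads may differ between HALCON versions):

```csharp
using HalconDotNet;

// Sketch: thin-plate-spline interpolation of four scattered values.
HTuple rows = new HTuple(new double[] { 0.0, 0.0, 100.0, 100.0 });
HTuple cols = new HTuple(new double[] { 0.0, 100.0, 0.0, 100.0 });
HTuple vals = new HTuple(new double[] { 1.0, 2.0, 3.0, 4.0 });

HOperatorSet.CreateScatteredDataInterpolator("thin_plate_splines",
    rows, cols, vals, new HTuple(), new HTuple(),
    out HTuple interpolator);

// Interpolate the value at an intermediate point.
HOperatorSet.InterpolateScatteredData(interpolator,
    new HTuple(50.0), new HTuple(50.0), out HTuple interpolated);

HOperatorSet.ClearScatteredDataInterpolator(interpolator);
```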
Represents an instance of a 3D graphic scene.
Create the data structure that is needed to visualize collections of 3D objects.
Modified instance represents: Handle of the 3D scene.
Get the depth or the index of instances in a displayed 3D scene.
Instance represents: Handle of the 3D scene.
Window handle.
Row coordinates.
Column coordinates.
Requested information, 'depth' or 'index'. Default: "depth"
Indices or the depth of the objects at (Row,Column).
Get the depth or the index of instances in a displayed 3D scene.
Instance represents: Handle of the 3D scene.
Window handle.
Row coordinates.
Column coordinates.
Requested information, 'depth' or 'index'. Default: "depth"
Indices or the depth of the objects at (Row,Column).
Set the pose of a 3D scene.
Instance represents: Handle of the 3D scene.
New pose of the 3D scene.
Set parameters of a 3D scene.
Instance represents: Handle of the 3D scene.
Names of the generic parameters. Default: "quality"
Values of the generic parameters. Default: "high"
Set parameters of a 3D scene.
Instance represents: Handle of the 3D scene.
Names of the generic parameters. Default: "quality"
Values of the generic parameters. Default: "high"
Set parameters of a light in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the light source.
Names of the generic parameters. Default: "ambient"
Values of the generic parameters. Default: [0.2,0.2,0.2]
Set parameters of a light in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the light source.
Names of the generic parameters. Default: "ambient"
Values of the generic parameters. Default: [0.2,0.2,0.2]
Set the pose of an instance in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the instance.
New pose of the instance.
Set the pose of an instance in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the instance.
New pose of the instance.
Set parameters of an instance in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the instance.
Names of the generic parameters. Default: "color"
Values of the generic parameters. Default: "green"
Set parameters of an instance in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the instance.
Names of the generic parameters. Default: "color"
Values of the generic parameters. Default: "green"
Set the pose of a camera in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the camera.
New pose of the camera.
Render an image of a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the camera used to display the scene.
Rendered 3D scene.
Remove a light from a 3D scene.
Instance represents: Handle of the 3D scene.
Light to remove.
Remove an object instance from a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the instance to remove.
Remove an object instance from a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the instance to remove.
Remove a camera from a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the camera to remove.
Display a 3D scene.
Instance represents: Handle of the 3D scene.
Window handle.
Index of the camera used to display the scene.
Display a 3D scene.
Instance represents: Handle of the 3D scene.
Window handle.
Index of the camera used to display the scene.
Add a light source to a 3D scene.
Instance represents: Handle of the 3D scene.
Position of the new light source. Default: [-100.0,-100.0,0.0]
Type of the new light source. Default: "point_light"
Index of the new light source in the 3D scene.
Add an instance of a 3D object model to a 3D scene.
Instance represents: Handle of the 3D scene.
Handle of the 3D object model.
Pose of the 3D object model.
Index of the new instance in the 3D scene.
Add an instance of a 3D object model to a 3D scene.
Instance represents: Handle of the 3D scene.
Handle of the 3D object model.
Pose of the 3D object model.
Index of the new instance in the 3D scene.
Add a camera to a 3D scene.
Instance represents: Handle of the 3D scene.
Parameters of the new camera.
Index of the new camera in the 3D scene.
Delete a 3D scene and free all allocated memory.
Handle of the 3D scene.
Delete a 3D scene and free all allocated memory.
Instance represents: Handle of the 3D scene.
Create the data structure that is needed to visualize collections of 3D objects.
Modified instance represents: Handle of the 3D scene.
Add a text label to a 3D scene.
Instance represents: Handle of the 3D scene.
Text of the label. Default: "label"
Point of reference of the label.
Position of the label. Default: "top"
Indicates fixed or relative positioning. Default: "point"
Index of the new label in the 3D scene.
Add a text label to a 3D scene.
Instance represents: Handle of the 3D scene.
Text of the label. Default: "label"
Point of reference of the label.
Position of the label. Default: "top"
Indicates fixed or relative positioning. Default: "point"
Index of the new label in the 3D scene.
Remove a text label from a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the text label to remove.
Remove a text label from a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the text label to remove.
Set parameters of a text label in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the text label.
Names of the generic parameters. Default: "color"
Values of the generic parameters. Default: "red"
Set parameters of a text label in a 3D scene.
Instance represents: Handle of the 3D scene.
Index of the text label.
Names of the generic parameters. Default: "color"
Values of the generic parameters. Default: "red"
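A 3D scene is assembled from cameras, lights, and object model instances, and can then be rendered. A C# sketch of that sequence (camera parameters, pose values, and the model file name are placeholder assumptions; the camera parameter layout follows the usual 'area_scan_division' convention and may need adjusting):

```csharp
using HalconDotNet;

// Sketch: build and render a minimal 3D scene.
HOperatorSet.CreateScene3d(out HTuple scene3D);

// Placeholder internal camera parameters (area_scan_division model).
HTuple camParam = new HTuple("area_scan_division");
camParam = camParam.TupleConcat(new HTuple(
    0.012, 0.0, 6e-6, 6e-6, 320.0, 240.0, 640, 480));
HOperatorSet.AddScene3dCamera(scene3D, camParam, out HTuple cameraIndex);
HOperatorSet.SetScene3dCameraPose(scene3D, cameraIndex,
    new HTuple(new double[] { 0.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0 }));

HOperatorSet.AddScene3dLight(scene3D,
    new HTuple(new double[] { -100.0, -100.0, 0.0 }), "point_light",
    out HTuple lightIndex);

// Placeholder object model file.
HOperatorSet.ReadObjectModel3d("object.om3", "m", new HTuple(),
    new HTuple(), out HTuple objectModel3D, out HTuple status);
HOperatorSet.AddScene3dInstance(scene3D, objectModel3D,
    new HTuple(new double[] { 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 }),
    out HTuple instanceIndex);

// Render to an image (DisplayScene3d would show it in a window instead).
HOperatorSet.RenderScene3d(out HObject renderedImage, scene3D, cameraIndex);

HOperatorSet.ClearScene3d(scene3D);
```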
Represents an instance of a connection via a serial port.
Open a serial device.
Modified instance represents: Serial interface handle.
Name of the serial port. Default: "COM1"
Clear the buffer of a serial connection.
Instance represents: Serial interface handle.
Buffer to be cleared. Default: "input"
Write to a serial connection.
Instance represents: Serial interface handle.
Characters to write (as tuple of integers).
Write to a serial connection.
Instance represents: Serial interface handle.
Characters to write (as tuple of integers).
Read from a serial device.
Instance represents: Serial interface handle.
Number of characters to read. Default: 1
Read characters (as tuple of integers).
Get the parameters of a serial device.
Instance represents: Serial interface handle.
Number of data bits of the serial interface.
Type of flow control of the serial interface.
Parity of the serial interface.
Number of stop bits of the serial interface.
Total timeout of the serial interface in ms.
Inter-character timeout of the serial interface in ms.
Speed of the serial interface.
Set the parameters of a serial device.
Instance represents: Serial interface handle.
Speed of the serial interface. Default: "unchanged"
Number of data bits of the serial interface. Default: "unchanged"
Type of flow control of the serial interface. Default: "unchanged"
Parity of the serial interface. Default: "unchanged"
Number of stop bits of the serial interface. Default: "unchanged"
Total timeout of the serial interface in ms. Default: "unchanged"
Inter-character timeout of the serial interface in ms. Default: "unchanged"
Set the parameters of a serial device.
Instance represents: Serial interface handle.
Speed of the serial interface. Default: "unchanged"
Number of data bits of the serial interface. Default: "unchanged"
Type of flow control of the serial interface. Default: "unchanged"
Parity of the serial interface. Default: "unchanged"
Number of stop bits of the serial interface. Default: "unchanged"
Total timeout of the serial interface in ms. Default: "unchanged"
Inter-character timeout of the serial interface in ms. Default: "unchanged"
Close a serial device.
Instance represents: Serial interface handle.
Open a serial device.
Modified instance represents: Serial interface handle.
Name of the serial port. Default: "COM1"
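The serial operators above combine into a simple open / configure / write / read / close sequence. A C# sketch (port name, baud rate, and timeout values are placeholder assumptions):

```csharp
using HalconDotNet;

// Sketch: open a serial port, configure it, exchange a few bytes, close it.
HOperatorSet.OpenSerial("COM1", out HTuple serialHandle);

// 9600 baud, 8 data bits, no flow control, no parity, 1 stop bit,
// 1000 ms total timeout; inter-character timeout left "unchanged".
HOperatorSet.SetSerialParam(serialHandle, 9600, 8, "none", "none", 1,
                            1000, "unchanged");

// Characters are passed as a tuple of integer codes ("AT" + CR here).
HOperatorSet.WriteSerial(serialHandle, new HTuple((int)'A', (int)'T', 13));

// Read one character back, again as a tuple of integers.
HOperatorSet.ReadSerial(serialHandle, 1, out HTuple received);

HOperatorSet.CloseSerial(serialHandle);
```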
Represents an instance of a serialized item.
Create a serialized item.
Modified instance represents: Handle of the serialized item.
Data pointer of the serialized item.
Size of the serialized item.
Copy mode of the serialized item. Default: "true"
Creates a new serialized item with data given in a byte array.
The array needs to be kept alive until the block is disposed.
Copies a serialized item into a new byte array.
Receive a serialized item over a socket connection.
Modified instance represents: Handle of the serialized item.
Socket number.
Send a serialized item over a socket connection.
Instance represents: Handle of the serialized item.
Socket number.
Write a serialized item to a file.
Instance represents: Handle of the serialized item.
File handle.
Read a serialized item from a file.
Modified instance represents: Handle of the serialized item.
File handle.
Delete a serialized item.
Handle of the serialized item.
Delete a serialized item.
Instance represents: Handle of the serialized item.
Access the data pointer of a serialized item.
Instance represents: Handle of the serialized item.
Size of the serialized item.
Data pointer of the serialized item.
Create a serialized item.
Modified instance represents: Handle of the serialized item.
Data pointer of the serialized item.
Size of the serialized item.
Copy mode of the serialized item. Default: "true"
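Serialized items are the interchange format between serialization operators, files, and sockets. A C# sketch that serializes a shape model, writes it to a file, and restores it (the file name is a placeholder, and `shapeModelID` is assumed to be an existing shape model handle):

```csharp
using HalconDotNet;

// Sketch: round-trip a shape model through a serialized item and a file.
// shapeModelID is assumed to be a previously created shape model handle.
HOperatorSet.SerializeShapeModel(shapeModelID, out HTuple serializedItem);

HOperatorSet.OpenFile("model.ser", "output_binary", out HTuple outFile);
HOperatorSet.FwriteSerializedItem(outFile, serializedItem);
HOperatorSet.CloseFile(outFile);
HOperatorSet.ClearSerializedItem(serializedItem);

HOperatorSet.OpenFile("model.ser", "input_binary", out HTuple inFile);
HOperatorSet.FreadSerializedItem(inFile, out HTuple readItem);
HOperatorSet.DeserializeShapeModel(readItem, out HTuple restoredModelID);
HOperatorSet.CloseFile(inFile);
HOperatorSet.ClearSerializedItem(readItem);
```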
Represents an instance of a shape model for matching.
Read a shape model from a file.
Modified instance represents: Handle of the model.
File name.
Prepare an anisotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an anisotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an isotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an isotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare a shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare a shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an anisotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare an anisotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare an isotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare an isotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare a shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare a shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
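A model created with the defaults documented above can immediately be searched for in another image. A C# sketch of create-then-find with those defaults (file names are placeholders; exact overloads may differ between HALCON versions):

```csharp
using HalconDotNet;

// Sketch: create a shape model from an image domain, then find it.
HOperatorSet.ReadImage(out HObject template, "template.png");
HOperatorSet.CreateShapeModel(template, "auto", -0.39, 0.79, "auto",
    "auto", "use_polarity", "auto", "auto", out HTuple modelID);

HOperatorSet.ReadImage(out HObject searchImage, "search.png");
HOperatorSet.FindShapeModel(searchImage, modelID,
    -0.39, 0.79,            // angle start / extent
    0.5, 1, 0.5,            // min score, num matches, max overlap
    "least_squares", 0, 0.9, // subpixel mode, num levels, greediness
    out HTuple row, out HTuple column, out HTuple angle,
    out HTuple score);

HOperatorSet.ClearShapeModel(modelID);
```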
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Deserialize a serialized shape model.
Modified instance represents: Handle of the model.
Handle of the serialized item.
Read a shape model from a file.
Modified instance represents: Handle of the model.
File name.
Serialize a shape model.
Instance represents: Handle of the model.
Handle of the serialized item.
Write a shape model to a file.
Instance represents: Handle of the model.
File name.
Free the memory of a shape model.
Instance represents: Handle of the model.
Return the contour representation of a shape model.
Instance represents: Handle of the model.
Pyramid level for which the contour representation should be returned. Default: 1
Contour representation of the shape model.
Return the parameters of a shape model.
Instance represents: Handle of the model.
Smallest rotation of the pattern.
Extent of the rotation angles.
Step length of the angles (resolution).
Minimum scale of the pattern.
Maximum scale of the pattern.
Scale step length (resolution).
Match metric.
Minimum contrast of the objects in the search images.
Number of pyramid levels.
Return the parameters of a shape model.
Instance represents: Handle of the model.
Smallest rotation of the pattern.
Extent of the rotation angles.
Step length of the angles (resolution).
Minimum scale of the pattern.
Maximum scale of the pattern.
Scale step length (resolution).
Match metric.
Minimum contrast of the objects in the search images.
Number of pyramid levels.
Return the origin (reference point) of a shape model.
Instance represents: Handle of the model.
Row coordinate of the origin of the shape model.
Column coordinate of the origin of the shape model.
Set the origin (reference point) of a shape model.
Instance represents: Handle of the model.
Row coordinate of the origin of the shape model.
Column coordinate of the origin of the shape model.
Find the best matches of multiple anisotropically scaled shape models.
Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the models in the row direction. Default: 0.9
Maximum scale of the models in the row direction. Default: 1.1
Minimum scale of the models in the column direction. Default: 0.9
Maximum scale of the models in the column direction. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models in the row direction.
Scale of the found instances of the models in the column direction.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple anisotropically scaled shape models.
Instance represents: Handle of the models.
Input image in which the models should be found.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the models in the row direction. Default: 0.9
Maximum scale of the models in the row direction. Default: 1.1
Minimum scale of the models in the column direction. Default: 0.9
Maximum scale of the models in the column direction. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models in the row direction.
Scale of the found instances of the models in the column direction.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple isotropically scaled shape models.
Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the models. Default: 0.9
Maximum scale of the models. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple isotropically scaled shape models.
Instance represents: Handle of the models.
Input image in which the models should be found.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the models. Default: 0.9
Maximum scale of the models. Default: 1.1
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Scale of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple shape models.
Input image in which the models should be found.
Handle of the models.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of multiple shape models.
Instance represents: Handle of the models.
Input image in which the models should be found.
Smallest rotation of the models. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the models to be found. Default: 0.5
Number of instances of the models to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the models to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the models.
Column coordinate of the found instances of the models.
Rotation angle of the found instances of the models.
Score of the found instances of the models.
Index of the found instances of the models.
Find the best matches of an anisotropically scaled shape model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in the row direction. Default: 0.9
Maximum scale of the model in the row direction. Default: 1.1
Minimum scale of the model in the column direction. Default: 0.9
Maximum scale of the model in the column direction. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model in the row direction.
Scale of the found instances of the model in the column direction.
Score of the found instances of the model.
Find the best matches of an anisotropically scaled shape model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum scale of the model in the row direction. Default: 0.9
Maximum scale of the model in the row direction. Default: 1.1
Minimum scale of the model in the column direction. Default: 0.9
Maximum scale of the model in the column direction. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model in the row direction.
Scale of the found instances of the model in the column direction.
Score of the found instances of the model.
Find the best matches of an isotropically scaled shape model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model.
Score of the found instances of the model.
Find the best matches of an isotropically scaled shape model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.78
Minimum scale of the model. Default: 0.9
Maximum scale of the model. Default: 1.1
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Scale of the found instances of the model.
Score of the found instances of the model.
Find the best matches of a shape model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
Find the best matches of a shape model in an image.
Instance represents: Handle of the model.
Input image in which the model should be found.
Smallest rotation of the model. Default: -0.39
Extent of the rotation angles. Default: 0.79
Minimum score of the instances of the model to be found. Default: 0.5
Number of instances of the model to be found (or 0 for all matches). Default: 1
Maximum overlap of the instances of the model to be found. Default: 0.5
Subpixel accuracy if not equal to 'none'. Default: "least_squares"
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Row coordinate of the found instances of the model.
Column coordinate of the found instances of the model.
Rotation angle of the found instances of the model.
Score of the found instances of the model.
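As a usage sketch, the parameters above map directly onto the object-oriented `HShapeModel.FindShapeModel` call. The overload shown is an assumption based on the documented parameter list; `model` and `searchImage` are placeholders, and the values are the documented defaults.

```csharp
using HalconDotNet;

HShapeModel model = null;   // assumed: previously trained shape model
HImage searchImage = null;  // assumed: image to search in

HTuple row, column, angle, score;
model.FindShapeModel(
    searchImage,
    -0.39, 0.79,        // AngleStart, AngleExtent (radians)
    0.5,                // MinScore
    1,                  // NumMatches (0 = all)
    0.5,                // MaxOverlap
    "least_squares",    // SubPixel
    0,                  // NumLevels (0 = all pyramid levels)
    0.9,                // Greediness
    out row, out column, out angle, out score);
```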
Set the metric of a shape model that was created from XLD contours.
Instance represents: Handle of the model.
Input image used for the determination of the polarity.
Transformation matrix.
Match metric. Default: "use_polarity"
Set selected parameters of the shape model.
Instance represents: Handle of the model.
Parameter names.
Parameter values.
Prepare an anisotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an anisotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an isotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an isotropically scaled shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare a shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare a shape model for matching from XLD contours.
Modified instance represents: Handle of the model.
Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Prepare an anisotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare an anisotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare an isotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare an isotropically scaled shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare a shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
Prepare a shape model for matching.
Modified instance represents: Handle of the model.
Input image whose domain will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "use_polarity"
Threshold or hysteresis thresholds for the contrast of the object in the template image and optionally minimum size of the object parts. Default: "auto"
Minimum contrast of the objects in the search images. Default: "auto"
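Model creation from an image domain, as documented above, can be sketched as follows. The `HRegion` constructor and `HShapeModel` overload are assumptions based on `gen_rectangle1` and the `create_shape_model` parameter list; the file name and ROI coordinates are hypothetical.

```csharp
using HalconDotNet;

HImage template = new HImage("pcb");                  // hypothetical image file
HRegion roi = new HRegion(100.0, 100.0, 300.0, 400.0); // hypothetical pattern ROI
HImage reduced = template.ReduceDomain(roi);          // model uses this domain

HShapeModel model = new HShapeModel(
    reduced,
    "auto",             // NumLevels
    -0.39, 0.79,        // AngleStart, AngleExtent
    "auto",             // AngleStep
    "auto",             // Optimization
    "use_polarity",     // Metric
    "auto",             // Contrast
    "auto");            // MinContrast
```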
Get the clutter parameters of a shape model.
Instance represents: Handle of the model.
Parameter names. Default: "use_clutter"
Parameter values.
Transformation matrix.
Minimum contrast of clutter in the search images.
Region where no clutter should occur.
Get the clutter parameters of a shape model.
Instance represents: Handle of the model.
Parameter names. Default: "use_clutter"
Parameter values.
Transformation matrix.
Minimum contrast of clutter in the search images.
Region where no clutter should occur.
Set the clutter parameters of a shape model.
Instance represents: Handle of the model.
Region where no clutter should occur.
Transformation matrix.
Minimum contrast of clutter in the search images. Default: 128
Parameter names.
Parameter values.
Set the clutter parameters of a shape model.
Instance represents: Handle of the model.
Region where no clutter should occur.
Transformation matrix.
Minimum contrast of clutter in the search images. Default: 128
Parameter names.
Parameter values.
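A sketch of configuring the clutter parameters via the HOperatorSet layer; the signature is an assumption based on the parameter order documented above, and `clutterRegion`, `model`, and `homMat2d` are placeholders.

```csharp
using HalconDotNet;

HObject clutterRegion = null;  // assumed: region where no clutter may occur
HTuple model = null;           // assumed: shape-model handle
HTuple homMat2d = null;        // assumed: transformation relative to the model

HOperatorSet.SetShapeModelClutter(
    clutterRegion,
    model,
    homMat2d,
    128,                // minimum clutter contrast (documented default)
    new HTuple(),       // GenParamName
    new HTuple());      // GenParamValue
```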
Represents an instance of a 3D shape model for 3D matching.
Read a 3D shape model from a file.
Modified instance represents: Handle of the 3D shape model.
File name.
Prepare a 3D object model for matching.
Modified instance represents: Handle of the 3D shape model.
Handle of the 3D object model.
Internal camera parameters.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Prepare a 3D object model for matching.
Modified instance represents: Handle of the 3D shape model.
Handle of the 3D object model.
Internal camera parameters.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Free the memory of a 3D shape model.
Handle of the 3D shape model.
Free the memory of a 3D shape model.
Instance represents: Handle of the 3D shape model.
Deserialize a serialized 3D shape model.
Modified instance represents: Handle of the 3D shape model.
Handle of the serialized item.
Serialize a 3D shape model.
Instance represents: Handle of the 3D shape model.
Handle of the serialized item.
Read a 3D shape model from a file.
Modified instance represents: Handle of the 3D shape model.
File name.
Write a 3D shape model to a file.
Instance represents: Handle of the 3D shape model.
File name.
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference coordinate system of a 3D shape model and vice versa.
Instance represents: Handle of the 3D shape model.
Pose to be transformed in the source system.
Direction of the transformation. Default: "ref_to_model"
Transformed 3D pose in the target system.
Project the edges of a 3D shape model into image coordinates.
Instance represents: Handle of the 3D shape model.
Internal camera parameters.
3D pose of the 3D shape model in the world coordinate system.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Contour representation of the model view.
Project the edges of a 3D shape model into image coordinates.
Instance represents: Handle of the 3D shape model.
Internal camera parameters.
3D pose of the 3D shape model in the world coordinate system.
Remove hidden surfaces? Default: "true"
Smallest face angle for which the edge is displayed. Default: 0.523599
Contour representation of the model view.
Return the contour representation of a 3D shape model view.
Instance represents: Handle of the 3D shape model.
Pyramid level for which the contour representation should be returned. Default: 1
View for which the contour representation should be returned. Default: 1
3D pose of the 3D shape model at the current view.
Contour representation of the model view.
Return the parameters of a 3D shape model.
Instance represents: Handle of the 3D shape model.
Names of the generic parameters that are to be queried for the 3D shape model. Default: "num_levels_max"
Values of the generic parameters.
Return the parameters of a 3D shape model.
Instance represents: Handle of the 3D shape model.
Names of the generic parameters that are to be queried for the 3D shape model. Default: "num_levels_max"
Values of the generic parameters.
Find the best matches of a 3D shape model in an image.
Instance represents: Handle of the 3D shape model.
Input image in which the model should be found.
Minimum score of the instances of the model to be found. Default: 0.7
"Greediness" of the search heuristic (0: safe but slow; 1: fast but matches may be missed). Default: 0.9
Number of pyramid levels used in the matching (and lowest pyramid level to use if $|NumLevels| = 2$). Default: 0
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
6 standard deviations or 36 covariances of the pose parameters.
Score of the found instances of the 3D shape model.
3D pose of the 3D shape model.
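The 3D search documented above can be sketched via `HOperatorSet.FindShapeModel3d`. The exact overload is an assumption; `image` and `shapeModel3dId` are placeholders for a search image and a prepared 3D shape model.

```csharp
using HalconDotNet;

HObject image = null;         // assumed: search image
HTuple shapeModel3dId = null; // assumed: prepared 3D shape-model handle

HTuple pose, covPose, score;
HOperatorSet.FindShapeModel3d(
    image, shapeModel3dId,
    0.7,                // MinScore
    0.9,                // Greediness
    0,                  // NumLevels (0 = all pyramid levels)
    new HTuple(),       // GenParamName
    new HTuple(),       // GenParamValue
    out pose, out covPose, out score);
// pose holds one 7-element 3D pose per match; covPose the
// 6 standard deviations (or 36 covariances) documented above.
```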
Prepare a 3D object model for matching.
Modified instance represents: Handle of the 3D shape model.
Handle of the 3D object model.
Internal camera parameters.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Prepare a 3D object model for matching.
Modified instance represents: Handle of the 3D shape model.
Handle of the 3D object model.
Internal camera parameters.
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without unit). Default: 0
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without unit). Default: 0
Meaning of the rotation values of the reference orientation. Default: "gba"
Minimum longitude of the model views. Default: -0.35
Maximum longitude of the model views. Default: 0.35
Minimum latitude of the model views. Default: -0.35
Maximum latitude of the model views. Default: 0.35
Minimum camera roll angle of the model views. Default: -3.1416
Maximum camera roll angle of the model views. Default: 3.1416
Minimum camera-object-distance of the model views. Default: 0.3
Maximum camera-object-distance of the model views. Default: 0.4
Minimum contrast of the objects in the search images. Default: 10
Names of (optional) parameters for controlling the behavior of the operator. Default: []
Values of the optional generic parameters. Default: []
Represents an instance of the data structure required to perform 3D measurements with the sheet-of-light technique.
Read a sheet-of-light model from a file and create a new model.
Modified instance represents: Handle of the sheet-of-light model.
Name of the sheet-of-light model file. Default: "sheet_of_light_model.solm"
Create a model to perform 3D measurements using the sheet-of-light technique.
Modified instance represents: Handle for using and accessing the sheet-of-light model.
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Create a model to perform 3D measurements using the sheet-of-light technique.
Modified instance represents: Handle for using and accessing the sheet-of-light model.
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Read a sheet-of-light model from a file and create a new model.
Modified instance represents: Handle of the sheet-of-light model.
Name of the sheet-of-light model file. Default: "sheet_of_light_model.solm"
Write a sheet-of-light model to a file.
Instance represents: Handle of the sheet-of-light model.
Name of the sheet-of-light model file. Default: "sheet_of_light_model.solm"
Deserialize a sheet-of-light model.
Modified instance represents: Handle of the sheet-of-light model.
Handle of the serialized item.
Serialize a sheet-of-light model.
Instance represents: Handle of the sheet-of-light model.
Handle of the serialized item.
Calibrate a sheet-of-light setup with a 3D calibration object.
Instance represents: Handle of the sheet-of-light model.
Average back projection error of the optimization.
Get the result of a calibrated measurement performed with the sheet-of-light technique as a 3D object model.
Instance represents: Handle for accessing the sheet-of-light model.
Handle of the resulting 3D object model.
Get the iconic results of a measurement performed with the sheet-of-light technique.
Instance represents: Handle of the sheet-of-light model to be used.
Specify which result of the measurement shall be provided. Default: "disparity"
Desired measurement result.
Get the iconic results of a measurement performed with the sheet-of-light technique.
Instance represents: Handle of the sheet-of-light model to be used.
Specify which result of the measurement shall be provided. Default: "disparity"
Desired measurement result.
Apply the calibration transformations to the input disparity image.
Instance represents: Handle of the sheet-of-light model.
Height or range image to be calibrated.
Set sheet of light profiles by measured disparities.
Instance represents: Handle of the sheet-of-light model.
Disparity image that contains several profiles.
Poses describing the movement of the scene under measurement between the previously processed profile image and the current profile image.
Process the profile image provided as input and store the resulting disparity to the sheet-of-light model.
Instance represents: Handle of the sheet-of-light model.
Input image.
Pose describing the movement of the scene under measurement between the previously processed profile image and the current profile image.
Set selected parameters of the sheet-of-light model.
Instance represents: Handle of the sheet-of-light model.
Name of the model parameter that shall be adjusted for the sheet-of-light model. Default: "method"
Value of the model parameter that shall be adjusted for the sheet-of-light model. Default: "center_of_gravity"
Set selected parameters of the sheet-of-light model.
Instance represents: Handle of the sheet-of-light model.
Name of the model parameter that shall be adjusted for the sheet-of-light model. Default: "method"
Value of the model parameter that shall be adjusted for the sheet-of-light model. Default: "center_of_gravity"
Get the value of a parameter, which has been set in a sheet-of-light model.
Instance represents: Handle of the sheet-of-light model.
Name of the generic parameter that shall be queried. Default: "method"
Value of the model parameter that shall be queried.
For a given sheet-of-light model, get the names of the generic iconic or control parameters that can be used in the different sheet-of-light operators.
Instance represents: Handle of the sheet-of-light model.
Name of the parameter group. Default: "create_model_params"
List containing the names of the supported generic parameters.
Reset a sheet-of-light model.
Instance represents: Handle of the sheet-of-light model.
Delete a sheet-of-light model and free the allocated memory.
Instance represents: Handle of the sheet-of-light model.
Create a model to perform 3D measurements using the sheet-of-light technique.
Modified instance represents: Handle for using and accessing the sheet-of-light model.
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Create a model to perform 3D measurements using the sheet-of-light technique.
Modified instance represents: Handle for using and accessing the sheet-of-light model.
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its smallest enclosing rectangle will be used.
Names of the generic parameters that can be adjusted for the sheet-of-light model. Default: "min_gray"
Values of the generic parameters that can be adjusted for the sheet-of-light model. Default: 50
Represents an instance of a socket connection.
Open a socket and connect it to an accepting socket.
Modified instance represents: Socket number.
Hostname of the computer to connect to. Default: "localhost"
Port number.
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Open a socket and connect it to an accepting socket.
Modified instance represents: Socket number.
Hostname of the computer to connect to. Default: "localhost"
Port number.
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Open a socket that accepts connection requests.
Modified instance represents: Socket number.
Port number. Default: 3000
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Open a socket that accepts connection requests.
Modified instance represents: Socket number.
Port number. Default: 3000
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Receive an image over a socket connection.
Instance represents: Socket number.
Received image.
Send an image over a socket connection.
Instance represents: Socket number.
Image to be sent.
Receive regions over a socket connection.
Instance represents: Socket number.
Received regions.
Send regions over a socket connection.
Instance represents: Socket number.
Regions to be sent.
Receive an XLD object over a socket connection.
Instance represents: Socket number.
Received XLD object.
Send an XLD object over a socket connection.
Instance represents: Socket number.
XLD object to be sent.
Receive a tuple over a socket connection.
Instance represents: Socket number.
Received tuple.
Send a tuple over a socket connection.
Instance represents: Socket number.
Tuple to be sent.
Send a tuple over a socket connection.
Instance represents: Socket number.
Tuple to be sent.
Receive arbitrary data from external devices or applications using a generic socket connection.
Instance represents: Socket number.
Specification of how to convert the data to tuples. Default: "z"
IP address or hostname and network port of the communication partner.
Value (or tuple of values) holding the received and converted data.
Receive arbitrary data from external devices or applications using a generic socket connection.
Instance represents: Socket number.
Specification of how to convert the data to tuples. Default: "z"
IP address or hostname and network port of the communication partner.
Value (or tuple of values) holding the received and converted data.
Send arbitrary data to external devices or applications using a generic socket connection.
Instance represents: Socket number.
Specification of how to convert the data. Default: "z"
Value (or tuple of values) holding the data to send.
IP address or hostname and network port of the communication partner. Default: []
Send arbitrary data to external devices or applications using a generic socket connection.
Instance represents: Socket number.
Specification of how to convert the data. Default: "z"
Value (or tuple of values) holding the data to send.
IP address or hostname and network port of the communication partner. Default: []
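The Format parameter is a HALCON-specific specification of how raw bytes map to tuple values. The general idea, format-string-driven conversion, can be sketched with Python's struct module (the "<iif" layout here is an illustrative assumption, not a HALCON format code):

```python
import struct

# Hypothetical payload layout: two 32-bit little-endian ints followed by a
# 32-bit float, analogous to a format specification telling the receiver
# how to decode the incoming bytes into values.
fmt = "<iif"

raw = struct.pack(fmt, 640, 480, 0.5)           # send side: values -> bytes
width, height, scale = struct.unpack(fmt, raw)  # receive side: bytes -> values

print(width, height, scale)  # 640 480 0.5
```

Both sides must agree on the format string, just as both communication partners must agree on the Format specification used with the send/receive operators.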
Get the value of a socket parameter.
Instance represents: Socket number.
Name of the socket parameter.
Value of the socket parameter.
Get the value of a socket parameter.
Instance represents: Socket number.
Name of the socket parameter.
Value of the socket parameter.
Set a socket parameter.
Instance represents: Socket number.
Name of the socket parameter.
Value of the socket parameter. Default: "on"
Set a socket parameter.
Instance represents: Socket number.
Name of the socket parameter.
Value of the socket parameter. Default: "on"
Determine the HALCON data type of the next socket data.
Instance represents: Socket number.
Data type of next HALCON data.
Get the socket descriptor of a socket used by the operating system.
Instance represents: Socket number.
Socket descriptor used by the operating system.
This operator is inoperable. It had the following function: Close all opened sockets.
Close a socket.
Instance represents: Socket number.
Accept a connection request on a listening socket of the protocol type 'HALCON' or 'TCP'/'TCP4'/'TCP6'.
Instance represents: Socket number of the accepting socket.
Should the operator wait until a connection request arrives? Default: "auto"
Socket number.
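The listen/connect/accept operators above follow the standard TCP pattern. A minimal sketch in plain Python (standard socket module, no HALCON-specific parameters), showing an accepting socket, a connecting socket, and one round trip:

```python
import socket
import threading

def serve_once(listener):
    # Accept one connection request on the listening socket
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # echo back, transformed

# Open a socket that accepts connection requests (cf. the accepting socket above)
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve_once, args=(listener,))
t.start()

# Open a socket and connect it to the accepting socket
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"halcon")
reply = client.recv(1024)
client.close()
t.join()
listener.close()
print(reply)  # b'HALCON'
```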
Open a socket and connect it to an accepting socket.
Modified instance represents: Socket number.
Hostname of the computer to connect to. Default: "localhost"
Port number.
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Open a socket and connect it to an accepting socket.
Modified instance represents: Socket number.
Hostname of the computer to connect to. Default: "localhost"
Port number.
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Open a socket that accepts connection requests.
Modified instance represents: Socket number.
Port number. Default: 3000
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Open a socket that accepts connection requests.
Modified instance represents: Socket number.
Port number. Default: 3000
Names of the generic parameters that can be adjusted for the socket. Default: []
Values of the generic parameters that can be adjusted for the socket. Default: []
Receive a serialized item over a socket connection.
Instance represents: Socket number.
Handle of the serialized item.
Send a serialized item over a socket connection.
Instance represents: Socket number.
Handle of the serialized item.
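Since TCP streams have no message boundaries, transferring a serialized item requires framing; HALCON handles this internally, but the common length-prefix technique can be sketched as follows (plain Python, illustrative only):

```python
import io
import struct

def send_item(stream, payload: bytes) -> None:
    # Prefix the serialized blob with its 4-byte big-endian length,
    # so the receiver knows where the message ends.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def receive_item(stream) -> bytes:
    (length,) = struct.unpack(">I", stream.read(4))
    return stream.read(length)

# Round trip through an in-memory stream standing in for a socket
buf = io.BytesIO()
send_item(buf, b"serialized-surface-model")
buf.seek(0)
item = receive_item(buf)
print(item)  # b'serialized-surface-model'
```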
Represents an instance of a stereo model.
Create a HALCON stereo model.
Modified instance represents: Handle of the stereo model.
Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Create a HALCON stereo model.
Modified instance represents: Handle of the stereo model.
Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Free the memory of a stereo model.
Instance represents: Handle of the stereo model.
Reconstruct 3D points from calibrated multi-view stereo images.
Instance represents: Handle of the stereo model.
Row coordinates of the detected points.
Column coordinates of the detected points.
Covariance matrices of the detected points. Default: []
Indices of the observing cameras.
Indices of the observed world points.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Indices of the reconstructed 3D points.
Reconstruct 3D points from calibrated multi-view stereo images.
Instance represents: Handle of the stereo model.
Row coordinates of the detected points.
Column coordinates of the detected points.
Covariance matrices of the detected points. Default: []
Indices of the observing cameras.
Indices of the observed world points.
X coordinates of the reconstructed 3D points.
Y coordinates of the reconstructed 3D points.
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
Indices of the reconstructed 3D points.
Reconstruct surface from calibrated multi-view stereo images.
Instance represents: Handle of the stereo model.
An image array acquired by the camera setup associated with the stereo model.
Handle to the resulting surface.
Get intermediate iconic results of a stereo reconstruction.
Instance represents: Handle of the stereo model.
Camera indices of the pair ([From, To]).
Name of the iconic result to be returned.
Iconic result.
Get intermediate iconic results of a stereo reconstruction.
Instance represents: Handle of the stereo model.
Camera indices of the pair ([From, To]).
Name of the iconic result to be returned.
Iconic result.
Return the list of image pairs set in a stereo model.
Instance represents: Handle of the stereo model.
Camera indices of the 'to' cameras in the image pairs.
Camera indices of the 'from' cameras in the image pairs.
Specify image pairs to be used for surface stereo reconstruction.
Instance represents: Handle of the stereo model.
Camera indices of the 'from' cameras in the image pairs.
Camera indices of the 'to' cameras in the image pairs.
Get stereo model parameters.
Instance represents: Handle of the stereo model.
Names of the parameters to be queried.
Values of the queried parameters.
Get stereo model parameters.
Instance represents: Handle of the stereo model.
Names of the parameters to be queried.
Values of the queried parameters.
Set stereo model parameters.
Instance represents: Handle of the stereo model.
Names of the parameters to be set.
Values of the parameters to be set.
Set stereo model parameters.
Instance represents: Handle of the stereo model.
Names of the parameters to be set.
Values of the parameters to be set.
Create a HALCON stereo model.
Modified instance represents: Handle of the stereo model.
Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Create a HALCON stereo model.
Modified instance represents: Handle of the stereo model.
Handle to the camera setup model.
Reconstruction method. Default: "surface_pairwise"
Name of the model parameter to be set. Default: []
Value of the model parameter to be set. Default: []
Get intermediate 3D object model of a stereo reconstruction.
Instance represents: Handle of the stereo model.
Names of the model parameters.
Values of the model parameters.
Get intermediate 3D object model of a stereo reconstruction.
Instance represents: Handle of the stereo model.
Names of the model parameters.
Values of the model parameters.
Represents an instance of a structured light model.
Create a structured light model.
Modified instance represents: Handle for using and accessing the structured light model.
The type of the created structured light model. Default: "deflectometry"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Clear a structured light model and free the allocated memory.
Handle of the structured light model.
Clear a structured light model and free the allocated memory.
Instance represents: Handle of the structured light model.
Create a structured light model.
Modified instance represents: Handle for using and accessing the structured light model.
The type of the created structured light model. Default: "deflectometry"
Decode the camera images acquired with a structured light setup.
Instance represents: Handle of the structured light model.
Acquired camera images.
Deserialize a structured light model.
Modified instance represents: Handle of the structured light model.
Handle of the serialized item.
Generate the pattern images to be displayed in a structured light setup.
Instance represents: Handle of the structured light model.
Generated pattern images.
Query parameters of a structured light model.
Instance represents: Handle of the structured light model.
Name of the queried model parameter. Default: "min_stripe_width"
Value of the queried model parameter.
Query parameters of a structured light model.
Instance represents: Handle of the structured light model.
Name of the queried model parameter. Default: "min_stripe_width"
Value of the queried model parameter.
Get (intermediate) iconic results of a structured light model.
Instance represents: Handle of the structured light model.
Name of the iconic result to be returned. Default: "correspondence_image"
Iconic result.
Get (intermediate) iconic results of a structured light model.
Instance represents: Handle of the structured light model.
Name of the iconic result to be returned. Default: "correspondence_image"
Iconic result.
Read a structured light model from a file.
Modified instance represents: Handle of the structured light model.
File name.
Serialize a structured light model.
Instance represents: Handle of the structured light model.
Handle of the serialized item.
Set parameters of a structured light model.
Instance represents: Handle of the structured light model.
Name of the model parameter to be adjusted. Default: "min_stripe_width"
New value of the model parameter. Default: 32
Set parameters of a structured light model.
Instance represents: Handle of the structured light model.
Name of the model parameter to be adjusted. Default: "min_stripe_width"
New value of the model parameter. Default: 32
Write a structured light model to a file.
Instance represents: Handle of the structured light model.
File name.
Represents an instance of a surface matching result.
Get details of a result from surface based matching.
Instance represents: Handle of the surface matching result.
Name of the result property. Default: "pose"
Index of the matching result, starting with 0. Default: 0
Value of the result property.
Get details of a result from surface based matching.
Instance represents: Handle of the surface matching result.
Name of the result property. Default: "pose"
Index of the matching result, starting with 0. Default: 0
Value of the result property.
Free the memory of a surface matching result.
Handle of the surface matching result.
Free the memory of a surface matching result.
Instance represents: Handle of the surface matching result.
Refine the pose of a surface model in a 3D scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene.
Modified instance represents: Handle of the matching result, if enabled in ReturnResultHandle.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene.
Modified instance represents: Handle of the matching result, if enabled in ReturnResultHandle.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene and images.
Images of the scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene and images.
Modified instance represents: Handle of the matching result, if enabled in ReturnResultHandle.
Images of the scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene and in images.
Images of the scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene and in images.
Modified instance represents: Handle of the matching result, if enabled in ReturnResultHandle.
Images of the scene.
Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
3D pose of the surface model in the scene.
Represents an instance of a surface model.
Read a surface model from a file.
Modified instance represents: Handle of the read surface model.
Name of the SFM file.
Create the data structure needed to perform surface-based matching.
Modified instance represents: Handle of the surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Create the data structure needed to perform surface-based matching.
Modified instance represents: Handle of the surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Free the memory of a surface model.
Handle of the surface model.
Free the memory of a surface model.
Instance represents: Handle of the surface model.
Deserialize a surface model.
Modified instance represents: Handle of the surface model.
Handle of the serialized item.
Serialize a surface model.
Instance represents: Handle of the surface model.
Handle of the serialized item.
Read a surface model from a file.
Modified instance represents: Handle of the read surface model.
Name of the SFM file.
Write a surface model to a file.
Instance represents: Handle of the surface model.
File name.
Refine the pose of a surface model in a 3D scene.
Instance represents: Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene.
Instance represents: Handle of the surface model.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene.
Instance represents: Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene.
Instance represents: Handle of the surface model.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
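The sampling parameters above are relative quantities. A quick worked example, assuming an illustrative model diameter of 100 mm and 4000 sampled scene points (both numbers are assumptions, not defaults):

```python
# Illustrative arithmetic for the relative matching parameters.
diameter_mm = 100.0           # assumed diameter of the surface model
rel_sampling_distance = 0.05  # default relative scene sampling distance
key_point_fraction = 0.2      # default fraction of sampled points used as key points

# Absolute distance between sampled scene points
abs_sampling_mm = rel_sampling_distance * diameter_mm  # 5 mm

# If sampling yields 4000 scene points, the key points used for matching:
sampled_scene_points = 4000  # assumed count, depends on the actual scene
key_points = round(sampled_scene_points * key_point_fraction)  # 800

print(abs_sampling_mm, key_points)
```

Smaller relative distances and larger key point fractions increase accuracy at the cost of runtime.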
Return the parameters and properties of a surface model.
Instance represents: Handle of the surface model.
Name of the parameter. Default: "diameter"
Value of the parameter.
Return the parameters and properties of a surface model.
Instance represents: Handle of the surface model.
Name of the parameter. Default: "diameter"
Value of the parameter.
Create the data structure needed to perform surface-based matching.
Modified instance represents: Handle of the surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Create the data structure needed to perform surface-based matching.
Modified instance represents: Handle of the surface model.
Handle of the 3D object model.
Sampling distance relative to the object's diameter. Default: 0.03
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Find the best matches of a surface model in a 3D scene and images.
Instance represents: Handle of the surface model.
Images of the scene.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Find the best matches of a surface model in a 3D scene and images.
Instance represents: Handle of the surface model.
Images of the scene.
Handle of the 3D object model containing the scene.
Scene sampling distance relative to the diameter of the surface model. Default: 0.05
Fraction of sampled scene points used as key points. Default: 0.2
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the surface model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene and in images.
Instance represents: Handle of the surface model.
Images of the scene.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Refine the pose of a surface model in a 3D scene and in images.
Instance represents: Handle of the surface model.
Images of the scene.
Handle of the 3D object model containing the scene.
Initial pose of the surface model in the scene.
Minimum score of the returned poses. Default: 0
Enable returning a result handle in SurfaceMatchingResultID. Default: "false"
Names of the generic parameters. Default: []
Values of the generic parameters. Default: []
Score of the found instances of the model.
Handle of the matching result, if enabled in ReturnResultHandle.
3D pose of the surface model in the scene.
Set parameters and properties of a surface model.
Instance represents: Handle of the surface model.
Name of the parameter. Default: "camera_parameter"
Value of the parameter.
Set parameters and properties of a surface model.
Instance represents: Handle of the surface model.
Name of the parameter. Default: "camera_parameter"
Value of the parameter.
Class grouping system information and manipulation related functionality.
Delaying the execution of the program.
Number of seconds by which the execution of the program will be delayed. Default: 10
Execute a system command.
Command to be called by the system. Default: "ls"
Set HALCON system parameters.
Name of the system parameter to be changed. Default: "init_new_image"
New value of the system parameter. Default: "true"
Set HALCON system parameters.
Name of the system parameter to be changed. Default: "init_new_image"
New value of the system parameter. Default: "true"
Activating and deactivating of HALCON control modes.
Desired control mode. Default: "default"
Activating and deactivating of HALCON control modes.
Desired control mode. Default: "default"
Initialization of the HALCON system.
Default image width (in pixels). Default: 128
Default image height (in pixels). Default: 128
Usual number of channels. Default: 0
Get current value of HALCON system parameters.
Desired system parameter. Default: "init_new_image"
Current value of the system parameter.
Get current value of HALCON system parameters.
Desired system parameter. Default: "init_new_image"
Current value of the system parameter.
State of the HALCON control modes.
Tuple of the currently activated control modes.
Inquiry after the error text of a HALCON error number.
HALCON error code.
Corresponding error message.
Passed Time.
Process time since the program start.
Number of entries in the HALCON database.
Relation of interest of the HALCON database. Default: "object"
Number of tuples in the relation.
Returns the extended error information for the calling thread's last HALCON error.
Extended error code.
Extended error message.
Operator that set the error code.
Query of used modules and the module key.
Key for license manager.
Names of used modules.
Inquiring for possible settings of the HALCON debugging tool.
Corresponding state of the control modes.
Available control modes (see also set_spy).
Control of the HALCON Debugging Tools.
Control mode. Default: "mode"
State of the control mode to be set. Default: "on"
Control of the HALCON Debugging Tools.
Control mode. Default: "mode"
State of the control mode to be set. Default: "on"
Current configuration of the HALCON debugging tool.
Control mode. Default: "mode"
State of the control mode.
Set AOP information for operators.
Operator to set information to Default: ""
Further specific index Default: ""
Further specific address Default: ""
Scope of information Default: "max_threads"
AOP information value
Set AOP information for operators.
Operator to set information to Default: ""
Further specific index Default: ""
Further specific address Default: ""
Scope of information Default: "max_threads"
AOP information value
Return AOP information for operators.
Operator to get information for
Further index stages Default: ["iconic_type","parameter:0"]
Further index values Default: ["byte",""]
Scope of information Default: "max_threads"
Value of information
Return AOP information for operators.
Operator to get information for
Further index stages Default: ["iconic_type","parameter:0"]
Further index values Default: ["byte",""]
Scope of information Default: "max_threads"
Value of information
Query indexing structure of AOP information for operators.
Operator to get information for Default: ""
Further specific index Default: ""
Further specific address Default: ""
Values of next index stage
Name of next index stage
Query indexing structure of AOP information for operators.
Operator to get information for Default: ""
Further specific index Default: ""
Further specific address Default: ""
Values of next index stage
Name of next index stage
Check hardware regarding its potential for automatic operator parallelization.
Operators to check Default: ""
Iconic object types to check Default: ""
Knowledge file name Default: ""
Parameter name Default: "none"
Parameter value Default: "none"
Check hardware regarding its potential for automatic operator parallelization.
Operators to check Default: ""
Iconic object types to check Default: ""
Knowledge file name Default: ""
Parameter name Default: "none"
Parameter value Default: "none"
Write knowledge about hardware dependent behavior of automatic operator parallelization to file.
Name of knowledge file Default: ""
Parameter name Default: "none"
Parameter value Default: "none"
Write knowledge about hardware dependent behavior of automatic operator parallelization to file.
Name of knowledge file Default: ""
Parameter name Default: "none"
Parameter value Default: "none"
Load knowledge about hardware dependent behavior of automatic operator parallelization.
Name of knowledge file Default: ""
Parameter name Default: "none"
Parameter value Default: "none"
Updated operators
Knowledge attributes
Load knowledge about hardware-dependent behavior of automatic operator parallelization.
Name of knowledge file Default: ""
Parameter name Default: "none"
Parameter value Default: "none"
Updated operators
Knowledge attributes
Specify a window type.
Name of the window type to be set. Default: "X-Window"
Get window characteristics.
Name of the attribute that should be returned.
Attribute value.
Set window characteristics.
Name of the attribute that should be modified.
Value of the attribute that should be set.
Set window characteristics.
Name of the attribute that should be modified.
Value of the attribute that should be set.
Query all available window types.
Names of available window types.
Return the HALCON thread ID of the current thread.
ID representing the current thread.
Get current value of system information without requiring a license.
Desired system parameter. Default: "available_parameters"
Current value of the system parameter.
Get current value of system information without requiring a license.
Desired system parameter. Default: "available_parameters"
Current value of the system parameter.
Attempt to interrupt an operator running in a different thread.
Thread that runs the operator to interrupt.
Interruption mode. Default: "cancel"
Represents an instance of a template for gray value matching.
Preparing a pattern for template matching with rotation.
Modified instance represents: Template number.
Input image whose domain will be processed for the pattern matching.
Maximal number of pyramid levels. Default: 4
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Step rate (angle precision) of matching. Default: 0.0982
Kind of optimizing. Default: "sort"
Kind of gray values. Default: "original"
Preparing a pattern for template matching.
Modified instance represents: Template number.
Input image whose domain will be processed for the pattern matching.
Not yet in use. Default: 255
Maximal number of pyramid levels. Default: 4
Kind of optimizing. Default: "sort"
Kind of gray values. Default: "original"
Reading a template from file.
Modified instance represents: Template number.
File name.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Preparing a pattern for template matching with rotation.
Modified instance represents: Template number.
Input image whose domain will be processed for the pattern matching.
Maximal number of pyramid levels. Default: 4
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Step rate (angle precision) of matching. Default: 0.0982
Kind of optimizing. Default: "sort"
Kind of gray values. Default: "original"
Preparing a pattern for template matching.
Modified instance represents: Template number.
Input image whose domain will be processed for the pattern matching.
Not yet in use. Default: 255
Maximal number of pyramid levels. Default: 4
Kind of optimizing. Default: "sort"
Kind of gray values. Default: "original"
Serialize a template.
Instance represents: Handle of the template.
Handle of the serialized item.
Deserialize a serialized template.
Modified instance represents: Template number.
Handle of the serialized item.
Writing a template to file.
Instance represents: Template number.
File name.
Reading a template from file.
Modified instance represents: Template number.
File name.
Deallocation of the memory of a template.
Instance represents: Template number.
Gray value offset for template.
Instance represents: Template number.
Offset of gray values. Default: 0
Define reference position for a matching template.
Instance represents: Template number.
Reference position of template (row).
Reference position of template (column).
Adapting a template to the size of an image.
Instance represents: Template number.
Image which determines the size of the later matching.
Searching all good gray value matches in a pyramid.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 30.0
Number of levels in the pyramid. Default: 3
All points which have an error below a certain threshold.
Searching all good gray value matches in a pyramid.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 30.0
Number of levels in the pyramid. Default: 3
All points which have an error below a certain threshold.
Searching the best gray value matches in a pre-generated pyramid.
Instance represents: Template number.
Image pyramid inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Resolution level up to which the method "best match" is used. Default: "original"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pre-generated pyramid.
Instance represents: Template number.
Image pyramid inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Resolution level up to which the method "best match" is used. Default: "original"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pyramid.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 4
Resolution level up to which the method "best match" is used. Default: 2
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching the best gray value matches in a pyramid.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 4
Resolution level up to which the method "best match" is used. Default: 2
Row position of the best match.
Column position of the best match.
Average divergence of the gray values in the best match.
Searching all good matches of a template and an image.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximal average difference of the gray values. Default: 20.0
All points whose error lies below a certain threshold.
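All of the gray value matching operators above rate a candidate position by the average gray-value difference between the template and the image window, and accept it when that error stays below MaxError. A plain-Python sketch of this error measure (illustrative only; `avg_gray_diff` is a hypothetical helper, not a halcondotnet call):

```python
def avg_gray_diff(template, window):
    """Mean absolute gray-value difference between a template
    and an equally sized image window (both 2-D lists)."""
    total, count = 0, 0
    for t_row, w_row in zip(template, window):
        for t, w in zip(t_row, w_row):
            total += abs(t - w)
            count += 1
    return total / count

# A candidate position counts as a match when the error stays
# below MaxError (e.g. the default 30.0 above).
template = [[10, 20], [30, 40]]
window = [[12, 18], [33, 41]]
error = avg_gray_diff(template, window)  # (2+2+3+1)/4 = 2.0
print(error, error <= 30.0)
```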
Searching the best matching of a template and a pyramid with rotation.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 40.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and a pyramid with rotation.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 40.0
Subpixel accuracy in case of 'true'. Default: "false"
Number of the used resolution levels. Default: 3
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image with rotation.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image with rotation.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Smallest rotation of the pattern. Default: -0.39
Maximum positive extension of AngleStart. Default: 0.79
Maximum average difference of the gray values. Default: 30.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Rotation angle of pattern.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximum average difference of the gray values. Default: 20.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values of the best match.
Searching the best matching of a template and an image.
Instance represents: Template number.
Input image inside of which the pattern has to be found.
Maximum average difference of the gray values. Default: 20.0
Subpixel accuracy in case of 'true'. Default: "false"
Row position of the best match.
Column position of the best match.
Average divergence of the gray values of the best match.
Represents an instance of a text model for text segmentation.
Create a text model.
Modified instance represents: New text model.
The mode of the text model. Default: "auto"
OCR Classifier. Default: "Universal_Rej.occ"
Create a text model.
Modified instance represents: New text model.
The mode of the text model. Default: "auto"
OCR Classifier. Default: "Universal_Rej.occ"
Create a text model.
Modified instance represents: New text model.
Find text in an image.
Instance represents: Text model specifying the text to be segmented.
Input image.
Result of the segmentation.
Query parameters of a text model.
Instance represents: Text model.
Parameters to be queried. Default: "min_contrast"
Values of Parameters.
Set parameters of a text model.
Instance represents: Text model.
Names of the parameters to be set. Default: "min_contrast"
Values of the parameters to be set. Default: 10
Set parameters of a text model.
Instance represents: Text model.
Names of the parameters to be set. Default: "min_contrast"
Values of the parameters to be set. Default: 10
Clear a text model.
Text model to be cleared.
Clear a text model.
Instance represents: Text model to be cleared.
Create a text model.
Modified instance represents: New text model.
The mode of the text model. Default: "auto"
OCR Classifier. Default: "Universal_Rej.occ"
Create a text model.
Modified instance represents: New text model.
The mode of the text model. Default: "auto"
OCR Classifier. Default: "Universal_Rej.occ"
Create a text model.
Modified instance represents: New text model.
Represents an instance of a text segmentation result.
Find text in an image.
Modified instance represents: Result of the segmentation.
Input image.
Text model specifying the text to be segmented.
Clear a text result.
Text result to be cleared.
Clear a text result.
Instance represents: Text result to be cleared.
Query an iconic value of a text segmentation result.
Instance represents: Text result.
Name of the result to be returned. Default: "all_lines"
Returned result.
Query an iconic value of a text segmentation result.
Instance represents: Text result.
Name of the result to be returned. Default: "all_lines"
Returned result.
Query a control value of a text segmentation result.
Instance represents: Text result.
Name of the result to be returned. Default: "class"
Value of ResultName.
Query a control value of a text segmentation result.
Instance represents: Text result.
Name of the result to be returned. Default: "class"
Value of ResultName.
Find text in an image.
Modified instance represents: Result of the segmentation.
Input image.
Text model specifying the text to be segmented.
Represents an instance of a texture model for texture inspection.
Create a texture inspection model.
Modified instance represents: Handle for using and accessing the texture inspection model.
The type of the created texture inspection model. Default: "basic"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Add training images to the texture inspection model.
Instance represents: Handle of the texture inspection model.
Image of flawless texture.
Indices of the images that have been added to the texture inspection model.
Inspection of the texture within an image.
Instance represents: Handle of the texture inspection model.
Image of the texture to be inspected.
Handle of the inspection results.
Novelty regions.
Clear a texture inspection model and free the allocated memory.
Handle of the texture inspection model.
Clear a texture inspection model and free the allocated memory.
Instance represents: Handle of the texture inspection model.
Create a texture inspection model.
Modified instance represents: Handle for using and accessing the texture inspection model.
The type of the created texture inspection model. Default: "basic"
Deserialize a serialized texture inspection model.
Modified instance represents: Handle of the texture inspection model.
Handle of the serialized item.
Get the training images contained in a texture inspection model.
Instance represents: Handle of the texture inspection model.
Training images contained in the texture inspection model.
Query parameters of a texture inspection model.
Instance represents: Handle of the texture inspection model.
Name of the queried model parameter. Default: "novelty_threshold"
Value of the queried model parameter.
Query parameters of a texture inspection model.
Instance represents: Handle of the texture inspection model.
Name of the queried model parameter. Default: "novelty_threshold"
Value of the queried model parameter.
Read a texture inspection model from a file.
Modified instance represents: Handle of the texture inspection model.
File name.
Clear all or a user-defined subset of the images of a texture inspection model.
Handle of the texture inspection model.
Indices of the images to be deleted from the texture inspection model.
Indices of the images that remain in the texture inspection model.
Clear all or a user-defined subset of the images of a texture inspection model.
Instance represents: Handle of the texture inspection model.
Indices of the images to be deleted from the texture inspection model.
Indices of the images that remain in the texture inspection model.
Serialize a texture inspection model.
Instance represents: Handle of the texture inspection model.
Handle of the serialized item.
Set parameters of a texture inspection model.
Instance represents: Handle of the texture inspection model.
Name of the model parameter to be adjusted. Default: "gen_result_handle"
New value of the model parameter. Default: "true"
Set parameters of a texture inspection model.
Instance represents: Handle of the texture inspection model.
Name of the model parameter to be adjusted. Default: "gen_result_handle"
New value of the model parameter. Default: "true"
Train a texture inspection model.
Instance represents: Handle of the texture inspection model.
Write a texture inspection model to a file.
Instance represents: Handle of the texture inspection model.
File name.
Represents an instance of a texture inspection result.
Inspection of the texture within an image.
Modified instance represents: Handle of the inspection results.
Image of the texture to be inspected.
Novelty regions.
Handle of the texture inspection model.
Add training images to the texture inspection model.
Image of flawless texture.
Handle of the texture inspection model.
Indices of the images that have been added to the texture inspection model.
Inspection of the texture within an image.
Modified instance represents: Handle of the inspection results.
Image of the texture to be inspected.
Handle of the texture inspection model.
Novelty regions.
Clear a texture inspection result handle and free the allocated memory.
Handle of the texture inspection results.
Clear a texture inspection result handle and free the allocated memory.
Instance represents: Handle of the texture inspection results.
Get the training images contained in a texture inspection model.
Handle of the texture inspection model.
Training images contained in the texture inspection model.
Query iconic results of a texture inspection.
Instance represents: Handle of the texture inspection result.
Name of the iconic object to be returned. Default: "novelty_region"
Returned iconic object.
Query iconic results of a texture inspection.
Instance represents: Handle of the texture inspection result.
Name of the iconic object to be returned. Default: "novelty_region"
Returned iconic object.
Train a texture inspection model.
Handle of the texture inspection model.
Represents an instance of a variation model.
Read a variation model from a file.
Modified instance represents: ID of the variation model.
File name.
Create a variation model for image comparison.
Modified instance represents: ID of the variation model.
Width of the images to be compared. Default: 640
Height of the images to be compared. Default: 480
Type of the images to be compared. Default: "byte"
Method used for computing the variation model. Default: "standard"
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Deserialize a variation model.
Modified instance represents: ID of the variation model.
Handle of the serialized item.
Serialize a variation model.
Instance represents: ID of the variation model.
Handle of the serialized item.
Read a variation model from a file.
Modified instance represents: ID of the variation model.
File name.
Write a variation model to a file.
Instance represents: ID of the variation model.
File name.
Return the threshold images used for image comparison by a variation model.
Instance represents: ID of the variation model.
Threshold image for the upper threshold.
Threshold image for the lower threshold.
Return the images used for image comparison by a variation model.
Instance represents: ID of the variation model.
Variation image of the trained object.
Image of the trained object.
Compare an image to a variation model.
Instance represents: ID of the variation model.
Image of the object to be compared.
Method used for comparing the variation model. Default: "absolute"
Region containing the points that differ substantially from the model.
Compare an image to a variation model.
Instance represents: ID of the variation model.
Image of the object to be compared.
Region containing the points that differ substantially from the model.
Prepare a variation model for comparison with an image.
Instance represents: ID of the variation model.
Reference image of the object.
Variation image of the object.
Absolute minimum threshold for the differences between the image and the variation model. Default: 10
Threshold for the differences based on the variation of the variation model. Default: 2
Prepare a variation model for comparison with an image.
Instance represents: ID of the variation model.
Reference image of the object.
Variation image of the object.
Absolute minimum threshold for the differences between the image and the variation model. Default: 10
Threshold for the differences based on the variation of the variation model. Default: 2
Prepare a variation model for comparison with an image.
Instance represents: ID of the variation model.
Absolute minimum threshold for the differences between the image and the variation model. Default: 10
Threshold for the differences based on the variation of the variation model. Default: 2
Prepare a variation model for comparison with an image.
Instance represents: ID of the variation model.
Absolute minimum threshold for the differences between the image and the variation model. Default: 10
Threshold for the differences based on the variation of the variation model. Default: 2
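Judging from the parameter descriptions above, preparation combines the reference image and the variation image into per-pixel upper and lower threshold images: the allowed deviation at each pixel is at least AbsThreshold and grows with the local variation. A plain-Python sketch of that idea (the exact per-pixel formula is an assumption, not HALCON's implementation; `prepare_thresholds` is a hypothetical helper):

```python
def prepare_thresholds(reference, variation, abs_threshold=10, var_threshold=2):
    """Build upper/lower threshold images: the tolerated deviation at
    each pixel is the larger of the absolute threshold and the scaled
    local variation (sketch of the concept only)."""
    upper, lower = [], []
    for ref_row, var_row in zip(reference, variation):
        up_row, lo_row = [], []
        for r, v in zip(ref_row, var_row):
            dev = max(abs_threshold, var_threshold * v)
            up_row.append(r + dev)
            lo_row.append(r - dev)
        upper.append(up_row)
        lower.append(lo_row)
    return upper, lower

# Low variation: the absolute threshold dominates; high variation widens the band.
up, lo = prepare_thresholds([[100, 100]], [[1, 20]])
print(up, lo)  # [[110, 140]] [[90, 60]]
```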
Train a variation model.
Instance represents: ID of the variation model.
Images of the object to be trained.
Free the memory of a variation model.
Instance represents: ID of the variation model.
Free the memory of the training data of a variation model.
Instance represents: ID of the variation model.
Create a variation model for image comparison.
Modified instance represents: ID of the variation model.
Width of the images to be compared. Default: 640
Height of the images to be compared. Default: 480
Type of the images to be compared. Default: "byte"
Method used for computing the variation model. Default: "standard"
Provides tuple functionality.
The class HTuple represents HALCON tuples (control parameter values)
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Compute the union set of two input tuples.
Instance represents: Input tuple.
Input tuple.
The union set of two input tuples.
Compute the intersection set of two input tuples.
Instance represents: Input tuple.
Input tuple.
The intersection set of two input tuples.
Compute the difference set of two input tuples.
Instance represents: Input tuple.
Input tuple.
The difference set of two input tuples.
Compute the symmetric difference set of two input tuples.
Instance represents: Input tuple.
Input tuple.
The symmetric difference set of two input tuples.
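The four tuple set operations above can be mimicked with Python sets (a sketch of the semantics only; HALCON's result ordering and duplicate handling may differ):

```python
a = [1, 2, 2, 3, 5]
b = [2, 3, 4]

union = sorted(set(a) | set(b))         # TupleUnion
intersection = sorted(set(a) & set(b))  # TupleIntersection
difference = sorted(set(a) - set(b))    # TupleDifference
sym_diff = sorted(set(a) ^ set(b))      # TupleSymmDiff

print(union, intersection, difference, sym_diff)
# [1, 2, 3, 4, 5] [2, 3] [1, 5] [1, 4, 5]
```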
Test whether the types of the elements of a tuple are of type string.
Instance represents: Input tuple.
Are the elements of the input tuple of type string?
Test whether the types of the elements of a tuple are of type real.
Instance represents: Input tuple.
Are the elements of the input tuple of type real?
Test whether the types of the elements of a tuple are of type integer.
Instance represents: Input tuple.
Are the elements of the input tuple of type integer?
Return the types of the elements of a tuple.
Instance represents: Input tuple.
Types of the elements of the input tuple as integer values.
Test whether a tuple is of type mixed.
Instance represents: Input tuple.
Is the input tuple of type mixed?
Test if the internal representation of a tuple is of type string.
Instance represents: Input tuple.
Is the input tuple of type string?
Test if the internal representation of a tuple is of type real.
Instance represents: Input tuple.
Is the input tuple of type real?
Test if the internal representation of a tuple is of type integer.
Instance represents: Input tuple.
Is the input tuple of type integer?
Return the type of a tuple.
Instance represents: Input tuple.
Type of the input tuple as an integer number.
Calculate the value distribution of a tuple within a certain value range.
Instance represents: Input tuple.
Minimum value.
Maximum value.
Number of bins.
Bin size.
Histogram to be calculated.
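The value-distribution operator divides the range [Min, Max] into equally sized bins and counts the elements falling into each. A plain-Python sketch (`histo_range` is a hypothetical helper; the handling of values exactly at Max is one possible convention):

```python
def histo_range(data, lo, hi, num_bins):
    """Count how many values fall into each of num_bins equal-width
    bins spanning [lo, hi]; values outside the range are ignored."""
    bin_size = (hi - lo) / num_bins
    histo = [0] * num_bins
    for x in data:
        if lo <= x <= hi:
            # clamp x == hi into the last bin (one possible convention)
            idx = min(int((x - lo) / bin_size), num_bins - 1)
            histo[idx] += 1
    return histo, bin_size

histo, bin_size = histo_range([0.1, 0.4, 0.5, 0.9, 1.5], 0.0, 1.0, 2)
print(histo, bin_size)  # [2, 2] 0.5  (1.5 lies outside the range)
```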
Select tuple elements matching a regular expression.
Instance represents: Input strings to match.
Regular expression. Default: ".*"
Matching strings.
Test if a string matches a regular expression.
Instance represents: Input strings to match.
Regular expression. Default: ".*"
Number of matching strings.
Replace a substring using regular expressions.
Instance represents: Input strings to process.
Regular expression. Default: ".*"
Replacement expression.
Processed strings.
Extract substrings using regular expressions.
Instance represents: Input strings to match.
Regular expression. Default: ".*"
Found matches.
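The four regular-expression operators above map closely onto Python's `re` module (an illustrative sketch, not halcondotnet code):

```python
import re

strings = ["img_001.png", "notes.txt", "img_002.png"]
pattern = r"img_(\d+)\.png"

# TupleRegexpSelect: keep only the matching strings
selected = [s for s in strings if re.fullmatch(pattern, s)]

# TupleRegexpTest: number of matching strings
num_matching = len(selected)

# TupleRegexpReplace: substitute using a replacement expression
replaced = [re.sub(pattern, r"frame_\1.png", s) for s in strings]

# TupleRegexpMatch: extract the captured substring of each match
matches = [m.group(1) for s in strings if (m := re.fullmatch(pattern, s))]

print(selected, num_matching, replaced, matches)
```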
Return a tuple of random numbers between 0 and 1.
Length of tuple to generate.
Tuple of random numbers.
Return the number of elements of a tuple.
Instance represents: Input tuple.
Number of elements of input tuple.
Calculate the sign of a tuple.
Instance represents: Input tuple.
Signs of the input tuple as integer numbers.
Calculate the elementwise maximum of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Elementwise maximum of the input tuples.
Calculate the elementwise minimum of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Elementwise minimum of the input tuples.
Return the maximal element of a tuple.
Instance represents: Input tuple.
Maximal element of the input tuple elements.
Return the minimal element of a tuple.
Instance represents: Input tuple.
Minimal element of the input tuple elements.
Calculate the cumulative sums of a tuple.
Instance represents: Input tuple.
Cumulative sum of the corresponding tuple elements.
Select the element of rank n of a tuple.
Instance represents: Input tuple.
Rank of the element to select.
Selected tuple element.
Return the median of the elements of a tuple.
Instance represents: Input tuple.
Median of the tuple elements.
Return the sum of all elements of a tuple.
Instance represents: Input tuple.
Sum of tuple elements.
Return the mean value of a tuple of numbers.
Instance represents: Input tuple.
Mean value of tuple elements.
Return the standard deviation of the elements of a tuple.
Instance represents: Input tuple.
Standard deviation of tuple elements.
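The reduction operators above (sum, mean, median, cumulative sums) are straightforward to sketch in plain Python; the standard deviation is omitted here because its normalization (n vs. n-1) is not stated in these descriptions:

```python
data = [4.0, 1.0, 3.0, 2.0]

total = sum(data)          # TupleSum  -> 10.0
mean = total / len(data)   # TupleMean -> 2.5

# TupleMedian: middle element of the sorted tuple (for even-length
# tuples the choice of middle element is a convention; the upper
# middle is used here)
s = sorted(data)
median = s[len(s) // 2]    # -> 3.0

# TupleCumul: running sums of the elements
cumul = []
running = 0.0
for x in data:
    running += x
    cumul.append(running)  # -> [4.0, 5.0, 8.0, 10.0]

print(total, mean, median, cumul)
```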
Discard all but one of successive identical elements of a tuple.
Instance represents: Input tuple.
Tuple without successive identical elements.
Return the index of the last occurrence of a tuple within another tuple.
Instance represents: Input tuple to examine.
Input tuple with values to find.
Index of the last occurrence of the values to find.
Return the index of the first occurrence of a tuple within another tuple.
Instance represents: Input tuple to examine.
Input tuple with values to find.
Index of the first occurrence of the values to find.
Return the indices of all occurrences of a tuple within another tuple.
Instance represents: Input tuple to examine.
Input tuple with values to find.
Indices of the occurrences of the values to find in the tuple to examine.
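The find operators above search for one tuple as a contiguous subsequence of another. A plain-Python sketch (`find_all` is a hypothetical helper; the -1 not-found convention is an assumption):

```python
def find_all(haystack, needle):
    """Indices of every occurrence of needle as a contiguous
    subsequence of haystack; empty list if absent."""
    n = len(needle)
    return [i for i in range(len(haystack) - n + 1)
            if haystack[i:i + n] == needle]

hay = [7, 1, 2, 7, 1, 2]
indices = find_all(hay, [1, 2])          # TupleFind       -> [1, 4]
first = indices[0] if indices else -1    # TupleFindFirst  -> 1
last = indices[-1] if indices else -1    # TupleFindLast   -> 4
print(indices, first, last)
```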
Sort the elements of a tuple and return the indices of the sorted tuple.
Instance represents: Input tuple.
Sorted tuple.
Sort the elements of a tuple in ascending order.
Instance represents: Input tuple.
Sorted tuple.
Invert a tuple.
Instance represents: Input tuple.
Inverted input tuple.
Concatenate two tuples to a new one.
Instance represents: Input tuple 1.
Input tuple 2.
Concatenation of input tuples.
Select several elements of a tuple.
Instance represents: Input tuple.
Index of first element to select.
Index of last element to select.
Selected tuple elements.
Select all elements from index "n" to the end of a tuple.
Instance represents: Input tuple.
Index of the first element to select.
Selected tuple elements.
Select the first elements of a tuple up to the index "n".
Instance represents: Input tuple.
Index of the last element to select.
Selected tuple elements.
Insert one or more elements into a tuple at the specified index.
Instance represents: Input tuple.
Start index of elements to be inserted.
Element(s) to insert at index.
Tuple with inserted elements.
Replace one or more elements of a tuple.
Instance represents: Input tuple.
Index/Indices of elements to be replaced.
Element(s) to replace.
Tuple with replaced elements.
Remove elements from a tuple.
Instance represents: Input tuple.
Indices of the elements to remove.
Reduced tuple.
Select the elements of a tuple specified by a mask.
Instance represents: Input tuple.
Mask whose entries greater than 0 specify the elements to select.
Selected tuple elements.
Select single elements of a tuple.
Instance represents: Input tuple.
Indices of the elements to select.
Selected tuple elements.
Select single character or bit from a tuple.
Instance represents: Input tuple.
Position of character or bit to select.
Tuple containing the selected characters and bits.
Generate a tuple with a sequence of equidistant values.
Start value of the tuple.
Maximum value for the last entry.
Increment value.
The resulting sequence.
Generate a tuple of a specific length and initialize its elements.
Length of tuple to generate.
Constant for initializing the tuple elements.
New tuple.
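The two tuple generators above can be sketched in plain Python (hypothetical helper names; note that, per the descriptions, the end value is a maximum, so the sequence stops at or before it):

```python
def gen_sequence(start, end, step):
    """Equidistant values start, start+step, ... not exceeding end
    (sketch of TupleGenSequence)."""
    values, x = [], start
    while x <= end:
        values.append(x)
        x += step
    return values

def gen_const(length, value):
    """Tuple of the given length with every element set to value
    (sketch of TupleGenConst)."""
    return [value] * length

print(gen_sequence(0, 10, 4))  # [0, 4, 8] -- 12 would exceed the maximum
print(gen_const(3, "x"))       # ['x', 'x', 'x']
```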
Read one or more environment variables.
Instance represents: Tuple containing name(s) of the environment variable(s).
Content of the environment variable(s).
Split strings into substrings using predefined separator symbol(s).
Instance represents: Input tuple with string(s) to split.
Input tuple with separator symbol(s).
Substrings after splitting the input strings.
Cut characters from position "n1" through "n2" out of a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with start position(s) "n1".
Input tuple with end position(s) "n2".
Characters of the string(s) from position "n1" to "n2".
Cut all characters starting at position "n" out of a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with position(s) "n".
The last characters of the string(s) starting at position "n".
Cut the first characters up to position "n" out of a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with position(s) "n".
The first characters of the string(s) up to position "n".
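The split and cut operators above can be sketched with Python string slicing, assuming (as the descriptions suggest) 0-based, inclusive positions; HALCON's exact inclusivity conventions are an assumption here:

```python
import re

# TupleSplit: any of the separator symbols splits the string
parts = re.split("[,;]", "a,b;c")  # ['a', 'b', 'c']

name = "image_042.png"
# TupleStrstrSubstr-style cut: characters from n1 through n2 inclusive
sub = name[6:8 + 1]    # '042'
# TupleStrLastN: all characters starting at position n
tail = name[9:]        # '.png'
# TupleStrFirstN: the first characters up to position n inclusive
head = name[:5 + 1]    # 'image_'
print(parts, sub, tail, head)
```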
Backward search for characters within a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with character(s) to search.
Position of searched character(s) within the string(s).
Forward search for characters within a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with character(s) to search.
Position of searched character(s) within the string(s).
Backward search for strings within a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with string(s) to search.
Position of searched string(s) within the examined string(s).
Forward search for strings within a string tuple.
Instance represents: Input tuple with string(s) to examine.
Input tuple with string(s) to search.
Position of searched string(s) within the examined string(s).
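Python's `str.find`/`str.rfind` conveniently mirror the forward/backward search operators above (position of the first/last occurrence); whether the not-found value matches HALCON's convention is an assumption:

```python
text = "abcabc"

# forward / backward search for a character (TupleStrchr / TupleStrrchr)
first_b = text.find("b")    # 1
last_b = text.rfind("b")    # 4

# forward / backward search for a string (TupleStrstr / TupleStrrstr)
first_bc = text.find("bc")  # 1
missing = text.find("xyz")  # -1 when the string does not occur

print(first_b, last_b, first_bc, missing)
```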
Determine the length of every string within a tuple of strings.
Instance represents: Input tuple.
Lengths of the single strings of the input tuple.
Test whether a tuple is elementwise less than or equal to another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is elementwise less than another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is elementwise greater than or equal to another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is elementwise greater than another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are elementwise not equal.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are elementwise equal.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is less than or equal to another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is less than another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is greater than or equal to another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether a tuple is greater than another tuple.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are not equal.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
Test whether two tuples are equal.
Instance represents: Input tuple 1.
Input tuple 2.
Result of the comparison of the input tuples.
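The difference between the elementwise comparisons and the plain (whole-tuple) comparisons above can be sketched in Python. This is an illustrative analogy with hypothetical helper names, not the HALCON API: the elementwise variants return one 0/1 result per position, while the plain variants reduce the comparison to a single 0 or 1.

```python
# Illustrative analogy for the tuple comparison operations above
# (hypothetical helper names, not HALCON code).

def equal_elem(t1, t2):
    """Elementwise equality: one 0/1 result per position (equal lengths assumed)."""
    assert len(t1) == len(t2)
    return [int(a == b) for a, b in zip(t1, t2)]

def equal(t1, t2):
    """Whole-tuple equality: a single 0 or 1."""
    return int(t1 == t2)

print(equal_elem([1, 2, 3], [1, 5, 3]))  # [1, 0, 1]
print(equal([1, 2, 3], [1, 5, 3]))       # 0
print(equal([1, 2, 3], [1, 2, 3]))       # 1
```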
Compute the logical not of a tuple.
Instance represents: Input tuple.
Logical not of the input tuple.
Compute the logical exclusive or of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Logical exclusive or of the input tuples.
Compute the logical or of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Logical or of the input tuples.
Compute the logical and of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Logical and of the input tuples.
Compute the bitwise not of a tuple.
Instance represents: Input tuple.
Bitwise not of the input tuple.
Compute the bitwise exclusive or of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Bitwise exclusive or of the input tuples.
Compute the bitwise or of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Bitwise or of the input tuples.
Compute the bitwise and of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Bitwise and of the input tuples.
Shift a tuple bitwise to the right.
Instance represents: Input tuple.
Number of places to shift the input tuple.
Shifted input tuple.
Shift a tuple bitwise to the left.
Instance represents: Input tuple.
Number of places to shift the input tuple.
Shifted input tuple.
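The elementwise bit shifts above can be pictured with a short Python analogy, assuming C-like shift semantics (helper names are hypothetical, not the HALCON API):

```python
def lsh(t, places):
    """Shift every integer in the tuple left by the given number of places."""
    return [v << places for v in t]

def rsh(t, places):
    """Shift every integer in the tuple right by the given number of places."""
    return [v >> places for v in t]

print(lsh([1, 3], 2))  # [4, 12]
print(rsh([8, 5], 1))  # [4, 2]
```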
Convert a tuple of integer numbers into strings.
Instance represents: Input tuple with integer numbers.
Output tuple with strings that are separated by the number 0.
Convert a tuple of strings into a tuple of integer numbers.
Instance represents: Input tuple with strings.
Output tuple with the Unicode character codes or ANSI codes of the input string.
Convert a tuple of integer numbers into strings.
Instance represents: Input tuple with Unicode character codes or ANSI codes.
Output tuple with strings built from the character codes in the input tuple.
Convert a tuple of strings of length 1 into a tuple of integer numbers.
Instance represents: Input tuple with strings of length 1.
Output tuple with Unicode character codes or ANSI codes of the characters passed in the input tuple.
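The character-code conversions above behave much like Python's chr/ord applied elementwise. This is an analogy only; the exact encoding handling in HALCON may differ.

```python
def chr_elem(codes):
    """Character codes -> one-character strings, elementwise."""
    return [chr(c) for c in codes]

def ord_elem(chars):
    """One-character strings -> character codes, elementwise."""
    return [ord(s) for s in chars]

print(chr_elem([72, 105]))   # ['H', 'i']
print(ord_elem(['H', 'i']))  # [72, 105]
```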
Convert a tuple into a tuple of strings.
Instance represents: Input tuple.
Format string.
Input tuple converted to strings.
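Converting a tuple to strings via a format string can be pictured with printf-style conversion specifiers, as in this Python analogy (the exact format-string syntax accepted by HALCON may differ; see the operator reference):

```python
def to_strings(t, fmt):
    """Format every element with a printf-style conversion specifier."""
    return [fmt % v for v in t]

print(to_strings([3.14159, 2.71828], "%.2f"))  # ['3.14', '2.72']
print(to_strings([7, 42], "%04d"))             # ['0007', '0042']
```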
Check whether a tuple (of strings) represents numbers.
Instance represents: Input tuple.
Tuple of boolean values.
Convert a tuple (of strings) into a tuple of numbers.
Instance represents: Input tuple.
Input tuple as numbers.
Convert a tuple into a tuple of integer numbers.
Instance represents: Input tuple.
Result of the rounding.
Convert a tuple into a tuple of integer numbers.
Instance represents: Input tuple.
Result of the conversion into integer numbers.
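The two integer conversions above differ in how they map fractional values: rounding picks the nearest integer, while the plain integer conversion truncates toward zero. A Python sketch of that difference (the tie-breaking behavior shown is an assumption; consult the HALCON reference for exact semantics):

```python
def round_elem(t):
    """Round to the nearest integer (half away from zero, assumed here)."""
    return [int(v + 0.5) if v >= 0 else -int(-v + 0.5) for v in t]

def int_elem(t):
    """Truncate toward zero."""
    return [int(v) for v in t]

print(round_elem([1.7, -1.7]))  # [2, -2]
print(int_elem([1.7, -1.7]))    # [1, -1]
```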
Convert a tuple into a tuple of floating point numbers.
Instance represents: Input tuple.
Input tuple as floating point numbers.
Calculate the ldexp function of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Ldexp function of the input tuples.
Calculate the remainder of the floating point division of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Remainder of the division of the input tuples.
Calculate the remainder of the integer division of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Remainder of the division of the input tuples.
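Two of the operations above map directly onto Python's math module, applied elementwise: ldexp computes t1 * 2**t2, and the floating point remainder follows C semantics, where the result takes the sign of the dividend. An illustrative analogy, not HALCON code:

```python
import math

def ldexp_elem(t1, t2):
    """Elementwise t1 * 2**t2."""
    return [math.ldexp(a, b) for a, b in zip(t1, t2)]

def fmod_elem(t1, t2):
    """Elementwise C-style floating point remainder (sign of the dividend)."""
    return [math.fmod(a, b) for a, b in zip(t1, t2)]

print(ldexp_elem([3.0, 1.5], [2, 3]))      # [12.0, 12.0]
print(fmod_elem([7.5, -7.5], [2.0, 2.0]))  # [1.5, -1.5]
```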
Compute the ceiling function of a tuple.
Instance represents: Input tuple.
Ceiling function of the input tuple.
Compute the floor function of a tuple.
Instance represents: Input tuple.
Floor function of the input tuple.
Calculate the power function of two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Power function of the input tuples.
Compute the base 10 logarithm of a tuple.
Instance represents: Input tuple.
Base 10 logarithm of the input tuple.
Compute the natural logarithm of a tuple.
Instance represents: Input tuple.
Natural logarithm of the input tuple.
Compute the exponential of a tuple.
Instance represents: Input tuple.
Exponential of the input tuple.
Compute the hyperbolic tangent of a tuple.
Instance represents: Input tuple.
Hyperbolic tangent of the input tuple.
Compute the hyperbolic cosine of a tuple.
Instance represents: Input tuple.
Hyperbolic cosine of the input tuple.
Compute the hyperbolic sine of a tuple.
Instance represents: Input tuple.
Hyperbolic sine of the input tuple.
Convert a tuple from degrees to radians.
Instance represents: Input tuple.
Input tuple in radians.
Convert a tuple from radians to degrees.
Instance represents: Input tuple.
Input tuple in degrees.
Compute the arctangent of a tuple for all four quadrants.
Instance represents: Input tuple of the y-values.
Input tuple of the x-values.
Arctangent of the input tuple.
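Unlike the plain arctangent below, the four-quadrant variant takes the y- and x-values separately and therefore recovers the full angle range. A Python analogy, elementwise over two equally long tuples:

```python
import math

def atan2_elem(y, x):
    """Four-quadrant arctangent, elementwise over two equally long tuples."""
    return [math.atan2(a, b) for a, b in zip(y, x)]

angles = atan2_elem([1.0, 1.0, -1.0], [1.0, -1.0, -1.0])
print([round(a / math.pi, 4) for a in angles])  # [0.25, 0.75, -0.75]
```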
Compute the arctangent of a tuple.
Instance represents: Input tuple.
Arctangent of the input tuple.
Compute the arccosine of a tuple.
Instance represents: Input tuple.
Arccosine of the input tuple.
Compute the arcsine of a tuple.
Instance represents: Input tuple.
Arcsine of the input tuple.
Compute the tangent of a tuple.
Instance represents: Input tuple.
Tangent of the input tuple.
Compute the cosine of a tuple.
Instance represents: Input tuple.
Cosine of the input tuple.
Compute the sine of a tuple.
Instance represents: Input tuple.
Sine of the input tuple.
Compute the absolute value of a tuple (as floating point numbers).
Instance represents: Input tuple.
Absolute value of the input tuple.
Compute the square root of a tuple.
Instance represents: Input tuple.
Square root of the input tuple.
Compute the absolute value of a tuple.
Instance represents: Input tuple.
Absolute value of the input tuple.
Negate a tuple.
Instance represents: Input tuple.
Negation of the input tuple.
Divide two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Quotient of the input tuples.
Multiply two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Product of the input tuples.
Subtract two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Difference of the input tuples.
Add two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Sum of the input tuples.
Deserialize a serialized tuple.
Handle of the serialized item.
Tuple.
Serialize a tuple.
Instance represents: Tuple.
Handle of the serialized item.
Write a tuple to a file.
Instance represents: Tuple with any kind of data.
Name of the file to be written.
Read a tuple from a file.
Name of the file to be read.
Tuple with any kind of data.
Clear the content of a handle.
Instance represents: Handle to clear.
Test if the internal representation of a tuple is of type handle.
Instance represents: Input tuple.
Boolean value indicating if the input tuple is of type handle.
Test whether the elements of a tuple are of type handle.
Instance represents: Input tuple.
Boolean values indicating if the elements of the input tuple are of type handle.
Test if a tuple is serializable.
Instance represents: Tuple to check for serializability.
Boolean value indicating if the input can be serialized.
Test if the elements of a tuple are serializable.
Instance represents: Tuple to check for serializability.
Boolean value indicating if the input elements can be serialized.
Check if a handle is valid.
Instance represents: The handle to check for validity.
The validity of the handle, 1 or 0.
Return the semantic type of a tuple.
Instance represents: Input tuple.
Semantic type of the input tuple as a string.
Return the semantic type of the elements of a tuple.
Instance represents: Input tuple.
Semantic types of the elements of the input tuple as strings.
Create an empty tuple
Create a tuple containing the integer value 0 (false) or 1 (true)
Create a tuple containing a single 32-bit integer value
Create a tuple containing 32-bit integer values
Create a tuple containing a single 64-bit integer value
Create a tuple containing 64-bit integer values
Create an integer tuple representing a pointer value.
The integer size used depends on the executing platform.
Create an integer tuple representing pointer values.
The integer size used depends on the executing platform.
Create a tuple containing a single double value
Create a tuple containing double values
Create a tuple containing a single double value
Create a tuple containing double values
Create a tuple containing a single string value
Create a tuple containing string values
Create a tuple containing a single handle value
Create a tuple containing handle values
Create a tuple containing mixed values.
Only integer, double and string values are valid.
Create a copy of an existing tuple
Create a concatenation of existing tuples
Dispose all handles that are stored in the tuple. For tuples
without handles, calling this method has no effect.
Used and overwritten by HTupleMixed and HTupleHandle
Unpins the tuple's data. Notice that PinTuple happens in Store(..).
Get the data of this tuple as a 32-bit integer array.
The tuple may only contain integer data (32-bit or 64-bit).
Get the data of this tuple as a 64-bit integer array.
The tuple may only contain integer data (32-bit or 64-bit).
Get the data of this tuple as a double array.
The tuple may only contain numeric data.
Get the data of this tuple as a string array.
The tuple may only contain string values.
Get the data of this tuple as a handle array.
The tuple may only contain handle values. The
array contains copies of handles that need to
be disposed.
Get the data of this tuple as an object array.
The tuple may contain arbitrary values.
Get the data of this tuple as a float array.
The tuple may only contain numeric data.
Get the data of this tuple as an IntPtr array.
The tuple may only contain integer data matching IntPtr.Size.
Convert first element of a tuple to bool
Convert first element of a tuple to int
Convert first element of a tuple to long
Convert first element of a tuple to double
Convert first element of a tuple to string
Convert first element of a tuple to IntPtr
Convert all elements of a tuple to int[]
Convert all elements of a tuple to long[]
Convert all elements of a tuple to double[]
Convert all elements of a tuple to string[]
Convert all elements of a tuple to HHandle[]
Convert all elements of a tuple to IntPtr[]
Provides a simple string representation of the tuple,
which is mainly useful for debug outputs.
Casting an HTuple to a string does *not* invoke ToString(), as the
former represents an implicit access to tuple.S == tuple[0].S, which
is only legal if the first tuple element is a string.
Append tuple to this tuple
Data to append.
Returns the number of elements of a tuple.
Instance represents: Input tuple.
Number of elements of input tuple.
Add two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Sum of the input tuples.
Subtract two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Difference of the input tuples.
Multiply two tuples.
Instance represents: Input tuple 1.
Input tuple 2.
Product of the input tuples.
Concatenate multiple tuples into a new one.
Instance represents: Input tuple 1.
Further input tuples.
Concatenation of input tuples.
Concatenate two tuples into a new one.
Instance represents: Input tuple 1.
Input tuple 2.
Concatenation of input tuples.
Get the data type of this tuple
Get the length of this tuple
Provides access to tuple elements at the specified indices
Provides access to the tuple element at the specified index
Provides access to the tuple element at the specified index
Exposes the internal array representation to allow most efficient
(but not safest) access. Tuple type must be HTupleType.INTEGER.
The array length may be greater than the used tuple length.
Exposes the internal array representation to allow most efficient
(but not safest) access. Tuple type must be HTupleType.LONG.
The array length may be greater than the used tuple length.
Exposes the internal array representation to allow most efficient
(but not safest) access. Tuple type must be HTupleType.DOUBLE.
The array length may be greater than the used tuple length.
Exposes the internal array representation to allow most efficient
(but not safest) access. Tuple type must be HTupleType.STRING.
The array length may be greater than the used tuple length.
Exposes the internal array representation to allow most efficient
(but not safest) access. Tuple type must be HTupleType.HANDLE.
The array length may be greater than the used tuple length.
Exposes the internal array representation to allow most efficient
(but not safest) access. Tuple type must be HTupleType.MIXED. It is
not recommended to modify the array; if you do, make sure to store only
the supported element types int, long, double, string, and HHandle.
The array length may be greater than the used tuple length.
Convenience accessor for tuple[0].I
Convenience accessor for tuple[0].L
Convenience accessor for tuple[0].D
Convenience accessor for tuple[0].S
Convenience accessor for tuple[0].H
Convenience accessor for tuple[0].H
Convenience accessor for tuple[0].IP
Represents an instance of a HALCON window.
Open a graphics window.
Modified instance represents: Window handle.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Width of the window. Default: 256
Height of the window. Default: 256
Logical number of the father window. To specify the display as father you may enter 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which you want to open the window. Otherwise the empty string. Default: ""
Open a graphics window.
Modified instance represents: Window handle.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Width of the window. Default: 256
Height of the window. Default: 256
Logical number of the father window. To specify the display as father you may enter 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which you want to open the window. Otherwise the empty string. Default: ""
Display an XLD object.
Instance represents: Window handle.
XLD object to display.
Gets a copy of the background image of the HALCON window.
Instance represents: Window handle.
Copy of the background image.
Detach the background image from a HALCON window.
Instance represents: Window handle.
Attach a background image to a HALCON window.
Instance represents: Window handle.
Background image.
Detach an existing drawing object from a HALCON window.
Instance represents: Window Handle.
Handle of the drawing object.
Attach an existing drawing object to a HALCON window.
Instance represents: Window handle.
Handle of the drawing object.
Modify the pose of a 3D plot.
Instance represents: Window handle.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the second point.
Column coordinate of the second point.
Navigation mode. Default: "rotate"
Modify the pose of a 3D plot.
Instance represents: Window handle.
Row coordinate of the first point.
Column coordinate of the first point.
Row coordinate of the second point.
Column coordinate of the second point.
Navigation mode. Default: "rotate"
Calculates image coordinates for a point in a 3D plot window.
Instance represents: Window handle.
Displayed image.
Row coordinate in the window.
Column coordinate in the window.
Row coordinate in the image.
Column coordinate in the image.
Height value.
Calculates image coordinates for a point in a 3D plot window.
Instance represents: Window handle.
Displayed image.
Row coordinate in the window.
Column coordinate in the window.
Row coordinate in the image.
Column coordinate in the image.
Height value.
Get the operating system window handle.
Instance represents: Window handle.
Operating system display handle (under Unix-like systems only).
Operating system window handle.
Set the device context of a virtual graphics window (Windows NT).
Instance represents: Window handle.
Device context of WINHWnd.
Create a virtual graphics window under Windows.
Modified instance represents: Window handle.
Windows window handle of a previously created window.
Row coordinate of upper left corner. Default: 0
Column coordinate of upper left corner. Default: 0
Width of the window. Default: 512
Height of the window. Default: 512
Interactive output from two window buffers.
Instance represents: Source window handle of the "upper window".
Source window handle of the "lower window".
Output window handle.
Modify position and size of a window.
Instance represents: Window handle.
Row index of upper left corner in target position. Default: 0
Column index of upper left corner in target position. Default: 0
Width of the window. Default: 512
Height of the window. Default: 512
Open a graphics window.
Modified instance represents: Window handle.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Width of the window. Default: 256
Height of the window. Default: 256
Logical number of the father window. To specify the display as father you may enter 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which you want to open the window. Otherwise the empty string. Default: ""
Open a graphics window.
Modified instance represents: Window handle.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Width of the window. Default: 256
Height of the window. Default: 256
Logical number of the father window. To specify the display as father you may enter 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which you want to open the window. Otherwise the empty string. Default: ""
Open a textual window.
Modified instance represents: Window handle.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Window's width. Default: 256
Window's height. Default: 256
Window border's width. Default: 2
Window border's color. Default: "white"
Background color. Default: "black"
Logical number of the father window. For the display as father you may specify 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which the window is to be opened, or the empty string. Default: ""
Open a textual window.
Modified instance represents: Window handle.
Row index of upper left corner. Default: 0
Column index of upper left corner. Default: 0
Window's width. Default: 256
Window's height. Default: 256
Window border's width. Default: 2
Window border's color. Default: "white"
Background color. Default: "black"
Logical number of the father window. For the display as father you may specify 'root' or 0. Default: 0
Window mode. Default: "visible"
Name of the computer on which the window is to be opened, or the empty string. Default: ""
Copy inside an output window.
Instance represents: Window handle.
Row index of upper left corner of the source rectangle. Default: 0
Column index of upper left corner of the source rectangle. Default: 0
Row index of lower right corner of the source rectangle. Default: 64
Column index of lower right corner of the source rectangle. Default: 64
Row index of upper left corner of the target position. Default: 64
Column index of upper left corner of the target position. Default: 64
Copy inside an output window.
Instance represents: Window handle.
Row index of upper left corner of the source rectangle. Default: 0
Column index of upper left corner of the source rectangle. Default: 0
Row index of lower right corner of the source rectangle. Default: 64
Column index of lower right corner of the source rectangle. Default: 64
Row index of upper left corner of the target position. Default: 64
Column index of upper left corner of the target position. Default: 64
Get the window type.
Instance represents: Window handle.
Window type.
Access to a window's pixel data.
Instance represents: Window handle.
Pointer to the red channel of the pixel data.
Pointer to the green channel of the pixel data.
Pointer to the blue channel of the pixel data.
Length of an image line.
Number of image lines.
Information about a window's size and position.
Instance represents: Window handle.
Row index of upper left corner of the window.
Column index of upper left corner of the window.
Window width.
Window height.
Write the window content in an image object.
Instance represents: Window handle.
Saved image.
Write the window content to a file.
Instance represents: Window handle.
Name of the target device or of the graphic format. Default: "postscript"
File name (without extension). Default: "halcon_dump"
Write the window content to a file.
Instance represents: Window handle.
Name of the target device or of the graphic format. Default: "postscript"
File name (without extension). Default: "halcon_dump"
Copy all pixels within rectangles between output windows.
Instance represents: Source window handle.
Destination window handle.
Row index of upper left corner in the source window. Default: 0
Column index of upper left corner in the source window. Default: 0
Row index of lower right corner in the source window. Default: 128
Column index of lower right corner in the source window. Default: 128
Row index of upper left corner in the target window. Default: 0
Column index of upper left corner in the target window. Default: 0
Copy all pixels within rectangles between output windows.
Instance represents: Source window handle.
Destination window handle.
Row index of upper left corner in the source window. Default: 0
Column index of upper left corner in the source window. Default: 0
Row index of lower right corner in the source window. Default: 128
Column index of lower right corner in the source window. Default: 128
Row index of upper left corner in the target window. Default: 0
Column index of upper left corner in the target window. Default: 0
Close an output window.
Window handle.
Close an output window.
Instance represents: Window handle.
Delete the contents of an output window.
Instance represents: Window handle.
Delete a rectangle on the output window.
Instance represents: Window handle.
Row index of upper left corner. Default: 10
Column index of upper left corner. Default: 10
Row index of lower right corner. Default: 118
Column index of lower right corner. Default: 118
Delete a rectangle on the output window.
Instance represents: Window handle.
Row index of upper left corner. Default: 10
Column index of upper left corner. Default: 10
Row index of lower right corner. Default: 118
Column index of lower right corner. Default: 118
Print text in a window.
Instance represents: Window handle.
Tuple of output values (all types). Default: "hello"
Print text in a window.
Instance represents: Window handle.
Tuple of output values (all types). Default: "hello"
Set the shape of the text cursor.
Instance represents: Window handle.
Name of cursor shape. Default: "invisible"
Set the position of the text cursor.
Instance represents: Window handle.
Row index of text cursor position. Default: 24
Column index of text cursor position. Default: 12
Read a string in a text window.
Instance represents: Window handle.
Default string (visible before input). Default: ""
Maximum number of characters. Default: 32
Read string.
Read a character from a text window.
Instance represents: Window handle.
Code for input character.
Input character (if it is not a control character).
Set the position of the text cursor to the beginning of the next line.
Instance represents: Window handle.
Get the shape of the text cursor.
Instance represents: Window handle.
Name of the current text cursor.
Get cursor position.
Instance represents: Window handle.
Row index of text cursor position.
Column index of text cursor position.
Get the maximum size of all characters of a font.
Instance represents: Window handle.
Maximum extension below baseline.
Maximum character width.
Maximum character height.
Maximum height above baseline.
Get the maximum size of all characters of a font.
Instance represents: Window handle.
Maximum extension below baseline.
Maximum character width.
Maximum character height.
Maximum height above baseline.
Get the spatial size of a string.
Instance represents: Window handle.
Values to consider. Default: "test_string"
Maximum extension below baseline.
Text width.
Text height.
Maximum height above baseline.
Get the spatial size of a string.
Instance represents: Window handle.
Values to consider. Default: "test_string"
Maximum extension below baseline.
Text width.
Text height.
Maximum height above baseline.
Query the available fonts.
Instance represents: Window handle.
Tuple with available font names.
Query all shapes available for text cursors.
Instance represents: Window handle.
Names of the available text cursors.
Set the font used for text output.
Instance represents: Window handle.
Name of new font.
Get the current font.
Instance represents: Window handle.
Name of the current font.
Get window parameters.
Instance represents: Window handle.
Name of the parameter. Default: "flush"
Value of the parameter.
Set window parameters.
Instance represents: Window handle.
Name of the parameter. Default: "flush"
Value to be set. Default: "false"
Set window parameters.
Instance represents: Window handle.
Name of the parameter. Default: "flush"
Value to be set. Default: "false"
Define the region output shape.
Instance represents: Window handle.
Region output mode. Default: "original"
Set the color definition via RGB values.
Instance represents: Window handle.
Red component of the color. Default: 255
Green component of the color. Default: 0
Blue component of the color. Default: 0
Set the color definition via RGB values.
Instance represents: Window handle.
Red component of the color. Default: 255
Green component of the color. Default: 0
Blue component of the color. Default: 0
Define a color lookup table index.
Instance represents: Window handle.
Color lookup table index. Default: 128
Define a color lookup table index.
Instance represents: Window handle.
Color lookup table index. Default: 128
Define an interpolation method for gray value output.
Instance represents: Window handle.
Interpolation method for image output: 0 (fast, low quality) to 2 (slow, high quality). Default: 0
Modify the displayed image part.
Instance represents: Window handle.
Row of the upper left corner of the chosen image part. Default: 0
Column of the upper left corner of the chosen image part. Default: 0
Row of the lower right corner of the chosen image part. Default: -1
Column of the lower right corner of the chosen image part. Default: -1
Modify the displayed image part.
Instance represents: Window handle.
Row of the upper left corner of the chosen image part. Default: 0
Column of the upper left corner of the chosen image part. Default: 0
Row of the lower right corner of the chosen image part. Default: -1
Column of the lower right corner of the chosen image part. Default: -1
Define the gray value output mode.
Instance represents: Window handle.
Output mode. Additional parameters possible. Default: "default"
Define the line width for region contour output.
Instance represents: Window handle.
Line width for region output in contour mode. Default: 1.0
Define a contour output pattern.
Instance represents: Window handle.
Contour pattern. Default: []
Define the approximation error for contour display.
Instance represents: Window handle.
Maximum deviation from the original contour. Default: 0
Define the pixel output function.
Instance represents: Window handle.
Name of the display function. Default: "copy"
Define output colors (HSI-coded).
Instance represents: Window handle.
Hue for region output. Default: 30
Saturation for region output. Default: 255
Intensity for region output. Default: 84
Define output colors (HSI-coded).
Instance represents: Window handle.
Hue for region output. Default: 30
Saturation for region output. Default: 255
Intensity for region output. Default: 84
Define gray values for region output.
Instance represents: Window handle.
Gray values for region output. Default: 255
Define gray values for region output.
Instance represents: Window handle.
Gray values for region output. Default: 255
Define the region fill mode.
Instance represents: Window handle.
Fill mode for region output. Default: "fill"
Define the image matrix output clipping.
Instance represents: Window handle.
Clipping mode for gray value output. Default: "object"
Set multiple output colors.
Instance represents: Window handle.
Number of output colors. Default: 12
Set output color.
Instance represents: Window handle.
Output color names. Default: "white"
Set output color.
Instance represents: Window handle.
Output color names. Default: "white"
Get the current region output shape.
Instance represents: Window handle.
Current region output shape.
Get the current color in RGB-coding.
Instance represents: Window handle.
The current color's red value.
The current color's green value.
The current color's blue value.
Get the current color lookup table index.
Instance represents: Window handle.
Index of the current color look-up table.
Get the current interpolation mode for gray value display.
Instance represents: Window handle.
Interpolation mode for image display: 0 (fast, low quality) to 2 (slow, high quality).
Get the image part.
Instance represents: Window handle.
Row index of the image part's upper left corner.
Column index of the image part's upper left corner.
Row index of the image part's lower right corner.
Column index of the image part's lower right corner.
Get the image part.
Instance represents: Window handle.
Row index of the image part's upper left corner.
Column index of the image part's upper left corner.
Row index of the image part's lower right corner.
Column index of the image part's lower right corner.
Get the current display mode for gray values.
Instance represents: Window handle.
Name and parameter values of the current display mode.
Get the current line width for contour display.
Instance represents: Window handle.
Current line width for contour display.
Get the current graphic mode for contours.
Instance represents: Window handle.
Template for contour display.
Get the current approximation error for contour display.
Instance represents: Window handle.
Current approximation error for contour display.
Get the current display mode.
Instance represents: Window handle.
Display mode.
Get the HSI coding of the current color.
Instance represents: Window handle.
Saturation of the current color.
Intensity of the current color.
Hue (color value) of the current color.
Get the current region fill mode.
Instance represents: Window handle.
Current region fill mode.
Query the gray value display modes.
Instance represents: Window handle.
Gray value display mode names.
Query the possible graphic modes.
Instance represents: Window handle.
Display function name.
Query the displayable gray values.
Instance represents: Window handle.
Tuple of all displayable gray values.
Query all color names.
Instance represents: Window handle.
Color names.
Query all color names displayable in the window.
Instance represents: Window handle.
Color names.
Query the icon for region output.
Instance represents: Window handle.
Icon for the region's center of gravity.
Icon definition for region output.
Instance represents: Window handle.
Icon for the center of gravity.
Displays regions in a window.
Instance represents: Window handle.
Regions to display.
Displays arbitrarily oriented rectangles.
Instance represents: Window handle.
Row index of the center. Default: 48
Column index of the center. Default: 64
Orientation of rectangle in radians. Default: 0.0
Half of the length of the longer side. Default: 48
Half of the length of the shorter side. Default: 32
Displays arbitrarily oriented rectangles.
Instance represents: Window handle.
Row index of the center. Default: 48
Column index of the center. Default: 64
Orientation of rectangle in radians. Default: 0.0
Half of the length of the longer side. Default: 48
Half of the length of the shorter side. Default: 32
Display of rectangles aligned to the coordinate axes.
Instance represents: Window handle.
Row index of the upper left corner. Default: 16
Column index of the upper left corner. Default: 16
Row index of the lower right corner. Default: 48
Column index of the lower right corner. Default: 80
Display of rectangles aligned to the coordinate axes.
Instance represents: Window handle.
Row index of the upper left corner. Default: 16
Column index of the upper left corner. Default: 16
Row index of the lower right corner. Default: 48
Column index of the lower right corner. Default: 80
Displays a polyline.
Instance represents: Window handle.
Row index. Default: [16,80,80]
Column index. Default: [48,16,80]
Draws lines in a window.
Instance represents: Window handle.
Row index of the start. Default: 32.0
Column index of the start. Default: 32.0
Row index of end. Default: 64.0
Column index of end. Default: 64.0
Draws lines in a window.
Instance represents: Window handle.
Row index of the start. Default: 32.0
Column index of the start. Default: 32.0
Row index of end. Default: 64.0
Column index of end. Default: 64.0
Displays crosses in a window.
Instance represents: Window handle.
Row coordinate of the center. Default: 32.0
Column coordinate of the center. Default: 32.0
Length of the bars. Default: 6.0
Orientation. Default: 0.0
Displays crosses in a window.
Instance represents: Window handle.
Row coordinate of the center. Default: 32.0
Column coordinate of the center. Default: 32.0
Length of the bars. Default: 6.0
Orientation. Default: 0.0
Displays gray value images.
Instance represents: Window handle.
Gray value image to display.
Displays images with several channels.
Instance represents: Window handle.
Multichannel images to be displayed.
Number of the channel or the numbers of the RGB channels. Default: 1
Displays images with several channels.
Instance represents: Window handle.
Multichannel images to be displayed.
Number of the channel or the numbers of the RGB channels. Default: 1
Displays a color (RGB) image.
Instance represents: Window handle.
Color image to display.
Displays ellipses.
Instance represents: Window handle.
Row index of center. Default: 64
Column index of center. Default: 64
Orientation of the ellipse in radians. Default: 0.0
Radius of major axis. Default: 24.0
Radius of minor axis. Default: 14.0
Displays ellipses.
Instance represents: Window handle.
Row index of center. Default: 64
Column index of center. Default: 64
Orientation of the ellipse in radians. Default: 0.0
Radius of major axis. Default: 24.0
Radius of minor axis. Default: 14.0
Displays a noise distribution.
Instance represents: Window handle.
Gray value distribution (513 values).
Row index of center. Default: 256
Column index of center. Default: 256
Size of display. Default: 1
Displays circles in a window.
Instance represents: Window handle.
Row index of the center. Default: 64
Column index of the center. Default: 64
Radius of the circle. Default: 64
Displays circles in a window.
Instance represents: Window handle.
Row index of the center. Default: 64
Column index of the center. Default: 64
Radius of the circle. Default: 64
Displays arrows in a window.
Instance represents: Window handle.
Row index of the start. Default: 10.0
Column index of the start. Default: 10.0
Row index of the end. Default: 118.0
Column index of the end. Default: 118.0
Size of the arrowhead. Default: 1.0
Displays arrows in a window.
Instance represents: Window handle.
Row index of the start. Default: 10.0
Column index of the start. Default: 10.0
Row index of the end. Default: 118.0
Column index of the end. Default: 118.0
Size of the arrowhead. Default: 1.0
Displays circular arcs in a window.
Instance represents: Window handle.
Row coordinate of center point. Default: 64
Column coordinate of center point. Default: 64
Angle between start and end of the arc (in radians). Default: 3.1415926
Row coordinate of the start of the arc. Default: 32
Column coordinate of the start of the arc. Default: 32
Displays circular arcs in a window.
Instance represents: Window handle.
Row coordinate of center point. Default: 64
Column coordinate of center point. Default: 64
Angle between start and end of the arc (in radians). Default: 3.1415926
Row coordinate of the start of the arc. Default: 32
Column coordinate of the start of the arc. Default: 32
Displays image objects (image, region, XLD).
Instance represents: Window handle.
Image object to be displayed.
Set the current mouse pointer shape.
Instance represents: Window handle.
Mouse pointer name. Default: "arrow"
Query the current mouse pointer shape.
Instance represents: Window handle.
Mouse pointer name.
Query all available mouse pointer shapes.
Instance represents: Window handle.
Available mouse pointer names.
Query the subpixel mouse position.
Instance represents: Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed or 0.
Query the mouse position.
Instance represents: Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed or 0.
Wait until a mouse button is pressed and get the subpixel mouse position.
Instance represents: Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
Wait until a mouse button is pressed.
Instance represents: Window handle.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
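The difference between the query and wait variants above: get_mposition returns immediately (and reports an error if the cursor is outside the window), while get_mbutton blocks until a button is pressed. A hedged sketch of the blocking variant via HOperatorSet (the window variable is an assumption):

```csharp
using HalconDotNet;

// Sketch: block until a mouse button is pressed inside the window,
// then read the click position in image coordinates.
static void WaitForClick(HWindow window)
{
    HTuple row, column, button;
    HOperatorSet.GetMbutton(window, out row, out column, out button);
    // button encodes the pressed button(s); row/column are image coordinates.
}
```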
Write look-up-table (lut) as file.
Instance represents: Window handle.
File name (of file containing the look-up-table). Default: "/tmp/lut"
Graphical view of the look-up-table (lut).
Instance represents: Window handle.
Row of the center of the graphic. Default: 128
Column of the center of the graphic. Default: 128
Scaling of the graphic. Default: 1
Query all available look-up-tables (lut).
Instance represents: Window handle.
Names of look-up-tables.
Get modification parameters of look-up-table (lut).
Instance represents: Window handle.
Modification of saturation.
Modification of intensity.
Modification of color value.
Changing the look-up-table (lut).
Instance represents: Window handle.
Modification of color value. Default: 0.0
Modification of saturation. Default: 1.5
Modification of intensity. Default: 1.5
Get current look-up-table (lut).
Instance represents: Window handle.
Name of look-up-table or tuple of RGB-values.
Set look-up-table (lut).
Instance represents: Window handle.
Name of look-up-table, values of look-up-table (RGB) or file name. Default: "default"
Set look-up-table (lut).
Instance represents: Window handle.
Name of look-up-table, values of look-up-table (RGB) or file name. Default: "default"
Get mode of fixing of current look-up-table (lut).
Instance represents: Window handle.
Current mode of fixing.
Set fixing of look-up-table (lut).
Instance represents: Window handle.
Mode of fixing. Default: "true"
Get fixing of look-up-table (lut) for real color images.
Instance represents: Window handle.
Mode of fixing.
Fix look-up-table (lut) for real color images.
Instance represents: Window handle.
Mode of fixing. Default: "true"
Interactive movement of a region with restriction of positions.
Instance represents: Window handle.
Regions to move.
Points to which the region is allowed to move.
Row index of the reference point. Default: 100
Column index of the reference point. Default: 100
Moved regions.
Interactive movement of a region with fixpoint specification.
Instance represents: Window handle.
Regions to move.
Row index of the reference point. Default: 100
Column index of the reference point. Default: 100
Moved regions.
Interactive moving of a region.
Instance represents: Window handle.
Regions to move.
Moved regions.
Interactive modification of a NURBS curve using interpolation.
Instance represents: Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 5. Default: 3
Row coordinates of the input interpolation points.
Column coordinates of the input interpolation points.
Input tangents.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Knot vector.
Row coordinates of the points specified by the user.
Column coordinates of the points specified by the user.
Tangents specified by the user.
Contour of the modified curve.
Interactive drawing of a NURBS curve using interpolation.
Instance represents: Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 5. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Knot vector.
Row coordinates of the points specified by the user.
Column coordinates of the points specified by the user.
Tangents specified by the user.
Contour of the curve.
Interactive modification of a NURBS curve.
Instance represents: Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 25. Default: 3
Row coordinates of the input control polygon.
Column coordinates of the input control polygon.
Input weight vector.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Weight vector.
Contour of the modified curve.
Interactive drawing of a NURBS curve.
Instance represents: Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 25. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Weight vector.
Contour approximating the NURBS curve.
Interactive modification of a contour.
Instance represents: Window handle.
Input contour.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
Modified contour.
Interactive drawing of a contour.
Instance represents: Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Modified contour.
Interactive drawing of an arbitrarily oriented rectangle.
Instance represents: Window handle.
Row index of the center.
Column index of the center.
Orientation of the bigger half axis in radians.
Bigger half axis.
Smaller half axis.
Row index of the center.
Column index of the center.
Orientation of the bigger half axis in radians.
Bigger half axis.
Smaller half axis.
Interactive drawing of an arbitrarily oriented rectangle.
Instance represents: Window handle.
Row index of the center.
Column index of the center.
Orientation of the bigger half axis in radians.
Bigger half axis.
Smaller half axis.
Draw a rectangle parallel to the coordinate axes.
Instance represents: Window handle.
Row index of the left upper corner.
Column index of the left upper corner.
Row index of the right lower corner.
Column index of the right lower corner.
Row index of the left upper corner.
Column index of the left upper corner.
Row index of the right lower corner.
Column index of the right lower corner.
Draw a rectangle parallel to the coordinate axes.
Instance represents: Window handle.
Row index of the left upper corner.
Column index of the left upper corner.
Row index of the right lower corner.
Column index of the right lower corner.
Draw a point.
Instance represents: Window handle.
Row index of the point.
Column index of the point.
Row index of the point.
Column index of the point.
Draw a point.
Instance represents: Window handle.
Row index of the point.
Column index of the point.
Draw a line.
Instance represents: Window handle.
Row index of the first point of the line.
Column index of the first point of the line.
Row index of the second point of the line.
Column index of the second point of the line.
Row index of the first point of the line.
Column index of the first point of the line.
Row index of the second point of the line.
Column index of the second point of the line.
Draw a line.
Instance represents: Window handle.
Row index of the first point of the line.
Column index of the first point of the line.
Row index of the second point of the line.
Column index of the second point of the line.
Interactive drawing of an ellipse.
Instance represents: Window handle.
Row index of the center.
Column index of the center.
Orientation of the bigger half axis in radians.
Bigger half axis.
Smaller half axis.
Row index of the center.
Column index of the center.
Orientation of the first half axis in radians.
First half axis.
Second half axis.
Interactive drawing of an ellipse.
Instance represents: Window handle.
Row index of the center.
Column index of the center.
Orientation of the first half axis in radians.
First half axis.
Second half axis.
Interactive drawing of a circle.
Instance represents: Window handle.
Row index of the center.
Column index of the center.
Radius of the circle.
Row index of the center.
Column index of the center.
Circle's radius.
Interactive drawing of a circle.
Instance represents: Window handle.
Row index of the center.
Column index of the center.
Circle's radius.
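The drawing operators block until the user confirms the shape, then return its parameters. A sketch combining draw_circle with the corresponding display operator (method names follow the HWindow class; the variable names are illustrative):

```csharp
using HalconDotNet;

// Sketch: let the user draw a circle interactively, then display it.
// DrawCircle blocks until the drawing is confirmed with the mouse.
static void DrawAndShowCircle(HWindow window)
{
    double row, column, radius;
    window.DrawCircle(out row, out column, out radius);
    window.DispCircle(row, column, radius);
}
```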
Interactive drawing of a closed region.
Instance represents: Window handle.
Interactive created region.
Interactive drawing of a polyline.
Instance represents: Window handle.
Region that encompasses all drawn points.
Project and visualize the 3D model of the calibration plate in the image.
Instance represents: Window in which the calibration plate should be visualized.
File name of the calibration plate description. Default: "calplate_320.cpd"
Internal camera parameters.
External camera parameters (3D pose of the calibration plate in camera coordinates).
Scaling factor for the visualization. Default: 1.0
Convert image coordinates to window coordinates.
Instance represents: Window handle.
Row in image coordinates.
Column in image coordinates.
Row (Y) in window coordinates.
Column (X) in window coordinates.
Convert image coordinates to window coordinates.
Instance represents: Window handle.
Row in image coordinates.
Column in image coordinates.
Row (Y) in window coordinates.
Column (X) in window coordinates.
Convert window coordinates to image coordinates.
Instance represents: Window handle.
Row (Y) in window coordinates.
Column (X) in window coordinates.
Row in image coordinates.
Column in image coordinates.
Convert window coordinates to image coordinates.
Instance represents: Window handle.
Row (Y) in window coordinates.
Column (X) in window coordinates.
Row in image coordinates.
Column in image coordinates.
Display text in a window.
Instance represents: Window handle.
A tuple of strings containing the text message to be displayed. Each value of the tuple will be displayed in a separate line. Default: "hello"
If set to 'window', the text position is given with respect to the window coordinate system. If set to 'image', image coordinates are used (this may be useful in zoomed images). Default: "window"
The vertical text alignment or the row coordinate of the desired text position. Default: 12
The horizontal text alignment or the column coordinate of the desired text position. Default: 12
A tuple of strings defining the colors of the texts. Default: "black"
Generic parameter names. Default: []
Generic parameter values. Default: []
Display text in a window.
Instance represents: Window handle.
A tuple of strings containing the text message to be displayed. Each value of the tuple will be displayed in a separate line. Default: "hello"
If set to 'window', the text position is given with respect to the window coordinate system. If set to 'image', image coordinates are used (this may be useful in zoomed images). Default: "window"
The vertical text alignment or the row coordinate of the desired text position. Default: 12
The horizontal text alignment or the column coordinate of the desired text position. Default: 12
A tuple of strings defining the colors of the texts. Default: "black"
Generic parameter names. Default: []
Generic parameter values. Default: []
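A hedged sketch of calling disp_text through the HWindow class, mirroring the defaults documented above (the empty tuples stand for the [] defaults of the generic parameters; variable names are illustrative):

```csharp
using HalconDotNet;

// Sketch: show a two-line message in the upper-left corner of the window,
// using window coordinates as the positioning system.
static void ShowMessage(HWindow window)
{
    HTuple text = new HTuple("hello", "world");  // one tuple value per line
    window.DispText(text, "window", 12, 12, "black",
                    new HTuple(), new HTuple()); // no generic parameters
}
```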
Flush the contents of a window.
Instance represents: Window handle.
Get the current color in RGBA-coding.
Instance represents: Window handle.
The current color's red value.
The current color's green value.
The current color's blue value.
The current color's alpha value.
Send an event to a buffer window signaling a mouse double click event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a buffer window signaling a mouse double click event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a window buffer signaling a mouse down event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a window buffer signaling a mouse down event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a buffer window signaling a mouse drag event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a buffer window signaling a mouse drag event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a buffer window signaling a mouse up event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Send an event to a buffer window signaling a mouse up event.
Instance represents: Window handle of the buffer window.
Row coordinate of the mouse cursor in the image coordinate system.
Column coordinate of the mouse cursor in the image coordinate system.
Mouse button(s) pressed.
'true' if HALCON processed the event.
Sets the callback for content updates in a buffer window.
Instance represents: Window handle.
Callback for content updates.
Parameter to CallbackFunction.
Sets the callback for content updates in a buffer window.
Instance represents: Window handle.
Callback for content updates.
Parameter to CallbackFunction.
Set the color definition via RGBA values.
Instance represents: Window handle.
Red component of the color. Default: 255
Green component of the color. Default: 0
Blue component of the color. Default: 0
Alpha component of the color. Default: 255
Set the color definition via RGBA values.
Instance represents: Window handle.
Red component of the color. Default: 255
Green component of the color. Default: 0
Blue component of the color. Default: 0
Alpha component of the color. Default: 255
Get the current contour display fill style.
Instance represents: Window handle.
Current contour fill style.
Define the contour display fill style.
Instance represents: Window handle.
Fill style of contour displays. Default: "stroke"
Represents an instance of an XLD object (array).
Create an uninitialized iconic object.
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Return an XLD parallel's data (as lines).
Instance represents: Input XLD parallels.
Row coordinates of the points on polygon P1.
Column coordinates of the points on polygon P1.
Lengths of the line segments on polygon P1.
Angles of the line segments on polygon P1.
Row coordinates of the points on polygon P2.
Column coordinates of the points on polygon P2.
Lengths of the line segments on polygon P2.
Angles of the line segments on polygon P2.
Display an XLD object.
Instance represents: XLD object to display.
Window handle.
Receive an XLD object over a socket connection.
Modified instance represents: Received XLD object.
Socket number.
Send an XLD object over a socket connection.
Instance represents: XLD object to be sent.
Socket number.
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Paint XLD objects into an image.
Instance represents: XLD objects to be painted into the input image.
Image in which the XLD objects are to be painted.
Desired gray value of the XLD object. Default: 255.0
Image containing the result.
Paint XLD objects into an image.
Instance represents: XLD objects to be painted into the input image.
Image in which the XLD objects are to be painted.
Desired gray value of the XLD object. Default: 255.0
Image containing the result.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Deserialize a serialized XLD object.
Modified instance represents: XLD object.
Handle of the serialized item.
Serialize an XLD object.
Instance represents: XLD object.
Handle of the serialized item.
Test whether contours or polygons are closed.
Instance represents: Contours or polygons to be tested.
Tuple with boolean numbers.
Arbitrary geometric moments of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Computation mode. Default: "unnormalized"
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
First index of the desired moments M[P,Q]. Default: 1
Second index of the desired moments M[P,Q]. Default: 1
The computed moments.
Arbitrary geometric moments of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Computation mode. Default: "unnormalized"
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
First index of the desired moments M[P,Q]. Default: 1
Second index of the desired moments M[P,Q]. Default: 1
The computed moments.
Anisometry of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Anisometry of the contours or polygons.
Parameters of the equivalent ellipse of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Minor radius.
Angle between the major axis and the column axis (radians).
Major radius.
Parameters of the equivalent ellipse of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Minor radius.
Angle between the major axis and the column axis (radians).
Major radius.
Orientation of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Orientation of the contours or polygons (radians).
Geometric moments M20, M02, and M11 of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Second order moment along the row axis.
Second order moment along the column axis.
Mixed second order moment.
Geometric moments M20, M02, and M11 of contours or polygons treated as point clouds.
Instance represents: Contours or polygons to be examined.
Second order moment along the row axis.
Second order moment along the column axis.
Mixed second order moment.
Area and center of gravity (centroid) of contours and polygons treated as point clouds.
Instance represents: Point clouds to be examined in form of contours or polygons.
Row coordinate of the centroid.
Column coordinate of the centroid.
Area of the point cloud.
Area and center of gravity (centroid) of contours and polygons treated as point clouds.
Instance represents: Point clouds to be examined in form of contours or polygons.
Row coordinate of the centroid.
Column coordinate of the centroid.
Area of the point cloud.
Test XLD contours or polygons for self intersection.
Instance represents: Input contours or polygons.
Should the input contours or polygons be closed first? Default: "true"
1 for contours or polygons with self intersection and 0 otherwise.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Row coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Row coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Test whether one or more contours or polygons enclose the given point(s).
Instance represents: Contours or polygons to be tested.
Row coordinates of the points to be tested.
Column coordinates of the points to be tested.
Tuple with boolean numbers.
Test whether one or more contours or polygons enclose the given point(s).
Instance represents: Contours or polygons to be tested.
Row coordinates of the points to be tested.
Column coordinates of the points to be tested.
Tuple with boolean numbers.
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
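A hedged sketch of the shape-based selection above through the HXLDCont class, using the documented defaults (the variable `contours` is assumed to hold contours from an earlier segmentation step):

```csharp
using HalconDotNet;

// Sketch: keep only contours whose enclosed area lies in [150, 99999],
// mirroring the defaults documented above.
static HXLDCont FilterByArea(HXLDCont contours)
{
    return contours.SelectShapeXld("area", "and", 150.0, 99999.0);
}
```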
Orientation of contours or polygons.
Instance represents: Contours or polygons to be examined.
Orientation of the contours or polygons (radians).
Shape features derived from the ellipse parameters of contours or polygons.
Instance represents: Contours or polygons to be examined.
Bulkiness of the contours or polygons.
Structure factor of the contours or polygons.
Anisometry of the contours or polygons.
Shape features derived from the ellipse parameters of contours or polygons.
Instance represents: Contours or polygons to be examined.
Bulkiness of the contours or polygons.
Structure factor of the contours or polygons.
Anisometry of the contours or polygons.
Shape factor for the compactness of contours or polygons.
Instance represents: Contours or polygons to be examined.
Compactness of the input contours or polygons.
Maximum distance between two contour or polygon points.
Instance represents: Contours or polygons to be examined.
Row coordinate of the first extreme point of the contours or polygons.
Column coordinate of the first extreme point of the contours or polygons.
Row coordinate of the second extreme point of the contours or polygons.
Column coordinate of the second extreme point of the contours or polygons.
Distance of the two extreme points of the contours or polygons.
Maximum distance between two contour or polygon points.
Instance represents: Contours or polygons to be examined.
Row coordinate of the first extreme point of the contours or polygons.
Column coordinate of the first extreme point of the contours or polygons.
Row coordinate of the second extreme point of the contours or polygons.
Column coordinate of the second extreme point of the contours or polygons.
Distance of the two extreme points of the contours or polygons.
Shape factor for the convexity of contours or polygons.
Instance represents: Contours or polygons to be examined.
Convexity of the input contours or polygons.
Shape factor for the circularity (similarity to a circle) of contours or polygons.
Instance represents: Contours or polygons to be examined.
Roundness of the input contours or polygons.
Parameters of the equivalent ellipse of contours or polygons.
Instance represents: Contours or polygons to be examined.
Minor radius.
Angle between the major axis and the x axis (radians).
Major radius.
Parameters of the equivalent ellipse of contours or polygons.
Instance represents: Contours or polygons to be examined.
Minor radius.
Angle between the major axis and the x axis (radians).
Major radius.
Smallest enclosing rectangle with arbitrary orientation of contours or polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of the center point of the enclosing rectangle.
Column coordinate of the center point of the enclosing rectangle.
Orientation of the enclosing rectangle (arc measure).
First radius (half length) of the enclosing rectangle.
Second radius (half width) of the enclosing rectangle.
Smallest enclosing rectangle with arbitrary orientation of contours or polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of the center point of the enclosing rectangle.
Column coordinate of the center point of the enclosing rectangle.
Orientation of the enclosing rectangle (arc measure).
First radius (half length) of the enclosing rectangle.
Second radius (half width) of the enclosing rectangle.
Enclosing rectangle parallel to the coordinate axes of contours or polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of upper left corner point of the enclosing rectangle.
Column coordinate of upper left corner point of the enclosing rectangle.
Row coordinate of lower right corner point of the enclosing rectangle.
Column coordinate of lower right corner point of the enclosing rectangle.
Enclosing rectangle parallel to the coordinate axes of contours or polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of upper left corner point of the enclosing rectangle.
Column coordinate of upper left corner point of the enclosing rectangle.
Row coordinate of lower right corner point of the enclosing rectangle.
Column coordinate of lower right corner point of the enclosing rectangle.
Smallest enclosing circle of contours or polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of the center of the enclosing circle.
Column coordinate of the center of the enclosing circle.
Radius of the enclosing circle.
Smallest enclosing circle of contours or polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of the center of the enclosing circle.
Column coordinate of the center of the enclosing circle.
Radius of the enclosing circle.
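For illustration, the smallest enclosing circle can be found exactly by testing every circle defined by a pair of points (as diameter) or a triple (circumcircle) and keeping the smallest one that contains all points. This brute-force Python sketch is O(n⁴), fine for small inputs, and is not the algorithm HALCON uses:

```python
import math
from itertools import combinations

def _circumcircle(a, b, c):
    # Circumcircle of three points; returns (center, radius) or None if collinear.
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def smallest_circle(points):
    """Exact smallest enclosing circle of 2D points (brute force)."""
    cands = []
    for p, q in combinations(points, 2):   # pair-defined diameter circles
        cands.append((((p[0] + q[0]) / 2, (p[1] + q[1]) / 2),
                      math.hypot(p[0] - q[0], p[1] - q[1]) / 2))
    for tri in combinations(points, 3):    # triple-defined circumcircles
        c = _circumcircle(*tri)
        if c:
            cands.append(c)
    best = None
    for center, radius in cands:
        if all(math.hypot(x - center[0], y - center[1]) <= radius + 1e-9
               for x, y in points):
            if best is None or radius < best[1]:
                best = (center, radius)
    return best
```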
Transform the shape of contours or polygons.
Instance represents: Contours or polygons to be transformed.
Type of transformation. Default: "convex"
Transformed contours or polygons.
Length of contours or polygons.
Instance represents: Contours or polygons to be examined.
Length of the contour or polygon.
Arbitrary geometric moments of contours or polygons.
Instance represents: Contours or polygons to be examined.
Computation mode. Default: "unnormalized"
Point order along the boundary. Default: "positive"
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
First index P of the desired moments M[P,Q]. Default: 1
Second index Q of the desired moments M[P,Q]. Default: 1
The computed moments.
Arbitrary geometric moments of contours or polygons.
Instance represents: Contours or polygons to be examined.
Computation mode. Default: "unnormalized"
Point order along the boundary. Default: "positive"
Area enclosed by the contour or polygon.
Row coordinate of the centroid.
Column coordinate of the centroid.
First index P of the desired moments M[P,Q]. Default: 1
Second index Q of the desired moments M[P,Q]. Default: 1
The computed moments.
Geometric moments M20, M02, and M11 of contours or polygons.
Instance represents: Contours or polygons to be examined.
Second order moment along the row axis.
Second order moment along the column axis.
Mixed second order moment.
Geometric moments M20, M02, and M11 of contours or polygons.
Instance represents: Contours or polygons to be examined.
Second order moment along the row axis.
Second order moment along the column axis.
Mixed second order moment.
Area and center of gravity (centroid) of contours and polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of the centroid.
Column coordinate of the centroid.
Point order along the boundary ('positive'/'negative').
Area enclosed by the contour or polygon.
Area and center of gravity (centroid) of contours and polygons.
Instance represents: Contours or polygons to be examined.
Row coordinate of the centroid.
Column coordinate of the centroid.
Point order along the boundary ('positive'/'negative').
Area enclosed by the contour or polygon.
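The signed area and centroid of a closed contour follow from the shoelace formula; the sign of the area encodes the point order ('positive'/'negative'). A Python sketch:

```python
def area_center(points):
    """Signed area and centroid (row, col) of a closed polygon.

    points: list of (row, col) vertices, implicitly closed.
    Returns (area, center_row, center_col); the sign of area reflects
    the traversal direction of the boundary.
    """
    a2 = cr = cc = 0.0
    n = len(points)
    for i in range(n):
        r0, c0 = points[i]
        r1, c1 = points[(i + 1) % n]
        cross = c0 * r1 - c1 * r0        # shoelace term
        a2 += cross
        cr += (r0 + r1) * cross
        cc += (c0 + c1) * cross
    area = a2 / 2.0
    return area, cr / (6.0 * area), cc / (6.0 * area)
```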
Determine the 3D pose of a rectangle from its perspective 2D projection.
Instance represents: Contour(s) to be examined.
Internal camera parameters.
Width of the rectangle in meters.
Height of the rectangle in meters.
Weighting mode for the optimization phase. Default: "nonweighted"
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 3.0 for 'tukey'). Default: 2.0
Covariances of the pose values.
Root-mean-square value of the final residual error.
3D pose of the rectangle.
Determine the 3D pose of a rectangle from its perspective 2D projection.
Instance represents: Contour(s) to be examined.
Internal camera parameters.
Width of the rectangle in meters.
Height of the rectangle in meters.
Weighting mode for the optimization phase. Default: "nonweighted"
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 3.0 for 'tukey'). Default: 2.0
Covariances of the pose values.
Root-mean-square value of the final residual error.
3D pose of the rectangle.
Determine the 3D pose of a circle from its perspective 2D projection.
Instance represents: Contours to be examined.
Internal camera parameters.
Radius of the circle in object space.
Type of output parameters. Default: "pose"
3D pose of the second circle.
3D pose of the first circle.
Determine the 3D pose of a circle from its perspective 2D projection.
Instance represents: Contours to be examined.
Internal camera parameters.
Radius of the circle in object space.
Type of output parameters. Default: "pose"
3D pose of the second circle.
3D pose of the first circle.
Compute the width, height, and aspect ratio of the enclosing rectangle parallel to the coordinate axes of contours or polygons.
Instance represents: Contours or polygons to be examined.
Width of the enclosing rectangle.
Aspect ratio of the enclosing rectangle.
Height of the enclosing rectangle.
Compute the width, height, and aspect ratio of the enclosing rectangle parallel to the coordinate axes of contours or polygons.
Instance represents: Contours or polygons to be examined.
Width of the enclosing rectangle.
Aspect ratio of the enclosing rectangle.
Height of the enclosing rectangle.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index to insert objects.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic input object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic input object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index.
Represents an instance of an XLD contour object(-array).
Create an uninitialized iconic object
Generate XLD contours from regions.
Modified instance represents: Resulting contours.
Input regions.
Mode of contour generation. Default: "border"
Generate an XLD contour from a polygon (given as tuples).
Modified instance represents: Resulting contour.
Row coordinates of the polygon. Default: [0,1,2,2,2]
Column coordinates of the polygon. Default: [0,0,0,1,2]
Compute the union of cotangential contours.
Instance represents: Input XLD contours.
Length of the part of a contour to skip for the determination of tangents. Default: 0.0
Length of the part of a contour to use for the determination of tangents. Default: 30.0
Maximum angle difference between two contours' tangents. Default: 0.78539816
Maximum distance of the contours' end points. Default: 25.0
Maximum distance of the contours' end points perpendicular to their tangents. Default: 10.0
Maximum overlap of two contours. Default: 2.0
Mode describing the treatment of the contours' attributes. Default: "attr_forget"
Output XLD contours.
Compute the union of cotangential contours.
Instance represents: Input XLD contours.
Length of the part of a contour to skip for the determination of tangents. Default: 0.0
Length of the part of a contour to use for the determination of tangents. Default: 30.0
Maximum angle difference between two contours' tangents. Default: 0.78539816
Maximum distance of the contours' end points. Default: 25.0
Maximum distance of the contours' end points perpendicular to their tangents. Default: 10.0
Maximum overlap of two contours. Default: 2.0
Mode describing the treatment of the contours' attributes. Default: "attr_forget"
Output XLD contours.
Transform a contour in polar coordinates back to Cartesian coordinates.
Instance represents: Input contour.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the column coordinate 0 of PolarContour to. Default: 0.0
Angle of the ray to map the column coordinate WidthIn-1 of PolarContour to. Default: 6.2831853
Radius of the circle to map the row coordinate 0 of PolarContour to. Default: 0
Radius of the circle to map the row coordinate HeightIn-1 of PolarContour to. Default: 100
Width of the virtual input image. Default: 512
Height of the virtual input image. Default: 512
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Output contour.
Transform a contour in polar coordinates back to Cartesian coordinates.
Instance represents: Input contour.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to map the column coordinate 0 of PolarContour to. Default: 0.0
Angle of the ray to map the column coordinate WidthIn-1 of PolarContour to. Default: 6.2831853
Radius of the circle to map the row coordinate 0 of PolarContour to. Default: 0
Radius of the circle to map the row coordinate HeightIn-1 of PolarContour to. Default: 100
Width of the virtual input image. Default: 512
Height of the virtual input image. Default: 512
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Output contour.
Transform a contour in an annular arc to polar coordinates.
Instance represents: Input contour.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to the column coordinate 0 of PolarTransContour. Default: 0.0
Angle of the ray to be mapped to the column coordinate Width-1 of PolarTransContour. Default: 6.2831853
Radius of the circle to be mapped to the row coordinate 0 of PolarTransContour. Default: 0
Radius of the circle to be mapped to the row coordinate Height-1 of PolarTransContour. Default: 100
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Output contour.
Transform a contour in an annular arc to polar coordinates.
Instance represents: Input contour.
Row coordinate of the center of the arc. Default: 256
Column coordinate of the center of the arc. Default: 256
Angle of the ray to be mapped to the column coordinate 0 of PolarTransContour. Default: 0.0
Angle of the ray to be mapped to the column coordinate Width-1 of PolarTransContour. Default: 6.2831853
Radius of the circle to be mapped to the row coordinate 0 of PolarTransContour. Default: 0
Radius of the circle to be mapped to the row coordinate Height-1 of PolarTransContour. Default: 100
Width of the virtual output image. Default: 512
Height of the virtual output image. Default: 512
Output contour.
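The mapping itself is straightforward: the angle around the center is scaled onto the virtual image's column axis, the radius onto its row axis. A Python sketch of this mapping for a single point, where the angle convention (atan2 over row/column offsets, wrapped to [0, 2π)) is an assumption and the parameter defaults mirror those listed above:

```python
import math

def polar_map(row, col, center=(256.0, 256.0),
              angle=(0.0, 2 * math.pi), radius=(0.0, 100.0),
              size=(512, 512)):
    """Map one Cartesian (row, col) point into a virtual polar image.

    angle[0]/angle[1] map to columns 0 and width-1;
    radius[0]/radius[1] map to rows 0 and height-1.
    """
    dr, dc = row - center[0], col - center[1]
    phi = math.atan2(dr, dc) % (2 * math.pi)   # assumed angle convention
    rad = math.hypot(dr, dc)
    height, width = size
    col_out = (phi - angle[0]) / (angle[1] - angle[0]) * (width - 1)
    row_out = (rad - radius[0]) / (radius[1] - radius[0]) * (height - 1)
    return row_out, col_out
```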
Transform a NURBS curve into an XLD contour.
Modified instance represents: The contour that approximates the NURBS curve.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
The knot vector u. Default: "auto"
The weight vector w. Default: "auto"
The degree p of the NURBS curve. Default: 3
Maximum distance between the NURBS curve and its approximation. Default: 1.0
Maximum distance between two subsequent Contour points. Default: 5.0
Transform a NURBS curve into an XLD contour.
Modified instance represents: The contour that approximates the NURBS curve.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
The knot vector u. Default: "auto"
The weight vector w. Default: "auto"
The degree p of the NURBS curve. Default: 3
Maximum distance between the NURBS curve and its approximation. Default: 1.0
Maximum distance between two subsequent Contour points. Default: 5.0
Compute the union of closed contours.
Instance represents: Contours enclosing the first region.
Contours enclosing the second region.
Contours enclosing the union.
Compute the symmetric difference of closed contours.
Instance represents: Contours enclosing the first region.
Contours enclosing the second region.
Contours enclosing the symmetric difference.
Compute the difference of closed contours.
Instance represents: Contours enclosing the region from which the second region is subtracted.
Contours enclosing the region that is subtracted from the first region.
Contours enclosing the difference.
Intersect closed contours.
Instance represents: Contours enclosing the first region to be intersected.
Contours enclosing the second region to be intersected.
Contours enclosing the intersection.
Compute the union of contours that belong to the same circle.
Instance represents: Contours to be merged.
Maximum angular distance of two circular arcs. Default: 0.5
Maximum overlap of two circular arcs. Default: 0.1
Maximum angle between the connecting line and the tangents of circular arcs. Default: 0.2
Maximum length of the gap between two circular arcs in pixels. Default: 30
Maximum radius difference of the circles fitted to two arcs. Default: 10
Maximum center distance of the circles fitted to two arcs. Default: 10
Determine whether small contours without fitted circles should also be merged. Default: "true"
Number of iterations. Default: 1
Merged contours.
Compute the union of contours that belong to the same circle.
Instance represents: Contours to be merged.
Maximum angular distance of two circular arcs. Default: 0.5
Maximum overlap of two circular arcs. Default: 0.1
Maximum angle between the connecting line and the tangents of circular arcs. Default: 0.2
Maximum length of the gap between two circular arcs in pixels. Default: 30
Maximum radius difference of the circles fitted to two arcs. Default: 10
Maximum center distance of the circles fitted to two arcs. Default: 10
Determine whether small contours without fitted circles should also be merged. Default: "true"
Number of iterations. Default: 1
Merged contours.
Crop an XLD contour.
Instance represents: Input contours.
Upper border of the cropping rectangle. Default: 0
Left border of the cropping rectangle. Default: 0
Lower border of the cropping rectangle. Default: 512
Right border of the cropping rectangle. Default: 512
Should closed contours produce closed output contours? Default: "true"
Output contours.
Crop an XLD contour.
Instance represents: Input contours.
Upper border of the cropping rectangle. Default: 0
Left border of the cropping rectangle. Default: 0
Lower border of the cropping rectangle. Default: 512
Right border of the cropping rectangle. Default: 512
Should closed contours produce closed output contours? Default: "true"
Output contours.
Generate one XLD contour in the shape of a cross for each input point.
Modified instance represents: Generated XLD contours.
Row coordinates of the input points.
Column coordinates of the input points.
Length of the cross bars. Default: 6.0
Orientation of the crosses. Default: 0.785398
Generate one XLD contour in the shape of a cross for each input point.
Modified instance represents: Generated XLD contours.
Row coordinates of the input points.
Column coordinates of the input points.
Length of the cross bars. Default: 6.0
Orientation of the crosses. Default: 0.785398
Sort contours with respect to their relative position.
Instance represents: Contours to be sorted.
Kind of sorting. Default: "upper_left"
Increasing or decreasing sorting order. Default: "true"
Sorting first with respect to row, then to column. Default: "row"
Sorted contours.
Merge XLD contours from successive line scan images.
Instance represents: Current input contours.
Merged contours from the previous iteration.
Contours from the previous iteration which could not be merged with the current ones.
Height of the line scan images. Default: 512
Maximum distance of contours from the image border. Default: 0.0
Image line of the current image that touches the previous image. Default: "top"
Maximum number of images covered by one contour. Default: 3
Current contours, merged with old ones where applicable.
Merge XLD contours from successive line scan images.
Instance represents: Current input contours.
Merged contours from the previous iteration.
Contours from the previous iteration which could not be merged with the current ones.
Height of the line scan images. Default: 512
Maximum distance of contours from the image border. Default: 0.0
Image line of the current image that touches the previous image. Default: "top"
Maximum number of images covered by one contour. Default: 3
Current contours, merged with old ones where applicable.
Read XLD contours from a file in ARC/INFO generate format.
Modified instance represents: Read XLD contours.
Name of the ARC/INFO file.
Write XLD contours to a file in ARC/INFO generate format.
Instance represents: XLD contours to be written.
Name of the ARC/INFO file.
Compute the parallel contour of an XLD contour.
Instance represents: Contours to be transformed.
Mode with which the direction information is computed. Default: "regression_normal"
Distance of the parallel contour. Default: 1
Parallel contours.
Compute the parallel contour of an XLD contour.
Instance represents: Contours to be transformed.
Mode with which the direction information is computed. Default: "regression_normal"
Distance of the parallel contour. Default: 1
Parallel contours.
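Conceptually, a parallel contour offsets each point along the local normal direction by the given distance. A simplified Python sketch using central-difference tangents instead of the regression-based normals described above (sign of the offset depends on the point order):

```python
import math

def parallel_contour(points, distance):
    """Offset an open contour by moving each point along an estimated normal.

    The tangent at each point is estimated from its neighbors (one-sided at
    the ends); the normal is the tangent rotated by -90 degrees.
    """
    out = []
    n = len(points)
    for i in range(n):
        p0 = points[max(i - 1, 0)]
        p1 = points[min(i + 1, n - 1)]
        tx, ty = p1[0] - p0[0], p1[1] - p0[1]
        norm = math.hypot(tx, ty)
        nx, ny = ty / norm, -tx / norm   # normal = tangent rotated by -90 deg
        out.append((points[i][0] + distance * nx,
                    points[i][1] + distance * ny))
    return out
```

Offsetting a counterclockwise circle outward with this convention grows its radius by the given distance.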
Create an XLD contour in the shape of a rectangle.
Modified instance represents: Rectangle contour.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Orientation of the main axis of the rectangle [rad]. Default: 0.0
First radius (half length) of the rectangle. Default: 100.5
Second radius (half width) of the rectangle. Default: 20.5
Create an XLD contour in the shape of a rectangle.
Modified instance represents: Rectangle contour.
Row coordinate of the center of the rectangle. Default: 300.0
Column coordinate of the center of the rectangle. Default: 200.0
Orientation of the main axis of the rectangle [rad]. Default: 0.0
First radius (half length) of the rectangle. Default: 100.5
Second radius (half width) of the rectangle. Default: 20.5
Compute the distances of all contour points to a rectangle.
Instance represents: Input contour.
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the rectangle.
Column coordinate of the center of the rectangle.
Orientation of the main axis of the rectangle [rad].
First radius (half length) of the rectangle.
Second radius (half width) of the rectangle.
Distances of the contour points to the rectangle.
Fit rectangles to XLD contours.
Instance represents: Input contours.
Algorithm for fitting the rectangles. Default: "regression"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as closed. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations (not used for 'regression'). Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 2.0 for 'tukey'). Default: 2.0
Row coordinate of the center of the rectangle.
Column coordinate of the center of the rectangle.
Orientation of the main axis of the rectangle [rad].
First radius (half length) of the rectangle.
Second radius (half width) of the rectangle.
Point order of the contour.
Fit rectangles to XLD contours.
Instance represents: Input contours.
Algorithm for fitting the rectangles. Default: "regression"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as closed. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations (not used for 'regression'). Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 2.0 for 'tukey'). Default: 2.0
Row coordinate of the center of the rectangle.
Column coordinate of the center of the rectangle.
Orientation of the main axis of the rectangle [rad].
First radius (half length) of the rectangle.
Second radius (half width) of the rectangle.
Point order of the contour.
Segment XLD contour parts whose local attributes fulfill given conditions.
Instance represents: Contour to be segmented.
Contour attributes to be checked. Default: "distance"
Linkage type of the individual attributes. Default: "and"
Lower limits of the attribute values. Default: 150.0
Upper limits of the attribute values. Default: 99999.0
Segmented contour parts.
Segment XLD contour parts whose local attributes fulfill given conditions.
Instance represents: Contour to be segmented.
Contour attributes to be checked. Default: "distance"
Linkage type of the individual attributes. Default: "and"
Lower limits of the attribute values. Default: 150.0
Upper limits of the attribute values. Default: 99999.0
Segmented contour parts.
Segment XLD contours into line segments and circular or elliptic arcs.
Instance represents: Contours to be segmented.
Mode for the segmentation of the contours. Default: "lines_circles"
Number of points used for smoothing the contours. Default: 5
Maximum distance between a contour and the approximating line (first iteration). Default: 4.0
Maximum distance between a contour and the approximating line (second iteration). Default: 2.0
Segmented contours.
Approximate XLD contours by circles.
Instance represents: Input contours.
Algorithm for the fitting of circles. Default: "algebraic"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as 'closed'. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations for the robust weighted fitting. Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for Huber and 2.0 for Tukey). Default: 2.0
Row coordinate of the center of the circle.
Column coordinate of the center of the circle.
Radius of the circle.
Angle of the start point [rad].
Angle of the end point [rad].
Point order along the boundary.
Approximate XLD contours by circles.
Instance represents: Input contours.
Algorithm for the fitting of circles. Default: "algebraic"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as 'closed'. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations for the robust weighted fitting. Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for Huber and 2.0 for Tukey). Default: 2.0
Row coordinate of the center of the circle.
Column coordinate of the center of the circle.
Radius of the circle.
Angle of the start point [rad].
Angle of the end point [rad].
Point order along the boundary.
Approximate XLD contours by line segments.
Instance represents: Input contours.
Algorithm for the fitting of lines. Default: "tukey"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 'drop' and 2.0 for 'tukey'). Default: 2.0
Row coordinates of the starting points of the line segments.
Column coordinates of the starting points of the line segments.
Row coordinates of the end points of the line segments.
Column coordinates of the end points of the line segments.
Line parameter: Row coordinate of the normal vector
Line parameter: Column coordinate of the normal vector
Line parameter: Distance of the line from the origin
Approximate XLD contours by line segments.
Instance represents: Input contours.
Algorithm for the fitting of lines. Default: "tukey"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Maximum number of iterations (unused for 'regression'). Default: 5
Clipping factor for the elimination of outliers (typical: 1.0 for 'huber' and 'drop' and 2.0 for 'tukey'). Default: 2.0
Row coordinates of the starting points of the line segments.
Column coordinates of the starting points of the line segments.
Row coordinates of the end points of the line segments.
Column coordinates of the end points of the line segments.
Line parameter: Row coordinate of the normal vector
Line parameter: Column coordinate of the normal vector
Line parameter: Distance of the line from the origin
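The line parameters (Nr, Nc, Dist) form the Hesse normal form Nr·row + Nc·col = Dist. A Python sketch of a plain least-squares ('regression') fit via the scatter matrix, ignoring the robust weighting options ('huber', 'tukey', 'drop'):

```python
import math

def fit_line(points):
    """Least-squares line through (row, col) points in Hesse normal form.

    Returns a unit normal (nr, nc) and signed distance d such that
    nr*row + nc*col = d holds for points on the line. The line direction
    is the principal eigenvector of the scatter matrix; the normal is
    perpendicular to it.
    """
    n = len(points)
    mr = sum(p[0] for p in points) / n
    mc = sum(p[1] for p in points) / n
    srr = sum((p[0] - mr) ** 2 for p in points)
    scc = sum((p[1] - mc) ** 2 for p in points)
    src = sum((p[0] - mr) * (p[1] - mc) for p in points)
    theta = 0.5 * math.atan2(2 * src, srr - scc)  # direction vs. row axis
    dr, dc = math.cos(theta), math.sin(theta)     # line direction
    nr, nc = -dc, dr                              # unit normal
    return nr, nc, nr * mr + nc * mc
```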
Compute the distances of all contour points to an ellipse.
Instance represents: Input contours.
Mode for unsigned or signed distance values. Default: "unsigned"
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Distances of the contour points to the ellipse.
Compute the distance of contours to an ellipse.
Instance represents: Input contours.
Method for the determination of the distances. Default: "geometric"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Minimum distance.
Maximum distance.
Mean distance.
Standard deviation of the distance.
Compute the distance of contours to an ellipse.
Instance represents: Input contours.
Method for the determination of the distances. Default: "geometric"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Number of points at the beginning and the end of the contours to be ignored for the computation of distances. Default: 0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Minimum distance.
Maximum distance.
Mean distance.
Standard deviation of the distance.
Approximate XLD contours by ellipses or elliptic arcs.
Instance represents: Input contours.
Algorithm for the fitting of ellipses. Default: "fitzgibbon"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as 'closed'. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Number of circular segments used for the Voss approach. Default: 200
Maximum number of iterations for the robust weighted fitting. Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for '*huber' and 2.0 for '*tukey'). Default: 2.0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Angle of the start point [rad].
Angle of the end point [rad].
Point order along the boundary.
Approximate XLD contours by ellipses or elliptic arcs.
Instance represents: Input contours.
Algorithm for the fitting of ellipses. Default: "fitzgibbon"
Maximum number of contour points used for the computation (-1 for all points). Default: -1
Maximum distance between the end points of a contour to be considered as 'closed'. Default: 0.0
Number of points at the beginning and at the end of the contours to be ignored for the fitting. Default: 0
Number of circular segments used for the Voss approach. Default: 200
Maximum number of iterations for the robust weighted fitting. Default: 3
Clipping factor for the elimination of outliers (typical: 1.0 for '*huber' and 2.0 for '*tukey'). Default: 2.0
Row coordinate of the center of the ellipse.
Column coordinate of the center of the ellipse.
Orientation of the main axis [rad].
Length of the larger half axis.
Length of the smaller half axis.
Angle of the start point [rad].
Angle of the end point [rad].
Point order along the boundary.
Create XLD contours corresponding to circles or circular arcs.
Modified instance represents: Resulting contours.
Row coordinate of the center of the circles or circular arcs. Default: 200.0
Column coordinate of the center of the circles or circular arcs. Default: 200.0
Radius of the circles or circular arcs. Default: 100.0
Angle of the start points of the circles or circular arcs [rad]. Default: 0.0
Angle of the end points of the circles or circular arcs [rad]. Default: 6.28318
Point order along the circles or circular arcs. Default: "positive"
Distance between neighboring contour points. Default: 1.0
Create XLD contours corresponding to circles or circular arcs.
Modified instance represents: Resulting contours.
Row coordinate of the center of the circles or circular arcs. Default: 200.0
Column coordinate of the center of the circles or circular arcs. Default: 200.0
Radius of the circles or circular arcs. Default: 100.0
Angle of the start points of the circles or circular arcs [rad]. Default: 0.0
Angle of the end points of the circles or circular arcs [rad]. Default: 6.28318
Point order along the circles or circular arcs. Default: "positive"
Distance between neighboring contour points. Default: 1.0
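Generating such a contour amounts to sampling the arc at roughly Resolution-pixel spacing. A Python sketch under an assumed angle convention (angle measured from the column axis, rows increasing downward as in image coordinates):

```python
import math

def gen_circle_contour(row, col, radius, start_phi=0.0,
                       end_phi=2 * math.pi, resolution=1.0):
    """Sample a circular arc as (row, col) contour points.

    The number of steps is chosen so consecutive points are spaced at
    most `resolution` apart along the arc.
    """
    arc_len = abs(end_phi - start_phi) * radius
    steps = max(1, int(math.ceil(arc_len / resolution)))
    pts = []
    for i in range(steps + 1):
        phi = start_phi + (end_phi - start_phi) * i / steps
        pts.append((row - radius * math.sin(phi),
                    col + radius * math.cos(phi)))
    return pts
```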
Create an XLD contour that corresponds to an elliptic arc.
Modified instance represents: Resulting contour.
Row coordinate of the center of the ellipse. Default: 200.0
Column coordinate of the center of the ellipse. Default: 200.0
Orientation of the main axis [rad]. Default: 0.0
Length of the larger half axis. Default: 100.0
Length of the smaller half axis. Default: 50.0
Angle of the start point on the smallest surrounding circle [rad]. Default: 0.0
Angle of the end point on the smallest surrounding circle [rad]. Default: 6.28318
Point order along the boundary. Default: "positive"
Resolution: Maximum distance between neighboring contour points. Default: 1.5
Create an XLD contour that corresponds to an elliptic arc.
Modified instance represents: Resulting contour.
Row coordinate of the center of the ellipse. Default: 200.0
Column coordinate of the center of the ellipse. Default: 200.0
Orientation of the main axis [rad]. Default: 0.0
Length of the larger half axis. Default: 100.0
Length of the smaller half axis. Default: 50.0
Angle of the start point on the smallest surrounding circle [rad]. Default: 0.0
Angle of the end point on the smallest surrounding circle [rad]. Default: 6.28318
Point order along the boundary. Default: "positive"
Resolution: Maximum distance between neighboring contour points. Default: 1.5
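The start and end angles of the elliptic arc are parametric angles on the smallest surrounding circle, i.e. the point for parameter t is (Ra·cos t, Rb·sin t) before rotation by the main-axis orientation. A minimal sketch of that parametrization (illustrative only, not the library API; row/column conventions are assumptions):

```python
import math

def ellipse_contour_points(row, col, phi, ra, rb, t_start=0.0,
                           t_end=2 * math.pi, resolution=1.5):
    # Sample count chosen from the surrounding circle (radius = larger
    # half axis), so spacing never exceeds roughly `resolution` pixels.
    sweep = t_end - t_start
    n = max(2, int(abs(sweep) * ra / resolution) + 1)
    pts = []
    for i in range(n):
        t = t_start + sweep * i / (n - 1)
        x, y = ra * math.cos(t), rb * math.sin(t)
        # Rotate by the main-axis orientation, then map to row/col
        # (rows grow downwards, hence the minus sign).
        xr = x * math.cos(phi) - y * math.sin(phi)
        yr = x * math.sin(phi) + y * math.cos(phi)
        pts.append((row - yr, col + xr))
    return pts
```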
Add noise to XLD contours.
Instance represents: Original contours.
Number of points used to calculate the regression line. Default: 5
Maximum amplitude of the added noise (equally distributed in [-Amp,Amp]). Default: 1.0
Noisy contours.
Approximate XLD contours by polygons.
Instance represents: Contours to be approximated.
Type of approximation. Default: "ramer"
Threshold for the approximation. Default: 2.0
Approximating polygons.
Approximate XLD contours by polygons.
Instance represents: Contours to be approximated.
Type of approximation. Default: "ramer"
Threshold for the approximation. Default: 2.0
Approximating polygons.
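The default "ramer" approximation is the Ramer-Douglas-Peucker algorithm: the chord between the current end points is kept as long as no interior contour point deviates from it by more than the threshold; otherwise the contour is split at the farthest point and both halves are processed recursively. A compact sketch (function name illustrative):

```python
import math

def ramer(points, tol):
    """Ramer-Douglas-Peucker on a list of (row, col) points."""
    if len(points) < 3:
        return list(points)
    (r0, c0), (r1, c1) = points[0], points[-1]
    dr, dc = r1 - r0, c1 - c0
    norm = math.hypot(dr, dc) or 1.0
    # Find the interior point farthest from the chord.
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        d = abs(dr * (points[i][1] - c0) - dc * (points[i][0] - r0)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= tol:
        return [points[0], points[-1]]
    # Split at the farthest point and recurse on both halves.
    left = ramer(points[:best_i + 1], tol)
    right = ramer(points[best_i:], tol)
    return left[:-1] + right
```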
Apply a projective transformation to an XLD contour.
Instance represents: Input contours.
Homogeneous projective transformation matrix.
Output contours.
Apply an arbitrary affine 2D transformation to XLD contours.
Instance represents: Input XLD contours.
Input transformation matrix.
Transformed XLD contours.
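Applying the affine transformation amounts to multiplying every contour point by a 2x3 matrix. A sketch assuming a row-major layout acting on (row, col, 1), which mirrors how HALCON's 2D homogeneous matrices are commonly laid out (the exact element order is an assumption of this illustration):

```python
def affine_trans_contour(points, hom_mat_2d):
    """Apply a 2x3 affine matrix [[a11, a12, t_row], [a21, a22, t_col]]
    to a list of (row, col) contour points."""
    (a11, a12, tr), (a21, a22, tc) = hom_mat_2d
    return [(a11 * r + a12 * c + tr, a21 * r + a22 * c + tc)
            for r, c in points]
```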
Close an XLD contour.
Instance represents: Contours to be closed.
Closed contours.
Clip the end points of an XLD contour.
Instance represents: Input contour.
Clipping mode. Default: "num_points"
Clipping length in pixels (Mode = 'length') or number of points (Mode = 'num_points'). Default: 3
Clipped contour.
Clip the end points of an XLD contour.
Instance represents: Input contour.
Clipping mode. Default: "num_points"
Clipping length in pixels (Mode = 'length') or number of points (Mode = 'num_points'). Default: 3
Clipped contour.
Clip an XLD contour.
Instance represents: Contours to be clipped.
Row coordinate of the upper left corner of the clip rectangle. Default: 0
Column coordinate of the upper left corner of the clip rectangle. Default: 0
Row coordinate of the lower right corner of the clip rectangle. Default: 512
Column coordinate of the lower right corner of the clip rectangle. Default: 512
Clipped contours.
Select XLD contours with a local maximum of gray values.
Instance represents: XLD contours to be examined.
Corresponding gray value image.
Minimum percentage of maximum points. Default: 70
Minimum amount by which the gray value at the maximum must be larger than in the profile. Default: 15
Maximum width of profile used to check for maxima. Default: 4
Selected contours.
Select XLD contours with a local maximum of gray values.
Instance represents: XLD contours to be examined.
Corresponding gray value image.
Minimum percentage of maximum points. Default: 70
Minimum amount by which the gray value at the maximum must be larger than in the profile. Default: 15
Maximum width of profile used to check for maxima. Default: 4
Selected contours.
Compute the union of neighboring straight contours that have a similar distance from a given line.
Instance represents: Input XLD contours.
Output XLD contours.
Row (y) coordinate of the start point of the reference line. Default: 0
Column (x) coordinate of the start point of the reference line. Default: 0
Row (y) coordinate of the end point of the reference line. Default: 0
Column (x) coordinate of the end point of the reference line. Default: 0
Maximum distance. Default: 1
Maximum width between two minima. Default: 1
Size of the smoothing filter. Default: 1
Output values of the histogram.
Output XLD contours.
Compute the union of neighboring straight contours that have a similar direction.
Instance represents: Input XLD contours.
Maximum distance of the contours' endpoints. Default: 5.0
Maximum difference in direction. Default: 0.5
Weighting factor for the two selection criteria. Default: 50.0
Should parallel contours be taken into account? Default: "noparallel"
Number of iterations or 'maximum'. Default: "maximum"
Output XLD contours.
Compute the union of neighboring straight contours that have a similar direction.
Instance represents: Input XLD contours.
Maximum distance of the contours' endpoints. Default: 5.0
Maximum difference in direction. Default: 0.5
Weighting factor for the two selection criteria. Default: 50.0
Should parallel contours be taken into account? Default: "noparallel"
Number of iterations or 'maximum'. Default: "maximum"
Output XLD contours.
Compute the union of collinear contours (operator with extended functionality).
Instance represents: Input XLD contours.
Maximum distance of the contours' end points in the direction of the reference regression line. Default: 10.0
Maximum distance of the contours' end points in the direction of the reference regression line in relation to the length of the contour which is to be elongated. Default: 1.0
Maximum distance of the contour from the reference regression line (i.e., perpendicular to the line). Default: 2.0
Maximum angle difference between the two contours. Default: 0.1
Maximum range of the overlap. Default: 0.0
Maximum regression error of the resulting contours (NOT USED). Default: -1.0
Threshold for reducing the total costs of unification. Default: 1.0
Influence of the distance in the line direction on the total costs. Default: 1.0
Influence of the distance from the regression line on the total costs. Default: 1.0
Influence of the angle difference on the total costs. Default: 1.0
Influence of the line disturbance by the linking segment (overlap and angle difference) on the total costs. Default: 1.0
Influence of the regression error on the total costs (NOT USED). Default: 0.0
Mode describing the treatment of the contours' attributes Default: "attr_keep"
Output XLD contours.
Unite approximately collinear contours.
Instance represents: Input XLD contours.
Maximum length of the gap between two contours, measured along the regression line of the reference contour. Default: 10.0
Maximum length of the gap between two contours, relative to the length of the reference contour, both measured along the regression line of the reference contour. Default: 1.0
Maximum distance of the second contour from the regression line of the reference contour. Default: 2.0
Maximum angle between the regression lines of two contours. Default: 0.1
Mode that defines the treatment of contour attributes, i.e., if the contour attributes are kept or discarded. Default: "attr_keep"
Output XLD contours.
Compute the union of contours whose end points are close together.
Instance represents: Input XLD contours.
Maximum distance of the contours' end points. Default: 10.0
Maximum distance of the contours' end points in relation to the length of the longer contour. Default: 1.0
Mode describing the treatment of the contours' attributes. Default: "attr_keep"
Output XLD contours.
Select XLD contours according to several features.
Instance represents: Input XLD contours.
Feature to select contours with. Default: "contour_length"
Lower threshold. Default: 0.5
Upper threshold. Default: 200.0
Lower threshold. Default: -0.5
Upper threshold. Default: 0.5
Output XLD contours.
Return XLD contour parameters.
Instance represents: Input XLD contours.
X-coordinate of the normal vector of the regression line.
Y-coordinate of the normal vector of the regression line.
Distance of the regression line from the origin.
X-coordinate of the projection of the start point of the contour onto the regression line.
Y-coordinate of the projection of the start point of the contour onto the regression line.
X-coordinate of the projection of the end point of the contour onto the regression line.
Y-coordinate of the projection of the end point of the contour onto the regression line.
Mean distance of the contour points from the regression line.
Standard deviation of the distances from the regression line.
Number of contour points.
Calculate the parameters of a regression line to an XLD contour.
Instance represents: Input XLD contours.
Type of outlier treatment. Default: "no"
Number of iterations for the outlier treatment. Default: 1
Resulting XLD contours.
Calculate the direction of an XLD contour for each contour point.
Instance represents: Input contour.
Return type of the angles. Default: "abs"
Method for computing the angles. Default: "range"
Number of points to take into account. Default: 3
Direction of the tangent to the contour points.
Smooth an XLD contour.
Instance represents: Contour to be smoothed.
Number of points used to calculate the regression line. Default: 5
Smoothed contour.
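Smoothing replaces every contour point by a value computed from its `NumRegrPoints` neighbors on each side. As a simplified sketch, a moving average over the same window already conveys the effect (the operator itself projects each point onto a local regression line; that refinement is omitted here):

```python
def smooth_contour(points, num_regr_points=5):
    """Moving-average sketch of contour smoothing over a window of
    `num_regr_points` neighbors on each side, clipped at the ends."""
    n = len(points)
    out = []
    for i in range(n):
        lo, hi = max(0, i - num_regr_points), min(n, i + num_regr_points + 1)
        row = sum(p[0] for p in points[lo:hi]) / (hi - lo)
        col = sum(p[1] for p in points[lo:hi]) / (hi - lo)
        out.append((row, col))
    return out
```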
Return the number of points in an XLD contour.
Instance represents: Input XLD contour.
Number of contour points.
Return the names of the defined global attributes of an XLD contour.
Instance represents: Input contour.
List of the defined global contour attributes.
Return global attributes values of an XLD contour.
Instance represents: Input XLD contour.
Name of the attribute. Default: "regr_norm_row"
Attribute values.
Return global attributes values of an XLD contour.
Instance represents: Input XLD contour.
Name of the attribute. Default: "regr_norm_row"
Attribute values.
Return the names of the defined attributes of an XLD contour.
Instance represents: Input contour.
List of the defined contour attributes.
Return point attribute values of an XLD contour.
Instance represents: Input XLD contour.
Name of the attribute. Default: "angle"
Attribute values.
Return the coordinates of an XLD contour.
Instance represents: Input XLD contour.
Row coordinate of the contour's points.
Column coordinate of the contour's points.
Generate an XLD contour with rounded corners from a polygon (given as tuples).
Modified instance represents: Resulting contour.
Row coordinates of the polygon. Default: [20,80,80,20,20]
Column coordinates of the polygon. Default: [20,20,80,80,20]
Radii of the rounded corners. Default: [20,20,20,20,20]
Distance of the samples. Default: 1.0
Generate an XLD contour with rounded corners from a polygon (given as tuples).
Modified instance represents: Resulting contour.
Row coordinates of the polygon. Default: [20,80,80,20,20]
Column coordinates of the polygon. Default: [20,20,80,80,20]
Radii of the rounded corners. Default: [20,20,20,20,20]
Distance of the samples. Default: 1.0
Generate an XLD contour from a polygon (given as tuples).
Modified instance represents: Resulting contour.
Row coordinates of the polygon. Default: [0,1,2,2,2]
Column coordinates of the polygon. Default: [0,0,0,1,2]
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values or coordinates etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Create a region from an XLD contour.
Instance represents: Input contour(s).
Fill mode of the region(s). Default: "filled"
Created region(s).
Prepare an anisotropically scaled shape model for matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare an anisotropically scaled shape model for matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in the row direction. Default: 0.9
Maximum scale of the pattern in the row direction. Default: 1.1
Scale step length (resolution) in the row direction. Default: "auto"
Minimum scale of the pattern in the column direction. Default: 0.9
Maximum scale of the pattern in the column direction. Default: 1.1
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare an isotropically scaled shape model for matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare an isotropically scaled shape model for matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern. Default: 0.9
Maximum scale of the pattern. Default: 1.1
Scale step length (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare a shape model for matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare a shape model for matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
Smallest rotation of the pattern. Default: -0.39
Extent of the rotation angles. Default: 0.79
Step length of the angles (resolution). Default: "auto"
Kind of optimization and optionally method used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
Handle of the model.
Prepare a deformable model for local deformable matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Prepare a deformable model for local deformable matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Handle of the model.
Prepare a deformable model for planar calibrated matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
The parameters of the internal orientation of the camera.
The reference pose of the object.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameter. Default: []
Handle of the model.
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Prepare a deformable model for planar uncalibrated matching from XLD contours.
Instance represents: Input contours that will be used to create the model.
Maximum number of pyramid levels. Default: "auto"
This parameter is not used. Default: []
This parameter is not used. Default: []
Step length of the angles (resolution). Default: "auto"
Minimum scale of the pattern in row direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in row direction. Default: "auto"
Minimum scale of the pattern in column direction. Default: 1.0
This parameter is not used. Default: []
Scale step length (resolution) in the column direction. Default: "auto"
Kind of optimization used for generating the model. Default: "auto"
Match metric. Default: "ignore_local_polarity"
Minimum contrast of the objects in the search images. Default: 5
The generic parameter names. Default: []
Values of the generic parameters. Default: []
Handle of the model.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Interactive modification of a NURBS curve using interpolation.
Modified instance represents: Contour of the modified curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 5. Default: 3
Row coordinates of the input interpolation points.
Column coordinates of the input interpolation points.
Input tangents.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Knot vector.
Row coordinates of the points specified by the user.
Column coordinates of the points specified by the user.
Tangents specified by the user.
Interactive drawing of a NURBS curve using interpolation.
Modified instance represents: Contour of the curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 5. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Knot vector.
Row coordinates of the points specified by the user.
Column coordinates of the points specified by the user.
Tangents specified by the user.
Interactive modification of a NURBS curve.
Modified instance represents: Contour of the modified curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 25. Default: 3
Row coordinates of the input control polygon.
Column coordinates of the input control polygon.
Input weight vector.
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Weight vector.
Interactive drawing of a NURBS curve.
Modified instance represents: Contour approximating the NURBS curve.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
The degree p of the NURBS curve. Reasonable values are 3 to 25. Default: 3
Row coordinates of the control polygon.
Column coordinates of the control polygon.
Weight vector.
Interactive modification of a contour.
Instance represents: Input contour.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Enable editing? Default: "true"
Modified contour.
Interactive drawing of a contour.
Modified instance represents: Modified contour.
Window handle.
Enable rotation? Default: "true"
Enable moving? Default: "true"
Enable scaling? Default: "true"
Keep ratio while scaling? Default: "true"
Calculate the pointwise distance from one contour to another.
Instance represents: Contours for whose points the distances are calculated.
Contours to which the distances are calculated.
Compute the distance to points ('point_to_point') or to entire segments ('point_to_segment'). Default: "point_to_point"
Copy of ContourFrom containing the distances as an attribute.
Calculate the minimum distance between two contours.
Instance represents: First input contour.
Second input contour.
Distance calculation mode. Default: "fast_point_to_segment"
Minimum distance between the two contours.
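In the segment-based modes, the minimum distance is taken over every point of one contour against every line segment of the other, in both directions. An illustrative brute-force sketch (function names are hypothetical; the operator's "fast" mode additionally prunes candidates, which is omitted here):

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to line segment a-b (all (row, col))."""
    pr, pc = p
    ar, ac = a
    br, bc = b
    dr, dc = br - ar, bc - ac
    seg2 = dr * dr + dc * dc
    # Parameter of the orthogonal projection, clamped to the segment.
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((pr - ar) * dr + (pc - ac) * dc) / seg2))
    return math.hypot(pr - (ar + t * dr), pc - (ac + t * dc))

def distance_cc_min(contour1, contour2):
    """Minimum point-to-segment distance between two contours,
    evaluated in both directions."""
    best = float("inf")
    for pts, segs in ((contour1, contour2), (contour2, contour1)):
        for p in pts:
            for a, b in zip(segs, segs[1:]):
                best = min(best, point_segment_dist(p, a, b))
    return best
```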
Calculate the distance between two contours.
Instance represents: First input contour.
Second input contour.
Distance calculation mode. Default: "point_to_point"
Minimum distance between both contours.
Maximum distance between both contours.
Calculate the distance between two contours.
Instance represents: First input contour.
Second input contour.
Distance calculation mode. Default: "point_to_point"
Minimum distance between both contours.
Maximum distance between both contours.
Calculate the distance between a line segment and one contour.
Instance represents: Input contour.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the line segment and the contour.
Maximum distance between the line segment and the contour.
Calculate the distance between a line segment and one contour.
Instance represents: Input contour.
Row coordinate of the first point of the line segment.
Column coordinate of the first point of the line segment.
Row coordinate of the second point of the line segment.
Column coordinate of the second point of the line segment.
Minimum distance between the line segment and the contour.
Maximum distance between the line segment and the contour.
Calculate the distance between a line and one contour.
Instance represents: Input contour.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line and the contour.
Maximum distance between the line and the contour.
Calculate the distance between a line and one contour.
Instance represents: Input contour.
Row coordinate of the first point of the line.
Column coordinate of the first point of the line.
Row coordinate of the second point of the line.
Column coordinate of the second point of the line.
Minimum distance between the line and the contour.
Maximum distance between the line and the contour.
Calculate the distance between a point and one contour.
Instance represents: Input contour.
Row coordinate of the point.
Column coordinate of the point.
Minimum distance between the point and the contour.
Maximum distance between the point and the contour.
Calculate the distance between a point and one contour.
Instance represents: Input contour.
Row coordinate of the point.
Column coordinate of the point.
Minimum distance between the point and the contour.
Maximum distance between the point and the contour.
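For a single point against a contour, the distances reduce to the point's distance to each contour point; the minimum and maximum of these give the two result values. A sketch of that point-to-point view (function name hypothetical; the operator's minimum additionally considers the segments between contour points):

```python
import math

def distance_pc(contour, p):
    """Min and max distance from point p to the contour's points."""
    dists = [math.hypot(p[0] - r, p[1] - c) for r, c in contour]
    return min(dists), max(dists)
```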
Read XLD contours from a DXF file.
Modified instance represents: Read XLD contours.
Name of the DXF file.
Names of the generic parameters that can be adjusted for the DXF input. Default: []
Values of the generic parameters that can be adjusted for the DXF input. Default: []
Status information.
Read XLD contours from a DXF file.
Modified instance represents: Read XLD contours.
Name of the DXF file.
Names of the generic parameters that can be adjusted for the DXF input. Default: []
Values of the generic parameters that can be adjusted for the DXF input. Default: []
Status information.
Write XLD contours to a file in DXF format.
Instance represents: XLD contours to be written.
Name of the DXF file.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Row coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Row coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Transform the shape of contours or polygons.
Instance represents: Contours or polygons to be transformed.
Type of transformation. Default: "convex"
Transformed contours or polygons, respectively.
Calibrate the radial distortion.
Instance represents: Contours that are available for the calibration.
Width of the images from which the contours were extracted. Default: 640
Height of the images from which the contours were extracted. Default: 480
Threshold for the classification of outliers. Default: 0.05
Seed value for the random number generator. Default: 42
Determines the distortion model. Default: "division"
Determines how the distortion center will be estimated. Default: "variable"
Controls the deviation of the distortion center from the image center; larger values allow larger deviations from the image center; 0 switches the penalty term off. Default: 0.0
Internal camera parameters.
Contours that were used for the calibration.
Transform an XLD contour into the plane z=0 of a world coordinate system.
Instance represents: Input XLD contours to be transformed in image coordinates.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Scale or dimension. Default: "m"
Transformed XLD contours in world coordinates.
Transform an XLD contour into the plane z=0 of a world coordinate system.
Instance represents: Input XLD contours to be transformed in image coordinates.
Internal camera parameters.
3D pose of the world coordinate system in camera coordinates.
Scale or dimension. Default: "m"
Transformed XLD contours in world coordinates.
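A sketch of transforming image-coordinate contours into the plane z=0 of a world coordinate system; the camera parameters (here an 'area_scan_division' model built by tuple concatenation) and the pose values are placeholder assumptions, not calibration results:

```csharp
using HalconDotNet;

static class WorldPlane
{
    // Transform contours from image coordinates into world coordinates (z=0).
    static HObject ToWorldPlane(HObject contours)
    {
        // Placeholder 'area_scan_division' camera parameters:
        // focus, kappa, sx, sy, cx, cy, image width, image height.
        HTuple camParam = new HTuple("area_scan_division");
        camParam = camParam.TupleConcat(
            new HTuple(0.012, -500.0, 7.4e-6, 7.4e-6, 320.0, 240.0));
        camParam = camParam.TupleConcat(new HTuple(640, 480));

        // Placeholder 3D pose of the world coordinate system in camera coordinates.
        HOperatorSet.CreatePose(0.1, 0.1, 0.5, 0.0, 0.0, 0.0,
                                "Rp+T", "gba", "point", out HTuple worldPose);

        HOperatorSet.ContourToWorldPlaneXld(contours, out HObject worldContours,
                                            camParam, worldPose, "m");
        return worldContours;
    }
}
```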
Change the radial distortion of contours.
Instance represents: Original contours.
Internal camera parameter for Contours.
Internal camera parameter for ContoursRectified.
Resulting contours with modified radial distortion.
Calculate the minimum distance between two contours and the points used for the calculation.
Instance represents: First input contour.
Second input contour.
Distance calculation mode. Default: "fast_point_to_segment"
Row coordinate of the point on Contour1.
Column coordinate of the point on Contour1.
Row coordinate of the point on Contour2.
Column coordinate of the point on Contour2.
Minimum distance between the two contours.
Calculate the minimum distance between two contours and the points used for the calculation.
Instance represents: First input contour.
Second input contour.
Distance calculation mode. Default: "fast_point_to_segment"
Row coordinate of the point on Contour1.
Column coordinate of the point on Contour1.
Row coordinate of the point on Contour2.
Column coordinate of the point on Contour2.
Minimum distance between the two contours.
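The minimum-distance computation also returns the closest point pair, which the following sketch extracts:

```csharp
using HalconDotNet;

static class ContourDistance
{
    // Minimum distance between two contours plus the closest point pair.
    static double MinDistance(HObject contour1, HObject contour2)
    {
        HOperatorSet.DistanceCcMinPoints(contour1, contour2,
            "fast_point_to_segment",
            out HTuple row1, out HTuple col1,   // point on contour1
            out HTuple row2, out HTuple col2,   // point on contour2
            out HTuple distance);
        return distance.D;
    }
}
```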
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index at which the objects are inserted.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index.
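The iconic-object tuple operations above can be sketched as follows; note that object tuple indices in HALCON are 1-based:

```csharp
using HalconDotNet;

static class ObjectTupleOps
{
    static void Demo(HObject objects, HObject extra)
    {
        // Insert 'extra' at position 2 of the tuple.
        HOperatorSet.InsertObj(objects, extra, out HObject extended, 2);

        // Remove the object at position 1.
        HOperatorSet.RemoveObj(extended, out HObject remaining, 1);

        // Replace the object at position 1 with 'extra'.
        HOperatorSet.ReplaceObj(remaining, extra, out HObject replaced, 1);

        // Select the first object and count the tuple's elements.
        HOperatorSet.SelectObj(replaced, out HObject first, 1);
        HOperatorSet.CountObj(replaced, out HTuple count);
    }
}
```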
Represents an instance of an XLD distance transformation.
Read an XLD distance transform from a file.
Modified instance represents: Handle of the XLD distance transform.
Name of the file.
Create the XLD distance transform.
Modified instance represents: Handle of the XLD distance transform.
Reference contour(s).
Compute the distance to points ('point_to_point') or entire segments ('point_to_segment'). Default: "point_to_point"
Maximum distance of interest. Default: 20.0
Create the XLD distance transform.
Modified instance represents: Handle of the XLD distance transform.
Reference contour(s).
Compute the distance to points ('point_to_point') or entire segments ('point_to_segment'). Default: "point_to_point"
Maximum distance of interest. Default: 20.0
Serialize object to binary stream in HALCON format
Deserialize object from binary stream in HALCON format
Clear an XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Determine the pointwise distance of two contours using an XLD distance transform.
Instance represents: Handle of the XLD distance transform of the reference contour.
Contour(s) for whose points the distances are calculated.
Copy of Contour containing the distances as an attribute.
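A typical use of the XLD distance transform creates it once from the reference contour and then applies it to query contours; this sketch assumes the distances end up in the contour attribute named 'distance':

```csharp
using HalconDotNet;

static class XldDistanceTransform
{
    // Pointwise distances of 'query' to the reference contour. The attribute
    // name "distance" is an assumption based on the operator description.
    static HTuple DistancesToReference(HObject reference, HObject query)
    {
        HOperatorSet.CreateDistanceTransformXld(reference, "point_to_point",
                                                20.0, out HTuple dtID);
        HOperatorSet.ApplyDistanceTransformXld(query, out HObject queryWithDist,
                                               dtID);
        HOperatorSet.GetContourAttribXld(queryWithDist, "distance",
                                         out HTuple distances);
        HOperatorSet.ClearDistanceTransformXld(dtID);
        return distances;
    }
}
```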
Read an XLD distance transform from a file.
Modified instance represents: Handle of the XLD distance transform.
Name of the file.
Deserialize an XLD distance transform.
Modified instance represents: Handle of the deserialized XLD distance transform.
Handle of the serialized XLD distance transform.
Serialize an XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Handle of the serialized XLD distance transform.
Write an XLD distance transform into a file.
Instance represents: Handle of the XLD distance transform.
Name of the file.
Set new parameters for an XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Names of the generic parameters. Default: "mode"
Values of the generic parameters. Default: "point_to_point"
Set new parameters for an XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Names of the generic parameters. Default: "mode"
Values of the generic parameters. Default: "point_to_point"
Get the parameters used to build an XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Names of the generic parameters. Default: "mode"
Values of the generic parameters.
Get the parameters used to build an XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Names of the generic parameters. Default: "mode"
Values of the generic parameters.
Get the reference contour used to build the XLD distance transform.
Instance represents: Handle of the XLD distance transform.
Reference contour.
Create the XLD distance transform.
Modified instance represents: Handle of the XLD distance transform.
Reference contour(s).
Compute the distance to points ('point_to_point') or entire segments ('point_to_segment'). Default: "point_to_point"
Maximum distance of interest. Default: 20.0
Create the XLD distance transform.
Modified instance represents: Handle of the XLD distance transform.
Reference contour(s).
Compute the distance to points ('point_to_point') or entire segments ('point_to_segment'). Default: "point_to_point"
Maximum distance of interest. Default: 20.0
Represents an instance of an XLD extended parallel object(-array).
Create an uninitialized iconic object.
Join modified XLD parallels lying on the same polygon.
Instance represents: Extended XLD parallels.
Maximally extended parallels.
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Transform the shape of contours or polygons.
Instance represents: Contours or polygons to be transformed.
Type of transformation. Default: "convex"
Transformed contours or polygons.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index at which the objects are inserted.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index.
Represents an instance of an XLD modified parallel object(-array).
Create an uninitialized iconic object.
Combine road hypotheses from two resolution levels.
Instance represents: Modified parallels obtained from EdgePolygons.
XLD polygons to be examined.
Extended parallels obtained from EdgePolygons.
Road-center-line polygons to be examined.
Maximum angle between two parallel line segments. Default: 0.523598775598
Maximum angle between two collinear line segments. Default: 0.261799387799
Maximum distance between two parallel line segments. Default: 40
Maximum distance between two collinear line segments. Default: 40
Roadsides found.
Combine road hypotheses from two resolution levels.
Instance represents: Modified parallels obtained from EdgePolygons.
XLD polygons to be examined.
Extended parallels obtained from EdgePolygons.
Road-center-line polygons to be examined.
Maximum angle between two parallel line segments. Default: 0.523598775598
Maximum angle between two collinear line segments. Default: 0.261799387799
Maximum distance between two parallel line segments. Default: 40
Maximum distance between two collinear line segments. Default: 40
Roadsides found.
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Transform the shape of contours or polygons.
Instance represents: Contours or polygons to be transformed.
Type of transformation. Default: "convex"
Transformed contours or polygons.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index at which the objects are inserted.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index.
Represents an instance of an XLD parallel object(-array).
Create an uninitialized iconic object.
Extract parallel XLD polygons enclosing a homogeneous area.
Instance represents: Input XLD parallels.
Corresponding gray value image.
Extended XLD parallels.
Minimum quality factor (measure of parallelism). Default: 0.4
Minimum mean gray value. Default: 160
Maximum mean gray value. Default: 220
Maximum allowed standard deviation. Default: 10.0
Modified XLD parallels.
Extract parallel XLD polygons enclosing a homogeneous area.
Instance represents: Input XLD parallels.
Corresponding gray value image.
Extended XLD parallels.
Minimum quality factor (measure of parallelism). Default: 0.4
Minimum mean gray value. Default: 160
Maximum mean gray value. Default: 220
Maximum allowed standard deviation. Default: 10.0
Modified XLD parallels.
Return information about the gray values of the area enclosed by XLD parallels.
Instance represents: Input XLD parallels.
Corresponding gray value image.
Minimum quality factor.
Maximum quality factor.
Minimum mean gray value.
Maximum mean gray value.
Minimum standard deviation.
Maximum standard deviation.
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Transform the shape of contours or polygons.
Instance represents: Contours or polygons to be transformed.
Type of transformation. Default: "convex"
Transformed contours or polygons.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index to insert objects.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index.
Represents an instance of an XLD polygon object(-array).
Create an uninitialized iconic object.
Compute the union of closed polygons.
Instance represents: Polygons enclosing the first region.
Polygons enclosing the second region.
Polygons enclosing the union.
Compute the symmetric difference of closed polygons.
Instance represents: Polygons enclosing the first region.
Polygons enclosing the second region.
Polygons enclosing the symmetric difference.
Compute the difference of closed polygons.
Instance represents: Polygons enclosing the region from which the second region is subtracted.
Polygons enclosing the region that is subtracted from the first region.
Polygons enclosing the difference.
Intersect closed polygons.
Instance represents: Polygons enclosing the first region to be intersected.
Polygons enclosing the second region to be intersected.
Polygons enclosing the intersection.
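The four boolean operations on closed polygons can be sketched together:

```csharp
using HalconDotNet;

static class PolygonBooleans
{
    static void Demo(HObject polygons1, HObject polygons2)
    {
        // Union, intersection, difference (polygons1 minus polygons2),
        // and symmetric difference of the enclosed regions.
        HOperatorSet.Union2ClosedPolygonsXld(polygons1, polygons2,
                                             out HObject union);
        HOperatorSet.IntersectionClosedPolygonsXld(polygons1, polygons2,
                                                   out HObject intersection);
        HOperatorSet.DifferenceClosedPolygonsXld(polygons1, polygons2,
                                                 out HObject difference);
        HOperatorSet.SymmDifferenceClosedPolygonsXld(polygons1, polygons2,
                                                     out HObject symmDiff);
    }
}
```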
Read XLD polygons from a file in ARC/INFO generate format.
Modified instance represents: Read XLD polygons.
Name of the ARC/INFO file.
Write XLD polygons to a file in ARC/INFO generate format.
Instance represents: XLD polygons to be written.
Name of the ARC/INFO file.
Combine road hypotheses from two resolution levels.
Instance represents: XLD polygons to be examined.
Modified parallels obtained from EdgePolygons.
Extended parallels obtained from EdgePolygons.
Road-center-line polygons to be examined.
Maximum angle between two parallel line segments. Default: 0.523598775598
Maximum angle between two collinear line segments. Default: 0.261799387799
Maximum distance between two parallel line segments. Default: 40
Maximum distance between two collinear line segments. Default: 40
Roadsides found.
Combine road hypotheses from two resolution levels.
Instance represents: XLD polygons to be examined.
Modified parallels obtained from EdgePolygons.
Extended parallels obtained from EdgePolygons.
Road-center-line polygons to be examined.
Maximum angle between two parallel line segments. Default: 0.523598775598
Maximum angle between two collinear line segments. Default: 0.261799387799
Maximum distance between two parallel line segments. Default: 40
Maximum distance between two collinear line segments. Default: 40
Roadsides found.
Extract parallel XLD polygons.
Instance represents: Input polygons.
Minimum length of the individual polygon segments. Default: 10.0
Maximum distance between the polygon segments. Default: 30.0
Maximum angle difference of the polygon segments. Default: 0.15
Should adjacent parallel relations be merged? Default: "true"
Parallel polygons.
Extract parallel XLD polygons.
Instance represents: Input polygons.
Minimum length of the individual polygon segments. Default: 10.0
Maximum distance between the polygon segments. Default: 30.0
Maximum angle difference of the polygon segments. Default: 0.15
Should adjacent parallel relations be merged? Default: "true"
Parallel polygons.
Return an XLD polygon's data (as lines).
Instance represents: Input XLD polygons.
Row coordinates of the lines' start points.
Column coordinates of the lines' start points.
Row coordinates of the lines' end points.
Column coordinates of the lines' end points.
Lengths of the line segments.
Angles of the line segments.
Return an XLD polygon's data.
Instance represents: Input XLD polygon.
Row coordinates of the polygons' points.
Column coordinates of the polygons' points.
Lengths of the line segments.
Angles of the line segments.
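Reading back a polygon's point data usually follows a polygon approximation of the contours; the approximation parameters below are illustrative assumptions:

```csharp
using HalconDotNet;

static class PolygonData
{
    static void Demo(HObject contours)
    {
        // Ramer (Douglas-Peucker) approximation with a 2-pixel tolerance.
        HOperatorSet.GenPolygonsXld(contours, out HObject polygons,
                                    "ramer", 2.0);

        // GetPolygonXld expects a single polygon, so select one first.
        HOperatorSet.SelectObj(polygons, out HObject polygon, 1);
        HOperatorSet.GetPolygonXld(polygon, out HTuple row, out HTuple col,
                                   out HTuple length, out HTuple phi);
        // row/col: polygon points; length/phi: segment lengths and angles.
    }
}
```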
Split XLD contours at dominant points.
Instance represents: Polygons for which the corresponding contours are to be split.
Mode for the splitting of the contours. Default: "polygon"
Weight for the sensitiveness. Default: 1
Width of the smoothing mask. Default: 5
Split contours.
Apply an arbitrary affine transformation to XLD polygons.
Instance represents: Input XLD polygons.
Input transformation matrix.
Transformed XLD polygons.
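An affine transformation of polygons is typically built up from a 2D homogeneous matrix; here a rotation by 30 degrees around an arbitrary fixed point (the angle and point are illustrative):

```csharp
using System;
using HalconDotNet;

static class PolygonTransform
{
    // Rotate polygons by 30 degrees around the fixed point (240, 320).
    static HObject Rotate(HObject polygons)
    {
        HOperatorSet.HomMat2dIdentity(out HTuple homMat2D);
        HOperatorSet.HomMat2dRotate(homMat2D, Math.PI / 6, 240.0, 320.0,
                                    out HTuple homMat2DRotate);
        HOperatorSet.AffineTransPolygonXld(polygons, out HObject transformed,
                                           homMat2DRotate);
        return transformed;
    }
}
```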
Calculate the difference of two object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Objects from Objects that are not part of ObjectsSub.
Copy an iconic object in the HALCON database.
Instance represents: Objects to be copied.
Starting index of the objects to be copied. Default: 1
Number of objects to be copied or -1. Default: 1
Copied objects.
Concatenate two iconic object tuples.
Instance represents: Object tuple 1.
Object tuple 2.
Concatenated objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Select objects from an object tuple.
Instance represents: Input objects.
Indices of the objects to be selected. Default: 1
Selected objects.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare iconic objects regarding equality.
Instance represents: Reference objects.
Test objects.
Maximum allowed difference between two gray values, coordinates, etc. Default: 0.0
Boolean result value.
Compare image objects regarding equality.
Instance represents: Test objects.
Comparative objects.
Boolean result value.
Create a region from an XLD polygon.
Instance represents: Input polygon(s).
Fill mode of the region(s). Default: "filled"
Created region(s).
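Converting polygons to regions with the two fill modes can be sketched as:

```csharp
using HalconDotNet;

static class PolygonToRegion
{
    static void Demo(HObject polygons)
    {
        // "filled" produces filled regions, "margin" only the boundary pixels.
        HOperatorSet.GenRegionPolygonXld(polygons, out HObject filled, "filled");
        HOperatorSet.GenRegionPolygonXld(polygons, out HObject margin, "margin");
    }
}
```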
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Compute the mapping between the distorted image and the rectified image based upon the points of a regular grid.
Instance represents: Input contours.
Input image.
Output contours.
Distance of the grid points in the rectified image.
Rotation to be applied to the point grid. Default: "auto"
Row coordinates of the grid points.
Column coordinates of the grid points.
Type of mapping. Default: "bilinear"
Image containing the mapping data.
Read XLD polygons from a DXF file.
Modified instance represents: Read XLD polygons.
Name of the DXF file.
Names of the generic parameters that can be adjusted for the DXF input. Default: []
Values of the generic parameters that can be adjusted for the DXF input. Default: []
Status information.
Read XLD polygons from a DXF file.
Modified instance represents: Read XLD polygons.
Name of the DXF file.
Names of the generic parameters that can be adjusted for the DXF input. Default: []
Values of the generic parameters that can be adjusted for the DXF input. Default: []
Status information.
Write XLD polygons to a file in DXF format.
Instance represents: XLD polygons to be written.
Name of the DXF file.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Choose all contours or polygons containing a given point.
Instance represents: Contours or polygons to be examined.
Line coordinate of the test point. Default: 100.0
Column coordinate of the test point. Default: 100.0
All contours or polygons containing the test point.
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Select contours or polygons using shape features.
Instance represents: Contours or polygons to be examined.
Shape features to be checked. Default: "area"
Operation type between the individual features. Default: "and"
Lower limits of the features or 'min'. Default: 150.0
Upper limits of the features or 'max'. Default: 99999.0
Contours or polygons fulfilling the condition(s).
Transform the shape of contours or polygons.
Instance represents: Contours or polygons to be transformed.
Type of transformation. Default: "convex"
Transformed contours or polygons.
Insert objects into an iconic object tuple.
Instance represents: Input object tuple.
Object tuple to insert.
Index at which the objects are inserted.
Extended object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Remove objects from an iconic object tuple.
Instance represents: Input object tuple.
Indices of the objects to be removed.
Remaining object tuple.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Replaces one or more elements of an iconic object tuple.
Instance represents: Iconic Input Object.
Element(s) to replace.
Index/Indices of elements to be replaced.
Tuple with replaced elements.
Returns the iconic object(s) at the specified index.
This class manages all communication with the HALCON library.
True when running on a 64-bit platform.
True when running on a Windows platform.
Setting DoLicenseError(false) disables the license error dialog and
application termination. Instead, an exception is raised.
Setting HLIUseSpinLock(false) before calling the first operator
will cause HALCON to use mutex synchronization instead of spin locks.
This is usually less efficient but may prevent problems if a large
number of threads with differing priorities is used.
Setting HLIStartUpThreadPool(false) before calling the first
operator will disable the thread pool of HALCON
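The spin-lock versus mutex trade-off mentioned above can be illustrated with a self-contained sketch (Python used purely for illustration; the SpinLock class is an assumption, not HALCON's implementation). A spin lock busy-waits until the flag becomes free, which is fast for short critical sections but wastes CPU under heavy contention, whereas a mutex blocks the waiting thread:

```python
import threading

class SpinLock:
    """Minimal spin-lock sketch: busy-waits (spins) instead of
    blocking the thread."""

    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Non-blocking acquire attempts in a tight loop: this is the spin.
        while not self._flag.acquire(False):
            pass

    def release(self):
        self._flag.release()

lock = SpinLock()
counter = 0

def worker():
    global counter
    for _ in range(5000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The spin lock serialized the increments, so counter == 10000.
```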
Aborts a draw_* operator as a right-click would (Windows only)
Returns whether the HALCON character encoding is set to UTF-8 or to the locale encoding.
Unpin the tuple's data, but first check whether the tuple is null. Note that
PinTuple happens in HTuple as a side effect of the store operation.
Releases the resources used by this tool object
This exception is thrown whenever an error occurs during tuple access.
Enumeration of tuple types, as returned by HTuple.Type
Tuple is empty
Tuple is represented by an array of System.Int32
Tuple is represented by an array of System.Int64
Tuple is represented by an array of System.Double
Tuple is represented by an array of strings
Tuple is represented by an array of HHandle values
Tuple is represented by an object array of boxed values.
Get the value of this element as a 32-bit integer.
The element must represent integer data (32-bit or 64-bit).
Get the value of these elements as 32-bit integers.
The elements must represent integer data (32-bit or 64-bit).
Get the value of this element as a 64-bit integer.
The element must represent integer data (32-bit or 64-bit).
Get the value of these elements as 64-bit integers.
The elements must represent integer data (32-bit or 64-bit).
Get the value of this element as a double.
The element must represent numeric data.
Get the value of these elements as doubles.
The elements must represent numeric data.
Get the value of this element as a string.
The element must represent a string value.
Get the value of these elements as strings.
The elements must represent string values.
Get the value of this element as a handle.
The element must represent a handle value.
Get the value of these elements as handles.
The elements must represent handle values.
Get the value of this element as an object.
The element may be of any type. Numeric data will be boxed.
Get the value of these elements as objects.
The elements may be of any type. Numeric data will be boxed.
Get the value of this element as a float.
The element must represent numeric data.
Get the value of these elements as floats.
The elements must represent numeric data.
Get the value of this element as an IntPtr.
The element must represent an integer matching IntPtr.Size.
Get the value of these elements as IntPtrs.
The elements must represent integers matching IntPtr.Size.
Get the data type of this element
Get the length of this element
This exception is thrown whenever an error occurs during
vector operations
The HALCON vector classes are intended to support the export of
HDevelop code that uses vectors, and to pass vector arguments to
procedures that use vector parameters. They are not intended to be
used as generic container classes in user code. For this purpose,
consider using standard container classes such as List<T>.
Also note that HVector is abstract; you can only create instances
of HTupleVector or HObjectVector.
Read access to subvector at specified index. In contrast
to the index operator, an exception will be raised if index
is out of range. A reference to the internal subvector is
returned and needs to be cloned for independent manipulation.
Returns true if vector has same dimension, lengths, and elements
Concatenate two vectors, creating new vector
Append vector to this vector
Insert vector at specified index
Remove element at specified index from this vector
Remove all elements from this vector
Clear vector and dispose elements (if necessary). When called
on a vector with dimension > 0, the effect is identical to Clear
Provides a simple string representation of the vector,
which is mainly useful for debug outputs.
Access to subvector at specified index. The vector will be
enlarged to accommodate index, even in read access. For read
access without enlargement use the member function At(index).
A reference to the internal subvector is returned and needs
to be cloned for independent manipulation.
The HALCON vector classes are intended to support the export of
HDevelop code that uses vectors, and to pass vector arguments to
procedures that use vector parameters. They are not intended to be
used as generic container classes in user code. For this purpose,
consider using standard container classes such as List<T>.
Create empty vector of specified dimension. In the case of
dimension 0, a leaf vector for an empty tuple is created
Create leaf vector of dimension 0 for the specified tuple
Create 1-dimensional vector by splitting input tuple into
blocks of fixed size (except possibly for the last block).
This corresponds to convert_tuple_to_vector_1d in HDevelop.
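The fixed-size splitting described above (the counterpart of convert_tuple_to_vector_1d) can be sketched independently of HALCON; this Python helper and its name are assumptions for illustration only:

```python
def split_into_blocks(values, block_size):
    """Split a flat sequence into consecutive blocks of block_size
    elements each; the last block may be shorter."""
    if block_size < 1:
        raise ValueError("block_size must be >= 1")
    return [list(values[i:i + block_size])
            for i in range(0, len(values), block_size)]

# A 7-element tuple split into blocks of 3 yields lengths 3, 3, 1.
blocks = split_into_blocks([1, 2, 3, 4, 5, 6, 7], 3)
# blocks == [[1, 2, 3], [4, 5, 6], [7]]
```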
Create copy of tuple vector
Read access to subvector at specified index. An exception
will be raised if index is out of range. The returned data
is a copy and may be stored safely.
Returns true if vector has same dimension, lengths, and elements
Concatenate two vectors, creating new vector
Append vector to this vector
Insert vector at specified index
Remove element at specified index from this vector
Remove all elements from this vector
Create an independent copy of this vector
Concatenates all tuples stored in the vector
Provides a simple string representation of the vector,
which is mainly useful for debug outputs.
Access to the tuple value for leaf vectors (dimension 0)
Access to subvector at specified index. The vector will be
enlarged to accommodate index, even in read access. The internal
reference is returned to allow modifications of vector state. For
read access, preferably use the member function At(index).
The HALCON vector classes are intended to support the export of
HDevelop code that uses vectors, and to pass vector arguments to
procedures that use vector parameters. They are not intended to be
used as generic container classes in user code. For this purpose,
consider using standard container classes such as List<T>.
Create empty vector of specified dimension. In the case of
dimension 0, a leaf vector for an empty object is created
Create leaf vector of dimension 0 for the specified object
Create copy of object vector
Read access to subvector at specified index. An exception
will be raised if index is out of range. The returned data
is a copy and may be stored safely.
Returns true if vector has same dimension, lengths, and elements
Concatenate two vectors, creating new vector
Append vector to this vector
Insert vector at specified index
Remove element at specified index from this vector
Remove all elements from this vector
Create an independent copy of this vector
Provides a simple string representation of the vector,
which is mainly useful for debug outputs.
Access to the object value for leaf vectors (dimension 0).
Ownership of the object resides with the object vector, and it will
be disposed when the vector is disposed. Copy the object
to obtain an instance that survives disposal of the vector.
When storing an object in the vector, it is
copied automatically.
Access to subvector at specified index. The vector will be
enlarged to accommodate index, even in read access. The internal
reference is returned to allow modifications of vector state. For
read access, preferably use the member function At(index).
Summary description for HalconWindowLayoutDialog.
Required designer variable.
Clean up any resources being used.
Required method for Designer support - do not modify
the contents of this method with the code editor.
Provides a HALCON window for your Windows Forms application
Required designer variable.
Clean up any resources being used.
Required method for Designer support - do not modify
the contents of this method with the code editor.
Adapt ImagePart to show the full image.
Size of the HALCON window in pixels.
Without border, this will be identical to the control size.
This rectangle specifies the image part to be displayed.
The method SetFullImagePart() will adapt this property to
show the full image.
Width of optional border in pixels
Color of optional border around window
Occurs after the HALCON window has been initialized
Under Mono/Linux, the HALCON window cannot be initialized
before the Form is visible. Therefore, accessing the window
in the Form's Load event is not portable.
Occurs when the mouse is moved over the HALCON window. Note that
delta is meaningless here.
Occurs when a button is pressed over the HALCON window. Note that
delta is meaningless here.
Occurs when a button is released over the HALCON window. Note that
delta is meaningless here.
Occurs when the wheel is used over the HALCON window. Note that
button is meaningless here.
Provides data for the HMouseUp, HMouseDown, and HMouseMove events.
Gets which mouse button was pressed.
Gets the number of times the mouse button was pressed and released.
Gets the column coordinate of a mouse click.
Gets the row coordinate of a mouse click.
Gets a signed count of the number of detents the mouse wheel
has rotated. A detent is one notch of the mouse wheel.
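On Windows, the raw wheel value delivered by the OS is a multiple of 120 (WHEEL_DELTA) per notch, so a signed detent count can be recovered as in this sketch (Python for illustration; the helper name is an assumption):

```python
WHEEL_DELTA = 120  # standard Windows raw delta for one wheel notch

def detents(raw_delta):
    """Convert a raw mouse-wheel delta into a signed detent count,
    truncating toward zero."""
    sign = 1 if raw_delta >= 0 else -1
    return sign * (abs(raw_delta) // WHEEL_DELTA)

# Two notches forward, one notch backward:
forward = detents(240)    # 2
backward = detents(-120)  # -1
```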
Represents the method that will handle the HMouseDown, HMouseUp,
or HMouseMove event of a HWindowControl.
Represents the method that will handle the InitWindow event
of a HWindowControl
Adapt ImagePart to show the full image. If HKeepAspectRatio is on,
the contents of the HALCON window are rescaled while keeping the aspect
ratio. Otherwise, the HALCON window contents are rescaled to fill up
the HSmartWindowControl.
Force the Window Control to be repainted in a thread-safe manner.
Unlike most frameworks, WinForms does not let events bubble up the
control hierarchy, so the event chain must be coded by hand.
Redirects relevant events up to the containing Form.
Reference to the containing Control.
Clean up event processing.
Utility method that converts HALCON images into System.Drawing.Image
objects, which are used to display images in window controls.
To allow both interacting with drawing objects and zooming or
dragging the window contents, a mechanism is needed to let the
user decide which action to take. Currently, holding the left
Shift key enables interaction with the drawing objects;
otherwise, the default mode is active.
True if the user is currently pressing the left shift key.
Translates native encoding of mouse buttons to HALCON encoding
(see get_mposition).
Shifts the window contents by (dx, dy) pixels.
UserControls under Windows Forms do not support the mouse wheel event.
As a workaround, you can set the MouseWheel event of your form to
call this method.
Note that the Visual Studio Designer does not show this event,
because UserControls do not support this type of event.
Hence, you need to add it manually to the initialization code of your
Windows Form and set it to call the HSmartWindowControl_MouseWheel
method of the HALCON Window Control.
Required designer variable.
Clean up any resources being used.
true if managed resources should be disposed; otherwise, false.
Required method for Designer support - do not modify
the contents of this method with the code editor.
Occurs when the mouse is moved over the HALCON window. Note that
delta is meaningless here.
Occurs when a button is pressed over the HALCON window. Note that
delta is meaningless here.
Occurs when a button is released over the HALCON window. Note that
delta is meaningless here.
Occurs when a button is double-clicked over a HALCON window. Note
that delta is meaningless here.
Occurs when the wheel is used over a HALCON window while it has
focus. Note that button is meaningless here.
Occurs after the HALCON window has been initialized
Occurs when an internal error in the HSmartWindowControl happens.
Reliable way to check whether we are in designer mode.
Size of the HALCON window in pixels.
Initial part of the HALCON window.
In some situations (such as a missing license at runtime), internal
exceptions may be thrown that the user has no way of catching.
This callback allows the user to react to such runtime errors.
Modifier to manipulate drawing objects
Manipulate drawing objects without a modifier
Shift key must be pressed to modify drawing objects
Ctrl key must be pressed to modify drawing objects
Alt key must be pressed to modify drawing objects
Mouse wheel behavior
No effect on the contents
Moving the mouse wheel forward zooms in on the contents
Moving the mouse wheel backward zooms out on the contents