Figure 18. Test Example Diagram
In example (a), the Hough transform threshold T is set to 30. The left side of the edge-detection image contains many interfering pixels, so the lines detected by the Hough transform include many that do not correspond to real edges. The original k-means method clusters the Hough transform results directly and uses the cluster centroids as the basis for line fitting. For example (a), our method takes m = 1, i.e., one standard deviation. After clustering, an adaptive threshold is computed; lines within the threshold are kept as reliable, and the cluster centroid is then recomputed from these reliable lines to replace the original k-means centroid. The detection results show intuitively that, in areas with heavy interference, the line fitted by our method is more accurate than the line fitted by the original k-means method.
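The refinement step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the per-axis m·σ outlier rule on (ρ, θ) parameters and the toy cluster values are assumptions made for illustration.

```python
import numpy as np

def refine_centroid(lines, m=1.0):
    """Refine a cluster of Hough lines (rho, theta) by discarding outliers.

    Keeps only lines within m standard deviations of the cluster mean on
    each parameter axis, then recomputes the centroid from the survivors.
    """
    lines = np.asarray(lines, dtype=float)   # shape (n, 2): (rho, theta)
    mean = lines.mean(axis=0)
    std = lines.std(axis=0)
    # adaptive threshold: keep lines within m*std of the mean on both axes
    mask = np.all(np.abs(lines - mean) <= m * std + 1e-9, axis=1)
    reliable = lines[mask]
    if len(reliable) == 0:        # fall back to the plain k-means centroid
        return mean
    return reliable.mean(axis=0)

# toy cluster: four consistent lines and one interference line
cluster = [(100.2, 0.01), (99.8, 0.00), (100.1, 0.02), (99.9, 0.01), (140.0, 0.30)]
print(refine_centroid(cluster, m=1.0))
```

The interference line at (140.0, 0.30) pulls the plain centroid to ρ = 108, while the refined centroid recovers ρ = 100 from the four consistent lines.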
In example (b), there are even more interfering pixels on the left side of the edge-detection image, making the noise in the edge image clearly visible. With this further increase in interference, the error of the original k-means clustering method grows until detection fails completely. Under the same interference conditions, our method can still detect the required straight lines.
In example (c), the edge-detection result shows an interfering edge at some distance from the crane grab boom. Here, too, the original k-means result exhibits a significant error.
In example (d), there are no other interfering pixels around the four edges of the crane grab boom in the edge-detection image. In this situation, the detection performance of our method is essentially the same as that of the original k-means clustering method.
These four examples demonstrate that, for edge-detection images with different levels of interference, our method outperforms the original k-means clustering method, with higher accuracy and robustness.
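For reference, the baseline both methods build on can be sketched as a plain k-means clustering over the (ρ, θ) points returned by the Hough transform. This is a minimal numpy sketch under assumptions: a deterministic initialization spread across the ρ range and Euclidean distance in (ρ, θ) space, neither of which is specified by the paper.

```python
import numpy as np

def kmeans_lines(points, k, iters=50):
    """Minimal k-means over Hough (rho, theta) parameter points.

    Deterministic init: spread the initial centers across the rho range.
    """
    pts = np.asarray(points, dtype=float)
    order = np.argsort(pts[:, 0])
    centers = pts[order[np.linspace(0, len(pts) - 1, k).astype(int)]]
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        # assign each (rho, theta) point to its nearest center
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([pts[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

In the original method the returned centers are used directly for line fitting; the proposed method instead re-filters each cluster before recomputing its centroid.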
3.3 Error analysis
The following compares the line-fitting errors of the algorithm in this paper and the original k-means fitting method when the Hough transform threshold is set to 25, 30, 35, and 40. Four noisy edge-detection images were selected for verification, as shown in Figure 19. Figure 19(a) shows that, for the same edge-detection image, the error of both algorithms decreases as the Hough transform threshold increases. At lower thresholds, the line-fitting error of the original k-means algorithm is significantly greater than that of our algorithm; at higher thresholds, the errors of the two algorithms are close. Even under low Hough-threshold conditions, the proposed algorithm has a smaller error and a clear advantage, achieving very small errors even on noisy edge-detection images. All four images show this pattern.
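The error curves in Figure 19 can be reproduced in miniature with a simple error measure. This sketch is hypothetical: the paper does not define its error metric, so the mean vertical deviation between a fitted line and a ground-truth line (both as y = a·x + b) and the threshold/fit pairs below are assumptions chosen only to show the decreasing trend.

```python
import numpy as np

def line_error(est, truth, xs):
    """Mean absolute vertical deviation between two lines y = a*x + b,
    sampled at the given x positions (a stand-in for the fitting error
    plotted against the Hough threshold in Figure 19)."""
    (a1, b1), (a2, b2) = est, truth
    return np.mean(np.abs((a1 - a2) * xs + (b1 - b2)))

xs = np.linspace(0, 100, 101)
truth = (0.5, 10.0)
# hypothetical fits at increasing Hough thresholds: the error shrinks
for T, fit in [(25, (0.62, 14.0)), (30, (0.55, 11.5)),
               (35, (0.51, 10.4)), (40, (0.50, 10.1))]:
    print(T, round(line_error(fit, truth, xs), 2))
```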
To verify the effectiveness of the proposed method under different lighting conditions, a selection of line-detection results is shown in Figure 20. The images in group 20(a) show line detection under outdoor natural lighting: the visible shadow has little impact on the results, causing only a slight deviation, and the lines of the boom contour are detected normally. Group 20(b) shows line detection under indoor lighting, where the task is completed effectively. Group 20(c) shows line detection under dim fill-light conditions: the corresponding lines can be detected, but because the light is dim, detection may occasionally fail. Group 20(d) shows line detection under bright fill-light conditions; with sufficient fill light, the task is completed effectively. Group 20(e) shows line detection under strong supplementary lighting. Under this condition the strong light clearly highlights the target object compared with dimmer fill light, making edge detection more accurate; the overall detection effect is the best.
The crane grab boom was identified under different light-source conditions, with 70 test images per condition. The recognition accuracy of our algorithm is shown in Table 1. An average error within the interval [0, 10] pixels counts as a successful recognition; an average error in (10, ∞) counts as a failure. Under dim lighting, average detection errors within [0, 2] pixels account for 90.0% of images, and the recognition success rate is 92.9%. Under strong supplementary lighting the success rate is highest: 97.1% of images have an average detection error within 0–2 pixels, and the recognition accuracy is 98.6%.
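The two statistics reported in Table 1 can be computed from per-image average errors as follows. This is a minimal sketch assuming the success and tight-error cutoffs stated above (10 px and 2 px); the sample error list is invented for illustration, not data from the paper.

```python
import numpy as np

def summarize(errors, success_limit=10.0, tight_limit=2.0):
    """Summarize per-image average errors (pixels) as in Table 1:
    errors in [0, success_limit] count as successful recognitions, and
    the share of errors in [0, tight_limit] is reported separately."""
    e = np.asarray(errors, dtype=float)
    success_rate = np.mean(e <= success_limit)
    tight_share = np.mean(e <= tight_limit)
    return success_rate, tight_share

# hypothetical batch of 10 per-image average errors (pixels)
errs = [0.5, 1.2, 0.8, 1.9, 2.5, 0.3, 1.1, 0.7, 12.0, 1.4]
rate, tight = summarize(errs)
print(f"success: {rate:.1%}, within 2 px: {tight:.1%}")
```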