
High-Speed Maritime Object Detection Scheme for the Protection of the Aid to Navigation

  • Lee, Hyochan (Smart Network Research Center, Korea Electronics Technology Institute) ;
  • Song, Hyunhak (Smart Network Research Center, Korea Electronics Technology Institute) ;
  • Cho, Sungyoon (Smart Network Research Center, Korea Electronics Technology Institute) ;
  • Kwon, Kiwon (Smart Network Research Center, Korea Electronics Technology Institute) ;
  • Park, Sunghyun (Dept. of Information and Communication, Hoseo University) ;
  • Im, Taeho (Dept. of Information and Communication, Hoseo University)
  • Received : 2021.08.19
  • Accepted : 2022.02.05
  • Published : 2022.02.28

Abstract

Buoys used for Aid to Navigation systems are widely used to guide sea paths and are powered by batteries, requiring continuous battery replacement. However, since human labor is required to replace the batteries, workers can be exposed to dangerous situations, including collisions with shipping vessels. In addition, maritime sensors are installed on the route signs, which are often damaged by collisions with small and medium-sized ships, resulting in significant financial loss. In order to prevent these accidents, maritime object detection technology is essential to alert ships approaching buoys. Existing studies apply a number of filters to eliminate noise and to detect objects within the sea image. In this process, most studies access the pixels directly and process the images, but this approach typically takes a long time because of its complexity and its significant computational requirements. In an emergency, a vessel's rapid approach to a buoy must be alarmed in real time to avoid collisions between vessels and route signs, so minimizing computation and speeding up processing are critical. Therefore, we propose Fast Connected Component Labeling (FCCL), which reduces computation to minimize the processing time of filter applications while maintaining the detection performance of existing methods. The results show that the processing speed of the FCCL is close to 30 FPS, approximately 2-5 times faster than the existing methods, while the average detection performance is on par with the existing methods.

Keywords

1. Introduction

AtoN (Aid to Navigation) is a direction sign that guides and determines the sea route for a vessel to navigate [1]. Typically, lighthouses, buoys, beacons, etc. exist on the surface of the sea and convey meaning through features such as color and number, and the navigator decides a route based on these signs. Therefore, they are called sea traffic lights and play a very important role in ship safety, particularly on major routes.

The buoys consist of batteries, sensors, and communication devices, but not many are equipped with surrounding-monitoring systems such as cameras, RADAR (RAdio Detection And Ranging), or LiDAR (Light Detection And Ranging). In the case of NOAA (National Oceanic and Atmospheric Administration) in the U.S., cameras are installed on buoys, but they are only used for recording purposes. The main purpose of a surrounding monitoring system is the safety of the site. Frequent accidents with ships damage buoys, and damaged buoys can collide with other ships at sea and cause further accidents. Also, buoy field work (such as replacing batteries or sensors) can be dangerous for workers if there is no nearby alarm through monitoring. Therefore, sea monitoring is an essential safety function to prevent damage to buoys, loss of lives, etc.

Sea object recognition - part of the function of maritime monitoring - is an intelligent assist technology for autonomous ships [2-4]. A MASS (Maritime Autonomous Surface Ship) automatically recognizes and identifies floating collision risks through a computer rather than relying on human interaction [5-6]. Recent developments in computer vision and artificial intelligence technology enable identifying and distinguishing the nature of objects in images taken with a camera [7-8].

Despite these technological advances, however, few maritime monitoring systems exist in Korea because of the high cost and heavy computational requirements. Buoys are maintained through their own batteries, and the battery is replaced periodically by human intervention. If computationally intensive algorithms are used on buoys, parallel processors such as GPUs (Graphics Processing Units) need to be utilized, which requires high power consumption and frequent battery replacement, making maintenance difficult. Therefore, the surrounding detection algorithm of a maritime monitoring system should run properly without consuming much computational resources.

In this paper, we propose a floating object detection algorithm suitable for monitoring the marine environment. This method assumes that a camera is installed on the buoy to detect the objects. In addition, we propose a method to simplify computation by utilizing maritime features to reduce the computational load on the embedded computers of buoys. The goal is to achieve the same level of detection performance as commonly used existing algorithms while reducing computation.

2. Related work

2.1 Binarization

Horizon line detection at sea is an effective way of detecting items floating in the sea by separating the sea and background areas present in the image [9-11]. In general, horizon detection is achievable by utilizing either the Edge method or the Binary method. Edge methods can clearly separate boundaries but are computationally expensive and suited to complex environments. On the other hand, the Binary method is less accurate but is suitable for simple environments. In the case of a sea environment, we apply the Binary method because there are relatively few image changes compared to land, as shown in Fig. 1. Binary separation includes fixed and flexible threshold methods. A flexible threshold method requires more computation than a fixed method but is advantageous in distinguishing between the ocean and the sky. Therefore, we utilize Otsu's method, a typical flexible threshold algorithm of the binary type, to find areas of interest in the marine environment [12]. The region of interest is found after Otsu's method as follows: first, the background is saved as black (0), and the sea and the ship as white (1), in a memory buffer. Regions of interest are then found sequentially by searching for pixel values corresponding to the sea and the ship along the image's Y-axis. Along the Y-axis, the first pixel value found fixes the top, and the last value found fixes the bottom of the region of interest, each marked by a straight line. By searching for pixels along the X-axis in the same way, the algorithm finds the side lines on both sides of the ship; with these lines set, the region of interest of the ship is finally found.

Fig. 1. Otsu's algorithm in the marine environment
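
As a rough illustration of this step, the sketch below applies Otsu's thresholding and bounds the foreground region, assuming an OpenCV/NumPy environment; the function and variable names are illustrative rather than the authors' implementation, and the threshold polarity may need to be inverted depending on the scene.

```python
import cv2
import numpy as np

def find_sea_roi(gray):
    """Binarize a grayscale marine image with Otsu's method and bound the
    non-background (sea/ship) region, roughly as described above."""
    # Otsu's method selects the threshold automatically; foreground pixels
    # (sea and ship) become white (255) and the background black (0).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    ys, xs = np.nonzero(binary)            # coordinates of white pixels
    if len(ys) == 0:
        return binary, None                # e.g., sky and sea are indistinguishable

    # First/last white rows along the Y-axis give the top/bottom lines,
    # first/last white columns along the X-axis give the side lines.
    top, bottom = int(ys.min()), int(ys.max())
    left, right = int(xs.min()), int(xs.max())
    return binary, (left, top, right, bottom)

# Usage (illustrative):
# gray = cv2.imread("marine.png", cv2.IMREAD_GRAYSCALE)
# binary, roi = find_sea_roi(gray)
```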

2.2 Dilation

First, we need to separate the sea from the sky through the binary method and then find the objects on the sea. If the object is binarized properly, it is simple to detect, as shown in Fig. 2. However, there are many variables for offshore objects, and the binarized object may be fragmented by the weather, the color of the ship, and light reflection, as shown in Fig. 3. In such cases, floating sea objects cannot be detected properly because they are recognized as multiple objects rather than one object.

Fig. 2. An example that the object is properly detected and distinguished

Fig. 3. An example that the object is not properly detected and distinguished

This problem can be solved by applying a dilation algorithm to the corresponding image region [13]. The dilation algorithm performs matrix operations with a cross-like kernel over the pixel region, and as a result one pixel is added around the existing object, as shown in Fig. 4. Fig. 5 shows the application of the dilation algorithm to the results in Fig. 3: from the leftmost figure, the algorithm is applied once, twice, and three times. The more dilation is applied, the more regions are grouped into one object. Thus, the dilation algorithm is efficient in the marine environment and helps to locate objects better.

Fig. 4. The result of the dilation algorithm

Fig. 5. The result of applying the dilation algorithm to the marine object
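
A minimal sketch of this dilation step, assuming the binarized image `binary` from the previous sketch and an OpenCV environment; the cross-shaped kernel and iteration counts mirror the description above.

```python
import cv2

# 3x3 cross-shaped kernel, matching the cross-like kernel described above.
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

# Each iteration grows the white regions by roughly one pixel, so fragments
# of a single ship gradually merge back into one connected blob.
dilated_once   = cv2.dilate(binary, kernel, iterations=1)
dilated_thrice = cv2.dilate(binary, kernel, iterations=3)
```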

2.3 Labeling

Labeling is utilized to locate marine objects after the dilation filters. Labeling identifies which pixels are connected and locates them [14]. As shown in the leftmost image in Fig. 5, if the pixels are separated, they are recognized as multiple areas, not one area, and the object is not accurately located. On the other hand, the rightmost image in Fig. 5 is recognized as one area and the object location is found with a high degree of accuracy. Labeling is performed as follows:

First, the following filters are used to verify the adjacency of pixels, as shown in Fig. 6. From the central pixel P(x, y), the 4-Connected mask checks P(x-1, y) and P(x, y-1), and the 8-Connected mask additionally checks P(x-1, y-1) and P(x+1, y-1) to determine whether the pixels are connected or not.

Fig. 6. Neighborhood filter for Labeling

Typically, the Labeling process is performed twice in total. The first figure of Fig. 7 shows the 1-PASS process performed through an 8-Connected filter, and the second figure shows its result. In the result, there are two objects, but the computer recognizes three. This is because the filter could not check the connectivity at a hole in the middle. Therefore, at the hole, an additional operation is required: recording the fact that B and C are adjacent in the equivalence table. At the end of 1-PASS over the entire image, 2-PASS can replace C with B using the data in the equivalence table and allocate label numbers to the objects. Finally, we can find the location of marine objects by the Labeling process.

Fig. 7. Labeling process of binary connection factors
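
The two-pass procedure above can be sketched as follows; this is a minimal, unoptimized Python illustration (raster scan, 8-connected mask, equivalence table), not the authors' code.

```python
import numpy as np

def two_pass_label(binary):
    """Two-pass connected component labeling with an 8-connected mask and an
    equivalence table, following the 1-PASS / 2-PASS description above.
    `binary` is a 2D array with 0 for background and nonzero for foreground."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = {}                              # equivalence table: label -> representative
    next_label = 1

    def find(a):                             # resolve a label to its representative
        while parent[a] != a:
            a = parent[a]
        return a

    # 1-PASS: assign provisional labels and record equivalences (e.g. that
    # B and C are adjacent) whenever previously labeled neighbours disagree.
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            neigh = []                       # already-visited 8-connected neighbours
            for dy, dx in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx]:
                    neigh.append(labels[ny, nx])
            if not neigh:
                labels[y, x] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                m = min(find(n) for n in neigh)
                labels[y, x] = m
                for n in neigh:
                    parent[find(n)] = m      # merge equivalent labels

    # 2-PASS: replace every provisional label with its representative.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```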

2.4 Literature review

In Paper 1 [Detection and tracking of ships in open sea with a rapidly moving buoy-mounted camera system], it was proposed to detect the horizon using two methods, Binary and Edge, as shown in Fig. 8 [15]. It separates the sea area from the vessel by detecting the horizon and removing the horizon area from the binarized image. Finally, the object is detected through Labeling. This method finds the region of interest well and utilizes CCL (Connected Component Labeling), but as mentioned in Section 2.2, the object is not properly detected in color-sensitive environments affected by weather, light intensity, and ship colors.

Fig. 8. The flow chart of the ship tracking

In Paper 2 [Multiple Ship Detection and Tracking Using Background Registration and Morphological Operations], the dilation algorithm mentioned in Section 2.2 is combined with the CCL algorithm to find sea objects, as shown in Fig. 9 [16]. In other words, the method detects objects by restoring regions that were separated in the marine environment back into one original area. However, this method applies dilation kernels over the whole image and has the disadvantage of increased computation because it requires at least two to three passes depending on the marine environment.

Fig. 9. Ship detection by the dilation algorithm and Labeling the connected factors

Paper 3 [Detection of marine vehicles in images and video of open sea] utilizes the morphological techniques of erosion and dilation, as shown in Fig. 10 [17]. Erosion shrinks the pixel region, and dilation expands it. Erosion and dilation use different filters and are convolved with the entire image. Here, erosion is applied first to eliminate small noise after binarization. Then dilation expands the separated regions of one object and presents them in one connected form. The object is then detected by applying Connected Component Labeling. This minimizes noise to reduce unnecessary effects on ship detection.

Fig. 10. Marine vehicle detection by closing and Labeling the connected factors

While this method can prevent the noise that can occur in existing dilation methods, it is considered inefficient because it requires twice the computation of the conventional dilation methods, and it is more suitable for removing noise in complex environments than in simple environments such as the marine environment.

Paper 4 [Design of Video Pre-processing Algorithm for High-Speed Processing of Maritime Object Detection System and Deep Learning based Integrated System] proposes multi-connected-element Labeling to increase the performance of the previously proposed methods and to reduce computation, as shown in Fig. 11 [18]. As shown in Fig. 11, multi-connected-element Labeling is a two-time iterative process of Connected Component Labeling. Through the first process, an object region in the input image is detected, the region is expanded by a specific margin using information such as pixel coordinates and the size of the region, and the pixels are filled with the object value (i.e., white). The second step detects one object connected through this dilation. This does not perform multiple dilation algorithms and does not process unnecessary regions, which can simultaneously extend the region while eliminating noise within the detection region. However, this method also performs the connected-component Labeling filters twice, and there are overlapping operations, resulting in unnecessary computation. The large amount of computation required determines the performance and power consumption of embedded boards.

Fig. 11. Multi-Labeling the connected factors

Therefore, the aim of this paper is to reduce unnecessary computation and procedures and to process images at a faster rate, while maintaining the object detection performance of the existing algorithms. In this paper, the existing algorithms are categorized into four groups: 1) CCL (Connected Component Labeling), 2) DA (Dilation Algorithm), 3) CA (Closing Algorithm), and 4) ML (Multi Labeling). For 'Aid to Navigation' safety, the detection performance needs to be maintained, and the software must run on an embedded board so that it can be installed on buoys. For practical operation and commercialization, fast processing must be achieved with low power and computation.

2.5 Application

The high-speed object detection proposed in this paper is also very useful for marine object recognition. A CNN (Convolutional Neural Network) plays the same role as human eyes in recent deep learning-based image recognition technologies, and it can readily recognize various marine floating objects (e.g., ships and buoys) [19], as shown in Fig. 12. A CNN learns weights from data of the desired objects and can then recognize them through its neural network.

[20] demonstrated that image pre-processing before input to the deep learning neural network enables efficient object detection. As a similar study, [18] applies deep learning only to object candidates detected by image pre-processing, as shown in Fig. 13. It is important to find object candidates rapidly and accurately and input them to the deep learning system to improve the system performance.

Fig. 12. Application of CNN-based marine object recognition

Fig. 13. Object detection before the deep learning-based object recognition

Therefore, Chapter 3 of this paper proposes Fast Connected Component Labeling as a preprocessing method which can find the candidates more rapidly and accurately in order to improve deep learning-based marine object recognition.

3. Proposed Fast Connected Component Labeling

The problem with the existing methods is that CCL alone does not detect objects properly due to external maritime variables. Hence, it is more reliable to perform CCL after preprocessing with the DA method. However, as mentioned in Section 2.2, it is much more reliable to perform the DA several times rather than only once. Therefore, Fig. 14 shows the ideal marine object detection process.

Fig. 14. Overall process of marine object detection

However, since this method requires that the DA and Labeling be performed several times, there are many redundant procedures and unnecessary computational elements. CA and ML are also inefficient in terms of computation because they perform these steps several times. In detail, when an image is entered, binarization is performed through Otsu's algorithm. The DA is then performed approximately N times on the binary image. Based on the result of applying the DA to the marine environment images in Section 2.2, three applications of the DA are robust to the environment, so we assume three DA iterations in this paper. There is also a way to enlarge the DA kernel significantly and process up to three expansions at a time, but since the kernel becomes larger, the number of computations on the image is identical. After the DA, the 2-PASS process of Labeling is used to locate and detect objects. The reason for using the same features multiple times is to increase the performance of object detection, but this results in higher power consumption, since the computation is performed on the entire image, reducing throughput and increasing computation. Therefore, in this paper, we propose ways to reduce unnecessary procedures and minimize computation. In the above pipeline, operations must be performed on all image pixels for each single function, so all pixels must be accessed and processed at least five times. Processing only once with minimal pixel access can therefore remove unnecessary operations, and we propose a high-speed method which excludes redundant operations. The proposed algorithm is described in four stages and is named the FCCL (Fast Connected Component Labeling) in this paper.
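
Before detailing the FCCL stages, the baseline flow of Fig. 14 (Otsu, N dilations, then two-pass Labeling) can be sketched as below for later comparison. This assumes an OpenCV environment, and OpenCV's built-in connected-component routine stands in for the 2-PASS labeling; it is an illustrative sketch, not the authors' implementation.

```python
import cv2

def baseline_detect(gray, n_dilate=3):
    """Baseline flow of Fig. 14: Otsu binarization -> N x DA -> labeling."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    dilated = cv2.dilate(binary, kernel, iterations=n_dilate)
    # connectedComponentsWithStats labels the whole image and returns
    # per-label bounding boxes (x, y, width, height) plus areas.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dilated, connectivity=8)
    return stats[1:, :4]                     # skip label 0 (the background)
```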

The first process is shown in Fig. 15. After receiving an image, Otsu's binarization is applied in the same way as in the previous methods. However, unlike the existing methods which perform the DA and then CCL (15-A), the DA is not performed immediately. Instead, the 1-PASS process of Labeling is performed (15-B). The 1-PASS process uses the 8-Connected filter described in the previous chapters. After 1-PASS is performed, each separated object is given a label number. The existing algorithm (CCL) performs 2-PASS at this stage, but in this study the coordinate points of each isolated object are instead recorded in memory. The data recorded in the LUT (Look-Up Table) stores the corner points of each object's pixel coordinates. The corners consist of group 1 (minimum value on the X-axis, minimum value on the Y-axis) and group 2 (maximum value on the X-axis, maximum value on the Y-axis). Thus, in the first process, the information of groups 1 and 2 is stored in the LUT without performing 2-PASS immediately.

Fig. 15. The process of the FCCL (1)
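
A minimal sketch of this first stage, assuming `labels` is the provisional label map produced by a 1-PASS such as the first loop sketched in Section 2.3; in an actual implementation the min/max updates would be folded directly into that single pass so each pixel is touched only once. The names are illustrative.

```python
import numpy as np

def build_lut(labels):
    """For each label, store group 1 = (min x, min y) and
    group 2 = (max x, max y) in a LUT, as described above."""
    lut = {}
    ys, xs = np.nonzero(labels)
    for y, x in zip(ys, xs):
        lab = int(labels[y, x])
        if lab not in lut:
            lut[lab] = [x, y, x, y]          # [min_x, min_y, max_x, max_y]
        else:
            box = lut[lab]
            box[0] = min(box[0], x); box[1] = min(box[1], y)
            box[2] = max(box[2], x); box[3] = max(box[3], y)
    return lut
```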

The second stage processes the values stored in the LUT, as shown in Fig. 16. For the coordinates of the six detected objects, it expands the stored values of group 1 and group 2. The figure only increases the box by 1 pixel to provide a simple example; in practical implementations, the box is expanded by 2-3 pixels (this is referred to as the 'Margin'). The DA process moves group 1 from its existing coordinates to the upper left by the Margin, and group 2 to the lower right by the Margin. The second process ends by updating the expanded data values in the LUT. This method plays an identical role to the DA described in Fig. 14, but the DA operation takes place only in the LUT, as a simple arithmetic operation in memory. Therefore, instead of processing image pixels directly as the existing methods do, the expansion is performed on the stored LUT within memory, enabling high-speed processing while obtaining the effect of the DA.

Fig. 16. The process of the FCCL (2)
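
This in-memory expansion can be sketched as below, assuming the LUT built in the previous stage; only a handful of additions per object replace the pixel-level dilation.

```python
def expand_lut(lut, margin, width, height):
    """Expand every stored box by `margin` pixels inside the LUT
    (group 1 moves up-left, group 2 moves down-right), clipped to the image."""
    for box in lut.values():
        box[0] = max(box[0] - margin, 0)
        box[1] = max(box[1] - margin, 0)
        box[2] = min(box[2] + margin, width - 1)
        box[3] = min(box[3] + margin, height - 1)
    return lut
```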

After the DA process using the LUT, post-processing is required, as shown in Fig. 17. Post-processing refers to the process of merging the regions of detected boxes, which corresponds to the 2-PASS process. (Here, light grey denotes the criterion area and dark grey the comparison area.) For the expanded boxes, there are three detection cases: 1) partial overlap with the criterion area, 2) inclusion in the criterion area, and 3) contact with the criterion area. For these three conditions, the coordinates of the vertex in the lower right corner are changed according to the case, as shown in 17-B and 17-C. Thus, the third process ends by updating the changed points in the LUT. The reason these expanded regions overlap is that binarization has separated adjacent pixels that are highly likely to belong to an identical object. Therefore, we propose a process for integrating them into one object by merging under these three conditions. This process plays almost the same role as 2-PASS in Fig. 14, and it is handled within the LUT (i.e., within memory).

Fig. 17. The process of the FCCL (3)

At the end of post-processing, the object location is detected through the integration of the LUT, as shown in Fig. 18. As shown in 18-A, Label 1 is set as the criterion area and Labels 2 to 6 as the comparison areas, and the algorithm determines whether they can be merged. Merging is handled if the boxes fall within the three conditions, and the coordinates are updated and stored in the final table (18-D) according to the merge conditions. For example, Labels 1 and 3 in 18-A overlap and correspond to condition 1, so they are combined into the criterion area of the table. The criterion area of 18-B (Label 2) is combined with Label 4, which falls under the second condition. Label 5 in 18-C is combined with Label 6, which falls under the third condition. If no merge condition applies within the LUT, the box is recognized as one object and the algorithm moves on to the next label. Since merging integrates entries into the criterion area, the comparison-area entry in the LUT is set to '0' to indicate that the object has been merged. Lastly, the final merged coordinates can be identified in the 18-D LUT, and the image can be boxed and output to the monitoring system, as shown in 18-E.

Fig. 18. The process of the FCCL (4)
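
A minimal sketch of this merging step over the LUT, assuming the expanded boxes from the previous stage; a single sweep is shown for brevity (an implementation may repeat the sweep until no further merges occur), and the paper's '0' marker is represented here by None.

```python
def overlaps(a, b):
    """True if boxes a and b overlap or touch, which covers the three cases
    above (partial overlap, inclusion, and contact)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_lut(lut):
    """Merge overlapping LUT entries into the criterion entry and mark the
    absorbed comparison entry as merged."""
    order = sorted(lut)
    for i, li in enumerate(order):
        if lut[li] is None:
            continue
        for lj in order[i + 1:]:
            if lut[lj] is None:
                continue
            a, b = lut[li], lut[lj]
            if overlaps(a, b):
                # the union of the two boxes replaces the criterion entry
                lut[li] = [min(a[0], b[0]), min(a[1], b[1]),
                           max(a[2], b[2]), max(a[3], b[3])]
                lut[lj] = None
    return [box for box in lut.values() if box is not None]
```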

In summary, the FCCL operates according to the flowchart shown in Fig. 19. The image is entered and binarized in the same way as the existing algorithms (Otsu's method). The existing methods perform the DA and then Labeling, but the FCCL performs Labeling first. Instead, it stores the coordinates of the separated objects in the LUT. Then, the LUT is updated by expanding the stored coordinates inside memory. In addition, post-processing corresponding to 2-PASS is performed, also in the LUT. Three overlap situations can occur after the DA, and merging is performed according to those conditions. In order to perform the conditional merge, the LUT is updated and integrated on its own. From the final coordinates of the merged LUT, the positions of the objects in the image are stored, and the marine objects can be detected by displaying them in the image. The existing algorithms (CA and ML) require a high volume of computation since they access and process the whole image multiple times; however, the FCCL reduces the computation and processes at high speed by pre-identifying isolated object locations, storing them in the LUT, and updating only a portion of the coordinate values in internal memory.

Fig. 19. The flow chart of the FCCL
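
Putting the pieces together, the flow of Fig. 19 could look roughly like the sketch below, reusing the helper sketches above (the full two-pass labeler from Section 2.3 stands in for the 1-PASS step for brevity). This is an illustrative reconstruction, not the authors' implementation.

```python
import cv2

def fccl(gray, margin=3):
    """End-to-end sketch of the FCCL flow: Otsu -> labeling -> LUT ->
    in-memory expansion -> conditional merge -> final boxes."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    labels = two_pass_label(binary)          # FCCL itself needs only the first pass
    lut = build_lut(labels)                  # group 1 / group 2 corner points
    h, w = binary.shape
    lut = expand_lut(lut, margin, w, h)      # DA performed inside the LUT
    return merge_lut(lut)                    # list of (x1, y1, x2, y2) boxes
```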

4. Experiment Results

The experiment analyzes the performance and speed of the FCCL in detecting nearby vessels under the assumption that buoys are monitoring the marine environment. As categorized in Section 2.4, we compare the FCCL experimentally with the existing methods: 1) CCL, 2) DA, 3) CA, and 4) ML. An on-site experiment where cameras are installed on buoys is difficult, so a part of the Singapore Maritime Dataset is used for the experiments [21]. The dataset consists of videos and images, and about 30 representative images are selected for the experiments. The criteria for image selection are: 1) clear/cloudy weather, 2) horizon height, 3) horizon gradient, 4) various ship colors, and 5) sunset. The 30 candidates are representative in terms of weather conditions. The embedded board Raspberry Pi 3B+ is used for the experiment. Although high-speed embedded boards such as NVIDIA's Xavier are available, this experiment aims to achieve nearly 30 FPS (Frames Per Second) on a low-cost embedded board, so this study selects a single processor with no acceleration capabilities. The Raspberry Pi is a low-power board with a quad-core ARM (Advanced RISC Machines) processor of more than 1 GHz and supports multiple operating systems, making it cost-effective for developing embedded software.

First, the performance is measured using the IoU (Intersection over Union), an indicator of the accuracy of ship detection [22]. The IoU divides the overlapping area between the 'Ground-Truth' area (A), which is the actual answer, and the area detected by the computer algorithm (B), by the area of their union (Equation 1). For example, based on Box, an IoU of 1, as shown in Fig. 20, means the regions overlap completely, and 0.5 means approximately two-thirds overlap; 0.5 is used as the threshold value for determining proper detection. The Box refers to the rectangle drawn from the vertices of the detected object's pixels. Segmentation is a method to check whether the pixel locations of the detected area accurately match the actual pixels. This paper conducts comparative experiments by measuring performance through these two methods.

\(I o U=\frac{A \cap B}{A \cup B}=\frac{A \cap B}{A+B-A \cap B}\)       (1)

Fig. 20. Example of IoU Box and Segmentation
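
A minimal sketch of Equation (1) for the Box case, with boxes given as corner coordinates (x1, y1, x2, y2); the names are illustrative.

```python
def box_iou(a, b):
    """Intersection over Union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```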

Table 1 shows the results of applying the existing algorithms (CCL, DA, CA and ML) and the FCCL to the 30 selected marine images. The IoU performance for each image is measured and evaluated for each of the four existing algorithms and the FCCL. The overall mean values of the experimental results show that the DA applied twice, the ML, and the FCCL exceed the threshold of 0.5. The FCCL shows similar or better Box IoU performance compared to the other algorithms.

Table 1. Comparison of Box IoU performance of each detection algorithm

Table 2 shows the results of applying the IoU Segmentation method to the same images. The results show that the best performance is achieved by applying the DA twice. On the other hand, the FCCL does not achieve as good a result in Segmentation. This is because the FCCL trades pixel-level accuracy for increased processing speed.

Table 2. Comparison of Segmentation IoU performance of each detection algorithm

Table 3 shows the FPS, obtained by averaging the time to process one frame of the input image. This study compares the existing algorithms (CCL, DA, CA and ML) with the FCCL, and the experimental results show that the FCCL is several times faster than the other algorithms.

Table 3. The speed of image processing of each detection algorithm

Fig. 21 shows part of the Segmentation experiment: the DA result and some of the shapes detected by the FCCL. While two dilations produce a shape nearly identical to the original image, the image processed by the FCCL is constructed in a cascaded (stair-stepped) shape and is thus detected in an unusual shape. However, the Box performance is almost the same, and the FCCL is faster in terms of processing speed. This can be explained as a principle similar to quantization.

Fig. 21. Part of the Segmentation experiment (Red: DA, Blue: FCCL)

The FCCL shows a processing speed approximately 2 times faster or more than the most similar method, the DA applied twice or three times. It is also able to process approximately 4 to 5 times faster than the CA. As mentioned earlier, the DA and CA improve performance by being applied multiple times, but their processing speed gradually decreases. The processing speed of the FCCL is not much different from that of the CCL, but the CCL's speed is not meaningful because its detection performance does not exceed the threshold. In the case of the ML, the processing speed is relatively slow because, although it finds and expands DA points in advance like the FCCL, it has to perform Labeling twice. The experiments demonstrate that the FCCL, using the LUT method, processes at close to 30 FPS on the Raspberry Pi 3 board, which is faster than the existing algorithms (CCL, DA, CA and ML). Table 4 summarizes the mean values of Tables 1-3. Overall, the FCCL shows the best performance in terms of Box Detection but lower performance in Segmentation.

Table 4. The overall results from Tables 1 to 3.

However, due to the nature of the maritime monitoring system, the FCCL is designed to improve the performance of Box Detection, because it is more important to detect objects roughly and quickly than in great detail. The processing speed is at least twice as fast as the other algorithms. Considering that the speed is improved by software alone, the increase in processing speed is significant. In addition, experiments demonstrated that it is practical and applicable, processing at a speed close to 30 FPS on a small embedded board.

Fig. 22 shows the results of the implementation of the FCCL. The original image and the FCCL result are listed horizontally, and the images selected from 1 to 30 are listed vertically in the figure. There are some exceptions: in images such as image 7, the horizon is not detected by Otsu's binarization, because it is not easy to separate the sky and sea in binary when their colors are almost the same. If no object is found after binarization, we judge that this can be solved by applying a post-processing method such as finding the region with the strongest high-frequency components in the entire image to locate the ship. In addition, there are exception cases where binarization is poor due to reflection from the sun, such as image 12. Several experiments showed that, when incorrect detection occurs due to weather conditions, the detection box accounts for a significant portion of the entire image. Therefore, it may be possible to forcibly remove an abnormally large box area (e.g., about 1/2 of the image or more) and exclude it from the detection targets. Alternatively, although more computation is needed, it may be safer to determine and remove such objects in the box using recognition technology or deep learning. Therefore, future work needs to focus on handling these exceptions on site.

Fig. 22. The results of object detection from the marine safety monitoring system

5. Conclusion

In this paper, we optimize algorithms for detecting nearby ship objects through digital image analysis to prevent collisions with floating objects at sea and enhance maritime safety. In addition, this study designs and proposes the FCCL, which is able to process approximately 30 FPS solely in software, without dedicated hardware, on a small embedded board. Through experiments, performance analysis is conducted against the existing algorithms (CCL, DA, CA and ML); the FCCL achieves similar detection performance, while the speed is improved by about 2-5 times. In this study, only the detection of objects in the marine environment is applied and described; however, this can be implemented as an alarm system which measures the approximate distance of vessels approaching buoys through image analysis. Also, if deep learning is applied to this algorithm, it will be possible to identify marine floating objects. If the FCCL is used as the preprocessing step before deep learning-based identification, better processing speed and recognition rates are expected. In this paper, experiments have been conducted using image datasets; however, the FCCL is planned to be deployed as a real-time alarm system through CCTV (Closed Circuit Television) cameras installed on actual buoys on site.

References

  1. S. Andres, F. Piniella, "Aids to Navigation systems on inland waterways as an element of competitiveness in ULCV traffic," International Journal for Traffic and Transport Engineering, vol.7, no.1, pp.1-18, 2017. https://doi.org/10.7708/ijtte.2017.7(1).01
  2. S. J. Lee, M. I. Roh, H. Lee, M. J. Oh, "Image-based ship detection using deep learning," Ocean Systems Engineering, vol.10, pp.415-434, 2020. https://doi.org/10.12989/OSE.2020.10.4.415
  3. M. Pan, Y. Liu, J. Cao, Y. Li, C. Li, and C. H. Chen, "Visual recognition based on deep learning for navigation mark classification," IEEE Access, vol.8, pp.32767-32775, 2020. https://doi.org/10.1109/access.2020.2973856
  4. Z. Chen, D. Chen, Y. Zhang, X. Cheng, M. Zhang, and C. Wu, "Deep learning for autonomous ship-oriented small ship detection," Saf. Sci., vol.130, no.104812. 2020.
  5. M. Abilio Ramos, I.B. Utne, A. Mosleh, "Collision avoidance on maritime autonomous surface ships: operators' tasks and human failure events," Saf Sci, vol.116, pp.33-44, 2019. https://doi.org/10.1016/j.ssci.2019.02.038
  6. S. Li and K. S. Fung, "Maritime autonomous surface ships (MASS): implementation and legal issues," Maritime Business Review, vol.4, no.4, pp.330-339, 2019. https://doi.org/10.1108/mabr-01-2019-0006
  7. S. Moosbauer, D. Konig, J. Jakel, and M. Teutsch, "A benchmark for deep learning based object detection in maritime environments," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
  8. M. Leclerc, R. Tharmarasa, M. C. Florea, A.-C. Boury-Brisset, T. Kirubarajan, and N. Duclos-Hindie, "Ship classification using deep learning techniques for maritime target tracking," in Proc. 21st Int. Conf. Inf. Fusion (FUSION), pp.737-744, 2018.
  9. C. Y. Jeong, H. S. Yang and K. D. Moon, "Fast horizon detection in maritime images using region-of-interests," Int. J. Distrib. Sens. N., vol.14, no.7, pp.1-11, 2018.
  10. C. Y. Jeong, H. S. Yang, and K. D. Moon, "Horizon detection in maritime images using scene parsing network," IET Electron. Lett., vol.54 no.12, pp.760-762, 2018. https://doi.org/10.1049/el.2018.0989
  11. T. Praczyk, "A quick algorithm for horizon line detection in marine images," Journal of Marine Science and Technology, vol.23, no.1, pp.164-177, 2018. https://doi.org/10.1007/s00773-017-0464-8
  12. N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Trans. Syst. Man and Cybern., vol.9, pp.62-66, 1979. https://doi.org/10.1109/TSMC.1979.4310076
  13. R. M. Haralick, S. R. Sternberg, and X. Zhuang, "Image analysis using mathematical morphology," IEEE Trans. Pattern Anal. Machine Intell., vol.9, pp.532-550, 1987.
  14. L. Di Stefano, A. Bulgarelli, "A simple and efficient connected components labeling algorithm," in Proc. of 10th International Conference on Image Analysis and Processing, pp.322-327, 1999.
  15. S. Fefilatyev, D. Goldgof, M. Shreve, and C. Lembke, "Detection and tracking of ships in open sea with rapidly moving buoy-mounted camera system," Ocean Engineering, vol.54, pp.1-12, 2012. https://doi.org/10.1016/j.oceaneng.2012.06.028
  16. N. Arshad, K. Moon, and J. Kim, "Multiple ship detection and tracking using background registration and morphological operations," Signal Processing and Multimedia, vol.123, pp.121-126, 2010. https://doi.org/10.1007/978-3-642-17641-8_16
  17. S. Fefilatyev, D. Goldgof, "Detection and tracking of marine vehicles in video," in Proc. of 19th International Conference on Pattern Recognition, 2008.
  18. H. H. Song, H. C. Lee, S. J. Lee, H. S. Jeon and T. H. Im, "Design of Video Pre-processing Algorithm for High-speed Processing of Maritime Object Detection System and Deep Learning based Integrated System," Journal of Internet Computing and Services (JICS), vol.21, no.4, pp. 117-126, 2020. https://doi.org/10.7472/JKSII.2020.21.4.117
  19. T. Liu, B. Pang, L. Zhang, W. Yang, and X. Sun, "Sea Surface Object Detection Algorithm Based on YOLO v4 Fused with Reverse Depthwise Separable Convolution (RDSC) for USV," Journal of Marine Science and Engineering, vol.9, no.7, pp.753, 2021. https://doi.org/10.3390/jmse9070753
  20. S. J. Lee, H. C. Lee, H. H. Song, H. S. Jeon, and T. H. Im, "Comparative Analysis of CNN Deep Learning Model Performance Based on Quantification Application for High-Speed Marine Object Classification," Journal of Internet Computing and Services (JICS), vol.22, no.2, pp.59-68, 2021. https://doi.org/10.7472/JKSII.2021.22.2.59
  21. D. K. Prasad, C. K. Prasath, D. Rajan, L. Rachmawati, E. Rajabally, and C. Quek, "Object detection in maritime environment: Performance evaluation of background subtraction methods," IEEE Transactions on Intelligent Transportation Systems, vol.20, no.5, pp.1787-1802, 2019. https://doi.org/10.1109/tits.2018.2836399
  22. H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid and S. Savarese, "Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression," in Proc. of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.