
Visibility detection approach to road scene foggy images

  • Guo, Fan (School of Information Science & Engineering, Central South University) ;
  • Peng, Hui (School of Information Science & Engineering, Central South University) ;
  • Tang, Jin (School of Information Science & Engineering, Central South University) ;
  • Zou, Beiji (School of Information Science & Engineering, Central South University) ;
  • Tang, Chenggong (School of Information Science & Engineering, Central South University)
  • Received : 2016.03.04
  • Accepted : 2016.08.07
  • Published : 2016.09.30

Abstract

One cause of vehicle accidents is reduced visibility due to bad weather conditions such as fog. Therefore, an onboard vision system should take visibility detection into account. In this paper, we propose a simple and effective approach for measuring the visibility distance using a single camera placed onboard a moving vehicle. The proposed algorithm is controlled by a few parameters and mainly includes camera parameter estimation, region of interest (ROI) estimation and visibility computation. Thanks to the ROI extraction, the position of the inflection point may be measured in practice. Thus, combined with the estimated camera parameters, the visibility distance of the input foggy image can be computed with a single camera and just the presence of road and sky in the scene. To assess the accuracy of the proposed approach, a reference target based visibility detection method is also introduced. The comparative study and quantitative evaluation show that the proposed method can obtain good visibility detection results at relatively fast speed.

1. Introduction

A main cause of vehicle accidents is the reduced visibility caused by bad weather conditions such as fog. During foggy weather, humans actually tend to overestimate visibility distance [1], which can lead to excessive driving speeds. A measurement of the available visibility would serve to inform the driver that the vehicle speed is not adapted to the conditions, or could even limit the speed automatically depending on the momentary conditions. Therefore, a system capable of estimating the visibility distance constitutes in itself a driving aid. Such a system is also very useful for various camera-based Advanced Driving Assistance Systems (ADAS).

Visibility detection methods can be roughly divided into two categories: sensor-based methods and vision-based methods. For the sensor-based methods, the majority of sensors dedicated to measuring visibility distances (such as scatterometers and transmissometers) are expensive to operate, and these sensors are often quite complicated to install and calibrate correctly. Because of its convenience and relatively low cost, the vision-based approach has been widely used in visibility detection. For example, Goswami et al. [2] proposed a hybrid approach for visibility enhancement in foggy images. The method works in two phases: the first phase enhances the contrast of the image with contrast-limited adaptive histogram equalization, and the second phase improves the visibility of the scene through the no-black-pixel constraint. Miclea et al. [3] developed a system for detecting visibility in a foggy environment. The method, designed for highways, consists of a laser and a camera fixed on a pole: the laser projects its beam towards the next pole, and if the camera sees the laser spot on that pole, the visibility is considered good; otherwise, the visible length of the laser beam is measured and used to estimate the visibility distance. Song et al. [4] proposed a real-time traffic meteorological visibility distance evaluation algorithm for foggy weather based on the dark channel prior and lane detection. Bronte et al. [5] presented a real-time fog detection system using an onboard low-cost black-and-white camera for a driving application. The system is based on two cues: one is the estimation of the visibility distance, which is calculated from the camera projection equations and the blurring due to the fog, and the other is the edge strength, which is reduced in the upper part of the image in a foggy scene. Xu et al. [6] proposed a prototype system for estimating visibility quantitatively based on image analysis and learning. First, an image of the measured scene, including objects and their background, is captured by a conventional video camera, and image features are extracted from the spatial and frequency domains. Second, a Support Vector Regression (SVR) model is dynamically trained according to the image. Finally, the visibility distance of the image is calculated by the SVR model with the extracted features as the input vector. Chen et al. [7] implemented a visibility measurement system based on a traffic video surveillance system. The system estimates the visibility distance using the contrast of the road surface together with distance information, and integrates the targets' contrast and curve fitting to obtain the visibility distance. Hautiere's group made great progress in visibility detection of road scene foggy images [8-10]. For example, the group presented a technique to estimate the mobilized visibility distance using onboard charge-coupled device cameras. The method combines the computation of local contrasts above 5% with a depth map of the vehicle environment obtained by stereovision, within 60 ms on a current-day computer [8]. The group then proposed a technique for automatically detecting fog and estimating the visibility distance through the use of an onboard camera [9]. A visibility distance estimation method based on structure from motion was also proposed by the group; it uses images acquired by an onboard camera filming the scene together with an estimation of the vehicle motion.
From this information, a partial spatial structure reconstruction can be achieved to estimate the visibility distance [10]. Building on Hautiere's work, Lenor et al. [11] presented a more complex model based on the theory of radiative transfer. In contrast to the work of Hautiere et al., the relation between the extinction coefficient of the atmosphere and the inflection point of the luminance curve can no longer be formulated explicitly. The additional complexity of the model makes it capable of fitting real-world visibility measurements well, but also more difficult to handle for real-time purposes.

This paper presents a novel algorithm to detect the visibility distance in road scene images captured by an on-vehicle camera. The visibility detection system is shown in Fig. 1. The proposed approach implements Koschmieder's law and enables computing the visibility distance, a measure defined by the International Commission on Illumination (CIE) as the distance beyond which a black object of an appropriate dimension is perceived with a contrast of less than 5%. The proposed algorithm is controlled by a few parameters and consists of camera parameter estimation, region of interest (ROI) estimation and visibility computation. Thanks to the ROI extraction, the position of the inflection point may be measured in practice. Thus, combined with the two estimated camera parameters, the visibility distance of the input foggy image can be effectively computed with a single camera and the presence of just road and sky in the scene. On the other hand, since no reference sensor was available to provide the exact visibility value, we evaluated the accuracy of the proposed method using black-and-white reference targets. The experimental results show a good correlation between the measurements obtained with the reference targets and those of the proposed approach, and the proposed algorithm achieves good visibility detection results for road scene foggy images.

Fig. 1. Visibility detection system. (a) The vehicle used for collecting data. (b) Placement of the camera on the vehicle. (c) Placement of the computer within the vehicle.

The rest of the paper is organized as follows. In Section 2, we present Koschmieder's law and the camera model used for detecting the visibility distance. In Section 3, we propose a new method for measuring the visibility distance using a single camera placed onboard a moving vehicle. To assess the accuracy of the proposed method, a reference target based method is introduced as a benchmark in Section 4, where experimental results for both images and video sequences are also reported. Finally, we give some conclusions in Section 5.

 

2. Background

Assume that for an object of intrinsic luminance L0, its apparent luminance L in the presence of fog with extinction coefficient k is modeled by Koschmieder's law [12] as follows:

L = L0 e^(−kd) + Ls (1 − e^(−kd))    (1)

where d is the distance of the object, and Ls is the sky intensity. Equation (1) is known as the atmospheric scattering model. The model indicates that the luminance of an object seen through fog is attenuated by a factor e^(−kd) (Beer-Lambert law). Eq. (1) may be rewritten as:

(L − Ls)/Ls = ((L0 − Ls)/Ls) e^(−kd)    (2)

Based on this equation, Duntley developed a contrast-attenuation law [10]: a nearby object exhibiting contrast C0 with the background will be perceived at distance d with the following contrast [13]:

C = C0 e^(−kd)    (3)

This expression serves as the basis for the definition of a standard dimension called the “meteorological visibility distance”. According to the CIE, the meteorological visibility distance is defined as the greatest distance at which a black object of a suitable dimension can be seen against the sky on the horizon, with the threshold contrast set to 5% [14], that is, C/C0 = 0.05. This definition yields the following expression:

dmax = −(1/k) ln(0.05) ≈ 3/k    (4)
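As a simple numerical illustration (the value of k here is purely illustrative, not taken from our experiments):

$$k = 0.03\ \mathrm{m^{-1}} \;\Rightarrow\; d_{max} = -\frac{\ln 0.05}{k} \approx \frac{3}{0.03} = 100\ \mathrm{m}.$$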

For the camera response, we first let f denote the camera response function, which models the mapping from scene luminance to image intensity performed by the imaging system, including optical as well as electronic parts [15]. As can be seen in Fig. 2, the intensity I of a pixel is the result of f applied to the sum of the sky intensity A and the direct transmission T:

I = f(A + T)    (5)

Fig. 2. Atmospheric scattering model. Fog luminance is due to the scattering of light. Light coming from the sun and scattered by atmospheric particles towards the camera is the sky intensity A. It increases with the distance. The light emanating from the object R is attenuated by scattering along the line of sight. Direct transmission T of the scene radiance R decreases with distance.

Suppose that the conversion process between the incident energy on the charge-coupled device (CCD) sensor and the intensity in the image is linear. This is generally the case for short exposure times, because the CCD array is then prevented from saturating. Furthermore, short exposure times are used on in-vehicle cameras to reduce motion blur. This assumption can thus be considered valid, and Eq. (5) then becomes:

I = R e^(−kd) + As (1 − e^(−kd))    (6)

where I is the input foggy image, R is the restored (fog-free) image, e^(−kd) is the so-called transmission map, which expresses the relative portion of light that manages to survive the entire path between the observer and a surface point in the scene, and As is the background sky intensity.
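Note that, as a direct consequence of Eq. (4), the transmission map at the meteorological visibility distance equals e^(−k·dmax) = e^(−3) ≈ 0.05; in other words, only about 5% of the scene radiance survives the path at that distance.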

According to the camera model established by Hautiere et al. [13], we have the following representation. As illustrated in Fig. 3, the position of a pixel is given by its (u, v) coordinates in the image plane. The coordinates of the projection of the optical center in the image are designated by (u0, v0). Let H denote the mounting height of the camera, θ the angle between the optical axis of the camera and the horizontal, and vh the vertical position of the horizon line in the image. The intrinsic parameters of the camera are its focal length fl and the horizontal size tpu and vertical size tpv of a pixel. From these parameters, we obtain αu = fl / tpu and αv = fl / tpv, and typically αu ≈ αv = α. Suppose that the road is flat; this makes it possible to associate a distance d with the vertical position v of each pixel in the image coordinate system (u, v), where v is also the row number of the pixel. The distance d can then be written as:

d = Hα / ((v − vh) cosθ) = λ / (v − vh),  for v > vh    (7)

Fig. 3. Modeling of the camera within its environment. It is located at a height H in the (S, X, Y, Z) coordinate system relative to the scene. Its intrinsic parameters are its focal length f and pixel size t. θ is the angle between the optical axis of the camera and the horizontal. Within the image coordinate system, (u, v) designates the position of a pixel, (u0, v0) is the position of the optical center C, and vh is the vertical position of the horizon line.

Here, λ = Hα / cosθ is introduced in (7) simply to make the notation more compact. Thus, for a pixel on row v of the input foggy image, Eq. (6) can be written as:

I(u, v) = R(u, v) e^(−kλ/(v−vh)) + As (1 − e^(−kλ/(v−vh)))    (8)

In Eqs. (7) and (8), the value of the parameter λ can be obtained in two ways. The first is to compute λ from the camera parameters (e.g. H, α and θ), as mentioned above. The second is to estimate λ from marked targets: the actual distances d1 and d2 of two points from the camera and their row coordinates v1 and v2 in the input image must first be obtained, and the parameter λ can then be expressed as in Eq. (9). More details about Eq. (7) and Eq. (9) are given in Appendix A.
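Applying Eq. (7) to two targets at known distances d1 and d2 whose image rows are v1 and v2, and solving the resulting pair of equations for the two unknowns, gives (up to algebraic rearrangement, this is the content of Eqs. (9) and (10)):

$$d_1 = \frac{\lambda}{v_1 - v_h},\quad d_2 = \frac{\lambda}{v_2 - v_h} \;\;\Longrightarrow\;\; v_h = \frac{d_1 v_1 - d_2 v_2}{d_1 - d_2},\qquad \lambda = d_1 (v_1 - v_h) = \frac{d_1 d_2 (v_2 - v_1)}{d_1 - d_2}.$$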

 

3. Proposed Visibility Detection Approach

A new visibility detection algorithm is proposed in this section. The proposed algorithm may achieve very good detection results for road scene images. Fig. 4 depicts the flowchart of the visibility detection algorithm. It can be seen that there are three key steps for detecting the visibility of road scene foggy images: camera parameter estimation, ROI estimation and visibility computation. Thanks to the ROI extraction, the position of the inflection point vi may be measured in practice. Thus, combined with the two estimated camera parameters vh and λ, the visibility distance of the input foggy image can be computed with a single camera and the presence of just road and sky in the scene.

Fig. 4. Algorithm flowchart for visibility detection. The intermediate steps are shown as red blocks and the key steps are shown as blue blocks.

3.1 Camera Parameter Estimation

The concept of visibility distance is based on Koschmieder's law [12]. According to the mathematical properties of the law, the existence of an inflection point can be detected in the image. Once we have the vertical positions of the inflection point and the horizon line, we can obtain the image line representative of the visibility distance. This image line in the image coordinate system can then be transformed into a real visibility distance by virtue of Eq. (7). Therefore, the first step of visibility detection is estimating the camera parameters by setting a set of targets on the road.

The camera parameters used for visibility detection are the vertical position of the horizon line vh and the camera parameter λ. Specifically, to estimate the two parameters, we first mark targets in white on the road; the distances between the first five targets and the camera, from near to far, are 5, 7, 9, 11 and 13 meters [see Fig. 5(b)]. Then, we manually choose four line segments. From the bottom of the image, the sequences are [5th target, 4th target], [4th target, 3rd target], [3rd target, 2nd target], [2nd target, 1st target], as shown in Fig. 5(c). These marker pairs correspond to d = [13 11; 11 9; 9 7; 7 5]. With any pair of the markers, the parameter λ can be obtained using Eq. (9), and the parameter vh is inferred from Eq. (10); more details are given in Appendix A.

Fig. 5. Steps of camera parameter estimation. (a) Original foggy image. (b) White markers on the road. (c) The manually chosen line segments.

In Eq. (10), y1 and y2 are the vertical ordinates of the marked 5th and 4th targets, respectively, and d1 and d2 are the corresponding actual distances, so d1 = 13 and d2 = 11. Similarly, the other three pairs of vh and λ can be obtained following the same steps, and the final parameter values are taken as the average of the four pairs. In our experiment, the parameter values calculated for Fig. 5(a) are vh = 879 and λ = 3633. Since different cameras and different cars give different values of v1 and v2, the values of vh and λ differ accordingly; we can therefore deduce that the two parameters vh and λ are camera and vehicle dependent. Besides, as mentioned before, the parameter can also be expressed as λ = Hα / cosθ; since different cameras and cars have different parameter settings, the same conclusion that the two parameters are camera and vehicle dependent can be drawn.
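As an illustration of this calibration step, the following sketch (function and variable names are ours) solves the flat-road model for each adjacent marker pair and averages the results. The row coordinates used here are not the measured ones; they are synthetic values generated to be consistent with the reported vh = 879 and λ = 3633.

```python
import numpy as np

def estimate_vh_lambda(d1, v1, d2, v2):
    """Solve the flat-road model d = lam / (v - vh) for (vh, lam) from two
    targets at known distances d1, d2 (m) with image rows v1, v2 (pixels)."""
    vh = (d1 * v1 - d2 * v2) / (d1 - d2)
    lam = d1 * (v1 - vh)          # equivalently d2 * (v2 - vh)
    return vh, lam

# Distances of the five white markers (m), listed far-to-near as in the text,
# and illustrative row coordinates synthesized from vh = 879, lambda = 3633.
distances = [13, 11, 9, 7, 5]
rows = [1158.5, 1209.3, 1282.7, 1398.0, 1605.6]

pairs = [estimate_vh_lambda(distances[i], rows[i], distances[i + 1], rows[i + 1])
         for i in range(len(distances) - 1)]
vh = np.mean([p[0] for p in pairs])
lam = np.mean([p[1] for p in pairs])
print(f"vh = {vh:.0f}, lambda = {lam:.0f}")   # roughly 879 and 3633
```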

3.2 ROI Extraction

The ROI extraction is the most important step of the visibility detection. For inflection point estimation, we must first consider on which object the luminance variation is to be measured, according to the definition of visibility. There is no doubt that the most suitable object is the road, as it offers a dark object that always remains present in the road scene and acts as an interface between the road and the sky. Thus, the ROI of the proposed approach is the road region. Inspired by Hautiere's work [9], a region growing algorithm is used for segmenting the road from the input image.

Specifically, the image contours are first extracted to highlight the major contrast breaks constituting the roadway edges, vehicles ahead or crossing, trees, etc. This extraction step is performed using Canny edge detection. The two thresholds of the Canny detector, tL and tH, are set to 0.06 and 0.25, respectively, to exclude noise during contour detection and thereby avoid obtaining an interruption at the level of the horizon line. Here, the set of all relevant contours obtained by the Canny detector is denoted by E.

Next, the region growing algorithm is performed to extract the target region. Region growing is classified as a pixel-based image segmentation method, since it involves the selection of initial seed points. The approach examines the neighboring pixels of the initial seed points and determines whether they should be added to the region, and the process is iterated in the same manner as general data clustering algorithms [16]. In our application, the initial seed points of the region expansion for an image of size M×N are chosen among the pixels of the line at image row M−20 whose gray level is close to the median gray level of this line. The purpose of this operation is to make the segmented region display minimal gradient variation as the expanded region is crossed from bottom to top. In our experiment, if the difference between the pixel intensity and the median value is less than 10 (−10 ≤ L − Lmedian ≤ 10), the pixel is selected as a new seed for region expansion, and the set of all seeds is denoted by Ps.

Then, supposing P(i, j) denotes a pixel to be aggregated to the target region R, the four conditions used for aggregating a pixel to the target region are defined as follows:

- First condition: The pixel does not belong to the region R.

- Second condition: The pixel does not belong to a contour detected by the Canny edge detector.

- Third condition: Only the three pixels lying above the current pixel can be incorporated into region R. As presented in Fig. 6, their gradients should satisfy the following relationship:

Fig. 6. Diagram representative of the third condition of aggregation to the target region for a pixel P(i, j) in its 8-neighborhood.

In Eq. (13), Gmax denotes the maximum vertical gradient existing between two successive lines of the image, as shown in Fig. 6. Let k = 0 designate the pixel lying directly above the current pixel, and k = −1 and k = 1 denote the top-left and top-right pixels, respectively. Since the road is not always sufficiently homogeneous, the value of Gmax is set between 4 and 10. Besides, the following constraint is also introduced to encourage vertical movement at the time of expansion.

- Fourth condition: The pixel exhibits a certain similarity to the seed Ps. The similarity is evaluated by the following expression:

where ρ < 1 and nr designates the number of lines between P(i, j) and Ps.

For the above four conditions, the former two give rigid constraints on region membership, while the latter two give elastic limitations on region values. Furthermore, the third condition reflects that the deviation between the investigated pixel and the pixel Ps should not exceed a certain restriction, and the fourth condition requires that the investigated pixel and the pixel Ps have a relatively strong similarity. Fig. 7 presents region growing results for different values of Gmax. One can clearly see that in Fig. 7(a) the growing region stops immediately after the seeds Ps start; this is because the value of Gmax is too small, which suppresses the region expansion procedure. The region growing result shown in Fig. 7(b) presents a well-segmented road area. Although Fig. 7(c) shows a comparable road segmentation result, the road region close to the sky seems too wide, and the non-gray pixels in the middle of the road region are also aggregated during the region expansion. As can be seen in Fig. 7(d), the trees on both sides of the road are also included in the region growing result. Thus, we can deduce that the value of Gmax should not be too large, otherwise the region growing procedure cannot achieve an ideal road segmentation result.

Fig. 7. Region growing results for different maximum vertical gradient Gmax (ρ = 0.3).
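The sketch below gives one possible implementation of this region growing step under the assumptions stated above (seed line at row M − 20, seeds within ±10 gray levels of that line's median, growth restricted to the three pixels above the current pixel, rejection of Canny edge pixels and of vertical gradients larger than Gmax). The function names are ours, the exact aggregation tests of Eqs. (11)-(13) may differ in detail, and the fourth (seed-similarity) condition is omitted for brevity.

```python
import numpy as np

def grow_road_region(gray, edges, g_max=6, seed_tol=10):
    """Bottom-up region growing for the road area.  `gray` is a grayscale
    image in [0, 255], `edges` a boolean Canny edge map of the same size,
    `g_max` the maximum vertical gradient (set between 4 and 10 in the text)."""
    M, N = gray.shape
    region = np.zeros((M, N), dtype=bool)

    # Seeds: pixels of the line at row M - 20 whose gray level is within
    # +/- seed_tol of that line's median gray level.
    seed_row = M - 20
    median = np.median(gray[seed_row])
    seeds = [(seed_row, j) for j in range(N)
             if abs(gray[seed_row, j] - median) <= seed_tol]
    for i, j in seeds:
        region[i, j] = True

    stack = list(seeds)
    while stack:
        i, j = stack.pop()
        # Only the three pixels lying above the current pixel may be added.
        for dj in (-1, 0, 1):
            ni, nj = i - 1, j + dj
            if ni < 0 or nj < 0 or nj >= N:
                continue
            if region[ni, nj] or edges[ni, nj]:     # conditions 1 and 2
                continue
            # Condition 3 (assumed form): bounded vertical gradient.
            if abs(float(gray[ni, nj]) - float(gray[i, j])) > g_max:
                continue
            # Condition 4 (similarity to the seed, weighted by rho) is
            # omitted in this simplified sketch.
            region[ni, nj] = True
            stack.append((ni, nj))
    return region

# Usage sketch (file name is a placeholder):
# from skimage import color, feature, io
# gray = (color.rgb2gray(io.imread("foggy_road.png")) * 255).astype(np.uint8)
# edges = feature.canny(gray / 255.0, low_threshold=0.06, high_threshold=0.25)
# road = grow_road_region(gray, edges)
```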

Then, to exclude measurements of low-contrast objects located on the edge of the target region, the luminance variation measurement does not span the entire surface of R. Instead, we measure the luminance variation over a vertical bandwidth B with a maximum width w.

Let us start by providing the definition of a horizontal segment Sj[i1, i2] belonging to R:

For each line j of R, the center ij of the longest horizontal segment belonging to R is then computed as:

where segments are defined by the following equations:

The set of pixels P(ij, j) constitutes the central axis of B and is ultimately obtained by the expression presented in Eq. (19).

where n1 and n2 are the line numbers of the bottom and top of R, respectively. Fig. 8 illustrates the measurement bandwidth search process. As can be seen in the figure, suppose that row j of the input image contains four horizontal segments and the second one is the longest; then ij represents the center of the longest horizontal segment, and l is its length. The positions of the pixel set constituting the central axis are also shown in Fig. 8.

Fig. 8. Diagrammatic example of the center ij.

Finally, the median luminance Lj of each bandwidth line is computed, which serves to derive L, the function representing the vertical variation in luminance over the target region R. The median operation can be written as:

The median operation is adopted here because non-gray road parts may be included in the extracted ROI, while the gray road region always occupies most of the ROI; the median therefore eliminates the interference of the non-gray road parts. Fig. 9 shows the key steps of the ROI extraction. Fig. 9(a) is the Canny edge detection result for the original image shown in Fig. 5(a), and its corresponding region growing result is shown in Fig. 9(b). The measurement bandwidth for Fig. 5(a) is shown as the blue lines in Fig. 9(c). In our experiment, the bandwidth estimation begins at the middle of the image width, istart. Then, a flag is set to determine whether the new midline is on the left or right side of istart, and the segment constraints mentioned in Eq. (19) must be satisfied at the same time. The resulting measurement bandwidth is depicted in Fig. 9(c).

Fig. 9. Target region detection and bandwidth measurement. (a) Canny edge detection result. (b) Region growing result (the target region is painted in white). (c) Measurement bandwidth computation (blue lines).
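A simplified sketch of the bandwidth search and of the per-line median luminance is given below; `max_width` plays the role of w, and the segment bookkeeping described above is reduced to taking, for each line, the center of the longest horizontal run of region pixels.

```python
import numpy as np

def longest_run_center(row_mask):
    """Return (center, length) of the longest run of True values in a 1-D
    boolean array, or (None, 0) if the row contains no True pixel."""
    best_len, best_center, start = 0, None, None
    padded = np.concatenate([row_mask, [False]])
    for i, v in enumerate(padded):
        if v and start is None:
            start = i
        elif not v and start is not None:
            length = i - start
            if length > best_len:
                best_len, best_center = length, start + length // 2
            start = None
    return best_center, best_len

def bandwidth_median_luminance(gray, region, max_width=20):
    """For each image line of the segmented road region, keep a band of at
    most `max_width` pixels centred on the longest horizontal segment of the
    region and return the median luminance of that band, line by line
    (NaN for lines with no region pixel)."""
    M, _ = gray.shape
    L = np.full(M, np.nan)
    half = max_width // 2
    for j in range(M):
        center, length = longest_run_center(region[j])
        if center is None:
            continue
        w = min(half, length // 2)
        band = gray[j, max(0, center - w): center + w + 1]
        L[j] = np.median(band)
    return L
```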

3.3 Visibility Computation

To compute the visibility distance of the input image, the inflection point vi must be obtained. However, too many local inflection points may be detected; to avoid this problem, a smoothing of L is performed. Fig. 10(a) shows the curve of the median luminance of the bandwidth. The derivative of L is then calculated and smoothed, as shown in Fig. 10(b). The local minimum position is the position of the inflection point vi, which minimizes the squared error between the fitted model and the measured curve. From Fig. 10(b), we can deduce that vi is 941. Besides, the vertical position of the horizon line vh can be obtained using Eq. (10), and the camera parameter λ can be obtained using Eq. (9); for the illustrative example shown in Fig. 5, we already know that vh = 879 and λ = 3633. Thus, we can get the value of the extinction coefficient k:

k = 2 (vi − vh) / λ    (21)

Fig. 10. Curve representative of the measurement of vertical luminance variation in the foggy image. (a) The curve of the median luminance of the bandwidth (blue: without smoothing; red: with smoothing). (b) The derivative of the curve (blue: without smoothing; green: with smoothing). (c) Visibility distance measurement. (d) Corresponding detection values.

More details about Eq. (21) are given in Appendix B. Thus, according to Eq. (4), the visibility distance dmax (unit: meters) can be computed by:

dmax = 3/k = 3λ / (2(vi − vh))    (22)

Let vv denote the image line representative of the visibility distance dmax in pixel units; from Eq. (7) we have:

dmax = λ / (vv − vh)    (23)

Thus, we are able to deduce the image line vv:

vv = vh + λ / dmax    (24)

Fig. 10(c) presents the visibility estimation result. In the figure, the measurement bandwidth is shown as vertical blue lines, the horizontal black line represents the estimate of the visibility distance vv, and the horizontal red line represents the vertical position of the horizon line vh. The corresponding detection values are shown in Fig. 10(d). From the figure we can see that the vertical positions of the horizon line vh and the inflection point vi are 879 and 941, respectively. The vertical position of the visibility line vv is 920, which indicates that the visibility distance of the image is 88 m. This detection result is consistent with our visual perception.
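The final computation can be summarized by the short sketch below, shown with the values reported for the example of Fig. 5 and Fig. 10 (vi = 941, vh = 879, λ = 3633); the moving-average smoothing and its window size are our choices.

```python
import numpy as np

def visibility_from_profile(L, vh, lam, smooth=15):
    """Estimate the visibility distance from the vertical luminance profile L
    (median band luminance per image row, NaN outside the band)."""
    rows = np.where(~np.isnan(L))[0]
    kernel = np.ones(smooth) / smooth
    Ls = np.convolve(L[rows], kernel, mode="same")          # smoothed L
    dL = np.convolve(np.gradient(Ls), kernel, mode="same")  # smoothed derivative
    vi = rows[np.argmin(dL)]       # inflection point: minimum of the derivative
    k = 2.0 * (vi - vh) / lam      # extinction coefficient, Eq. (21)
    d_max = 3.0 / k                # visibility distance, Eq. (22)
    vv = vh + lam / d_max          # image row of the visibility line, Eq. (24)
    return vi, k, d_max, vv

# Plugging in the values reported for the example of Fig. 10:
vi, vh, lam = 941, 879, 3633
k = 2.0 * (vi - vh) / lam
print(round(3.0 / k), round(vh + lam / (3.0 / k)))   # -> 88 920
```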

 

4. Experimental Results

In this section, we first analyze the robustness of the proposed framework in different scenes. Then, since no reference equipment or sensor was available to provide the exact visibility value, we evaluate the accuracy of the proposed method using a few black-and-white reference targets; to this end, we introduce the reference target based visibility estimation method [17] and compare it with our method. Results on a variety of road scene images show a good correlation between the measurements obtained with the reference targets and those of the proposed method. Finally, the proposed method is tested on some challenging video sequences to demonstrate its effectiveness.

4.1 Robustness of Proposed Framework

To verify the robustness of the proposed framework, a virtual mockup of the fog observation test bench is constructed. As can be seen in Fig. 11, the signal lights of the virtual mockup are marked in red, and their relative positions are specified.

Fig. 11. Image of the virtual mockup of the fog observation test bench.

To analyze the impact of the algorithm parameters in different scenes, we must ensure that the parameters obtained by the proposed method yield correct visibility distances. To this end, we simulated images of the virtual mockup of our test bench under various fog situations corresponding to different visibility distance values. These images are shown in Fig. 12.

Fig. 12. Simulations of the test bench in various fog situations for different visibility distances.

Table 1 shows the parameter values of the proposed method for the different visibility distances in Fig. 12. One can clearly see that the visibility distances dmax obtained with our parameter values reflect well the image clearness of the different fog situations shown in Fig. 12. The estimated dmax is not only consistent with human visual perception, but also basically complies with the distance values we set for the virtual mockup of the fog observation test bench (ground-truth dmax: 33 m, 66 m, 100 m, 133 m, 166 m and 200 m). Therefore, the robustness of the proposed framework is verified.

Table 1. Parameter values for different visibility distances in Fig. 12

4.2 Reference Target based Method

The reference target based method [17] detects the visibility distance by using image intensity information and Koschmieder's law, as described in Section 2. The black-and-white reference targets used in the method have the simple shape shown in Fig. 13(a). These reference targets are arranged at different distances; Fig. 13(b) shows their positions.

Fig. 13. Reference targets and their positions. (a) Ideal black-and-white target. (b) Target positions.

Placing a reference target at distance d from the observer, we have the following equations according to Koschmieder's law [see Eq. (1)]:

LB = L0B e^(−kd) + Ls (1 − e^(−kd))    (25)

LW = L0W e^(−kd) + Ls (1 − e^(−kd))    (26)

where W and B represent the white and black areas of the reference target, respectively. Since both the black and white areas lie on the same target, they are at the same distance d from the observer. Thus, we can subtract Eq. (25) from Eq. (26):

C = LW − LB = (L0W − L0B) e^(−kd) = C0 e^(−kd)    (27)

Assume that the targets are arranged at different known distances, i.e., the distance d from each target to the observer is known. Besides, the luminance difference C = LW − LB between the white and black areas of each target can be computed from the real captured images. By least-squares fitting, the extinction coefficient k and the intrinsic luminance difference C0 between the white and black areas can be obtained. The horizontal line at the normalized contrast C/C0 = 0.05 then gives the threshold level of contrast that determines the visibility in meters. Note that, in order to be able to measure visibility over a wide range, the reference targets must be placed over a wide range of distances, because more targets at varying distances increase the measurement range and accuracy.
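A minimal sketch of this fitting step is given below. The (distance, contrast) measurements are synthetic values (not our measured data) corresponding to a visibility of roughly 70 m; since C(d) = C0 e^(−kd), a linear least-squares fit of ln C against d yields k and C0, and the visibility follows from the 5% threshold.

```python
import numpy as np

# Synthetic measurements: target distances d (m) and luminance
# differences C = L_W - L_B read from the captured images.
d = np.array([10, 25, 40, 60, 80, 100, 130, 160, 200, 250], dtype=float)
C = 120.0 * np.exp(-0.042 * d)          # illustration only

# Fit ln C = ln C0 - k d by linear least squares.
slope, intercept = np.polyfit(d, np.log(C), 1)
k, C0 = -slope, np.exp(intercept)

# Meteorological visibility: distance at which C / C0 drops to 5%.
d_max = -np.log(0.05) / k               # approximately 3 / k
print(f"k = {k:.4f} 1/m, visibility ~ {d_max:.0f} m")
```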

4.3 Comparative Analysis

In this section, we carried out two experiments. In the first experiment, we assess the accuracy of the proposed method against the visibility detection results of the reference target based method. In the second experiment, we present comparative results between the proposed method and the target based method. In both experiments, we obtained a good correlation between the measurements of the reference targets and those of our method.

To test the accuracy of our method, we computed the visibility distance using the contrast impairment of the reference target shown in Fig. 13(a). Specifically, in our experiments, we placed the same reference target at ten different distances to capture a set of images. We assume that the extinction coefficient does not vary much over a short period of time and that the relevant camera parameters are fixed during the image capture process; therefore, the luminance reflected by the same gray level is basically the same in each captured image. Fig. 14 presents four sample images captured in the reference target experiment. The experiment was carried out during the same period as the visibility detection test using the proposed algorithm.

Fig. 14. Sample images captured in the reference target experiment.

With the ten images of the reference target and the corresponding distance information, we can plot the fitting curve shown in Fig. 15(a). In the figure, the black dots represent the ratio of luminance differences (the normalized contrast) C/C0. Using Eqs. (3) and (4) with the fitted extinction coefficient, the normalized contrast can be computed for distances between 0 and 300 meters; the resulting graph is shown in Fig. 15(a). As can be seen in the figure, the visibility distance is 71 meters, since the horizontal line at a normalized contrast of 0.05 is the threshold level of contrast that determines visibility in meters, according to the definition of the meteorological visibility distance given by the CIE. Fig. 15(b) shows the visibility detection result of our method; one can clearly see that the visibility estimated by the method is 88 meters. According to the findings of the World Meteorological Organization (WMO), a 10-20% relative error for visibility sensors is realistic and acceptable [18,19]. Taking the mean of the target-based result and the visual inspection result (80 meters) as the true value of the visibility distance, i.e., Vtrue = (71+80)/2 = 75.5 meters, the relative error can be written as:

|88 − 75.5| / 75.5 ≈ 16.6%

Fig. 15. Visibility detection results. (a) Fitting curve for the reference target method. (b) Detection values for the proposed method.

From the above error analysis, we can deduce that the proposed method meets the requirements of the WMO when used to detect visibilities of around 100 meters.

Besides, we also compare the proposed method with the target based method; an illustrative example is shown in Fig. 16. For the proposed method, the computed measurement bandwidth is shown in Fig. 16(a), Fig. 16(b) shows the curve of the median luminance of the bandwidth, and the derivative of the median luminance is then calculated and smoothed, as shown in Fig. 16(c). From the inflection point shown in Fig. 16(c), we deduce that the visibility of the input image is 301 meters. The visibility detection result of the reference target method is shown in Fig. 16(d); the distance corresponding to the horizontal line at a normalized contrast of 0.05 is 250 meters, which means the visibility obtained by that method is 250 m. Since the visual inspection result for the input image is 260 m, the true value can again be taken as (250+260)/2 = 255 m, and the relative error for this example is |301 − 255|/255 ≈ 18%, which satisfies the requirement of the WMO. Therefore, we can conclude that there is a good correlation between the measurements obtained with the reference target and the presented technique.

Fig. 16. Comparative results for our method and the reference target method. (a) Measurement bandwidth computation (black lines). (b) The curve of the median luminance of the bandwidth (black: without smoothing; red: with smoothing). (c) The derivative of the curve. (d) Fitting curve for the reference target method.

4.4 Extension to Video Sequences

In our experiments, we detect visibility not only for single images but also for video sequences. Although the final goal of our project is to serve driver assistance systems, as a preliminary analysis of the problem we mainly focus on the algorithm design at present.

To test the proposed contribution, we applied the proposed method to two video sequences. Each video sequence contains over 150 frames and represents one of two foggy weather conditions: fog and heavy fog. The two sequences are illustrated by an image at the top of Fig. 17, and the visibility values measured on the 150 images of each sequence are also presented in Fig. 17.

Fig. 17. Visibility distance measurement conducted on the two video sequences.

From the experimental results shown in Fig. 17, we can see that the average visibility values of the two sequences are 51.40 m and 37.69 m, respectively, while the visual inspection results for the two sequences are 50 m and 30 m. Thus, the visibility values obtained by our method are slightly larger than the visual inspection values. Besides, the proposed method is relatively fast (less than 1 s for a 640×480 image) on a PC (3.00 GHz Intel Pentium Dual-Core processor) in the MATLAB environment, and runs onboard our prototype vehicle. Moreover, despite the presence of obstacles, such as vehicles being followed or crossing, turns in the road, etc., the results are relatively stable, and even more so as the fog density increases. This outcome constitutes one of the proposed method's key advantages.

 

5. Conclusion

In this paper, we proposed a simple and effective approach to measuring the visibility distance using a single camera placed onboard a moving vehicle. The proposed approach implements Koschmieder's law and enables computing the visibility distance, a measure defined by the CIE as the distance beyond which a black object of an appropriate dimension is perceived with a contrast of less than 5%. The proposed algorithm is controlled by a few parameters and consists of camera parameter estimation, ROI estimation and visibility computation. Thanks to the ROI extraction, the position of the inflection point can be measured in practice. Thus, combined with the estimated camera parameters, the visibility distance of the input foggy image can be computed with a single camera and the presence of just road and sky in the scene. To assess the accuracy of the proposed approach, a reference target based method was also introduced. The comparative experiments and quantitative evaluations showed a good correlation between the measurements obtained with the reference targets and those of the proposed approach, and the proposed algorithm achieves good visibility detection results for road scene foggy images.

However, the proposed visibility detection approach also has some drawbacks. First, the proposed approach requires only road and sky to be present in the scene for the system to run properly, which may limit its applications in some situations. Second, the proposed approach cannot be applied at night. Finally, it is very hard to know whether instabilities in the results are due to local variations in fog density or to errors introduced by the method. Therefore, in the future, we intend to reduce data inaccuracies and make the system more robust, rather than using just one image to determine the visibility distance; we could also make use of frame-to-frame coherence in video sequences. We also hope that a new method capable of generalizing our approach can be developed and that the other shortcomings mentioned above can be solved in our future work.

References

  1. V. Cavallo, M. Colomb, J. Dore, “Distance perception of vehicle rear lights in fog,” Human Factors, vol. 43, no. 3, pp. 442-451, 2001. https://doi.org/10.1518/001872001775898197
  2. S. Goswami, J. Kumar, J. Goswami, “A hybrid approach for visibility enhancement in foggy image,” in Proc. of 2nd International Conference on Computing for Sustainable Global Development, pp. 175-180, March 11-13, 2015.
  3. R. C. Miclea, I. Silea, “Visibility detection in foggy environment,” in Proc. of 20th International Conference on Control Systems and Computer Science, pp. 959-964, May 27-29, 2015.
  4. H. J. Song, Y. Z. Chen, Y. Y. Gao, “Real-time visibility distance evaluation based on monocular and dark channel prior,” International Journal of Computational Science and Engineering, vol. 10, no. 4, pp. 375-386, 2015. https://doi.org/10.1504/IJCSE.2015.070992
  5. S. Bronte, L. M. Bergasa, P. F. Alcantarilla, “Fog detection system based on computer vision techniques,” in Proc. of 12th International IEEE Conference on Intelligent Transportation Systems, pp. 30-35, October 3-7, 2009.
  6. X. Xu, S. H. Shafin, Y. Li, H. W. Hao, “A prototype system for atmospheric visibility estimation based on image analysis and learning,” Journal of Information and Computational Science, vol. 11, no. 13, pp. 4577-4585, September, 2014. https://doi.org/10.12733/jics20104419
  7. Z. Z. Chen, J. Li, Q. M. Chen, “Real-time video detection of road visibility conditions,” in Proc. of 2009 WRI World Congress on Computer Science and Information Engineering, pp. 472-476, March 31-April 2, 2009.
  8. N. Hautiere, R. Labayrade, D. Aubert, “Real-time disparity contrast combination for onboard estimation of the visibility distance,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 2, pp. 201-212, June, 2006. https://doi.org/10.1109/TITS.2006.874682
  9. N. Hautiere, J. P. Tarel, J. Lavenant, D. Aubert, “Automatic fog detection and estimation of visibility distance through use of an onboard camera,” Machine Vision and Applications, vol. 17, no. 1, pp. 8-20, April, 2006. https://doi.org/10.1007/s00138-005-0011-1
  10. C. Boussard, N. Hautiere, B. d'Andrea-Novel, “Visibility distance estimation based on structure from motion,” in Proc. of 11th International Conference on Control Automation Robotics & Vision, pp. 1416-1421, December 7-10, 2010.
  11. S. Lenor, B. Jahne, S. Weber, U. Stopper, “An improved model for estimating the meteorological visibility from a road surface luminance curve,” Lecture Notes in Computer Science, vol. 8142, pp. 184-193, September, 2013.
  12. W. E. K. Middleton, “Vision through the atmosphere,” in Geophysik II / Geophysics II, Springer, Berlin, Heidelberg, Germany, 1957.
  13. N. Hautiere, J. P. Tarel, J. Lavenant, D. Aubert, “Mitigation of visibility loss for advanced camera-based driver assistance,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 474-484, June, 2010. https://doi.org/10.1109/TITS.2010.2046165
  14. CIE, International Lighting Vocabulary, Publication CIE no. 17.4, 1987.
  15. M. D. Grossberg, S. K. Nayar, “Modeling the space of camera response functions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 1272-1282, October, 2004. https://doi.org/10.1109/TPAMI.2004.88
  16. A. Mehnert, P. Jackway, “An improved seeded region growing algorithm,” Pattern Recognition Letters, vol. 18, no. 10, pp. 1065-1071, October, 1997. https://doi.org/10.1016/S0167-8655(97)00131-1
  17. T. M. Kwon, “Atmospheric visibility measurements using video cameras: relative visibility,” Report no. CTS 04-03, November, 2004.
  18. World Meteorological Organization (WMO), Guide to Meteorological Instruments and Methods of Observation, The Commission for Instruments and Methods of Observation (CIMO), Switzerland, 2008.
  19. J. D. Crosby, “Visibility sensor accuracy: what's realistic?” in Proc. of 12th Symposium on Meteorological Observations and Instrumentation, pp. 1-5, 2003.