• Title/Summary/Keyword: vision system model


An Experimental Study on the Optimal Number of Cameras used for Vision Control System (비젼 제어시스템에 사용된 카메라의 최적개수에 대한 실험적 연구)

  • 장완식;김경석;김기영;안힘찬
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.13 no.2
    • /
    • pp.94-103
    • /
    • 2004
  • The vision system model used in this study involves six parameters and permits a kind of adaptability, in that the relationship between the camera-space locations of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. This vision control method also requires a number of cameras to map the 3-D physical space onto 2-D camera planes, and can be used irrespective of camera locations, provided the visual cues appear in the same camera plane. Thus, this study investigates the optimal number of cameras for the developed vision control system as the number of cameras is varied. The study proceeds in two stages: a) the effectiveness of the vision system model, and b) the optimal number of cameras. The results demonstrate the adaptability of the developed vision control method using the optimal number of cameras.

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.7-13
    • /
    • 2021
  • Machine vision is a technology that helps a computer recognize and judge things as a person does. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have been incorporated into conventional machine vision systems, quality inspection has become more accurate and manufacturing efficiency has increased. In machine vision systems using deep learning, the quality of the input image is very important. However, most images obtained in the industrial field for quality inspection typically contain noise, and this noise is a major factor limiting the performance of the machine vision system. Therefore, to improve performance, it is necessary to eliminate noise from the image, and much research has been done on image denoising. In this paper, we propose an autoencoder-based machine vision system to eliminate noise in the image. In experiments, the proposed model showed better denoising and image-reconstruction performance than the basic autoencoder model on the MNIST and Fashion-MNIST data sets.
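The abstract does not specify the network, but the core idea of a denoising autoencoder — train the model to map noisy inputs back to clean targets — can be sketched with a one-hidden-layer NumPy model. The layer sizes, learning rate, and toy data below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_denoising_autoencoder(clean, noise_std=0.5, hidden=16, lr=0.1, epochs=300):
    """One-hidden-layer denoising autoencoder: noisy input -> sigmoid hidden
    -> sigmoid output, trained by gradient descent on the mean-squared
    reconstruction error against the CLEAN targets."""
    n, d = clean.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        # Fresh noise each epoch, so the model learns to undo it.
        noisy = clean + rng.normal(0, noise_std, clean.shape)
        h = sig(noisy @ W1 + b1)
        out = sig(h @ W2 + b2)
        # Backpropagate the reconstruction error vs. the clean images.
        d_out = (out - clean) * out * (1 - out) / n
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * noisy.T @ d_h; b1 -= lr * d_h.sum(0)
    return lambda x: sig(sig(x @ W1 + b1) @ W2 + b2)

# Toy "images": 8-pixel patterns in [0, 1] standing in for MNIST digits.
clean = rng.random((64, 8))
denoise = train_denoising_autoencoder(clean)
noisy = clean + rng.normal(0, 0.5, clean.shape)
# The denoised output should be closer to the clean data than the noisy input.
print(np.mean((denoise(noisy) - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The same structure scales to image-sized inputs; the paper's proposed model presumably adds capacity beyond this baseline, which is exactly what its comparison against a "basic autoencoder" measures.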

Compensation of Installation Errors in a Laser Vision System and Dimensional Inspection of Automobile Chassis

  • Barkovski Igor Dunin;Samuel G.L.;Yang Seung-Han
    • Journal of Mechanical Science and Technology
    • /
    • v.20 no.4
    • /
    • pp.437-446
    • /
    • 2006
  • Laser vision inspection systems are becoming popular for the automated inspection of manufactured components. The performance of such systems can be enhanced by improving the accuracy of the hardware and the robustness of the software used in the system. This paper presents a new approach for enhancing the capability of a laser vision system by applying hardware compensation and using efficient analysis software. A 3D geometrical model is developed to study and compensate for possible distortions in the installation of the gantry robot on which the vision system is mounted. Appropriate compensation is applied to the inspection data obtained from the laser vision system, based on the parameters of the 3D model. The present laser vision system is used for dimensional inspection of a car chassis sub-frame and a lower-arm assembly module. An algorithm based on simplex search techniques is used for analyzing the compensated inspection data. The details of the 3D model, the parameters used for compensation, and the measurement data obtained from the system are presented in this paper, together with the search algorithm used for analyzing the measurement data and the results obtained. The results show that, by applying compensation and using appropriate analysis algorithms, the error in evaluating the inspection data can be significantly reduced, thus lowering the risk of rejecting good parts.
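The paper's 3D model parameters are specific to its gantry installation, but the general form of such a compensation — applying a rigid-body correction (small error angles plus a translation) to the measured points — can be sketched as follows. The angle convention and values are illustrative assumptions:

```python
import math

def compensate(points, roll, pitch, yaw, t):
    """Apply a rigid-body correction to measured 3-D inspection points:
    rotate by the estimated installation-error angles (radians), then
    translate by t. Rotation order assumed: R = Rz(yaw) Ry(pitch) Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    out = []
    for p in points:
        out.append(tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                         for i in range(3)))
    return out

# Example: correct a point measured by a gantry yawed 1 degree and
# offset 2 mm along x (hypothetical error values).
corrected = compensate([(100.0, 0.0, 0.0)], 0.0, 0.0,
                       math.radians(1.0), (2.0, 0.0, 0.0))
```

The paper's contribution lies in estimating those error parameters from the 3D model of the gantry; once estimated, the correction itself reduces to a transform of this kind applied to every inspection point.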

A Stereo-Vision System for 3D Position Recognition of Cow Teats on Robot Milking System (로봇 착유시스템의 3차원 유두위치인식을 위한 스테레오비젼 시스템)

  • Kim, Woong;Min, Byeong-Ro;Lee, Dea-Weon
    • Journal of Biosystems Engineering
    • /
    • v.32 no.1 s.120
    • /
    • pp.44-49
    • /
    • 2007
  • A stereo vision system was developed for a robot milking system (RMS) using two monochromatic cameras. An algorithm for inverse perspective transformation was developed to acquire 3-D information for all teats. To verify the performance of the algorithm in the stereo vision system, indoor tests were carried out using a test board and model teats. A real cow and a model cow were used to measure distance errors. The maximum distance errors for the test board, model teats, and real teats were 0.5 mm, 4.9 mm, and 6 mm, respectively. The average distance errors for model teats and real teats were 2.9 mm and 4.43 mm, respectively. It was therefore concluded that the algorithm is sufficiently accurate to be applied in the RMS.
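The abstract does not give the details of its inverse perspective transformation, but for two cameras the standard rectified-stereo form of the idea is compact: disparity between the matched image points gives depth, which then back-projects the pixel to a 3-D position. The pinhole model, focal length, and baseline below are assumptions for illustration:

```python
def stereo_to_3d(xl, yl, xr, f, B):
    """Recover a 3-D point (left-camera frame) from matched, rectified
    stereo image coordinates. Pinhole model assumed: depth Z = f*B/disparity,
    then X and Y by back-projecting the left-image pixel."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("zero/negative disparity: point at infinity or bad match")
    Z = f * B / disparity          # depth along the optical axis
    return (xl * Z / f, yl * Z / f, Z)

# f = 800 px, baseline B = 120 mm, 40 px disparity -> depth of 2400 mm.
X, Y, Z = stereo_to_3d(40.0, 10.0, 0.0, 800.0, 120.0)
```

The millimetre-level errors reported in the abstract are consistent with this geometry: at fixed focal length and baseline, depth error grows with distance as disparity shrinks, which is why the indoor test distances matter.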

A Hierarchical Motion Controller for Soccer Robots with Stand-alone Vision System (독립 비젼 시스템 기반의 축구로봇을 위한 계층적 행동 제어기)

  • Lee, Dong-Il;Kim, Hyung-Jong;Kim, Sang-Jun;Jang, Jae-Wan;Choi, Jung-Won;Lee, Suk-Gyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.19 no.9
    • /
    • pp.133-141
    • /
    • 2002
  • In this paper, we propose a hierarchical motion controller with a stand-alone vision system to enhance the flexibility of the robot soccer system. In addition, we simplified the model of the robot's dynamic environment using a Petri net and a simple state diagram. Based on the proposed model, we designed the robot soccer system with velocity and position controllers organized into a 4-level hierarchical structure. Experimental results with the vision system running stand-alone from the host system show improved controller performance, obtained by reducing the processing time of the vision algorithm.
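The abstract does not describe the four levels, but the general shape of such a layered controller — each level refining the decision of the level above until a motion command emerges — can be sketched as below. The level names, rules, and velocity values are illustrative assumptions, not the paper's design:

```python
# Minimal sketch of a layered soccer-robot controller: role selection at the
# top, behavior selection in the middle, and a concrete (linear, angular)
# velocity command at the bottom.

def level1_role(state):
    """Top level: pick a role from the coarse game state."""
    return "attack" if state["ball_x"] > 0 else "defend"

def level2_action(role, state):
    """Middle level: refine the role into a concrete behavior."""
    if role == "attack":
        return "shoot" if state["has_ball"] else "chase"
    return "block"

def level3_motion(action):
    """Bottom level: map the behavior to a (linear, angular) velocity pair."""
    return {"shoot": (1.0, 0.0), "chase": (0.8, 0.3), "block": (0.2, 0.0)}[action]

def controller(state):
    role = level1_role(state)
    action = level2_action(role, state)
    return level3_motion(action)

cmd = controller({"ball_x": 0.5, "has_ball": True})   # -> (1.0, 0.0)
```

Keeping each level small is what makes the Petri-net/state-diagram simplification mentioned in the abstract tractable: transitions only ever switch one level's state at a time.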

A Study on the Practicality of Vision Control Scheme used for Robot's Point Placement task in Discontinuous Trajectory (불연속적인 궤적에서 로봇 점 배치작업에 사용된 비젼 제어기법의 실용성에 대한 연구)

  • Son, Jae-Kyeong;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.20 no.4
    • /
    • pp.386-394
    • /
    • 2011
  • This paper is concerned with applying a vision control scheme to a robot's point-placement task along a trajectory made discontinuous by an obstacle. The proposed vision control scheme consists of four models: the robot's kinematic model, the vision system model, the 6-parameter estimation model, and the robot's joint-angle estimation model. For this study, the discontinuous trajectory caused by the obstacle is divided into two obstacle regions. Each obstacle region comprises 3 cases, according to the number of cameras that cannot acquire vision data. The effects of the number of cameras on the proposed vision control scheme are then investigated in each obstacle region. Finally, the practicality of the proposed scheme is demonstrated experimentally by performing the robot's point-placement task on the discontinuous trajectory.

Integrated Navigation Design Using a Gimbaled Vision/LiDAR System with an Approximate Ground Description Model

  • Yun, Sukchang;Lee, Young Jae;Kim, Chang Joo;Sung, Sangkyung
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.14 no.4
    • /
    • pp.369-378
    • /
    • 2013
  • This paper presents a vision/LiDAR integrated navigation system that provides accurate relative navigation performance over a general ground surface in GNSS-denied environments. The ground surface considered during flight is approximated as a piecewise continuous model with flat and sloped surface profiles. The presented system consists of a strapdown IMU and an aided-sensor block comprising a vision sensor and a LiDAR on a stabilized gimbal platform. Two-dimensional optical-flow vectors from the vision sensor and range information from the LiDAR to the ground are thus used to overcome the performance limit of a tactical-grade inertial navigation solution without a GNSS signal. In the filter realization, the INS error model is employed, with measurement vectors containing two-dimensional velocity errors and one differenced altitude in the navigation frame. In computing the altitude difference, the ground slope angle is estimated in a novel way through two bisectional LiDAR signals, under a practical assumption representing a general ground profile. Finally, the overall integrated system is implemented within the extended Kalman filter framework, and its performance is demonstrated through a simulation study with an aircraft flight trajectory scenario.
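The slope-from-two-beams idea admits a simple geometric sketch: with the gimbal stabilized, two ranges measured at known angles either side of nadir locate two ground points, and the line through them gives the local slope. The beam-angle convention and values below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def ground_slope(r1, r2, delta):
    """Estimate the ground slope angle (rad) in the flight plane from two
    LiDAR ranges r1, r2 taken at beam angles -delta and +delta off nadir.
    The gimbal is assumed stabilized, so the angles are in the navigation
    frame; each range places a ground point, and the slope is the angle of
    the line joining the two points."""
    x1, z1 = -r1 * math.sin(delta), -r1 * math.cos(delta)
    x2, z2 = r2 * math.sin(delta), -r2 * math.cos(delta)
    return math.atan2(z2 - z1, x2 - x1)

# Flat ground 10 m below: both beams return equal ranges, so the slope is zero.
h, delta = 10.0, math.radians(10)
r_flat = h / math.cos(delta)
print(ground_slope(r_flat, r_flat, delta))  # -> 0.0
```

With the slope known, the differenced-altitude measurement described in the abstract can be corrected for the ground profile rather than assuming a flat reference plane.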

A Study on Developing a High-Resolution Digital Elevation Model (DEM) of a Tunnel Face (터널 막장면 고해상도 DEM(Digital Elevation Model) 생성에 관한 연구)

  • Kim, Kwang-Yeom;Kim, Chang-Yong;Baek, Seung-Han;Hong, Sung-Wan;Lee, Seung-Do
    • Proceedings of the Korean Geotechnical Society Conference
    • /
    • 2006.03a
    • /
    • pp.931-938
    • /
    • 2006
  • Using a high-resolution stereoscopic imaging system, a three-dimensional digital elevation model of the tunnel face is acquired. The images, oriented within a given tunnel coordinate system, are brought into a stereoscopic vision system enabling three-dimensional inspection and evaluation. The possibilities for prediction ahead of and outside the tunnel face have been improved by the digital vision system with the 3D model. Interpolated image structures of the rock mass between subsequent stereo images make it possible to model the rock mass surrounding the opening within a short time on site. The models shall be used as input to numerical simulations on site, for comparison of expected and encountered geological conditions, and for the interpretation of geotechnical monitoring results.


Object Recognition using Smart Tag and Stereo Vision System on Pan-Tilt Mechanism

  • Kim, Jin-Young;Im, Chang-Jun;Lee, Sang-Won;Lee, Ho-Gil
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2379-2384
    • /
    • 2005
  • We propose a novel method for object recognition using a smart tag system with a stereo vision system on a pan-tilt mechanism. We developed a smart tag that includes an IRED device; the smart tag is attached to the object. We also developed a stereo vision system that pans and tilts so that the object image is centered in each camera's view. The stereo vision system on the pan-tilt mechanism can map the position of the IRED to the robot coordinate system by using the pan-tilt angles. Then, to map the size and pose of the object into the robot coordinate system, we used a simple model-based vision algorithm. To increase the reliability of tag-based object recognition, we implemented our approach using techniques as easy and simple as possible.
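The mapping from pan-tilt angles to robot coordinates is straightforward spherical geometry once the tag is centered in view: the two angles fix a direction, and the stereo-measured range fixes the distance along it. The axis convention below is an assumption for illustration, not necessarily the paper's:

```python
import math

def tag_position(pan, tilt, rng):
    """Map a smart-tag sighting (pan/tilt angles in radians plus the
    stereo-measured range) to (x, y, z) in a robot frame whose origin is at
    the pan-tilt axis intersection. Convention assumed here: pan rotates
    about z from the x axis; tilt elevates from the x-y plane."""
    return (
        rng * math.cos(tilt) * math.cos(pan),
        rng * math.cos(tilt) * math.sin(pan),
        rng * math.sin(tilt),
    )

# A tag centered straight ahead at 1 m maps to a point on the x axis.
x, y, z = tag_position(0.0, 0.0, 1.0)   # -> (1.0, 0.0, 0.0)
```

Because the mechanism centers the tag before measuring, the image-plane offsets drop out and only the two encoder angles and the range enter the mapping, which is what keeps the approach simple.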


Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.155-170
    • /
    • 2020
  • Passive ranging is a critical part of machine vision measurement. Most of passive ranging methods based on machine vision use binocular technology which need strict hardware conditions and lack of universality. To measure the distance of an object placed on horizontal plane, we present a passive ranging method based on monocular vision system by smartphone. Experimental results show that given the same abscissas, the ordinatesis of the image points linearly related to their actual imaging angles. According to this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of the special conjugate points into the linear function. The vertical distance of the target object to the optical axis is then calculated according to imaging principle of camera, and the passive ranging can be derived by depth and vertical distance to the optical axis of target object. Experimental results show that ranging by this method has a higher accuracy compare with others based on binocular vision system. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m. When it is 3-10 m, the mean relative error is 1.71%. Compared with other methods based on monocular vision system, the method does not need to calibrate before ranging and avoids the error caused by data fitting.