• Title/Summary/Keyword: Computer Vision Systems


EVALUATION OF SPEED AND ACCURACY FOR COMPARISON OF TEXTURE CLASSIFICATION IMPLEMENTATION ON EMBEDDED PLATFORM

  • Tou, Jing Yi;Khoo, Kenny Kuan Yew;Tay, Yong Haur;Lau, Phooi Yee
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.89-93
    • /
    • 2009
  • Embedded systems are becoming more popular as many embedded platforms have become more affordable. They offer a compact solution for many different problems, including computer vision applications. Texture classification can be used to solve various problems, and implementing it on embedded platforms will help in deploying these applications to the market. This paper proposes to deploy texture classification algorithms onto the embedded computer vision (ECV) platform. Two algorithms are compared: grey level co-occurrence matrices (GLCM) and Gabor filters. Experimental results show that raw GLCM in MATLAB achieves 50 ms, making it the fastest algorithm on the PC platform. The classification speeds achieved on the PC and ECV platforms, in C, are 43 ms and 3708 ms respectively. Raw GLCM achieves only 90.86% accuracy, compared to 91.06% for the combined features (GLCM and Gabor filters). Overall, evaluating all results in terms of classification speed and accuracy, raw GLCM is more suitable for implementation on the ECV platform.
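
The GLCM features compared in this abstract can be sketched in a few lines: count how often grey-level pairs co-occur at a fixed pixel offset, normalize, and derive scalar texture features. This is a minimal illustration, not the paper's implementation; the 4x4 test image, the horizontal offset, and the choice of contrast and energy as features are assumptions.

```python
# Minimal GLCM sketch: co-occurrence counts at offset (dx, dy), then
# two classic texture features (contrast, energy). Illustrative only.

def glcm(image, dx, dy, levels):
    """Count co-occurrences of grey-level pairs at offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def normalize(m):
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def contrast(p):
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def energy(p):
    return sum(v * v for row in p for v in row)

# Assumed 4-level test image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]
p = normalize(glcm(image, dx=1, dy=0, levels=4))
features = (contrast(p), energy(p))
```

In practice a classifier would be trained on such feature vectors computed over several offsets and orientations.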


Trends in Biomimetic Vision Sensor Technology (생체모방 시각센서 기술동향)

  • Lee, Tae-Jae;Park, Yun-Jae;Koo, Kyo-In;Seo, Jong-Mo;Cho, Dong-Il Dan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.12
    • /
    • pp.1178-1184
    • /
    • 2015
  • In conventional robotics, charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) cameras have been utilized for acquiring vision information. These devices have problems, such as narrow optic angles and inefficiencies in visual information processing. Recently, biomimetic vision sensors for robotic applications have been receiving much attention. These sensors are more efficient than conventional vision sensors in terms of the optic angle, power consumption, dynamic range, and redundancy suppression. This paper presents recent research trends on biomimetic vision sensors and discusses future directions.

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.3
    • /
    • pp.207-215
    • /
    • 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments on facial expression recognition using both frame-based and sequence-based approaches. Our method provides a 75.9% recognition rate for 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
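
The 2D-prediction step described above (projecting landmarks from the updated 3D face model) reduces to a pose transform followed by the pinhole camera model. The sketch below assumes illustrative intrinsics (fx, fy, cx, cy) and a simple yaw-only head pose; the paper's actual tracker and parameters are not reproduced here.

```python
# Hedged sketch: project a 3D landmark through a pinhole camera after a
# head-pose update. Intrinsics and pose values are assumptions.

import math

def rot_y(theta):
    """Rotation matrix about the y-axis (head yaw)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def project(point, R, t, fx, fy, cx, cy):
    """Apply pose (R, t) to a 3D landmark, then the pinhole model."""
    x, y, z = (sum(R[i][j] * point[j] for j in range(3)) + t[i]
               for i in range(3))
    return (fx * x / z + cx, fy * y / z + cy)

# A landmark 10 cm in front of the head origin, zero yaw, head 0.5 m away.
uv = project((0.0, 0.0, 0.1), rot_y(0.0), (0.0, 0.0, 0.5),
             fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

The predicted 2D position `uv` would then seed the 2D tracker, closing the loop the abstract describes.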

The Automated Measurement of Tool Wear using Computer Vision (컴퓨터 비젼에 의한 공구마모의 자동계측)

  • Song, Jun-Yeop;Lee, Jae-Jong;Park, Hwa-Yeong
    • 한국기계연구소 소보
    • /
    • s.19
    • /
    • pp.69-79
    • /
    • 1989
  • Cutting tool life monitoring is a critical element in designing unmanned machining systems. This paper describes a tool wear measurement system using computer vision that repeatedly measures the flank and crater wear of a single-point cutting tool. This direct tool wear measurement method is based on an interactive procedure utilizing an image processor and multi-vision sensors. The measurement software calculates 7 parameters to characterize flank and crater wear. Performance tests revealed that the computer vision technique provides precise, absolute tool-wear quantification and reduces human measurement errors.
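
The core measurement idea, extracting a wear dimension from image intensities and converting pixels to physical units via calibration, can be sketched very simply. The synthetic intensity profile, the threshold, and the 0.01 mm/pixel scale below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative flank-wear measurement: threshold a grey-level profile
# across the tool edge (the worn band reflects brightly) and scale the
# band width from pixels to millimetres with a calibration factor.

def wear_width_mm(profile, threshold, mm_per_pixel):
    """Count pixels brighter than the threshold and apply calibration."""
    worn = sum(1 for v in profile if v > threshold)
    return worn * mm_per_pixel

# Synthetic profile: dark background (~40), bright worn band (~200).
profile = [40, 42, 38, 205, 210, 198, 202, 41, 39]
vb = wear_width_mm(profile, threshold=128, mm_per_pixel=0.01)
```

A full system would repeat this across many scan lines and report statistics such as maximum and mean flank wear, two of the parameters such software typically characterizes.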


A Case Study on Remote Computer Vision Laboratory (원격 컴퓨터 비전 실습 사례연구)

  • Lee, Sung-Youl
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.2
    • /
    • pp.60-67
    • /
    • 2007
  • This paper describes the development of on-line computer vision laboratories to teach detailed image processing and pattern recognition techniques. The computer vision laboratories cover remote image acquisition methods, basic image processing and pattern recognition methods, lenses and lighting, and communication. This study introduces a case that teaches computer vision in a distance learning environment. It shows a schematic of a distance learning workstation and the contents of the laboratories, with image processing examples. The study focuses more on the contents of the vision labs than on the Internet application method. The study proposes ways to improve the on-line computer vision laboratories and outlines further research perspectives.


Development of the Lighting System Design Code for Computer Vision (컴퓨터 비전용 조명 시스템 설계 코드 개발)

  • Ahn, In-Mo;Lee, Kee-Sang
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.51 no.11
    • /
    • pp.514-520
    • /
    • 2002
  • In industrial computer vision systems, image quality depends on parameters such as the light source, illumination method, optics, and surface properties. Most of these are related to the lighting system, which is typically designed heuristically based on the designer's experimental knowledge. In this paper, a design code by which the optimal lighting method and light source for computer vision systems can be found is suggested, based on experimental results. The design code is applied to the design of the lighting system for a transistor marking inspection system, and the overall performance of the machine vision system with this lighting system shows the effectiveness of the proposed design code.

Robot vision interface (로보트와 Vision System Interface)

  • 김선일;여인택;박찬웅
• Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1987.10b
    • /
    • pp.101-104
    • /
    • 1987
  • This paper presents a robot-vision system that consists of a robot, a vision system, a single-board computer, and an IBM PC. The IBM PC based system has great flexibility of expansion for vision system interfacing. Easy human interfacing and strong computational ability are the benefits of this system. Interfacing between each component was carried out, and the calibration between the two coordinate systems is studied. The robot language for the robot-vision system was written in C. Users can also write job programs in C, in which the robot- and vision-related functions reside in the library.
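
The camera-to-robot calibration mentioned above can be illustrated in its simplest form: with an axis-aligned camera, each robot axis is a linear function of one image axis, fitted from two reference points. The fiducial coordinates below are assumptions; a full calibration would fit a 2D affine transform from three or more correspondences.

```python
# Sketch of pixel-to-robot coordinate calibration (axis-aligned case).
# Reference points are illustrative assumptions.

def calibrate_axis(p0, p1):
    """Return (scale, offset) so that robot = scale * pixel + offset,
    from two (pixel, robot) reference pairs."""
    (px0, r0), (px1, r1) = p0, p1
    scale = (r1 - r0) / (px1 - px0)
    return scale, r0 - scale * px0

def pixel_to_robot(u, v, cal_x, cal_y):
    """Map an image pixel (u, v) into robot workspace coordinates."""
    sx, ox = cal_x
    sy, oy = cal_y
    return (sx * u + ox, sy * v + oy)

# Two fiducials seen by the camera and touched by the robot (assumed).
cal_x = calibrate_axis((100, 50.0), (500, 250.0))   # pixels -> mm
cal_y = calibrate_axis((80, 10.0), (400, 170.0))
xy = pixel_to_robot(300, 240, cal_x, cal_y)
```

In a C-based job program, these would be the kinds of vision-related library functions the abstract refers to.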


Customer Activity Recognition System using Image Processing

  • Waqas, Maria;Nasir, Mauizah;Samdani, Adeel Hussain;Naz, Habiba;Tanveer, Maheen
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.63-66
    • /
    • 2021
  • Technological advancement in computer vision has made systems like grab-and-go grocery stores a reality. Now, all shoppers have to do is walk in, grab the items, and go out without having to wait in long queues. This paper presents an intelligent retail environment system that is capable of monitoring and tracking a customer's activity during shopping based on their interaction with the shelf. It aims to develop a system that is low cost, easy to mount, and exhibits adequate performance in a real environment.

A VISION SYSTEM IN ROBOTIC WELDING

  • Absi Alfaro, S. C.
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.314-319
    • /
    • 2002
  • The Automation and Control Group at the University of Brasilia is developing an automatic welding station based on an industrial robot and a controllable welding machine. Several techniques were applied in order to improve the quality of the welding joints. This paper deals with the implementation of a laser-based computer vision system to guide the robotic manipulator during the welding process. Currently, the robot is taught to follow a prescribed trajectory, which is recorded and repeated over and over, relying on the repeatability specification from the robot manufacturer. The objective of the computer vision system is to monitor the actual trajectory followed by the welding torch and to evaluate deviations from the desired trajectory. The position errors are then transferred to a control algorithm in order to actuate the robotic manipulator and cancel the trajectory errors. The computer vision system consists of a CCD camera attached to the welding torch, a laser emitting diode circuit, a PC-based frame grabber card, and a computer vision algorithm. The laser circuit establishes a sharp luminous reference line, whose images are captured through the video camera. The raw image data are then digitized and stored in the frame grabber card for further processing using specifically written algorithms. These image-processing algorithms give the actual welding path, the relative position between the pieces, and the required corrections. Two case studies are considered: the first is the joining of two flat metal pieces; the second is concerned with joining a cylindrical piece to a flat surface. An implementation of this computer vision system using parallel computer processing is being studied.
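
The laser-line step described above is commonly implemented by finding, in each image column, the brightest row (the laser stripe) and comparing it to the taught seam position. The tiny synthetic image and the taught row below are assumptions used only to illustrate the idea, not the paper's algorithm.

```python
# Hedged sketch of laser-stripe extraction for seam tracking: per-column
# brightest row -> stripe location -> signed deviation from taught path.

def stripe_rows(image):
    """Per-column row index of maximum intensity (the laser line)."""
    rows, cols = len(image), len(image[0])
    return [max(range(rows), key=lambda r: image[r][c])
            for c in range(cols)]

def deviations(stripe, taught_row):
    """Signed trajectory error per column, in pixels."""
    return [r - taught_row for r in stripe]

# Synthetic 5x6 frame: dim background (~10), bright laser stripe (~200)
# drifting downward across the image.
image = [
    [10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10],
    [200, 210, 10, 10, 10, 10],
    [10, 10, 205, 198, 10, 10],
    [10, 10, 10, 10, 207, 201],
]
errors = deviations(stripe_rows(image), taught_row=2)
```

In the welding station these pixel errors, once calibrated to millimetres, would feed the control algorithm that corrects the torch trajectory.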


Vision-Based Obstacle Collision Risk Estimation of an Unmanned Surface Vehicle (무인선의 비전기반 장애물 충돌 위험도 평가)

  • Woo, Joohyun;Kim, Nakwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.12
    • /
    • pp.1089-1099
    • /
    • 2015
  • This paper proposes a vision-based collision risk estimation method for an unmanned surface vehicle. A robust image-processing algorithm is suggested to detect target obstacles from the vision sensor. Vision-based target motion analysis (TMA) was performed to transform visual information into target motion information. In vision-based TMA, a camera model and optical flow are adopted. The collision risk was calculated using a fuzzy estimator that takes target motion information and vision information as input variables. To validate the suggested collision risk estimation method, an unmanned surface vehicle experiment was performed.
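
A fuzzy estimator of the kind described can be sketched with two membership grades (obstacle nearness and closing speed), min as the AND operator, and weighted-average defuzzification. The breakpoints (50 m range, 5 m/s closing speed) and rule outputs below are assumptions for illustration, not the paper's tuned values.

```python
# Minimal fuzzy collision-risk sketch in the spirit of the paper:
# two inputs -> Mamdani-style rules -> weighted-average defuzzification.

def collision_risk(range_m, closing_mps):
    """Return a risk score in [0, 1] from obstacle range and closing speed."""
    # Membership grades (assumed breakpoints).
    near = max(0.0, 1.0 - range_m / 50.0)
    fast = min(1.0, max(0.0, closing_mps / 5.0))
    # Rules with min as AND; rule outputs 1.0 / 0.5 / 0.0 are assumed.
    w_high = min(near, fast)        # near AND fast -> high risk (1.0)
    w_med = min(near, 1.0 - fast)   # near AND slow -> medium    (0.5)
    w_low = 1.0 - near              # far           -> low       (0.0)
    total = w_high + w_med + w_low
    return (1.0 * w_high + 0.5 * w_med) / total

# Obstacle 10 m away, closing at 4 m/s.
risk = collision_risk(range_m=10.0, closing_mps=4.0)
```

In the paper's setting the inputs would come from the vision-based TMA stage (range and relative velocity estimated via the camera model and optical flow) rather than being supplied directly.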