• Title/Summary/Keyword: Image Signal Recognition

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that combine such information in a similar way. In this paper, five emotions (neutral, happiness, anger, surprise, sadness) are recognized from speech signals and facial images, and a multimodal method is proposed that fuses the two recognition results. Emotion recognition from the speech signal and the facial image is performed with Principal Component Analysis (PCA), and the multimodal stage fuses the two results using a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% for speech signals and 53.4% for facial images; that is, the speech signal yields a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method based on an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the facial image or the speech signal alone (a sketch of such a fusion step follows below).
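
A minimal sketch of the kind of S-type decision fusion described above, in Python. The membership-function parameters `a` and `b`, the per-class score vectors, and the weighting scheme are all illustrative assumptions; the paper's actual formulation may differ.

```python
import numpy as np

def s_membership(x, a, b):
    """Standard S-shaped membership function rising from 0 at a to 1 at b."""
    m = (a + b) / 2.0
    y = np.zeros_like(x, dtype=float)
    y[x >= b] = 1.0
    left = (x > a) & (x <= m)
    right = (x > m) & (x < b)
    y[left] = 2.0 * ((x[left] - a) / (b - a)) ** 2
    y[right] = 1.0 - 2.0 * ((x[right] - b) / (b - a)) ** 2
    return y

def fuse_decisions(speech_scores, face_scores, a=0.2, b=0.8):
    """Weight each modality's class scores by an S-type membership value
    and pick the emotion with the largest fused score."""
    speech_w = s_membership(speech_scores, a, b)
    face_w = s_membership(face_scores, a, b)
    fused = speech_w * speech_scores + face_w * face_scores
    return int(np.argmax(fused)), fused

# Hypothetical per-class scores (neutral, happy, angry, surprised, sad)
speech = np.array([0.10, 0.55, 0.15, 0.10, 0.10])
face   = np.array([0.20, 0.30, 0.25, 0.15, 0.10])
label, fused = fuse_decisions(speech, face)
print(label, fused)
```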

A Survey of Objective Measurement of Fatigue Caused by Visual Stimuli (시각자극에 의한 피로도의 객관적 측정을 위한 연구 조사)

  • Kim, Young-Joo;Lee, Eui-Chul;Whang, Min-Cheol;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea / v.30 no.1 / pp.195-202 / 2011
  • Objective: The aim of this study is to investigate and review previous research on objectively measuring fatigue caused by visual stimuli. We also analyze the feasibility of alternative visual fatigue measurement methods that use facial expression and gesture recognition. Background: In most previous research, visual fatigue is measured by subjective methods based on surveys or interviews. However, subjective evaluation can be affected by variations in individual feelings or by other kinds of stimuli. To address these problems, visual fatigue measurement methods based on signal and image processing have been widely researched. Method: To analyze the signal- and image-processing-based methods, we categorized previous work into three groups: bio-signal, brainwave, and eye-image based methods. We also analyze the possibility of adopting facial expression or gesture recognition to measure visual fatigue. Results: Bio-signal and brainwave based methods are problematic because they can be affected not only by visual stimuli but also by external stimuli acting on other sense organs. In eye-image based methods, relying on a single feature such as blink frequency or pupil size is also problematic because a single feature can easily be affected by other kinds of emotions. Conclusion: A multi-modal measurement method is required that fuses several features extracted from bio-signals and images; alternative methods using facial expression or gesture recognition can also be considered. Application: An objective visual fatigue measurement method can be applied to the quantitative and comparative measurement of visual fatigue for next-generation display devices in terms of human factors.

Traffic Signal Detection and Recognition in an RGB Color Space (RGB 색상 공간에서 교통 신호등 검출과 인식)

  • Jung, Min-Chul
    • Journal of the Semiconductor & Display Technology / v.10 no.3 / pp.53-59 / 2011
  • This paper proposes a new method of traffic signal detection and recognition in an RGB color model. The proposed method first performs RGB filtering to detect traffic signal candidates. It then applies adaptive thresholding and analyzes the connected components of the binary image. A connected component of a traffic signal must satisfy both a bounding-box ratio and an area ratio defined in this paper (a sketch of this filtering step is given below). The traffic signal recognition system is implemented in C on an embedded Linux system for high-speed real-time image processing. Experimental results show that the proposed algorithms are quite successful.
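
A minimal OpenCV sketch of the candidate-filtering step described above. The RGB bounds, the bounding-box ratio limits, and the area-ratio threshold are hypothetical placeholders, not the paper's values.

```python
import cv2
import numpy as np

# Hypothetical RGB bounds for a red traffic light (OpenCV images are B, G, R).
LOWER_RED = np.array([0, 0, 120])
UPPER_RED = np.array([80, 80, 255])

def detect_red_signals(bgr_image,
                       min_box_ratio=0.8, max_box_ratio=1.25,   # width / height of bounding box
                       min_area_ratio=0.5):                     # blob area / bounding-box area
    """RGB filtering, thresholding, then connected-component analysis
    with bounding-box and area-ratio checks (illustrative values only)."""
    mask = cv2.inRange(bgr_image, LOWER_RED, UPPER_RED)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = []
    for i in range(1, num):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        box_ratio = w / float(h)
        area_ratio = area / float(w * h)
        if min_box_ratio <= box_ratio <= max_box_ratio and area_ratio >= min_area_ratio:
            candidates.append((x, y, w, h))
    return candidates
```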

Traffic Signal Detection and Recognition Using a Color Segmentation in a HSI Color Model (HSI 색상 모델에서 색상 분할을 이용한 교통 신호등 검출과 인식)

  • Jung, Min Chul
    • Journal of the Semiconductor & Display Technology / v.21 no.4 / pp.92-98 / 2022
  • This paper proposes a new method of traffic signal detection and recognition in an HSI color model. The proposed method first converts an ROI image from the RGB model to the HSI model to segment the color of a traffic signal. Second, the segmented colors are dilated by morphological processing to connect the traffic signal light with the signal light case, and finally the traffic signal light and case are extracted by their aspect ratio using connected component analysis (see the sketch below). The extracted components represent the detected and recognized traffic signal lights. The proposed method is implemented in C on a Raspberry Pi 4 system with a camera module for real-time image processing. The system was mounted in a moving vehicle and recorded video like a vehicle black box; each frame of the recorded video was extracted and used to test the proposed method. The results show that the proposed method successfully detects and recognizes traffic signals.
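
A hedged sketch of the segmentation, dilation, and aspect-ratio pipeline, using OpenCV's HSV conversion in place of HSI. The hue band, kernel size, and aspect-ratio limits are assumptions for illustration only.

```python
import cv2
import numpy as np

def detect_signal_hsv(roi_bgr,
                      hue_range=(40, 90),          # hypothetical green hue band
                      min_aspect=1.5, max_aspect=4.0):
    """Color segmentation in HSV (standing in for HSI), morphological dilation
    to join the lamp with its case, then aspect-ratio filtering of the
    connected components (illustrative parameters)."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv,
                       np.array([hue_range[0], 80, 80]),
                       np.array([hue_range[1], 255, 255]))
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    dilated = cv2.dilate(mask, kernel, iterations=2)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(dilated, connectivity=8)
    detections = []
    for i in range(1, num):                        # skip the background label
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        if min_aspect <= aspect <= max_aspect:     # signal case is elongated
            detections.append((x, y, w, h))
    return detections
```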

Personal Recognition Method using Coupling Image of ECG Signal (심전도 신호의 커플링 이미지를 이용한 개인 인식 방법)

  • Kim, Jin Su;Kim, Sung Huck;Pan, Sung Bum
    • Smart Media Journal / v.8 no.3 / pp.62-69 / 2019
  • Electrocardiogram (ECG) signals are difficult to counterfeit and can easily be acquired from both wrists. In this paper, we propose a method of generating a coupling image using the directional information of ECG signals, together with a personal recognition method that uses it. The proposed coupling image is generated from the forward ECG signal and the rotated inverse ECG signal aligned on the R-peak, and the generated coupling image shows a unique pattern and brightness (a hedged sketch of one possible construction follows below). In addition, the R-peak data are augmented by combining ECG signals of the same beat, which improves personal recognition performance. The proposed convolutional neural network extracts pattern and brightness characteristics from the generated coupling image and uses multiple pooling layers to reduce the data size and improve network speed. The experiment uses public ECG data from 47 people and compares the proposed network against the five best-performing public networks. Experimental results show that the proposed network achieves the highest recognition performance at 99.28%, confirming the potential of the personal recognition method.
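
The abstract does not spell out how the coupling image is built, so the following is only a guess at one plausible realization: a forward beat segment centered on the R-peak is combined with its time-reversed copy via an outer product. The window size and normalization are assumptions, not the paper's method.

```python
import numpy as np

def coupling_image(ecg, r_peak, half_window=128):
    """Hypothetical coupling image: outer product of a forward beat segment
    and its time-reversed (rotated inverse) copy around the R-peak,
    producing a 2-D pattern with position-dependent brightness."""
    beat = ecg[r_peak - half_window: r_peak + half_window].astype(float)
    beat = (beat - beat.min()) / (beat.max() - beat.min() + 1e-8)  # normalize to [0, 1]
    reversed_beat = beat[::-1]                                     # rotated/inverse segment
    return np.outer(beat, reversed_beat)                           # square image, side 2*half_window
```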

Optimization Numeral Recognition Using Wavelet Feature Based Neural Network. (웨이브렛 특징 추출을 이용한 숫자인식 의 최적화)

  • 황성욱;임인빈;박태윤;최재호
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2003.06a / pp.94-97 / 2003
  • In this paper, we propose an optimized recognition training scheme for an MLP (multilayer perceptron) neural network that applies the wavelet transform to numeral images with added noise, and we apply this system to numeral recognition. Because the wavelet transform preserves the most important information of the original image, the size of the input vector, and hence the number of network nodes and the learning convergence time, is reduced (a sketch of such a feature pipeline follows below). For the training vectors, we examine how the recognition rate changes as noise is gradually added to the data: the original images and images with 0, 10, 20, 30, 40, and 50 dB of added noise are used for training. For test images with 30-50 dB of added noise, the recognition rate differs little between training on the original images and training on the noisy images. For test images with 0-20 dB of added noise, however, training on images with 0-50 dB of added noise improves the numeral recognition rate by 9 percent.
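
A minimal sketch of a wavelet-feature MLP pipeline along these lines, using PyWavelets and scikit-learn. The wavelet, decomposition level, network size, and noise model are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(img, wavelet="haar", level=2):
    """Keep only the coarse approximation sub-band as a compact feature
    vector, shrinking the MLP input while preserving the main shape."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    return coeffs[0].ravel()            # low-frequency approximation coefficients

def add_noise(img, snr_db):
    """Add white Gaussian noise at a given SNR in dB (illustrative)."""
    signal_power = np.mean(img.astype(float) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    return img + np.random.randn(*img.shape) * np.sqrt(noise_power)

def train(numeral_images, labels, snrs=(0, 10, 20, 30, 40, 50)):
    """Train on clean numerals plus copies corrupted at several SNRs."""
    X, y = [], []
    for img, lab in zip(numeral_images, labels):
        for snr in (None,) + tuple(snrs):
            noisy = img if snr is None else add_noise(img, snr)
            X.append(wavelet_features(noisy))
            y.append(lab)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(np.array(X), np.array(y))
    return clf
```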

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method that classifies four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features, without linguistic features, and classify the emotional pattern with a neural network. From the facial image, features are selected around the mouth, eyes, and eyebrows, and the extracted feature vectors are reduced to low-dimensional feature vectors by Principal Component Analysis (PCA) (see the sketch below). We then propose a method that fuses the recognition results obtained from the facial image and the speech signal.
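
A small sketch of the PCA step that remakes low-dimensional facial feature vectors before classification, using scikit-learn. The feature dimension, number of components, network size, and the stand-in data are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Hypothetical facial feature vectors (mouth/eye/eyebrow measurements) and labels;
# the paper's actual features and classifier are not specified here.
X_face = np.random.rand(200, 60)           # 200 samples, 60 raw geometric features
y = np.random.randint(0, 4, size=200)      # 0: happy, 1: sad, 2: angry, 3: surprise

pca = PCA(n_components=10)                 # remake a low-dimensional feature vector
X_low = pca.fit_transform(X_face)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
clf.fit(X_low, y)
print(clf.predict(pca.transform(X_face[:5])))
```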

A study on the recognition to road traffic sign and traffic signal for autonomous navigation (자율주행을 위한 교통신호 인식에 관한 연구)

  • 고현민;이호순;노도환
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1997.10a / pp.1375-1378 / 1997
  • In this paper, we present an algorithm for recognizing road traffic signs and traffic signals in a video image for autonomous navigation. First, traffic signs on the road are detected using boundary-point estimation from several scan lines within the detected lane; an index matrix method is then used to determine which sign it is. Traffic signal recognition is performed on a window reduced to the several scan lines where the signal is expected to appear, using a line-profile concept (a sketch of such a scan-line profile follows below).
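
A rough sketch of a scan-line line profile and boundary-point estimation, under the assumption that boundary points correspond to large intensity-gradient steps along each scan line; the threshold and scan-line rows are illustrative, and the paper's index matrix step is not reproduced.

```python
import numpy as np

def line_profile(gray, row):
    """Intensity profile along one horizontal scan line of a grayscale frame."""
    return gray[row, :].astype(float)

def boundary_points(profile, grad_thresh=30.0):
    """Estimate boundary points on a scan line as positions where the
    intensity gradient exceeds a threshold (illustrative value)."""
    grad = np.abs(np.diff(profile))
    return np.where(grad > grad_thresh)[0]

# Hypothetical usage on a grayscale frame `gray`, with scan-line rows chosen
# inside the detected lane or around the expected signal position:
# for row in (200, 220, 240):
#     pts = boundary_points(line_profile(gray, row))
```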

Speech Activity Decision with Lip Movement Image Signals (입술움직임 영상신호를 고려한 음성존재 검출)

  • Park, Jun;Lee, Young-Jik;Kim, Eung-Kyeu;Lee, Soo-Jong
    • The Journal of the Acoustical Society of Korea / v.26 no.1 / pp.25-31 / 2007
  • This paper describes an attempt to prevent external acoustic noise from being misrecognized as speech to be recognized. To this end, the speech activity detection stage of speech recognition checks the lip-movement image signal of the speaker in addition to the acoustic energy. Successive images are captured with a PC camera, the presence or absence of lip movement is discriminated, and the lip-movement image data are stored in shared memory that is shared with the recognition process. In the speech activity detection stage, the preprocessing phase of speech recognition, the data stored in shared memory are consulted to verify whether the acoustic energy comes from the speaker's speech (a sketch of this combined decision follows below). The speech recognition process and the image process were connected and tested successfully: when the speaker faced the camera and spoke, the recognition result was output normally, whereas when the speaker spoke without facing the camera, no recognition result was output. In other words, if no lip movement is identified even though acoustic energy is present, the input is regarded as acoustic noise.
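
A minimal sketch of the combined decision described above: speech activity is declared only when the acoustic energy and the lip-movement flag agree. The energy threshold and the shared-memory plumbing are assumptions for illustration.

```python
import numpy as np

ENERGY_THRESHOLD = 0.01   # hypothetical frame-energy threshold

def frame_energy(samples):
    """Mean-square energy of one audio frame."""
    return float(np.mean(samples.astype(float) ** 2))

def speech_active(audio_frame, lip_moving):
    """Declare speech activity only when the acoustic energy is high AND the
    image process has flagged lip movement; energy without lip movement is
    treated as external acoustic noise."""
    return frame_energy(audio_frame) > ENERGY_THRESHOLD and lip_moving

# Hypothetical usage: `lip_moving` would be read from the shared memory
# written by the lip-movement image process.
# if speech_active(frame, lip_moving):
#     feed_to_recognizer(frame)
```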

Development of Vision Technology for the Test of Soldering and Pattern Recognition of Camera Back Cover (카메라 Back Cover의 형상인식 및 납땜 검사용 Vision 기술 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 1999.10a / pp.119-124 / 1999
  • This paper presents a new approach to pattern recognition of the camera back cover and inspection of its soldering. For real-time implementation of the pattern recognition and soldering test, the MVB-03 vision board is used. Images can be captured from a standard CCD monochrome camera at resolutions up to 640×480 pixels, and various options are available for color cameras, asynchronous camera reset, and line-scan cameras. Image processing is performed using a Texas Instruments TMS320C31 digital signal processor. Image display uses a standard composite video monitor and supports non-destructive color overlay. System-level programming is possible using C30 machine code, and application software can be written in Borland C++ or Visual C++.
