• Title/Abstract/Keywords: Human computer interaction

Search results: 620 items (processing time: 0.032 s)

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2003 Summer Conference of the Institute of Electronics Engineers of Korea, Vol. IV
    • /
    • pp.1960-1963
    • /
    • 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficiently integrated method to detect and track faces. The algorithm combines various visual cues: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine) based pattern detection is performed on the candidate region extracted from motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the rate and accuracy of detection. Experimental results show a detection rate of 91% with very few false alarms, running at about 4 frames per second for 640 by 480 pixel images on a 1 GHz Pentium IV.

  • PDF
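The cue-combination step this abstract describes can be sketched as follows. The frame-differencing threshold and the simple RGB skin rule are illustrative assumptions, not the authors' actual parameters, and the final pattern classifier (the ICA-SVM stage) is left out:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, thresh=25):
    """Frame differencing: pixels whose summed channel change exceeds `thresh`."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)).sum(axis=2)
    return diff > thresh

def skin_mask(frame):
    """A crude RGB skin rule (illustrative only, not the paper's color model)."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

def candidate_region(prev_frame, curr_frame):
    """Intersect motion and skin-color cues; a pattern classifier such as
    the ICA-SVM detector would then run only inside this bounding box."""
    mask = motion_mask(prev_frame, curr_frame) & skin_mask(curr_frame)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())  # (left, top, right, bottom)
```

Restricting the expensive classifier to the cue-derived candidate region is what makes the combined detector fast enough for near-real-time operation.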

Interaction Protocol on the COLAB Platform

  • 권대현;서영호;김용;황대준
    • Korean Society for Emotion and Sensibility: Conference Proceedings
    • /
    • Proceedings of the 1998 Spring Conference of the Korean Society for Emotion and Sensibility
    • /
    • pp.304-308
    • /
    • 1998
  • Technical advances in computer networks and the Internet have brought a new communication era and provide effective solutions for cooperative work and research. These advances introduced the concept of cyberspace, in which many people engage in research and projects at different locations at the same time. In this paper, we present a fast and effective interaction protocol adapted to the COLAB (COLlaborative LABoratory) system, which uses a high-speed ATM network. The COLAB system is developed for researchers working on large projects in a collaborative research environment. The interaction protocol we developed supports multi-session and multi-channel operation on the TCP/IP network and provides a more flexible solution for controlling multimedia data on the network.

  • PDF
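The multi-session, multi-channel transport described above can be sketched by framing each message with a channel and session header over a single TCP byte stream. The header layout here is a hypothetical illustration, not the COLAB protocol's actual wire format:

```python
import struct

# Hypothetical frame layout: 2-byte channel id, 2-byte session id,
# 4-byte payload length, then the payload (network byte order).
HEADER = struct.Struct("!HHI")

def pack_frame(channel_id: int, session_id: int, payload: bytes) -> bytes:
    """Prefix a payload with its channel/session header."""
    return HEADER.pack(channel_id, session_id, len(payload)) + payload

def unpack_frames(stream: bytes):
    """Split a received byte stream back into (channel, session, payload)
    frames, so one TCP connection can multiplex several sessions and
    channels (e.g. separate channels for control and multimedia data)."""
    frames, offset = [], 0
    while offset < len(stream):
        channel_id, session_id, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        frames.append((channel_id, session_id, stream[offset:offset + length]))
        offset += length
    return frames
```

A real implementation would read from a socket incrementally and buffer partial frames; this sketch only shows the framing itself.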

Multi-channel Speech Enhancement Using Blind Source Separation and Cross-channel Wiener Filtering

  • Jang, Gil-Jin;Choi, Chang-Kyu;Lee, Yong-Beom;Kim, Jeong-Su;Kim, Sang-Ryong
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 23, No. 2E
    • /
    • pp.56-67
    • /
    • 2004
  • Despite abundant research outcomes on blind source separation (BSS) in many types of simulated environments, its performance is still not satisfactory for real environments. The major obstacles appear to be the finite filter length of the assumed mixing model and nonlinear sensor noise. This paper presents a two-step speech enhancement method with multiple microphone inputs. The first step performs a frequency-domain BSS algorithm to produce multiple outputs without any prior knowledge of the mixed source signals. The second step further removes the remaining cross-channel interference by a spectral cancellation approach using a probabilistic source absence/presence detection technique. The desired primary source is detected in every frame of the signal, and the secondary source is estimated in the power spectral domain using the other BSS output as a reference interfering source. The estimated secondary source is then subtracted to reduce the cross-channel interference. Our experimental results show good separation and enhancement performance on real recordings of speech and music signals compared to conventional BSS methods.
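The second step's spectral cancellation can be sketched as follows. The gain and flooring parameters are illustrative assumptions, and the probabilistic source absence/presence detection is omitted:

```python
import numpy as np

def cross_channel_subtract(primary_stft, reference_stft, gain=1.0, floor=1e-3):
    """Reduce residual cross-channel interference in the power spectral
    domain: the other BSS output serves as a reference for the interfering
    source, its power spectrum is scaled by `gain` and subtracted, and the
    result is floored to avoid negative power (illustrative parameters)."""
    p_pow = np.abs(primary_stft) ** 2
    r_pow = np.abs(reference_stft) ** 2
    clean_pow = np.maximum(p_pow - gain * r_pow, floor * p_pow)
    # Keep the primary channel's phase; rescale only its magnitude.
    return primary_stft * np.sqrt(clean_pow / np.maximum(p_pow, 1e-12))
```

In the paper's scheme the subtraction would be gated frame by frame by the source absence/presence probabilities; here the subtraction is applied unconditionally for brevity.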

3D Pose Estimation of a Human Arm for Human-Computer Interaction - Application of Mechanical Modeling Techniques to Computer Vision

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea, SC
    • /
    • Vol. 42, No. 4
    • /
    • pp.11-18
    • /
    • 2005
  • Humans use not only spoken language but also body language to express themselves, and the most representative form of body language is, of course, the use of the hands and arms. Motion analysis of the human arm is therefore very important for human-computer interaction. In this light, this paper proposes a vision-based 3D pose estimation method for the human arm, as follows. First, noting that arm motion is produced mostly by revolute joints, we propose a 3D motion analysis technique for revolute joints using a computer vision system. For this, kinematic modeling techniques for revolute joints are combined with the perspective projection model of computer vision. Next, this 3D motion analysis technique is applied to the vision-based 3D pose estimation of the human arm: the basic idea is to apply the 3D motion recovery algorithm for a revolute joint to each joint of the arm in turn. Targeting human-computer interaction applications for ubiquitous computing and virtual reality, the algorithm focuses on obtaining a closed-form solution with a high level of accuracy.
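Under strong simplifying assumptions (a single revolute joint at the origin rotating a link of known length in a plane of known depth), the closed-form recovery described above reduces to a few lines. This toy sketch is not the paper's full multi-joint algorithm:

```python
import math

def project(f, point3d):
    """Perspective projection model: (X, Y, Z) -> (f*X/Z, f*Y/Z)."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

def joint_angle_from_image(image_point):
    """Closed-form joint angle for a revolute joint at the origin whose
    link tip moves on a circle of radius L in the plane Z = z0.
    The tip is at (L*cos(t), L*sin(t), z0); its projection is
    (f*L*cos(t)/z0, f*L*sin(t)/z0), so the angle is t = atan2(v, u)."""
    u, v = image_point
    return math.atan2(v, u)
```

For a real arm, each joint's axis and offset come from the kinematic model, and the recovered angle of one joint feeds the recovery of the next joint down the chain.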

Smart Deaf Emergency Application Based on Human-Computer Interaction Principles

  • Ahmed, Thowiba E;Almadan, Naba Abdulraouf;Elsadek, Alma Nabil;Albishi, Haya Zayed;Al-Qahtani, Norah Eid;Alghamdi, Sarah Khaled
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 4
    • /
    • pp.284-288
    • /
    • 2021
  • Human-computer interaction is a discipline concerned with the design, evaluation, and implementation of interactive systems for human use. In this paper we suggest designing a smart deaf emergency application based on Human-Computer Interaction (HCI) principles. Nowadays everything around us is becoming smart: people already have smartphones, smartwatches, smart cars, smart houses, and many other technologies that offer a wide range of useful options. We therefore propose a smart mobile application using Text Telephone or TeleTYpe (TTY) technology to help people with deafness or impaired hearing communicate and seek help in emergencies. Deaf people find it difficult to communicate with others, especially in an emergency, and it is stipulated that deaf people in all societies must have the same right to use emergency services as everyone else. With the proposed application, people with deafness or impaired hearing can request help with one touch; their location is determined and their status is sent to the emergency services through the application, making it easier to reach them and provide assistance. The application covers several emergency categories (traffic, police, road safety, ambulance, fire fighting). The expected results of this design are the interactive, experiential, efficient, and comprehensive features of human-computer interaction technology, which may achieve user satisfaction.
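The one-touch request described above might carry a payload like the following sketch. The field names and the string encoding of categories are assumptions for illustration, not the paper's actual design:

```python
import json
import time

# The five emergency categories listed in the abstract.
CATEGORIES = {"traffic", "police", "road safety", "ambulance", "fire fighting"}

def build_request(category, latitude, longitude, user_status):
    """Assemble the text-based (TTY-style) emergency request: category,
    device location, and a free-text user status, serialized as JSON."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return json.dumps({
        "category": category,
        "location": {"lat": latitude, "lon": longitude},
        "status": user_status,          # user status forwarded to responders
        "timestamp": int(time.time()),  # when the request was sent
        "channel": "TTY",               # text channel for deaf users
    })
```

The point of the sketch is that a single touch can populate every field automatically except the category, which the five classification buttons select.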

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction

  • 박동건;강경민;배진우;한지형
    • The Journal of Korea Robotics Society
    • /
    • Vol. 14, No. 1
    • /
    • pp.22-30
    • /
    • 2019
  • For effective human-robot interaction, a robot needs not only to understand the current situational context well, but also to convey its understanding to the human participant in an efficient way. The most convenient way to deliver the robot's understanding is for the robot to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. Thus, this paper proposes a robot vision to audio description method using deep learning. The applied model is a pipeline of two deep learning models: one generating a natural-language sentence from robot vision, and one generating voice from the generated sentence. We also conduct a real robot experiment to show the effectiveness of our method in human-robot interaction.
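The two-stage pipeline (vision → sentence → voice) can be sketched as below. Both stages are stubs standing in for the trained captioning and TTS networks, so only the composition of the pipeline is shown:

```python
def generate_caption(image) -> str:
    """Stage 1 (stub): image -> natural-language sentence.
    A real implementation would run a trained captioning network."""
    return "a person is handing a cup to the robot"

def synthesize_speech(sentence: str) -> bytes:
    """Stage 2 (stub): sentence -> waveform bytes.
    A real implementation would call a TTS model here."""
    return sentence.encode("utf-8")  # placeholder for audio samples

def describe_scene(image):
    """Pipeline: caption the scene, then speak the caption.
    The sentence is also returned so it can be logged or displayed."""
    sentence = generate_caption(image)
    audio = synthesize_speech(sentence)
    return sentence, audio
```

The design choice worth noting is that the intermediate sentence is an explicit, human-readable artifact: it can be inspected, logged, or shown on a screen independently of the speech output.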

Bringing Human Computer Interaction into Computer Science Classrooms: A Case Study on Teaching User-Centric Design to Computer Science Students

  • 정영주;정구철
    • Journal of Practical Engineering Education
    • /
    • Vol. 2, No. 1
    • /
    • pp.164-173
    • /
    • 2010
  • In recent decades, a focus on usability and an emphasis on user-centric design have become more prevalent in the field of software design. However, it is not always easy for engineers and computer scientists to put themselves in the users' shoes. Human-computer interaction (HCI) is a field of study that focuses on making technologies easier and more intuitive for users. This paper is based on teaching HCI skills to undergraduate computer science students in a software application design course. Specifically, this paper describes: first, the HCI skills taught to the students; second, the tendencies and challenges of the students in creating user-centric applications; and lastly, suggestions based on our findings for promoting HCI in developing user-friendly software. While firmer conclusions are reserved for more formal empirical studies, the findings in this paper offer implications and suggestions for promoting a user-centric approach among software designers and developers in the technology industry.

  • PDF

A Cognitive and Emotional Strategy for Computer Game Design

  • 최동성;김호영;김진우
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 10, No. 1
    • /
    • pp.165-187
    • /
    • 2000
  • The computer game market has grown rapidly, with numerous games produced all over the world. Most games are developed to make players have fun while playing. However, there has been little research addressing which elements of games create the perception of fun. The objectives of this research are to identify which features provide fun, and then to analyze these aspects both qualitatively and quantitatively. Through surveys with game players and developers, this study provides several inputs regarding what makes certain computer games fun. There are many characteristics that fun games share; by grouping and organizing these traits, then compiling the data for use in an AHP (Analytic Hierarchy Process), we measured the disparity in the perception of 'fun' between game developers and game players.

  • PDF
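The AHP step mentioned in the abstract derives priority weights from a pairwise-comparison matrix. A minimal sketch using the principal-eigenvector method and Saaty's consistency ratio follows; the paper does not specify its exact AHP variant, so this is one standard formulation:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector from an AHP pairwise-comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # normalize to a priority vector
    # Saaty's random indices for n = 1..5 (0 means CR is undefined/zero).
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    cr = ci / RI[n] if RI.get(n) else 0.0
    return w, cr
```

A CR below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent, which matters when the comparisons come from survey responses as in this study.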

Design of Parallel Input Pattern and Synchronization Method for Multimodal Interaction

  • 임미정;박범
    • Journal of the Ergonomics Society of Korea
    • /
    • Vol. 25, No. 2
    • /
    • pp.135-146
    • /
    • 2006
  • Multimodal interfaces are recognition-based technologies that interpret and encode hand gestures, eye gaze, movement patterns, speech, physical location, and other natural human behaviors. A modality is the type of communication channel used for interaction; it also covers the way an idea is expressed or perceived, or the manner in which an action is performed. Multimodal interfaces constitute the multimodal interaction processes that occur, consciously or unconsciously, while humans communicate with computers, so their input/output forms differ from those of existing interfaces. Moreover, different people show different cognitive styles, and individual preferences play a role in the selection of one input mode over another. Therefore, to develop an effective design for multimodal user interfaces, the input/output structure needs to be formulated through research on human cognition. This paper analyzes the characteristics of each human modality and suggests combination types of modalities and dual coding for formulating multimodal interaction. It then designs a multimodal language and an input synchronization method according to the granularity of input synchronization. To effectively guide the development of next-generation multimodal interfaces, substantial cognitive modeling will be needed to understand the temporal and semantic relations between different modalities, their joint functionality, and their overall potential for supporting computation in different forms. This paper is expected to show multimodal interface designers how to organize and integrate human input modalities when interacting with multimodal interfaces.
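Input synchronization across parallel modalities can be approximated by a simple time-window grouping of recognized events. The window size and the event fields here are illustrative assumptions, not the paper's actual synchronization design:

```python
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str     # e.g. "speech", "gesture", "gaze"
    content: str      # recognized content of the event
    timestamp: float  # seconds

def fuse_events(events, window=0.5):
    """Group events from parallel input modalities that arrive within
    `window` seconds of each other, so that e.g. the utterance
    "put it there" and a simultaneous pointing gesture form one
    multimodal command (the 0.5 s window is an illustrative choice)."""
    events = sorted(events, key=lambda e: e.timestamp)
    groups, current = [], []
    for e in events:
        if current and e.timestamp - current[-1].timestamp > window:
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups
```

Finer-grained synchronization, as the paper's granularity-based design suggests, would additionally align events at the word or gesture-phase level rather than treating each recognized event as atomic.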