• Title/Summary/Keyword: Human computer


Identifying Strategies to Address Human Cybersecurity Behavior: A Review Study

  • Hakami, Mazen;Alshaikh, Moneer
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.299-309
    • /
    • 2022
  • The human factor represents a very challenging issue for organizations: it is responsible for many cybersecurity incidents caused by noncompliance with organizational security policies. In this paper we conduct a comprehensive review of the literature to identify strategies for addressing the human factor. Security awareness, training, and education programs are the main such strategy. Scholars have consistently argued for the importance of security awareness in preventing incidents arising from human behavior.

Implementation of Human and Computer Interface for Detecting Human Emotion Using Neural Network (인간의 감정 인식을 위한 신경회로망 기반의 휴먼과 컴퓨터 인터페이스 구현)

  • Cho, Ki-Ho;Choi, Ho-Jin;Jung, Seul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.9
    • /
    • pp.825-831
    • /
    • 2007
  • In this paper, an interface between a human and a computer is presented. The human and computer interface (HCI) serves as another area of human-machine interfaces. The methods we used for the HCI are voice recognition and image recognition for detecting a human's emotional state. The idea is that the computer can recognize the present emotional state of the human operator and amuse him/her in various ways, such as playing music, searching the web, and talking. For the image recognition process, the human face is captured, and the eyes and mouth are selected from the facial image for recognition. To train images of the mouth, we use the Hopfield Net. The results show 88%~92% recognition of emotion. For voice recognition, the neural network shows 80%~98% recognition accuracy.
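The Hopfield-network pattern training mentioned in the abstract can be sketched in a few lines. This is a minimal illustration with Hebbian learning on toy 8-bit patterns; the paper's actual mouth-image data and sizes are not reproduced here:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the mean of outer products of +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, x, steps=10):
    """Synchronous updates until the state stops changing (or step limit)."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Store two orthogonal toy patterns and recover one from a corrupted probe.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]  # flip one bit
restored = recall(W, noisy)
print(np.array_equal(restored, patterns[0]))  # True: stored pattern recovered
```

With orthogonal stored patterns, a probe with one flipped bit settles back to the stored pattern in a single synchronous update.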

A Novel Computer Human Interface to Remotely Pick up Moving Human's Voice Clearly by Integrating Real-time Face Tracking and Microphones Array

  • Hiroshi Mizoguchi;Takaomi Shigehara;Yoshiyasu Goto;Hidai, Ken-ichi;Taketoshi Mishima
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.75-80
    • /
    • 1998
  • This paper proposes a novel computer human interface, named Virtual Wireless Microphone (VWM), which utilizes computer vision and signal processing. It integrates real-time face tracking and sound signal processing. VWM is intended as a speech signal input method for human computer interaction, especially for an autonomous intelligent agent that interacts with humans, such as a digital secretary. Utilizing VWM, the agent can clearly hear the human master's voice remotely, as if a wireless microphone were placed just in front of the master.


Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1960-1963
    • /
    • 2003
  • For applications such as video surveillance and human computer interfaces, we propose an efficiently integrated method to detect and track faces. Various visual cues are combined in the algorithm: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine) based pattern detection is performed on the candidate regions extracted using motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the rate and accuracy of detection. Experimental results show that our detection rate is 91% with very few false alarms, running at about 4 frames per second for 640 by 480 pixel images on a Pentium IV 1 GHz.
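The cue-combination idea in the abstract (cheap motion and skin-color cues gating a more expensive pattern classifier) can be sketched as follows. The thresholds, the toy frames, and the crude RGB skin rule are illustrative assumptions, not the paper's ICA-SVM detector:

```python
import numpy as np

def motion_mask(prev, curr, thresh=30):
    """Pixels whose intensity changed by more than `thresh` between frames."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def skin_mask(rgb):
    """Crude RGB skin rule, standing in for a learned color model."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def candidate_regions(prev_gray, curr_gray, curr_rgb):
    """Only pixels passing BOTH cheap cues reach the pattern classifier."""
    return motion_mask(prev_gray, curr_gray) & skin_mask(curr_rgb)

# Toy 4x4 frames: one moving, skin-colored pixel at (1, 1).
prev_gray = np.zeros((4, 4), np.uint8)
curr_gray = np.zeros((4, 4), np.uint8)
curr_gray[1, 1] = 200
curr_rgb = np.zeros((4, 4, 3), np.uint8)
curr_rgb[1, 1] = (200, 120, 80)  # skin-like color

mask = candidate_regions(prev_gray, curr_gray, curr_rgb)
print(mask.sum())  # 1: only the moving, skin-colored pixel survives
```

Gating with cheap cues like this is what lets the full pipeline run at a few frames per second: the expensive classifier is evaluated only on the small surviving candidate set.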


ESTIMATING THE MOTION OF THE HUMAN JOINTS USING OPTICAL MOTION CAPTURE SYSTEM

  • Park, Jun-Young;Kyota, Fumihito;Saito, Suguru;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.764-767
    • /
    • 2009
  • Motion capture systems allow the precise positions of markers on the human body to be measured in real time. These captured motion data, the marker positions, have to be fitted to a human skeleton model to represent the motion of the human. Typical human skeleton models approximate the joints using a ball joint model. However, because this model cannot represent the human skeleton precisely, errors arise between the motion data and the movements of the simplified skeleton model. In this paper we propose a method for measuring the translation component of the wrist and elbow joints of the upper limb using an optical motion capture system. We then study the errors between the ball joint model and the acquired motion data, and discuss the problem of estimating the motion of human joints with an optical motion capture system.
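One way to expose the translation component that a pure ball-joint model cannot capture is to fit a fixed rotation center to marker data and inspect the residual. The sketch below fits a sphere center by linear least squares to a synthetic marker trajectory; the trajectory and radius are illustrative assumptions, not the paper's data:

```python
import numpy as np

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit: solve 2 c . p - u = |p|^2
    for (c, u), where u = |c|^2 - r^2; c is the estimated center."""
    A = np.hstack([2 * points, -np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]

# Synthetic marker positions on a sphere of radius 0.3 around a joint
# center at (1, 2, 3) -- a non-degenerate (non-planar) trajectory.
theta = np.linspace(0.3, np.pi - 0.3, 10)
phi = np.linspace(0, 2 * np.pi, 10, endpoint=False)
T, P = np.meshgrid(theta, phi)
dirs = np.stack([np.sin(T) * np.cos(P),
                 np.sin(T) * np.sin(P),
                 np.cos(T)], axis=-1).reshape(-1, 3)
center = np.array([1.0, 2.0, 3.0])
pts = center + 0.3 * dirs

est = fit_sphere_center(pts)
print(np.round(est, 6))  # recovers the joint center (1, 2, 3)
```

For a perfect ball joint the fit residual is zero; on real wrist and elbow data a nonzero residual reflects exactly the joint translation the abstract sets out to measure.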


Tactile Sensation Display with Electrotactile Interface

  • Yarimaga, Oktay;Lee, Jun-Hun;Lee, Beom-Chan;Ryu, Je-Ha
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.145-150
    • /
    • 2005
  • This paper presents an Electrotactile Display System (ETCS). One of the most important human sensory systems for human computer interaction is the sense of touch, which can be conveyed to humans through tactile output devices. To realize the sense of touch, an electrotactile display produces controlled, localized touch sensations on the skin by passing small electric currents. In electrotactile stimulation, the mechanoreceptors in the skin may be stimulated individually to display sensations of vibration, touch, itch, tingle, pressure, etc. on the finger, palm, arm, or any suitable location of the body by using appropriate electrodes and waveforms. We developed an ETCS and investigated the effectiveness of the proposed system in terms of the perception of surface roughness when stimulating the palmar side of the hand with different waveforms, and the perception of direction and location information through the forearm. Positive and negative pulse trains were tested with different current intensities and electrode switching times on the forearm or finger of the user, with an electrode-embedded armband, in order to investigate how subjects recognize displayed patterns and directions of stimulation.
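The positive and negative pulse trains mentioned above can be sketched as a charge-balanced biphasic waveform. The pulse rate, width, and current amplitude below are illustrative parameters, not the paper's actual stimulation settings:

```python
import numpy as np

def biphasic_pulse_train(rate_hz, pulse_width_s, amplitude_ma, duration_s,
                         fs=100_000):
    """Each period: +amplitude for one pulse width, -amplitude for the next
    pulse width, then zero until the period ends. Sampled at fs Hz."""
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    phase = t % (1.0 / rate_hz)  # time elapsed within the current period
    wave = np.zeros(n)
    wave[phase < pulse_width_s] = amplitude_ma
    wave[(phase >= pulse_width_s) & (phase < 2 * pulse_width_s)] = -amplitude_ma
    return wave

# 100 Hz train of 200-microsecond pulses at 2 mA, over one 10 ms period.
w = biphasic_pulse_train(rate_hz=100, pulse_width_s=0.0002,
                         amplitude_ma=2.0, duration_s=0.01)
print(w.max(), w.min(), w.sum())  # 2.0 -2.0 0.0
```

The equal positive and negative phases make the waveform charge-balanced (it sums to zero), which is the usual safety requirement for passing current through skin.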


Trends on Human/Robot Interface Research (휴먼/로봇 인터페이스 연구동향 분석)

  • Im, Chang-Ju;Im, Chi-Hwan
    • Journal of the Ergonomics Society of Korea
    • /
    • v.21 no.2
    • /
    • pp.101-111
    • /
    • 2002
  • The intelligent robots developed recently are no longer the conventional robots widely known as industrial robots. Such a robot is a computer system embedded in a machine that uses the machine as a medium not only for communication between the human and the computer but also for physical interaction among the human, the computer, and their environment. Recent advances in computer technology have made it possible to create several new types of human-computer interaction realized through intelligent machines. There is a continuing need for a better understanding of how to design human/robot interfaces (HRI) to achieve a more natural and efficient flow of information and feedback between robot systems and their users in both directions. In this paper, we explain the concept and scope of HRI and review the current research trends of domestic and foreign HRI. Recommended research directions for the near future are also discussed, based upon a comparative study of domestic and foreign HRI technology.

Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai;Na, In Seop;Kim, Young Chul;Kim, Soo Hyung
    • Smart Media Journal
    • /
    • v.6 no.3
    • /
    • pp.49-56
    • /
    • 2017
  • Images and videos that include the human face contain a lot of information. Therefore, accurately extracting the human face is a very important issue in the field of computer vision. However, in real life, human faces have various shapes and textures. To adapt to these variations, a model-based approach is one of the best options, in which unknown data can be represented by the model that was built. However, the model-based approach has its weaknesses when the motion between two frames is large, whether a sudden change of pose or fast movement. In this paper, we propose an enhanced human face-tracking model. The approach includes human face detection and motion estimation using Cascaded Convolutional Neural Networks, and continuous face tracking and model correction steps using the Active Appearance Model. The proposed system detects the human face in the first input frame and initializes the model. On later frames, Cascaded CNN face detection is used to estimate the target motion, such as location or pose, before applying the old model and fitting it to the new target.
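The detect-then-track loop described in the abstract can be sketched schematically: a detector initializes the model on the first frame, and on later frames a coarse motion estimate repositions the model before refitting. The toy detector and frames below are stand-ins for the paper's Cascaded CNN and Active Appearance Model:

```python
def detect_face(frame):
    """Stand-in detector: returns the (row, col) of the brightest pixel."""
    row = max(range(len(frame)), key=lambda i: max(frame[i]))
    return (row, frame[row].index(max(frame[row])))

def track(frames):
    model_pos = detect_face(frames[0])  # initialize the model on frame 0
    trajectory = [model_pos]
    for frame in frames[1:]:
        # Coarse motion estimate first (Cascaded CNN in the paper), so the
        # model is repositioned before the (omitted) AAM refinement step.
        estimate = detect_face(frame)
        model_pos = estimate
        trajectory.append(model_pos)
    return trajectory

# A bright "face" pixel moving across three 3x3 frames.
frames = [
    [[0, 9, 0], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 9, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 0], [0, 0, 9]],
]
print(track(frames))  # [(0, 1), (1, 1), (2, 2)]
```

The point of this structure is that the model fit always starts near the target, so large inter-frame motion no longer breaks the model-based tracker.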

Human Factor & Artificial Intelligence: For future software security to be invincible, a confronting comprehensive survey

  • Al-Amri, Bayan O;Alsuwat, Hatim;Alsuwat, Emad
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.6
    • /
    • pp.245-251
    • /
    • 2021
  • This work focuses on the current features and characteristics of the human element and artificial intelligence (AI), and asks questions about future information security: can we avoid human errors by improving machine learning and AI, or should we invest more in human knowledge, and how can the two best work together? This work presents several related research results on human behavior towards information security, covering elements and factors such as knowledge and attitude and how much they contribute to information security awareness (ISA). It then presents some of the latest studies on AI and their contributions to further improvements that make the field more securely advanced. We aim to open a new line of thinking in the cybersecurity field, and we hope that our suggestion of utilizing the strengths of both human contributions to software security and a well-built AI will lead to better future software security.

Smart Deaf Emergency Application Based on Human-Computer Interaction Principles

  • Ahmed, Thowiba E;Almadan, Naba Abdulraouf;Elsadek, Alma Nabil;Albishi, Haya Zayed;Al-Qahtani, Norah Eid;Alghamdi, Sarah Khaled
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.4
    • /
    • pp.284-288
    • /
    • 2021
  • Human-computer interaction is a discipline concerned with the design, evaluation, and implementation of interactive systems for human use. In this paper we suggest designing a smart deaf emergency application based on Human-Computer Interaction (HCI) principles. Nowadays everything around us is becoming smart: people already have smartphones, smartwatches, smart cars, smart houses, and many other technologies that offer a wide range of useful options. A smart mobile application using Text Telephone or TeleTYpe (TTY) technology has therefore been proposed to help people with deafness or impaired hearing communicate and seek help in emergencies. Deaf people find it difficult to communicate, especially in an emergency, yet it is stipulated that deaf people in all societies must have equal rights to use emergency services. With the proposed application, a deaf or hearing-impaired user can request help with one touch; the location is determined and the user's status is sent to the emergency services through the application, making it easier to reach and assist them. The application covers several emergency categories (traffic, police, road safety, ambulance, fire fighting). The expected results of this design are the interactive, experiential, efficient, and comprehensive features of human-computer interaction technology, which may achieve user satisfaction.