• Title/Summary/Keyword: Action Recognition


Detection of Low-Level Human Action Change for Reducing Repetitive Tasks in Human Action Recognition (사람 행동 인식에서 반복 감소를 위한 저수준 사람 행동 변화 감지 방법)

  • Noh, Yohwan;Kim, Min-Jung;Lee, DoHoon
    • Journal of Korea Multimedia Society / v.22 no.4 / pp.432-442 / 2019
  • Most current human action recognition methods are based on deep learning, which incurs a very high computational cost. In this paper, we propose an action change detection method to reduce repetitive human action recognition tasks. In practice, simple actions are often repeated, and applying costly action recognition methods to repeated actions is time consuming. The proposed method decides whether the action has changed, and action recognition is executed only when an action change has been detected. The action change detection process is as follows. First, the number of non-zero pixels is extracted from the motion history image to generate one-dimensional time-series data. Second, an action change is detected by comparing the difference between the current trend and the local extremum of the time series against a threshold. In experiments, the proposed method achieved 89% balanced accuracy on action change data and reduced action recognition repetition by 61%.
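
As a rough illustration of the change test described above, the sketch below counts non-zero motion history image (MHI) pixels per frame and flags a change when the series departs from its last local extremum by more than a threshold. This is a minimal sketch, not the authors' code: the MHI update, the names `mhi_nonzero_series` and `detect_action_change`, and the exact trend-versus-extremum rule are assumptions for illustration (Python with OpenCV and NumPy).

```python
import cv2
import numpy as np

def mhi_nonzero_series(frames, tau=15, diff_thresh=30):
    """Count non-zero MHI pixels per frame, yielding a 1-D time series."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    series = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        motion = cv2.absdiff(gray, prev) > diff_thresh
        mhi = np.where(motion, tau, np.maximum(mhi - 1, 0))  # simple MHI update
        series.append(int(np.count_nonzero(mhi)))
        prev = gray
    return np.array(series)

def detect_action_change(series, threshold):
    """Flag a change when the current value departs from the last local extremum."""
    changes = []
    extremum = series[0]
    for t in range(1, len(series)):
        # update the most recent local extremum when the trend reverses
        if (series[t] - series[t - 1]) * (series[t - 1] - extremum) < 0:
            extremum = series[t - 1]
        if abs(series[t] - extremum) > threshold:
            changes.append(t)          # run the expensive recognizer only here
            extremum = series[t]
    return changes
```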

Depth Image-Based Human Action Recognition Using Convolution Neural Network and Spatio-Temporal Templates (시공간 템플릿과 컨볼루션 신경망을 사용한 깊이 영상 기반의 사람 행동 인식)

  • Eum, Hyukmin;Yoon, Changyong
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.10 / pp.1731-1737 / 2016
  • In this paper, a method is proposed to recognize human actions as nonverbal expressions; the proposed method is composed of two steps, action representation and action recognition. First, MHI (Motion History Image) is used in the action representation step. This step includes segmentation based on depth information and generates spatio-temporal templates that describe actions. Second, a CNN (Convolutional Neural Network) covering feature extraction and classification is employed in the action recognition step. It extracts convolutional feature vectors and then uses a classifier to recognize actions. The recognition performance of the proposed method is demonstrated by comparing it with other action recognition methods in the experimental results.
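
A minimal sketch of the two-step pipeline described above, assuming PyTorch: a depth-derived MHI (spatio-temporal template) is fed to a small CNN that extracts convolutional features and classifies the action. The tiny architecture and the name `DepthMHIClassifier` are placeholders, not the network used in the paper.

```python
import torch
import torch.nn as nn

class DepthMHIClassifier(nn.Module):
    """Toy CNN that classifies a single-channel spatio-temporal template (MHI)."""
    def __init__(self, num_actions, size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (size // 4) ** 2, num_actions)

    def forward(self, mhi):                 # mhi: (batch, 1, size, size)
        x = self.features(mhi)
        return self.classifier(x.flatten(1))

# usage: an MHI built from segmented depth frames, normalized to [0, 1]
model = DepthMHIClassifier(num_actions=6)
logits = model(torch.rand(2, 1, 64, 64))    # (2, 6) class scores
```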

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.21-28 / 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep learning has been deployed across all fields of computer vision. Action recognition, as an important branch of human perception and computer vision research, has attracted more and more attention. It is a challenging task due to the special complexity of human movement: the same movement may be performed differently by different individuals. Human actions exist as continuous image frames in video, so action recognition requires more computational power than processing static images, and simply applying a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it easier to focus on motion features and improves performance. It also intuitively explains which part of the input the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose an attention-based 3D dense convolutional network (ADD-Net) for recognizing human motion behavior in video.
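
The sketch below illustrates the general idea of combining 3D dense convolution with a channel-attention gate on clip features; it is not the ADD-Net architecture itself, and the module names and layer sizes are assumptions (PyTorch).

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """Squeeze-and-excitation style gate over 3D (clip) feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, T, H, W)
        w = self.fc(x.mean(dim=(2, 3, 4)))     # global pooling over T, H, W
        return x * w[:, :, None, None, None]   # re-weight motion-relevant channels

class DenseBlock3D(nn.Module):
    """Two 3D conv layers whose outputs are concatenated, DenseNet-style."""
    def __init__(self, in_ch, growth=16):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, growth, 3, padding=1)
        self.conv2 = nn.Conv3d(in_ch + growth, growth, 3, padding=1)
        self.attn = ChannelAttention3D(in_ch + 2 * growth)

    def forward(self, x):
        x = torch.cat([x, torch.relu(self.conv1(x))], dim=1)
        x = torch.cat([x, torch.relu(self.conv2(x))], dim=1)
        return self.attn(x)

# usage: an 8-frame RGB clip
feats = DenseBlock3D(in_ch=3)(torch.rand(1, 3, 8, 32, 32))   # (1, 35, 8, 32, 32)
```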

Action Recognition Method in Sports Video Shear Based on Fish Swarm Algorithm

  • Jie Sun;Lin Lu
    • Journal of Information Processing Systems / v.19 no.4 / pp.554-562 / 2023
  • This research offers a sports video action recognition approach based on the fish swarm algorithm, motivated by the low accuracy of existing sports video action recognition methods. A modified fish swarm algorithm is proposed to construct invariant features and reduce their dimensionality; based on this algorithm, local features and global features can be classified. Experimental findings on a typical sports action data set demonstrate that the key details of sports actions are successfully retained by the dimensionality-reduced, fused invariant features. According to this research, the average recognition time of the proposed method for walking, running, squatting, sitting, and bending is less than 326 seconds, and the average recognition rate is higher than 94%. This indicates that the method can significantly improve the performance and efficiency of online sports video motion recognition.
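
Since the paper's modified fish swarm algorithm is not specified here, the sketch below shows only the general flavor of swarm-style feature-subset search for dimensionality reduction: each "fish" is a binary feature mask, scored by a cross-validated classifier as a stand-in objective. All names and behaviors (`fish_swarm_select`, the prey/follow moves) are simplified assumptions, not the authors' method (Python with scikit-learn).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Score a feature subset by cross-validated accuracy (proxy objective)."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=200)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def fish_swarm_select(X, y, n_fish=10, n_iter=20, visual=3, rng=None):
    """Toy artificial-fish-swarm search over binary feature masks."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    fish = rng.random((n_fish, d)) < 0.5            # each fish = a feature mask
    scores = np.array([fitness(f, X, y) for f in fish])
    for _ in range(n_iter):
        best = fish[scores.argmax()].copy()
        for i in range(n_fish):
            # "prey": try a small random perturbation within the visual range
            trial = fish[i].copy()
            flip = rng.choice(d, size=visual, replace=False)
            trial[flip] = ~trial[flip]
            if fitness(trial, X, y) > scores[i]:
                fish[i] = trial
            else:
                # "follow": copy a few features from the current best fish
                take = rng.choice(d, size=visual, replace=False)
                fish[i][take] = best[take]
            scores[i] = fitness(fish[i], X, y)
    return fish[scores.argmax()]                     # reduced feature subset

# usage: mask = fish_swarm_select(X_train, y_train); X_reduced = X_train[:, mask]
```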

Mediated Effects of Consumer Recognition in Relationship of Local Food Tour Experience and Intention of Action (로컬푸드 체험관광이 행동의도에 미치는 관계에서 소비자 인식의 매개효과)

  • Kim, Hee-Dong
    • Korean Journal of Organic Agriculture / v.22 no.1 / pp.81-96 / 2014
  • This study examines the mediating effect of consumer recognition on the relationship between local food tour experience and intention of action, in the context of revitalizing local food. The questionnaire survey targeted women in their 30s and 40s. Local food tour experience is the independent variable, intention of action is the dependent variable, and consumer recognition is analyzed as the mediating variable. As a result, the mediation through consumer recognition comprises two components, a direct effect and an indirect effect, and a partial mediating effect was found between local food tour experience and intention of action. Thus, through the tour experience, consumers can form a positive recognition of freshness, safety, health, taste, price, job creation, and relationships, which in turn affects intention of action. Based on the results of the study, it is necessary to learn from successful marketing revitalization cases and to develop and operate experiential tour education programs to manage customers continuously.
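
For readers unfamiliar with mediation analysis, the sketch below shows a Baron-Kenny style decomposition into total, direct, and indirect effects with ordinary least squares; a nonzero indirect effect alongside a remaining direct effect corresponds to the partial mediation reported above. The variable names and synthetic data are illustrative only, not the study's survey data (Python with statsmodels).

```python
import numpy as np
import statsmodels.api as sm

def mediation_effects(x, m, y):
    """Baron-Kenny style decomposition: total, direct, and indirect effects."""
    c = sm.OLS(y, sm.add_constant(x)).fit().params[1]        # total effect X -> Y
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]        # path X -> mediator
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    c_prime, b = fit.params[1], fit.params[2]                # direct X -> Y and M -> Y
    return {"total": c, "direct": c_prime, "indirect": a * b}

# usage with synthetic data (stand-ins for survey scores)
rng = np.random.default_rng(1)
x = rng.normal(size=200)                      # e.g. tour experience score
m = 0.5 * x + rng.normal(size=200)            # mediator, e.g. consumer recognition
y = 0.3 * x + 0.4 * m + rng.normal(size=200)  # e.g. intention of action
print(mediation_effects(x, m, y))             # nonzero direct + indirect => partial mediation
```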

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm that detects the estrus phase of cattle from a live video stream. In order to classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within a support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in terms of recognition accuracy, even with a much lower-dimensional feature description.
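
A hedged sketch of the overall recipe, MHI-derived features learned with an SVM, is given below. The grid-averaged MHI descriptor and the placeholder data are assumptions for illustration; they are not the paper's feature design (Python with scikit-learn).

```python
import numpy as np
from sklearn.svm import SVC

def mhi_feature(mhi, grid=4):
    """Very small MHI descriptor: mean motion recency in each cell of a grid."""
    h, w = mhi.shape
    cells = [mhi[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.array(cells)                       # 16-D, far smaller than raw pixels

# usage with placeholder data: one MHI per observed track, labelled by action
mhis = np.random.rand(40, 64, 64)                # stand-ins for real MHIs
labels = np.random.choice(["mounting", "walking", "tail_wagging", "foot_stamping"], 40)
X = np.stack([mhi_feature(m) for m in mhis])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))                        # per-track action decisions
```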

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.4 / pp.314-322 / 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods for skeleton-based action recognition employ spatio-temporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them incur heavy computational complexity for robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of spatial and temporal GCNs. The spatial shuffle GCN contains pointwise group convolution and a part shuffle module that enhances local and global information between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance with the lowest computational cost and exceeds the performance of the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
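
The sketch below illustrates the two ingredients named above, a pointwise group convolution followed by a channel shuffle, applied to skeleton features before neighbour aggregation with an adjacency matrix. It is a simplified stand-in, not the SGCN itself; the class name, layer sizes, and identity adjacency are assumptions (PyTorch).

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels so information mixes across groups (ShuffleNet-style)."""
    n, c, t, v = x.shape                        # (batch, channels, frames, joints)
    return x.view(n, groups, c // groups, t, v).transpose(1, 2).reshape(n, c, t, v)

class ShuffleGraphConv(nn.Module):
    """Pointwise *group* convolution over joint features, followed by a shuffle."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.groups = groups
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1, groups=groups)

    def forward(self, x, adjacency):
        x = self.pw(x)                          # cheap grouped 1x1 convolution
        x = channel_shuffle(x, self.groups)     # restore cross-group mixing
        return torch.einsum("nctv,vw->nctw", x, adjacency)  # aggregate neighbours

# usage: 25-joint skeleton clips, e.g. the NTU RGB+D joint layout
x = torch.rand(2, 64, 16, 25)                   # (batch, channels, frames, joints)
A = torch.eye(25)                               # placeholder adjacency matrix
out = ShuffleGraphConv(64, 128)(x, A)           # (2, 128, 16, 25)
```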

Robust Action Recognition Using Multiple View Image Sequences (다중 시점 영상 시퀀스를 이용한 강인한 행동 인식)

  • Ahmad, Mohiuddin;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.509-514 / 2006
  • Human action recognition is an active research area in computer vision. In this paper, we present a robust method for human action recognition that combines human body shape and motion information from multiple-view image sequences. Principal component analysis is used to extract shape features of the human body, and multiple-block motion of the human body is used to extract motion features. This combined information from multiple-view sequences enhances the recognition of human actions. We represent each action using a set of hidden Markov models, modeling each action from multiple views, which enables human action recognition from arbitrary views. Several daily actions of elderly persons are modeled and tested with this approach and are correctly classified, which indicates the robustness of our method.
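
A minimal sketch of the modelling step, one hidden Markov model per action over PCA-reduced per-frame features, follows; it uses hmmlearn and placeholder data rather than the paper's multi-view shape and block-motion features, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

# placeholder observations: per-frame feature vectors for a few training clips
rng = np.random.default_rng(0)
clips = {"walking": [rng.random((30, 200)) for _ in range(5)],
         "sitting": [rng.random((30, 200)) for _ in range(5)]}

pca = PCA(n_components=8).fit(np.vstack([f for c in clips.values() for f in c]))

models = {}
for action, seqs in clips.items():              # one HMM per action class
    X = np.vstack([pca.transform(s) for s in seqs])
    lengths = [len(s) for s in seqs]
    models[action] = hmm.GaussianHMM(n_components=4).fit(X, lengths)

def classify(clip):
    """Pick the action whose HMM gives the highest log-likelihood."""
    obs = pca.transform(clip)
    return max(models, key=lambda a: models[a].score(obs))

print(classify(rng.random((30, 200))))
```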


Human Action Recognition Based on Local Action Attributes

  • Zhang, Jing;Lin, Hong;Nie, Weizhi;Chaisorn, Lekha;Wong, Yongkang;Kankanhalli, Mohan S
    • Journal of Electrical Engineering and Technology / v.10 no.3 / pp.1264-1274 / 2015
  • Human action recognition has received much interest in the computer vision community. Most existing methods focus either on constructing robust descriptors from the temporal domain or on computational methods that exploit the discriminative power of the descriptor. In this paper we explore the idea of using local action attributes to form an action descriptor, where an action is no longer characterized by motion changes in the temporal domain but by a local semantic description of the action. We propose a novel framework that introduces local action attributes to represent an action for the final human action categorization. The local action attributes are defined for each body part and are independent of the global action. The resulting attribute descriptor is used to jointly model human actions to achieve robust performance. In addition, we study the impact of using local and global low-level body features for the aforementioned attributes. Experiments on the KTH dataset and the MV-TJU dataset show that our local action attribute based descriptor improves action recognition performance.
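
As a rough illustration of building an attribute-based descriptor, the sketch below scores a small, made-up attribute vocabulary for each body part with per-part linear SVMs and concatenates the scores. The attribute list, body-part split, and function names are assumptions, not the paper's attribute definitions (Python with scikit-learn).

```python
import numpy as np
from sklearn.svm import LinearSVC

BODY_PARTS = ["head", "torso", "arms", "legs"]
ATTRIBUTES = ["still", "swinging", "raised", "bent"]   # illustrative vocabulary

def attribute_descriptor(part_features, attribute_models):
    """Concatenate per-part attribute scores into one action descriptor."""
    scores = [attribute_models[p].decision_function(part_features[p][None])[0]
              for p in BODY_PARTS]
    return np.concatenate(scores)                # |parts| x |attributes| dimensions

# usage with placeholder data: low-level features and attribute labels per body part
rng = np.random.default_rng(0)
train = {p: rng.random((40, 32)) for p in BODY_PARTS}
attr_labels = {p: rng.choice(ATTRIBUTES, 40) for p in BODY_PARTS}
attribute_models = {p: LinearSVC().fit(train[p], attr_labels[p]) for p in BODY_PARTS}

desc = attribute_descriptor({p: rng.random(32) for p in BODY_PARTS}, attribute_models)
print(desc.shape)                                # (16,) = 4 parts x 4 attribute scores
```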

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • In this paper, a recognition system for continuous human actions is described that uses motion history images and histograms of oriented gradients with a spotter model based on depth information; the spotter model, which performs action spotting, is proposed to improve recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In the pre-processing step, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human action and spotter modeling step generates sequences from the extracted features. Human action models appropriate for each defined action and the proposed spotter model are created from these sequences using hidden Markov models. Continuous human action recognition performs action spotting, using the spotter model to separate meaningful actions from meaningless actions in a continuous action sequence, and then continuously recognizes human actions by comparing the probability values of the models for the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
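
A minimal sketch of the spotting idea follows, assuming hmmlearn and scikit-image: HOG features of an MHI form the per-step observations, one GaussianHMM is fitted per defined action plus one spotter (garbage) model, and a segment is recognized only when some action model outscores the spotter. Function names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from hmmlearn import hmm
from skimage.feature import hog

def mhi_hog(mhi):
    """HOG descriptor of a motion history image (one observation per time step)."""
    return hog(mhi, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(1, 1))

def fit_hmm(sequences, n_states=4):
    """Fit one GaussianHMM on a list of Depth-MHI-HOG observation sequences."""
    X = np.vstack(sequences)
    return hmm.GaussianHMM(n_components=n_states).fit(X, [len(s) for s in sequences])

def spot_and_recognize(obs_seq, action_models, spotter_model):
    """Return the best action label, or None when the spotter (garbage) model wins."""
    best = max(action_models, key=lambda a: action_models[a].score(obs_seq))
    if spotter_model.score(obs_seq) > action_models[best].score(obs_seq):
        return None                              # meaningless segment: skip it
    return best

# action_models: {label: fit_hmm(seqs)} per defined action;
# spotter_model: fit_hmm(...) trained on in-between (meaningless) segments.
```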