• Title/Summary/Keyword: Mobile Annotation

Semantic Image Annotation and Retrieval in Mobile Environments (모바일 환경에서 의미 기반 이미지 어노테이션 및 검색)

  • No, Hyun-Deok; Seo, Kwang-won; Im, Dong-Hyuk
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1498-1504 / 2016
  • The progress of mobile computing technology is producing a large amount of multimedia content such as images, so an image retrieval system that finds semantically relevant images is needed. In this paper, we propose a semantic image annotation and retrieval system for mobile environments. Previous mobile annotation approaches cannot fully express the semantics of an image because of the limitations of their current form (i.e., keyword tagging). Our approach allows mobile devices to annotate images automatically using context-aware information such as temporal and spatial data. In addition, since we annotate images using the RDF (Resource Description Framework) model, we can issue SPARQL queries for semantic image retrieval. Our system, implemented on Android, represents the semantics of images more fully and retrieves images semantically compared with other image annotation systems.
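
As a concrete illustration of the approach described above, here is a minimal sketch (not the authors' code) of RDF-based image annotation with context-aware triples and a SPARQL retrieval query, written with Apache Jena; the ex: namespace and the property names (takenAt, city, tag) are assumptions.

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;

public class ImageAnnotationSketch {
    static final String EX = "http://example.org/photo#"; // hypothetical namespace

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();

        // Annotate an image with context-aware (temporal/spatial) triples.
        Resource img = model.createResource(EX + "img001");
        img.addProperty(model.createProperty(EX, "takenAt"), "2016-08-15T10:30:00");
        img.addProperty(model.createProperty(EX, "city"), "Seoul");
        img.addProperty(model.createProperty(EX, "tag"), "beach");

        // Semantic retrieval: find all images taken in Seoul.
        String q = "PREFIX ex: <" + EX + "> " +
                   "SELECT ?img WHERE { ?img ex:city \"Seoul\" }";
        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next().getResource("img"));
            }
        }
    }
}
```

Because the context triples and user tags live in one RDF graph, the same query path can answer mixed requests (e.g., beach photos taken in Seoul) rather than plain keyword matches.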

Extending Semantic Image Annotation using User-Defined Rules and Inference in Mobile Environments (모바일 환경에서 사용자 정의 규칙과 추론을 이용한 의미 기반 이미지 어노테이션의 확장)

  • Seo, Kwang-won; Im, Dong-Hyuk
    • Journal of Korea Multimedia Society / v.21 no.2 / pp.158-165 / 2018
  • As the volume of multimedia images has increased dramatically, searching for semantically relevant images has become important, and several semantic image annotation methods using the RDF (Resource Description Framework) model in mobile environments have been introduced. Earlier studies on semantic image annotation focused on both image tags and context-aware information such as temporal and spatial data. However, to fully express the semantics of an image, more annotations described in the RDF model are needed. In this paper, we propose an annotation method that performs inference with RDFS entailment rules and user-defined rules. Our approach, implemented in the Moment system, shows that it can represent the semantics of an image more fully with more annotation triples.
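
For the flavor of rule-based inference involved, here is a minimal sketch using Apache Jena's rule engine (the conference paper further down notes that Moment is implemented with the Jena Inference API); the namespace, properties, and the sample rule are illustrative assumptions, not the system's actual rules.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class RuleInferenceSketch {
    public static void main(String[] args) {
        String ex = "http://example.org/photo#"; // hypothetical namespace
        Model base = ModelFactory.createDefaultModel();
        Resource img = base.createResource(ex + "img001");
        Resource seoul = base.createResource(ex + "Seoul");
        base.add(img, base.createProperty(ex, "takenIn"), seoul);
        base.add(seoul, base.createProperty(ex, "locatedIn"),
                 base.createResource(ex + "Korea"));

        // A user-defined rule: an image taken in a city was also taken
        // in the region that city is located in.
        String rule = "[takenInTransitive: (?i <" + ex + "takenIn> ?c) " +
                      "(?c <" + ex + "locatedIn> ?r) -> (?i <" + ex + "takenIn> ?r)]";
        Reasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rule));
        InfModel inf = ModelFactory.createInfModel(reasoner, base);

        // The inferred model now also contains (img001 takenIn Korea),
        // i.e., one extra annotation triple derived automatically.
        inf.listStatements(img, null, (RDFNode) null)
           .forEachRemaining(System.out::println);
    }
}
```

This is how inference grows the annotation set without extra user input: each derived triple becomes one more handle for semantic retrieval.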

WalkieTagging: Efficient Speech-Based Video Annotation Method for Smart Devices (워키태깅 : 스마트폰 환경에서 음성기반의 효과적인 영상 콘텐츠 어노테이션 방법에 관한 연구)

  • Park, Joon Young; Lee, Soobin; Kang, Dongyeop; Seok, YoungTae
    • Journal of Information Technology Services / v.12 no.1 / pp.271-287 / 2013
  • The rapid growth and dissemination of touch-based mobile devices such as smartphones and tablet PCs gives people numerous ways to use a variety of multimedia content. Portability enables users to watch a soccer game, search for videos on YouTube, and sometimes tag content on the road. However, the limited screen size of mobile devices, and the touch-based character input methods built on it, remain major obstacles to searching and tagging multimedia content. In this paper, we propose WalkieTagging, which provides a much more intuitive approach than previous ones. Like other video tagging services, WalkieTagging, as a voice-based annotation service, supports inserting detailed annotation data, including start time, duration, and tags, with little user effort. To evaluate our method, we developed an Android-based WalkieTagging application and performed a two-week user study. Through experiments with a total of 46 people, we observed that participants found our system more convenient and useful than the touch-based one. Consequently, we found that voice-based annotation can provide users with more convenience and satisfaction than touch-based methods in mobile environments.
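
The annotation payload described (start time, duration, tags) maps naturally onto a small value type; the sketch below is an assumed shape for it, not the authors' actual data model.

```java
import java.util.List;

// A minimal sketch of the annotation payload described above: each
// speech-based tag records where it applies in the video and for how
// long. Field names are assumptions, not the authors' schema.
public record SpeechAnnotation(String videoId,
                               long startMillis,
                               long durationMillis,
                               List<String> tags) {

    // True if this annotation is active at the given playback position.
    public boolean coversPosition(long positionMillis) {
        return positionMillis >= startMillis
            && positionMillis < startMillis + durationMillis;
    }
}
```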

Annotation Technique Development based on Apparel Attributes for Visual Apparel Search Technology (비주얼 의류 검색기술을 위한 의류 속성 기반 Annotation 기법 개발)

  • Lee, Eun-Kyung; Kim, Yang-Weon; Kim, Seon-Sook
    • Fashion & Textile Research Journal / v.17 no.5 / pp.731-740 / 2015
  • Mobile (smartphone) search engine marketing is increasingly important. Accordingly, the development of visual apparel search technology for easier and faster access to visual information in the apparel field is urgently needed. This study helps establish a proper classification system for apparel search, based on an analysis of the search techniques used in apparel search applications and existing domestic and overseas apparel sites. An annotation technique is developed according to visual attributes and apparel categories, using data collected by web crawling and gathering apparel images. The categorical composition of apparel is divided into wearing, image, and style. The web evaluation site traces the correlations between apparel categories and apparel factors as determined by visual attributes. An appraisal team of 10 individuals evaluated 2,860 merchandise images. Data analysis covered the correlation between sleeve length and apparel category, and the correlation between fastener type and apparel category (both based on an average analysis). The results lay the groundwork for a mobile apparel search system that enhances consumer convenience, since it enables an effective search by type, price, distributor, and apparel image from a mobile photograph of the garment as worn.
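
As an illustration only: the study's three category axes (wearing, image, style) and attributes such as sleeve length and fastener could be modeled as a simple annotation record. The enum values below are assumptions, not the paper's actual vocabulary.

```java
// Sketch of an attribute-based annotation for one apparel image.
// The three axes (wearing, image, style) and the sleeve-length and
// fastener attributes come from the study; the concrete values are
// illustrative assumptions.
enum SleeveLength { SLEEVELESS, SHORT, THREE_QUARTER, LONG }
enum Fastener { BUTTON, ZIPPER, NONE }

record ApparelAnnotation(String imageId,
                         String wearingCategory,  // e.g., outerwear
                         String imageCategory,    // e.g., casual
                         String styleCategory,    // e.g., A-line
                         SleeveLength sleeveLength,
                         Fastener fastener) { }
```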

A Voice-Annotation Technique in Mobile E-book for Reading-disabled People (독서장애인용 디지털음성도서를 위한 음성 어노테이션 기법)

  • Lee, Kyung-Hee; Lee, Jong-Woo; Lim, Soon-Bum
    • Journal of Digital Contents Society / v.12 no.3 / pp.329-337 / 2011
  • Digital talking books have been developed to enhance the reading experience of reading-disabled people. In existing digital talking books, however, annotations can be created only through screen interfaces, which are of no use to reading-disabled people because they require the reader's eyesight. In this paper, we propose a voice annotation technique that can create notes and highlights at any point during playback using hearing and voice commands. We design a location determination technique that pinpoints where a voice annotation should be placed within the sentences being played. To verify the effectiveness of our voice annotation technique, we implemented a prototype on the Android platform. Testing with blindfolded users showed that our system can locate the exact position where a voice annotation should be placed.
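
The abstract does not spell out the location-determination technique, so the sketch below shows one plausible minimal version, assumed here: map the playback time at which the voice command arrived to the sentence currently being read, via binary search over sentence start offsets.

```java
import java.util.Arrays;

// Assumed sketch of a location-determination step: given the playback
// offsets (in ms) at which each sentence starts, find the sentence that
// was playing when the user issued a voice-annotation command.
public class AnnotationLocator {
    public static int sentenceIndexAt(long[] sentenceStartsMillis, long commandTimeMillis) {
        int pos = Arrays.binarySearch(sentenceStartsMillis, commandTimeMillis);
        // binarySearch returns (-(insertionPoint) - 1) on a miss; the
        // sentence playing is the one that started just before the command.
        return pos >= 0 ? pos : Math.max(0, -pos - 2);
    }

    public static void main(String[] args) {
        long[] starts = {0, 4200, 9100, 15300}; // sentence start offsets in ms
        System.out.println(sentenceIndexAt(starts, 10000)); // -> 2
    }
}
```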

Augmented Reality Annotation for Real-Time Collaboration System

  • Cao, Dongxing; Kim, Sangwook
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.483-489 / 2020
  • Advancements in mobile phone hardware and network connectivity have made communication more and more convenient. Compared to pictures or text, people prefer to share videos to convey information, and to make intentions clearer, the ability to annotate comments directly on a video is an important issue. Recently there have been many attempts at video annotation, but these previous works have significant limitations: they do not support user-defined handwritten annotations or annotating local video. We therefore propose an augmented reality based real-time video annotation system that allows users to make annotations directly on the video freely. The contribution of this work is a real-time video annotation system, based on recent augmented reality platforms, that not only enables drawing geometric shapes on video in real time but also drastically reduces production costs. For practical use, we propose a real-time collaboration system based on the proposed annotation method. Experimental results show that the proposed annotation method meets the real-time, accuracy, and robustness requirements of the collaboration system.
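
The abstract does not name the AR platform's APIs, so no platform calls are shown here; independent of the platform, a real-time collaboration system of this kind needs a compact message format for shared strokes, along the lines of the assumed sketch below (not the authors' actual protocol).

```java
import java.util.List;

// Assumed wire format for sharing one handwritten annotation stroke in
// a real-time collaboration session: normalized screen coordinates plus
// timestamps, so peers can replay the stroke over their own video.
record StrokePoint(float x, float y, long timestampMillis) { }

record AnnotationStroke(String sessionId,
                        String authorId,
                        int colorArgb,
                        List<StrokePoint> points) { }
```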

A Voice Annotation Browsing Technique in Digital Talking Book for Reading-disabled People (독서장애인을 위한 음성 도서 어노테이션 검색 기법)

  • Park, Joo Hyun; Lim, Soon-Bum; Lee, Jongwoo
    • Journal of Korea Multimedia Society / v.16 no.4 / pp.510-519 / 2013
  • In this paper, we propose a voice-annotation browsing system that enables reading-disabled people to find and play existing voice annotations. The proposed system consists of four steps: input, ranking & recommendation, search, and output. For reading-disabled people, who depend only on the auditory sense, every step can accept voice commands. To evaluate the effectiveness of our system, we designed and implemented an Android-based mobile e-book application supporting voice-annotation browsing. The implemented system was tested by a number of blindfolded users; almost all of them could successfully and easily reach the existing voice annotations they wanted to find.
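
To make the pipeline concrete, here is a minimal sketch of the search and ranking steps; the VoiceNote shape and the ranking criterion (newest annotations first) are assumptions for illustration, not the paper's actual ranking model.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of the search + ranking steps of the four-step pipeline above.
record VoiceNote(String transcript, long createdAtMillis) { }

class VoiceNoteBrowser {
    // Search: keep notes whose transcript contains the spoken keyword.
    // Ranking: newest first, so recent annotations are recommended first
    // when the results are read back to the user.
    static List<VoiceNote> browse(List<VoiceNote> notes, String spokenKeyword) {
        return notes.stream()
                .filter(n -> n.transcript().contains(spokenKeyword))
                .sorted(Comparator.comparingLong(VoiceNote::createdAtMillis).reversed())
                .toList();
    }
}
```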

Transcoding Web Documents Using CC/PP and Annotation (CC/PP와 어노테이션을 이용한 웹 문서의 트랜스코딩)

  • Kim, Hwe-Mo; Song, Teuk-Seob; Choy, Yoon-Chul; Lee, Kyong-Ho
    • Journal of Korea Multimedia Society / v.8 no.2 / pp.137-153 / 2005
  • This paper presents a transcoding method that dynamically adapts Web pages to various devices. The proposed method is based on a CC/PP profile, a standard description of a device's context information. Additionally, to support sophisticated transcoding, we define an annotation schema for representing additional information about the original contents. Since a mobile device has a screen of limited size, a Web page is split into many small ones. Our method constructs a navigation map that represents the hierarchical relations among the split pages. Experimental results with various Web contents show that the proposed method is superior in terms of the user's navigation convenience and transcoding quality.
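
The navigation map can be pictured as a simple tree over the split fragments; the sketch below assumes each node links back to its parent and forward to its children, with field names invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a navigation map over split page fragments: a tree in which
// each node is one small page and the edges encode the hierarchical
// relations the transcoded site navigates through.
class PageNode {
    final String fragmentUrl;
    final String heading;            // shown as the link text to this fragment
    final List<PageNode> children = new ArrayList<>();
    PageNode parent;                 // null for the root page

    PageNode(String fragmentUrl, String heading) {
        this.fragmentUrl = fragmentUrl;
        this.heading = heading;
    }

    PageNode addChild(PageNode child) {
        child.parent = this;
        children.add(child);
        return child;
    }
}
```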

Semantic Image Annotation using Inference in Mobile Environments (모바일 환경에서 추론을 이용한 의미 기반 이미지 어노테이션 시스템 설계 및 구현)

  • Seo, Kwang-won; Im, Dong-Hyuk
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.999-1000 / 2017
  • In this paper, we propose an annotation method that adds RDF (Resource Description Framework) inference to Moment (Mobile Semantic Image Annotation and Retrieval System), our earlier semantic image annotation and retrieval system. The proposed system is implemented with the Apache Jena Inference API, and the number of annotations attached to each image increases. The automatically inferred results can also be retrieved through SPARQL queries, making semantic search over the existing annotations more effective.

The Design Interface for Retrieval Meaning Base of User Mobile Unit (모바일 단말기에서 사용자의 의미기반 검색을 위한 인터페이스 설계)

  • Cho, Hyun-Seob; Oh, Hun
    • Proceedings of the KIEE Conference / 2007.07a / pp.1665-1667 / 2007
  • Recently, retrieval of various video data has become an important issue as more and more multimedia content services are provided. To deal with video data effectively, a semantic-based retrieval scheme that can process diverse user queries and save them in the database is required. In this regard, this paper proposes a semantic-based video retrieval system that allows the user to search for diverse meanings in video data for electrical-safety education by means of automatic annotation processing. When the user inputs a keyword to search video data for electrical-safety education, the mobile agent of the proposed system extracts the features of the video data, which are then learned continuously, and detailed information on electrical safety education is saved in the database. The proposed system is designed to enhance the efficiency of video data retrieval for electrical-safety education.