• Title/Summary/Keyword: metadata (메타데이터)


A Study on Next-Generation Data Protection Based on Non File System for Spreading Smart Factory (스마트팩토리 확산을 위한 비파일시스템(None File System) 기반의 차세대 데이터보호에 관한 연구)

  • Kim, Seungyong;Hwang, Incheol;Kim, Dongsik
    • Journal of the Society of Disaster Information
    • /
    • v.17 no.1
    • /
    • pp.176-183
    • /
    • 2021
  • Purpose: The introduction of smart factories that reflect 4th-industrial-revolution technologies such as AI, IoT, and VR has been actively promoted in Korea. To address the various problems arising from existing file-based operating systems, this research identifies and verifies a non-file-system-based data protection technology. Method: The research examines secure storage that cannot be identified or controlled by the operating system: activating the secure storage upon input of a digital key value, establishing a control unit that provides input/output information upon BIOS activation, and observing the non-file-type structure so that mapping behavior using secondary metadata is performed when the secure storage is activated. Result: First, for data input/output of the non-file-system-based secure storage, the hash value of the sample data was found to match the hash value of the data in normal storage. Second, to verify protection against malicious ransomware, the hash value of the original file was compared with the hash value of the file in secure storage after ransomware activity. Conclusion: Smart factory technology is a nationally promoted technology being introduced to the public, and this research implemented and tested a new concept of data protection technology for protecting crucial data within the information system. To protect sensitive data, a non-file-type secure storage technology that does not depend on the file system is highly recommended. This research has demonstrated the security and safety of such technology and verified its purpose.
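
A minimal sketch of the hash-based integrity check this abstract describes: compare the digest of the original sample with the digest of the copy held in secure storage after the ransomware test run. The file paths and names here are hypothetical illustrations, not the paper's.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: the original sample and the copy kept in secure storage.
original = Path("sample.dat")
secured = Path("secure_storage/sample.dat")

# Protection holds if the digests still match after the ransomware run.
print("intact" if sha256_of(original) == sha256_of(secured) else "tampered")
```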

Research on Text Classification of Research Reports using Korea National Science and Technology Standards Classification Codes (국가 과학기술 표준분류 체계 기반 연구보고서 문서의 자동 분류 연구)

  • Choi, Jong-Yun;Hahn, Hyuk;Jung, Yuchul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.1
    • /
    • pp.169-177
    • /
    • 2020
  • In South Korea, the results of science and technology R&D are submitted to the National Science and Technology Information Service (NTIS) as reports tagged with Korea national science and technology standard classification codes (K-NSCC). However, with more than 2,000 sub-categories, choosing the correct classification code is non-trivial without a clear understanding of the K-NSCC. In addition, there are few studies on automatic document classification based on the K-NSCC, and no training data are available in the public domain. To the best of our knowledge, this study is the first attempt to build a high-performing K-NSCC classification system based on NTIS report meta-information from the last five years (2013-2017). To this end, about 210 mid-level categories were selected, and preprocessing was conducted that takes the characteristics of research report metadata into account. More specifically, we propose a convolutional neural network (CNN) technique that uses only the task name and keyword fields, which are the most influential ones. The proposed model is compared with several machine learning methods that perform well in text classification (e.g., a linear support vector classifier, CNN, gated recurrent unit) and shows a performance advantage of 1% to 7% in top-three F1 score.
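
A minimal sketch of a CNN text classifier of the kind the abstract describes, written with Keras; the vocabulary size is an assumption, and top-3 categorical accuracy stands in for the paper's top-three F1 score. The inputs would be the tokenized task-name and keyword fields concatenated into one integer sequence per report.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, NUM_CLASSES = 30_000, 210  # ~210 mid-level K-NSCC categories

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),         # token ids -> dense vectors
    layers.Conv1D(256, 5, activation="relu"),  # n-gram-like feature maps
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3)],
)
```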

Brain MRI Template-Driven Medical Images Mapping Method Based on Semantic Features for Ischemic Stroke (허혈성 뇌졸중을 위한 뇌 자기공명영상의 의미적 특징 기반 템플릿 중심 의료 영상 매핑 기법)

  • Park, Ye-Seul;Lee, Meeyeon;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.2
    • /
    • pp.69-78
    • /
    • 2016
  • Ischemic stroke is a disease in which brain tissue ceases to function because blood flow is reduced by thrombosis or embolism. Given the nature of the disease, identifying the status of the cerebral vessels is most important, and medical images are indispensable for its diagnosis. Among the available modalities, brain MRI is the most widely used because experts can effectively obtain from it semantic information, such as cerebral anatomy, that aids diagnosis. However, for emergency conditions like ischemic stroke, although an intelligent system is required to support prompt diagnosis and treatment, current systems have difficulty providing the information in medical images intuitively. In other words, because current systems manage medical images based only on basic metadata such as image name and ID, they cannot exploit the semantic information inherent in the images. Therefore, to provide core information such as the cerebral anatomy contained in brain MRI, this paper suggests a template-driven medical image mapping method. The key idea is to define the mapping between anatomical features and representative images by using template images that can represent the whole brain MRI image set, thereby making explicit the semantic relations between images that only medical experts can otherwise recognize. With this method, it becomes possible to manage medical images based on their semantics.
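
A toy sketch of the template-driven mapping idea: each anatomical feature points at the template slices that best show it, so images can be retrieved by semantics rather than by name or ID. All identifiers here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TemplateSlice:
    slice_id: str                    # e.g. an axial slice in the template set
    anatomy: set[str] = field(default_factory=set)  # expert-mapped features

template = [
    TemplateSlice("axial_12", {"middle cerebral artery", "basal ganglia"}),
    TemplateSlice("axial_18", {"lateral ventricle"}),
]

def slices_showing(feature: str) -> list[str]:
    """Return the template slices mapped to a given anatomical feature."""
    return [s.slice_id for s in template if feature in s.anatomy]

print(slices_showing("lateral ventricle"))  # -> ['axial_18']
```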

Construction of Component Repository for Supporting the CBD Process (CBD 프로세스 지원을 위한 컴포넌트 저장소의 구축)

  • Cha, Jung-Eun;Kim, Hang-Kon
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.7
    • /
    • pp.476-486
    • /
    • 2002
  • CBD (Component Based Development) has become the leading strategic method for business applications. Because CBD is a development paradigm that assembles software components into applications, it copes with rapidly changing business processes and meets increasing demands for productivity. The repository is the most important part of the development, distribution, and reuse of components: in a component repository, we can store and manage not only the components themselves but also the related work products produced at each step of component development. In this paper, we suggest a practical approach to repository construction that supports and realizes the CBD process, and we developed CRMS (Component Repository Management System) as an implementation of the proposed techniques. CRMS can manage a variety of component products based on the component architecture and helps software developers search for candidate components for their projects and understand various kinds of information about each component. In summary, a practical approach to the component repository was suggested and a supporting environment was constructed to make CBD work efficiently. We expect this work will be valuable research for component repositories and for supporting the Component Based Development process as a whole.
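
A toy sketch of the component-record search a repository like CRMS is described as providing: each component carries its work products from the CBD steps, and developers query for candidate components. The field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ComponentRecord:
    name: str
    architecture: str         # layer/type within the component architecture
    work_products: list[str]  # artifacts from each CBD step (spec, design, ...)
    keywords: set[str]

repo = [
    ComponentRecord("OrderMgr", "business",
                    ["spec.doc", "design.uml"], {"order", "billing"}),
]

def find_candidates(keyword: str) -> list[ComponentRecord]:
    """Return components whose keyword sets contain the query term."""
    return [c for c in repo if keyword in c.keywords]

print([c.name for c in find_candidates("order")])  # -> ['OrderMgr']
```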

A Comparative Study of Machine Learning Algorithms Using LID-DS DataSet (LID-DS 데이터 세트를 사용한 기계학습 알고리즘 비교 연구)

  • Park, DaeKyeong;Ryu, KyungJoon;Shin, DongIl;Shin, DongKyoo;Park, JeongChan;Kim, JinGoog
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.3
    • /
    • pp.91-98
    • /
    • 2021
  • As today's information and communication technology develops rapidly, the security of IT infrastructure is becoming more important; at the same time, cyber attacks of various forms are becoming more advanced and sophisticated, as with advanced persistent threats (APT). Early defense against, or prediction of, increasingly sophisticated cyber attacks is extremely important, and in many cases analysis of network-based intrusion detection system (NIDS) data alone cannot keep up with rapidly changing attacks. Therefore, data generated by host-based intrusion detection systems (HIDS) is also analyzed to protect against the cyber attacks described above. In this paper, we conducted a comparative study of machine learning algorithms using LID-DS (Leipzig Intrusion Detection-Data Set), a host-based intrusion detection data set that includes thread information, metadata, and buffer data missing from previously used data sets. The algorithms used were decision tree, naive Bayes, MLP (multi-layer perceptron), logistic regression, LSTM (long short-term memory), and RNN (recurrent neural network). Accuracy, precision, recall, and F1-score, as well as error rates, were measured for evaluation. As a result, the LSTM algorithm showed the highest accuracy.
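
A minimal sketch of this kind of comparison for the non-recurrent models, using scikit-learn stand-ins; the LID-DS loading step is assumed (a hypothetical loader returning feature vectors and binary attack/normal labels), and the LSTM/RNN models are omitted for brevity.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

X, y = load_lid_ds_features()  # hypothetical LID-DS feature loader

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)

models = {
    "DecisionTree": DecisionTreeClassifier(),
    "NaiveBayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=500),
    "LogReg": LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          accuracy_score(y_te, y_pred), precision_score(y_te, y_pred),
          recall_score(y_te, y_pred), f1_score(y_te, y_pred))
```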

The Application of Species Richness Estimators and Species Accumulation Curves to Traditional Ethnobotanical Knowledges in South Korea (남한지역 전통민속식물지식 자료를 활용한 종누적곡선 분석 및 종풍부도 추정 연구)

  • Park, Yuchul;Chang, Kae Sun;Kim, Hui
    • Korean Journal of Plant Resources
    • /
    • v.30 no.5
    • /
    • pp.481-488
    • /
    • 2017
  • As traditional ethnobotanical knowledge is rapidly disappearing, surveys of such knowledge are the major step in documenting useful species with conservation priority. In ethnobotanical research, the relationship among survey intensity, ethnobotanical information, and plant species richness is the most important research theme. We built a database of traditional ethnobotanical knowledge (TEK) in South Korea using metadata published by the Korea National Arboretum. We calculated species richness using estimators such as ACE, Chao1, Chao2, ICE, Jack 1, Jack 2, and Bootstrap. Species accumulation curves showed a wide variance in sampling effort across provinces: Gangwon province needs more sampling effort, while Chungnam province approached a horizontal asymptote earlier. We also found heterogeneous patterns in the rarefaction curves of TEK species between genders for each category of use (medicinal, food, and handicrafts). Compared with regional floral diversity, more diverse species are predicted to be found in some provinces by carrying out additional surveys.
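
A worked sketch of Chao1, one of the richness estimators named above: it adds to the observed species count a correction based on the singleton (F1) and doubleton (F2) counts. The numbers are illustrative only, not from the paper.

```python
def chao1(s_obs: int, f1: int, f2: int) -> float:
    """Chao1 = S_obs + F1^2 / (2*F2); bias-corrected form when F2 == 0."""
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2

print(chao1(s_obs=120, f1=15, f2=6))  # -> 138.75
```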

High-Frequency Parameter Extraction of Insulating Transformer Using S-Parameter Measurement (S-파라메타를 이용한 절연 변압기의 고주파 파라메타 추출)

  • Kim, Sung-Jun;Ryu, Soo-Jung;Kim, Tae-Ho;Kim, Jong-Hyeon;Nah, Wan-Soo
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.25 no.3
    • /
    • pp.259-268
    • /
    • 2014
  • In this paper, we suggest a method of extracting the circuit parameters of an insulating transformer from S-parameter measurements, especially in the high-frequency range. At 60 Hz, the no-load test and the short-circuit test are conventionally used to extract the circuit parameters. Here, S-parameters measured with a VNA (Vector Network Analyzer) were used to extract the transformer parameters by data fitting (optimization). The S-parameters of the equivalent circuit using the extracted parameters showed good agreement with those from measurement. Furthermore, the transformer secondary voltages from the equivalent circuit model also coincide closely with the measured sinusoidal secondary voltages. We therefore conclude that the proposed method of extracting insulating transformer parameters from S-parameters is valid, especially at high frequencies.
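
A minimal sketch of the data-fitting step the abstract describes: adjust equivalent-circuit parameters until the model's S11 matches the measured one. The deliberately simple series R-L model and all values are illustrative assumptions, not the paper's circuit.

```python
import numpy as np
from scipy.optimize import least_squares

Z0 = 50.0  # reference impedance of the VNA measurement

def model_s11(params, f):
    """S11 of a series R-L branch, a simple stand-in equivalent circuit."""
    R, L = params
    Z = R + 1j * 2 * np.pi * f * L
    return (Z - Z0) / (Z + Z0)

f = np.logspace(4, 7, 200)            # 10 kHz .. 10 MHz sweep
s11_meas = model_s11([3.0, 2e-4], f)  # synthetic "measurement" for the demo

def residual(params):
    diff = model_s11(params, f) - s11_meas
    return np.concatenate([diff.real, diff.imag])  # least_squares needs reals

fit = least_squares(residual, x0=[1.0, 1e-4], x_scale=[1.0, 1e-4])
print(fit.x)  # recovers approximately [3.0, 2e-4]
```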

The Architecture and Its Function of Tool server in MPEG-21 Multimedia Framework (MPEG-21 멀티미디어 프레임워크에서 툴 서버의 구조 및 기능)

  • Kim, Kwang-Yong;Hong, Jin-Woo;Kim, Jin-Woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • /
    • pp.292-295
    • /
    • 2003
  • This paper presents the architecture and functions of the Tool server. MPEG-21 will enable all-electronic creation, delivery, and trade of digital multimedia content and transparent use of various content types on networked devices, so that access to information and services is possible from almost anywhere, at any time, with various terminals and networks. To support a multimedia delivery chain covering content creation, production, delivery, and consumption, elements are needed to identify, describe, manage, and protect the content. Thus, we define Digital Item Processing (DIP), the Digital Item Adaptation (DIA) server, and the Tool server as primary objects of the MPEG-21 multimedia framework. DIP provides functions with which a user creates and consumes a Digital Item (DI), a kind of digital object. The DIA server adapts the original DI to the usage environment description sent from the terminal and transmits the adapted DI to the terminal. The Tool server searches for a tool requested by DIP or DIA and downloads the best tool to the DIP or DIA server. In this paper, we present how the Tool server is organized and used between these two primary objects. The paper is structured as follows: Section 1 briefly describes why MPEG-21 is needed and what it aims at. Section 2 shows the basic architecture of the Tool server and the functionality of each of its modules. Section 3 explains a scenario in which the Tool server transmits a tool to DIP or DIA. The paper concludes in Section 4.
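
A toy sketch of the Tool server's lookup role: DIP or the DIA server asks for a tool by task, and the server returns the best registered match for download. The registry contents and scoring are hypothetical.

```python
TOOL_REGISTRY = {
    "video-decode": [{"name": "h264dec", "score": 0.9},
                     {"name": "mpeg2dec", "score": 0.6}],
}

def best_tool(task: str) -> str | None:
    """Return the highest-scoring registered tool for the requested task."""
    tools = TOOL_REGISTRY.get(task, [])
    return max(tools, key=lambda t: t["score"])["name"] if tools else None

print(best_tool("video-decode"))  # -> 'h264dec'
```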

Design and Implementation of an Execution-Provenance Based Simulation Data Management Framework for Computational Science Engineering Simulation Platform (계산과학공학 플랫폼을 위한 실행-이력 기반의 시뮬레이션 데이터 관리 프레임워크 설계 및 구현)

  • Ma, Jin;Lee, Sik;Cho, Kum-won;Suh, Young-kyoon
    • Journal of Internet Computing and Services
    • /
    • v.19 no.1
    • /
    • pp.77-86
    • /
    • 2018
  • For the past few years, KISTI has operated an online simulation execution platform, called EDISON, which allows users to run simulations of various scientific applications from diverse computational science and engineering disciplines. Typically, these simulations involve large-scale computation and accordingly produce a huge volume of output data. One critical issue with running such simulations on an online platform is that many users simultaneously submit simulation requests (or jobs) with the same (or almost unchanged) input parameters or files, placing a significant burden on the platform. In other words, identical computing jobs consume duplicate computing and storage resources at an undesirably fast pace. To curb the excessive resource usage caused by such identical simulation requests, this paper introduces a novel framework, called IceSheet, that efficiently manages simulation data based on execution metadata, that is, provenance. The IceSheet framework captures and stores the provenance associated with each conducted simulation. The collected provenance records are used not only to detect duplicate simulation requests but also to search existing simulation results via the open-source search engine Elasticsearch. In particular, this paper elaborates on the core components of the IceSheet framework that support search over, and reuse of, the stored simulation results. We implemented the proposed framework as a prototype using this engine in conjunction with the online simulation execution platform, and evaluated it on real simulation execution-provenance records collected on the platform. Once the prototyped IceSheet framework is fully integrated with the platform, users will be able to quickly search for past parameter values entered into the desired simulation software and retrieve existing results for the same inputs, if any. We therefore expect the proposed framework to eliminate duplicate resource consumption and significantly reduce execution time for requests identical to previously executed simulations.
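
A minimal sketch of the duplicate-request check described above: hash the solver name and input parameters into a provenance key and look it up in Elasticsearch before running the job. The index name, field names, and endpoint are assumptions for illustration.

```python
import hashlib
import json

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed endpoint

def provenance_key(solver: str, params: dict) -> str:
    """Canonical SHA-256 key over the solver name plus input parameters."""
    blob = json.dumps({"solver": solver, "params": params}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def find_existing_result(solver: str, params: dict):
    """Return a stored result path if this exact request was run before."""
    key = provenance_key(solver, params)
    hits = es.search(index="provenance",
                     query={"term": {"key": key}})["hits"]["hits"]
    return hits[0]["_source"]["result_path"] if hits else None  # reuse if found
```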

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
  • Although there have been cases of valuing specific companies or projects, concentrated in the developed countries of North America and Europe since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually come into active use. There are, of course, several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. Recently, however, a web-based technology valuation system, the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that values anticipated future economic income at present (a worked sketch follows this abstract), and the relief-from-royalty method, which calculates the present value of royalties while treating the contribution of the subject technology to the business value created as the royalty rate. We examine how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity draws on data-driven sources, or if estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various supporting modules. Beyond the explanation of the valuation models and their primary variables presented in this paper, the STAR-Value system aims at more systematic, data-driven use through modules such as an optimal model selection guideline module, an intelligent technology value range reasoning module, and a market share prediction module based on similar company selection. In addition, research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system with which the theoretical foundations of the technology valuation field can be validated and applied in practice, and it is expected to be utilized in various fields of technology commercialization.
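
As referenced in the abstract, a worked sketch of the DCF method it names: discount each expected future cash flow back to present value and sum. The cash flows and discount rate are illustrative numbers only, not values from the STAR-Value system.

```python
def dcf_value(cash_flows: list[float], discount_rate: float) -> float:
    """Present value of future cash flows: sum of CF_t / (1 + r)^t."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# e.g. five years of income attributed to the technology, r = WACC-like rate
print(round(dcf_value([100, 120, 130, 130, 125], 0.12), 2))
```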