• Title/Summary/Keyword: Dataset Archive

Design of Dataset Archive for AI Education (인공지능 교육을 위한 데이터셋 아카이브 설계)

  • Lee, Se-Hoon;Noh, Ye-Won;Noh, Yeon-Su
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.233-234 / 2022
  • This paper proposes a dataset archive for efficient AI education, together with a linkage module that connects the archive to a programming platform for data use. The dataset archive is designed as a collection of data generated by preprocessing public data, and it is linked with the programming platform CodeB so that the data can be put to use. CodeB is a Python block-programming platform, and through this linkage, programming with the archived data becomes possible.
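
A minimal sketch of what such a linkage module could look like, assuming the archive serves its preprocessed public datasets as CSV over HTTP; the endpoint, dataset name, and function below are illustrative, since the paper does not publish CodeB's actual interface:

```python
import csv
import io
import urllib.request

# Hypothetical archive endpoint; the paper does not specify the real one.
ARCHIVE_URL = "https://example.org/dataset-archive/{name}.csv"

def fetch_dataset(name: str) -> list[dict]:
    """Download a preprocessed public dataset from the archive as rows."""
    with urllib.request.urlopen(ARCHIVE_URL.format(name=name)) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

# A block-programming platform could wrap this call as a single "load data"
# block, e.g.: rows = fetch_dataset("seoul-air-quality")
```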

Developing and Pre-Processing a Dataset using a Rhetorical Relation to Build a Question-Answering System based on an Unsupervised Learning Approach

  • Dutta, Ashit Kumar;Wahab Sait, Abdul Rahaman;Keshta, Ismail Mohamed;Elhalles, Abheer
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.199-206 / 2021
  • Rhetorical relations between two text fragments are essential information for natural language processing applications such as question-answering (QA) systems and automatic text summarization. A QA system enables users to retrieve a meaningful response, and rhetorical-relation-based datasets are in demand for building systems that interpret and respond to user requests. Only a limited number of such datasets exist for Arabic, so there is a lack of effective QA systems in the Arabic language. Recent research reveals that unsupervised learning can support a QA system in replying to users' queries. In this study, the researchers develop a rhetorical-relation-based dataset for implementing unsupervised learning applications. A web crawler is developed to crawl Arabic content from the web, and a discourse-annotated corpus is generated using Rhetorical Structure Theory. A Naïve Bayes-based QA system is developed to evaluate the performance of the dataset. The outcome shows that the performance of the QA system improves with the proposed dataset and that it can answer user queries with appropriate responses. In addition, the results on fine-grained and coarse-grained relations reveal that the dataset is highly reliable.
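
A minimal sketch of a Naïve Bayes relation classifier of the kind the abstract evaluates, assuming the discourse-annotated corpus has been reduced to (text fragment, rhetorical relation) pairs; the scikit-learn pipeline and the tiny English examples are illustrative, not the paper's actual features or Arabic corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Stand-in training pairs: a text fragment and its annotated relation.
train_texts = [
    "the river flooded because heavy rain fell for three days",
    "the museum reopened and it now displays the old city maps",
]
train_relations = ["cause", "elaboration"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_relations)

print(model.predict(["the crops failed because the drought continued"]))
```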

A Study on Record Selection Strategy and Procedure in Dataset for Administrative Information (행정정보 데이터세트 기록의 선별 기준 및 절차 연구)

  • Cho, Eun-Hee;Yim, Jin-Hee
    • The Korean Journal of Archival Studies / no.19 / pp.251-291 / 2009
  • With the computerization of business services in the public sector and the push for e-government, both the volume and the variety of records produced in electronic systems are growing. Among these record types, datasets are attracting attention because they are being produced so rapidly. Although administrative information systems, stipulated as electronic record production systems, are increasing in number, they remain in a blind spot for records management: a system may become superannuated, or its records lost, when a new system is developed. Moreover, because these systems were designed without records management in mind, they fail to meet the features and quality requirements of a records management system. Advanced countries have recognized the importance of datasets, maintained archives for them, and carried out projects on management systems and preservation formats for keeping the data. Korea is also conducting research on datasets and on individual administrative information systems, but no official scheme has been established yet. This study proposes the records management items that should be reflected when an administrative information system is designed, in two respects: an identification method and quality requirements. The major directions are as follows. First, since a dataset is a kind of electronic record, this factor should be reflected from the design step, prior to production. Second, the system should be established by integrating the records management strategy into the organization's overall information strategy. Based on these two directions, this study suggests strategies for establishing dataset identification within the framework of e-government. Archiving issues, including preservation formats and management procedures in a dataset archive, are beyond the scope of this research; further studies on these topics, as well as a variety of other studies on datasets, are expected to follow.

Natural Background Level Analysis of Heavy Metal Concentration in Korean Coastal Sediments (한국 연안 퇴적물 내 중금속 원소의 자연적 배경농도 연구)

  • Lim, Dhong-Il;Choi, Jin-Yong;Jung, Hoi-Soo;Choi, Hyun-Woo;Kim, Young-Ok
    • Ocean and Polar Research / v.29 no.4 / pp.379-389 / 2007
  • This paper presents an attempt to determine natural background levels of heavy metals that can be used for assessing heavy metal contamination. For this study, a large archive dataset of heavy metal concentrations (Cu, Cr, Ni, Pb, Zn) for more than 900 surface sediment samples from various Korean coastal environments was newly compiled. These data were normalized to aluminum (a grain-size normalizer) concentration to isolate natural factors from anthropogenic ones. The normalization rests on the hypothesis that heavy metal concentrations vary consistently with the concentration of aluminum unless the metals are of anthropogenic origin. Samples (outliers) suspected of receiving any anthropogenic input were therefore removed from the regression to ascertain the "background" relationship between the metals and aluminum. Identification of these outliers was tested using a model of predicted limits at 95%, and the process of testing for normality (Kolmogorov-Smirnov test) and removing outliers was iterated until a normal distribution was achieved. On the basis of linear regression analysis of this large archive dataset, background levels applicable to heavy metal assessment of Korean coastal sediments were successfully developed for Cu, Cr, Ni, and Zn. As an example, we tested the applicability of these baseline levels for the metal pollution assessment of Masan Bay sediments.
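
A minimal sketch of the iterative screening procedure described above, assuming simplified symmetric prediction limits around the regression line rather than the paper's exact formulation:

```python
import numpy as np
from scipy import stats

def background_regression(al, metal, alpha=0.05):
    """Regress metal concentration on Al, iteratively dropping samples that
    fall outside approximate 95% prediction limits, until the remaining
    residuals pass a Kolmogorov-Smirnov normality test."""
    al, metal = np.asarray(al, float), np.asarray(metal, float)
    assert al.size > 3, "need enough samples for a regression"
    keep = np.ones(al.size, dtype=bool)
    while True:
        fit = stats.linregress(al[keep], metal[keep])
        resid = metal - (fit.intercept + fit.slope * al)
        s = resid[keep].std(ddof=2)                    # residual scatter
        t = stats.t.ppf(1 - alpha / 2, keep.sum() - 2)
        out = keep & (np.abs(resid) > t * s)           # beyond the ~95% band
        normal = stats.kstest(resid[keep] / s, "norm").pvalue > alpha
        if normal or not out.any() or keep.sum() - out.sum() < 4:
            return fit, keep
        keep &= ~out    # drop samples suspected of anthropogenic input
```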

A Study on Data Adjustment and Quality Enhancement Method for Public Administrative Dataset Records in the Transfer Process-Based on the Experiences of Datawarehouses' ETT (행정정보 데이터세트 기록 이관 시 데이터 보정 및 품질 개선 방법 연구 - 데이터웨어하우스 ETT 경험을 기반으로)

  • Yim, Jin-Hee;Cho, Eun-Hee
    • The Korean Journal of Archival Studies / no.25 / pp.91-129 / 2010
  • As the public sector grows more heavily reliant on information systems, researchers are seeking ways to manage and utilize the dataset records accumulated in public information systems. Data adjustment and quality enhancement may be needed for public administrative dataset records when they are transferred to an archive system or a sharing server. The purpose of this paper is to present data adjustment and quality enhancement methods for public administrative dataset records, referring to the ETT procedures and methods used in constructing data warehouses. It suggests seven typical cases and processing methods for data adjustment and quality enhancement: (1) verification of quantity and data domain, (2) code conversion for consistent code values, (3) creating components from combined information, (4) determining the precision of date data, (5) standardization of data, (6) comment information about code values, and (7) capturing of metadata. These should be reviewed during dataset record transfer. This paper formulates data adjustment and quality enhancement requirements for dataset record transfer, which could also serve as data quality requirements for the administrative information systems that produce datasets.
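
A minimal sketch of two of the seven steps, (1) verification of quantity and data domain and (2) code conversion to a consistent code value, assuming records arrive as Python dicts; the field names, domain, and code table are illustrative only:

```python
# Illustrative code table and value domain; a real transfer would load these
# from the source system's documentation or metadata.
REGION_CODE_MAP = {"01": "SEOUL", "02": "BUSAN"}
GENDER_DOMAIN = {"M", "F"}

def adjust_for_transfer(rows, expected_count):
    """Check quantity/domain and unify code values before archiving."""
    # (1) Verification of quantity and data domain.
    if len(rows) != expected_count:
        raise ValueError(f"expected {expected_count} rows, got {len(rows)}")
    violations = [r for r in rows if r.get("gender") not in GENDER_DOMAIN]
    # (2) Code conversion so all sources share one consistent code value.
    for r in rows:
        r["region"] = REGION_CODE_MAP.get(r["region"], r["region"])
    return rows, violations

rows, bad = adjust_for_transfer(
    [{"gender": "M", "region": "01"}, {"gender": "F", "region": "02"}], 2)
```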

AKARI Observation of the North Ecliptic Pole (NEP) Supercluster at z=0.087

  • Ko, Jong-Wan;Im, Myung-Shin;AKARI NEP-Wide Team
    • The Bulletin of The Korean Astronomical Society / v.35 no.1 / pp.74.2-74.2 / 2010
  • We present a multi-wavelength study of a supercluster in the NEP region at z=0.087, using the AKARI (infrared space telescope) NEP-Wide (5.8 deg2) survey, which has obtained a unique IR imaging dataset with contiguous wavelength coverage from 2 to $24{\mu}m$, overcoming Spitzer's limited imaging capability at $10-20{\mu}m$. The NEP-Wide field is also covered at other wavelengths, including archival X-ray, radio, and GALEX UV data, optical imaging (BRI from the Maidanak 1.5m and CFHT's MegaPrime), and NIR imaging (JH from the KPNO 2.1m), together with nearly 1900 optical spectra, mostly obtained by our group using MMT/Hectospec and WIYN/Hydra. Armed with these multiwavelength datasets, we investigate the connection between the IR properties of galaxies and their environments as a tool to understand galaxy evolution in a supercluster environment. Specific attention is given to MIR emission, which can trace star formation activity and the passive phases right after post-starbursts, and to its relation to the other wavelength data.

Analyzing performance of time series classification using STFT and time series imaging algorithms

  • Sung-Kyu Hong;Sang-Chul Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.1-11 / 2023
  • In this paper, instead of using recurrent neural networks, we compare the classification performance of time series imaging algorithms using a convolutional neural network. The TSC (Time Series Classification) community has traditional algorithms that convert time series data into images, e.g., GAF (Gramian Angular Field), MTF (Markov Transition Field), and RP (Recurrence Plot). We additionally compare the STFT (Short-Time Fourier Transform), which produces a spectrogram visualizing the features of voice data. We evaluate the CNN's performance while adjusting the hyperparameters of the imaging algorithms. When evaluated on the GunPoint dataset from the UCR archive, the STFT achieves higher accuracy than the other algorithms. GAF also reaches 98-99% accuracy, but it has the disadvantage that the generated images are large.
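
A minimal sketch of two of the compared imaging steps, the summation Gramian Angular Field and an STFT spectrogram, using NumPy and SciPy; the synthetic sine wave stands in for a GunPoint series:

```python
import numpy as np
from scipy.signal import stft

def gramian_angular_field(x):
    """Summation GAF: rescale the series to [-1, 1], map values to angles
    phi = arccos(x), and build the image G[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])

series = np.sin(np.linspace(0, 8 * np.pi, 150))  # stand-in univariate series
gaf_image = gramian_angular_field(series)        # 150x150 image for the CNN
_, _, zxx = stft(series, nperseg=32)             # short-time Fourier transform
spectrogram = np.abs(zxx)                        # magnitude spectrogram image
```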

A Study of the Transition Process in Presidential Electronic Records Transfer and Improvement Measures: Focused on the Electronic Records of the 19th President Moon Jae-in's Administration (대통령 전자기록물의 이관방식 변천과 개선방안 연구 19대 문재인 정부 대통령 전자기록물을 중심으로)

  • Yun, Jeonghun
    • The Korean Journal of Archival Studies / no.75 / pp.41-89 / 2023
  • Since the enactment of the Act on the Management of Presidential Archives in 2007, the electronic records transfers of the 16th President Roh Moo-hyun's administration have played the role of an advance guard in public records management and served as a test bed for new electronic records management. When the electronic records of the 19th President Moon Jae-in's administration were transferred, the transfer method of President Roh's administration was inherited, while several innovative attempts were also made. For instance, the Presidential Archives for the first time converted the electronic documents of institutions advising the President into a long-term preservation package and transferred them online. In addition, considering the characteristics of the data, the administrative information datasets of presidential record creation institutions were transferred in the SIARD standard. Furthermore, the Presidential Archives had websites transferred in OVF form as a pilot test and collected social media records directly through APIs. This study investigates the transition process of presidential electronic records transfers from the 16th President Roh Moo-hyun's administration to the 19th President Moon Jae-in's. In addition, major achievements and issues are analyzed, centering on the transfer method for each type of electronic record during President Moon Jae-in's administration, and future improvement plans are presented.

Development of SNP marker set for marker-assisted backcrossing (MABC) in cultivating tomato varieties

  • Park, GiRim;Jang, Hyun A;Jo, Sung-Hwan;Park, Younghoon;Oh, Sang-Keun;Nam, Moon
    • Korean Journal of Agricultural Science / v.45 no.3 / pp.385-400 / 2018
  • Marker-assisted backcrossing (MABC) is useful for selecting offspring whose genetic background is highly recovered toward the recurrent parent at an early generation. Unlike in rice and other field crops, molecular marker sets applicable to practical MABC are scarce in vegetable crops, including tomatoes. In this study, we used the National Center for Biotechnology Information Short Read Archive (NCBI-SRA) database, which provided the whole-genome sequences of 234 tomato accessions, and selected 27,680 tag-single nucleotide polymorphisms (tag-SNPs) that can identify haplotypes in the tomato genome. From this SNP dataset, a total of 143 tag-SNPs that have a high polymorphism information content (PIC) value (> 0.3) and are physically evenly distributed on each chromosome were selected as a MABC marker set. This marker set was tested for polymorphism in each pairwise cross combination constructed from 124 of the 234 tomato accessions, and a relatively high number of SNP markers were polymorphic per cross combination. The reliability of the MABC SNP set was assessed by converting 18 SNPs into Luna probe-based high-resolution melting (HRM) markers and genotyping nine tomato accessions. The SNP information and the HRM marker genotypes matched for 98.6% of the experimental data points, indicating that our sequence analysis pipeline for SNP mining worked successfully. The tag-SNP set for MABC developed in this study can be useful not only for practical backcrossing programs but also for cultivar identification and F1 seed purity tests in tomatoes.
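
A minimal sketch of the PIC filtering step, using the standard biallelic form of the Botstein et al. (1980) formula; the random genotype matrix is a stand-in for the calls derived from the 234 NCBI-SRA accessions:

```python
import numpy as np

def pic_biallelic(p):
    """PIC for a biallelic SNP with allele frequencies p and q = 1 - p:
    PIC = 1 - (p^2 + q^2) - 2 * p^2 * q^2 (maximum 0.375 at p = 0.5)."""
    q = 1.0 - p
    return 1.0 - (p**2 + q**2) - 2.0 * p**2 * q**2

# Rows are accessions, columns are SNPs, values are alt-allele dosages 0/1/2.
genotypes = np.random.randint(0, 3, size=(234, 27680))  # stand-in matrix
p = genotypes.mean(axis=0) / 2.0          # per-SNP alternate allele frequency
pic = pic_biallelic(p)
candidates = np.where(pic > 0.3)[0]       # tag-SNPs passing the PIC cutoff
```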

Low-dose CT Image Denoising Using Classification Densely Connected Residual Network

  • Ming, Jun;Yi, Benshun;Zhang, Yungang;Li, Huixin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2480-2496 / 2020
  • Considering that the high-dose X-ray radiation of CT scans may pose risks to patients, the medical imaging industry has placed increasing emphasis on low-dose CT. Due to the complex statistical characteristics of the noise found in low-dose CT images, many traditional methods have difficulty preserving structural details while suppressing noise and artifacts. Inspired by deep learning techniques, we propose a densely connected residual network (DCRN) for low-dose CT image denoising, which combines the ideas of dense connection and residual learning. On the one hand, dense connection maximizes information flow between the layers of the network, which helps maintain structural details when denoising images. On the other hand, residual learning paired with batch normalization allows faster training and better noise reduction performance. The experiments are performed on 100 CT images selected from a public medical dataset, TCIA (The Cancer Imaging Archive). Compared with three competitive denoising algorithms, both the subjective visual effect and objective evaluation indexes (PSNR, RMSE, MAE, and SSIM) show that the proposed network improves LDCT image quality more effectively while maintaining a low computational cost. In the objective evaluation, the proposed method achieves the best PSNR of 33.67, RMSE of 5.659, MAE of 1.965, and SSIM of 0.9434. For RMSE in particular, the proposed network improves on the best-performing comparison algorithm by 7 percentage points.
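
A minimal PyTorch sketch of one densely connected residual block of the kind the abstract describes, combining dense feature concatenation, batch normalization, and a residual skip; the layer counts and widths are illustrative, not the paper's exact DCRN:

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Each conv layer sees the concatenation of all earlier feature maps
    (dense connection); the fused output is added back to the block input
    (residual learning), with batch normalization after every conv."""
    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(c, growth, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True)))
            c += growth
        self.fuse = nn.Conv2d(c, channels, kernel_size=1)  # back to input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))      # residual skip

# Stand-in feature map; a real pipeline would first lift the 1-channel
# low-dose CT image to 64 features with an initial convolution.
denoised = DenseResidualBlock()(torch.randn(1, 64, 64, 64))
```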