• Title, Summary, Keyword: Database

Incentive-Compatible Priority Pricing and Transfer Analysis in Database Services

  • Kim, Yong J.
    • The Journal of Information Technology and Database
    • /
    • v.4 no.2
    • /
    • pp.21-32
    • /
    • 1998
  • A primary concern of physical database design has been efficient retrieval and update of records, because predictable DBMS performance is indispensable to time-critical missions. To maintain such performance, database managers often spend as much as, or more than, the goals of an organization can warrant. The motivation for this research stems from the fact that even the predictable performance of a physical database can be hampered by stochastic query processing times, the physical configuration of the database, and random query arrival processes, all of which together affect the overall performance of a DBMS. In particular, when queuing delays arise from limited capacity or peak-period congestion, this paper suggests prioritizing database services. A surprising finding is that the transition from a non-priority system to a corresponding priority-based system can be Pareto-improving, in the sense that no user in the system is worse off after the transition. Prioritizing database services can therefore be a viable option for efficient database management.
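
The delay mechanics behind this argument are easy to see with a toy simulation. The sketch below is not the paper's economic model (it ignores prices and transfers entirely); it is a minimal single-server queue, with assumed Poisson arrivals, exponential service, and arbitrary parameter values, showing how a non-preemptive priority discipline shortens waits for one class at the cost of the other while overall delay stays roughly the same. That redistribution is presumably what the paper's pricing and transfer analysis compensates so that no user ends up worse off.

```python
import heapq
import random

def simulate(lam=0.8, mu=1.0, n=50_000, priority=False, seed=1):
    """Single-server queue: FIFO vs. two-class non-preemptive priority.

    Jobs arrive as a Poisson process (rate lam) with exponential service
    (rate mu); half are tagged high priority. Returns mean wait per class.
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)
        # class 0 = high priority, class 1 = low priority
        arrivals.append((t, rng.randint(0, 1), rng.expovariate(mu)))

    free_at = 0.0                      # time the server next becomes free
    queue, waits = [], {0: [], 1: []}
    i = 0
    while i < len(arrivals) or queue:
        # admit every job that has arrived by the time the server frees up
        while i < len(arrivals) and (not queue or arrivals[i][0] <= free_at):
            at, cls, svc = arrivals[i]
            key = (cls, at) if priority else (at,)   # priority reorders the queue
            heapq.heappush(queue, (key, at, cls, svc))
            i += 1
        _, at, cls, svc = heapq.heappop(queue)
        start = max(free_at, at)
        waits[cls].append(start - at)
        free_at = start + svc
    return {c: sum(w) / len(w) for c, w in waits.items()}

print("FIFO     :", simulate(priority=False))
print("Priority :", simulate(priority=True))
```

With these assumed parameters the high-priority class waits far less and the low-priority class somewhat more, while the load-weighted average wait is essentially unchanged.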

Construction of Linkage Database on Nursing Diagnoses, Interventions, Outcomes in Abdominal Surgery Patients (복부수술환자의 간호진단, 간호중재, 간호결과 연계 데이터베이스 구축)

  • Yoo, Hyung-Sook;Chi, Sung-Ai
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.7 no.3
    • /
    • pp.425-437
    • /
    • 2001
  • This research developed database software to handle a large volume of clinical nursing data covering nursing diagnoses, related factors, defining characteristics, nursing interventions, nursing activities, and nursing outcomes. MS Access 2000 and SQL were selected to provide general-purpose database logic efficiently, and MS Visual Basic 6.0 was used to build the graphical user interface. The linkage database of abdominal surgery patients was constructed from clinical data and questionnaires. The system can add related factors, defining characteristics, and nursing activities to the database and analyze statistical results through Access queries. In the final stage, an end-user satisfaction analysis on a 5-point Likert scale was conducted for the database system: accuracy/trustworthiness received the highest average score (4.42), followed by efficiency (4.21) and user-friendliness (4.10).
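
The abstract describes a linkage structure among diagnoses, interventions, and outcomes but does not show a schema. The following is a minimal sketch of such a linkage schema, assuming SQLite for portability rather than the MS Access 2000 back end used in the paper; all table and column names are hypothetical.

```python
import sqlite3

# Hypothetical linkage schema: each nursing diagnosis is linked to the
# interventions and outcomes recorded for it. Related factors, defining
# characteristics, and nursing activities would follow the same pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE diagnosis (
    diagnosis_id  INTEGER PRIMARY KEY,
    label         TEXT NOT NULL
);
CREATE TABLE related_factor (
    factor_id     INTEGER PRIMARY KEY,
    diagnosis_id  INTEGER REFERENCES diagnosis(diagnosis_id),
    description   TEXT NOT NULL
);
CREATE TABLE intervention (
    intervention_id INTEGER PRIMARY KEY,
    label           TEXT NOT NULL
);
CREATE TABLE outcome (
    outcome_id    INTEGER PRIMARY KEY,
    label         TEXT NOT NULL
);
-- linkage tables: which interventions/outcomes were recorded per diagnosis
CREATE TABLE diagnosis_intervention (
    diagnosis_id    INTEGER REFERENCES diagnosis(diagnosis_id),
    intervention_id INTEGER REFERENCES intervention(intervention_id)
);
CREATE TABLE diagnosis_outcome (
    diagnosis_id  INTEGER REFERENCES diagnosis(diagnosis_id),
    outcome_id    INTEGER REFERENCES outcome(outcome_id)
);
""")

# Example query in the spirit of the paper's Access queries:
# how often each intervention is linked to each diagnosis.
rows = conn.execute("""
    SELECT d.label, i.label, COUNT(*) AS n
    FROM diagnosis_intervention di
    JOIN diagnosis d    ON d.diagnosis_id = di.diagnosis_id
    JOIN intervention i ON i.intervention_id = di.intervention_id
    GROUP BY d.label, i.label
""").fetchall()
```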

A Study on the Standardization for the Classification of Database Technologies (데이터베이스 기술 분류 표준화 연구)

  • Choi, Myung-Kyu
    • Journal of Information Management
    • /
    • v.27 no.2
    • /
    • pp.33-64
    • /
    • 1996
  • The systematic classification of database technologies is currently a much-debated issue in the telecommunication and database industries, and the growing demand for a standard classification model reflects the need for experts to characterize database technologies consistently. The purpose of this study is to provide a general overview and to suggest a draft for the development of such a standard classification model, concentrating on information and database systems. The proposed classification is organized into five subjects: general overview, information distribution, information retrieval systems, database systems, and peripheral aspects related to databases.

A design concept on object database of measurement data for building a safety management network of road bridges (도로 교량의 안전관리 네트워크 구축을 위한 계측자료의 객체 데이터베이스 설계 개념)

  • Park, Sang-Il;An, Hyun-Jung;Kim, Hoy-Jin;Lee, Sang-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • /
    • pp.518-523
    • /
    • 2008
  • In this study, we analyzed the applicability of an object database, designed a concept model based on object-oriented ideas for measurement data management, and applied the design to an object database. The concept model comprises three sub-models: an infrastructure managing-information model, an infrastructure measurement-data model, and a measurement-unit model. In the object database, measurement data of a new type could be added easily without changing the database schema. Therefore, an object database system can improve the applicability of new technologies to infrastructures for building a safety management network of road bridges.
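
The claim that a new measurement type can be added without a schema change is easiest to see as a small object model. The sketch below is an assumed Python illustration (the paper does not specify a language or class names): the three sub-models become classes, and a new sensor type is introduced by subclassing rather than by altering a stored table structure.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class MeasurementUnit:
    """Measurement-unit sub-model: the physical unit of a reading."""
    name: str                               # e.g. "mm", "Hz"

@dataclass
class Measurement:
    """Infrastructure measurement-data sub-model (base type)."""
    sensor_id: str
    timestamp: datetime
    value: float
    unit: MeasurementUnit

@dataclass
class Bridge:
    """Infrastructure managing-information sub-model."""
    name: str
    measurements: List[Measurement] = field(default_factory=list)

# A new measurement type is added by subclassing; objects of the new type
# are stored alongside existing ones, with no schema migration.
@dataclass
class GPSDisplacement(Measurement):
    latitude: float = 0.0
    longitude: float = 0.0

bridge = Bridge("Example Bridge")
bridge.measurements.append(
    GPSDisplacement("GPS-01", datetime.now(), 2.3,
                    MeasurementUnit("mm"), 37.5665, 126.9780)
)
```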

Use of Conjoint Analysis to Test Customer Preferences on Database Service Quality for Knowledge Information (컨조인트 분석을 이용한 지식정보 데이터베이스 서비스 품질에 대한 고객 선호도 조사)

  • Park, Hye-Min;Park, Hee-Jun;Baek, Min-Ho;Park, Jong-Woo
    • Journal of Information Technology Services
    • /
    • v.7 no.2
    • /
    • pp.13-23
    • /
    • 2008
  • This research studies the core factors of knowledge information database services and the database service quality factors most important for improving customer satisfaction. Database service quality, rather than information service alone, has become a critical issue, because the qualitative aspect now matters more than the quantitative one. Since database service quality is reflected in user satisfaction, providers need to achieve excellent results by enhancing users' ability to obtain information, and to do so they must first measure database service quality more accurately. In this study, we apply conjoint analysis to measure how much each quality attribute contributes to customer satisfaction.
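
Conjoint analysis itself is a standard technique, so a tiny worked sketch may help. The example below is not the paper's survey design: it assumes three hypothetical service attributes, dummy-codes a handful of rated profiles, and estimates part-worth utilities with ordinary least squares.

```python
import numpy as np

# Hypothetical profiles: (accuracy, update speed, interface) coded 0/1 for
# "basic"/"enhanced", plus the respondent's preference rating (1-9).
profiles = np.array([
    # accuracy, speed, interface, rating
    [0, 0, 0, 2],
    [1, 0, 0, 5],
    [0, 1, 0, 4],
    [0, 0, 1, 3],
    [1, 1, 0, 7],
    [1, 0, 1, 6],
    [0, 1, 1, 5],
    [1, 1, 1, 9],
])

X = np.column_stack([np.ones(len(profiles)), profiles[:, :3]])  # intercept + dummies
y = profiles[:, 3]

# Part-worth utilities via least squares; relative importance is each
# attribute's share of the total utility range.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
part_worths = dict(zip(["accuracy", "speed", "interface"], coef[1:]))
total = sum(abs(v) for v in part_worths.values())
importance = {k: abs(v) / total for k, v in part_worths.items()}
print("part-worths:", part_worths)
print("relative importance:", importance)
```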

An Efficient Algorithm for Updating Discovered Association Rules in Data Mining (데이터 마이닝에서 기존의 연관규칙을 갱신하는 효율적인 앨고리듬)

  • 김동필;지영근;황종원;강맹규
    • Journal of the Society of Korea Industrial and Systems Engineering
    • /
    • v.21 no.45
    • /
    • pp.121-133
    • /
    • 1998
  • This study suggests an efficient algorithm for updating discovered association rules in a large database, because a database may undergo frequent or occasional updates, and such updates may not only invalidate some existing strong association rules but also turn some weak rules into strong ones. FUP and DMI efficiently update strong association rules over the whole updated database by reusing the information of the old large item-sets, and they apply a pruning technique to reduce the database size during the update process. This study likewise updates strong association rules over the whole updated database by reusing the old large item-sets. Because it is difficult to find the new set of large item-sets in the whole updated database after an incremental database has been added to the original database, the suggested algorithm generates the whole set of candidate item-sets at once from the incremental database; this way of generating candidate item-sets differs from that of FUP and DMI. After the candidate item-sets are generated, every candidate that is large in the incremental database has its support updated by scanning the original database, so that all large item-sets in the whole updated database are found. The suggested algorithm does not use a pruning technique to reduce the database size during the update process. As a result, it updates the discovered large item-sets quickly and efficiently.
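
The update strategy described here (generate all candidates from the increment, then rescan the original database only for candidates that are large in the increment) can be sketched in a few lines. The code below is an assumed, simplified illustration rather than the paper's algorithm: it works on in-memory lists of transactions, uses a single support threshold, and stops at large item-sets without generating rules.

```python
from itertools import combinations

def itemsets_of(transaction, max_size=3):
    """All item-sets up to max_size contained in one transaction."""
    items = sorted(transaction)
    return [frozenset(c) for k in range(1, max_size + 1)
            for c in combinations(items, k)]

def count_support(database, candidates):
    counts = {c: 0 for c in candidates}
    for t in database:
        t = set(t)
        for c in counts:
            if c <= t:
                counts[c] += 1
    return counts

def update_large_itemsets(original_db, incremental_db, min_support=0.4):
    """Find large item-sets in original_db + incremental_db.

    Candidates are generated from the increment only; the original
    database is rescanned just to complete their support counts.
    """
    # 1. every item-set appearing in the incremental database is a candidate
    candidates = set()
    for t in incremental_db:
        candidates.update(itemsets_of(t))

    # 2. keep candidates that are large in the increment, then add their
    #    support counted over the original database
    inc_counts = count_support(incremental_db, candidates)
    promising = {c for c, n in inc_counts.items()
                 if n / len(incremental_db) >= min_support}
    orig_counts = count_support(original_db, promising)

    total = len(original_db) + len(incremental_db)
    return {c for c in promising
            if (inc_counts[c] + orig_counts[c]) / total >= min_support}

old_db = [{"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
new_db = [{"a", "b"}, {"a", "b", "d"}]
print(update_large_itemsets(old_db, new_db))
```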

Database Workload Analysis : An Empirical Study (데이타베이스 워크로드 분석 : 실험적 연구)

  • Oh, Jeong-Seok;Lee, Sang-Ho
    • The KIPS Transactions:PartD
    • /
    • v.11D no.4
    • /
    • pp.747-754
    • /
    • 2004
  • Database administrators should be aware of the performance characteristics of database systems in order to manage them effectively, and the usage of system resources can differ considerably across workloads. The objective of this paper is to identify and analyze the performance characteristics of database systems under different workloads, which can help database tuners tune their systems. Under the TPC-C and TPC-W workloads, which represent typical online transaction processing and electronic commerce workloads respectively, we investigated resource usage as captured by fourteen performance indicators and how it responds to changes in four tuning parameters (data buffer, private memory, I/O processes, shared memory). Eight of the fourteen performance indicators clearly show performance differences between the workloads. Changes to the data buffer parameter clearly influence the database system; in both workloads, the tuning parameter that most significantly affects system performance is the database buffer size.

DNA Chip Database for the Korean Functional Genomics Project

  • Kim, Sang-Soo
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • /
    • pp.11-28
    • /
    • 2001
  • The Korean Functional Genomics Project focuses on stomach and liver cancers. Specimens collected by six hospital teams are used in DNA microarray experiments, and the experimental conditions, spot measurement data, and associated clinical information are stored in a relational database. The microarray database schema was developed based on EBI's ArrayExpress, and a diagrammatic representation of the schema helps users navigate the many tables in the database. Field descriptions, table-to-table relationships, and other database features are also stored in the database and are used by a PERL interface program to generate web-based input forms on the fly. As such, it is rather simple to modify the database definition and to implement controlled vocabularies. This PERL program is a general-purpose utility for inputting and updating data in relational databases; it supports file upload and user-supplied filters of uploaded data. Joining related tables is implemented in JavaScript, allowing this step to be deferred to a later stage; this alleviates the pain of inputting data into a multi-table database and promotes collaborative data input among several teams. Pathological findings, clinical laboratory parameters, demographic information, and environmental factors are also collected and stored in a separate database, and the same PERL program facilitated developing that database and its user interface.
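
The key design idea here is metadata-driven form generation: field descriptions live in the database, and the interface program reads them to build input forms at request time. The snippet below is a hypothetical Python rendition of that idea (the original is a PERL program whose code is not shown); the metadata table, column names, and example vocabularies are assumptions.

```python
import sqlite3

# Hypothetical metadata table describing the fields of each data table;
# in the described design this lives in the same relational database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE field_description (
    table_name   TEXT,
    column_name  TEXT,
    label        TEXT,
    input_type   TEXT,      -- 'text', 'number', 'select', ...
    vocabulary   TEXT       -- comma-separated controlled vocabulary, if any
);
INSERT INTO field_description VALUES
    ('specimen', 'hospital',   'Collecting hospital', 'select', 'H1,H2,H3,H4,H5,H6'),
    ('specimen', 'tissue',     'Tissue type',         'select', 'stomach,liver'),
    ('specimen', 'spot_count', 'Number of spots',     'number', NULL);
""")

def render_form(table_name: str) -> str:
    """Build an HTML input form for `table_name` from the stored metadata."""
    rows = conn.execute(
        "SELECT column_name, label, input_type, vocabulary "
        "FROM field_description WHERE table_name = ?", (table_name,))
    fields = []
    for column, label, input_type, vocab in rows:
        if input_type == "select" and vocab:
            options = "".join(f"<option>{v}</option>" for v in vocab.split(","))
            fields.append(f"<label>{label}<select name='{column}'>{options}</select></label>")
        else:
            fields.append(f"<label>{label}<input type='{input_type}' name='{column}'></label>")
    return f"<form method='post' action='/{table_name}'>" + "".join(fields) + "</form>"

print(render_form("specimen"))
```

Because the form is rebuilt from metadata on every request, adding a column or a controlled vocabulary only requires updating the metadata rows, which matches the abstract's point that the database definition is easy to modify.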

Proteomics Data Analysis using Representative Database

  • Kwon, Kyung-Hoon;Park, Gun-Wook;Kim, Jin-Young;Park, Young-Mok;Yoo, Jong-Shin
    • Bioinformatics and Biosystems
    • /
    • v.2 no.2
    • /
    • pp.46-51
    • /
    • 2007
  • In proteomics research using mass spectrometry, the protein database search identifies proteins from the peptide sequences that best match the tandem mass spectra, and the protein sequence database has been a powerful knowledge base for this identification. However, as protein sequence information accumulates, the database becomes huge, and it becomes hard to consider all protein sequences in the search because doing so consumes too much computing time. For high-throughput proteome analysis, non-redundant refined databases such as the IPI human database of the European Bioinformatics Institute are usually used. While a non-redundant database returns search results quickly, it misses the variation among protein sequences. In this study, we approach the proteomics data from the standpoint of protein similarity and use a network analysis tool to build a new analysis method. This method should save computing time in the database search while keeping the sequence variation needed to catch modified peptides.
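
The abstract does not give the selection procedure, so the following is only an assumed illustration of the general idea of a representative database: group sequences by pairwise similarity and keep one representative per group, while recording the members so their variation is not lost. The similarity measure, threshold, and toy sequences are placeholders, not the paper's network analysis method.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude sequence similarity in [0, 1]; a stand-in for a real aligner."""
    return SequenceMatcher(None, a, b).ratio()

def build_representative_db(sequences, threshold=0.9):
    """Greedy clustering: each sequence joins the first representative it
    resembles; otherwise it becomes a new representative. Returns a mapping
    {representative: [member sequences]} so variants remain traceable."""
    clusters = {}
    for seq in sequences:
        for rep in clusters:
            if similarity(rep, seq) >= threshold:
                clusters[rep].append(seq)
                break
        else:
            clusters[seq] = [seq]
    return clusters

# Toy peptide-like strings standing in for protein sequences.
toy = ["MKTAYIAKQR", "MKTAYIAKQK", "MKTAYIAKQR", "GAVLIMCFYW"]
reps = build_representative_db(toy)
print(f"{len(reps)} representatives for {len(toy)} sequences")
```

Searching only the representatives reduces computing time, while the stored member lists preserve the sequence variation that a flat non-redundant database would drop.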
