• Title, Summary, Keyword: 대용량 전송 (large-capacity transmission)


Comparison of Sampling Techniques for Passive Internet Measurement: An Inspection using An Empirical Study (수동적 인터넷 측정을 위한 샘플링 기법 비교: 사례 연구를 통한 검증)

  • Kim, Jung-Hyun;Won, You-Jip;Ahn, Soo-Han
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.45 no.6
    • /
    • pp.34-51
    • /
    • 2008
  • Today, the Internet is part of everyday life, so revealing the characteristics of Internet traffic is an important research theme. However, Internet traffic cannot be handled easily because it usually occupies an enormous volume, which is a serious obstacle to traffic analysis. Many researchers therefore use sampling techniques to reduce the volume of Internet traffic. In this paper, we compare several well-known sampling techniques and propose an efficient sampling scheme. We chose systematic sampling, simple random sampling, and stratified sampling at sampling intensities of 1/10, 1/100, and 1/1000, and examined their effect on traffic volume, entropy analysis, and packet size analysis. Both simple random sampling and count-based systematic sampling are suitable for the general case, whereas time-based systematic sampling exhibits relatively poor results. Stratified sampling on transport-layer protocols (e.g., TCP and UDP) shows superior results. Our analysis suggests that efficient sampling techniques satisfactorily preserve the variation of the traffic stream over time. Entropy analysis tolerates the various sampling techniques well and is suited to detecting anomalous traffic, although we found that a drop in traffic volume caused by a bottleneck can induce incorrect entropy results. Finally, the packet size distribution tolerates all of the packet sampling techniques and intensities.
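As an illustration of the three sampling schemes compared above, the following sketch shows count-based systematic, simple random, and protocol-stratified packet sampling at a 1/100 intensity. It is a minimal, hypothetical example; the packet representation and the `proto` field are assumptions, not the authors' implementation.

```python
import random

SAMPLING_INTENSITY = 100  # keep roughly 1 packet in 100 (1/100 intensity)

def systematic_count_sample(packets, n=SAMPLING_INTENSITY):
    """Count-based systematic sampling: keep every n-th packet."""
    return [p for i, p in enumerate(packets) if i % n == 0]

def simple_random_sample(packets, n=SAMPLING_INTENSITY):
    """Simple random sampling: keep each packet with probability 1/n."""
    return [p for p in packets if random.random() < 1.0 / n]

def stratified_sample(packets, n=SAMPLING_INTENSITY):
    """Stratified sampling: sample 1/n within each transport-layer stratum."""
    strata = {}
    for p in packets:
        strata.setdefault(p["proto"], []).append(p)   # strata by protocol, e.g. "TCP", "UDP"
    sampled = []
    for stratum in strata.values():
        sampled.extend(systematic_count_sample(stratum, n))
    return sampled

# Example: packets as dicts carrying a transport-protocol tag and a size in bytes.
packets = [{"proto": random.choice(["TCP", "UDP"]), "size": random.randint(40, 1500)}
           for _ in range(10_000)]
print(len(simple_random_sample(packets)), len(stratified_sample(packets)))
```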

Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.1
    • /
    • pp.726-734
    • /
    • 2015
  • Relational databases, which manage data through a fixed structure, are currently the most widely used technology for data management. However, in a relational database, service becomes slower as the amount of data increases because of constraints on the read and write operations used to store and query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to keep operating smoothly. In this paper, in order to improve web information services that slow down as data accumulates in a relational database, we implemented a model that extracts large amounts of data quickly and safely for users by sending the data to the Hadoop Distributed File System (HDFS), unifying and reconstructing it, and then processing the HDFS files. We implemented our model in a web-based civil affairs system that stores image files, i.e., unstructured data. The proposed system's data processing was found to be 0.4 s faster than that of a relational database system. Thus, a Hadoop-based big data processing technique can support web information services that must process large amounts of data, as conventional relational databases do. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that need fast information processing in organizations whose conventional relational databases are growing too large to handle efficiently.
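A common way to run the kind of parallel processing of HDFS files described above is Hadoop Streaming, where the map and reduce steps are plain scripts reading standard input. The sketch below is a generic, hypothetical example (counting records per service type); it is not the paper's implementation, and the tab-separated field layout is an assumption.

```python
#!/usr/bin/env python3
# mapper.py -- emits "service_type<TAB>1" for each tab-separated input record.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if fields and fields[0]:
        print(f"{fields[0]}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums the counts per key (Hadoop sorts the mapper output by key).
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        if current_key is not None:
            print(f"{current_key}\t{count}")
        current_key, count = key, 0
    count += int(value)
if current_key is not None:
    print(f"{current_key}\t{count}")
```

Scripts like these would typically be run with the standard Hadoop Streaming jar against files already copied into HDFS (e.g., with `hdfs dfs -put`).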

A Study on the Deterioration Process of 22kV Power Cables in Operation (운전 중인 상태에 있는 22kV 전송선로 케이블의 열화 과정해석에 대한 연구)

  • Lee, Kwan-Woo;Um, Kee-Hong
    • The Journal of The Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.127-133
    • /
    • 2013
  • As an essential part of modern industrial society, demand for electric power energy is constantly increasing along with the development of science and technology. To meet this demand, power facilities that can handle higher voltages and larger capacities are required, and to produce and supply electric power on a sound basis those facilities must operate reliably. To prevent disasters in advance, high-quality facilities must be manufactured and maintained continuously; when power facilities cause accidents, the result is a huge national loss. Since power facilities play a pivotal role in the key industries of the national infrastructure, they should remain in stable operation for a long time, with accidents prevented in advance. The lifetime of a power cable is rated at 30 years at the time of manufacture, but in the field, cable accidents occur 8 to 12 years after the start of operation, resulting in heavy property losses. In this paper, we identify the cause and the process of deterioration of 22 kV cable systems in operation. The result is that the deterioration process does not follow the Weibull distribution alone; rather, thermal deterioration is followed by Weibull-distributed deterioration, and the cable is then destroyed by the partial discharge resulting from that Weibull-distributed deterioration.
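For reference, the Weibull failure model mentioned in the abstract is conventionally written with a shape parameter and a scale parameter; the notation below is the standard textbook form, not an equation taken from the paper itself.

```latex
% Weibull lifetime model (shape \beta, scale \eta):
F(t) = 1 - \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right], \qquad
\lambda(t) = \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}, \qquad t \ge 0,
```

where $F(t)$ is the probability that the cable has failed by time $t$ and $\lambda(t)$ is the hazard (instantaneous failure) rate; $\beta > 1$ corresponds to wear-out deterioration such as that described above.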

A Study on Element Features and Research Frames of Game Trailers (게임 트레일러의 유형 및 산업적 연구 프레임에 관한 고찰)

  • Kwon, Jae-Woong
    • Cartoon and Animation Studies
    • /
    • /
    • pp.187-222
    • /
    • 2015
  • The quantitative increase and qualitative development of the game industry have led to fierce competition, and game companies are struggling to find better ways to promote their games. The game trailer is one of the key means of publicizing diverse games by showing visual images directly. There are three reasons why the game trailer has come into the spotlight: the rapid growth of Internet speeds capable of handling large files, the remarkable development of visual image quality approaching that of digital movies, and the advent of video websites such as YouTube that host huge numbers of videos regardless of type and size. However, there is not yet enough research on game trailers, because their use as a marketing tool is still at an early stage. This research therefore focuses on characterizing game trailers in a way that supports practical market analysis. First, it shows that game trailers can be divided by display category, style, and content type. Second, it identifies the component parts of game trailers, divided into content factors such as characters, backgrounds, and events, and promotional factors such as the title, production company name, and distribution company name. Third, it explores research frames that would be needed to analyze marketing strategies, the effects of game trailers, production pipelines, and so on. These categorizations should be useful for producing game trailers efficiently and utilizing them effectively.

A Framework of N-Screen Session Manager based N-Screen Service using Cloud Computing in Thin-Client Environment (씬클라이언트 환경에서 클라우드 컴퓨팅을 이용한 N-Screen 세션 관리 기반의 N-Screen 서비스 프레임워크)

  • Alsaffar, Aymen Abdullah;Song, Biao;Hassan, Mohammad Mehedi;Huh, Eui-Nam
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.21-32
    • /
    • 2012
  • We develop the architecture of a virtual aggregation gateway (VAG) that enables composite application streaming based on N-Screen-as-a-Service (NaaS) using cloud computing in a thin-client environment. We also discuss the server computing burden that arises in the large-scale multi-client case when screens are shared through composite application streaming over the Internet. In particular, we propose an efficient N-Screen Session Manager framework that manages all of the media signaling necessary to deliver the requested content, and that provides users with a method for playing back multimedia content (TV dramas, advertisements, dialogs, etc.) that is not considered in other research papers. The objectives of the N-Screen Session Manager are to (1) manage the status of all communication sessions, (2) handle received requests and replies, (3) allow users to play back multimedia content at any time on a variety of devices for screen sharing, and (4) allow users to transfer an ongoing communication session from one device to another. Furthermore, we discuss the major security issues that occur in the Session Initiation Protocol, as well as minimizing the delay resulting from session initiation (playback or session transfer).
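The four session-manager objectives listed above map naturally onto a small session registry. The sketch below is a deliberately simplified, hypothetical illustration of that idea (in-memory state, no SIP signaling); the class and method names are assumptions, not the framework described in the paper.

```python
import uuid
from dataclasses import dataclass

@dataclass
class Session:
    """Minimal record of one playback / screen-sharing session."""
    session_id: str
    user: str
    device: str
    content: str
    status: str = "active"              # objective (1): track session status

class SessionManager:
    def __init__(self):
        self.sessions: dict[str, Session] = {}

    def start_playback(self, user: str, device: str, content: str) -> Session:
        """Objective (3): start playback of some content on a given device."""
        s = Session(str(uuid.uuid4()), user, device, content)
        self.sessions[s.session_id] = s
        return s

    def handle_request(self, session_id: str, action: str) -> str:
        """Objective (2): handle a request/reply against an existing session."""
        s = self.sessions.get(session_id)
        if s is None:
            return "404 session not found"
        if action == "pause":
            s.status = "paused"
        elif action == "resume":
            s.status = "active"
        return f"200 OK ({s.status})"

    def transfer(self, session_id: str, new_device: str) -> Session:
        """Objective (4): move an ongoing session to another device."""
        s = self.sessions[session_id]
        s.device = new_device
        return s

# Usage: start a session on a phone, then hand it over to a TV.
mgr = SessionManager()
s = mgr.start_playback("alice", "phone", "drama-ep1")
mgr.transfer(s.session_id, "tv")
```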

Algorithm and experimental verification of underwater acoustic communication based on passive time reversal mirror in multiuser environment (다중송신채널 환경에서 수동형 시역전에 기반한 수중음향통신 알고리즘 및 실험적 검증)

  • Eom, Min-Jeong;Oh, Sehyun;Kim, J.S.;Kim, Sea-Moon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.35 no.3
    • /
    • pp.167-174
    • /
    • 2016
  • In underwater acoustic communication it is difficult to increase communication capacity because the carrier frequency is lower than that of terrestrial radio communication, and the characteristics of the ocean medium limit the signal bandwidth. As high transmission speeds and large transmission capacities have become necessary within this limited frequency range, studies on MIMO (Multiple Input Multiple Output) communication have been actively carried out. The performance of MIMO communication is lower than that of SIMO (Single Input Multiple Output) communication because cross-talk between the multiple transmitters adds to the inter-symbol interference caused by channel characteristics such as delay spread and Doppler spread. Although an adaptive equalizer that accounts for multiple channels can mitigate the influence of the cross-talk, such algorithms are usually complicated. In this paper, the time reversal mirror technique, which has a self-equalization property, is applied to simplify the compensation algorithm and reduce the cross-talk, improving communication performance when the signals transmitted over two channels are received simultaneously and interfere at a single receiver. In addition, the performance of MIMO communication based on the time reversal mirror is verified using data from SAVEX15 (Shallow-water Acoustic Variability Experiment 2015), conducted in the northern East China Sea in May 2015.
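The self-equalization property of passive time reversal referred to above is usually expressed as follows; this is the standard textbook formulation of passive phase conjugation, not an equation quoted from the paper.

```latex
% Passive time reversal over an N-element receive array:
z(t) = \sum_{i=1}^{N} r_i(t) * p_i(-t)
     \approx s(t) * \underbrace{\sum_{i=1}^{N} h_i(t) * h_i(-t)}_{\approx\,\delta(t)},
```

where $s(t)$ is the data waveform, $h_i(t)$ is the channel impulse response to array element $i$, $r_i(t) = s(t) * h_i(t)$ is the received data signal, and $p_i(t)$ is the received probe signal (approximately $h_i(t)$ when the probe pulse is short). Summing the correlations over the array compresses the multipath, so only a light residual equalizer is needed.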

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze it. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage requirements or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to add nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and per type of aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is determined through a MongoDB insert performance evaluation over various chunk sizes.
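To make the flexible-schema argument concrete, the snippet below shows how heterogeneous log records can be inserted into MongoDB without a predefined schema, using the standard PyMongo driver. The connection string, database, and collection names are hypothetical; this illustrates the general technique, not the paper's log collector module.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Hypothetical connection to a (possibly sharded) MongoDB cluster.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["client_business"]

# Unstructured records: each document may carry different fields.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "branch": "A01",
     "event": "transfer", "amount": 150000},
    {"ts": datetime.now(timezone.utc), "branch": "B07",
     "event": "login_failure", "reason": "bad_otp", "retries": 3},
])

# Aggregate per event type, e.g. as input for a graph generator.
for row in logs.aggregate([{"$group": {"_id": "$event", "count": {"$sum": 1}}}]):
    print(row["_id"], row["count"])
```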

Multiplexing of UHDTV Based on MPEG-2 TS (MPEG-2 TS 기반의 UHDTV 다중화)

  • Jang, Euy-Doc;Park, Dong-Il;Kim, Jae-Gon;Lee, Eung-Don;Cho, Suk-Hee;Choi, Jin-Soo
    • Journal of Broadcast Engineering
    • /
    • v.15 no.2
    • /
    • pp.205-216
    • /
    • 2010
  • In this paper, a method of MPEG-2 Transport Stream (TS) multiplexing for Ultra HDTV (UHDTV), together with its design and implementation as a software tool, is described. In practice, a UHD video may be divided into several HD videos, each encoded in parallel, so the multiple bitstreams encoding the HD videos must be synchronized and multiplexed for transmission and storage of the UHD video. In this paper it is assumed that four HD videos, obtained by spatially partitioning a UHD picture, are encoded with H.264/AVC and that two 5.0-channel audio streams are encoded with AC-3; hence four H.264/AVC elementary streams (ESs) and two AC-3 ESs are considered in the TS multiplexing of UHD. For the carriage of H.264/AVC and AC-3 over MPEG-2 TS, PES packetization and TS multiplexing are designed and implemented based on the extensions of the MPEG-2 Systems specification and the ATSC digital audio compression standard, respectively. The implemented UHD TS multiplexing tool emulates real-time hardware operation in time units corresponding to the transmission duration of one TS packet at a given TS rate. In particular, in order to satisfy the timing model, the buffers defined in the TS System Target Decoder (T-STD) are monitored and their status is taken into account when scheduling TS multiplexing. For UHD multiplexing, two multiplexing structures, UHD re-multiplexing and UHD program multiplexing, are implemented and their strengths and weaknesses are investigated. The developed UHD TS multiplexing tool is tested and verified in terms of syntax and semantics conformance and functionality using a commercial analyzer and real-time presentation tools.
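For readers unfamiliar with the TS packet layout that such a multiplexer schedules, the helper below builds the fixed 188-byte packet (sync byte 0x47, 13-bit PID, 4-bit continuity counter) defined by MPEG-2 Systems. It is a simplified, hypothetical sketch: no adaptation field, PCR, or PES packetization is handled, and it is not the tool described in the paper.

```python
TS_PACKET_SIZE = 188  # fixed MPEG-2 TS packet length in bytes

def make_ts_packet(pid: int, payload: bytes, continuity_counter: int,
                   payload_unit_start: bool = False) -> bytes:
    """Build one 188-byte TS packet carrying up to 184 payload bytes."""
    assert 0 <= pid <= 0x1FFF and len(payload) <= 184
    header = bytes([
        0x47,                                                 # sync byte
        (0x40 if payload_unit_start else 0x00) | (pid >> 8),  # PUSI flag + PID high 5 bits
        pid & 0xFF,                                           # PID low 8 bits
        0x10 | (continuity_counter & 0x0F),                   # payload-only + continuity counter
    ])
    stuffing = b"\xFF" * (184 - len(payload))                 # simplified stuffing (no adaptation field)
    return header + payload + stuffing

# Round-robin scheduling of four hypothetical video ESs onto PIDs 0x100-0x103.
counters = {0x100 + i: 0 for i in range(4)}
for pid in counters:
    pkt = make_ts_packet(pid, b"\x00" * 184, counters[pid])
    counters[pid] = (counters[pid] + 1) & 0x0F
    assert len(pkt) == TS_PACKET_SIZE
```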

ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal;Chung Ki-Dong
    • The KIPS Transactions:PartC
    • /
    • v.13C no.1
    • /
    • pp.83-94
    • /
    • 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted by NIST as the AES (Advanced Encryption Standard) in 2001. ISO 9160 specifies the requirements for physical-layer data processing in encryption/decryption. To demonstrate the ATM cell encipherment method, we implemented ATM data encipherment equipment that satisfies the requirements of ISO 9160 and verified the encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in 64-bit blocks with a 64-bit key (56 effective bits), whereas the Rijndael algorithm processes data in 128-bit blocks with a key length of 128, 192, or 256 bits, so it is more flexible for high-bit-rate data processing and offers greater cryptographic strength than DES. For real-time encryption of the high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of the serial UNI cell is detected by the CRC method, and for a user data cell, the 48-octet (384-bit) payload is converted to parallel form and transferred to three Rijndael encipherment modules, each handling one 128-bit block. After encryption is completed, the header stored in a buffer is attached to the enciphered payload and the cell is retransmitted in cell format. At the receiving end, the cell boundary is detected by the CRC method and the payload type is determined. If the payload type is user data, the payload is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption; in the case of a maintenance cell, the payload is extracted without decryption.
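The 48-octet payload handling described above (three independent 128-bit Rijndael blocks) can be illustrated in software as below. This is a hypothetical sketch using the `cryptography` package with per-block AES-128 encryption; it mirrors only the block-splitting idea and is not the FPGA design or key management of the paper.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encipher_atm_payload(payload48: bytes, key: bytes) -> bytes:
    """Encrypt a 48-octet ATM cell payload as three independent 128-bit AES blocks."""
    assert len(payload48) == 48 and len(key) in (16, 24, 32)
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    blocks = [payload48[i:i + 16] for i in range(0, 48, 16)]   # 3 x 128-bit blocks
    return b"".join(encryptor.update(b) for b in blocks) + encryptor.finalize()

def decipher_atm_payload(ciphertext48: bytes, key: bytes) -> bytes:
    """Inverse operation at the receiving end."""
    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return decryptor.update(ciphertext48) + decryptor.finalize()

# Usage: a random 128-bit key and a dummy 48-octet user-cell payload.
key = os.urandom(16)
payload = bytes(range(48))
assert decipher_atm_payload(encipher_atm_payload(payload, key), key) == payload
```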

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yang;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.6 no.1
    • /
    • pp.73-85
    • /
    • 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of research and development related to GIS (Geographic Information System) has shifted to real-time mobile GIS for location-based services (LBS). To offer LBS efficiently, there must be a real-time GIS platform that can deal with the dynamic status of moving objects and a location index that can handle the characteristics of location data. Location data can use the same data types as GIS (e.g., points), but the management of location data is very different. Therefore, in this paper, we study a real-time mobile GIS that uses the HBR-tree to manage a large amount of location data efficiently. The real-time mobile GIS developed in this paper consists of the HBR-tree and the real-time GIS platform. The HBR-tree we propose is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are performed within the same hash table in the HBR-tree, so it costs less than other tree-based indexes, and since the HBR-tree uses the same search mechanism as the R-tree, location data can still be searched quickly. The real-time GIS platform consists of a real-time GIS engine extended from a main-memory database system; a middleware that transfers spatial and aspatial data to clients and receives location data from clients; and a mobile client that runs on mobile devices. Finally, this paper describes the performance evaluation conducted with practical tests of the HBR-tree and the real-time GIS engine, respectively.
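The key idea of the index described above, cheap in-place updates via a hash table combined with tree-style spatial search, can be sketched roughly as follows. This is a deliberately simplified, hypothetical illustration (a hash on object ID plus a uniform grid standing in for the R-tree part); it is not the HBR-tree algorithm from the paper.

```python
from collections import defaultdict

class SimpleLocationIndex:
    """Hash-plus-spatial-index sketch: O(1) updates by object ID, grid-based range search."""

    def __init__(self, cell_size: float = 100.0):
        self.cell_size = cell_size
        self.by_id = {}                      # object id -> (x, y), updated in place
        self.grid = defaultdict(set)         # (cx, cy) cell -> set of object ids

    def _cell(self, x: float, y: float):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def update(self, oid: str, x: float, y: float) -> None:
        """Frequent location updates touch only the hash entries involved."""
        if oid in self.by_id:
            self.grid[self._cell(*self.by_id[oid])].discard(oid)
        self.by_id[oid] = (x, y)
        self.grid[self._cell(x, y)].add(oid)

    def range_query(self, xmin, ymin, xmax, ymax):
        """Return ids of objects inside the query rectangle."""
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        result = []
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for oid in self.grid[(cx, cy)]:
                    x, y = self.by_id[oid]
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        result.append(oid)
        return result

# Usage: track two moving objects and query a rectangle.
idx = SimpleLocationIndex()
idx.update("bus-7", 120.0, 80.0)
idx.update("taxi-3", 450.0, 90.0)
print(idx.range_query(0, 0, 200, 200))   # -> ['bus-7']
```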
