• Title/Summary/Keyword: Network cache

Search Results: 7

Cache Table Management for Effective Label Switching (효율적인 레이블 스위칭을 위한 캐쉬 테이블 관리)

  • Kim, Nam-Gi;Yoon, Hyun-Soo
    • Journal of KIISE: Information Networking / v.28 no.2 / pp.251-261 / 2001
  • Internet traffic has been growing exponentially for some time, and this growth is beginning to stress current-day routers. Switching technology, however, offers much higher performance, so label switching networks, which combine IP routing with switching technology, have emerged. In data-driven label switching in particular, flow classification and cache table management are needed. Flow classification divides packets into switching and non-switching packets, and cache table management maintains the cache table that holds the information used for flow classification and label switching. Cache table management affects the performance of a label switching network as much as flow classification does: a bigger cache table lets more packets be switched and keeps setup costs lower, but the cache is limited by local router resources. For that reason, cache replacement schemes for efficient cache table management need to be studied against Internet traffic as characterized by users. In this paper, we propose several cache replacement schemes for label switching networks. First, with no limit on the router's switching capacity, we examine FIFO (First In First Out), LFC (Least Flow Count), and LRU (Least Recently Used), and propose the priority LRU and weighted priority LRU schemes. Second, with a limit on the router's switching capacity, we examine LFC-LFC, LFC-LRU, LRU-LFC, and LRU-LRU, and propose the LRU-weighted LRU scheme. In our experiments, weighted priority LRU performed best without the capacity limit and LRU-weighted LRU performed best with it. A minimal sketch of the LRU baseline follows this entry.

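As a rough illustration of the LRU baseline the authors start from (not their priority or weighted variants), the following Python sketch keeps a fixed-size flow table and evicts the least recently used entry. The class and method names are hypothetical, introduced only for this example.

```python
from collections import OrderedDict

class FlowCacheTable:
    """Fixed-size cache table of flows eligible for label switching.

    Eviction follows LRU: the flow that has gone longest without a
    packet is removed when the table is full. The paper's priority and
    weighted variants would replace the eviction choice below with one
    that also weighs flow class or observed traffic.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()  # flow_id -> label

    def lookup(self, flow_id):
        """Return the label for a flow, refreshing its recency."""
        if flow_id not in self.table:
            return None             # miss: packet takes the routed path
        self.table.move_to_end(flow_id)
        return self.table[flow_id]

    def insert(self, flow_id, label):
        """Install a flow; evict the least recently used one if full."""
        if flow_id in self.table:
            self.table.move_to_end(flow_id)
        elif len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict the LRU entry
        self.table[flow_id] = label
```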

Design and Implementation of an In-Memory File System Cache with Selective Compression (대용량 파일시스템을 위한 선택적 압축을 지원하는 인-메모리 캐시의 설계와 구현)

  • Choe, Hyeongwon;Seo, Euiseong
    • Journal of KIISE / v.44 no.7 / pp.658-667 / 2017
  • The demand for large-scale storage systems has continued to grow due to the emergence of multimedia, social-network, and big-data services. In order to improve response time and reduce the load of such large-scale storage systems, DRAM-based in-memory cache systems are becoming popular. However, the high cost of DRAM severely restricts their capacity. While compressing cache entries has been proposed to deal with this capacity limitation, compression and decompression, which are technically difficult to parallelize, induce significant processing overhead and in turn lengthen response times. A selective compression scheme is proposed in this paper for in-memory file system caches that rapidly estimates the compression ratio of incoming cache entries from their Shannon entropies and compresses only entries with low compression ratios, i.e., those expected to compress well. In addition, the design and implementation of an in-kernel in-memory file system cache with the proposed selective compression scheme is described. The evaluation showed that the proposed scheme reduced benchmark execution time by approximately 18% compared to the conventional non-compressing in-memory cache scheme. It also provided a cache hit ratio similar to the all-compressing counterpart while reducing execution time by 7.5% through lower compression overhead. In addition, the selective compression scheme reduced the CPU time used for compression by 28% compared to the all-compressing scheme. A sketch of the entropy test follows this entry.
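
The entropy test at the heart of the scheme can be sketched as follows. This is a minimal illustration: `ENTROPY_THRESHOLD` is an assumed cutoff rather than the paper's calibrated value, and zlib stands in for whatever compressor the in-kernel implementation uses.

```python
import math
import zlib

ENTROPY_THRESHOLD = 6.0  # bits/byte; assumed cutoff, not from the paper

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def store_entry(data: bytes):
    """Compress an incoming cache entry only if it looks compressible.

    High-entropy data (already-compressed media, encrypted blobs) is
    stored as-is, saving the CPU time that compression would waste.
    """
    if shannon_entropy(data) < ENTROPY_THRESHOLD:
        return ("deflate", zlib.compress(data))
    return ("raw", data)
```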

40-TFLOPS artificial intelligence processor with function-safe programmable many-cores for ISO26262 ASIL-D

  • Han, Jinho;Choi, Minseok;Kwon, Youngsu
    • ETRI Journal / v.42 no.4 / pp.468-479 / 2020
  • The proposed AI processor architecture provides high throughput for accelerating neural networks while reducing the external memory bandwidth required to process them. To achieve high throughput, the proposed super thread core (STC) includes 128 × 128 nano cores operating at a clock frequency of 1.2 GHz. A function-safe architecture is proposed for fault-tolerant systems such as the electronics of autonomous cars. A general-purpose processor (GPP) core is integrated with the STC to control it and to run the AI algorithm; the GPP has a self-recovering cache and a dynamic lockstep function. The function-safe design was shown to achieve ASIL D, the most stringent fault-tolerance level of the ISO 26262 standard. The entire AI processor was fabricated as a prototype chip in a 28-nm CMOS process. Its peak computing performance is 40 TFLOPS at 1.2 GHz with a supply voltage of 1.1 V, and the measured energy efficiency is 1.3 TOPS/W. The GPP with the function-safe design achieves ISO 26262 ASIL-D with a single-point fault-tolerance rate of 99.64%. A software illustration of the lockstep idea follows this entry.
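
The dynamic lockstep is a hardware mechanism inside the GPP, but the underlying idea can be illustrated in software: run the same deterministic step on two redundant cores and treat any output mismatch as a detected single-point fault. The sketch below is purely illustrative and not the chip's mechanism.

```python
def lockstep_step(core_a, core_b, state, inp):
    """Run one step on two redundant cores and compare the results.

    `core_a` and `core_b` are assumed to be deterministic copies of
    the same computation; in hardware the comparison happens on the
    cores' outputs every cycle rather than per function call.
    """
    out_a = core_a(state, inp)
    out_b = core_b(state, inp)
    if out_a != out_b:
        # A single-point fault was detected; a real system would
        # re-execute or restore from a known-good checkpoint.
        raise RuntimeError("lockstep mismatch: fault detected")
    return out_a
```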

Implementation of Memory Copy Reduction Scheme for Networked Multimedia Service in Linux (리눅스 커널에서 네트워크 멀티미디어 서비스를 위한 메모리 복사 감소 기법 구현)

  • Kim, Jeong-Won
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.2B / pp.129-137 / 2003
  • Multimedia streams such as MPEG retrieve multimedia data continuously because playback is incessant. While these streams need efficient kernel support, the buffer cache mechanism of Unix-like operating systems such as Linux was designed for small files that are requested aperiodically and are not time-critical. For continuous media, the CPU must copy enormous amounts of data from kernel address space to user address space, which imposes a large CPU overhead; this overhead degrades system throughput and makes QoS guarantees impossible. In this paper, we design and implement two memory copy reduction schemes in the Linux kernel: direct I/O and one-copy. Direct I/O skips the Linux buffer cache layer and dramatically reduces CPU memory copy overhead, while one-copy provides a fast disk-to-network data path that avoids copying to user address space. The experimental results show a considerable reduction in CPU overhead and improved throughput. A user-level sketch of the copy-avoiding idea follows this entry.
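
The paper's schemes are in-kernel modifications, but the same copy-avoiding idea survives today in the `sendfile` system call, which Linux exposes to user space. A hedged sketch, assuming a connected TCP socket and a regular file on a local file system:

```python
import os
import socket

def stream_file(path: str, conn: socket.socket, chunk: int = 1 << 20):
    """Send a file over a socket without copying through user space.

    os.sendfile() asks the kernel to move data from the page cache
    straight to the socket, which is the same idea as the paper's
    one-copy disk-to-network path (a Linux-specific user-level sketch,
    not the paper's kernel patch).
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        size = os.fstat(fd).st_size
        while offset < size:
            sent = os.sendfile(conn.fileno(), fd, offset, chunk)
            if sent == 0:
                break              # peer closed the connection
            offset += sent
    finally:
        os.close(fd)
```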

Modeling and Analysis of a Reordering-based Optimistic Cache Consistency Protocol (재배열 기반의 낙관적 캐쉬 일관성 유지 기법의 모델링과 분석)

  • Cho, Sung-Ho;Hwang, Jeong-Hyon
    • Journal of KIISE: Databases / v.28 no.3 / pp.458-467 / 2001
  • Optimistic Two-Phase Locking (O2PL) performs as well as or better than other approaches because it exploits client caching well and has relatively low network bandwidth requirements. However, O2PL leads to unnecessary waits because it cannot commit a transaction until the transaction has obtained all requested locks. Optimistic Concurrency Control (OCC), in turn, tends to cause needless aborts. This paper suggests an efficient optimistic cache consistency protocol that overcomes these shortcomings. Our scheme decides whether to commit or abort a transaction without waiting, and it adopts transaction re-ordering to minimize the abort rate. Despite the re-ordering mechanism, our scheme needs only one version of each data item. Finally, this paper presents a simulation-based analysis showing that our scheme outperforms O2PL and OCC. A hypothetical sketch of validation with re-ordering follows this entry.

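A hypothetical sketch of the re-ordering idea, not the paper's exact protocol: a transaction that read stale versions is aborted only when re-ordering it to serialize before the already-committed writers would be inconsistent.

```python
class Validator:
    """Backward validation with a simple re-ordering escape hatch.

    Assumption-laden sketch: `read_set` maps items to the versions a
    transaction read, `write_set` is the set of items it wrote. A real
    protocol would need further checks (e.g., write-write ordering).
    """

    def __init__(self):
        self.version = {}             # item -> committed version number
        self.committed_reads = set()  # items read by committed txns

    def try_commit(self, read_set, write_set):
        stale = {item for item, v in read_set.items()
                 if self.version.get(item, 0) != v}
        if stale and (write_set & self.committed_reads):
            return False              # genuine conflict: abort
        # Either no stale reads, or the transaction can be re-ordered
        # before the writers it missed, so commit without waiting.
        for item in write_set:
            self.version[item] = self.version.get(item, 0) + 1
        self.committed_reads |= set(read_set)
        return True
```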

Concurrency Control and Consistency Maintenance of Cached Spatial Data in Client-Server Environment (클라이언트-서버 환경에서 캐쉬된 공간 데이터의 동시성 제어 및 일관성 유지 기법)

  • Shin, Young-Sang;Hong, Bong-Hee
    • Journal of KIISE: Databases / v.28 no.3 / pp.512-527 / 2001
  • In a client-server spatial database, it is desirable to cache data on the client side to minimize communication overhead across the network. This paper deals with the concurrency and consistency of map updates in this environment. A client transaction that updates map data is interactive and takes a long time to complete. A map update at one client site may affect other sites' updates because of dependencies between spatial data stored at different sites. Concurrent updates should be propagated to the other clients as well as to the server to keep the map replicated in client caches consistent, and the communication overhead of update propagation should be minimized so as not to lose the benefit of caching. The newly proposed cache region locking, with CR and CX locks, controls update dependencies due to spatial relationships, while CS and COD locks support optimistic detection-based approaches that guarantee the consistency of cached client data. The cooperative update protocol uses these extended locking primitives together with Spatial Relationship-based 2PC (SR-based 2PC). This paper argues that concurrent updates of cached client spatial data can be achieved by choosing between collaborative and independent updates based on spatial relationships. A hypothetical lock-compatibility sketch follows this entry.

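The exact semantics of the CR, CX, CS, and COD lock modes are defined in the paper; the sketch below only illustrates how a client-side lock manager might consult a compatibility table. The table entries are assumed for illustration (shared-style modes compatible with each other, exclusive-style modes with nothing), not taken from the paper.

```python
# Hypothetical compatibility table for the extended lock modes named
# in the abstract. Missing pairs default to incompatible.
COMPATIBLE = {
    ("CR", "CR"): True, ("CR", "CS"): True,
    ("CS", "CR"): True, ("CS", "CS"): True,
}

def can_grant(requested: str, held_modes) -> bool:
    """Grant a lock on a cached region only if it is compatible with
    every lock currently held on a spatially related region."""
    return all(COMPATIBLE.get((requested, h), False) for h in held_modes)
```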

Performance of an Authentication Proxy for Port Based Security Systems (포트레벨 보안을 위한 인증 프록시 시스템의 성능분석)

  • Lee, Dong-Hyun;Lee, Hyun-Woo;Jung, Hae-Won;Yoon, Jong-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.8B / pp.730-737 / 2003
  • In this paper, we present an efficient authentication proxy for IEEE 802.1x systems based on the port-based access control mechanism. An IEEE 802.1x system consists of PC supplicants, a bridge with authentication client functions, and an authentication server. For network security and user authentication, a supplicant who wants to access the Internet must be authorized on the bridge port using the Extensible Authentication Protocol (EAP) over LAN; the bridge relays EAP-over-LAN frames to the authentication server. After several transactions between the supplicant and the server via the bridge, the supplicant is either authorized or rejected. Noting that the transactions between the relaying bridge and the server grow with the number of supplicants in public networks, we propose reducing them by adding an authentication proxy function to the bridge. The proxy caches the supplicant's user ID and password during the first transaction with the server; for subsequent authentications of the same supplicant, the bridge's proxy function handles the transactions from its cache on behalf of the authentication server. Since the main authentication server handles only the first authentication transaction of each supplicant, its processing load is reduced, and the authentication delay experienced by a supplicant decreases compared with the conventional 802.1x system. A sketch of the caching idea follows this entry.
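
The caching idea can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: `relay_to_server` stands in for the real EAP exchange with the authentication server, and a salted hash is stored instead of the raw password as a safer variant of the credential cache the abstract describes.

```python
import hashlib
import os

class AuthProxy:
    """Bridge-side authentication proxy in the spirit of the paper.

    A supplicant's first login is relayed to the real authentication
    server; on success a salted hash of the credentials is cached, so
    repeat logins are answered locally without another server round
    trip.
    """

    def __init__(self, relay_to_server):
        self.relay = relay_to_server
        self.cache = {}            # user_id -> (salt, digest)

    def _digest(self, password: str, salt: bytes) -> str:
        return hashlib.sha256(salt + password.encode()).hexdigest()

    def authenticate(self, user_id: str, password: str) -> bool:
        if user_id in self.cache:                # local cache hit
            salt, digest = self.cache[user_id]
            return self._digest(password, salt) == digest
        if self.relay(user_id, password):        # first time: ask server
            salt = os.urandom(16)
            self.cache[user_id] = (salt, self._digest(password, salt))
            return True
        return False
```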