• Title, Summary, Keyword: Buffer Cache

Management Technique of Buffer Cache for Rendering Systems (렌더링 시스템을 위한 버퍼캐쉬 관리기법)

  • Shin, Donghee; Bahn, Hyokyung
    • The Journal of The Institute of Internet, Broadcasting and Communication, v.18 no.5, pp.155-160, 2018
  • In this paper, we find that the buffer cache of general-purpose systems does not perform well for rendering software, and we present a new buffer cache management scheme that resolves this problem. To do so, we collected various file I/O traces of rendering software and analyzed their characteristics. From this analysis, we observed that file I/Os in rendering consist of long loops, short loops, random accesses, and write-once accesses. Based on this observation, we present a buffer cache management scheme that allocates cache space to each access type and manages each partition appropriately, thereby improving buffer cache performance by 19% on average and up to 55%.
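
A minimal sketch of the partition-by-access-type idea follows. The four type labels come from the abstract; the partition sizes, the per-partition LRU policy, and all class and function names are illustrative assumptions, not the authors' implementation. (A real loop partition would more likely use MRU-style replacement, since plain LRU degrades on loops larger than the partition; LRU is kept here only for brevity.)

```python
from collections import OrderedDict

class PartitionedBufferCache:
    """Buffer cache split into per-access-type partitions (illustrative)."""

    def __init__(self, capacity_per_type):
        # e.g. {"long_loop": 4, "short_loop": 4, "random": 2, "write_once": 2}
        self.partitions = {t: OrderedDict() for t in capacity_per_type}
        self.capacity = dict(capacity_per_type)

    def access(self, block_id, access_type):
        part = self.partitions[access_type]
        if block_id in part:                  # hit: refresh recency within the partition
            part.move_to_end(block_id)
            return True
        if len(part) >= self.capacity[access_type]:
            part.popitem(last=False)          # evict only within the same partition
        part[block_id] = True                 # miss: fetch block and insert
        return False

# Hypothetical usage: blocks of different access types never evict each other.
cache = PartitionedBufferCache({"long_loop": 4, "short_loop": 4,
                                "random": 2, "write_once": 2})
print(cache.access(10, "long_loop"))   # False (cold miss)
print(cache.access(10, "long_loop"))   # True  (hit)
```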

Dynamic Cache Partitioning Strategy for Efficient Buffer Cache Management (효율적인 버퍼 캐시 관리를 위한 동적 캐시 분할 블록교체 기법)

  • 진재선; 허의남; 추현승
    • Journal of the Korea Society for Simulation, v.12 no.2, pp.35-44, 2003
  • The effectiveness of buffer cache replacement algorithms is critical to the performance of I/O systems. In this paper, we propose a degree of inter-reference gap (DIG) based block replacement scheme that retains the merits of least recently used (LRU) replacement, such as simple implementation and a good cache hit ratio (CHR) for general reference patterns, and improves the CHR further. In the proposed scheme, cache blocks with low DIGs are distinguished from blocks with high DIGs, and the replacement victim is selected among the high-DIG blocks, as is done in the low inter-reference recency set (LIRS) scheme. Thus, by effectively partitioning the cache memory dynamically based on DIGs, the CHR is improved. Trace-driven simulation is employed to verify the superiority of the DIG-based scheme; it shows that performance improves by up to about 175% compared to the LRU scheme and 3% compared to the LIRS scheme for the same traces.
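
A rough sketch of inter-reference-gap-based victim selection is shown below. The threshold split into low-DIG and high-DIG blocks, the bookkeeping of only the last two reference times, and all names are assumptions made for illustration; the paper's actual DIG computation and its LIRS-style stack management are not reproduced.

```python
class DigCache:
    """Sketch of degree-of-inter-reference-gap (DIG) based replacement."""

    def __init__(self, capacity, dig_threshold):
        self.capacity = capacity
        self.threshold = dig_threshold
        self.clock = 0
        self.blocks = {}              # block_id -> (previous_ref_time, last_ref_time)

    def _dig(self, block_id):
        prev, last = self.blocks[block_id]
        # Blocks referenced only once get an infinite gap, i.e. weakest locality.
        return last - prev if prev is not None else float("inf")

    def access(self, block_id):
        self.clock += 1
        if block_id in self.blocks:
            _, last = self.blocks[block_id]
            self.blocks[block_id] = (last, self.clock)
            return True
        if len(self.blocks) >= self.capacity:
            # Pick the victim among high-DIG blocks (weak locality), similar to
            # LIRS evicting from its HIR set; fall back to the oldest block.
            high = [b for b in self.blocks if self._dig(b) >= self.threshold]
            pool = high if high else list(self.blocks)
            victim = min(pool, key=lambda b: self.blocks[b][1])
            del self.blocks[victim]
        self.blocks[block_id] = (None, self.clock)
        return False
```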

An Efficient Algorithm for Restriction on Duplication Caching between Buffer and Disk Caches (버퍼와 디스크 캐시 사이의 중복 캐싱을 제한하는 효율적인 알고리즘)

  • Jung, Soo-Mok
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.10 no.1, pp.95-105, 2006
  • A hard disk, which relies on mechanical operation, is much slower than the processor. Processor speed has grown rapidly with semiconductor technology, but disk speed, bound by mechanical operation, has not kept pace. A buffer cache in main memory and a disk cache in the disk controller have been used in computer systems to bridge the speed gap between the processor and the I/O subsystem. In this paper, an efficient buffer cache and disk cache management scheme is proposed that restricts duplication of disk blocks between the buffer cache and the disk cache. The performance of the proposed algorithm was evaluated by simulation.
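
One way to restrict duplication between the two levels is exclusive caching: a block resides either in the host-side buffer cache or in the controller-side disk cache, and blocks evicted from the buffer cache are demoted downward. The sketch below illustrates that idea under assumed LRU management at both levels; it is not the paper's specific algorithm.

```python
from collections import OrderedDict

class LruCache(OrderedDict):
    """Tiny LRU used for both levels in this sketch."""

    def __init__(self, capacity):
        super().__init__()
        self.capacity = capacity

    def hit(self, key):
        if key in self:
            self.move_to_end(key)
            return True
        return False

    def insert(self, key):
        evicted = None
        if len(self) >= self.capacity:
            evicted, _ = self.popitem(last=False)
        self[key] = True
        return evicted

def read_block(block_id, buffer_cache, disk_cache):
    if buffer_cache.hit(block_id):
        return "buffer-cache hit"
    if block_id in disk_cache:
        del disk_cache[block_id]             # promote upward, keep no duplicate below
        demoted = buffer_cache.insert(block_id)
        if demoted is not None:
            disk_cache.insert(demoted)       # demote the displaced block downward
        return "disk-cache hit"
    demoted = buffer_cache.insert(block_id)  # true miss: block comes from the platters
    if demoted is not None:
        disk_cache.insert(demoted)
    return "disk read"
```

Because the two caches never hold the same block, their combined capacity covers distinct blocks, which is the point of restricting duplication.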

Design of Cache Memory System for Next Generation CPU (차세대 CPU를 위한 캐시 메모리 시스템 설계)

  • Jo, Ok-Rae; Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications, v.11 no.6, pp.353-359, 2016
  • In this paper, we propose a high-performance L1 cache structure for high-clock CPUs. The proposed cache memory consists of three parts: a direct-mapped cache to support fast access time, a two-way set-associative buffer to reduce the miss ratio, and a way-select table. The most recently accessed data is stored in the direct-mapped cache. If data has a high probability of repeated reference, it is stored into the two-way set-associative buffer when it is replaced from the direct-mapped cache. For high performance and fast access time, only one of the two ways of the set-associative buffer is selectively accessed, based on the way-select table (WST). According to simulation results, access time can be reduced by about 7% and 40% compared with a direct-mapped cache and an Intel i7-6700 with twice the space, respectively.
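
The lookup path described above can be sketched as follows. The indexing, tag handling, and misprediction fallback are assumptions made for illustration; the abstract specifies only that a single way of the buffer is probed according to the way-select table.

```python
class WaySelectedCache:
    """Direct-mapped cache backed by a two-way buffer with a way-select table."""

    def __init__(self, dm_sets, buf_sets):
        self.dm = [None] * dm_sets                          # direct-mapped tags
        self.buf = [[None, None] for _ in range(buf_sets)]  # two-way buffer tags
        self.wst = [0] * buf_sets                           # predicted way per set
        self.dm_sets, self.buf_sets = dm_sets, buf_sets

    def access(self, addr):
        dm_idx = addr % self.dm_sets
        if self.dm[dm_idx] == addr:
            return "direct-mapped hit"
        buf_idx = addr % self.buf_sets
        way = self.wst[buf_idx]
        if self.buf[buf_idx][way] == addr:        # probe only the predicted way
            result = "buffer hit (predicted way)"
        elif self.buf[buf_idx][1 - way] == addr:  # second probe on a misprediction
            self.wst[buf_idx] = 1 - way
            result = "buffer hit (other way)"
        else:
            result = "miss"
        # Promote the block into the direct-mapped cache; in the real design the
        # displaced block with repeated-reference history moves into the buffer
        # (omitted here for brevity).
        self.dm[dm_idx] = addr
        return result
```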

Design and Performance Evaluation of Expansion Buffer Cache (확장 버퍼 캐쉬의 설계 및 성능 평가)

  • Hong, Won-Kee
    • The KIPS Transactions: Part A, v.11A no.7, pp.489-498, 2004
  • The VLIW processor is considered an appropriate processor for embedded systems, providing high performance and low power consumption due to its simple hardware structure. Unfortunately, the VLIW processor often suffers from high memory access latency due to the variable length of I-packets, which consist of independent instructions to be issued in parallel. Because of the variable I-packet length, some I-packets must be placed across two cache blocks; these are called straddle I-packets, and two cache accesses are required to fetch them. In this paper, an expansion buffer cache is proposed to improve not only the instruction fetch bandwidth but also the power consumption of the I-cache, at moderate hardware cost. The expansion buffer cache adds a small expansion buffer, containing the spill-over fraction of a straddle packet, alongside the main cache to eliminate the additional cache accesses caused by straddle I-packets. With a great reduction in the cache accesses due to straddle packets, the expansion buffer cache achieves a 5~9% improvement over conventional I-caches in the Delay·Power·Area metric.
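
The benefit can be illustrated by counting fetch accesses under an assumed 32-byte block size and a hypothetical I-packet trace; the model simply treats every straddle fetch as satisfiable in one access when the expansion buffer is present, which is an assumption about the mechanism rather than a reproduction of the paper's design.

```python
BLOCK = 32  # bytes per I-cache block (assumed)

def fetch_accesses(packets, use_expansion_buffer):
    """packets: list of (start_address, length_in_bytes); returns cache access count."""
    accesses = 0
    for start, length in packets:
        first_block = start // BLOCK
        last_block = (start + length - 1) // BLOCK
        if first_block == last_block:
            accesses += 1            # the I-packet fits in one cache block
        elif use_expansion_buffer:
            accesses += 1            # spill-over words come from the expansion buffer
        else:
            accesses += 2            # straddle I-packet: two cache block reads
    return accesses

trace = [(0, 16), (16, 24), (40, 16), (56, 12)]            # hypothetical I-packet trace
print(fetch_accesses(trace, use_expansion_buffer=False))   # 6
print(fetch_accesses(trace, use_expansion_buffer=True))    # 4
```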

Design of an Asynchronous Data Cache with FIFO Buffer for Write Back Mode (Write Back 모드용 FIFO 버퍼 기능을 갖는 비동기식 데이터 캐시)

  • Park, Jong-Min; Kim, Seok-Man; Oh, Myeong-Hoon; Cho, Kyoung-Rok
    • The Journal of the Korea Contents Association, v.10 no.6, pp.72-79, 2010
  • In this paper, we propose a data cache architecture with a write buffer for a 32-bit asynchronous embedded processor. The data cache consists of a CAM and data memory. It accelerates data transfer between the processor and the main memory, which improves processor performance. The proposed data cache has 8 KB of cache memory. The cache uses 4-way set-associative mapping with a line size of 4 words (16 bytes) and a pseudo-LRU replacement algorithm for data replacement in the memory. A dirty register and a write buffer are used for the write policy of the cache. The designed data cache is synthesized to a gate-level design using a 0.13-µm process. Its average hit rate is 94%, and the system performance is improved by 46.53%. The proposed data cache with a write buffer is well suited for a 32-bit asynchronous processor.
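
The pseudo-LRU policy mentioned above can be illustrated with the common tree-based variant for one 4-way set; the three-bit encoding below is the standard textbook formulation, not necessarily the exact circuit used in this design.

```python
class TreePlru4:
    """Three PLRU bits for one 4-way set; a 0 bit steers the victim search left."""

    def __init__(self):
        self.root = 0    # 0 -> victim among ways {0, 1}, 1 -> among ways {2, 3}
        self.left = 0    # 0 -> victim is way 0, 1 -> victim is way 1
        self.right = 0   # 0 -> victim is way 2, 1 -> victim is way 3

    def touch(self, way):
        # Point every bit on the accessed path away from the accessed way.
        if way in (0, 1):
            self.root = 1
            self.left = 1 if way == 0 else 0
        else:
            self.root = 0
            self.right = 1 if way == 2 else 0

    def victim(self):
        if self.root == 0:
            return 0 if self.left == 0 else 1
        return 2 if self.right == 0 else 3

plru = TreePlru4()
for way in (0, 1, 2):
    plru.touch(way)
print(plru.victim())   # prints 0: tree PLRU only approximates LRU (exact LRU would pick way 3)
```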

Cache memory system for high performance CPU with 4GHz (4Ghz 고성능 CPU 위한 캐시 메모리 시스템)

  • Jung, Bo-Sung; Lee, Jung-Hoon
    • Journal of the Korea Society of Computer and Information, v.18 no.2, pp.1-8, 2013
  • In this paper, we propose a high-performance L1 cache structure for a high-clock CPU running at 4 GHz. The proposed cache memory consists of three parts: a direct-mapped cache to support fast access time, a two-way set-associative buffer to exploit temporal locality, and a buffer-select table. The most recently accessed data is stored in the direct-mapped cache. If data has a high probability of repeated reference, it is selectively stored into the two-way set-associative buffer when it is replaced from the direct-mapped cache. For high performance and low power consumption, only one of the two ways of the set-associative buffer is selectively accessed, based on the buffer-select table (BST). According to simulation results, the Energy*Delay product can be improved by about 45%, 70%, and 75% compared with a direct-mapped cache, a four-way set-associative cache, and a victim cache with twice the space, respectively.

A Design and Implementation on Large Data File Management Using Buffer Cache and Virtual Memory File (버퍼 캐쉬와 가상메모리 파일을 이용한 대형 데이터화일의 처리방법 설계 및 구현)

  • 김병철; 신병석; 조동섭; 황희영
    • The Transactions of the Korean Institute of Electrical Engineers, v.41 no.7, pp.784-792, 1992
  • In this paper, we design and implement a method that allows application programs to handle large data files in the DOS environment. In this method, we use extended memory and the hard disk as a data buffer, and a part of conventional DOS memory as a buffer cache, which allows the application program to use extended memory and the hard disk transparently. Using the buffer cache also yields some speed improvement for the application program.
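
The transparency aspect can be sketched as a read-through page cache in front of a large file: the application calls one read routine and never manages the buffer itself. The page size, cache size, and class names are arbitrary choices for illustration, not the original DOS implementation.

```python
from collections import OrderedDict

PAGE_SIZE = 4096   # assumed page size
CACHED_PAGES = 8   # assumed cache size, in pages

class LargeFileReader:
    """Read-through page cache: callers never manage the buffer themselves."""

    def __init__(self, path):
        self.f = open(path, "rb")
        self.cache = OrderedDict()   # page_no -> bytes

    def read(self, offset, length):
        out = bytearray()
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        for page_no in range(first, last + 1):
            page = self._page(page_no)
            lo = max(offset, page_no * PAGE_SIZE) - page_no * PAGE_SIZE
            hi = min(offset + length, (page_no + 1) * PAGE_SIZE) - page_no * PAGE_SIZE
            out += page[lo:hi]
        return bytes(out)

    def _page(self, page_no):
        if page_no in self.cache:            # served from the in-memory buffer cache
            self.cache.move_to_end(page_no)
            return self.cache[page_no]
        self.f.seek(page_no * PAGE_SIZE)     # otherwise go to the slow backing file
        data = self.f.read(PAGE_SIZE)
        if len(self.cache) >= CACHED_PAGES:
            self.cache.popitem(last=False)   # simple LRU eviction
        self.cache[page_no] = data
        return data
```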

Buffer Cache Management of Smartphones Exploiting Write-Only-Once Characteristics (1회성 쓰기 참조 특성을 고려하는 스마트폰 버퍼캐쉬 관리 기법)

  • Kim, Dohee; Bahn, Hyokyung
    • The Journal of The Institute of Internet, Broadcasting and Communication, v.15 no.6, pp.129-134, 2015
  • This paper analyzes the file access characteristics of smartphone apps and finds that a large portion of file writes are performed only once. Based on this observation, we present a new buffer cache management scheme that considers this characteristic. A buffer cache improves storage performance by maintaining hot file data in memory, thereby servicing subsequent requests without storage accesses. However, it must flush modified data to storage in order to survive system crashes. The proposed scheme evicts cache data that has been written only once upon flush, thus improving cache space utilization. Simulation experiments show that the proposed scheme improves the cache hit ratio by 5-33% and reduces power consumption by 27-92%.
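
A minimal sketch of the flush-time policy follows, assuming a per-block write counter; the smartphone cache organization and the crash-consistency machinery are not reproduced.

```python
class WriteOnceAwareCache:
    """Buffer cache that evicts write-once blocks at flush time (sketch)."""

    def __init__(self):
        self.blocks = {}   # block_id -> {"dirty": bool, "writes": int}

    def write(self, block_id):
        entry = self.blocks.setdefault(block_id, {"dirty": False, "writes": 0})
        entry["dirty"] = True
        entry["writes"] += 1

    def cached(self, block_id):
        return block_id in self.blocks

    def flush(self, storage_writer):
        for block_id, entry in list(self.blocks.items()):
            if entry["dirty"]:
                storage_writer(block_id)     # persist modified data for crash safety
                entry["dirty"] = False
            if entry["writes"] == 1:
                del self.blocks[block_id]    # write-once block: reclaim the space

# Hypothetical usage: a block written once is flushed and evicted,
# a repeatedly written block stays resident.
cache = WriteOnceAwareCache()
cache.write("photo.jpg:0")
cache.write("db.sqlite:0")
cache.write("db.sqlite:0")
cache.flush(lambda block: None)
print(cache.cached("photo.jpg:0"), cache.cached("db.sqlite:0"))   # False True
```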

Hybrid Buffer Replacement Scheme Considering Reference Pattern in Multimedia Storage Systems (멀티미디어 저장 시스템에서 참조 유형을 고려한 혼성 버퍼 교체 기법)

  • 류연승
    • Journal of Korea Multimedia Society, v.5 no.1, pp.47-56, 2002
  • Previous buffer cache schemes for multimedia storage systems exploited only the sequential references of multimedia files and did not consider looping references. However, in some video applications such as foreign language learning, users mark a scene as a loop area and the application then automatically plays back the scene several times. In this paper, we propose a new buffer replacement scheme, called HBM (Hybrid Buffer Management), for multimedia storage systems that have both sequential and looping references. The proposed scheme assumes that the application layer informs the file system of each file's reference pattern; HBM then applies an appropriate replacement policy to each file. Our simulation experiments show that HBM outperforms previous buffer cache schemes such as DISTANCE and LRU.
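
The per-file dispatch that HBM describes can be sketched as below. The concrete rules used here (evict sequential-file blocks first, retain loop-area blocks longer) are placeholders, since the abstract only states that an appropriate policy is applied to each file based on the hint from the application layer.

```python
from collections import OrderedDict

class HybridBufferManager:
    """Per-file replacement dispatch driven by application-supplied hints (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # (file_id, block_no) -> reference pattern
        self.patterns = {}            # file_id -> "sequential" or "looping"

    def register(self, file_id, pattern):
        self.patterns[file_id] = pattern     # hint passed down by the application

    def access(self, file_id, block_no):
        key = (file_id, block_no)
        if key in self.blocks:
            self.blocks.move_to_end(key)
            return True
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[key] = self.patterns.get(file_id, "sequential")
        return False

    def _evict(self):
        # Blocks of sequential files are consumed once, so they go first;
        # blocks inside loop areas are retained, falling back to plain LRU.
        victim = next((k for k, p in self.blocks.items() if p == "sequential"), None)
        if victim is None:
            self.blocks.popitem(last=False)
        else:
            del self.blocks[victim]
```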
