• Title/Summary/Keyword: interval caching

Optimality of Interval Caching Policies in Multimedia Streaming Systems

  • Cho, Kyungwoon; Bahn, Hyokyung
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.31-36 / 2022
  • Interval caching is one of the representative caching strategies used in multimedia streaming systems. However, there has been no theoretical analysis of interval caching. In this paper, we present an optimality proof of the interval caching policy. Specifically, we propose a caching performance model for multimedia streaming systems and show the optimality of the interval caching policy based on this model.
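
The abstract above describes interval caching only at a high level, so here is a minimal sketch of the policy's core idea, assuming block-granularity playback positions: consecutive streams of the same video form intervals, and the smallest intervals are cached first so that as many follower streams as possible are served from memory. The function name and data layout are illustrative, not taken from the paper.

```python
def interval_caching_allocation(streams, cache_capacity):
    """Choose which inter-stream intervals to cache, smallest first.

    streams: list of (video_id, playback_position) pairs, positions in blocks.
    cache_capacity: number of cache blocks available.
    Returns the cached intervals as (video_id, follower_pos, leader_pos, size).
    """
    # Group concurrent streams by video and order them by playback position.
    by_video = {}
    for vid, pos in streams:
        by_video.setdefault(vid, []).append(pos)

    intervals = []
    for vid, positions in by_video.items():
        positions.sort(reverse=True)  # stream farthest ahead (the leader) first
        for leader, follower in zip(positions, positions[1:]):
            # Caching the blocks between leader and follower lets the
            # follower read them from memory; the gap is the buffer cost.
            intervals.append((vid, follower, leader, leader - follower))

    # Interval caching admits the smallest intervals first, which maximizes
    # the number of streams served from the cache for a given capacity.
    intervals.sort(key=lambda iv: iv[3])
    cached, used = [], 0
    for vid, follower, leader, size in intervals:
        if used + size <= cache_capacity:
            cached.append((vid, follower, leader, size))
            used += size
    return cached

if __name__ == "__main__":
    # Three concurrent streams of video "A" and two of video "B".
    streams = [("A", 120), ("A", 100), ("A", 30), ("B", 80), ("B", 70)]
    print(interval_caching_allocation(streams, cache_capacity=40))
```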

Design and Optimality Analysis of Cache Performance Model for Multimedia Streaming Environments (멀티미디어 스트리밍 환경을 위한 캐쉬 성능평가 모델 설계 및 최적성 분석)

  • Hyokyung Bahn; Kyungwoon Cho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.5 / pp.9-13 / 2023
  • Multimedia streaming data is very large and accessed sequentially, so the LRU (Least Recently Used) algorithm, widely used to improve I/O performance in traditional caching environments, is ineffective for it. Experimental analyses have shown the superiority of interval-based caching over LRU, but its theoretical basis has not been established. In this paper, we design a cache performance model for analyzing the optimality of caching in multimedia streaming environments and design a theoretically optimal caching algorithm based on interval caching. We then show that, under the proposed model, the designed algorithm is optimal in that it minimizes cache misses for streaming data.
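
The claim that LRU is ineffective for sequential streaming access can be illustrated with a toy replay, sketched below under simplifying assumptions (one video, fixed-rate readers, block-sized requests). This is not the paper's performance model, only a contrast between a shared LRU block cache and a cache dedicated to the smallest inter-stream gaps, i.e., the interval caching idea.

```python
from collections import OrderedDict

def lru_hits(offsets, length, cache_blocks):
    """Replay staggered sequential readers of one video against an LRU cache."""
    cache, hits = OrderedDict(), 0
    for t in range(length):
        for start in offsets:
            block = start + t                   # each reader consumes one block per tick
            if block in cache:
                hits += 1
                cache.move_to_end(block)
            else:
                cache[block] = True
                if len(cache) > cache_blocks:
                    cache.popitem(last=False)   # evict the least recently used block
    return hits

def interval_hits(offsets, length, cache_blocks):
    """Dedicate the cache to the smallest gaps between consecutive readers."""
    ordered = sorted(offsets, reverse=True)
    gaps = [a - b for a, b in zip(ordered, ordered[1:])]
    hits, used = 0, 0
    for gap in sorted(gaps):
        if used + gap <= cache_blocks:
            used += gap
            # The follower of a cached gap reads every block its leader has
            # already brought into memory (roughly length - gap blocks here).
            hits += max(0, length - gap)
    return hits

if __name__ == "__main__":
    offsets = [0, 25, 120]                      # three readers of the same video
    print("LRU hits     :", lru_hits(offsets, length=200, cache_blocks=30))
    print("interval hits:", interval_hits(offsets, length=200, cache_blocks=30))
```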

Dynamic Buffer Allocation Scheme for Caching in Realtime Multimedia Systems (실시간 멀티미디어 시스템에서의 캐슁을 위한 동적 버퍼 할당 기법)

  • Kwon, Jin-Baek; Yeom, Heon-Young; Lee, Kyung-Oh
    • Journal of KIISE: Computer Systems and Theory / v.27 no.4 / pp.420-430 / 2000
  • Several caching schemes for real-time multimedia systems have been proposed, but they focus only on increasing the hit ratio and provide no means to exploit the disk bandwidth saved by cache hits. One of the most important metrics in multimedia systems is the number of clients that can be serviced simultaneously while guaranteeing Quality of Service (QoS). Preemptive but Safe Interval Caching (PSIC) was proposed as a caching scheme that provides deterministic QoS. However, it cannot adapt to changes in the system environment because it has no mechanism for changing the cache size. In this paper, we present a new caching scheme, Dynamic Interval Caching (DIC), which maximizes performance regardless of changes in the system environment and provides hiccup-free service by managing memory buffers dynamically. Trace-driven simulations comparing DIC with PSIC demonstrate that DIC allocates the buffer cache optimally.
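
A rough sketch of the preemptive buffer reallocation idea behind DIC-style schemes, under assumptions of my own (interval sizes in blocks, a simple largest-victim-first rule). The safety condition of PSIC/DIC, which checks that a preempted follower can still be served from disk, is deliberately omitted, so this is not the algorithm from the paper.

```python
def admit_interval(cached, new_interval, capacity):
    """Try to give cache buffers to a newly formed interval, preempting
    larger cached intervals when that increases the number of cached streams.

    cached:       dict interval_id -> size in blocks (mutated on success)
    new_interval: (interval_id, size) of the interval that just formed
    capacity:     total cache blocks
    Returns the list of preempted interval ids (empty if nothing changed).
    """
    iid, size = new_interval
    free = capacity - sum(cached.values())

    # Consider victims from the largest cached interval down, but only victims
    # strictly larger than the newcomer: swapping a big interval for a smaller
    # one can only increase the number of streams served from memory.
    # NOTE: a real preemptive-but-safe scheme would also verify that the
    # preempted follower can be rescheduled on disk without a hiccup.
    victims = []
    candidates = sorted(cached.items(), key=lambda kv: kv[1], reverse=True)
    for victim_id, victim_size in candidates:
        if free >= size:
            break
        if victim_size <= size:
            break
        victims.append(victim_id)
        free += victim_size

    if free < size:
        return []                     # the newcomer cannot be admitted
    for victim_id in victims:
        del cached[victim_id]
    cached[iid] = size
    return victims

if __name__ == "__main__":
    cache = {"i1": 50, "i2": 35}      # currently cached intervals (blocks)
    print(admit_interval(cache, ("i3", 20), capacity=100))  # preempts "i1"
    print(cache)
```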

A Dual Mode Buffer Cache Management Policy for a Continuous Media Server (연속 미디어 서버를 위한 이중 모드 버퍼 캐쉬 관리 기법)

  • Seo, Won-Il; Park, Yong-Woon; Chung, Ki-Dong
    • The Transactions of the Korea Information Processing Society / v.6 no.12 / pp.3642-3651 / 1999
  • In this paper, we propose a new caching scheme for continuous media data in which the buffer allocation unit has two modes: interval and object. The access patterns of all objects are monitored, and based on the monitoring results, each request for an object is served by caching its data in either interval mode or object mode. Simulation results show that the proposed caching scheme outperforms existing caching algorithms such as interval caching when the access patterns of the objects change over time.
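
A hypothetical decision rule in the spirit of the dual-mode idea: each object's request history is monitored, and the object is then cached either in its entirety (object mode) or only between closely spaced streams (interval mode). The thresholds `hot_rate` and `small_gap` and the rule itself are assumptions for illustration, not the policy proposed in the paper.

```python
def choose_cache_mode(arrival_times, hot_rate, small_gap):
    """Pick a caching mode for one object from its monitored request history.

    arrival_times: sorted request timestamps (seconds) observed for the object
    hot_rate:      request rate above which caching the whole object pays off
    small_gap:     inter-arrival gap below which two streams form a cheap interval
    Returns "object", "interval", or "none".
    """
    if len(arrival_times) < 2:
        return "none"                      # not enough history to decide yet
    span = arrival_times[-1] - arrival_times[0]
    rate = (len(arrival_times) - 1) / span if span > 0 else float("inf")
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    if rate >= hot_rate:
        return "object"                    # hot object: keep it resident in full
    if min(gaps) <= small_gap:
        return "interval"                  # closely spaced streams: cache the gap only
    return "none"

if __name__ == "__main__":
    print(choose_cache_mode([0, 5, 9, 14, 20], hot_rate=0.15, small_gap=30))   # "object"
    print(choose_cache_mode([0, 400, 430], hot_rate=0.15, small_gap=60))       # "interval"
```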

Optimizing Caching in a Patch Streaming Multimedia-on-Demand System

  • Bulti, Dinkisa Aga; Raimond, Kumudha
    • Journal of Computing Science and Engineering / v.9 no.3 / pp.134-141 / 2015
  • In on-demand multimedia streaming systems, streaming techniques are usually combined with proxy caching to obtain better performance. The patch streaming technique has no inherent start-up latency, but requires extra bandwidth to deliver the media data in patch streams. This paper proposes a proxy caching technique that aims to reduce the bandwidth cost of patch streaming. The proposed approach identifies media prefixes with high patching cost and caches the appropriate media prefix at the proxy/local server. The scheme is evaluated using a synthetically generated media access workload, and its performance is compared with that of the prefix part of the popularity- and prefix-aware interval caching scheme and with that of patch streaming without caching. Bandwidth saving, hit ratio, and the number of concurrent clients are used to compare performance, and the proposed scheme is found to perform better across different caching capacities of the proxy server.
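
A sketch of caching the prefixes with the highest patching cost, under a simplified patching model in which clients arriving within `batching_window` of a full multicast receive the missed part as a unicast patch; the cost estimate, parameter names, and fixed prefix length are assumptions, not the paper's formulation.

```python
from collections import defaultdict

def select_prefixes_to_cache(request_log, batching_window, prefix_len, capacity):
    """Greedy choice of media prefixes to keep at the proxy.

    request_log:     list of (media_id, arrival_time_seconds)
    batching_window: clients arriving within this window of a full multicast
                     join it and receive the missed part as a unicast patch
    prefix_len:      seconds of each medium's prefix the proxy could cache
    capacity:        total seconds of media the proxy can store
    Returns the media ids whose prefixes are cached.
    """
    arrivals = defaultdict(list)
    for media, t in request_log:
        arrivals[media].append(t)

    # Estimate each medium's patching cost: total seconds of patch data the
    # server would have to unicast if nothing were cached at the proxy.
    patch_cost = {}
    for media, times in arrivals.items():
        times.sort()
        window_start, total = None, 0.0
        for t in times:
            if window_start is None or t - window_start > batching_window:
                window_start = t                            # opens a new full multicast
            else:
                total += min(t - window_start, prefix_len)  # patch the missed prefix
        patch_cost[media] = total

    # Cache the prefixes whose patches cost the most server bandwidth.
    chosen, used = [], 0.0
    for media in sorted(patch_cost, key=patch_cost.get, reverse=True):
        if patch_cost[media] > 0 and used + prefix_len <= capacity:
            chosen.append(media)
            used += prefix_len
    return chosen

if __name__ == "__main__":
    log = [("m1", 0), ("m1", 40), ("m1", 70), ("m2", 10), ("m2", 300), ("m3", 5)]
    print(select_prefixes_to_cache(log, batching_window=120, prefix_len=60, capacity=120))
```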

Block Level Refinement of Popularity-Aware Interval Caching for Multimedia Streaming Servers (멀티미디어 스트리밍 서버를 위한 인기도 기반 인터벌 캐슁의 블록 수준 세분화 기법)

  • Kwon, Oh-Hoon; Kim, Tae-Seok; Bahn, Hyo-Kyung; Koh, Kern
    • Journal of KIISE: Computer Systems and Theory / v.34 no.4 / pp.138-144 / 2007
  • With the recent proliferation of video-on-demand services, caching in multimedia streaming servers is becoming increasingly important. Previous studies have shown that request-interval-based caching and its popularity-aware extension perform well in various streaming environments. In this paper, we show that block-level refinement of this existing scheme can further improve the performance of streaming servers. Trace-driven simulations with real-world VOD traces show that the proposed scheme improves the cache hit rate and reduces the startup latency.
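
One way to picture block-level refinement, as opposed to admitting whole intervals or nothing, is to score and admit individual blocks; the popularity-per-block scoring below is an illustrative assumption rather than the refinement actually proposed in the paper.

```python
def block_level_allocation(intervals, popularity, cache_blocks):
    """Admit cache space block by block instead of whole-interval-or-nothing.

    intervals:    list of (video_id, interval_size_in_blocks)
    popularity:   dict video_id -> relative request rate
    cache_blocks: total cache capacity in blocks
    Each block is scored by its video's popularity divided by the interval
    size, so a popular video's interval can stay partially cached even when
    the whole interval no longer fits.
    """
    scored = []
    for vid, size in intervals:
        per_block_benefit = popularity.get(vid, 0.0) / size
        scored.extend((per_block_benefit, vid, b) for b in range(size))
    scored.sort(key=lambda s: s[0], reverse=True)   # best benefit per block first
    return [(vid, b) for _, vid, b in scored[:cache_blocks]]

if __name__ == "__main__":
    intervals = [("news", 40), ("movie", 100)]
    popularity = {"news": 0.7, "movie": 0.3}
    cached = block_level_allocation(intervals, popularity, cache_blocks=60)
    print(len([c for c in cached if c[0] == "news"]), "news blocks,",
          len([c for c in cached if c[0] == "movie"]), "movie blocks")
```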

An Efficient Caching Scheme for Continuous Stream Service on the Internet (인터넷 상에서 비디오 스트림을 위한 효과적인 캐슁기법)

  • 김종경; 남현우
    • Journal of the Korea Computer Industry Society / v.4 no.12 / pp.1083-1092 / 2003
  • This paper proposes a popularity-based method to minimize service delay when serving video streams over the Internet. Performance measurements of the proposed algorithm, which considers the correlation between cache space, access frequency, and each server, showed about a 15% improvement in cache utilization and roughly a threefold improvement in service latency compared with the interval caching and agent caching schemes.

Big Data Meets Telcos: A Proactive Caching Perspective

  • Bastug, Ejder; Bennis, Mehdi; Zeydan, Engin; Kader, Manhal Abdel; Karatepe, Ilyas Alper; Er, Ahmet Salih; Debbah, Merouane
    • Journal of Communications and Networks / v.17 no.6 / pp.549-557 / 2015
  • Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context awareness, and edge/cloud computing, and falls into the framework of big data. However, big data is itself another complex phenomenon to handle and comes with its notorious four V's: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of data available for content popularity estimation. To estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over hour-long time intervals. An analysis is then carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and the storage size. For instance, with 10% of content ratings and 15.4 GByte of storage (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
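
A minimal sketch of popularity-driven proactive caching and the two reported metrics (request satisfaction and backhaul offload), assuming the past request log is simply replayed against the chosen cache; the contents, sizes, and storage figure in the example are made up for illustration.

```python
from collections import Counter

def proactive_cache(request_log, sizes, storage):
    """Fill a base-station cache with the most popular contents and replay
    the same log to estimate request satisfaction and backhaul offload.

    request_log: list of content ids observed in past traffic
    sizes:       dict content id -> size (e.g. in MB)
    storage:     cache capacity in the same unit as sizes
    """
    popularity = Counter(request_log)
    cached, used = set(), 0.0
    for content, _ in popularity.most_common():
        if used + sizes[content] <= storage:
            cached.add(content)
            used += sizes[content]

    hits = sum(1 for c in request_log if c in cached)
    satisfaction = hits / len(request_log)
    offload = (sum(sizes[c] for c in request_log if c in cached)
               / sum(sizes[c] for c in request_log))
    return cached, satisfaction, offload

if __name__ == "__main__":
    log = ["a", "a", "b", "a", "c", "b", "d", "a"]
    sizes = {"a": 4.0, "b": 3.0, "c": 2.0, "d": 5.0}
    print(proactive_cache(log, sizes, storage=7.0))
```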

An Efficient Load Balancing Technique in Cluster Based VOD Servers using the Dynamic Buffer Partitioning (동적 버퍼 분할을 이용한 클러스터 VOD 서버의 효율적 부하 분산 방법)

  • Kwon, Chun-Ja; Kim, Young-Jin; Choi, Hwang-Kyu
    • The KIPS Transactions: Part C / v.9C no.5 / pp.709-718 / 2002
  • Cluster-based VOD systems require elaborate load balancing and buffer management techniques to ensure real-time playback for many concurrent users. In this paper, we propose a new load balancing technique based on dynamic buffer partitioning in cluster-based VOD servers. The proposed technique evenly distributes user requests to each service node according to its available buffer capacity and disk access rate. In each node, the dynamic buffer partitioning technique partitions the buffer dynamically to minimize the average waiting time for requests that access the same continuous media. Simulation results show that the proposed technique decreases the average waiting time by evenly distributing user requests compared with existing techniques and thus increases the throughput of each node. In particular, under overload conditions in the cluster server, the simulations show that the proposed technique performs about twice as well as the Generalized Interval Caching based technique.
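
An illustrative dispatch rule in the spirit of the proposed load balancing: a new request is sent to the node with the most remaining buffer and disk-bandwidth headroom. The equal weighting and the field names are assumptions, not the paper's formula, and the dynamic buffer partitioning inside each node is not modeled here.

```python
def pick_node(nodes, buffer_weight=0.5, disk_weight=0.5):
    """Choose the service node for a new streaming request.

    nodes: list of dicts with 'free_buffer', 'total_buffer' (blocks) and
           'free_disk_bw', 'total_disk_bw' (admissible streams).
    The request goes to the node with the largest combined headroom; the
    50/50 weighting is illustrative only.
    """
    def headroom(n):
        return (buffer_weight * n["free_buffer"] / n["total_buffer"]
                + disk_weight * n["free_disk_bw"] / n["total_disk_bw"])
    return max(range(len(nodes)), key=lambda i: headroom(nodes[i]))

if __name__ == "__main__":
    nodes = [
        {"free_buffer": 200, "total_buffer": 1000, "free_disk_bw": 10, "total_disk_bw": 40},
        {"free_buffer": 600, "total_buffer": 1000, "free_disk_bw": 4, "total_disk_bw": 40},
    ]
    print("request dispatched to node", pick_node(nodes))
```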

Web-Cached Multicast Technique for on-Demand Video Distribution (주문형 비디오 분배를 위한 웹-캐슁 멀티캐스트 전송 기법)

  • Kim, Back-Hyun; Hwang, Tae-June; Kim, Ik-Soo
    • The KIPS Transactions: Part B / v.12B no.7 s.103 / pp.775-782 / 2005
  • In this paper, we propose a multicast technique that reduces the required network bandwidth by a factor of n, where n is the number of HENs (Head-End Nodes) requesting the same video, by merging adjacent multicasts. Allowing new clients to join an existing multicast immediately through patching improves the efficiency of the multicast and offers service without any initial latency. A client may have to download data through two channels simultaneously, one for the multicast and the other for patching. The more frequently a video is requested, the higher the probability that it is cached among the HENs, so requests for the cached video data can be served by the HENs. A multicast from the server is generated only when the playback time exceeds the amount of cached video data. Since the interval between multicasts can be dynamically expanded according to the popularity of videos, the server's workload and the network bandwidth can be reduced. We perform simulations to compare its performance with that of conventional multicast, and the results confirm that the proposed multicast technique offers substantially better performance.
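
A small sketch of the patching decision described above: a new HEN request either joins a recent multicast of the same video and receives the missed prefix as a unicast patch, or starts a new multicast. The `merge_window` threshold and the bookkeeping are assumptions for illustration; the HEN-level caching of popular videos is not modeled.

```python
def schedule_request(arrival, active_multicasts, merge_window):
    """Serve a new HEN request either by patching onto a recent multicast
    or by starting a new multicast of the video.

    arrival:           request arrival time in seconds
    active_multicasts: start times of ongoing multicasts of the same video
                       (mutated when a new multicast is started)
    merge_window:      how far behind a multicast a client may still join it
    Returns ("patch", multicast_start, patch_seconds) or ("multicast", arrival, 0).
    """
    recent = [s for s in active_multicasts if 0 <= arrival - s <= merge_window]
    if recent:
        start = max(recent)                   # join the most recent multicast
        # The client receives blocks from the multicast while the missed
        # prefix of length (arrival - start) is unicast to it as a patch.
        return ("patch", start, arrival - start)
    active_multicasts.append(arrival)         # nothing close enough: open a new one
    return ("multicast", arrival, 0)

if __name__ == "__main__":
    multicasts = [0.0]
    print(schedule_request(45.0, multicasts, merge_window=120.0))   # patch of 45 s
    print(schedule_request(500.0, multicasts, merge_window=120.0))  # new multicast
```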