• Title/Abstract/Keyword: Internet Cache

Search results: 183

Mitigating Cache Pollution Attack in Information Centric Mobile Internet

  • Chen, Jia; Yue, Liang; Chen, Jing
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 13, No. 11, pp. 5673-5691, 2019
  • An information-centric mobile network can significantly improve data retrieval efficiency by caching contents at the mobile edge. However, a cache pollution attack can severely disrupt data retrieval by deliberately requesting unpopular contents. To tackle this problem, we design an algorithm for mitigating cache pollution attacks in information-centric mobile networks. Specifically, a content popularity distribution statistic is proposed to detect abnormal behavior. A probabilistic caching strategy based on the detected abnormal behavior is then applied to dynamically maintain the steady-state distribution of content visiting probability and thereby achieve the defense. Experimental results show that, under a false-locality content pollution attack, the proposed scheme achieves a higher request hit ratio and lower latency than the CacheShield approach and a baseline where no mitigation is applied.
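
A minimal sketch of the general idea described above, pairing a popularity-distribution drift test with probabilistic cache admission. The window size, total-variation test, and damping factor are assumptions for illustration, not the paper's exact algorithm.

```python
import random
from collections import Counter

class PollutionAwareCache:
    """Probabilistic cache admission, damped when popularity looks abnormal."""

    def __init__(self, capacity, window=1000, threshold=0.5):
        self.capacity = capacity
        self.window = window          # requests per measurement window
        self.threshold = threshold    # tolerated popularity-distribution drift
        self.cache = {}
        self.counts = Counter()       # request counts, current window
        self.baseline = Counter()     # request counts, previous window
        self.in_window = 0

    def _drift(self):
        # Total-variation distance between current and baseline popularity.
        now = sum(self.counts.values()) or 1
        old = sum(self.baseline.values()) or 1
        names = set(self.counts) | set(self.baseline)
        return 0.5 * sum(abs(self.counts[n] / now - self.baseline[n] / old)
                         for n in names)

    def request(self, name, fetch):
        self.counts[name] += 1
        self.in_window += 1
        if name in self.cache:
            return self.cache[name]
        data = fetch(name)
        # Admit with probability equal to observed popularity, damped when
        # the distribution drifts (suspected false-locality pollution).
        p = self.counts[name] / self.in_window
        if self.baseline and self._drift() > self.threshold:
            p *= 0.1
        if random.random() < p:
            if len(self.cache) >= self.capacity:
                victim = min(self.cache, key=lambda n: self.counts[n])
                del self.cache[victim]
            self.cache[name] = data
        if self.in_window >= self.window:
            self.baseline, self.counts = self.counts, Counter()
            self.in_window = 0
        return data
```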

An ICN In-Network Caching Policy for Butterfly Network in DCN

  • Jeon, Hongseok; Lee, Byungjoon; Song, Hoyoung; Kang, Moonsoo
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 7, pp. 1610-1623, 2013
  • In-network caching is a key component of information-centric networking (ICN) for reducing content download time, network traffic, and server workload. The data center network (DCN) is an ideal candidate for applying ICN design principles. In this paper, we evaluate the effectiveness of cache placement and replacement in a DCN with a butterfly topology. We also suggest a new cache placement policy based on the number of routing nodes (i.e., hop counts) through which the content travels: each routing node caches content chunks with a probability inversely proportional to the hop count. Simulation results lead us to conclude that (i) the cache placement policy matters more for cache performance than the replacement policy, (ii) the suggested placement policy outperforms traditional placement policies such as ALWAYS and FIX(P) for butterfly-type DCNs, and (iii) a high cache hit ratio does not always imply low average hop counts.
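
The placement rule is concrete enough to sketch: each routing node on the delivery path caches a chunk with probability inversely proportional to its hop count. A minimal sketch, with normalization assumed:

```python
import random

def place_along_path(chunk, path, caches):
    """Each routing node on the delivery path caches the chunk with
    probability 1/h, where h is the node's hop distance from the
    content source (normalization assumed)."""
    for h, node in enumerate(path, start=1):
        if random.random() < 1.0 / h:
            caches[node].add(chunk)
```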

Design and analytical evaluation of a fuzzy proxy caching for wireless internet

  • Bae, Ihn-Han
    • Journal of the Korean Data and Information Science Society, Vol. 20, No. 6, pp. 1177-1190, 2009
  • In this paper, we propose a fuzzy proxy cache scheme for caching web documents in mobile base stations. In this scheme, a mobile cache model is used to facilitate data caching and data replication. Using the proposed scheme, each proxy in a base station makes cache decisions based solely on its local knowledge of the global cache state, so the entire wireless proxy cache system can be managed effectively without centralized control. To improve proxy caching performance, the scheme predicts the direction of movement of mobile hosts and applies different cache methods to neighboring proxy servers according to fuzzy-logic control rules driven by the membership degree of the mobile host. The performance of our cache scheme is evaluated analytically in terms of average response delay and average energy cost, and is compared with that of other mobile cache schemes.
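
As a rough illustration of the fuzzy-control idea, the sketch below maps a membership degree of a mobile host to a cache method for a neighboring proxy. Both the membership function and the rule thresholds are hypothetical stand-ins, not the paper's actual rules.

```python
def membership(distance, cell_radius):
    """Membership degree of a mobile host in a neighboring cell, decreasing
    linearly with distance (hypothetical membership function)."""
    return max(0.0, 1.0 - distance / cell_radius)

def neighbor_cache_method(mu):
    """Map a membership degree to a cache method for a neighboring proxy.
    The thresholds are illustrative, not the paper's actual control rules."""
    if mu > 0.7:
        return "replicate"   # handoff very likely: copy the documents over
    if mu > 0.3:
        return "hint"        # moderately likely: forward cache hints only
    return "ignore"          # unlikely: do nothing
```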


Energy-Efficient Last-Level Cache Management for PCM Memory Systems

  • Bahn, Hyokyung
    • International Journal of Internet, Broadcasting and Communication, Vol. 14, No. 1, pp. 188-193, 2022
  • The energy efficiency of memory systems is an important concern in designing future computer systems, as memory capacity continues to grow to accommodate ever-larger big data workloads. In this article, we present an energy-efficient last-level cache management policy for future mobile systems. The proposed policy uses low-power PCM (phase-change memory) as the main memory medium and reduces the amount of data written to PCM, thereby saving memory energy consumption. To do so, the policy keeps track of the modified cache lines within each cache block and, upon a replacement request, evicts the last-level cache block that would incur the least PCM write traffic. The policy also considers the access bit of cache blocks along with the cache line modifications so as not to degrade the cache hit ratio. Simulation experiments using SPEC benchmarks show that the proposed policy reduces the power consumption of PCM memory by 22.7% on average without degrading performance.
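
The replacement policy lends itself to a short sketch: among blocks whose access bit is clear, evict the one with the fewest modified lines, since those are the lines that must be written back to PCM. The per-line dirty bits below are an assumed representation.

```python
class CacheBlock:
    def __init__(self, tag, lines_per_block=8):
        self.tag = tag
        self.dirty = [False] * lines_per_block  # per-line modified bits
        self.accessed = False                   # recently-referenced bit

def choose_victim(blocks):
    """Evict the block whose write-back costs the fewest PCM line writes,
    preferring blocks whose access bit is clear to protect the hit ratio."""
    cold = [b for b in blocks if not b.accessed] or blocks
    return min(cold, key=lambda b: sum(b.dirty))
```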

A cache placement algorithm based on comprehensive utility in big data multi-access edge computing

  • Liu, Yanpei; Huang, Wei; Han, Li; Wang, Liping
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 11, pp. 3892-3912, 2021
  • The recent rapid growth of mobile network traffic places multi-access edge computing in an important position to reduce network load and improve network capacity and service quality. In contrast to traditional mobile cloud computing, multi-access edge computing includes a base station cooperative cache layer and a user cooperative cache layer. Selecting the most appropriate content to cache and determining the most appropriate location for it have emerged as pressing issues in multi-access edge computing. For this reason, a cache placement algorithm based on comprehensive utility in big data multi-access edge computing (CPBCU) is proposed in this work. First, the cache value generated by a placement is calculated from the cache capacity, data popularity, and node replacement rate. Second, the cache placement problem is modeled according to the cache value, data object acquisition cost, and replacement cost. The model is then transformed into a combinatorial optimization problem, and the cache objects are placed on appropriate data nodes using a tabu search algorithm. Finally, a multi-access edge computing experimental environment is built to verify the feasibility and effectiveness of the algorithm. Experimental results show that CPBCU provides a significant improvement in cache service rate, data response time, and number of replacements compared with other cache placement algorithms.
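
As a rough illustration of the first step, a comprehensive-utility score might trade data popularity against capacity consumption and node churn, as sketched below. The weights and the formula are assumptions, not the CPBCU definition, and the tabu search over placements is omitted.

```python
def cache_value(popularity, size, capacity, replacement_rate,
                w_pop=1.0, w_cap=0.5, w_rep=0.5):
    """Hypothetical comprehensive-utility score for placing one object on
    one node: popularity raises the value, while consuming scarce capacity
    and placing on a churn-heavy node lower it."""
    return (w_pop * popularity
            - w_cap * size / capacity
            - w_rep * replacement_rate)
```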

A Cache Privacy Protection Mechanism based on Dynamic Address Mapping in Named Data Networking

  • Zhu, Yi; Kang, Haohao; Huang, Ruhui
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 12, pp. 6123-6138, 2018
  • Named data networking (NDN) is a new network architecture designed for the next-generation Internet. Router-side content caching is one of its key features; it can reduce redundant transmission, accelerate content distribution, and alleviate congestion. However, it also introduces several security problems. One important risk is cache privacy leakage: by measuring content retrieval times, an adversary can infer its neighboring users' interest in private content. Focusing on this problem, we propose a cache privacy protection mechanism (CPPM-DAM) that uses a Bloom filter to distinguish legitimate users from adversaries. An optimization of the storage cost is further provided to make the mechanism more practical. Simulation results in ndnSIM show that CPPM-DAM effectively protects cache privacy.
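
The abstract names a Bloom filter as the mechanism for separating legitimate users from adversaries. For readers unfamiliar with the structure, here is a minimal generic Bloom filter; the parameters are illustrative and unrelated to the paper's configuration.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m bits, k hash functions (parameters illustrative)."""

    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```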

A Scalable Cache Group Configuration Policy using Role-Partitioned Cache

  • 현진일; 민준식
    • 한국콘텐츠학회논문지, Vol. 3, No. 3, pp. 63-73, 2003
  • Today, with the rapid growth of the Internet, file caching has become essential for reducing response delay, network traffic, and server load. Using caches within a network demonstrably reduces traffic, which means file caching can offset the cost of expanding Internet link capacity. In this paper, we introduce a dynamic cache group configuration policy to solve the scalability problem. Simulation results show that a cache group using the proposed policy reduces response delay, and that our cache group configuration is more scalable than a static cache configuration.


A Performance Improvement Scheme for a Wireless Internet Proxy Server Cluster

  • 곽후근; 정규식
    • 한국정보과학회논문지:정보통신, Vol. 32, No. 3, pp. 415-426, 2005
  • Unlike the wired Internet, the wireless Internet faces several constraints arising from its technical environment and characteristics: low bandwidth, frequent disconnections, low computing power and small screens on terminals, and remaining technical gaps in user mobility, network protocols, and security. In addition, rapidly growing demand requires wireless Internet servers to scale to handle large volumes of traffic. In this paper, we address these problems and requirements with a wireless Internet proxy server cluster that combines caching, distillation (compression), and clustering. TranSend was proposed as a clustering-based wireless Internet proxy server, but it does not guarantee scalability in a systematic way and is complicated by unnecessary inter-module communication. A previous study proposed an All-in-one structure that guarantees scalability systematically, but it too suffers from a complicated inter-module communication structure and a lack of cooperation among caches. We therefore propose a clustering-based wireless Internet proxy server with a simple inter-module communication structure and cache cooperation. Experiments using 16 computers showed performance improvements of 54.86% and 4.70% over the TranSend and All-in-one systems, respectively. Because the cache servers can share data, the proposed structure keeps the total cache memory size constant regardless of the number of cache servers, whereas in All-in-one every cache server must hold all cached data, so the total cache memory size grows in proportion to the number of cache servers.
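
One simple way to realize the cache cooperation described above, where the cluster holds one copy of each object so total cache size is independent of the number of servers, is to hash-partition URLs across the cache servers. The sketch below shows plain hash partitioning, which may well differ from the paper's actual cooperation protocol.

```python
import hashlib

def home_server(url, servers):
    """Map each URL to exactly one cache server so the cluster holds a single
    copy of each object; total cache size then stays constant as servers are
    added. Plain hash partitioning, shown for illustration only."""
    digest = hashlib.md5(url.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]
```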

Impact Evaluation of DDoS Attacks on DNS Cache Server Using Queuing Model

  • Wang, Zheng; Tseng, Shian-Shyong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 4, pp. 895-909, 2013
  • Distributed Denial-of-Service (DDoS) attacks on the name servers of the Domain Name System (DNS) threaten to disrupt this critical service. This paper studies the vulnerability of the cache server to flooding DNS query traffic. As part of the resolution service, incoming DNS requests, including massive attack traffic, are held in the cache server's waiting queue; each request remains there until the corresponding response returns from the authoritative server or times out. The victim cache server is thus overloaded by the pounding traffic and eventually goes down. The impact of such attacks is analyzed via a queuing model of both the cache server and the authoritative server. Some specific constraints hold for this practical dual queuing process, such as the limited sojourn time in the cache server's queue and the independence of the two queuing processes. Analytical results are presented to evaluate the impact of DDoS attacks on the cache server, and numerical results are provided for further analysis.
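
For intuition about the queuing argument, the cache server's waiting queue can be viewed in textbook M/M/1 terms: once the combined legitimate and attack arrival rate exceeds the resolution service rate, utilization passes 1 and the queue grows until requests time out. A toy calculation with assumed rates:

```python
def utilization(legit_rate, attack_rate, service_rate):
    """Offered load on the cache server's waiting queue. When the result
    reaches 1 the queue grows without bound and requests start timing out."""
    return (legit_rate + attack_rate) / service_rate

# Hypothetical rates: 2,000 legitimate qps plus 50,000 attack qps against
# 20,000 qps of resolution capacity gives rho = 2.6, so the server is swamped.
rho = utilization(2_000, 50_000, 20_000)
```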

CacheSCDefender: VMM-based Comprehensive Framework against Cache-based Side-channel Attacks

  • Yang, Chao; Guo, Yunfei; Hu, Hongchao; Liu, Wenyan
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 12, pp. 6098-6122, 2018
  • Cache-based side-channel attacks have attracted increasing attention along with the development of cloud computing technologies. However, current host-based mitigation methods either offer poor compatibility with existing cloud infrastructure or turn out to be too application-specific. Moreover, they defend blindly, without any knowledge of on-going attacks. In this work, we present CacheSCDefender, a framework that provides a Virtual Machine Monitor (VMM)-based comprehensive defense against all levels of cache attacks. In designing CacheSCDefender, we make three key contributions: (1) an attack-aware framework combining our novel dynamic remapping with traditional cache cleansing, which provides a comprehensive defense against all three cases of cache attacks identified in this paper; (2) a new defense method, dynamic remapping, a developed version of random permutation that is able to deal with two of the cases of cache attacks; and (3) a formalization and quantification of the security improvement and performance overhead of our defense, which is also applicable to other defense methods. We show that CacheSCDefender is practical for deployment in a normal virtualized environment while providing a favorable security guarantee for virtual machines.
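
The paper describes dynamic remapping as a developed version of random permutation. The sketch below shows the generic primitive such defenses build on, a permutation of cache set indices; the line size, re-keying schedule, and mapping details are assumptions.

```python
import random

def remap_sets(n_sets, seed):
    """Random permutation of cache set indices; re-seeding periodically
    invalidates whatever address-to-set mapping an attacker has learned."""
    rng = random.Random(seed)
    perm = list(range(n_sets))
    rng.shuffle(perm)
    return perm

def mapped_set(addr, perm, n_sets):
    # Set index from the address bits (64-byte lines assumed), then permuted.
    return perm[(addr // 64) % n_sets]
```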