• Title/Summary/Keyword: initial latency

Analysis of E2E Latency for Data Setup in 5G Network (5G 망에서 Data Call Setup E2E Latency 분석)

  • Lee, Hong-Woo; Lee, Seok-Pil
    • Journal of Internet Computing and Services, v.20 no.5, pp.113-119, 2019
  • The key features of the recently commercialized 5G mobile communications can be represented by high data rate, connection density, and low latency, of which the feature most distinct from existing 4G is low latency, which will be the foundation for various new service offerings. AR and self-driving technologies are being considered as services that exploit this feature, and 5G network latency is also being discussed in the related standards. However, discussion of E2E latency from a service perspective is still lacking. The final goal for low latency in 5G is a 1 ms air-interface round-trip delay (RTD), to be achieved through Ultra-Reliable Low Latency Communications (URLLC) in Rel-16 in early 2020, and further latency reduction through Mobile Edge Computing (MEC) is also being studied. In addition to 5G network-related factors, the overall 5G E2E latency also includes the link/equipment latency on the path between the 5G network and the IDC server that delivers the service, and the processing latency for service processing within the mobile app and server. Meanwhile, it is also necessary to study detailed service requirements by separating the latency for initial service setup from the latency for continuous service. In this paper, the following three factors were reviewed for initial service setup: the experiments and analysis present the impact on latency of (1) data call setup, (2) C-DRX on/off for power efficiency, and (3) handover (H/O). Through this, we expect to contribute to the service requirements and planning associated with latency in the initial setup of services.
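
The abstract's decomposition of overall 5G E2E latency into radio, transport, and processing components can be made concrete with a small sketch. The component names and numbers below are illustrative assumptions, not measurements from the paper.

```python
# Illustrative decomposition of 5G E2E latency into the components the abstract
# lists: air interface, link/equipment latency toward the IDC server, and
# app/server processing. All values are placeholder assumptions.

def e2e_latency_ms(air_interface_rtd, transport, processing, setup_overhead=0.0):
    """Sum latency components (milliseconds) for one request.

    setup_overhead stands for delay paid only on initial service setup,
    e.g. data call establishment or waking from C-DRX.
    """
    return air_interface_rtd + transport + processing + setup_overhead

# Steady-state request vs. first request after idle (hypothetical numbers).
steady = e2e_latency_ms(air_interface_rtd=1.0, transport=5.0, processing=10.0)
initial = e2e_latency_ms(air_interface_rtd=1.0, transport=5.0, processing=10.0,
                         setup_overhead=40.0)
print(f"steady-state: {steady} ms, initial setup: {initial} ms")
```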

Design of a Video Storage Server that Maximizes Concurrent Streams and Minimizes Initial Latency (사용자 수 증대와 초기 대기시간 감소를 위한 비디오 저장 서버의 설계)

  • Ma, Pyeong-Su; Jo, Chang-Sik; Jin, Yun-Suk; Sin, Gyu-Sang
    • The Transactions of the Korea Information Processing Society, v.6 no.10, pp.2608-2617, 1999
  • One of the most important functions that commercial video storage servers should provide is to maximize the number of concurrent streams and to minimize the initial latency of new requests. In this paper, we propose a data placement scheme whose disk read unit size can be twice as large as that of conventional striping methods. The proposed scheme can significantly increase the number of concurrent streams, since the proportion of time lost to rotational latency is decreased and the disks are utilized effectively. The disk scheduling scheme we propose guarantees a constant initial latency. We also propose a procedural design method for a storage server by introducing the concept of allowed initial latency. Comparison with previous research shows that the proposed scheme provides better performance.
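
As a rough illustration of why a larger disk read unit admits more concurrent streams, the sketch below amortizes a fixed per-read seek and rotational overhead over the read unit; the disk parameters are assumptions for illustration, not the paper's model or measurements.

```python
# Back-of-the-envelope illustration (not the paper's model): each disk read pays a
# fixed seek + rotational overhead, so a larger read unit wastes a smaller share of
# every service round on overhead and admits more concurrent streams, at the cost
# of a longer round. All parameter values are assumptions.

def round_and_streams(read_unit, playback_Bps, transfer_Bps, overhead_s):
    round_s = read_unit / playback_Bps                  # one read must cover one round of playback
    per_stream = overhead_s + read_unit / transfer_Bps  # disk time spent per stream per round
    return round_s, int(round_s // per_stream)

playback = 1.5e6 / 8   # ~1.5 Mbps stream
transfer = 20e6        # ~20 MB/s sustained disk transfer rate
overhead = 0.015       # ~15 ms average seek + rotational latency

for unit in (256 * 1024, 512 * 1024):                   # conventional vs. doubled read unit
    round_s, streams = round_and_streams(unit, playback, transfer, overhead)
    print(f"{unit // 1024} KB read unit: {streams} streams, {round_s:.1f} s service round")
```

The longer service round also raises the worst-case start-up wait, which is the trade-off the abstract's constant-initial-latency scheduling and "allowed initial latency" concept are concerned with.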

Analytical model for mean web object transfer latency estimation in the narrowband IoT environment (협대역 사물 인터넷 환경에서 웹 객체의 평균 전송시간을 추정하기 위한 해석적 모델)

  • Lee, Yong-Jin
    • Journal of Internet of Things and Convergence, v.1 no.1, pp.1-4, 2015
  • This paper presents a mathematical model for the mean web object transfer latency during the slow-start phase of TCP congestion control, one of the main control mechanisms of the Internet. Mean latency is an important quality-of-service measure for the end user. The application area of the proposed latency model is the narrowband environment, including multi-hop wireless networks and the Internet of Things (IoT), where packet loss occurs only in the slow-start phase because of the small window. The model finds the latency considering the initial window size and the packet loss rate. Our model shows that, for a given packet loss rate, the round-trip time and the initial window size mainly affect the mean web object transfer latency. The proposed model can be applied to estimate the mean response time that end users experience in IoT service applications.
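
For orientation, the sketch below computes the textbook loss-free slow-start estimate, in which latency is driven by the RTT, the initial window, and the object size. It is not the paper's model, which additionally accounts for packet loss in the slow-start phase, and all numbers are assumptions.

```python
import math

# Textbook loss-free slow-start estimate: the window starts at init_window segments
# and doubles every RTT. Baseline illustration only, with assumed numbers; the
# paper's model additionally handles packet loss during slow start.

def slow_start_latency(object_bytes, mss=1460, init_window=2, rtt=0.3, bandwidth_bps=250e3):
    segments = math.ceil(object_bytes / mss)
    rounds = math.ceil(math.log2(segments / init_window + 1))  # RTTs spent growing the window
    transmission = object_bytes * 8 / bandwidth_bps            # serialization over the narrowband link
    return rounds * rtt + transmission

for w in (1, 2, 4):
    print(f"initial window {w} segment(s): {slow_start_latency(20_000, init_window=w):.2f} s")
```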

Dynamic Stream Merging Scheme for Reducing the Initial Latency Time and Enhancing the Performance of VOD Servers (VOD 서버의 초기 대기시간 최소화와 성능 향상을 위한 동적 스트림 합병 기법)

  • 김근혜; 최황규
    • Journal of the Korea Computer Industry Society, v.3 no.5, pp.529-546, 2002
  • A VOD server, the central component in constructing VOD systems, is required to provide high bandwidth and continuous real-time delivery. Sophisticated disk scheduling and data placement schemes are also necessary in VOD servers. One of the most common problems faced in such systems is the high initial latency when servicing multiple users concurrently. In this paper, we propose a dynamic stream merging scheme for reducing the initial latency in VOD servers. The proposed scheme merges client requests into a single stream as long as they fall within a reasonable time interval. The basic idea behind dynamic stream merging is to merge multiple streams into one by increasing the frame rate of each stream. The performance study shows that the proposed scheme can reduce the initial latency with minimal buffer use and can also enhance the performance of the VOD server with respect to user admission capacity.
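
The catch-up arithmetic behind merging is compact: a stream that starts d seconds behind but plays at α times the nominal frame rate closes the gap after d/(α-1) seconds, after which the two streams can share one delivery. The sketch below, with assumed numbers, only illustrates this arithmetic, not the paper's admission or buffering policy.

```python
# Illustration of the catch-up arithmetic behind stream merging: a follower that
# started offset_s seconds late but is displayed at speedup x the nominal frame
# rate reaches the leader's playback position after offset_s / (speedup - 1) s.
# Numbers below are assumptions for illustration only.

def catch_up_time(offset_s, speedup):
    assert speedup > 1.0, "the follower must play faster than the nominal rate"
    return offset_s / (speedup - 1.0)

offset = 30.0      # follower requested the same video 30 s after the leader
for speedup in (1.05, 1.10, 1.20):
    t = catch_up_time(offset, speedup)
    print(f"{int((speedup - 1) * 100)}% faster playback: merge after {t:.0f} s")
```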

Analytical Modelling and Heuristic Algorithm for Object Transfer Latency in the Internet of Things (사물인터넷에서 객체전송지연을 계산하기 위한 수리적 모델링 및 휴리스틱 알고리즘의 개발)

  • Lee, Yong-Jin
    • Journal of Internet of Things and Convergence, v.6 no.3, pp.1-6, 2020
  • This paper aims to integrate the previous models of mean object transfer latency into one framework and to analyze the result through computational experiments. The analytical object transfer latency model assumes multiple packet losses and an Internet of Things (IoT) environment including multi-hop wireless networks, where fast retransmission is not possible because of the small window. The model also considers the initial congestion window size and multiple packet losses within one congestion window. Performance evaluation shows that the lower and upper bounds of the mean object transfer latency are almost the same when both the transfer object size and the packet loss rate are small. However, as the packet loss rate increases, the size of the initial congestion window and the round-trip time affect the upper and lower bounds of the mean object transfer latency.
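
To show what such bounds approximate, the sketch below estimates mean object transfer latency by Monte-Carlo simulation of slow start with random losses recovered by timeout (no fast retransmit, matching the small-window setting). It is an illustrative simulation with assumed parameters, not the paper's analytical model or heuristic algorithm.

```python
import math, random

# Illustrative Monte-Carlo estimate of mean object transfer latency under slow
# start with random per-segment loss; losses trigger a retransmission timeout
# and restart slow start, since fast retransmit is unavailable with small windows.

def transfer_latency(segments, p_loss, rtt, init_cwnd, rto):
    cwnd, sent, latency = init_cwnd, 0, 0.0
    while sent < segments:
        window = min(cwnd, segments - sent)
        losses = sum(random.random() < p_loss for _ in range(window))
        latency += rtt
        sent += window - losses           # only successfully delivered segments count
        if losses:
            latency += rto                # wait out the timeout, then restart slow start
            cwnd = init_cwnd
        else:
            cwnd *= 2                     # slow-start doubling
    return latency

random.seed(1)
trials = [transfer_latency(segments=20, p_loss=0.02, rtt=0.3, init_cwnd=2, rto=1.0)
          for _ in range(10_000)]
print(f"estimated mean latency ~ {sum(trials) / len(trials):.2f} s")
```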

A Dynamic Buffer Allocation Scheme in Video-on-Demand System (주문형 비디오 시스템에서의 동적 버퍼 할당 기법)

  • Lee, Sang-Ho; Moon, Yang-Sae; Whang, Kyu-Young; Cho, Wan-Sup
    • Journal of KIISE: Computer Systems and Theory, v.28 no.9, pp.442-460, 2001
  • In video-on-demand (VOD) systems it is important to minimize initial latency and memory requirements. Minimizing initial latency enables the system to provide services with short response times, and minimizing memory requirements enables the system to service more concurrent user requests with the same amount of memory. In VOD systems, since initial latency and memory requirements increase with the buffer size allocated to user requests, the buffer size allocated to user requests must be minimized. The existing static buffer allocation scheme, however, determines the buffer size based on the assumption that the system is in a fully loaded state. Thus, when the system is partially loaded, the scheme allocates unnecessarily large buffers to user requests. This paper proposes a dynamic buffer allocation scheme that allocates the minimum buffer size to user requests in a partially loaded state as well as in a fully loaded state. The scheme dynamically determines the buffer size based on the number of user requests in service and the number of user requests arriving while servicing the current requests. In addition, through analyses and simulations, this paper validates that dynamic buffer allocation outperforms static buffer allocation in initial latency and in the number of concurrent user requests that can be supported. Our simulation results show that, compared with the static buffer allocation scheme, the dynamic buffer allocation scheme reduces the average initial latency by 29%~65% and, in systems having several disks, increases the average number of concurrent user requests by 48%~68%. These results show that the dynamic buffer allocation scheme significantly improves the performance and reduces the capacity requirements of VOD systems.
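
The intuition can be sketched as follows: with round-based disk service, a request's buffer must hold the data it consumes during one service round, and the round grows with the number of requests sharing the disk, so sizing buffers for the current load instead of the maximum load saves memory. The formula and parameters below are illustrative assumptions, not the paper's exact derivation.

```python
# Sketch of the static-vs-dynamic buffer sizing intuition (assumed parameters,
# not the paper's formulas): a buffer must cover one service round of playback,
# and the round length scales with the number of requests served per round.

def buffer_size(n_requests, playback_Bps, per_request_service_s):
    round_length = n_requests * per_request_service_s
    return round_length * playback_Bps            # bytes consumed during one round

MAX_REQUESTS = 100          # worst case the static scheme provisions for
playback = 1.5e6 / 8        # ~1.5 Mbps stream
service = 0.025             # ~25 ms disk service time per request per round

static = buffer_size(MAX_REQUESTS, playback, service)
for n_active in (10, 40, 80):
    dynamic = buffer_size(n_active, playback, service)
    print(f"{n_active:3d} active: dynamic {dynamic / 1e3:.0f} KB vs static {static / 1e3:.0f} KB per request")
```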

Mathematical Model for Mean Transfer Delay of Web Object in Initial Slow Start Phase (초기 슬로우 스타트 구간에서 웹 객체의 평균 전송 시간 추정을 위한 수학적 모델)

  • Lee, Yong-Jin
    • 대한공업교육학회지, v.33 no.2, pp.248-258, 2008
  • The current Internet uses HTTP (Hyper Text Transfer Protocol) as an application-layer protocol and TCP (Transmission Control Protocol) as a transport-layer protocol to provide web service. SCTP (Stream Control Transmission Protocol) is a more recently proposed transport protocol with congestion control mechanisms very similar to TCP's, except for the initial congestion window during the slow-start phase. In this paper, we present a mathematical model of object transfer latency during the slow-start phase for HTTP over SCTP and compare it with the latency of HTTP over TCP. Validation of the model using experimental results shows that the mean object transfer latency for HTTP over SCTP during the slow-start phase is 11% less than that for HTTP over TCP.
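
The effect of the differing initial congestion window can be illustrated with the generic slow-start round count. The initial-window values below (2 MSS for TCP, roughly 4380 bytes for SCTP) and the other numbers are assumptions for illustration, not the paper's model or its measured 11% difference.

```python
import math

# Rough comparison of slow-start rounds for different initial congestion windows,
# the parameter in which SCTP and TCP differ. Generic textbook estimate with
# assumed values, not the paper's model or measurements.

def slow_start_rounds(object_bytes, init_window_bytes, mss=1460):
    segments = math.ceil(object_bytes / mss)
    w0 = max(1, init_window_bytes // mss)
    return math.ceil(math.log2(segments / w0 + 1))

obj, rtt = 30_000, 0.1                 # 30 KB object, 100 ms RTT (assumed)
tcp_iw  = 2 * 1460                     # typical TCP initial window of 2 MSS (assumed)
sctp_iw = 4380                         # larger SCTP initial window, ~3 MSS (assumed)
for name, iw in (("TCP", tcp_iw), ("SCTP", sctp_iw)):
    rounds = slow_start_rounds(obj, iw)
    print(f"{name}: {rounds} slow-start rounds ~ {rounds * rtt:.1f} s of RTT delay")
```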

A Push-Caching and a Transmission Scheme of Continuous Media for NOD Service on the Internet (인테넷상에서 NOD 서비스를 위한 연속미디어 전송 및 푸쉬-캐싱 기법)

  • Park, Seong-Ho; Im, Eun-Ji; Choe, Tae-Uk; Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society, v.7 no.6, pp.1766-1777, 2000
  • In multimedia news services on the Internet, there are problems such as server overload, network congestion, and initial latency. To overcome these problems, we propose a proxy push-caching scheme that stores a portion of a continuous media stream or the entire stream, and transmission schemes for NOD continuous media, RTP-RR and RTP-nR, that exploit the push-caching scheme. With the proposed push-caching scheme, the NOD server pushes a fixed portion of a stream to a proxy when new data is generated, and the cached size of each stream changes dynamically according to the stream's caching utility value. As a result, the initial latency on the client side can be reduced and the amount of data transmitted from a proxy server to clients can be increased. Moreover, we estimate the caching utility value of each stream using the correlation between the disk space occupied by the stream and the amount of the stream's data requested by clients, and we apply the caching utility value to the replacement policies. The performance of the proposed proxy push-caching and continuous media transmission schemes was compared with other schemes using simulations. In the simulations, these schemes show better results than the other schemes in terms of byte hit rate (BHR), initial latency, the number of replacements, and packet loss rate.
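
A minimal sketch of the prefix-caching bookkeeping described above: each stream keeps a cached prefix, a caching-utility value relates the traffic served from the prefix to the disk space it occupies, and the lowest-utility prefixes are shrunk first when space is needed. The utility definition and the helper below are illustrative assumptions, not the paper's exact formula or replacement policy.

```python
# Illustrative prefix-cache replacement driven by a caching-utility value
# (bytes requested by clients per byte of disk occupied). Assumed stand-in
# for the paper's utility formula, for illustration only.

class CachedStream:
    def __init__(self, name, cached_bytes, requested_bytes):
        self.name = name
        self.cached_bytes = cached_bytes          # disk space the cached prefix occupies
        self.requested_bytes = requested_bytes    # traffic clients pulled from this stream

    def utility(self):
        return self.requested_bytes / max(self.cached_bytes, 1)

def make_room(streams, needed, chunk=1 << 20):
    """Shrink the lowest-utility prefixes, one chunk at a time, until `needed` bytes are freed."""
    freed = 0
    while freed < needed:
        candidates = [s for s in streams if s.cached_bytes > 0]
        if not candidates:
            break                                  # nothing left to evict
        victim = min(candidates, key=lambda s: s.utility())
        cut = min(chunk, victim.cached_bytes, needed - freed)
        victim.cached_bytes -= cut
        freed += cut
    return freed

streams = [CachedStream("news-A", 8 << 20, 40 << 20),
           CachedStream("news-B", 8 << 20, 2 << 20)]
make_room(streams, 4 << 20)                        # shrinks news-B first (lower utility)
print([(s.name, s.cached_bytes >> 20) for s in streams])
```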

Random Number Generation using SDRAM (SDRAM을 사용한 난수 발생)

  • Pyo, Chang-Woo
    • Journal of KIISE: Computing Practices and Letters, v.16 no.4, pp.415-420, 2010
  • Cryptographic keys for security should be generated by true random number generators that apply irreversible hashing algorithms to initial values taken from a random source. As DRAM shows randomness in its access latency, it can be used as a random source. However, systems with synchronous DRAM (SDRAM) do not easily expose such randomness, resulting in highly clustered random numbers. We resolve this problem by using the xor instruction. Statistical testing shows that the generated random bits have quality comparable to true random bit sequences. The bit generation rate is on the order of 100 Kbits/sec. Since the proposed random number generation requires neither external devices nor any special circuits, the method can be used in any computing device that employs DRAM.
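
A rough Python illustration of the general approach (timing memory accesses, keeping the least-significant bits, and xor-folding samples to whiten clustering) is sketched below. The paper's generator works on SDRAM access latency at a much lower level, so this is only conceptual, and timings observed from Python also carry OS and interpreter noise.

```python
import random
import time

# Conceptual sketch only: sample memory-access timings, keep their least-significant
# bits, and xor-fold several samples per output bit. Not the paper's generator.

def raw_timing_bit(buf, stride=4096):
    i = random.randrange(0, len(buf) - 1)        # touch a (pseudo-)random location
    t0 = time.perf_counter_ns()
    _ = buf[i] + buf[(i + stride) % len(buf)]
    t1 = time.perf_counter_ns()
    return (t1 - t0) & 1                         # least-significant timing bit

def random_bits(n, fold=8):
    buf = bytearray(16 * 1024 * 1024)            # 16 MiB working set
    bits = []
    for _ in range(n):
        b = 0
        for _ in range(fold):                    # xor-fold several samples per output bit
            b ^= raw_timing_bit(buf)
        bits.append(b)
    return bits

print("".join(map(str, random_bits(64))))
```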

A History Retransmission Algorithm for Online Arcade Video Games

  • Kim, Seong-Hoo; Park, Kyoo-Seok
    • Journal of Korea Multimedia Society, v.8 no.6, pp.798-806, 2005
  • In this paper, we propose a game system that provides network modules for multi-platform video games and can convert a single-user game into a multi-user game. In this system, an initial delay buffering scheme is introduced on the clients to handle periods of latency caused by load fluctuations in the network during real-time play, and we show that stable game play is achieved as a result of the scheme. This paper also presents a retransmission algorithm based on the history of game commands to handle the drawbacks of the UDP mechanism. We evaluate the network delay and packet loss using the simulation tool NS2 and show that a buffer delay of 0.3 seconds is the most suitable for recovery.
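
A conceptual sketch of a history-based retransmission layer over UDP: every outgoing packet carries not only the newest game command but also all commands the peer has not yet acknowledged, so a lost datagram is repaired by the next one that arrives. This illustrates the general technique the abstract names; the packet format and field names here are assumptions, not the paper's protocol.

```python
# Sketch of history piggybacking over UDP (assumed packet format, illustration only):
# unacknowledged commands are re-sent with every new packet, so any single loss is
# repaired by the next packet that gets through.

from dataclasses import dataclass, field

@dataclass
class Packet:
    ack: int                                        # highest sequence number received from the peer
    commands: list = field(default_factory=list)    # (seq, command) pairs, oldest first

class HistorySender:
    def __init__(self):
        self.next_seq = 0
        self.unacked = []          # history of commands the peer has not acknowledged

    def send(self, command, last_ack_from_peer):
        # Drop history entries the peer has already confirmed.
        self.unacked = [(s, c) for (s, c) in self.unacked if s > last_ack_from_peer]
        self.unacked.append((self.next_seq, command))
        self.next_seq += 1
        return Packet(ack=last_ack_from_peer, commands=list(self.unacked))

# Example: the receiver has acknowledged up to seq 1, so seq 2 is resent along with seq 3.
tx = HistorySender()
tx.send("MOVE_LEFT", -1); tx.send("JUMP", -1); tx.send("FIRE", 0)
print(tx.send("MOVE_RIGHT", 1).commands)   # [(2, 'FIRE'), (3, 'MOVE_RIGHT')]
```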
