• Title/Summary/Keyword: Latency Time


Muscle Latency Time and Activation Patterns for Upper Extremity During Reaching and Reach to Grasp Movement

  • Choi, Sol-a;Kim, Su-jin
    • Physical Therapy Korea
    • /
    • v.25 no.3
    • /
    • pp.51-59
    • /
    • 2018
  • Background: Although muscle latency times and activation patterns have been used as broad examination tools to diagnose disease and track recovery, previous studies have not compared the dominant and non-dominant arms in muscle latency time and muscle recruitment patterns during reaching and reach-to-grasp movements. Objects: The present study aimed to investigate differences between the dominant and non-dominant hands in muscle latency time and recruitment pattern during reaching and reach-to-grasp movements. In addition, by manipulating movement speed, we examined the effect of speed on the neuromuscular control of both the right and left hands. Methods: A total of 28 right-handed (as measured by the Edinburgh Handedness Inventory) healthy subjects were recruited. We recorded surface electromyography muscle latency times and muscle recruitment patterns of four upper extremity muscles (i.e., anterior deltoid, triceps brachii, flexor digitorum superficialis, and extensor digitorum) from both the left and right arms. Mixed-effect linear regression was used to detect differences between hands, between reaching and reach-to-grasp, and between the fast and preferred speed conditions. Results: There were no significant differences in muscle latency time between the dominant and non-dominant hands or between the reaching and reach-to-grasp tasks (p>.05). However, muscle latency time was significantly longer in the preferred speed condition than in the fast speed condition for both reaching and reach-to-grasp tasks (p<.05). Conclusion: These findings showed similar muscle latency times and muscle activation patterns across movement speeds and tasks. We hope these findings provide normative muscle physiology data for both the right and left hands, thus aiding the understanding of abnormal movements in patients and the development of rehabilitation strategies specific to the dominant and non-dominant hands.
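Muscle latency time in EMG studies of this kind is commonly defined as the interval between a go cue and EMG onset, with onset taken as the first sample where the rectified signal exceeds the baseline mean plus a multiple of its standard deviation. A minimal sketch of that criterion (the threshold multiplier, sample rate, and toy signal are illustrative assumptions, not values from the paper):

```python
def emg_onset_latency(emg, fs, baseline_samples, k=3.0):
    """Return EMG onset latency in seconds: the first sample where the
    rectified signal exceeds baseline mean + k * baseline SD."""
    rectified = [abs(x) for x in emg]
    base = rectified[:baseline_samples]
    mean = sum(base) / len(base)
    sd = (sum((x - mean) ** 2 for x in base) / len(base)) ** 0.5
    threshold = mean + k * sd
    for i in range(baseline_samples, len(rectified)):
        if rectified[i] > threshold:
            return i / fs
    return None  # no onset detected

# Example: quiet baseline, then a burst at sample 120 (fs = 1000 Hz)
signal = [0.01] * 120 + [0.5] * 80
print(emg_onset_latency(signal, fs=1000, baseline_samples=100))  # 0.12
```

In practice the onset estimate is sensitive to the baseline window and multiplier k, which is one reason such studies report normative values per muscle and condition.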

A study on Packet Losses for Guaranteering Response Time of Service (서비스 응답시간 보장을 위한 패킷 손실에 관한 연구)

  • Kim Tae-Kyung;Seo Hee-Seok;Kim Hee-Wan
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.3
    • /
    • pp.201-208
    • /
    • 2005
  • To guarantee quality of service for user requests, various factors must be considered. An important aspect of QoS is that the response time of a service is transparently presented to network users. The response time of a service can be determined from the network latency, system latency, and software component latency. In this paper, we model network latency and analyze the effect of packet loss on it. We also demonstrate the validity of the model using NS-2. This research can help provide effective methods for SLA (Service Level Agreement) negotiation between service providers and users.
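The effect of packet loss on response time can be illustrated with a toy model (this is a simplified stand-in, not the paper's NS-2 formulation): if each packet is lost independently with probability p and a loss costs one retransmission timeout, the expected number of retransmissions per packet is p/(1-p).

```python
def expected_response_time(base_latency, n_packets, loss_rate, timeout):
    """Toy model: each lost packet costs one retransmission timeout.
    Expected retransmissions per packet for loss probability p is
    p / (1 - p); total time = base latency + retransmission cost."""
    if not 0 <= loss_rate < 1:
        raise ValueError("loss rate must be in [0, 1)")
    retx_per_packet = loss_rate / (1 - loss_rate)
    return base_latency + n_packets * retx_per_packet * timeout

# 100 packets, 20 ms base latency, 1% loss, 1 s retransmission timeout
print(expected_response_time(0.020, 100, 0.01, 1.0))
```

Even a 1% loss rate dominates the base latency here, which is why an SLA on response time must bound packet loss, not just raw network delay.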


RTK Latency Estimation and Compensation Method for Vehicle Navigation System

  • Jang, Woo-Jin;Park, Chansik;Kim, Min;Lee, Seokwon;Cho, Min-Gyou
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.6 no.1
    • /
    • pp.17-26
    • /
    • 2017
  • Latency occurs in RTK: the position that is output actually corresponds to a past position relative to the measurement time. This latency has an adverse effect on navigation accuracy. In the present study, a system that estimates the latency of RTK and compensates for the position error induced by the latency was implemented. To estimate the latency, the speed obtained from an odometer and the speed calculated from the position change of RTK were used. The latency was estimated with a modified correlator in which the speed from the odometer is shifted sample by sample until the best fit with the speed from RTK is found. To compensate for the position error induced by the latency, the current position was calculated from the speed and heading of RTK. To evaluate the performance of the implemented method, data obtained from an actual vehicle were applied to the implemented system. The results of the experiment showed that the latency could be estimated with an error of less than 12 ms. The minimum data acquisition time for stable estimation of the latency was up to 55 seconds. In addition, when the position was compensated based on the estimated latency, the position error decreased by at least 53.6% compared with that before compensation.
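The shift-and-compare estimation and the speed/heading compensation described above can be sketched as follows (a simplified stand-in for the paper's modified correlator; the function names and toy trace are illustrative, not from the paper):

```python
from math import sin, cos

def estimate_latency_samples(odo_speed, rtk_speed, max_shift):
    """Delay the odometer speed series sample by sample (implemented by
    advancing the RTK series) and pick the shift with the smallest
    mean squared error against the RTK-derived speed."""
    best_shift, best_err = 0, float("inf")
    for shift in range(max_shift + 1):
        diffs = [(a - b) ** 2 for a, b in zip(odo_speed, rtk_speed[shift:])]
        err = sum(diffs) / len(diffs)   # normalize: fewer pairs at larger shifts
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift

def compensate_position(pos, speed, heading_rad, latency_s):
    """Dead-reckon the current position from the delayed RTK fix
    using RTK speed and heading."""
    dx = speed * latency_s * cos(heading_rad)
    dy = speed * latency_s * sin(heading_rad)
    return pos[0] + dx, pos[1] + dy

# RTK speed lags the odometer by 3 samples in this toy trace
odo = list(range(10))
rtk = [0, 0, 0] + list(range(7))
print(estimate_latency_samples(odo, rtk, 5))  # 3
```

Multiplying the estimated shift by the sample period gives the latency in seconds, which then feeds the compensation step.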

Eager Data Transfer Mechanism for Reducing Communication Latency in User-Level Network Protocols

  • Won, Chul-Ho;Lee, Ben;Park, Kyoung;Kim, Myung-Joon
    • Journal of Information Processing Systems
    • /
    • v.4 no.4
    • /
    • pp.133-144
    • /
    • 2008
  • Clusters have become a popular alternative for building high-performance parallel computing systems. Today's high-performance system area network (SAN) protocols, such as VIA and IBA, significantly reduce user-to-user communication latency by implementing protocol stacks outside the operating system kernel. However, emerging parallel applications require a further significant improvement in communication latency. Since the time required to transfer data between host memory and the network interface (NI) makes up a large portion of the overall communication latency, reducing data transfer time is crucial for achieving low-latency communication. In this paper, an Eager Data Transfer (EDT) mechanism is proposed to reduce the time for data transfers between the host and the network interface. EDT employs cache coherence interface hardware to transfer data directly between the host and the NI. An EDT-based network interface was modeled and simulated on Linux/SimOS, a Linux-based complete system simulation environment. Our simulation results show that the EDT approach significantly reduces data transfer time compared to DMA-based approaches. The EDT-based NI attains a 17% to 38% reduction in user-to-user message time compared to cache-coherent DMA-based NIs for a range of message sizes (64 bytes to 4 Kbytes) in a SAN environment.

A Case Study on Network Status Classification based on Latency Stability

  • Kim, JunSeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.11
    • /
    • pp.4016-4027
    • /
    • 2014
  • Understanding network latency is important for providing consistent and acceptable levels of service in network-based applications. However, due to the difficulty of estimating applications' network demands and of modeling network latency, the management of network resources has often been neglected. Since network latency repeats cycles of congested states, we expect that a systematic method for classifying network status would help simplify network resource management. This paper presents a simple empirical method for classifying network status on a real operational network. By observing the oscillating behavior of end-to-end latency, we determine a network's status at run time. Five typical network statuses are defined based on long-term stability and short-term burstiness. By investigating the prediction accuracies of several simple numerical models, we show the effectiveness of the network status classification. Experimental results show around an 80% reduction in prediction errors, depending on network status.
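The two-axis idea, long-term stability versus short-term burstiness, can be sketched from a latency trace as follows (the thresholds, window size, and status labels here are illustrative assumptions, not the paper's five definitions):

```python
def classify_network_status(latencies, window, stab_thresh, burst_thresh):
    """Illustrative two-axis classification: long-term stability is the
    spread of per-window mean latencies; short-term burstiness is the
    average within-window spread."""
    windows = [latencies[i:i + window]
               for i in range(0, len(latencies) - window + 1, window)]

    def spread(xs):  # population standard deviation
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    means = [sum(w) / len(w) for w in windows]
    long_term = spread(means)                                # drift across windows
    short_term = sum(spread(w) for w in windows) / len(windows)
    stable = long_term < stab_thresh
    calm = short_term < burst_thresh
    if stable and calm:
        return "stable"
    if stable:
        return "stable-bursty"
    if calm:
        return "drifting"
    return "congested"

# Flat, low-jitter latency trace (ms)
samples = [10.0, 10.1, 9.9, 10.0] * 5
print(classify_network_status(samples, 4, 0.5, 0.5))  # stable
```

A predictor can then be chosen per status, which is the mechanism behind the reported reduction in prediction errors.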

Variable latency L1 data cache architecture design in multi-core processor under process variation

  • Kong, Joonho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.9
    • /
    • pp.1-10
    • /
    • 2015
  • In this paper, we propose a new variable latency L1 data cache architecture for multi-core processors. Our proposed architecture extends the traditional variable latency cache to suit multi-core processors. We add a specialized data structure that records the latency of the L1 data cache; the value stored in this structure is determined by the latency added to the L1 data cache. It also tracks the remaining L1 data cache access cycles and notifies the reservation station in the core of data arrival. As in the variable latency cache of single-core architectures, our proposed architecture flexibly extends the cache access cycles to account for process variation. The proposed cache architecture can reduce yield losses incurred by L1 cache access time failures to nearly 0%. Moreover, we quantitatively evaluate performance, power, energy consumption, power-delay product, and energy-delay product as the number of cache access cycles increases.
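The added data structure can be pictured as a per-line latency record plus a countdown toward the data-arrival notification. A behavioral sketch (the class name, per-line granularity, and base latency are illustrative assumptions, not the paper's hardware design):

```python
class VariableLatencyL1D:
    """Behavioral model: each cache line records extra access cycles
    (due to process variation); on an access, a countdown tracks the
    remaining cycles and signals when the data arrives, standing in
    for the notification to the core's reservation station."""

    def __init__(self, n_lines, base_cycles=2):
        self.base_cycles = base_cycles
        self.extra = [0] * n_lines   # latency record per line
        self.remaining = 0           # cycles until data arrival

    def set_extra_latency(self, line, cycles):
        self.extra[line] = cycles    # set at characterization/test time

    def access(self, line):
        """Start an access; return its total latency in cycles."""
        self.remaining = self.base_cycles + self.extra[line]
        return self.remaining

    def tick(self):
        """Advance one cycle; True means data arrives this cycle."""
        if self.remaining > 0:
            self.remaining -= 1
        return self.remaining == 0

cache = VariableLatencyL1D(n_lines=64)
cache.set_extra_latency(5, 1)   # one slow line needs an extra cycle
print(cache.access(5))          # 3 cycles total
```

Marking slow lines with extra cycles instead of discarding the die is what recovers the yield otherwise lost to access-time failures.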

Low-latency SAO Architecture and its SIMD Optimization for HEVC Decoder

  • Kim, Yong-Hwan;Kim, Dong-Hyeok;Yi, Joo-Young;Kim, Je-Woo
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.3 no.1
    • /
    • pp.1-9
    • /
    • 2014
  • This paper proposes a low-latency Sample Adaptive Offset (SAO) filter architecture and a Single Instruction Multiple Data (SIMD) optimization scheme to achieve fast High Efficiency Video Coding (HEVC) decoding in a multi-core environment. According to the HEVC standard and its Test Model (HM), the SAO operation is performed only at the picture level. Most real-time decoders, however, execute their sub-modules on a Coding Tree Unit (CTU) basis to reduce latency and memory bandwidth. The proposed low-latency SAO architecture has the following advantages over picture-based SAO: 1) significantly lower memory requirements, and 2) a low-latency property enabling efficient pipelined multi-core decoding. In addition, SIMD optimization can reduce the SAO filtering time significantly. The simulation results showed that the proposed low-latency SAO architecture, with significantly less memory usage, yields a decoding time similar to picture-based SAO in single-core decoding. Furthermore, the SIMD optimization scheme speeds up SAO filtering by approximately 509% and increases the total decoding speed by approximately 7% compared to the existing look-up table approach of HM.
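For context on what SAO filtering computes, HEVC's band-offset mode splits the sample range into 32 bands and adds a signalled offset to samples in four consecutive bands. A scalar sketch of that per-sample logic (the SIMD version applies it to many samples in parallel; the example values are illustrative):

```python
def sao_band_offset(samples, band_pos, offsets, bit_depth=8):
    """HEVC SAO band offset for one block: the sample range is split
    into 32 bands; the four bands starting at band_pos receive the
    signalled offsets, and other samples pass through unchanged."""
    shift = bit_depth - 5              # 32 bands: band = sample >> 3 at 8 bit
    max_val = (1 << bit_depth) - 1
    out = []
    for s in samples:
        idx = (s >> shift) - band_pos
        if 0 <= idx < 4:
            s = min(max(s + offsets[idx], 0), max_val)  # clip to bit depth
        out.append(s)
    return out

# Samples falling in bands 12-13 are adjusted; others are untouched
print(sao_band_offset([100, 101, 90, 200], band_pos=12, offsets=[2, -1, 0, 0]))
```

Because each sample is processed independently with the same table lookup and clip, the loop vectorizes cleanly, which is what the SIMD optimization exploits.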

Ultra-low-latency services in 5G systems: A perspective from 3GPP standards

  • Jun, Sunmi;Kang, Yoohwa;Kim, Jaeho;Kim, Changki
    • ETRI Journal
    • /
    • v.42 no.5
    • /
    • pp.721-733
    • /
    • 2020
  • Recently, there is an increasing demand for ultra-low-latency (ULL) services, such as factory automation, autonomous driving, and telesurgery, that must meet an end-to-end latency of less than 10 ms. Fifth-generation (5G) New Radio guarantees 0.5 ms one-way latency, so the feasibility of ULL services is higher than in previous mobile communications. However, this guarantee covers only the radio access network, and an innovative 5G network architecture is required for end-to-end ULL across the entire 5G system. Hence, we survey in detail two 3rd Generation Partnership Project (3GPP) standardization activities that ensure low latency at the network level. 3GPP standardized mobile edge computing (MEC), a low-latency solution at the edge network, in Release 15/16 and is standardizing time-sensitive communication in Release 16/17 for interworking between 5G systems and IEEE 802.1 time-sensitive networking (TSN), a next-generation industrial technology for ensuring low and deterministic latency. We developed a 5G system based on 3GPP Release 15 that supports MEC with a potential sub-10 ms end-to-end latency in the edge network. In the near future, to provide ULL services in the external network of a 5G system, we suggest a 5G-IEEE TSN interworking system based on 3GPP Release 16/17 that meets an end-to-end latency of 2 ms.

Design of CPS Architecture for Ultra Low Latency Control (초저지연 제어를 위한 CPS 아키텍처 설계)

  • Kang, Sungjoo;Jeon, Jaeho;Lee, Junhee;Ha, Sujung;Chun, Ingeol
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.5
    • /
    • pp.227-237
    • /
    • 2019
  • Ultra-low latency control is one of the characteristics of 5G cellular network services and means that the control loop is handled within milliseconds. To achieve this, it is necessary to identify the time delay factors that arise in all components of the CPS control loop, including new 5G cellular network elements such as MEC, and to optimize the CPS control loop in real time. In this paper, a novel CPS architecture for ultra-low latency control is designed. We first define the ultra-low latency characteristics of CPS and the CPS concept model, and then propose the design of a control loop performance monitor (CLPM) that manages the timing information of the CPS control loop. Finally, a case study of an MEC-based implementation of an ultra-low latency CPS demonstrates the feasibility of future applications.
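A control loop performance monitor of the kind described above boils down to timestamping each stage of a loop iteration and reporting per-stage and end-to-end latency. A minimal sketch (the class interface and stage names are illustrative assumptions, not the paper's CLPM design):

```python
import time

class ControlLoopPerfMonitor:
    """Sketch of a control loop performance monitor: timestamp each
    stage of one control-loop iteration, then report the time spent
    in each stage and the end-to-end latency."""

    def __init__(self):
        self.marks = []

    def mark(self, stage):
        self.marks.append((stage, time.perf_counter()))

    def report(self):
        """Return {stage: seconds until the next mark} plus the total."""
        out = {}
        for (s1, t1), (_, t2) in zip(self.marks, self.marks[1:]):
            out[s1] = t2 - t1
        out["end_to_end"] = self.marks[-1][1] - self.marks[0][1]
        return out

clpm = ControlLoopPerfMonitor()
clpm.mark("sense"); clpm.mark("uplink"); clpm.mark("compute")
clpm.mark("downlink"); clpm.mark("actuate")
print(clpm.report())
```

Feeding such per-stage timings back into the system is what allows the control loop to be optimized at run time, e.g. by relocating the compute stage to MEC when the uplink segment dominates.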

Real-time Camera and Video Streaming Through Optimized Settings of Ethernet AVB in Vehicle Network System

  • An, Byoungman;Kim, Youngseop
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.3025-3047
    • /
    • 2021
  • This paper presents the latest Ethernet standardization for in-vehicle networks and future trends in automotive Ethernet technology. The proposed system provides design and optimization algorithms for automotive networking related to AVB (Audio Video Bridging) technology. We present a design of an in-vehicle network system as well as an optimization of AVB for automotive use. The proposed Reduced Latency of Machine to Machine (RLMM) scheme plays an outstanding role in reducing latency among devices; in real-world experimental cases, RLMM reduced latency by around 41.2%. The setup optimized for the automotive network environment is expected to significantly reduce the time spent in the development and design process. The results obtained in the study of image transmission latency are trustworthy because average values were collected over a long period of time. Analyzing the latency between multimedia devices within a limited time will be of considerable benefit to the industry. Furthermore, the proposed reliable camera and video streaming through optimized AVB device settings would provide a high level of support for the real-time comprehension and analysis of images with AI (Artificial Intelligence) algorithms in autonomous driving.