• Title/Summary/Keyword: On-The-Fly Race Detection

Loop Splitting for On-the-fly Race Detection of Shared-memory Parallel Programs (공유 메모리 병렬 프로그램의 수행중 오류 탐지를 위한 루프 분리)

  • Song, Tae-Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.3
    • /
    • pp.391-398
    • /
    • 2012
  • Detecting races is important for debugging shared-memory parallel programs, because races result in unintended non-deterministic executions of the programs. Previous on-the-fly techniques for detecting races in parallel programs with general inter-thread coordination incur serious space overhead that depends on the maximum parallelism of the program. This paper therefore presents a loop splitting technique for on-the-fly race detection of parallel programs that is more space-efficient than previous techniques. The loop splitting technique restructures the debugged program while preserving the semantics of the original program, and monitoring the loop-split program on the fly can then detect the first races.
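
The core idea of loop splitting can be illustrated with a small, hedged sketch (not the paper's transformation): a parallel loop whose body crosses an inter-thread coordination point is split at that point into two parallel loops, so the coordination becomes the boundary between the loops while the computed values stay the same. The use of Python's ThreadPoolExecutor and the phase1/phase2 names below are illustrative assumptions only.

    from concurrent.futures import ThreadPoolExecutor

    N = 8
    a = [None] * N
    b = [None] * N

    # Conceptually, the original loop body computes a[i], waits for a
    # neighbour's a[i-1], and then computes b[i].  Splitting at the
    # coordination point yields two loops; the boundary between them
    # plays the role of the removed wait.

    def phase1(i):
        a[i] = i * i                            # writes only its own a[i]

    def phase2(i):
        b[i] = (a[i - 1] + 1) if i > 0 else 0   # reads a neighbour's phase-1 result

    with ThreadPoolExecutor() as pool:
        list(pool.map(phase1, range(N)))        # first split loop, fully parallel
        list(pool.map(phase2, range(N)))        # second split loop starts after the first finishes

In this toy example the split program produces the same a and b as the original body would, which is the sense in which the transformation preserves semantics.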

Detecting the First Race in OpenMP Program with Nested Parallelism (내포 병렬성을 가지는 OpenMP 프로그램의 최초 경합 탐지)

  • Chon, Byoung-Gyu;Woo, Jong-Jung;Jun, Yong-Kee
    • The KIPS Transactions:PartA
    • /
    • v.8A no.3
    • /
    • pp.253-260
    • /
    • 2001
  • It is important to detect races for debugging shared-memory parallel programs, because races cause unintended nondeterministic program executions. Previous on-the-fly techniques to detect races cannot guarantee detection of the first race in nested parallel programs. Detecting the first race is important for debugging parallel programs, since removing the first race may make subsequently occurring races disappear. In this paper, we present an on-the-fly detection technique that detects all of the first races through re-execution of the debugged program. We assume that the debugged parallel program may contain one-way nested parallelism. In the worst case, the number of re-executions equals the nesting depth of the program. The space complexity is O(VT) and the time complexity to check for a race at each access to an access history is O(T), where V is the number of shared variables and T is the maximum parallelism of the program. This per-execution efficiency is the same as that of previous on-the-fly detection techniques. Therefore, this technique makes debugging parallel programs more effective and practical.
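
As a rough illustration of access-history-based on-the-fly checking (a generic sketch, not the authors' protocol), the snippet below keeps, per shared variable, the prior reads and writes together with a thread label and flags a race whenever a new access conflicts with a prior access that is not ordered with it. The happens_before function is assumed to be supplied by a separate labeling scheme and is only a placeholder here.

    class AccessHistory:
        """Per-variable history of prior accesses.  `happens_before(x, y)` is
        assumed to come from a labeling scheme and returns True if the access
        labeled x must precede the access labeled y."""

        def __init__(self, happens_before):
            self.happens_before = happens_before
            self.reads = []    # list of (thread_label, source_location)
            self.writes = []   # list of (thread_label, source_location)

        def check_and_record(self, label, location, is_write):
            hb = self.happens_before
            # A write conflicts with every prior access; a read only with prior writes.
            conflicting = (self.reads + self.writes) if is_write else self.writes
            races = [(other_loc, location)
                     for other_label, other_loc in conflicting
                     if not hb(other_label, label) and not hb(label, other_label)]
            (self.writes if is_write else self.reads).append((label, location))
            return races       # a non-empty list means a race is reported on the fly

If entries that are ordered before all later accesses are pruned, each history holds O(T) concurrent accesses, which is where the O(VT) space and O(T) per-access check quoted in the abstract come from.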

A Labeling Scheme for Efficient On-the-fly Detection of Race Conditions in Parallel Programs (병렬프로그램의 경합조건을 수행 중에 효율적으로 탐지하기 위한 레이블링 기법)

  • Park, So-Hee;Woo, Jong-Jung;Bae, Jong-Min;Jun, Yong-Kee
    • The KIPS Transactions:PartA
    • /
    • v.9A no.4
    • /
    • pp.525-534
    • /
    • 2002
  • Race conditions, races in short, need to be detected for debugging parallel programs, because races result in unintended non-deterministic executions. To detect races in an execution of a program, previous techniques either use a centralized data structure, which may incur a serious bottleneck in generating concurrency information, or show inefficient time complexity that depends on the degree of nested parallelism when comparing any two pieces of that information. In this paper we propose a new labeling scheme which is scalable in generating the concurrency information without a bottleneck, by using a private data structure, and which reduces the time complexity of checking concurrency to a constant. This scalability and time efficiency therefore make on-the-fly race detection efficient not only for programs using either shared memory or message passing, but also for programs using a mixed model of the two.
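
The contrast between a centralized and a private structure for generating concurrency information can be sketched as follows (an illustration only, not the authors' labeling scheme; the constant-time concurrency check that is the paper's main contribution is not reproduced here).

    import itertools, threading

    # Centralized variant: every fork contends on one global counter, which is
    # exactly the kind of bottleneck the abstract refers to.
    _counter = itertools.count()
    _counter_lock = threading.Lock()

    def fork_label_centralized(parent_label):
        with _counter_lock:
            return parent_label + (next(_counter),)

    # Private variant: the label of child `child_index` of a fork with `width`
    # children is a pure function of the parent's label, so no thread ever
    # synchronizes with another just to obtain a label.
    def fork_label_private(parent_label, child_index, width):
        return parent_label + ((child_index, width),)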

On-the-fly Detection of Race Conditions in Message-Passing Programs (메시지 전달 프로그램에서의 수행 중 경합탐지)

  • Park, Mi-Young;Kang, Moon-Hye;Jun, Yong-Kee;Park, Hyuk-Ro
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.7
    • /
    • pp.267-275
    • /
    • 2007
  • Message races should be detected for debugging message-passing parallel programs because they can cause non-deterministic executions. In particular, it is important to detect the first race in each process, because the first race can cause the occurrence of the other races in the same process. Previous techniques for detecting the first races require more than two monitored runs of a program, or analyze a trace file whose size is proportional to the number of messages. In this paper we introduce an on-the-fly technique that detects the first race in each process without generating any trace file. In experiments with some benchmark programs, we test the accuracy of our technique, and the results show that it detects the first race in each process in all of the benchmark programs.
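
A hedged sketch of how a message race can be recognized (not the paper's algorithm): a receive is race-affected when at least two sends that could match it are mutually concurrent. Vector clocks piggybacked on messages stand in here for whatever ordering information the actual technique maintains, and the data layout is an assumption for illustration.

    def concurrent(vc_a, vc_b):
        """True if neither vector clock (dict: process -> counter) dominates the other."""
        procs = set(vc_a) | set(vc_b)
        a_le_b = all(vc_a.get(p, 0) <= vc_b.get(p, 0) for p in procs)
        b_le_a = all(vc_b.get(p, 0) <= vc_a.get(p, 0) for p in procs)
        return not a_le_b and not b_le_a

    def first_message_race(receives):
        """receives: the receive events of one process in program order, each a
        pair (recv_id, clocks_of_matchable_sends).  Returns the first receive
        toward which two mutually concurrent sends race, or None."""
        for recv_id, send_clocks in receives:
            for i in range(len(send_clocks)):
                for j in range(i + 1, len(send_clocks)):
                    if concurrent(send_clocks[i], send_clocks[j]):
                        return recv_id
        return None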

Efficient On-the-fly Detection of First Races in Shared-Memory Programs with Nested Parallelism (내포병렬성을 가진 공유메모리 프로그램의 수행중 최초경합 탐지를 위한 효율적 기법)

  • Ha, Keum-Sook;Jun, Yong-Kee;Yoo, Kee-Young
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.7_8
    • /
    • pp.341-351
    • /
    • 2003
  • For effective debugging of shared-memory programs with nested parallelism, it is important to efficiently detect the first races, which incur non-deterministic executions of the programs. The previous on-the-fly technique detects the first races in two passes, and is inefficient in both execution time and memory space because the size of the access history for each shared variable depends on the maximum parallelism of the program. This paper proposes a new on-the-fly technique to detect the first races in two passes, which requires only a constant number of event comparisons and constant space on each access to a shared variable, because the size of the access history for each shared variable is a small constant. This technique therefore makes on-the-fly race detection more efficient and practical for debugging shared-memory programs with nested parallelism.

On-the-fly Detection of the First Races for Shared-Memory Parallel Programs with Ordered Synchronization (순서적 동기화를 포함하는 공유 메모리 병렬프로그램에서의 수행중 최초경합 탐지 기법)

  • Park, Hui-Dong;Jeon, Yong-Gi
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.8
    • /
    • pp.884-894
    • /
    • 1999
  • Detecting races is important in debugging shared-memory parallel programs which have ordered synchronization and nested parallelism, because the races result in unintended non-deterministic executions of the programs. The first races are especially important in debugging, because the removal of such races may make other races disappear; it is even possible that all reported races would disappear once the first races are removed. This paper presents a new two-pass on-the-fly algorithm that detects the first races, i.e. the races that "occur first" in a particular execution of a deterministic shared-memory parallel program. Using an HPF compiler, the detection protocol was applied to two certified benchmark programs executed under High Performance Fortran environments, and experiments on the parameters to be considered when debugging parallel programs show the efficiency of the technique.

On-the-fly Detection of the First Races for Reducing Bottlenecks by Summary Report Method (요약보고 방법에 의해 병목현상을 개선한 최초경합의 수행중 탐지기법)

  • Kim, Jeong-Si;Jeon, Yong-Gi
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.9
    • /
    • pp.1042-1054
    • /
    • 1999
  • Detecting races is important for debugging shared-memory parallel programs, because races lead to erroneous results and to unintended nondeterministic executions of the programs, which make debugging difficult. Detecting the first races is especially important, because removing the first races can prevent the remaining races. Most existing on-the-fly techniques to detect races are based on a per-access reporting method that incurs a serious central bottleneck, because they use a unique piece of shared information, called an access history, to check all accesses of concurrent threads to a shared variable. This bottleneck, however, can be reduced considerably when detecting first races. This paper presents a new on-the-fly technique that detects the first races with a reduced bottleneck by checking each access against private, unshared access histories and by using shared access histories only at thread join points, where races are reported. Because this technique can detect the first races more efficiently, it makes on-the-fly race detection more efficient and practical.
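
The summary-report idea can be sketched roughly as follows (a simplified illustration, not the paper's protocol): each thread logs its accesses in a private history with no locking, and only when sibling threads join are their histories compared and races reported. Siblings of a single fork are simply treated as mutually concurrent here, and the class and function names are assumptions.

    class PrivateHistory:
        def __init__(self, tid):
            self.tid = tid
            self.reads = {}    # variable -> list of source locations
            self.writes = {}   # variable -> list of source locations

        def record(self, var, location, is_write):
            target = self.writes if is_write else self.reads
            target.setdefault(var, []).append(location)   # private, no locking needed

    def report_races_at_join(histories):
        """Compare the private histories of joining sibling threads pairwise."""
        races = []
        for i, h1 in enumerate(histories):
            for h2 in histories[i + 1:]:
                for var in set(h1.writes) | set(h2.writes):
                    # write-write and write-read conflicts between concurrent siblings
                    if var in h1.writes and (var in h2.writes or var in h2.reads):
                        races.append((var, h1.tid, h2.tid))
                    elif var in h2.writes and var in h1.reads:
                        races.append((var, h1.tid, h2.tid))
        return races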

An Improving Method of Restructuring Parallel Programs for Data Race Detection

  • Ha, Keum-Sook;Lee, Sung woo;Yoo, Kee-Young
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.715-718
    • /
    • 2000
  • Although shared-memory parallel programs are designed to be deterministic both in their final results and intermediate states, the races that occur when different processes access a common memory location in an order not guaranteed by synchronization can result in unintended non-deterministic executions of the program. Detecting races, particularly first data races, is therefore important for debugging explicit shared-memory parallel programs, since all data races reported by other on-the-fly algorithms may disappear once the first races are removed. To detect races in parallel programs with nested loops and inter-thread coordination, the order of synchronization operations in an execution instance must be guaranteed. In this paper, we propose an improved restructuring method that guarantees the ordering of the execution instance and preserves the semantics of the original program. This method requires O(np) time and O(s + np) space, where n is the number of total operations, s is the number of synchronization operations, and p is the degree of parallelism in the execution. This method also makes on-the-fly detection for parallel programs with nested loops and inter-thread coordination easier in terms of space and time complexity.

On-the-fly Monitoring Tool for Detecting Data Races in Multithread Programs (멀티 스레드 프로그램의 자료경합 탐지를 위한 수행 중 감시 도구)

  • Paeng, Bong-Jun;Park, Se-Won;Kuh, In-Bon;Ha, Ok-Kyoon;Jun, Yong-Kee
    • Journal of KIISE
    • /
    • v.42 no.2
    • /
    • pp.155-161
    • /
    • 2015
  • It is difficult and cumbersome to figure out whether a multithread program runs with concurrency bugs, such as data races and atomicity violations, because there are many possible executions of the program and a lot of the defects are hard to reproduce. Hence, monitoring techniques for collecting and analyzing the information from program execution, such as thread executions, memory accesses, and synchronization information, are important to locate data races for debugging multithread programs. This paper presents an efficient and practical monitoring tool, called VcTrace, that analyzes the partial ordering of concurrent threads and events during an execution of the program based on the vector clock system. Empirical results on C/C++ benchmarks using Pthreads show that VcTrace is a sound and practical tool for on-the-fly data race detection as well as for analyzing multithread programs.
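
Since the abstract states that VcTrace is based on the vector clock system, a generic vector-clock sketch (not VcTrace's actual implementation) may help show how the partial order of thread events is tracked: each thread carries a clock, ticks its own entry, and merges clocks at fork and join, so that ordered events compare as happened-before and unordered events as concurrent.

    class VectorClock:
        def __init__(self, clock=None):
            self.clock = dict(clock or {})

        def tick(self, tid):
            self.clock[tid] = self.clock.get(tid, 0) + 1

        def merge(self, other):
            for tid, t in other.clock.items():
                self.clock[tid] = max(self.clock.get(tid, 0), t)

        def happened_before(self, other):
            return (self.clock != other.clock and
                    all(t <= other.clock.get(tid, 0) for tid, t in self.clock.items()))

    def on_fork(parent_vc, parent_tid, child_tid):
        """The child starts from a copy of the parent's clock."""
        parent_vc.tick(parent_tid)
        child_vc = VectorClock(parent_vc.clock)
        child_vc.tick(child_tid)
        return child_vc

    def on_join(parent_vc, child_vc, parent_tid):
        """The parent absorbs the child's clock, so later parent events are
        ordered after everything the child did."""
        parent_vc.merge(child_vc)
        parent_vc.tick(parent_tid)

Two accesses to the same variable race if at least one is a write and neither one's clock happened before the other's.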

A Preprocessor for Detecting Potential Races in Shared Memory Parallel Programs with Internal Nondeterminism (내부적 비결정성을 가진 공유 메모리 병렬 프로그램에서 잠재적 경합탐지를 위한 전처리기)

  • Kim, Young-Joo;Jung, Min-Sub;Jun, Yong-Kee
    • The KIPS Transactions:PartA
    • /
    • v.17A no.1
    • /
    • pp.9-18
    • /
    • 2010
  • Races that occur in shared-memory parallel programs such as OpenMP programs must be detected for debugging, because they cause unintended non-deterministic results. Previous work that verifies the existence of such races on the fly is limited to programs without internal non-determinism. For programs with internal non-determinism, such techniques need at least N! execution instances for each critical section to verify the existence of races, where N is the degree of maximum parallelism. This paper presents a preprocessor that statically analyzes the locations of non-deterministic accesses using program slicing, and that can detect apparent races as well as potential races in a single execution using the analyzed information. The suggested tool deterministically monitors the non-deterministic accesses that occur in OpenMP programs, so that it can verify the existence of races with any race detection protocol applicable to programs with critical sections. To evaluate the tool empirically, we experimented with a set of benchmark programs, including synthetic programs that involve non-deterministic accesses, the OpenMP Microbenchmark, the NAS Parallel Benchmark, and OpenMP application programs.