Title/Summary/Keyword: Task scheduling algorithm


Task Scheduling Algorithm for the Communication, Ocean, and Meteorological Satellite

  • Lee, Soo-Jeon; Jung, Won-Chan; Kim, Jae-Hoon
    • ETRI Journal, v.30 no.1, pp.1-12, 2008
  • In this paper, we propose an efficient single-resource task scheduling algorithm for the Communication, Ocean, and Meteorological Satellite. Among general satellite planning functions such as constraint checking, priority checking, and task scheduling, this paper focuses on the task scheduling algorithm, which resolves conflicts among tasks that have an exclusion relation and the same priority. The goal of the proposed algorithm is to maximize the number of tasks that can be scheduled. Its rationale is that a discarded task can be scheduled in place of a previously selected one, depending on the expected benefit of doing so. The evaluation results show that the proposed algorithm considerably increases the number of tasks that can be scheduled.
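
A minimal sketch of the swap rationale described in the abstract, assuming tasks are plain (start, end) time windows on a single resource and that "exclusion" means temporal overlap (the paper's actual COMS mission model is richer): a first-come greedy pass selects tasks, then an improvement pass re-admits two discarded tasks in place of one selected task whenever the exchange raises the scheduled count.

```python
# Hypothetical task model: (start, end) half-open time windows on one resource.

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def schedule_with_swaps(tasks):
    # First-come greedy selection: keep a task if it conflicts with nothing kept.
    selected, discarded = [], []
    for t in tasks:
        (discarded if any(overlaps(t, s) for s in selected) else selected).append(t)
    # Improvement pass: replace one selected task with two discarded tasks
    # when the exchange is a net gain, mirroring the abstract's rationale.
    improved = True
    while improved:
        improved = False
        for s in selected:
            rest = [x for x in selected if x is not s]
            fits = [d for d in discarded if not any(overlaps(d, r) for r in rest)]
            for i in range(len(fits)):
                for j in range(i + 1, len(fits)):
                    if not overlaps(fits[i], fits[j]):
                        selected = rest + [fits[i], fits[j]]
                        discarded = [t for t in tasks if t not in selected]
                        improved = True
                        break
                if improved:
                    break
            if improved:
                break
    return selected

# Task (0, 10) blocks (0, 4) and (5, 9); the swap pass recovers both.
print(schedule_with_swaps([(0, 10), (0, 4), (5, 9)]))
```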


Task Scheduling and Resource Management Strategy for Edge Cloud Computing Using Improved Genetic Algorithm

  • Xiuye Yin; Liyong Chen
    • Journal of Information Processing Systems, v.19 no.4, pp.450-464, 2023
  • To address the problems of large system overhead and low timeliness when dealing with task scheduling in mobile edge cloud computing, a task scheduling and resource management strategy for edge cloud computing based on an improved genetic algorithm is proposed. First, a user task scheduling system model for edge cloud computing, including computation, communication, and network models, was constructed using the Shannon theorem. In addition, a multi-objective optimization model covering delay and energy consumption was constructed to minimize the weighted sum of the two objectives. Finally, the selection, crossover, and mutation operations of the genetic algorithm were improved using a best-reservation selection scheme and a normal-distribution crossover operator. The improved genetic algorithm was then applied to the multi-objective problem to acquire the optimal solution, that is, the best computing task scheduling scheme. Experimental analysis of the proposed strategy on the MATLAB simulation platform shows that its energy loss does not exceed 50 J and its time delay is 23.2 ms, both better than those of the comparison strategies.
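
The abstract's two operator changes can be sketched compactly. Below is a toy version, assuming a task-to-node assignment encoding that the normal-distribution crossover treats as real-valued and rounds back to node indices, and placeholder delay/energy tables standing in for the paper's Shannon-based models; "best reservation" is implemented as plain elitism.

```python
import random

random.seed(1)
N_TASKS, N_NODES = 20, 5
W_DELAY, W_ENERGY = 0.6, 0.4
DELAY = [[random.uniform(1, 10) for _ in range(N_NODES)] for _ in range(N_TASKS)]
ENERGY = [[random.uniform(1, 5) for _ in range(N_NODES)] for _ in range(N_TASKS)]

def cost(chrom):
    # Weighted sum of delay and energy, the abstract's scalarized objective.
    return sum(W_DELAY * DELAY[t][n] + W_ENERGY * ENERGY[t][n]
               for t, n in enumerate(chrom))

def crossover(a, b, sigma=0.5):
    # Normal-distribution crossover: each child gene is drawn around the
    # parents' mean, then clamped and rounded to a valid node index.
    return [min(N_NODES - 1, max(0, round(random.gauss((ga + gb) / 2, sigma))))
            for ga, gb in zip(a, b)]

def mutate(chrom, rate=0.05):
    return [random.randrange(N_NODES) if random.random() < rate else g
            for g in chrom]

def evolve(pop_size=30, gens=100):
    pop = [[random.randrange(N_NODES) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[:2]              # "best reservation": the elite survive as-is
        parents = pop[:pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(parents, 2)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=cost)

best = evolve()
print("cost of best schedule:", round(cost(best), 2))
```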

Schedulability Test Using Task Utilization in Real-Time Systems

  • Lim Kyung-Hyun; Seo Jae-Hyeon; Park Kyung-Woo
    • Journal of Internet Computing and Services, v.6 no.2, pp.25-35, 2005
  • The Rate Monotonic (RM) and Earliest Deadline First (EDF) algorithms are the standard real-time scheduling algorithms. With these algorithms, schedulability can be predicted from the total utilization of a task group, but the prediction breaks down for an individual task whose utilization temporarily exceeds its bound. In this paper, the suggested scheduling algorithm can identify such a task when its utilization is exceeded, and a method is also suggested for predicting schedulability based on the utilization of each individual task. The bounds of the schedulability test are examined through simulation of real-time scheduling algorithms, and the results are analyzed.
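
For reference, the classical utilization-based tests that this abstract builds on (the Liu and Layland bounds) can be stated in a few lines; the task set at the bottom is an illustrative example, with each task given as (computation time, period).

```python
# Per-task utilization is u_i = C_i / T_i.

def rm_schedulable(tasks):
    """Sufficient RM test: total utilization <= n(2^(1/n) - 1)."""
    n = len(tasks)
    return sum(c / t for c, t in tasks) <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """Exact EDF test for implicit deadlines: total utilization <= 1."""
    return sum(c / t for c, t in tasks) <= 1

tasks = [(1, 4), (2, 6), (1, 8)]   # (computation time, period)
print(rm_schedulable(tasks), edf_schedulable(tasks))   # True True
```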


A Fault-tolerant Task Scheduling Algorithm Supporting the Minimum Schedule Length

  • Min, Byeong-Jun
    • The Transactions of the Korea Information Processing Society, v.7 no.4, pp.1201-1210, 2000
  • In order to tolerate faults that may occur during the execution of distributed tasks in high-performance parallel computer systems, tasks are duplicated on different processors. In this paper, building on the task-duplication-based scheduling algorithm, a new task scheduling algorithm is presented that duplicates each task on two or more different processors while keeping the minimum schedule length, and the number of processors required for the duplication is analyzed against the ratio of communication cost to computation time and the workload of the system. A simulation with various task graphs reveals that the number of processors required for fully duplicated fault-tolerant task scheduling with the obtainable minimum schedule length increases by about 30% to 75% compared with the task-duplication-based scheduling algorithm.
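
A stripped-down sketch of the duplication idea, assuming independent tasks and ignoring the precedence relations and communication costs that the paper's analysis does account for: every task is placed on the two earliest-available distinct processors, so no single processor failure can lose a task.

```python
def duplicate_schedule(tasks, n_procs):
    """tasks: list of (name, exec_time); returns per-processor timelines."""
    timelines = {p: [] for p in range(n_procs)}
    finish = {p: 0 for p in range(n_procs)}
    for name, c in tasks:
        # Place a copy on each of the two earliest-available processors.
        p1, p2 = sorted(finish, key=finish.get)[:2]
        for p in (p1, p2):
            timelines[p].append((name, finish[p], finish[p] + c))
            finish[p] += c
    # Schedule length is the latest finish time over all processors.
    return timelines, max(finish.values())

tl, makespan = duplicate_schedule([("t1", 3), ("t2", 2), ("t3", 4)], 3)
print("makespan with full duplication:", makespan)
```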


Long-Term Container Allocation via Optimized Task Scheduling Through Deep Learning (OTS-DL) and High-Level Security

  • Muthakshi S; Mahesh K
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.4, pp.1258-1275, 2023
  • Cloud computing is a technology that has changed the traditional way of providing services. Service providers are responsible for managing the allocation of resources, and selecting suitable containers and bandwidth for job scheduling has been a challenging task for them. Several existing systems have introduced algorithms for resource allocation. To overcome the remaining challenges, the proposed system introduces an Optimized Task Scheduling algorithm with Deep Learning (OTS-DL). When a job is assigned to a Cloud Service Provider (CSP), containers are allocated automatically. The article segregates containers into 'Long-Term Containers (LTC)' and 'Short-Term Containers (STC)' for resource allocation. The system leverages the optimized task scheduling algorithm to maximize resource utilization, first inquiring into micro-task and macro-task dependencies; the bottleneck task is then chosen and acted upon accordingly. The system further employs deep learning (DL) to implement all the progressive steps of job scheduling in the cloud. Finally, to overcome container attacks and errors, the system formulates a container convergence (fault tolerance) scheme with high-level security. The results demonstrate that the optimization algorithm is effective for implementing complete resource allocation and for solving the large-scale optimization problems of resource allocation and security.
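
The LTC/STC segregation mentioned in the abstract can be illustrated with a trivial router. The duration threshold and the job fields below are illustrative assumptions; the paper drives this decision with its OTS-DL model rather than a fixed cutoff.

```python
LTC_THRESHOLD = 60.0  # seconds; hypothetical cutoff, not from the paper

def route(job):
    # Long-running jobs go to the long-term container pool, the rest to STC.
    return "LTC" if job["expected_duration"] >= LTC_THRESHOLD else "STC"

jobs = [{"id": 1, "expected_duration": 5.0},
        {"id": 2, "expected_duration": 300.0}]
print([(j["id"], route(j)) for j in jobs])   # [(1, 'STC'), (2, 'LTC')]
```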

Enhanced Technique for Performance in Real-Time Systems

  • Kim, Myung Jun
    • Journal of Information Technology Services, v.16 no.3, pp.103-111, 2017
  • Real-time scheduling is a key research area in high-performance computing and has been a source of challenging problems. A periodic task is an infinite sequence of task instances in which each job arrives at a regular period. The Rate Monotonic Scheduling (RMS) algorithm has the advantage of a strong theoretical foundation and holds out the promise of reducing the need for exhaustive testing of the schedule. Many real-time systems built in the past based their scheduling on the cyclic executive model because it produces predictable schedules that facilitate exhaustive testing. In this work we propose a hybrid scheduling method that combines features of both of these scheduling approaches. The original rate-monotonic scheduling algorithm did not consider uniform sampling tasks in real-time systems, and we enumerate the issues that arise when RMS is applied to our hybrid method. We derive the scheduling bound for hard real-time systems that include uniform sampling tasks. The suggested hybrid scheduling algorithm turns out to have advantages from the point of view of the real-time system designer, is particularly useful in the context of large critical systems, and can help designers who must guarantee hard real-time tasks.
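
One way the two models in the abstract can be combined is sketched below, under simplifying assumptions that are not the paper's: the task set is illustrative, the minor frame is taken as the gcd of the periods, and execution within a frame is not simulated. Releases follow the cyclic-executive frame structure, while rate-monotonic order (shortest period first) decides who runs first within a frame.

```python
from functools import reduce
from math import gcd

tasks = [("sample", 1, 4), ("control", 2, 8), ("log", 1, 16)]  # (name, C, T)

minor = reduce(gcd, (t[2] for t in tasks))                       # minor frame
major = reduce(lambda a, b: a * b // gcd(a, b), (t[2] for t in tasks))  # lcm

for frame_start in range(0, major, minor):
    released = [t for t in tasks if frame_start % t[2] == 0]
    released.sort(key=lambda t: t[2])    # RM order within the frame
    print(frame_start, [name for name, _, _ in released])
```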

Multi-factor Evolution for Large-scale Multi-objective Cloud Task Scheduling

  • Tianhao Zhao; Linjie Wu; Di Wu; Jianwei Li; Zhihua Cui
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.4, pp.1100-1122, 2023
  • Scheduling user-submitted cloud tasks onto the appropriate virtual machine (VM) in cloud computing is critical for cloud providers. However, as the demand for cloud resources from user tasks continues to grow, current evolutionary algorithms (EAs) cannot reach the optimal solution of large-scale cloud task scheduling problems. In this paper, we first construct a large-scale multi-objective cloud task problem considering time and cost functions. Second, a multi-objective optimization algorithm based on multi-factor optimization (MFO) is proposed to solve the established problem. The algorithm decomposes the large-scale optimization problem into multiple optimization subproblems, which reduces its computational burden, and the MFO strategy provides a parallel evolutionary paradigm in which multiple subpopulations implicitly transfer knowledge. Finally, simulation experiments and comparisons are performed on a large-scale task scheduling test set on the CloudSim platform. Experimental results show that our algorithm obtains the best scheduling solution while maintaining good objective-function values compared with other optimization algorithms.
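
A toy sketch of the decomposition idea, not the paper's MFO algorithm: a 40-task assignment vector is split into 10-task blocks, each block is evolved by its own small hill-climbing subpopulation, and an occasional copy of a good gene segment between blocks stands in for the implicit knowledge transfer. All cost tables and parameters are illustrative assumptions.

```python
import random

random.seed(0)
N_TASKS, N_VMS, BLOCK = 40, 6, 10
TIME = [[random.uniform(1, 9) for _ in range(N_VMS)] for _ in range(N_TASKS)]

def block_cost(b, genes):
    # Cost of one subproblem: tasks b*BLOCK .. b*BLOCK+BLOCK-1 only.
    return sum(TIME[b * BLOCK + i][vm] for i, vm in enumerate(genes))

def evolve(blocks=N_TASKS // BLOCK, gens=200):
    best = [[random.randrange(N_VMS) for _ in range(BLOCK)] for _ in range(blocks)]
    for g in range(gens):
        for b in range(blocks):
            child = [vm if random.random() > 0.2 else random.randrange(N_VMS)
                     for vm in best[b]]
            if block_cost(b, child) < block_cost(b, best[b]):
                best[b] = child
        if g % 50 == 0 and blocks > 1:   # crude stand-in for knowledge transfer
            src, dst = random.sample(range(blocks), 2)
            if block_cost(dst, best[src]) < block_cost(dst, best[dst]):
                best[dst] = best[src][:]
    return [vm for genes in best for vm in genes]

solution = evolve()
print(sum(TIME[i][vm] for i, vm in enumerate(solution)))
```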

Periodic and Real-Time Aperiodic Task Scheduling Algorithm Based on Topological Sort and Residual Time

  • Kim, Si-Wan; Park, Hong-Seong
    • Journal of Institute of Control, Robotics and Systems, v.18 no.4, pp.302-307, 2012
  • Real-time systems perform periodic tasks as well as real-time aperiodic tasks such as alarm processing. In particular, the periodic tasks in control systems such as robots have precedence relationships among them. This paper proposes a new scheduling algorithm based on topological sort and residual time. The precedence relationships among periodic tasks are translated into task priorities using a topological sort. During system execution, whenever an aperiodic task such as an alarm arrives, the proposed algorithm decides, based on residual time, whether the newly arrived real-time aperiodic task is accepted. The proposed algorithm is validated using examples.
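
The two mechanisms named in the abstract can be sketched as follows: (1) a topological sort (Kahn's algorithm) turns precedence relations among periodic tasks into priorities, and (2) an arriving aperiodic task is admitted only if the residual (slack) time before its deadline covers its execution time. The single-interval slack check is a simplification of the paper's residual-time analysis.

```python
from collections import deque

def priorities_from_precedence(tasks, edges):
    """Kahn's algorithm: earlier topological position -> higher priority."""
    indeg = {t: 0 for t in tasks}
    succ = {t: [] for t in tasks}
    for u, v in edges:               # u must run before v
        succ[u].append(v)
        indeg[v] += 1
    q = deque(t for t in tasks if indeg[t] == 0)
    prio, order = {}, 0
    while q:
        t = q.popleft()
        prio[t] = order              # lower number = higher priority
        order += 1
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return prio

def admit_aperiodic(exec_time, deadline, now, busy_until):
    """Accept iff the slack between the busy interval and the deadline
    can hold the task (simplified residual-time check)."""
    return deadline - max(now, busy_until) >= exec_time

prio = priorities_from_precedence(["sense", "plan", "act"],
                                  [("sense", "plan"), ("plan", "act")])
print(prio, admit_aperiodic(2.0, 10.0, now=3.0, busy_until=6.0))
```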

An On-line Algorithm to Search Minimum Total Error for Imprecise Real-time Tasks with 0/1 Constraint

  • Song Gi-Hyeon
    • Journal of Korea Multimedia Society, v.8 no.12, pp.1589-1596, 2005
  • The imprecise real-time computation model provides flexibility in scheduling time-critical tasks. Most scheduling problems that must satisfy both the 0/1 constraint and timing constraints while minimizing total error are NP-complete when the optional tasks have arbitrary processing times. Liu suggested a reasonable strategy for scheduling tasks with the 0/1 constraint on uniprocessors to minimize total error, and Song et al. suggested a corresponding strategy for multiprocessors; both, however, are off-line algorithms. For on-line scheduling, Shih and Liu proposed the NORA algorithm, which can find a schedule with the minimum total error for a task system consisting solely of on-line tasks that are ready upon arrival. For task systems with the 0/1 constraint, however, it has not been known whether the NORA algorithm is optimal in the sense that it guarantees all mandatory tasks complete by their deadlines while the total error is minimized. This paper therefore suggests an optimal algorithm to find the minimum total error for an imprecise on-line real-time task system with the 0/1 constraint. Furthermore, the proposed algorithm has the same complexity, O(N log N), as the NORA algorithm, where N is the number of tasks.
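
A toy illustration of the 0/1-constraint imprecise task model in the abstract: every task must finish its mandatory part, an optional part is either run in full or skipped, and total error is the summed length of skipped optional parts. The greedy below (admit shorter optional parts first) is only a heuristic illustration over a single capacity budget, not the paper's optimal O(N log N) algorithm.

```python
def total_error(tasks, capacity):
    """tasks: list of (mandatory, optional); capacity: available CPU time."""
    used = sum(m for m, _ in tasks)          # mandatory parts are compulsory
    assert used <= capacity, "mandatory load alone is infeasible"
    error = 0.0
    for _, o in sorted(tasks, key=lambda t: t[1]):   # shortest optional first
        if used + o <= capacity:
            used += o                        # run this optional part in full
        else:
            error += o                       # skip it entirely (0/1 rule)
    return error

print(total_error([(1, 2), (2, 1), (1, 3)], capacity=6))   # 5.0
```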


Efficient Duplication Based Task Scheduling with Communication Cost in Heterogeneous Systems

  • Yoon, Wan-Oh; Baek, Jueng-Kuy; Shin, Kwang-Sik; Cheong, Jin-Ha; Choi, Sang-Bang
    • The Journal of Korean Institute of Communications and Information Sciences, v.33 no.3C, pp.219-233, 2008
  • Optimal scheduling of parallel tasks with precedence relationships onto a parallel machine is known to be NP-complete, and the complexity of the problem increases in a heterogeneous environment, where the processors in the network may not be identical and take different amounts of time to execute the same task. This paper introduces Duplication-based Task Scheduling with Communication Cost in Heterogeneous Systems (DTSC), which provides optimal results for applications represented by directed acyclic graphs (DAGs), provided that a simple set of conditions on task computation and network communication times is satisfied. Results from an extensive simulation show significant performance improvements of the proposed technique over the Task-duplication-based scheduling Algorithm for Network of Heterogeneous systems (TANH) and the General Dynamic Level (GDL) scheduling algorithm.
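
A hedged sketch of why duplication helps when communication costs matter: duplicating a task's critical parent onto the same processor removes the inter-processor communication delay before a child can start. The numbers and the two-processor setup are illustrative, not the DTSC algorithm itself; note how the heterogeneous execution times depend on the processor.

```python
# Heterogeneous execution times: cost of a task depends on the processor.
exec_time = {("A", 0): 3, ("A", 1): 4,
             ("B", 0): 2, ("B", 1): 2}
comm_AB = 5          # A -> B message cost when A and B sit on different processors

# Without duplication: A runs on P0, B on P1 must wait for the message.
start_B_remote = exec_time[("A", 0)] + comm_AB            # 3 + 5 = 8
# With duplication: a copy of A also runs on P1, so B starts locally.
start_B_dup = exec_time[("A", 1)]                         # 4

print("B start without duplication:", start_B_remote)
print("B start with parent duplicated:", start_B_dup)
```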