
Real Time Systems

Algorithms and Complexity Concerning the Preemptive Scheduling of Periodic, Real-Time Tasks on One Processor

Introduction: Basic Concepts and Notations

Periodic Task System: A task system consists of a finite number of tasks, each of which is released at regular periodic intervals.

Notation for Task Systems: T will denote a task system of n tasks such that for each task Ti, si is the start time, ei is the execution time, di is the deadline, and pi is the period.

Parameters of a Task System: Start time, execution time, deadline, and period are the parameters of a task system.

Formal Definition of a Periodic Task: A periodic task Ti can be completely characterized by three parameters: its start time si, its period pi, and its execution time ei.

Formal Definition of a Periodic Task System: Thus, a set of periodic tasks can be denoted by T = {Ti(si, pi, ei), i = 1, ..., n}.

Processor Utilization Factor (density): Given a set T of n periodic tasks, the processor utilization factor U is the fraction of processor time spent in the execution of the task set. Since ei/pi is the fraction of processor time spent executing task Ti, the utilization factor for n tasks is U = Σ i=1..n ei/pi. The processor utilization factor provides a measure of the computational load on the CPU due to the periodic task set. Although the CPU utilization can be increased by increasing task computation times or by decreasing their periods, there exists a maximum value of U below which T is schedulable and above which T is not schedulable.
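The utilization formula above can be computed directly. The sketch below is our own illustration, not from the paper; it uses exact rational arithmetic so that comparisons against bounds such as 1 are not affected by floating-point rounding.

```python
from fractions import Fraction

def utilization(tasks):
    """Processor utilization U = sum of e_i / p_i over all tasks.

    `tasks` is a list of (e_i, p_i) pairs; Fractions keep the sum exact
    so comparisons like U <= 1 are reliable.
    """
    return sum(Fraction(e, p) for e, p in tasks)

# The two task sets from the utilization examples in this section:
print(utilization([(2, 4), (2, 6)]))  # 5/6
print(utilization([(2, 4), (2, 5)]))  # 9/10
```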

Introduction: Basic Concepts and Notations

Feasibility of a Task System in Terms of the Processor Utilization Factor and the Scheduling Algorithm: Let Uub(T, A) be the upper bound of the processor utilization factor for a task set T under a given algorithm A. When U = Uub(T, A), the set T is said to fully utilize the processor. In this situation, T is schedulable by A, but any increase in the computation time of any task will make the set infeasible. Below are two examples of periodic task systems, each consisting of two tasks. In the task system on the left, the periods are {p1, p2} = {4, 6} and the execution times are {e1, e2} = {2, 2}, so Uub = 2/4 + 2/6 = 5/6. In the task system on the right, the periods are {p1, p2} = {4, 5} and the execution times are {e1, e2} = {2, 2}, so Uub = 2/4 + 2/5 = 9/10. If the utilization factor of a task set is greater than 1.0, the task set cannot be scheduled by any algorithm.

Introduction: Basic Concepts and Notations

Earliest Deadline First (Deadline) Algorithm: The Earliest Deadline First (EDF) algorithm is a dynamic scheduling rule that selects tasks according to their absolute deadlines: tasks with earlier deadlines are executed at higher priorities. The absolute deadline of the jth instance of a periodic task is given by di,j = si + (j - 1)pi + Di.

Here Di denotes the relative deadline of task Ti, di,j denotes the absolute deadline of the jth instance of task Ti, and si denotes the release time of the first instance of the task. EDF is thus a dynamic priority assignment. Moreover, it is typically executed in preemptive mode: the currently executing task is preempted whenever another periodic instance with an earlier deadline becomes active. The schedulability of a periodic task set handled by EDF can be verified through the processor utilization factor. In this case, however, the least upper bound is one; therefore, tasks may utilize the processor up to 100% and still be schedulable. A set of periodic tasks is schedulable with EDF if and only if Σ i=1..n ei/pi ≤ 1.
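The EDF rule described above can be illustrated with a small integer-time simulator. This is a hypothetical sketch of our own, assuming synchronous tasks whose relative deadlines equal their periods; it is not the paper's algorithm.

```python
def edf_schedule(tasks, horizon):
    """Simulate preemptive EDF on one processor at integer time steps.

    `tasks` is a list of (e_i, p_i) integer pairs with relative deadline
    equal to period (a common simplification). Returns True if no
    deadline is missed up to `horizon`.
    """
    n = len(tasks)
    remaining = [0] * n  # unfinished work of the current job of each task
    deadline = [0] * n   # absolute deadline of that job
    for t in range(horizon):
        for i, (e, p) in enumerate(tasks):
            if t % p == 0:            # a new job of task i is released
                if remaining[i] > 0:  # previous job still unfinished
                    return False      # it missed its deadline
                remaining[i], deadline[i] = e, t + p
        # run the ready job with the earliest absolute deadline
        ready = [i for i in range(n) if remaining[i] > 0]
        if ready:
            j = min(ready, key=lambda i: deadline[i])
            remaining[j] -= 1
    return True
```

Running it over two hyperperiods of the U = 34/35 example from this section confirms that EDF meets every deadline there, while an overloaded set (U > 1) is rejected.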

Introduction: Basic Concepts and Notations

Earliest Deadline First Example: Consider the periodic task set illustrated in the figure below, for which the processor utilization factor is U = 2/5 + 4/7 = 34/35 ≈ 0.97. This means that 97 percent of the processor time is used to execute the periodic tasks, while the CPU is idle for the remaining 3 percent.

Introduction: Basic Concepts and Notations

Some important points regarding periodic task systems: In general, the initial releases of individual tasks may occur at different times. Each time a task is released, it must be scheduled on a processor for some specified number of time units before its deadline is reached. The execution time requirement and the amount of time before the deadline are unchanged for each subsequent release of a task, but may differ between tasks.

Past Research. Non-Integer Parameters for Scheduling Algorithms: In most of the work done in the past (e.g., Liu and Layland 1973; Leung and Merrill 1980; Lawler and Martel 1981; Leung and Whitehead 1982; Leung 1989), the parameters of the problems have not been restricted to integer values, and preemptions have been allowed at any time. Furthermore, in schedules constructed by several algorithms in the literature, preemptions may occur at non-integer time values even if all start times, execution times, and periods are integers (see, e.g., Coffman 1976; Leung 1989).

Continuous Schedules: We will refer to a schedule in which preemptions may occur at arbitrary time values as a continuous schedule.

Discrete Schedules: Schedules in which preemptions may occur only at specified discrete intervals.

Synchronous Task Systems: Those task systems in which all start times are identical.
Complete and Incomplete Task Systems: In a complete task system, the start times, execution times, deadlines, and periods for each task are given as input; in an incomplete task system, the start times are omitted.

Introduction: Problems and Goals

In the paper covered by this presentation, the authors restrict all inputs to integer values and then allow preemptions to occur only at integer time values; this is presented in the next section. Two issues are involved in assuming these restrictions. The first is whether it is reasonable to restrict the inputs to integer values; this presentation concludes that integer inputs provide the proper abstraction of the underlying physical problem. The second is whether, once the inputs have been restricted to integer values, it is reasonable to restrict preemptions to integer times. (Of course, if the inputs are allowed to be non-integers, restricting preemptions to integer values necessarily causes a loss of generality.) Concerning this issue, it is concluded that the restriction is preferable. Furthermore, it is shown that once the inputs have been restricted to integer values, it can be assumed without loss of generality that preemptions occur only at integer times; that is, the paper proves that valid discrete schedules exist whenever valid continuous schedules exist, provided the parameters are all integers. Although this fact is shown specifically for single-processor schedules, the proof is general enough to extend to task systems on several identical processors.


Introduction: Problems and Goals

Goal of task system research: The ultimate goal regarding task systems is to find an algorithm that mechanically synthesizes online scheduling algorithms. Such an algorithm would first determine the feasibility of an instance, that is, whether a valid schedule exists, and then construct a suitable scheduling algorithm.

Deadline Algorithm is Optimal Scheduling Algorithm: The Deadline Algorithm (see (Liu and Layland 1973; Labetoulle 1974)) is known to be an optimal scheduling algorithm for task systems on one processor; that is, it produces a valid schedule for every feasible task system.
Deadline Algorithm is exponential time in terms of time complexity: This algorithm generates a cyclic schedule with a potentially exponential-length period at a rate of O(log n) steps per time unit of the schedule, and can therefore be considered an exponential-time feasibility test (see Leung and Merrill 1980). However, an exponential-time feasibility test is usually unacceptable.

Most efficient known algorithm for the general feasibility problem: The most efficient algorithm known for the general feasibility problem for task systems on one processor was given by Leung and Merrill (1980); it runs in time exponential in the number of tasks. They also showed the problem to be co-NP-hard, but left open whether it is co-NP-complete or co-NP-hard in the strong sense.

Algorithm for the feasibility problem of synchronous systems: The feasibility problem for synchronous systems had received little attention. However, Leung and Whitehead (1982) gave a pseudo-polynomial-time algorithm for deciding feasibility with respect to fixed-priority schedules. This might be taken as evidence that a pseudo-polynomial-time algorithm exists for the feasibility problem for general synchronous systems.

Introduction: Problems and Goals

All of the results in the paragraph above are with respect to complete task systems. Feasible complete task system: A complete task system is said to be feasible if a valid schedule exists.

Feasible Incomplete task system: An incomplete task system is said to be feasible if there exist start times such that the resulting complete task system is feasible.
Goal of Paper: The paper covered by this presentation provides simple necessary and sufficient conditions for testing the feasibility of complete task systems on one processor. These conditions cannot, in general, be tested efficiently unless P = NP. It is shown, however, that they yield efficient algorithms for several significant subclasses of complete task systems. Application: Synchronous task systems in which the sum of the ratios of execution times to periods, the density, is bounded above by a fixed constant less than 1 (say, 0.9 or 0.99). This restriction is fairly minor, since all task systems that are feasible on one processor have density no greater than 1 (see, e.g., Leung and Merrill 1980). Goal of Paper: The paper provides a simple algorithm for testing the feasibility of such systems on one processor in time proportional to the product of the number of tasks and the largest period, an exponential improvement over the algorithm given by Leung and Merrill (1980) for general complete task systems.

Introduction: Problems and Goals

Goal: It is shown that an efficient feasibility-testing algorithm for asynchronous systems is not possible unless P = NP; that is, the feasibility problem for general task systems on one processor is co-NP-hard in the strong sense even for instances whose densities are bounded above by any fixed positive constant. Goal: It is also shown that the Simultaneous Congruences Problem is NP-complete in the strong sense; both of these results answer questions that had been open for ten years (Leung and Merrill 1980; Leung and Whitehead 1982). Goal: The paper answers a third open question (Leung and Merrill 1980) by showing the feasibility problem for complete task systems on one processor to be co-NP-complete in the strong sense. Application: Another significant subclass of task systems for which the main lemma proves useful is the set of task systems having a fixed number of distinct types of tasks. For this subclass, a polynomial-time algorithm is derived for deciding feasibility on one processor. Even in the case of incomplete task systems, this result yields a pseudo-polynomial-time algorithm, in spite of the fact that the general feasibility problem for incomplete task systems on one processor is shown to be Σ₂ᵖ-complete.

Discrete vs. Continuous Schedules: Basic Definitions

Formal Definition of Discrete Schedule: Let N denote the natural numbers, and let N⁺ = N - {0}. For a task system T of n tasks, a discrete schedule on one processor is a mapping S : N → {0, 1, ..., n}. Intuitively, if i ≠ 0, then S(t) = i means that task Ti is scheduled at time t; if S(t) = 0, no task is scheduled at time t. For each task Ti in T, let si ∈ N denote the start time, ei ∈ N⁺ the execution time, di ∈ N⁺ (di ≥ ei) the deadline, and pi ∈ N⁺ (pi ≥ di) the period. Formal Definition of Valid Discrete Schedule: A discrete schedule S is valid if for all k ∈ N and i ∈ {1, ..., n}, there are at least ei times t such that S(t) = i and si + kpi ≤ t < si + kpi + di. These definitions extend naturally to m identical processors by making S : N → {0, 1, ..., n}^m such that for all t ∈ N, all nonzero components of S(t) are distinct.
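The validity condition just defined can be checked mechanically over a finite horizon. A minimal sketch in Python; the function name and interface are our own, not from the paper, and only windows lying entirely within the horizon are examined.

```python
def is_valid_discrete(S, tasks, horizon):
    """Check the validity condition for a discrete schedule S.

    S maps each integer time t to a task index in {1, ..., n} or 0 (idle).
    `tasks` is a list of (s_i, e_i, d_i, p_i) tuples, with task i being
    the (1-indexed) i-th entry. Every window [s_i + k*p_i, s_i + k*p_i + d_i)
    contained in [0, horizon) must schedule task i at least e_i times.
    """
    for i, (s, e, d, p) in enumerate(tasks, start=1):
        k = 0
        while s + k * p + d <= horizon:
            window = range(s + k * p, s + k * p + d)
            if sum(1 for t in window if S(t) == i) < e:
                return False  # this release gets too little processor time
            k += 1
    return True
```

For example, alternating two unit-time tasks with period 2 satisfies the condition, while running only the first task starves the second.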

Informal Definition of Continuous Schedule: In a continuous schedule, it is not immediately clear how much time is spent processing each task. Suppose we have some function X such that for any t, X(t) is either 0 or 1, as shown in the figure below (we leave the domain of X unspecified for the time being). Specifically, X(t) = 1 iff some particular task Ti is scheduled at t. We want some way of describing the amount of time between points a and b at which X(t) = 1. Note that this amount is the same as the area bounded by the t-axis on the bottom, X(t) on the top, t = a on the left, and t = b on the right (the shaded area of the figure).

Discrete vs. Continuous Schedules: Basic Definitions

Formal Definition of Continuous Schedule: Formally, if X is a Riemann integrable function on [a, b], then ∫ from a to b of X(t) dt describes the amount of time between a and b during which X has the value 1. A continuous schedule of T on one processor is a function S : R⁺ → {0, 1, ..., n} such that for all i ∈ {1, ..., n}, (χi ∘ S) is Riemann integrable (or, equivalently, such that S is Riemann integrable on R⁺). Here χi : N → {0, 1} is defined by χi(j) = 1 if j = i, and χi(j) = 0 if j ≠ i. Intuitively, S(t) gives the job scheduled at t (or 0 if no job is scheduled), and χi(S(t)) = 1 iff task Ti is scheduled at time t.

Discrete vs. Continuous Schedules: Basic Definitions

Formal Definition of Valid Continuous Schedule: A continuous schedule S is valid if for all k ∈ N and i ∈ {1, ..., n}, ∫ from si + kpi to si + kpi + di of χi(S(t)) dt ≥ ei. Further Explanation of Continuous Schedules: Any function S : R⁺ → {0, 1, ..., n} for which every interval [t1, t2) can be partitioned into a finite number of subintervals I1, I2, ..., Im such that S is constant over each Ij is Riemann integrable, and therefore satisfies the definition of a continuous schedule. Realistic Schedules: Clearly, any scheduler is ultimately limited to scheduling in multiples of some discrete time unit, due to the timing constraints involved in computation. Furthermore, if we wish to abstract away this limitation, all inputs to the problem must be expressed as rational numbers, which may all be multiplied by a common denominator: the rational parameters can be transformed into integers, and this transformation can be done in polynomial time.
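The rational-to-integer transformation mentioned above amounts to multiplying every parameter by the least common multiple of the denominators. A sketch of our own, assuming parameters are given as Python Fractions:

```python
from fractions import Fraction
from math import lcm

def to_integer_parameters(tasks):
    """Scale rational task parameters to integers.

    `tasks` is a list of (s_i, e_i, d_i, p_i) tuples of Fractions. All
    values are multiplied by the lcm of their denominators, which changes
    the time unit but not the schedulability of the system.
    """
    denoms = [x.denominator for task in tasks for x in task]
    m = lcm(*denoms)
    return [tuple(int(x * m) for x in task) for task in tasks], m

tasks = [(Fraction(0), Fraction(1, 2), Fraction(3, 4), Fraction(3, 2))]
scaled, m = to_integer_parameters(tasks)
# → ([(0, 2, 3, 6)], 4)
```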

Discrete vs. Continuous Schedules: Solution to Problems

Discrete Schedules Should Be Preferred over Continuous Schedules: The paper gives two arguments. First, discrete schedules are the most general type of schedule that remains realistic (by the definition of realistic schedules above). Second, for task systems on one processor, a continuous schedule exists iff a discrete schedule exists.

If an incomplete task system has a valid schedule in which the start times are not required to be integers, then it has a valid schedule in which the start times are integers: This result is proved in the paper using the fact that the Deadline Algorithm is optimal for complete task systems on one processor; that is, a valid schedule exists iff the Deadline Algorithm produces a valid schedule (Liu and Layland 1973; Labetoulle 1974). Unfortunately, the proof is too cumbersome for us to present completely.

A continuous schedule exists iff a discrete schedule exists: Stated as a theorem in the paper: for any (complete or incomplete) periodic task system T with integer-valued parameters, there is a valid discrete schedule for T on one processor iff there is a valid continuous schedule for T on one processor. A sketch of the part of the proof we understand follows.

Discrete vs. Continuous Schedules: Solution to Problems

Proof of the above theorem: Using a technique similar to that given by Horn (1974), the authors construct from T a flow network G and an integer K such that valid schedules of T on one processor correspond to maximum flows of value K in G.

All capacities in G will be integers in the suggested construction of the network G.


Since there are maximum-flow algorithms that never introduce non-integers when all capacities are integers (e.g., the Ford-Fulkerson algorithm (Ford and Fulkerson 1962)), the authors are able to generate a discrete schedule from an integer-valued maximum flow. Construction of Network G from a Continuous Schedule: Let each task Ti, 1 ≤ i ≤ n, consist of start time si, execution time ei, deadline di, and period pi. The authors use the fact, shown by Leung and Merrill (1980), that if T has a valid schedule on one processor, then it has a valid cyclic schedule with period P = lcm(p1, ..., pn); that is, for all t, S(t) = S(t + P). Using P, the network G is constructed as shown in the figure. G contains four types of nodes: 1. a source node a;

2. for each natural number j ≤ P - 1, a node tj;


3. for each task Ti and each k ∈ {0, 1, ..., (P/pi) - 1}, a node qi,k; and 4. a sink node z.

Discrete vs. Continuous Schedules: Solution to Problems

Construction of Network G for a Continuous Schedule (cont.): G contains three types of edges:

1. from a to each Type 2 node, an edge with capacity 1 (to extend the proof to m processors, these capacities are changed to m);
2. from each Type 2 node tj to each Type 3 node qi,k such that si + kpi ≤ k′P + j < si + kpi + di for some k′ ∈ N, an edge with capacity 1; and 3. from each Type 3 node qi,k to z, an edge with capacity ei.
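The node and edge construction above, together with a max-flow computation, yields a feasibility test. The sketch below is our own illustration of the construction: it uses Edmonds-Karp rather than plain Ford-Fulkerson, and it is deliberately naive, since the network has P = lcm(p1, ..., pn) time nodes and is therefore exponential in the input size.

```python
from collections import deque
from math import lcm

def feasible_by_flow(tasks):
    """Feasibility test via the max-flow construction sketched above.

    `tasks` is a list of (s_i, e_i, d_i, p_i) integer tuples. Builds the
    network G over one hyperperiod P = lcm(p_i) and checks whether the
    max flow equals K = sum over tasks of (P / p_i) * e_i.
    """
    P = lcm(*(p for _, _, _, p in tasks))
    # node numbering: 0 = source a, 1..P = t_0..t_{P-1},
    # then one q_{i,k} node per job, and finally the sink z
    jobs = [(i, k) for i, (_, _, _, p) in enumerate(tasks)
            for k in range(P // p)]
    q0 = 1 + P
    z = q0 + len(jobs)
    cap = {}

    def add_edge(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)  # residual edge

    for j in range(P):
        add_edge(0, 1 + j, 1)                    # Type 1 edges
    for idx, (i, k) in enumerate(jobs):
        s, e, d, p = tasks[i]
        add_edge(q0 + idx, z, e)                 # Type 3 edges
        for j in range(P):                       # Type 2 edges
            # is there k' in N with s + k*p <= k'*P + j < s + k*p + d ?
            if any(s + k * p <= kp * P + j < s + k * p + d
                   for kp in range((s + k * p + d) // P + 1)):
                add_edge(1 + j, q0 + idx, 1)

    # Edmonds-Karp max flow from the source (0) to z
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
    flow = 0
    while True:
        parent = {0: None}
        queue = deque([0])
        while queue and z not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if z not in parent:
            break
        path, v = [], z
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[edge] for edge in path)
        for (u, v) in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck
        flow += bottleneck

    K = sum((P // p) * e for _, e, _, p in tasks)
    return flow == K
```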

Discrete vs. Continuous Schedules: Solution to Problems

Construction of Network G for a Continuous Schedule (cont.): Assigning flow to the edges of G from a continuous schedule S:

Based on S, author construct a flow for G.


To each Type 3 edge (qi,k, z), assign a flow of ei, providing enough flow on the remaining edges to support the flow assigned to the Type 3 edges; if the assignment causes more flow to enter a node than to leave it, decrease the flow on the incoming edges. To each Type 2 edge (tj, qi,k), assign a flow of ∫ from j to j+1 of χi(S(t)) dt, which is clearly no more than 1.

Now each Type 3 node qi,k has incoming edges from all Type 2 nodes tj such that si + kpi ≤ k′P + j < si + kpi + di for some k′ ∈ N. Since S is cyclic with period P, the total flow into qi,k is Σ over j from si + kpi to si + kpi + di - 1 of ∫ from j to j+1 of χi(S(t)) dt = ∫ from si + kpi to si + kpi + di of χi(S(t)) dt ≥ ei, since S is valid. Finally, to each Type 1 edge (a, tj), assign a flow of Σ i=1..n ∫ from j to j+1 of χi(S(t)) dt, which is no more than 1 since Σ i=1..n χi(S(t)) never exceeds 1. From each Type 2 node, there is an edge to at most one Type 3 node qi,k for each i; thus, the total flow out of tj is Σ i=1..n ∫ from j to j+1 of χi(S(t)) dt. In this way, the authors construct a flow F in G in which all Type 3 edges are filled to capacity. Clearly, F represents a maximum flow of value K.

Discrete vs. Continuous Schedules: Solution to Problems

Construction of Network G for a Continuous Schedule (cont.): Constructing a discrete schedule from an integer-valued flow F′:

Since there are algorithms that find a maximum flow consisting entirely of integers whenever all capacities are integers, G has a maximum flow F′ that is entirely integer-valued.
We now describe a valid discrete schedule S′ as follows. Since each Type 2 node tj can have an incoming flow of at most 1, it has at most one outgoing edge on which F′ has a positive (= 1) flow. If this edge is (tj, qi,k), let S′(k′P + j) = i for all k′ ∈ N. Now suppose some task Ti is released at time si + k′P + kpi for k′ ∈ N and k ∈ {0, 1, ..., (P/pi) - 1}. (Clearly, k and k′ can be chosen to yield any release time of Ti.) The number of time units assigned to this release of Ti in S′ is exactly the number of incoming edges to node qi,k having positive flow in F′. Since this number must equal the capacity of the single outgoing edge of qi,k, namely ei, the schedule is valid.

Discrete vs. Continuous Schedules: Solution to Problems

Construction of Network G (cont.):

Complete Task Systems: Problems and Goals

The problem is deciding feasibility for complete task systems on one processor. The theorem in the previous section gives an algorithm for deciding the feasibility problem in general; however, the algorithm generates a network whose size is exponential in the number of tasks. In order to obtain a more efficient algorithm, the authors establish in the main lemma both necessary and sufficient conditions for a complete task system to be feasible. Although these conditions cannot, in general, be tested efficiently (unless P = NP), they are useful in constructing algorithms for certain special types of task systems. For example, this lemma is used to give an algorithm for deciding feasibility of synchronous task systems whose density is bounded above by a fixed constant strictly less than 1; this algorithm runs in time proportional to the product of the number of tasks and the value of the largest period. The authors then show that unless P = NP, such a pseudo-polynomial-time algorithm cannot extend to asynchronous task systems; that is, the general feasibility problem on one processor is co-NP-complete in the strong sense even if the density is bounded above by any fixed positive constant. Finally, the authors give a polynomial-time algorithm for deciding the feasibility problem for complete task systems on one processor when the number of distinct types of tasks is fixed.

The authors developed necessary and sufficient conditions for a complete task system to be feasible.
It is a well-known fact (see, e.g., (Leung and Merrill 1980)) that the density must be no more than 1 in order for a task system to be feasible on one processor.

Complete Task Systems: Problems and Goals

The strategy is to show that if the density is no more than 1 and the system is not feasible, then there must be a time interval in which too much execution time is required. In other words, we can ignore specific release times and deadlines, and simply show that the total amount of execution time needed by jobs that must be scheduled entirely within the interval exceeds the amount of time available in the interval. Furthermore, the authors show that the endpoints of this interval can be encoded in a polynomial number of bits, and that the total amount of execution time needed within the interval can be computed efficiently.

Complete Task Systems: Notations

Before formally presenting this result, we introduce some more notation. For a given discrete schedule S of T, let CS(T, t) = (e1,t, ..., en,t), where ei,t is the number of times Ti is scheduled in S before t, beginning with its last request before t. Finally, given times t1 and t2 with 0 ≤ t1 < t2, let ni(t1, t2) denote the number of natural numbers k such that t1 ≤ si + kpi and si + kpi + di ≤ t2. Thus, ni(t1, t2) is the number of times Ti must be completely scheduled in [t1, t2).

Complete Task Systems: Notations

We will eventually show that if T is not schedulable, then for some small t1 and t2, Σ i=1..n ni(t1, t2) · ei > t2 - t1. To avoid reasoning about all possible schedules in showing this result, we use the fact that the Deadline Algorithm is optimal for one processor (Liu and Layland 1973; Labetoulle 1974), which lets us focus solely on schedules produced by the Deadline Algorithm. Furthermore, by Theorem 2.1, we need only consider discrete schedules, which the Deadline Algorithm always produces on integer input.

Complete Task Systems: Previous Research Results

LEMMA 3.1 (from Leung and Merrill 1980). Let S be the schedule of T on one processor constructed by the Deadline Algorithm. Then for each task Ti and each time t1 ≥ si, we have ei,t1 ≥ ei,t2, where t2 = t1 + P.

LEMMA 3.2 (from Leung and Merrill 1980). Let S be the schedule of T on one processor constructed by the Deadline Algorithm. T is feasible on one processor iff 1. all deadlines in the interval (0, t2] are met in S, where t2 = s + 2P, and 2. CS(T, t1) = CS(T, t2), where t1 = s + P.

Lemma 3.2 gives two conditions, at least one of which must fail if T is not feasible. We need something stronger, namely, that the first condition must fail. As shown in the following lemma, we can reach this conclusion provided the density is no more than 1. (Note also that the following lemma holds whether or not T is feasible.)

Complete Task Systems: Research Result of Paper

LEMMA 3.3. Let S be the schedule of T on one processor constructed by the Deadline Algorithm. If Σ i=1..n ei/pi ≤ 1, then CS(T, t1) = CS(T, t2), where t1 = s + P and t2 = s + 2P.

Complete Task Systems: Main Research Result of Paper


LEMMA 3.4. A complete task system T is feasible on one processor iff 1. Σ i=1..n ei/pi ≤ 1, and 2. Σ i=1..n ni(t1, t2) · ei ≤ t2 - t1 for all 0 ≤ t1 < t2 < s + 2P.

Complete Task Systems: Research Result of Paper


To make the above lemma more useful, we now show that ni(t1, t2) can be computed efficiently. This follows from the next lemma. LEMMA 3.5. ni(t1, t2) = max{0, ⌊(t2 - si - di)/pi⌋ + 1} - max{0, ⌈(t1 - si)/pi⌉}.
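Lemma 3.5 and the conditions of Lemma 3.4 translate directly into code. The brute-force sketch below is our own and far less efficient than the algorithms discussed in the paper; it enumerates every interval up to s + 2P.

```python
from fractions import Fraction
from math import lcm

def n_jobs(s, d, p, t1, t2):
    """Lemma 3.5: the number of whole jobs of a task in [t1, t2), i.e.
    the count of k in N with t1 <= s + k*p and s + k*p + d <= t2."""
    hi = (t2 - s - d) // p           # floor: largest admissible k
    lo = max(0, -((s - t1) // p))    # ceil((t1 - s)/p): smallest one
    return max(0, hi - lo + 1)

def feasible_by_demand(tasks):
    """Brute-force check of the Lemma 3.4 conditions for a complete
    task system given as (s_i, e_i, d_i, p_i) integer tuples: density
    at most 1, and the demand of every interval [t1, t2) with
    t2 < s + 2P at most the interval's length."""
    if sum(Fraction(e, p) for _, e, _, p in tasks) > 1:
        return False
    P = lcm(*(p for _, _, _, p in tasks))
    s = max(si for si, _, _, _ in tasks)
    bound = s + 2 * P
    for t1 in range(bound):
        for t2 in range(t1 + 1, bound):
            demand = sum(n_jobs(si, d, p, t1, t2) * e
                         for si, e, d, p in tasks)
            if demand > t2 - t1:
                return False
    return True
```

For instance, two unit-density tasks whose two-slot windows overlap at slot 1 fail the demand condition on the interval [0, 3).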

Complete Task Systems: Research Result of Paper


THEOREM 3.1. Let c be a fixed constant, 0 < c < 1. The feasibility problem for synchronous systems on one processor is solvable in O(n · max{pi - di}) time if Σ i=1..n ei/pi ≤ c.

Complete Task Systems: Explanation of Above Result


The above proof may be more intuitive with a graphical interpretation. The function g(t) = Σ i=1..n (⌊(t - di)/pi⌋ + 1) ei is a step function, but if we eliminate the floor operations, we get a linear function f ≥ g. The slope of f is equal to the density, and f(0) = Σ i=1..n (pi - di) ei/pi (see Figure 3). If the density is less than 1, f must cross the line y = t; the above proof gives an upper bound on where this intersection must occur. The algorithm given in the above proof is a pseudo-polynomial-time algorithm, since the value of max{pi} is not necessarily polynomial in the size of the problem description. It is not known whether the above problem can be solved in polynomial time, nor whether the feasibility problem for general synchronous systems can be solved in pseudo-polynomial time. However, the authors address the open question (see Leung and Merrill 1980) of whether the feasibility problem for asynchronous systems can be solved in pseudo-polynomial time. Leung and Merrill (1980) showed this problem to be co-NP-hard, but only in the weak sense; the paper sharpens this result by showing the problem to be co-NP-complete in the strong sense.
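The graphical argument suggests a direct implementation of the pseudo-polynomial test for synchronous systems (all si = 0): check g(t) ≤ t for every integer t up to the point where f crosses the line y = t. A hedged sketch of our own, with the density bound c as a parameter:

```python
from fractions import Fraction

def synchronous_feasible(tasks, c=Fraction(9, 10)):
    """Pseudo-polynomial feasibility sketch for synchronous systems
    whose density is at most a fixed constant c < 1.

    `tasks` is a list of (e_i, d_i, p_i) integer tuples (all s_i = 0).
    Checks the demand g(t) <= t for every integer t up to where the
    linear upper bound f(t) = f(0) + U*t crosses y = t.
    """
    U = sum(Fraction(e, p) for e, _, p in tasks)
    if U > c:
        raise ValueError("density exceeds the assumed bound c")
    # f(0) = sum of (p_i - d_i) * e_i / p_i; f crosses y = t at f(0)/(1-U)
    f0 = sum(Fraction((p - d) * e, p) for e, d, p in tasks)
    bound = int(f0 / (1 - U)) + 1
    for t in range(1, bound + 1):
        demand = sum(max(0, (t - d) // p + 1) * e for e, d, p in tasks)
        if demand > t:
            return False
    return True
```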

Complete Task Systems: Explanation of Above Result


Thus, there is no pseudo-polynomial-time algorithm for this problem unless P = NP. Furthermore, the problem remains co-NP-complete in the strong sense even if the density is bounded above by any fixed positive constant. In the proof that the feasibility problem is co-NP-hard, Leung and Merrill used a reduction from the complement of the Simultaneous Congruences Problem (SCP). This problem was shown to be NP-complete in the weak sense by Leung and Whitehead (1982); whether SCP is NP-complete in the strong sense was left open (Leung and Merrill 1980; Leung and Whitehead 1982). The paper answers this question in the affirmative. It then follows from the proof of Leung and Merrill that the feasibility problem is co-NP-hard in the strong sense. Finally, Lemma 3.4 immediately implies that the problem is co-NP-complete.

Complete Task Systems: Simultaneous Congruences Problem

Let A = {(a1, b1), ..., (an, bn)} ⊆ N × N⁺ and 2 ≤ k ≤ n be given. The Simultaneous Congruences Problem is to determine whether there is a subset A′ ⊆ A of k pairs and a natural number x such that for every (ai, bi) ∈ A′, x ≡ ai (mod bi).

In what follows, we show this problem to be NP-complete in the strong sense.


Our proof uses the Generalized Chinese Remainder Theorem (see, e.g., (Knuth 1981)), which we now reproduce.

Complete Task Systems: Generalized Chinese Remainder Theorem

Let A = {(a1, b1), ..., (an, bn)} ⊆ N × N⁺. There is an x such that for all (ai, bi) ∈ A, x ≡ ai (mod bi) iff for all 1 ≤ i < j ≤ n, ai ≡ aj (mod gcd(bi, bj)).
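The theorem gives an efficient pairwise test for the existence of a simultaneous solution, which in turn yields a brute-force (exponential) decision procedure for SCP. A sketch of our own, for illustration only:

```python
from math import gcd
from itertools import combinations

def has_simultaneous_solution(pairs):
    """Generalized Chinese Remainder Theorem check: the system
    x = a_i (mod b_i) has a solution iff every pair of congruences
    agrees modulo the gcd of its moduli."""
    return all(
        (ai - aj) % gcd(bi, bj) == 0
        for idx, (ai, bi) in enumerate(pairs)
        for (aj, bj) in pairs[idx + 1:]
    )

def scp(pairs, k):
    """Brute-force Simultaneous Congruences Problem: is there a subset
    of k pairs with a common solution x? Exponential in the number of
    pairs (SCP is NP-complete in the strong sense)."""
    return any(has_simultaneous_solution(sub)
               for sub in combinations(pairs, k))
```

For example, x = 3 solves x ≡ 1 (mod 2) and x ≡ 3 (mod 4) simultaneously, whereas x ≡ 0 (mod 2) and x ≡ 1 (mod 2) clearly collide with no common solution.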

Complete Task Systems: Generalized Chinese Remainder Theorem Conclusion

We can conclude that if each pair of distinct pairs (ai, bi) and (aj, bj) collides (i.e., there is an x such that x ≡ ai (mod bi) and x ≡ aj (mod bj)), then there is a simultaneous collision of all pairs. This fact is very useful in the construction that follows.

The proof by Leung and Whitehead (1982) that SCP is NP-hard (in the weak sense) consisted of a reduction from the CLIQUE problem (Karp 1972). Given an undirected graph G = (V, E) with V = {v1, ..., vn}, a set of pairs {(a1, b1), ..., (an, bn)} was constructed such that ai ≡ aj (mod gcd(bi, bj)) iff (vi, vj) ∈ E. Thus, there is a simultaneous collision of k items iff G has a clique of size k. However, since each ai and bi was a product of O(n²) distinct prime numbers, the values of ai and bi were not polynomial in the size of the description of G. To overcome this problem, the paper gives an entirely different reduction, from 3-SAT rather than CLIQUE.

Complete Task Systems: Research Result of Paper


THEOREM 3.2. SCP is NP-complete in the strong sense.

Complete Task Systems: Research Result of Paper


LEMMA 3.7. The feasibility problem for complete task systems on one processor is co-NP-hard in the strong sense.

Complete Task Systems: Research Result of Paper


COROLLARY 3.1. The feasibility problem for complete task systems on one processor is co-NP-hard in the strong sense even if the systems are restricted to have density no greater than ε, where ε is any fixed positive constant.

Complete Task Systems: Research Result of Paper


THEOREM 3.3. The feasibility problem for complete periodic task systems on one processor is co-NP-complete in the strong sense.

Complete Task Systems: Research Result of Paper


THEOREM 3.4. Let T be a synchronous task system with min{pi - di} > P(1 - Σ i=1..n ei/pi), where P is the least common multiple of the periods. Then T is not feasible on one processor.

Complete Task Systems: Research Result of Paper


THEOREM 3.5. For complete task systems with a fixed number of distinct types of tasks, the feasibility problem for one processor can be solved in polynomial time.

Incomplete Task Systems: Research Result of Paper


THEOREM 4.1. The feasibility problem for incomplete task systems on one processor is Σ₂ᵖ-complete. THEOREM 4.2. For a fixed number of distinct types of tasks, the feasibility problem for incomplete task systems on one processor can be solved in pseudo-polynomial time.

Complexity Notations
Polynomial Time: In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is one of the most fundamental complexity classes. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

NP: NP is the set of decision problems where the "yes"-instances can be recognized in polynomial time by a nondeterministic Turing machine.
Co-NP: In computational complexity theory, co-NP is a complexity class. A problem is a member of co-NP if and only if its complement is in the complexity class NP. In simple terms, co-NP is the class of problems for which efficiently verifiable proofs of "no" instances, sometimes called counterexamples, exist.

Co-NP-Complete: Computational problems that are co-NP-complete are the hardest problems in co-NP, in the sense that they are the ones most likely not to be in P. If there exists a way to solve a co-NP-complete problem quickly, then that algorithm can be used to solve all co-NP problems quickly.

Co-NP-Hard: A problem is co-NP-hard if every problem in co-NP can be reduced to it in polynomial time; unlike a co-NP-complete problem, it need not itself belong to co-NP.

NP-Complete: A decision problem L is NP-complete if it is in NP, so that any given solution can be verified in polynomial time, and also NP-hard, so that any NP problem can be converted into L by a polynomial-time transformation of the inputs. Although any given solution to such a problem can be verified quickly, there is no known efficient way to locate a solution in the first place; indeed, the most notable characteristic of NP-complete problems is that no fast solution to them is known. That is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows.

Complexity Notations
Pseudo-Polynomial Time: In computational complexity theory, a numeric algorithm runs in pseudo-polynomial time if its running time is polynomial in the numeric value of the input (which is exponential in the length of the input, i.e., its number of digits).

An NP-complete problem with known pseudo-polynomial time algorithms is called weakly NP-complete. An NP-complete problem is called strongly NP-complete if it is proven that it cannot be solved by a pseudo-polynomial time algorithm unless P=NP.
