In real-time systems, release jitter occurs when the time at which a task is released differs from the time at which it arrives. This is a very common issue in distributed systems. Suppose we have two tasks: one periodic and one sporadic. The periodic task (p), on completing its execution, releases the sporadic task (s) on another processor.
We could attempt to model the sporadic task as periodic and use the standard response time equations to assess the interference the tasks impose on lower-priority tasks. This is, however, insufficient. To illustrate why, consider two consecutive executions of the periodic task, with the sporadic task released at the very end of each execution.
On the first execution of task p there is interference from higher-priority tasks, so it does not complete until its latest possible time, Rp; on the second execution there is no interference, and the task completes within its computation time Cp.
We can clearly show that the minimum inter-arrival time of the sporadic task is no longer Ts = Tp; instead it is Ts = Tp - Rp + Cp, since the first release of s can occur as late as Rp while the second can occur as early as Tp + Cp.
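To make the arithmetic concrete, here is a minimal sketch of that reasoning (the function name and numeric values are illustrative assumptions): the shortest gap between two releases of s occurs when p's first job finishes as late as possible and its second job finishes as early as possible.

```python
def min_interarrival(Tp, Rp, Cp):
    """Shortest gap between two consecutive releases of the sporadic task.

    First release of s: as late as possible, when p's first job
    completes at its worst-case response time Rp.
    Second release of s: as early as possible, when p's next job
    (released at Tp) suffers no interference and finishes Cp later.
    """
    latest_first = Rp
    earliest_second = Tp + Cp
    return earliest_second - latest_first

# Illustrative values: Tp = 20, Rp = 16, Cp = 1
print(min_interarrival(20, 16, 1))  # -> 5, far shorter than Tp = 20
```

Treating s as strictly periodic with period Tp would therefore underestimate the interference it can cause.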
Therefore, in order to correctly calculate the interference caused by the sporadic task, the response time equation must factor in release jitter. Strictly speaking, release jitter is the maximum variation in a task's release time, and it is usually denoted by the letter J.
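The classic fixed-priority response time recurrence, extended with jitter, charges each higher-priority task j an interference term of ceil((R + Jj)/Tj) * Cj. A minimal fixed-point iteration sketch (the function name and task-tuple layout are my own):

```python
import math

def response_time(C, higher_prio, limit=10_000):
    """Worst-case response time with release jitter, via the usual
    fixed-point iteration:

        R = C + sum over hp tasks j of ceil((R + Jj) / Tj) * Cj

    higher_prio is a list of (Cj, Tj, Jj) tuples."""
    R = C
    while R <= limit:
        nxt = C + sum(math.ceil((R + Jj) / Tj) * Cj
                      for (Cj, Tj, Jj) in higher_prio)
        if nxt == R:
            return R  # converged on a fixed point
        R = nxt
    raise RuntimeError("no convergence: task is likely unschedulable")

# Without jitter, a higher-priority task (C=1, T=5) interferes once:
print(response_time(2, [(1, 5, 0)]))  # -> 3
# With jitter J=3, it can be released twice within the window:
print(response_time(2, [(1, 5, 3)]))  # -> 4
```

The jitter term widens the window in which a higher-priority task's releases can land, which is exactly the effect the sporadic task above exhibits.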
Task p is released at time t; in the worst case its execution does not complete until t + Rp, at which point task s is released. The second release of task p suffers no interference, so it starts executing at t + Tp and completes at t + Tp + Cp, releasing s again. Clearly, if we assume some arbitrary values, e.g. J = 15, Ts = 20, Cp = 1 (so Rp = J + Cp = 16) and t = -16, then we get 0, 5, 25, 45, etc. as the releases for task s. These can be easily calculated using the following pattern: 0, T - J, 2T - J, 3T - J, etc.
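The worst-case release pattern above can be generated directly; a small sketch (the function name is mine): the first release is maximally delayed by J (taken as time 0), and every subsequent release occurs as early as possible, at k*T - J.

```python
def worst_case_releases(T, J, n):
    """First n worst-case release times of a task with period T and
    release jitter J: the first release is maximally delayed (taken
    as time 0); later releases happen as early as possible, at k*T - J."""
    return [0] + [k * T - J for k in range(1, n)]

print(worst_case_releases(20, 15, 4))  # -> [0, 5, 25, 45]
```

Note that this back-to-back pattern is what produces the extra interference term in the jitter-aware response time equation.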