In computer science, rate-monotonic scheduling (RMS) is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class. The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority.
These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.
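To make the assignment rule concrete, the following minimal sketch sorts a set of periodic tasks by period and assigns priorities accordingly. The task names, field layout, and the "lower number = higher priority" convention are illustrative assumptions for this example only, not part of any particular RTOS API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative task descriptor (names and fields are hypothetical). */
struct task {
    const char *name;
    unsigned period_ms;   /* cycle duration (release period) in milliseconds */
    unsigned priority;    /* assigned priority: 0 is the highest in this sketch */
};

/* qsort comparator: order tasks by ascending period. */
static int by_period(const void *a, const void *b)
{
    const struct task *x = a, *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

int main(void)
{
    struct task tasks[] = {
        { "sensor",  10, 0 },
        { "control", 40, 0 },
        { "logger", 100, 0 },
    };
    size_t n = sizeof tasks / sizeof tasks[0];

    /* Rate-monotonic assignment: the shorter the period, the higher the priority. */
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++) {
        tasks[i].priority = (unsigned)i;
        printf("%-8s period=%3u ms -> priority %u\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    }
    return 0;
}
```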
A simple version of rate-monotonic analysis assumes that threads have the following properties: no resource sharing (tasks do not share hardware resources, queues, or semaphores); deterministic deadlines exactly equal to periods; static priorities (the runnable task with the highest static priority immediately preempts all other tasks); priorities assigned according to the rate-monotonic convention (tasks with shorter periods are given higher priorities); and context-switch times and other thread operations that are free and have no impact on the model.
It is a mathematical model that simulates the periods in a closed system, where round-robin and time-sharing schedulers would otherwise fail to meet the scheduling needs. Rate-monotonic analysis models a run of all threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question.
Liu & Layland (1973) proved that for a set of n periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is:
where $C_i$ is the computation time, $T_i$ is the release period (with deadline one period later), and $n$ is the number of processes to be scheduled. For example, $U \le 0.8284$ for two processes. When the number of processes tends towards infinity, this expression tends towards:

$$\lim_{n\to\infty} n\left(2^{1/n} - 1\right) = \ln 2 \approx 0.693147\ldots$$
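As a brief justification (a standard calculus step not spelled out in the text above), the limit follows from the expansion $e^{x} = 1 + x + O(x^{2})$ with $x = (\ln 2)/n$:

$$n\left(2^{1/n}-1\right) = n\left(e^{(\ln 2)/n}-1\right) = n\left(\frac{\ln 2}{n} + O\!\left(\frac{1}{n^{2}}\right)\right) \longrightarrow \ln 2 \quad \text{as } n \to \infty.$$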
Therefore, a rough estimate is that RMS can meet all of the deadlines if CPU utilization is less than 69.32%. The remaining 30.68% of the CPU can be dedicated to lower-priority, non-real-time tasks. It is known that a randomly generated periodic task system will usually meet all deadlines when the utilization is 85% or less; however, this fact depends on knowing the exact task statistics (periods, deadlines), which cannot be guaranteed for all task sets.
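As a concrete illustration of the schedulability test above, the following minimal sketch computes the utilization of a small task set and compares it against the Liu & Layland bound. The task values are made up for the example, and the program structure is an assumption rather than part of any RTOS.

```c
#include <math.h>
#include <stdio.h>

/* Computation time C_i and period T_i for each task (illustrative values, same units). */
struct task { double c; double t; };

/* Liu & Layland utilization bound for n tasks: n * (2^(1/n) - 1). */
static double rms_bound(unsigned n)
{
    return n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    struct task set[] = { { 1.0, 10.0 }, { 4.0, 40.0 }, { 20.0, 100.0 } };
    unsigned n = sizeof set / sizeof set[0];

    /* Total utilization U = sum of C_i / T_i. */
    double u = 0.0;
    for (unsigned i = 0; i < n; i++)
        u += set[i].c / set[i].t;

    double bound = rms_bound(n);   /* about 0.7798 for n = 3 */
    printf("U = %.4f, bound = %.4f -> %s\n", u, bound,
           u <= bound ? "schedulable under RMS" : "test inconclusive");

    /* The bound decreases toward ln 2 (about 0.6931) as n grows. */
    printf("bound(n=2) = %.4f, bound(n=100) = %.4f, ln 2 = %.4f\n",
           rms_bound(2), rms_bound(100), log(2.0));
    return 0;
}
```

Note that the test is sufficient but not necessary: a task set that exceeds the bound may still be schedulable, which is why the failing branch is labelled inconclusive rather than unschedulable.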