A Basic Overview Of Operating Systems Information Technology Essay


The resource manager in a modern operating system forms the building block of a systematic design and implementation of a resource-aware system with some form of timeliness. The greater the number of users, the greater the task for the manager to manage and protect memory, I/O and other devices. The role of the OS as a resource manager is to grant and revoke requests for resources, mediate conflicting requests, and keep track of which users are using which resources. Scheduling algorithms are designed to help the OS choose the jobs that should be allocated resources. A weak scheduling algorithm leads to poor system performance, with jobs waiting for resources for an indefinite time; it may cause a convoy effect, where all other jobs slow down waiting behind one long-running job. Thus, it is important to choose a powerful scheduling algorithm to maximize system utilization and throughput while providing acceptable wait and response times to jobs of varying importance. In this paper we shall discuss three scheduling algorithms: the fair share scheduler and two of its variants, the lottery scheduler and the stride scheduler, which are also called proportional share schedulers.

2. Fair share scheduler

2.1 Introduction

An executable of a user's application consists of a group of processes. Resources are allocated to these processes by various schedulers. A user is interested in the performance of his application as a whole and is not concerned about the time taken by individual processes. Thus, it is wise to make scheduling decisions based on the entire group of processes, also termed a process set, instead of on individual processes.

This approach of allocating resources fairly to users or process sets, instead of the traditional allocation to individual processes, is called fair-share scheduling. Fair share scheduling is a concept dating from before 1988, when research into it resulted in a paper by J. Kay and P. Lauder being published in the Communications of the ACM. A Fair Share Scheduler (FSS) is a scheduler that allocates resources so that users get their fair machine share over a long period. Another form of FSS supports sharing not only between individual users but also between groups of users; this variation is called the hierarchical form of the Fair Share Scheduler. The term "fair" means different things to different users. There are many variations of the fair share scheduler, such as the lottery scheduler, the stride scheduler, the max-min fair share scheduler and the hierarchical share scheduler. Each of them uses different parameters to optimize the "fairness" of resource allocation.

2.2 Basic Overview:

The main motivation behind the fair share scheduling algorithm, as described by J. Kay and P. Lauder in their paper, was the problems faced in a student environment where students could not get a fair share of the system during peak submission times, resulting in late submissions. The fair share scheduling algorithm not only helped improve the efficiency of the student environment but also proved to be an efficient CPU scheduling algorithm. On many systems, unfairness issues are partially addressed by a charging mechanism (Nielsen 1970, McKell et al. 1979). A charging mechanism, as the name suggests, charges users for the consumption of resources. It may also be referred to as a fixed budget model, in that each user has a fixed budget allocated to them. As they use resources, the budget decreases until it reaches zero, after which they cannot use the machine at all.

If a user is prepared to pay more, the processes he owns can get a better machine share. The charging mechanism was a good solution for accounting for consumption of system resources, but it could not deal with the problem of CPU allocation; it was for this that the Fair Share Scheduler was developed. If a user exceeded his limit, he was warned on each of a fixed number of subsequent logins, after which the user was blocked, i.e. not allowed to log in. For example, a user may have a printer page bound of ten pages and a daily increment of two pages. This means that they start with a budget of ten pages; if they print, say, three pages in one day, their budget for the rest of the day is seven pages, and provided they do no more printing that day, their budget at the beginning of the next day will be nine pages.
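As a rough sketch, this fixed budget model can be expressed in a few lines of Python. The class and method names here are illustrative, not taken from any real accounting system; the bound and increment values follow the printer example above.

    # Minimal sketch of the fixed-budget charging mechanism described above.
    # Names (Budget, charge, new_day) are illustrative; the bound and
    # increment follow the printer-page example in the text.
    class Budget:
        def __init__(self, bound, daily_increment):
            self.bound = bound                      # maximum budget (10 pages)
            self.daily_increment = daily_increment  # restored per day (2 pages)
            self.remaining = bound                  # start with a full budget

        def charge(self, amount):
            """Charge a resource use; refuse it once the budget is exhausted."""
            if amount > self.remaining:
                return False                        # user is blocked
            self.remaining -= amount
            return True

        def new_day(self):
            """Replenish the budget by the daily increment, up to the bound."""
            self.remaining = min(self.bound, self.remaining + self.daily_increment)

    printer = Budget(bound=10, daily_increment=2)
    printer.charge(3)        # print 3 pages: 7 pages remain today
    printer.new_day()        # next day: the budget is back to 9 pages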

2.3 Objectives of FSS:

The objectives of the FSS, at the individual level as well as the independent group level, are that it should:

1. seem fair;

2. be easy to understand;

3. behave predictably;

4. accommodate user-specific requirements: where a user needs excellent response for a short time, it should permit that with minimum disruption to other users;

5. deal well with peak loads;

6. encourage load spreading;

7. give interactive users reasonable response;

8. give some resources to each process so that no process is postponed indefinitely;

9. be very cheap to run.

Once Share has been described, we can evaluate it in terms of these objectives.

There are two main components, one at the user level and the other at the process level. First, we describe the user level component.

User-level scheduling

It needs to update the usage for each user by adding the charges incurred by all their processes since the last update and decaying it by the appropriate constant.

It also needs to update accumulated records of resource consumption for planning, monitoring and policy decisions.

Process-level scheduling:

Process-level scheduling consists of three main functions:

Process activation: select and activate the process with the lowest priority once the charges on the current process have been updated.

Priority adjustment: increase the priority of the current process based on its owner's active processes, current usage and shares.

Decay of process priorities: the scheduler uses the UNIX nice number, which ranges from zero to nineteen. The larger the nice value assigned to a process, the poorer the response that is acceptable. The FSS decays process priorities, with a faster decay for processes with a lower nice value, so that they return to the head of the queue sooner.

2.4 Implementation:

2.4.1 User-level Scheduling:

The user-level scheduler is invoked every t1 seconds, where t1 is the granularity of changes in users' usage. Since usage is large compared to the resources consumed in a single second, t1 can be of the order of a few seconds without compromising fairness. The advantage of a large t1 is that we can afford relatively costly computations without compromising the time efficiency of Share. The Vax implementation takes t1 to be four seconds, which is 240 times the scheduler's resolution. The first component of the user-level scheduler decays each user's usage. This ensures that usages remain bounded; the value of the constant K1, in combination with t1, defines the rate of decay.

The effect of K1 is best described by the half-life of usage it implies. In the student environment mentioned earlier, a half-life of three days was chosen. In other contexts, mainly CPU-bound ones, it has been much shorter, but generally of the order of several hours.
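The decay constant follows from the chosen half-life: if usage is multiplied by K1 every t1 seconds, then K1 raised to the power (half_life / t1) must equal one half. A quick Python check, using the four-second t1 and three-day half-life quoted above:

    # Derive the decay constant K1 from a desired usage half-life.
    # Usage is multiplied by K1 every t1 seconds, so K1 ** (half_life / t1) == 0.5.
    t1 = 4                          # user-level scheduler period, seconds (Vax example)
    half_life = 3 * 24 * 60 * 60    # three days, in seconds (student environment)
    K1 = 0.5 ** (t1 / half_life)
    print(K1)                       # ~0.99999 -- usage decays very slowly per tick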

At a conceptual level, this step is performed for all users. In fact, the calculation is performed only once a user has logged in, which means it is done only for active users. The next part of user-level scheduling involves updating the usage of active users by the charges they have incurred in the last t1 seconds and resetting the charges tally.

Share implementation:

Every t1 seconds: user-level scheduling

    for each user:
        usage_user = usage_user x K1 + charges_user    (decay usage, add costs from the last t1 seconds)
        charges_user = 0                               (reset the cost tally)

Every t2 seconds: decay of process priorities

    for each process:
        priority_process = priority_process x K2 x (nice_process + K3)

Every t3 seconds: priority adjustment

    priority_current_process = priority_current_process +
        (usage_current_user x active_processes_current_user) / (shares_current_user)^2

At each scheduling event: current process selection

    charges_current_user = charges_current_user + cost_event
    run the process with the lowest priority
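This pseudocode translates fairly directly into a runnable sketch. The Python below is a simplified single-CPU simulation, not the original Vax code: the constants are illustrative choices, and it charges one cost unit per scheduling event.

    # Simplified, runnable sketch of the Share scheduler loop above.
    # Constants and the one-cost-per-event charging rule are illustrative.
    K1 = 0.999                        # usage decay per t1 tick
    K2_PRIME, K3, MAX_NICE = 0.9, 1.0, 19
    K2 = K2_PRIME / (K3 + MAX_NICE)   # a nice=19 process decays by K2' per t2

    class User:
        def __init__(self, shares):
            self.shares = shares
            self.usage = 0.0
            self.charges = 0.0
            self.active_processes = 0

    class Process:
        def __init__(self, owner, nice=0):
            self.owner, self.nice = owner, nice
            self.priority = 0.0
            owner.active_processes += 1

    def user_level_tick(users):               # every t1 seconds
        for u in users:
            u.usage = u.usage * K1 + u.charges    # decay usage, add recent charges
            u.charges = 0.0                       # reset the cost tally

    def decay_priorities(processes):          # every t2 seconds
        for p in processes:
            p.priority *= K2 * (p.nice + K3)      # lower nice decays faster

    def schedule(processes):                  # at each scheduling event
        p = min(processes, key=lambda q: q.priority)   # lowest value runs first
        p.owner.charges += 1.0                # charge the owner for this event
        # priority adjustment (every t3 seconds in the full scheme):
        p.priority += (p.owner.usage * p.owner.active_processes) / p.owner.shares ** 2
        return p

    alice, bob = User(shares=2), User(shares=1)
    procs = [Process(alice), Process(alice), Process(bob)]
    for tick in range(100):
        schedule(procs)
        decay_priorities(procs)
        if tick % 4 == 0:                     # t1 = 4 ticks in this toy setup
            user_level_tick([alice, bob])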

2.4.2 Process-level Scheduling

At the process level the scheduler runs on the basis of the priorities of the processes: the one with the lowest priority value gets the CPU first. The priority thus decides the order of CPU allocation to processes. The scheduler has to perform the following functions at the process level:

• Choose the process with the lowest priority, which sits at the head of the queue, and allocate CPU resources to it.

• Increase the priority of a process each time it is chosen for CPU allocation, which moves it further down the process queue.

• Slowly decrease, or decay, the priorities of all processes, so that each gradually drifts back towards the head of the queue and is served once it reaches it.

The sections below discuss how FSS combines these activities with user-level scheduling.

2.4.2.1 Decay of Process Priorities.

The decay of process priorities ages processes, so that those that have been waiting in the queue without CPU allocation drift slowly towards the head of the queue as their priority values improve. The value of t2, combined with the value of K2, defines the rate at which processes age. We need to make t2 small compared to t1, because priority values change very quickly. In the Vax implementation of FSS, t2 is set at one second, which is sixty times the resolution of the scheduler. FSS also follows the UNIX scheduler's approach to the nice number: it assumes that users usually prefer the best response for all their processes (which corresponds to a nice value of zero), but there are certain processes for which a user is happy with a worse response, which the user indicates by giving nice a non-zero integer value. (Its range is from zero, the default, to nineteen, which gives the worst response.) Thus, the rate at which processes age is affected by their nice value. The value of K2 can be defined as

    K2 = K2' / (K3 + max_nice)

where max_nice is the largest nice value (19). This ensures that the priority of processes with nice set to max_nice is decayed by K2' every t2 seconds, and the priority of processes with nice set to zero is decayed somewhat faster. The values of K2' and K3 must be sufficiently large to ensure that priorities are well spread and remembered long enough to prevent large numbers of processes from having zero priority.
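To make the effect concrete, here is a small worked example with assumed values K2' = 0.9 and K3 = 1 (our illustrative choices, not figures from the paper):

    # Per-t2 decay factor K2 x (nice + K3) for different nice values,
    # using assumed constants K2' = 0.9 and K3 = 1.
    K2_PRIME, K3, MAX_NICE = 0.9, 1.0, 19
    K2 = K2_PRIME / (K3 + MAX_NICE)               # 0.045
    for nice in (0, 10, 19):
        print(nice, K2 * (nice + K3))
    # nice=0  -> 0.045 (priority decays fast: best response)
    # nice=10 -> 0.495
    # nice=19 -> 0.9   (priority decays slowly: worst response)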

2.4.2.2 Priority Adjustment

At the best resolution of the scheduler, t3, the current process's priority is increased based upon its owner's usage and current active processes. (The scheduler resolution, t3, is one sixtieth of a second in the Vax version.) Typically, schedulers increase the priority by a constant. Intuitively, one might view the difference between FSS and typical schedulers as follows:

• a typical scheduler adjusts the priority of the current process by pushing it down the queue of processes by a constant amount;

• FSS increases the priority of the process by an amount proportional to the usage and number of active processes of the process's owner, and inversely proportional to the square of that user's shares. Processes belonging to higher-usage (more active) users are therefore pushed further down the queue than processes belonging to lower-usage (less active) users, which removes the problem of a high-usage user dominating the CPU time.

Thus, a process belonging to a high-usage user will take longer to move its way back to the head of the queue: its priority takes longer to decay to the lowest value. We also want users to be able to work at a rate proportional to the shares allocated to them. This means that the charges they incur must be allowed to increase in proportion to the square of the shares (which gives a derivative, or rate of work done, proportional to the shares). The formula also considers the number of active processes (the processes on the priority queue) owned by the user who owns the current process. This is necessary because a priority increment involving just usage and shares would push a single process down the queue far enough to ensure that the user gets no more than his fair share; if the user has more than one active process, the user's share must be spread between all of them, so the user is penalized for each active process by multiplying the priority increment by the active process count. This is the core of the FSS mechanism for making long-term usage, over all resources that attract charges, affect the user's response and rate of work. Although the model described may be adequate for implementations where process priorities have a large range of values, on the machines where FSS has been implemented process priorities are small integers and so cannot be used directly. We need to normalize the Share priorities into a range that is appropriate for real process priorities. In addition, where the range of priority values is quite small, we need to ensure that the normalization procedure does not allow a single very large Share priority value to reduce all other normalized priorities to zero. To avoid this, an upper bound on the Share priority value is defined.

Algorithm for priority normalisation:

Find the greatest Share priority for normalisation:

    max_priority = 0
    for each process:
        if max_priority < priority_process <= priority_bound:
            max_priority = priority_process

Scale each priority to the appropriate range:

    for each process:
        if priority_process <= max_priority:
            normalised_priority_process = (K4 - 1) x priority_process / max_priority
        else:
            normalised_priority_process = K4

The normalised priority is calculated in the process-level scheduler as shown above. The value of K4 is the largest priority value available to the low-level scheduler. Note that the FSS priority bound does, somewhat unfairly, favor very heavy users. However, they still suffer the effects of their slowly decaying large usage, and they are still treated more severely than everyone else. On the other hand, the bound helps prevent marooning (a concept discussed later in this paper).
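A direct Python rendering of the normalisation (a sketch: priority_bound and K4 are whatever the low-level scheduler dictates, and the guard against an all-zero queue is our addition):

    # Normalise large Share priorities into the small range [0, K4] expected
    # by the low-level scheduler. priority_bound caps the effect of a single
    # very large Share priority; K4 is the largest low-level priority value.
    def normalise(priorities, priority_bound, K4):
        # greatest Share priority not exceeding the bound (0 if none)
        max_priority = max((p for p in priorities if p <= priority_bound), default=0)
        if max_priority == 0:                 # nothing to scale against (our guard)
            return [0 if p <= 0 else K4 for p in priorities]
        return [(K4 - 1) * p / max_priority if p <= max_priority else K4
                for p in priorities]

    print(normalise([5.0, 250.0, 40.0], priority_bound=100.0, K4=32))
    # [3.875, 32, 31.0] -- the over-bound priority 250 is clamped to K4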

2.4.2.3 Process Activation: At each scheduling event, FSS updates the current user's charges by the costs associated with the event and selects the lowest priority process to run. This aspect of FSS is typical of CPU schedulers.

2.4.2.4 Multiple Processors: In a multiprocessor environment FSS works the same way, provided the processors share a single priority queue for all processes. The only difference is that processes are selected to run from the front of the queue, and incur charges, much more frequently than in a uniprocessor environment.

2.4.2.5 Efficiency: The implementation shown in the above algorithm should be seen only as a model of the actual code. For efficiency, some of the calculations shown at the level of the process scheduler are actually pre-calculated elsewhere.

2.4.3 Edge Effects

In general, it is important to avoid edge effects on scheduler behavior. In particular, if a user enters the system with zero usage, they could effectively consume 100% of the resources, at least for one cycle of the user-level scheduler. Since this is a comparatively long time (a few seconds), that would be quite unacceptable. The following section analyses why this undue favoritism could occur and how FSS deals with the problem. First, we define the relative proportion of the machine due to a user by virtue of their allocation of shares.

This is:

    machine_proportion_due_user = shares_user / ( sum over all active users u of shares_u )

This defines the proportion of the machine due to a user in the short term. We can also predict the short-term future proportion of the machine that a user should get by virtue of their usage:

    near_future_machine_proportion_user =
        ( (shares_user)^2 / usage_user ) / ( sum over all active users u of (shares_u)^2 / usage_u )

If everyone is getting their fair share, these two formulae will give the same value for each active user. Indeed, FSS works to bring the values of these two formulae close to equality for each user. In a situation where a user has zero or almost zero usage, FSS has to prevent this user from being unduly favored at the expense of the other active users. It does so by altering the usage value in the user-level scheduler as shown below:

Algorithm for avoiding edge effects:

Every t1 seconds: user-level scheduler

    for each user:
        if near_future_machine_proportion_user > K5 x machine_proportion_due_user:
            usage_user = usage_user x near_future_machine_proportion_user / (K5 x machine_proportion_due_user)

The value of K5 is set to 2.
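In Python, the two proportions and the adjustment read roughly as follows. This is a sketch: the small epsilon guarding the division when a user's usage is near zero is our addition, not part of the published algorithm.

    # Edge-effect guard: inflate the usage of any user whose predicted share
    # exceeds K5 times their entitled share. EPS (our addition) protects the
    # division when a user's usage is zero or near zero.
    K5, EPS = 2.0, 1e-9

    def machine_proportion_due(user, active_users):
        return user.shares / sum(u.shares for u in active_users)

    def near_future_machine_proportion(user, active_users):
        weight = lambda u: u.shares ** 2 / max(u.usage, EPS)
        return weight(user) / sum(weight(u) for u in active_users)

    def avoid_edge_effects(active_users):        # run every t1 seconds
        for u in active_users:
            due = machine_proportion_due(u, active_users)
            predicted = near_future_machine_proportion(u, active_users)
            if predicted > K5 * due:
                u.usage = u.usage * predicted / (K5 * due)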

2.4.3.1 System Processes

Processes running in support of the operating system are called system processes. FSS ensures that such processes are given all the resources they need: they have the highest priority and, in effect, a 100% share of the resources, on the assumption that they will not use it most of the time and will not take long to run. Share is intended to arbitrate fairly between users after the system has taken all the resources it needs.

2.4.3.2 Marooning

Under certain circumstances a user may accumulate a very large usage during a relatively idle period. Then, if new users become active, the original user's share becomes so small that they can do no effective work: their processes are effectively marooned, with insufficient CPU allocation even to exit. Marooning is avoided by the combination of the bound on the normalized form of priority, the process priority decay rate, and the granularity of the process-level scheduler.

2.5 Drawbacks of the Fair Share Scheduler:

Fair share scheduling is a dynamic scheduling algorithm best suited to clusters and grids, that is, to compute-bound workloads; it is not very favorable for interactive processes.

Policies can have an enormous impact on throughput and response time. "Accurate control over quality of service provided to users and applications requires support for specifying relative computation rates." Interactive applications need this control on a short time frame.

Fair share schedulers have relatively coarse control over long-running computations.

"Algorithms are complex, requiring periodic usage updates, complicated dynamic priority adjustments, administrative parameter setting to ensure fairness on a time scale of minutes."

Fair share scheduling can cause priority inversion, a situation in which high-priority jobs are blocked behind low-priority jobs.

"Fair share" is implemented by adjusting priorities with a feedback loop to achieve fairness over the (very) long term: the highest priority still wins every time, but the Unix priorities are now always changing.

3. Proportional Share Scheduler

To overcome the drawbacks of traditional schedulers, typified by FSS, proportional share schedulers, which are variations on fair share scheduling, were developed. They provide a general framework for specifying dynamic resource management policies, along with efficient mechanisms for implementing that framework. Resource rights are encapsulated by abstract, first-class objects called tickets; the amount of a resource consumed by an active client is directly proportional to the number of tickets it holds. Tickets can be issued in different amounts and may be transferred between clients. The framework also uses a modular currency abstraction to flexibly name, share, and protect sets of tickets. Currencies can be used to isolate or group sets of clients, enabling the modular composition of arbitrary resource management policies. Two different underlying proportional-share mechanisms support this framework: lottery scheduling and stride scheduling. "Lottery scheduling is a novel randomized resource allocation mechanism. An allocation is performed by holding a lottery, and the resource is granted to the client with the winning ticket. Stride scheduling is a deterministic resource allocation mechanism that computes a representation of the time interval, or stride, that each client must wait between successive allocations. Stride scheduling cross-applies and generalizes elements of rate-based flow control algorithms designed for networks [DKS90, Zha91, ZK91, PG93] to dynamically schedule other resources such as processor time." Novel variants of these core mechanisms provide improved proportional-share accuracy for many workloads. In addition, a number of resource-specific techniques have been proposed for proportional-share management of diverse resources including memory, access to locks, and disk bandwidth.
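To give a flavour of the deterministic variant, here is a minimal stride scheduler sketch. The names and the large constant follow the usual presentation of the algorithm; treat the details as illustrative rather than as the exact published implementation.

    # Minimal stride scheduler: each client advances a virtual-time "pass" by
    # a stride inversely proportional to its tickets; the client with the
    # smallest pass value is allocated the next quantum.
    import heapq

    STRIDE1 = 1 << 20                  # large constant used to compute strides

    class Client:
        def __init__(self, name, tickets):
            self.name = name
            self.stride = STRIDE1 // tickets
            self.pass_value = self.stride      # conventional initial pass

    def run(clients, quanta):
        heap = [(c.pass_value, i, c) for i, c in enumerate(clients)]
        heapq.heapify(heap)
        allocations = {c.name: 0 for c in clients}
        for _ in range(quanta):
            _, i, c = heapq.heappop(heap)      # smallest pass runs next
            allocations[c.name] += 1
            c.pass_value += c.stride           # advance by the client's stride
            heapq.heappush(heap, (c.pass_value, i, c))
        return allocations

    print(run([Client("A", 3), Client("B", 2), Client("C", 1)], 60))
    # roughly {'A': 30, 'B': 20, 'C': 10} -- allocations in a 3:2:1 ratio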

3.1 Lottery Scheduling

"Lottery scheduling is a randomized resource allocation mechanism". Resource allocation rights are represented by lottery tickets. Each allocation is determined by holding a lottery; the client with the winning ticket gets the allocation to the resource. This effectively allocates resources to competing clients in proportion to the number of tickets that they hold. The core lottery scheduling idea is to randomly pick a lottery ticket from a pool of tickets competing for a particular resource. Since it is a lottery each ticket has an equal probability of being picked up the probability that a given client will be allocated a resource is directly proportional to the number of lottery tickets assigned to the him.

3.1.1 Resource Rights

"Lottery tickets encapsulate resource rights that are abstract, relative, and uniform:"

Abstract, because resource rights are specified independently of machine details.

Relative, because the fraction of a resource that a ticket represents varies dynamically with the contention for that resource. Thus, a client will obtain more of a lightly contended resource than of one that is highly contended; in the worst case, it will receive a share proportional to its share of tickets in the system.

Uniform, because rights for heterogeneous resources can be homogeneously represented as tickets. These properties of lottery tickets are similar to those of money in computational economies [Wal92].

3.1.2 Lotteries

"Scheduling by lottery is probabilistically fair". The expected allocation of a resource, as mentioned earlier, is directly proportional to the number of tickets the client holds. Since the scheduling algorithm is randomized, the actual allocated proportions may not 100% match the expected proportions exactly. However, the inequality between them