22C:116, Lecture Notes, Aug. 25, 1995

Douglas W. Jones
University of Iowa Department of Computer Science

  1. Memory resources may be viewed in many ways, depending both on what objects are clients of the memory manager and on how the manager allocates memory.

    The unit of memory allocation may be the page, the contiguous block of memory, or the segment, depending both on the system in question and on the client to which memory is allocated.

    Clients may be user programs or other parts of the operating system. Allocation to user programs need not use the same mechanism as allocation to other parts of the system, although a proliferation of mechanisms clearly adds complexity to the system.

  2. Sharing CPU time is essential on any system with more processes than processors. It is common to talk about the contending processes as if each belongs to a different end user of the system.

    Since 1966, though, beginning with the Berkeley Timesharing System, many systems have allowed users to run more than one process in parallel, so not only do users compete with each other for CPU time, but the processes of one user also compete with each other.

    Aside: Surveys of UNIX users done many years ago showed that a single user at a remote terminal typically had 4 processes: one interactive, one waiting for the interactive process to terminate, and two either waiting or running in the background.

    On window-oriented systems, it is common to see one process behind each window on the display screen.

  3. Input-output time must typically be allocated among contending requests for input-output transfers on all but the smallest systems.

    Operating systems typically allocate I/O to processes competing for the limited bandwidth of the available channels. This may involve I/O scheduling code as complex as the code used to schedule CPU time; on low-performance personal computers, the system may simply ignore the issue.

  4. Secondary memory is the single most important resource managed by most operating systems, at least when seen from the user perspective. In the extreme, some users may view the operating system as if it were nothing more than a command line interface plus a file system.

    From the user perspective, the central function of a file system, as opposed to the I/O subsystem, is the allocation of space on disk for competing files. Allocation of disk space usually involves algorithms quite different from those used for main memory, both because the user model of disk files is not the same as the user model of main memory, and because secondary storage devices are typically block structured with a complex relationship between block addresses and I/O latency times.

  5. Security and reliability are frequently seen differently by the system and by users.

    Protection mechanisms are necessary for both security and reliability, but many small computer systems do without them, even though the technology needed to build secure systems has been well known since the mid 1960's.

    Security demands that system resources be protected from malicious attack.

    Reliability demands that system resources be protected from accidental damage.

    Aside: As a general rule, anything that a malicious user might intentionally do to damage a system might also be done by a careless user. Traditionally, security has been concerned with protection against malicious users, while carelessness was viewed as the concern of fault tolerance. The recognition that malice and carelessness frequently produce the same results has drawn attention to the close relationship between fault tolerant and secure systems.

  6. Resource Protection is generally demanded by users when they perceive a security threat.

    Typically, users demand that segments of main memory, files on secondary memory, and terminals allocated to them all be protected from unauthorized use or inspection by other users. There are uniform access control and protection mechanisms that can control access to all of these on the same basis, but on most modern systems, each class of resource is protected using ad-hoc mechanisms that are generally different from those used to protect other resources.

  7. Resource sharing becomes an issue when there are multiple users or multiple processes that must share access to some resources while other resources are private and protected.

    Files, windows, and memory segments are all examples of objects that might be shared or might be protected. Sharing is only trivial when there is no protection, and protection is only trivial when there is no sharing.

    Early timesharing systems viewed resource sharing in terms of allocating free resources to one or another user, exclusively. If this is done, the system is effectively creating one isolated personal computer for each user, so there is essentially no sharing and the result is of only limited utility.

    With the advent of systems such as Multics and the Berkeley Timesharing System (around 1966), the focus shifted to the management of information that was actively shared between users. Thus, each user might have some memory regions that are entirely private, and others that are shared with some, but not all, other users. Management of this shared data, whether on disk or in main memory, poses a significant challenge.

    This challenge was largely ignored for much of the decade between 1975 and 1985, as personal computers displaced timesharing systems from growing numbers of applications, but it has re-emerged with a vengeance with the advent of computer networks.

  8. Reliability, or absence of failures, is a natural goal of the system designer. Protection mechanisms play a strong part in assuring the reliability of systems by preventing failures such as programming errors in one part of a system from having unlimited effects on other parts of the system.

    On early systems, it was generally assumed that any failure in one part of a system would lead to the failure of the entire system, but as technology has advanced, we have learned to build systems where failures lead to incremental degradation of performance.

    Early examples of this include memory managers which detect memory errors and remove the bad memory addresses from the pool of memory available for allocation to various system applications. This is now routinely done both with disk systems and with main memory.

    More modern examples include multiprocessors where, when a processor fails, the remaining processors continue to operate.

    Aside 1: With centralized systems, the assumption that system failure was an all-or-nothing thing was natural. In fact, this was never true, but partial failures of a centralized system were rare enough that they are the subject of folklore -- for example, the timesharing system where the floating point unit failed. The operating system didn't use floating point, so none of the system managers knew anything was wrong until users started complaining about strange results.

    Aside 2: With large-scale distributed systems, partial system failures are the norm! Many networks today are large enough that the normal state of the system is that one or more machines are down at any instant. As an exercise, consider the internet, with around one million machines. Assuming each machine has a mean-time between failures of one month, what is the rate of failures, per unit time?
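
    One back-of-the-envelope answer, assuming 30-day months: one million machines times one failure per machine-month gives a million failures per month. A month is about 2,592,000 seconds, so that works out to roughly one failure somewhere on the net every 2.6 seconds, or about 23 failures per minute.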

  9. Process Management

    The traditional picture of the states of a process, as found in many elementary operating systems textbooks, is as follows:

                              Terminate
              Dead <------------------------ Running
               ^ |                           ^  |  |
          kill | | Create           Schedule |  |  | Wait
               | v                           |  |  v
              Ready -------------------------+  |  Waiting
               ^ ^                              |     |
               | +--- Preempt, Relinquish ------+     |
               +------------- Signal -----------------+
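
    A minimal sketch, in C, of the states and transitions in this diagram; the names are illustrative, not taken from any particular system:

    enum state { DEAD, READY, RUNNING, WAITING };

    /* returns nonzero if the diagram above allows the transition */
    int legal_transition(enum state from, enum state to)
    {
        switch (from) {
        case DEAD:    return to == READY;      /* create               */
        case READY:   return to == RUNNING     /* schedule             */
                          || to == DEAD;       /* kill                 */
        case RUNNING: return to == READY       /* preempt, relinquish  */
                          || to == WAITING     /* wait                 */
                          || to == DEAD;       /* terminate            */
        case WAITING: return to == READY;      /* signal               */
        }
        return 0;
    }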

  10. Process States

    Even when no process in the system needs the CPU, something must be running on the CPU! For this reason, operating systems always include some kind of background process to soak up the otherwise unused CPU cycles. Sometimes this one is distinguished as the background process, as opposed to the other background processes a user may run.

    On early UNIX systems, one well known use for the background process was computing more digits of pi. This is silly; there are more useful housekeeping jobs the background process can do. For example, it can run CPU and memory diagnostics, or it can manage garbage collection.

    The ready queue may be implemented with any of a number of interesting data structures, but it always supports two basic operations: enqueue a process and dequeue a process. A FIFO queue gives a system where all processes have equal priority. A priority queue allows high priority processes to have more access to the CPU.
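
    As a concrete sketch of the FIFO case, here is a minimal ready queue in C; the process record is hypothetical and holds only a name, where a real process control block would also hold the saved program counter and registers. A priority queue would keep the same two operations but change where enqueue places each process.

    #include <stdio.h>

    struct process {
        const char *name;
        struct process *next;          /* link used by the ready queue */
    };

    struct queue {
        struct process *head, *tail;
    };

    void enqueue(struct queue *q, struct process *p)
    {
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    struct process *dequeue(struct queue *q)
    {
        struct process *p = q->head;
        if (p) {
            q->head = p->next;
            if (q->head == NULL) q->tail = NULL;
        }
        return p;                      /* NULL when the queue is empty */
    }

    int main(void)
    {
        struct queue ready = { NULL, NULL };
        struct process a = { "editor", NULL }, b = { "compiler", NULL };
        enqueue(&ready, &a);
        enqueue(&ready, &b);
        printf("next to run: %s\n", dequeue(&ready)->name);  /* editor */
        return 0;
    }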

    Fixed priority processes are used in many real-time applications. With UNIX, an interesting variable priority scheme was introduced, in which the priority of a process rises when it voluntarily relinquishes the CPU, for example, to wait for I/O completion, and the priority of a process falls when it is preempted at the end of a time-slice. This favors interactive users over compute bound processes.
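
    A sketch of that variable priority rule in C, assuming hypothetical priority bounds; the real UNIX scheduler computes priorities from more information than this:

    #define PRI_MIN   0
    #define PRI_MAX 127

    /* called when a process voluntarily gives up the CPU, e.g. for I/O */
    void on_relinquish(int *priority)
    {
        if (*priority < PRI_MAX) *priority = *priority + 1;   /* reward  */
    }

    /* called when a process is preempted at the end of its time-slice */
    void on_preempt(int *priority)
    {
        if (*priority > PRI_MIN) *priority = *priority - 1;   /* penalty */
    }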

  11. Preemption

    A running process is preempted when it is forced into the ready state in order to give the CPU to some other process. This might be done, for example, by the real-time-clock interrupt service routine at the end of a time-slice in a timesharing system. This is illustrated in the following pseudocode:

    Clock interrupt service routine:
       begin
          Disable Interrupts (if not already disabled)
    
          -- Save process state of interrupted process
          current.pc := interrupt-return.pc
          current.registers := interrupt-return.registers
    
          -- Change interrupted process state to ready
          Enqueue( current, ready queue )
    
          -- Make new process into the current process
          current := Dequeue( ready queue )
    
          -- Restore process state of new process
          interrupt-return.pc := current.pc
          interrupt-return.registers := current.registers
    
          Enable Interrupts (if necessary)
          return
       end
    

    Note that, in addition to involuntary preemption, many systems include provisions for a running process to voluntarily move itself from running to ready. This is sometimes called voluntarily relinquishing the CPU, and the code for this is essentially the same as that given above for preemption, except for differences in the calling sequence.
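
    Here is what such a voluntary relinquish might look like in C, assuming hypothetical helpers (save_state, restore_state, and the queue operations) that stand in for the steps of the pseudocode above; this is a sketch, not code from any particular system:

    struct process;                    /* hypothetical process record */
    struct queue;                      /* hypothetical queue type     */

    extern void disable_interrupts(void);
    extern void enable_interrupts(void);
    extern void save_state(struct process *p);     /* pc and registers */
    extern void restore_state(struct process *p);
    extern void enqueue(struct process *p, struct queue *q);
    extern struct process *dequeue(struct queue *q);
    extern struct process *current;
    extern struct queue ready_queue;

    void relinquish(void)      /* the running process gives up the CPU */
    {
        disable_interrupts();
        save_state(current);                /* as in the preemption code */
        enqueue(current, &ready_queue);     /* running -> ready          */
        current = dequeue(&ready_queue);    /* choose the next process   */
        restore_state(current);
        enable_interrupts();
    }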

    In old systems, clock interrupts were periodic, for example, every 1/60 of a second. Modern hardware usually has a programmable timer, so it is possible for the scheduler to vary the length of the time-slice (interval between interrupts) depending on the behavior of the process being scheduled.

    The Berkeley Timesharing System did this by the late 1960's, giving long time slices to jobs that appeared compute bound, in an effort to minimize context switching, while giving short time slices to jobs that appeared I/O bound.
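
    As a sketch of the mechanism only, assuming a hypothetical memory-mapped interval timer (the address, width and tick values here are invented for illustration):

    /* TIMER_REG is hypothetical; real timer hardware differs */
    #define TICKS_SHORT   1   /* short slice for apparently I/O bound jobs    */
    #define TICKS_LONG   10   /* long slice for apparently compute bound jobs */

    volatile unsigned int *const TIMER_REG = (unsigned int *) 0xFFFF0100;

    /* load the next time-slice length just before dispatching a process */
    void set_time_slice(int looks_compute_bound)
    {
        *TIMER_REG = looks_compute_bound ? TICKS_LONG : TICKS_SHORT;
    }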

    Interrupts must be disabled during the interrupt service routine so that some other interrupt doesn't cause scheduling activity when the ready queue is halfway through being updated or when the process state is only partially saved or restored.

  12. Process States

    Semaphores were invented by E. W. Dijkstra in the late 1960's, but they were not widely accepted until the early 1970's.

    Prior to this, most operating systems had more ad-hoc and frequently incorrect implementations of the waiting state.

    UNIX, which was developed in the late 1960's, does not entirely reflect the post-Dijkstra view of processes. With the exception of some later versions (for example, System V UNIX), UNIX does not provide user-level semaphore services, so UNIX programmers who wish to write portable programs must avoid any such services that a particular version happens to provide.

  13. Semaphore Operations

    The name P is Dijkstra's original name for the wait operation; it comes from the Dutch word that is the cognate of the English pause. Here is some pseudocode that implements this operation on a uniprocessor:

    Wait( sem ) or P( sem ):
       begin
          Disable Interrupts
          if sem.count > 0
             sem.count := sem.count - 1
          else
             -- Save process state of current process
             current.pc := proc-return.pc
             current.registers := proc-return.registers

             -- Change current process state to waiting
             Enqueue( current, sem.queue )

             -- Make new process into the current process
             current := Dequeue( ready queue )

             -- Restore process state of new process
             proc-return.pc := current.pc
             proc-return.registers := current.registers
          endif
          Enable Interrupts
          return
       end
    

    Signal( sem ) or V( sem ):
       begin
          Disable Interrupts
          if empty( sem.queue )
             sem.count := sem.count + 1
          else
             temp := Dequeue( sem.queue )
             Enqueue( temp, ready queue )
          endif
          Enable Interrupts
          return
       end
    

    The above implementations are correct only if all processes have equal priority. If some have a higher priority, and if a low priority process issues a signal on a semaphore for which a high priority process is waiting, then the above code lets the low priority process continue running while the high priority process waits in the ready queue. This is wrong.
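
    One common repair is for the signal operation to compare priorities and preempt the signalling process when it has awakened something more important. Here is a sketch in C, assuming hypothetical kernel types and helpers (process, queue, switch_to and the rest stand in for the pseudocode above; this is not code from any real system):

    struct process { int priority; /* saved pc and registers omitted */ };
    struct queue;                       /* hypothetical opaque queue */
    struct semaphore { int count; struct queue *queue; };

    extern void disable_interrupts(void);
    extern void enable_interrupts(void);
    extern int  empty(struct queue *q);
    extern void enqueue(struct process *p, struct queue *q);
    extern struct process *dequeue(struct queue *q);
    extern void switch_to(struct process *p);  /* save current, run p */
    extern struct process *current;
    extern struct queue ready_queue;

    void semaphore_signal(struct semaphore *sem)       /* V( sem ) */
    {
        disable_interrupts();
        if (empty(sem->queue)) {
            sem->count = sem->count + 1;
        } else {
            struct process *awakened = dequeue(sem->queue);
            if (awakened->priority > current->priority) {
                /* the awakened process outranks the signaller, so the
                   signaller preempts itself in favor of the awakened one */
                enqueue(current, &ready_queue);
                switch_to(awakened);
            } else {
                enqueue(awakened, &ready_queue);
            }
        }
        enable_interrupts();
    }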