Midterm Solutions and Commentary

22C:116, Spring 2002

Douglas W. Jones

Midterm Distributions

Homework (total for 6 assignments)

        Mean   = 27.9      X
        Median = 28.6    X X
                       X X X
                       X X X
                       X X X
                       X X X
        _____X_X_____X_X_X_X____
         20. 22. 24. 26. 28. 30
Exam
        Mean   = 8.2
        Median = 9.0
                           X
             X             X
             X             X
             X X           X   X X
        _____X_X_X_____X_X_X_X_X_X_______X_____X____
         0 . 2 . 4 . 6 . 8 . 10. 12. 14. 16. 18. 20

Midterm Exam Solutions

  1. Background: Consider an operating system in which each user process has an independent virtual address space...

    Part A: Some kernel code is only executed as part of the execution of user processes, while other kernel code never executes on behalf of a user process. Give an example of each.

    When a user process calls a kernel function, that function runs on behalf of the user process. In sum, this description covers all code in trap service routines, including any functions they call.

    In contrast, interrupt service routines and everything they call must be considered to be executed on behalf of something external to any user process, since they are executed in response to events that occur in the outside world.

    2 had perfect scores here, 8 had good answers for one part and poor answers for the other, 4 had good answers for one part and a wrong answer for the other, and 4 had overall poor answers. The remainder had no credit at all. In sum, it was easier to identify code executed on behalf of user processes than it was to identify code executed on behalf of external entities.

    Part B: There is a heap in the kernel's address space. Give examples of the kinds of objects the kernel might allocate on this heap.

    The kernel heap would include process descriptions, open file descriptions, and device descriptions. In sum, it will include the representations of a variety of dynamically allocated objects managed by the operating system.

    5 did well, 4 gave one good example (while the plural construction in the question really demanded multiple examples), 4 gave a bad example, and the remainder received no credit.

    Part C: The physical memory of the machine is divided into page frames. At any instant, some of these page frames are assigned to user address spaces and some of them are free. What page frames are not assigned to any user address space yet are also not free?

    Page frames assigned to the kernel's heap are one example; in the context of the previous question, this ought to have been obvious, but only 4 gave this answer!

    The page frames that hold the code of the kernel itself are another example, and 14 said so.

    The frames that hold page tables of user processes are another example that 4 students used.

    The page frames holding the kernel's static variables are another.

    Only 3 students gave two or more good examples, 3 gave very weak examples, and 3 got no credit.

    Part D: When the kernel's heap manager runs out of memory, what does it do to get more? As a result, does the kernel heap manager face any special problems that a user's heap manager would not face? Explain briefly.

    The kernel heap manager can claim another page frame and add it to the kernel heap. If there are free page frames, these are obvious candidates for this. If there are none, a page of some virtual address space must be forced to disk in order to free a frame.
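    A minimal sketch of this page-frame-at-a-time growth (all names here are hypothetical; a fixed array stands in for the pool of free physical memory, and the harder page-out path is reduced to returning NULL):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096
#define NFRAMES 4   /* toy pool standing in for free physical memory */

static char frames[NFRAMES][PAGE_SIZE];
static int next_free_frame = 0;

/* Claim one whole page frame for the kernel heap. The frames handed
   back are generally not adjacent to the existing heap, which is the
   source of the fragmentation problem discussed below. NULL stands in
   for the case where a user page must first be forced to disk. */
static void *claim_page_frame(void) {
    if (next_free_frame >= NFRAMES)
        return NULL;
    return frames[next_free_frame++];
}
```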

    Only 1 student gave a really good answer here, and 3 more gave mediocre answers, mostly speaking in terms of entire processes being forced to disk instead of just one page of one address space.

    4 more students said, usually extremely tersely, "steal memory from a user" without a hint of a mechanism.

    Among the bad answers was "call sbrk()". 7 gave this answer! The problem is, sbrk() only has well defined semantics in the context of the user's virtual address space, and the problem statement makes it clear that the MMU is turned off while the kernel is running. Therefore, sbrk() cannot be called!

    The problem asked for an explanation of any special problems the kernel heap manager would face, and full credit depended not only on explaining the kernel's response to the heap overflow, but also on explaining the consequences this poses for the heap manager. The problem caused by adding random page frames to the heap is fairly obviously one of fragmentation, because the heap itself ends up being allocated from pages that are scattered around in main memory instead of being allocated in one continuous range of addresses.

    This question does have a strong relationship to the study questions that related to sbrk(), and perhaps students who were concentrating excessively on the details of the study questions without thinking about the more general context were misled by this.

    Another problem some students had with this question was the distracting issue of prompt versus lazy heap management. Nothing in the problem statement suggested this issue!

    Part E: A careless programmer forgets to deallocate objects on the heap that are no longer needed. When the programmer fixes this, the program runs much faster. Why?

    The forgotten objects on the user's heap cause the heap to grow within the user's virtual address space. As a result, as time passes, the locality of reference in the user's heap is likely to get worse and worse. 8 did well here.

    4 more had answers that were not as good, usually speaking of the increased cost of swapping or other concepts that were relevant but off target.

    3 were essentially right but made major errors, earning half credit. Another 4 earned half credit by correctly identifying some minor source of overhead such as the cost of the calls to sbrk() required to enlarge the heap. Only a very few earned no credit.

  2. Background: Like most other operating systems, IBM's VM operating system offers a virtual machine to each user process...

    Part A: A program running under VM tries to write a byte to the printer data register. VM has assigned an interprocess FIFO queue to that process as a virtual printer. Explain how VM goes about putting the user's byte in this queue.

    The write to the printer data register is done by an OUT instruction. This is privileged, so the trap service routine in the VM kernel gains control, determines that an OUT was executed with the printer data register as a destination, and simulates the semantics of the real OUT instruction by copying the source operand to the FIFO queue that serves as the virtual printer. Finally, the trap service routine returns to the instruction after the OUT and execution continues.
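    The trap-time simulation might be sketched as follows. The names (PRINTER_DATA_REG, the fifo type, simulate_out) are invented for illustration and stand in for VM's real instruction decode and data structures:

```c
#include <assert.h>

#define PRINTER_DATA_REG 0x40   /* invented device-register address */
#define FIFO_SIZE 16

struct fifo { char buf[FIFO_SIZE]; int head, tail, count; };

/* Append one byte to the queue serving as the virtual printer. */
static int fifo_put(struct fifo *f, char c) {
    if (f->count == FIFO_SIZE) return -1;      /* queue full */
    f->buf[f->tail] = c;
    f->tail = (f->tail + 1) % FIFO_SIZE;
    f->count++;
    return 0;
}

/* Called from the trap service routine after it decodes a privileged
   OUT instruction; reg and value come from the saved user state.
   On return, the trap handler resumes the user after the OUT. */
static int simulate_out(struct fifo *virtual_printer, int reg, char value) {
    if (reg == PRINTER_DATA_REG)
        return fifo_put(virtual_printer, value);
    return -1;      /* some other device; not handled in this sketch */
}
```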

    8 did well here. 3 understood that a trap was involved, but then confused the virtual printer (a FIFO queue) with a real printer. 3 seem to have entirely forgotten the trap mechanism and focused on the mechanics of enqueueing an item in a FIFO queue.

    The remainder did poorly; many appear to have been confused by the fact that this question did not deal with the distinction between an operating system running under VM and a simple application under VM. In fact, the bulk of the answer has nothing in particular to do with the features that make VM unique and instead focuses on the universal elements common to most operating systems.

    Part B: A program running under VM does not write a byte to the printer data register until it has first checked the printer status register to see if the printer is ready for the next byte. VM has assigned an interprocess FIFO queue to that process as a virtual printer. What is the use of the printer-ready bit in the virtual printer status register for that process?

    The virtual printer ready bit, in this case, would be a good place to return an indication that the queue is not full.
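    One way to picture this, with hypothetical names: VM can compute the virtual status register from the queue's state whenever the user inspects it, so the ready bit is set exactly when the queue has room.

```c
#include <assert.h>

#define FIFO_SIZE 16    /* capacity of the virtual-printer queue (invented) */
#define READY_BIT 0x01  /* hypothetical bit position of printer-ready */

/* Compute the virtual printer status register on demand: ready
   means a write to the data register would not overrun the queue. */
static int virtual_printer_status(int queue_count) {
    return (queue_count < FIFO_SIZE) ? READY_BIT : 0;
}
```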

    5 did well here. 3 mentioned preventing queue overrun errors (although there are many other ways to do this) and then added various confusing elements. 2 gave user interface details only, with no real answer to the question. Over 10 earned no credit.

    Part C: The program running under VM tries to turn on the MMU (by changing one bit in the MMU control register). What happens in the physical MMU? Hint: Was the physical MMU already turned on? If so, how does VM simulate the turning on or off of the virtual MMU?

    The answer to the hint question is: when a program under VM is running, the physical MMU is always turned on, whether that program believes its virtual MMU is on or off. Therefore, the action taken by VM in response to a user issuing the turn-on-MMU instruction cannot simply be to turn on the physical MMU. Rather, the system must change the state of the physical MMU to reflect, for that user, the consequences of turning on the virtual MMU. An obvious approach is to compute a new memory map by composing the map VM already maintains for that user with the map the user has provided, and then install this composed map in the physical MMU.
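    The composition might be sketched as follows, assuming flat single-level page maps and invented names; real address translation on VM's hardware is considerably more elaborate:

```c
#include <assert.h>

#define NPAGES 8    /* toy address-space size */

/* Compose the guest's map (virtual page -> guest-physical page) with
   VM's own map for that guest (guest-physical -> host-physical),
   yielding the map to load into the physical MMU. */
static void compose_maps(const int guest_map[NPAGES],
                         const int host_map[NPAGES],
                         int physical_map[NPAGES]) {
    for (int v = 0; v < NPAGES; v++)
        physical_map[v] = host_map[guest_map[v]];
}
```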

    Only 2 did well here, while 5 earned half credit by correctly answering the hint question without saying anything about the consequences.

    Again, over 10 earned no credit. This is disappointing, considering that this question follows directly from the final midterm study question.

  3. Background: The original UNIX system had no user-accessible semaphore mechanisms; these were added as afterthoughts later in the development of UNIX. As a result, many of the older UNIX utilities use various kluges to achieve mutual exclusion...

    Part A: Aside from file-system overhead (which is significant), what is the worst feature of this mutual exclusion mechanism, from the point of view of system performance, and how could you reduce the impact of this feature? (Hint: Try writing quick and dirty pseudocode for the lock operation used to enter a critical section.)

    		do {
    			fd = open( lockfile, O_CREAT | O_EXCL, 0600 );
    		} while (fd < 0);	/* poll until we create the lock file */
    		close( fd );
    		/* critical section */
    		unlink( lockfile );
    
    The lock-file mechanism requires busy waiting, chewing up quite a bit of time. The impact of busy waiting can be reduced by including some kind of relinquish in the polling loop -- pausing for a second on the real-time clock is one feasible way to do this.
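    With the relinquish added, the lock operation might look like this in C; the lock-file path is whatever the application chooses, and sleep(1) is the one-second pause on the real-time clock mentioned above:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Lock-file entry protocol with a relinquish in the polling loop.
   Sleeping between attempts means at most one poll per second
   instead of a tight busy wait. */
static void lock(const char *lockfile) {
    int fd;
    while ((fd = open(lockfile, O_CREAT | O_EXCL, 0600)) < 0)
        sleep(1);               /* give up the CPU between polls */
    close(fd);
}

static void unlock(const char *lockfile) {
    unlink(lockfile);
}
```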

    6 gave essentially this answer. 2 more got half credit for confused answers that also mentioned busy waiting. 3 more got partial credit, while over 8 earned no credit.

    A surprising number wrote about the overhead of file creation, despite the fact that the question began with "Aside from file-system overhead (which is significant)..."

    Part B: Given that all processes involved in some activity can read and write a particular pipe, we could use the pipe as a lock -- read one character from the pipe to enter the critical section, write a character to the pipe to exit. How is a user process blocked while it is waiting to enter the critical section? Compare this with your answer to part A.

    While the process is waiting, it is blocked attempting to consume a byte from the FIFO queue used to implement the pipe. Typically, this involves blocking on a kernel semaphore. As a result, there is no busy-waiting overhead.
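    A sketch in C, with hypothetical helper names; the single byte kept in the pipe acts as a binary semaphore token:

```c
#include <assert.h>
#include <unistd.h>

static int lockpipe[2];

/* The pipe starts holding one token byte; reading the token enters
   the critical section, writing it back exits. */
static void pipe_lock_init(void) {
    (void) pipe(lockpipe);
    (void) write(lockpipe[1], "T", 1);   /* one token = unlocked */
}

/* Returns the number of bytes read (1 on success). If the token is
   gone, read() blocks inside the kernel -- no busy waiting. */
static int pipe_lock(void) {
    char c;
    return (int) read(lockpipe[0], &c, 1);
}

static void pipe_unlock(void) {
    (void) write(lockpipe[1], "T", 1);
}
```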

    6 gave essentially this answer; 3 said that the process is blocked but continued with confusing answers. Over 7 earned no credit. This was the last question on the exam, and several left it blank, suggesting that they were out of time.