Homework 3

22C:116, Fall 1995

Due Friday Sept. 8, 1995, in class

Douglas W. Jones
  1. Background: On some modern RISC processors, the memory management unit has no understanding of page tables. Instead of a translation lookaside buffer that is merely a cache of recently referenced page table entries, the translation lookaside buffer is directly modifiable by software; if the MMU cannot translate a virtual address using one of the entries in the TLB, a fault condition is raised and the software must intervene.

    Thus, in addition to handling page faults, the MMU fault handler software must also handle TLB misses by loading the appropriate page table entry into one of the TLB registers. This simplifies the MMU hardware; the hardware need not initiate memory cycles to load the TLB from the page table, and it need not decide which TLB entry to invalidate in order to make room for the replacement. On the other hand, operations that would normally be done by hardware in one memory cycle now require the execution of a trap service routine that might involve hundreds of instructions.
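    The division of labor described above can be sketched in C. This is a minimal simulation, not any real architecture's interface: the page table layout, the TLB size, and the round-robin replacement policy are all assumptions made for illustration. The point is that the search belongs to hardware, while the refill and victim choice belong to the trap handler.

```c
#include <assert.h>
#include <stddef.h>

#define TLB_SIZE 4     /* hypothetical: a tiny software-loaded TLB */
#define PT_SIZE  16    /* hypothetical: 16-entry page table */

/* One software-visible TLB entry: virtual page -> physical frame. */
struct tlb_entry { int vpn; int pfn; int valid; };

static struct tlb_entry tlb[TLB_SIZE];
static int page_table[PT_SIZE]; /* pfn for each vpn; stands in for real PTEs */
static int next_victim;         /* replacement decision made by software     */
static int miss_count;

/* The trap service routine: on a TLB miss, software itself picks a
   victim entry and loads the page table entry into the TLB. */
static void tlb_miss_handler(int vpn)
{
    miss_count++;
    tlb[next_victim].vpn = vpn;
    tlb[next_victim].pfn = page_table[vpn];
    tlb[next_victim].valid = 1;
    next_victim = (next_victim + 1) % TLB_SIZE; /* simple round robin */
}

/* What the MMU does in hardware: search the TLB; trap on a miss. */
static int translate(int vpn)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].pfn;      /* hit: one "memory cycle" */
    tlb_miss_handler(vpn);          /* miss: the trap, hundreds of instructions */
    return translate(vpn);          /* retry the faulting access */
}
```

    Counting calls to the handler against calls to translate gives a direct handle on the cost ratio the question asks about.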

    The question: Discuss the relative cost of this approach to translation lookaside buffer management, when compared to the traditional approach. Your discussion should recognize that this cost is a function of the ratio of TLB entries to page frames in main memory, and it should comment on the expected performance penalty when this ratio is one, greater than one, and less than one.

  2. Background: In the original version of UNIX, main memory was divided into a number of segments, each holding either the reentrant code of a process, the stack of a process, or the heap of a process. The fork operation was done by allocating space on disk for the new process's stack and heap segments, and then swapping out the stack and heap of the process that executed the fork into this newly allocated space. Finally, the fork was completed by editing the stack segment, inserting the process ID of the child into the appropriate function return location, before allowing the process to continue.

    Because swapping was normally required for context switches, the expense of a fork operation was not much different from the expense of routine context switching. This situation changed when demand-paged virtual memory was introduced into UNIX.
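    The user-visible contract that the mechanism above implements is still with us. The following is a short C sketch of standard POSIX fork usage, not early-UNIX code; the helper function and its name are invented for illustration. Note how the "edited return value" shows up: fork returns twice, yielding 0 in the child and the child's process ID in the parent.

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper: fork once; the child exits immediately with
   'code', the parent waits for it and returns the child's exit
   status (or -1 on error). Both processes execute this same code. */
int spawn_and_wait(int code)
{
    pid_t pid = fork();         /* duplicate the calling process */
    if (pid < 0)
        return -1;              /* fork failed */
    if (pid == 0)
        _exit(code);            /* child: fork "returned" 0 here */
    int status = 0;             /* parent: pid is the child's process ID */
    if (waitpid(pid, &status, 0) != pid || !WIFEXITED(status))
        return -1;
    return WEXITSTATUS(status);
}
```

    The semantics are that the child is a copy of the parent at the instant of the fork, which is exactly what makes part C interesting under virtual memory.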

    The question, part A: Explain why early UNIX systems didn't need to swap with every context switch.

    The question, part B: Explain why the early UNIX fork operation didn't need to duplicate the code segment of the process being forked.

    The question, part C: Explain the problem raised by the semantics of the UNIX fork operation in the context of virtual memory.

  3. Background: Consider a storage manager written to support a small single-user machine that has no virtual memory mechanism. This manager faces two primary problems: achieving fast operation and minimizing fragmentation. One useful way to speed deallocation is to avoid combining free blocks until there is a need to allocate a block larger than any existing free block. Only at that point is it necessary to begin a sweep through the heap combining free blocks to make larger blocks, and the sweep can stop as soon as a large enough block is manufactured.
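    A minimal sketch of such a lazy-coalescing manager in C may make the scheme concrete. The header-unit layout, first-fit search, and tiny heap size are assumptions made for illustration, not a real allocator. The key properties from the background are visible: release only marks a block free, and merging is deferred until a first-fit pass fails, at which point the sweep stops as soon as a big enough block has been manufactured.

```c
#include <assert.h>
#include <stddef.h>

#define NUNITS 32   /* hypothetical heap of 32 header-sized units */

/* Every block starts with one header unit; size counts units,
   including the header. Blocks are laid out contiguously. */
struct header { size_t size; int free; };

static struct header heap[NUNITS];

static void heap_init(void)
{
    heap[0].size = NUNITS;   /* one big free block */
    heap[0].free = 1;
}

/* The deferred sweep: merge adjacent free blocks, stopping as soon
   as a block of at least 'need' units has been manufactured. */
static struct header *coalesce_until(size_t need)
{
    size_t i = 0;
    while (i < NUNITS) {
        if (heap[i].free) {
            size_t j = i + heap[i].size;
            while (j < NUNITS && heap[j].free) {   /* absorb neighbors */
                heap[i].size += heap[j].size;
                j = i + heap[i].size;
            }
            if (heap[i].size >= need)
                return &heap[i];                   /* stop early */
        }
        i += heap[i].size;
    }
    return NULL;
}

static void *alloc(size_t bytes)
{
    /* one header unit plus enough units to hold the payload */
    size_t need = 1 + (bytes + sizeof(struct header) - 1) / sizeof(struct header);
    for (int pass = 0; pass < 2; pass++) {
        size_t i = 0;
        while (i < NUNITS) {             /* first fit over all blocks */
            if (heap[i].free && heap[i].size >= need) {
                if (heap[i].size > need) {          /* split off remainder */
                    heap[i + need].size = heap[i].size - need;
                    heap[i + need].free = 1;
                    heap[i].size = need;
                }
                heap[i].free = 0;
                return &heap[i + 1];     /* payload starts after header */
            }
            i += heap[i].size;
        }
        if (pass == 0 && coalesce_until(need) == NULL)
            break;                       /* even a full sweep cannot help */
    }
    return NULL;
}

/* Deallocation is fast: just mark the block free, no merging at all. */
static void release(void *p)
{
    ((struct header *)p - 1)->free = 1;
}
```

    Note that both the first-fit pass and the sweep touch every block header in address order, which is exactly the behavior worth re-examining in the question that follows.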

    The question: Now consider what happens when this storage manager is moved to a large virtual memory environment and used to allocate storage on behalf of one user program in one user's virtual address space. Which aspects of this change will have an impact on the appropriateness of the algorithm, and what aspects of the algorithm pose the greatest problems in this new environment? (This is a good qualifying exam question, if we ever re-impose some kind of qualifying exams in this department.)