22C:116, Spring 1997

Douglas W. Jones


On the front cover of the exam booklet, write your name and E-mail address. Your E-mail address is optional; if you provide it, your final grade will be E-mailed to you. Write legibly! Illegible answers will result in reduced credit. This exam is worth 3/10 of your final grade, 30 points!
  1. Background: Recall the mark-sweep garbage collection algorithm. To collect garbage, mark all reachable nodes by recursively traversing the data structure and setting mark bits, then sweep up all unreachable items in the heap and, while you're at it, reset all the mark bits.
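    The classic algorithm recalled above can be sketched in a few lines. This is a minimal illustration, not part of the exam; the `Node` class and all names are assumptions made for the sketch.

    ```python
    # A minimal sketch of mark-sweep collection over an in-memory graph.
    # Node, mark, sweep, and heap are illustrative names, not from any system.

    class Node:
        def __init__(self, name):
            self.name = name
            self.marked = False
            self.children = []

    def mark(node):
        """Recursively set mark bits on every reachable node."""
        if node.marked:
            return              # already visited; cycles terminate here
        node.marked = True
        for child in node.children:
            mark(child)

    def sweep(heap):
        """Reclaim unmarked nodes and reset mark bits for the next cycle."""
        live = [n for n in heap if n.marked]
        for n in live:
            n.marked = False
        return live

    # A small heap with a cycle and one unreachable node.
    a, b, c = Node("a"), Node("b"), Node("c")
    a.children = [b]
    b.children = [a]            # cycle: a <-> b
    heap = [a, b, c]            # c is garbage
    mark(a)                     # a is the root
    heap = sweep(heap)
    print([n.name for n in heap])   # -> ['a', 'b']
    ```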

    Consider a file system where each file or directory may be indexed from any number of directories, and where the file system clearly distinguishes files from directories. Files contain arbitrary user data, while directories contain only links to files or directories. Garbage collection is required in this system in order to reclaim unreachable files or directory structures.

    Consider a distributed implementation of this file system under a system like Amoeba, where links from directories to files or other directories are represented by network capabilities. Assume that there are many file servers and that there are no restrictions on file or directory placement -- that is, a directory may reference directories and files on any server in the system.

    Note that, for the purpose of this problem, we reject, outright, the possibility of using aging or some kind of timeout to delete objects. The only way an object will be deleted is if no capability for that object survives in the reachable part of the directory structure. Therefore, the Amoeba age and destroy operations are not implemented by our file server.

    Part A: (2.7 points) The marking phase of our distributed garbage collection algorithm is initiated by issuing a touch operation directed to the root directory or directories in the file structure. Outline the response of the file server to a touch operation directed to one of its objects. Take into account the different object types (file/directory) and whether or not the object is currently marked at the time it is touched.

    Part B: (2.7 points) Present a convincing (short) argument that the marking phase of our distributed garbage collector terminates after a finite time, despite the presence of loops in the directory structure.

    Part C: (2.7 points) Discuss (briefly) the problem of detecting termination of the marking phase. Does your solution to part A detect termination? If so, what signal does the initiator of the marking phase receive to indicate termination? If not, suggest a way that the algorithm/protocol could be modified to allow termination detection.
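    One plausible shape for the touch handler of Part A can be sketched as follows. This is a hedged illustration under simplifying assumptions: `Obj`, `touch`, and direct references standing in for Amoeba network capabilities are all inventions of the sketch, not the exam's expected answer.

    ```python
    # Sketch: a touch handler that marks an object and, if it is an
    # unmarked directory, propagates touch to every capability it holds.
    # A marked object absorbs further touches, so loops terminate.

    class Obj:
        def __init__(self, kind, links=()):
            self.kind = kind          # "file" or "directory"
            self.marked = False
            self.links = list(links)  # capabilities (here, direct refs)

    def touch(obj):
        if obj.marked:
            return                    # second touch: do nothing
        obj.marked = True
        if obj.kind == "directory":
            for cap in obj.links:     # in Amoeba, remote touch messages
                touch(cap)

    f = Obj("file")
    d2 = Obj("directory", [f])
    root = Obj("directory", [d2])
    d2.links.append(root)             # a loop in the directory structure
    touch(root)
    print(root.marked, d2.marked, f.marked)   # -> True True True
    ```

    Because each server touches an object's outgoing links at most once, the number of touch messages is bounded by the number of links, which is the core of the termination argument asked for in Part B.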

  2. Background: Consider a distributed virtual memory system. For the sake of simplicity, assume only one shared address space exists on the entire system, that there is no secondary storage, and that the total number of pages in the address space is smaller than the total number of page frames on the system. Furthermore, assume that each machine has only one user process.

    Assume that each machine has a kernel that includes a page fault handler that gains control in response to faults raised in the execution of the local user process, and that it includes a virtual memory network server that gains control when a message concerning virtual memory management arrives from elsewhere. All messages are point-to-point, and each kernel knows about only some of the other kernels in the system.

    Assume that each machine has a trivial memory management unit that allows each page in the address space to be marked as either read-write or inaccessible.

    Part A: (2.8 points) Given only two machines in the network, what sequence of messages would the fault handler typically exchange with the other machine in the network in response to a page fault? Cover the case where there is no free page frame as well as the case where a free frame is locally available.

    Part B: (2.8 points) Given multiple machines, propose a decentralized protocol for finding the machine that has a particular page and obtaining a copy of that page. (Note for the curious: A simplification of the same protocol can probably be used to find a free page frame in case page replacement is needed.)

    Part C: (2.8 points) Assuming that the distributed virtual memory software maintains some bits of extra state information per page frame, what is the most effective page replacement policy that can be implemented on each machine, and in that case, what are these bits used for?
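    One decentralized approach to Part B is a flood search with duplicate suppression, since each kernel knows only some of its peers. The sketch below is illustrative only; `Kernel`, `find`, and the synchronous call graph (standing in for asynchronous point-to-point messages) are all assumptions of the sketch.

    ```python
    # Sketch: flood a "who has page p?" query through partially
    # connected kernels, deduplicating by request id so the flood
    # terminates even when the kernel graph contains cycles.

    class Kernel:
        def __init__(self, kid):
            self.kid = kid
            self.neighbors = []
            self.frames = {}          # page number -> page contents
            self.seen = set()         # request ids already handled

        def find(self, page, req_id, origin):
            if req_id in self.seen:
                return None           # drop duplicate copies of the flood
            self.seen.add(req_id)
            if page in self.frames:
                return (self.kid, self.frames[page])  # reply to origin
            for n in self.neighbors:
                hit = n.find(page, req_id, origin)
                if hit:
                    return hit
            return None

    k1, k2, k3 = Kernel(1), Kernel(2), Kernel(3)
    k1.neighbors = [k2]
    k2.neighbors = [k1, k3]           # k1 does not know k3 directly
    k3.neighbors = [k2]
    k3.frames[7] = "page-7-data"
    hit = k1.find(7, req_id=42, origin=1)
    print(hit)   # -> (3, 'page-7-data')
    ```

    As the exam hints, the same flood with "who has a free frame?" as the query could serve page replacement.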

  3. Background: In the Charm and Amulet systems, the receipt of a message on a host machine causes the creation of a new thread. Messages cannot be received in any other way! The kernel primitives available to threads include send(msg,dst) and exit(). Messages are queued, and the send primitive is nonblocking. The message destination specifies, in effect, an execution context (an object) and the code to execute in that context (a method of that object). Messages may contain arbitrary amounts of all kinds of information.

    Threads in Charm and Amulet never block. Once a thread is initiated, it is run to completion; as a result, a Charm or Amulet kernel employs no context switching.

    Part A: (2.7 points) Suppose you have a file object f with the method read, where the message contains the sector number of the file you wish to read. What other information would you have to include in this message? Discuss how you would structure a program to read all sectors of a file and perform some processing operation on them.

    Part B: (2.7 points) Discuss how you would structure a semaphore object under Charm or Amulet. The methods of this object are P and V. What parameters would each of these require, what state variables would the object contain, and how would you implement the wait-queue implied by a semaphore?
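    Since threads in these systems run to completion and never block, one plausible shape for Part B is a semaphore whose P operation carries a continuation: the message to send once the semaphore is granted. The sketch below is an assumption-laden illustration; `Semaphore`, the Python `deque` wait-queue, and plain callables standing in for reply messages are all inventions of the sketch.

    ```python
    # Sketch: a run-to-completion semaphore. P never blocks; instead it
    # either grants immediately or queues the caller's continuation,
    # which V later releases by "sending" it.

    from collections import deque

    class Semaphore:
        def __init__(self, count):
            self.count = count
            self.queue = deque()      # wait-queue of deferred continuations

        def P(self, continuation):    # continuation stands in for a reply msg
            if self.count > 0:
                self.count -= 1
                continuation()        # grant immediately
            else:
                self.queue.append(continuation)

        def V(self):
            if self.queue:
                self.queue.popleft()()   # hand the count to a waiter
            else:
                self.count += 1

    log = []
    s = Semaphore(1)
    s.P(lambda: log.append("first"))
    s.P(lambda: log.append("second"))   # count exhausted, so queued
    s.V()                               # releases the queued waiter
    print(log)   # -> ['first', 'second']
    ```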

  4. Background: Consider the Intel 80286 architecture running in protected mode. At any instant, a process can address 4 memory segments: the code segment, the stack segment, and two others, each limited to 64K bytes. These are addressable through segment description registers that are part of the CPU hardware. There are instructions to load the segment description registers from a special segment, the process base segment, also described by a hardware segment description register and not directly manipulable by any user-mode instructions.

    The operating system maintains a master segment table in memory; this table contains the actual base and size information for every segment in the system, and perhaps other information, as needed to interpret the contents of the process base segment for each process.

    Formally, the 80286 architecture uses capability-based addressing. The segment descriptor registers each hold a capability for a memory segment, and the process base segment is the capability list describing a process's current protection domain. To my knowledge, the 80286 architecture has never been fully exploited by a strongly protected operating system.

    Part A: (2.7 points) How would demand-driven virtual memory on this system differ from traditional demand-paged virtual memory? What new problems would the fault handler have to deal with? Does it matter whether faults are detected when a segment description register is loaded or when that segment is actually addressed?

    Part B: (2.7 points) Consider two kernel services, insert(cap,p), which puts cap, a capability, into slot p of the process base segment, and extract(p), which returns the capability found in slot p of the process base segment. The capability representation taken by insert and returned by extract is a bit pattern that can be stored in any user data structure. How could the insert and extract operations be constructed so that users would have an arbitrarily difficult time forging capabilities?
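    One way to make the bit patterns of Part B hard to forge is to seal each capability with a keyed one-way function known only to the kernel, so that extract hands out a tagged pattern and insert rejects any pattern whose tag does not verify. The sketch below is illustrative only: HMAC-SHA256 stands in for whatever one-way function a real kernel might use, and `seal`, `base_segment`, and the string encoding are inventions of the sketch.

    ```python
    # Sketch: sealed capabilities. Users may store the bit pattern in
    # any data structure, but cannot mint a valid tag without the key.

    import hmac, hashlib

    KERNEL_KEY = b"secret-known-only-to-the-kernel"
    base_segment = {}                  # slot -> (object id, rights)

    def seal(obj_id, rights):
        tag = hmac.new(KERNEL_KEY, f"{obj_id}:{rights}".encode(),
                       hashlib.sha256).hexdigest()
        return f"{obj_id}:{rights}:{tag}"    # pattern given to the user

    def extract(p):
        obj_id, rights = base_segment[p]
        return seal(obj_id, rights)

    def insert(cap, p):
        obj_id, rights, tag = cap.split(":")
        if not hmac.compare_digest(cap, seal(obj_id, rights)):
            raise PermissionError("forged capability")
        base_segment[p] = (obj_id, rights)

    base_segment[0] = ("seg17", "rw")
    cap = extract(0)                   # user holds an opaque bit pattern
    insert(cap, 3)                     # genuine capability: accepted
    print(base_segment[3])             # -> ('seg17', 'rw')
    ```

    Forging then requires guessing the tag, which can be made arbitrarily difficult by lengthening the key and tag.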

  5. Background: Mappings from name to address dominate the operating system field. Network name service, virtual memory management, directory servers and others all serve this same function in different parts of the system.

    The Question: (2.7 points) Explain how the use of a cache or caches in each of these contexts can speed name to address translation, and explain what operations are required, in each of these contexts, to account for a cache miss.
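    The common pattern across all of these contexts can be sketched as a cache in front of a slow authoritative lookup. This illustration is an assumption of the answer sketch, not part of the exam; `translate`, `authoritative_lookup`, and the host name are all made up, and the dictionary stands in for a TLB, a name-server cache, or a directory cache.

    ```python
    # Sketch: a generic name-to-address cache. A hit avoids the slow
    # authoritative lookup; a miss performs it and installs the binding.

    lookups = []                       # counts slow authoritative lookups

    def authoritative_lookup(name):
        lookups.append(name)           # stands in for a network round trip
        return "addr-of-" + name

    cache = {}

    def translate(name):
        addr = cache.get(name)
        if addr is None:               # cache miss: expensive lookup,
            addr = authoritative_lookup(name)
            cache[name] = addr         # then remember it for next time
        return addr

    print(translate("ftp.cs.uiowa.edu"))   # miss: consults the server
    print(translate("ftp.cs.uiowa.edu"))   # hit: answered from the cache
    print(len(lookups))                    # -> 1
    ```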