Final

22C:116, Spring 1999

Douglas W. Jones
Please write your answers on the paper provided. Make sure your name is on the front page, in the same form as it appears in the official class list! Illegible and overly long answers are difficult to grade. Please leave margins around your answers!

NOTE: Essentially all questions on this test can be answered in one short paragraph or less! There are no long essay questions here!

This exam is worth 3/10 of the final grade (30 points; allocate 4 minutes per point).

  1. A question: (based on study question 1) Explain how thunks may be used to implement pass-by-reference semantics in the context of parameters to an RPC. Your answer should be short and at a high level, avoiding complex algorithmic content and avoiding reference to specific operating systems or RPC protocols. (2 points)
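
    For illustration only: a thunk can be pictured as a pair of operations that the server invokes, by RPC back to the client, whenever it needs to read or write the actual parameter. The names below (thunk, get, put, increment) are invented for this sketch and do not come from any particular system:
    struct thunk {
       int  (* get)( void );         /* fetch the current value of the actual parameter */
       void (* put)( int value );    /* store a new value into the actual parameter */
    };

    /* server-side code uses the thunk wherever it would otherwise
       dereference a pointer; in an RPC setting, each call below would
       become a request sent back to the client */
    void increment( struct thunk * p )
    {
       p->put( p->get() + 1 );
    }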

  2. Background: (based on study question 1) If an Amoeba client wishes to use thunks to pass parameters to an Amoeba server, the client must create a thunk-server.

    Part A: Should the thunk-server be a new process, a new thread, or part of the thread that passes the thunk? Explain your answer, clearly stating your reason for rejecting the inappropriate alternatives suggested in the question. (2 points)

    Part B: When a client thread wishes to pass a thunk to an Amoeba server, the actual parameter is a capability. How does the server use this capability? (2 points)

    Part C: When a client thread wishes to pass a thunk to an Amoeba server, the actual parameter is a capability. How does the client create or obtain the capability for its thunk-server? (2 points)

  3. Background: (based on study question 2) Here is a version of thread_read() based on the nb_read routine defined in the study question:
    struct done_rec {
       thread_semaphore * s;
       int cc;
    };
    
    void done( int dcc, void * p )
    {
       struct done_rec * d = (struct done_rec *) p;
       d->cc = dcc;
       thread_signal( d->s );
    }
    
    int thread_read( int d, void * buf, int len )
    {
        int cc;
        struct done_rec d;
        thread_semaphore_init( &d.s, 0 );
        cc = nb_read( d, buf, len, done, (void *)&d );
        thread_wait( &d.s );
        return( cc + d.cc );
    }
    

    Part A: The top-level design of the above code contains a major error. Explain! (2 points)

    Part B: The structure done_rec was added to the above code as an afterthought. In the original version, only the semaphore was passed as a parameter to done, and the delayed character count dcc was returned from done to thread_read through a global variable. What mistake was corrected by introducing the structure done_rec into the code? (2 points)

    Part C: Consider the problem of implementing the nb_read() operation for disk files. The implementation may involve adding code to many levels of the system, from the disk interrupt service routine to the disk driver to the sector-buffering cache manager to the actual code of the kernel call itself. Where in this code would you put the code to execute the callback to done()? If there are multiple places where it is appropriate to execute this callback, explain why. (2 points)

  4. Background: (based on study question 3) Consider the problem of running elections on an arbitrarily connected point-to-point network. The basic election algorithm on such a network operates as follows:

    Part A: Consider the case where two elections are initiated concurrently -- that is, where, after one machine has initiated an election, a machine that has not yet received the call for election from the first machine also calls for an election. Explain how the election protocol can be modified to eliminate the extra election, so that only one election will take place. (2 points)

    Part B: The election algorithm described above is not fault tolerant! Give an example of the kind of fault that causes the algorithm to fail, and suggest a modification that will make it fault tolerant. (2 points)

  5. Background: Whenever pointers are shared between different addressing domains, there are potential problems. For example, when pointers are passed from system users to the kernel, where users run with the virtual memory mechanism turned on, and the kernel runs in physical memory, the kernel must do extra work to interpret any pointer parameters.

    Consider an operating system kernel that allows users to share pages arbitrarily, so, for example, user A may insert page X at virtual address y while user B inserts page X at virtual address z, where y and z differ.
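
    For illustration (the type and function names below are invented for this question), the difficulty is that a pointer stored inside the shared page is an absolute virtual address, so the same stored value names different locations in the two address spaces:
    struct node {              /* a list node stored inside shared page X */
       struct node * next;     /* held as an absolute virtual address */
       int item;
    };

    /* user A links two nodes that both lie inside page X; the stored
       address is meaningful only at A's mapping of the page, y */
    void link_in_A( struct node * a_node, struct node * a_other )
    {
       a_node->next = a_other;
    }

    /* user B mapped page X at z, so the address stored by A does not
       refer to the corresponding node as seen from B's address space */
    struct node * follow_in_B( struct node * b_node )
    {
       return b_node->next;
    }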

    Part A: Suppose users A and B wish to share a binary tree or a linked list in page X, where both users do insert and delete operations on items in this shared data structure. What problems would the users have with the pointers used to link elements of this structure? (2 points)

    Part B: How could you solve the problem you identified, assuming that you are not allowed to move the shared page to identical virtual addresses in both user address spaces? (2 points)

    Part C: The presence of shared pages also complicates certain problems within the system kernel. Assuming the kernel allows demand-paged virtual memory, what data structures must be present in the frame table to allow replacement of an infrequently used shared page? (2 points)

  6. Background: Consider a file server in a distributed system, where the file server maintains a local cache of recently referenced disk sectors, and the server has a good disk scheduler. Assume that each machine in the system has a good local real-time clock.

    The Problem: Propose a program of experiments a user could perform in order to estimate the cache size in the file server's memory. (2 points)

  7. Background: Consider a system where each user has access to a single paged virtual address space. The system kernel does not handle demand paged virtual memory! The kernel provides the following operations to user processes:
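
    For concreteness, the operations can be summarized as follows. This summary is a sketch reconstructed from the descriptions below; the exact calling sequences are an assumption:
    r = map( va )        /* associate a page frame with the page containing va */
    r = unmap( va )      /* break any association of a page frame with va */
    on_fault( va )       /* register the user function at address va as the fault handler */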

    Given va, a virtual address, map() causes the kernel to associate a page frame with the page containing the indicated address, and unmap() causes the kernel to break any association of a page with the indicated virtual address. For the sake of simplicity, assume that there are no auxiliary access rights on a page, so access is granted on an all-or-nothing basis. The return code r indicates success or failure. Unmap fails if va was already unmapped. Map only fails when there is no free page frame to map to the indicated address. Map is a no-op, returning success, if the indicated address was already mapped.

    After a call to on_fault(), when the process accesses an unmapped page, a callback will be made to the user-supplied function at va. The parameter to this user-supplied function is the virtual address of the unmapped page. Any return from the user-supplied function will attempt to re-execute the instruction that caused the fault. on_fault() need not be called more than once.

    Part A: If a user wishes to write a user-level page-fault handler for this system, what page replacement policies does this system allow? (2 points)

    Part B: Ignoring the issue of page replacement policy, write detailed pseudocode for a user-level page-fault handler on this machine. Assume the availability of suitable read and write routines for disk I/O. (2 points)