Final Exam Solutions and Commentary

22C:116, Fall 2001

Douglas W. Jones

Score Distribution

Mean   = 16.4
Median = 17.0
                              X
___________X_____X____X______XX____X___X__XX_X_X__X_____________
 0 . 2 . 4 . 6 . 8 . 10. 2 . 4 . 6 . 8 . 20. 2 . 4 . 6 . 8 . 30
           - - C C C + + - - B B B + + - - A A A + +

Solutions and Comments

  1. Part A: What problem posed by an inadequacy in the design of the FinalOS kernel is solved by starting each message with the message length in bytes?
    Neither the send nor the receive kernel services provide any indication that the recipient's buffer was too small to hold the message sent. By starting the message with its own length, the recipient can compare the integer at the start of the buffer with the length of the recipient's buffer in order to determine if there was an error.
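
    A minimal sketch in C of this idea, assuming hypothetical kernel calls send() and receive() that report no length information; the struct layout and call names are illustrative, not part of the FinalOS specification:

      #include <string.h>

      /* hypothetical stand-ins for the FinalOS primitives */
      extern void send(long channelpub, const void *buf, int len);
      extern void receive(long channelpriv, void *buf, int len);

      #define MAX_MSG 256

      struct message {
          int  length;                      /* total bytes, including this field */
          char body[MAX_MSG - sizeof(int)];
      };

      /* sender: record the true length before sending (assumes n fits in body) */
      void send_with_length(long channelpub, const char *data, int n)
      {
          struct message m;
          m.length = (int)sizeof(int) + n;
          memcpy(m.body, data, n);
          send(channelpub, &m, m.length);
      }

      /* recipient: detect truncation after the fact */
      int receive_checked(long channelpriv, struct message *m)
      {
          receive(channelpriv, m, (int)sizeof *m);
          if (m->length > (int)sizeof *m)
              return -1;                     /* message was larger than our buffer */
          return m->length - (int)sizeof(int);   /* valid body bytes */
      }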

    10 did well here, 2 got half credit for the strange idea that the recipient could first receive the length and then allocate a buffer of that size in order to receive the remainder of the message. This assumes behaviors that are impossible under FinalOS as it was specified in the study questions or the prologue to the exam. One more got severely reduced credit for making a wider set of assumptions that were incompatible with FinalOS.

    Part B: If we reserve a different channelpub for each service provided by a server, we could eliminate the need for a field in the message used to identify the service requested. Why is this a bad idea in the context of the FinalOS kernel?

    A server under FinalOS cannot test to see if there are messages waiting on a channel, nor can the server wait on any of a group of channels. As a result, the server is obligated, at the time it waits, to name the specific channel on which it awaits a message. Consequently, all services that a server could perform in any order for any client must share the same channel.
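
    A minimal sketch of the resulting single-channel server loop, assuming an illustrative message layout with a service-code field; none of these names are specified by FinalOS:

      /* hypothetical stand-in for the FinalOS receive primitive */
      extern void receive(long channelpriv, void *buf, int len);

      enum service { SVC_OPEN = 1, SVC_READ = 2, SVC_WRITE = 3 };

      struct request {
          int  length;            /* see part A */
          int  service;           /* which operation the client wants */
          long reply_channel;     /* channelpub on which to send the reply */
          char args[240];
      };

      void server_loop(long my_channelpriv)
      {
          struct request req;
          for (;;) {
              receive(my_channelpriv, &req, (int)sizeof req);
              switch (req.service) {
              case SVC_OPEN:  /* handle open  */ break;
              case SVC_READ:  /* handle read  */ break;
              case SVC_WRITE: /* handle write */ break;
              default:        /* malformed request; ignore or report */ break;
              }
          }
      }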

    3 did well here, 3 more mentioned the specific need for a multiwait primitive with no (or insufficient) explanation of why it was needed. 3 got partial credit, focusing on secondary issues while they appeared to understand part of the problem. 4 got no credit for answers that expressed no clear understanding of any relevant issue.

    Part C: One approach to calling a FinalOS server is for the calling process to set aside a fixed channel (with associated public and private identifiers) to be used for all return messages from any server. An alternative is to allocate a new channel for one-time use prior to each call to a server; in this case, a new channelpriv must be computed and a new channelpub must be derived from it prior to each call to a server. What problem does the second and more complex approach solve?

    One-time-use channels prevent a reply from a server from being confused with replies from previous interactions with that server. We might use this if we don't trust the server to reply exactly once to each request; it is simpler than the alternative of including a transaction ID in each message.
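
    A minimal sketch of a one-time-use reply channel per call; new_channelpriv() and pub_of() are hypothetical stand-ins for however a channelpriv is chosen and the matching channelpub derived:

      extern long new_channelpriv(void);                            /* hypothetical */
      extern long pub_of(long channelpriv);                         /* hypothetical */
      extern void send(long channelpub, const void *buf, int len);  /* hypothetical */
      extern void receive(long channelpriv, void *buf, int len);    /* hypothetical */

      struct call {                 /* illustrative request layout */
          int  length;              /* set by the caller */
          long reply_channel;       /* filled in fresh for every call */
          char args[240];
      };

      void call_server(long server_pub, struct call *req, void *reply, int reply_len)
      {
          long priv = new_channelpriv();       /* one-time-use private id */
          req->reply_channel = pub_of(priv);   /* matching public id for the reply */
          send(server_pub, req, req->length);
          receive(priv, reply, reply_len);     /* only this call's reply arrives here */
          /* priv is never reused, so a stale duplicate reply is simply never received */
      }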

    5 did well here. 4 got half credit for answers that focused on access rights and capabilities -- these answers were relevant but were reading something into the question that wasn't there. The remainder got some amount of partial credit for answers that at least mentioned relevant issues.

    Part D: What problem does the handle or server-side authenticator solve when we are calling a service such as "create" that does not require the handle of a specific object as a parameter?

    FinalOS provides probabilistic protection; there is a small chance that a channel ID chosen by one process will accidentally match the ID chosen by another. Because of this, prudent programmers will include extra information in messages to allow the server to verify that it is indeed the intended recipient of the message. The server-side authenticator on a public service such as create-process is included for this reason.
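
    A minimal sketch of the idea, with illustrative field names; the authenticator is just extra data the client includes so the server can check that the request really was meant for it:

      struct create_request {
          int  length;
          long authenticator;    /* value the server published along with its channelpub */
          long reply_channel;
          /* ... parameters of the create operation ... */
      };

      static long my_authenticator;   /* the server's own copy of that value */

      int request_is_for_me(const struct create_request *r)
      {
          return r->authenticator == my_authenticator;
      }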

    None gave clear answers here. 5 got generous partial credit for saying something to the effect that the channel-ID plus the server-side authenticator, taken together, can be used as a capability. This is true and relevant, but not to the point. 2 more said this along with incorrect things for reduced credit. 2 focused on secondary issues, while 4 wrote (frequently at length) about irrelevant issues such as exception handlers or return information.

  2. Part A: What should each kernel keep, in a local cache, in order to expedite message delivery? (in a multicomputer version of FinalOS)

    The local cache could map each channelpub to the machine ID of the machine that most recently expressed an interest in receiving messages addressed to that channel.
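
    A minimal sketch of such a cache; the structure, size and hash are illustrative, not part of any FinalOS specification:

      #define CACHE_SLOTS 1024

      struct cache_entry {
          long channelpub;      /* key */
          int  machine_id;      /* where a receiver was last seen waiting */
          int  valid;
      };

      static struct cache_entry cache[CACHE_SLOTS];

      /* returns a machine id, or -1 on a cache miss (see part B) */
      int lookup(long channelpub)
      {
          struct cache_entry *e = &cache[(unsigned long)channelpub % CACHE_SLOTS];
          if (e->valid && e->channelpub == channelpub)
              return e->machine_id;
          return -1;
      }

      void update(long channelpub, int machine_id)
      {
          struct cache_entry *e = &cache[(unsigned long)channelpub % CACHE_SLOTS];
          e->channelpub = channelpub;
          e->machine_id = machine_id;
          e->valid = 1;
      }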

    8 said this. 1 confused public and private channel IDs for half credit, 1 gave half the contents of each cache line for half credit, 2 gave interesting but wrong answers for fractional credit, and one seemed entirely off base.

    Part B: Suggest an appropriate kernel action when there is a cache miss.

    One solution rests on a name-server of some kind. The kernel of a process waiting for a message addressed to a channel registers with the server, and when there is a cache miss, the server is contacted to see if anyone is waiting. Alternatively, when there is a cache miss, there needs to be a broadcast asking "is anyone waiting on this public channel ID?"
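
    A minimal sketch of the miss path using the broadcast alternative; broadcast_query(), forward_message() and the lookup()/update() cache from the part A sketch are assumed helpers, and a real kernel would also need timeouts:

      extern int  lookup(long channelpub);                    /* cache, from the part A sketch */
      extern void update(long channelpub, int machine_id);
      extern int  broadcast_query(long channelpub);           /* hypothetical: id of a waiting
                                                                  machine, or -1 if none answers */
      extern void forward_message(int machine_id, long channelpub,
                                  const void *msg, int len);  /* hypothetical */

      void deliver(long channelpub, const void *msg, int len)
      {
          int m = lookup(channelpub);
          if (m < 0) {                           /* cache miss: ask everyone */
              m = broadcast_query(channelpub);
              if (m < 0)
                  return;                        /* nobody waiting; hold or drop the message */
              update(channelpub, m);             /* remember the answer for next time */
          }
          forward_message(m, channelpub, msg, len);
      }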

    9 gave good answers (most using the broadcast solution). 1 gave a confused but relevant answer for half credit, 2 were very confused but worth a fraction, and 1 got no credit.

    Part C: One approach to calling a FinalOS server is for the calling process to set aside a fixed channel (with associated public and private identifiers) to be used for all return messages from any server. An alternative is to allocate a new channel for one-time use prior to each call to a server; in this case, a new channelpriv must be computed and a new channelpub must be derived from it prior to each call to a server. What problem does the second and more complex approach cause in the context of the cache scheme you outlined in parts A and B?

    Use of a new channel for each reply means that each reply will cause a cache miss and the cache therefore has no value at all for replies!

    7 did well here, 3 failed to quantify their answers, saying that there would be more cache misses for replies, as if there could be some hits! Only a very few got half credit or less for really bad answers.

    Part D: The FinalOS kernel may be implemented using synchronous message passing or asynchronous buffered message passing. Each of these poses problems for the kernel in a multicomputer implementation of FinalOS. Explain which approach is best, and justify your answer by explaining what problems are avoided by using this approach.

    Asynchronous message passing in FinalOS must be interpreted as a synchronous receive with an asynchronous send and a buffered channel. The cost of buffering, the lack of any tools for limiting the demand on buffers, and the lack of any tools for determining when a message is dead because no process will ever receive from that channel all cause problems with this model. On the other hand, asynchronous message passing offers the potential for a sender to issue many requests to different servers before waiting for any of them to act.

    Synchronous message passing eliminates the problem with resource consumption for the channels, but it also eliminates the potential for parallelism discussed above. Because the resource issue could be fatal to a FinalOS system (consider a process that goes into a loop sending messages to every possible channel until all of memory is filled with messages), the synchronous model is probably better.

    The question of which was best was somewhat misleading. The key to a good answer was to clearly articulate the issues. 3 did this. 3 identified a single issue and focused on it, 5 got half credit for a flawed comparison, and 2 got a little credit for poor comparisons.

  3. Part A: Explain the relative advantages and disadvantages of a paged memory management unit as a tool to use in implementing the FinalOS set of kernel and process management services.

    The FinalOS kernel spec strongly suggests that each process has a private address space, since it provides no way for the creator of a process to deal with sharing or addressing conflicts. Therefore, the alternative to paging is some other kind of memory management unit, not the absence of one. Furthermore, there is no requirement that the 'shape' of the address space of a process be declared in advance, so the kernel only learns the set of addresses a process will need as those addresses are poked.

    The advantage of a paged MMU in this context is simplicity of expansion of the address space in response to the loading of a program into a process's address space by a sequence of pokes. It also offers the possibility of demand paging, but that is, comparatively, a secondary benefit.
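
    A minimal sketch of why poke-driven growth is easy with paging: the kernel maps a frame the first time any byte of a page is poked, so nothing about the shape of the address space needs to be declared in advance. All names and sizes are illustrative:

      #include <stdlib.h>

      #define PAGE_SIZE 4096
      #define PAGES     1024                     /* illustrative small address space */

      struct process {
          char *page_table[PAGES];               /* NULL = unmapped */
      };

      int poke(struct process *p, unsigned long addr, char value)
      {
          unsigned long vpn = addr / PAGE_SIZE;
          if (vpn >= PAGES)
              return -1;                         /* outside the legal range */
          if (p->page_table[vpn] == NULL) {      /* first touch of this page */
              p->page_table[vpn] = calloc(1, PAGE_SIZE);
              if (p->page_table[vpn] == NULL)
                  return -1;                     /* out of memory */
          }
          p->page_table[vpn][addr % PAGE_SIZE] = value;
          return 0;
      }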

    7 got no credit, mostly with long-winded answers that expressed poor understanding of all of the issues. 5 more made no reference to FinalOS or no reference to paged virtual memory, for limited credit. Only 1 gave an answer worth good partial credit.

    Part B: If page-fault exceptions are just another kind of exception, handled by passing them to the designated exception handler, what minimal set of additional kernel or process management services must there be in order to allow the exception handler to implement demand-paged virtual memory on behalf of a process? (focus on the bare minimum when answering this part, and assume the stupidest possible page-replacement policy!)

    We need a way for the exception handler to unmap pages in the address space of the process being managed. Poking data into a page maps it into the address space, but we have no mechanism for unmapping. It would, of course, be handy if there was some way to inquire about the current shape of the address space, but we could easily extend peek to return an error indication if we peek into an unmapped address, satisfying this need.
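
    A minimal sketch of the fault handler this implies, where the caller simply hands it some resident victim page (the stupidest possible policy); unmap() is the one new service, peek() is assumed to have been extended to report unmapped addresses, and the backing-store helpers belong to the handler itself, not to FinalOS:

      #define PAGE_SIZE 4096

      extern int  unmap(int proc, unsigned long page_base);              /* the new service */
      extern int  poke(int proc, unsigned long addr, char v);            /* existing service */
      extern int  peek(int proc, unsigned long addr, char *v);           /* extended: error if unmapped */
      extern void fetch_page(unsigned long page_base, char *buf);        /* handler's own storage */
      extern void store_page(unsigned long page_base, const char *buf);

      void handle_page_fault(int proc, unsigned long addr, unsigned long victim_base)
      {
          char buf[PAGE_SIZE];
          unsigned long base = addr - addr % PAGE_SIZE, i;

          /* push the victim out, then remove it from the address space */
          for (i = 0; i < PAGE_SIZE; i++)
              peek(proc, victim_base + i, &buf[i]);
          store_page(victim_base, buf);
          unmap(proc, victim_base);              /* the service this part calls for */

          /* bring the faulting page in; poking maps it as a side effect */
          fetch_page(base, buf);
          for (i = 0; i < PAGE_SIZE; i++)
              poke(proc, base + i, buf[i]);
      }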

    5 did well here, and 2 more correctly understood the problem but focused on some really bad ideas for solving it. A little partial credit was available for the 3 who mentioned page tables in a vague way. 3 got no credit.

    Part C: Assuming that we ask the exception handler to handle page faults, an intelligent page-replacement policy will require cooperation between the exception handler and lower-level parts of the system. Suppose you wanted to use a software implementation of the clock replacement algorithm -- this is about the best you can do without special hardware support from the memory management unit. What minimal set of additional kernel or process management services must there be in addition to those you mentioned in part B?

    The software clock replacement algorithm requires a way to control the access rights on a page. Therefore, we need a way to mark pages as inaccessible yet still resident in RAM, as opposed to read-write. Adding a read-only marking as well allows clean/dirty replacement.
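
    A minimal sketch of the software clock using such a service; set_access() is the assumed addition, and ACCESS_NONE means "resident in RAM but any touch traps to the handler", which is how the reference bit is simulated:

      enum access { ACCESS_NONE, ACCESS_READONLY, ACCESS_READWRITE };

      extern int set_access(int proc, unsigned long page_base, enum access a);   /* new service */

      #define NFRAMES 64
      static unsigned long resident[NFRAMES];   /* page base address held in each frame */
      static int referenced[NFRAMES];           /* software-simulated reference bits */
      static int hand;                          /* the clock hand */

      /* a fault on a resident but inaccessible page just records the reference */
      void soft_reference_fault(int proc, int frame)
      {
          referenced[frame] = 1;
          set_access(proc, resident[frame], ACCESS_READWRITE);
      }

      /* choose a victim: sweep, clearing reference bits by revoking access */
      int choose_victim(int proc)
      {
          for (;;) {
              if (!referenced[hand]) {
                  int victim = hand;             /* not touched since the last sweep */
                  hand = (hand + 1) % NFRAMES;
                  return victim;
              }
              referenced[hand] = 0;
              set_access(proc, resident[hand], ACCESS_NONE);   /* re-arm the trap */
              hand = (hand + 1) % NFRAMES;
          }
      }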

    1 got this. 4 didn't understand the software clock algorithm, but understood the need for access to the dirty bit or the mark bit in the MMU. 3 gave answers minimally relating the clock algorithm to FinalOS for half credit, and 3 got a little credit for demonstrating some understanding of page replacement. 1 left the answer blank, 2 were totally wrong.

  4. Part A: There are two ways the disk driver could distinguish between messages from clients and messages from the kernel. First, it could use a different channelpriv for each purpose, and second, it could rely on the contents of the messages to distinguish between them. Explain why FinalOS requires one of these alternatives and forbids the other.

    If we used one channel for interrupts directed to an I/O driver process, while we used another for messages from client processes, the lack of a primitive for waiting on multiple channels would force us to wait for messages from only one channel or the other at any particular time. Thus, the driver could block awaiting an interrupt, or it could block awaiting a message from a client, but it could never block awaiting one or the other. Unfortunately, this is exactly what the driver will need to do, so it will have to rely on a single channel for incoming messages, using the contents of those messages to distinguish between interrupt messages and client messages.
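
    A minimal sketch of the resulting driver loop; the kind field and its values are illustrative, the point being that a single receive serves both kinds of traffic:

      extern void receive(long channelpriv, void *buf, int len);   /* hypothetical */

      enum kind { MSG_INTERRUPT = 1, MSG_CLIENT = 2 };

      struct driver_msg {
          int  length;
          int  kind;              /* interrupt notification or client request */
          char payload[248];
      };

      void driver_loop(long my_channelpriv)
      {
          struct driver_msg m;
          for (;;) {
              receive(my_channelpriv, &m, (int)sizeof m);
              if (m.kind == MSG_INTERRUPT) {
                  /* an operation completed: reply to the waiting client,
                     then start the next queued request, if any */
              } else {
                  /* a new client request: queue it, and start it now
                     if the drive it names is idle */
              }
          }
      }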

    Only 1 got this. 5 gave wrong reasons for the correct conclusion, earning half credit, and 2 more said minor relevant things while reaching the wrong conclusion for partial credit. 4 gave fallacious, illegible or incomprehensible answers.

    Part B: Assume that the disk driver supports multiple concurrently active independent clients, and assume that the driver is single-threaded. What data structures must be present inside the driver in order to support these specific needs? (Hint: Does the answer to this question depend in any great way on the fact that the driver is a FinalOS server and not just a bunch of classical functions and interrupt service routines?)

    One concurrently active client may request disk I/O while another is waiting for I/O completion, so we need a queue of some kind to hold the record of each client. At minimum, this record must include the return channel and request type (peek or poke), along with the buffer being read or written. The fact that there are multiple disk drives means that many I/O operations may be at different stages of completion, and the fact that there are many clients means that there may be multiple requests for one drive, and thus the need for disk scheduling. Therefore, the queue must contain all of the information needed to schedule disk operations and coordinate overlap of operations addressed to different drives.
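
    One possible sketch of that internal state; field names and sizes are illustrative:

      #define NDRIVES 4
      #define SECTOR  512

      enum op { OP_READ, OP_WRITE };      /* client peek or poke of the disk */

      struct pending {
          long reply_channel;       /* where to send the completion message */
          enum op operation;
          int  drive;
          long sector;              /* used by the disk scheduler to order the queue */
          char buffer[SECTOR];      /* data being written, or space for data read */
          struct pending *next;     /* queue link */
      };

      struct drive_state {
          struct pending *queue;    /* requests waiting for this drive, in scheduled order */
          struct pending *active;   /* the request the drive is working on, or NULL */
      };

      static struct drive_state drives[NDRIVES];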

    4 gave good answers, 5 got half credit for saying more than just the word queue, and 3 got some partial credit for, at least, saying queue with no real discussion of what is queued or why. 1 received no credit for the empty suggestion that semaphores are needed. In fact, they are not.

    Part C: Explain the relationship between the client process, the FinalOS directory server, the FinalOS file server, and the FinalOS SCSI II disk server.

    In a classical client-server system architecture, a request to open a file is directed to the directory server. This returns to the client a file handle that can be used to read or write the file using the file server. The directory server also uses the file server to store directories. The file server translates between addresses of sectors within a file and addresses of sectors on disk, and it may also do blocking and deblocking, but ultimately, it uses the services of the disk server to access the physical disk(s) that store(s) the file. This classical structure may be used with no change under FinalOS.

    1 gave a good answer, 5 gave answers that, while not far off the mark, exhibited some confusion. 3 were more confused, earning half credit, and 3 more were very confused (or painfully brief). Just 1 earned no credit.

    Part D: Could a user-level thread package be used with FinalOS in order to simplify the structure of the disk driver, for example, by allowing the disk driver to create a new thread for each active disk I/O request? Explain what features of the FinalOS specification allow for this or prevent this from being effective.

    There is nothing in FinalOS to prevent the use of a user-level thread package, but it would not simplify the disk driver, because whenever a thread blocks awaiting a message, it blocks the entire process. This is because there is no way to check a channel to see if any messages are waiting on that channel. Had we such a mechanism, the thread could relinquish to other threads instead of blocking, thus allowing one thread to block on one channel while another continues and does useful work.

    Nobody gave a really good answer, but 4 gave mediocre reasoning to explain why threads would not be of much use in FinalOS. 4 more gave incorrect reasons for the correct conclusions, earning half credit. The remainder reached incorrect conclusions and supported them by faulty reasoning.