Midterm Preparation

22C:116, Fall 1995

Douglas W. Jones

  1. Background: The commercially available product RAM Doubler is a virtual memory product for the Macintosh family of computers. It doubles the apparent memory available on the system, frequently without using any backing storage, by combining three basic mechanisms:

    First, RAM Doubler allocates page frames to virtual addresses on a demand basis. If a program has been allocated a large block of memory, but only uses part of that block, RAM Doubler only allocates page frames for those pages that are actually referenced. Since classic Mac applications are written in terms of real memory, not virtual memory, they generally claim the largest block of memory they might later need, right up front.

    Second, RAM Doubler uses compression. Less recently used pages are compressed in order to free up page frames to be allocated to active applications.

    Third, RAM Doubler will use disk as backing storage, should the first two techniques fail to free enough page frames to allow support of a virtual address space twice as big as the physical address space.

    Think about the extent to which different applications would be likely to bring these different virtual memory strategies into action. As the author of an application that you wanted to run under RAM Doubler, how could you improve the likelihood that your application would behave well? What about the internals of RAM Doubler? Given a function compress(a,b) that compresses one page from location a to location b in real memory, returning the number of bytes the page was compressed to, and a function expand(a,b) that reverses the compression of a page, returning the number of bytes of compressed data consumed, how would you handle page faults?
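    The fault-handling question can be sketched concretely. The Python simulation below is only an illustration: zlib stands in for the compress()/expand() primitives, frame management is reduced to dictionaries, and the class and method names are invented for the sketch.

```python
# A minimal sketch of compression-backed page-fault handling, assuming a
# fixed page size and using zlib as a stand-in for the compress()/expand()
# primitives.  Frame allocation, TLB updates, and the real MMU interface
# are all omitted.
import zlib

PAGE_SIZE = 4096

class CompressedPageStore:
    def __init__(self, max_frames):
        self.max_frames = max_frames
        self.frames = {}       # page number -> uncompressed bytes (resident)
        self.compressed = {}   # page number -> compressed bytes (no frame)
        self.lru = []          # page numbers, least recently used first

    def _evict_one(self):
        # Free a frame by compressing the least recently used resident page.
        victim = self.lru.pop(0)
        self.compressed[victim] = zlib.compress(self.frames.pop(victim))

    def fault(self, page):
        """Handle a fault on `page`; return its (now resident) contents."""
        if len(self.frames) >= self.max_frames:
            self._evict_one()
        if page in self.compressed:
            # The page was compressed earlier: expand it into a free frame.
            data = zlib.decompress(self.compressed.pop(page))
        else:
            # First touch: allocate a zero-filled frame on demand.
            data = bytes(PAGE_SIZE)
        self.frames[page] = data
        self.lru.append(page)
        return data

    def touch(self, page):
        if page in self.frames:
            self.lru.remove(page)
            self.lru.append(page)
            return self.frames[page]
        return self.fault(page)
```

    Note that compressed pages still occupy real memory, so a complete handler must also bound the size of the compressed pool and spill to disk (the third mechanism) when compression alone cannot free a frame.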

  2. Background: Consider a memory management unit that supports a 32-bit virtual address space, with pages of 4K bytes each. Thus, addresses consist of 12 bits for the byte-in-page, 10 bits for the page-in-segment, and 10 bits for the segment number. Each page table entry is one 32-bit word, with hardware-enforced read, write and execute access right bits and 20 bits of frame number. Thus, the machine supports a 32-bit physical address space, and there are 9 bits left unassigned in each page table entry, for software use.

    Think about basic questions first. How big is the page table for a process? How many page table entries fit in one page? Would you expect the page table to be stored in the MMU, in main memory, or in virtual memory? How would you expect unused portions of the page table to be handled? What is the role of segmenting in this system?
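    The basic sizes can be checked by direct arithmetic; the helper names in this Python sketch are invented here:

```python
# Checking the basic sizes for the MMU described above: 32-bit virtual
# addresses, 4K-byte pages, and one-word (4-byte) page table entries.
PAGE_SIZE = 1 << 12          # 4096 bytes, 12 bits of byte-in-page
PTE_SIZE = 4                 # one 32-bit word per entry
VIRTUAL_PAGES = 1 << 20      # 10 segment bits + 10 page-in-segment bits

page_table_bytes = VIRTUAL_PAGES * PTE_SIZE    # the full table, in bytes
entries_per_page = PAGE_SIZE // PTE_SIZE       # entries in one page

def split(va):
    """Decompose a 32-bit virtual address into its three fields."""
    byte = va & 0xFFF                # low 12 bits: byte-in-page
    page = (va >> 12) & 0x3FF        # next 10 bits: page-in-segment
    segment = (va >> 22) & 0x3FF     # high 10 bits: segment number
    return segment, page, byte
```

    The arithmetic shows a full page table of 4 megabytes, and 1024 entries per page, matching the 10-bit page-in-segment field: each segment's page table occupies exactly one page, which suggests one role for segmenting in this system.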

    Think about how you could use software to implement features such as those found on the CAP system. How would you represent capabilities for objects that are not pages (at least, they aren't pages from the user's point of view)? How would you represent capabilities for address spaces? How would you represent capabilities for sealed objects? How would you detect and interpret (with software) attempted operations on these objects?
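    As one concrete starting point, the nine software bits could carry a tag that the fault handler interprets. The layout below is only an assumption; the problem fixes the field widths (3 access bits, 20 frame bits, 9 software bits) but not the bit positions, and all names here are invented.

```python
# A sketch of one possible page table entry layout: access rights in the
# low 3 bits, frame number in the next 20, software bits in the high 9.
READ, WRITE, EXECUTE = 1, 2, 4

def make_pte(frame, access, soft):
    assert frame < (1 << 20) and access < 8 and soft < (1 << 9)
    return (soft << 23) | (frame << 3) | access

def pte_fields(pte):
    return (pte >> 3) & 0xFFFFF, pte & 7, (pte >> 23) & 0x1FF

# Software could reserve one spare bit to tag entries that really
# represent sealed objects; with no access rights set, any attempted
# operation faults, and the fault handler interprets the tag.
SEALED = 1 << 0
```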

  3. Background: Consider a database system where a number of interactive users share access to a database. Users interactively inspect and update records in the database. Updates may involve reading one or more pages and combining the values on those pages with new information to compute the contents of the pages that must be updated. Update operations therefore involve read-modify-write critical sections.

    The obvious implementation for this system is to attach a semaphore to each database record, with the rule that a user must lock all records involved prior to a read-modify-write update. In this case, all code involved in performing the updates would belong to the user processes.

    Think about the problem of avoiding or detecting deadlocks in this model. What deadlock detection model is applicable, if any? When a deadlock is detected, what action would be appropriate? If the set of records involved in an update does not depend on the contents of any of those records, does this simplify the problem?
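    One standard avoidance technique, applicable when the record set is known before any locks are taken, is to acquire the per-record semaphores in a fixed global order. A Python sketch, with threading.Lock standing in for the binary semaphore attached to each record; the class and method names are invented for illustration:

```python
# Deadlock avoidance by ordered lock acquisition: every update locks its
# records in ascending record-number order, so no cycle of waiters can form.
import threading

class Database:
    def __init__(self, nrecords):
        self.records = [0] * nrecords
        self.locks = [threading.Lock() for _ in range(nrecords)]

    def update(self, record_set, transform):
        """Atomically replace the records in record_set with the values
        returned by transform({record: old_value, ...})."""
        ordered = sorted(record_set)          # the fixed global order
        for r in ordered:
            self.locks[r].acquire()
        try:
            values = {r: self.records[r] for r in ordered}
            for r, v in transform(values).items():
                self.records[r] = v
        finally:
            for r in reversed(ordered):
                self.locks[r].release()
```

    If the record set depends on the contents of the records themselves, this ordering discipline breaks down, and detection (building a wait-for graph over the record semaphores) followed by aborting and retrying one victim transaction becomes the more natural approach.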

  4. Background: Consider the following interprocess communication primitives:
    send( channel, buffer )
    receive( channel, buffer )
    wait( semaphore )
    signal( semaphore )
    
    The channels used for this communication are memoryless. When a process tries to receive a message, it is blocked until a message is delivered from a sender. When a process tries to send a message, it is blocked until the message is copied to a receiver. Thus, if two processes try to communicate, either the sender or the receiver will be blocked until the message can be transferred.

    The semaphores are general semaphores with the usual semantics. Processes may share semaphores and channels, but may not share memory.

    Think about the problem of constructing a bounded buffer FIFO queue abstraction using these primitives. Your solution will no doubt involve a queue-manager process, where queue users send and receive buffers full of data to and from the queue through some channel(s). How would you use the semaphores to assure that no blocking occurs except when the queue abstraction demands blocking? Do you need to include some kind of operation code in the buffer, or can you use multiple channels to somehow encode the operation to be performed?
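    One possible structure is sketched below in Python: a small rendezvous class simulates the memoryless channels, and two semaphores guarantee that users block only when the queue is genuinely full or empty. A single request channel carrying an operation code is used here; all names are invented for the sketch.

```python
# A sketch of a bounded-buffer queue manager built from synchronous
# channels and counting semaphores.  Channel is a stand-in for the
# memoryless channels described above: send blocks until receive.
import queue, threading

class Channel:
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)
    def send(self, msg):
        self._slot.put(msg)
        self._ack.get()              # block until a receiver takes it
    def receive(self):
        msg = self._slot.get()
        self._ack.put(None)
        return msg

class BoundedQueue:
    def __init__(self, capacity):
        self.space = threading.Semaphore(capacity)  # gates producers
        self.items = threading.Semaphore(0)         # gates consumers
        self.reqchan = Channel()     # carries ('put', data) or ('get',)
        self.getchan = Channel()     # carries dequeued data
        threading.Thread(target=self._manager, daemon=True).start()

    def _manager(self):
        fifo = []                    # the manager's private buffer
        while True:
            req = self.reqchan.receive()
            if req[0] == 'put':
                fifo.append(req[1])
            else:                    # a 'get': an item is guaranteed present
                self.getchan.send(fifo.pop(0))

    def enqueue(self, data):
        self.space.acquire()         # wait(space): block only if queue full
        self.reqchan.send(('put', data))
        self.items.release()         # signal(items): data is now stored

    def dequeue(self):
        self.items.acquire()         # wait(items): block only if queue empty
        self.reqchan.send(('get',))
        data = self.getchan.receive()
        self.space.release()         # signal(space): a slot is now free
        return data
```

    In this sketch the operation code travels in the request message. Separate put and get request channels would also work, but the manager would then need some way to know which channel to receive from next; the single shared request channel sidesteps that problem.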

  5. Background: The optimal disk sector size depends on the ratio of the I/O data transfer rate to the latency time per operation.

    Think about how the optimal sector size depends on the disk scheduling algorithm.
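    A simple model makes the dependence visible: each transfer costs a fixed per-operation latency plus the time to move the sector itself, so the effective transfer rate rises with sector size. The figures below are illustrative, not taken from any particular disk.

```python
# Effective transfer rate as a function of sector size, assuming each
# operation pays a fixed latency (rotational delay, seek fraction, and
# software overhead) plus sector_bytes / transfer_rate of transfer time.
def effective_rate(sector_bytes, transfer_rate, latency):
    """Bytes per second actually delivered, one sector per operation."""
    time_per_sector = latency + sector_bytes / transfer_rate
    return sector_bytes / time_per_sector

# Illustrative numbers: 1 MB/s raw transfer rate, 10 ms latency per
# operation, sector sizes from 512 bytes to 32K bytes.
rates = [effective_rate(s, 1e6, 0.01) for s in (512, 4096, 32768)]
```

    A scheduling algorithm that reduces the average per-operation latency (for example, by shortening seeks) moves the point of diminishing returns toward smaller sectors.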

  6. Remember, these study questions are not intended to exhaust the range of questions on the exam.