Homework 3 Solutions

22C:116, Fall 2000

Douglas W. Jones

  1. A Problem: What problems would a programmer encounter in using Java to write a storage manager for main memory?
    Java is strongly typed and makes an extraordinary effort to pretend that there is no such thing as a pointer. As a result, Java programs cannot manipulate pointers directly, and they cannot perform arithmetic on machine addresses in order to create new pointers to newly allocated memory regions.

  2. Part A: As time passes, after many allocate and deallocate cycles for blocks of random sizes using a boundary tag allocator using first fit allocation with all free list manipulation beginning from the head of the list, what distribution of free block sizes would you expect in the free list?
    Large blocks near the head of the list would always be subdivided into small blocks no matter what block size is requested, while large blocks near the end of the list would be likely to survive for long periods without being subdivided. Therefore, as time passes, blocks too small to satisfy many allocation requests would accumulate near the head of the list, while large blocks would dominate the tail of the list. Blocks of popular sizes would have a very short life at the list head because, if they are not subdivided, they have a high likelihood of being used to satisfy a new request for a block of the same size.

    If the list head walks around a circular free list, the list would be unlikely to become sorted, but would tend to stay well randomized.

    Part B: Given your answer to part A, how would you expect these two free list organizations to differ for real applications where most allocate and deallocate operations involve only a few common block sizes, while other more random appearing block sizes are manipulated only occasionally?

    If the smallest common block size is the most frequently used block size, the access-from-the-head scheme would clearly perform the best. In fact, it is highly likely that small blocks will be the most numerous and the most intensely used, so the access-from-the-head scheme is likely to perform well. As the variety of block sizes grows and the likelihood of requesting different block sizes becomes more uniformly distributed, the circular scheme becomes more likely to perform well.

  3. The Question: Does incremental lazy merger guarantee that the resulting system will meet real-time constraints? How would you expect its performance to compare with prompt and purely lazy merger?
    Incremental lazy merger offers no performance guarantees, but it does reduce the CPU demand posed by prompt merger because many blocks of popular sizes will have a chance to be re-allocated before any CPU time is wasted merging them. And, compared to simple lazy merger, incremental lazy merger is more likely to have created some large blocks before a request is made for one, so it is less likely to require an indeterminate pause at allocate time.

  4. Part A: The code implementing thread_exit() contains notes that parts of this code are risky. Give a clear and concise explanation of the mechanism by which this risk could lead to failures!
    The thread_exit() routine returns the current thread's stack to the heap while the thread is still running on that stack! As a result, the deallocate routine may corrupt its own return address as it performs the deallocation! To prevent this, some deallocation routines will raise an exception if they find their parameter pointing below the current stack top, while others, because they zero the block being deallocated, will simply die when they try to return.

    Part B: Consider this fragment of a FIFO queue package written to be used under this thread manager:

        enqueue(item i, struct queue * q)
        {
            thread_wait( q->space );
     /**/   thread_wait( q->mutex );
            q->buffer[q->tail] = i;
            q->tail = (q->tail + 1) % queuesize;
     /**/   thread_signal( q->mutex );
            thread_signal( q->data );
        }
    
    Explain why, under the example thread package, 2 of the 6 lines in the function can be deleted.
    The lines marked /**/ can be deleted! They exist only to create a critical section, but our thread manager uses non-preemptive scheduling, so any sequence of simple C statements that does not include calls to thread-manager functions is guaranteed to run without interference from other threads.

  5. A Question: Why is there a process identifier field in each entry of the MIPS R2000 MMU's address translation associative memory? What software cost would you expect to have to pay if this field were not present in the address translation hardware?
    If we assume multiple processes, each with a separate address space, this field of the MMU translation table allows some table entries to operate on behalf of one process while others operate on behalf of other processes. As a result, when we do a context switch between processes, we do not need to delete all table entries relating to the old process, and there is a possibility that some entries needed by the new process will still be in the MMU, left over from the last time that process ran.

    If this field were omitted from the MMU, the context switching code would have to clear the MMU whenever there was a context switch, and then it would have to either load the MMU with the contents for the new process or let the new process cause a sequence of soft page faults to load the MMU as it begins running. Either way, context switching would be slower!