Homework 4 Solutions

22C:116, Fall 2001

Douglas W. Jones
  1. Part A: What deadlock model applies to applications written to run under the example thread manager?
    In general, in a program synchronized with semaphores, any thread or process may be able to signal any semaphore, so a blocked thread may be waiting for a signal that could come from any of several other threads. Therefore, the problem is at least as complex as the or-model!
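
    For example (an illustration only; POSIX semaphores stand in here for the thread package's own primitives), a consumer blocked on a single semaphore can be released by a signal from either of two producers, so an analysis of its wait must treat it as an OR over every thread that might perform that signal:

      /* Illustration: either producer may be the thread that releases the
         blocked consumer, so the consumer's wait is an OR over both. */
      #include <pthread.h>
      #include <semaphore.h>

      sem_t items;                  /* counts items available to the consumer */

      void *producer(void *arg)     /* two instances of this thread are run   */
      {
          (void)arg;
          /* ... produce something ... */
          sem_post(&items);         /* either producer can release the waiter */
          return NULL;
      }

      void *consumer(void *arg)
      {
          (void)arg;
          sem_wait(&items);         /* blocked until SOME producer signals    */
          /* ... consume it ... */
          return NULL;
      }

      int main(void)
      {
          pthread_t p1, p2, c;
          sem_init(&items, 0, 0);
          pthread_create(&p1, NULL, producer, NULL);
          pthread_create(&p2, NULL, producer, NULL);
          pthread_create(&c,  NULL, consumer, NULL);
          pthread_join(c, NULL);
          pthread_join(p1, NULL);
          pthread_join(p2, NULL);
          return 0;
      }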

    Part B: Would it be practical to add a general-purpose deadlock detection mechanism to the example thread manager, so that it would detect and report (with an error message) all deadlocked applications running under the thread manager? If so, explain what detection algorithm you would use. If not, explain why.

    Because the or-model applies, deadlock detection relies on knot detection. Unfortunately, even this requires that we be able to determine, for each blocked thread, the set of threads that could potentially signal the semaphore on which it is blocked. Since the example package is based on C or C++, languages that have no protection mechanism, any thread may potentially access any variable. Therefore, if even one thread is not blocked, it is possible that that thread will eventually signal any given semaphore. Short of simulating the future course of the computation through to the end of time, we cannot determine whether the thread will indeed do so. Therefore, the only automatic deadlock detection possible in this thread package is the mechanism already present -- we report a potential deadlock if all threads have blocked.
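
    For contrast, here is roughly what knot detection would look like if the wait-for sets could somehow be computed (and, as argued above, they cannot be in this package). The sketch is hypothetical: waits_for[i][j] records that blocked thread i might be signalled by thread j, and a blocked thread from which every reachable thread can also reach back lies in a knot and is therefore certainly deadlocked.

      /* Hypothetical knot detector for an or-model wait-for graph.
         waits_for[i][j] = 1 means blocked thread i could be signalled by
         thread j.  A blocked thread is in a knot -- and hence deadlocked --
         when every thread reachable from it can also reach it. */
      #include <stdio.h>

      #define NTHREADS 4

      static int waits_for[NTHREADS][NTHREADS]; /* the wait-for graph     */
      static int reach[NTHREADS][NTHREADS];     /* its transitive closure */

      static void closure(void)                 /* Warshall's algorithm   */
      {
          int i, j, k;
          for (i = 0; i < NTHREADS; i++)
              for (j = 0; j < NTHREADS; j++)
                  reach[i][j] = waits_for[i][j];
          for (k = 0; k < NTHREADS; k++)
              for (i = 0; i < NTHREADS; i++)
                  for (j = 0; j < NTHREADS; j++)
                      if (reach[i][k] && reach[k][j])
                          reach[i][j] = 1;
      }

      static int in_knot(int v)
      {
          int u, blocked = 0;
          for (u = 0; u < NTHREADS; u++)
              if (reach[v][u]) {                /* u is reachable from v  */
                  blocked = 1;
                  if (!reach[u][v])             /* ... but cannot reach v */
                      return 0;
              }
          return blocked;
      }

      int main(void)
      {
          int v;
          /* Threads 0 and 1 wait only on each other (a knot); thread 2
             waits on either 0 or 3; thread 3 is still running. */
          waits_for[0][1] = waits_for[1][0] = 1;
          waits_for[2][0] = waits_for[2][3] = 1;
          closure();
          for (v = 0; v < NTHREADS; v++)
              printf("thread %d: %s\n", v,
                     in_knot(v) ? "in a knot (deadlocked)" : "not in a knot");
          return 0;
      }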

  2. A Problem: Do problem 5 on page 264 of the text.
    a) For first-fit allocation, assuming holes are tried in the order of their memory addresses,
    Holes:       10K   4K  20K  18K   7K   9K  12K  15K
    Blocks:      10K        12K   9K
    New holes:               8K   9K

    b) For best-fit allocation,
    Holes:       10K   4K  20K  18K   7K   9K  12K  15K
    Blocks:      10K                        9K  12K

    c) For worst-fit allocation,
    Holes:       10K   4K  20K  18K   7K   9K  12K  15K
    Blocks:                 12K  10K                 9K
    New holes:               8K   8K                 6K

    d) For next-fit allocation, assuming holes are tried in the order of their memory addresses,
    Holes:       10K   4K  20K  18K   7K   9K  12K  15K
    Blocks:                 12K  10K        9K
    New holes:               8K   8K
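
    The four results above can be checked mechanically. The following sketch (illustrative helper code only, not part of the assignment) walks the hole list for the successive requests of 12K, 10K and 9K under each placement policy and prints which hole each request takes and how much of it is left over, reproducing the placements tabulated above:

      /* Illustrative checker: simulate first, best, worst and next fit over
         the hole list 10K 4K 20K 18K 7K 9K 12K 15K for successive requests
         of 12K, 10K and 9K.  All sizes are in K. */
      #include <stdio.h>
      #include <string.h>

      #define NHOLES 8
      #define NREQS  3

      enum policy { FIRST, BEST, WORST, NEXT };
      static const char *name[] = { "first", "best", "worst", "next" };

      static int pick(enum policy p, int hole[], int req, int *resume)
      {
          int i, best = -1;
          if (p == FIRST || p == NEXT) {
              int start = (p == NEXT) ? *resume : 0;
              for (i = 0; i < NHOLES; i++) {
                  int h = (start + i) % NHOLES;   /* next fit resumes the scan */
                  if (hole[h] >= req) { best = h; break; }
              }
          } else {
              for (i = 0; i < NHOLES; i++) {
                  if (hole[i] < req)
                      continue;
                  if (best < 0 ||
                      (p == BEST  && hole[i] < hole[best]) ||
                      (p == WORST && hole[i] > hole[best]))
                      best = i;
              }
          }
          if (p == NEXT && best >= 0)
              *resume = (best + 1) % NHOLES;
          return best;                            /* assumes a hole is found */
      }

      int main(void)
      {
          static const int initial[NHOLES] = { 10, 4, 20, 18, 7, 9, 12, 15 };
          static const int req[NREQS] = { 12, 10, 9 };
          enum policy p;

          for (p = FIRST; p <= NEXT; p++) {
              int hole[NHOLES], r, resume = 0;
              memcpy(hole, initial, sizeof hole);
              printf("%s fit:", name[p]);
              for (r = 0; r < NREQS; r++) {
                  int h = pick(p, hole, req[r], &resume);
                  printf("  %dK -> %dK hole (leaves %dK)",
                         req[r], initial[h], hole[h] - req[r]);
                  hole[h] -= req[r];              /* shrink the chosen hole */
              }
              printf("\n");
          }
          return 0;
      }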
    

  3. Part A: What data would you ask the MMU designers to collect whenever there is a page fault in order to allow you to reconstruct the CPU state prior to the fault? Remember, you can't change the CPU, but you can assume that the MMU has yet to be designed!
    1. The address of the first byte of the instruction.
    2. The number of instruction bytes fetched in the current instruction.
    3. The number of operand bytes loaded or stored by the current instruction.

    The address bus would be copied and the two counts would be zeroed at the start of each instruction cycle. The counts would be incremented at the end of each successful memory cycle. When an MMU trap is requested, the address and the two counts would be moved from internal MMU registers to the interface registers that the trap service routine can inspect (typically, the virtual address that caused the trap would also end up in such an interface register).
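
    One plausible layout for the MMU interface registers seen by the trap service routine (the names here are invented for this sketch, not taken from any real MMU) would be:

      /* Illustrative only: hypothetical interface registers filled in by
         the MMU when it requests a page-fault trap. */
      struct mmu_fault_info {
          void     *instr_addr;     /* address of the first byte of the instruction */
          unsigned  instr_bytes;    /* instruction bytes fetched so far             */
          unsigned  operand_bytes;  /* operand bytes loaded or stored so far        */
          void     *fault_addr;     /* virtual address that caused the trap         */
      };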

    Part B: How would you use this data?

    The address of the first byte of the instruction allows the trap service routine to find the opcode of the instruction that caused the trap. This plus the count of instruction bytes allows it to determine how much of the instruction has been fetched, while the count of operand bytes allows it to find out how far the execution of the instruction got before the trap occurred. Knowing these plus some basic details of the microcode -- these can be determined empirically -- it should be possible to figure out which side effects the instruction has already performed and which it has not yet performed.
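
    Concretely (a hypothetical sketch; the opcode-length table and the catalogue of per-instruction side effects are machine specific and, as noted above, would have to be determined empirically), the first thing a fault handler might do with this data is classify how far the faulting instruction got:

      /* Hypothetical outline of the classification a page-fault handler
         would make from the three items of data described above.  The
         instr_length() function stands in for a machine-specific
         opcode-length table. */
      #include <stdint.h>
      #include <stdio.h>

      static unsigned instr_length(uint8_t opcode)
      {
          (void)opcode;             /* a real table is indexed by opcode */
          return 2;
      }

      enum progress { NOTHING_DONE, MID_FETCH, MID_OPERAND };

      static enum progress classify_fault(const uint8_t *instr_addr,
                                          unsigned instr_bytes,
                                          unsigned operand_bytes)
      {
          if (instr_bytes == 0)
              return NOTHING_DONE;  /* opcode not yet fetched: just restart    */
          if (instr_bytes < instr_length(instr_addr[0]))
              return MID_FETCH;     /* mid-fetch: no operands, no side effects */
          (void)operand_bytes;      /* operand_bytes, plus empirical knowledge
                                       of the microcode, tells which side
                                       effects have already been performed     */
          return MID_OPERAND;
      }

      int main(void)
      {
          uint8_t code[2] = { 0x01, 0x02 };   /* a made-up two-byte instruction */
          printf("%d\n", classify_fault(code, 1, 0));   /* prints 1 (MID_FETCH) */
          return 0;
      }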