Assignment 4, due Feb 22

Solutions

Part of the homework for 22C:112, Spring 2008
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

  1. Background: The main program of an X application (that is, an application running under the Unix X-windows system) generally follows this outline:
    	main() {
    		create various window objects, some nested within others
    		call XtAddEventHandler() on each object to add event handlers
    		for (;;) {
    			XEvent event;
    			XtAppNextEvent( ... , &event );
    			XtDispatchEvent( &event );
    		}
    	}
    

    The window library handles dispatching events to the right handler by determining which window object the event occurred in and then transferring control to that object. Pending X-events are queued in the X-event queue. X-events include such things as keypress events and mouse-click events. The X server process is responsible for merging keypress and mouse-click events into a single stream of events.

    a) Propose (in vague outline) the means by which input events on two different devices could be merged into a single stream of events for delivery to your X application. (0.5 points)

    Consider polling each potential source of input, reading from a particular device's input queue only when input is pending there, so that the read never actually blocks. This is not a very good approach, since the polling loop rests on busy waiting.
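
    In equally vague outline, and assuming invented primitives device_ready() and read_device_event() plus an enqueue operation on the X-event queue, such a polling loop might look like this sketch:

        for (;;) {
                int d;
                for (d = 0; d < NDEVICES; d++) {        /* poll every input source */
                        if (device_ready( d )) {        /* input pending on device d? */
                                /* this read cannot block, input is pending */
                                enqueue( &xeventqueue, read_device_event( d ) );
                        }
                }
                /* when no device is ready, the loop just spins: busy waiting */
        }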

    b) Propose (in similar vague outline) how a subroutine registered as an event handler for an X object could be called by the XtDispatchEvent routine. This has several parts: How would you find the right X object, how would you store the handler's identity in that object, and how would you transfer control to the handler? (0.5 points)

    The dispatch event routine needs to know the current coordinates of the cursor on the screen. Given this, it must search through the data structure of X-objects on the screen to find which object the cursor is in at the time of the event, and then dispatch the event to that object's event handler.
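
    A minimal sketch, assuming a simplified widget record rather than the real Xt data structures; the handler's identity is stored as a function pointer set by XtAddEventHandler(), and find_containing_object(), cursor_x and cursor_y are invented names:

        typedef struct XObject {
                int x, y, width, height;              /* rectangle on the screen */
                void (*handler)( struct XObject *, XEvent * );
                                                      /* set by XtAddEventHandler() */
                struct XObject *child, *sibling;      /* nested window objects */
        } XObject;

        void XtDispatchEvent( XEvent *event ) {
                /* search the tree of objects for the one containing the cursor */
                XObject *obj = find_containing_object( root, cursor_x, cursor_y );
                if ((obj != NULL) && (obj->handler != NULL)) {
                        (*obj->handler)( obj, event );  /* transfer control */
                }
        }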

  2. Background: Consider a computer where each device has an interrupt-enable bit and a ready bit in one of its device control and status registers. There is also a global interrupt-enable bit in the CPU's state register. Device X will request an interrupt if it is ready and its own enable bit is set and the global enable bit is set.

    There is no hardware support for the concept of interrupt priorities. There is no hardware support for interrupt vectoring. That is, all interrupts transfer control to a single interrupt service routine that must scan the ready bits of the various devices to see which device requested the interrupt.

    The CPU automatically resets the global interrupt-enable bit as it transfers control to an interrupt service routine. There is a special return from interrupt instruction that enables interrupts as it returns.

    This problem deals with the idea of having the software create a notion of interrupt priority, such that higher priority interrupts could interrupt lower priority interrupt service routines.

    a) In what order should the interrupt service routine scan the ready bits of the various devices in order to create a notion of interrupt priority? (0.3 points)

    In priority order, highest priority first.

    b) If higher priority devices are allowed to interrupt lower priority interrupt service routines, the global interrupt-enable bit will need to be set by the interrupt service routine. What must it do first in order to prevent itself or lower priority devices from requesting interrupts? (0.3 points)

    Before setting the global interrupt-enable bit in an interrupt service routine, the code must reset the device-specific interrupt-enable bits of all lower priority devices, and it must reset its own interrupt-enable bit as well (to prevent a run-away recursive interrupt from hanging the system).
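
    Putting parts a and b together, a sketch of the service routine, assuming an invented device[] table with ready and enable fields mapped onto the device control and status registers, where device[0] has the highest priority:

        void interrupt_service_routine(void) {
                int d, j;
                /* part a: scan the ready bits in priority order, highest first */
                for (d = 0; d < NDEVICES; d++) {
                        if (device[d].ready && device[d].enable) {
                                /* part b: disable self and all lower priorities */
                                for (j = d; j < NDEVICES; j++) device[j].enable = 0;
                                set_global_interrupt_enable();
                                /* higher priorities may now interrupt us */
                                service_device( d );
                                break;
                        }
                }
                /* restoring the enable bits before the return-from-interrupt
                   instruction is the subject of part c */
        }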

    c) For some devices, the device interrupt-enable bit is used to effectively turn off that device. For example, for the printer port, if there is no pending output to the printer, the printer interrupt-enable bit is reset. When an application enqueues a character to print, the printer's interrupt is enabled. How does this complicate the activity required in part b? (0.4 points)

    Prior to the return from interrupt, the global interrupt-enable bit needs to be reset, and then the device-specific interrupt-enable bits need to be restored to their previous values. If it were not for the complexity introduced here, the previous value would be "set", but the new complexity is that each device-specific interrupt-enable bit could be in an unknown prior state, so this prior state must be saved and restored.
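
    This refines the body of the sketch from part b: the prior enable bits are saved before they are reset, and restored only after the global bit has been reset again (names are invented as before):

        int saved_enable[NDEVICES];     /* prior states of the enable bits */

        /* before setting the global interrupt-enable bit */
        for (j = d; j < NDEVICES; j++) {
                saved_enable[j] = device[j].enable;
                device[j].enable = 0;
        }
        set_global_interrupt_enable();
        service_device( d );

        /* prior to the return from interrupt */
        reset_global_interrupt_enable();        /* reset the global bit first */
        for (j = d; j < NDEVICES; j++) device[j].enable = saved_enable[j];
        /* the return-from-interrupt instruction sets the global bit again */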

  3. Background: Starting around 1970, some computers were marketed with a machine instruction that caused the CPU to cease its fetch-execute cycle until a device asserted an interrupt request. Let's call this instruction sleep to distinguish it from halt, the usual name for an instruction that simply stopped the CPU. When a CPU is sleeping, its power consumption goes way down compared to when it is running its fetch-execute cycle.

    Consider a system with interrupt-driven input-output and a single-threaded application that communicates with devices through queues. Application access to queues is through predicates to test queue empty and queue full, plus routines to enqueue and dequeue data.

    a) In general, when should the sleep instruction be executed? Your answer should be given in terms of queue states, application activity and device activity. (0.5 points)

    Whenever the application would otherwise have nothing to do but wait for device activity. Specifically, once per iteration of each polling loop, after determining that the queue state (an empty input queue, or a full output queue) requires waiting.

    b) Consider the specific case of an application reading characters from an input queue. Where does the sleep instruction go in this code:

    	while (empty(inputqueue)) /* do nothing */;
    	ch = dequeue(inputqueue);
    
    (0.5 points)
    	while (empty(inputqueue)) sleep();
    	ch = dequeue(inputqueue);
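
    For completeness, a sketch of the matching input interrupt handler, with an invented read_data_register() standing in for the device's data register; the device's interrupt request is what ends the sleep, after which the loop re-tests empty():

        void input_interrupt_handler(void) {
                char ch = read_data_register();   /* the device was ready */
                if (!full( inputqueue )) enqueue( inputqueue, ch );
                /* the return from interrupt resumes the sleeping CPU; the
                   while loop then finds empty(inputqueue) false */
        }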