The handler must be called on the thread's stack, not on a separate stack reserved for signals. The reason for this is that we need the activation of the handler to persist on the thread's stack so that control can return to this invocation of the handler by longjmp when the thread is next scheduled. The thread mechanism does not guarantee that the threads will return from their handlers in a LIFO order, so the activation records for the handlers cannot be stacked on a signal stack shared by all threads.
Part B: Propose the code for the SIGALRM handler for a preemptive thread manager.
void handle_sigalrm( int i )
/* preempt current thread at end of timeslice */
{
    sigset_t oset;

    /* enter critical section */
    sigprocmask( SIG_BLOCK, &thread_related_signal_mask, &oset );
    if (_setjmp( current->state ) == 0) {
        thread_enqueue( current, &readylist );
        current = thread_dequeue( &readylist );
        /* exit critical section */
        sigprocmask( SIG_SETMASK, &oset, NULL );
        _longjmp( current->state, 1 );
    }
    /* exit critical section */
    sigprocmask( SIG_SETMASK, &oset, NULL );
}

In this code, the set of signals that must be blocked during the critical section is not documented; in general, every asynchronously raised signal for which the process specifies a handler, where that handler uses any thread primitives or data structures, must be blocked. Similar critical-section code must be added to many other thread-related routines to prevent signals from being received while manipulating the thread queue, manipulating the heap, or making thread-unsafe system calls.
A 128 megabyte memory divided into blocks of 64K requires no extra memory beyond the initial 32-bit free-list pointer to keep track of free blocks. We simply use the first 32-bit word of each free block to point to the next free block. The data structures Tanenbaum outlines are largely redundant except when the block size is variable. In that case, however, things get very messy. If block sizes are not uniform but are no larger than 64K, we need a data structure something like the one Tanenbaum describes, adding roughly two 32-bit words to each block, or 8 bytes per 64K block. 128M/64K = 2K, so we have about 2K blocks in our memory, and 2K blocks times 8 bytes of overhead per block is 16K bytes of overhead, total.
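The fixed-size free-list scheme can be sketched as follows, assuming a small statically allocated pool for illustration (the names pool_init, block_alloc, and block_free are hypothetical; a native pointer stands in for the 32-bit word in the text):

```c
#include <stddef.h>

#define BLOCK_SIZE 65536   /* 64K blocks, as in the problem */
#define NBLOCKS 8          /* small pool for illustration; the problem has 2K */

static char pool[NBLOCKS * BLOCK_SIZE];
static void *freelist = NULL;

/* link every block onto the free list, using the first word of
   each free block to hold the pointer to the next free block */
void pool_init(void) {
    freelist = NULL;
    for (int i = NBLOCKS - 1; i >= 0; i--) {
        void *block = pool + (size_t)i * BLOCK_SIZE;
        *(void **)block = freelist;
        freelist = block;
    }
}

void *block_alloc(void) {
    void *block = freelist;
    if (block != NULL)
        freelist = *(void **)block;  /* unlink the first free block */
    return block;
}

void block_free(void *block) {
    *(void **)block = freelist;      /* push back onto the free list */
    freelist = block;
}
```

Note that the pointer lives inside the free block itself, which is why no memory beyond the head pointer is needed.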
If we commit to 16K bytes of overhead, we could instead use it as a bit vector of 16K*8 = 128K bits; 128M/128K = 1K, so this bit vector is big enough to describe 128M of memory divided into 1K blocks.
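The bit-vector alternative can be sketched as follows, with one bit per 1K block; the sizes mirror the arithmetic above, and the names mark_used, mark_free, and is_used are illustrative:

```c
#include <stdint.h>

#define MEM_SIZE (128u * 1024 * 1024)  /* 128M of memory */
#define BLK 1024u                      /* 1K blocks */
#define NBITS (MEM_SIZE / BLK)         /* 128K bits, one per block */

static uint8_t bitmap[NBITS / 8];      /* 16K bytes of overhead, as computed */

void mark_used(unsigned b) { bitmap[b / 8] |= (uint8_t)(1u << (b % 8)); }
void mark_free(unsigned b) { bitmap[b / 8] &= (uint8_t)~(1u << (b % 8)); }
int  is_used(unsigned b)   { return (bitmap[b / 8] >> (b % 8)) & 1; }
```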
Do problem 17 on page 265.
Assume that the system starts a main memory cycle as it starts a TLB lookup, aborting the main memory cycle if there is a TLB hit. Then we always pay 1 nsec per memory reference, plus an extra 4 nsec for each TLB miss. If we want an average overhead of 2 nsec, we need a miss rate m such that 4m = 1; that is, misses cost us an additional 1 nsec on average. Thus we can miss 1/4 of the time, which implies that we hit 3/4 of the time.
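The solved equation can be expressed as a small helper (a sketch; the function name is illustrative), solving target = base + m * penalty for the miss rate m:

```c
/* average overhead = base + miss_rate * miss_penalty; solve for miss_rate */
double required_miss_rate(double base_ns, double penalty_ns, double target_ns) {
    return (target_ns - base_ns) / penalty_ns;
}
```

With base 1 nsec, penalty 4 nsec, and target 2 nsec, this yields a miss rate of 0.25, i.e. the 3/4 hit rate derived above.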
Do problem 29 on page 266.
a) NRU - replaces either page 1 or page 2 at random (if we assume the M bit is zero on both pages), or replaces page 2 (if the strange notation on page 1 means its M bit is 1).
b) FIFO - replaces page 3, the page loaded the longest time ago.
c) LRU - replaces page 1, the page referenced the longest time ago.
d) Second chance - replaces page 2, the oldest page that has not been recently referenced.
The system has 65536/4096 = 16 pages. The program takes 32768/4096 = 8 pages; the data takes 16386/4096 ≈ 4.0005 pages, but because we cannot divide pages between segments, we must allocate 5 pages. The stack takes 15870/4096 ≈ 3.87 pages, for which we must allocate 4 pages. That is 8 + 5 + 4 = 17 pages, one more page than the 16 pages on this machine!
If pages were 512 bytes, the arithmetic would be as follows: we have 65536/512 = 128 pages, the program takes 32768/512 = 64 pages, the data takes 16386/512 ≈ 32.004 pages, which rounds up to 33, and the stack takes 15870/512 ≈ 31.00 pages, which rounds up to 31. 64 + 33 + 31 = 128, so we have just enough pages.
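The rounding-up in both computations is ceiling division, which can be sketched as (the function name pages_needed is illustrative):

```c
/* number of whole pages needed to hold a segment of the given size:
   ceiling division, since a partial page still costs a full page */
unsigned pages_needed(unsigned bytes, unsigned page_size) {
    return (bytes + page_size - 1) / page_size;
}
```

Applied to the problem: with 4096-byte pages the segments need 8 + 5 + 4 = 17 pages; with 512-byte pages they need 64 + 33 + 31 = 128.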
A one-level TLB is entirely compatible with a two-level paged-segmented address space. We use the concatenated segment and page numbers as the key for the TLB lookup, and split them apart to walk the tables in RAM only when there is a TLB miss.
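The key formation can be sketched as a single shift: everything above the byte-in-page offset, i.e. the segment and page fields together, serves as the TLB lookup key. The 12-bit offset width here is an assumption for illustration (4K pages), as is the function name tlb_key:

```c
#include <stdint.h>

/* virtual address layout assumed: | segment | page | offset | */
#define OFFSET_BITS 12  /* assuming 4K pages */

/* the segment and page fields need not be separated for the TLB lookup;
   they are split only on a miss, to walk the segment and page tables */
uint32_t tlb_key(uint32_t vaddr) {
    return vaddr >> OFFSET_BITS;
}
```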