Exam 2: Final Solutions

Part of the assignments for 22C:169, Spring 2006
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Score Distributions

Final Exam Scores

Mean   = 15.75
_____________________________X_X_____X_X_____X_____X_____X_____X______________
5 . 6 . 7 . 8 . 9 . 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24

Total of Midterm plus Final

Mean   = 26.36
Median = 25.5                          X
_____________________________X_X_X_____X_____X_____X_______X__________________
6 . 8 . 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30. 32. 34. 36. 38. 40. 42. 44

Total Homework Scores

Mean   = 35.86                                     X                     X
_______________________X___________________________X_________X_______X___X_X__
6 . 8 . 10. 12. 14. 16. 18. 20. 22. 24. 26. 28. 30. 32. 34. 36. 38. 40. 42. 44

Total Scores

Mean   = 62.22
Median = 63.2
                                         X
_X___________________X_______X___________X___________X___X_______________X____
42. 44. 46. 48. 50. 52. 54. 56. 58. 60. 62. 64. 66. 68. 70. 72. 74. 76. 78. 80
 -         - B             B +         + -         - A             A +

Solutions

  1. Background: The Amoeba system uses a trapdoor function to compute the publicly advertised service number of each server from its private identity. A server must reveal its public number to all potential clients, but it must guard its private identity. An Amoeba-like system can be made as secure as desired by increasing the number of bits in the identity. Newly launched servers typically contact a unique-integer server to get their secret identity, and then register their public identity with some kind of registry server so that others can find them.

    a) What damage could be done by an adversary who discovered the server's private identity? (2 points)

    By merely registering using the server's private identity, without ever accepting any incoming messages, the adversary could deny clients access to that server. If the adversary actually accepts the incoming service requests, the adversary can impersonate the server.

    4 said impersonate, 2 said deny access, few said both. The primary problem here was that students confused the two distinct uses that Amoeba made of trapdoor functions (one use is for delivery of messages to a server, the subject of this question, while the other is for server-side authentication of object references once the server gets the message).

    b) Assume that the unique integer server issues integers in sequence, such that if it just issued integer i, the next integer it issues will be f(i), where f is some known deterministic function that visits all (or at least most) of the applicable range of integers, in sequence. How can an adversary exploit this in order to attempt to gain the secret identities of some server or servers? (2 points)

    Consider an adversary who periodically gets numbers and monitors the registry. If the adversary most recently got the number p, and the next number n is not f(p), then some other client got f(p). If that client was a server, it would be likely to be entered in the registry soon after getting this number. Therefore, examination of new entries in the registry should quickly locate a registry entry with public ID t(f(p)) -- where t is the trapdoor function. (Defense against this attack is possible; for example, draw lots of numbers from the unique-integer server and discard most of them.)

    This was a difficult problem. In addition to confusion of the two uses of trapdoor functions, students confused f with the trapdoor function. The possibility of reverse engineering f is of limited value; the usual implementation of f(x) for unique number servers is x+1, and it is rare to try to keep it a secret. Because f has the potential to visit almost all of the integers, exhaustive search is of extraordinarily limited value.

  2. Background: Consider two software components, A and B, both of which make use of objects of class C. For example, A may need a push-down stack and B may also need a push-down stack (2 different instances of the same class). A top level security requirement that we wish to enforce is that the components A and B only communicate through some designated interface I, and we want to make very certain that there are no covert channels between A and B. Our interest, therefore, focuses on the implementation of class C.

    a) Suppose you were presented with a Java or C++ implementation of class C; what would you look for in order to assure that C did not include any covert channels that A and B could abuse? (Note that it may help to think about the lifetimes and scope rules that apply to different kinds of variables. Some kinds of variables pose greater threats than others.) (2 points)

    If there are no global variables or global objects referenced from within the implementation of class C, and if there are no static or class variables declared within C or within its methods, it would not be possible for operations on one object in class C to covertly communicate with operations on another object of the class. If any of these are present, their use must be audited with great care.

    2 did well, 4 only mentioned global variables, 1 only mentioned class or static variables. 1 focused on public components of objects, but these pose no real problem, since any public component can be replaced by a private component plus two methods, one to inspect that component and the other to set the value of that component.

    b) If you had to examine the Amoeba implementation of the class C, what would you look for in order to assure that C did not include any covert channels that A and B could abuse? (Note that each Amoeba server implements a class, including all details of the storage of objects of that class. A typical Amoeba server repeatedly awaits a request from a client, performs the requested operation on the required object, and then returns the results to that client.) (2 points)

    In Amoeba, servers iteratively await messages containing the parameters to a method invocation, carry out that method on the indicated object, and then send the results as a message. Variables local to one iteration of this loop are safe (corresponding to local variables of a method invocation in C++ or Java), while variables with lifetimes that span multiple loop iterations correspond to global or class static variables. In addition, the code must look only at the one object indicated in the request, and not at any other entries in the object table.

    1 did well. 3 gave answers indistinguishable from their answers to part a), failing to suggest any understanding of what makes the Amoeba object model different from that of C++ or Java.

  3. Background: System A offers you password-protected files, while system B requires that you log in using a name and password; once you've done so, you never need to type another password to access a file.

    a) Explain how these systems differ with regard to their authentication mechanisms. (1 point)

    System A does not appear to use any authentication mechanism, while system B relies on authenticated user identity as the basis of all access control decisions.

    2 did well here, 3 were confused, offering various answers worthy of partial credit. In many cases, students wandered off into various complicating assumptions that were not even vaguely suggested by the plain wording of the problem statement, such as the assumption that system A must also have had some kind of authentication mechanism or some kind of keychain mechanism to keep track of passwords.

    b) Explain how these systems differ in terms of the possibility of constructing an access matrix documenting which users have access to which files. (1 point)

    With system A, you would have to question each user to find all the passwords they knew, and then find out the passwords of all files in order to build an access matrix. Users should be suspicious of anyone attempting such an exercise, since they have to give away all their passwords in the process. With system B, in contrast, there must be some kind of access matrix inside the system if there are any access restrictions, and disclosure by the system of the access matrix is probably common (see, for example, the behavior of ls -l in Unix).

    2 did well here, 3 got partial credit. The most common problems involved either the assumption that system B used some specific mechanism (access control lists or capability lists, typically), or the assumption that system A somehow allowed you to intuit what passwords each user knew, without asking them.

    c) Explain how these systems differ in the extent to which they are likely to rely on cryptography for information protection. (1 point)

    If system B uses some implementation of the access matrix, it is unlikely that it uses cryptography for access control restrictions. In contrast, system A, with password protection on each file, might well be using those passwords as cryptographic keys. Thus, system A could be implemented relying on cryptography, while there is no specific need for cryptography in the implementation of system B.

    1 did well here, 4 had partial credit for drawing a reasonable inference but not finishing the story. Typically, this involved having made specific assumptions that allowed conversion of the "could be" and "no specific need for" into certain sounding "is" and "does not use".

  4. Background: The Whatchamacalit Certified Security Device (WCSD) contains a microprocessor, RAM and flash EEPROM, as well as a compact flash slot on the side of it and various other bits and pieces, including at least one pixel of output and a keyboard with at least one key. This machine executes firmware directly from the internal flash EEPROM.

    The WCSD firmware allows upgrades as follows: When it detects the insertion of a compact flash card, it looks for a DOS-format directory. If it finds a directory, it searches for files with names of the form "upgrade-n". For each such file, if the file does not already match the contents of block n of the internal flash EEPROM, it upgrades block n. The upgrade is done by flash erasing the contents of the block, loading that file as the new contents of that block, and then restarting the firmware.

    a) Normally, code is executed directly from the internal flash EEPROM, but it is essential that some parts of the code be copied to RAM before execution. What code must be executed from RAM and cannot be safely executed from the flash EEPROM? Be specific! Do not suggest moving any code to RAM that does not absolutely need to be copied to RAM. (3 points)

    The code to erase and update a block of flash EEPROM must execute from RAM, just in case the block being erased and then updated is the block containing the code to erase and update.

    5 did well here, 2 had small problems, 1 had larger problems. Everyone got at least some credit on this part, so it was one of the easier questions on the exam.

    b) Suppose, by accident, you let Mr. Adversary insert an unknown compact flash card into the system while it was running. You don't know if Mr. Adversary's card performed an upgrade or not. Explain the patch file or files that you would put on a compact flash card in order to try to produce output that convinces you that the flash EEPROM contains no code left by Mr. Adversary. (Note that the sum of your patches is permitted to completely overwrite every block of the EEPROM in the process of doing its job, so long as you can patch it yet again to put the correct code in place.) (3 points)

    Load a patch file that fills as much of the flash EEPROM as possible with controlled but random-appearing data, and then perform a computation on all of the contents of the flash EEPROM that requires the use of all of RAM. At the end of this computation, use the output device to signal (perhaps using Morse code) the result of this computation. The only non-random-looking code that remains in flash EEPROM is the code to load new patches and the code to compute the tests described here. Doing less than the above allows the adversarial code to hide itself, for example in RAM, while it emulates the correct behavior of your test. It is also important that the adversary not know the correct output in advance, so the random-looking data should be chosen at the last moment, ensuring that the correct output is not known until then.

    1 did well, 2 suggested some kind of hash -- but did not appear to understand the importance of using all of RAM and filling all of RAM and EEPROM or the importance of a challenge that Mr. Adversary cannot guess in advance. 3 suggested zeroing and reloading all of flash EEPROM, but how do you know that you really zeroed it? Mr. Adversary's code could pretend to do this.

    c) The above question said "produce output" very deliberately, suggesting that the nature of the output is the only indication required to complete the proof. How can measuring the time taken to produce the required output increase your confidence? (This question is inspired by the recent Doonesbury comic strips about the current source versus the voltage source.) (2 points)

    If you time the test and know how long it ought to take, you can detect the fact that Mr. Adversary has installed an emulator in your system. This makes it less important to fully utilize all of RAM.

    3 answered this well. 3 more said that they should be suspicious if the system was not fast.

    d) What should the firmware have done in order to protect you from Mr. Adversary? In answering this, consider the possibility that Mr. Adversary may have worked for you in the past and he may even have had the job of installing legitimate WCSD patches. Obviously, your next firmware upgrade ought to install the functionality that you describe in your answer to this question. (3 points)

    The firmware should not have permitted arbitrary reloading of firmware. At minimum, the firmware should have requested permission to install upgrades, so that you could tell it was about to happen. Better yet, the firmware should require a password that depends on both the version being upgraded from and on the version being upgraded to. It would also help if the upgrade mechanism itself were not upgradable, so that you could always install new upgrades to replace suspicious code.

    2 did well here, 2 suggested the use of a password to install changes (good, but is it system wide, dependent only on the change being installed, dependent on the system being changed, or what?). 2 suggested digital signatures on the upgraded code, good only for preventing accidental upgrades with nonsense -- Mr. Adversary could easily sign his code, couldn't he? 2 more suggested lockouts of some kind on the upgrade mechanism -- something equivalent to asking permission.

  5. Background: Consider the anon command from the midterm and assignment 6.

    a) Suppose anon maintains a log of all uses made of it, recording the real and effective user ID of the caller as well as the argument list. What security threats does this log defend against? (1 point)

    The log permits investigation of who used the anon command when, and it gives an idea of what it might have been used for. This allows investigation of attacks that might be launched against the system making use of the anon command.

    This might have been the easiest question on the exam. Only 1 had trouble.

    b) Suppose anon maintains a log of all users who call it. What potential uses of anon remain valuable in the presence of such a log? (1 point)

    Anon remains useful as a tool for creating a protected sandbox around the execution of untrusted code. In this context, the command name anon is misleading, since the command's user is no longer anonymous.

    This question was also easy, although somewhat harder than part a). At least 1 student was distracted into focusing on attacks that remain possible instead of focusing on the uses of the tool.

    c) Suppose I have been given an untrustworthy executable file and told to keep it very private. The file is called px. I don't want to reveal px to any other system user, I do want to execute it, but I cannot trust it. Therefore, I want to use anon to run px. Explain why this poses a problem in the context of the anon command as we have discussed it up to this point. (3 points)

    The problem is, the program anon runs under the user ID ANON and therefore cannot open my private executable file px since I own that file and the access rights are --x------. Anon will only be able to execute px if I set the access rights to --------x, but if I do that, anyone can execute it.

    2 did well. 5 earned some credit for observing that the log proposed in part a) would reveal the name of the file px to the system administrators. This is true, but revealing the name of a file is a very minor risk compared to revealing the contents of that file or allowing someone else to execute that file. Furthermore, it is easy to defend against someone gaining any useful information from the name of the file. Just rename it before using it, then rename it again immediately after use.

    d) Evaluate this model for protecting the anon command from attack by applications launched by the anon command: Two new user IDs are required, ANON and ANON1. The anon command that the user runs is in a file owned by ANON with access rights --s--s--x. This program then uses execve() to run ~ANON/anon1, where the home directory of ANON, ~ANON has the access rights --x------ so that only ANON can launch programs from this directory. The file ~ANON/anon1 is owned by ANON1 and has the access rights --s--s--x. This is the program that actually launches the application requested by the original user. (3 points)

    The user's application runs with user ID ANON1, and is therefore unable to make changes to the anon command: it would need owner ID ANON to change the access rights on that command before it could change the command itself. The user's application, running as ANON1, does have the user ID needed to damage the file ~ANON/anon1, but it cannot get there, because it cannot traverse the directory ~ANON. Therefore, programs run under the anon command cannot attack the anon command.

    1 did well here. 1 was worried about the possibility that the program would revert to the real user ID, but we discussed that already -- anon, or in this case anon1, must use setuid() and setgid() to erase all evidence of the real user ID before launching the user's application, so it runs with ANON1 as its effective, real and saved user ID. It can't do a thing with chown() in that case. 3 held (for partial credit) that anon1 was still vulnerable, apparently because they did not catch how the placement of the file anon1 in the directory ~ANON protected it.