This page summarizes the lectures and lists the suggested reading assignments. The summary is deliberately brief; its purpose is to help students and the instructor recall what was covered.

## Week 1

• Meeting 1: We introduced Turing Machines as a formal model for algorithms. We motivated the need for such a model by the type of questions we want to address in this course. We described in full detail Turing Machines for two problems -- (a) to determine if a given string of 0's and 1's has a 1 or not, and (b) to determine if a given string of 0's and 1's is a palindrome. The material in this meeting was mostly from Section 1.2 of the text.
• Meeting 2: Turing Machines Compute Functions. Running Time. The impact of restricting the alphabet -- Claim 1.5 in the text. We saw a slightly detailed version of that proof sketch.
• Reading for the week: Section 1.2 and 1.3 of the text.
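The "does the string contain a 1?" machine from Meeting 1 is small enough to simulate in software. Below is a minimal sketch (our own rendering, not the text's): a one-tape machine that scans right, accepts on the first 1, and rejects when it falls off the input onto a blank.

```python
def contains_a_one(tape: str) -> bool:
    """Simulate a one-tape TM with states {scan, accept, reject}."""
    state, head = "scan", 0
    while state == "scan":
        symbol = tape[head] if head < len(tape) else "_"  # "_" is the blank
        if symbol == "1":
            state = "accept"
        elif symbol == "0":
            head += 1          # move right, stay in state "scan"
        else:                  # blank: reached the end without seeing a 1
            state = "reject"
    return state == "accept"
```

The palindrome machine from the same meeting has the same shape but shuttles the head back and forth, which is why it takes quadratic time on a single tape.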

## Week 2

• Meeting 1: We considered the impact of restricting the number of tapes. We argued that for any function from 0-1 strings to {0,1} computed by a TM M using k tapes, there is a single-tape TM that computes the same function, with at most a quadratic slowdown in running time. This is Claim 1.6 of the text. We next considered encoding TMs as 0-1 strings, described a specific such encoding, and introduced the idea of a universal TM from Section 1.4 of the text.
• Meeting 2: We described how the Universal TM works, establishing the relaxed version of Theorem 1.9. We then demonstrated a function UC that is not computable by any Turing Machine. This is Theorem 1.10 in the text.
• Reading: Section 1.3 and 1.4 of the text.

## Week 3

• Meeting 1: We showed that there is no Turing Machine to compute the function HALT -- this is Theorem 1.11. The proof is an example of the use of reductions, in a spirit similar to NP-completeness reductions in an Algorithms course. We then showed that some other functions about Turing Machine behavior are also not computable. These examples emphasize that many questions about program behavior don't admit algorithms (that always work).
• Meeting 2: Two more examples of uncomputable functions. One is to determine whether a set of polynomial equations with integer coefficients has an integer solution. The other is the Post Correspondence Problem. We just stated that these functions are uncomputable without proof. We introduced the class P of languages that are polynomial time decidable. There are some subtle points about this definition -- how problem inputs are encoded as 0-1 strings, how functions that output something different from 0 and 1 are handled, etc. We talked about these subtleties.
• Reading: Sections 1.5 and 1.6 of the text.
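The diagonal argument behind Theorem 1.11 can be made concrete in any programming language. The sketch below assumes a hypothetical decider `halts(src, inp)` that returns True iff the program text `src` halts when run on input `inp`; feeding `diagonal` its own source then forces `halts` to answer wrongly, so no such decider can exist.

```python
def halts(src: str, inp: str) -> bool:
    """Hypothetical halting decider -- cannot exist, by Theorem 1.11."""
    raise NotImplementedError("no such algorithm exists")

def diagonal(src: str) -> None:
    """Run on its own source text, this contradicts any claimed `halts`."""
    if halts(src, src):   # if `halts` predicts we halt on ourselves...
        while True:       # ...loop forever,
            pass
    # ...otherwise halt immediately: either way `halts` answered wrongly.
```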

## Week 4

• Meeting 1: Looked at the example of Gaussian Elimination to point out that, unlike the analysis of algorithms in an algorithms course, the Turing machine will have to explicitly do the bit operations on numbers. Running time needs to account for this. We introduced the class NP from Chapter 2, and discussed why the languages composites, subset sum, and independent set are in NP. For other examples, see Section 2.1 of the text.
• Meeting 2: We show that P is contained in NP which is contained in EXP -- Claim 2.4 of the text. We then introduced Non-Deterministic Turing Machines, and gave two examples (NDTMs for factoring and subset-sum) that illustrate the power of non-determinism. We finished with a Theorem that characterizes NP as the set of languages that are decided by an NDTM in polynomial time.
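Subset sum's membership in NP comes down to a polynomial-time certificate check: the certificate is the subset itself. A sketch (function and parameter names are ours):

```python
def verify_subset_sum(numbers, target, certificate):
    """Check, in time polynomial in the input length, that `certificate`
    (a list of indices into `numbers`) selects distinct entries that
    sum to `target`."""
    if len(set(certificate)) != len(certificate):
        return False                                  # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                                  # indices must be in range
    return sum(numbers[i] for i in certificate) == target
```

Trying this verifier on all 2^n candidate certificates is exactly the brute-force search that puts NP inside EXP in Claim 2.4.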

## Week 5

• Meeting 1: We explained in detail Problem 3 of Homework 2. We finished the proof of Theorem 2.6 that characterizes NP in terms of NDTMs. We introduced the important definition of poly-time reducibility -- Definition 2.7.
• Meeting 2: We introduced the notion of NP-hard and NP-complete languages, and argued Theorem 2.8. We showed that the language TMSAT is NP-complete. This is our first example of an NP-complete language. We introduced CNF-SAT from Section 2.3.

## Week 6

• Meeting 1: We showed how to reduce graph 2-colorability to CNF-SAT. We then showed Claim 2.13, which says that for any boolean function on k variables, there is a corresponding CNF formula with at most 2^k clauses. We began the proof of Lemma 2.11, that SAT is NP-hard. Our proof is different from the textbook's.
• Meeting 2: We completed the proof that SAT is NP-hard.
• Reading: Section 2.3. The proof in the book that SAT is NP-hard uses the idea of oblivious TMs. This reduces the number of variables needed in the reduction.
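The 2-colorability reduction from Meeting 1 fits in a few lines: one boolean variable per vertex (True = one color, False = the other), and for each edge two clauses forbidding equal colors. The clause representation below is our own: a literal is a pair (variable, polarity).

```python
def two_coloring_to_cnf(edges):
    """Reduce graph 2-colorability to CNF-SAT.

    Returns a CNF as a list of clauses; each clause is a list of literals
    (v, True) for x_v or (v, False) for NOT x_v. The CNF is satisfiable
    iff the graph is 2-colorable.
    """
    cnf = []
    for u, v in edges:
        cnf.append([(u, True), (v, True)])    # forbid both endpoints False
        cnf.append([(u, False), (v, False)])  # forbid both endpoints True
    return cnf
```

Note the output size is linear in the number of edges, which is what makes this a poly-time reduction in the sense of Definition 2.7.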

## Week 7

• Meeting 1: We identified some crucial features of the reduction -- see Remark 2 in Section 2.3.6. We discussed Section 2.5, Decision vs. Search, and in particular proved Theorem 2.18.
• Meeting 2: Review by Ruoyu, our TA.

## Week 8

• Meeting 1: We covered the material in Section 2.6 on coNP, EXP, and NEXP. We stated Theorem 2.22 (P = NP implies EXP = NEXP).
• Meeting 2: We finished the proof of Theorem 2.22. We proved a version of the Time Hierarchy Theorem 3.1 from Chapter 3.
• Reading: 2.6, 2.7, and 3.1.

## Week 9

• Meeting 1: Ladner's Theorem from Section 3.3: If P is not equal to NP, there are languages in NP that are neither in P nor NP-complete.
• Meeting 2: Space Complexity from Chapter 4.

## Week 10

• Meeting 1: Midterm
• Meeting 2: Review of the number of configurations of an S(n)-space NDTM. Proof that if S(n) is Omega(log n), then SPACE(S(n)) and NSPACE(S(n)) are contained in DTIME(2^{O(S(n))}) (Theorem 4.2). Discussion of logspace (the class L) and non-deterministic logspace (the class NL), and why PATH is in NL. (This material is from Section 4.1.2.)
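The proof of Theorem 4.2 is just breadth-first search on the configuration graph, which has at most 2^{O(S(n))} nodes. The same plain BFS, run on an ordinary input graph, decides PATH in deterministic polynomial time; NL membership instead uses a log-space machine that guesses the path one vertex at a time. A BFS sketch:

```python
from collections import deque

def reachable(adj, s, t):
    """BFS reachability: adj maps each node to a list of out-neighbours.
    Visits each node once, so time is polynomial in the graph size."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False
```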

## Week 11

• Meeting 1: Brief mention of space hierarchy theorem (Theorem 4.8). Discussion of quantified boolean formulae, and the proof that TQBF is PSPACE-complete (Theorem 4.13).
• Meeting 2: In the last meeting, we showed that TQBF is in PSPACE. In this meeting, we showed that it is PSPACE-hard.
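The PSPACE upper bound for TQBF is a recursion on the quantifier prefix that reuses space across the two branches; since the recursion depth equals the number of variables, the space used is polynomial. A sketch, using our own formula representation (a prefix of (quantifier, variable) pairs over a CNF matrix whose literals are (variable, polarity) pairs):

```python
def eval_qbf(prefix, cnf, assignment=None):
    """Evaluate a quantified boolean formula.

    prefix: list of ('A', v) or ('E', v) pairs, outermost first.
    cnf: the matrix, a list of clauses of (variable, polarity) literals.
    Space usage is dominated by the recursion depth, len(prefix).
    """
    assignment = dict(assignment or {})
    if not prefix:  # all variables bound: evaluate the matrix
        return all(any(assignment[v] == pol for v, pol in cl) for cl in cnf)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, cnf, {**assignment, v: b})
                for b in (False, True))
    return all(branches) if q == 'A' else any(branches)
```

The time cost is exponential, but that is allowed: only the space is being bounded.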

## Week 12

• Meeting 1: Savitch's Theorem (Theorem 4.14) and the formula game.
• Meeting 2: PSPACE-completeness of the formula game and the generalized geography game (a handout for this was passed out in class, as this is not from the textbook).
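The heart of Savitch's Theorem (Theorem 4.14) is the midpoint recursion: u reaches v in at most 2^i steps iff some w has u reaching w and w reaching v in 2^{i-1} steps each. Recursion depth is logarithmic in the step bound, which is where the O(log^2 n) space for PATH comes from. A sketch of the recursion itself (it deliberately trades time for the low space bound it illustrates):

```python
def savitch_reach(adj, nodes, u, v, steps):
    """Can u reach v in at most `steps` edges?

    Recursion depth is O(log steps); each frame stores only (u, v, steps)
    and the current midpoint, mirroring the space analysis in the proof.
    """
    if u == v:
        return True
    if steps <= 1:
        return v in adj.get(u, [])
    half = (steps + 1) // 2
    return any(savitch_reach(adj, nodes, u, w, half)
               and savitch_reach(adj, nodes, w, v, steps - half)
               for w in nodes)
```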

## Week 13

• Meeting 1: Circuits, circuit families and how they decide languages, examples.
• Meeting 2: Theorem 6.6, where we show that any language in P is recognized by a polynomial-size circuit family. The example of UHALT, a language that is undecidable but is recognized by a polynomial size circuit family.
• Reading: This week's lectures came from Section 6.1 of the text.
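Meeting 1's notion of a circuit deciding an input amounts to evaluating gates in topological order. A sketch, with a gate format of our own: wires 0..n-1 carry the input bits, and each gate appends one new wire.

```python
def eval_circuit(gates, inputs):
    """Evaluate a boolean circuit on `inputs`.

    gates: list of (op, i, j) with op in {'AND', 'OR', 'NOT'}, listed in
    topological order; i and j index earlier wires. Wire k is input k for
    k < len(inputs), else the output of gate k - len(inputs). The value of
    the last gate is the circuit's output.
    """
    wires = list(inputs)
    for op, i, j in gates:
        if op == 'AND':
            wires.append(wires[i] and wires[j])
        elif op == 'OR':
            wires.append(wires[i] or wires[j])
        else:  # 'NOT' ignores its second operand
            wires.append(not wires[i])
    return wires[-1]
```

A circuit family decides a language by providing, for each input length n, one such gate list; Theorem 6.6 says languages in P admit families of polynomial size.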

## Week 14

• Meeting 1: Randomized Algorithms: Quicksort, and the problem of checking, given three n by n matrices A, B, and C, if AB = C. Such algorithms can be modeled by Probabilistic Turing Machines (PTMs). The notion of a PTM, its running time on an input, expected running time, and probability of accepting/rejecting its input.
• Meeting 2: The classes BPTIME(T(n)), RTIME(T(n)), ZTIME(T(n)), and the corresponding poly-time classes BPP, RP, and ZPP. The straightforward containments: RP subset of BPP, coRP subset of BPP. The slightly less straightforward ZPP subset of (RP intersect coRP). How does one show that (RP intersect coRP) is contained in ZPP?
• Reading: Section 7.1. Our examples are simpler than, and different from, the ones in Section 7.2. Section 7.3. Though we didn't cover Section 7.4 in detail, our Q&A addressed the robustness of our definitions and ideas for how to decrease the error in acceptance probabilities. Some of the probability calculations for the latter are outside the scope of this course; those who have studied randomized algorithms may be familiar with them.
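The matrix-product check from Meeting 1 is Freivalds' algorithm: pick a random 0-1 vector r and compare A(Br) with Cr, costing O(n^2) per trial instead of the full product. If AB != C, one trial wrongly passes with probability at most 1/2, so k independent trials err with probability at most 2^{-k}. A sketch (function names are ours):

```python
import random

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def probably_equal(A, B, C, trials=20):
    """Freivalds' check that AB == C; one-sided error.

    A False answer is always correct (a witnessing r was found); a True
    answer is wrong with probability at most 2**-trials when AB != C.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True
```

The one-sided error makes this a natural example for the RP/coRP-style definitions from Meeting 2.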

## Week 15

• Meeting 1: Proof that (RP intersect coRP) is contained in ZPP. Proof that any language in BPP has a polynomial-size circuit family.
• Meeting 2: Review and Course Evaluations
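The Meeting 1 proof that (RP intersect coRP) is contained in ZPP can be sketched as a Las Vegas loop: alternate the RP algorithm (whose acceptances are always correct) with the coRP algorithm (whose rejections are always correct) until one of them produces a certified answer. On any input, each round succeeds with probability at least 1/2, so the expected number of rounds is constant. All names below are ours:

```python
def zpp_decide(rp_run, corp_run, x):
    """Zero-error decision from two one-sided-error algorithms.

    rp_run(x): accepts a yes-instance with prob >= 1/2, never accepts a
    no-instance. corp_run(x): rejects a no-instance with prob >= 1/2,
    never rejects a yes-instance. Loops until an answer is certified;
    expected number of iterations is O(1).
    """
    while True:
        if rp_run(x):
            return True    # an RP acceptance is never a false positive
        if not corp_run(x):
            return False   # a coRP rejection is never a false negative
```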