https://www.youtube.com/watch?v=xL0kNw5TudI&list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX&index=17 Transcript 00:04 All right. So let's get started. 00:06 [NOISE] So today's lecture is going to be on logic. 00:11 Um, to motivate things, 00:13 I wanna start with, uh, hopefully an easy question. 00:16 So if X_1 plus X_2 is 10, 00:20 and X_1 minus X_2 is 4, 00:22 what is X_1? 00:25 Someone shout out the answer once you figure it out. 00:28 7. So how did you come to get 7? 00:34 Yeah. You do the algebra thing that you learned a while ago, right? 00:39 [NOISE] Um, so what's the point of this? 00:41 Um, so notice that this is, uh, 00:44 a factor graph where we have two variables, 00:47 and they're connected by two constraints or factors. 00:49 And you could in principle go and use backtracking search to try 00:53 different values of X_1 and X_2 until you eventually arrive at the right answer. 00:58 But clearly, this is not really an efficient way to do it. 01:01 And somehow in this problem, 01:03 there's extra structure that we can leverage 01:06 to arrive at the answer in a much, much easier way. 01:09 And this is going to be the poster child of what we're 01:12 gonna explore today and in next Monday's lecture: 01:16 how you can do logical inference to 01:19 arrive at answers much faster than you might have otherwise. 01:22 [NOISE] So we've arrived at the end of the- of the class, um, 01:30 and I wanna just, uh, 01:31 [NOISE] reflect a little bit on what we've learned, 01:34 and maybe this will also be a good review for the exam. 01:38 Um, so in this class, 01:39 we've based everything on the modeling, 01:44 um, inference, learning paradigm. 01:47 And the picture you should have in your head is this. 01:50 Abstractly, we take some data, 01:53 we perform some learning on it, 01:55 and we produce a model. 01:56 And using that model, we can perform inference, 01:59 which looks like taking in a question and returning an answer. 02:04 So what does this look like for 02:06 all the different types of instantiations we've looked at? 02:08 So for search problems, 02:10 the model is, 02:12 is a search problem, and the inference asks the question: 02:15 what is the minimum cost path? 02:17 Um, in MDPs and games, 02:19 we ask the question: what is the maximum value policy? 02:23 In CSPs, we ask the question: [NOISE] what is the maximum weight assignment? 02:28 And in Bayesian networks, 02:29 we can answer probabilistic inference queries of the form: 02:32 what is the probability of some query variables conditioned on some evidence variables? 02:37 And for each of these cases, 02:39 we looked at the modeling, 02:41 we looked at the, the inference algorithms, 02:43 and then we looked at different types of, uh, 02:46 learning procedures. Going backwards: maximum likelihood, 02:49 uh, we looked at, 02:51 uh, various reinforcement learning algorithms, 02:53 we looked at structured perceptron, and so on. 02:55 And hopefully, this, this kind of sums up, um, 02:59 the worldview that CS221 is trying to impart: 03:03 that there are these different, you know, components, 03:07 um, and depending on what kind of modeling you choose, 03:09 you have different types of, 03:12 um, 03:13 inference algorithms and learning algorithms that emerge. 03:17 Okay? So we looked at several modeling paradigms, 03:22 um, roughly broken into three categories. 03:25 The first is state-based models: 03:27 search problems, MDPs, and games.
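As an editorial aside: the "extra structure" the speaker is pointing at is just the algebra, which finds X_1 in one step with no search over assignments. A minimal worked version of that derivation:

```latex
% Adding the two constraints eliminates X_2:
(X_1 + X_2) + (X_1 - X_2) = 10 + 4
\;\Longrightarrow\; 2X_1 = 14
\;\Longrightarrow\; X_1 = 7
```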
03:28 And here, um, the, 03:31 the way you think about modeling is in terms of states 03:34 as nodes in a graph, and actions that take you between different states, um, 03:39 which incur either a cost or give you some sort of reward, 03:43 and your goal is just to find paths, 03:44 or contingent paths, or policies, 03:47 um, in these graphs. 03:49 Then we shifted gears to talk about variable-based models, 03:52 where instead we think about variables, um, 03:55 and factors that, 03:59 um, constrain these variables to, 04:01 uh, take on certain types of values. 04:04 Um, so in today's lecture, 04:08 I'm gonna talk about logic-based models. 04:10 So we're gonna look at propositional logic and first-order logic, which are 04:13 two different types of logical languages or, um, models. 04:17 And, um, we're gonna instead think about logical formulas and 04:21 inference rules, which is going to be another way of kind 04:24 of thinking about, uh, modeling the world. 04:28 Historically, logic was actually the dominant paradigm in AI, 04:33 um, before the 1990s. 04:35 So it might be hard to kind of believe now. 04:37 But just imagine the amount of excitement that is going into deep learning today. 04:41 An equal amount, uh, 04:43 of excitement was going into logic-based methods in, 04:47 in AI in, uh, 04:48 um, in the '80s and before that too. 04:51 Um, but there were kind of two problems with logic. 04:54 One is that, um, 04:55 logic was deterministic, so it didn't handle uncertainty very well, 05:00 and [NOISE] that's why 05:01 probabilistic inference and other methods were developed to address this. 05:05 And it was also rule-based, which didn't allow you 05:08 to naturally ingest a large amount of data to, 05:11 you know, uh, guide behavior, 05:13 and the emergence of machine learning has addressed this. 05:16 Um, but one strength that has kind of been left on the table is the expressiveness. 05:21 And I kind of emphasize that logic, as you will see, gives you, 05:27 um, the ability to express very complicated things in a very, 05:32 you know, succinct way. 05:34 And that is kind of the main point of, uh, 05:38 logic which I really want everyone to, um, 05:40 kind of appreciate, and hopefully this will become clear through examples. 05:45 Um, as I motivated on the first day of class, 05:49 the reason- uh, one, 05:50 one good way to think about why we might want logic is: imagine 05:56 you wanna lie on a beach, um, 05:58 and [NOISE] you want your assistant to be able to do things for you, but, um, 06:02 hopefully it's more like, 06:04 um, Data from Star Trek rather than Siri. 06:07 Um, you wanna take an assistant and you want to be able to at 06:09 least tell it information and ask it questions, 06:13 and have, um, these questions actually be answered in a way that, eh, 06:18 reflects the information that you've, uh, [NOISE] told it. 06:20 Um, so just as a kind of brief refresher from the first day of class, 06:24 I showed you this demo where you can talk to the system, 06:28 [NOISE] uh, um, and say things and ask a question. 06:32 So a small example is, um, 06:34 let's say all students like CS221. 06:38 It's great and teaches them important things. 06:41 Um, and, uh, Alice does not like CS221. 06:45 And then you can ask, 06:47 um, you know, "Is Alice a student?" 06:50 And the answer should be 06:51 "No", because it can kind of reason about this one.
06:54 [NOISE] Um, and just to, 06:58 uh, dive under the hood a little bit, um, 07:00 inside, it has some sort of knowledge base that contains the information, 07:04 um; we'll come back to this in a second. 07:09 Okay? Um, so this, this, uh, 07:13 system needs to be able to digest 07:15 heterogeneous information in the form of natural language, 07:19 you know, uh, utterances, 07:20 and it has to reason deeply with that information. 07:23 So it can't just do, you know, 07:24 superficial pattern matching. 07:27 So I've kind of suggested natural language as an interface to this. 07:33 Um, and natural language is very powerful because, um, 07:37 I can stand up here and use natural language to give 07:40 a lecture, and hopefully you guys can understand at least some of it. 07:44 Um, and- but, you know, 07:47 let's, let's go with natural language for now. 07:48 So here- here's an example of how you can draw inferences using natural language. 07:53 Okay? So a dime is better than a nickel. 07:56 Um, a nickel is better than a penny. 07:59 So therefore, a dime is better than a penny, okay? 08:01 So this seems like pretty sound reasoning. 08:04 Um, so what about this example: 08:06 a penny is better than nothing, 08:08 um, nothing is better than world peace. 08:10 [inaudible]. 08:13 Therefore, a penny is better than world peace, right? 08:17 Okay. So something clearly went, uh, wrong here. 08:21 And this is because 08:22 natural language is kind of slippery. 08:25 It's not very precise, 08:26 which makes it very easy to kind of make these, um, these mistakes. 08:31 Um, but if we step back and think about what is the role of natural language, 08:38 it's really- language itself is a mechanism for expression. 08:43 So there are many types of languages. 08:44 There's natural languages, um, 08:47 there's programming languages, which all of you are, you know, familiar with. 08:51 Um, but we're going to talk about 08:53 a different type of language called logical languages. 08:56 Um, like programming languages, 08:59 they're gonna be formal. 09:00 So we're gonna be absolutely clear about what we mean 09:02 when we have a statement in a logical language. 09:05 Um, but, and like natural language, 09:09 it's going to be, um, declarative. 09:12 Um, and this is maybe a little bit harder to appreciate right now, 09:16 but it means that, 09:18 uh, there's kind of a more of a, 09:20 um, one-to-one isomorphism between logical languages and natural languages, 09:23 as compared to, um, 09:25 programming languages and natural language. 09:29 Okay. Um, so in a logical language, 09:33 we want to have two, uh, properties. 09:37 First, a logical language should be, uh, 09:40 rich enough to represent knowledge about the world. 09:43 Um, and secondly, it's not 09:46 sufficient just to represent the knowledge, because, you know, uh, 09:48 a hard drive can represent the knowledge, 09:50 but you have to be able to use that knowledge in a- in a way to reason with it. 09:55 Um, a logic contains three, uh, 10:00 ingredients, um, which I'll, 10:03 um, go through in a- in subsequent slides. 10:07 There is a syntax, which defines, um, 10:11 what kind of expressions are valid or grammatical in this language. 10:17 Um, there's semantics, which says, for each, 10:20 um, expression or formula, 10:23 what does it mean? 10:25 And "mean" here 10:27 means something very precise, which I'll, 10:29 you know, come back to.
10:31 And then inference rules allow you to take various f- uh, 10:35 formulas and, um, do kind of operations on them. 10:39 Just like in the beginning, 10:40 when we had the, um, 10:42 algebra problem: you can add equations, 10:44 you can move things to different sides. 10:47 Um, you can perform these rules, 10:48 which are syntactic manipulations on these formulas or expressions, 10:54 that preserve some sort of, um, semantics. 10:58 Okay, so just to, 11:01 uh, talk about syntax versus semantics a little bit, 11:04 because I think this might be a s- slightly subtle point which, 11:08 um, hopefully will be clear with this example. 11:10 So syntax refers to 11:12 what the valid expressions in this language are, 11:15 um, and semantics is about what these ex- expressions mean. 11:19 So here is an example of two expressions 11:23 which have different syntax. 11:25 2 plus 3 is not the same thing as 3 plus 2, 11:29 but they have the same semantics. 11:31 Both of them mean the number, you know, 5. 11:33 Um, here's a case where 11:36 we have two expressions with the same syntax, 11:39 3 divided by 2, 11:40 but they have different semantics, 11:42 depending on which language you're in. 11:44 Okay? So in order to define a language precisely, 11:47 you not only have to specify the syntax, 11:50 but also the, um, semantics. 11:52 Because just by looking at the syntax, you don't actually know what its, um, meaning is, 11:56 unless I tell you. Um, there's a bunch of different logics. 12:02 Um, the ones highlighted in bold are 12:04 the ones I'm gonna actually talk about in this class. 12:06 So today's lecture is going to be, um, on propositional logic. 12:10 Um, and then, uh, 12:12 in the next lecture, I'm gonna look at first-order logic. 12:15 Um, as with most models in general, 12:18 there's going to be a trade-off between 12:20 the expressivity and the computational efficiency. 12:23 So as I go down this list, 12:26 to first-order logic and beyond, um, 12:28 I'm going to be able to express more and more things using the language. 12:31 But it's gonna be harder to do, uh, 12:33 computation in that language. 12:38 Okay. So this is the- the kind of key diagram, um, 12:43 to have in your head 12:44 while I go through syntax, semantics, and inference rules. 12:47 So- I'm gonna do this for propositional logic, 12:50 and then in, uh, 12:51 Monday's lecture I'm gonna do it for, uh, first-order logic. 12:55 Um, so just to get them on the board, um, 12:58 we have syntax, and we have, 13:01 um, semantics, and then we have inference rules. 13:09 Um, let's just write it here. 13:13 Um, so this lecture is going to have a lot of definitions and concepts, um, in it. 13:20 Just to giv- give you a warning. 13:22 There's a lot of kind of ideas here. 13:24 Um, they're all very kind of, uh, um, 13:27 simple by themselves and they kinda piece together, 13:31 but there's just gonna kind- kinda be a barrage of, uh, 13:34 terms, and I'll try to write them on the board 13:36 so that you can kind of remember them. 13:38 Um, so in order to define, uh, 13:41 a logical language, um, 13:44 I need to specify what the formulas are. 13:47 Um, um, so one maybe other comment about logic is that 13:54 some of you have probably taken, um, 13:56 CS 103 or an equivalent class where you have been exposed to propositional logic. 14:00 Um, what I'm gonna do here 14:03 is kind of a much more methodological and- and rigorous treatment of it.
14:08 Um, the- I want to distinguish between, 14:11 um, being able to do logic yourself- 14:13 like if I give you some, uh, 14:15 logical expression, you can manipulate it- 14:17 that's different than, um, 14:19 talking about a general set of algorithms 14:23 that can operate on logic itself. 14:26 Right. So remember, in AI, 14:29 we're not interested in you guys doing logic, 14:31 because that's just I. That's intelligence. 14:34 Um, [LAUGHTER] but we're interested in developing general principles, 14:38 or general algorithms, that can actually do the wo- work, uh, for you. 14:42 Okay? Just like in, uh, 14:44 in the Bayesian networks, it's all fine and 14:47 well that you can- you guys can, uh, 14:49 manipulate and, uh, 14:51 calculate conditional and marginal probabilities yourself. 14:54 But the whole point is we devise algorithms like Gibbs sampling 14:57 and variable elimination that can work on any, uh, Bayesian network. 15:02 Just wanna get that out there. 15:06 Okay. So let's, uh, begin. 15:09 Um, this is gonna be building from the ground up. 15:12 So first of all, in propositional logic, 15:17 there is a set of propositional symbols. 15:19 These are typically going to be uppercase letters or even, um, 15:23 words. Um, and these are the atomic formulas. 15:27 Um, these are formulas that can't be any smaller. 15:31 There are going to be logical connectives such as, uh, 15:34 not, and, or, um, 15:37 implication, and bidirectional implication. 15:39 And then the set of formulas is built up recursively. 15:43 So if F and G are formulas, 15:45 then these are also formulas: 15:48 I can have not F, I can have F and G, F or G, 15:53 F implies G, and F, um, 15:55 bidirectional implication G, or F equivalent to G. Okay? 16:00 So the key, um, you know, ideas here are: we have, um, 16:06 propositional symbols- um, we're gonna mo- move this down, because we're gonna run out of space. 16:14 Um, so these are things like A, 16:17 and that gives rise to formulas in general, 16:20 um, which are gonna be denoted, um, 16:24 F. Um, and so here are some examples. 16:28 So A is a formula. 16:30 Okay? In particular, it's an atomic formula, which is a propositional symbol. 16:33 Not A is a formula. 16:35 Not B implies C is a formula. 16:38 This is a formula. 16:39 Um, this is a formula. 16:42 Double negation is fine. 16:44 This is not a formula, 16:45 because there's no connective between A and not B. 16:49 Um, this is also not a formula, 16:51 because what the heck is plus? 16:54 It's not a connective. So- so I think in- in thinking about logic, 17:00 you really have to divorce yourself from the common sense 17:04 that you all come with in interpreting these symbols. 17:07 Right? Not is just a symbol, or is just a symbol, 17:10 and they don't have any semantics yet. 17:12 In- in fact, I could go and define some semantics 17:15 which would be completely different from what you would imagine. 17:17 It would be a valid logical, um, system. 17:21 These are just symbols, 17:22 and all I'm defining here is which expressions are valid 17:26 and which are not valid, slash grammatical. 17:31 Okay? Any questions about the syntax of propositional logic? 17:41 So the syntax gives you the set of formulas, or basically the statements you can make. 17:50 So you can- think about it as- as this is our language. 17:53 If we could only speak in propositional logic, I could say 17:57 A, or not B, or, 18:01 um, A implies C. And that's all I would be able to say.
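To make the recursive syntax definition concrete, here is a minimal sketch in Python (not from the lecture; the class names Atom, Not, And, Or, Implies, and Iff are hypothetical). It mirrors the recursive definition: symbols are atomic formulas, and the connectives build bigger formulas out of smaller ones.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:          # atomic formula: a propositional symbol like A, B, C
    name: str

@dataclass(frozen=True)
class Not:           # if F is a formula, Not(F) is a formula
    f: "Formula"

@dataclass(frozen=True)
class And:           # if F and G are formulas, And(F, G) is a formula
    f: "Formula"
    g: "Formula"

@dataclass(frozen=True)
class Or:
    f: "Formula"
    g: "Formula"

@dataclass(frozen=True)
class Implies:
    f: "Formula"
    g: "Formula"

@dataclass(frozen=True)
class Iff:           # bidirectional implication
    f: "Formula"
    g: "Formula"

Formula = Atom | Not | And | Or | Implies | Iff

# Valid: "not B implies C", and double negation is fine too.
f1 = Implies(Not(Atom("B")), Atom("C"))
f2 = Not(Not(Atom("A")))
# "A not-B" or "A + B" simply cannot be constructed: the syntax rules them out.
```

Note that these are purely syntactic objects: nothing here says what And means. That is the job of the interpretation function, which the lecture turns to next.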
18:05 Um, and of course now I have to tell you what these things mean. 18:09 Okay? And this is the realm of semantics. 18:12 So for semantics, there's gonna be a number of definitions. 18:16 So first is a model. 18:17 So this is really unfortunate and confusing terminology, 18:21 but this is standard in the logic literature, so I'm just gonna use it. 18:24 So a model- which is different from our general notion of a model, 18:28 um, for example, um, 18:30 a Hidden Markov Model- 18:32 a model here, in propositional logic, 18:35 just refers to an assignment of, 18:38 um, truth values to propositional symbols. 18:42 Okay? So if you have three propositional symbols, 18:45 then there are 8 possible models. 18:47 Um, A is, uh, 1, 18:50 B is 0, C is 0, for example. 18:53 So these are just complete assignments, like we saw with factor graphs, 18:57 but now, um, in this new kind of language. 19:01 Okay. So that's the first concept. 19:03 And in first-order logic, models are going to be more complicated. 19:06 Um, but for now, 19:08 you can think about them as a complete assignment. 19:10 And I'm using W because sometimes you also call them, um, worlds. 19:14 Um, because a complete assignment, slash a model, 19:17 is supposed to represent the state of the world at any one particular point in time. Yeah? 19:22 [inaudible] 19:28 Yeah. So the question is, 19:30 can each propositional symbol either be true or false? 19:33 And in logic, as I'm presenting it, yes. 19:36 Only true or false, or 0 or 1. 19:40 Okay? So these are models. 19:43 Um, and next is a key thing that actually defines the semantics, 19:48 which is the interpretation function. 19:50 So the interpretation function, um, 19:53 takes a formula and a model and returns true if that formula is, 20:01 uh, is true in this model, and false, 20:05 um, you know, otherwise. 20:07 Okay? So I can make the interpretation function whatever I want, 20:12 um, and that just gives me the semantics. 20:15 So when I talk about what the semantics are, 20:17 it's the interpretation function: 20:21 a function I of f, 20:25 w. [NOISE] So the way to think about this is, 20:33 um, I'm gonna represent formulas as these, uh, horizontal bars. 20:37 Okay? So this is- think about this as, 20:40 uh, a thing you'd say. 20:41 It sits outside of, 20:44 uh, reality in so- in some sense. 20:47 And then this box I'm gonna draw is the space of all possible models. 20:51 So think about this as a space of situations, uh, 20:55 that we could be in in the world, 20:56 and a point here corresponds to a particular model. 21:00 So the interpretation function takes a formula, takes, 21:04 um, a model, and says, 21:08 "Is this statement true if the world looks like this?" 21:12 Okay? So just to ground this out, um, 21:15 a little bit more, um, 21:18 I'm gonna define this for propositional logic, again recursively. 21:23 Um, so for propositional symbols, um, p, 21:28 I'm just gonna interpret that propositional symbol as a lookup in, 21:33 uh, the, the model, right? 21:36 So if I'm asking, "Hey, is A true?" 21:38 Well, I go to my, uh, 21:40 model and I see, well, 21:42 does it say A is true or false. 21:44 Okay? That's the base case. 21:47 So recursively, I can define 21:50 the interpretation of any formula in terms of its subformulas. 21:55 And the way I do this is: suppose I have two formulas f and g 21:59 and they're interpreted in some way. 22:03 Okay. And now, I take a formula, let's say f and g. Okay? 22:08 So what is the interpretation of f and g, um, in w?
22:12 Well, it's given by this truth table. 22:15 So if f is 0 and g is, uh, 22:19 interpreted to be 0, 22:20 then f and g is also interpreted to be 0. 22:24 And, um, 0, 1 maps to 0, 22:27 1, 0 maps to 0, and 1, 1 maps to 1. 22:30 So you can verify that this is kind of, um, 22:33 your intuitive notion of what "and" should be, right? 22:39 Um, "or" is, um, 1 if, 22:44 um, at least one of f and g is 1. 22:48 Um, implication is 1 22:51 if f is 0 or g is, 22:54 uh, is, is 1. 22:56 Um, bidirectional implication just means that f and g evaluate to the same thing. 23:03 Uh, not f is, you know, 23:06 clearly just, you know, 23:07 the negation of whatever the interpretation of f is. 23:11 Okay. So this slide gives you the full semantics of propositional logic. 23:16 There's nothing more to propositional logic, 23:19 at least the definition of what it is, um, aside from this. 23:25 Um, let me go through an example and then I'll maybe take questions. 23:29 So, so let's look at this formula: 23:33 not A and B, bidirectional implication C. Um, 23:37 in this model A is 1, 23:38 B is 1, C is 0. 23:40 Um, how do I interpret this formula against this model? 23:44 Well, I look at the, 23:46 the tree, um, which breaks down the formula. 23:49 So if I look at the leaves, 23:51 um, let's start bottom-up. 23:52 So the interpretation of A against w is just 1, 23:57 because for propositional symbols, 23:59 I just look up what A is, and A is 1 here. 24:02 Um, the interpretation of not A is 0 because- if I look back at this table, 24:09 if this evaluates to 1, 24:11 then this evaluates to 0. 24:12 I'm just looking, um, 24:14 based on the table. 24:16 Um, B is 1 just by table lookup, and then, um, 24:21 not A and B is 0, because I just take these two values and I and them together. 24:28 Um, C is 0 by table lookup, and then, 24:32 uh, the bidirectional implication, um, 24:35 is interpreted as, uh, 24:38 1 here, because 0 is equal to 0 here. 24:41 Yeah. Yeah, question? 24:46 The interpretation function is user-defined in this case, not like learning 24:50 how to interpret [inaudible]? 24:57 Yeah, so the question is, 24:58 is the interpretation function user-defined? 25:02 It is just written down. 25:04 Um, this is it, 25:05 there's no learning; this is- this is what you get. 25:09 Um, it's not user-defined in the sense that everyone's gonna go define their own, 25:14 [LAUGHTER] you know, truth tables. 25:16 Um, some logicians came up with this, 25:18 and that's what it is. 25:20 Um, but you could define your own logics, 25:23 and it's kind of a fun thing you could try doing. 25:25 Okay? Any other questions about interpretation functions and models? 25:31 So now, we're kind of connecting syntax and semantics, right? 25:35 So the interpretation function binds our formulas, 25:38 which are in syntax land, to, 25:41 um, a notion of models, which are, uh, in semantics land. 25:48 So a lot of logic is very, um- 25:53 it might seem a little bit pedantic, but it's just 25:55 because we're trying to be very rigorous in a way 25:58 that doesn't need to appeal to your common sense 26:01 intuitions about what these formulas mean. 26:07 Okay? Any questions? All right. 26:13 So, so while we have the interpretation function that defines everything, 26:19 it's really going to be useful to think about formulas in a slightly, uh, different way. 26:25 So we're gonna think about a formula, um, 26:30 as representing, um, the set of all models for which the interpretation is, you know, true. 26:38 Okay?
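Here is a hedged sketch of that recursive interpretation function, reusing the hypothetical Formula classes from the earlier sketch. A model w is just a dict from symbol names to truth values, and the match arms are exactly the truth tables just described.

```python
Model = dict[str, bool]  # a model w: an assignment of truth values to symbols

def interpret(f, w: Model) -> bool:
    """The interpretation function I(f, w), defined recursively on the syntax."""
    match f:
        case Atom(name):    return w[name]                                   # base case: lookup
        case Not(g):        return not interpret(g, w)
        case And(g, h):     return interpret(g, w) and interpret(h, w)
        case Or(g, h):      return interpret(g, w) or interpret(h, w)
        case Implies(g, h): return (not interpret(g, w)) or interpret(h, w)  # 1 unless 1 -> 0
        case Iff(g, h):     return interpret(g, w) == interpret(h, w)

# The worked example: (not A and B) <-> C in the model A=1, B=1, C=0.
f = Iff(And(Not(Atom("A")), Atom("B")), Atom("C"))
w = {"A": True, "B": True, "C": False}
print(interpret(f, w))  # True: both sides come out 0, so the biconditional is 1
```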
So M of f, 26:40 which is the- is the set of models, 26:43 um, such that f is, 26:47 uh, true on that model. 26:49 Okay? So pictorially, this is, 26:51 ah, f, that you say out loud. 26:54 And what you mean by this is, um, 26:57 simply this subset of models, 27:00 um, on which, uh, this f is true. 27:03 Okay? So if I make a statement here, 27:06 what I'm really saying is that I think we're in 27:09 one of these models and not in one of these other models. 27:12 So that's a kind of an important, 27:14 uh, I think, intuition to have: 27:16 the meaning of a- a formula is carving 27:20 out a space of possible situations that you can be in. 27:25 And if I say there's a water bottle on the table, 27:28 what I'm really saying is that 27:30 I'm ruling out all the possible worlds that we could be in where there is 27:33 no water bottle on the table. Okay? 27:42 So models, um- M of f is gonna be a subset of all the possible models in the world. 27:55 Okay? So here's an example. 27:58 So if I say it's either raining or wet- rain or wet- 28:03 then the set of models can be represented, 28:08 um, by this, um, 28:09 subset of this two-by-two. 28:12 Okay? So over here, 28:13 I have rain, over here I have wet. 28:16 So this corresponds to no rain but it's wet outside, 28:19 [NOISE] um, this corresponds to it's raining but it's not wet outside. 28:23 And the set of models of f is this red region, which is these three possible models. 28:31 Okay? So I'm gonna use this kind of pictorial depiction throughout this, uh, lecture. 28:36 So hopefully, this makes sense. 28:40 So, so one key idea here- remember, 28:44 I said that logic allows you to express 28:46 very complicated and large things by using very small means. 28:51 So here, I have a very small formula that's, um, 28:55 able to represent a set of models, 28:57 and that set of models could be exponentially large. 29:00 And much of the power of logic comes from the ability to, 29:05 um, do stuff like that. 29:09 Okay. So, um, here's yet another definition. 29:13 This one's not somehow, um, 29:15 such a new definition- um, sorry, 29:20 a new concept, but it's kind of just 29:22 trying to give us a little bit more intuition for what these, 29:25 um, formulas and, um, 29:27 uh, models are doing. 29:29 So a knowledge base is just a set of formulas. 29:33 Um, and, and think about this as the set of facts you know about the world. 29:38 So this is what you have in your head. 29:40 And in general, it's going to be a- just a set of 29:43 formulas, and now the- the key thing is I need to connect this with semantics. 29:48 So I'm gonna define the set of models denoted by a knowledge base 29:53 to be the intersection of all of the models denoted by the formulas. 29:59 So in this case, if I have rain or snow being this, uh, 30:02 green ellipse and traffic being this red ellipse, 30:06 then the models denoted by the knowledge base are just the intersection. 30:12 Okay? So you can think about knowledge as how fine-grained, 30:17 uh, we've kind of zoomed in on where we are in the world, right? 30:21 So initially, if you don't know anything, 30:23 we say anything is possible; all 2 to the n, 30:26 you know, models are possible. 30:28 And as you add formulas into your knowledge base, 30:31 the set of, um, 30:33 possible worlds that you think, uh, 30:36 are possible is going to shrink, um, 30:39 and, you know, we'll see that in a sec- in a second. 30:45 Okay. So here's an example of a knowledge base.
30:48 So if I have rain, that corresponds to this set of models; 30:52 um, rain implies wet corresponds to this set of models. 30:56 And if I look at the models of this knowledge base, 31:01 it's just going to be the intersection, which is this, 31:03 uh, red square down here. 31:11 Okay? Any questions about knowledge bases, models, interpretation functions so far? 31:22 All right. So as I've alluded to earlier, 31:28 the knowledge base is the thing you have in your head. 31:30 And as you go through life, 31:31 you're going to add more formulas to your knowledge base. 31:34 You're gonna learn more things. 31:36 Um, so your knowledge base just gets, uh, 31:39 unioned with whatever formula you- you have. 31:43 And over time, the set of models is going 31:46 to shrink, because you're just taking the intersection. 31:50 Um, and one question is, 31:53 you know, how much is this sh- uh, shrinking? 31:56 Okay? So here, there's a bunch of different cases to contemplate. 32:02 Um, the first case is, um, entailment. 32:06 So suppose this is your knowledge base so far, 32:10 and then someone tells you, um, 32:13 the formula that corresponds to this, ah, set here. 32:17 Okay. So in this case, um, intuitively, 32:21 f doesn't add any information or new constraints beyond what was known before. 32:25 And in particular, if you take the intersection of these two, 32:27 you end up with exactly the same set of 32:29 models you had before, so you didn't learn anything. 32:33 Um, and this is called entailment. 32:36 So, um, so there's kind of three notions here. 32:42 There's entailment, which is written this way with two horizontal, uh, bars, 32:51 um, which means that the set of models, um, 32:57 of f is at least as large as the set of models of, 33:01 uh, KB, or in other words, it's a superset. 33:05 Okay? Um, so for example, rain and snow. 33:10 If you already knew it was raining and snowing and someone tells you, "Ah, it's snowing," 33:13 then you say, "Well, um, 33:15 duh, I didn't learn anything." 33:17 Um, a second case is contradiction. 33:20 [NOISE] So if you believe the world to be 33:24 somewhere in here and someone told you it's actually out here, 33:28 then, um, your brain explodes, right? 33:31 Um, so this is where the set of models of 33:36 the knowledge base intersected with the set of models denoted by the formula is empty. 33:42 So this- it doesn't make sense. 33:45 Okay? So that's a contradiction. 33:52 Um. 34:03 Okay, so if you knew it was raining and snowing 34:06 and someone says it's actually not snowing, 34:08 then you, you- right, 34:10 know that, that can't- that can't be right. 34:14 Okay. So the third case is, 34:16 um, basically everything else. 34:18 It's contingency, where, um, 34:21 f adds a non-trivial amount of information to the knowledge base. 34:25 So the, the new set of models- the intersection here- is neither 34:29 empty nor is it the original knowledge base. Okay? 34:44 One thing to kind of, uh, not get confused by is: 34:49 if the set of models of f were actually strictly inside the models of the knowledge base, 34:53 that would also be a contingency. 34:54 Right, because when you intersect it, 34:56 it's neither empty nor the original. 35:01 Okay. So if you knew it was, 35:05 um, raining and someone said, 35:07 "Oh, it's also snowing too," 35:08 then you're like, "Oh cool, I learned something." 35:14 Okay, so there- there's a relationship between contradiction and entailment.
35:19 So contradiction says that the models of the knowledge base and M of f, um, 35:23 have zero intersection, and entailment- um, 35:29 this is equivalent to, um, 35:31 the knowledge base entailing not f. Okay, so just- 35:36 there's a simple proposition that says, 35:38 um, KB contradicts f if and only if KB entails not f. Okay, so, 35:47 um, the picture you should have in 35:51 your head is: not f is all of the models which are not in this. 35:56 And if you think about kind of 35:58 wrapping that around, it kind of looks like this. 36:04 Okay, [BACKGROUND] all right. 36:15 So with these three, 36:17 um, notions- entailment, contradiction, 36:22 and contingency, which are relationships between a knowledge base and a new formula- 36:28 we can now go back to our kind of 36:30 virtual assistant example and think about how to implement, um, these operations. 36:37 So if I have a knowledge base and I tell, um, 36:41 the- the virtual assistant a particular, um, 36:45 formula f, there are 36:48 three possibilities, which correspond to different appropriate responses. 36:53 So if I say it's raining, then, 36:57 if it's entailment, then I say, 37:00 "I already knew that," because I didn't learn anything new. 37:04 If it's a contradiction, then I should say, 37:06 "I don't believe that," because it's not consistent with my knowledge so far. 37:10 And otherwise, I learned something new. 37:14 Okay, um, there's also the ask operation, where, if you're asking a question, 37:21 again, the same three, uh- 37:24 entailment, contradiction, and contingency- can hold. 37:29 Uh, but now, the responses are- should be answers to this question. 37:33 So if it's entailment, 37:35 then I say yes. 37:37 And this is a strong yes; it's not like, 37:40 uh, probably yes, this is a definite yes. 37:44 If it's a contradiction, 37:45 then I say no. 37:46 It's- again, it's a strong no; it's impossible. 37:51 And then in the other case it's just contingent, in which case you say I don't know, okay? 37:59 So for the answer to a yes or no question, 38:01 there are three responses, not two, okay. 38:10 Okay, any questions about this? 38:25 Okay, how many of you are following along just fine? 38:29 Okay, good. All right. 38:32 So this is a little bit of a digression, and it's going to connect to Bayesian networks. 38:37 So you might be thinking in your head, 38:41 well, we kind of did something like this already, right? 38:44 In Bayesian networks, we had these complete assignments 38:47 and we actually defined joint distributions over complete assignments. 38:52 And- and now, what we're talking about 38:56 is not distributions but sets of assignments, or models. 39:01 And so we can actually think about the relation between a knowledge base and 39:07 an f also having an analog in Bayesian network land, given by this formula. 39:13 So, um, remember, a knowledge base denotes a set of models or possible worlds. 39:20 So in probabilistic terms, this is an event. 39:23 Um, and that event has some probability. 39:29 So that's- that's the denominator here. 39:32 And when you look at f and KB and you intersect them, 39:38 you get some other, 39:41 uh, event which is a subset of that, and you can ask for the probability mass of that, 39:47 uh, intersecting event; that's the numerator here. 39:50 And if you divide those, 39:51 that actually just gives you the probability of a formula, 39:54 uh, given the knowledge base. 39:57 Okay, so this is actually a kind of pretty nice and direct, um, um, 40:02 probabilistic generalization of propositional logic.
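Reconstructing the formula being described on the slide (a sketch, writing M(·) for the set of models as above):

```latex
P(f \mid \mathrm{KB}) \;=\; \frac{P\big(\mathcal{M}(f) \cap \mathcal{M}(\mathrm{KB})\big)}{P\big(\mathcal{M}(\mathrm{KB})\big)}
```

Contradiction corresponds to this probability being 0, entailment to it being 1, and contingency to anything strictly in between, which is exactly what the speaker returns to a little further below.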
Uh, yeah. 40:08 It only works if you have, like, all the var- all the variables required for that- 40:13 there already exists a set of symbols; like in this scenario, there's A, B, C. 40:17 If we were asking something about D, 40:18 it would still be "I don't know," 40:19 because we don't have that information. 40:21 Yeah, so the question is: this only works when restricted to the set of predefined, 40:27 uh, propositional symbols, and if you are asking about D, 40:30 then yeah, you would say I don't know. 40:32 And it's- in fact, when you define propositional logic, 40:36 you have to pre-specify the set of symbols that you are dealing with, 40:40 um, in general, yeah. 40:43 [inaudible] given the example that we did earlier, 40:46 like, raining wasn't given- a set of examples or things 40:48 that our agent knew about before we started, like, training. 40:52 So is that something we'll get to later? 40:54 Um, yes, so the question is, in the- in the- in practice, 41:00 you could imagine telling an agent "it is raining" or "it's snowing" or "it's sleeting," 41:04 and having novel concepts. 41:06 Um, it is true that you can devise, 41:10 build systems- and the system I'm showing you has that capability. 41:14 Um, um, and this is, uh- 41:19 it'll be clear how we do that when we talk about 41:21 inference rules, because that allows you to operate directly on the syntax. 41:25 Um, here, I'm talking about semantics, where you essentially just- just for convenience- 41:30 I mean, you can be smarter, but- but we're just defining the- the world. 41:34 Yeah. Yeah, so in this formula, 41:39 why is this union, not intersection? 41:41 So I'm unioning the KB with a formula, 41:45 which is equivalent to intersecting the models of the KB with the models of the formula. 41:55 Okay. So this is a number between 0 and 1. 42:00 And this actually reduces to, 42:02 uh, the logical case: if, 42:05 if this probability is 0, um, 42:08 that means there's a [NOISE] contradiction, right? 42:12 Because this intersection is- it's gonna be pros- probability 0. 42:15 And if it's 1, that means it's entailment. 42:20 Um, and the cool thing is that instead of just saying I don't know, 42:23 you're gonna actually give a probabilistic estimate of, like, 42:26 well, I don't know, but it's probably, like, 90%. 42:30 Um, so, you know, we're not gonna talk, 42:34 uh- this is all I'm gonna say about probabilistic extensions to logic, 42:39 but there are a bunch of other things, um, 42:41 that you can do that kind of marry the, um, 42:44 the expressive power of logic with some of 42:47 the more advanced capabilities of handling uncertainty with probabilities. Yeah? 42:52 [inaudible]. 42:54 Assuming that, I mean, the probability distribution? 42:58 Yeah. To do this, you are assuming that we actually have the joint distribution at hand. 43:02 And a separate problem is, of course, learning this. 43:06 So for logic, I'm only talking about inference; 43:11 I'm not gonna talk about learning, although there are 43:13 ways to actually in- infer logical expressions too. 43:19 Okay. So back from the digression now; no probabilities anymore. 43:24 We're just gonna talk about logic. 43:26 Um, there's another concept which is, 43:29 um, really useful, called, uh, satisfiability. 43:33 [NOISE] Um, and this is going to allow us to implement, 43:37 um, entailment, contradiction, contingency using kind of one, um, primitive. 43:43 [NOISE] And the definition is that a knowledge base is satisfiable if, 43:47 um, the set of models is non-empty, okay?
43:51 So it's not self-contradictory, in other words. 43:54 So now, we can reduce ask and tell to satisfiability, okay? 44:00 Um, remember, ask and tell have three possible outcomes. 44:06 If I ask a satisfiability question, 44:09 how many possible outcomes are there? 44:12 Two? So how am I gonna make this work? 44:19 I'm probably gonna have to call satisfiable twice, okay? 44:23 So let's start with asking if the knowledge base union, 44:27 um, not f is satisfiable or not, okay? 44:32 So if the answer is, 44:34 um, no, what can I conclude? 44:45 So remember, the answer is no. 44:48 So it's not satisfiable, 44:49 which means that KB contradicts not f, 44:54 and what is that equivalent to saying? 44:57 [inaudible]. 45:02 Sorry? 45:03 Like it's- like it's [inaudible]. 45:04 Um, yeah, so it's not f. 45:09 So- which one of these should it be: entailment, contradiction, or contingency? 45:16 [inaudible]. 45:21 Yeah, so, um, I'm interested in the relationship between KB and f. 45:25 [NOISE] I'm asking the question about KB union not f. Yeah? 45:29 [inaudible]. 45:36 Yeah, yeah. So exactly- so this should be an entailment 45:40 relation between KB and f. Remember, 45:44 KB entails f is equivalent to KB contradicting not f, okay? 45:51 Um, okay, so what about, um, 45:57 if it's yes? Then I can ask another question: 46:00 is KB union f satisfiable or not? 46:04 So if the answer is no, 46:06 then what should I say? 46:08 [inaudible]. 46:11 It should be a contradiction because, I mean, 46:15 this literally says KB contr- contradicts f. And then finally, 46:20 if it's yes, then it's contingency, okay? 46:25 So this is a way in which you can reduce answering ask and tell, 46:32 which is basically about assessing entailment, contradiction, 46:35 or contingency, to just- uh, 46:37 to- at most two satisfiability calls. 46:42 So why are we reducing things to satisfiability? 46:46 For propositional logic, uh, 46:49 checking satisfiability is just the classical SAT problem, 46:53 and it's actually a special case of solving constraint satisfaction problems. 46:57 Um, and the mapping here is: 46:59 we just call propositional symbols variables, 47:02 um, formulas constraints, 47:05 and if we get an assignment here, we call that a model. 47:09 So in this case, 47:11 if we have a knowledge base, um, 47:14 like this, then there are three variables- A, 47:18 B, and C- and we define this CSP, and then we can, um- 47:23 if we find a satisfying assignment, 47:26 then, um, then we return satisfiable. 47:31 If we can't find one, then we 47:32 return unsat, okay? 47:43 Um, so this is called model checking. 47:46 Um, it's called model checking because we're checking whether a model, 47:51 eh, exists, uh, um, or not. 47:55 So model checking takes 47:57 a knowledge base and outputs whether there is a satisfying model. 48:01 Um, there are a bunch of algorithms here which are, 48:06 you know, very popular. There's something called DPLL, named after, 48:10 uh, four, four people. 48:12 Um, and this is essentially backtracking search plus, uh, 48:15 pruning that takes into account the, 48:18 the structure of your CSPs, 48:20 um, which are propositional logic formulas. 48:23 Um, and, uh, there's something called WalkSat, which is- 48:26 you can think about the closest analog that we've seen as 48:28 Gibbs sampling; it's a- a randomized local search. 48:32 Um, okay? 48:35 So at this point, 48:36 you really have all the ingredients you, uh, 48:39 you need to do, um, 48:42 inference in propositional logic.
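A brute-force sketch of model checking and the two-call reduction just described, reusing the hypothetical interpret/Atom/Not/Implies helpers from the earlier sketches. Enumerating all 2^n assignments is the naive baseline; DPLL and WalkSat are smarter about the search, but the reduction itself is the same.

```python
from itertools import product

def satisfiable(kb, symbols) -> bool:
    """Naive model checking: does any assignment make every formula in KB true?"""
    for values in product([False, True], repeat=len(symbols)):
        w = dict(zip(symbols, values))        # a candidate model
        if all(interpret(f, w) for f in kb):
            return True
    return False

def ask(kb, f, symbols) -> str:
    """Reduce entailment / contradiction / contingency to two satisfiability calls."""
    if not satisfiable(kb + [Not(f)], symbols):
        return "Yes."            # KB contradicts not f, i.e. KB entails f
    if not satisfiable(kb + [f], symbols):
        return "No."             # KB contradicts f
    return "I don't know."       # contingent

kb = [Atom("Rain"), Implies(Atom("Rain"), Atom("Wet"))]
print(ask(kb, Atom("Wet"), ["Rain", "Wet"]))           # "Yes."
print(ask(kb, Atom("Snow"), ["Rain", "Wet", "Snow"]))  # "I don't know."
```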
48:44 So I've defined what propositional logic symbols, 48:47 um, or formulas are, 48:49 I've defined the semantics, um, 48:52 and I've even told you how to solve, 48:55 uh, entailment and, um, 48:58 contradiction and contingency queries by reducing 49:01 to satisfiability, which is actually something we've already, 49:03 you know, seen, coincidentally, 49:05 um, so that should be it, okay? 49:09 Um, but now, coming back to the, 49:14 you know, original motivation of X_1 plus X_2 equals 10, 49:18 and how we were able to perform that logical query much faster, 49:23 we can, uh, ask the question now: 49:25 can we exploit the fact that the factors are, 49:28 are formulas rather than arbitrary, you know, 49:31 functions? And this is where inference rules are gonna come into play, okay? 49:39 So, um, so I'll try to explain this, uh, 49:42 figure a little bit, since it probably looks pretty 49:44 mysterious, um, from the beginning. 49:48 So I have a bunch of formulas. 49:49 This is my knowledge base; over time, I accrue formulas. 49:52 And these formulas carve out, um, 49:56 a set of models in semantics land. 50:00 And, and this formula here, 50:06 if it's, um, a superset, 50:08 that means it's entailed by these formulas, right? 50:11 So I know that this is true given my knowledge, 50:16 which means that this is kind of a logical consequence of what I know, okay? 50:21 So, so far, what we've talked about is taking 50:23 formulas and doing everything over in semantics land. 50:26 What I'm gonna talk about now is inference rules that are gonna allow us to 50:30 directly operate on the syntax and hopefully get, 50:33 um, some results that way. 50:36 Okay. So here is an example of making an inference. 50:40 Um, so if I say it is raining, 50:42 and I tell you if it's raining, 50:44 it's wet- um, rain implies wet- 50:47 then what should you be able to conclude? 50:51 It's wet. 50:52 It's wet, right? 50:54 So I'm gonna write this inference rule this way, with this, 51:00 um, kind of fraction-looking thing, where there's a set of 51:03 premises, which is a set of formulas which I know to be true. 51:07 And if those things are true, 51:09 then I can derive a conclusion, which is another formula. 51:14 This is, um, an instance of a general rule called modus ponens, um, 51:20 that says for any propositional symbols p and q, 51:24 if I have p and p implies q in my knowledge base, 51:28 then I can derive, uh, q, okay? 51:33 So let's talk about inference rules. [NOISE] Um, inference- 51:39 actually, let me do it over here, 51:42 since we're gonna run out of space otherwise. 51:44 [NOISE] Okay. 51:50 So modus ponens is the first thing we're gonna talk about. 51:58 [NOISE] 52:03 So notice here that if I could do these types of inferences, it's much less work, right? 52:10 Because I- it's very localized. 52:12 All I have to do is look at these three formulas. 52:14 I don't have to care about all the other formulas or propositional symbols that exist. 52:19 And going back to this question over here about- or how 52:22 do I- what happens if I have new concepts that occur? 52:25 Well, you can just treat everything as if it's just a new symbol. 52:28 There's not necessarily a fixed set of symbols 52:30 that you're working with at any given time. 52:33 Okay, so this is an example of an inference rule.
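For reference, the "fraction-looking thing" for modus ponens is usually typeset like this, with the premises on top and the conclusion below:

```latex
\frac{p, \quad p \to q}{q}
\qquad \text{e.g.} \qquad
\frac{\textit{Rain}, \quad \textit{Rain} \to \textit{Wet}}{\textit{Wet}}
```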
52:38 In general, the idea of an inference rule is that you have rules that say, 52:43 "If I see f_1 through f_k, which are formulas, 52:47 then I can add g." Um, 52:51 and the key idea, as I mentioned before, is that 52:53 these inference rules operate directly on the syntax, 52:56 and not on the semantics. 52:58 So given a bunch of inference rules, I have this kind of meta-algorithm 53:03 that can do logical inference as follows. 53:07 So I have a set of inference rules, 53:09 and I'm just going to repeat until there are no changes to the knowledge base: 53:16 I choose a set of formulas from the knowledge base, 53:20 um, and if I find a matching rule 53:22 inside the rules, 53:24 then I simply add g to the knowledge base. 53:28 Okay? Um, so the- other definition I'm going to make is 53:36 this idea of derives and proves. 53:38 So, um, so an inference rule, um, derives, 53:47 proves- um, so I'm gonna write KB, 53:56 and now with a single horizontal line, 54:00 um, to mean that from this knowledge base, given 54:04 a set of inference rules, I can produce f via the rules. 54:09 Okay? This is in contrast to entailment, which is defined by the relationship between 54:15 the models of KB and the models of f. Now this is 54:18 just a function of mechanically applying a set of rules. 54:21 Okay, so that's a very, very important distinction. 54:27 And if you think about it, 54:29 why is it called a proof? So whenever you do 54:30 a mathematical proof or some sort of logical argument, 54:34 you are, in some sense, 54:36 just doing logical inference: you have 54:39 some premises and then you can apply some rule. 54:43 For example, you can, 54:45 um, you know, multiply both sides of an equation by two. 54:49 That's a rule. You can apply it. 54:52 And you get some other equation, which you can- um, 54:56 which you hope is true as well. 55:02 Okay. So here's an example. 55:06 Um, maybe just for fun I'll do it over here. 55:10 Um, oops. 55:13 So I can say it is raining, and if I dump that, 55:17 it gives me my knowledge base that has rain. 55:20 Um, if it is raining, it is wet. 55:25 Um, so if I dump that, then I have- this is the same as, 55:32 um, rain implies wet. 55:34 Okay. Just- just, uh, 55:36 in case you're rusty on your propositional logic: 55:39 if I have p implies q, that's the same as not p or q. 55:44 Okay? And notice that I also have wet appearing in 55:50 my knowledge base, because in the background 55:52 it's basically running forward inference to, 55:55 um, try to derive as many conclusions as it can. 56:00 Okay. Um, and if I say if it is wet, 56:05 it is, uh, slippery, 56:08 um, again, 56:10 now I have, um- I have, 56:15 uh, wet implies slippery. 56:17 Um, and now I also derived slippery. 56:20 Um, I also derived rain, um, 56:24 implies slippery, which is actually, as you'll see, not derivable from modus ponens; 56:29 so behind the scenes it's actually a much more, 56:31 uh, fancy inference algorithm. 56:34 But, um, but- but the idea here is that you have your knowledge base. 56:40 Um, you can pick up rain and rain implies wet, and then you can add wet. 56:45 And you pick up wet and wet implies slippery, and then you can add slippery here. 56:49 And with modus ponens, 56:53 um, you can't actually derive some things. 56:56 You can't derive not wet, um, 56:59 which is probably good, because it's not true. 57:03 And you also can't derive rain implies slippery, which 57:07 actually is true, but modus ponens is not powerful enough to derive it. 57:12 Okay.
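A minimal sketch of that "repeat until no change" meta-algorithm, specialized to plain modus ponens. The encoding of an implication p implies q as a (p, q) pair is hypothetical, just to keep the rule-matching explicit.

```python
def modus_ponens_closure(facts: set[str], implications: set[tuple[str, str]]) -> set[str]:
    """Repeat until no change: whenever p and (p -> q) are in the KB, add q."""
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in facts and q not in facts:   # rule matches: its premises are in the KB
                facts.add(q)                    # derive q and add it to the knowledge base
                changed = True
    return facts

# The demo KB: Rain, Rain -> Wet, Wet -> Slippery.
facts = modus_ponens_closure({"Rain"}, {("Rain", "Wet"), ("Wet", "Slippery")})
print(facts)  # {'Rain', 'Wet', 'Slippery'}; NotWet and Rain -> Slippery are never produced
```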
So- so the burning question you should have in your head is- okay, 57:19 I talked about two relations between a knowledge base 57:22 KB and a formula f. There's the entailment relation. 57:26 And this is really what you want, because this is 57:29 semantics; you all- you care about meaning. 57:32 Um, and you also have this: KB derives f, which is a syntactic relationship. 57:41 So what's the connection here? 57:44 In general, there's no connection. 57:46 Um, but there are kind of these concepts that will help us think about that connection. 57:51 So semantics- these are things which are- um, 57:56 when you look at semantics, you should think about the models implied by the, um, 58:02 formulas, and syntax is just some set of rules that someone made up. 58:08 Okay. So how do these relate? 58:10 Okay. So to, um, understand this, 58:15 imagine you have a glass, and this glass is, um- um, 58:20 what's inside the glass is formulas. 58:24 And in particular, it's the formulas which are true. 58:27 Okay. So this glass is all formulas such 58:30 that the- the formula is entailed by the knowledge base. 58:35 So, um, soundness is a property of a set of rules. 58:41 And it says: if I apply these rules until the end of time, 58:47 do I stay within the glass? Am I always going to generate formulas which 58:52 are inside the glass, which are semantically valid, or entailed? 58:57 Okay. So soundness is good. 59:00 Um, completeness is kinda the, um, 59:05 other direction, which says that I am going to 59:10 generate all the formulas which are true, or entailed. 59:15 I might generate extra stuff, 59:17 but at least I'll cover everything. 59:19 That's what it means to be complete. 59:22 Okay. So the model you should have in your head is: you want the truth, 59:29 the whole truth, and nothing but the truth. 59:32 Soundness is really about nothing but 59:34 the truth, and completeness is about the whole truth. 59:39 Ideally you would want both. 59:42 Sometimes you can't have both, so you're going to have to pick your battles. 59:48 Uh, but generally, you want soundness. 59:52 Um, you can maybe live without completeness, um, 59:55 but if you're unsound, 59:58 that means you are just going to generate erroneous conclusions, 60:02 which is, uh, bad. 60:04 Whereas, if you're incomplete, 60:06 then maybe you just can't, uh, 60:08 infer certain, uh, notions, 60:10 but at least the things that you do infer, 60:12 you know, are actually true. 60:16 Okay. So how do we check, um, soundness? 60:21 [NOISE] So is modus ponens sound? 60:26 Um, so remember, uh, 60:31 there's, kind of, a rigorous way to do this. 60:33 And the rigorous way is to look at the two formulas, 60:38 rain, and rain, uh, implies wet, 60:40 uh, and then look at their models. 60:43 Okay. So rain corresponds to this set of models here. 60:47 Um, rain implies wet corresponds to this set. 60:51 Um, and when I intersect them, that's the, 60:55 the set of models which is conveyed by the knowledge base, 60:58 which is this corner here. 61:01 Um, and I have to check whether that is a s- a subset of the models of wet. 61:09 And wet is over here. 61:11 So this one-one corner is a subset of one-one and zero-one. 61:18 So this rule is sound. 61:21 [NOISE] 61:32 Remember, why is this subset relation the thing I wanna check? 61:36 Because that's just the definition of entailment. 61:39 Right? Okay. So let's do another example. 61:48 So if someone said it was wet, 61:50 and you know that rain implies wet, 61:53 can you infer rain?
61:55 [NOISE] Well, let's, let's, 61:59 uh, let's just double-check this. 62:00 Okay. So again, what are the models of wet? They're here. 62:04 What are the models of rain implies wet? They're here. 62:07 And I intersect them, 62:08 I get this, um, 62:10 this- these two over here in dark red, 62:13 and then is that a subset of the models of rain? 62:17 Nope. So this is unsound. 62:20 Okay. So in general, soundness is actually a fairly, 62:24 uh, easy condition to check, 62:26 especially in propositional logic. 62:28 But even in higher-order logics, 62:30 it's, you know, not as bad. 62:32 So now completeness is a, 62:33 kind of, a different story, 62:35 which I'm not gonna have time to really do full justice in this class. 62:39 But, um, but here's a, 62:42 kind of, a, an example showing modus ponens is, 62:45 um, incomplete, so, um, 62:50 uh, for propositional logic. 62:53 [NOISE] Uh, so here we have the knowledge base: 62:58 rain, and rain or snow implies wet. 63:03 Um, and is wet entailed? 63:14 So it's raining, and if I know it's raining or snowing, 63:18 then that should be wet. 63:21 How many of you say yes? 63:24 Yeah. It should be entailed, right? 63:27 Okay. Well, what does modus ponens do? 63:30 Well, all the rules look like this. 63:35 Um, so [NOISE], um, 63:40 clearly you can't actually arrive at this with modus ponens, because 63:44 modus ponens can't reason about or, or disjunction. Yeah? 63:50 Is it possible for it to be right about the rain or snow? Or is 63:54 it saying that it's not possible for it to be not wet given rain? 63:57 Uh, is it [NOISE] not? 64:01 Yeah. Because you're- you already [NOISE] know that it's raining. 64:04 All right. 64:04 So you should say that it's wet. 64:07 Yeah. Okay. So this is incomplete. 64:12 Um, so we can be, um, 64:14 you know, sad about this. 64:16 Uh, there are two ways you can go to fix this. 64:20 The first way is, 64:22 um, we say, okay, okay, 64:24 propositional logic was, um, 64:26 too fancy. Uh, question? 64:29 Um, just going back to the [inaudible] - 64:30 Yeah. 64:30 -of the notation, when it says KB equals rain, then comma, 64:34 rain or snow implies wet, 64:35 is it implying any type of assignment to rain there? 64:38 Like, is it saying that it is raining, or is it just saying that we have a variable rain? 64:42 Yeah. So the question is, what does this mean? 64:45 Um, this- so the knowledge base is a set of, uh, formulas. 64:50 So this particular formula is rain. 64:52 And remember, the models of a knowledge base are where all the formulas are true. 64:57 So yes, in this case it does commit to rain being 1. 65:01 The- the models of KB only include the, 65:06 um, the models where rain is 1. 65:09 Otherwise, this formula would be false. 65:11 Thank you. 65:14 Yeah. Yeah? 65:16 [inaudible] the probability of the model, um, 65:17 as in way back before the [inaudible]? 65:21 Oh, it was, how can we have a probability over a model? 65:24 Um, so remember that a model is- um, where did it go? 65:33 [NOISE] Okay. 65:35 So remember, a model here is just an assignment to a set of, 65:40 uh, propositional symbols or variables, right? 65:42 So when we talk about Bayesian networks, um, 65:45 we're defining a distribution over assignments to all the variables. 65:51 So here, what I'm saying is: assume there is some distribution over, 65:56 um, complete assignments to random variables, 65:59 and I can use that to compute, um, 66:02 probabilistic queries of the form: probability of a formula given the knowledge base. 66:08 Am I answering your question?
66:11 Yeah. [inaudible] [NOISE] -shouldn't you- if you have two models that contradict, 66:16 they can't be in the same knowledge base, right? 66:20 [NOISE] Um, if you have two models that [NOISE], uh, 66:24 or [NOISE] formulas that contradict, 66:26 then this intersection is going to be, uh, 0 [NOISE]. 66:30 So there- 66:32 so let me do it, um, you know, this way. 66:35 So imagine you have these two [NOISE] variables, rain and wet. 66:43 Um, [NOISE] a Bayesian network might assign a probability 0.1, 66:47 point- um, I should make these sum to one. 66:51 Um, point- what is this? 66:54 Five? Um, so some distribution over these states, right? 66:59 And, um, [NOISE] and if I have [NOISE] rain, 67:05 [NOISE] that corresponds to these models. 67:09 So I can write: the probability of rain [NOISE] is 0.2 plus 0.5, [NOISE] so 0.7. 67:17 Okay? And if I have the probability of wet [NOISE], 67:24 um, given rain, um, 67:26 this is going to be the probability of the conjunction of these, 67:33 which is going to be wet and rain, which is here. 67:37 This is going to be 0.5 divided by the probability of rain, which is 0.7. 67:42 [NOISE] Does that help? 67:46 [NOISE] Okay. 67:49 [NOISE] 67:59 Whoops. 67:59 Um, okay. So- okay. 68:07 So modus ponens is sound, 68:10 but it's not complete. 68:11 So there are two things we can do about this. 68:14 We can either say propositional logic is too fancy; 68:17 let's just restrict it, so that modus ponens becomes complete with, 68:22 with respect to a restricted set of formulas. 68:24 Or we can use more powerful inference rules. 68:27 So today we're going to restrict propositional logic, 68:33 um, to make modus ponens complete. 68:35 Um, and then next time we're gonna show how resolution, which is, uh, 68:39 an even more powerful inference rule, can be used to make, 68:43 um, arbitrary inferences. 68:45 And this is what's, uh, 68:46 powering the, the system that I showed you. 68:48 [NOISE] Okay. 68:51 So um, 68:53 a few more definitions. 68:55 [NOISE] 69:01 So we're gonna define, um, propositional logic restricted to horn clauses. 69:10 Okay. So, so a definite clause is, 69:18 um, [NOISE] a propositional formula of the following form. 69:22 So you have some propositional symbols, all conjoined together- 69:27 conjoined just means they're and-ed together, um- 69:29 implying some other propositional symbol q. 69:32 And the intuition of this, uh, 69:35 formula is: if p_1 through p_k hold, then q also holds. 69:39 So here are some examples. 69:41 Rain and snow implies traffic. 69:44 Um, and traffic by itself is also possible. 69:48 Um, this is a non-example. 69:52 Um, this is a valid propositional, uh, 69:56 logic formula, but it's not a valid definite clause. 70:00 Um, here is also another non-example: 70:03 um, rain and snow implies, uh, 70:06 traffic or peaceful, okay? 70:09 So this is not allowed, because the only thing allowed on the right-hand side of, 70:15 um, the implication is a single propositional symbol, 70:18 and there are two things over here. 70:21 Okay? So a horn clause is, um, 70:29 a definite clause or a goal clause, 70:33 um, which might seem a little bit mysterious. 70:38 But, um, a goal clause is defined as, 70:42 [NOISE] um, something where, 70:45 um, p_1 through p_k implies false. 70:49 And the way to think about this is as the negation of a conjunction of things, right? 70:56 Because remember, um, p implies q is not p or q. 71:01 So this would be not p or false, 71:06 which is not p in this case. 71:11 Okay? So, so now we have these horn clauses.
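A small sketch of definite clauses as data, matching the definition above; the name DefiniteClause is hypothetical. Allowing k = 0 premises encodes bare facts like Traffic, which is the corner case discussed again below.

```python
from typing import NamedTuple

class DefiniteClause(NamedTuple):
    premises: frozenset   # p_1, ..., p_k, all conjoined together (k = 0 is allowed)
    head: str             # the single propositional symbol q on the right-hand side

# Rain and Snow implies Traffic:
c1 = DefiniteClause(frozenset({"Rain", "Snow"}), "Traffic")
# Traffic by itself (the "and" of zero premises is just true):
c2 = DefiniteClause(frozenset(), "Traffic")
# "Rain and Snow implies Traffic or Peaceful" cannot be written down at all:
# the head holds exactly one symbol, which is the whole restriction.
```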
71:16 Um, now, remember the inference rule of modus ponens. 71:20 Um, we are gonna slightly generalize this to include, 71:26 um, not just p implies q, 71:28 but p_1 through p_k implies q. 71:30 So you get to match on, um, 71:33 premises, which are formulas that are atomic propositional symbols, and a rule that, 71:38 um, looks like this, and you can, uh, 71:41 derive or prove, uh, q from that. 71:45 Okay? So as an example: wet and weekday. 71:49 If you see wet, weekday, 71:52 and wet and weekday implies traffic - those three formulas - 71:55 then you're able to add traffic. 71:58 [NOISE] Okay. 72:01 So, um, so here's the claim. 72:06 So modus ponens is complete with respect to Horn clauses for propositional logic. 72:12 So in other words, 72:14 what this means is that suppose that 72:17 the knowledge base contains only Horn clauses and that, 72:21 um, p is some entailed, 72:23 uh, propositional, uh, symbol. 72:26 By an entailed propositional symbol, I mean, 72:28 like, KB actually entails p semantically. 72:32 Then applying modus ponens will derive p. This means that the two relations are equivalent, 72:43 and you can celebrate because you have both, uh, 72:45 soundness and, uh, completeness. 72:49 Okay. So just a quick example of this. 72:52 Um, so here, imagine this is your knowledge base, um, 72:56 and you're asking the question, 73:01 uh, is there traffic? 73:03 And remember, because, um, 73:06 this is a set of only Horn clauses, 73:09 and we're using modus ponens which is complete, that means, um, 73:14 entailment is the same as being able to 73:17 derive it using these rules- this particular rule. 73:21 Um, and you would do it in the following way. 73:23 So rain, and rain implies wet, gives you wet. 73:27 Wet, weekday, and wet and weekday implies traffic, gives you traffic, 73:30 and then you're done. Yeah? 73:34 You were saying, uh, 73:37 like rain and weekday - 73:38 why are those Horn clauses? 73:39 [NOISE] The question is why are rain and weekday Horn clauses? 73:44 Yes. 73:44 So if you look at the definition of Horn clauses, they're definite clauses. 73:48 If you look at the defini- definition of definite clauses, 73:50 they look like this. 73:52 And, um, k can be 0 here. 73:55 Which means that, um, there's, 73:58 um, there's kind of like nothing there. 74:02 [NOISE] That makes sense? 74:10 It's a little bit like- I'm using this notation kind of, um, 74:14 to exploit this corner case that if you have the and of zero things, 74:20 then that's just, um, you know, true. 74:26 Or you can just say by definition, 74:28 definite clauses include bare propositional symbols. Um, that will do too.
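Here is a minimal sketch of running this generalized modus ponens as forward chaining, in my own encoding: each definite clause is a pair (premises, conclusion), and a bare fact like rain is the k = 0 corner case with an empty premise set. On the knowledge base from the slide, it derives traffic, just as completeness promises.

    # Knowledge base from the slide, as (premises, conclusion) pairs.
    KB = [
        (set(),              "rain"),     # rain (k = 0: a bare fact)
        (set(),              "weekday"),  # weekday
        ({"rain"},           "wet"),      # rain implies wet
        ({"wet", "weekday"}, "traffic"),  # wet and weekday implies traffic
    ]

    def forward_chain(kb):
        """Apply generalized modus ponens until no new symbol is derivable."""
        derived = set()
        changed = True
        while changed:
            changed = False
            for premises, conclusion in kb:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(KB))  # {'rain', 'weekday', 'wet', 'traffic'}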
75:23 The problem with propositional sym- uh, 75:26 the more general clauses is, 75:28 if you look at this: 75:29 rain and snow implies traffic or peaceful. 75:32 You can't just write down traffic or, 75:35 or peaceful, or both of them. 75:38 You have to reason about, 75:40 well, it could be either one. 75:41 And that, um, is outside the scope of what modus, 75:47 you know, ponens can do. Yeah? 75:49 [inaudible] and peaceful, that way, 75:52 you can say like [inaudible]. 75:55 Yeah, yeah, good, good question. 75:57 So what happens if it were traffic and peaceful? 76:00 Um, so this is an interesting case where technically, 76:03 it's not a definite clause, 76:05 but it essentially is. 76:07 Um, [LAUGHTER] so- there are a few subtleties here. 76:13 So if you have a implies b and c, 76:16 you can rewrite that. 76:18 And this is exactly the same as having two formulas: 76:21 a implies b, and a implies c. And these are definite clauses. 76:26 Um, just like, you know, 76:28 technically if I gave you, "Hey, 76:31 what about, uh, not a or b?" 76:33 It's not a definite clause by the definition, 76:36 but you can rewrite that as, 76:38 um, a implies b. 76:41 So there's, um, 76:43 a slight [NOISE] kind of- you can extend this to cover not only definite clauses, 76:51 but all things which are morally, 76:55 um, Horn clauses, right? 76:57 Where you can do a little bit of rewriting, um, 77:00 and then you get a Horn clause and then you can do your, you know, inference. 77:07 Okay? Um, so resolution is this inference rule 77:11 that we'll look at next time that allows us to deal with these disjunctions. 77:16 Um, okay. 77:17 So to wrap up. So today, 77:20 we talked about logic. 77:21 So logic has three pieces. 77:22 We introduced the syntax for propositional logic. 77:25 There are propositional symbols which you string together into formulas. 77:29 Um, and over in syntax land, uh, 77:33 these are given meaning by, 77:35 um, talking about, you know, semantics. 77:38 And we introduced the idea of a model, 77:41 which is a particular configuration of the world that you can be in. 77:44 A formula denotes the set of models in which that formula is true, 77:50 and this is given by the interpretation function. 77:52 Then we looked at, um, 77:54 entailment, contradiction, [NOISE] and contingency, 77:56 which are relations between a knowledge base and a new formula that you might pick up. 78:00 To check those, you can either do model checking, 78:04 which tests for satisfiability, 78:07 or you can actually do things in, um, 78:11 syntax land by just, uh, 78:13 operating directly with, um, inference rules. 78:17 Um, so that's all I have for, uh, today. 78:21 And I'll see you next Monday. 78:23 [NOISE].
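One last aside on the rewriting trick from that Q&A exchange, in the same (premises, conclusion) encoding as the sketch above and again in my own notation: a implies (b and c) splits into two definite clauses, and a disjunction such as not a or b is just a implies b in disguise.

    def split_conjunctive_conclusion(premises, conclusions):
        """a -> (b and c) becomes the two definite clauses a -> b and a -> c."""
        return [(set(premises), q) for q in conclusions]

    def disjunction_to_definite(negated, positive):
        """(not p_1 or ... or not p_k or q) is p_1 and ... and p_k -> q."""
        return (set(negated), positive)

    print(split_conjunctive_conclusion({"a"}, ["b", "c"]))
    # [({'a'}, 'b'), ({'a'}, 'c')]
    print(disjunction_to_definite(["a"], "b"))  # not a or b  =>  a -> b
    # ({'a'}, 'b')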