Transcript

Okay, well, I guess a reminder: we're scheduled for our first exam this Friday, and there are sample tests on the course website. You certainly want to try these before Friday. Your first assignment, you should get that back graded, together with a solution set, on Wednesday, so you'd be able to go over it before the exam. All right, do you have any questions?

All right, today we're going to extend the binomial heap to the Fibonacci heap data structure, and today we'll only take a look at how you do this extension; we won't actually analyze the amortized complexity of Fibonacci heap operations. What we'll see is that the Fibonacci heap supports two operations in addition to those supported by the binomial heap, the two being the remove, which is an arbitrary remove: external to the data structure, the user has pointers to nodes in your Fibonacci heap and says "remove the item that's sitting in this particular node." So you get an arbitrary remove, and you have a decrease key. Again, for the decrease key the assumption is that the user is keeping pointers to nodes in your structure and comes to you and says "decrease the key of the item sitting in this particular node."

All right, so we're going to support these two operations in addition to the three supported by a binomial heap. The actual complexity of the binomial heap operations remains the same here as it was when we looked at binomial heaps; the remove operation and the decrease key have a rather poor actual complexity of O(n). So if you simply look at the actual complexities, your conclusion would be that this is a pretty lousy data structure. However, the amortized complexity is fairly good; in particular, the amortized complexity of the decrease key operation is O(1), which is very, very good, and which makes this data structure very useful in the implementation of greedy algorithms: it lets you arrive at greedy algorithms that are provably good from the asymptotic complexity point of view.

Now, today we'll take a look at a couple of applications of the Fibonacci heap and then at how you perform the various operations in the Fibonacci heap. We won't actually do the amortized complexity analysis; I'm going to let you read the amortized complexity analysis from the text. Alternatively, on the website I've added a pointer to a lecture that I gave in the past, which is very similar to our last lecture, and it goes through the analysis of the Fibonacci heap. So if you follow that link you can watch an old lecture on Fibonacci heap analysis and see why the amortized complexity is what I've claimed it is.
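(For reference, here is the operation set just described, collected as a sketch with the claimed bounds as comments. The class and method names are illustrative assumptions, not code from the lecture.)

```python
# Illustrative sketch of the min Fibonacci heap interface described in
# this lecture. Names and signatures are assumptions, not the lecture's.

class MinFibonacciHeap:
    def insert(self, item):
        """O(1); returns the node holding item, so the user can later
        call decrease_key or remove on it."""

    def meld(self, other):
        """O(1); combines two Fibonacci heaps into one."""

    def remove_min(self):
        """O(log n) amortized; removes and returns the min item."""

    # The two operations beyond the binomial heap. Their actual
    # (worst-case) complexity is poor, but amortized they are good:
    def remove(self, node):
        """O(log n) amortized; removes the item in an arbitrary node."""

    def decrease_key(self, node, amount):
        """O(1) amortized; lowers the key of the item in node by a
        non-negative amount."""
```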
All right, so let's take a look at a couple of applications: two greedy algorithms we're going to see. And actually, when I say "see," I just mean I'm going to try and refresh your memory about the two greedy algorithms. The first is Dijkstra's shortest path algorithm. It finds the shortest paths in a directed graph from a given source vertex to all destination vertices in the graph. So we have a directed graph here; it has a total of seven vertices, the given source vertex is one, and you want to find the shortest paths from vertex one to the remaining six vertices, two through seven.

Dijkstra's shortest path algorithm, if you recall, constructs the shortest paths in increasing order of length. So you start with the shortest path from vertex one to itself; its length is zero. Then you get the next shortest path, next meaning the one of next smallest length, and then you go from having two shortest paths to three, to four, till eventually you have seven shortest paths. The way you get the next shortest path is you look at one-edge extensions of the shortest paths you already have. So you start with the path from 1 to 1, and from there you look at all the one-edge extensions: from the path from 1 to 1 you can extend it to vertex 3, to vertex 2, to vertex 4, and to vertex 7, each through a one-edge extension, and of all the possible one-edge extensions you take the smallest one. So these would have a total cost of 2, a total cost of 16, a total cost of 6, and a total cost of 14; the shortest is 1-3, with a total cost of two, so the next shortest path in your sequence is 1-3. Now I've got two shortest paths. Then I look at one-edge extensions of both of these shortest paths, and from all of those possibilities pick the shortest; that gives me three shortest paths. You repeat this process till you end up with seven shortest paths, or until you can't add another shortest path. All right, so that's basically how Dijkstra's algorithm works.

In terms of implementation, what you do is you keep track of a quantity called D: D[i] is the length of the shortest one-edge extension that gets you to vertex i. Then in each round you have to find the smallest D, the smallest among the as yet unselected destinations; you add that to your collection of shortest paths, and then you update some of the D values by using edges leaving this new vertex. All right, so hopefully that sounds kind of familiar. I don't expect you to learn Dijkstra's algorithm from what I said here, but I do expect you to be able to recall Dijkstra's algorithm based on what I said here.

All right, so the things you do with this quantity D: on each iteration, where you add the next shortest path, you take this collection of values D, and from there you remove the destination with the smallest value, a destination to which you don't already have a path. You do this n minus 1 times if you have n vertices, so you're doing it of the order of n times: you've got O(n) remove min operations on this set of D values. When you select the vertex for the next shortest path, you decrease some of the D values, because now you have a new shortest path whose one-edge extensions you look at, and that decreases some of the old D values. This is done a total number of times equal to the number of edges in the graph. These are the two operations you really perform on this collection of values.
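As a sketch of that implementation pattern, here the D values live in an abstract min priority queue. The queue interface and helper names are my assumptions for illustration, not the lecture's code.

```python
import math

def dijkstra(adj, source, pq):
    # adj[u]: list of (v, weight) pairs, weights non-negative. pq is an
    # empty min priority queue offering insert(key, item) -> node,
    # remove_min() -> (key, item), and decrease_key(node, new_key).
    # This interface is an assumption for the sketch; the lecture's
    # decrease-by-amount form is equivalent for finite keys.
    n = len(adj)
    d = [math.inf] * n        # d[v]: length of best one-edge extension to v
    d[source] = 0
    node = [pq.insert(d[v], v) for v in range(n)]  # handle for each vertex
    for _ in range(n):        # n remove mins over the whole run
        du, u = pq.remove_min()          # next shortest path, by length
        for v, w in adj[u]:   # at most e decrease keys over the whole run
            if du + w < d[v]:            # better one-edge extension via u
                d[v] = du + w
                pq.decrease_key(node[v], d[v])
    return d                  # d[v] is now the shortest path length to v
```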
The typical way to implement Dijkstra's algorithm, at least the way that's taught in undergraduate courses, is to employ an array for D. To do the remove min, you just check all n values of D, skip the ones whose destination is already taken care of, and from the remainder pick the one that has the smallest value; that takes you O(n) time per remove min. To do the decrease key, you look at an edge, make a random access to the appropriate D, and reduce its value. So you do n remove mins, each taking O(n) time; that's n². You do e decreases, each taking O(1) time; that's e. Since the number of edges cannot be more than n², your overall complexity is O(n²) for Dijkstra's algorithm using an array, and that's typically what you would have seen in an undergraduate data structures class.

Yeah, you could use a min heap. If you use a min heap, then you've got n remove mins, which would take O(n log n) time, and you've got e decrease keys, so that's e change-key operations, each taking O(log n); it works out to n log n plus e log n. If the number of edges is small, so if you have a sparse graph, e being of the order of n, then this works out to n log n and you're doing better than the array. But if the graph is dense, so it's got of the order of n² edges, it works out to n² log n, and that's worse than the array. So depending upon whether you have a sparse graph or a dense graph, a min heap may give you better or worse performance.

Use a Fibonacci heap. Then, for the remove min operations: we're going to have a sequence of remove mins, about n of them, and a sequence of decrease keys, with e decrease keys in the sequence. While an individual remove min might take O(n) time, or a specific decrease key might take O(n) time, we really care about the total running time of the algorithm, so I'll use the amortized complexity to figure out the cost of the whole sequence of n remove mins and e decrease key operations. The amortized complexity here is log n and over here it's 1, so the entire sequence takes n log n plus e amount of time. So if you implement Dijkstra's algorithm using the Fibonacci heap, your algorithm runs in n log n plus e time, and this is not amortized time; this is the actual worst-case time of your algorithm. So if you have a dense graph, e is of the order of n², this works out to n², and you're doing the same as an array. If you have a sparse graph, e is of the order of n, this is n log n, and you're doing what the heap was doing. So you're kind of getting the best of both worlds: regardless of the kind of graph you're dealing with, the Fibonacci heap gives you good performance. Right, any questions about that?
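(Putting the three counts side by side, with n vertices and e edges; this is just the arithmetic from the discussion above.)

```latex
\begin{align*}
\text{array:}          \;& n \cdot O(n)      + e \cdot O(1)      = O(n^2)\\
\text{min heap:}       \;& n \cdot O(\log n) + e \cdot O(\log n) = O((n+e)\log n)\\
\text{Fibonacci heap:} \;& n \cdot O(\log n) + e \cdot O(1)      = O(n\log n + e)
\end{align*}
```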
Let's take a look at another one. This is Prim's minimum cost spanning tree algorithm, again something you should have seen in an undergraduate data structures class, and the story is the same there. You can implement this algorithm using an array, and the complexity will turn out to be n² again. It's a greedy algorithm, the same kind of thing: on each round we select the next vertex to include using a greedy strategy, so you do a remove min, and based on that you have to change some of the values. So the complexity is the same with the three different choices of data structures, and the Fibonacci heap gives you the best performance over a wide range of densities for the graph.

All right, so with that as motivation, let's take a look at the Fibonacci heap itself. Again, there's a min Fibonacci heap and a max Fibonacci heap; they're symmetric structures, and we look at the min version. Like your binomial heap, it's a collection of min trees, except this time the trees may not be binomial.

All right, the node structure we're going to employ here: each node will have a degree field, a child field, and a data field, so that's kind of the same as in a binomial heap. We're going to have not just a sibling field but left and right sibling fields, because we're going to maintain doubly linked lists here instead of singly linked lists; the siblings will be linked together in a doubly linked circular list rather than a singly linked circular list. Each node will have a parent field, and each node will have a field called child cut, which will be true or false: child cut will be true if a node has lost a child since the most recent time it became a child of its parent. Sounds a little complicated; it will become clear when we look at an example. So that's a boolean field, a pointer field, and two more pointer fields, and we'll see in a moment why we need to have doubly linked circular lists rather than singly linked circular lists.

Okay, let's look at an example. You have a collection of min trees: here I've got this pink one, a yellow one, this one here, three, four, five, six min trees. So there are six min trees in my Fibonacci heap. In this particular case all of them are binomial trees, but they don't have to be; you could have other kinds of trees. Again, the kinds of trees that can arise depend on how we define the algorithms, just like in a binomial heap we ended up with binomial trees by virtue of how we defined the algorithms. I guess I should point out that some of the fields are not shown here: parents are not shown, and child cuts are not shown.

Now, to see why we need doubly linked lists, suppose you want to perform one of the new operations, like an arbitrary remove. People have pointers external to the structure in order to do things like arbitrary remove and decrease key. What that means is, if somebody came to you and said "remove the four," and the way you implemented that was by using the idea for removal from a singly linked circular list, copying the next item over: so you copied the content of this node into that node and then deleted this node. That works as far as removing the four is concerned, but the problem you have is that, external to you, there are people keeping track of the fact that the five was sitting here, and you took the five out of there. You don't know where these pointers are that are pointing to the five, so the next time around somebody's probably going to come and say "change the key in this five," but you went and deleted that node.
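Here is the node structure just described, as a sketch in Python. The field names are mine, and the unlink method shows the order-one removal from a sibling list that the doubly linked pointers buy you.

```python
class FibNode:
    # Node fields as described in the lecture; names are my own.
    def __init__(self, data):
        self.data = data        # the element (key plus other attributes)
        self.degree = 0         # number of children
        self.child = None       # pointer to one of the children
        self.parent = None      # null for a top-level (root) node
        self.left = self        # siblings form a doubly linked
        self.right = self       # circular list
        self.child_cut = False  # True iff this node lost a child since it
                                # most recently became a child of its parent

    def unlink_from_siblings(self):
        # O(1): splice this node out of its circular sibling list by
        # changing its left and right siblings' pointers. The node itself
        # is physically removed, so user pointers to other nodes stay valid.
        self.left.right = self.right
        self.right.left = self.left
        self.left = self.right = self
```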
So, to get things to work right, because you're supporting an arbitrary remove and a decrease key, both of which require the user to keep track of particular nodes, you can't actually start moving things from one node to another. If somebody says "remove the four," you must physically remove that node and not one of the other nodes. All right, so that's why we need doubly linked lists here: with a doubly linked list, you can remove this fellow from its doubly linked list in order-one time by just changing its left and right siblings' pointers.

All right, so let's take a look at how the operations work. The first three, insert, remove min, and meld, work the same way here as they work for binomial heaps. To insert an item, you create a min tree and put it into your collection of doubly linked min trees. To meld, you combine two doubly linked circular lists into one. To do a remove min, you remove the min item, take its subtrees together with the remaining trees at the top level, and do pairwise combines, leaving behind no two trees with the same degree. So all of that is the same.
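The meld, and the same splicing used again below when a child list moves to the top level, is this constant-time operation on two circular lists. A sketch, with helper names of my own:

```python
def splice(a, b):
    # Combine two nonempty doubly linked circular lists, each given by a
    # pointer to one of its nodes, into a single circular list in O(1)
    # time. Illustrative helper, not the lecturer's code.
    a_next, b_prev = a.right, b.left
    a.right, b.left = b, a            # a -> b ... b_prev -> a_next ... -> a
    b_prev.right, a_next.left = a_next, b_prev
    return a                          # any node on the combined list
```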
Right, suppose you now want to do an arbitrary remove. To do an arbitrary remove, the user has to point to the node that contains the item that's to be removed. Let me back up a bit: when I talked about the insert, when you do an insert you put an item into a node, and the insert can return to the user the node in which you put the item; that's how the user keeps track of which node contains the item that he may later want to do a decrease key on or remove. So the insert gets modified slightly: it returns a pointer to the node in which the item was placed. All right, so later the user may decide to remove the item in a node, so he invokes this method and gives you a pointer to the node which contains the relevant item. You check to see if the user is actually trying to use this as a sneaky way to do a remove min: if the node happens to point to the minimum item, you just invoke the remove min method. So this is not an escape from remove min.

All right, so let's assume that the node doesn't point to the minimum item in the Fibonacci heap. Let's say it points to some item here; it doesn't have to point to one that has a parent, it could for example be pointing to the six or the five, that's okay too, but let's say it points to this one for now. The things we need to do are these. We need to take this subtree out of its doubly linked circular list of siblings, and you can do that in order-one time, taking a node out of a doubly linked list. Now, once you take it out, like I've done here, you then need to take its doubly linked list of subtrees and meld it into the top-level doubly linked list. So you have two doubly linked circular lists: the top-level one, which has the one, the six, and the five, and now this one, which has the ten and the five. I take these two doubly linked lists of min trees and combine them into a single one, and I end up with a top-level doubly linked list that has five min trees in it.

All right, so this combining takes order-one time. I then have to change the parent fields in these nodes: these were previously pointing to the four, so the parent fields have to be changed to null to indicate that these are now top-level nodes, they're roots. The time required to change those parent fields will depend upon the degree of the four: how many subtrees did it have? When you do the analysis, you'll see that the number of subtrees is still log n, though the base is different: it's not log of n base 2, it's really log of n base φ, where φ is the so-called golden ratio, (1 + √5)/2. So it's still logarithmic in n; just the base is different. So there's a logarithmic number of subtrees whose roots have to have their parent pointer changed to null.

Right, so the actual complexity of this operation, if this is all we did, depends on how many subtrees that four had, and as we'll see later, that could become as much as log n. But we're going to be adding some more work in order to get that log n degree, and that extra work is going to take order of n time. For now, you've got order one to pull it out, order one to combine the two lists, and some amount of time to change the parent fields to null. The min pointer doesn't change, because the min element stays what it was; we did not remove the min element. There is no pairwise combining of equal-degree trees. Now, this isn't all there is to it; there's going to be something else that we'll talk about in a little bit, and once we do that, the actual complexity will go up to order n. All right, any questions about removing an arbitrary element?

Well, by element I mean this: first, a single-ended priority queue is a collection of elements, and each element has a priority associated with it. So in this case, this min tree represents two elements: one has priority five, one has priority six. There may be other attributes associated with an element which are not shown in the pictures. So if I say "remove the node here," that means take out the element that is stored there; its priority is shown here, five, but it's got other fields. You just take out whatever is stored in the data field of the node and throw it out. All right, other questions?
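Putting the steps together, an arbitrary remove looks roughly like this. It's a sketch using the helpers above, and it leaves out the cascading cut that gets added later in the lecture; detach() is an assumed helper.

```python
def remove(heap, node):
    # Arbitrary remove as described so far; sketch only. detach() unlinks
    # node from its sibling list in O(1) and fixes its parent's child
    # pointer and degree. Cascading cut is deliberately omitted here.
    if node is heap.min:
        heap.remove_min()             # not an escape from remove min
        return
    detach(node)                      # O(1)
    if node.child is not None:
        c = node.child
        while True:                   # O(log n) subtrees: their roots'
            c.parent = None           # parent pointers become null
            c = c.right
            if c is node.child:
                break
        heap.top = splice(heap.top, node.child)  # O(1) combine with top level
    # The min pointer is unchanged (we did not remove the min element),
    # and no pairwise combining of equal-degree trees is done.
```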
Let's look at how we plan to do a decrease key. Again, here the user has to tell you which node contains the item whose key you want to reduce, and then he has to tell you by how much the key is to be reduced, and that's supposed to be a non-negative amount. We're going to decrease it, so it's not a sneaky way to actually increase the key.

All right, so let's suppose we are to decrease the key of the element stored in that node; currently the key is four. When you decrease the key, you don't affect the min-heap relationships with respect to the descendants: this becomes smaller, so this whole subtree is still a min tree. The only problem that decreasing this key can create is a problem with respect to its parent, and of course it may have a grandparent and a great-grandparent and so on. So if I decrease the key from four to three, I don't have to do anything; I just leave things the way they are. If I decrease the key from four to zero, I have a problem, because now my parent has a bigger key than I have, and that's not permitted.

Okay, so we decrease the key; if the key drops below that in the parent, we're going to take the tree out. Fortunately we have a parent field, so I can easily check whether my key drops below that in the parent. If it does, I pull it out; we've seen how to pull a tree out of a doubly linked list, and that takes order-one time. Then we'll take that tree and put it into the top-level doubly linked list. So this is very similar to what we were doing in the case of an arbitrary remove. An important difference, which is what makes the amortized complexity of this operation different from that of an arbitrary remove, is that when you put things back, there is only one tree that goes back. In the case of an arbitrary remove, the root disappears and all of its subtrees come in, and the number of subtrees can be logarithmic; here it is only one. That's the important difference between these two operations, and that's why this one is going to have an amortized complexity of one while the other one has an amortized complexity of log n.

All right, so I decrease the key, I pull the tree out, and I put it into the top level. Then I have to check this key against the min value, because I might have dropped below the min value, and I update the min pointer. In this case I actually dropped below the old min value, so the min pointer is now going to come over here. Again, there is no pairwise combining of trees that have the same degree. So the actual complexity here so far is order one, whereas in the previous case we saw that it was really of the order of the degree of a node. All right, questions about how a decrease key is done?
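And the decrease key, again as a sketch without the cascading cut that comes next. Note that at most one tree moves to the top level; node.key stands for the priority stored in the node's data field.

```python
def decrease_key(heap, node, amount):
    # Decrease key as described so far; sketch only, cascading cut still
    # to come. amount must be non-negative: this is not a sneaky way to
    # increase a key.
    node.key -= amount                # the subtree below is still a min tree
    p = node.parent
    if p is not None and node.key < p.key:
        detach(node)                  # O(1) cut from the sibling list
        node.parent = None
        heap.top = splice(heap.top, node)   # only ONE tree goes back up,
                                            # unlike the arbitrary remove
    if node.key < heap.min.key:
        heap.min = node               # key may drop below the old min
    # No pairwise combining; O(1) actual time so far.
```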
Right, when you do the operations in this manner, the height of the trees that are created can actually become fairly large; you can get trees whose height is of the order of n. But the important thing is that the degree of these trees will be low. We're not going to prove that today, but if you watch the lecture that's posted on the web, which has the analysis, it's shown there that the degree is logarithmic. Right, with the two operations as we've defined them, we need to do additional work; if we don't do the additional work, the amortized complexity is not as stated in that table. So to get the amortized complexity, to get the degree of the trees to be logarithmic in the number of nodes they contain, we need to perform additional work, and that additional work is in the form of something called a cascading cut.

The two operations, arbitrary remove and decrease key, cut a node off from its parent: you take it out of its doubly linked circular list of siblings, effectively cutting it off from its parent. Now, if you permit too many nodes to get cut off from their parent, then you get provably bad performance from the amortized point of view. You have to control the number of children a node can lose in this process, and that's where the child cut field comes in. The child cut field keeps track of how many children a node has lost, and in fact, in the pure form of Fibonacci heaps, a node is allowed to lose only one child; to keep track of that you only need a boolean variable, true if you've lost a child, false if you haven't. So if you've lost a child in the past, then if you lose one more, you are cut off from your parent as well. So in a cascading cut, when a node loses a second child, it is cut off from its parent, and then this could trigger the same operation at its old parent, because now the old parent may have lost a second child, in which case it gets cut off from its parent, and so on. This continues until you reach a node that is losing its first child, or until you reach the root, in which case everybody has been cut off. Let's see an example; it'll make pretty clear what happens in a cascading cut.

All right, so I'm asked to either remove this element eight, or to decrease the key of eight, and it happens to drop below seven. Whatever you're doing, you have to cut this subtree out. Right, in this case, decreasing it by two drops it to six, so I cut it out. When I cut it out, I examine its old parent. Its old parent had a child cut field of true, which meant it had lost a child previously. So now it is losing a second child, and, I shouldn't say that's not permitted, but when that happens you cut the parent off from its parent. So the seven is going to get cut off here: we cut the seven out, and it moves up to become a top-level tree. Again, keep in mind that even though in this figure this node is simply a single node, in reality it could have a large number of subtrees, and all of those get moved along with the seven up to the top level. So a large number of subtrees can get promoted to the top level.

Right, now the seven was cut off from its parent, but its parent already had a child cut field of true, meaning that it had lost one child in the past. So that means you've got to cut the four off from its parent, and when you cut the four off, its subtrees go along with it, so it moves up to the top level. Yes, when it moves up it becomes a root, and for a root node the child cut value is not important, because we never use it for a root; you could set it to false, but it doesn't do you any good to do that.

All right, so at this time the two has lost a child, and this is the first time it's losing a child; its child cut value is currently false, so we change its child cut value to true. We don't cut it off from its parent; we just leave it here as true.
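In code, the cascading cut that runs after a node has been cut away from its parent p looks roughly like this; a sketch only, with the same assumed helpers as before.

```python
def cascading_cut(heap, p):
    # Called on the old parent p of a node that was just cut away,
    # during an arbitrary remove or a decrease key. Sketch only.
    while p.parent is not None:       # stop at a root: child_cut is
        if not p.child_cut:           # never used for a root node
            p.child_cut = True        # p lost its first child: mark, stop
            return
        q = p.parent                  # p lost a second child: cut p off
        detach(p)                     # too; its subtrees move up with it
        p.parent = None
        heap.top = splice(heap.top, p)
        p = q                         # the same check repeats at q
```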
But yeah, a question: why don't we just cut it off in any case? Why wait for the second child to go? Again, this has to do with the proof of the amortized complexity. If you cut off a fellow whose child cut is false, the proof will not go through, because of the way you pay for this cascading cut. A little while ago I said the height of these trees can be of the order of n, so the cascading cut operation has an actual complexity of the order of n: you start somewhere deep and you keep going up, and if everybody had true on it, you end up at the root; that costs you order n. But you've got to pay for this somehow in the amortized scheme, to bring the complexity of the other operations down to log n and one, and the only way to do that is if all of the cascading cut cost is borne by somebody else, and not by the decrease key operation or by the arbitrary remove. Somebody else has to pay for it, and the way you end up paying for it is that the operation that did the first cut leaves a credit of 1 on the true. So everybody who does a cut ends up paying one; it increases their cost by one instead of by a huge amount. If you went off and cut somebody with a false, there's no credit sitting on the false to pay for that cut. Again, if you watch the lecture on the analysis, you'll see that we can arrange for every node with a child cut value of true to have a credit of one sitting on it, to pay for the time it gets cut off from its parent, whereas the falses don't have any credits on them, and so if you do any work out there, you're in trouble. All right, so getting back here: the two had a subtree whose root was four; we cut it off and moved it up. Previously the two's child cut value was false; I change it to true, and I don't go any further up the tree.

All right, so that's how the cascading cut works. Any questions about how a cascading cut works? Yeah: why do I do a cut when the parent's child cut is true? Why did I do that? Well, let's go back. I decreased the key here by two, and then I had to take that fellow out, and before I did that, its parent had a child cut value of true, so at this time that fellow has lost a second child. And the question is, so what? Why don't we just leave it? Again, it has to go back to the proof. If you look at the proof carefully, you'll see that you can extend the proof so long as these fellows lose only a constant number of children. So I could change it: instead of keeping a boolean value here, keep a counter, zero, one, two; if it becomes two, well, then I cut it off, or when it becomes three. But if you allow a node to lose an arbitrary number of children, then the amortized complexity is no longer order one for the decrease key, or logarithmic for the arbitrary remove. So it's all got to do with being able to prove a desired amortized complexity. All right, any other questions about the child cut field or the cascading cut operation?

All right, so the question that finally remains: the cut operations that you do when you have an arbitrary remove or a decrease key change child cut values from false to true. So the question would be, well, when do these things become false again? Because if you do these operations long enough, everybody becomes true or becomes a root. When you do a remove min operation, that's the only operation in this whole collection of operations that makes one node a child of another, and when the remove min makes one node a child of another, it sets the child cut value of the child to false. That's why we have this somewhat complicated definition of child cut up front, which said that you count based on the most recent time you became a child of your parent. So this fellow here can lose a subtree, and its child cut becomes true; but then if you do a remove min and it becomes a subtree of somebody, its child cut becomes false again, and now it can lose another subtree, and that's okay. So it's not that you can lose a child only once in the history of the structure; you can lose children many, many times, but only once for each time you become a child of your current parent. So the remove min operation resets the child cut field: every time we do a pairwise combine, there's one node whose child cut field gets set to false, and that's the fellow who becomes a subtree of the other tree, the one with the smaller root. All right, questions?
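So the reset happens inside the remove min's pairwise combine. Here is a sketch of one combine step, where the tree with the larger root becomes a subtree and its child cut field goes back to false; helper names as before, mine.

```python
def pairwise_combine(a, b):
    # Combine two equal-degree min trees whose roots a and b sit on the
    # top-level list; used during remove min. Sketch only.
    if b.key < a.key:
        a, b = b, a               # a keeps the smaller root
    detach(b)                     # b leaves the top-level list
    b.parent = a
    if a.child is None:
        a.child = b
    else:
        splice(a.child, b)        # into a's child list
    a.degree += 1
    b.child_cut = False           # b just became a child of its current
    return a                      # parent, so its count starts over
```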
Okay, all right. So, as I said, we will not be doing the analysis in class, but there is a pre-recorded lecture that does the analysis, and you can get that by following the link: it's under instructional material; go to the bottom of that page, and there are links to some lectures from the past. All right, so we'll stop here.