Transcript

Okay, it looks like we've started. Some of you indicated you were having a problem getting access to the questions on the first assignment, so I asked the TA to type up the questions and post them on the TA announcement page. If you follow the links to the TA announcement page, you can see which problems you need to do on the first assignment.

Today we're going to start talking about double-ended priority queues, and we'll actually spend two lectures on this topic. First we'll take a look at what a double-ended priority queue is, and then I'll describe two different ways to arrive at data structures for double-ended priority queues. Today we look at one of these, what you might call the generic methods, and next time we look at some specific structures.

In a double-ended priority queue (DEPQ), the operations you need to support are: insert a new item, where each item has a key, or priority, associated with it; remove the item that has the maximum priority; and remove the item that has the minimum priority. You need to be able to do all three of these operations. By contrast, a single-ended priority queue supports only one of the two remove operations: with a max heap you can do the first two efficiently, and with a min heap you can do the first and the third efficiently. In a double-ended priority queue we can perform all three operations efficiently.
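To make that concrete, here is a minimal sketch of the interface a DEPQ has to support. The sketch is in Python, and the class and method names (`DEPQ`, `insert`, `remove_min`, `remove_max`) are my own choices for illustration, not names from the lecture or from any library.

```python
from abc import ABC, abstractmethod

class DEPQ(ABC):
    """The three operations a double-ended priority queue must support.
    A min heap alone gives you insert + remove_min, a max heap alone gives
    you insert + remove_max; a DEPQ has to provide all three efficiently."""

    @abstractmethod
    def insert(self, key):
        """Add an item with the given key (priority)."""

    @abstractmethod
    def remove_min(self):
        """Remove and return an item whose key is smallest."""

    @abstractmethod
    def remove_max(self):
        """Remove and return an item whose key is largest."""
```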
Now, there are general ways to arrive at data structures for double-ended priority queues starting from structures for single-ended priority queues; we could start with a heap, for example, and build an effective structure for a double-ended priority queue. The general methods are, first, the dual min and max single-ended priority queue, where you keep both a min priority queue and a max priority queue (we'll see how that works in a moment), and second, correspondence-based structures, which are more space efficient than the dual structures.

There are also specialized structures designed specifically for double-ended priority queues, and pretty much all of these are inspired by the heap structure for single-ended priority queues: you make some variations on the heap structure to arrive at these specialized double-ended structures. One of these is the symmetric min-max heap, probably the simplest of the specialized structures. We won't be covering it here in class, but you can read about it, and there are actually two places in the readings where you can find it. If you go to the website for the class, there is a reading on double-ended priority queues, and symmetric min-max heaps are described there. There is also a reading titled "pairing heaps," and that one contains a slightly more extensive discussion of symmetric min-max heaps than the reading titled double-ended priority queues. So you can read about them in either of those two places; the second one is, as I said, slightly more elaborate. Min-max heaps are described in the text; we won't be going through those in class. The deap structure is also described in the text, and we won't go through it in class either, but one of the questions on the first assignment is based on deaps, so you really need to read that section yourself and then figure out how to do the problem. Finally, there are interval heaps, and interval heaps are something we will cover in class; they will be the subject of the next lecture. So today, as I said, we'll be looking at the general methods, and in the next lecture we'll take a look at interval heaps.

As I said, the general methods aim at building a double-ended priority queue structure starting from a structure for a single-ended priority queue. The simplest strategy is what's called a dual single-ended priority queue structure: you keep both a min priority queue and a max priority queue. If you have n elements in your double-ended priority queue, then each of these n elements is stored in the min priority queue as well as in the max priority queue, so you essentially end up using space for 2n elements. The requirement for this strategy to work is that, in addition to being able to do the single-ended priority queue operations efficiently, you can do an arbitrary remove efficiently: if I point to a node in your single-ended structure, not necessarily the min or the max, you should be able to remove the element in that node in a short amount of time. As we'll see in the example, when we use this scheme each node in the min priority queue keeps a pointer to its copy in the max priority queue, and vice versa.

So suppose we have a nine-element double-ended priority queue, and I'm showing only the keys, the priorities, of the nine elements. Using the dual scheme, say I base it on a heap. I could use other priority queue structures; later in this course we'll see things like leftist trees and binomial heaps, and those could be used in place of a heap, but for the moment let's use a heap, since that's what we're familiar with. I take a min heap and put my nine elements into it, then I take a max heap and put the same nine elements into it, and then I set up pointers from each element in the min heap to its position in the max heap and vice versa. The 1, for example, has a pointer to its mirror image in the max heap, and there is a backward pointer from the 1 in the max heap to the 1 in the min heap. Similarly for the 2: from the 2 in the min heap you can find where it is in the max heap and vice versa; the same for the 3, the 6, and so on. Altogether there are nine of these two-way pointers.
With this setup we need space for twice as many elements, so I have 18 nodes, and the nodes are somewhat bigger than when we just had a single-ended priority queue, because of the pointers, which weren't there before. I can perform my operations in logarithmic time. To do an insert, I insert the new element into the min heap, I also insert it into the max heap, and then I set up the two-way pointer between the two occurrences of the same element; the insert still takes O(log n) time, it's just that we're doing two inserts and a pointer setup. To do a removeMin, I take the 1 out of the min heap using the ordinary min-heap delete, its pointer gets me to the 1 in the max heap, and then I do an arbitrary remove there: since I know the position of the element, I use my algorithm for an arbitrary remove at a known position and take it out. The removeMax is similar to the removeMin. Initialize is similar too: you initialize the min heap with all the elements, initialize the max heap, and set up the two-way pointers. So you can get everything to work in the same asymptotic complexity as a heap; it's just that the amount of time each operation takes is a little more than twice what it took with a single heap. It's a little more than twice because of the pointers: every time you move an element you're not just moving the element, you're also updating the pointers.

So that's one way to do it: if you know how to implement a single-ended priority queue, and that structure supports an arbitrary remove in addition to the standard single-ended priority queue operations, then you can build a double-ended structure quite easily. This strategy takes about twice the time and about twice the space.
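Here is a small runnable sketch of that dual scheme, under the assumption that the single-ended structure is an array-based binary heap in which every node remembers its current index, so that an arbitrary remove at a known position takes O(log n) time. All of the names (`Node`, `Heap`, `DualDEPQ`, `remove_at`, and so on) are mine, chosen for illustration; this is one possible way to realize the idea, not code from the course.

```python
import operator

class Node:
    """One stored copy of an element. `twin` is the copy of the same element in the
    other heap; `pos` is this node's current index inside its own heap."""
    def __init__(self, key):
        self.key = key
        self.twin = None
        self.pos = None

class Heap:
    """Array-based binary heap. `better(a, b)` is `<` for a min heap and `>` for a
    max heap. Nodes track their positions, so remove_at() gives us the arbitrary
    remove that the dual scheme needs."""
    def __init__(self, better):
        self.a = []
        self.better = better

    def _place(self, i, node):
        self.a[i] = node
        node.pos = i

    def _sift_up(self, i):
        node = self.a[i]
        while i > 0 and self.better(node.key, self.a[(i - 1) // 2].key):
            self._place(i, self.a[(i - 1) // 2])   # pull the parent down
            i = (i - 1) // 2
        self._place(i, node)

    def _sift_down(self, i):
        node, n = self.a[i], len(self.a)
        while True:
            c = 2 * i + 1                          # left child
            if c >= n:
                break
            if c + 1 < n and self.better(self.a[c + 1].key, self.a[c].key):
                c += 1                             # use the better of the two children
            if not self.better(self.a[c].key, node.key):
                break
            self._place(i, self.a[c])              # pull the child up
            i = c
        self._place(i, node)

    def push(self, node):
        self.a.append(node)
        self._sift_up(len(self.a) - 1)

    def remove_at(self, i):
        """Remove and return the node at index i (index 0 is the top)."""
        node, last = self.a[i], self.a.pop()
        if node is not last:                       # fill the hole with the last node,
            self._place(i, last)                   # then restore the heap around it
            self._sift_up(i)
            self._sift_down(last.pos)
        return node

class DualDEPQ:
    """Dual-structure DEPQ: every element is stored once in a min heap and once in a
    max heap, and the two copies point at each other."""
    def __init__(self):
        self.min_heap = Heap(operator.lt)
        self.max_heap = Heap(operator.gt)

    def insert(self, key):
        a, b = Node(key), Node(key)
        a.twin, b.twin = b, a                      # the two-way pointer
        self.min_heap.push(a)
        self.max_heap.push(b)

    def remove_min(self):
        node = self.min_heap.remove_at(0)          # ordinary delete-min
        self.max_heap.remove_at(node.twin.pos)     # arbitrary remove of its copy
        return node.key

    def remove_max(self):
        node = self.max_heap.remove_at(0)
        self.min_heap.remove_at(node.twin.pos)
        return node.key
```

Every move during sifting also updates a `pos` field, which is exactly the extra pointer maintenance that makes each operation a bit more than twice as expensive as with a single heap.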
The correspondence-based structures aim to reduce the number of nodes to n. I would still use both a min and a max single-ended priority queue, but this time only half of the elements are kept in the min queue and half in the max queue. If you have an odd number of elements, the additional element is kept in a buffer, so it is present in neither the min queue nor the max queue. As in the dual structure, we establish two-way pointers, which we'll call a correspondence; but now, since you don't have duplicates, the correspondence is defined differently: each element in the min heap is paired with an element in the max heap that is at least as big, and each element of the max heap is paired with an element in the min heap that is no bigger. So it's a slightly different notion of correspondence. That is the case for total correspondence; the other version is leaf correspondence, which we'll talk about a little later. As was the case for the dual strategy, here also you need to be able to do an arbitrary remove efficiently.

Let's look at total correspondence. As I said, in total correspondence each element of the min priority queue is paired with a different element of the max priority queue whose value is at least as big. Since the number of elements in the min and max priority queues is the same, once you do this pairing in one direction you automatically get a pairing in the other direction that covers every element of the max priority queue.

So suppose you have eleven elements. Then five of them go into the min heap, another five go into the max heap, and the extra one stays in the buffer. In addition, we need a correspondence between every element in the min heap and an element on the max side that is at least as big, and you can't pair two elements on the min side with the same element on the max side; they have to be paired with different elements. So I pair the 1 with the 2 (the 1 has to be paired with someone at least as big, and pairing it with the 2 automatically pairs the 2 with the 1), I pair the 5 with the 7, the 9 with the 10, the 14 with the 18, and the 17 with the 20. This is an example of a total correspondence between two single-ended priority queues, each having exactly the same number of elements.

Now if you look at this structure, the smallest element is either at the root of the min priority queue or in the buffer. Among the elements in the two priority queues, the root of the min heap has to be the smallest: it's the smallest of the elements on the min side, and for every element on the min side there is a bigger corresponding element on the max side, so the overall smallest can't be on the max side. The only thing that might spoil this is the element in the buffer; the smallest element is either at the root of the min heap or in the buffer. Similarly, among the elements in the two priority queues the largest has to be on the max side, because every element on the min side has a bigger corresponding element out there, and of the elements on the max side the biggest is at the root, since this is a max priority queue; again, the element in the buffer may be even bigger. So if you want to find the smallest element, you take the smaller of the min-heap root and the buffer; if you want to find the biggest element, you take the larger of the max-heap root and the buffer. You can find the smallest and the largest in constant time.
Let's take a look at inserting an element. The first case for insertion is that the buffer is empty, so you currently have an even number of elements; in that case you just put the new element in the buffer. The next case is that the buffer is not empty. Suppose the buffer holds a 12 and you are inserting a 4. We find the smaller of the two, the 4 and the element in the buffer, which here is the 4; you put the 4 into the min structure, you put the 12 into the max structure, and then you set up a correspondence between the 4 and the 12. So for an insert, half the time you don't do much of anything, you just put the element in the buffer; the other half of the time you do two inserts, one into the min structure and one into the max structure, and set up the pointers. So again your insert runs in logarithmic time. Any questions about the insert?

Now let's look at the removeMin operation. To remove the min you first need to find the min, and we said the min element is either at the root of the min heap or in the buffer. If it's the element in the buffer, you just empty the buffer and you don't have to do anything else. If it's not the element in the buffer, and in this case it's not, it's the 1 at the root, then you remove the 1 from the min heap and you remove the corresponding element, the 2, from the max structure. Once you take out the 1 and the 2, you leave behind two properly structured heaps together with their correspondence pointers, but now the 2 is homeless: it's not in your data structure, and you have to put it back in, so you do an insert. The insert works as described before: if the buffer is empty, the 2 goes into the buffer; if the buffer is not empty, you find the smaller of the 2 and the 12, the smaller one gets inserted into the min structure, and the bigger one gets inserted into the max structure.

So one way to look at removeMin is: find the smaller of the root of the min tree and the element in the buffer. If it's the element in the buffer, you empty the buffer; that's the easy case. The more involved case is the one shown in the picture, where the smallest element is at the root of the min tree: you do a delete-min there using the delete-min operation of your single-ended priority queue, you follow the correspondence pointer to find the corresponding element in the max structure, and you do an arbitrary remove of that element. Once you've finished those two deletes, you have two single-ended priority queues that satisfy the correspondence constraint, everybody has a partner, but you have one element, in this case the 2, which is not in the structure, so you put it back with an insert: if the buffer is empty the 2 goes into the buffer, and if the buffer is not empty you find the smaller of the 2 and the 12, the smaller one gets inserted into the min side and the bigger one into the max side. So in the worst case it looks like you're doing two deletes and two inserts to accomplish this operation.
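To tie the pieces together, here is a hedged sketch of total correspondence in Python. It reuses the `Heap` and `Node` helpers from the dual-structure sketch above (the `twin` pointer now plays the role of the correspondence pointer), and the class name, the `buffer` field, and the method names are my own. `remove_max` would be symmetric to `remove_min` and is omitted; this is the "two deletes plus a re-insert" formulation, not the faster variant discussed next.

```python
import operator   # Heap and Node are assumed from the dual-structure sketch above

class TotalCorrespondenceDEPQ:
    """Total-correspondence DEPQ: n elements occupy about n heap nodes in all,
    plus at most one key held in `buffer`."""
    def __init__(self):
        self.min_heap = Heap(operator.lt)
        self.max_heap = Heap(operator.gt)
        self.buffer = None                          # the odd element, if any

    def _insert_pair(self, small, large):
        a, b = Node(small), Node(large)
        a.twin, b.twin = b, a                       # correspondence: small <= large
        self.min_heap.push(a)
        self.max_heap.push(b)

    def insert(self, key):
        if self.buffer is None:                     # even -> odd: just buffer the new key
            self.buffer = key
            return
        small, large = sorted((key, self.buffer))   # smaller goes to the min side
        self.buffer = None
        self._insert_pair(small, large)             # two inserts plus the pointer setup

    def find_min(self):                             # constant time: min-heap root vs buffer
        cands = [self.min_heap.a[0].key] if self.min_heap.a else []
        if self.buffer is not None:
            cands.append(self.buffer)
        return min(cands)

    def remove_min(self):
        root = self.min_heap.a[0].key if self.min_heap.a else None
        if self.buffer is not None and (root is None or self.buffer < root):
            out, self.buffer = self.buffer, None    # the min was the buffered key
            return out
        node = self.min_heap.remove_at(0)                 # delete-min on the min side
        partner = self.max_heap.remove_at(node.twin.pos)  # arbitrary remove of its partner
        self.insert(partner.key)                          # the partner is homeless: re-insert it
        return node.key

if __name__ == "__main__":
    depq = TotalCorrespondenceDEPQ()
    for k in [2, 1, 7, 5, 10, 9, 18, 14, 20, 17, 12]:
        depq.insert(k)       # builds the pairs (1,2), (5,7), (9,10), (14,18), (17,20); 12 buffered
    print(depq.find_min())   # 1
    print(depq.remove_min()) # 1; the homeless 2 pairs up with the buffered 12
    print(depq.find_min())   # 2
```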
Now, as an implementation point, you can do this quicker than two deletes and two inserts. In this particular case, since the 2 is coming back into the min heap, what we would really do is a changeKey on the 1: I do a changeKey operation that changes the 1 to a 2, in which case the 2 just stays at the root, and in this particular case there is no sifting down. Then I do a changeKey on the 2 in the max heap: I change the 2 to a 12 and follow a path up, so the 12 comes up and the 10 goes down. So even though initially it's easier to describe the operation as two deletes and two inserts, as an implementation matter you can do it as two changeKey operations, so it's actually faster than it seems.
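As a sketch of that implementation point, here is the same removeMin written with two key changes. It assumes the `Heap`, `Node`, and `TotalCorrespondenceDEPQ` sketches above, and it assumes the minimum really is at the min-heap root and the buffer is non-empty, as in the 1/2/12 example; the helper names are again my own.

```python
def change_key(heap, node, new_key):
    """Overwrite a node's key and restore heap order. Only one of the two sifts
    will actually move the node; the other is a no-op."""
    node.key = new_key
    heap._sift_up(node.pos)
    heap._sift_down(node.pos)

def remove_min_with_change_key(depq):
    """removeMin for the total-correspondence sketch via two change_key calls,
    assuming the min is at the min-heap root and the buffer holds a key."""
    min_node = depq.min_heap.a[0]
    partner = min_node.twin                        # its corresponding element in the max heap
    removed = min_node.key
    small, large = sorted((partner.key, depq.buffer))
    depq.buffer = None
    change_key(depq.min_heap, min_node, small)     # e.g. the 1 becomes a 2, sifting down if needed
    change_key(depq.max_heap, partner, large)      # e.g. the 2 becomes a 12, sifting up if needed
    return removed
```

The pair that used to hold the removed key and its partner now holds the partner's old key and the buffered key, so the correspondence is preserved without any node ever leaving its heap.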
Any questions about the removeMin? [A student asks why we take the 2 out at all.] The purpose of taking out the 2 is that you have to maintain the duality. The duality says that for every element on the min side there is a bigger element on the max side, and for every element on the max side there is a smaller element on the min side. If I just took out the 1, the 2 would not have a dual element on the min side, and if you don't have the duality, then the claim that the root of the min heap is the smallest element and the root of the max heap is the biggest may not hold. The way we argued that the min-heap root is the smallest is that for every element on the min side there is a bigger element on the max side, so the smallest can't be on the max side; similarly, for every element on the max side there is a smaller element on the min side, so the largest can't be on the min side. If you don't have the duality, you could have the smallest element sitting somewhere in the other tree, down near the bottom, and in fact that would be the case here: if I took the 1 out and just restructured, the smallest element left would be the 2, over in the max heap. The next time you came to do a removeMin or a findMin, you wouldn't know where it is. So the correspondence is essential to prove that the roots, barring the buffer element, contain the min and max elements.

[A student asks how the correspondence is maintained during removeMin.] Let's go back to the simpler strategy, the one with two deletes and two inserts. We took the 2 out and put it back on the min side, we took the 12 and put it on the max side, and the correspondence was established between the 2 and the 12. Remember what I said: you take the min out and you take out its corresponding element. Once you've done that, everybody on the min side still has a corresponding element on the max side and everybody on the max side still has one on the min side, because you always take two elements out at a time, one from each side. Now you take the 2 and put it back in. If the buffer is empty, you just leave it in the buffer, so everybody still has corresponding elements on either side. If the buffer is not empty, then you have two elements to insert, and these two become each other's corresponding elements: the 2 goes into the min side and its corresponding element, the 12, goes onto the max side. That's how you maintain the correspondence.

[A student asks whether, in the initialization, the 1 could have been mapped to the 7 instead of the 2.] As far as the mapping of the correspondence is concerned, it doesn't matter who you map to whom, so long as each element on the min side is mapped to a bigger element on the max side and each element on the max side is mapped to a smaller element on the min side; if everybody on the min side is mapped to a bigger element, you get the reverse direction for free. Now, in this particular case, if you map the 1 to the 7, you have nobody left on the min side to map to the 2 that satisfies the constraint, so with this example you don't have a choice: the 1 has to be mapped to the 2. But there are probably other elements that you could have mapped differently.

[A student asks whether initialization takes a lot of time.] Not really, although it depends whether you're given n elements to initialize all at once or you start with an empty structure. If you're given n elements to start with and you want to initialize the structure, you would certainly need to pair those elements by size. One simple way, of course, is to sort all of the elements; that just takes O(n log n) time. Then put the big ones on the max side and the small ones on the min side, and then you can pair anybody on the min side with anybody on the max side.
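Here is a hedged sketch of that initialization, again reusing the `TotalCorrespondenceDEPQ` class sketched earlier. When the count is odd, I park the median in the buffer, which is my own choice (the lecture doesn't say which element to buffer); it keeps every min-side key no larger than every max-side key, so any cross pairing is valid.

```python
def initialize_total_correspondence(keys):
    """Build a total-correspondence DEPQ from n keys in O(n log n): sort, send the
    small half to the min side and the large half to the max side, and pair the
    two halves position by position. After sorting, any pairing across the halves
    would do, since every key in the small half is <= every key in the large half."""
    depq = TotalCorrespondenceDEPQ()
    keys = sorted(keys)
    if len(keys) % 2:                               # odd count: park the median in the buffer
        depq.buffer = keys.pop(len(keys) // 2)
    half = len(keys) // 2
    for small, large in zip(keys[:half], keys[half:]):
        depq._insert_pair(small, large)
    return depq
```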
Any other questions about the remove operation? Again, as I said, while it was easier to describe it as a removeMin, an arbitrary remove, and then two inserts, as far as implementation goes two changeKey operations work better than doing four of the other operations.

Let's go to the other strategy, which is leaf correspondence. Here the correspondence is established only for leaf elements, rather than for every element: each leaf of the min priority queue is paired with an element in the max priority queue that is at least as big, and, vice versa, each leaf of the max priority queue is paired with an element in the min priority queue that is no bigger. Notice that here I need to state both of these requirements explicitly. In the case of total correspondence I only needed one of them, because that one guaranteed the other: if everybody in the min queue is paired with somebody in the max queue, then everybody in the max queue gets paired with somebody in the min queue. But when you're dealing with leaves, you may pair a leaf with a non-leaf element in the max structure, in which case some leaves in the max structure may not be paired with anybody in the min structure. So I have to explicitly put in the requirement that every leaf in both structures is paired with somebody in the other structure.

For things to work we need added restrictions. The first restriction on your single-ended priority queue structure is that when you do an insertion, only the newly inserted item can become a new leaf. You would have a problem doing insertions efficiently if many non-leaves of the old structure suddenly became leaves, because you would have to find corresponding elements for all of them; so the restriction says that only the newly inserted item can become a leaf following the insert. Similarly, when you remove an item, only the parent of the removed item can become a leaf. Now, these restrictions are not satisfied by heaps, but they are satisfied by some of the structures we're going to study later, like leftist trees. So getting leaf correspondence to work with heaps is actually a fairly complex process, whereas it's fairly simple to get it to work with leftist trees, and that's because min and max heaps don't satisfy these two requirements. As we'll see when we look at an example, when you do an insertion it's possible for a previous non-leaf to become a leaf, and when you do a remove it's possible for a previous non-leaf other than the parent of the removed item to become a leaf.

For the example I'm still going to use heaps, even though heaps are a bad data structure to pair with leaf correspondence, mainly because at this point heaps are the only single-ended priority queue structure I'm sure all of you know. As was the case for total correspondence, half the elements go into the min structure, half go into the max structure, and if there is an odd number of elements the extra element goes into the buffer. This time I have to establish a correspondence only for the leaves: the 3, the 17, and the 14 must each have a corresponding element in the max structure, and similarly the 6, the 7, and the 18 must each have a corresponding element in the min structure. In this particular case the correspondence that's established is: the 5 corresponds to the 7, the 3 to the 6, the 14 to the 18, and the 17 to the 20.

[A student points out that the 5 is not a leaf.] Right, the 5 is not a leaf; the 5 is there because the 7 needed somebody to correspond to, so the 7 corresponds to the 5. Could we have gotten by with only three correspondences? In this particular case, no. If you wanted only three, you would have to stick to the three leaves on each side, and then the 3 could correspond to the 6, the 7, or the 18; but the 14 can only correspond to the 18 among the leaves on the max side, and the 17 can also only correspond to the 18 among those leaves, and the partners have to be distinct. So I couldn't set up correspondences from both the 14 and the 17 to different leaves, and it became necessary to go to four correspondences. The requirement for a correspondence is that an element on the min side must correspond to somebody at least as big on the max side; in leaf correspondence we only need to establish it for the leaves.
So suppose we are establishing the correspondence for the three leaves on the min side and I want to limit myself to using leaves on the max side. Then my choices there are the 6, the 7, and the 18, so the 6, the 7, and the 18 need to get paired with the 3, the 14, and the 17, and there's no way to pair them so that each leaf on the min side is paired with a bigger element on the max side, because there are two small values, the 6 and the 7, over there: if you take the 3 and pair it with the 6, you can't pair the 14 with the 7. So you can't do it with just leaf-to-leaf pairs.

[A student asks whether that violates leaf correspondence.] No, leaf correspondence is not violated: it says that each leaf on the min side must have a corresponding element on the max side and each leaf on the max side must have a corresponding element on the min side; it does not say that a leaf must correspond to a leaf. And we certainly satisfy that requirement here: each of the three leaves on the min side has a corresponding element on the max side, not necessarily a leaf, and each of the three leaves on the max side has a corresponding element on the min side, not necessarily a leaf. The correspondence is something that is established during insertion and deletion. [A student asks whether the example could have been drawn differently.] We're just drawing an example, so the question is whether I could have drawn it differently; in this particular case I could not have drawn it with these two trees and only three of the red correspondence pointers, because that won't work. And that's true in general: once you fix the algorithm, unless there is randomness in the algorithm, the outcome on a given input is always well defined. So this is an example of a leaf correspondence with eleven elements.

Now, to use this structure as a double-ended priority queue, we need to establish that the min element and the max element can be found easily. Again, we would like to prove that, with the exception of the buffered element, the minimum is at the root of the min structure, and it's not too difficult to see that that's the case. Certainly, among the elements on the min side, the root is the minimum. If you claim that the minimum is instead sitting on the max side, then it has to be in one of the leaves there, because in the max structure the values decrease as you go down; but each leaf of the max structure has a corresponding element on the min side that is no bigger, so the minimum can't be on the max side. The same reasoning works for the max: among the elements on the min side, the largest has to be in one of the leaves, because there the values increase as you go down; but every leaf of the min structure has an at-least-as-big corresponding element on the max side, so the maximum has to be on the max side, and there it's at the root. So with leaf correspondence, as with total correspondence, we can prove that the smallest element is at the root of the min structure and the biggest element is at the root of the max structure, in each case with the exception of the buffer. Knowing that, you are then able to perform the removeMin and removeMax operations.

Let's take a partial look at the insert.
As I said, because we're using heaps the insert gets complicated, and we won't be able to look at all of the cases; it would be a lot easier if we were using a leftist tree, for example. The first case is that the buffer is empty, so we're going from an even to an odd number of elements: you just put the new item into the buffer. If the buffer is not empty, then we find the smaller of the new item and the buffered item and insert the smaller one into the min structure. If the smaller item becomes a leaf, then I insert the bigger one into the max structure and pair the two; otherwise I leave the bigger one in the buffer. So here we do two inserts only if the newly inserted item becomes a leaf.

If you look at this example, suppose I'm inserting a 4 and the buffer holds a 12. If you put the 4 into the min structure, it will become a leaf, so then I have to insert the 12 so that this new leaf has a corresponding element on the other side; and by the assumptions we placed on the structure, on that side either the 12 becomes a leaf or an old leaf remains a leaf, so we won't have a previous non-leaf showing up as a leaf without a corresponding element. If instead you were inserting a 2, then the 3 would move down; it was previously a leaf, it comes down and is still a leaf, and the 2 comes in above it. In that case we would not insert the 12 on the other side.

[A student asks whether the newly inserted 4 could be put in correspondence with the 10 that is already in the max structure.] The difficulty with doing that is that you have to find the 10, which is not easy to do. If you had an easy way to find it, then yes, we wouldn't have to insert the 12; we could correspond the 4 with the 10. The important point to note is that once we go to this scheme, the sizes of the two structures are not going to remain the same, because we're not always inserting two elements at a time.

Now, if you had started off with an even number of elements, so perhaps down at the bottom of the min structure you had a 4, and you insert a 2, then the 4, which is a non-leaf and so may not have had a corresponding element, moves down and becomes a leaf, and now you have to find a corresponding element for it, and that becomes difficult. That's why this structure is not well suited for leaf correspondence, though there are ways around it, and the readings tell you how to do it.

Now the removeMin. First, the simple case: the smallest element is in the buffer. You find the smaller of the min-structure root and the buffered element; if it happens to be the buffered element, that's the min, you take it out, and you're done. Otherwise we take the 1 out. The assumption on the structure is that when you do a remove, only the parent of the removed item can become a leaf, and in this case the removed item is the root, so there is no parent to become a leaf. Of course, with heaps that assumption doesn't hold in general, because when you take the 1 out, you take the 17 and try to put it back in.
Actually, in this particular case it works: you put the 17 back at the root, it filters down, and the 5 moves up. But in general, when you take the minimum element out of a heap, it's possible for a previous non-leaf element which is not the parent of the deleted element to end up becoming a leaf, and that messes things up. Basically, the strategy is: take the minimum element out of the min structure; if it had a corresponding element on the other side, take that one out too, doing an arbitrary remove; and then, if there are new leaves, set up correspondences for them. The assumption is that the structures we use here have the property that only the parent of the removed item can become a leaf, so there is only one element for which you have to set up a correspondence. Again, this is something that's probably worth revisiting once we study leftist trees, to see how it works with leftist trees, where the constraints we placed actually hold; as I said, it's a lot harder to get it to work with min heaps. That's all I was going to say about the removeMin, just, in principle, how it works.

Any questions? [A student asks what happens if the leaf correspondence is not maintained.] If you don't maintain the leaf correspondence, then again you cannot assume that the element at the root of the min structure is the minimum or that the element at the root of the max structure is the maximum, just like in total correspondence: if you don't maintain the total correspondence, then the roots of the priority queues don't necessarily contain the min and max elements. That's why you have to maintain the correspondence. All right, we'll stop here.