The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

So we're almost at the end: this is the last lecture on transport protocols. On Wednesday my plan is to talk about how many of the things we've studied in this class apply to the internet. It will be something of a history lesson about communication networks, and I'll talk in specific terms about two interesting problems. One of them is a problem we'll start on today, which is how you pick these window sizes; I'll talk about how TCP does it, a pretty amazing result that was only invented in the mid-to-late 1980s. The second thing, when I talk about the history of the internet from, say, 1960 to today, is how people can hijack other people's routes and attract traffic that doesn't actually belong to them. Apparently there are people doing this illegally, and apparently some governments are doing this sort of thing too, so it's interesting to understand why some of the routing protocols we studied are not secure. I'll do that on Wednesday, and then we'll wrap up next week.

Today the plan is to continue talking about transport protocols, in particular about sliding windows. To refresh everyone's memory: the problem is that you have a best-effort network where packets could be lost, packets could be reordered, packets could be duplicated, and delays
in the network are variable. What we would like to provide to applications, like your web browser, your web client, or a server, is an abstraction where the application just writes data into some layer, the application on the other side reads data from that layer, and this transport layer deals with providing in-order, reliable delivery.

We looked at the first version of such a protocol, stop-and-wait, and it had a few nice, simple ideas in it: first, use sequence numbers; second, have acknowledgments; and third, retransmit after a timeout. I didn't actually talk about how to do adaptive timers with the low-pass exponentially weighted moving average filter, but we studied that in recitation, and if I have time I'll come back to it today; my assumption is that you've already seen how to do that.

But then we concluded that the throughput of the stop-and-wait protocol is not very high. It's sometimes a good idea: for example, to get reliable delivery between this access point here and your computer, a stop-and-wait protocol is perfectly reasonable. We'll understand why later on, but the short answer is that the round-trip time between this access point and your laptop is quite small. Because the round-trip time is really small, you're able to get roughly one packet per round-trip time (with packet losses, a bit less than one). When the round-trip time is on the order of microseconds, one packet per round-trip time gives you a throughput that's quite large: if the link speed is 10 megabits per second and you're able to send a thousand-byte packet in, say, 20 microseconds, that ratio is about 400 megabits per second, bigger than the link speed, so you'll get throughput on the order of the link speed and you won't underutilize the link.

But if the round-trip time were 100 milliseconds and you were able to send just one packet every 100 milliseconds, it would be slow. To solve that problem we looked at the idea of a sliding window, which is just pipelining. Rather than having one unacknowledged packet at any point in time, we're going to have a value W, which the sender decides upon, and the semantics of the window are that we're going to have W unacknowledged packets in the system between the sender and the receiver. Technically it's at most W packets, because from time to time you might have transients where you have fewer than W: you're about to send the next packet, or you get toward the end of the file and run out of data to send. So the technical definition of a fixed-size window is: if the window is W, then we will have no more than W unacknowledged packets in the system. That's not the only possible definition of a window, but it's our current operating definition.

The rules at the sender are very simple. When you get an acknowledgment from the receiver, as long as it's an acknowledgment for a packet that you've sent and that packet has not previously been acknowledged, you now know that packet is acknowledged, so you remove it from your list of unacknowledged packets and you send a new packet: the packet with the smallest sequence number that you haven't sent so far. It's a very simple rule.
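The fixed-window sender rule just described can be sketched as follows. This is only an illustrative sketch of the lecture's protocol, not any production stack; the class and method names are made up.

```python
# Sketch of a fixed-window sliding-window sender: at most `window`
# unacknowledged packets outstanding; each valid ack releases one
# new transmission. Names here are illustrative assumptions.

class Sender:
    def __init__(self, window):
        self.window = window        # W: max unacknowledged packets allowed
        self.next_seq = 1           # smallest sequence number not yet sent
        self.outstanding = set()    # unacknowledged packets "in the system"
        self.sent_ever = set()      # everything we have ever transmitted

    def start(self):
        """Initially fill the window with W packets; return what was sent."""
        sent = []
        while len(self.outstanding) < self.window:
            sent.append(self._send_next())
        return sent

    def _send_next(self):
        seq = self.next_seq
        self.next_seq += 1
        self.outstanding.add(seq)
        self.sent_ever.add(seq)
        return seq

    def on_ack(self, seq):
        """Sender rule: if this acks a sent, not-previously-acked packet,
        slide the window and send the smallest unsent sequence number."""
        if seq in self.sent_ever and seq in self.outstanding:
            self.outstanding.remove(seq)
            return self._send_next()
        return None                 # duplicate or bogus ack: ignore it
```

Replaying the lecture's example (W = 5, packet 2 lost so its ack never arrives) with acks for 1, 3, 4, 5, and 6 leaves the outstanding set {2, 7, 8, 9, 10}, the non-contiguous window that comes up later in the lecture.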
Separately, there's the calculation of the timeout: an exponentially weighted moving average filter that calculates the smoothed estimate of the round-trip time, a similar calculation that finds the deviation from the mean, and then you pick a retransmission timeout that is some number of deviations away from the mean, for example the smoothed estimate plus four times the smoothed deviation. If that timer fires and you haven't received an acknowledgment, you retransmit the packet. It's a very simple idea.

So now I'm going to show, in some pictures, what happens with a sliding window when you have packet loss. It's the same picture as last time, except now packet 2 is lost. The sender doesn't know that yet: packet 1 goes through, packet 2 is lost, packet 3 is sent, packet 4 is sent, and packet 5 is sent. The window size in this example is five. When the first packet gets its acknowledgment, the window slides to the right by one, and at this point we send packet 6; the window is now packets 2 through 6. In the meantime, packet 2 doesn't reach the receiver, but packet 3 does, and the receiver acknowledges that it has received 3. When that acknowledgment arrives, what's the next packet that's transmitted? Packet 7.

Now let me ask you a question. The sender got acknowledgment A1 and then acknowledgment A3. If the sender were calculating the
expected next acknowledgment, it knows that after A1 it should get A2, and instead it got A3. So why doesn't it just resend packet 2 right now?

Someone says: it could have been delayed. Yes, it could have been delayed. But if packet 2 were delayed, wouldn't packet 3 have also been delayed? Why not? If a packet gets delayed while packets are sitting in a queue, and the first packet in the queue is delayed, then the remaining packets are also going to get delayed, because they're sitting behind that packet in the queue.

So assume you have a network where packets are delayed, delay is variable, and you have a switch with a queue in it. Say you sent packets 1, 2, and 3, and in fact packet 2 was lost; that's one case. In the other case, packet 2 was not actually lost. Now, if packet 2 were legitimately lost and you got an acknowledgment for 1 and an acknowledgment for 3, then it's certainly correct behavior for the sender, when it receives A3, to retransmit packet 2. So clearly we're after the case where packet 2 exists but wasn't lost. In other words, if the sender were to retransmit packet 2 when it receives A3, you said that's wrong because packet 2 could have been delayed; but what kind of delay would delay packet 2 and not packet 3? Or, equivalently, what kind of delay would delay acknowledgment A2 but
not A3? If the packets are sitting in the same queue and the queue is serviced in order, then if one packet were delayed, the packets behind it in the queue would be delayed too. So what else could it be?

The word I'm looking for is reordering: packets can get reordered in the network. In fact, the reordering could happen even if there were no variable delays, no queuing delays, in the network at all. You could just have a switch where packet 2 gets sent one way and packet 3 gets sent another way. Here's a very concrete example of how this would happen, from your previous lab: the network had a certain set of routes and packets were going along one path; then maybe there was an earlier failure and a new link showed up, or the failure healed, and the routing protocol converged to pick a different path going forward. This new packet 3, which was sent after packet 2, gets sent along the new path, and it could easily be the case that the new path has a much shorter delay to the destination than the old one. What would happen is that packet 3 arrives at the receiver before packet 2.

In other words, if I had a network where no packets ever got reordered, neither data packets nor acks, then it would in fact be perfectly good behavior for the sender, when it observes A3, to go ahead and resend packet 2, because I'm guaranteeing to you that there's no reordering in the network. But in general, packet-switched networks get a lot of their robustness and resilience to failure precisely because they
know that their only job is to get packets to the destination with as high a likelihood as they can, which means packets are allowed to get reordered, and therefore it's not correct for packet 2 to be retransmitted when you get A3.

So let's keep going. What is the next packet that's going to be sent when you get A3 in this picture? It's 7, because the sender's rule is very simple: have I seen this ack before? No. Is this an ack corresponding to a packet that I've sent before? (We need that check because it's possible there's some bug on the receiver side.) Yes; so send the next in-sequence packet. So it sends packet 7. At this point we lose the beautiful animation, because producing each of these steps takes an endless amount of time, so I've just produced the full picture; I wish I had the patience to sit and do the full animation. Anyway, you send packet 7, and then when you receive A4 you send packet 8, when you receive A5 you send packet 9, and when you receive A6 you send packet 10.

Now let me ask this question: at the point in time when you receive acknowledgment A6 and send packet 10, what is the window? That is, what is the set of packets in the window? The window size is five, but the window size corresponds to some list of packets that are in the window. What is that set of packets? It's {2, 7, 8, 9, 10}. This is important: the window is 2, 7, 8, 9, 10, and these packets are not in sequence. It's
very tempting to say that the window is the last W sequence numbers, so if I've sent up through 10 the window must be 10, 9, 8, 7, 6. That's not true. The window just says: here's the number of unacknowledged packets, and the number of unacknowledged packets is five in this case.

Let's keep going. Packet 10 goes out, and at some later point in time we get an acknowledgment for 7, so we send out 11; when we get A8 we send out 12. At this point in time the window is 12, 11, 10, 9, and 2. At some point the sender times out, and the timeout is picked to be conservative; that's why we take the smoothed average and the deviation, because we don't actually want to retransmit a packet that hasn't genuinely been lost. The reason is that oftentimes when you start seeing weird behavior like this, like a presumed missing packet, you're not actually sure whether it's missing or just delayed, as was pointed out before, because it took a different route; something strange is going on in the network. Retransmitting a packet that hasn't actually been lost makes things worse, because it adds more load onto the system right around the time something fishy is going on in the network, and the last thing you want to do when something is under stress is to add more stress to it. That's why the timeouts are conservative. Any time the sender in a protocol like this retransmits a packet that was not actually lost, that's considered a spurious retransmission, and a spurious retransmission is just not a good thing.
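The conservative adaptive timeout described above (a smoothed round-trip-time estimate plus some multiple of the smoothed deviation) can be sketched like this. The gains of 1/8 and 1/4 are the classic TCP-style choices; treat them as an assumption here, since the lecture only says "some number of deviations away from the mean, for example four."

```python
# Sketch of an adaptive retransmission timeout: EWMA of the RTT,
# EWMA of the mean deviation, RTO = srtt + k * deviation.
# alpha, beta, and k are assumed (TCP-style) constants.

class RttEstimator:
    def __init__(self, alpha=0.125, beta=0.25, k=4):
        self.alpha, self.beta, self.k = alpha, beta, k
        self.srtt = None     # smoothed round-trip-time estimate
        self.rttvar = None   # smoothed mean deviation from srtt

    def observe(self, sample):
        """Fold one measured RTT sample into the smoothed estimates."""
        if self.srtt is None:
            self.srtt, self.rttvar = sample, sample / 2
        else:
            # update the deviation first, against the old mean
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(sample - self.srtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample

    def rto(self):
        """Conservative timeout: mean plus k deviations."""
        return self.srtt + self.k * self.rttvar
```

Because the timeout is the mean plus four deviations, it only fires well outside the range of round-trip times the sender has actually been seeing, which is exactly the conservatism the lecture is after.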
Now, our protocol as we've described it actually has some wonderfully nice properties, and I'll show later, or maybe you can read about this in the book, that it is the best possible protocol you can come up with in an asymptotic sense: no other protocol, run for a long time on a network with losses, would get higher throughput. So it has some nice properties. But it has one bad property, which is that, with the way these acknowledgments are structured, it can end up with a lot of spurious retransmissions. Can you see why? Suppose we follow the discipline extremely carefully: timeouts are conservative, we only time out when we're really, really sure, we wait a long time without an acknowledgment before retransmitting. The protocol could still have spurious retransmissions, because of a very peculiar behavior that comes from the way these acknowledgments work.

Here it is: all acknowledgments essentially contain the same information, so if you lose a packet, or you lose the acknowledgment for that packet, the sender can't tell the difference. This is therefore not necessarily the best protocol in the following sense. Here's an extreme case: I have a path where there are no packet losses going from me to you, I'm sending data to you, and coming back the packet loss rate is 25%. This protocol has the unfortunate property that I will believe that 25% of my transmissions to you are lost. In fact you got every single packet I sent; it's just that I don't see the acknowledgments for those specific
packets, and therefore I'm going to retransmit all those packets to you, leading to spurious retransmissions.

So, and you don't have to worry about this for the lab or for the class, but as a design problem: can you invent a protocol that fixes this problem? Can you modify this protocol, or come up with an idea of your own, for this design point: the sender-to-receiver path is generally loss-free, but the receiver-to-sender path has high loss. By the way, this is not hypothetical; it's what happens in wireless networks a lot. The base station sitting on some cell tower has a huge amount of power, probably kilowatts these days, so it can blast at whatever maximum the FCC allows, while your poor little dinky phone is sending acknowledgments and running out of battery all the time, carefully trying to figure out the minimum power at which it can transmit. So these asymmetric conditions are quite realistic, and if I ran this protocol on a network like that, it would probably be a bad thing. So what would you do to the protocol?

One suggestion: send multiple acknowledgments every time. You could do that, and you'd be doubling the ack traffic, but it's the right kind of idea: you want some sort of redundancy. A related idea, and not a bad one: in a sense you're sending multiple acknowledgments, but not blindly. Any time you send an acknowledgment, you could also send the list of all packets you've received so far.
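The asymmetric-loss scenario above can be made concrete with a tiny simulation: the forward path never drops data, each ack is independently lost with probability 0.25, and the sender, unable to tell a lost packet from a lost ack, times out and needlessly retransmits. All parameters here are made up for illustration.

```python
# Illustrative simulation: loss-free forward path, 25% ack loss on the
# reverse path. Every lost ack produces one spurious retransmission,
# so the sender wastes roughly 25% of its transmissions.

import random

def spurious_fraction(n_packets=10_000, ack_loss=0.25, seed=1):
    rng = random.Random(seed)       # seeded so the sketch is repeatable
    spurious = 0
    for _ in range(n_packets):
        # the data packet always arrives, and the receiver acks it ...
        if rng.random() < ack_loss:
            # ... but the ack is lost, the timer fires, and the sender
            # retransmits a packet the receiver already has
            spurious += 1
    return spurious / n_packets
```

Running this yields a spurious-retransmission fraction close to the reverse-path loss rate, matching the 25% figure in the lecture's extreme case.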
But that's a huge amount of data: if I send you a gigabyte movie, by the end of that movie you're sending me enormous acknowledgments. So you don't quite want to do that, though remember that the receiver, if it knew the window size, would have some idea of the number of outstanding packets at the sender. You can do something even simpler than all of that. One thing you could do is have the receiver, when it acknowledges a packet, not just acknowledge packet 7 when it got packet 7, but send a cumulative acknowledgment. In other words, when I send an acknowledgment, I guarantee to you that all packets up to that point have been received: if my acknowledgment is 17, I guarantee that there's nothing before 17 that I have not received. And then, in addition, the acknowledgment could tell you a little bit about some of the later packets I've received, or some of the later packets that might be missing. So you can give this protocol a little more redundancy, and if you do that, and you apply almost everything else I've taught, you get TCP, which is an extremely popular protocol; that's about the only significant difference between our protocol and TCP. Interestingly, when the loss rates in the forward and reverse directions are roughly the same, our protocol actually does a little better than what TCP happens to do, but TCP is better at dealing with a reverse path that has a higher degree of packet loss.
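The cumulative-acknowledgment idea just described can be sketched as follows: an ack for N promises that everything up to and including N has arrived, so each ack is redundant with its predecessors and the scheme tolerates ack loss. The class name is illustrative, not TCP's actual implementation.

```python
# Sketch of a receiver that emits cumulative acknowledgments: the ack
# value is the highest N such that packets 1..N have all been received.

class CumulativeAckReceiver:
    def __init__(self):
        self.received = set()   # every sequence number seen so far
        self.cum = 0            # highest N with 1..N all received

    def on_packet(self, seq):
        """Record the arrival and return the cumulative ack to send."""
        self.received.add(seq)
        # advance the cumulative point past any newly filled-in gap
        while self.cum + 1 in self.received:
            self.cum += 1
        return self.cum
```

Notice that after a hole is filled, a single ack catches the sender up on everything at once, which is why losing some earlier acks no longer matters.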
OK, the other question I want to ask at this point: let's say you have a receiver running on an extremely simple device, so you don't want a lot of storage. Why would you need storage? Before I get to that question, take this picture: packet 2 hasn't yet been received, but in the meantime the receiver has gotten packets 3, 4, 5, all the way up to 12. So what does the receiver have to do? Remember, before it delivers data to the application, it has to hold on to those packets. It can't deliver packet 3 and then packet 4 to the application, because the guarantee the receiver is giving is that all packets will be delivered in exactly the same order in which they were sent. So the receiver has to hold on to those packets until packet 2 shows up. Does that make sense?

OK: how big can that receiver's buffer become? How big do you need to make it, if you were implementing this on a computer and you wanted to allocate memory for it? Big enough to handle the timeout? How big can the timeout be? Well, the timer can be some finite number, but think about what happens: you retransmit packet 2, and it is lost again. In the meantime, the protocol is going to continue, because all the other packets keep getting acknowledgments, and those acks keep causing the sender to send new packets. So if packet 2's retransmission were lost, we'd still be sending, at this point in time, packets 13 and 14 and 15 and 16 and so forth. Packet 2 could just keep
getting lost. It may happen with low probability, but there is a probability that it will happen. So how big does the receiver's buffer have to be, in this implementation, in the worst case? Let's say you don't know how big the file is; it's a continuous stream of packets. Is there a real bound on the size of the buffer, or can it grow to be as big as the entire stream you're sending? It can grow to be really, really big. This is a potential problem, because it can keep growing and growing and growing, and at some point you might run out of space, and when you start to run out of space it's tempting to just throw things out.

So let's say somebody implements this protocol at the receiver and says: the sender is running with a window size of five, and I'm just going to have a buffer of 100 packets, meaning the maximum number of packets I'll hold in my buffer before I start discarding later packets is 100. Does this protocol still work? Is it correct if I do that? Yes. But what if I acknowledge a packet as soon as I get it? If you acknowledge a packet as soon as you get it, the receiver's discipline, the guarantee it should provide, is this: if it acknowledges a packet, it has told the sender that it has the packet, which means the sender will never retransmit it, which means the receiver shouldn't throw the packet away. So as long as the receiver only throws out packets that it does not acknowledge, you're okay.
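The receiver discipline just described, buffer out-of-order packets, deliver in order, cap the buffer, and only acknowledge packets you are actually keeping, can be sketched like this. The names and the buffer cap of 100 are illustrative.

```python
# Sketch of an in-order receiver honoring the "ack means I kept it"
# contract: out-of-order packets are buffered and acked; when the buffer
# is full, later packets are silently dropped and NOT acked, so the
# sender will retransmit them.

class InOrderReceiver:
    def __init__(self, max_buffer=100):
        self.next_expected = 1
        self.buffer = {}            # out-of-order packets held back
        self.max_buffer = max_buffer
        self.delivered = []         # stands in for the application

    def on_packet(self, seq, payload):
        """Return the ack to send, or None if the packet was discarded."""
        if seq < self.next_expected:
            return seq              # duplicate: already delivered, re-ack
        if seq == self.next_expected:
            self.delivered.append(payload)
            self.next_expected += 1
            # drain any buffered packets that are now in order
            while self.next_expected in self.buffer:
                self.delivered.append(self.buffer.pop(self.next_expected))
                self.next_expected += 1
            return seq
        if len(self.buffer) < self.max_buffer:
            self.buffer[seq] = payload
            return seq              # buffered and acked: we promise to keep it
        return None                 # buffer full: drop, and do not ack
```

The key line is the last one: a discarded packet must never be acknowledged, which is exactly the contract the lecture calls correct.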
Does that make sense? The discipline is just like writing a legal contract; that's what protocols are, just a bunch of legal contracts. You try to make them as simple as possible, and you still end up with 200 pages. That's what lawyers say too, that it's really simple, and then you get all these clauses; but the reality is that you have to deal with all these corner cases. So protocols are nothing more than contracts that both sides agree upon, and the contract here from the receiver is actually pretty simple: if I send you an acknowledgment, it means I'm not throwing the packet away.

What happens if I tweak this protocol to behave a little differently at the receiver? When I get a packet, if it's in order, I deliver it to the application, and after I deliver it to the application I send an acknowledgment. In other words, I only send an acknowledgment to the sender after the packet has been delivered up to the application; otherwise I don't. What happens to this protocol if I do that? Does it perform the same as what I described? Remember, there's a subtle difference. In the protocol as I've described it, the receiver gets a packet, sends an acknowledgment, and then holds on to it in a buffer if it's not the next packet in order. In the modification I'm proposing, the receiver gets a packet and only sends an acknowledgment when it delivers the packet up to the application; otherwise it doesn't send one.

Someone suggests: it's just like stop-and-wait. But if packets are not being lost, it's doing a lot better than stop-and-wait, right? Would you agree that if packets are not lost it does better than stop-and-wait? In fact, if packets are not lost, is there any difference between my protocol and this modification? No, none at all. But you had a good thought:
it looks like stop-and-wait. When does it look like stop-and-wait? When packets are lost: with that modification, whenever a packet is lost, the protocol degenerates into stop-and-wait behavior. And this is not merely academic. It turns out there was a period of time in the 90s when somebody working on Linux TCP had what seemed like a bright idea. What would happen was that sometimes the machine would crash after the sender thought a packet had been acknowledged, but the packet hadn't actually been delivered up to the application. So they said: it's all very complicated, so let's just make it so packets are acknowledged only once they've been delivered to the application, only when the application does the read (for those of you who've done that sort of thing) from the socket buffer, so the data is out in the application and out of the operating system; that's when we'll send the acknowledgment. It seemed reasonable, and in Linux the way things seem to work is that people try out a lot of stuff and from time to time somebody declares that something is right. Anyway, they tried this out, and it actually didn't work as well, and the reason is that if you run on a network with a high enough packet loss rate, you can get stuck. And it's very hard to notice these performance problems; correctness problems are one thing, because the other side stops working and you can corner it down, but this simple tweak, which looks perfectly reasonable, is actually a performance problem that
doesn't show up all the time; it shows up only when the packet loss rate is reasonably high. These are all examples of, and reasons why, these protocols are not completely obvious and require a fair amount of care to get right. Are there any questions about any of this? All clear? OK.

What I want to do now is show a picture of something called a sequence plot, which is a very useful tool for understanding how these protocols actually work. To produce one of these plots, you run your protocol and, at the sender, you plot out the times at which the sender sent out every sequence number, every time it transmitted a packet, as a function of time. The y-axis is the sequence number; the x-axis is time. Similarly, every time the sender gets an acknowledgment, you plot that out on a trace as well. So you look at these two traces: one is a trace of data packet transmissions, and the other is a trace of ack packet receptions. The moment you get a picture like this, there are a few things you can immediately conclude. The first is that if I look at the distance between the data trace and the ack trace when no losses are happening, that distance tells me the window size, because every time an acknowledgment arrives you send out a new packet; therefore the distance in sequence numbers in one of these vertical slices, when there are no packet losses, is the window size. You can also read off the typical round-trip time of the connection,
because the round-trip time is the time between when a data packet was sent and when you got the acknowledgment for it, so you can read that off as well. There's an easy way for you to produce these pictures in your lab, so if you're running into situations where things look slow, or things look bad, you should just put up one of these pictures, and it will usually become pretty apparent what's going on. What may happen is that initially things look fine and all of a sudden they stop, and you can start to say, I'm not getting acknowledgments, or I'm not sending data the right way. These are very useful for understanding what is going on, and generally speaking they are useful for uncovering performance issues rather than correctness issues; correctness you usually nail down before you get to this stage.

The retransmission timeout is visible as the time between when you send a packet and when you send the retransmission for that packet. In this particular picture, the deviation from the mean was small, and that's why the retransmission timeout is only a little bit bigger than the mean round-trip time. Every time you see a packet that's off the main sequence trace, so the pluses are data packets, things are going along normally, and then you see a lower sequence number sent later, that's a retransmission: normally the new packets are all up on the trace, but the retransmissions show up below it. And some of these are packets that were retransmitted more than once, because they timed out multiple times.

So, a question: what's the definition of the window size?
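The two readings the lecture pulls off a sequence plot by eye, the window size and the round-trip time, can also be computed from the raw traces. The trace format here (lists of `(time, seqno)` pairs for data transmissions and ack receptions) is made up for illustration.

```python
# Sketch of reading a sequence trace programmatically: given traces of
# (time, seqno) for data transmissions and ack receptions, estimate the
# window (vertical gap between the two traces in a loss-free stretch)
# and collect round-trip-time samples.

def window_estimate(data_trace, ack_trace):
    """Largest gap between the last seqno sent and the seqno acked,
    taken at each ack instant; in a loss-free stretch this is W
    (possibly off by one, as the lecture warns)."""
    est = 0
    for t_ack, acked in ack_trace:
        last_sent = max(s for t, s in data_trace if t <= t_ack)
        est = max(est, last_sent - acked)
    return est

def rtt_samples(data_trace, ack_trace):
    """Ack time minus first-transmission time, per acknowledged seqno."""
    first_send = {}
    for t, s in data_trace:
        first_send.setdefault(s, t)
    return [t - first_send[s] for t, s in ack_trace if s in first_send]
```

On a synthetic loss-free trace with W = 3 and a round-trip time of 10 time units, both estimates come out as expected; on a real trace with retransmissions, the RTT samples suffer exactly the ambiguity the lecture describes, since an ack cannot be matched with certainty to a particular transmission.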
30:22 It's the maximum number of unacknowledged packets. So the maximum number of unacknowledged packets, when no packet losses have happened, is the difference between the last packet you transmitted and the last acknowledgment you got, because every time you got an acknowledgment you sent a new packet, and initially you sent out W packets. So if you continue that: you initially send packets 1 through 5; then, as ACKs come back, you slide to 2 through 6, then 3 through 7. The last ACK you had was for packet 2 when you sent out packet 7, so that distance tells you the window size. I might be off by one; it's probably the last packet you sent minus the last acknowledgment you got, plus one, that's the window size, or minus one, something like that. You've got to get that right on the quiz; fortunately I don't have to get it right here. 31:12 And then some of these things here are these later X's: these are packets that got retransmitted multiple times, and these are acknowledgments that are, most probably, for those retransmitted packets. I say most probably because I can't actually be sure. In principle, this acknowledgment here could be for the copy of this data packet that was originally transmitted over here, rather than for this retransmission; it's in principle possible that this acknowledgment was sent by the receiver upon the reception of a packet over here. It's just that it's unlikely. It's more likely that it was for the retransmission, because that's the round-trip time that's consistent with the RTT. But you can't actually be sure; all you know is that this was an acknowledgment for that data packet, and most likely it was for the retransmission. 32:00 Okay, so these sequence traces are generally pretty helpful.
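The two quantities the lecture reads off a sequence plot, the window size and the round-trip time, can also be read off the raw traces programmatically. The sketch below is my own construction, not anything from the lab code: the trace format (lists of `(time, seq)` pairs for data transmissions and ACK receptions) and the helper names are invented, and the synthetic trace assumes a fixed window of 4, a fixed 100 ms RTT, and no losses.

```python
# Sketch: read window size and RTT off sliding-window traces.
# Trace format and helper names are hypothetical, not from the lab handouts.

def make_trace(window, rtt, npackets):
    """Synthesize a loss-free sliding-window trace.

    Returns (sends, acks), each a list of (time, seq) pairs.
    The first `window` packets go out back to back; after that,
    each arriving ACK releases exactly one new packet.
    """
    sends, acks = [], []
    send_time = {}
    for seq in range(npackets):
        if seq < window:
            t = seq * 0.001                     # initial burst, 1 ms apart
        else:
            t = send_time[seq - window] + rtt   # clocked by the ACK of seq-W
        send_time[seq] = t
        sends.append((t, seq))
        acks.append((t + rtt, seq))             # ACK comes back one RTT later
    return sends, acks

def measure(sends, acks):
    """Estimate window size (peak packets outstanding) and per-packet RTT."""
    # Sweep +1/-1 events in time order; at a tie the ACK (-1) sorts first,
    # so a packet released by an ACK is never double-counted.
    events = sorted([(t, +1) for t, _ in sends] + [(t, -1) for t, _ in acks])
    outstanding = peak = 0
    for _, delta in events:
        outstanding += delta
        peak = max(peak, outstanding)
    ack_time = dict((seq, t) for t, seq in acks)
    rtts = [ack_time[seq] - t for t, seq in sends]
    return peak, rtts

sends, acks = make_trace(window=4, rtt=0.1, npackets=50)
peak, rtts = measure(sends, acks)
print(peak)                    # 4: the window size, read straight off the trace
print(min(rtts), max(rtts))    # both about 0.1 s: the round-trip time
```

With losses in the trace, `peak` would still bound the window from below, which is exactly why the lecture restricts the "distance between the traces" reading to loss-free stretches of the plot.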
32:02 They're useful in understanding the performance of transport protocols, particularly sliding-window protocols. Any questions? 32:13 Okay, so now I'm going to turn to the last remaining issue for these transport protocols. Analogous to the calculation we did of the throughput of the stop-and-wait protocol, I want to look at the throughput of the sliding-window protocol. I want to explain that by first explaining what the problem is, and then I'm going to tell you about a very beautiful result, a very widely applicable result that applies to everything from network theory and networking to how long you're going to wait to get served at a restaurant, called Little's law. It's a remarkable result, very simple and widely applicable; everybody should know it. So the question here is: what's the throughput of sliding window? 33:05 In particular, suppose I run the protocol in a network that looks like this: I have a sender, I have a receiver, and there's some network path in between, which of course has a bunch of switches in it. I want to know the throughput of the protocol, and I'd like to express it in terms of the following quantities. The sender has a window size W according to this protocol. For now we'll assume that there's no packet loss, that is, data packets are not lost and acknowledgments are not lost. If I have time today I'll come back to explaining what happens with packet loss; otherwise we'll pick it up in recitation tomorrow, or I'll point you to the place in the book. It's just a simple calculation that extends this one; the more important part is when there are no losses. Now, I'll also assume that there are
34:00 links of different rates here, and that one of the links on the path between the sender and the receiver is the bottleneck link. In other words, no matter what you do or who you bribe, you cannot send packets faster than the speed of that link. For simplicity I'll assume that there's a single bottleneck; the general results apply even when there are multiple bottlenecks. And I'll assume that its rate is C packets per second. 34:34 Because there's a bottleneck, in general packets may show up faster than the bottleneck can handle, and if they do, they sit in a queue. Because I've constructed the problem so that packets don't get lost, the queue can have an arbitrary length; it could potentially grow unbounded, though in reality it won't, because the sender has a fixed window size of W. Now, all of this analysis applies when there are many, many people transmitting data and sharing this bottleneck: you could have multiple such senders to multiple receivers, and they'd all share this link in some way. But for today, all I'll assume is that there's one user of the network; it's not hard to extend the same calculation to multiple users. And the question is: what is the throughput in terms of the window size and these other quantities? 35:33 It'll turn out that the throughput depends on the window size, on the round-trip time, on the loss rate, and also, in a certain regime, on C, because it can't exceed C. But in order to understand how to solve these kinds of questions, there's a more general result that's more
35:55 widely applicable, called Little's law, which I want to tell you about. Little's law applies to any queueing system: any system where there's some big black box with a queue sitting inside it, and the queue drains at some rate. So you have a queue sitting here; things arrive into the queue according to what I'll call the arrival process, which I'll represent by A, and things come out of the queue according to some service process, which I'll represent by S. 36:35 Now, Little's law. By the way, Little is a professor at MIT. He wrote up this result, although I don't think he called it Little's law himself; he did this work, I think, in the 1950s. What's beautiful about this result is that it relates three parameters: N, the average number of items that you have in this system, in the queue or in the black box; the service rate; and the average delay experienced by an item sitting inside the black box. So let me state the three again. It relates N, the average number (I'll put a bar above it to indicate that it's an average), to D, the average delay, and to lambda, the average rate. 37:43 Now, the result applies to a stable system. What that means is a system where the queue doesn't grow unbounded to infinity. In other words, if the arrivals are persistently bigger than the service, then no matter what you do, the queue is going to grow to infinity, the delay is going to grow to infinity, and N is going to grow to infinity, so you'll get a relationship that's not of much
38:07 practical use. But otherwise, the rate at which things come out of a stable system is lambda, and in a stable system the rate at which things enter can't exceed that either. So Little's law relates the service rate lambda, for a stable system that doesn't grow unbounded, to N and to D. Okay. 38:27 Let me illustrate this first by example. How many of you have used the food truck? All right, so last week I did a little experiment there; this is all real data. I found that the Thai truck seems to take about 20 seconds per person, on average. And when I showed up there (this wasn't an average, just the one sample), there were 30 people ahead of me in line. 39:11 Now, the question of course is, I don't care how many people are in line; what I care about is how long I have to wait, assuming that the random sample I took matches the average, which who knows whether it did or not. Looking at these two numbers, what's the waiting time? In other words, what's D? [Student: 10 minutes.] Is it 10 minutes? How do you get 10? Why is it 10? How do you conclude that it was 10? 39:59 Yeah, right. So what you did was say that D must be equal to N over lambda, or equivalently N is lambda times D. Twenty seconds per person is three people per minute, so you take 30 people divided by 3 per minute, and you get 10 minutes. That is exactly right. Little's law just tells you that the average number of items in a system (this is all applicable, under various conditions, to stable systems and
40:34 so forth) is equal to the product of the rate at which the system is servicing them and the average delay they experience. So the average number of items, or packets, or people, or whatever, is the rate times the average delay, and knowing two of these you can calculate the third. What's truly remarkable about the result is that it applies to almost anything you do in the system. Packets could arrive, or jobs could arrive, or people could arrive, with some arbitrary distribution; they could be serviced according to some completely arbitrary distribution; they don't have to be serviced in the order in which they arrived; they could be shuffled around; you could make it so that the people who come in last get serviced first; you could do whatever, and the result still applies. Yes? 41:21 [Student asks about the 20 seconds per person.] Well, I kind of cheated here a little bit. This is 20 seconds per person, but whenever I tell you a number like that, what it really says is that the rate is three people per minute. So it looks like a delay, but it's really the inverse of a rate. It's intuitive to say they take 20 seconds per person, but when I tell you it takes 20 seconds per packet or 20 seconds per person, it looks like a delay when it's really a rate; that's an important distinction, and it's a good question. So lambda is the rate, in units of items per unit time, and if you take the ratio of N to lambda, you get time. Okay. 42:08 So why is Little's law true? Here's a very simple pictorial proof. It applies under somewhat specific conditions, but those conditions are good enough for our use. So let's draw a picture like this of a queue.
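Before the proof, the food-truck calculation is worth writing down: it is just N = lambda × D rearranged. A two-line check of the arithmetic, using the numbers from the lecture (20 seconds per person, i.e. a service rate of 3 people per minute, and 30 people in line):

```python
# Little's law: N = lambda * D, so the expected wait is D = N / lambda.
rate = 60.0 / 20.0     # 20 seconds per person -> 3 people per minute
n_in_line = 30         # people ahead in the queue
wait = n_in_line / rate
print(wait)            # 10.0 minutes, matching the in-class answer
```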
42:22 I'm going to assume that packets enter the queue and leave the queue. Now, the fact that it's a single queue rather than something else doesn't matter; it's any black box. The packets, or information, or messages, or items, could get sent from the sender, enter a black box, and come out at the receiver, and the result applies to that as well. So let me plot the number of packets, or items, in the queue as a function of time. I'm going to assume that capital T is extremely long, because whenever I deal with rates I have to look at what happens over a long period of time before I can calculate a rate. What I've drawn here is that every time a packet or an item arrives, the queue increments by one (the height of each of those little steps is one), and every time one leaves, it drops by one. So in one particular execution of whatever the queue does, you get a trace that looks like this. Of course, in a different execution the details might differ, but if you run for a long enough time you're going to sample all the possible evolutions of this thing, or at least enough of them to make meaningful statements. Whenever a packet arrives I've shown it in a color, and I've matched the color up against when that packet leaves. This particular example is a first-in-first-out queue, so packets leave in the same order they were sent, but that doesn't have to be true. So let me label these packets as shown here. Now what I'm going
43:51 to try to do is relate the rate at which packets have entered or left the queue to the number of items in the queue and the average delay experienced by each item, in this pictorial proof. The way you do that, and everything hinges on this, is to observe that there are two different ways of looking at the area under this curve: one of them relates to the rate, and the other relates to the average delay. Then we're going to say, all right, the area under the curve is the same either way, and equate the two numbers. 44:26 The first thing I'm going to do is divide the area under the curve into rectangles and associate each rectangle with a packet or an item. So I'm going to say that packet A showed up here and left at that point, so this entire period of time corresponds to packet A sitting in the queue. This entire period corresponds to B. A left at this point in time, so now my queue held three packets, B, C, and D; then C and D were sitting here when E showed up, and then F showed up, and so forth. So you agree that I can divide this up into rectangles of height 1 and associate each little rectangle with a particular packet in the queue. 45:18 Now let's assume that we run this experiment for a long time, capital T, and P packets were forwarded through the system. So what is the rate? P packets over T seconds, so the rate is clearly lambda = P/T. This is easy. Okay, great. Now let's
45:46 assume that the area under the curve is A; this is the entire area under the curve here. Now, this is the area under the curve of n(t), the number of packets in the queue as a function of time. If I take this area, which in the continuous domain is the integral of n(t), and I divide it by T, I get the mean number of packets in the queue. You agree with that: the integral aggregates the number of packets in the queue across all time, so to find the average I take the integral and divide by capital T. That's the definition of the mean. 46:35 All right, so now we have two things: the rate is P/T, and the mean number of packets in the queue is A/T, where A is the area under the curve. To complete the puzzle, we have to observe that the same area can be viewed in a second way. One way, as we said, is that the mean number of packets in the queue is some horizontal line through here, the area under the curve divided by T. But each of these rectangles also accounts for a certain delay, and the mean delay experienced by a packet is simply the area under this entire curve divided by the number of packets that ever got forwarded through the system. Through this experiment, P packets got forwarded, and the area under the curve also represents the total aggregate delay, because along the time axis each rectangle's width is the total time its packet spent in the queue. And if I take this
47:32 entire area under the curve and divide it by the number of packets I sent, that gives me the average time a given packet spent in the queue, which means that the mean delay is D = A/P. So if I take A/P and multiply it by P/T, what I get is A/T, which is equal to N, and that's Little's law. 48:01 So now we're going to apply Little's law. It's actually a very intuitive idea: it just says that if I take the average rate and the average delay and multiply the two, I get the average number sitting in the system. To complete the picture for the throughput of the sliding-window protocol, we're going to apply Little's law as follows. Say the window size in the protocol is W, and the round-trip time is RTT, the time between when I send a packet and when I get an acknowledgment back. Now I have a big black box: I send out packets, every time I receive an acknowledgment I send out another packet, and I never have more than W packets outstanding. The average delay between when I send a packet and when I get its acknowledgment is RTT; that's the D in the Little's law formula. The number of things sitting in this black box, the network, the number of outstanding packets waiting to be processed, is W. Therefore, by Little's law, the rate is N over D, which is W over RTT. 49:15 So the throughput of this protocol is simply equal to W/RTT, and it looks as if, by increasing W, I get higher throughput: if I draw the throughput as a function of the window size W, I get a linear increase like that.
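The pictorial proof can be checked numerically. The sketch below is my own construction, not lecture code: it pushes packets through a FIFO queue with random arrivals and deterministic service, computes the area under n(t) two ways (once by summing the per-packet rectangles, i.e. the total delay, and once by sweeping the arrival and departure events), and then confirms that N = lambda × D with lambda = P/T, N = A/T, D = A/P.

```python
import random

random.seed(1)

# A FIFO queue with deterministic service: packet i arrives at a[i] and
# departs at d[i] = max(a[i], d[i-1]) + service. Arrival rate 1.5/s with
# 0.5 s service gives utilization 0.75, so the system is stable.
service = 0.5
arrivals, departs = [], []
t = 0.0
last_depart = 0.0
for _ in range(1000):
    t += random.expovariate(1.5)          # random inter-arrival gaps
    arrivals.append(t)
    last_depart = max(t, last_depart) + service
    departs.append(last_depart)

# Area under n(t), way 1: a stack of unit-height rectangles, one per packet,
# each of width equal to that packet's time in the system (its delay).
area_rects = sum(d - a for a, d in zip(arrivals, departs))

# Area under n(t), way 2: sweep the +1/-1 events and integrate n(t) directly.
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departs])
area_sweep, n, prev_t = 0.0, 0, 0.0
for when, delta in events:
    area_sweep += n * (when - prev_t)
    n += delta
    prev_t = when

T = departs[-1]                # observe until the queue is empty again
P = len(arrivals)
lam = P / T                    # rate
N = area_sweep / T             # mean number in the system
D = area_rects / P             # mean delay per packet
print(abs(area_rects - area_sweep))   # the two areas agree (up to rounding)
print(abs(N - lam * D))               # Little's law holds
```

The observation window is chosen so the queue is empty at both ends, which is exactly the condition under which the finite-T rectangle argument goes through with no boundary terms.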
49:45 Now, the problem with this, of course, is that you look at it and think, well, the best way to get higher and higher throughput is to keep increasing the window size. So does this keep going forever, where all I have to do is keep increasing the window size and I get infinite throughput? That's clearly not happening. So what happens? It is completely true that W/RTT is the throughput, so why can't I just keep increasing the window size and get infinite throughput? Yeah? 50:20 Well, it's true that you're bounded by C, and yet this formula is true, and it's true that there's some round-trip time. What's really going on, of course, is that if you increase the window size beyond a certain amount, all that happens is that packets get stuck in this queue. They drain at rate C packets per second, but they pile up at the back end of the queue, and when they pile up at the back end of the queue, the RTT is no longer fixed; the RTT also starts growing. In other words, the throughput always obeys this formula. Initially, before the queue fills: you send one packet, it goes through; you have a window of two packets, they go through and you get ACKs; three packets, they go through and you get ACKs. At some point, though, they start to fill up the queue, and once they do, W keeps growing but RTT keeps growing too, and this ratio doesn't exceed C. So you end up with throughput that looks like that, and the point at which this happens here
51:23 is actually the product of C and the minimum RTT of the system, which is the round-trip time in the absence of any queueing. I'm going to call that RTT_min; it depends on the propagation delay and the transmission delay, but not on the queueing delay. With no queues there's a certain minimum round-trip time: it takes, say, a hundred milliseconds to go to California and back, or whatever. When the queue starts to grow, the RTT starts increasing, but until that point the round-trip time is RTT_min, and if I multiply that by C, that's the critical window size up to which no packets build up in the queue. Beyond it, packets start to build up in the queue. There's a name for this product of the bottleneck link speed, or bandwidth, and RTT_min: it's called the bandwidth-delay product. 52:14 It's the product of the bandwidth and the delay, where the delay is the minimum round-trip time. If I draw the analogous picture of the actual round-trip time as a function of the window size: initially, when the window size is small, the round-trip time is RTT_min, some fixed value, and then at this point, which is the bandwidth-delay product, the round-trip time starts to grow. So this is the actual delay. Looking at this picture, a well-designed, well-running protocol will run with the window size roughly around here, where it gets the highest possible throughput at the lowest possible delay. But sometimes you might end up running with a bigger window size: you're not going to get any faster throughput, but what you will see is increasingly higher delay. Now, in real networks, designing
53:06 protocols that run at this nice sweet spot is an extremely challenging problem. I'll get back to it on Wednesday and talk about how people work on it. It's still a somewhat open problem; in fact, it's still an open problem in settings like cellular wireless networks. So I'll come back to this point, but the main takeaway here is this idea of the bandwidth-delay product.
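The two pictures on the board can be summarized in closed form: below the bandwidth-delay product, throughput is W/RTT_min and the RTT stays flat; above it, the bottleneck saturates, throughput pins at C, and the RTT grows as W/C. This is my own framing of the lecture's formulas, assuming a single flow, no losses, and one bottleneck; the numbers are illustrative.

```python
# One flow, one bottleneck of C packets/s, no loss: the lecture's two curves.
C = 1000.0          # bottleneck rate, packets per second
RTT_MIN = 0.1       # round-trip time with empty queues, seconds
BDP = C * RTT_MIN   # bandwidth-delay product: the critical window size

def throughput(w):
    """W/RTT_min until the bottleneck saturates, then pinned at C."""
    return min(w / RTT_MIN, C)

def rtt(w):
    """Flat at RTT_min below the BDP; beyond it, queueing adds w/C - RTT_min."""
    return max(RTT_MIN, w / C)

for w in (10, 50, 100, 200):   # the BDP is 100 packets with these numbers
    print(w, throughput(w), rtt(w))
# In both regimes throughput(w) == w / rtt(w), exactly as Little's law says.
```

The "sweet spot" the lecture describes is W equal to the BDP: the smallest window that achieves throughput C, and hence the highest throughput at the lowest delay.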