The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Today we're going to talk about analyzing task-parallel algorithms, multithreaded algorithms, and this is going to rely on the fact that everybody has taken an algorithms class. I want to remind you of some of the stuff you learned in that class, and if you don't remember it, it's probably good to bone up on it a little, because it's going to be essential: the topic of divide-and-conquer recurrences. Everybody remember divide-and-conquer recurrences? There's a general method for solving them that will deal with most of the ones we'll see, called the master method. It deals with recurrences of the form T(n) = a T(n/b) + f(n), which is generally interpreted as: I have a problem of size n, and I can solve it by solving a subproblems of size n/b, where it costs me f(n) work to do that division and to accumulate the results into my final result. For all these recurrences the unstated base case, since this is a running time, is that T(n) is constant if n is small. Does that make sense? Everybody familiar with this? Right. Well, we're going to review it anyway, because I don't like to just assume and then leave 20 percent of you or more behind in the woods.

So let's remind ourselves of what this means. It's easy to understand in terms of a recursion tree. The idea of a recursion tree is to take the running time and re-express it using the recurrence: if I have T(n), I can put f(n) at the root and hang off a copies of T(n/b). That's exactly the same running time as T(n); I've simply expressed it with the right-hand side. Then I do the same thing again at every level. I expand all the leaves (I've only expanded one here because I ran out of space), and I keep doing that until I get down to T(1).

The trick for evaluating these recurrences is to add across the rows. The first row adds up to f(n), the second row adds up to a·f(n/b), the third to a²·f(n/b²), and so forth. As for the height: since I'm taking n and dividing it by b each time, how many times can I divide by b until I get to something of constant size? That's just log base b of n. So far this is all review; any questions from anybody? Okay.

Next I look at the leaves. If I've got T(1) work at every leaf, how many leaves are there? For this analysis we're going to assume everything works out exactly, that n is a perfect power of b and so forth. If I go down k levels, how many subproblems are there at level k? a^k. And how many levels am I going down? h = log_b n. So I end up with a^(log_b n) leaves, each costing T(1), which is constant. And a^(log_b n) is the same as n^(log_b a); that's just a little bit of exponential algebra. One way to see that is to take the log of both sides of both expressions and realize that all that's being used is commutativity:
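Written out in symbols (this is just the spoken algebra above, set as an equation):

\[
\log_b\!\big(a^{\log_b n}\big) = (\log_b n)(\log_b a) = (\log_b a)(\log_b n) = \log_b\!\big(n^{\log_b a}\big),
\]

so \(a^{\log_b n} = n^{\log_b a}\).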
Take the log base b of a^(log_b n) and you get (log_b n)(log_b a); take the log base b of n^(log_b a) and you get the exponent, (log_b a)(log_b n): the same thing, just in a different order. That little bit of math matters because we're interested in the growth in n, so we prefer the form with n in the base and the constant log_b a up in the exponent, rather than the other way around.

So that's the number of leaves. The question, then, is how much work there is if I add up all of these rows, all the way down. It turns out there's a trick, and the trick is to compare n^(log_b a) with f(n). There are three very commonly arising cases, and for the most part those three cases are all we're going to see.

Case 1 is the case where n^(log_b a) is much bigger than f(n), and by "much bigger" I mean bigger by a polynomial amount: there's an epsilon greater than zero such that the ratio between the two is at least n^ε. In other words, f(n) = O(n^(log_b a - ε)), with the epsilon subtracted up in the exponent, which is the same as n^(log_b a) divided by n^ε. In that case the row sums are geometrically increasing, so a constant fraction of the weight is in the leaves, and the answer is T(n) = Θ(n^(log_b a)). So if n^(log_b a) is bigger than f(n), the answer is n^(log_b a), as long as it's bigger by a polynomial amount.

Case 2 is the situation where n^(log_b a) is approximately equal to f(n); they're very similar in growth. Specifically, we look at the case where f(n) is n^(log_b a) times a logarithmic, or polylogarithmic, factor log^k n for some constant k ≥ 0. That "greater than or equal to zero" is very important. You can't do this for negative k: even though negative k is defined and meaningful, this is not the answer when k is negative. But if k ≥ 0, then what's happening is that the row sums grow arithmetically from beginning to end, and when you solve it you essentially pick up one extra log factor: if f(n) = n^(log_b a) log^k n, the answer is T(n) = Θ(n^(log_b a) log^(k+1) n). The intuition, which is not quite the math but is good intuition, is that the levels are almost all equal and there are about log n of them, so you tack on an extra log.

And then finally, case 3 is the case where n^(log_b a) is much less than f(n), and specifically smaller by, once again, a polynomial factor: an n^ε factor for some ε > 0. It's also the case here that f has to satisfy what's called a regularity condition. That condition is satisfied by all the functions we're going to look at, polynomials and polynomials times logarithms and things of that nature; it's not satisfied by weird functions like sines and cosines, and, more relevantly, it's not satisfied if you have things like exponentials. But for everything we're going to look at, it holds. In that case the row sums are geometrically decreasing, so all the work is at the root, the root costs basically f(n), and the solution is T(n) = Θ(f(n)).
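Collected in one place (this is essentially what's on the cheat sheet), for T(n) = a T(n/b) + f(n) with constants a ≥ 1 and b > 1:

\[
\begin{aligned}
\textbf{Case 1:}\quad & f(n) = O\!\big(n^{\log_b a - \varepsilon}\big),\ \varepsilon > 0
  &&\Longrightarrow\ T(n) = \Theta\!\big(n^{\log_b a}\big),\\
\textbf{Case 2:}\quad & f(n) = \Theta\!\big(n^{\log_b a}\log^{k} n\big),\ k \ge 0
  &&\Longrightarrow\ T(n) = \Theta\!\big(n^{\log_b a}\log^{k+1} n\big),\\
\textbf{Case 3:}\quad & f(n) = \Omega\!\big(n^{\log_b a + \varepsilon}\big),\ \varepsilon > 0,\ \text{with regularity}
  &&\Longrightarrow\ T(n) = \Theta\!\big(f(n)\big).
\end{aligned}
\]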
So we're going to hand out a cheat sheet; if I could conscript the TAs to get that distributed as quickly as possible. Let's do a little puzzle here. Here's the cheat sheet; that's basically what's on it, and we'll do a little in-class self-quiz.

We have T(n) = 4T(n/2) + n. The solution is the kind of thing that, as a computer scientist, you just memorize, so that in any situation you don't even have to look at the cheat sheet; you just know it. It's one of these basic things all computer scientists should know, kind of like knowing what 2 to the 15th is. What is it? Yes: 32,768. And interestingly, that's my office number. I'm in 32-G768, the only one in the Stata Center with a power-of-two office number, and that was totally unplanned. So if you need to remember my office number: 2 to the 15th.

So what's the solution here? Which case? Case 1. And what's the solution? Θ(n²), very good. n^(log_b a) is n^(log₂ 4); log₂ 4 is 2, so that's n², which is much bigger than n. So it's case 1, and the answer is Θ(n²). Pretty easy.

How about T(n) = 4T(n/2) + n²? Yes, it's Θ(n² log n). Once again the first part is the same: n^(log_b a) is n². And n² is n² log⁰ n, so it's case 2 with k = 0, and you just tack on an extra log factor: Θ(n² log n).

And then of course we've got to do T(n) = 4T(n/2) + n³. Yes, Θ(n³), because once again n^(log_b a) is n², which is much less than n³; n³ is bigger, so it dominates, and we have Θ(n³).

What about T(n) = 4T(n/2) + n²/log n? No, that's not the answer. Which case do you think it is? Case 2? Nope. Yes: it's none of the cases. It's a trick question (I'm a nasty guy). This is one where the master method does not apply. It would be case 2, but k has to be greater than or equal to 0, and here k is -1, so case 2 doesn't apply. And case 1 doesn't apply either: we're comparing n² with n²/log n, and the ratio there is log n, and log n is smaller than any n^ε; you need an n^ε separation. The actual answer for that one, by the way, is Θ(n² log log n), which you can prove by the substitution method; it uses the same ideas, just slightly different math. There's a more general solution to this kind of recurrence, called the Akra-Bazzi method, but for most of what we're going to see, applying Akra-Bazzi is more complicated than simply doing the sort of table lookup of which term is bigger: if one is sufficiently bigger, it's case 1 or case 3, and in the common case where they're about the same, to within a log factor, it's case 2. So we're going to use the master method, but there are more general ways of solving these kinds of things.

Okay, let's talk about some multithreaded algorithms. The first thing I want to do is talk about loops, because loops are a great thing to analyze and understand, since so many programs have them. Probably 90 percent or more of the programs that get parallelized are parallelized by making loops parallel; the spawn-and-sync, subroutine-type parallelism is not done that frequently in code. Usually it's loops. So as an example we're going to look at code to do an in-place matrix transpose: I want to swap the lower triangle of the matrix with the upper triangle, and here's some code to do it, where I parallelize the outer loop. We run the outer index i from 1 up to n - 1 (I'm actually running the indexes from 0), and the inner loop goes from 0 up to i - 1.

Now, I've seen people write transpose code; this is one of those trick questions they give you in interviews, where they say: write the transpose of a matrix with nested loops. What many people will do is run the inner loop to n rather than to i. And what happens if you run the inner loop to n? You get a very expensive identity function, and there's an easier, faster way to compute the identity than with doubly nested loops: you swap everything, and then you swap it all back. So it's important to get the iteration space right. If you look at the (i, j) values and map them out in the plane, what shape do you get? It's not a square, which it would be if both loops ran from 0 to n - 1. It's a triangle: we run through all the pairs in the lower triangle and swap each one with the corresponding element of the upper triangle. Correspondingly, you can view it as running through the upper triangle; but we don't run through both triangles, because then we'd get the identity. Anyway, that's just a tip for when you're interviewing: double-check that they've got the loop indices to be what they ought to be.

Here we've parallelized the outer loop, which raises a question: how much work is in each iteration of this loop? How much time does it take to execute the loop body for a given value of i? Yes: Θ(i). Which means that if you've got a certain number of processors, you don't want to just chunk the range up so that each processor gets an equal range of i values to work on; you need something that's going to load-balance. And this is where the Cilk technology is at its best, when there are these unbalanced things, because it does the right thing, as we'll see.
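As a sketch, the parallel transpose looks essentially like this in Cilk-style C (the array name A and the double element type are my assumptions):

    // In-place transpose: swap the lower triangle with the upper triangle.
    cilk_for (int i = 1; i < n; ++i)     // parallel outer loop, i = 1 .. n-1
      for (int j = 0; j < i; ++j) {      // serial inner loop, j = 0 .. i-1
        double temp = A[i][j];           // swap A[i][j] with A[j][i]
        A[i][j] = A[j][i];
        A[j][i] = temp;
      }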
So let's talk a little bit about how loops are actually implemented by the OpenCilk compiler and runtime system. We have this doubly nested loop, but the only loop we're interested in is the outer one. What the compiler does is create a recursive program for the loop. What is this program doing? The loop body, which I'm highlighting, has been lifted into this recursive program, and what the program does is find a midpoint and then recursively call itself on the two halves, until it gets down to, in this case, a one-iteration range, at which point it executes the body of the loop, which in this case is itself a for loop, but not a parallel one. So it's doing divide-and-conquer; it's basically tree splitting.

So it's got this control structure on top, and if I take a look at what's going on in the control, it looks something like this, using the dag model that we saw before. What I have highlighted is the control, and down below, in purple, is the lifted body. It's basically saying: divide the range into two parts, spawn one recursive call, call the other, and keep dividing like that until you get down to the base condition. And the work done in each iteration of the loop, which I've illustrated here, grows from 1 to n; I'm showing it for 8, but in general it grows from 1 to n for this particular loop. Is that clear? That's what's actually going on: the OpenCilk runtime system does not have a loop primitive. It doesn't have loops; essentially it only has the ability to spawn and so forth, so loops are effectively translated into this divide-and-conquer, and that's the way you need to think about loops when you're thinking in parallel. Makes sense?
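Here's roughly what that lowering looks like if you write it out by hand for the transpose loop. This is a sketch of the shape of the transformation, not the compiler's actual output, and the helper name is mine:

    #include <cilk/cilk.h>

    // Divide-and-conquer lowering of the cilk_for over i in [lo, hi).
    static void transpose_helper(double **A, int lo, int hi) {
      if (hi - lo == 1) {                  // base case: one value of i
        int i = lo;
        for (int j = 0; j < i; ++j) {      // the lifted (serial) loop body
          double temp = A[i][j];
          A[i][j] = A[j][i];
          A[j][i] = temp;
        }
        return;
      }
      int mid = lo + (hi - lo) / 2;             // find the midpoint
      cilk_spawn transpose_helper(A, lo, mid);  // one half in parallel
      transpose_helper(A, mid, hi);             // the other half
      cilk_sync;                                // wait for the spawned half
    }

    // The whole loop is then just: transpose_helper(A, 1, n);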
Now, one question is: that seems like a lot of code to write for a simple loop. What did we pay for that? How much does it cost us? So let's analyze parallel loops a little bit. As you know, we analyze things in terms of work and span. What is the work of this computation? Well, before you get there: what's the work of the original computation, the doubly nested serial loop? If they were serial loops, how much work would be there? A doubly nested loop, where iteration i does Θ(i) work; summing i from 1 to n, what do you get? Yes, Θ(n²). And although you're only doing half the work, it's still half of the n² work you might think was there if you wrote the unfortunate identity function, so it's Θ(n²).

So the question is how much work is in this parallel computation, because now I've got this whole tree-spawning business going on in addition to the work I'm doing in the leaves. The leaf work, along the bottom, is order n², because that's the same as in the serial loop case. How much does the other stuff up top add? It looks huge; it looks bigger than the other stuff, doesn't it? How much is there? Basic computer science: it's Θ(n). Why is the upper part Θ(n)? Yes: going from the leaves to the root, every level halves, so it's geometric, and the total is proportional to the number of leaves, with constant work in each of those control nodes. So the total amount of work is Θ(n²). Another way of thinking about this: you've got a complete binary tree that we've created with n leaves. How many internal nodes are there, that is, nodes that have children? There are exactly n - 1. That's a property of any full binary tree, meaning any binary tree in which every non-leaf has two children: it has exactly n - 1 internal nodes. Nice tree properties, nice computer science properties. We like computer science; that's why we're here, not because we're going to make a lot of money.
Okay, let's look at the span of this, because that's how we understand parallelism: by understanding work and span. I see some familiar hands. Θ(n)? Yes. How did you get that? Right: the longest path basically goes from the root down to the biggest leaf, the one labeled 8, and then back up. And the 8 is really n in the general case. Going down, the span of the loop control is Θ(log n), and that's the key takeaway here: the span of divide-and-conquer loop control is logarithmic. If I had an infinite number of processors, the control could all be done in logarithmic time. But the biggest leaf is linear: it's 8 in this case, but order n in general. So the span is Θ(log n) + Θ(n) + Θ(log n), which is Θ(n).

So what's the parallelism here? Θ(n); it's the ratio of the two. Is that good, a parallelism of Θ(n)? Yeah, that's pretty good, because typically you're going to be working on systems that have, if you're on a really big system, maybe 64 cores or 128 cores. And this is saying that if you're doing that, you'd better have a problem that's really big to run on. Typically n is way bigger than the number of processors for a problem like this; not always the case, but here it is. Any questions about this? So we can use our work-and-span analysis to understand that, hey, the work overhead is a constant factor (we'll talk more about the overhead of work later), but from an asymptotic point of view our work is Θ(n²), just like the original code, and we have a fair amount of parallelism: order n.

How about if we make the inner loop parallel as well? Rather than parallelizing just the outer loop, we also parallelize the inner loop. How much work do we have in this situation? Hint: all work questions are trivial, or at least no harder than they were when you were doing ordinary serial algorithms. Maybe we can come up with a trick question on the exam where the work changes, but almost always the work doesn't change. So what's the work? Θ(n²), right. Parallelizing stuff doesn't change the work. What it hopefully does is reduce the span of the calculation, and by reducing the span we get big parallelism. That's the idea. Now, sometimes when you parallelize things you add work, and that's unfortunate, because it means that even if you take your parallel program and run it on one processing core, you're not going to get any speedup; it's going to be a slowdown compared to the original algorithm. So we're generally interested in work-efficient parallel algorithms, which we'll talk more about later. Generally we're after work efficiency.
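For concreteness, the doubly parallel version is the same transpose with the inner loop also made a cilk_for; a sketch (note there's no race here: distinct (i, j) pairs with j < i touch distinct pairs of elements):

    cilk_for (int i = 1; i < n; ++i)      // outer loop in parallel
      cilk_for (int j = 0; j < i; ++j) {  // inner loop now parallel too
        double temp = A[i][j];
        A[i][j] = A[j][i];
        A[j][i] = temp;
      }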
So what's the span of this? It is not still Θ(n). What was your thinking in saying it was Θ(n)? Ah, but now notice that the 8 is not a serial for loop anymore; the inner loop is a parallel loop itself. Not quite, but this is commendable. Absolutely, this is commendable, because this is why I try to have a bit of a Socratic method in here, where I'm asking questions as opposed to just sitting here lecturing and having it go over your heads. You have the opportunity to ask questions and to have your particular misunderstandings corrected. That's how you learn, and I'm really in favor of anybody who wants to come here and learn. That's my desire, and that's my job: to teach people who want to learn. So I hope this is a safe space for you folks to be willing to put yourselves out there and not necessarily get stuff right. I can't tell you how many times I've screwed up, and it's only by airing it, and having somebody say, "no, I don't think it's like that, Charles, it's like this," that I say, "oh yeah, you're right, that was stupid." But I no longer beat my head when I'm being stupid. Stupidity is our natural state; we have to work hard not to be stupid. It's hard work not to be stupid.

Question? Yeah, but usually, in my experience (and I've been at MIT almost 38 years), when one person has a question, there are all these other people in the room who have the same question, and by articulating it you're actually helping them out. If I think things are going too slow, that we're wasting people's time, that's my job as the lecturer to fix, and I'll say, "let's take this offline; you can talk to me after class." I appreciate your point of view, because that's considerate; but actually it's more considerate if you're willing to air what you think and have other people say, "you know, I had that same question." Certainly there will be people in the class who roll their eyes or whatever, but look: I don't teach to the top 10 percent, I try to teach the top 90 percent. And believe me, I get farther with students, and have more people enjoying the course and learning the stuff, which is not necessarily easy stuff. After the fact, you're going to discover this is easy; but while you're learning it, it's not easy. This is what Steven Pinker calls the curse of knowledge: once you know something, you have a really hard time putting yourself in the position of what it was like not to know it. It's very easy to learn something and then, when somebody doesn't understand it, go "oh, whatever." But it's that empathy that makes for a good communicator, and all of you, I know, are at some point going to have to communicate with people who are not as technically sophisticated as you are. So it's really good to appreciate that this stuff isn't necessarily easy when you're learning it. Later it'll be easy, but that doesn't mean it's easy for somebody else. So those of you who think some of these answers are "come on, move along, move along": please be patient with the other people in the class. If they learn better, they're going to be better teammates on projects and so forth, and we'll all learn. Nobody here is in competition with anybody for grades or anything; we set it up so you're going against benchmarks. We want to make this something where everybody helps everybody learn. I probably spent too much time on that, but in some sense not nearly enough.
Okay. So the span is not Θ(n); we got that much. Who else would like to hazard a guess? It is Θ(log n). What's your reasoning? Yep: log n plus log n. Good. And then what about the leaves? You've got to add in the span of the leaves; that was just the span of the control. The span of a leaf is Θ(1). Right, boom. So the span of the outer loop control is order log n, the span of the inner loop control is order log n, and the span of the body is order 1, because the body now does just one iteration of serial execution; it's not doing i iterations, it's doing one. I add all that together and I get Θ(log n). Does that make sense?

So the parallelism is... this is why every hand in the room should be up, waiting for me to call on you. Sure: n²/log n. It's the ratio of the two. Good. Any questions about that? So the parallelism is Θ(n²/log n), and this is more parallel than the previous version. But you've got to remember: even though it's more parallel, is it a better algorithm in practice? Not necessarily, because parallelism is like a threshold. You want enough parallelism beyond the number of processors that you have (the parallel slackness, remember). If the amount of parallelism is much greater than the number of processors, you're good. So for something like this, with order-n parallelism, if n is way bigger than the number of processors, you don't need to parallelize the inner loop. You'll be fine.

In fact, we're going to talk a little bit about overheads, and I'm going to do that with a vector-addition example. Here's a really simple piece of code: a vector add of two arrays, and all it does is add b into a; every position of a has b added in. I'm going to parallelize it by putting a cilk_for in front rather than an ordinary for, and what that does is give us this divide-and-conquer tree once again, with n leaves. The work here is order n, because we've got n iterations of constant time, and the span is just the loop control, Θ(log n), so the parallelism is n/log n. This is basically easier than what we just did.

Now, if I look at this, though, the work includes some substantial overhead, because of all these function calls. It may be order n, and that's good enough if you're a certain kind of theoretician; for this kind of theoretician, it's not good enough. I want to understand where these overheads are going. So consider: as I do the divide-and-conquer, if I go all the way down to n = 1, what am I doing in a leaf? How much work is in one of these leaves? It's an add: two memory fetches, a memory store, and an add, and the memory operations are going to be the most expensive things there. That's all that's going on; and yet right before that, I've done a subroutine call. A parallel subroutine call, mind you. And that's going to have substantial overhead. So the question is: do you really want to do a subroutine call just to add two numbers together? That's pretty expensive.
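Concretely, the loop being analyzed is just this (a sketch; the array names are mine):

    // Parallel vector add: add b into a, elementwise.
    cilk_for (int i = 0; i < n; ++i)
      a[i] += b[i];

Work Θ(n), span Θ(log n) from the loop control, parallelism Θ(n / log n).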
So let's take a look at how we can optimize away some of this overhead; this gets more into the realm of engineering. The OpenCilk system has a pragma (a pragma is a compiler directive, a suggestion to the compiler) with which you can suggest, in this case, that there be a grain size of G, for whatever you set G to. The grain size shows up in the lowered code like this: where the recursion used to continue whenever high was greater than low plus 1, so that you ended up with a single element per leaf, it's now low plus G, so that at the leaves I have up to G elements per chunk when I'm doing my divide-and-conquer. Therefore I can take my subroutine overhead and amortize it across G iterations, rather than amortizing it across one iteration. That's coarsening. Now, if the grainsize pragma is not specified, the Cilk runtime system makes its best guess to minimize the overhead. What it actually does at runtime is figure out, for the loop, how many cores it's running on, and make a good guess as to how much to run serially at the leaves and how much to do in parallel. Does that make sense? It's basically trying to overcome that overhead.
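In code, the coarsening can be written two ways: with the grainsize pragma, or by hand in the lowered form, where the base case stops at G iterations instead of one. A sketch with my own names; note that the exact spelling of the grainsize pragma varies across Cilk dialects, so check your toolchain's documentation:

    #include <cilk/cilk.h>

    // With the pragma (spelling is dialect-dependent; an assumption here):
    //   #pragma cilk grainsize G
    //   cilk_for (int i = 0; i < n; ++i) a[i] += b[i];

    // The same coarsening written out by hand: recurse until at most G
    // iterations remain, then run them serially in a single leaf.
    static void vadd_helper(double *a, double *b, int lo, int hi, int G) {
      if (hi - lo <= G) {                  // coarsened leaf: up to G iterations
        for (int i = lo; i < hi; ++i)
          a[i] += b[i];
        return;
      }
      int mid = lo + (hi - lo) / 2;
      cilk_spawn vadd_helper(a, b, lo, mid, G);  // one half in parallel
      vadd_helper(a, b, mid, hi, G);             // the other half
      cilk_sync;
    }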
So let's analyze this a little. Let I be the time for one iteration of the loop body (I for iteration); for this particular loop body, that's basically the cost of those three memory operations plus an add. Let G be the grain size. And let's introduce one more variable, S, the time to perform a spawn and return, which is basically the overhead for spawning. If I look at the work in this context, I've got n·I for the iterations, because I've got iterations 1, 2, 3, up to n, and those are just the normal iterations. And then, since there are n/G leaves of size G, I have n/G - 1 internal nodes, each of which costs me the spawn overhead S. So the total work is T1 = n·I + (n/G - 1)·S. Now, in the original code, without the cilk_for, before I put in all this parallel control stuff, what would the work be? Yeah: n·I. We're just doing n iterations. There's a little bit of loop control, but that loop control is really cheap, and on a modern out-of-order processor the cost of incrementing a variable and testing it against its bound is dwarfed by the stuff going on inside the loop. So it's n·I. That n·I part is what the code cost me originally, and the (n/G - 1)·S part is all overhead.

Now let's take a look at the span. The span is going to be: if I add up what's at a leaf, that's G·I, and then I've got the overhead along the path, which is proportional (I'm ignoring constants here to make it easier) to log(n/G) times S, because there are G iterations in each leaf, so the number of leaves is n/G, and the tree has log(n/G) levels. I was going to count the levels going down and going up separately, but that's just a factor of two, so a constant of one is fine.

Who's confused? Okay, let's have some questions. I know you're confused; believe me, one of my great successes in life was discovering that confusion is how I usually am, and that it's getting unconfused that's the thing. I see so many people going through life thinking they're not confused, but you know what? They're confused, and that's a worse state of affairs to be in than knowing that you're confused. Let's have questions from the people who are confused, because I want to make sure everybody gets this; and for those of you who think you know it already, sometimes it helps to know it a little better by going through a discussion like this. So somebody ask me a question. Yes: where does the second part of the work, the (n/G - 1)·S term, come from? Okay. First, if I've got G iterations at each leaf of a binary tree, and a total of n iterations, how many leaves do I have? n/G, right. Second, the fact about binary trees, true of any full binary tree and in particular of complete binary trees: if the number of leaves is n/G, the number of internal nodes is n/G - 1. Did that clear it up for some people? Good. And at each of those internal nodes I've got to do those three colorful bookkeeping operations, which is what I'm calling S. So you've got the work down.

Who has a question about span? Span's my favorite. Work is good (work is actually more important in most contexts), but span is so cool. So what I'm saying (well, I think I was mis-saying something there, but here's the point) is that the span is basically starting at the top, taking a path down to a leaf, and then going back up. If I look at that, it's the log of the number of leaves, and the number of leaves, as we agreed, is n/G; and each node along the way costs at most S for the subroutine calling and bookkeeping in that node. Does that make sense? Or could it be any of the paths? Take a look at all the paths: every path goes down and then back up. There's no path that goes down and around and up and so forth. This is a dag, so you've got to follow the directions of the arrows: you're either going down, or you've started back up. So it's always, essentially, down through a set of subroutines and back up through a set of subroutines. If you think about the recursive code, think about what happens when you do divide-and-conquer with a stack: how many things get stacked up and then unstacked? The path down and back up is logarithmic at most. Does that make sense? Now, if one subtree here were dependent on another subtree, then indeed the span would grow. But the whole point is to make these two things independent, so I can run them at the same time. There's no dependency there.
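Collecting the two formulas (I = time per iteration, G = grain size, S = spawn/return overhead):

\[
T_1 \;=\; n\,I + \Big(\frac{n}{G} - 1\Big)S,
\qquad
T_\infty \;=\; G\,I + \Theta\!\big(\log(n/G)\big)\,S .
\]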
Okay. So here I have the work and the span, and I want two things out of this. Number one, I want the work to be small: close to n·I, the work of the ordinary serial algorithm. Number two, I want the span to be small, so the loop is as parallel as possible. And those two goals push G in opposite directions. Look at the dominant term involving G in each formula. In the work, G appears dividing, so if I want the work to be small, I want G to be big. In the span, the dominant term is G multiplied by I (there is another term, but it's lower order), so if I want the span to be small, I want G to be small. They're going in opposite directions, so what we're interested in is finding a happy medium. In particular, if you look at this, what I want is G to be at least S/I. Why? If I make G much bigger than S/I, then the overhead term, multiplied out, ends up much less than the n·I term. Do you see that? It's algebra: get rid of the minus one, which doesn't matter, and the overhead term is really n·(S/G). If G is much bigger than S/I, then S/G is much smaller than I, so n·(S/G) is much smaller than n·I. Does that make sense? How are we doing on time? Okay, I'm going to get through everything I expect to get through, despite my rant. So that's the goal: we want G to be much greater than S/I, and then the overhead is going to be small, because I'm doing a whole bunch of iterations per function call, and the function call is just... who cares? That's the idea.

So let's take a look at... let me see; somehow I feel like I have something out of order here, because now I have the other implementation. Okay, here's another implementation of the for loop to add two vectors. What this one uses as a subroutine is an operator called v_add, which itself does just a serial vector add. And what I'm going to do is run through the loop and spawn off additions in groups of G (the min in there is just for the boundary condition). So I spawn off a vector add of size G, then another vector add of size G, then another, jumping ahead by G each time.
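A sketch of that version (function and variable names are mine; the structure, a serial v_add spawned off in chunks of G, is what's being described):

    #include <cilk/cilk.h>

    // Serial vector add: a[i] += b[i] for n elements.
    static void v_add(double *a, double *b, int n) {
      for (int i = 0; i < n; ++i)
        a[i] += b[i];
    }

    // Spawn off the additions in chunks of G.
    static void v_add_chunked(double *a, double *b, int n, int G) {
      for (int j = 0; j < n; j += G) {
        int len = (G < n - j) ? G : n - j;    // min(G, n - j): boundary case
        cilk_spawn v_add(a + j, b + j, len);  // one chunk of up to G iterations
      }
      cilk_sync;                              // wait for all the chunks
    }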
So let's take a look at the analysis of that. Now I've got chunks of G iterations, each iteration costing me I, and this is the dag structure I get: the for loop containing the cilk_spawn marches along, and each spawn kicks off a v_add that does G iterations (because I'm passing it basically G; let's not worry about the boundary case). Spawn off G, spawn off G, spawn off G, and so forth. So what's the work of this? Let's make things easy to begin with: let's assume G = 1 and analyze that. This is a common approach, by the way: assume the grain size is 1, analyze it, and then, as a practical matter, coarsen it to make it more efficient. So if G = 1, what's the work? Yes, it's Θ(n), because those other terms are constant. By the way, this transformation is a technique called strip mining: if you take away the parallel part, you've taken a loop of length n and turned it into nested loops, one with n/G iterations and one with G iterations, going through exactly the same set of iterations. You're replacing a singly nested loop by a doubly nested loop, and the only difference here is that in the outer loop I'm actually spawning off work. So here the work is order n: if G is 1, I'm spawning off n pieces of constant-size work, so I've got order-n work up top and order-n work down below.

What's the span for this? After all, I've got n spawns there now (n spawns, not n spans). What's the span going to be? Θ(S)? No, it's bigger than that. You'd think, gee, I just have to do one thing, go down and up; but the span is the longest path in the whole dag. Where does the longest path start? Upper left. And where does it end? Upper right. How long is that path? It's going to go all the way along the backbone at the top, then flip down and back up at the end. And how many things am I spawning off along that backbone if G is 1? n of them. So the span is Θ(n). It's long. So what's the parallelism here? Yeah, it's order 1. And what do we call that? "Sad," right, but there's a more technical name: we call that puny. It's like: we went through all this work, spawned off all that stuff, added all this overhead, and it didn't go any faster. I can't tell you how many times I've seen people do this when they start parallel programming: "but I spawned off all this stuff!" Yeah, but you didn't reduce the span.

Okay, that was the analysis for G = 1. Now let's increase the grain size and analyze it in terms of G. Once again, what's the work? The work is always a gimme: Θ(n), same as before. The work doesn't change when you parallelize things differently and so on; I'm doing order-n iterations in total. But what's the span? This is a tricky one. Close; that's half right. It's Θ(n/G + G): don't forget the other term. The path we care about goes along the top, which is n/G spawns because I'm jumping in chunks of G, and then down through one chunk, which has span G. So it's n/G plus G. And now, how can I choose G to minimize the span? (There's nothing to choose to minimize the work, other than the overhead we're not counting here.) What's the best value for G? You've got it: the square root of n. One of the terms, n/G, is decreasing in G, and the other, G, is increasing; they cross where they're equal, which is when G = n/G, that is, G = √n. So this actually has decent span: for n big enough, Θ(√n) is not bad. So it is okay to spawn things off in chunks; just don't make the chunks really little. And what's the parallelism? This one's always a gimme, it's the ratio: Θ(√n).
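In symbols, with constants suppressed:

\[
T_1 = \Theta(n), \qquad T_\infty = \Theta\!\Big(\frac{n}{G} + G\Big),
\]

minimized where the two terms cross, \(n/G = G\), i.e. \(G = \sqrt{n}\), giving \(T_\infty = \Theta(\sqrt{n})\) and parallelism \(T_1 / T_\infty = \Theta(\sqrt{n})\).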
There's a quiz on parallel loops that I'm going to let you folks do offline. Here are the answers; if you quickly write them down, you don't have to think about it... okay, okay. Take a look at the notes afterwards, and you can try to figure out why those answers are what they are.

So, some performance tips that make sense when you're programming with loops. One: minimize the span to maximize the parallelism, because the span is in the denominator; and generally you want to generate ten times more parallelism than processors if you want near-perfect linear speedup. We talked about that last time: if you have a lot more parallelism than the number of processors, you get good speedup. Two: if you have plenty of parallelism, try to trade some of it off to reduce the work overhead. The idea is that for any of these things you can fiddle with the numbers, the grain size in particular: increasing it reduces the parallelism, but it also reduces the overhead, and as long as you've still got sufficient parallelism, your code is going to run just fine in parallel. It's only when you're in the regime of "I don't have enough parallelism and I don't want to pay the overhead" that things get sticky. Most of the time you're going to be in the case where you've got way more parallelism than you need, and the question is how to trade some of it away to reduce the work overhead. Three: generally, use divide-and-conquer recursion or parallel loops, rather than spawning one small thing after another. It's better to use cilk_for, which already does divide-and-conquer parallelism, than to spawn off one thing at a time, unless you can chunk the spawns so that you have relatively few of them. Spawning one call per iteration would be fine if foo(i) were really expensive, because then there'd be lots of parallelism, since there's a lot of work there; but generally it's better to do the divide-and-conquer. Four: make sure that the work per number of spawns is sufficiently large. The spawns determine how finely you're busting your work into chunks, and a spawn has an overhead, so the question is how big each chunk is; you can coarsen by using function calls and inlining near the leaves. Five: it's generally better to parallelize outer loops as opposed to inner loops, if you're forced to make a choice, because with inner loops you incur the overhead every single time, whereas with outer loops you can amortize it against the work going on inside, which doesn't carry the overhead. And six: watch out for scheduling overheads. Here's an example of two codes that both have parallelism 2, where one of them is efficient and the other one is lousy. The top one is efficient because I have two iterations that run in parallel, and each of them does a lot of work, so there's only one scheduling operation that happens. The bottom one has n iterations, and each iteration does only about two units of work, so I basically have n iterations' worth of scheduling overhead. If you just look at the overheads, you can see what the difference is.
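A reconstruction of that contrast (f here is a hypothetical constant-work function; this is the pattern being described, not necessarily the slide's exact code):

    // Efficient: parallelism 2, and only one scheduling event;
    // each parallel iteration does Theta(n) work.
    cilk_for (int i = 0; i < 2; ++i)
      for (int j = 0; j < n; ++j)
        f(i, j);

    // Lousy: still parallelism ~2 overall, but n scheduling events,
    // each governing only ~2 units of work, so overhead swamps the benefit.
    for (int j = 0; j < n; ++j)
      cilk_for (int i = 0; i < 2; ++i)
        f(i, j);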
I want to talk a little bit about matrix multiplication. I actually have a whole bunch of things here that I'm not going to get to (I didn't expect to get to them), but I do want to get to some of matrix multiplication. People familiar with this problem? Okay. We're going to assume for simplicity that n is a power of two. Here's a typical way of parallelizing matrix multiplication: I take the two outer loops and parallelize them. I can't easily parallelize the inner loop, because if I do, I get a race condition: I'll have two iterations that are both trying to update C[i][j]. So I can't just parallelize k; I'm going to parallelize i and j. The work for this is, triply nested loop, Θ(n³); everybody knows matrix multiplication, and unless you do something clever like Strassen, or one of the more recent Vassilevska Williams-style algorithms, the running time for the standard algorithm is n³. The span of this is: that inner loop has linear span, and then you've got two logs from the loop control, so it's log n + log n + n, which is Θ(n). So the parallelism works out to around Θ(n²). If I ignore the constants and say I'm working on matrices of, say, a thousand by a thousand, the parallelism is something like n², and a thousand squared is a million. Wow, that's a lot of parallelism. How many processors are you running on? Is it bigger than ten times the number of processors? By a little bit.
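As a sketch, the version just analyzed looks like this (names and types are mine; note the serial k loop):

    // Parallelize i and j; keep the k loop serial to avoid a race on C[i][j].
    cilk_for (int i = 0; i < n; ++i)
      cilk_for (int j = 0; j < n; ++j)
        for (int k = 0; k < n; ++k)
          C[i][j] += A[i][k] * B[k][j];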
Now there's another strategy one can use, called divide-and-conquer, and this is the strategy that's used in Strassen's algorithm. We're not going to do Strassen; we're just going to use the eight-multiply version of this. For people who know Strassen, more power to you: it's a great algorithm, really surprising, really amazing, and it's actually worthwhile doing in practice, by the way, for sufficiently large matrices. The idea here is that I can multiply two n×n matrices by doing eight multiplications of (n/2)×(n/2) matrices and then adding two n×n matrices.

Now, when we start talking matrices, here's a little bit of a diversion from the algorithms that's nonetheless really important: the representation of matrices is one of the things that gets people into trouble whenever they're doing any kind of two-dimensional coding. So I want to talk a little about indexing; we'll talk about this more later, when we do cache behavior and such. How do you represent submatrices? The standard way of representing a matrix is either row-major or column-major order, depending on the language you use. Fortran uses column-major ordering, so there are a lot of subroutines out there that are column-major; but for the most part C, which we're using, is row-major. The question is: if I take a submatrix of a large matrix, how do I calculate where the (i, j) element of that submatrix is? Here I have a submatrix M embedded in a big outer matrix, which is stored in row-major order; remember, that means I take row after row and lay them out in linear order through memory. Every two-dimensional matrix can be indexed as a one-dimensional array, which is exactly what the code is doing; you just need to know where the matrix begins. But if you have a submatrix, it's a little more complicated. So here's the setup (I named these variables stupidly, but bear with me): the outer matrix, the big one, has row length n_M, and M, the submatrix I'm actually interested in, starts at some location inside it. What I want to find is where the (i, j) element of the submatrix M lives: if I tell you where M begins, what do I add to get to (i, j)? The answer is that for the i rows I go down, I have to add i times the width of the full outer matrix that you're taking the submatrix out of, not the width of your local submatrix; and then from that point I add in j. So the offset is i·n_M + j, where n_M is the row length of the outer matrix. Does that make sense? Because the submatrix is embedded in there, every row you advance has to skip over a full row of the outer matrix. So you can't, in general, just pass a submatrix and expect to do indexing on it while it's embedded in a larger matrix. If you make a copy, sure, then you can index it according to whatever the new copy is; but if you want to operate in place on matrices, which is often the case, then you have to understand that advancing by a row means jumping a row of the outer matrix, not a row of whatever your submatrix is.

So when we look at doing the divide-and-conquer, I have a matrix that I want to divide into four submatrices of size n/2, and the question is: where are the starting corners of each of those submatrices? M00 starts at the same place as M. Where does M01 start? M + n/2. Where does M10 start? This is the tricky one. Here's the answer: M + n_M·(n/2), because I'm going down n/2 rows, and I've got to go down that many rows of the outer matrix. And M11 combines the two: M + n_M·(n/2) + n/2. In general, for row index r and column index c, each 0 or 1, the corner is M + (r·n_M + c)·(n/2); that general formula matches up with all four cases when you plug in 0 or 1 for each.
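Below is a reconstruction, as a sketch, of the kind of code being walked through next: restrict-qualified pointers, row sizes n_C, n_A, n_B for the outer matrices, a power-of-two bit trick, a coarsened base case, a temporary matrix D with row size n, a token-pasting index macro, eight spawned half-size multiplies, a sync, and a parallel add. The names mm_dac and mm_base, the macro spelling, and the THRESHOLD value are my guesses, not the slide's exact code:

    #include <cilk/cilk.h>
    #include <stdlib.h>

    // Element (i, j) of a submatrix M embedded in an outer matrix whose
    // rows have length n_M.  The ## operator pastes tokens, so X(C, i, j)
    // automatically picks up the row length n_C, and so on.
    #define X(M, i, j) ((M) + (i) * (n_##M) + (j))

    #define THRESHOLD 32   // coarsening cutoff for the base case (a guess)

    // Base case: ordinary triply nested serial loop, accumulating into C.
    static void mm_base(double *restrict C, int n_C,
                        const double *restrict A, int n_A,
                        const double *restrict B, int n_B, int n) {
      for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
          for (int k = 0; k < n; ++k)
            *X(C, i, j) += *X(A, i, k) * *X(B, k, j);
    }

    // C += D, elementwise, as a doubly nested parallel loop.
    static void mm_add(double *restrict C, int n_C,
                       const double *restrict D, int n_D, int n) {
      cilk_for (int i = 0; i < n; ++i)
        cilk_for (int j = 0; j < n; ++j)
          *X(C, i, j) += *X(D, i, j);
    }

    // Divide-and-conquer multiply: C += A * B, all n x n, n a power of 2
    // (the bit trick: n is a power of 2 iff (n & (n - 1)) == 0).
    void mm_dac(double *restrict C, int n_C,
                const double *restrict A, int n_A,
                const double *restrict B, int n_B, int n) {
      if (n <= THRESHOLD) {                 // coarsened leaves
        mm_base(C, n_C, A, n_A, B, n_B, n);
        return;
      }
      int h = n / 2;                        // half dimension
      double *D = calloc((size_t)n * n, sizeof(double));  // temp, row size n
      int n_D = n;
      // Eight recursive multiplies: four accumulate into C's quadrants and
      // four into D's, so all eight are independent and can run in parallel.
      cilk_spawn mm_dac(X(C,0,0), n_C, X(A,0,0), n_A, X(B,0,0), n_B, h);
      cilk_spawn mm_dac(X(C,0,h), n_C, X(A,0,0), n_A, X(B,0,h), n_B, h);
      cilk_spawn mm_dac(X(C,h,0), n_C, X(A,h,0), n_A, X(B,0,0), n_B, h);
      cilk_spawn mm_dac(X(C,h,h), n_C, X(A,h,0), n_A, X(B,0,h), n_B, h);
      cilk_spawn mm_dac(X(D,0,0), n_D, X(A,0,h), n_A, X(B,h,0), n_B, h);
      cilk_spawn mm_dac(X(D,0,h), n_D, X(A,0,h), n_A, X(B,h,h), n_B, h);
      cilk_spawn mm_dac(X(D,h,0), n_D, X(A,h,h), n_A, X(B,h,0), n_B, h);
      mm_dac(X(D,h,h), n_D, X(A,h,h), n_A, X(B,h,h), n_B, h);
      cilk_sync;                            // wait for all eight products
      mm_add(C, n_C, D, n_D, n);            // combine: C += D
      free(D);
    }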
And now here's my code; I just want to point out a couple of things, and then we'll quit and I'll let you take a look at the rest of this on your own. Here's my divide-and-conquer matrix multiply. I use restrict; everybody familiar with restrict? It tells the compiler that these pointers can be assumed not to alias, so that when you change one you're not changing another. That lets the compiler produce better code. Then the row sizes are n_C, n_A, and n_B: those are the row lengths of the outer matrices that the submatrices are taken out of, while the submatrices we're actually multiplying have size n by n, because when I do my recursion I need to talk about submatrices embedded in those larger outside matrices. Here is a great piece of bit tricks: this line asserts that n is a power of two. Go back and remind yourself of the bit tricks, because that's a clever, very quick way to say that n is a power of two; take a look at it. Then we coarsen the leaves with a base case: the base case just goes through and solves the problem for small n with a typical triply nested loop. And then we allocate a temporary n×n array, define the temporary array to have underlying row size n, and here is this fabulous macro that makes all the index calculations easy. It uses the ## operator, which pastes tokens together, so that when I pass C, it pastes the name together and picks up whatever row-size value goes with what I passed. That allows me to do each of these address calculations by just writing X(...) and giving the formula; otherwise you'd get driven nuts by the formulas. So take a look at that macro, because it may help you in some of your other work. Then I sync, add everything up (the addition is just a doubly nested parallel addition), and free the temporary. What I would like you to do is go home and take a look at the analysis of this. It turns out this has way more parallelism than you need, and if you reduce the amount of parallelism, you get much better performance; and there are several other algorithms in there as well. So I'll try to get this posted tonight. Thanks very much.
height and then I look at how many if I've got T of one work at every leaf how many leaves are there hmm and for this analysis we're gonna sue everything works out everything 2n is a perfect you know power of B and so forth so if I go down k levels how many subproblems are there at K levels a to the K so how many levels am I going down H which is log base B of n so I end up with a to the log base B of n times what's at the leaf which is T of 1 okay and T of 1 is constant a log base B of n that's the same as n to the log base B of A ok that's just a little bit of exponential algebra ok and you can one way to see that is take the log of both sides of both equations you realize that all that's used there's the commutative law because if you take the log base if you take log of a log BN you get log B n times log if you take a base B log ba and you get the same thing if you take the log base B of the what I have is the result then you get the exponent log base B a times log base B of n so same thing just in different orders okay so that's just a little bit of math because this is basically we're interested in what's the growth in n so we prefer not to have log ins and the diameter we prefer to have ends and sorry in the exponent we prefer to have ends okay so that's basically the the number of things and so then the question is how much work is there if I add up all of these guys all the way down there how much work is in is in all those levels ok and it turns out there's a trick ok and the trick is to compare log base B of n to the log base B of A with F of N and there are three cases that are very commonly arising and for the most part that's what we're going to see is just these three cases okay so case one is that is the case where log base B n to the log base B of A is much bigger than F of n ok and by much bigger I mean it's bigger by a polynomial amount in other words there's an epsilon such that the ratio between the two is is at least n to the epsilon there's an epsilon greater than zero in other words F of n is o of n to the log base B of a minus epsilon in the in the numerator there okay which is the same as n log base B of a divided by n via Epsilon in that case this is geometrically increasing and so all the weight the constant fraction of the weight is in the leaves okay so then the answer is T of n is n to the log base B of A so if n to the log base B of A is bigger than F of n the answer is n to the log base B of a as long as it's bigger by a polynomial amount okay now case two is is in is the situation where we're into the log base B of A is approximately equal to F of n okay they're very similar in in growth and specifically we're going to look at the case where F of n is n to the log base B of A times a log or a poly logarithmic factor log to the K of n for some constant K greater than or equal to zero that greater than or equal to zero is very important okay you can't do this for negative K okay even though negative K is defined and meaningful this is not the answer when K is negative but if K is greater than equal to zero then it turns out that what's happening is the it's growing arithmetic aliy from beginning to end and so when you solve it what happens is you essentially add an extra log term so the answer is since if f of n is n to the log base B of a log to the K n the answer is n to the log base B of a log to the k plus 1 of n ok so you you kick in one extra log ok and and basically it's like on average there's basically you know I it's almost all equal and their log 
The intuition is that the levels are almost all equal and there are about lg n of them, so you tack on an extra log. That's not quite the math, but it's good intuition.

Finally, case 3 is the case where n^(log_b a) is much less than f(n), and specifically where it's smaller by, once again, a polynomial factor: an n^epsilon factor for some epsilon > 0. Here f also has to satisfy what's called a regularity condition, and it's a condition satisfied by all the functions we're going to look at: polynomials, polynomials times logarithms, and things of that nature. It's not satisfied by weird functions like sines and cosines, and, more relevantly, it's not satisfied if you have things like exponentials. But for everything we're going to look at it holds, and in that case the level sums are geometrically decreasing, so all the work is at the root, which basically costs f(n), and the solution is T(n) = Θ(f(n)).

We're going to hand out a cheat sheet, so let me conscript the TAs to get that distributed as quickly as possible. Here it is; that's basically what's on it. Now let's do a little in-class self-quiz.

We have T(n) = 4T(n/2) + n. The solution is the kind of thing that, as a computer scientist, you just memorize, so that in any situation you don't even have to look at the cheat sheet. It's one of those basic things all computer scientists should know, kind of like knowing what 2^15 is. Yes: 32,768. And interestingly, that's my office number: I'm in 32-G768, the only one in the Stata Center with a power-of-two office number, and that was totally unplanned. So if you need to remember my office number, it's 2^15.

So what's the solution here? Which case? Case 1, and the solution is Θ(n²), very good. n^(log_b a) is n^(log_2 4); log_2 4 is 2, so that's n², which is much bigger than n. So it's case 1 and the answer is Θ(n²). Pretty easy.

How about T(n) = 4T(n/2) + n²? Yes: it's Θ(n² lg n). Once again n^(log_b a) is n², and n² is n² lg^0 n, so it's case 2 with k = 0, and you just tack on an extra log factor: Θ(n² lg n).

And then of course we've got to do T(n) = 4T(n/2) + n³. Yes: Θ(n³), because once again n^(log_b a) is n², which is much less than n³; n³ is bigger, so it dominates, and we get Θ(n³).

What about T(n) = 4T(n/2) + n²/lg n? No, that's not the answer. Which case do you think it is? Case 2? Nope. It's none of the cases; it's a trick question. I'm a nasty guy. This is one where the master method does not apply. It would look like case 2, but k has to be ≥ 0, and here k is -1, so case 2 doesn't apply. And case 1 doesn't apply either: we're comparing n² with n²/lg n, and the ratio there is lg n, which is smaller than any n^epsilon, and you need an n^epsilon separation. The actual answer for that one, by the way, is Θ(n² lg lg n), which you can prove by the substitution method.
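For reference, what's on the cheat sheet amounts to the following standard statement of the master method (the case-3 regularity condition is that a·f(n/b) ≤ c·f(n) for some constant c < 1):

```latex
T(n) = a\,T(n/b) + f(n), \quad a \ge 1,\; b > 1:
\qquad
T(n) = \begin{cases}
\Theta\bigl(n^{\log_b a}\bigr) & \text{if } f(n) = O\bigl(n^{\log_b a - \varepsilon}\bigr) \text{ for some } \varepsilon > 0,\\[2pt]
\Theta\bigl(n^{\log_b a}\lg^{k+1} n\bigr) & \text{if } f(n) = \Theta\bigl(n^{\log_b a}\lg^{k} n\bigr) \text{ for some } k \ge 0,\\[2pt]
\Theta\bigl(f(n)\bigr) & \text{if } f(n) = \Omega\bigl(n^{\log_b a + \varepsilon}\bigr) \text{ for some } \varepsilon > 0 \text{ and } f \text{ is regular.}
\end{cases}
```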
The substitution proof uses the same idea; you just do a little bit different math. There's also a more general solution to this kind of recurrence called the Akra-Bazzi method, but for most of what we're going to see, applying Akra-Bazzi is more complicated than simply doing the table lookup: which term is bigger, and if it's sufficiently bigger, it's one case or the other, plus the common case where they're about the same to within a log factor. So we're going to use the master method, but there are more general ways of solving these kinds of things.

Okay, let's talk about some multithreaded algorithms. The first thing I want to do is talk about loops, because loops are a great thing to analyze and understand: so many programs have them. Probably 90% or more of the programs that get parallelized are parallelized by making loops parallel; the spawn-and-sync, subroutine type of parallelism is not done that frequently in code. Usually it's loops.

What we're going to look at, as an example, is code for an in-place matrix transpose. I want to swap the lower triangle of the matrix with the upper triangle, and here's code to do it where I parallelize the outer loop: the outer index i runs from 1 up to n - 1 (we're indexing from 0), and the inner loop j runs from 0 up to i - 1.

Now, this is one of those trick questions they give you in interviews: write the transpose of a matrix with nested loops. What many people do is run the inner loop to n rather than to i, and what happens then? You get a very expensive identity function, and there's an easier, faster way to compute the identity than with doubly nested loops: you swap everything, and then you swap it all back. So it's important to get the iteration space right. If you take the (i, j) values and map them out in the plane, what's the shape you get? It's not a square, which is what it would be if both indices ran from 0 to n - 1. It's a triangle: we run through all the elements of the lower triangle and swap them with the elements of the upper triangle, but the iteration space covers just the lower triangle (or, equivalently, just the upper triangle, if you prefer to view it that way), not both, because then you'd get the identity. Anyway, that's a tip for when you're interviewing: double-check that the loop indices are what they ought to be.

So here we've parallelized the outer loop, which raises the question: how much time does it take to execute each iteration of that loop? For a given value of i, what does the body cost? Yes: Θ(i). Which means that if you've got a certain number of processors, you don't want to just chunk the range up so that each processor gets an equal range of i values to work on; you need something that load-balances. And this is where the Cilk technology is at its best: when there are unbalanced things like this, it does the right thing, as we'll see.
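In code, the parallel transpose being described looks roughly like this (a sketch; A is assumed to be the n x n array of doubles being transposed):

```c
#include <cilk/cilk.h>

// In-place transpose: swap the lower triangle with the upper triangle.
// Iteration i of the parallel outer loop does Theta(i) work, so the
// iterations are unbalanced and need a load-balancing scheduler.
void transpose(int n, double A[n][n]) {
  cilk_for (int i = 1; i < n; ++i) {
    for (int j = 0; j < i; ++j) {   // note: j runs to i, not to n!
      double temp = A[i][j];
      A[i][j] = A[j][i];
      A[j][i] = temp;
    }
  }
}
```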
So let's talk a little bit about how loops are actually implemented by the Open Cilk compiler and runtime system. We have this doubly nested loop, but the only one we're interested in parallelizing is the outer one, and what the compiler does is create a recursive program for the loop. What is that program doing? The loop body has been lifted into the recursive program, and the program finds a midpoint and recursively calls itself on the two halves, until it gets down to, in this case, a one-element range, and then it executes the body of the loop, which here is itself a for loop, just not a parallel one. So it's doing divide-and-conquer; it's basically tree splitting.

If I look at what's going on with the dag model we saw before, the highlighted part is the lifted loop control, and down below in purple is the lifted body. The control basically says: divide the range into two parts, spawn one recursion, call the other, and keep dividing like that until you reach the base condition. And the work done in iteration i of the loop grows from 1 up to n; I've illustrated it for n = 8, but in general it runs from 1 to n. Is that clear? That's what's actually going on: the Open Cilk runtime system does not have a loop primitive. It only has, essentially, the ability to spawn, so loops are effectively translated into this divide-and-conquer, and that's the way you need to think about loops when you're thinking in parallel. Makes sense?

So one of the questions is: that seems like a lot of code for a simple loop; what did we pay for it? How much does that cost us? Let's analyze parallel loops a little bit.
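Here is a sketch of the kind of recursion the compiler effectively generates (illustrative names, not the compiler's actual output):

```c
#include <cilk/cilk.h>

// A divide-and-conquer lowering of cilk_for: split the iteration range
// in half, run the halves in parallel, recurse down to one iteration.
static void loop_recur(int lo, int hi, void (*body)(int)) {
  if (hi - lo > 1) {                       // more than one iteration left?
    int mid = lo + (hi - lo) / 2;          // split the range at the midpoint
    cilk_spawn loop_recur(lo, mid, body);  // first half runs in parallel...
    loop_recur(mid, hi, body);             // ...with the second half
    cilk_sync;                             // wait for both halves
  } else if (hi - lo == 1) {
    body(lo);                              // base case: one iteration
  }
}
// cilk_for (int i = 1; i < n; ++i) body(i);  behaves like  loop_recur(1, n, body);
```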
As you know, we analyze things in terms of work and span. So first: what's the work of the original computation, the doubly nested serial loop? In iteration i you do Θ(i) work, and the sum of i from i = 1 to n gives you what? Yes: Θ(n²). And although you're not doing half of the n² work that you might think was there if you wrote the unfortunate identity function, you are doing the other half, so it's still Θ(n²).

So the question is how much work is in the parallel computation, because now I've got this whole tree-spawning business going on in addition to the work in the leaves. The leaf work along the bottom is Θ(n²), the same as in the serial loop. How much does the other stuff up top cost? It looks huge; it looks bigger than everything else, doesn't it? How much is there? Basic computer science: it's Θ(n). Why is it Θ(n) in the upper part? Yep: going from the leaves up to the root, every level is halving, so it's geometric, and the total is proportional to the number of leaves, with constant work in each of those internal vertices. So the total work is Θ(n²). Another way of thinking about it: we've created a complete binary tree with n leaves, and how many internal nodes, that is, nodes with children, does a tree with n leaves have? Exactly n - 1. That's a property of any full binary tree, any binary tree in which every non-leaf has two children. Nice tree properties, nice computer science properties. We like computer science; that's why we're here, not because we're going to make a lot of money.

Okay, let's look at the span of this, because that's how we understand parallelism: by understanding work and span. I see some familiar hands. Θ(n)? Yes. How did you get that? Right: the longest path goes from the root down to the iteration of size 8 and back up, and that 8 is really n in the general case. So we go down the control tree, and the span of the loop control is lg n. That's the key takeaway: the span of the loop control is Θ(lg n). When I do divide-and-conquer like that, with an infinite number of processors I could get all the control done in logarithmic time. But the big iteration is linear, Θ(n) (n is 8 in the picture). So the span is Θ(lg n) + Θ(n), which is Θ(n).

So what's the parallelism here? Θ(n): it's the ratio of the two. Is that good? Not "parallelism equal to n²"; is Θ(n) good parallelism? Yeah, that's pretty good, because typically you're going to be working on systems that have maybe 64 or 128 cores if you're on something big, and for a problem like this, on inputs big enough to be worth running in parallel, n is way bigger than the number of processors. Not always the case, but here it is. Any questions about this? So we can use our work and span analysis to understand that the work overhead is a constant factor (we'll talk more about work overhead later), but from an asymptotic point of view our work is Θ(n²), just like the original code, and we have a fair amount of parallelism: Θ(n).
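Summing up the outer-loop-parallel transpose:

```latex
T_1(n) \;=\; \sum_{i=1}^{n-1} \Theta(i) \;+\; \Theta(n) \;=\; \Theta(n^2),
\qquad
T_{\infty}(n) \;=\; \Theta(\lg n) \;+\; \max_{1 \le i < n} \Theta(i) \;=\; \Theta(n),
\qquad
\frac{T_1(n)}{T_{\infty}(n)} \;=\; \Theta(n).
```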
Now, how about if we make the inner loop parallel as well, rather than just the outer loop? How much work do we have in that situation? Hint: all work questions are trivial, or at least no harder than they were when you were doing ordinary serial algorithms. Maybe we can come up with a trick question on the exam where the work changes, but almost always the work doesn't change. So what's the work? Yes, Θ(n²): parallelizing stuff doesn't change the work. What it hopefully does is reduce the span of the calculation, and by reducing the span we get big parallelism. That's the idea. Now, sometimes when you parallelize things you add work, and that's unfortunate, because it means that even if you take your parallel program and run it on one processing core, you're not going to get any speedup; it's going to be a slowdown compared to the original algorithm. So we're generally interested in work-efficient parallel algorithms, which we'll talk more about later. Generally we're after work efficiency.

So what's the span of this? It is not still Θ(n). What was your thinking in saying it was Θ(n)? Notice that the length-8 inner iteration is now a parallel for loop itself. Not quite right, but this answer is commendable, and this is why I try to have a bit of a Socratic method in here, where I'm asking questions as opposed to just lecturing and having it go over your heads. You have the opportunity to ask questions and to have your particular misunderstandings corrected. That's how you learn, and I'm really in favor of anybody who wants to come here and learn; that's my desire, and that's my job: to teach people who want to learn. So I hope this is a safe space for you folks to be willing to put yourselves out there and not necessarily get stuff right. I can't tell you how many times I've screwed up, and it's only by airing it, by having somebody say, "no, I don't think it's like that, Charles, it's like this," and my saying, "oh yeah, you're right, that was stupid," that I learn. But I no longer beat my head when I'm being stupid. That's our natural state, stupidity; we have to work hard not to be stupid.

Question? Yeah. My experience, and I've been at MIT almost 38 years, is that when one person has a question, all these other people in the room have the same question, and by articulating it you're actually helping them out. If things are going too slowly and we're wasting people's time, that's my job as the lecturer to catch, and I'll say, let's take this offline, you can talk to me after class. But I appreciate your point of view; that's considerate. Actually, though, it's more considerate to be willing to air what you think and have other people say, "I had that same question." Certainly there will be people in the class who roll their eyes, but look: I don't teach to the top 10%, I try to teach the top 90%, and believe me, I get farther with students and have more people enjoying the course and learning the stuff, which is not necessarily easy stuff. After the fact, you're going to discover this is easy, but while you're learning it, it's not. This is what Steven Pinker calls the curse of knowledge: once you know something, you have a really hard time putting yourself in the position of what it was like not to know it. It's very easy to learn something and then, when somebody doesn't understand it, go "oh, whatever." But the fact of the matter is that it's that empathy
that makes you a good communicator. All of you, I know, are at some point going to have to communicate with people who are not as technically sophisticated as you folks are, so it's really good to appreciate that this stuff isn't necessarily easy when you're learning it. Later it'll be easy, but that doesn't mean it's easy for somebody else. So those of you who think some of these answers are "come on, move along, move along": please be patient with the other people in the class. If they learn better, they'll be better teammates on projects and so forth. And nobody here is in competition with anybody for grades or anything; we set it up so you're going against benchmarks. So we want to make this something where everybody helps everybody learn. I probably spent too much time on that, but in some sense not nearly enough.

Okay, so the span is not Θ(n); we got that much. Who else would like to hazard a guess? It is Θ(lg n). What's your reasoning? Yep: lg n for the outer loop control plus lg n for the inner loop control. Good. And then what about the leaves? You've got to add in the span of the leaves; what you gave was just the span of the control. The span of a leaf is Θ(1). Right. So the span of the outer loop control is Θ(lg n), the span of the inner loop control is Θ(lg n), and the span of the body is Θ(1), because the body now does just one iteration of serial execution, not i iterations. I add all that together and I get Θ(lg n). Does that make sense?

So the parallelism is, and every hand in the room should be up waiting to be called on, yes: Θ(n²/lg n), the ratio of the two. Good. So the parallelism is n²/lg n, which is more parallel than the previous version. But you've got to remember: even though it's more parallel, is it a better algorithm in practice? Not necessarily, because parallelism is like a threshold. Remember parallel slackness: if the amount of parallelism is much greater than the number of processors, you're good. For something like this, with Θ(n) parallelism you're already way bigger than the number of processors, so you don't need to parallelize the inner loop. You'll be fine.

In fact, we're going to talk a little bit about overheads, and I'm going to do that with an example using vector addition. Here's a really simple piece of code: it adds two vectors, two arrays, elementwise, adding b into a. I parallelize it by writing cilk_for in front rather than an ordinary for, and that gives us the divide-and-conquer tree once again, with n leaves. The work is Θ(n), because we've got n iterations of constant time each, and the span is just the control, Θ(lg n), so the parallelism is n/lg n. This is basically easier than what we just did.
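The vector-add loop just described, as a sketch:

```c
#include <cilk/cilk.h>

// Elementwise a[i] += b[i] with a parallel loop: work Theta(n),
// span Theta(lg n) of loop control, parallelism Theta(n / lg n).
void vadd(double *a, double *b, int n) {
  cilk_for (int i = 0; i < n; ++i)
    a[i] += b[i];
}
```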
Now, if I look at this, though, the work includes some substantial overhead, because of all those function calls in the control tree. Saying the work is Θ(n) is good enough if you're certain kinds of theoreticians; for this kind of theoretician, that's not good enough. I want to understand where these overheads are going. In this case, if I do the divide-and-conquer all the way down to n = 1, what am I doing in a leaf? It's an add: two memory fetches, a memory store, and an add, and the memory operations are going to be the most expensive things there. That's all that's going on, and yet right before it I've done a subroutine call, a parallel subroutine call mind you, and that has substantial overhead. Do you really do a subroutine call just to add two numbers together? That's pretty expensive. So let's take a look at how we can optimize away some of this overhead; this gets more into the realm of engineering.

The Open Cilk system has a pragma (a pragma is a compiler directive, a suggestion to the compiler), which in this case suggests a grain size of G, for whatever you set G to. It shows up in the lowered code as follows: where the recursion used to stop when high > low + 1 failed, so that you ended up with a single element, it now stops with up to G elements per chunk at the leaves. So I can amortize the subroutine overhead across G iterations rather than across one iteration. That's coarsening. If the grainsize pragma is not specified, the Cilk runtime system makes its best guess to minimize the overhead: at runtime it figures out how many cores the loop is running on and makes a good guess as to how much to run serially at the leaves and how much to do in parallel. Does that make sense? It's basically trying to overcome that per-spawn cost.

So let's analyze this a little. Let I be the time for one iteration of the loop body (for this particular loop body, that's the cost of those three memory operations plus an add), let G be the grain size, and let S be the time to perform a spawn and return, basically the overhead for spawning. If I look at the work in this context, I've got n · I for the iterations, 1, 2, 3, up to n of them, and since there are n/G leaves of size G, there are n/G - 1 internal nodes, each costing me the subroutine overhead S. So the total work is T1 = n·I + (n/G - 1)·S.

Now, in the original code, without the cilk_for, before I put in all this parallel control stuff, what would the work be? Yes: n · I. We're just doing n iterations. There's a little bit of loop control, but that loop control is really cheap; on a modern out-of-order processor, the cost of incrementing a variable and testing it against its bound is dwarfed by the stuff going on inside the loop. So the serial work is n·I.
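The grainsize hint looks something like this (a sketch; the exact spelling of the pragma has varied across Cilk versions, so check your compiler's documentation):

```c
// Suggest that the runtime run chunks of up to G iterations serially at
// each leaf, amortizing one spawn's overhead over G iterations instead of 1.
#pragma cilk grainsize G
cilk_for (int i = 0; i < n; ++i)
  a[i] += b[i];
```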
So this part, the (n/G - 1)·S term, is all overhead; the n·I part is what the computation cost me originally.

Now let's take a look at the span. If I add up what's at a leaf, that's G·I, and then I've got the overhead along any root-to-leaf path, which is proportional (I'm ignoring constants here to make it easier) to lg(n/G) times S: there are G iterations in each leaf, so the number of leaves is n/G, and the tree has lg(n/G) levels. Whether I count the path just going down, or going down and coming back up, is only a constant factor, so it's fine. That gives T∞ = G·I + Θ(lg(n/G))·S.

Who's confused? Okay, let's have some questions. I know you're confused, so believe me. One of my great successes in life was discovering that confusion is how I usually am, and then getting unconfused is the thing, because I see so many people going through life thinking they're not confused, and you know what? They're confused, and that's a worse state of affairs to be in than knowing that you're confused. So let's have questions from the people who are confused, because I want to make sure everybody gets this, and for those who think they know it already: sometimes it helps to know it even better by going through a discussion like this. Somebody ask me a question. Yes.

Where does the second half of the work, the (n/G - 1)·S, come from? Okay: first, if I've got G iterations at each leaf of a binary tree, and a total of n iterations, how many leaves do I have? n/G, right. Second, the fact about binary trees again, true of any full binary tree and in particular complete binary trees: how many internal nodes are there when the number of leaves is n/G? It's n/G - 1. Does that clear it up for some people? Good. And in each of those internal nodes I've got to do those bookkeeping operations highlighted on the slide, which is what I'm calling S. So you've got the work down.

Now, who has a question about span? Span is my favorite. Work is good, and work is actually more important in most contexts, but span is so cool. So what I'm saying, and I may have mis-said something a moment ago, is that the span is basically starting at the top, taking any path down to a leaf, and then going back up. If I look at that, it's lg of the number of leaves, and the number of leaves, as we agreed, is n/G; and at each node along the way it's at most S for the subroutine calling and bookkeeping. Does that make sense? Still no? It could be any of the paths, but take a look at all the paths: they go down and back up. There's no path that goes down and around and up and so forth. This is a dag, so you've got to follow the directions of the arrows; you're either going down, or you've started back up. So a path always goes essentially down through a set of subroutines and back up through a set of subroutines.
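So, with I the time for one iteration, G the grain size, and S the cost of a spawn and return, the analysis so far says:

```latex
T_1 \;=\; n I \;+\; \Bigl(\frac{n}{G} - 1\Bigr) S ,
\qquad\qquad
T_{\infty} \;=\; G I \;+\; \Theta\!\Bigl(\lg \frac{n}{G}\Bigr) S .
```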
Does that make sense? Think about the recursive code, what happens when you do divide-and-conquer: if you were operating with a stack, how many things would get stacked up and then unstacked? The path down and back up is at most logarithmic. Now, if one subtree here were dependent on another subtree, then indeed the span would grow; but the whole point is not to have that, to make the two subtrees independent so I can run them at the same time. There's no dependency there. Are we good?

So here I have the work and the span, and I want two things out of this. Number one, I want the work to be small: close to n·I, the work of the ordinary serial algorithm. Number two, I want the span to be small, so it's as parallel as possible. Those two goals push in opposite directions. Look at the dominant term involving G in the work: G is dividing into n·S, so if I want the work to be small, I want G to be big. In the span, the dominant term involving G is G multiplied by I (there is another term, but it's lower order), so if I want the span to be small, I want G to be small. They go in opposite directions, so what we're interested in is finding a happy medium.

In particular, if you look at this, what I want is G to be at least S/I. Why? If G is much bigger than S/I, then in the work, the overhead term (n/G)·S (get rid of the -1, it doesn't matter) is n·(S/G), and S/G is much smaller than I, so the overhead ends up much less than the n·I term. You see that? That's algebra: if G >> S/I, then (n/G)·S << n·I. Does that make sense? So we want G much greater than S/I, and then the overhead is small, because I'm doing a whole bunch of iterations per spawn, and the function call becomes a who-cares. That's the goal. How are we doing on time? Okay, I'm going to get through everything I expect to get through despite my rant.

So here's another implementation of the for loop to add two vectors. This one uses a subroutine, vadd, which itself does just a serial vector add, and now what I do is run through the loop and spawn off additions in chunks of size G: spawn off a vector add of size G, another of size G, and so on, jumping ahead by G each time. (The min there is just for the boundary condition.) So let's take a look at the analysis of that.
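A sketch of this chunked version (illustrative names; vadd_serial is the plain serial loop):

```c
#include <cilk/cilk.h>

// Serial vector add: the leaf work.
static void vadd_serial(double *a, double *b, int m) {
  for (int i = 0; i < m; ++i) a[i] += b[i];
}

#define MIN(x, y) ((x) < (y) ? (x) : (y))

// Spawn off the additions in chunks of size G; MIN handles the last,
// possibly short, chunk when G doesn't divide n.
void vadd_chunked(double *a, double *b, int n, int G) {
  for (int j = 0; j < n; j += G)
    cilk_spawn vadd_serial(a + j, b + j, MIN(G, n - j));
  cilk_sync;
}
```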
So now what I've got is n/G spawned-off chunks of G iterations, each iteration costing I. Here's the dag structure: the for loop containing the cilk_spawn runs along the top, and it's spawning off calls to vadd, each of which does G iterations, spawn off G, spawn off G, spawn off G, and so forth. (The boundary case, let's not worry about.)

So what's the work of this? To make things easy to begin with, let's assume G = 1 and analyze that. This is a common approach, by the way: assume the grain size is 1, analyze, and then as a practical matter coarsen to make it more efficient. So if G = 1, what's the work? Yes: Θ(n), because the per-spawn costs are constant; I've got Θ(n) work along the top spine for the n spawns and Θ(n) work down below in the leaves.

By the way, this technique is called strip mining: if you take away the parallelism, you've taken a loop of length n and turned it into nested loops, one with n/G iterations and one with G iterations, going through exactly the same set of iterations. You're replacing a singly nested loop by a doubly nested loop, and the only difference here is that the inner loop's work is spawned off.

What's the span for this? After all, I've got n spawns there (n spawns, not n spans). What's the span going to be? Sorry, I couldn't hear. Θ(S)? No, it's bigger than that. You'd think, gee, I just have to go down and up once, but the span is the longest path in the whole dag. Where does the longest path start? Upper left. And where does it end? Upper right. It goes all the way down the backbone along the top, then flips down into the last leaf and back up. So how many things am I spawning off along that backbone if G = 1? n of them. So the span is Θ(n). It's long. So what's the parallelism here? Yeah: Θ(1). And what do we call that? "Sad" is right, but there's a more technical name: puny. It's like we went through all this work, spawned off all that stuff, added all this overhead, and it didn't go any faster. I can't tell you how many times I've seen people do this when they start parallel programming: "but I spawned off all this stuff!" Yeah, but you didn't reduce the span.

Okay, that was the analysis for G = 1. Now let's increase the grain size and analyze in terms of G. Once again, what's the work? The work is always a gimme: Θ(n), same as before; the work doesn't change when you parallelize things differently. But what's the span? This is a tricky one. Close, that's half right: Θ(n/G + G), don't forget the other term. The path we care about runs along the top, and then dives down through one of the chunks, and each chunk
has span Θ(G). So we've got Θ(n/G) along the top, because I'm doing chunks of size G, plus Θ(G) down the chunk: the span is Θ(n/G + G). Now, how can I choose G to minimize the span? (There's nothing to choose to minimize the work, apart from the overhead we already discussed.) What's the best value for G here? You got it: √n. One of the terms, G, is increasing, and the other, n/G, is decreasing; where do they cross? When they're equal, when G = n/G, that is, G = √n. So this actually has decent parallelism for n big enough; √n is not bad. It's okay to spawn things off in chunks, just don't make the chunks real little. And what's the parallelism? This one's always a gimme, it's the ratio: Θ(√n).

There's a quiz on parallel loops that I'm going to let you folks do offline; here are the answers, if you want to quickly write them down without having to think about it. Take a look at the notes afterwards and try to figure out why those answers are what they are.

So, some performance tips that make sense when you're programming with loops. One: minimize the span to maximize the parallelism, because the span is in the denominator, and generally you want to generate ten times more parallelism than processors if you want near-perfect linear speedup; we talked about that last time. Two: if you have plenty of parallelism, try to trade some of it off to reduce the work overhead. For any of these things you can fiddle with the numbers, the grain size in particular: reducing the parallelism also reduces the overhead, and as long as you've got sufficient parallelism, your code is going to run just fine in parallel. It's only when you don't have enough parallelism and you also don't want to pay the overhead that things get sticky; most of the time you'll have way more parallelism than you need, and the question is how to trade some of it away to reduce the work overhead. Three: generally, use divide-and-conquer recursion or parallel loops rather than spawning one small thing after another. It's better to use cilk_for, which already does divide-and-conquer parallelism, than to spawn off one small thing at a time, unless you can chunk them so that you have relatively few spawns. Spawning foo(i) one at a time would be fine if foo(i) were really expensive, because then there'd be lots of parallelism from all that work, but generally divide-and-conquer is better. Four: make sure the work per spawn is sufficiently large, because the spawn has an overhead, and the question is how big the chunks are that you're busting your work into; you can coarsen by using function calls and inlining near the leaves. Five: it's generally better to parallelize outer loops rather than inner loops, if you're forced to make a choice, because with inner loops you incur the overhead every single time, while with outer loops you can amortize it against the work going on inside, which doesn't carry the overhead. And six: watch out for scheduling overheads. Here's an example of two pieces of code that both have parallelism 2, where one of them is efficient and the other is lousy. The top one is efficient because it has two iterations that run in parallel and each does a lot of work, so only one scheduling operation happens. The bottom one has n iterations, each of which does only about 2 units of work, so I've got n iterations' worth of scheduling overhead. If you just look at the overheads, you can see the difference.
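Schematically, the two codes look something like this (a sketch; f stands in for some constant-time function):

```c
#include <cilk/cilk.h>

// Efficient: parallelism 2, and just one scheduling event, amortized
// against the Theta(n) work inside each parallel iteration.
cilk_for (int i = 0; i < 2; ++i)
  for (int j = 0; j < n; ++j)
    f(i, j);

// Lousy: also parallelism 2, but the serial outer loop launches a tiny
// parallel loop n times, paying a scheduling overhead on every iteration.
for (int j = 0; j < n; ++j)
  cilk_for (int i = 0; i < 2; ++i)
    f(i, j);
```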
Okay, I want to talk about matrix multiplication. I actually have a whole bunch of material here that I'm not going to get to, and I didn't expect to get to it, but I do want to get to some of matrix multiplication. People familiar with this problem? We're going to assume for simplicity that n is a power of 2.

Here's a typical way of parallelizing matrix multiplication: I take the two outer loops and parallelize them. I can't easily parallelize the inner loop, because if I do, I get a race condition: two iterations both trying to update C[i][j]. So I can't just parallelize k; I parallelize i and j. The work for this is what? Triply nested loop: n³. Everybody knows that for matrix multiplication, unless you do something clever like Strassen or one of the more recent algorithms like Vassilevska Williams's, the running time of the standard algorithm is Θ(n³). The span of this is what? That inner loop is linear size, and then you've got two logs: lg n + lg n + Θ(n), so it's Θ(n). So the parallelism is around Θ(n²). If I ignore constants and say I'm working on matrices of, say, 1000 by 1000, the parallelism is about 1000², which is a million. Wow, that's a lot of parallelism. How many processors are you running on? Is a million bigger than ten times the number of processors? By a little bit.

Now, there's another strategy one can use, which is called divide-and-conquer, and it's the strategy used in Strassen's algorithm. We're not going to do Strassen; we're just going to use the eight-multiply version of this. For people who know Strassen, more power to you: it's a great algorithm, really surprising, really amazing, and it's actually worthwhile in practice, by the way, for sufficiently large matrices. The idea is that I can multiply two n x n matrices by doing eight multiplications of n/2 x n/2 matrices and then adding n x n matrices.

Now, when we start talking matrices, this is a little bit of a diversion from the algorithms, but it's so important: the representation of matrices is one of the things that gets people into trouble when they're doing any kind of two-dimensional coding. So I want to talk a little bit about indexing; we'll talk about this more later when we do cache behavior and such. How do you represent submatrices? The standard way is either row-major or column-major order, depending on the language you use. Fortran uses column-major ordering, so there are a lot of subroutine libraries out there that are column-major, but for the most part C, which we're using, is row-major. Row-major means I take the matrix row after row and lay the rows out one after another, linearly through memory.
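The looping version just described, written with explicit row-major one-dimensional indexing (a sketch):

```c
#include <cilk/cilk.h>

// Parallelize the two outer loops; the k loop must stay serial, or two
// iterations would race on the update of C[i*n + j].
// Work Theta(n^3); span Theta(lg n) + Theta(lg n) + Theta(n) = Theta(n);
// parallelism Theta(n^2).
void mm_loops(double *restrict C, const double *restrict A,
              const double *restrict B, int n) {
  cilk_for (int i = 0; i < n; ++i)
    cilk_for (int j = 0; j < n; ++j)
      for (int k = 0; k < n; ++k)
        C[i*n + j] += A[i*n + k] * B[k*n + j];  // row-major: (i,j) at i*n + j
}
```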
So every two-dimensional matrix can be indexed as a one-dimensional array, which is exactly what that code is doing; all you have to know is where the matrix begins. But if you have a submatrix, it's a little more complicated. Here's the idea: suppose you have a submatrix M starting at some location inside an outer matrix, and the rows of the outer matrix have length n_M. (I named these variables stupidly on the slide; the big matrix should have had a different name. Bear with me.) What I'm interested in is finding the (i, j) element of the submatrix M: if I tell you the beginning of M, what do you add to get to (i, j)? The answer is that I've got to skip i rows, and each row I skip is a full row of the outer matrix that the submatrix is taken out of, not a row of the local submatrix; that's i times the row length of the outer matrix, and then I add j within the row. So the (i, j) element lives at offset i · n_M + j from the start of M. Does that make sense? Because the submatrix is embedded in there, you have to skip over full rows of the outer matrix. So you can't, in general, just pass a submatrix and expect to do indexing on it as if it were freestanding. If you make a copy, sure, then you can index it according to the new copy; but if you want to operate in place on matrices, which is often the case, then you have to understand that for every row you have to jump a row of the outer matrix, not a row of whatever your submatrix is, when you're doing the divide-and-conquer.

So when we look at doing divide-and-conquer, I have a submatrix that I want to divide into four submatrices of size n/2, and the question is where the starting corner of each of those is. M00 starts at the same place as M. Where does M01 start? Yes: M + n/2. Where does M10 start? This is the tricky one. Here's the answer: M + n_M · (n/2), because I'm going down n/2 rows, and I've got to go down rows of the outer matrix. And M11 combines the two: M + n_M · (n/2) + n/2. In general, for row r and column c each in {0, 1}, the corner is M + (r · n_M + c) · (n/2); plug in 0 or 1 for each and it matches up.

So now here's my code, and I just want to point out a couple of things; then we'll quit and I'll let you take a look at the rest of this on your own. Here's my divide-and-conquer matrix multiply. I use restrict; everybody familiar with restrict? It tells the compiler that these pointers can be assumed not to alias, so that when you change one you're not changing another, which lets the compiler produce better code. Then the row sizes are n_C, n_A, and n_B: those are the row lengths of the outer matrices that the submatrices are taken out of, and the outer matrix has size n x n, because in the recursion I want to talk about submatrices embedded in that larger outside matrix. Then there's a great piece of bit tricks: a check that says n is a power of 2. It's very quick and very clever; go back and remind yourself of the bit tricks, and take a look at it.
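In code, the element and corner calculations just described might look like this (a sketch; n_M names the row length of the outer matrix the submatrix is embedded in):

```c
#include <stddef.h>

// (i, j) element of a submatrix starting at pointer M that is embedded in
// an outer row-major matrix whose rows have length n_M: moving down one
// row skips a full row of the *outer* matrix.
#define ELEM(M, n_M, i, j)  ((M)[(size_t)(i) * (n_M) + (j)])

// Starting corner of quadrant (r, c), with r, c in {0, 1}, of an n x n
// submatrix at M: offset (r * n_M + c) * (n/2) from M.
static double *corner(double *M, int n_M, int n, int r, int c) {
  return M + ((size_t)r * n_M + c) * (n / 2);
}
```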
Then we coarsen the leaves with a base case: for small n, the base case just solves the problem with the typical triply nested loop. We allocate a temporary n x n array, define the temporary array to have underlying row size n, and then there's this fabulous macro that makes all the index calculations easy. It uses the ## operator, which pastes tokens together, so that when I pass C it pastes together the name of C's row size, and the macro does the right indexing for whichever matrix I pass. That lets me write each of these address calculations by just saying X(...) and giving the formulas in one place; otherwise you'd be driven nuts by the formulas. So take a look at that macro, because it may help you in some of your other work. Then I spawn the eight subproblems, sync, and add it up; the addition is just a doubly nested parallel loop. And then I free the temporary.

What I'd like you to do is go home and take a look at the analysis of this. It turns out this has way more parallelism than you need, and if you reduce the amount of parallelism, you get much better performance. There are several other algorithms put in there as well. I'll try to get this posted tonight. Thanks very much.
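For reference, the two pieces just mentioned, the power-of-two check and the token-pasting index macro, might look something like this (a sketch with illustrative names, not necessarily the slide's exact code; n is assumed to be in scope):

```c
// Bit trick: n > 0 is a power of 2 exactly when it has a single set bit,
// so clearing the low bit with n & (n - 1) must leave zero.
#define IS_POWER_OF_2(n)  ((n) > 0 && ((n) & ((n) - 1)) == 0)

// Token pasting: with the row sizes named n_C, n_A, n_B, the ## operator
// glues "n_" onto the macro argument, so one macro computes quadrant
// corners for any of the three matrices.
#define X(M, r, c)  ((M) + ((r) * (n_ ## M) + (c)) * (n / 2))
// e.g. X(C, 1, 0) expands to ((C) + ((1) * (n_C) + (0)) * (n / 2))
```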
00:32:13,279 590 00:32:13,279 --> 00:32:15,799 591 00:32:15,799 --> 00:32:18,350 592 00:32:18,350 --> 00:32:21,680 593 00:32:21,680 --> 00:32:25,210 594 00:32:25,210 --> 00:32:29,480 595 00:32:29,480 --> 00:32:32,389 596 00:32:32,389 --> 00:32:34,909 597 00:32:34,909 --> 00:32:37,700 598 00:32:37,700 --> 00:32:40,279 599 00:32:40,279 --> 00:32:43,220 600 00:32:43,220 --> 00:32:44,450 601 00:32:44,450 --> 00:32:47,000 602 00:32:47,000 --> 00:32:49,250 603 00:32:49,250 --> 00:32:49,260 604 00:32:49,260 --> 00:32:49,840 605 00:32:49,840 --> 00:32:54,620 606 00:32:54,620 --> 00:32:56,930 607 00:32:56,930 --> 00:32:59,180 608 00:32:59,180 --> 00:33:01,629 609 00:33:01,629 --> 00:33:04,190 610 00:33:04,190 --> 00:33:08,690 611 00:33:08,690 --> 00:33:17,890 612 00:33:17,890 --> 00:33:21,470 613 00:33:21,470 --> 00:33:24,410 614 00:33:24,410 --> 00:33:27,590 615 00:33:27,590 --> 00:33:29,660 616 00:33:29,660 --> 00:33:32,780 617 00:33:32,780 --> 00:33:36,230 618 00:33:36,230 --> 00:33:37,790 619 00:33:37,790 --> 00:33:39,980 620 00:33:39,980 --> 00:33:42,740 621 00:33:42,740 --> 00:33:44,210 622 00:33:44,210 --> 00:33:46,190 623 00:33:46,190 --> 00:33:48,020 624 00:33:48,020 --> 00:33:49,760 625 00:33:49,760 --> 00:33:52,340 626 00:33:52,340 --> 00:33:54,770 627 00:33:54,770 --> 00:33:56,900 628 00:33:56,900 --> 00:33:59,090 629 00:33:59,090 --> 00:34:01,850 630 00:34:01,850 --> 00:34:06,200 631 00:34:06,200 --> 00:34:07,700 632 00:34:07,700 --> 00:34:09,170 633 00:34:09,170 --> 00:34:11,180 634 00:34:11,180 --> 00:34:14,270 635 00:34:14,270 --> 00:34:18,110 636 00:34:18,110 --> 00:34:20,900 637 00:34:20,900 --> 00:34:27,130 638 00:34:27,130 --> 00:34:31,400 639 00:34:31,400 --> 00:34:34,130 640 00:34:34,130 --> 00:34:36,680 641 00:34:36,680 --> 00:34:40,040 642 00:34:40,040 --> 00:34:42,280 643 00:34:42,280 --> 00:34:45,380 644 00:34:45,380 --> 00:34:49,220 645 00:34:49,220 --> 00:34:52,430 646 00:34:52,430 --> 00:34:55,820 647 00:34:55,820 --> 00:34:57,410 648 00:34:57,410 --> 00:34:59,840 649 00:34:59,840 --> 00:35:04,940 650 00:35:04,940 --> 00:35:07,250 651 00:35:07,250 --> 00:35:08,630 652 00:35:08,630 --> 00:35:12,770 653 00:35:12,770 --> 00:35:14,870 654 00:35:14,870 --> 00:35:17,540 655 00:35:17,540 --> 00:35:19,280 656 00:35:19,280 --> 00:35:22,580 657 00:35:22,580 --> 00:35:24,800 658 00:35:24,800 --> 00:35:26,720 659 00:35:26,720 --> 00:35:30,109 660 00:35:30,109 --> 00:35:33,170 661 00:35:33,170 --> 00:35:36,950 662 00:35:36,950 --> 00:35:41,450 663 00:35:41,450 --> 00:35:44,300 664 00:35:44,300 --> 00:35:47,570 665 00:35:47,570 --> 00:35:49,670 666 00:35:49,670 --> 00:35:50,900 667 00:35:50,900 --> 00:35:53,750 668 00:35:53,750 --> 00:35:55,370 669 00:35:55,370 --> 00:35:58,750 670 00:35:58,750 --> 00:36:01,820 671 00:36:01,820 --> 00:36:03,560 672 00:36:03,560 --> 00:36:05,210 673 00:36:05,210 --> 00:36:09,560 674 00:36:09,560 --> 00:36:11,870 675 00:36:11,870 --> 00:36:15,040 676 00:36:15,040 --> 00:36:17,780 677 00:36:17,780 --> 00:36:19,550 678 00:36:19,550 --> 00:36:21,950 679 00:36:21,950 --> 00:36:23,120 680 00:36:23,120 --> 00:36:24,710 681 00:36:24,710 --> 00:36:26,870 682 00:36:26,870 --> 00:36:31,130 683 00:36:31,130 --> 00:36:35,030 684 00:36:35,030 --> 00:36:38,150 685 00:36:38,150 --> 00:36:38,160 686 00:36:38,160 --> 00:36:39,910 687 00:36:39,910 --> 00:36:42,730 688 00:36:42,730 --> 00:36:42,740 689 00:36:42,740 --> 00:36:47,640 690 00:36:47,640 --> 00:36:55,680 691 00:36:55,680 --> 00:36:59,670 692 00:36:59,670 --> 00:37:00,930 693 00:37:00,930 --> 00:37:04,610 694 
00:37:04,610 --> 00:37:09,420 695 00:37:09,420 --> 00:37:12,600 696 00:37:12,600 --> 00:37:14,970 697 00:37:14,970 --> 00:37:17,250 698 00:37:17,250 --> 00:37:19,890 699 00:37:19,890 --> 00:37:22,260 700 00:37:22,260 --> 00:37:24,900 701 00:37:24,900 --> 00:37:26,880 702 00:37:26,880 --> 00:37:31,640 703 00:37:31,640 --> 00:37:39,030 704 00:37:39,030 --> 00:37:49,480 705 00:37:49,480 --> 00:37:51,370 706 00:37:51,370 --> 00:37:53,980 707 00:37:53,980 --> 00:37:59,290 708 00:37:59,290 --> 00:38:03,510 709 00:38:03,510 --> 00:38:09,670 710 00:38:09,670 --> 00:38:14,319 711 00:38:14,319 --> 00:38:19,359 712 00:38:19,359 --> 00:38:23,890 713 00:38:23,890 --> 00:38:25,839 714 00:38:25,839 --> 00:38:30,819 715 00:38:30,819 --> 00:38:33,130 716 00:38:33,130 --> 00:38:37,960 717 00:38:37,960 --> 00:38:40,420 718 00:38:40,420 --> 00:38:43,170 719 00:38:43,170 --> 00:38:45,819 720 00:38:45,819 --> 00:38:48,040 721 00:38:48,040 --> 00:38:49,660 722 00:38:49,660 --> 00:38:53,319 723 00:38:53,319 --> 00:38:58,030 724 00:38:58,030 --> 00:38:59,980 725 00:38:59,980 --> 00:39:01,809 726 00:39:01,809 --> 00:39:05,589 727 00:39:05,589 --> 00:39:07,059 728 00:39:07,059 --> 00:39:08,650 729 00:39:08,650 --> 00:39:11,530 730 00:39:11,530 --> 00:39:15,010 731 00:39:15,010 --> 00:39:18,549 732 00:39:18,549 --> 00:39:23,530 733 00:39:23,530 --> 00:39:26,500 734 00:39:26,500 --> 00:39:28,780 735 00:39:28,780 --> 00:39:32,380 736 00:39:32,380 --> 00:39:35,049 737 00:39:35,049 --> 00:39:36,670 738 00:39:36,670 --> 00:39:39,940 739 00:39:39,940 --> 00:39:42,870 740 00:39:42,870 --> 00:39:48,130 741 00:39:48,130 --> 00:39:51,900 742 00:39:51,900 --> 00:39:55,329 743 00:39:55,329 --> 00:39:58,390 744 00:39:58,390 --> 00:40:03,099 745 00:40:03,099 --> 00:40:06,579 746 00:40:06,579 --> 00:40:11,259 747 00:40:11,259 --> 00:40:14,650 748 00:40:14,650 --> 00:40:19,359 749 00:40:19,359 --> 00:40:20,799 750 00:40:20,799 --> 00:40:23,859 751 00:40:23,859 --> 00:40:27,219 752 00:40:27,219 --> 00:40:29,440 753 00:40:29,440 --> 00:40:32,109 754 00:40:32,109 --> 00:40:36,249 755 00:40:36,249 --> 00:40:40,719 756 00:40:40,719 --> 00:40:43,690 757 00:40:43,690 --> 00:40:47,620 758 00:40:47,620 --> 00:40:49,900 759 00:40:49,900 --> 00:40:52,420 760 00:40:52,420 --> 00:40:54,609 761 00:40:54,609 --> 00:40:57,150 762 00:40:57,150 --> 00:41:02,049 763 00:41:02,049 --> 00:41:05,049 764 00:41:05,049 --> 00:41:07,930 765 00:41:07,930 --> 00:41:09,400 766 00:41:09,400 --> 00:41:12,759 767 00:41:12,759 --> 00:41:16,569 768 00:41:16,569 --> 00:41:20,949 769 00:41:20,949 --> 00:41:24,549 770 00:41:24,549 --> 00:41:27,789 771 00:41:27,789 --> 00:41:29,170 772 00:41:29,170 --> 00:41:30,789 773 00:41:30,789 --> 00:41:35,259 774 00:41:35,259 --> 00:41:37,689 775 00:41:37,689 --> 00:41:41,259 776 00:41:41,259 --> 00:41:44,819 777 00:41:44,819 --> 00:41:49,120 778 00:41:49,120 --> 00:41:51,549 779 00:41:51,549 --> 00:41:54,089 780 00:41:54,089 --> 00:41:57,189 781 00:41:57,189 --> 00:42:03,160 782 00:42:03,160 --> 00:42:06,699 783 00:42:06,699 --> 00:42:09,279 784 00:42:09,279 --> 00:42:12,370 785 00:42:12,370 --> 00:42:13,809 786 00:42:13,809 --> 00:42:16,000 787 00:42:16,000 --> 00:42:18,190 788 00:42:18,190 --> 00:42:21,040 789 00:42:21,040 --> 00:42:25,360 790 00:42:25,360 --> 00:42:27,160 791 00:42:27,160 --> 00:42:29,320 792 00:42:29,320 --> 00:42:31,870 793 00:42:31,870 --> 00:42:34,660 794 00:42:34,660 --> 00:42:38,430 795 00:42:38,430 --> 00:42:43,000 796 00:42:43,000 --> 00:42:45,790 797 00:42:45,790 --> 00:42:48,280 798 00:42:48,280 --> 
00:42:50,740 799 00:42:50,740 --> 00:42:53,200 800 00:42:53,200 --> 00:42:54,910 801 00:42:54,910 --> 00:42:59,310 802 00:42:59,310 --> 00:43:01,840 803 00:43:01,840 --> 00:43:04,930 804 00:43:04,930 --> 00:43:08,920 805 00:43:08,920 --> 00:43:10,930 806 00:43:10,930 --> 00:43:16,090 807 00:43:16,090 --> 00:43:19,360 808 00:43:19,360 --> 00:43:23,020 809 00:43:23,020 --> 00:43:25,900 810 00:43:25,900 --> 00:43:28,180 811 00:43:28,180 --> 00:43:33,520 812 00:43:33,520 --> 00:43:36,870 813 00:43:36,870 --> 00:43:39,670 814 00:43:39,670 --> 00:43:42,070 815 00:43:42,070 --> 00:43:44,320 816 00:43:44,320 --> 00:43:47,080 817 00:43:47,080 --> 00:43:51,880 818 00:43:51,880 --> 00:43:57,100 819 00:43:57,100 --> 00:44:06,670 820 00:44:06,670 --> 00:44:08,830 821 00:44:08,830 --> 00:44:15,310 822 00:44:15,310 --> 00:44:16,870 823 00:44:16,870 --> 00:44:20,110 824 00:44:20,110 --> 00:44:23,190 825 00:44:23,190 --> 00:44:28,330 826 00:44:28,330 --> 00:44:29,859 827 00:44:29,859 --> 00:44:33,579 828 00:44:33,579 --> 00:44:37,559 829 00:44:37,559 --> 00:44:41,469 830 00:44:41,469 --> 00:44:46,569 831 00:44:46,569 --> 00:44:54,630 832 00:44:54,630 --> 00:44:57,309 833 00:44:57,309 --> 00:44:58,620 834 00:44:58,620 --> 00:45:02,739 835 00:45:02,739 --> 00:45:05,680 836 00:45:05,680 --> 00:45:07,269 837 00:45:07,269 --> 00:45:10,690 838 00:45:10,690 --> 00:45:13,269 839 00:45:13,269 --> 00:45:16,029 840 00:45:16,029 --> 00:45:18,099 841 00:45:18,099 --> 00:45:20,769 842 00:45:20,769 --> 00:45:24,130 843 00:45:24,130 --> 00:45:27,700 844 00:45:27,700 --> 00:45:28,870 845 00:45:28,870 --> 00:45:32,380 846 00:45:32,380 --> 00:45:37,299 847 00:45:37,299 --> 00:45:40,779 848 00:45:40,779 --> 00:45:42,640 849 00:45:42,640 --> 00:45:49,269 850 00:45:49,269 --> 00:45:57,579 851 00:45:57,579 --> 00:46:01,059 852 00:46:01,059 --> 00:46:05,349 853 00:46:05,349 --> 00:46:08,380 854 00:46:08,380 --> 00:46:10,509 855 00:46:10,509 --> 00:46:13,299 856 00:46:13,299 --> 00:46:16,690 857 00:46:16,690 --> 00:46:19,809 858 00:46:19,809 --> 00:46:22,989 859 00:46:22,989 --> 00:46:24,910 860 00:46:24,910 --> 00:46:28,180 861 00:46:28,180 --> 00:46:33,370 862 00:46:33,370 --> 00:46:33,380 863 00:46:33,380 --> 00:46:34,359 864 00:46:34,359 --> 00:46:36,400 865 00:46:36,400 --> 00:46:40,120 866 00:46:40,120 --> 00:46:43,630 867 00:46:43,630 --> 00:46:44,980 868 00:46:44,980 --> 00:46:46,720 869 00:46:46,720 --> 00:46:47,890 870 00:46:47,890 --> 00:46:52,870 871 00:46:52,870 --> 00:46:57,490 872 00:46:57,490 --> 00:46:59,440 873 00:46:59,440 --> 00:47:01,390 874 00:47:01,390 --> 00:47:04,300 875 00:47:04,300 --> 00:47:08,530 876 00:47:08,530 --> 00:47:12,670 877 00:47:12,670 --> 00:47:14,830 878 00:47:14,830 --> 00:47:16,510 879 00:47:16,510 --> 00:47:18,700 880 00:47:18,700 --> 00:47:22,330 881 00:47:22,330 --> 00:47:24,460 882 00:47:24,460 --> 00:47:26,200 883 00:47:26,200 --> 00:47:27,880 884 00:47:27,880 --> 00:47:29,860 885 00:47:29,860 --> 00:47:31,240 886 00:47:31,240 --> 00:47:33,880 887 00:47:33,880 --> 00:47:36,480 888 00:47:36,480 --> 00:47:39,220 889 00:47:39,220 --> 00:47:40,720 890 00:47:40,720 --> 00:47:42,790 891 00:47:42,790 --> 00:47:49,740 892 00:47:49,740 --> 00:47:52,510 893 00:47:52,510 --> 00:47:55,000 894 00:47:55,000 --> 00:47:57,400 895 00:47:57,400 --> 00:47:59,800 896 00:47:59,800 --> 00:48:02,500 897 00:48:02,500 --> 00:48:04,480 898 00:48:04,480 --> 00:48:10,180 899 00:48:10,180 --> 00:48:11,890 900 00:48:11,890 --> 00:48:15,190 901 00:48:15,190 --> 00:48:17,050 902 00:48:17,050 --> 00:48:17,920 903 
00:48:17,920 --> 00:48:20,020 904 00:48:20,020 --> 00:48:23,530 905 00:48:23,530 --> 00:48:25,990 906 00:48:25,990 --> 00:48:29,080 907 00:48:29,080 --> 00:48:32,230 908 00:48:32,230 --> 00:48:36,700 909 00:48:36,700 --> 00:48:38,980 910 00:48:38,980 --> 00:48:40,710 911 00:48:40,710 --> 00:48:43,980 912 00:48:43,980 --> 00:48:47,410 913 00:48:47,410 --> 00:48:52,560 914 00:48:52,560 --> 00:48:54,580 915 00:48:54,580 --> 00:48:57,310 916 00:48:57,310 --> 00:49:01,000 917 00:49:01,000 --> 00:49:09,819 918 00:49:09,819 --> 00:49:12,589 919 00:49:12,589 --> 00:49:13,940 920 00:49:13,940 --> 00:49:15,890 921 00:49:15,890 --> 00:49:18,019 922 00:49:18,019 --> 00:49:20,359 923 00:49:20,359 --> 00:49:23,299 924 00:49:23,299 --> 00:49:26,990 925 00:49:26,990 --> 00:49:30,140 926 00:49:30,140 --> 00:49:31,670 927 00:49:31,670 --> 00:49:36,170 928 00:49:36,170 --> 00:49:40,370 929 00:49:40,370 --> 00:49:42,170 930 00:49:42,170 --> 00:49:45,640 931 00:49:45,640 --> 00:49:47,900 932 00:49:47,900 --> 00:49:57,410 933 00:49:57,410 --> 00:50:00,410 934 00:50:00,410 --> 00:50:00,420 935 00:50:00,420 --> 00:50:00,710 936 00:50:00,710 --> 00:50:03,049 937 00:50:03,049 --> 00:50:05,180 938 00:50:05,180 --> 00:50:07,670 939 00:50:07,670 --> 00:50:09,109 940 00:50:09,109 --> 00:50:11,539 941 00:50:11,539 --> 00:50:13,819 942 00:50:13,819 --> 00:50:18,829 943 00:50:18,829 --> 00:50:21,829 944 00:50:21,829 --> 00:50:23,510 945 00:50:23,510 --> 00:50:26,029 946 00:50:26,029 --> 00:50:28,730 947 00:50:28,730 --> 00:50:31,490 948 00:50:31,490 --> 00:50:33,140 949 00:50:33,140 --> 00:50:35,240 950 00:50:35,240 --> 00:50:37,220 951 00:50:37,220 --> 00:50:37,230 952 00:50:37,230 --> 00:50:37,910 953 00:50:37,910 --> 00:50:42,170 954 00:50:42,170 --> 00:50:45,620 955 00:50:45,620 --> 00:50:51,200 956 00:50:51,200 --> 00:50:53,839 957 00:50:53,839 --> 00:50:56,660 958 00:50:56,660 --> 00:50:59,690 959 00:50:59,690 --> 00:51:02,000 960 00:51:02,000 --> 00:51:05,690 961 00:51:05,690 --> 00:51:08,809 962 00:51:08,809 --> 00:51:10,279 963 00:51:10,279 --> 00:51:12,170 964 00:51:12,170 --> 00:51:14,599 965 00:51:14,599 --> 00:51:18,049 966 00:51:18,049 --> 00:51:18,059 967 00:51:18,059 --> 00:51:19,150 968 00:51:19,150 --> 00:51:24,110 969 00:51:24,110 --> 00:51:27,110 970 00:51:27,110 --> 00:51:29,750 971 00:51:29,750 --> 00:51:32,570 972 00:51:32,570 --> 00:51:37,460 973 00:51:37,460 --> 00:51:41,270 974 00:51:41,270 --> 00:51:44,600 975 00:51:44,600 --> 00:51:46,480 976 00:51:46,480 --> 00:51:49,070 977 00:51:49,070 --> 00:51:52,400 978 00:51:52,400 --> 00:51:56,540 979 00:51:56,540 --> 00:52:01,670 980 00:52:01,670 --> 00:52:04,430 981 00:52:04,430 --> 00:52:11,870 982 00:52:11,870 --> 00:52:16,280 983 00:52:16,280 --> 00:52:19,370 984 00:52:19,370 --> 00:52:21,440 985 00:52:21,440 --> 00:52:26,930 986 00:52:26,930 --> 00:52:30,440 987 00:52:30,440 --> 00:52:33,830 988 00:52:33,830 --> 00:52:37,940 989 00:52:37,940 --> 00:52:41,420 990 00:52:41,420 --> 00:52:44,300 991 00:52:44,300 --> 00:52:48,380 992 00:52:48,380 --> 00:52:51,650 993 00:52:51,650 --> 00:52:56,090 994 00:52:56,090 --> 00:53:00,850 995 00:53:00,850 --> 00:53:04,190 996 00:53:04,190 --> 00:53:07,370 997 00:53:07,370 --> 00:53:07,380 998 00:53:07,380 --> 00:53:10,030 999 00:53:10,030 --> 00:53:13,560 1000 00:53:13,560 --> 00:53:18,520 1001 00:53:18,520 --> 00:53:20,080 1002 00:53:20,080 --> 00:53:23,740 1003 00:53:23,740 --> 00:53:28,330 1004 00:53:28,330 --> 00:53:32,950 1005 00:53:32,950 --> 00:53:34,690 1006 00:53:34,690 --> 00:53:37,560 1007 
00:53:37,560 --> 00:53:42,460 1008 00:53:42,460 --> 00:53:44,740 1009 00:53:44,740 --> 00:53:46,150 1010 00:53:46,150 --> 00:53:53,110 1011 00:53:53,110 --> 00:53:55,030 1012 00:53:55,030 --> 00:53:58,210 1013 00:53:58,210 --> 00:53:59,950 1014 00:53:59,950 --> 00:54:03,220 1015 00:54:03,220 --> 00:54:04,810 1016 00:54:04,810 --> 00:54:09,130 1017 00:54:09,130 --> 00:54:15,490 1018 00:54:15,490 --> 00:54:18,250 1019 00:54:18,250 --> 00:54:29,370 1020 00:54:29,370 --> 00:54:32,110 1021 00:54:32,110 --> 00:54:39,340 1022 00:54:39,340 --> 00:54:39,350 1023 00:54:39,350 --> 00:54:42,060 1024 00:54:42,060 --> 00:54:55,060 1025 00:54:55,060 --> 00:54:57,580 1026 00:54:57,580 --> 00:54:59,800 1027 00:54:59,800 --> 00:55:01,890 1028 00:55:01,890 --> 00:55:08,500 1029 00:55:08,500 --> 00:55:13,240 1030 00:55:13,240 --> 00:55:15,670 1031 00:55:15,670 --> 00:55:17,830 1032 00:55:17,830 --> 00:55:20,770 1033 00:55:20,770 --> 00:55:24,340 1034 00:55:24,340 --> 00:55:26,410 1035 00:55:26,410 --> 00:55:32,580 1036 00:55:32,580 --> 00:55:36,460 1037 00:55:36,460 --> 00:55:38,890 1038 00:55:38,890 --> 00:55:43,750 1039 00:55:43,750 --> 00:55:47,110 1040 00:55:47,110 --> 00:55:49,210 1041 00:55:49,210 --> 00:55:53,260 1042 00:55:53,260 --> 00:55:55,240 1043 00:55:55,240 --> 00:55:58,120 1044 00:55:58,120 --> 00:56:01,270 1045 00:56:01,270 --> 00:56:03,370 1046 00:56:03,370 --> 00:56:06,460 1047 00:56:06,460 --> 00:56:09,250 1048 00:56:09,250 --> 00:56:12,130 1049 00:56:12,130 --> 00:56:16,600 1050 00:56:16,600 --> 00:56:18,640 1051 00:56:18,640 --> 00:56:22,090 1052 00:56:22,090 --> 00:56:23,770 1053 00:56:23,770 --> 00:56:25,810 1054 00:56:25,810 --> 00:56:28,540 1055 00:56:28,540 --> 00:56:30,880 1056 00:56:30,880 --> 00:56:33,220 1057 00:56:33,220 --> 00:56:36,780 1058 00:56:36,780 --> 00:56:43,630 1059 00:56:43,630 --> 00:56:47,260 1060 00:56:47,260 --> 00:56:49,660 1061 00:56:49,660 --> 00:56:51,460 1062 00:56:51,460 --> 00:56:53,440 1063 00:56:53,440 --> 00:56:56,110 1064 00:56:56,110 --> 00:56:59,050 1065 00:56:59,050 --> 00:57:14,190 1066 00:57:14,190 --> 00:57:14,200 1067 00:57:14,200 --> 00:57:18,540 1068 00:57:18,540 --> 00:57:21,360 1069 00:57:21,360 --> 00:57:23,130 1070 00:57:23,130 --> 00:57:27,300 1071 00:57:27,300 --> 00:57:30,960 1072 00:57:30,960 --> 00:57:33,150 1073 00:57:33,150 --> 00:57:34,710 1074 00:57:34,710 --> 00:57:37,500 1075 00:57:37,500 --> 00:57:39,810 1076 00:57:39,810 --> 00:57:42,900 1077 00:57:42,900 --> 00:57:44,610 1078 00:57:44,610 --> 00:57:47,340 1079 00:57:47,340 --> 00:57:48,510 1080 00:57:48,510 --> 00:57:50,610 1081 00:57:50,610 --> 00:57:52,560 1082 00:57:52,560 --> 00:57:54,300 1083 00:57:54,300 --> 00:57:55,830 1084 00:57:55,830 --> 00:58:01,310 1085 00:58:01,310 --> 00:58:05,070 1086 00:58:05,070 --> 00:58:08,010 1087 00:58:08,010 --> 00:58:10,350 1088 00:58:10,350 --> 00:58:12,960 1089 00:58:12,960 --> 00:58:15,030 1090 00:58:15,030 --> 00:58:18,960 1091 00:58:18,960 --> 00:58:24,390 1092 00:58:24,390 --> 00:58:27,840 1093 00:58:27,840 --> 00:58:27,850 1094 00:58:27,850 --> 00:58:34,800 1095 00:58:34,800 --> 00:58:39,640 1096 00:58:39,640 --> 00:58:45,030 1097 00:58:45,030 --> 00:58:48,400 1098 00:58:48,400 --> 00:58:50,800 1099 00:58:50,800 --> 00:58:51,730 1100 00:58:51,730 --> 00:58:53,800 1101 00:58:53,800 --> 00:59:05,710 1102 00:59:05,710 --> 00:59:09,539 1103 00:59:09,539 --> 00:59:12,190 1104 00:59:12,190 --> 00:59:16,599 1105 00:59:16,599 --> 00:59:18,000 1106 00:59:18,000 --> 00:59:25,450 1107 00:59:25,450 --> 00:59:28,630 1108 00:59:28,630 --> 
00:59:30,849 1109 00:59:30,849 --> 00:59:35,020 1110 00:59:35,020 --> 00:59:37,510 1111 00:59:37,510 --> 00:59:40,599 1112 00:59:40,599 --> 00:59:51,490 1113 00:59:51,490 --> 00:59:54,309 1114 00:59:54,309 --> 01:00:02,839 1115 01:00:02,839 --> 01:00:06,390 1116 01:00:06,390 --> 01:00:11,760 1117 01:00:11,760 --> 01:00:18,859 1118 01:00:18,859 --> 01:00:20,940 1119 01:00:20,940 --> 01:00:23,010 1120 01:00:23,010 --> 01:00:25,140 1121 01:00:25,140 --> 01:00:25,150 1122 01:00:25,150 --> 01:00:25,890 1123 01:00:25,890 --> 01:00:27,510 1124 01:00:27,510 --> 01:00:31,020 1125 01:00:31,020 --> 01:00:33,359 1126 01:00:33,359 --> 01:00:35,150 1127 01:00:35,150 --> 01:00:39,589 1128 01:00:39,589 --> 01:00:47,370 1129 01:00:47,370 --> 01:00:53,430 1130 01:00:53,430 --> 01:00:56,370 1131 01:00:56,370 --> 01:01:00,020 1132 01:01:00,020 --> 01:01:04,650 1133 01:01:04,650 --> 01:01:12,740 1134 01:01:12,740 --> 01:01:15,260 1135 01:01:15,260 --> 01:01:16,760 1136 01:01:16,760 --> 01:01:18,859 1137 01:01:18,859 --> 01:01:20,750 1138 01:01:20,750 --> 01:01:20,760 1139 01:01:20,760 --> 01:01:21,500 1140 01:01:21,500 --> 01:01:27,410 1141 01:01:27,410 --> 01:01:30,849 1142 01:01:30,849 --> 01:01:40,900 1143 01:01:40,900 --> 01:01:44,900 1144 01:01:44,900 --> 01:01:48,980 1145 01:01:48,980 --> 01:01:51,740 1146 01:01:51,740 --> 01:01:56,750 1147 01:01:56,750 --> 01:02:00,880 1148 01:02:00,880 --> 01:02:05,030 1149 01:02:05,030 --> 01:02:07,640 1150 01:02:07,640 --> 01:02:12,590 1151 01:02:12,590 --> 01:02:15,650 1152 01:02:15,650 --> 01:02:16,910 1153 01:02:16,910 --> 01:02:18,980 1154 01:02:18,980 --> 01:02:21,110 1155 01:02:21,110 --> 01:02:23,540 1156 01:02:23,540 --> 01:02:29,630 1157 01:02:29,630 --> 01:02:34,400 1158 01:02:34,400 --> 01:02:39,520 1159 01:02:39,520 --> 01:02:42,800 1160 01:02:42,800 --> 01:02:45,260 1161 01:02:45,260 --> 01:02:48,770 1162 01:02:48,770 --> 01:02:55,190 1163 01:02:55,190 --> 01:02:58,250 1164 01:02:58,250 --> 01:03:01,060 1165 01:03:01,060 --> 01:03:03,830 1166 01:03:03,830 --> 01:03:06,110 1167 01:03:06,110 --> 01:03:09,430 1168 01:03:09,430 --> 01:03:13,310 1169 01:03:13,310 --> 01:03:15,410 1170 01:03:15,410 --> 01:03:19,850 1171 01:03:19,850 --> 01:03:26,090 1172 01:03:26,090 --> 01:03:28,640 1173 01:03:28,640 --> 01:03:30,320 1174 01:03:30,320 --> 01:03:42,660 1175 01:03:42,660 --> 01:03:46,630 1176 01:03:46,630 --> 01:03:49,089 1177 01:03:49,089 --> 01:03:52,150 1178 01:03:52,150 --> 01:03:56,289 1179 01:03:56,289 --> 01:03:57,880 1180 01:03:57,880 --> 01:04:00,160 1181 01:04:00,160 --> 01:04:02,440 1182 01:04:02,440 --> 01:04:04,930 1183 01:04:04,930 --> 01:04:07,059 1184 01:04:07,059 --> 01:04:09,069 1185 01:04:09,069 --> 01:04:11,650 1186 01:04:11,650 --> 01:04:13,359 1187 01:04:13,359 --> 01:04:14,890 1188 01:04:14,890 --> 01:04:18,160 1189 01:04:18,160 --> 01:04:20,049 1190 01:04:20,049 --> 01:04:23,740 1191 01:04:23,740 --> 01:04:25,779 1192 01:04:25,779 --> 01:04:28,420 1193 01:04:28,420 --> 01:04:31,120 1194 01:04:31,120 --> 01:04:33,640 1195 01:04:33,640 --> 01:04:35,589 1196 01:04:35,589 --> 01:04:37,660 1197 01:04:37,660 --> 01:04:40,509 1198 01:04:40,509 --> 01:04:42,819 1199 01:04:42,819 --> 01:04:44,140 1200 01:04:44,140 --> 01:04:45,940 1201 01:04:45,940 --> 01:04:47,319 1202 01:04:47,319 --> 01:04:50,019 1203 01:04:50,019 --> 01:04:52,059 1204 01:04:52,059 --> 01:04:54,819 1205 01:04:54,819 --> 01:04:56,769 1206 01:04:56,769 --> 01:04:58,120 1207 01:04:58,120 --> 01:05:00,730 1208 01:05:00,730 --> 01:05:03,819 1209 01:05:03,819 --> 01:05:05,859 1210 
01:05:05,859 --> 01:05:07,779 1211 01:05:07,779 --> 01:05:10,779 1212 01:05:10,779 --> 01:05:13,180 1213 01:05:13,180 --> 01:05:15,160 1214 01:05:15,160 --> 01:05:17,890 1215 01:05:17,890 --> 01:05:22,569 1216 01:05:22,569 --> 01:05:24,370 1217 01:05:24,370 --> 01:05:26,230 1218 01:05:26,230 --> 01:05:28,210 1219 01:05:28,210 --> 01:05:30,849 1220 01:05:30,849 --> 01:05:35,230 1221 01:05:35,230 --> 01:05:36,430 1222 01:05:36,430 --> 01:05:38,859 1223 01:05:38,859 --> 01:05:42,999 1224 01:05:42,999 --> 01:05:44,620 1225 01:05:44,620 --> 01:05:46,390 1226 01:05:46,390 --> 01:05:49,930 1227 01:05:49,930 --> 01:05:52,210 1228 01:05:52,210 --> 01:05:54,490 1229 01:05:54,490 --> 01:05:57,040 1230 01:05:57,040 --> 01:05:59,320 1231 01:05:59,320 --> 01:06:01,720 1232 01:06:01,720 --> 01:06:03,520 1233 01:06:03,520 --> 01:06:05,890 1234 01:06:05,890 --> 01:06:07,660 1235 01:06:07,660 --> 01:06:09,940 1236 01:06:09,940 --> 01:06:12,070 1237 01:06:12,070 --> 01:06:13,960 1238 01:06:13,960 --> 01:06:15,820 1239 01:06:15,820 --> 01:06:17,770 1240 01:06:17,770 --> 01:06:21,490 1241 01:06:21,490 --> 01:06:23,380 1242 01:06:23,380 --> 01:06:25,930 1243 01:06:25,930 --> 01:06:30,280 1244 01:06:30,280 --> 01:06:32,349 1245 01:06:32,349 --> 01:06:36,310 1246 01:06:36,310 --> 01:06:39,040 1247 01:06:39,040 --> 01:06:43,000 1248 01:06:43,000 --> 01:06:46,150 1249 01:06:46,150 --> 01:06:48,520 1250 01:06:48,520 --> 01:06:51,120 1251 01:06:51,120 --> 01:06:56,200 1252 01:06:56,200 --> 01:07:00,790 1253 01:07:00,790 --> 01:07:02,890 1254 01:07:02,890 --> 01:07:04,660 1255 01:07:04,660 --> 01:07:07,240 1256 01:07:07,240 --> 01:07:10,120 1257 01:07:10,120 --> 01:07:11,230 1258 01:07:11,230 --> 01:07:12,700 1259 01:07:12,700 --> 01:07:13,870 1260 01:07:13,870 --> 01:07:15,460 1261 01:07:15,460 --> 01:07:18,870 1262 01:07:18,870 --> 01:07:22,930 1263 01:07:22,930 --> 01:07:25,450 1264 01:07:25,450 --> 01:07:28,300 1265 01:07:28,300 --> 01:07:30,609 1266 01:07:30,609 --> 01:07:35,560 1267 01:07:35,560 --> 01:07:37,630 1268 01:07:37,630 --> 01:07:39,849 1269 01:07:39,849 --> 01:07:42,490 1270 01:07:42,490 --> 01:07:44,230 1271 01:07:44,230 --> 01:07:48,280 1272 01:07:48,280 --> 01:07:51,220 1273 01:07:51,220 --> 01:07:56,700 1274 01:07:56,700 --> 01:08:01,850 1275 01:08:01,850 --> 01:08:04,560 1276 01:08:04,560 --> 01:08:06,060 1277 01:08:06,060 --> 01:08:09,750 1278 01:08:09,750 --> 01:08:14,990 1279 01:08:14,990 --> 01:08:17,970 1280 01:08:17,970 --> 01:08:20,760 1281 01:08:20,760 --> 01:08:23,910 1282 01:08:23,910 --> 01:08:25,800 1283 01:08:25,800 --> 01:08:29,190 1284 01:08:29,190 --> 01:08:32,820 1285 01:08:32,820 --> 01:08:37,230 1286 01:08:37,230 --> 01:08:39,690 1287 01:08:39,690 --> 01:08:41,730 1288 01:08:41,730 --> 01:08:43,800 1289 01:08:43,800 --> 01:08:49,040 1290 01:08:49,040 --> 01:08:54,450 1291 01:08:54,450 --> 01:08:55,290 1292 01:08:55,290 --> 01:08:59,520 1293 01:08:59,520 --> 01:09:01,200 1294 01:09:01,200 --> 01:09:06,630 1295 01:09:06,630 --> 01:09:08,460 1296 01:09:08,460 --> 01:09:10,560 1297 01:09:10,560 --> 01:09:12,000 1298 01:09:12,000 --> 01:09:13,500 1299 01:09:13,500 --> 01:09:14,700 1300 01:09:14,700 --> 01:09:16,800 1301 01:09:16,800 --> 01:09:18,480 1302 01:09:18,480 --> 01:09:20,670 1303 01:09:20,670 --> 01:09:22,290 1304 01:09:22,290 --> 01:09:23,490 1305 01:09:23,490 --> 01:09:25,310 1306 01:09:25,310 --> 01:09:28,410 1307 01:09:28,410 --> 01:09:31,140 1308 01:09:31,140 --> 01:09:35,520 1309 01:09:35,520 --> 01:09:38,280 1310 01:09:38,280 --> 01:09:47,849 1311 01:09:47,849 --> 
01:09:50,400 1312 01:09:50,400 --> 01:09:51,720 1313 01:09:51,720 --> 01:09:53,780 1314 01:09:53,780 --> 01:09:56,010 1315 01:09:56,010 --> 01:09:57,180 1316 01:09:57,180 --> 01:10:00,140 1317 01:10:00,140 --> 01:10:03,480 1318 01:10:03,480 --> 01:10:05,940 1319 01:10:05,940 --> 01:10:07,650 1320 01:10:07,650 --> 01:10:10,170 1321 01:10:10,170 --> 01:10:12,320 1322 01:10:12,320 --> 01:10:15,380 1323 01:10:15,380 --> 01:10:17,180 1324 01:10:17,180 --> 01:10:19,340 1325 01:10:19,340 --> 01:10:21,140 1326 01:10:21,140 --> 01:10:24,380 1327 01:10:24,380 --> 01:10:25,970 1328 01:10:25,970 --> 01:10:28,970 1329 01:10:28,970 --> 01:10:34,160 1330 01:10:34,160 --> 01:10:36,800 1331 01:10:36,800 --> 01:10:39,440 1332 01:10:39,440 --> 01:10:43,160 1333 01:10:43,160 --> 01:10:47,300 1334 01:10:47,300 --> 01:10:50,240 1335 01:10:50,240 --> 01:10:52,820 1336 01:10:52,820 --> 01:10:54,470 1337 01:10:54,470 --> 01:10:56,630 1338 01:10:56,630 --> 01:10:59,990 1339 01:10:59,990 --> 01:11:01,460 1340 01:11:01,460 --> 01:11:03,590 1341 01:11:03,590 --> 01:11:05,510 1342 01:11:05,510 --> 01:11:07,040 1343 01:11:07,040 --> 01:11:09,050 1344 01:11:09,050 --> 01:11:10,610 1345 01:11:10,610 --> 01:11:14,230 1346 01:11:14,230 --> 01:11:17,810 1347 01:11:17,810 --> 01:11:20,180 1348 01:11:20,180 --> 01:11:22,760 1349 01:11:22,760 --> 01:11:26,000 1350 01:11:26,000 --> 01:11:28,390 1351 01:11:28,390 --> 01:11:31,450 1352 01:11:31,450 --> 01:11:34,550 1353 01:11:34,550 --> 01:11:36,710 1354 01:11:36,710 --> 01:11:39,260 1355 01:11:39,260 --> 01:11:41,570 1356 01:11:41,570 --> 01:11:46,400 1357 01:11:46,400 --> 01:11:51,080 1358 01:11:51,080 --> 01:11:53,720 1359 01:11:53,720 --> 01:11:56,360 1360 01:11:56,360 --> 01:11:58,060 1361 01:11:58,060 --> 01:12:01,580 1362 01:12:01,580 --> 01:12:03,620 1363 01:12:03,620 --> 01:12:06,110 1364 01:12:06,110 --> 01:12:08,450 1365 01:12:08,450 --> 01:12:11,180 1366 01:12:11,180 --> 01:12:14,750 1367 01:12:14,750 --> 01:12:19,820 1368 01:12:19,820 --> 01:12:22,670 1369 01:12:22,670 --> 01:12:25,550 1370 01:12:25,550 --> 01:12:25,560 1371 01:12:25,560 --> 01:12:27,370 1372 01:12:27,370 --> 01:12:31,910 1373 01:12:31,910 --> 01:12:34,400 1374 01:12:34,400 --> 01:12:37,250 1375 01:12:37,250 --> 01:12:39,020 1376 01:12:39,020 --> 01:12:41,570 1377 01:12:41,570 --> 01:12:44,720 1378 01:12:44,720 --> 01:12:46,460 1379 01:12:46,460 --> 01:12:49,340 1380 01:12:49,340 --> 01:12:51,050 1381 01:12:51,050 --> 01:12:52,940 1382 01:12:52,940 --> 01:12:55,130 1383 01:12:55,130 --> 01:12:57,980 1384 01:12:57,980 --> 01:12:59,840 1385 01:12:59,840 --> 01:13:02,150 1386 01:13:02,150 --> 01:13:04,190 1387 01:13:04,190 --> 01:13:06,890 1388 01:13:06,890 --> 01:13:09,080 1389 01:13:09,080 --> 01:13:13,040 1390 01:13:13,040 --> 01:13:16,760 1391 01:13:16,760 --> 01:13:19,040 1392 01:13:19,040 --> 01:13:21,410 1393 01:13:21,410 --> 01:13:26,060 1394 01:13:26,060 --> 01:13:30,200 1395 01:13:30,200 --> 01:13:39,700 1396 01:13:39,700 --> 01:13:48,399 1397 01:13:48,399 --> 01:13:48,409 1398 01:13:48,409 --> 01:13:49,290 1399 01:13:49,290 --> 01:13:54,250 1400 01:13:54,250 --> 01:13:54,260 1401 01:13:54,260 --> 01:14:00,909 1402 01:14:00,909 --> 01:14:04,239 1403 01:14:04,239 --> 01:14:07,870 1404 01:14:07,870 --> 01:14:10,270 1405 01:14:10,270 --> 01:14:12,100 1406 01:14:12,100 --> 01:14:16,179 1407 01:14:16,179 --> 01:14:19,810 1408 01:14:19,810 --> 01:14:23,500 1409 01:14:23,500 --> 01:14:26,319 1410 01:14:26,319 --> 01:14:29,370 1411 01:14:29,370 --> 01:14:32,949 1412 01:14:32,949 --> 01:14:36,580 1413 
01:14:36,580 --> 01:14:37,929 1414 01:14:37,929 --> 01:14:39,489 1415 01:14:39,489 --> 01:14:43,209 1416 01:14:43,209 --> 01:14:45,600 1417 01:14:45,600 --> 01:14:47,949 1418 01:14:47,949 --> 01:14:50,919 1419 01:14:50,919 --> 01:14:53,229 1420 01:14:53,229 --> 01:14:55,239 1421 01:14:55,239 --> 01:14:58,029 1422 01:14:58,029 --> 01:14:59,259 1423 01:14:59,259 --> 01:15:01,149 1424 01:15:01,149 --> 01:15:04,209 1425 01:15:04,209 --> 01:15:06,609 1426 01:15:06,609 --> 01:15:11,319 1427 01:15:11,319 --> 01:15:13,359 1428 01:15:13,359 --> 01:15:15,759 1429 01:15:15,759 --> 01:15:18,790 1430 01:15:18,790 --> 01:15:21,129 1431 01:15:21,129 --> 01:15:22,989 1432 01:15:22,989 --> 01:15:26,169 1433 01:15:26,169 --> 01:15:29,319 1434 01:15:29,319 --> 01:15:36,759 1435 01:15:36,759 --> 01:15:38,770 1436 01:15:38,770 --> 01:15:40,449 1437 01:15:40,449 --> 01:15:42,609 1438 01:15:42,609 --> 01:15:46,120 1439 01:15:46,120 --> 01:15:49,120 1440 01:15:49,120 --> 01:15:50,739 1441 01:15:50,739 --> 01:15:52,839 1442 01:15:52,839 --> 01:15:55,629 1443 01:15:55,629 --> 01:15:59,469 1444 01:15:59,469 --> 01:16:01,870 1445 01:16:01,870 --> 01:16:05,290 1446 01:16:05,290 --> 01:16:08,199 1447 01:16:08,199 --> 01:16:12,639 1448 01:16:12,639 --> 01:16:14,770 1449 01:16:14,770 --> 01:16:17,720 1450 01:16:17,720 --> 01:16:22,460 1451 01:16:22,460 --> 01:16:25,430 1452 01:16:25,430 --> 01:16:29,060 1453 01:16:29,060 --> 01:16:31,520 1454 01:16:31,520 --> 01:16:35,030 1455 01:16:35,030 --> 01:16:37,610 1456 01:16:37,610 --> 01:16:39,970 1457 01:16:39,970 --> 01:16:45,080 1458 01:16:45,080 --> 01:16:47,090 1459 01:16:47,090 --> 01:16:49,550 1460 01:16:49,550 --> 01:16:51,800 1461 01:16:51,800 --> 01:16:53,900 1462 01:16:53,900 --> 01:16:55,520 1463 01:16:55,520 --> 01:17:00,310 1464 01:17:00,310 --> 01:17:02,920 1465 01:17:02,920 --> 01:17:06,470 1466 01:17:06,470 --> 01:17:10,010 1467 01:17:10,010 --> 01:17:11,750 1468 01:17:11,750 --> 01:17:15,080 1469 01:17:15,080 --> 01:17:16,760 1470 01:17:16,760 --> 01:17:19,250 1471 01:17:19,250 --> 01:17:21,020 1472 01:17:21,020 --> 01:17:22,550 1473 01:17:22,550 --> 01:17:24,130 1474 01:17:24,130 --> 01:17:26,600 1475 01:17:26,600 --> 01:17:32,459 1476 01:17:32,459 --> 01:17:32,469 1477 01:17:32,469 --> 01:17:34,530