The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

It's my pleasure to introduce Professor Saman Amarasinghe as our guest lecturer today. Saman is a professor in the EECS department at MIT, and he's also the associate department head. He's an expert in compilers, domain-specific languages, and autotuning; in fact, he was the designer of the OpenTuner framework that you've been using for your homework assignments. Today Saman is going to talk about some of his recent work on domain-specific languages and also on autotuning. So let's give Saman a hand.

Thank you. OK, so I used to teach this class for many, many years. Unfortunately, now I'm an administrator, so Julian and Charles get to have the fun of teaching it. Hopefully you're enjoying it. I know you're done with projects 1, 2, and 3 and are going into project 4. Project 4 is really fun. It will look big and daunting, but you'll enjoy it, especially given the amount of time people spend working on it. I think I'm making it sound scarier than anything.

OK, so let's get into the talk. Today I'm going to talk to you about domain-specific languages, and a little bit about autotuning and how it relates to performance tuning. So why domain-specific languages? We're all used to the general-purpose languages we use all day, and those languages are set up to capture the very large range of things people might want to do in programming. However, a lot of the time there are specific areas, specific domains, where the code you want to write has a lot of interesting properties that are very hard to describe in a general-purpose language.
And a lot of the time it's very hard, especially from the compiler's point of view, to take advantage of those properties, because a general-purpose compiler has to work for everybody. A domain-specific language has a lot of engineering benefits: if we know that what you are building has a certain shape, a certain set of properties, and the language captures that, then the programs you build on top of it can be much easier to write; they have a lot of clarity, they're easy to maintain, and they're easy to test. They're also much easier to understand, because the domain is clearly described. You could build a library instead, but somebody can always go and do weird things with a library. If it's built into the language, it's set in stone; you can't say, "I'm going to change something here, let me do some weird thing there." That makes life much easier for the programmer.

But from my point of view, the domain-specific languages I really like are the ones where I can take advantage of the knowledge of domain experts to get really good performance. A lot of the time a domain comes with, say, some linear algebra, some algebra of the domain that can be used to simplify expressions, and that algebra might only hold in that domain. It's very hard to teach C or C++ that kind of domain algebra, but in a DSL I can code it up, so whatever expression you write, the compiler can simplify it. Also, there are a lot of idioms in each domain. Some domain might say, "Look, I'm going to work with graphs." In normal C++ you'd put together a bunch of classes and do very complicated things, and the idiom is hidden in there. First of all, C++ doesn't know it should look for graphs, and even if it did, you can encode graphs in hundreds of different ways, and the compiler would have to recognize them all.
If graphs are first-class in the language, I don't have to work heroically to extract that structure; it's right there, I can easily see it, so most of my compiler can spend its time doing useful things. The other point is that if you build a domain-specific language right, you can leave the complex, lower-level decisions to the compiler. In C++ you might be tempted to say, "I know something here, let me do some of the optimizations myself." I have been working on optimization all my life, and a lot of the time when you write a compiler optimization pass, you spend more than half of your time undoing the crazy optimizations the programmer did, because they thought they knew better. That might have worked well at the time, but believe me, that code survives 20 years, and 20 years later it looks like a really stupid thing to do, and then you have to undo everything in the compiler to do the right thing for the current architecture. If instead you capture the program at the right level, the compiler does that work, and as architectures and problems keep changing, I don't have to undo those parts. So again, here I am, coming to the performance engineering class and telling you to leave performance to the compiler. But the nice thing is that when the compiler can do most of your work, it does a much nicer job, so don't fight the compiler.

I'm going to talk about three parts here: two domain-specific languages, GraphIt and Halide, and then OpenTuner, which is not a language but a framework. Between GraphIt and Halide you will see some patterns, and afterwards we'll see whether you found the pattern we're working toward.

So, GraphIt is a project I work on with Julian. If you have any questions about GraphIt after today, you can definitely ask Julian.
He knows probably more about graphs, and about GraphIt, than anybody on this planet, so he's a good resource.

Talking about graphs: graphs are everywhere. If you go to something like Google and do a search, Google has represented the entire knowledge of the internet as a big graph, and graph processing behind the scenes is what guides your search. If you go to a maps application, it finds your directions; the entire road network is a graph, and it's solving things like shortest paths in that graph to give you the route. If you go to a recommendation engine and get a recommendation for a really cool movie you like, that's because there's a huge graph connecting everybody, the movies they've watched, and their ratings of those movies, and the engine is comparing you to them and recommending accordingly. All of that can be modeled as graphs. Even if you go to an ATM and do a transaction, there's a very fast graph analysis in the back asking: is this a fraudulent transaction or not? Most of the transactions people have done, and all the connectivity between them, is in there, and before the money pops out of the ATM, it runs a bunch of graph processing to decide, "OK, this seems like a good transaction, so I will actually give you the money." Sometimes you get one of those other messages; that means the graph processing decided there might be something weird going on.

A lot of these, like maps and the ATM transactions, have very tight latency requirements: you have to get the directions fast, especially if you take a wrong turn and need the next set of directions before you hit bad traffic. Other things, like recommendations and Google search, involve huge graphs.
They build the entire web graph, and the recommendation engines do a huge amount of processing, so performance matters a lot in these graph applications.

Let me dive a little deeper to show what graph processing means. One of the very well-known graph algorithms is called PageRank. How many of you have heard of PageRank? OK, what does "Page" stand for in PageRank? Larry Page. So the first algorithm Google used, and I don't think this is anywhere near what Google does at this point, was PageRank. It does rank pages, but it was developed by Larry Page, so it could be named for either web pages or Larry Page; people generally think it's Larry Page.

In this graph algorithm, you run some number of iterations, either a fixed maximum or until some convergence. In each iteration, every node goes around, looks at all its neighbors, and calculates a new rank from them: how good are my neighbors, what are their ranks, and what's their contribution to me? Being connected to a well-known node, in this case a well-known web page, means I am highly ranked; I'm more influential because I'm close to something important. So each node calculates a value and propagates it to all its neighbors, and the contributions are aggregated; the entire graph participates. Then each node computes its new rank from everything it received, damped a little toward the old rank, and you swap old ranks and new ranks. Those are the two computations you iterate, and you do them for the entire graph.

Of course, you can run this naively, and it will be very, very slow. If you want performance, you write this piece of code instead. This piece of code is huge, and it runs about 23 times faster than the simple version.
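To make the two phases concrete, here is a minimal sketch of that iteration in plain Python; the graph representation (adjacency lists of out-neighbors), the damping constant, and the function name are my own illustrative choices, not the code from the slides:

```python
# Minimal PageRank sketch: each iteration (1) gathers contributions
# from neighbors into new_rank, then (2) damps toward the old rank
# and swaps old and new.
def pagerank(neighbors, num_iters=10, damping=0.85):
    n = len(neighbors)
    rank = [1.0 / n] * n
    for _ in range(num_iters):
        new_rank = [0.0] * n
        for src, dsts in enumerate(neighbors):
            if dsts:
                contrib = rank[src] / len(dsts)  # split my rank among neighbors
                for d in dsts:
                    new_rank[d] += contrib
        # damp toward the old uniform baseline, then swap
        rank = [(1 - damping) / n + damping * r for r in new_rank]
    return rank
```

This is the slow, straightforward version; everything that follows in the lecture is about how to make this kind of loop nest fast.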
That's on a modern multicore machine. The optimized code is multithreaded, so it gets parallel performance; it's load-balanced, because as you know graphs are very irregular, so you need load balancing; if you have a non-uniform memory access machine, like a multi-socket machine, it takes advantage of that; and it takes advantage of caches. There's a lot happening in this piece of code. But of course it's hard to write, and worse, you might not know what the right optimization is; you might have to try many things, and every time you want to do something a little different, you have to rewrite a very complicated piece of code and get everything working before you can even test it. This is why we can use a DSL here.

Let me talk a little about graph algorithms, since this may be a new area for you: what do people actually do with graphs? One class of graph algorithms is called topology-driven algorithms; that means the entire graph participates in the computation. For example, before you can do a Google search, Google collects all the web links, builds a huge graph, and does a huge amount of processing on the whole thing to be able to answer searches. A recommendation engine, every few weeks or whatever it is, collects everybody's data and processes all of it to produce recommendations. These operate on the entire graph, and sometimes billions or trillions of nodes go into the computation. Another class is called data-driven algorithms: you start with certain nodes and keep going to their neighbors, and their neighbors' neighbors, processing data as you go.
The kinds of algorithms that fit this category include finding shortest paths on a map. If I want directions from here to Boston, I don't have to look at the nodes in New York; I only have to go through the neighbors connected along the way. I'm operating on a certain region, following connections and processing them. So these are data-driven algorithms: I might have a huge graph, but my computation might only touch a small part of it.

Now, when you're traversing a graph, there are multiple ways of doing the traversal, and this is why optimization is hard: there are many different ways of doing things, and each has a different set of outcomes. In a lot of graph algorithms, I need to get something from my neighbors. One way is a push: I calculate what all my neighbors need and go update each of them. What do you think, is this a good way? [A student answers: it's not as parallel as it could be.] You're getting at a point. If only I am updating my neighbors, that's not very parallel; but if everybody is doing it, everybody updating all of their neighbors, then there is parallelism. But now another problem shows up. What's the problem? If everybody tries to update their neighbors, that's a race, because everybody is writing into the same nodes. So if you want to get this right, there are issues to deal with.
First, you want atomic updates: you need to lock, so each update has to happen atomically. Still, push is nice because I don't have to traverse anything extra; everybody I need to update, I just go and update, and if I'm propagating something, I update my neighbors and it propagates onward.

Another way to do it is a pull schedule: everybody asks their neighbors, "What do you have to give me?", collects everything from the neighbors, and updates themselves. Is there a race condition now? How many people say there's a race? How many think there's no race? What happens is, I am reading from all my neighbors, and everybody is reading from their neighbors, but I am the only one updating myself. Since I'm the only one writing to me, there's no race, which is very nice. But if I'm doing a data-driven computation, I might not know whether I need an update, because the update comes from the other side. I might ask, "Do you have anything for me?" and the answer might be no. In that sense I might be doing a lot of extra computation, asking even when there's nothing there, but I don't need any synchronization.

Another interesting thing is that I can take the graph and partition it. Once I partition the graph, I can say: this core gets this piece, that processor gets that piece. What's the advantage of partitioning a large graph into small pieces? Of course, you have to do a good partition, not an arbitrary one. So what do I get if I partition well? I won't say the word, because it would give the answer away.
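To recap the two traversal schedules just described, here is a small sequential sketch in Python (my own illustration, not GraphIt code): `neighbors` holds each node's out-edges and `in_neighbors` the transposed in-edges. Both functions compute the same sums, but in a parallel push the `acc[d] +=` writes would race and need atomics, while in a parallel pull each iteration writes only its own `acc[dst]`:

```python
# Push: each source writes into all of its neighbors.
def gather_push(neighbors, vals):
    acc = [0.0] * len(vals)
    for src, dsts in enumerate(neighbors):
        for d in dsts:           # racy in parallel: many sources hit one dst
            acc[d] += vals[src]
    return acc

# Pull: each destination reads from all of its in-neighbors.
def gather_pull(in_neighbors, vals):
    acc = [0.0] * len(vals)
    for dst, srcs in enumerate(in_neighbors):
        for s in srcs:           # race-free in parallel: only dst writes acc[dst]
            acc[dst] += vals[s]
    return acc
```

The trade-off is exactly as described above: push needs synchronization but touches only useful edges; pull needs no synchronization but may ask neighbors that have nothing to contribute.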
OK, anybody want to answer? Come on, you've taken the class. If I cut the graph into two groups, separate them, and give one piece to one processor and the other piece to another, what do I get? Parallelism, yes, but also something else: if the things that are connected to each other mostly end up on the same side, what else do I get? Locality. You did cover locality in this class. Partitioning means the piece I'm working on is small, and if you're lucky it fits in my cache, which is much nicer than having to reach out to every node in the whole graph. So if I partition properly, I get good locality. It's actually written on the slide, oops, so the answer was right there. But of course now I might have a little extra overhead, because I might have to replicate some nodes and so on, since some of them sit on both sides of the cut.

Another interesting property of graphs shows up in the data structures. Until now you've dealt with things like arrays: the size is known, it's a flat representation, you can reason about whether it fits in cache. Graphs have other properties. Take social networks: a social network is a graph, and what's the interesting property you've observed about its connectedness? There are people like me, who have maybe 20 friends and very few connections, and then there are celebrities who have millions of connections. If you look at a social network graph, you find what's called a power-law relationship: a few very well-known celebrities have millions and millions of connections.
And there are people like me, sitting here with very few connections to the rest of the world. This is what people have observed in big social-network-type graphs, this kind of exponential relationship, and the web has it too. Those graphs force you to do interesting things when you process them, because certain connections matter, and certain nodes matter a lot more and have a much bigger impact than others.

Then there are other graphs that have a bounded degree distribution, like road networks. The most connected thing might be an intersection with six roads coming together; you never have a million roads converging on one place. These are much flatter, bounded-degree graphs, and they have excellent locality: all the roads in Cambridge might be connected to each other, but the roads in Cambridge can be separated from the roads in New York City. So you get nice locality in these kinds of graphs. Even if two graphs are the same size, the shape of the graph matters a lot for the computation.

Now, when you want to operate on a graph, you have to look at three interesting properties. One is: how much parallelism is my algorithm going to get? It's a Goldilocks-type thing. You don't want too much parallelism: if my algorithm has huge amounts of parallelism that I can't actually use, it's not useful. You want enough parallelism that you can actually keep the machine busy. Then I'd really like to have locality, because if I have locality my caches work, everything is nearby, and things run fast; if I have to go to main memory every time, it's very, very slow. So I want locality.
But the interesting thing about graphs is that to get locality, and to get some of these other properties, you might have to do extra work. You saw that when the graph was divided into two subgraphs, I had extra nodes; I might need extra data structures and extra computation. So in some schemes I'm not that work-efficient: I might get really good parallelism and locality, but I'm doing too much work. For example, suppose I want to find one node's neighbors. To get good parallelism, I could have every node find its neighbors, but that's not efficient, because most of that computation isn't useful. So there are ways of doing extra work that make other things much faster, but you have to be careful, and there's a balance.

Different algorithms sit in different places in this trade-off space. A push algorithm sits over here. If you go to something like a pull algorithm, you're less work-efficient, because you might do a bit more work, but it can be better on locality and parallelism because you don't need locks. If you do partitioning, you get really good locality, but you're doing extra work, and partitioning can also limit your parallelism; you trade some parallelism for really good locality. As you keep adding more techniques, they all land somewhere in this big trade-off space.

How do you decide where to go in that space? That's a very important decision. It depends on the graph: with a power-law graph you might do one thing, with a more bounded-degree graph something else, and in a power-law graph you might even treat the highly connected nodes differently from the others.
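The locality-versus-extra-work trade-off can be seen in a tiny sketch (again my own illustration, not GraphIt output): process destinations block by block so one block of the accumulator stays cache-resident, at the cost of rescanning the edge list once per block. The block size here is only for illustration:

```python
# Partitioned gather: better locality on acc, but redundant passes
# over the edge list (the "extra work" in the trade-off).
def gather_partitioned(edges, vals, n, block=2):
    acc = [0.0] * n
    for lo in range(0, n, block):
        hi = lo + block
        for src, dst in edges:       # full edge scan per partition
            if lo <= dst < hi:
                acc[dst] += vals[src]
    return acc
```

A real implementation would pre-bucket the edges per partition instead of rescanning, which is itself extra memory and preprocessing work; either way, the result is identical to the unpartitioned gather, only the access pattern changes.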
Or you might not differentiate between them at all; it depends on the algorithm. If you are visiting all the nodes, versus running a data-driven algorithm, you might do something different. It also depends on the hardware you're running on. For example, if you're doing Google-search-style indexing, you're running an algorithm that has to operate on the entire graph, the graph is a power-law graph, and you're running on a cluster; the right thing might be something like a pull schedule with some partitioning and some vertex-parallel scheme, and that combination might give you the best performance. On the other side of Google, if you're doing maps and trying to give directions, you have a very different type of graph, you're running a data-driven algorithm on it, and you might be on a single machine because you need each individual query answered fast; there, a push algorithm with vertex parallelism, or some combination like that, might be right. And of course if you pick a bad way of doing it, you can be very bad: hundreds or thousands of times slower than the best you can achieve. So it matters to find the right way of doing things.

This is where GraphIt comes in. GraphIt is a domain-specific language we developed, and the key observation was: the algorithm is mostly constant, but how you go about processing it varies a great deal, so we separate the two. The first part is the algorithm: what do you want to compute? It's very high level; it doesn't say how to compute anything, just "this is my algorithm, these are the nodes I'm processing, and this is the computation I want to do." That is separated from the optimization, the schedule: how to compute it. You say separately, "For this algorithm, use a push schedule, do this and that."
And the nice thing is that now, if the graph changes or the machine changes, I can just give you a different schedule.

Let me show some examples, starting with the algorithm language. There are three kinds of things you might want to do. You might want to operate on the entire graph: for that, the language provides a very simple construct that applies a function over all the edges of the graph; the function takes the endpoint vertices of each edge and carries out the computation. Very simple, and that's the point: the simplicity of programming. If you wrote it in C it would look like a big blob of ugly code; in the domain-specific language, this is all you write. Or, if you're writing a data-driven algorithm, you say, "I start with this set of vertices," and then you can do some filtering, because you might not visit everybody; once you've figured out exactly the elements you're computing on, you give a function to apply to them. So I get a very nice way of selecting subsets of the graph with certain properties and computing on them. And if you're only operating on vertices, you say "for each vertex," again with an optional filter to select a subset, and hand it the computation. Language-wise it's very simple; this is all you have to do.

Now look at PageRank. PageRank has two update functions. One updates along the edges: the new rank of an edge's destination gets updated using the contribution of the source. That's a very simple edge update function.
Then, for each vertex, I do the internal update. I take these two functions, put them together in a driver, and the driver says: apply this function over the edges, apply that function over the vertices, and I'm done. So I can write this code at a higher level, much simpler, much more elegant, and much easier to understand than even straightforward C++ code doing the same thing. That's the first advantage of a domain-specific language.

The next thing is the schedule. The schedule should be easy to use, and it should be powerful: I should be able to get the best speed possible, because I can express all the crazy things one might do to the code. So here's my program for PageRank, and for this algorithm I can provide a schedule. The schedule says: look at this statement, S1, which I marked in the source; for S1, I want a sparse push type of computation, this is how I want to process it. From that, GraphIt generates pseudocode that looks like this: first loop over the source nodes, because I'm pushing from source to destination, then loop over all the destinations of each source and update them. That's the simple version, but it might not get good performance, so I say, "I want to run this in parallel." When I do that, it automatically makes the outer loop parallel, and now I can't do a simple update anymore; I need an atomic add, so there's the atomic add operation in the generated code. Then you might think: hmm, do I want push, or can I do pull? If I switch to a pull schedule, it flips the loops: now I go from destinations to sources, the order changes, and I don't need the atomic update anymore, because I'm pulling everything into the node I own.
I'm the only one writing the node I'm updating. And if I want some kind of partitioning, I can say that too: now it creates a subgraph structure and processes each partition. I can keep changing all these things, and notice that I never touched the algorithm; it stays the same while I play with the schedule.

The nice thing is what happens to performance as you play with the schedule. The first version was sequential: pretty bad performance. The next was just parallelized: some speedup, but with all the synchronization. Then I changed the order of execution, push to pull, and got even better performance, and then I partitioned and got more performance still. That was one sequence of steps, but you can play with many, many combinations, and GraphIt gives you a huge number of them. There are a lot of different optimizations: direction optimization, push versus pull; different parallelization schemes; cache and NUMA optimizations; and data layout, things like structure-of-arrays versus array-of-structures, or additional data structures that simplify the computation. All of these I can specify in the schedule, and it's not clear up front which one wins; it depends on the algorithm, the graph shape, the graph size, and the machine you run on. Most of the time, if you're a performance engineer, you try different things, look at the performance, and say, "Hey, this isn't getting good cache behavior, let me try it differently." You iterate, and here those iterations are fast.

Let me show you a bit of the results. This is, I admit, a slightly complicated chart. We ran a bunch of benchmarks against a bunch of different graph frameworks. For example, here's PageRank run on the LiveJournal graph.
In this chart, a 1 means that framework ran the fastest; this one ran about 8% slower, this one about 50% slower, this one 3x slower, and this one 8x slower. The interesting thing is that as you add different graphs, the picture changes. In fact, even though we ran fastest on LiveJournal, on this road graph, a very different type of graph, another framework got the fastest result, because it happens to do something that suits that graph better. The deeper point is that most other frameworks have a couple of built-in strategies; they don't give you the ability to try all these optimizations. They said, "Aha, we know this is really good, we'll do that," and it works for some things but not for everything. So if you look across breadth-first search, connected components, and shortest-path algorithms, you find that some frameworks are good some of the time and really bad at other times; on some algorithms, or some types of data, they can be really bad. This one was quite good on this data set but really bad on that one, and not good on this algorithm. We are good most of the time, across the board, and the reason is that we don't hard-code decisions in GraphIt: it gives you the ability to try different things, and depending on the graph and the algorithm, some optimizations work better than others. This is exactly what you've been doing in this class: trying different optimizations by hand. The difference is that every time you wanted to try an optimization, you had to change the entire program to make it work; here you change one line of the scheduling language, recompile, run, measure, and you can iterate fast. Any questions so far, before I switch gears?

OK, so I'm going to switch to another domain-specific language, and you'll find a lot of similarity, a lot of parallels; this was intentional. I could have talked about many different domain-specific languages.
But I picked one that is almost a mirror image of what we just saw, and hopefully you'll see the pattern; after this I'll ask you what the patterns are. This language is Halide, originally developed for image processing. Where GraphIt focuses on sparse graph data structures, Halide focuses on images: dense, regular structures with regular computation over them, processed in very complex pipelines. A camera pipeline, for example, runs many complex algorithms on the image between the bits coming out of your sensor and the beautiful picture you see on Facebook. The primary goal of Halide was to match and exceed hand-optimized performance, that was the bar, while reducing the rote programming a performance engineer normally has to do to get there, and also increasing portability, the ability to move the program from one machine to a different one.

Let me give you an example: a 3x3 blur. Two loops go over the image in the x direction and compute a blur, averaging the three values next to each other; then two more loops take the result of that and do the same in the y direction. A very simple filter you might want to run on an image. You can run this, it's valid C code, but if you want performance, you want to generate this other version, which runs about 11 times faster. It's tiled, it has fused the two loop nests, it's vectorized, it's multithreaded; it turns into something I'll get to a little later, and it basically reaches roofline-optimal performance. That means it's using the machine's resources to the max: this computation is dominated by floating-point operations, and the floating-point units are running at max throughput.
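The unoptimized two-pass version can be sketched in plain Python (my own paraphrase of the C code on the slide; borders are left unhandled as a simplification):

```python
# 3x3 blur as two passes: horizontal averages into tmp, then
# vertical averages of tmp into out. Interior pixels only.
def blur(img):  # img: list of rows of floats
    h, w = len(img), len(img[0])
    tmp = [[0.0] * w for _ in range(h)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):        # blur in x
            tmp[y][x] = (img[y][x - 1] + img[y][x] + img[y][x + 1]) / 3
    for y in range(1, h - 1):
        for x in range(w):               # blur in y over the x-blurred result
            out[y][x] = (tmp[y - 1][x] + tmp[y][x] + tmp[y + 1][x]) / 3
    return out
```

Note the structure the lecture is pointing at: the first pass writes the entire `tmp` image before the second pass reads any of it, which is exactly what tiling and fusion in the fast version avoid.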
So there's not much else you could do to that one, but you have to write it, and that is not easy. This project started some time ago. The person who did it went to Adobe, and they had this thing called a local Laplacian filter in Camera Raw, Lightroom, and Photoshop. The reference implementation was about three hundred lines of code, but the implementation they used was about fifteen hundred lines. It took three months of one of their best engineers' time to get to that performance, and it made sense, because that trial-and-error work made this piece of code 10x faster; it's a non-trivial piece of code, and few people could do that. The student, Jonathan, who's now a professor at Berkeley, in one day and in sixty lines of Halide was able to beat the Adobe code by 2x. In those days Adobe didn't generate any code for GPUs, because they decided GPUs change too fast and they couldn't keep updating for every GPU generation; because of that, the Adobe applications were not using the GPU, so Photoshop wouldn't use a GPU even when your machine had one. Jonathan still had some time left in the day, so he said, okay, let me try GPUs. With basically the same code, he changed the schedule for GPUs and got 9x faster than the fastest version Adobe had ever had for this piece of code. So how do you do it? The key principle here is decoupling the algorithm from the schedule. The algorithm is what is computed; it defines the pipeline as very simple pure functions. Their execution order, parallelism, all those things are left to the schedule. The Halide pipeline for the blur filter just looks like this: take the image, blur in the x dimension, then blur in the y dimension. That's all. There's no image size, because it operates on the entire image, and you don't have loops in here; that's all you are saying there.
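The decoupling can be mimicked in a few lines of Python (a sketch of the idea only, not Halide's real API): the algorithm is two pure point-wise functions, and two hypothetical schedules, inline everything versus materialize blur_x first, change how the work is done but not the answer:

```python
# Algorithm: WHAT is computed, as pure point-wise definitions.
def blur_x(inp, x, y):
    return (inp(x - 1, y) + inp(x, y) + inp(x + 1, y)) / 3.0

def blur_y(bx, x, y):
    return (bx(x, y - 1) + bx(x, y) + bx(x, y + 1)) / 3.0

# Schedule 1: inline -- recompute blur_x at every use (locality, redundant work).
def schedule_inline(inp, w, h):
    bx = lambda x, y: blur_x(inp, x, y)
    return [[blur_y(bx, x, y) for x in range(w)] for y in range(h)]

# Schedule 2: root -- materialize all of blur_x first (no redundancy, less locality).
def schedule_root(inp, w, h):
    stored = {(x, y): blur_x(inp, x, y)
              for x in range(w) for y in range(-1, h + 1)}
    bx = lambda x, y: stored[(x, y)]
    return [[blur_y(bx, x, y) for x in range(w)] for y in range(h)]

# A tiny clamped input image; both schedules give identical pixels.
data = [[(3 * x + 5 * y) % 7 for x in range(6)] for y in range(5)]
inp = lambda x, y: float(data[min(max(y, 0), 4)][min(max(x, 0), 5)])
assert schedule_inline(inp, 6, 5) == schedule_root(inp, 6, 5)
```

The point of the separation is exactly this invariant: the schedule can be changed freely while the algorithm guarantees the output.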
Then you couple it with the schedule, which says when and where it's computed. The schedule has to be simple enough that you can actually say what you need, and powerful enough that you can match hand-optimized performance or better. This should look a little familiar, because all the work you do for performance fits into this genre: you are trading off locality, parallelism, and redundant work. That's what you look for here. So let's look at those three things. First, parallelism: you need to keep the multicore and vector units busy, and probably the GPU too. But too much parallelism is not going to help you; nobody is going to take advantage of the extra parallelism. Look at this piece of code: assume I say I'm going to run all of these in parallel, and then all of these in parallel afterwards. If you have three cores, great, I have a lot more parallelism, six-way parallelism, but nobody's going to use it; it's not that useful. On the other hand, if you run it like this, one at a time, you have parallelism of one; that's not good either, because you are not going to keep the machine busy. What you really want is something in between: running parallel sets of three might be the best way to run on that machine to get the best performance. You don't want too much, you don't want too little; you want exactly the right amount. The next interesting thing you need is locality. Normally when you do image processing, one filter changes everything in the image, then the next filter goes and changes everything in the image again. So what happens in that case, when one filter runs over the entire image and then the next filter starts reading the entire image? Say I do my first color correction, and I will
do some kind of chromatic aberration correction afterwards. So what happens if you do something like that: one filter processes the entire image, then the next filter takes the image and processes the entire, whatever, multi-megapixel image again. Right: you lose the cache. If the images are large, they don't fit in the cache, so it's not great to do it this way; you won't find things in the cache. Or assume I go like this, processing the entire first row before going to the second row. What happens now: when I start to touch this element, I need to read these two values, and the last time I read them was before I went through the whole rest of the image. By the time I reach here, those two might be out of the cache, and when I go back there, oops, it's not in the cache; I have a problem. The right way might be to run it this way instead: when I touch this element, I need these three values, and the last time I read this one was in the previous iteration, just before; I just touched it, so the next one uses it, and the next, and after my window passes I never touch it again. I have really good locality here, so I want to operate that way. Now, redundant work is a very interesting thing: sometimes, to get both locality and parallelism, you might have to do a little extra work. Assume I have to process these elements in parallel: these three need all of these four elements in the list, and these need those four. If I want to run these two groups in parallel on two different cores, it might be better if both calculate the two shared values, because then I don't have to synchronize. I can say: the left one calculates its four values.
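The cache effect described above can be illustrated with a toy LRU simulation (an illustrative model with made-up sizes, not a real memory system): if one stage touches every pixel of an intermediate image before the next stage re-reads it, every re-read misses, while a fused, produce-then-consume order hits every time:

```python
from collections import OrderedDict

def lru_hits(trace, capacity):
    """Count hits for an address trace on an LRU cache of `capacity` lines."""
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)
        else:
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits

N, C = 1000, 64  # pixels in the intermediate image, cache lines; image >> cache

# Stage-at-a-time: stage 1 writes every temp pixel, then stage 2 re-reads them all.
unfused = list(range(N)) + list(range(N))
# Fused: each temp pixel is re-read immediately after it is produced.
fused = [a for i in range(N) for a in (i, i)]

assert lru_hits(fused, C) > lru_hits(unfused, C)
```

With these sizes the fused order hits on every re-read and the stage-at-a-time order on none, which is the gap the lecture's traversal pictures are about.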
Then I can do the three; the right one calculates its four values and then does its three. I can do that fully in parallel, but now the middle two values get calculated twice, because both sides need them. So sometimes, to get everything else, I might have to do some work redundantly. The way to look at this is that I can put it into a scheduling framework. One axis is computation granularity: coarse interleaving means I finish everything before I go back; between two stages, I finish this one entirely before I go to the next. Fine interleaving means I process one element, then go back and forth between the stages. The other axis is storage granularity: very low storage granularity means I calculate something and don't remember it; the next time I want it, I recalculate it. Very high storage granularity means once I calculate it, I remember it forever: any time you need that value, I have it for you, but that means I have to get it to you from wherever I calculated it. Low storage granularity means I calculate, I use, I throw it out; if anybody else wants it, they recalculate. So now you can have many different computations at different points in this space. If you compute here, the schedule in the scheduling language means I run this stage entirely and then that one: no redundant computation, very coarse-grained interleaving. Or you can go very fine-grained here: I calculate this one, and these three again, and these three again, so everything is calculated multiple times; I recalculate every time I need something and store nothing. I have a lot of locality, but I am doing a lot of recomputation. And then here you have something like a sliding window.
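The overlapping computation just described can be sketched as a two-stage pipeline split into two tiles, each locally recomputing the stage-1 "halo" it needs so the tiles never synchronize (the stage functions and split point here are made up for illustration):

```python
def stage1(a, i):                     # producer: any pure per-element function
    return a[i] * a[i]

def serial(a):                        # reference: one stage fully after the other
    b = [stage1(a, i) for i in range(len(a))]
    return [b[i - 1] + b[i] + b[i + 1] for i in range(1, len(a) - 1)]

def tile(a, lo, hi, counter):
    """Compute consumer outputs lo..hi-1, recomputing the stage-1 halo locally."""
    b = {}
    for i in range(lo - 1, hi + 1):   # one extra stage-1 value on each side
        b[i] = stage1(a, i)
        counter[0] += 1
    return [b[i - 1] + b[i] + b[i + 1] for i in range(lo, hi)]

a = list(range(10))
counter = [0]
out = tile(a, 1, 5, counter) + tile(a, 5, 9, counter)  # two independent "cores"
assert out == serial(a)               # same answer as the serial pipeline
assert counter[0] == len(a) + 2       # the two shared halo values were computed twice
```

That `+ 2` is the redundant work: a small price for letting the two tiles run with no communication at all.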
Basically you are not recalculating anything, but you are sliding along, and you have a little bit less parallelism. And you can capture the entire spectrum in between: you can get different levels of fusion of these tiles, so I don't recalculate everything, I carefully recalculate a few things; these two get recalculated; you can trade all these operations off against each other. So here's the interesting thing: I'm showing you different schedules at different points in this space, and running them; this is run time. This one has a lot of redundant work and good parallelism but poor locality; that one has locality but not enough parallelism. And here are some intermediate points: a good balance between locality, parallelism, and some redundant work seems to do really well; this one finishes fastest. So what you do is write different schedules, keep running them, and figure out which schedule works. That's the trial-and-error part you get to do here. Here's an example of what's going on: a bilateral filter computation. The original is about 102 lines of C++ code with good parallelism, but we could write it in just tens of lines of Halide, and we got about 6x faster on the CPU. The best implementation, though, was one somebody hand-wrote for a paper on GPUs, and what it did was give up some parallelism for much better locality. If you give up some parallelism for much better locality, and we can optimize for that too, we get faster than their hand-written version as well. So we can change the tradeoff. And here's another algorithm, one doing segmentation, written in MATLAB. MATLAB is a lot fewer lines of code, of course, but in Halide it's a
little more code, because you're not just calling library functions; and the Halide was 70 times faster, and if you write the GPU version, versus MATLAB it's hundreds of times faster. It's not because you are running bad MATLAB loops; in fact, MATLAB was calling very well hand-optimized libraries. But the problem with calling libraries is that there's no locality across the calls. I call a really fast library routine, it runs really fast; then I call the next routine, and it has to touch the entire image again, and by now my image is completely out of the cache. So between these very fast library calls you keep bringing the image back into the cache. With something like Halide, you can fuse the library functions together and say: I take this line of the image and do everything on it before I move to the next one, so I can run much faster. My feeling is that each individual function was probably faster in MATLAB, because it's a hand-written, really fast routine, but the movement of data, the cache effects, was really slowing it down. And here's the one I showed before. This is a very complicated algorithm, what you'd call a pyramidal algorithm. You take an image, divide it into a bunch of blocks at each level of the pyramid, do some computation, some lookups, some up-sampling, some additional computation, and you create smaller and smaller images; basically you build a pyramid. To do this right is not simple, because at each level the balance is different: when you have a lot of data parallelism, parallelism is not that important at that point, because you have it anyway, so you should focus a lot more on locality; but when you get to the smaller images, parallelism matters. So you have to come up with very interesting balances
between those, and many, many things at every level; there aren't just three choices, there are hundreds of different levels. The nice thing here is that you can play with all of these concepts and figure out which actually gives you the fastest performance. Now a little bit of, I would say, bragging rights for Halide. Halide left MIT about, I think, six years ago, and right now it's everywhere in Google: it's on Android phones, it started with Google Glass, which doesn't exist anymore. In fact, all the videos uploaded to YouTube right now go through front-end processing, and that processing pipeline is written in Halide. They switched to Halide because the Halide code was, I think, four or five percent faster than the previous version, and four or five percent faster, for Google, is a multi-million-dollar saving, because so many videos get uploaded. Recently there was a Photoshop announcement saying there's an iOS version of Photoshop from Adobe; they just announced it, I don't think it's even out yet, and the Photoshop filters in this new version are written using Halide. Qualcomm released a processor, the Snapdragon image processor, built to do image processing, and the programming language for that processor is basically Halide: you write the code in Halide, so it's kind of the assembly level they make available for it. Intel is using it too. So there's a lot of use of this system at this point, which is really fun to see: an academic project getting to the point where it's very heavily used. Part of that is because it's very useful: people realize they need to optimize these codes, because for cameras and such, performance matters, and instead of having some poor engineer spend months in a corner just trying things out, you can try the same things and a lot
more, by doing it faster. Okay, so let me ask you a question: between Halide and GraphIt, did you find a bunch of similarities? I want to hear what interesting similarities you found between these two projects. Part of it is that a lot of the time, compilers are a black box: we know everything, just feed us your program and we will give you really fast code. The problem is they're never the fastest, so if you really care about performance, you get 90% of the way there and then you get really frustrated: now what do I do? These projects instead said: I am not going to know better than you what to do, but I will make your life simpler. We still want the performance engineer; it's not that a person who doesn't understand performance feeds in code and gets fast code out. We need the performance engineer, but we will make the performance engineer's life easier. Both of them said: we need performance engineers, we don't know how to automate all of this, that's too much complexity, but we will let the performance engineer say what to do, and we'll make saying it very simple. What else? [Music] Right, and that is pretty different: algorithmic optimization is one level higher, and we don't do it. You can say, I have a better algorithm: I don't want to do a quicksort, I'll do an insertion sort, because insertion sort might be faster for a certain class of inputs. That is an algorithm-level change we don't do. Or, and this happens a lot in machine learning, you can say: if I just drop a number here, I don't have to get the computation exactly right; I don't have to calculate everything; if I calculate for 10 samples, that's good enough. Changes like that you can't do automatically, because they're very contextual. For example, in machine learning, a lot of the time there's no right answer, only a good answer, and sometimes good means you can skip certain things
and you need to find which things you can safely skip to get a huge benefit without losing much. At that level, we can't do it; that's the next level up. How would somebody say, look, I can train it for 10 iterations and 10 is good enough? If your code is written to train for 100 iterations, I can't tell you that 10 is good enough; that decision has to be made at a much higher level than I can. That said, there's an interesting level that can still exist on top of this, which we can't automate that easily, but we might be able to make a language, like the schedule language, that gives you options: try some of these things that actually change the algorithm, but within bounds where I will still give you the same answer. Any other things you guys thought were interesting? How about from somewhere over here? Back there, yes. Right: a lot of trial and error. In modern computer systems everything is extremely complicated; there's no single right way of doing things. When you look at a pretty large piece of code, there are caches, parallelism, locality, a lot of things that can go right or wrong, so you might have to try out many things. If you always know the answer, if you can come up with exactly the right thing every time, that's amazing, but even the best performance engineer can't look at a piece of code and say, aha, I know your solution. You do a lot of these experiments, and you've probably figured that out in most of your projects: it's not that you knew what to do; you did many different trials, and I know a lot of the things you tried had no impact or slowed the code down, and you said, well, that didn't work. All the ifdefs you leave in your code show us
all the crazy things you tried where nothing happened. So that's the interesting thing. What else? Anything else you'd put a little differently? [Music] So, an interesting question: are there any other domains that don't yet have something like this? People are working on similar things for machine learning these days; that seems to be the domain, and the TensorFlow people and others are trying to build similar frameworks. The way I have operated is that I go talk to people, and sometimes you find this poor graduate student or postdoc who wants to do research but is spending all their time basically optimizing their code, because they can't get the performance; that might be a good domain. You find these people in physics, you find them in biology, and I am actually talking to some of them, because, for example, in biology, the state of the art in gene sequencing involves very similar things you have to do over and over, and they seem to spend all their time writing code and managing code complexity. Can you do something there? The key thing, and this is a nice thing about MIT, is that there are a lot of very smart people in many different domains trying to push the state of the art, who spend all this time cursing in front of a computer program, not because they don't know the algorithm, but because of the amount of data they have to deal with. Astronomy: they get this deluge of data from multiple telescopes, and most of them know what they have to do; they just can't write a program that does it fast enough. So there might be domains like that. And there might be domains from applications, and domains from patterns, like sparse matrices or graphs; it's not only single applications. If a pattern works in multiple places, there might be other
patterns like that, and if you want to do research, this might be an interesting way of doing it. I have spent my life finding different domains, and the people who spend their lifetimes hacking on them, and telling them: okay, let me see whether we can do some nice abstraction. Anything else you found interesting? For both of these systems, what's the space they operate in to optimize programs? What I'm asking is: what are the things you try to optimize? There's a nice space of three things: parallelism, locality, and redundant work. My feeling is that as a performance engineer, that's going to be your life. If I added one more, there are algorithmic changes that completely eliminate work; but most of the time, all of you, as performance engineers, will be working on some kind of multicore, vector, or GPU-type unit, so you have to get parallelism. Getting parallelism is important, but if you don't have locality it doesn't matter, because most of the time you're waiting for data to come all the way from memory, so you have to get good locality. And a lot of the time you can do both really well by doing some extra computation, but if you do too much extra, that's not going to help you either. So it's all about playing in this space. In your final project, that's exactly what you're going to do: you might say, if I do some extra work, I can do this part faster, but oops, now I can't amortize the cost of this extra pre-compute pass. These are the three things you're trading off; that's one interesting thing. The other thing: we made scheduling available to programmers through a scheduling language, but can you think of a way to make it a little easier for programmers than writing the schedules themselves? What's the nice thing
about scheduling languages? They're very simple; they have a very simple pattern. Right: it's the bounded number of options. It's not like you can write any program; there are only certain things you can do in a schedule, and if you know that space, you can search it. So what else can we do? Test them all — that's one approach: we can do auto-tuning. And that's the switch to the auto-tuning part of this talk. Performance engineering, most of the time, is getting crazy things right. I think Charles probably talked about these voodoo parameters: what's the right block size? It has a big impact, and you have to find it. Memory allocation: you have to find the right strategy among a bunch of choices. Even in the GCC compiler there are, I think, about 400 different flags, and you can actually get a factor of four in performance by choosing an esoteric couple hundred flags for GCC. It's crazy, and those flags are not the same for every program; of course some programs will crash under some combinations, but most of the time flags just slow things down or speed them up. You can't just turn on all the flags of GCC; you have to iterate on them, and you can get a factor of 2 to 4 in performance, which is just crazy, because -O2 or -O3 will only do a certain amount; -O3 is not always right if you could try all these other things — but you can't try them all by hand. And the schedule in Halide, the schedule in GraphIt, all of these can be auto-tuned. So before auto-tuning, when you have a large search space, what do we normally do? The thing we do when we think we are smart is build models. We have a model for the cache; we say we understand the cache, we have all these nice models, and using the model I can predict: aha, this is the right block size. So what's the problem? Exactly: most of the time, when you build a model, you have to abstract things away, and the thing you
chose to abstract away might be the most important part of the darn thing. We built a model for the cache, but oops, we didn't account for paging, and paging made a big difference. There can be things in real life that matter that didn't fit into your model, or that you didn't know needed to be in the model; if you try to put everything in, the model becomes too complicated, so you abstract something out, and then you can say you have the optimal result for the model, but that optimal might be way off from even a simple result you could have gotten, because the things you left out of the model were the important ones. So models don't really work. The next thing is heuristics. This is where the old hands come and say: I know how to do this; to do this you do this thing, this thing, and this thing. It's a kind of grandmother's-rule solution: certain things that always work, and you hard-code them. You can say, if the matrix dimension is more than a thousand, it's always good to block, or rules like that. These rules work most of the time, but there are exceptions where the rules do much worse, and the rules might be set for a certain machine. Let me give you a story. GCC has a fast sort routine that sorts using a parallel quicksort, and when the run size gets down to 16, it switches to insertion sort; it's hard-coded. Wow, some amazing person figured out that 16 is the magic number at which to switch from parallel quicksort to insertion sort. We tried to figure out the profoundness of this number, and the profoundness is: somewhere around 1995, when this code was released, on the machines of that day, 16 was a really good number for that switch, because of cache sizes and things like that. But that 16 survived from 1995, and I think it's still there even today.
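To see why that constant is really a tuning knob, here is a toy hybrid sort (not GCC's actual implementation) where the switch point is a parameter; every cutoff returns the same sorted output, so an autotuner is free to pick whichever value is fastest on the current machine:

```python
def hybrid_sort(a, cutoff=16):
    """Quicksort that switches to insertion sort below `cutoff` elements.

    The best cutoff is machine-dependent: exactly the kind of hard-coded
    constant an autotuner should choose instead of a programmer in 1995.
    """
    if len(a) <= cutoff:                     # small run: insertion sort
        for i in range(1, len(a)):
            j, v = i, a[i]
            while j > 0 and a[j - 1] > v:
                a[j] = a[j - 1]
                j -= 1
            a[j] = v
        return a
    pivot = a[len(a) // 2]                   # three-way quicksort step
    lt = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    gt = [x for x in a if x > pivot]
    return hybrid_sort(lt, cutoff) + eq + hybrid_sort(gt, cutoff)

data = [5, 3, 8, 1, 1, 9, 2, 7, 4, 6]
for cutoff in (0, 4, 16):                    # the answer never depends on the knob
    assert hybrid_sort(list(data), cutoff) == sorted(data)
```

Only the running time depends on the cutoff, which is what makes it safe to search over.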
Today that number should probably be something like 500, but it's in there because somebody thought 16 was right; it's hard-coded, and they never changed it. There are a lot of things in compilers like that: at some date, some programmer said, I know what works here, and put it in, and now there's no rhyme or reason to it; at the time they had a reason, but it doesn't scale. A lot of these heuristics go stale very fast, and there's no theory behind them telling you how to update them. You'd have to go ask, why did you put that there? Oh, because my machine had 32 kilobytes of cache. Oh, okay, that's a different machine from what we have today. So that's the problem. The other thing you can do is search: I will try every possible thing. The problem is that sometimes the search space is 10 to the power 10; you don't have enough seconds in your lifetime to do that, so exhaustive search is too expensive. And that's where the auto-tuner comes in. Actually, I have a few more slides on this. With model-based solutions, you come up with a comprehensive model, like a cache model, and you can say exactly what's optimal for the model; but the problem is that models are hard to build, you cannot model everything, and most of the time the model misses the most important thing. Heuristics are the rule-of-thumb solutions you hard-code; they're simple and easy and work most of the time if you get them right, but they're too simplistic and don't stand the test of time. Exhaustive search is great, but there are just way too many possibilities; the search space is too big. So you want to prune the search space, and the best way to do that is auto-tuning. What the auto-tuner does is: first, you define the space of acceptable values,
then you choose a value at random, try it out, and evaluate the end-to-end performance with that value. End-to-end matters, because if you try to predict from pieces, most of the time it won't work. If it satisfies the performance you need, you are done; otherwise, choose a new value and iterate, going back to step three. That's the loop, and what you need is a system that does it fast: what space to search, how to decide you are done, and how to iterate through it. In cartoonish form, what happens is: you pick a candidate value, you compile the program, you run it with a bunch of data — you have to run it with enough inputs; you can't run it with just one — you get the results, you compute something like the average, and you go around this loop. What OpenTuner has done is come up with an ensemble of techniques. The idea is that when you're searching a space, you might be at the bottom of a hill: there's a direction in which the value keeps getting better and better, and at that point something like a hill climber, say a Nelder-Mead-style climber, gets you improvements very fast. But once you arrive at the top of the hill for those parameters, there's no place to go, and hill climbing stops being helpful; at that point you want to do something like random search. So OpenTuner tests these techniques against each other, and if one is doing very well, it gives it more time; if not, it says, this technique is not working, let's try something else, based on the measured results. So it does the search much faster than you otherwise could.
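That loop, stripped to its bones, is just this (a bare-bones random search; OpenTuner layers its ensemble of techniques, hill climbers, evolutionary search, and so on, over the same evaluate-and-iterate skeleton; the cost function below is a made-up stand-in for compiling and timing a program):

```python
import random

def autotune(cost, space, budget=300, seed=0):
    """Minimal random-search autotuner: sample the space, keep the best."""
    rng = random.Random(seed)
    best = rng.choice(space)
    best_cost = cost(best)                 # "compile, run, measure"
    for _ in range(budget):
        cand = rng.choice(space)           # choose a value from the space
        c = cost(cand)                     # evaluate end-to-end performance
        if c < best_cost:                  # keep it only if it improves
            best, best_cost = cand, c
    return best, best_cost

# Made-up cost surface standing in for "run time as a function of block size".
cost = lambda bs: abs(bs - 48) + 5
space = [2 ** k for k in range(1, 11)]     # candidate block sizes: 2..1024
best, best_cost = autotune(cost, space)
assert best_cost == min(cost(s) for s in space)
```

With only ten candidate values the budget easily covers the whole space; the interesting cases are the ones the lecture describes, where the space is far too big to enumerate and the search strategy is what matters.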
So I want to finish by showing what we did for auto-tuning GraphIt. You have an algorithm and you have a schedule, and it's a pain to write the schedule. In fact, there's an interesting parallel from Halide: when we did Halide, we decided it should be simple to write the algorithm and the schedule. Google realized very fast that many people couldn't use Halide: at one point, after about two years, they had hundreds of programmers who could write the algorithm, but only about five people who could write a really good schedule. To write a really good schedule, you need to understand a little bit of the algorithm, a little bit of the architecture, a little bit of everything, and that's much harder for people to learn. So getting the schedule right is not easy, and it's the same thing here, because you need to understand a lot; unless you search at random, to direct the search you need to know more. What we can do instead is give the system some idea about the graphs and some idea about the algorithm, auto-tune, and generate the schedule. What we found was that to generate the schedule with exhaustive search takes days, but with OpenTuner you can find a really good schedule in less than two hours, and in fact, in a few cases, it found schedules that were better than what we thought was the best possible schedule, because it was able to search beyond our intuition, and even where our intuition was right, it had time to try many more combinations and come up with something better. So that's all I have today.
A lot of times it's basically very hard, especially from the compiler's point of view, to take advantage of those properties, because the compiler has to work for everybody. So a domain-specific language has a lot of engineering benefits: if we know
that you are what you are building has a certain shape certain set of properties if the language capture this and if you are building on that it could be much easier to build it should be had a lot of clarity it's very easy to maintain that kind of things very easy to test and also the other thing is it's very easy to understand because the domain is very clear it's described if you can't build a library but somebody can go and do weird things in libraries if additional is if it is built into the language it's set in stone you can't go and say I'm going to change something here let me do some weird thing here it's it's built into the language so it's in stress there it makes it much easier for program multiplied but for my point of view the domain-specific language I really like are the languages where I know I can take advantage of knowledge of the domain experts to get really good performance so lot of times domain experts are high in this domain I can do okay if there's some linear algebra you know this kind of algebra that I can do to simplify the expression that algebra might only work on that dumb it's very hard to put some complex algebra into a in to press c plus plus so c but in that domain i can say hi can code it up so you can write any expression that i can simplify it and also there are a lot of idioms in each domain so some domain might say okay look i am going to represent a graph that i'm going to talk about in the normal free plus plus you put a bunch of classes you do this very complicated things it's the idiom is hidden in there first of all the c++ doesn't know that i had to look for graphs but even if you are look for graphs it's if you can try graphs in hundreds of millions of ways but if the if it is a first class support in the language i don't have to work heroically to extract that it's there i can easily see that so most of my compiler can be doing useful things in there and most of the time the other thing is if you build a 
domain-specific language right you can leave the complex the lower-level decision to the compiler in a few C++ you might be tempted to say yeah I know some of two minuses and let me do something here let me do some of the optimizations here so I have been working on optimization my all my life and lot of times when you write a compiled optimization fast you spent half of for more than half of your time undoing the crazy optimization the programmer did like you guys already so this you think you know better you go do something and that might work well then but believe me that code survives 20 years later and 20 years later that looks like a really stupid thing to do and then you look at and say okay now I had to undo everything in the compiler to do the right thing in the current architecture and because of that if you capture the right level I will let the compiler do that do the work in here and then as the architectures keep maturing as the problems keep changing I don't have to worry I don't have to undo these parts in here so so again I'm coming to the performance engineering class and telling you guys leave the performance to the compiler but that's nice thing that if the compiler can do the most of your work much nicer job so don't doubt the compiler so I'm going to talk about three parts in here one is three different programming languages in here domain-specific languages graffiti light and then open tuna which is not just in the language but a framework in here and between graph it and halide you will see some patterns and then we will see whether you you found the pattern that we are working on anything so gravity cells product that I work with Julian so if you have any questions of graffiti after today you can definitely gratitude and he knows probably more about graphs and graph it the mobile graph than probably anybody in this planet so so he sees a good resource to talk about graphs so talking about graphs graphs everywhere so if you go to something 
If you go to a maps application and ask for directions, the entire road network is a graph, and it's computing things like shortest paths in that graph to give you your route. If you go to a recommendation engine and get a recommendation for a movie — if you get a really cool movie you like — that's because there's a huge graph connecting everybody, the movies they've watched, and their likings for those movies, and the system is comparing you to them and recommending accordingly. That can all be modeled as graphs. Even if you go to an ATM and do a transaction, there's very fast graph analysis in the back asking, "is this a fraudulent transaction or not?" Before the money pops out of the machine, it has run a bunch of graph processing over past transactions and their connectivity to decide that this looks like a good transaction, so it will actually give you the money; sometimes you get one of those other messages, and that means the graph processing decided there might be something weird going on. Some of these, like maps directions and ATM transactions, have very tight latency requirements: you have to get the answer fast — especially if you take a wrong turn, you need the next set of directions before you run into more bad traffic. Others, like recommendations and Google search, build huge graphs — the entire web — and do a huge amount of processing over them. So performance matters a lot in these graph applications.

Let me dive a little deeper to show what graph processing means. One of the very well-known graph algorithms is called PageRank. How many of you have heard of PageRank? OK — what does "Page" stand for in PageRank? Larry Page. So the first algorithm Google used — I don't think it's anywhere near what Google does at this point — was PageRank. It does rank pages, but it was developed by Larry Page; so it's either web pages or Larry Page, we don't know, but people think it's Larry Page.

What this graph algorithm does is run some number of iterations — either a fixed maximum or until some convergence. First, each node looks at all its neighbors and calculates a new rank out of their contributions: how good are my neighbors, what's their rank, and what's their contribution to me? Being connected to a very well-known node — in this case, a web page — means I am highly ranked; I'm more influential because I'm close to something important. So each node calculates a value and propagates it to all its neighbors, which aggregate it — the entire graph participates. Then each node computes its new rank: the old rank gets modified a little bit toward the aggregated value, and then old ranks and new ranks are swapped. Those are the two computations you iterate over for the entire graph.
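The two phases just described — propagate contributions along edges, then nudge each rank toward the aggregate and swap — can be sketched in a few lines of plain Python. This is my simplified illustration, not the optimized code on the slide; the 0.85 damping constant and the tiny example graph are assumptions for the sketch.

```python
# Hypothetical, simplified PageRank sketch (not the lecture's optimized code).
# `graph` maps each node to the list of nodes it links to.
def pagerank(graph, iters=10, damping=0.85):
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        # Phase 1: every node propagates its contribution along its out-edges,
        # and the whole graph participates in the aggregation.
        contrib = {v: 0.0 for v in graph}
        for src, out in graph.items():
            if out:
                share = rank[src] / len(out)
                for dst in out:
                    contrib[dst] += share
        # Phase 2: every node computes its new rank, nudged toward the
        # aggregated contributions; then ranks and new ranks are swapped.
        rank = {v: (1 - damping) / n + damping * contrib[v] for v in graph}
    return rank

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(g)
```

Note that every node here has at least one out-edge, so the total rank mass stays at 1.0 across iterations; a production implementation would also have to handle dangling nodes.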
Of course, you can run a direct implementation of this, but it will be very, very slow. If you want performance, you write this piece of code instead. This piece of code is huge, and it runs about 23 times faster than the previous version on a multicore machine: it's multithreaded, so it gets parallel performance; it's load balanced, because as you know graphs can be very unbalanced; if you have non-uniform memory access machines — multi-socket machines — it takes advantage of that; it takes advantage of caches; there are a lot of things happening in this piece of code. But of course, this is hard to write. Worse, you might not know what the right optimization is; you might have to try many things, and every time you want to change something — "let me do this a little differently" — you have to write a very complicated piece of code and get it all working before you can even test the idea. This is why we can use a DSL here.

Let me talk a little about graph algorithms, because this may be a new area for you: what do people actually do with graphs? One class of graph algorithms is called topology-driven algorithms; that means the entire graph participates in the computation. For example, before you can do a Google search, Google collects all the web links, builds a huge graph, and does a huge amount of processing to be able to answer searches. A recommendation engine, every few weeks or whatever the interval is, collects everybody's recommendations into a huge data set and processes all of it. These apply to the entire graph, and sometimes billions or trillions of nodes go into the computation.

Another set is called data-driven algorithms. There you start with certain nodes and keep going to their neighbors, and their neighbors, processing data as you go. The algorithms in this category are things like shortest path on a map: if I want directions from here to Boston, I don't have to go through the nodes in New York; I just have to go through the neighbors connected along the way. I'm operating on a certain region, following its connections. So I might have a huge graph, but my computation only touches a small part of it.

Now, when you're traversing a graph, there are multiple ways of doing the traversal, and this is why optimization is hard: there are many different ways of doing things, and each has a different set of outcomes. In a lot of graph algorithms, I need to get something from my neighbors. One way is a push schedule: I calculate what my neighbors need, and I go to each neighbor and update its value. What do you think — you've done some programming here — is this a good way? [A student suggests it's not as parallel as it could be.] You're getting to a point, but why isn't it parallel? If only I am updating my neighbors, that's not very parallel — but if everybody does it, everybody updating their neighbors, then there is parallelism. But now another problem shows up: what's the problem? If everybody tries to update their neighbors, that's a race, because multiple nodes are writing into the same place. So to get this right you have a bunch of issues: you need to lock, the updates have to be atomic. Still, it's nice: I don't have to go searching for work — whatever I need to update, I go and update directly, and if I'm propagating something, I push it to my neighbors and it propagates on down.

Another way is a pull schedule: everybody asks their neighbors, "what do you have to give me?", collects everything from the neighbors, and updates only themselves. Is there a race condition now? How many people say there's a race? How many think there's no race? What happens is, I am reading from all my neighbors — everybody is reading from their neighbors — but I am only updating myself. I'm the only one writing to me, so there's no race, and I don't need any synchronization. That's very nice, but if I'm doing a data-driven computation, I might not know that I need an update, because the update would have come from the other side: I go ask, "do you have anything for me?", and you say no. So I might do a lot of extra computation beyond what's necessary, because I didn't know in advance whether there was data waiting for me.
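To make the push/pull contrast concrete, here is a rough sketch in plain Python — the function names and the toy graph are made up for illustration. Run sequentially, the two directions compute the same neighbor sums; the difference only bites under parallel execution, where the push update line would need an atomic operation and the pull version would not.

```python
# Push: each source writes into all of its out-neighbors.
# Under parallel execution many sources write the same slot -> needs atomics.
def push_sums(out_edges, value):
    total = {v: 0.0 for v in out_edges}
    for src, dsts in out_edges.items():
        for dst in dsts:
            total[dst] += value[src]  # <- the racy update if parallelized
    return total

# Pull: each destination reads its in-neighbors and writes only its own slot,
# so there is no race even when destinations run in parallel.
def pull_sums(in_edges, value):
    total = {}
    for dst, srcs in in_edges.items():
        total[dst] = sum(value[s] for s in srcs)
    return total

# The same tiny graph, stored once by out-edges and once by in-edges.
out_edges = {"a": ["b", "c"], "b": ["c"], "c": []}
in_edges = {"a": [], "b": ["a"], "c": ["a", "b"]}
value = {"a": 1.0, "b": 2.0, "c": 4.0}
```

The pull version pays a different cost: node "a" still gets asked even though nobody has anything for it, which is the extra work mentioned above.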
Another interesting thing I can do is take the graph and partition it. Once I partition it, I can say, "this core gets this subgraph, that core gets that subgraph." What's the advantage of partitioning a large graph into smaller pieces — assuming, of course, you do a good partition, not an arbitrary one? I won't say the word, because then the answer comes out. Anybody else want to answer? Come on, you've taken the class. If I find two well-separated groups and give one to one core and the other to another, what do I get? I get parallelism — but there's something else too. If a lot of connected things go to one worker and the other connected things go to another, what else do I get? Locality — you did cover locality in this class. Partitioning means the piece I'm working on is small, and if you're lucky it fits in my cache, which is much nicer than everybody having to touch every node. So if I partition properly, I get good locality — and it's actually written on the slide, oops, so my answer was up there. But of course now I might have a little extra overhead, because I might have to replicate some nodes and so on, since they sit on both sides of the cut.

Another interesting property of graphs shows up in the data structures. Until now you've mostly dealt with things like arrays: the size is the size, it's a dense array, it may fit in cache. Graphs have some other properties. Take social networks — a social network is a graph. What's the interesting property you've observed in social-network connectedness? There are people like me who have maybe 20 friends, a very small number of connections, and then there are celebrities who have millions of connections. If you look at a social network graph, you find what's called a power-law relationship in the degree distribution: there are a few very well-known celebrity nodes with millions and millions of connections in their neighborhoods, and there are people like me with very few connections to the rest of the world. People have observed this kind of power-law relationship in big social-network-type graphs.
The web has this power-law relationship, and social networks have it. You have to do interesting things when you process these graphs, because certain connections and certain nodes matter a lot — they have a much bigger impact than other nodes. Then there are other graphs that have a bounded degree distribution. Road networks: the biggest intersection might have six roads coming together; you never have a million roads meeting in one place — that just doesn't happen. So that's a much flatter, bounded degree distribution, and these graphs also have excellent locality: all the roads in Cambridge may be connected to each other, but the roads in Cambridge are largely separate from the roads in New York City, so there's nice locality in these kinds of graphs. Even if two graphs are the same size, the shape of the graph matters a lot for the computation.

So when you want to operate on a graph, you look at three interesting properties. One is: how much parallelism is my algorithm going to get on this graph? It's a Goldilocks kind of thing — you don't want huge amounts of parallelism you can't take advantage of, because that's not useful, and too little isn't useful either; you want enough parallelism that you can actually exploit it. Then I'd really like locality: if I have locality, my caches work, everything I need is nearby, and things run fast; if I have to go to main memory every time, it's very, very slow. But the interesting thing about graphs is that to get locality, and to get some of this parallelism, you may have to do extra work. You saw that when the graph got divided into two subgraphs, I had extra nodes; I might add some extra data structures and do some extra computation.
So in certain schemes I might not be that work-efficient: I might get really good parallelism and locality, but I'm doing too much work. For example, suppose I want to find one node's neighbors; a way to get great parallelism is to have every node find its neighbors — but that's not efficient, because most of that computation isn't useful. So you can do things where doing more work than strictly necessary makes everything else much faster — but you have to be careful about it. There's a balance, and different algorithms sit at different points in this trade-off space. A push algorithm sits in one place; if you go to something like a pull algorithm, you may be less work-efficient, doing a little more work, but it can be better in locality and parallelism because you don't need locks. If you do partitioning, you get really good locality, but you're doing extra work, and partitioning may also limit your parallelism — so less parallelism, but really good locality. All of this forms a large trade-off space, and as you keep adding techniques, each one fits somewhere in this space.

How you decide where to sit in that trade-off space is a very important decision. It depends on the graph: for a power-law graph you might do one thing, for a more bounded-degree graph something else; on a power-law graph you might even treat the highly connected nodes differently from the others, or not differentiate at all. It depends on the algorithm: visiting all the nodes versus running a data-driven algorithm calls for different choices. And it depends on the hardware you're running on.
For example, if you are doing Google-style indexing, you're running an algorithm that operates on the entire graph, the graph is a power-law graph, and you're running on a cluster — so the right choice might be something like a pull schedule with some partitioning and a vertex-parallel scheme; that combination might give you the best performance. But on the other side of Google, if you're doing maps and trying to give directions, you have a very different type of graph, you're running a data-driven algorithm on it, and you might be on a single machine because each individual direction query has to be answered fast — so you might want a very different combination, perhaps push with vertex parallelism. And of course, if you pick a bad combination, it can be very bad: you can be hundreds or thousands of times slower than the best you can achieve. So it matters to find the right way of doing things.

This is where GraphIt comes in. GraphIt is a domain-specific language we developed, and the key observation was: the algorithm is mostly constant, but how you process it — how you go about it — varies a lot. So we separate the two. First you write the algorithm: what do you want to compute? It's very high level; it doesn't say how to compute anything, just "here are the nodes I'm processing, and this is the computation I want to do." Separately, you write the optimization — the schedule: how to compute it. "For this algorithm, use a push schedule," and so on, specified separately. The nice thing is that now, if the graph changes or the machine changes, I can just give a different schedule.

Let me show you some examples, starting with the algorithm side. There are a few different types of things you might want to do: operate on the entire graph, be data-driven, or operate on just the vertices. For the whole graph, the language provides a very simple construct: for all the edges of the graph, apply a function; that function takes the endpoints of an edge and carries out the computation. Very simple — and that's the point about simplicity of programming: what would be a big blob of ugly code in C is just this one line in the domain-specific language. If you're data-driven, you say: I start with this set of vertices — a vertex set — then I do some filtering, because I might not visit everybody; and once you've figured out exactly the elements you're computing on, you give a function to apply to them. So the language gives you a very nice way to take subsets of your graph with certain properties and compute on them. And if you're only touching vertices, you say "for each vertex," again with optional filtering, and hand it the computation. Language-wise it's very simple; that's all you have to do.

Now look at PageRank. PageRank has two update functions. One updates along the edges: the new rank of an edge's destination gets updated using the contribution of its source — a very simple update function. Then, once that's done, for each vertex I do the internal rank update. You give these two functions, put them together in a driver — run this function, then this function — and you're done. So I can write this code at a higher level, in a much simpler, much more elegant way; it's easier to understand than even the simple C++ code. That's the first advantage of a domain-specific language.

The next thing is the schedule. The schedule should be easy to use, but it should also be powerful: I should be able to get the best speed possible, so it has to let me express all the crazy things I could do to the code. Here's my PageRank program, and for this algorithm I can provide this schedule. The schedule says: look at the statement labeled s1 — I marked it in the file — and for s1, do a sparse push computation; that's how I want to process it. From that, the compiler generates pseudocode that looks like this: first loop over the source nodes, because I'm pushing from source to destination, then loop over all the destination nodes of each source and update them. That's the simple version, but it might not get great performance, so I say, "run this in parallel." When I do that, it automatically makes the outer loop parallel — and now a simple update won't do, I need an atomic add, so there's my atomic add operation in the generated code. Then you might think: do I want push, or can I do pull? If I switch it to pull, it swaps the loops: now I go from destinations to sources, the order changes, and I no longer need the atomic update, because I'm pulling everything into the one node I'm updating. And if I want some kind of partitioning, I can say so: now it creates subgraphs and processes each partition. So I can keep changing all of these.
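The decoupling can be caricatured in a few lines of Python — this is my toy sketch, not GraphIt's actual syntax or API; every name here is invented. The point is that the edge-update the programmer writes (the algorithm) never changes, while the schedule argument only changes the loop structure it runs under, and the answer stays the same.

```python
# Toy "schedule" knob: same edges, visited in source-major (push-style)
# or destination-major (pull-style) order.
def edges_apply(edges, update, schedule="serial-push"):
    if schedule == "serial-push":
        for src, dst in edges:
            update(src, dst)
    elif schedule == "serial-pull":
        # visit the same edges grouped by destination instead of source
        for src, dst in sorted(edges, key=lambda e: e[1]):
            update(src, dst)
    else:
        raise ValueError(schedule)

contrib = {0: 1.0, 1: 2.0, 2: 3.0}
edges = [(0, 1), (0, 2), (1, 2)]

def run(schedule):
    new_rank = {v: 0.0 for v in contrib}
    def update(src, dst):            # the algorithm: WHAT is computed
        new_rank[dst] += contrib[src]
    edges_apply(edges, update, schedule)  # the schedule: HOW it is computed
    return new_rank
```

In the real system, of course, the schedule also controls parallelism, atomics, and partitioning — the sketch only captures the idea that the algorithm text is untouched when the schedule changes.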
Notice that I didn't touch the algorithm — it stays exactly the same; I'm only changing the scheduling, and I can play with the schedule. The nice thing is that as you play with the schedule, here's the kind of performance you get: the first version was sequential — pretty bad performance; the next was just parallelized, which got some performance but had all the synchronization; then I changed the order of execution and got even better performance; and then I partitioned and got more performance still. That's one sequence of four, but you can play with many, many different combinations, and GraphIt gives you a huge number of combinations to try. There are a lot of different optimizations: direction optimization — the push/pull choice; different parallelization schemes; cache and NUMA optimizations; and data-layout choices, things like structure-of-arrays versus array-of-structures layouts and additional data structures that simplify the computation. All of these I can specify in the schedule. And it's not clear which one wins: it depends on the algorithm, on the graph's shape and size, and on the machine you run on. Most of the time, if you're a performance engineer, you try different things, look at the performance, and say, "hmm, this doesn't get good cache behavior — let me try it differently." You iterate, and here those iterations are fast.

So let me show you a little bit of results. This is, I'll admit, a somewhat complicated chart. We ran a bunch of different benchmarks against a bunch of different graph frameworks. What it says is: here's PageRank run on the LiveJournal graph; a 1 means that framework ran fastest, this one ran about 8% slower, this one about 50% slower, this one 3x slower, and this one 8x slower. The interesting thing is that as you add different graphs, the performance picture changes.
In fact, even though we ran fastest on LiveJournal, for this road graph — a very different type of graph — this other framework got the fastest result, because the graph is different and whatever that framework does happens to suit it. The interesting thing is that most other frameworks have a couple of built-in strategies: they don't give you the ability to try all these optimizations; they say, "ha, we know this is really good, we'll do that," and it works for some cases but not for everybody. So if you look across breadth-first search, connected components, and shortest-path algorithms, what you find is that some frameworks are good sometimes and really bad other times — on some algorithms or some types of data they can be really bad. This one was quite good on this data set but really bad on that one, and not good on this algorithm. We are close to the best most of the time. The reason is that we don't bake decisions into GraphIt: it gives you the ability to try different things, and depending on the graph and the algorithm, some optimizations work better than others. This is exactly what you've been doing in this class — trying different optimizations by hand — except that every time you thought of an optimization, you had to go change the entire program to make it work. Here you just change one line of the scheduling language, recompile, run, measure — and you can do that fast. Any questions so far, before I switch gears?

OK, so I'm going to switch to another domain-specific language, and you will find a lot of similarities, a lot of parallels — this was intentional. I could have talked about many different domain-specific languages, but I picked one that almost mirrors what's going on in GraphIt, so hopefully you'll see a pattern, and afterwards I'll ask you what the patterns are. This language is Halide. It was originally developed for image processing. Where GraphIt focuses on sparse graph data structures, Halide focuses on images: dense, regular structures with regular computation over them, processed through very complex pipelines — for example, a camera pipeline applies many complex algorithms to an image between the bits coming off your sensor and the beautiful picture you see on Facebook. The primary goal of Halide was to match or exceed hand-optimized performance — that was the bar — while reducing the rote programming a performance engineer normally has to do to get there, and increasing portability: the ability to move the program from one machine to another.

Let me give you an example: a 3x3 blur. This code has two loop nests: it goes in the x-direction and blurs — averages the three values next to each other — and then takes the result of that and does the same in the y-direction. A very simple filter you might apply to an image. You can run this; it's valid C code. But if you want performance, you want to generate this other version, which runs about 11 times faster: it's tiled, it has fused the loops, it's vectorized, it's multithreaded — it turns into something whose computation I'll get to a little later — and it's basically at the roofline optimum. That means it's using the machine's resources to the max: this is full of floating-point operations, and the floating-point units are running at maximum throughput, so there's not much else you could do to it. But writing that version is not easy.

This project started some time ago, when the student who built Halide went to Adobe.
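The two-stage blur just described can be sketched in plain Python. This mirrors the naive C version, not the 11x-faster tiled/vectorized one, and the boundary handling (pass edge pixels through unchanged) is a simplification I'm assuming for the sketch.

```python
# Naive two-stage 3x3 box blur: a horizontal pass, then a vertical pass
# over the horizontal result. Edge pixels are copied through unchanged.
def blur(img):
    h, w = len(img), len(img[0])
    # Stage 1: average each pixel with its x-neighbors.
    bx = [[(img[y][x-1] + img[y][x] + img[y][x+1]) / 3 if 0 < x < w - 1
           else img[y][x]
           for x in range(w)] for y in range(h)]
    # Stage 2: average each pixel of bx with its y-neighbors.
    return [[(bx[y-1][x] + bx[y][x] + bx[y+1][x]) / 3 if 0 < y < h - 1
             else bx[y][x]
             for x in range(w)] for y in range(h)]

# A linear ramp image: blurring a linear function leaves interior
# pixels unchanged, which makes the result easy to check.
img = [[float(x + y) for x in range(5)] for y in range(5)]
out = blur(img)
```

Note how stage 2 reads the entirety of stage 1's output — exactly the producer-consumer structure whose scheduling (when and where `bx` is computed) Halide lets you control separately from this algorithm.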
They had this thing called a local Laplacian filter in the Camera Raw, Lightroom, and Photoshop products. The reference implementation was about three hundred lines of code, but the implementation they shipped was about fifteen hundred lines of code, and it took three months of one of their best engineers to get to that performance. It made sense, because by trial and error he was able to get that piece of code 10x faster; it's a non-trivial piece of code, and who else could do that? So the student, Jonathan, who's now a professor at Berkeley, was able in one day, in sixty lines of Halide, to beat the Adobe code by 2x. In those days Adobe didn't generate any code for GPUs, because they decided GPUs change too fast and they couldn't keep updating for every GPU generation, and because of that the Adobe applications were not using the GPU; Photoshop is not going to use a GPU even when your machine has a GPU. Well, Jonathan still had some time left in the day, so he said, okay, let me try it on GPUs. He took basically the same code, changed the schedule for GPUs, and got 9x faster than the fastest version Adobe had ever had for this piece of code. So how do you do it? The key principle here is decoupling the algorithm from the schedule. The algorithm, again, is what is computed: it defines the pipeline as very simple pure functions, and execution order, parallelism, all those things, are left to the schedule. The Halide pipeline for the blur filter just looks like this: it says, okay, average the image in the x dimension, and then do it in the y dimension. That's all. There's no image size, because it's operating on the entire image, and you don't have loops in here; that's all you are saying. Then separately you write the schedule, which says when and where things get computed. It needs to be simple, so that you are able to say what you want, and it needs to be powerful: you need to be able to get hand-optimized performance, or better, by doing this. This should look a little bit familiar, because a lot of the work you do in performance engineering fits into this genre: you need to trade off between locality, parallelism, and redundant work. That's what you look for here. So let's look at the three things. First, you need parallelism: you need to keep the multicore and vector units happy, and probably the GPU as well. But too much parallelism is not going to help you; nobody is going to take advantage of the extra parallelism. So look at this piece of code. Assume I say I'm going to run all of these things in parallel, and then all of these in parallel afterwards. If you have three cores, great, I've got six-way parallelism, but nobody's going to use that; it's not useful. On the other hand, if you run it one at a time, you have parallelism of one, and that's not good either, because you won't be using the machine. What you really want is something in between: running in parallel chunks of three might be the best way to use this machine and get the best performance. You don't want too much, you don't want too little, you want exactly the right amount. The next thing you need to get is locality. Normally when you do image processing, one filter changes everything in the image, and then the next filter goes ahead and changes everything in the image. So what happens if one filter runs over the entire image, and the next filter then starts reading the entire image? Say I do my color correction first, and I do some kind of chromatic-aberration correction afterwards. What happens if you do it like that, one filter processing the entire image, and then the next filter processing the entirety of
whatever multi-megapixel image you have, what happens? Anyone? Okay, back there. Right: you miss the cache, because if the images are large they don't fit in the cache, so it's not great to do it this way; you won't get cache hits. Now assume I go like this instead, processing the entire first row before going to the second row. What happens now is that when I start to touch this element, I need to read these two values, and the last time I read them was way back before I went through the whole image; by the time I reach here, those two might be out of the cache, and when I go back there, oops, they're not in the cache, and I have a problem. So the right way might be to run it this way instead. Now when I touch this element, I need these three values to run the computation, and the last time I read this one was in the previous iteration, just before; I just touched it, the next guy uses it, then the next guy, and after my window moves past it I will never touch it again. I have really good locality here, so I want to operate that way. Redundant work is a very interesting one: sometimes, to get both locality and parallelism, you might have to do a little bit of extra work. Assume here I have to process these elements in parallel. To compute these three results I need all four of these elements, and to run the two halves in parallel on two different cores, it might be better if both cores calculate the two middle values, because then I don't have to synchronize: the left guy calculates four values and then produces its three, the right guy calculates its four values and then produces its three, and I can do that fully in parallel. But now the middle two values get calculated twice, because both halves
need them. So what that means is that sometimes, to get everything else, I might have to do some work redundantly. The way to look at it is that I can put all of this into a scheduling framework. One axis is the compute granularity, the interleaving of operations: coarse interleaving means that between two stages, I finish one entirely before I go on to the next; fine interleaving means I process one element of one stage, then one of the next, going back and forth. Those are the two extremes. The other axis is storage granularity. Very low storage granularity means I calculate something and don't remember it: next time I want it, I recalculate it. Very high storage granularity means that once I calculate a value, I remember it forever; any time you need it, I have it for you, which also means I have to get it to you from wherever I calculated it. With low storage granularity, I calculate, I use, I throw it away, and if anybody else wants the value, they recalculate it. So now you can have many different schedules at different places in this space. Up here, the schedule in the scheduling language says: run this stage entirely, then run the next stage entirely; there is no redundant computation and very coarse-grained interleaving. Down here you can go very fine-grained: I calculate this one, and these three, and these three again, so everything is calculated multiple times; whenever I need something I recalculate it and store nothing. I have a lot of locality, but I'm doing a lot of recomputation. And over here you have something like a sliding window: you're not recalculating anything, you're sliding along, and you have a little less parallelism. And you can capture the entire spectrum in between, at different levels of granularity.
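Two points in this space can be sketched for the blur pipeline: a coarse-grained schedule that materializes the entire intermediate stage, and a sliding-window schedule that keeps only three intermediate rows alive at a time. This is a hypothetical Python illustration of the idea, not Halide's actual scheduling API:

```python
def blur_x_row(img, y):
    """One row of the horizontal blur, computed on demand."""
    w = len(img[0])
    return [(img[y][x - 1] + img[y][x] + img[y][x + 1]) / 3
            for x in range(1, w - 1)]

def blur_coarse(img):
    # Coarse interleaving, high storage granularity:
    # materialize every blur_x row, then consume them all.
    bx = [blur_x_row(img, y) for y in range(len(img))]
    return [[(bx[y - 1][x] + bx[y][x] + bx[y + 1][x]) / 3
             for x in range(len(bx[0]))]
            for y in range(1, len(img) - 1)]

def blur_fused(img):
    # Sliding window: produce each blur_x row just before it is
    # needed, and keep only the last three rows alive.
    window = [blur_x_row(img, 0), blur_x_row(img, 1)]
    out = []
    for y in range(1, len(img) - 1):
        window.append(blur_x_row(img, y + 1))
        out.append([(window[0][x] + window[1][x] + window[2][x]) / 3
                    for x in range(len(window[0]))])
        window.pop(0)  # slide: drop the row that is no longer needed
    return out
```

Both schedules produce identical output; they differ only in when each blur_x row is computed and how long it is stored, which is exactly the locality-versus-storage trade-off being described.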
You can get different levels of fusion of these tiles: I don't recalculate everything, I carefully recalculate a few things, these get computed twice, and I can traverse the data fast. You can express all of these choices. So here's the interesting part. I'm showing you different schedules from different points in this space, and I'm going to run them; this axis is time, and you're going through the first stage, the intermediate one, and the output. This one is all locality with a lot of redundant work; this one is good parallelism but not good locality; and here are some intermediate points. What it shows is that a good balance of locality, parallelism, and some redundant work seems to do really well: that one finishes fastest. So what you do is write different schedules, keep running them, and figure out which schedule works; that's the trial-and-error part you have to do here. Here's an example of what's going on: a bilateral filter computation. The original is about a hundred lines of C++ code with good parallelism, but we could write it in about thirty lines of Halide and get about 6x faster on the CPU. The best version, though, was one somebody hand-wrote on GPUs for the paper, and what it did was give up some parallelism for much better locality; and since in Halide we could also give up some parallelism for much better locality, and optimize within that choice, we got faster than their hand-written algorithm. Here's another one, an algorithm doing segmentation, written in MATLAB. MATLAB is a lot fewer lines of code, of course, because you're just calling library functions, and Halide is a few more lines, but Halide was 70 times faster, and the GPU version was about a thousand times faster than MATLAB. It's
not because MATLAB runs bad loops; in fact, what MATLAB did was call very well hand-optimized libraries. But the problem with calling libraries is that there's no locality between the calls. I call a really fast library routine and it runs really fast; then the next routine has to touch the entire image again, and by now the image is completely out of the cache. So between these very fast library calls you're pulling the image back into the cache, and with a library you can't fuse the functions together. In Halide we can fuse them and say, I'll take this line of the image and do everything on it before I move to the next one, so it runs much faster. My feeling is that each individual MATLAB function was probably faster, because it's a hand-written, really fast routine, but the cache effects of moving the data in and out were really slowing it down. And here's the example I showed before. This is a very complicated, what you'd call pyramidal, algorithm: you take an image, divide it into a bunch of blocks at each level of the pyramid, do some computation, some lookups, some upsampling, more computation, and you create smaller and smaller images; you basically build a pyramid. Doing this right is not simple, because at each level the balance you want is different: when you have a lot of data parallelism, parallelism is not that important at that point, because you have it anyway, so you should focus much more on locality; but when you get down to the smaller images, parallelism matters. So you have to find very interesting balances between those, and there are not just three choices but hundreds of levels of choices. The nice thing here is that you
can play with all these different concepts and figure out which actually gives you the fastest performance. So, a little bit of bragging rights for Halide. Halide left MIT about six years ago, I think, and right now it's everywhere in Google: it's on Android phones, it started out in Google Glass, which doesn't exist anymore, and in fact all the images and videos uploaded to YouTube right now go through front-end processing whose pipeline is written in Halide. They switched to Halide because the Halide code was, I think, about four or five percent faster than the previous version, and four or five percent faster for Google is a multi-million-dollar saving, because so many videos get uploaded. Recently there was an announcement that there's an iOS version of Photoshop from Adobe; they just announced it, I don't think it's even out yet, and the entire set of Photoshop filters in this new version is written using Halide. Qualcomm released the Snapdragon image processor, a processor they built to do image processing, and the programming language for it is basically Halide: you write the code in Halide, so Halide is kind of the assembly level they make available for it. Intel is using it as well. So there's a lot of use of this system at this point, which is really fun to see: an academic project getting to the point of being very heavily used. Part of that is because it's genuinely useful: people realized they need to optimize this code, because for cameras and the like, performance matters, and instead of having some poor engineer spend months in a corner trying things out, you can try the same things, and a lot more, much faster. Okay, so let me ask you a question. Between Halide and GraphIt you'll find a bunch of similarities, and I want to hear about any interesting
similarities you found between these two projects. Part of it is that a lot of the time compilers are a black box: we know everything, just feed us your program and we will give you really fast code. The problem is that they're never the fastest, so if you really care about performance and the compiler only gets you 90% of the way, you get really frustrated: now what do I do? These projects instead say: I am not going to guess better than you what to do, but I will make your life simpler. We still want the performance engineer; it's not that a person who doesn't understand performance feeds in code and gets fast code out. We still need the performance engineer, but we make that engineer's life easier. Both projects said: we can't automate all of this, that's too much complexity, but we will let the performance engineer tell us what to do, and we'll make saying it very simple. What else? [A student answers.] Right, algorithmic optimizations, which are pretty different. You can do a lot of domain-specific optimization, but algorithmic optimization is one level higher. You can say: I have a better algorithm; I don't want to do a quicksort, I'll do an insertion sort, because insertion sort might be faster for a certain class of inputs. That kind of change we don't do. Or, and this happens a lot in machine learning, you can say: I don't have to get the computation exactly right, I don't have to calculate everything; if I calculate it for 10 samples, that's good enough. Changes like that you can't automate, because they're very contextual. In machine learning there's often no single right answer, you just need a good answer, and sometimes 'good' means you can skip certain things, and you need to figure out what you can skip so that you get a huge benefit without losing much. At that level we can't automate; that's the next level, the part that asks,
okay, how do you do that? If somebody says, look, I can train for 10 iterations and 10 is good enough, fine; but if your code is written to train for 100 iterations, I can't decide on my own that 10 is good enough. That decision has to be made at a much higher level than the compiler can. That said, there's an interesting level that can still sit on top of this, which we can't automate easily, but which something like a scheduling language might give you: options that say, try some of these things that actually change the algorithm, but within bounds, meaning I still give you the same answer while trying different variants. Any other things you thought were interesting? How about somewhere over here? Back there? Yes: a lot of trial and error. In modern computer systems everything is extremely complicated; there's no one right way of doing things. You look at a pretty large piece of code, and there are caches, parallelism, locality, lots of things that can go right or wrong, so you have to try out many things. If you can come up with exactly the right answer every time, that's amazing, but even the best performance engineer can't look at a piece of code and say, aha, I know your solution. You do a lot of these experiments, and you've probably figured that out in your own projects: it's not like you knew exactly what to do. You did many different trials, and I'm sure a lot of the things you tried had no impact or even slowed the code down, and you said, well, that didn't work; all the ifdefs left in your code show the crazy things you tried that did nothing. So that's the interesting thing. What else? Anything a little bit different that you saw? [A student answers.] So an interesting
question: are there other domains that could use something like this? People are working on similar things for machine learning these days; that seems to be the domain, and TensorFlow and all those people are trying to build similar frameworks. The way I have operated is that I go talk to people, and sometimes you find this poor graduate student or postdoc who wants to do research but spends all their time optimizing their code because they can't get the performance they need, and that might be a good domain. You find these people in physics, you find them in biology, and I'm actually talking with some of them, because in biology, in this gene-sequencing work for example, there are very similar computations, but people seem to spend all their time writing code and fighting code complexity; can we do something there? The key thing, and this is a nice thing about MIT, is that there are a lot of very smart people in many different domains trying to push the state of the art who spend all their time cursing in front of a computer program, not because they don't know the algorithm, but because of the amount of data they have to deal with. In astronomy, multiple telescopes produce a deluge of data, and most of these people know what they have to do; they just can't write a program that does it fast enough. So there might be domains like that. And there might be domains defined by applications and domains defined by patterns, like sparse matrices or graphs; it's not only single applications, because if a pattern shows up in multiple places, there might be something there. If you want to do research, this can be an interesting way of doing it: I have spent my life finding different domains, and a bunch of
people who spend their lifetimes just hacking things, and telling them, okay, let me see whether we can build a nice abstraction. Anything else you found interesting? For both of these systems, what is the space they operate in to optimize programs? What are the things they try to optimize over? There's a nice space of three things: parallelism, locality, and redundant work. My feeling is that, as a performance engineer, that's going to be your life. There may be one additional lever, algorithmic changes that eliminate work entirely, but most of the time you'll be working on some kind of multicore, vector, or GPU-type unit, so you have to get parallelism; getting parallelism is important. But if you don't have locality it doesn't matter, because you'll mostly be waiting for data to come all the way from memory, so you have to get good locality too. And a lot of the time you can do both really well by doing some extra computation, but if you do too much extra, that stops helping. So it's all about playing in this space; in your final project, that's exactly what you're going to do. You might say: if I do some extra work, I can make this part fast; but oops, now this extra precompute pass costs more than it saves, and I can't amortize it. Those are the three things you're trading off. That's one interesting point. Here's another: we made this scheduling language available to programmers, but can you think of a way to make it a little easier for them than writing schedules by hand? What's the nice thing about a scheduling language? It's very simple; it has a very simple pattern. It's not like writing an arbitrary program: there are only certain things you can do
in a schedule. So if you know that space, who should be searching it? Exactly: test them all. That's one approach; we can do autotuning, and that switches us to the autotuning part of this talk. Performance engineering, most of the time, is finding the right settings for these crazy knobs. Charles probably talked about these voodoo parameters: what's the right block size? It has a big impact. Finding the right memory-allocation strategy, a bunch of things like that. Even in the GCC compiler there are, I think, about 400 different flags, and you can actually get a factor of four in performance by choosing the right couple hundred esoteric flags to pass to GCC. It's crazy, and those flags are not the same for every program; some combinations will even crash certain programs, but most of the time they just speed up or slow down the code. You can't simply turn on every GCC flag, and you can't iterate over them by hand, yet there's a factor of 2 to 4 of performance in there. And -O2 or -O3 only do so much; -O3 is not always right, if only you could try all the alternatives. The schedule in Halide, the schedule in GraphIt, all of these things can be autotuned. So, before autotuning: when you have a large search space, what do we normally do? The thing we do when we think we're smart is build models. We have a model for the cache, we say we understand the cache, and using the model I can predict: aha, this is the right block size. What's the problem with a model? Exactly: when you build a model you have to abstract, and what you abstract away might be the most important part of the darn thing. We built a model for the cache, but oops, we didn't account for paging, and pages made a big
difference. There might be things in real life that matter that didn't fit into your model, or that you didn't know needed to be in the model; if you try to put everything in, the model gets too complicated, so you abstract something out. You can then claim an optimal result for the model, but that 'optimal' might be way off from even a simple result you could get directly, because the things you left out were the important ones. So models don't really work. The next thing is heuristics, rule-of-thumb solutions: someone comes along and says, I know how to do this, you do this thing and this thing and this thing, a sort of grandmother's-remedy solution. There are certain things that supposedly always work, and you hard-code them: if the matrix dimension is more than a thousand, always block, rules like that. These rules work most of the time, but there are cases where the rule is much worse, and the rules may be right only for a certain machine. Let me give you a story. The GCC standard library has a fast sort routine, and it sorts using a parallel quicksort, and when the partition gets down to 16 elements it switches to insertion sort. That 16 is hard-coded in GCC. Wow, some amazing person must have figured out this profound number for switching from quicksort to insertion sort. So we tried to find the profoundness of this number, and it is this: around 1995, when the code was released, on the machines of that day, 16 really was a good point to switch, because of cache sizes and things like that. But that 16 has survived since 1995, and I think it's still there today, when the number should probably be more like 500. It's there because somebody once thought 16 was right, hard-coded it, and nobody changed it. And there are a lot of things in compilers like that.
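The cutoff in that story looks roughly like this in a hybrid sort. This is a generic Python sketch of the idea, not GCC's actual code; only the constant 16 comes from the story:

```python
CUTOFF = 16  # the hard-coded magic number from the story

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place; fast for tiny ranges."""
    for i in range(lo + 1, hi):
        v, j = a[i], i - 1
        while j >= lo and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that switches to insertion sort below the cutoff."""
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:  # the tunable decision point
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi - 1
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j + 1)
    hybrid_quicksort(a, i, hi)
```

The right cutoff depends on the cache sizes and branch costs of the machine you are on, which is exactly why a value frozen in 1995 stops being right.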
Some programmer said, I know what works here, and put it in, and now there's no rhyme or reason to it; at the time they had a reason, but it doesn't scale. A lot of these heuristics go stale very fast, and there's no theory behind them telling you how to update them. You'd have to go ask: why did you put that there? Oh, because my machine had 32 kilobytes of cache. Okay, that's a different machine from what we have today. So that's the problem. The other thing you can do is search: try every possible configuration. The problem is that the search space can be 10 to the power 10; you don't have enough seconds in your lifetime, so exhaustive search is too expensive. And that's where the autotuner comes in. Actually, I have a few more slides on this. To summarize: model-based solutions build a comprehensive model, like a cache model, and can tell you exactly what's optimal for the model, but models are hard to build, you cannot model everything, and the model usually misses the most important thing. Heuristic, rule-of-thumb solutions are hard-coded, very simple and easy, and work most of the time if you get them right, but they're too simplistic and don't stand the test of time. Exhaustive search is great, but there are way too many possibilities; the search space is too big. So you want to prune the search space, and the best way to do that is autotuning. What an autotuner does is: define the space of acceptable values; choose a value at random; evaluate the end-to-end performance with that value, end to end because if you try to predict
performance instead, most of the time it won't be accurate; if that performance satisfies what you need, you're done; otherwise, choose a new value and iterate, going back to step three. That's the loop, and what you need is a system that can drive it fast: what space to search, when to decide you're done, and how to iterate. In cartoon form: you pick a candidate value, compile the program, run it with a bunch of data (you have to run it with enough data, otherwise you're overfitting; you can't run it with just one input), collect the results, take something like the average, and go around the loop. What OpenTuner did was come up with an ensemble of techniques. The idea is that while searching a space you might be at the bottom of a hill: there's a direction in which the value keeps improving, and then something like a hill climber, say a Nelder-Mead style method, gets you better performance very fast. But once you arrive at the top of the hill for those parameters, there's nowhere to go, and more hill climbing won't help; at that point you want something like random search. So OpenTuner tests its techniques as it goes: if a technique is doing well, it gets more time; if not, OpenTuner says, this technique isn't working, let's try something else, and reallocates the time. That makes the search much faster than you could do otherwise. I want to finish by showing what we did for autotuning GraphIt. You have an algorithm and you have a schedule, and it's a pain to write that schedule.
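A minimal version of that loop, together with the ensemble idea, might look like the sketch below. The two-technique ensemble and the synthetic cost function are made-up stand-ins; OpenTuner's real ensemble is much richer, and `measure` stands in for an end-to-end benchmark run:

```python
import random

def autotune(space, measure, budget=200, seed=0):
    """Minimize measure(x) over `space` using a tiny ensemble:
    a hill climber and pure random search, with more trials given
    to whichever technique has produced improvements so far."""
    rng = random.Random(seed)
    best = rng.choice(space)
    best_cost = measure(best)
    credit = {"hill": 1.0, "random": 1.0}  # time-allocation weights
    for _ in range(budget):
        # Pick a technique in proportion to its past success.
        total = credit["hill"] + credit["random"]
        tech = "hill" if rng.random() < credit["hill"] / total else "random"
        if tech == "hill":
            # Take a small step from the current best.
            i = space.index(best)
            j = max(0, min(len(space) - 1, i + rng.choice([-1, 1])))
            cand = space[j]
        else:
            cand = rng.choice(space)
        cost = measure(cand)  # "end to end": actually run and time it
        if cost < best_cost:
            best, best_cost = cand, cost
            credit[tech] += 1.0  # reward the technique that helped
    return best, best_cost

# Tune a hypothetical block-size parameter against a stand-in cost curve.
space = [2 ** k for k in range(1, 11)]  # candidate block sizes 2..1024
best, cost = autotune(space, lambda bs: abs(bs - 64) + 1)
```

Near the bottom of a hill the climber wins and accumulates credit; once it plateaus, random search gets a larger share of the remaining trials, which is the behavior described above.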
In fact, there's a good story here. When we did Halide, we decided it should be simple to write both the algorithm and the schedule. Google realized very fast that many people couldn't use Halide that way: after about two years they had hundreds of programmers who could write the algorithm, but only about five people who could write a really good schedule, because to do that you need to understand a bit of the algorithm, a bit of the architecture, a little bit of everything, and that's much harder to learn. So getting the schedule right is not easy, and it's the same in GraphIt: you need to understand a lot, unless you search blindly, and to search well you need to know a little more. What we can do instead is give the system some idea about the graph and some idea about the algorithm, autotune over those choices, and generate the schedule. What we found was that generating the schedule by exhaustive search runs for days, but with OpenTuner you can find a really good schedule in less than two hours, and in a few cases it found schedules better than what we thought was the best possible schedule, because it could search much more broadly than our intuition, and even where our intuition was right, it had time to try many more combinations and find something better. So that's all I have for today.
00:10:11,370 253 00:10:11,370 --> 00:10:14,460 254 00:10:14,460 --> 00:10:16,079 255 00:10:16,079 --> 00:10:18,420 256 00:10:18,420 --> 00:10:20,220 257 00:10:20,220 --> 00:10:21,780 258 00:10:21,780 --> 00:10:23,550 259 00:10:23,550 --> 00:10:25,170 260 00:10:25,170 --> 00:10:27,660 261 00:10:27,660 --> 00:10:29,850 262 00:10:29,850 --> 00:10:31,410 263 00:10:31,410 --> 00:10:35,340 264 00:10:35,340 --> 00:10:39,540 265 00:10:39,540 --> 00:10:42,449 266 00:10:42,449 --> 00:10:44,100 267 00:10:44,100 --> 00:10:46,019 268 00:10:46,019 --> 00:10:47,610 269 00:10:47,610 --> 00:10:50,009 270 00:10:50,009 --> 00:10:53,130 271 00:10:53,130 --> 00:10:54,509 272 00:10:54,509 --> 00:10:56,309 273 00:10:56,309 --> 00:10:58,310 274 00:10:58,310 --> 00:10:59,900 275 00:10:59,900 --> 00:11:01,970 276 00:11:01,970 --> 00:11:04,310 277 00:11:04,310 --> 00:11:07,069 278 00:11:07,069 --> 00:11:09,800 279 00:11:09,800 --> 00:11:11,960 280 00:11:11,960 --> 00:11:13,579 281 00:11:13,579 --> 00:11:15,230 282 00:11:15,230 --> 00:11:17,360 283 00:11:17,360 --> 00:11:19,939 284 00:11:19,939 --> 00:11:21,350 285 00:11:21,350 --> 00:11:22,999 286 00:11:22,999 --> 00:11:25,220 287 00:11:25,220 --> 00:11:29,300 288 00:11:29,300 --> 00:11:33,379 289 00:11:33,379 --> 00:11:35,960 290 00:11:35,960 --> 00:11:40,490 291 00:11:40,490 --> 00:11:42,259 292 00:11:42,259 --> 00:11:44,540 293 00:11:44,540 --> 00:11:45,800 294 00:11:45,800 --> 00:11:47,900 295 00:11:47,900 --> 00:11:51,350 296 00:11:51,350 --> 00:11:52,670 297 00:11:52,670 --> 00:11:55,009 298 00:11:55,009 --> 00:11:57,590 299 00:11:57,590 --> 00:12:01,389 300 00:12:01,389 --> 00:12:04,400 301 00:12:04,400 --> 00:12:07,790 302 00:12:07,790 --> 00:12:09,829 303 00:12:09,829 --> 00:12:11,809 304 00:12:11,809 --> 00:12:14,629 305 00:12:14,629 --> 00:12:17,780 306 00:12:17,780 --> 00:12:19,910 307 00:12:19,910 --> 00:12:21,559 308 00:12:21,559 --> 00:12:23,600 309 00:12:23,600 --> 00:12:25,309 310 00:12:25,309 --> 00:12:27,379 311 
00:12:27,379 --> 00:12:29,090 312 00:12:29,090 --> 00:12:31,069 313 00:12:31,069 --> 00:12:34,210 314 00:12:34,210 --> 00:12:36,740 315 00:12:36,740 --> 00:12:38,480 316 00:12:38,480 --> 00:12:41,179 317 00:12:41,179 --> 00:12:42,500 318 00:12:42,500 --> 00:12:49,660 319 00:12:49,660 --> 00:12:52,160 320 00:12:52,160 --> 00:12:53,990 321 00:12:53,990 --> 00:12:56,059 322 00:12:56,059 --> 00:12:57,980 323 00:12:57,980 --> 00:12:59,900 324 00:12:59,900 --> 00:13:02,000 325 00:13:02,000 --> 00:13:03,800 326 00:13:03,800 --> 00:13:05,329 327 00:13:05,329 --> 00:13:06,650 328 00:13:06,650 --> 00:13:08,590 329 00:13:08,590 --> 00:13:11,210 330 00:13:11,210 --> 00:13:13,040 331 00:13:13,040 --> 00:13:14,810 332 00:13:14,810 --> 00:13:17,420 333 00:13:17,420 --> 00:13:19,790 334 00:13:19,790 --> 00:13:21,980 335 00:13:21,980 --> 00:13:29,810 336 00:13:29,810 --> 00:13:31,610 337 00:13:31,610 --> 00:13:33,650 338 00:13:33,650 --> 00:13:36,740 339 00:13:36,740 --> 00:13:38,150 340 00:13:38,150 --> 00:13:40,699 341 00:13:40,699 --> 00:13:44,949 342 00:13:44,949 --> 00:13:48,530 343 00:13:48,530 --> 00:13:52,189 344 00:13:52,189 --> 00:13:53,810 345 00:13:53,810 --> 00:13:55,970 346 00:13:55,970 --> 00:13:57,500 347 00:13:57,500 --> 00:14:00,590 348 00:14:00,590 --> 00:14:02,240 349 00:14:02,240 --> 00:14:05,060 350 00:14:05,060 --> 00:14:07,699 351 00:14:07,699 --> 00:14:08,990 352 00:14:08,990 --> 00:14:10,639 353 00:14:10,639 --> 00:14:13,639 354 00:14:13,639 --> 00:14:15,110 355 00:14:15,110 --> 00:14:25,370 356 00:14:25,370 --> 00:14:27,230 357 00:14:27,230 --> 00:14:31,510 358 00:14:31,510 --> 00:14:35,759 359 00:14:35,759 --> 00:14:37,990 360 00:14:37,990 --> 00:14:40,030 361 00:14:40,030 --> 00:14:41,470 362 00:14:41,470 --> 00:14:43,810 363 00:14:43,810 --> 00:14:46,030 364 00:14:46,030 --> 00:14:47,500 365 00:14:47,500 --> 00:14:49,360 366 00:14:49,360 --> 00:14:50,949 367 00:14:50,949 --> 00:14:52,150 368 00:14:52,150 --> 00:14:54,490 369 00:14:54,490 --> 
00:14:58,150 370 00:14:58,150 --> 00:14:59,590 371 00:14:59,590 --> 00:15:01,389 372 00:15:01,389 --> 00:15:04,120 373 00:15:04,120 --> 00:15:07,000 374 00:15:07,000 --> 00:15:08,740 375 00:15:08,740 --> 00:15:09,970 376 00:15:09,970 --> 00:15:12,910 377 00:15:12,910 --> 00:15:14,530 378 00:15:14,530 --> 00:15:16,000 379 00:15:16,000 --> 00:15:17,829 380 00:15:17,829 --> 00:15:18,879 381 00:15:18,879 --> 00:15:21,250 382 00:15:21,250 --> 00:15:23,949 383 00:15:23,949 --> 00:15:26,199 384 00:15:26,199 --> 00:15:29,379 385 00:15:29,379 --> 00:15:33,129 386 00:15:33,129 --> 00:15:35,350 387 00:15:35,350 --> 00:15:36,970 388 00:15:36,970 --> 00:15:39,100 389 00:15:39,100 --> 00:15:42,009 390 00:15:42,009 --> 00:15:49,180 391 00:15:49,180 --> 00:15:51,430 392 00:15:51,430 --> 00:15:53,980 393 00:15:53,980 --> 00:15:55,569 394 00:15:55,569 --> 00:15:57,069 395 00:15:57,069 --> 00:15:58,689 396 00:15:58,689 --> 00:16:00,519 397 00:16:00,519 --> 00:16:01,960 398 00:16:01,960 --> 00:16:03,639 399 00:16:03,639 --> 00:16:07,319 400 00:16:07,319 --> 00:16:09,550 401 00:16:09,550 --> 00:16:11,500 402 00:16:11,500 --> 00:16:13,210 403 00:16:13,210 --> 00:16:14,740 404 00:16:14,740 --> 00:16:16,449 405 00:16:16,449 --> 00:16:18,910 406 00:16:18,910 --> 00:16:21,309 407 00:16:21,309 --> 00:16:23,800 408 00:16:23,800 --> 00:16:25,689 409 00:16:25,689 --> 00:16:28,240 410 00:16:28,240 --> 00:16:32,860 411 00:16:32,860 --> 00:16:37,000 412 00:16:37,000 --> 00:16:38,259 413 00:16:38,259 --> 00:16:40,960 414 00:16:40,960 --> 00:16:45,189 415 00:16:45,189 --> 00:16:47,350 416 00:16:47,350 --> 00:16:49,220 417 00:16:49,220 --> 00:16:51,500 418 00:16:51,500 --> 00:16:52,850 419 00:16:52,850 --> 00:16:57,139 420 00:16:57,139 --> 00:16:58,790 421 00:16:58,790 --> 00:17:06,350 422 00:17:06,350 --> 00:17:08,299 423 00:17:08,299 --> 00:17:10,189 424 00:17:10,189 --> 00:17:12,470 425 00:17:12,470 --> 00:17:14,270 426 00:17:14,270 --> 00:17:17,090 427 00:17:17,090 --> 00:17:18,620 428 
00:17:18,620 --> 00:17:23,679 429 00:17:23,679 --> 00:17:27,079 430 00:17:27,079 --> 00:17:29,030 431 00:17:29,030 --> 00:17:30,919 432 00:17:30,919 --> 00:17:34,970 433 00:17:34,970 --> 00:17:37,190 434 00:17:37,190 --> 00:17:39,350 435 00:17:39,350 --> 00:17:40,790 436 00:17:40,790 --> 00:17:42,410 437 00:17:42,410 --> 00:17:45,169 438 00:17:45,169 --> 00:17:46,910 439 00:17:46,910 --> 00:17:50,600 440 00:17:50,600 --> 00:17:52,820 441 00:17:52,820 --> 00:17:54,860 442 00:17:54,860 --> 00:17:56,810 443 00:17:56,810 --> 00:17:58,580 444 00:17:58,580 --> 00:18:01,400 445 00:18:01,400 --> 00:18:05,780 446 00:18:05,780 --> 00:18:08,060 447 00:18:08,060 --> 00:18:10,640 448 00:18:10,640 --> 00:18:14,510 449 00:18:14,510 --> 00:18:16,159 450 00:18:16,159 --> 00:18:18,710 451 00:18:18,710 --> 00:18:20,299 452 00:18:20,299 --> 00:18:22,100 453 00:18:22,100 --> 00:18:29,540 454 00:18:29,540 --> 00:18:31,310 455 00:18:31,310 --> 00:18:33,350 456 00:18:33,350 --> 00:18:35,840 457 00:18:35,840 --> 00:18:37,340 458 00:18:37,340 --> 00:18:39,950 459 00:18:39,950 --> 00:18:41,360 460 00:18:41,360 --> 00:18:47,230 461 00:18:47,230 --> 00:18:50,419 462 00:18:50,419 --> 00:18:52,040 463 00:18:52,040 --> 00:18:58,690 464 00:18:58,690 --> 00:19:01,130 465 00:19:01,130 --> 00:19:03,080 466 00:19:03,080 --> 00:19:04,910 467 00:19:04,910 --> 00:19:06,290 468 00:19:06,290 --> 00:19:07,610 469 00:19:07,610 --> 00:19:09,770 470 00:19:09,770 --> 00:19:11,750 471 00:19:11,750 --> 00:19:15,620 472 00:19:15,620 --> 00:19:18,070 473 00:19:18,070 --> 00:19:20,270 474 00:19:20,270 --> 00:19:24,380 475 00:19:24,380 --> 00:19:25,910 476 00:19:25,910 --> 00:19:27,740 477 00:19:27,740 --> 00:19:29,300 478 00:19:29,300 --> 00:19:31,880 479 00:19:31,880 --> 00:19:33,350 480 00:19:33,350 --> 00:19:36,050 481 00:19:36,050 --> 00:19:37,520 482 00:19:37,520 --> 00:19:39,170 483 00:19:39,170 --> 00:19:41,390 484 00:19:41,390 --> 00:19:42,770 485 00:19:42,770 --> 00:19:46,520 486 00:19:46,520 --> 
00:19:47,960 487 00:19:47,960 --> 00:19:49,670 488 00:19:49,670 --> 00:19:49,680 489 00:19:49,680 --> 00:19:50,030 490 00:19:50,030 --> 00:19:52,250 491 00:19:52,250 --> 00:19:53,630 492 00:19:53,630 --> 00:19:55,220 493 00:19:55,220 --> 00:19:57,050 494 00:19:57,050 --> 00:19:59,270 495 00:19:59,270 --> 00:20:03,890 496 00:20:03,890 --> 00:20:05,990 497 00:20:05,990 --> 00:20:09,470 498 00:20:09,470 --> 00:20:10,930 499 00:20:10,930 --> 00:20:13,250 500 00:20:13,250 --> 00:20:15,290 501 00:20:15,290 --> 00:20:17,120 502 00:20:17,120 --> 00:20:18,860 503 00:20:18,860 --> 00:20:21,740 504 00:20:21,740 --> 00:20:24,380 505 00:20:24,380 --> 00:20:25,850 506 00:20:25,850 --> 00:20:28,490 507 00:20:28,490 --> 00:20:30,830 508 00:20:30,830 --> 00:20:32,930 509 00:20:32,930 --> 00:20:34,970 510 00:20:34,970 --> 00:20:36,440 511 00:20:36,440 --> 00:20:40,220 512 00:20:40,220 --> 00:20:41,930 513 00:20:41,930 --> 00:20:43,460 514 00:20:43,460 --> 00:20:48,770 515 00:20:48,770 --> 00:20:50,660 516 00:20:50,660 --> 00:20:52,400 517 00:20:52,400 --> 00:20:56,300 518 00:20:56,300 --> 00:20:58,730 519 00:20:58,730 --> 00:21:00,140 520 00:21:00,140 --> 00:21:03,530 521 00:21:03,530 --> 00:21:04,460 522 00:21:04,460 --> 00:21:06,590 523 00:21:06,590 --> 00:21:08,390 524 00:21:08,390 --> 00:21:09,160 525 00:21:09,160 --> 00:21:11,110 526 00:21:11,110 --> 00:21:12,730 527 00:21:12,730 --> 00:21:15,400 528 00:21:15,400 --> 00:21:19,450 529 00:21:19,450 --> 00:21:21,700 530 00:21:21,700 --> 00:21:23,860 531 00:21:23,860 --> 00:21:25,420 532 00:21:25,420 --> 00:21:27,520 533 00:21:27,520 --> 00:21:30,400 534 00:21:30,400 --> 00:21:33,190 535 00:21:33,190 --> 00:21:35,050 536 00:21:35,050 --> 00:21:38,500 537 00:21:38,500 --> 00:21:40,510 538 00:21:40,510 --> 00:21:43,060 539 00:21:43,060 --> 00:21:44,830 540 00:21:44,830 --> 00:21:46,600 541 00:21:46,600 --> 00:21:49,240 542 00:21:49,240 --> 00:21:50,890 543 00:21:50,890 --> 00:21:53,710 544 00:21:53,710 --> 00:21:55,450 545 
00:21:55,450 --> 00:21:58,540 546 00:21:58,540 --> 00:22:02,200 547 00:22:02,200 --> 00:22:05,680 548 00:22:05,680 --> 00:22:09,040 549 00:22:09,040 --> 00:22:10,570 550 00:22:10,570 --> 00:22:12,580 551 00:22:12,580 --> 00:22:14,170 552 00:22:14,170 --> 00:22:16,300 553 00:22:16,300 --> 00:22:19,540 554 00:22:19,540 --> 00:22:21,040 555 00:22:21,040 --> 00:22:23,350 556 00:22:23,350 --> 00:22:25,270 557 00:22:25,270 --> 00:22:28,060 558 00:22:28,060 --> 00:22:30,040 559 00:22:30,040 --> 00:22:31,330 560 00:22:31,330 --> 00:22:33,810 561 00:22:33,810 --> 00:22:36,520 562 00:22:36,520 --> 00:22:37,720 563 00:22:37,720 --> 00:22:40,330 564 00:22:40,330 --> 00:22:42,160 565 00:22:42,160 --> 00:22:44,620 566 00:22:44,620 --> 00:22:46,600 567 00:22:46,600 --> 00:22:48,730 568 00:22:48,730 --> 00:22:50,320 569 00:22:50,320 --> 00:22:51,730 570 00:22:51,730 --> 00:22:53,860 571 00:22:53,860 --> 00:22:55,630 572 00:22:55,630 --> 00:22:57,610 573 00:22:57,610 --> 00:22:59,440 574 00:22:59,440 --> 00:23:01,480 575 00:23:01,480 --> 00:23:03,850 576 00:23:03,850 --> 00:23:07,360 577 00:23:07,360 --> 00:23:08,980 578 00:23:08,980 --> 00:23:10,780 579 00:23:10,780 --> 00:23:13,510 580 00:23:13,510 --> 00:23:14,860 581 00:23:14,860 --> 00:23:17,360 582 00:23:17,360 --> 00:23:21,259 583 00:23:21,259 --> 00:23:22,220 584 00:23:22,220 --> 00:23:24,230 585 00:23:24,230 --> 00:23:25,850 586 00:23:25,850 --> 00:23:28,220 587 00:23:28,220 --> 00:23:29,360 588 00:23:29,360 --> 00:23:32,869 589 00:23:32,869 --> 00:23:35,180 590 00:23:35,180 --> 00:23:36,889 591 00:23:36,889 --> 00:23:39,529 592 00:23:39,529 --> 00:23:42,049 593 00:23:42,049 --> 00:23:44,560 594 00:23:44,560 --> 00:23:46,970 595 00:23:46,970 --> 00:23:48,499 596 00:23:48,499 --> 00:23:51,169 597 00:23:51,169 --> 00:23:53,419 598 00:23:53,419 --> 00:23:55,609 599 00:23:55,609 --> 00:23:57,440 600 00:23:57,440 --> 00:23:59,210 601 00:23:59,210 --> 00:24:01,129 602 00:24:01,129 --> 00:24:02,810 603 00:24:02,810 --> 
00:24:05,930 604 00:24:05,930 --> 00:24:07,070 605 00:24:07,070 --> 00:24:08,389 606 00:24:08,389 --> 00:24:10,489 607 00:24:10,489 --> 00:24:13,039 608 00:24:13,039 --> 00:24:14,629 609 00:24:14,629 --> 00:24:15,980 610 00:24:15,980 --> 00:24:18,080 611 00:24:18,080 --> 00:24:20,629 612 00:24:20,629 --> 00:24:22,220 613 00:24:22,220 --> 00:24:23,930 614 00:24:23,930 --> 00:24:26,840 615 00:24:26,840 --> 00:24:29,180 616 00:24:29,180 --> 00:24:32,029 617 00:24:32,029 --> 00:24:34,850 618 00:24:34,850 --> 00:24:36,590 619 00:24:36,590 --> 00:24:38,450 620 00:24:38,450 --> 00:24:40,279 621 00:24:40,279 --> 00:24:43,100 622 00:24:43,100 --> 00:24:45,470 623 00:24:45,470 --> 00:24:48,379 624 00:24:48,379 --> 00:24:52,009 625 00:24:52,009 --> 00:24:55,129 626 00:24:55,129 --> 00:24:57,710 627 00:24:57,710 --> 00:24:59,810 628 00:24:59,810 --> 00:25:03,590 629 00:25:03,590 --> 00:25:05,299 630 00:25:05,299 --> 00:25:08,450 631 00:25:08,450 --> 00:25:10,129 632 00:25:10,129 --> 00:25:12,169 633 00:25:12,169 --> 00:25:13,820 634 00:25:13,820 --> 00:25:16,090 635 00:25:16,090 --> 00:25:20,180 636 00:25:20,180 --> 00:25:22,519 637 00:25:22,519 --> 00:25:24,619 638 00:25:24,619 --> 00:25:26,850 639 00:25:26,850 --> 00:25:29,230 640 00:25:29,230 --> 00:25:31,749 641 00:25:31,749 --> 00:25:33,909 642 00:25:33,909 --> 00:25:36,460 643 00:25:36,460 --> 00:25:38,860 644 00:25:38,860 --> 00:25:43,480 645 00:25:43,480 --> 00:25:44,980 646 00:25:44,980 --> 00:25:46,539 647 00:25:46,539 --> 00:25:48,610 648 00:25:48,610 --> 00:25:50,919 649 00:25:50,919 --> 00:25:55,360 650 00:25:55,360 --> 00:25:57,639 651 00:25:57,639 --> 00:25:59,139 652 00:25:59,139 --> 00:26:00,970 653 00:26:00,970 --> 00:26:03,279 654 00:26:03,279 --> 00:26:06,009 655 00:26:06,009 --> 00:26:09,039 656 00:26:09,039 --> 00:26:10,149 657 00:26:10,149 --> 00:26:11,710 658 00:26:11,710 --> 00:26:13,629 659 00:26:13,629 --> 00:26:15,310 660 00:26:15,310 --> 00:26:17,860 661 00:26:17,860 --> 00:26:20,710 662 
00:26:20,710 --> 00:26:22,299 663 00:26:22,299 --> 00:26:24,669 664 00:26:24,669 --> 00:26:27,310 665 00:26:27,310 --> 00:26:29,110 666 00:26:29,110 --> 00:26:32,409 667 00:26:32,409 --> 00:26:34,360 668 00:26:34,360 --> 00:26:37,210 669 00:26:37,210 --> 00:26:38,619 670 00:26:38,619 --> 00:26:41,190 671 00:26:41,190 --> 00:26:43,990 672 00:26:43,990 --> 00:26:45,549 673 00:26:45,549 --> 00:26:46,840 674 00:26:46,840 --> 00:26:49,570 675 00:26:49,570 --> 00:26:51,580 676 00:26:51,580 --> 00:26:53,169 677 00:26:53,169 --> 00:26:55,779 678 00:26:55,779 --> 00:26:57,999 679 00:26:57,999 --> 00:26:59,710 680 00:26:59,710 --> 00:27:01,299 681 00:27:01,299 --> 00:27:03,369 682 00:27:03,369 --> 00:27:05,529 683 00:27:05,529 --> 00:27:10,600 684 00:27:10,600 --> 00:27:12,460 685 00:27:12,460 --> 00:27:15,940 686 00:27:15,940 --> 00:27:18,279 687 00:27:18,279 --> 00:27:20,830 688 00:27:20,830 --> 00:27:22,960 689 00:27:22,960 --> 00:27:24,610 690 00:27:24,610 --> 00:27:26,860 691 00:27:26,860 --> 00:27:30,490 692 00:27:30,490 --> 00:27:32,049 693 00:27:32,049 --> 00:27:34,389 694 00:27:34,389 --> 00:27:36,820 695 00:27:36,820 --> 00:27:40,419 696 00:27:40,419 --> 00:27:42,249 697 00:27:42,249 --> 00:27:44,409 698 00:27:44,409 --> 00:27:46,210 699 00:27:46,210 --> 00:27:48,369 700 00:27:48,369 --> 00:27:50,109 701 00:27:50,109 --> 00:27:52,720 702 00:27:52,720 --> 00:27:54,759 703 00:27:54,759 --> 00:27:56,350 704 00:27:56,350 --> 00:27:58,629 705 00:27:58,629 --> 00:28:01,239 706 00:28:01,239 --> 00:28:03,159 707 00:28:03,159 --> 00:28:04,600 708 00:28:04,600 --> 00:28:06,639 709 00:28:06,639 --> 00:28:08,230 710 00:28:08,230 --> 00:28:12,029 711 00:28:12,029 --> 00:28:16,450 712 00:28:16,450 --> 00:28:18,850 713 00:28:18,850 --> 00:28:20,320 714 00:28:20,320 --> 00:28:23,169 715 00:28:23,169 --> 00:28:25,930 716 00:28:25,930 --> 00:28:28,419 717 00:28:28,419 --> 00:28:31,269 718 00:28:31,269 --> 00:28:33,310 719 00:28:33,310 --> 00:28:36,310 720 00:28:36,310 --> 
00:28:38,049 721 00:28:38,049 --> 00:28:40,600 722 00:28:40,600 --> 00:28:41,980 723 00:28:41,980 --> 00:28:44,049 724 00:28:44,049 --> 00:28:45,460 725 00:28:45,460 --> 00:28:47,409 726 00:28:47,409 --> 00:28:50,139 727 00:28:50,139 --> 00:28:52,570 728 00:28:52,570 --> 00:28:54,100 729 00:28:54,100 --> 00:28:56,379 730 00:28:56,379 --> 00:28:58,299 731 00:28:58,299 --> 00:29:00,460 732 00:29:00,460 --> 00:29:02,230 733 00:29:02,230 --> 00:29:04,810 734 00:29:04,810 --> 00:29:06,639 735 00:29:06,639 --> 00:29:09,249 736 00:29:09,249 --> 00:29:11,799 737 00:29:11,799 --> 00:29:14,019 738 00:29:14,019 --> 00:29:17,379 739 00:29:17,379 --> 00:29:19,389 740 00:29:19,389 --> 00:29:21,279 741 00:29:21,279 --> 00:29:23,320 742 00:29:23,320 --> 00:29:24,549 743 00:29:24,549 --> 00:29:27,159 744 00:29:27,159 --> 00:29:30,700 745 00:29:30,700 --> 00:29:32,350 746 00:29:32,350 --> 00:29:34,210 747 00:29:34,210 --> 00:29:36,369 748 00:29:36,369 --> 00:29:38,139 749 00:29:38,139 --> 00:29:39,850 750 00:29:39,850 --> 00:29:42,310 751 00:29:42,310 --> 00:29:44,830 752 00:29:44,830 --> 00:29:46,930 753 00:29:46,930 --> 00:29:51,070 754 00:29:51,070 --> 00:29:53,060 755 00:29:53,060 --> 00:29:54,499 756 00:29:54,499 --> 00:29:56,360 757 00:29:56,360 --> 00:29:58,519 758 00:29:58,519 --> 00:30:01,009 759 00:30:01,009 --> 00:30:02,930 760 00:30:02,930 --> 00:30:04,970 761 00:30:04,970 --> 00:30:07,399 762 00:30:07,399 --> 00:30:09,049 763 00:30:09,049 --> 00:30:10,970 764 00:30:10,970 --> 00:30:12,470 765 00:30:12,470 --> 00:30:13,909 766 00:30:13,909 --> 00:30:17,060 767 00:30:17,060 --> 00:30:19,249 768 00:30:19,249 --> 00:30:22,669 769 00:30:22,669 --> 00:30:24,379 770 00:30:24,379 --> 00:30:26,950 771 00:30:26,950 --> 00:30:29,029 772 00:30:29,029 --> 00:30:33,799 773 00:30:33,799 --> 00:30:36,649 774 00:30:36,649 --> 00:30:39,549 775 00:30:39,549 --> 00:30:41,810 776 00:30:41,810 --> 00:30:43,759 777 00:30:43,759 --> 00:30:46,100 778 00:30:46,100 --> 00:30:49,129 779 
00:30:49,129 --> 00:30:50,419 780 00:30:50,419 --> 00:30:53,149 781 00:30:53,149 --> 00:30:55,629 782 00:30:55,629 --> 00:30:57,590 783 00:30:57,590 --> 00:30:59,119 784 00:30:59,119 --> 00:31:01,549 785 00:31:01,549 --> 00:31:02,899 786 00:31:02,899 --> 00:31:04,759 787 00:31:04,759 --> 00:31:06,799 788 00:31:06,799 --> 00:31:09,320 789 00:31:09,320 --> 00:31:12,230 790 00:31:12,230 --> 00:31:13,279 791 00:31:13,279 --> 00:31:16,039 792 00:31:16,039 --> 00:31:18,940 793 00:31:18,940 --> 00:31:20,960 794 00:31:20,960 --> 00:31:23,240 795 00:31:23,240 --> 00:31:26,269 796 00:31:26,269 --> 00:31:28,999 797 00:31:28,999 --> 00:31:31,460 798 00:31:31,460 --> 00:31:34,310 799 00:31:34,310 --> 00:31:37,460 800 00:31:37,460 --> 00:31:40,519 801 00:31:40,519 --> 00:31:42,499 802 00:31:42,499 --> 00:31:44,570 803 00:31:44,570 --> 00:31:47,269 804 00:31:47,269 --> 00:31:48,860 805 00:31:48,860 --> 00:31:53,419 806 00:31:53,419 --> 00:31:55,249 807 00:31:55,249 --> 00:31:56,810 808 00:31:56,810 --> 00:31:58,399 809 00:31:58,399 --> 00:32:00,169 810 00:32:00,169 --> 00:32:02,480 811 00:32:02,480 --> 00:32:04,119 812 00:32:04,119 --> 00:32:06,159 813 00:32:06,159 --> 00:32:07,659 814 00:32:07,659 --> 00:32:09,249 815 00:32:09,249 --> 00:32:10,690 816 00:32:10,690 --> 00:32:12,190 817 00:32:12,190 --> 00:32:14,320 818 00:32:14,320 --> 00:32:18,909 819 00:32:18,909 --> 00:32:22,389 820 00:32:22,389 --> 00:32:26,229 821 00:32:26,229 --> 00:32:29,169 822 00:32:29,169 --> 00:32:30,909 823 00:32:30,909 --> 00:32:32,440 824 00:32:32,440 --> 00:32:34,720 825 00:32:34,720 --> 00:32:37,029 826 00:32:37,029 --> 00:32:38,950 827 00:32:38,950 --> 00:32:41,080 828 00:32:41,080 --> 00:32:42,879 829 00:32:42,879 --> 00:32:46,810 830 00:32:46,810 --> 00:32:48,700 831 00:32:48,700 --> 00:32:50,049 832 00:32:50,049 --> 00:32:53,379 833 00:32:53,379 --> 00:32:56,009 834 00:32:56,009 --> 00:32:57,909 835 00:32:57,909 --> 00:32:59,859 836 00:32:59,859 --> 00:33:01,359 837 00:33:01,359 --> 
00:33:03,310 838 00:33:03,310 --> 00:33:06,249 839 00:33:06,249 --> 00:33:07,539 840 00:33:07,539 --> 00:33:09,609 841 00:33:09,609 --> 00:33:11,619 842 00:33:11,619 --> 00:33:12,489 843 00:33:12,489 --> 00:33:15,220 844 00:33:15,220 --> 00:33:18,580 845 00:33:18,580 --> 00:33:25,460 846 00:33:25,460 --> 00:33:30,680 847 00:33:30,680 --> 00:33:35,190 848 00:33:35,190 --> 00:33:37,200 849 00:33:37,200 --> 00:33:39,120 850 00:33:39,120 --> 00:33:40,080 851 00:33:40,080 --> 00:33:42,660 852 00:33:42,660 --> 00:33:45,420 853 00:33:45,420 --> 00:33:47,820 854 00:33:47,820 --> 00:33:49,400 855 00:33:49,400 --> 00:33:52,080 856 00:33:52,080 --> 00:33:55,020 857 00:33:55,020 --> 00:33:58,170 858 00:33:58,170 --> 00:34:01,380 859 00:34:01,380 --> 00:34:03,450 860 00:34:03,450 --> 00:34:05,880 861 00:34:05,880 --> 00:34:08,010 862 00:34:08,010 --> 00:34:10,620 863 00:34:10,620 --> 00:34:12,090 864 00:34:12,090 --> 00:34:14,160 865 00:34:14,160 --> 00:34:16,110 866 00:34:16,110 --> 00:34:18,570 867 00:34:18,570 --> 00:34:21,270 868 00:34:21,270 --> 00:34:24,480 869 00:34:24,480 --> 00:34:32,010 870 00:34:32,010 --> 00:34:34,350 871 00:34:34,350 --> 00:34:36,030 872 00:34:36,030 --> 00:34:38,970 873 00:34:38,970 --> 00:34:40,920 874 00:34:40,920 --> 00:34:43,590 875 00:34:43,590 --> 00:34:45,750 876 00:34:45,750 --> 00:34:47,370 877 00:34:47,370 --> 00:34:48,780 878 00:34:48,780 --> 00:34:51,980 879 00:34:51,980 --> 00:34:56,760 880 00:34:56,760 --> 00:34:58,830 881 00:34:58,830 --> 00:35:02,490 882 00:35:02,490 --> 00:35:04,860 883 00:35:04,860 --> 00:35:06,390 884 00:35:06,390 --> 00:35:09,360 885 00:35:09,360 --> 00:35:11,010 886 00:35:11,010 --> 00:35:15,780 887 00:35:15,780 --> 00:35:19,200 888 00:35:19,200 --> 00:35:22,200 889 00:35:22,200 --> 00:35:24,150 890 00:35:24,150 --> 00:35:27,360 891 00:35:27,360 --> 00:35:30,300 892 00:35:30,300 --> 00:35:33,930 893 00:35:33,930 --> 00:35:36,390 894 00:35:36,390 --> 00:35:38,070 895 00:35:38,070 --> 00:35:38,080 896 
00:35:38,080 --> 00:35:38,480 897 00:35:38,480 --> 00:35:39,890 898 00:35:39,890 --> 00:35:42,890 899 00:35:42,890 --> 00:35:44,600 900 00:35:44,600 --> 00:35:47,359 901 00:35:47,359 --> 00:35:49,310 902 00:35:49,310 --> 00:35:49,320 903 00:35:49,320 --> 00:35:50,090 904 00:35:50,090 --> 00:35:51,590 905 00:35:51,590 --> 00:35:53,390 906 00:35:53,390 --> 00:35:55,970 907 00:35:55,970 --> 00:35:58,160 908 00:35:58,160 --> 00:36:02,300 909 00:36:02,300 --> 00:36:08,510 910 00:36:08,510 --> 00:36:12,859 911 00:36:12,859 --> 00:36:16,400 912 00:36:16,400 --> 00:36:18,650 913 00:36:18,650 --> 00:36:20,300 914 00:36:20,300 --> 00:36:22,670 915 00:36:22,670 --> 00:36:25,100 916 00:36:25,100 --> 00:36:28,760 917 00:36:28,760 --> 00:36:30,590 918 00:36:30,590 --> 00:36:32,600 919 00:36:32,600 --> 00:36:34,430 920 00:36:34,430 --> 00:36:36,170 921 00:36:36,170 --> 00:36:39,260 922 00:36:39,260 --> 00:36:42,410 923 00:36:42,410 --> 00:36:45,440 924 00:36:45,440 --> 00:36:47,930 925 00:36:47,930 --> 00:36:51,290 926 00:36:51,290 --> 00:36:55,040 927 00:36:55,040 --> 00:36:58,520 928 00:36:58,520 --> 00:37:03,080 929 00:37:03,080 --> 00:37:07,880 930 00:37:07,880 --> 00:37:12,140 931 00:37:12,140 --> 00:37:14,870 932 00:37:14,870 --> 00:37:16,910 933 00:37:16,910 --> 00:37:18,770 934 00:37:18,770 --> 00:37:20,720 935 00:37:20,720 --> 00:37:22,940 936 00:37:22,940 --> 00:37:24,320 937 00:37:24,320 --> 00:37:26,960 938 00:37:26,960 --> 00:37:28,520 939 00:37:28,520 --> 00:37:30,170 940 00:37:30,170 --> 00:37:32,599 941 00:37:32,599 --> 00:37:35,050 942 00:37:35,050 --> 00:37:38,200 943 00:37:38,200 --> 00:37:40,089 944 00:37:40,089 --> 00:37:45,220 945 00:37:45,220 --> 00:37:47,440 946 00:37:47,440 --> 00:37:50,109 947 00:37:50,109 --> 00:37:53,140 948 00:37:53,140 --> 00:37:55,089 949 00:37:55,089 --> 00:37:57,450 950 00:37:57,450 --> 00:38:00,579 951 00:38:00,579 --> 00:38:02,710 952 00:38:02,710 --> 00:38:04,839 953 00:38:04,839 --> 00:38:09,370 954 00:38:09,370 --> 
00:38:12,010 955 00:38:12,010 --> 00:38:13,690 956 00:38:13,690 --> 00:38:15,880 957 00:38:15,880 --> 00:38:17,349 958 00:38:17,349 --> 00:38:20,589 959 00:38:20,589 --> 00:38:23,050 960 00:38:23,050 --> 00:38:27,790 961 00:38:27,790 --> 00:38:29,770 962 00:38:29,770 --> 00:38:31,390 963 00:38:31,390 --> 00:38:32,500 964 00:38:32,500 --> 00:38:34,660 965 00:38:34,660 --> 00:38:39,670 966 00:38:39,670 --> 00:38:42,190 967 00:38:42,190 --> 00:38:44,230 968 00:38:44,230 --> 00:38:46,750 969 00:38:46,750 --> 00:38:48,660 970 00:38:48,660 --> 00:38:51,370 971 00:38:51,370 --> 00:39:00,370 972 00:39:00,370 --> 00:39:01,780 973 00:39:01,780 --> 00:39:03,280 974 00:39:03,280 --> 00:39:05,470 975 00:39:05,470 --> 00:39:07,420 976 00:39:07,420 --> 00:39:10,510 977 00:39:10,510 --> 00:39:11,890 978 00:39:11,890 --> 00:39:14,140 979 00:39:14,140 --> 00:39:15,670 980 00:39:15,670 --> 00:39:20,410 981 00:39:20,410 --> 00:39:22,030 982 00:39:22,030 --> 00:39:23,740 983 00:39:23,740 --> 00:39:25,089 984 00:39:25,089 --> 00:39:28,120 985 00:39:28,120 --> 00:39:30,370 986 00:39:30,370 --> 00:39:33,400 987 00:39:33,400 --> 00:39:35,050 988 00:39:35,050 --> 00:39:38,920 989 00:39:38,920 --> 00:39:41,290 990 00:39:41,290 --> 00:39:43,240 991 00:39:43,240 --> 00:39:44,920 992 00:39:44,920 --> 00:39:47,380 993 00:39:47,380 --> 00:39:49,030 994 00:39:49,030 --> 00:39:52,220 995 00:39:52,220 --> 00:39:54,110 996 00:39:54,110 --> 00:39:56,060 997 00:39:56,060 --> 00:39:57,500 998 00:39:57,500 --> 00:39:58,040 999 00:39:58,040 --> 00:39:59,480 1000 00:39:59,480 --> 00:40:04,520 1001 00:40:04,520 --> 00:40:08,570 1002 00:40:08,570 --> 00:40:10,370 1003 00:40:10,370 --> 00:40:12,290 1004 00:40:12,290 --> 00:40:15,020 1005 00:40:15,020 --> 00:40:16,250 1006 00:40:16,250 --> 00:40:18,860 1007 00:40:18,860 --> 00:40:21,650 1008 00:40:21,650 --> 00:40:23,090 1009 00:40:23,090 --> 00:40:24,590 1010 00:40:24,590 --> 00:40:27,170 1011 00:40:27,170 --> 00:40:30,500 1012 00:40:30,500 --> 00:40:32,930 