The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Hi, good afternoon everyone. So today we're going to be talking about graph optimizations, and as a reminder, on Thursday we're going to have a guest lecture by Professor Johnson of the MIT math department, and he'll be talking about performance of high-level languages, so please be sure to attend the guest lecture on Thursday.

Here's an outline of what I'm going to be talking about today. We're first going to remind ourselves what a graph is, then we're going to talk about various ways to represent a graph in memory, and then we'll talk about how to implement an efficient breadth-first search algorithm, both serially and in parallel. Then I'll talk about how to use graph compression and graph reordering to improve the locality of graph algorithms.

So first of all, what is a graph? A graph contains vertices and edges, where vertices represent certain objects of interest, and an edge between two objects models a relationship between them. For example, you can have a social network where the people are represented as vertices, and an edge between two people means that they're friends with each other. The edges in this graph don't have to be bidirectional, so you could have a one-way relationship; for example, if you're looking at the Twitter network, Alice could follow Bob, but Bob doesn't necessarily have to follow Alice back. The graph also doesn't have to be connected. This graph here is connected, but for example there could be some people who don't like to talk to other people, and then they're just off in their own component. You can also use graphs to model protein networks, where the vertices are proteins and an edge between two vertices means
that there's some sort of interaction between the proteins, so this is useful in computational biology. As I said, edges can be directed, so the relationship can go one way or both ways. In this graph here, we have some directed edges and also some edges that are directed in both directions: John follows Alice, Alice follows Peter, and Alice follows Bob while Bob also follows Alice. If you use a graph to represent the World Wide Web, then the vertices would be websites, and an edge would denote that there's a hyperlink from one website to another. Again, the edges here don't have to be bidirectional, because website A could have a link to website B, but website B doesn't necessarily have to have a link back.

Edges can also be weighted, so you can have a weight on an edge that denotes the strength of the relationship, or some sort of distance measure corresponding to that relationship. Here I have an example where I'm using a graph to represent cities, and the edges between cities have weights that correspond to the distances between the two cities. If I want to find the quickest way to get from city A to city B, then I would be interested in finding the shortest path from A to B in this graph. Here's another example where the edge weights are the cost of a direct flight from city A to city B, and here the edges are directed; for example, this says that there's a flight from San Francisco to LA for 45 dollars. If I want to find the cheapest way to get from one city to another, then again I would try to find the shortest path in this graph from city A to city B.

Vertices and edges can also have metadata on them, and they can also have types. For example, here's the Google Knowledge Graph, which represents all the knowledge on the internet that Google knows about. Here the nodes have metadata on them; for example, the node corresponding to da Vinci is labeled with his date of birth and date of death. The vertices also have a color
corresponding to the type of knowledge that they refer to, so you can see that some of these nodes are blue, some of them are red, some of them are green, and some of them have other things on them. So in general, graphs can have types and metadata on both the vertices and the edges.

Let's look at some more applications of graphs. Graphs are very useful for implementing queries on social networks, so here are some examples of queries that you might want to ask on a social network. For example, you might be interested in finding all of your friends who went to the same high school as you on Facebook; that can be implemented using a graph algorithm. You might also be interested in finding all the common friends you have with somebody else, again a graph algorithm. A social network service might run a graph algorithm to recommend people that you might know and want to become friends with, and they might use a graph algorithm to recommend certain products that you might be interested in. So these are all examples of social network queries, and there are many other queries that you might want to run on a social network, and many of them can be implemented using graph algorithms.

Another important application is clustering. Here the goal is to find groups of vertices in a graph that are well connected internally and poorly connected externally. In this image here, each blob of vertices of the same color corresponds to a cluster, and you can see that inside a cluster there are a lot of edges going among the vertices, while between clusters there are relatively fewer edges. Some applications of clustering include community detection in social networks, where you might be interested in finding groups of people with similar interests or hobbies. You can also use clustering to detect fraudulent websites on the internet, and you can use it for clustering documents, so you would cluster documents that have similar text together. Clustering is often used for
unsupervised learning in machine learning applications.

Another application is connectomics. Connectomics is the study of the network structure of the brain: the vertices correspond to neurons, and an edge between two vertices means that there's some sort of interaction between the two neurons. Recently there's been a lot of work on trying to do high-performance connectomics, and some of this work has been going on here at MIT in Professor Charles Leiserson's and Professor Nir Shavit's research groups, so this has been a very hot area.

Graphs are also used in computer vision, for example in image segmentation. Here you want to segment your image into the distinct objects that appear in it. You can construct a graph by representing the pixels as vertices, and then you place an edge between every pair of neighboring pixels with a weight that corresponds to their similarity, and then you run some sort of minimum-cost cut algorithm to partition your graph into the different objects that appear in the image. There are many other applications, and I'm not going to have time to go through all of them today, but this is just a flavor of some of the applications of graphs. Any questions so far?

OK, so next let's look at how we can represent a graph in memory. For the rest of this lecture, I'm going to assume that my vertices are labeled in the range from 0 to n minus 1, so each has an integer in this range. Sometimes your graph might be given to you with the vertices already labeled in this range, sometimes not, but you can always get these labels by mapping each of the identifiers to a unique integer in this range. So for the rest of the lecture, I'm just going to assume that we have these labels from 0 to n minus 1 for the vertices.

One way to represent a graph is to use an adjacency matrix. This is an n by n matrix, and there's a 1 bit in the i-th row and j-th column if there's an edge that goes
from vertex i to vertex j, and a 0 otherwise. Another way to represent a graph is the edge list representation, where we just store a list of the edges that appear in the graph: we have one pair for each edge, where the pair contains the two endpoints of that edge.

So what is the space requirement for each of these two representations, in terms of the number of edges m and the number of vertices n in the graph? This should be pretty easy. Yes? Right, so the space for the adjacency matrix is order n squared, because you have n squared cells in this matrix and one bit for each cell. For the edge list, it's going to be order m, because you have m edges, and for each edge you're storing a constant amount of data.

Here's another way to represent a graph, known as the adjacency list format. The idea here is that we have an array of pointers, one per vertex, and each pointer points to a linked list storing the edges for that vertex; the linked list is unordered in this example. So what's the space requirement of this representation? Yes, it's going to be order n plus m, because we have n pointers, and the sum of the number of entries across all the linked lists is equal to the number of edges in the graph, which is m.

What's one potential issue with this sort of representation, if you think in terms of cache performance? Does anyone see a potential performance issue here? Right, the issue is that if you're trying to loop over all the neighbors of a vertex, you're going to have to dereference the pointer in every linked-list node, because the nodes are not contiguous in memory. Every time you dereference a linked-list node, that's a random access into memory, and that can be bad for cache performance. One way you can improve cache performance is, instead of using a linked list for each neighbor list, to use an array. Now you can store the neighbors in this array, and they'll be
contiguous in memory. One drawback of this approach is that it becomes more expensive if you're trying to update the graph, and we'll talk more about that later. Any questions so far?

So what's another way to represent the graph, that we've seen in a previous lecture? What's a more compressed or compact way to represent a graph, especially a sparse graph? Does anybody remember the compressed sparse row format? We looked at this in one of the early lectures, and in that lecture we used it to store a sparse matrix, but you can also use it to store a sparse graph. As a reminder, we have two arrays in the compressed sparse row, or CSR, format: the Offsets array and the Edges array. The Offsets array stores an offset for each vertex into the Edges array, telling us where the edges for that particular vertex begin. So Offsets[i] stores the offset of where vertex i's edges start in the Edges array. In this example, vertex 0 has an offset of 0, so its edges start at position 0 in the Edges array, and vertex 1 has an offset of 4, so its edges start at index 4 in the Edges array.

With this representation, how can we get the degree of a vertex? We're not storing the degree explicitly here, so can we get it efficiently? Yes, you can get the degree of a vertex just by looking at the difference between the next vertex's offset and its own offset. For vertex 0, you can see that its degree is 4, because vertex 1's offset is 4 and vertex 0's offset is 0, and similarly you can do that for all the other vertices.

So what's the space usage of this representation? Right, again it's going to be order n plus m, because you need order n space for the Offsets array and order m space for the Edges array.

You can also store values or weights on the edges. One way to do this is to create an additional array of size m, and then for edge i you just store the weight or value in the i-th index of this additional array. If
you're always accessing the weight when you access an edge, then it's actually better for cache locality to interleave the weights with the edge targets. So instead of creating two arrays of size m, you have one array of size 2m, and every other entry is the weight. This improves cache locality because every time you access an edge, its weight is right next to it in memory and is likely on the same cache line. So that's one way to improve cache locality. Any questions so far?

Now let's look at some of the trade-offs among the different graph representations we've seen. Here I'm listing the storage cost for each of these representations, which we already discussed; this is also the cost of just scanning the whole graph in each representation.

What's the cost of adding an edge in each of these representations? For the adjacency matrix, it's just order 1 to add an edge, because you have random access into the matrix: you just access the (i, j) entry and flip the bit from a 0 to a 1. What about for the edge list? We're assuming that the edge list is unordered, so you don't have to keep it in any sorted order. Again, it's just O(1), because you can add the edge to the end of the list, so that's constant time. What about for the adjacency list? This actually depends on whether we're using linked lists or arrays for the neighbor lists. If we're using a linked list, adding an edge takes constant time, because we can just put it at the beginning of the linked list. If we're using an array, then we actually need to create a new array to make space for the edge we add, and that costs degree of v work, because we have to copy all the existing edges over to the new array and then add the new edge at the end. Of course, you could amortize this cost across multiple updates: if you run out of memory, you can double the size of your array, so you don't have to create these new arrays too often.
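To make the CSR layout concrete, here is a minimal sketch in C. The type and function names are my own, not from the lecture slides, and it assumes the convention mentioned later in the lecture that Offsets has n + 1 entries with Offsets[n] equal to m. It shows the constant-time degree computation and the order-degree-of-v neighbor scan discussed above.

```c
#include <assert.h>
#include <stdbool.h>

// Compressed sparse row (CSR) graph. offsets has n + 1 entries, with
// offsets[n] == m, so degree(v) = offsets[v + 1] - offsets[v].
typedef struct {
    int n;              // number of vertices
    int m;              // number of edges
    const int *offsets; // length n + 1
    const int *edges;   // length m; neighbors of v are stored contiguously
} Graph;

// Degree of v: the difference between consecutive offsets.
int degree(const Graph *g, int v) {
    return g->offsets[v + 1] - g->offsets[v];
}

// Check whether w is a neighbor of v by scanning v's (unsorted)
// neighbor list; this costs O(degree(v)), as discussed above.
bool has_neighbor(const Graph *g, int v, int w) {
    for (int i = g->offsets[v]; i < g->offsets[v + 1]; i++) {
        if (g->edges[i] == w) return true;
    }
    return false;
}
```

For a weighted graph, the interleaved variant described above would store weights in the same edges array at alternating positions, so edge target and weight share a cache line.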
But the cost of any individual addition is still relatively expensive compared to, say, an edge list or adjacency matrix. And then finally, for the compressed sparse row format, adding an edge in the worst case costs order m plus n work, because we may have to reconstruct the entire Offsets array and the entire Edges array: in the Edges array, we have to insert the new edge and shift all of the values to the right of it over by one location, and in the Offsets array, we have to modify the offset for the particular vertex we're adding an edge to, as well as the offsets for all the vertices after it. So the compressed sparse row representation is not particularly friendly to edge updates.

What about deleting an edge from some vertex v? For the adjacency matrix, again it's constant time, because you just randomly access the correct entry and flip the bit from a 1 to a 0. What about for an edge list? Right, for an edge list, in the worst case it costs order m work, because the edges are not in any sorted order, so we may have to scan through the whole list to find the edge we're trying to delete. For an adjacency list, it takes order degree of v work, because the neighbors are not sorted, so we have to scan through the whole neighbor list to find the edge. And finally, for compressed sparse row, it's order m plus n, because we may have to reconstruct the whole thing in the worst case.

What about finding all the neighbors of a particular vertex v? What's the cost of doing this in the adjacency matrix? Right, it costs order n work, because we just scan the correct row in the matrix, the row corresponding to vertex v. For the edge list, we're
going to have to scan the entire edge list, because it's not sorted, so in the worst case that's order m. For the adjacency list, it takes order degree of v, because we can find the pointer to the linked list for that vertex in constant time, and then we just traverse the linked list, which takes order degree of v time. And finally, for the compressed sparse row format, it's also order degree of v, because we have constant-time access into the appropriate location in the Edges array, and then we can just read off the edges, which are consecutive in memory.

What about finding whether a vertex w is a neighbor of v? I'll just give you the answers here. For the adjacency matrix, it takes constant time, because we just have to check the v-th row and w-th column and see if the bit is set there. For the edge list, we have to traverse the entire list to see if the edge is there. And for the adjacency list and compressed sparse row, it's order degree of v, because we just have to scan the neighbor list for that vertex.

So these are some graph representations, but there are actually many others, including variants of the ones I've talked about here. For example, for the adjacency list, I said you can use either a linked list or an array to store the neighbor list, but you can actually use a hybrid approach, where you store a linked list but each linked-list node stores more than one vertex. You might store 16 vertices in each linked-list node, and that gives us better cache locality.

For the rest of this lecture, I'm going to talk about algorithms that are best implemented using the compressed sparse row format. This is because we're going to be dealing with sparse graphs, and we're going to be looking at static algorithms, where we don't have to update the graph. If we do have to update the graph, then CSR isn't a good choice, but we're just going to be looking at static algorithms today. And then, for all the algorithms
that we'll be looking at, we're going to need to scan over all the neighbors of each vertex that we visit, and CSR is very good for that, because all the neighbors of a particular vertex are stored contiguously in memory. Any questions so far?

OK, I do want to talk about some properties of real-world graphs. First, we're seeing graphs that are quite large today, but actually they're not too large. Here are the sizes of some of the real-world graphs out there. There's the Twitter network; this is actually a snapshot of the Twitter network from a couple of years ago. It has 41 million vertices and 1.5 billion edges, and you can store this graph in about 6.3 gigabytes of memory, so you can probably store it in the main memory of your laptop. The largest publicly available graph out there now is the Common Crawl web graph: it has 3.5 billion vertices and 128 billion edges, so storing this graph requires a little over half a terabyte of memory. That is quite a bit of memory, but it's actually not too big, because there are machines out there with main memory sizes on the order of terabytes nowadays. For example, you can rent a 2-terabyte or 4-terabyte memory instance on AWS, which you're using for your homework assignments. So if you have any leftover credits at the end of the semester and you want to play around with this graph, you can rent one of these terabyte machines; just remember to turn it off when you're done, because it's kind of expensive.

Another property of real-world graphs is that they're quite sparse, so m tends to be much less than n squared; most of the possible edges are not actually there. And finally, the degree distributions of the vertices can be highly skewed in many real-world graphs. Here I'm plotting the degree on the x-axis and the number of vertices with that particular degree on the y-axis, and we can see that it's highly skewed. For example, in a social network, most of the people would be on the
left-hand side, so their degree is not that high, and then we have some very popular people on the right-hand side, whose degree is very high, but we don't have too many of those. This is what's known as a power-law degree distribution, and various studies have shown that many real-world graphs have approximately a power-law degree distribution. Mathematically, this means that the number of vertices with degree d is proportional to d to the negative p, so negative p is the exponent, and for many graphs the value of p lies between 2 and 3. This power-law degree distribution does have implications when we're trying to implement parallel algorithms to process these graphs, because with a skewed degree distribution, you could run into load-imbalance issues if you just parallelize across the vertices: the number of edges the vertices have can vary significantly. Any questions?

OK, so now let's talk about how we can implement a graph algorithm, and I'm going to talk about the breadth-first search algorithm. How many of you have seen breadth-first search before? OK, so about half of you. I did talk about breadth-first search in a previous lecture, so I was hoping everybody would raise their hands.

As a reminder, in the BFS algorithm, we're given a source vertex s, and we want to visit the vertices in order of their distance from the source s. There are many possible outputs that we might care about. One possible output is that we just want to report the vertices in the order that they were visited by the breadth-first search traversal. Let's say we have this graph here and our source vertex is D. What's one possible order in which we can traverse these vertices? I should specify that we should traverse this graph in a breadth-first manner. So what's the first vertex we're going to explore? D, because that's our source vertex. For the second vertex, we can actually choose among B, C, and E, because all we care
about is that we're visiting these vertices in order of their distance from the source, and these three vertices all have the same distance. So let's just pick B, then C, then E, and then finally I'm going to visit vertex A, which has a distance of two from the source. This is one possible solution; there are others, because we could have visited E before we visited B, and so on.

Another possible output that we might care about is the distance from each vertex to the source vertex s. In this example, here are the distances: D has a distance of zero, B, C, and E all have a distance of one, and A has a distance of two. We might also want to generate a breadth-first search tree, where each vertex in the tree has a parent which is a neighbor in the previous level of the breadth-first search, or in other words, the parent should have a distance of one less than the vertex itself. Here's an example of a breadth-first search tree, and we can see that each of the vertices has a parent whose breadth-first search distance is one less than its own.

The algorithms I'm going to be talking about today will generate the distances as well as the BFS tree. BFS actually has many applications. It's used as a subroutine in betweenness centrality, which is a very popular graph mining algorithm used to rank the importance of nodes in a network, where the importance of a node corresponds to how many shortest paths go through it. Other applications include eccentricity estimation and maximum flow, since some max-flow algorithms use BFS as a subroutine, and you can use BFS to crawl the web, do cycle detection, garbage collection, and so on.

So let's now look at a serial BFS algorithm, and here I'm just going to show the pseudocode. First, we initialize the distances to all infinity, and we initialize the parents to be NIL. Then we create a queue data structure, and we set the distance of the root to be zero, because the
root has a distance of zero to itself, and then we place the root onto the queue. Then, while the queue is not empty, we dequeue the first thing in the queue, and we look at all the neighbors of the current vertex that we dequeued. For each neighbor, we check if its distance is infinity; if it is, that means we haven't explored that neighbor yet, so we go ahead and explore it. We do so by setting its distance value to be the current vertex's distance plus one, setting the parent of that neighbor to be the current vertex, and placing the neighbor onto the queue. So it's a pretty simple algorithm, and we just keep iterating in this while loop until there are no more vertices left in the queue.

What's the work of this algorithm in terms of n and m? How much work are we doing per edge? Yes, so assuming that the enqueue and dequeue operations are constant time, we're doing a constant amount of work per edge, so summed across all edges that's order m. We're also doing a constant amount of work per vertex, because we have to place it onto the queue, take it off the queue, and initialize its values. So the overall work is order m plus n.

OK, so let's now look at some actual code to implement this serial BFS algorithm using the compressed sparse row format. First, I'm going to initialize two arrays, parent and queue, which are integer arrays of size n. I initialize all of the parent entries to be negative one, I place the source vertex onto the queue, so it appears at queue[0], the beginning of the queue, and then I set the parent of the source vertex to be the source itself. I also have two integers that point to the front and the back of the queue; initially the front of the queue is at position 0 and the back is at position 1. Then, while the queue is not empty,
which I can check by testing whether queueFront is not equal to queueBack, I dequeue the first vertex in the queue: I set current to be that vertex and then increment queueFront. Then I compute the degree of that vertex, which I do by looking at the difference between consecutive offsets, and I also assume that Offsets[n] is equal to m, just to deal with the last vertex. Then I loop through all of the neighbors of the current vertex, and to access each neighbor, I go into the Edges array: I know that my neighbors start at Offsets[current], and therefore to get the i-th neighbor I just look at Edges[Offsets[current] + i]. Now I check if my neighbor has been explored yet, which I do by checking whether parent[neighbor] is equal to negative one. If it is, that means I haven't explored it yet, so I set parent[neighbor] to be current, place the neighbor onto the back of the queue, and increment queueBack. I just keep repeating this while loop until the queue becomes empty. Here I'm only generating the parent pointers, but I could also generate the distances if I wanted to, with just a slight modification of this code. Any questions on how this code works?

OK, so here's a question: what's the most expensive part of this code? Can you point to one particular line here that is the most expensive? OK, so it actually turns out that that's not the most expensive part of this code, but you're close. Does anyone have any other ideas? Yes, it turns out that the line where we access parent[neighbor] is the most expensive. Whenever we access this parent array, the neighbor can appear anywhere in memory, so that's a random access, and if the parent array doesn't fit in our cache, then that costs us a cache miss almost every time. The Edges array, on the other hand, is mostly accessed sequentially, because for each vertex, all of its edges are stored contiguously in memory.
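As a sketch, the serial CSR-based BFS loop just described might look like this in C. The variable names are my own approximation of the lecture's code, and the offsets array is assumed to have n + 1 entries with offsets[n] equal to m, as stated above.

```c
#include <assert.h>
#include <stdlib.h>

// Serial BFS over a CSR graph. offsets has n + 1 entries with
// offsets[n] == m; edges has m entries. Writes the BFS parent of each
// reachable vertex into parent[]; unreachable vertices stay -1.
void bfs_serial(int n, const int *offsets, const int *edges,
                int source, int *parent) {
    int *queue = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) parent[i] = -1; // -1 means unexplored
    queue[0] = source;          // source starts at the front of the queue
    parent[source] = source;    // the source is its own parent
    int queueFront = 0, queueBack = 1;
    while (queueFront != queueBack) {          // queue not empty
        int current = queue[queueFront++];     // dequeue
        // Neighbors of current are edges[offsets[current] .. offsets[current+1]-1].
        for (int j = offsets[current]; j < offsets[current + 1]; j++) {
            int neighbor = edges[j];
            if (parent[neighbor] == -1) {      // not explored yet
                parent[neighbor] = current;    // the expensive random access
                queue[queueBack++] = neighbor; // enqueue
            }
        }
    }
    free(queue);
}
```

The check of parent[neighbor] in the inner loop is the random access identified above as the dominant cost.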
We do have one random access into the Edges array per vertex, because we have to look up the starting location of that vertex's edges, but that's one per vertex, not one per edge, unlike this check of the parent array, which occurs for every edge. Does that make sense?

So let's do a back-of-the-envelope calculation to figure out how many cache misses we would incur, assuming that we start with a cold cache. We'll also assume that n is much larger than the size of the cache, so we can't fit any of these arrays in cache, and that a cache line holds 64 bytes while integers are 4 bytes each. Let's try to analyze this.

The initialization will cost us n/16 cache misses. The reason is that we're initializing this array sequentially, so we're accessing contiguous locations, and this can take advantage of spatial locality: each cache line fits 16 integers, so overall we need n/16 cache misses to initialize the array. We also need n/16 cache misses across the entire algorithm to dequeue vertices from the front of the queue, because again this is a sequential access into the queue array, and across all vertices that's n/16 cache misses, since we can fit 16 integers on a cache line. Computing the degrees takes n cache misses overall, because each access to the Offsets array is a random access; we have no idea what the value of current is here, it could be anything, so across the entire algorithm we need n cache misses to access the Offsets array. Then, to access the Edges array, I claim that we need at most 2n + m/16 cache misses. Does anyone see where that bound comes from? Where does the m/16 come from? Right, so you have to pay m/16 because you're accessing every edge once, and you're accessing the edges contiguously, so
therefore, across all edges, that takes m/16 cache misses. But we also have to add 2n, because whenever we access the edges for a particular vertex, the first cache line might not contain only that vertex's edges, and similarly, the last cache line we access might not contain only that vertex's edges. So in the worst case, we waste the first and last cache lines for each vertex, and summed across all vertices that's 2n; this gives an upper bound of 2n + m/16. Accessing the parent array is a random access every time, so we incur a cache miss in the worst case every time, and summed across all edge accesses that's m cache misses. Finally, we pay n/16 cache misses to enqueue the neighbors onto the queue, because these are sequential accesses.

So in total, we incur at most 51/16 times n plus 17/16 times m cache misses. If m is greater than 3n, then the second term dominates, and m is usually greater than 3n in most real-world graphs; that second term is dominated by the random access into the parent array.

So let's see if we can optimize this code to get better cache performance. Let's say we could fit a bit vector of size n in cache, but we couldn't fit the entire parent array. What could we do to reduce the number of cache misses? Does anyone have any ideas? Yes, that's exactly correct: we're going to use a bit vector to store whether each vertex has been explored yet or not, so we only need one bit per vertex. We're not storing the parent ID in this bit vector, just a bit saying whether the vertex has been explored. Then, before we check the parent array, we first check the bit vector to see if the vertex has been explored. If it has been explored, we don't even need to access the parent array; if it hasn't been explored,
then we'll go ahead and access the parent entry of the neighbor. But we only have to do this one time for each vertex in the graph, because we can only visit each vertex once, and therefore we reduce the number of cache misses for the parent array from m down to n. So overall this might improve the number of cache misses, and in fact it does, if the number of edges is large enough relative to the number of vertices. However, you do have to do a little bit more computation, because you have to do bit-vector manipulation to check the bit vector and to set the appropriate bit when you explore a neighbor.

Here's the code using the bit-vector optimization. I initialize this bit vector called visited; it has approximately n/32 words, and I set all the bits to 0 except for the source vertex's bit, which I set to 1, using this bit calculation here to find the bit for the source vertex. Now, when I'm trying to visit a neighbor, I first check if the neighbor is visited by checking this bit array: I AND visited[neighbor / 32] with the mask 1 left-shifted by neighbor mod 32, and if that's false, the neighbor hasn't been visited yet, so I go inside this if clause, set the visited bit to true using this statement here, and then do the same operations as before. It turns out that this version is faster for large enough values of m relative to n, because you reduce the number of cache misses overall. You still have to do this extra bit-manipulation computation, but if m is large enough, the reduction in cache misses outweighs the additional computation. Any questions?

OK, so that was a serial implementation of breadth-first search.
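As a sketch, the visited bit-vector manipulation just described might look like this in C; the function names are my own, and the lecture's actual code may differ in details.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

// Bit vector over n vertices: one bit per vertex, packed into 32-bit
// words, so we allocate ceil(n / 32) words, all initialized to 0.
unsigned int *make_visited(int n) {
    int nwords = (n + 31) / 32;
    return calloc(nwords, sizeof(unsigned int));
}

// Check the bit for vertex v: AND the word at index v / 32 with a
// mask that has a 1 shifted into position v mod 32.
bool is_visited(const unsigned int *visited, int v) {
    return (visited[v / 32] & (1u << (v % 32))) != 0;
}

// Set the bit for vertex v.
void set_visited(unsigned int *visited, int v) {
    visited[v / 32] |= 1u << (v % 32);
}
```

In the BFS inner loop, you would test is_visited(visited, neighbor) before touching parent[neighbor], and call set_visited when first exploring the neighbor, so the m random accesses to the parent array become at most n.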
The parallel breadth-first search algorithm operates on frontiers, where the initial frontier contains just the source vertex. On every iteration, I explore all of the vertices on the frontier, place any unexplored neighbors onto the next frontier, and then move on to the next frontier. So in the first iteration, I mark the source vertex as explored, set its distance to be 0, and place the neighbors of the source vertex onto the next frontier. In the next iteration I do the same thing: I set these distances to 1, and I also generate a parent pointer for each of these vertices. The parent should come from the previous frontier and should be a neighbor of the vertex; here there's only one option, which is the source vertex, so I just pick that as the parent. Then I place the neighbors onto the next frontier again, mark those as explored, set their distances, and generate parent pointers again. Notice that when I'm generating these parent pointers, there's actually more than one choice for some of the vertices, because there are multiple vertices on the previous frontier, and some of them explore the same neighbor on the current frontier. So a parallel implementation has to be aware of this potential race; here I'm just picking an arbitrary parent.

As we see here, you can process each of these frontiers in parallel: you can parallelize over all of the vertices on the frontier as well as over all of their outgoing edges. However, you do need to finish processing one frontier before you move on to the next one in this BFS algorithm. A parallel implementation also has to be aware of potential races — as I said earlier, we could have multiple vertices on the frontier trying to visit the same neighbor, so that somehow has to be resolved. And the amount of work on each frontier changes throughout the course of the algorithm, so you have to be careful with load balancing: you have to make sure that the amount of work each processor has to do is about the same. If you use Cilk to implement this, then load balancing doesn't really become a problem. Any questions on the BFS algorithm before I go over the code?

OK, so here's the actual code. I'm going to initialize four arrays: the parent array, which is the same as before; an array called frontier, which stores the current frontier; an array called frontierNext, a temporary array I use to store the next frontier of the BFS; and an array called degrees. I initialize all of the parent entries to −1 using a cilk_for loop, place the source vertex at the 0th index of the frontier, set the frontier size to 1, and set the parent of the source to be the source itself. While the frontier size is greater than 0 — meaning I still have more work to do — I first iterate over all of the vertices on my frontier in parallel using a cilk_for loop, and I set the i-th entry of the degrees array to be the degree of the i-th vertex on the frontier, which I can compute just as the difference between consecutive offsets. Then I perform a prefix sum on this degrees array, and we'll see in a minute why I'm doing this prefix sum. But first of all, does anybody recall what a prefix sum is? Who knows what a prefix sum is? Do you want to tell us what it is?

AUDIENCE: [INAUDIBLE]

Yeah. So let me demonstrate prefix sum with an example. Let's say this is our input array. The output array stores, for each location, the sum of everything before that location in the input array. So the first position has a value of 0, because there's nothing before it in the input. The second position has a value of 2, because the sum of everything before it is just the first element. The third location has a
value of 6, because the sum of everything before it is 2 + 4, which is 6, and so on. I believe this was on one of your homework assignments, so hopefully everyone knows what prefix sum is, and later on we'll see how we use it in the parallel breadth-first search.

OK, so I'm going to do a prefix sum on this degrees array, and then I'm going to loop over my frontier again in parallel. I let v be the i-th vertex on the frontier, index be degrees[i] — the prefix-sum value — and my degree be offsets[v + 1] − offsets[v]. Now I loop over v's neighbors. Here I just have a serial for loop, but you could actually parallelize this loop; it turns out that if the number of iterations is small, there's additional overhead to making it parallel, so I made it serial for now. To get the neighbor, I index into the edges array at offsets[v] + j. Then I check whether the neighbor has been explored yet by checking whether parent[neighbor] is equal to −1. If so, it hasn't been explored, so I try to explore it using a compare-and-swap: I try to swap in the value v for the original value of −1 in parent[neighbor]. The compare-and-swap returns true if it was successful and false otherwise, and if it returns true, then v becomes the parent of this neighbor, and I place the neighbor onto frontierNext at position index + j; otherwise I write a −1 at that location.

OK, so let's see why I'm using index + j here. Here's how frontierNext is organized: each vertex on the frontier owns a subset of contiguous locations in the frontierNext array, and the starting location for each vertex is exactly its value in the prefix sum of the degrees array up here.
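The prefix sum (exclusive scan) used on the degrees array can be sketched as follows — a serial version for clarity, with an illustrative name; the lecture's implementation would be a parallel scan with logarithmic span:

```c
#include <stdint.h>

// Exclusive prefix sum: out[i] = sum of in[0..i-1].
// Serial sketch; a divide-and-conquer parallel scan computes the
// same result in Theta(log n) span.
void prefix_sum(const int64_t *in, int64_t *out, int64_t n) {
    int64_t total = 0;
    for (int64_t i = 0; i < n; i++) {
        out[i] = total;   // sum of everything strictly before position i
        total += in[i];
    }
}
```

On an input beginning [2, 4, ...], as in the example above, the output begins [0, 2, 6, ...] — exactly the array of starting offsets into frontierNext.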
So vertex 1 has its first location at index 0, vertex 2 has its first location at index 2, vertex 3 has its first location at index 6, and so on. By using a prefix sum, I can guarantee that all of these vertices own disjoint subarrays of the frontierNext array, so they can all write into frontierNext in parallel without any races, and index + j gives the right location to write to in this array: index is the starting location, and j is for the j-th neighbor.

Here's one potential output after we write to this frontierNext array. We have some non-negative values — these are the vertices we explored in this iteration — and we also have some −1 values. A −1 here means that either the vertex was already explored in a previous iteration, or we tried to explore it in the current iteration but somebody else got there before us: somebody else was doing a compare-and-swap at the same time and finished before we did, so we failed on our compare-and-swap. We don't actually want these −1 values, so we're going to filter them out, and we can filter them out using a prefix sum again. This gives us the new frontier, we set the frontier size to the size of this new frontier, and then we repeat the while loop until there are no more vertices on the frontier. Any questions on this parallel BFS algorithm?

AUDIENCE: [INAUDIBLE]

Do you mean how the filter works? So what you can do is create another array that stores a 1 in location i if that location is not a −1, and a 0 if it is a −1. Then you do a prefix sum on that array, which gives us unique offsets into an output array, so everybody just looks at the prefix sum array and writes to the output array at its offset. It might be easier if I draw this on the board. OK, so let's say we have an array of size 5 here. What I'm going to do is generate another array which stores a 1 if the
value in the corresponding location is not a −1, and a 0 otherwise. Then I do a prefix sum on this array, and that gives me 0, 1, 1, 2, and 2. Now each of the values that are not −1 can just look up the corresponding index in this prefix sum array, and that gives a unique index into the output array: this element writes to position 0, this element writes to position 1, and this element writes to position 2 in my final output. So this would be my final frontier. Does that make sense?

OK, so let's now analyze the work and span of this parallel BFS algorithm. The number of iterations required by the BFS algorithm is upper-bounded by the diameter D of the graph — the diameter of a graph is just the maximum shortest-path distance between any pair of vertices — so that's an upper bound on the number of iterations we need to do. Each iteration takes Theta(log m) span for the cilk_for loops, the prefix sum, and the filter, and this also assumes that the inner loop over the neighbors of a vertex is parallelized. To get the span, we just multiply these two terms, so we get Theta(D log m) span.

What about the work? To compute the work, we have to figure out how much work we're doing per vertex and per edge. First, notice that the sum of the frontier sizes across the entire algorithm is n, because each vertex can be on the frontier at most once. Also, each edge is traversed exactly once, so that leads to m total edge visits. On each iteration of the algorithm we do a prefix sum whose cost is proportional to the frontier size, so summed across all iterations the cost of the prefix sums is Theta(n). We also do the filter, but the work of the filter is proportional to the number of edges traversed in that iteration, which summed across all iterations gives Theta(m) total.
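The filter step sketched on the board can be written as follows — a serial sketch with illustrative names; in the real code, both loops would be cilk_for loops and the scan would be a parallel prefix sum:

```c
#include <stdint.h>
#include <stdlib.h>

// Filter the -1 entries out of frontierNext: flags[i] = 1 where the
// entry is kept, an exclusive prefix sum of the flags gives each kept
// entry a unique slot, and then every kept entry writes to its slot.
// Returns the new frontier size.
int64_t filter_frontier(const int32_t *in, int64_t n, int32_t *out) {
    int64_t *flags = malloc(n * sizeof(int64_t));
    int64_t *slot = malloc(n * sizeof(int64_t));
    for (int64_t i = 0; i < n; i++) flags[i] = (in[i] != -1);
    int64_t total = 0;                       // exclusive prefix sum of flags
    for (int64_t i = 0; i < n; i++) {
        slot[i] = total;
        total += flags[i];
    }
    for (int64_t i = 0; i < n; i++)
        if (flags[i]) out[slot[i]] = in[i];  // disjoint writes
    free(flags);
    free(slot);
    return total;
}
```

Because the prefix sum assigns each surviving element a distinct slot, the final writes go to disjoint locations and can safely run in parallel.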
So overall, the work is Theta(n + m) for this parallel BFS algorithm — it's a work-efficient algorithm, since the work matches that of the serial algorithm. Any questions on the analysis?

OK, so let's look at how this parallel BFS algorithm runs in practice. Here I ran some experiments on a random graph with ten million vertices and a hundred million edges; the edges were randomly generated, and I made sure that each vertex had ten edges. I ran the experiments on a 40-core machine with two-way hyper-threading. Does anyone know what hyper-threading is?

AUDIENCE: [INAUDIBLE]

Yeah, that's a great answer. Hyper-threading is an Intel technology where, for each physical core, the operating system actually sees two logical cores. They share many of the same resources, but they have their own registers, so if one of the logical cores stalls on a long-latency operation, the other logical core can use the shared resources and hide some of the latency.

OK, so here I'm plotting the speedup over the single-threaded time of the parallel algorithm versus the number of threads. We see that on 40 threads we get a speedup of about 22 or 23x, and when we turn on hyper-threading and use all 80 threads, the speedup is about 32x on 40 cores. This is actually pretty good for a parallel graph algorithm — it's very hard to get very good speedups on these irregular graph algorithms, so 32x on 40 cores is pretty good. I also compared this to the serial BFS algorithm, because that's what we ultimately want to compare against. We see that on 80 threads, the speedup over the serial BFS is about 21 or 22x, and the serial BFS is 54 percent faster than the parallel BFS on one thread. This is because the serial version does less work: the parallel version has to do extra work for the prefix sum and the filter, whereas the serial version doesn't. But overall, the parallel implementation is still pretty good. Any questions?

So, a couple of lectures ago we saw this slide, where Charles told us
never to write nondeterministic parallel programs, because it's very hard to debug these programs and hard to reason about them. So is there nondeterminism in this BFS code that we looked at?

AUDIENCE: [INAUDIBLE]

Yes — there's nondeterminism in the compare-and-swap. Let's go back to the code. There's a race in this compare-and-swap here, because we can have multiple vertices trying to write to the parent entry of the same neighbor at the same time, and the one that wins is nondeterministic, so the BFS tree that you get at the end is nondeterministic.

OK, so let's see how we can try to fix this nondeterminism. As we said, this is the line that causes it. It turns out that we can actually make the output BFS tree deterministic by going over the outgoing edges in each iteration in two phases. In the first phase, the vertices on the frontier do write to the parent array, but using a writeMin operator. The writeMin operator is an atomic operation that guarantees that when there are concurrent writes to the same location, the smallest value gets written — so the value that ends up there is deterministic: it's always the smallest value that tried to write there. Then, in the second phase, each vertex v checks, for each neighbor, whether parent[neighbor] is equal to v. If it is, then v was the vertex that successfully wrote to parent[neighbor] in the first phase, and therefore v is responsible for placing this neighbor onto the next frontier. We're also going to set parent[neighbor] to −v. This is just a minor detail: with the writeMin operator, in a future iteration a lower-numbered vertex could try to visit a vertex that we already explored, but if we set the parent entry to a negative value, that can't happen, because we're only ever going to be writing non-negative values to
this location, so a writeMin to a neighbor that has already been explored can never succeed.

OK, so the final BFS tree generated by this code is always going to be the same every time you run it. I do want to point out that this code is still not deterministic with respect to the order in which individual memory locations get updated — you still have a determinacy race in the writeMin operator — but it's still better than the nondeterministic code, in that you always get the same BFS tree.

So how do you actually implement the writeMin operation? It turns out you can implement it using a loop with a compare-and-swap. writeMin takes two arguments: the memory address we're trying to update and the new value we want to write to that address. We first set oldVal to the value at that memory address, and we check whether newVal is less than oldVal. If it is, we attempt a compare-and-swap at that location, writing newVal into the address if its value is still oldVal. If that succeeds, we return; otherwise we failed, which means somebody else came in the meantime and changed the value there, so we have to re-read the old value at the memory address and then repeat. There are two ways this writeMin operator can finish: one is if the compare-and-swap succeeds; the other is if newVal is greater than or equal to oldVal, in which case we no longer have to try to write at all, because the value that's there is already smaller than what we're trying to write.

I implemented an optimized version of this deterministic parallel BFS code and compared it to the nondeterministic version, and it turns out that on 32 cores it's only a little bit slower — about 5 to 20 percent slower on a range of different input graphs. So this is a pretty small price to pay for determinism.
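The writeMin loop just described can be sketched with C11 atomics — the names are illustrative, not the lecture's exact primitives:

```c
#include <stdatomic.h>
#include <stdint.h>

// writeMin: atomically write new_val to *addr if it is smaller than
// the value currently there, using a compare-and-swap loop. The loop
// exits either when the CAS succeeds or when new_val is no longer
// smaller than the value at the address.
void write_min(_Atomic int32_t *addr, int32_t new_val) {
    int32_t old_val = atomic_load(addr);
    while (new_val < old_val) {
        // On failure, atomic_compare_exchange_weak stores the value it
        // found into old_val, so the loop condition automatically
        // re-checks whether the write is still needed.
        if (atomic_compare_exchange_weak(addr, &old_val, new_val))
            return;
    }
}
```

Concurrent calls on the same address always leave the minimum of all attempted values there, which is what makes the resulting BFS tree deterministic.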
You also get many nice benefits, such as ease of debugging and ease of reasoning about the performance of your code. Any questions?

OK, so let me talk about another optimization for breadth-first search, called direction optimization. The idea is motivated by how the sizes of the frontiers change in a typical BFS over time. Here I'm plotting the frontier size on the y-axis in log scale, and the x-axis is the iteration number; on the left we have a random graph, and on the right we have a power-law graph. We see that the frontier size grows pretty rapidly — especially for the power-law graph — and then drops pretty rapidly. This is true for many of the real-world graphs that we see, because many of them look like power-law graphs, and in the BFS algorithm most of the work is done when the frontier is relatively large, so most of the work is done in these middle iterations.

It turns out there are two ways to do breadth-first search. One is the traditional way, which I'll refer to as the top-down method; this is just what we did before — we look at the frontier vertices, explore all of their outgoing neighbors, mark any unexplored ones as explored, and place them onto the next frontier. But there's actually another way, known as the bottom-up method. In the bottom-up method, I look at all of the vertices in the graph that haven't been explored yet, and I look at their incoming edges: if I find an incoming neighbor that's on the current frontier, I can just say that that incoming neighbor is my parent, and I don't even need to look at the rest of my incoming neighbors. In this example, when vertices 9 through 12 loop through their incoming edges, they find an incoming neighbor on the frontier, choose that neighbor as their parent, and get marked as explored. And we can actually save some edge
traversals here, because, for example, if you look at vertex 9 and imagine its edges being traversed in a top-to-bottom order, then vertex 9 only looks at its first incoming edge and finds that the incoming neighbor is on the frontier, so it doesn't even need to inspect the rest of its incoming edges — all we care about is finding one parent in the BFS tree, not all of the possible parents. In this example, vertices 13 through 15 actually end up wasting work, because they look at all of their incoming edges and none of the incoming neighbors are on the frontier, so they don't find a parent.

So the bottom-up approach turns out to work pretty well when the frontier is large and many vertices have already been explored, because in that case you don't have to look at many vertices, and for the ones you do look at, it's very likely that you'll find a neighbor on the current frontier early in the scan of the incoming edges and can skip a bunch of edge traversals. The top-down approach is better when the frontier is relatively small. In a paper in 2012, Scott Beamer actually studied the performance of these two approaches in BFS. This plot here shows the running time versus the iteration number for a power-law graph, comparing the top-down and bottom-up approaches. We see that for the first two steps the top-down approach is faster, then for the next several steps the bottom-up approach is faster, and when we get to the end the top-down approach becomes faster again. So, as I said, the top-down approach is more efficient for small frontiers, whereas the bottom-up approach is more efficient for large frontiers. I also want to point out that in the top-down approach, the update to the parent array actually has to be atomic, because we could have multiple vertices trying to update the same neighbor, but
in the bottom-up approach, the update to the parent array doesn't have to be atomic, because we scan over the incoming neighbors of any particular vertex v serially, so there can only be one processor writing to parent[v].

So we choose between these two approaches based on the size of the frontier. We found that a threshold of about n/20 works pretty well in practice: if the frontier has more than n/20 vertices, we use the bottom-up approach, and otherwise we use the top-down approach. You can also use more sophisticated thresholds, for example ones that also consider the sum of out-degrees, since the actual work depends on the sum of the out-degrees of the vertices on the frontier. You can also use one threshold for going from top-down to bottom-up and a different threshold for going from bottom-up back to top-down — in fact, that's what the original paper did; it had two different thresholds. We also need to generate the inverse graph, or transpose graph, if we're using this method on a directed graph, because in the bottom-up approach we need to look at the incoming neighbors, not the outgoing neighbors. So if the graph wasn't already symmetrized, we have to generate both the incoming and outgoing neighbors for each vertex, which we can do as a preprocessing step. Any questions?

OK, so how do we actually represent the frontier? One way is just to use a sparse integer array, which is what we did before. Another way is to use a dense array: for example, here I have an array of bytes of size n, where n is the number of vertices, with a 1 in position i if vertex i is on the frontier and a 0 otherwise. You can also use a bit vector to compress this further, at the cost of additional bit-level operations to access it. For the top-down approach, a sparse representation is better, because the top-down
approach usually deals with small frontiers, and with a sparse array we only do work proportional to the number of vertices on the frontier. In the bottom-up approach, it turns out a dense representation is better, because we're looking at most of the vertices anyway. So we also need to switch between these two representations based on the approach we're using.

Here are some performance numbers comparing the three different modes of traversal: bottom-up, top-down, and the direction-optimizing approach using a threshold of n/20. First of all, we see that the bottom-up approach is the slowest for both of these graphs, because it does a lot of wasted work in the early iterations. We also see that the direction-optimizing approach is always faster than both the top-down and the bottom-up approach, because by switching to the bottom-up approach at an appropriate time we can save a lot of edge traversals — for example, for the power-law graph, the direction-optimizing approach is almost three times faster than the top-down approach. The benefit of this approach is highly dependent on the input graph, though: it works very well for power-law graphs and random graphs, but if you have a graph where the frontier size is always small, such as a grid graph or a road network, then you would never use the bottom-up approach, and this wouldn't actually give you any performance gains. Any questions?

So it turns out that this direction-optimizing idea is more general than just breadth-first search. A couple of years ago I developed a framework called Ligra, where I generalized the direction-optimizing idea to other graph algorithms, such as betweenness centrality, connected components, PageRank, shortest paths, and so on. In Ligra we have an edgeMap operator that chooses between a sparse implementation and a dense implementation based on the size of the frontier.
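As a sketch of the dense, bottom-up side of such a traversal — with illustrative names, a serial loop standing in for a cilk_for, and in_offsets/in_edges assumed to be the CSR of the transposed graph:

```c
#include <stdint.h>

// One bottom-up BFS step: every unexplored vertex scans its incoming
// neighbors and stops at the first one found on the current frontier.
// The frontier is in the dense byte representation. No atomics are
// needed, because only the loop iteration for v writes parent[v].
void bottom_up_step(int64_t n, const int64_t *in_offsets,
                    const int32_t *in_edges, const uint8_t *frontier,
                    uint8_t *next_frontier, int32_t *parent) {
    for (int64_t v = 0; v < n; v++) {
        if (parent[v] != -1) continue;      // already explored
        for (int64_t j = in_offsets[v]; j < in_offsets[v + 1]; j++) {
            if (frontier[in_edges[j]]) {    // incoming neighbor on frontier
                parent[v] = in_edges[j];
                next_frontier[v] = 1;
                break;                      // one parent is enough
            }
        }
    }
}
```

The early break is what realizes the saved edge traversals described above, and the absence of a compare-and-swap is why the bottom-up direction needs no atomics.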
Sparse here corresponds to the top-down approach and dense to the bottom-up approach, and it turns out that using this direction-optimizing idea in these other applications also gives performance gains in practice.

OK, so let me now talk about another optimization, graph compression, where the goal is to reduce the amount of memory used by the graph algorithm. Recall our CSR representation, where the edges array just stored the IDs of the target vertices. Instead of storing the actual targets, we can do better by first sorting each edge list so that the targets appear in nondecreasing order, and then storing only the differences between consecutive edges; for the first edge of each vertex, we store the difference between its target and the source vertex. So, for example, for vertex 0 the first edge gets a value of 2, because we take the difference between the target and the source: 2 − 0 = 2. For the next edge, we take the difference between the second target and the first target — 7 − 2, which is 5 — and similarly for all the remaining edges. Notice that there are some negative values here; this happens when the first target is smaller than the source — in this example, 1 is smaller than 2, so 1 − 2 gives −1 — and it can only happen for the first edge of a vertex, because all of the other edges encode the difference from the previous edge, and we already sorted the edges in nondecreasing order.

OK, so this compressed edges array will typically contain much smaller values than the original edges array, so now we want to use fewer bits to represent these values — if we still used 32 or 64 bits like we did before, we wouldn't be saving any space. One way to reduce the space usage is to store the values using what's called a variable-length code.
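The difference encoding just described can be sketched as follows, one vertex's sorted edge list at a time (illustrative names):

```c
#include <stdint.h>

// Difference-encode one vertex's sorted edge list: the first entry
// stores target - source (possibly negative), and each later entry
// stores the difference from the previous target.
void delta_encode(int32_t source, const int32_t *targets, int64_t deg,
                  int32_t *diffs) {
    for (int64_t j = 0; j < deg; j++)
        diffs[j] = targets[j] - (j == 0 ? source : targets[j - 1]);
}
```

For example, source 0 with targets [2, 7] encodes as [2, 5], and source 2 with targets [1, 6] encodes as [-1, 5] — only the first difference can be negative, because the rest of the list is sorted.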
A k-bit code encodes each value in chunks of k bits, where each chunk uses k − 1 bits for the data and one bit as a continue bit. For example, let's encode the integer 401 using 8-bit chunks, or byte codes. First we write the value out in binary. Then we take the bottom 7 bits and place them into the data field of the first chunk, and in the last bit of this chunk we record whether there are any more bits left to encode: since there are, we set the continue bit to 1, create another chunk, and place the next 7 bits into its data field. At that point we're done encoding this integer, so we place a 0 in that chunk's continue bit. That's how the encoding works, and decoding just runs this process backwards: you read chunks until you find a chunk with a 0 continue bit, then you shift the data values left accordingly and sum them together to reconstruct the integer value you encoded.

One performance issue here is that when you're decoding, you have to check the continue bit of every chunk and decide what to do based on it, and this is an unpredictable branch, so you can suffer from branch mispredictions. One way to optimize this is to get rid of the continue bits. The idea is to first figure out how many bytes each integer in the sequence needs to encode, then group together integers that require the same number of bytes, and encode each group with a run-length-encoding idea using a header byte: the lower 6 bits of the header store the size of the group, and the highest 2 bits store the number of bytes each integer in the group needs. All of the integers in the group are then stored right after the header byte, and we know exactly how many bytes each of them needs.
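A byte code (k = 8) with continue bits can be sketched as follows — here I keep the continue bit in the high bit of each byte; the names are illustrative:

```c
#include <stdint.h>

// Byte code with continue bits, as in the 401 example: each byte holds
// 7 data bits, and the top bit says whether more chunks follow.
// Both functions return the number of bytes written or consumed.
int encode_byte_code(uint32_t value, uint8_t *out) {
    int n = 0;
    do {
        uint8_t chunk = value & 0x7F;            // low 7 data bits
        value >>= 7;
        out[n++] = chunk | (value ? 0x80 : 0);   // continue bit if more remain
    } while (value);
    return n;
}

int decode_byte_code(const uint8_t *in, uint32_t *value) {
    uint32_t result = 0;
    int n = 0, shift = 0;
    uint8_t chunk;
    do {
        chunk = in[n++];
        result |= (uint32_t)(chunk & 0x7F) << shift;
        shift += 7;
    } while (chunk & 0x80);   // the unpredictable branch discussed above
    *value = result;
    return n;
}
```

The while condition in the decoder is exactly the branch the header-byte scheme eliminates, trading a little extra space for predictable control flow.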
So we don't need to store continue bits in the chunks. This does slightly increase the space usage, but it makes decoding cheaper, because we no longer suffer branch mispredictions from checking the continue bits.

OK, so now we have to decode these edge lists on the fly as we're running the algorithm — if we decoded everything at the beginning, we wouldn't actually be saving any space — so we need to decode the edges as we access them. Since we encoded each vertex's edge list separately, we can decode all of the lists in parallel, with each vertex decoding its own edge list sequentially. But what about high-degree vertices? A high-degree vertex's edge list would have to be decoded sequentially, and if you're running in parallel, that can lead to load imbalance. One way to fix this is, instead of encoding the whole list sequentially, to chunk it up into chunks of size T and encode each chunk as before: the first value is stored relative to the source vertex, and all the other values relative to the previous edge. Now the first value of each chunk can be decoded without waiting for the previous edge to be decoded, so all of the chunks can be decoded in parallel, which gives us much more parallelism. We found that a chunk size T between one hundred and ten thousand works pretty well in practice.

OK, so I'm not going to have time to go over the experiments, but at a high level, they show that these compression schemes do save space. Serially, the compressed version is only slightly slower than the uncompressed version, but surprisingly, when you run it in parallel it actually becomes faster than the uncompressed version. This is because these graph algorithms are memory-bound, and if you're using less memory, you can alleviate the memory-subsystem bottleneck and get better scalability,
and the decoding part of these compressed algorithms actually gets very good parallel speedup, because it's just doing local operations.

OK, so let me summarize. We saw some properties of real-world graphs: they're quite large, but they can still fit on a multicore server; they're relatively sparse; and they have power-law degree distributions. Many graph algorithms are irregular in that they involve many random memory accesses, so memory access becomes the bottleneck for the performance of these algorithms. You can improve performance with algorithmic optimizations, such as the direction optimization, and also by creating and exploiting locality, for example with the bit-vector optimization. Finally, optimizations that work well for certain graphs might not work well for other graphs — for example, the direction optimization works well for power-law graphs but not for road graphs — so when you're trying to optimize your graph algorithm, you should definitely test it on different types of graphs and see where it works well and where it doesn't.

So that's all I have. If you have any additional questions, please feel free to ask me after class. And as a reminder, we have a guest lecture on Thursday by Professor Johnson of the MIT math department on the performance of high-level languages, so please be sure to attend.
be sure to attend the guest lecture on Thursday so here's an outline of what I'm gonna be talking about today so we're first going to remind ourselves what a graph is and then we're talking you're gonna talk about various ways to represent a graph in memory and then we'll talk about how to implement an efficient breadth-first search algorithm both serially and also in parallel and then i'll talk about how to use graph compression and graph reordering to improve the locality of graph algorithms so first of all what is a graph so a graph contains vertices and edges where vertices represent certain objects of interest and add just between objects model relationships between the two objects for example you can have a social network where the people are represented as vertices and edges between people means that they're friends with each other the edges in this graph don't have to be bi-directional so you could have a one-way relationship for example if you're looking at the Twitter network Alice could follow Bob but Bob doesn't necessarily have to follow Alice back the graph also doesn't have to be connected so here this graph here is connected but for example there could be some people who don't like to talk to other people and then they're just off in their own component you can also use graphs to model protein networks where the vertices are proteins and edges between vertices means that there's some sort of interaction between the proteins so this is useful in computational biology as I said edges can be directed so the relationship can go one way or both ways in this graph here we have some directed edges and then also some edges that are directed in both directions so here John follows Alice Alice follows Peter and then Alice Falls Bob and Bob also follows Alice if you use a graph to represent the worldwide web then the vertices would be websites and then the edges would denote that there's a hyperlink from one website to another and again the edges here don't 
have to be bi-directional because website a could have a link to website B but website B doesn't necessarily have to have a link back I just can also be weighted so you can have a weight on the edge that denotes the strength of the relationship or some sort of distance measure corresponding to that relationship so here I have an example where I'm using a graph to represent cities and the edges between cities have a weight that corresponds to the distance between the two cities and if I want to find the quickest way to get from City a to city B then I would be interested in finding the shortest path from A to B in this graph here here's another example where the edge weights now are the cost of a direct flight from City a to city B and here the edges are directed so for example this says that there's a flight from San Francisco to LA for 45 dollars and if I want to find the cheapest way to get from one city to another city then again I would try to find the shortest path in this graph from City a to city B vertices and edges can also have metadata on them and they can also have types so for example here's the Google knowledge graph which represents all the knowledge on the internet that Google knows about and here the nodes have metadata on them so for example the node corresponding to da Vinci is labeled with his date of birth and date of death and the vertices also have a color corresponding to the type of knowledge that they refer to so you can see that some of these nodes are blue some of them are red some of them are green and some of them have other things on them so in general crafts can have types and metadata on both the vertices as well as the edges let's look at some more applications of graphs so graphs are very useful for implementing queries on social networks so here are some examples of queries that you might want to ask on a social network so for example you might be interested in finding all of your friends who went to the same high school as you 
on Facebook, so that can be implemented using a graph algorithm. You might also be interested in finding all the common friends you have with somebody else, again a graph algorithm. A social network service might run a graph algorithm to recommend people that you might know and want to become friends with, and it might use a graph algorithm to recommend certain products that you might be interested in. So these are all examples of social network queries, and there are many other queries that you might be interested in running on a social network, and many of them can be implemented using graph algorithms. Another important application is clustering. Here the goal is to find groups of vertices in a graph that are well connected internally and poorly connected externally. So in this image here, each blob of vertices of the same color corresponds to a cluster, and you can see that inside a cluster there are a lot of edges going among the vertices, and between clusters there are relatively fewer edges. Some applications of clustering include community detection in social networks, where you might be interested in finding groups of people with similar interests or hobbies. You can also use clustering to detect fraudulent websites on the internet. You can use it for clustering documents, where you would cluster documents that have similar text together, and clustering is often used for unsupervised learning in machine learning applications. Another application is connectomics. Connectomics is the study of the network structure of the brain, and here the vertices correspond to neurons, and an edge between two vertices means that there's some sort of interaction between the two neurons. Recently there's been a lot of work on trying to do high-performance connectomics, and some of this work has been going on here at MIT in Professor Charles Leiserson's and Professor Nir Shavit's research groups, so recently this has been a very hot area. Graphs are also used in
computer vision, for example in image segmentation. Here you want to segment your image into the distinct objects that appear in the image, and you can construct a graph by representing the pixels as vertices; then you would place an edge between every pair of neighboring pixels, with a weight that corresponds to their similarity, and then you would run some sort of minimum-cost cut algorithm to partition your graph into the different objects that appear in the image. So there are many other applications, and I'm not going to have time to go through all of them today, but here's just a flavor of some of the applications of graphs. Any questions so far? OK, so next let's look at how we can represent a graph in memory. For the rest of this lecture, I'm going to assume that my vertices are labeled in the range from 0 to n minus 1, so they have an integer in this range. Sometimes your graph might be given to you with the vertices already labeled in this range, sometimes not, but you can always get these labels by mapping each of the identifiers to a unique integer in this range. So for the rest of the lecture, I'm just going to assume that we have these labels from 0 to n minus 1 for the vertices. One way to represent a graph is to use an adjacency matrix. This is going to be an n by n matrix, and there's a 1 bit in the i-th row and j-th column if there's an edge that goes from vertex i to vertex j, and 0 otherwise. Another way to represent a graph is the edge list representation, where we just store a list of the edges that appear in the graph: we have one pair for each edge, where the pair contains the two endpoints of that edge. So what is the space requirement for each of these two representations, in terms of the number of edges m and the number of vertices n in the graph? It should be pretty easy. Yes? Yeah, so the space for the adjacency matrix is order n squared, because you have n squared cells in this matrix and one bit for each of the cells. For the edge list,
it's going to be order m, because you have m edges, and for each edge you're storing a constant amount of data in the edge list. So here's another way to represent a graph, known as the adjacency list format. The idea here is that we're going to have an array of pointers, one per vertex, and each pointer points to a linked list storing the edges for that vertex; the linked list is unordered in this example. So what's the space requirement of this representation? Yes? So it's going to be order n plus m, and this is because we have n pointers, and the sum of the number of entries across all the linked lists is just equal to the number of edges in the graph, which is m. What's one potential issue with this sort of representation, if you think in terms of cache performance? Does anyone see a potential performance issue here? Yeah, right, so the issue here is that if you're trying to loop over all the neighbors of a vertex, you're going to have to dereference the pointer in every linked-list node, because these nodes are not contiguous in memory, and every time you dereference a linked-list node, that's going to be a random access into memory, so that can be bad for cache performance. One way you can improve cache performance is, instead of using a linked list for each of these neighbor lists, you can use an array; now you can store the neighbors in this array, and they'll be contiguous in memory. One drawback of this approach is that it becomes more expensive if you're trying to update the graph, and we'll talk more about that later. So, any questions so far? So what's another way to represent the graph that we've seen in a previous lecture? What's a more compressed or compact way to represent a graph, especially a sparse graph? Does anybody remember the compressed sparse row format? We looked at this in one of the early lectures, and in that lecture we used it to store a sparse matrix, but you can also use it to store a sparse graph. As a reminder, we have two arrays in the
compressed sparse row, or CSR, format: the offsets array and the edges array. The offsets array stores an offset for each vertex into the edges array, telling us where the edges for that particular vertex begin in the edges array. So offsets of i stores the offset of where vertex i's edges start in the edges array. In this example, vertex zero has an offset of zero, so its edges start at position zero in the edges array; vertex one has an offset of four, so its edges start at index 4 in the edges array. So with this representation, how can we get the degree of a vertex? We're not storing the degree explicitly here, so can we get the degree efficiently? Yes? So you can get the degree of a vertex just by looking at the difference between the next vertex's offset and its own offset. For vertex zero, you can see that its degree is 4, because vertex one's offset is 4 and vertex zero's offset is 0, and similarly you can do that for all the other vertices. So what's the space usage of this representation? Sorry? Yeah, so again it's going to be order n plus m, because you need order n space for the offsets array and order m space for the edges array. You can also store values or weights on the edges. One way to do this is to create an additional array of size m, and then for edge i you just store the weight or the value in the i-th index of this additional array. If you're always accessing the weight when you access an edge, then it's actually better for cache locality to interleave the weights with the edge targets: instead of creating two arrays of size m, you have one array of size 2m, and every other entry is the weight. This improves cache locality, because every time you access an edge, its weight is going to be right next to it in memory, and it's likely going to be on the same cache line. So that's one way to improve cache locality. Any questions so far? So let's look at some of the trade-offs in these different graph representations that we've looked at so far.
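Before moving on to the trade-offs, here is the CSR layout made concrete in a short Python sketch. The array names follow the lecture's offsets/edges convention, but the construction routine itself is illustrative, not code from the lecture:

```python
# Minimal CSR (compressed sparse row) sketch: offsets[v] gives the index
# in `edges` where vertex v's neighbor list begins, and offsets[n] == m.

def build_csr(n, edge_list):
    """Build CSR arrays from a list of directed (u, v) pairs."""
    counts = [0] * n
    for u, _ in edge_list:
        counts[u] += 1
    # offsets[v] is the sum of the degrees of vertices 0..v-1
    offsets = [0] * (n + 1)
    for v in range(n):
        offsets[v + 1] = offsets[v] + counts[v]
    edges = [0] * len(edge_list)
    cursor = offsets[:-1].copy()  # next free slot for each vertex
    for u, v in edge_list:
        edges[cursor[u]] = v
        cursor[u] += 1
    return offsets, edges

def degree(offsets, v):
    # The degree is just the difference between consecutive offsets.
    return offsets[v + 1] - offsets[v]

def neighbors(offsets, edges, v):
    # Neighbors are stored contiguously, which is good for cache locality.
    return edges[offsets[v]:offsets[v + 1]]
```

Because each vertex's neighbors occupy a contiguous slice of the edges array, scanning them is a sequential memory access, which is exactly the property the BFS code later in the lecture exploits.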
So here I'm listing the storage cost for each of these representations, which we already discussed; this is also the cost for just scanning the whole graph in one of these representations. What's the cost of adding an edge in each of these representations? For the adjacency matrix, what's the cost of adding an edge? Yeah, so for the adjacency matrix it's just order 1, because you have random access into this matrix: you just access the (i, j) entry and flip the bit from a 0 to a 1. What about for the edge list? We're assuming that the edge list is unordered, so you don't have to keep the list in any sorted order. Yes? So again it's just order 1, because you can add it to the end of the edge list, so that's constant time. What about for the adjacency list? This actually depends on whether we're using linked lists or arrays for the neighbor lists of the vertices. If we're using a linked list, adding an edge just takes constant time, because we can put it at the beginning of the linked list. If we're using an array, then we actually need to create a new array to make space for the edge that we add, and that's going to cost us degree-of-v work, because we have to copy all the existing edges over to the new array and then add the new edge at the end. Of course, you could amortize this cost across multiple updates: if you run out of memory, you can double the size of your array, so you don't have to create these new arrays too often, but the cost of any individual addition is still relatively expensive compared to, say, an edge list or adjacency matrix. And then finally, for the compressed sparse row format, if you add an edge, in the worst case it's going to cost us order m plus n work, because we're going to have to reconstruct the entire offsets array and the entire edges array in the worst case: in the edges array, you have to put something in and shift all of the values to the right of that
over by one location, and then in the offsets array we have to modify the offset for the particular vertex we're adding an edge to, and then the offsets for all the vertices after that. So the compressed sparse row representation is not particularly friendly to edge updates. What about deleting an edge from some vertex v? For the adjacency matrix, again it's going to be constant time, because you just randomly access the correct entry and flip the bit from a 1 to a 0. What about for an edge list? Yeah, so for an edge list, in the worst case it's going to cost us order m work, because the edges are not in any sorted order, so we have to scan through the whole thing in the worst case to find the edge that we're trying to delete. For the adjacency list, it's going to take order degree-of-v work, because the neighbors are not sorted, so we have to scan through the whole neighbor list to find the edge we're trying to delete. And then finally, for compressed sparse row, it's going to be order n plus m, because we're going to have to reconstruct the whole thing in the worst case. What about finding all the neighbors of a particular vertex v? What's the cost of doing this in the adjacency matrix? Yes, so it's going to cost us order n work, because we just scan the correct row in this matrix, the row corresponding to vertex v. For the edge list, we're going to have to scan the entire edge list because it's not sorted, so in the worst case that's going to be order m. For the adjacency list, it's going to take order degree of v, because we can find the pointer to the linked list for that vertex in constant time, and then we just traverse the linked list, which takes order degree-of-v time. And then finally, for the compressed sparse row format, it's also order degree of v, because we have constant-time access into the appropriate location in the edges array, and then we can just read off the edges, which are consecutive in memory. So what about finding if a
vertex w is a neighbor of v? I'll just give you the answers. For the adjacency matrix, it's going to take constant time, because again we just check the v-th row and the w-th column and see if the bit is set there. For the edge list, we have to traverse the entire list to see if the edge is there. And then for the adjacency list and compressed sparse row, it's going to be order degree of v, because we just have to scan the neighbor list for that vertex. So these are some graph representations, but there are actually many other graph representations, including variants of the ones I've talked about here. For example, for the adjacency list, I said you can either use a linked list or an array to store the neighbor list, but you can actually use a hybrid approach, where you still have a linked list but each linked-list node stores more than one vertex; you could store, say, 16 vertices in each linked-list node, and that gives you better cache locality. For the rest of this lecture, I'm going to talk about algorithms that are best implemented using the compressed sparse row format, and this is because we're going to be dealing with sparse graphs. We're going to be looking at static algorithms, where we don't have to update the graph; if we do have to update the graph, then CSR isn't a good choice, but we're just going to be looking at static algorithms today. And for all the algorithms that we'll be looking at, we're going to need to scan over all the neighbors of a vertex that we visit, and CSR is very good for that, because all the neighbors of a particular vertex are stored contiguously in memory. Any questions so far? OK, I do want to talk about some properties of real-world graphs. First, we're seeing graphs that are quite large today, but actually they're not too large. Here are the sizes of some of the real-world graphs out there. There's the Twitter network; this is actually a snapshot of the Twitter network from a couple of years ago. It has 41 million vertices and 1.5
billion edges, and you can store this graph in about six point three gigabytes of memory, so you could probably store it in the main memory of your laptop. The largest publicly available graph out there now is this Common Crawl web graph; it has 3.5 billion vertices and 128 billion edges, so storing this graph requires a little over half a terabyte of memory. That is quite a bit of memory, but it's actually not too big, because there are machines out there with main memory sizes on the order of terabytes nowadays. For example, you can rent a 2-terabyte or 4-terabyte memory instance on AWS, which you're using for your homework assignments. So if you have any leftover credits at the end of the semester and you want to play around with this graph, you can rent one of these terabyte machines; just remember to turn it off when you're done, because it's kind of expensive. Another property of real-world graphs is that they're quite sparse, so m tends to be much less than n squared: most of the possible edges are not actually there. And finally, the degree distributions of the vertices can be highly skewed in many real-world graphs. Here I'm plotting the degree on the x-axis and the number of vertices with that particular degree on the y-axis, and we can see that it's highly skewed. For example, in a social network, most of the people would be on the left-hand side, so their degree is not that high, and then we have some very popular people on the right-hand side, where their degree is very high, but we don't have too many of those. This is what's known as a power law degree distribution, and there have been various studies showing that many real-world graphs have approximately a power law degree distribution. Mathematically, this means that the number of vertices with degree d is proportional to d to the negative p, so negative p is the exponent, and for many graphs the value of p lies between 2 and 3.
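As a small aside, the degree distribution is easy to tabulate from the CSR offsets array; here's an illustrative sketch, not code from the lecture:

```python
from collections import Counter

def degree_distribution(offsets):
    """Map each degree d to the number of vertices with that degree.

    `offsets` is the CSR offsets array of length n + 1, so the degree of
    vertex v is offsets[v + 1] - offsets[v].
    """
    degrees = [offsets[v + 1] - offsets[v] for v in range(len(offsets) - 1)]
    return Counter(degrees)

# For a power-law graph, the count for degree d falls off roughly like
# d**(-p) with p typically between 2 and 3; this function just tabulates
# the empirical distribution, it doesn't fit p.
```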
This power law degree distribution does have implications when we're trying to implement parallel algorithms to process these graphs, because with a skewed degree distribution you could run into load imbalance issues if you just parallelize across the vertices: the number of edges the vertices have can vary significantly. Any questions? OK, so now let's talk about how we can implement a graph algorithm, and I'm going to talk about the breadth-first search algorithm. How many of you have seen breadth-first search before? OK, so about half of you. I did talk about breadth-first search in a previous lecture, so I was hoping everybody would raise their hands. As a reminder, in the BFS algorithm we're given a source vertex s, and we want to visit the vertices in order of their distance from the source s. There are many possible outputs that we might care about. One possible output is that we just want to report the vertices in the order that they were visited by the breadth-first search traversal. So let's say we have this graph here, and our source vertex is D. What's one possible order in which we can traverse these vertices? I should specify that we traverse this graph in a breadth-first manner. So what's the first vertex we're going to explore? D. We first look at D because that's our source vertex. For the second vertex, we can actually choose among B, C, and E, because all we care about is that we visit these vertices in order of their distance from the source, and these three vertices all have the same distance. So let's pick B, then C, and then E, and then finally I'm going to visit vertex A, which has a distance of two from the source. So this is one possible solution; there are other possible solutions, because we could have visited E before we visited B, and so on. Another possible output that we might care about is the distance from each vertex to the source vertex s. So in this example, here are the distances: D has a distance of zero; B, C, and E all
have a distance of one; and A has a distance of two. We might also want to generate a breadth-first search tree, where each vertex in the tree has a parent which is a neighbor in the previous level of the breadth-first search, or in other words, the parent should have a distance of one less than the vertex itself. So here's an example of a breadth-first search tree, and we can see that each of the vertices has a parent whose breadth-first search distance is one less than its own. The algorithms that I'm going to be talking about today will generate the distances as well as the BFS tree. BFS actually has many applications. It's used as a subroutine in betweenness centrality, which is a very popular graph mining algorithm used to rank the importance of nodes in a network, where the importance of a node corresponds to how many shortest paths go through that node. Other applications include eccentricity estimation; maximum flow, since some max-flow algorithms use BFS as a subroutine; and you can use BFS to crawl the web, do cycle detection, garbage collection, and so on. So let's now look at a serial BFS algorithm, and here I'm just going to show the pseudocode. First, we initialize the distances to all infinity, and we initialize the parents to be NIL. Then we create a queue data structure, we set the distance of the root to be zero, because the root has a distance of zero to itself, and we place the root onto the queue. Then, while the queue is not empty, we dequeue the first thing in the queue, and we look at all the neighbors of the current vertex that we dequeued. For each neighbor, we check if its distance is infinity; if the distance is infinity, that means we haven't explored that neighbor yet, so we go ahead and explore it. We do so by setting its distance value to be the current vertex's distance plus one, setting the parent of that neighbor to be the current vertex, and placing the neighbor onto the queue.
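That pseudocode translates almost line for line into a small Python sketch. This is an illustrative adjacency-list version, not the lecture's actual code:

```python
from collections import deque

def bfs(graph, root):
    """Serial BFS: returns (distances, parents) for every vertex.

    `graph` maps each vertex to a list of its neighbors; the root and
    any unreachable vertices keep parent None.
    """
    INF = float("inf")
    dist = {v: INF for v in graph}
    parent = {v: None for v in graph}
    dist[root] = 0              # the root has distance 0 to itself
    queue = deque([root])
    while queue:
        current = queue.popleft()        # dequeue the next vertex
        for nbr in graph[current]:
            if dist[nbr] == INF:         # not explored yet
                dist[nbr] = dist[current] + 1
                parent[nbr] = current
                queue.append(nbr)        # enqueue for later exploration
    return dist, parent
```

Each vertex is enqueued at most once and each edge is scanned a constant number of times, which is where the order m plus n work bound comes from.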
We just keep iterating in this while loop until there are no more vertices left in the queue, so it's a pretty simple algorithm. So what's the work of this algorithm in terms of n and m? How much work are we doing per edge? Yes, so assuming that the enqueue and dequeue operations are constant time, we're doing a constant amount of work per edge, so summed across all edges that's going to be order m. We're also doing a constant amount of work per vertex, because we have to place it onto the queue, take it off the queue, and initialize its values, so the overall work is going to be order m plus n. OK, so let's now look at some actual code that implements this serial BFS algorithm using the compressed sparse row format. First, I initialize two arrays, parent and queue, which are integer arrays of size n. I initialize all of the parent entries to be negative 1. I place the source vertex onto the queue, so it appears at queue of 0, the beginning of the queue, and then I set the parent of the source vertex to be the source itself. I also have two integers that point to the front and the back of the queue; initially the front of the queue is at position 0 and the back is at position 1. Then, while the queue is not empty, which I can check by testing whether queue-front is not equal to queue-back, I dequeue the first vertex in the queue: I set current to be that vertex, and then I increment queue-front. Then I compute the degree of that vertex, which I do by looking at the difference between consecutive offsets, and I also assume that offsets of n is equal to m, just to deal with the last vertex. Then I loop through all of the neighbors of the current vertex, and to access each neighbor, I go into the edges array, and I know that my neighbors start at offsets
of current, so to get the i-th one I just do offsets of current plus i; that's my index into the edges array. Now I check if my neighbor has been explored yet, which I do by checking whether parent of neighbor is equal to negative 1. If it is, that means I haven't explored it yet, so I set parent of neighbor to be current, and then I place the neighbor at the back of the queue and increment queue-back. I just keep repeating this while loop until the queue becomes empty. Here I'm only generating the parent pointers, but I could also generate the distances if I wanted to, with just a slight modification of this code. So, any questions on how this code works? OK, so here's a question: what's the most expensive part of this code? Can you point to one particular line here that is the most expensive? Yes? OK, so it actually turns out that that's not the most expensive part of this code, but you're close. Does anyone have any other ideas? Yes, it turns out that this line here, where we access parent of neighbor, is the most expensive, because whenever we access this parent array, the neighbor can appear anywhere in memory, so that's a random access, and if the parent array doesn't fit in our cache, that's going to cost us a cache miss almost every time. The edges array, on the other hand, is mostly accessed sequentially, because for each vertex, all of its edges are stored contiguously in memory. We do have one random access into the edges array per vertex, because we have to look up the starting location for that vertex, but it's not one per edge, unlike this check of the parent array, which occurs for every edge. Does that make sense? So let's do a back-of-the-envelope calculation to figure out how many cache misses we would incur, assuming that we start with a cold cache. We'll also assume that n is much larger than the size of the cache, so we can't fit any of these arrays in cache. We'll assume that a cache line has
64 bytes and integers are 4 bytes each. So let's try to analyze this. The initialization will cost us n over 16 cache misses, and the reason is that we're initializing this array sequentially, so we're accessing contiguous locations, and this takes advantage of spatial locality: on each cache line we can fit 16 integers, so overall we need n over 16 cache misses to initialize this array. We also need n over 16 cache misses across the entire algorithm to dequeue vertices from the front of the queue, because again, this is a sequential access into the queue array, and across all vertices that's n over 16 cache misses, since we can fit 16 integers on a cache line. To compute the degree, that's going to take n cache misses overall, because each of these accesses into the offsets array is a random access: we have no idea what the value of current is, it could be anything, so across the entire algorithm we need n cache misses to access the offsets array. Then, to access the edges array, I claim that we need at most 2n plus m over 16 cache misses. Does anyone see where that bound comes from? Where does the m over 16 come from? Yeah, right, so you have to pay m over 16 because you access every edge once, and you access the edges contiguously, so across all edges that takes m over 16 cache misses. But we also have to add 2n, because whenever we access the edges for a particular vertex, the first cache line might not only contain that vertex's edges, and similarly the last cache line we access might also not just contain that vertex's edges, so we might waste the first cache line and the last cache line in the worst case for each vertex, and summed across all vertices that's 2n. So this is an upper bound: 2n plus m over 16. Accessing the parent array is going to be a random access every time,
so we incur a cache miss in the worst case every time, and summed across all edge accesses, that's m cache misses. Then finally, we pay n over 16 cache misses to enqueue the neighbors onto the queue, because those are sequential accesses. So in total, we incur at most 51 over 16 times n, plus 17 over 16 times m, cache misses. If m is greater than 3n, then the second term dominates, and m is usually greater than 3n in most real-world graphs. The second term is dominated by this random access into the parent array. So let's see if we can optimize this code to get better cache performance. Let's say we could fit a bit vector of size n in cache, but we couldn't fit the entire parent array in cache. What could we do to reduce the number of cache misses? Does anyone have any ideas? Yeah, so that's exactly correct: we're going to use a bit vector to store whether each vertex has been explored yet or not, so we only need one bit per vertex. We're not storing the parent ID in this bit vector, just a bit saying whether that vertex has been explored. Then, before we check the parent array, we first check the bit vector to see if that vertex has been explored; if it has already been explored, we don't even need to access the parent array, and only if it hasn't been explored do we go ahead and access the parent entry of the neighbor. We only have to do this one time for each vertex in the graph, because we can only visit each vertex once, and therefore we can reduce the number of cache misses from m down to n. So overall this might improve the number of cache misses, and in fact it does, if the number of edges is large enough relative to the number of vertices. However, you do have to do a little bit more computation, because you have to do bit-vector manipulation to check this bit vector, and also to set it when you explore a neighbor. So here's the code using the bit-vector optimization.
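The packed-bit-vector idiom being described, one bit per vertex with 32 bits per word, can be sketched like this in Python (illustrative only; the lecture's actual code is C with explicit shifts and masks):

```python
class BitVector:
    """One bit per vertex, packed 32 to a word, to track visited vertices."""

    def __init__(self, n):
        # ceil(n / 32) words are enough for n bits
        self.words = [0] * ((n + 31) // 32)

    def get(self, i):
        # Word i // 32 holds bit i; mask out bit i % 32.
        return (self.words[i // 32] >> (i % 32)) & 1

    def set(self, i):
        self.words[i // 32] |= 1 << (i % 32)

# In BFS, before touching parent[nbr] (a likely cache miss), first test
# visited.get(nbr); only on the first visit do we pay for the parent array.
```

The point of this test-before-touch pattern is that the bit vector fits in cache while the parent array does not, so the expensive random access to the parent array happens at most once per vertex.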
Here I'm initializing this bit vector called visited; it's of size approximately n over 32, and I set all of the bits to 0, except for the source vertex, whose bit I set to 1, and I'm doing this bit calculation here to figure out the right bit for the source vertex. Now, when I'm trying to visit a neighbor, I first check if the neighbor is visited by checking this bit array, and I can do that using this computation here: I AND visited of neighbor divided by 32 with this mask, which is 1 left-shifted by neighbor mod 32. If that's false, the neighbor hasn't been visited yet, so I go inside this if clause, and then I set the visited bit to true using this statement here, and then I do the same operations as before. It turns out that this version is faster for large enough values of m relative to n, because you reduce the number of cache misses overall; you still have to do this extra computation, the bit manipulation, but if m is large enough, then the reduction in the number of cache misses outweighs the additional computation. Any questions? OK, so that was a serial implementation of breadth-first search; now let's look at a parallel implementation. I'm first going to show an animation of how a parallel breadth-first search algorithm would work. The parallel breadth-first search algorithm operates on frontiers, where the initial frontier contains just the source vertex. On every iteration, I explore all of the vertices on the frontier and place any unexplored neighbors onto the next frontier, and then I move on to the next frontier. So on the first iteration, I mark the source vertex as explored, set its distance to be 0, and place the neighbors of the source vertex onto the next frontier. On the next iteration, I do the same thing: I set these distances to one, and I also generate a parent pointer for each of
these vertices, and this parent should come from the previous frontier and should be a neighbor of the vertex. Here there's only one option, which is the source vertex, so I'll just pick that as the parent. Then I place the neighbors onto the next frontier again, mark those as explored, set their distances, and generate parent pointers again. Notice that when I'm generating these parent pointers, there's actually more than one choice for some of these vertices, and this is because there are multiple vertices on the previous frontier, and some of them explore the same neighbor on the current frontier. So a parallel implementation has to be aware of this potential race; here I'm just picking an arbitrary parent. So as we see here, you can process each of these frontiers in parallel: you can parallelize over all of the vertices on the frontier, as well as over all of their outgoing edges. However, you do need to process one frontier before you move on to the next one in this BFS algorithm. A parallel implementation has to be aware of potential races; as I said earlier, we could have multiple vertices on the frontier trying to visit the same neighbor, so somehow that has to be resolved. Also, the amount of work on each frontier changes throughout the course of the algorithm, so you have to be careful with load balancing, because you have to make sure that the amount of work each processor has to do is about the same. If you use Cilk to implement this, then load balancing doesn't really become a problem. So, any questions on the BFS algorithm before I go over the code? OK, so here's the actual code. Here I initialize four arrays: the parent array, which is the same as before; an array called frontier, which stores the current frontier; an array called frontierNext, which is a temporary array that I use to store the next frontier of the BFS; and an array called degrees. I'm going
to initialize all of the parent entries to be negative one or do that using a silk for loop I'm going to place the source vertex at the 0th index of the front here also at the front here size b1 and then I set the parent of the source to be the source itself while the frontier size is greater than zero that means I still have more work to do I'm going to first iterate over all of the vertices on my frontier in parallel using a cell for loop and then I'll set the the ithe entry of the degrees array to be the degree of the vertex on the frontier and I can do this just using the difference between consecutive offs and then I'm going to perform a prix fixe some on this degrees array and we'll see in a minute why I'm doing this prefix um but first of all does anybody recall what prefix autumn is so who knows what prefix on this do you want to tell us what it is [Music] yeah so prefix some so here I'm gonna demonstrate this with an example so let's say this this is our input array the output of this array would store for each location the sum of everything before that location in the input array so here we see that the first position has a value of zero because the sum of everything before it is zero there's nothing before it in the input the second position has a value of two because the sum of everything before it is just the first location the third location has a value six because of some of everything before it is 2 plus 4 which is 6 and so on so I believe this was on one of your homework assignments so hopefully everyone knows what prefix sum is and later on we'll see how we use this to do the parallel reference search ok so I'm gonna do a prefix sum on this degrees array and then I'm going to loop over my front here again in parallel I'm gonna let B be the Eifert X on the frontier index is going to be equal to degrees of I and then my degree is going to be offsets of V plus 1 minus offsets of V now I'm gonna loop through all these neighbors and here I just have a 
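To make the contract concrete, here is a minimal serial sketch of an exclusive prefix sum; the function name `prefix_sum` is mine, not from the lecture code, and a real parallel scan computes the same output in logarithmic span:

```python
def prefix_sum(a):
    # Exclusive scan: out[i] = a[0] + ... + a[i-1], so out[0] = 0.
    out = [0] * len(a)
    total = 0
    for i, x in enumerate(a):
        out[i] = total
        total += x
    return out
```

For the example above, an input beginning [2, 4, ...] yields an output beginning [0, 2, 6, ...].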
serial for loop but you could actually paralyze this for loop turns out that if the number of iterations in the for loop is small enough there's additional overhead to making this parallel so I just made it serial for now but you could make it parallel to get the neighbor I just index into this edges array I look at offsets of V plus J then now I'm going to check if the neighbor has been explored yet and I can check if parent of neighbor is equal to negative 1 if so that means it hasn't been explored yet so I'm gonna try to explore it and I do so using a compare and swap I'm gonna try to swap in the value of V with the original value of negative 1 in parent of neighbor and the compare and swap is going to return true if it was successful in false otherwise and if it returns true that means this vertex becomes the parent of this neighbor and then I'll place the neighbor onto frontier next at this particular index index plus J and otherwise I'll set a negative one at that location okay so let's see why I'm using index plus J here so here's how frontier next is organized so each vertex on the frontier owns a subset of these locations in the frontier next array and these are all contiguous memory locations and it turns out that the starting location for each of these vertices in this frontier next to race exactly the value in this prefix some DeRay up here so vertex one has its first location at index zero vertex two has its first location at index two vertex three has its first location index six and so on so by using a prefix um I can guarantee that all these vertices have a disjoint sub array in this frontier next array and then they can all right to this frontier next story in parallel without any races an index plus J just gives us the right location of right two in this array so index is the starting location and then J is for the J's neighbor so here's one potential output after we write to this frontier next array so we have some non-negative values and these 
So here's one potential output after we write to this frontierNext array. We have some non-negative values; these are the vertices that we explored in this iteration. We also have some negative-one values, and a negative one here means that either the vertex was already explored in a previous iteration, or we tried to explore it in the current iteration but somebody else got there before us: somebody else was doing a compare-and-swap at the same time and finished before we did, so we failed on our compare-and-swap. We don't actually want these negative-one values, so we're going to filter them out, and we can filter them out using a prefix sum again. This gives us a new frontier, and we set the frontier size equal to the size of this new frontier. Then we repeat this while loop until there are no more vertices on the frontier. Any questions on this parallel BFS algorithm?

[A student asks a question.] You mean the filter-out? Yeah, so what you can do is create another array which stores a one in location i if that location is not a negative one, and a zero if it is a negative one. Then you do a prefix sum on that array, which gives us unique offsets into an output array, so everybody just looks at the prefix sum array and writes to the output array at that offset. It might be easier if I draw this on the board. OK, so let's say we have an array of size five. What I'm going to do is generate another array which stores a one if the value in the corresponding location is not a negative one, and zero otherwise. Then I do a prefix sum on this array, and this gives me 0, 0, 1, 1, and 2. Now each of the values that is not a negative one can just look up the corresponding entry, which gives a unique index into an output array: this element would write to position 0, this element would write to position 1, and this element would write to position 2 in my final output. So this would be my final frontier. Does that make sense?
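The board example can be sketched as follows. This is a serial stand-in with a made-up name, `filter_frontier`; in the parallel code the flag creation, the scan, and the writes are each done in parallel:

```python
def filter_frontier(frontier_next):
    # flags[i] = 1 if slot i holds a real vertex, 0 if it holds a -1.
    flags = [0 if x == -1 else 1 for x in frontier_next]
    # Exclusive prefix sum of flags gives each kept element a unique slot.
    offsets = [0] * len(flags)
    total = 0
    for i, f in enumerate(flags):
        offsets[i] = total
        total += f
    out = [0] * total
    for i, x in enumerate(frontier_next):
        if x != -1:
            out[offsets[i]] = x  # disjoint slots, so parallel writes are safe
    return out
```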
OK, so let's now analyze the work and span of this parallel BFS algorithm. The number of iterations required by the BFS algorithm is upper bounded by the diameter D of the graph; the diameter of a graph is just the maximum shortest-path distance between any pair of vertices in the graph, and that's an upper bound on the number of iterations we need to do. Each iteration takes log m span for the cilk_for loops, the prefix sum, and the filter, and this is also assuming that the inner loop over the neighbors of a vertex is parallelized. To get the span, we just multiply these two terms, so we get Θ(D log m) span. What about the work? To compute the work, we have to figure out how much work we're doing per vertex and per edge. First, notice that the sum of the frontier sizes across the entire algorithm is going to be n, because each vertex can be on the frontier at most once. Also, each edge is traversed exactly once, so that leads to m total edge visits. On each iteration of the algorithm we do a prefix sum, and the cost of this prefix sum is proportional to the frontier size, so summed across all iterations the cost of the prefix sums is Θ(n). We also do the filter, but the work of the filter is proportional to the number of edges traversed in that iteration, and summed across all iterations that gives us Θ(m) total. So overall, the work is Θ(n + m) for this parallel BFS algorithm. This is a work-efficient algorithm: the work matches that of the serial algorithm. Any questions on the analysis?

OK, so let's look at how this parallel BFS algorithm runs in practice. Here I ran some experiments on a random graph with ten million vertices and a hundred million edges; the edges were randomly generated, and I made sure that each vertex had ten edges. I ran the experiments on a forty-core machine with two-way hyper-threading. Does anyone know what hyper-threading is? Yeah, what is it? Yeah, that's a great answer. Hyper-threading is an Intel technology where, for each physical core, the operating system actually sees two logical cores. They share many of the same resources, but they have their own registers, so if one of the logical cores stalls on a long-latency operation, the other logical core can use the shared resources and hide some of the latency.

OK, so here I'm plotting the speedup over the single-threaded time of the parallel algorithm versus the number of threads. We see that on 40 threads we get a speedup of about 22 or 23x, and when we turn on hyper-threading and use all 80 threads, the speedup is about 32x on 40 cores. This is actually pretty good for a parallel graph algorithm; it's very hard to get very good speedups on these irregular graph algorithms, so 32x on 40 cores is pretty good. I also compared this to the serial BFS algorithm, because that's what we ultimately want to compare against. We see that on 80 threads the speedup over the serial BFS is about 21 or 22x, and the serial BFS is 54 percent faster than the parallel BFS on one thread. This is because the serial version is doing less work: the parallel version has to do extra work for the prefix sum and the filter, whereas the serial version doesn't. But overall, the parallel implementation is still pretty good. Any questions?

So, a couple of lectures ago we saw this slide, where Charles told us never to write non-deterministic parallel programs, because it's very hard to debug these programs and hard to reason about them. Is there non-determinism in this BFS code that we looked at? Yeah, there's non-determinism in the compare-and-swap. Let's go back to the code. In this compare-and-swap here, there's a race, because we can have multiple vertices trying to write to the parent entry of the same neighbor at the same time, and the one that wins is non-deterministic, so the BFS tree that you get at the end is non-deterministic. OK, so let's see how we can try to fix this non-determinism. As we
said, this is the line that causes the non-determinism. It turns out that we can actually make the output BFS tree deterministic by going over the outgoing edges in each iteration in two phases. How this works is that in the first phase, the vertices on the frontier do write to the parent array, but using a writeMin operator. The writeMin operator is an atomic operation that guarantees that when we have concurrent writes to the same location, the smallest value gets written there. So the value that gets written is deterministic: it's always the smallest value that tries to write there. Then in the second phase, each vertex v checks, for each neighbor, whether parent[neighbor] is equal to v. If it is, that means v was the vertex that successfully wrote to parent[neighbor] in the first phase, and therefore it's responsible for placing this neighbor onto the next frontier. We're also going to set parent[neighbor] to be negative v; this is just a minor detail, because when we're doing this writeMin, a later iteration could have a lower-numbered vertex try to visit the same vertex that we already explored. But if we set this entry to a negative value, then since we only ever write non-negative values with writeMin, a writeMin on a neighbor that has already been explored will never succeed.

OK, so the final BFS tree generated by this code is always going to be the same every time you run it. I want to point out that this code is still not deterministic with respect to the order in which individual memory locations get updated; you still have a determinacy race in the writeMin operator. But it's still better than a non-deterministic code in that you always get the same BFS tree. So how do you actually implement the writeMin operation?
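Here is a sketch of the retry loop about to be described, in Python for illustration. The `compare_and_swap` helper is a lock-based stand-in for the hardware atomic CAS instruction that the real C code would use:

```python
import threading

_lock = threading.Lock()

def compare_and_swap(arr, i, old, new):
    # Stand-in for a hardware CAS: atomically install `new` at arr[i]
    # only if it still holds `old`; report whether that succeeded.
    with _lock:
        if arr[i] == old:
            arr[i] = new
            return True
        return False

def write_min(arr, i, new_val):
    # Keep retrying until either the CAS succeeds or the stored value
    # is already <= new_val, in which case there is nothing to do.
    old_val = arr[i]
    while new_val < old_val:
        if compare_and_swap(arr, i, old_val, new_val):
            return
        old_val = arr[i]  # someone else wrote first; reread and retry
```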
It turns out you can implement writeMin using a loop with a compare-and-swap. writeMin takes as input two arguments: the memory address that we're trying to update, and the new value that we want to write to that address. We first set oldVal equal to the value at that memory address, and we check whether newVal is less than oldVal. If it is, then we attempt a compare-and-swap at that location, writing newVal into the address if its value is still oldVal. If that succeeds, we return. Otherwise we failed, which means that somebody else came in in the meantime and changed the value there, so we have to re-read the old value at the memory address, and then we repeat. There are two ways this writeMin operation can finish: one is if the compare-and-swap is successful; the other is if newVal is greater than or equal to oldVal, in which case we no longer have to try to write, because the value that's there is already at least as small as what we're trying to write.

So I implemented an optimized version of this deterministic parallel BFS code and compared it to the non-deterministic version, and it turns out that on 32 cores it's only a little bit slower: about 5 to 20 percent slower on a range of different input graphs. This is a pretty small price to pay for determinism, and you get many nice benefits, such as ease of debugging and ease of reasoning about the performance of your code. Any questions?

OK, so let me talk about another optimization for breadth-first search, called the direction optimization. The idea is motivated by how the sizes of the frontiers change in a typical BFS over time. Here I'm plotting the frontier size on the y-axis in log scale, and the x-axis is the iteration number. On the left we have a random graph, and on the right we have a power-law graph. We see that the frontier size actually grows pretty rapidly, especially
for the power-law graph, and then it drops pretty rapidly. This is true for many of the real-world graphs that we see, because many of them look like power-law graphs. In the BFS algorithm, most of the work is done when the frontier is relatively large, so most of the work is done in these middle iterations where the frontier is very large. It turns out that there are two ways to do breadth-first search. One is the traditional way, which I'm going to refer to as the top-down method, and this is just what we did before: we look at the frontier vertices, explore all of their outgoing neighbors, mark any of the unexplored ones as explored, and place them onto the next frontier. But there's actually another way to do breadth-first search, known as the bottom-up method. In the bottom-up method, I look at all of the vertices in the graph that haven't been explored yet, and I look at their incoming edges. If I find an incoming neighbor that's on the current frontier, I can just say that that incoming neighbor is my parent, and I don't even need to look at the rest of my incoming neighbors. In this example, when vertices 9 through 12 loop through their incoming edges, they find an incoming neighbor on the frontier, choose that neighbor as their parent, and get marked as explored. We can actually save some edge traversals here, because, for example, if you look at vertex 9 and imagine the edges being traversed in a top-to-bottom manner, then vertex 9 is only going to look at its first incoming edge and find that the incoming neighbor is on the frontier, so it doesn't even need to inspect the rest of its incoming edges: all we care about is finding one parent in the BFS tree, not all of the possible parents. In this example, vertices 13 through 15 actually end up wasting work, because they look at all of their incoming edges and none of the incoming neighbors are on the frontier, so they don't actually find a parent.

So the bottom-up approach turns out to work pretty well when the frontier is large and many vertices have already been explored, because in that case you don't have to look at many vertices, and for the ones you do look at, when you scan over their incoming edges, it's very likely that you'll find a neighbor on the current frontier early on, and you can skip a bunch of edge traversals. The top-down approach is better when the frontier is relatively small. In a 2012 paper, Scott Beamer studied the performance of these two approaches to BFS. This plot shows the running time versus the iteration number for a power-law graph, comparing the top-down and bottom-up approaches. We see that for the first two steps the top-down approach is faster than the bottom-up approach, then for the next couple of steps the bottom-up approach is faster, and when we get to the end, the top-down approach becomes faster again. So the top-down approach, as I said, is more efficient for small frontiers, whereas the bottom-up approach is more efficient for large frontiers. I also want to point out that in the top-down approach, the update to the parent array has to be atomic, because we could have multiple vertices trying to update the same neighbor. But in the bottom-up approach, the update to the parent array doesn't have to be atomic, because we scan over the incoming neighbors of any particular vertex v serially, and therefore only one processor can be writing to parent[v].

So we choose between these two approaches based on the size of the frontier. We found that a threshold frontier size of about n/20 works pretty well in practice: if the frontier has more than n/20 vertices, we use the bottom-up approach, and otherwise we use the top-down approach. You can also use more sophisticated thresholds, such as also considering the sum of out-degrees, since the actual work depends on the sum of the out-degrees of the vertices on the frontier. You can also use one threshold for going from top-down to bottom-up and a different threshold for going from bottom-up back to top-down; in fact, that's what the original paper did: it had two different thresholds. We also need to generate the inverse graph, or transpose graph, if we're using this method on a directed graph, because in the bottom-up approach we actually need to look at the incoming neighbors, not the outgoing neighbors. So if the graph wasn't already symmetrized, we have to generate both incoming and outgoing neighbors for each vertex, and we can do that as a preprocessing step. Any questions?

OK, so how do we actually represent the frontier? One way is just to use a sparse integer array, which is what we did before. Another way is to use a dense array: for example, an array of bytes of size n, where n is the number of vertices, with a 1 in position i if vertex i is on the frontier and 0 otherwise. I can also use a bit vector to further compress this, and then use additional bit-level operations to access it. For the top-down approach, a sparse representation is better, because the top-down approach usually deals with small frontiers, and if we use a sparse array we only do work proportional to the number of vertices on the frontier. In the bottom-up approach, it turns out the dense representation is better, because we're looking at most of the vertices anyway. So we need to switch between these two representations based on the approach we're using. Here are some performance numbers comparing the three different modes of traversal: bottom-up, top-down, and the direction-optimizing approach using a threshold of n/20.
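As a sketch of the hybrid choice and of the bottom-up step just described, here is a serial Python version with hypothetical names; the frontier is the dense boolean representation, and the incoming edges come from a CSR of the transpose graph:

```python
def choose_direction(frontier_size, n):
    # The simple heuristic from the lecture: bottom-up once the frontier
    # exceeds n/20 vertices, top-down otherwise.
    return "bottom-up" if frontier_size > n // 20 else "top-down"

def bottom_up_round(in_offsets, in_edges, parent, on_frontier):
    # Every unexplored vertex scans its incoming edges and stops at the
    # first in-neighbor on the current frontier. No atomics are needed:
    # only this loop iteration writes parent[v].
    next_frontier = [False] * len(on_frontier)
    for v in range(len(parent)):
        if parent[v] == -1:
            for j in range(in_offsets[v], in_offsets[v + 1]):
                u = in_edges[j]
                if on_frontier[u]:
                    parent[v] = u
                    next_frontier[v] = True
                    break  # skip the remaining incoming edges
    return next_frontier
```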
First of all, we see that the bottom-up approach is the slowest for both of these graphs, and this is because it does a lot of wasted work in the early iterations. We also see that the direction-optimizing approach is always faster than both the top-down and bottom-up approaches, because if we switch to the bottom-up approach at an appropriate time, we can save a lot of edge traversals; for example, for the power-law graph, the direction-optimizing approach is almost three times faster than the top-down approach. The benefit of this approach is highly dependent on the input graph: it works very well for power-law graphs and random graphs, but if you have a graph where the frontier size is always small, such as a grid graph or a road network, then you would never use the bottom-up approach, so this wouldn't actually give you any performance gains. Any questions?

So it turns out that this direction-optimizing idea is more general than just breadth-first search. A couple of years ago I developed a framework called Ligra, where I generalized the direction-optimizing idea to other graph algorithms, such as betweenness centrality, connected components, PageRank, shortest paths, and so on. In the Ligra framework we have an EdgeMap operator that chooses between a sparse implementation and a dense implementation based on the size of the frontier; sparse here corresponds to the top-down approach, and dense corresponds to the bottom-up approach. It turns out that using this direction-optimizing idea for these other applications also gives you performance gains in practice.

OK, so let me now talk about another optimization, which is graph compression, where the goal is to reduce the amount of memory used by the graph algorithm. Recall our CSR representation: in the edges array we just stored the IDs of the target vertices. Instead of storing the actual targets, we can do better by first sorting the edges so that they appear in non-decreasing order, and then just storing the differences between consecutive edges; for the first edge of any particular vertex, we store the difference between the target and the source of that edge. For example, for vertex 0 the first edge gets a value of 2, because we take the difference between the target and the source: 2 minus 0 is 2. For the next edge, we take the difference between the second edge and the first edge, which is 7 minus 2, or 5, and similarly for all of the remaining edges. Notice that there are some negative values here, and this is because the target is smaller than the source: in this example, 1 is smaller than 2, so 1 minus 2 gives negative 1. This can only happen for the first edge of any particular vertex, because for all the other edges we encode the difference between that edge and the previous edge, and we already sorted the edges so that they appear in non-decreasing order.

OK, so this compressed edges array will typically contain smaller values than the original edges array, and now we want to use fewer bits to represent these values. We don't want to use 32 or 64 bits like we did before; otherwise we wouldn't be saving any space. One way to reduce the space usage is to store these values using what's called a variable-length code, or a k-bit code. The idea is to encode each value in chunks of k bits, where each chunk uses k − 1 bits for the data and one bit as the continue bit. For example, let's encode the integer 401 using 8-bit, or byte, codes. First we write this value out in binary. Then we take the bottom 7 bits and place them into the data field of the first chunk, and in the last bit of this chunk we check whether we still have any more bits to encode; if we do, then we set a 1 in the continue-bit position. Then we create another chunk, where we
place the next 7 bits into the data field of that chunk. Now we're actually done encoding this integer value, so we can place a 0 in the continue bit. That's how the encoding works, and decoding just does this process backwards: you read chunks until you find a chunk with a 0 continue bit, and then you shift all of the data values left accordingly and sum them together to reconstruct the integer value that you encoded.

One performance issue here is that when you're decoding, you have to check this continue bit for every chunk and decide what to do based on it, and this is an unpredictable branch, so you can suffer from branch mispredictions from checking the continue bit. One way to optimize this is to get rid of the continue bits. The idea is to first figure out how many bytes you need to encode each integer in the sequence, and then group together integers that require the same number of bytes, using a run-length-encoding idea to encode all of these integers together: you use a header byte whose lower 6 bits store the size of the group and whose highest 2 bits store the number of bytes each integer in the group needs to decode. All the integers in the group are then stored after this header byte, and since we know exactly how many bytes each one needs, we don't need to store a continue bit in the chunks. This does slightly increase the space usage, but it makes decoding cheaper, because we no longer suffer branch mispredictions from checking the continue bit.

OK, so now we have to decode these edge lists on the fly as we're running our algorithm. If we decoded everything at the beginning, we wouldn't actually be saving any space; we need to decode the edges as we access them in our algorithm. Since we encoded each vertex's edge list separately, we can decode all of the edge lists in parallel.
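Here is a sketch of the two encoding ideas together: delta-encoding a sorted edge list, and byte-coding non-negative values with a continue bit. The function names are mine, and for simplicity `varint_encode` handles only non-negative values; the first delta of a vertex can be negative and would need an extra sign bit in a real implementation:

```python
def delta_encode(source, targets):
    # First edge stores target - source; later edges store the gap
    # from the previous target in the sorted order.
    targets = sorted(targets)
    out = [targets[0] - source]
    for a, b in zip(targets, targets[1:]):
        out.append(b - a)
    return out

def varint_encode(value):
    # Byte codes (k = 8): 7 data bits per chunk, continue bit in the
    # high position. Assumes value >= 0.
    chunks = bytearray()
    while True:
        data = value & 0x7F
        value >>= 7
        chunks.append(data | (0x80 if value > 0 else 0))
        if value == 0:
            return bytes(chunks)

def varint_decode(chunks):
    # Read chunks until the continue bit is 0, shifting each data
    # field into place and summing.
    value, shift = 0, 0
    for byte in chunks:
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return value
```

Encoding 401 takes two byte chunks: the first holds the low 7 bits with the continue bit set, and the second holds the remaining bits.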
Each vertex just decodes its edge list sequentially. But what about high-degree vertices? If you have a high-degree vertex, you'd have to decode its edge list sequentially, and if you're running in parallel, this could lead to load imbalance. One way to fix this is, instead of encoding the whole edge list sequentially, to chunk it up into chunks of size T, and then encode each chunk as before: you store the first value relative to the source vertex, and all the other values relative to the previous edge. Now you can actually decode the first value of each chunk in parallel, without having to wait for the previous edge to be decoded, and this gives us much more parallelism, because all of the chunks can be decoded in parallel. We found that a chunk size T between 100 and 10,000 works pretty well in practice.

OK, so I'm not going to have time to go over the experiments, but at a high level, the experiments show that these compression schemes do save space, and serially the compressed version is only slightly slower than the uncompressed version. But surprisingly, when you run it in parallel, it actually becomes faster than the uncompressed version. This is because these graph algorithms are memory bound, and if you're using less memory, you can alleviate the memory-subsystem bottleneck and get better scalability; the decoding part of these compressed algorithms gets very good parallel speedup, because it's just doing local operations.

OK, so let me summarize. We saw some properties of real-world graphs: they're quite large, but they can still fit on a multicore server; they're relatively sparse; and they have a power-law degree distribution. Many graph algorithms are irregular, in that they involve many random memory accesses, so the memory system becomes the bottleneck for the performance of these algorithms. You can improve performance with algorithmic optimizations, such as this direction optimization, and also by creating and exploiting locality, for example with the bit-vector optimization. Finally, optimizations that work well for certain graphs might not work well for others; for example, the direction-optimization idea works well for power-law graphs but not for road graphs. So when you're trying to optimize your graph algorithm, you should definitely test it on different types of graphs and see where it works well and where it doesn't. That's all I have; if you have any additional questions, please feel free to ask me after class. And as a reminder, we have a guest lecture on Thursday by Professor Johnson of the MIT math department on the performance of high-level languages, so please be sure to attend.
00:12:33,390 276 00:12:33,390 --> 00:12:46,150 277 00:12:46,150 --> 00:12:49,150 278 00:12:49,150 --> 00:12:53,860 279 00:12:53,860 --> 00:12:57,580 280 00:12:57,580 --> 00:12:59,860 281 00:12:59,860 --> 00:13:01,660 282 00:13:01,660 --> 00:13:04,930 283 00:13:04,930 --> 00:13:06,910 284 00:13:06,910 --> 00:13:09,550 285 00:13:09,550 --> 00:13:12,280 286 00:13:12,280 --> 00:13:14,800 287 00:13:14,800 --> 00:13:17,230 288 00:13:17,230 --> 00:13:19,360 289 00:13:19,360 --> 00:13:22,750 290 00:13:22,750 --> 00:13:24,670 291 00:13:24,670 --> 00:13:27,550 292 00:13:27,550 --> 00:13:31,300 293 00:13:31,300 --> 00:13:33,430 294 00:13:33,430 --> 00:13:36,220 295 00:13:36,220 --> 00:13:39,280 296 00:13:39,280 --> 00:13:42,730 297 00:13:42,730 --> 00:13:44,590 298 00:13:44,590 --> 00:13:46,090 299 00:13:46,090 --> 00:13:48,430 300 00:13:48,430 --> 00:14:00,190 301 00:14:00,190 --> 00:14:01,990 302 00:14:01,990 --> 00:14:03,970 303 00:14:03,970 --> 00:14:06,760 304 00:14:06,760 --> 00:14:09,280 305 00:14:09,280 --> 00:14:12,130 306 00:14:12,130 --> 00:14:14,530 307 00:14:14,530 --> 00:14:16,030 308 00:14:16,030 --> 00:14:20,260 309 00:14:20,260 --> 00:14:29,429 310 00:14:29,429 --> 00:14:29,439 311 00:14:29,439 --> 00:14:32,130 312 00:14:32,130 --> 00:14:34,920 313 00:14:34,920 --> 00:14:37,829 314 00:14:37,829 --> 00:14:39,930 315 00:14:39,930 --> 00:14:43,530 316 00:14:43,530 --> 00:14:46,319 317 00:14:46,319 --> 00:14:48,240 318 00:14:48,240 --> 00:14:52,470 319 00:14:52,470 --> 00:14:54,840 320 00:14:54,840 --> 00:14:57,139 321 00:14:57,139 --> 00:15:00,000 322 00:15:00,000 --> 00:15:01,889 323 00:15:01,889 --> 00:15:03,780 324 00:15:03,780 --> 00:15:05,960 325 00:15:05,960 --> 00:15:08,880 326 00:15:08,880 --> 00:15:10,710 327 00:15:10,710 --> 00:15:12,569 328 00:15:12,569 --> 00:15:16,470 329 00:15:16,470 --> 00:15:19,769 330 00:15:19,769 --> 00:15:21,630 331 00:15:21,630 --> 00:15:24,000 332 00:15:24,000 --> 00:15:26,310 333 00:15:26,310 --> 00:15:28,050 334 
00:15:28,050 --> 00:15:31,470 335 00:15:31,470 --> 00:15:36,710 336 00:15:36,710 --> 00:15:38,880 337 00:15:38,880 --> 00:15:40,680 338 00:15:40,680 --> 00:15:43,199 339 00:15:43,199 --> 00:15:45,300 340 00:15:45,300 --> 00:15:46,710 341 00:15:46,710 --> 00:15:49,290 342 00:15:49,290 --> 00:15:51,030 343 00:15:51,030 --> 00:15:53,699 344 00:15:53,699 --> 00:15:55,530 345 00:15:55,530 --> 00:15:57,870 346 00:15:57,870 --> 00:16:03,120 347 00:16:03,120 --> 00:16:05,579 348 00:16:05,579 --> 00:16:09,630 349 00:16:09,630 --> 00:16:12,870 350 00:16:12,870 --> 00:16:15,960 351 00:16:15,960 --> 00:16:19,439 352 00:16:19,439 --> 00:16:29,100 353 00:16:29,100 --> 00:16:31,510 354 00:16:31,510 --> 00:16:33,490 355 00:16:33,490 --> 00:16:37,050 356 00:16:37,050 --> 00:16:41,470 357 00:16:41,470 --> 00:16:43,510 358 00:16:43,510 --> 00:16:46,980 359 00:16:46,980 --> 00:16:50,290 360 00:16:50,290 --> 00:16:51,730 361 00:16:51,730 --> 00:16:55,450 362 00:16:55,450 --> 00:16:58,060 363 00:16:58,060 --> 00:16:59,800 364 00:16:59,800 --> 00:17:01,960 365 00:17:01,960 --> 00:17:04,900 366 00:17:04,900 --> 00:17:06,699 367 00:17:06,699 --> 00:17:09,430 368 00:17:09,430 --> 00:17:11,920 369 00:17:11,920 --> 00:17:15,190 370 00:17:15,190 --> 00:17:16,870 371 00:17:16,870 --> 00:17:18,940 372 00:17:18,940 --> 00:17:20,770 373 00:17:20,770 --> 00:17:24,220 374 00:17:24,220 --> 00:17:26,620 375 00:17:26,620 --> 00:17:28,240 376 00:17:28,240 --> 00:17:29,770 377 00:17:29,770 --> 00:17:32,590 378 00:17:32,590 --> 00:17:34,960 379 00:17:34,960 --> 00:17:37,150 380 00:17:37,150 --> 00:17:41,740 381 00:17:41,740 --> 00:17:43,360 382 00:17:43,360 --> 00:17:46,450 383 00:17:46,450 --> 00:17:49,120 384 00:17:49,120 --> 00:17:51,310 385 00:17:51,310 --> 00:17:53,650 386 00:17:53,650 --> 00:17:55,750 387 00:17:55,750 --> 00:17:57,850 388 00:17:57,850 --> 00:17:59,440 389 00:17:59,440 --> 00:18:00,850 390 00:18:00,850 --> 00:18:02,410 391 00:18:02,410 --> 00:18:04,630 392 00:18:04,630 --> 
00:18:06,010 393 00:18:06,010 --> 00:18:08,380 394 00:18:08,380 --> 00:18:09,970 395 00:18:09,970 --> 00:18:12,790 396 00:18:12,790 --> 00:18:14,650 397 00:18:14,650 --> 00:18:17,020 398 00:18:17,020 --> 00:18:21,280 399 00:18:21,280 --> 00:18:25,600 400 00:18:25,600 --> 00:18:27,340 401 00:18:27,340 --> 00:18:29,530 402 00:18:29,530 --> 00:18:31,810 403 00:18:31,810 --> 00:18:44,620 404 00:18:44,620 --> 00:18:47,090 405 00:18:47,090 --> 00:18:49,430 406 00:18:49,430 --> 00:18:52,159 407 00:18:52,159 --> 00:18:53,600 408 00:18:53,600 --> 00:18:55,190 409 00:18:55,190 --> 00:18:58,639 410 00:18:58,639 --> 00:19:01,879 411 00:19:01,879 --> 00:19:05,029 412 00:19:05,029 --> 00:19:06,649 413 00:19:06,649 --> 00:19:07,940 414 00:19:07,940 --> 00:19:09,860 415 00:19:09,860 --> 00:19:11,509 416 00:19:11,509 --> 00:19:12,860 417 00:19:12,860 --> 00:19:14,450 418 00:19:14,450 --> 00:19:20,389 419 00:19:20,389 --> 00:19:22,460 420 00:19:22,460 --> 00:19:25,700 421 00:19:25,700 --> 00:19:33,260 422 00:19:33,260 --> 00:19:36,840 423 00:19:36,840 --> 00:19:38,940 424 00:19:38,940 --> 00:19:40,410 425 00:19:40,410 --> 00:19:42,360 426 00:19:42,360 --> 00:19:47,549 427 00:19:47,549 --> 00:19:49,260 428 00:19:49,260 --> 00:19:51,000 429 00:19:51,000 --> 00:19:52,440 430 00:19:52,440 --> 00:19:55,970 431 00:19:55,970 --> 00:19:58,950 432 00:19:58,950 --> 00:20:03,150 433 00:20:03,150 --> 00:20:04,830 434 00:20:04,830 --> 00:20:06,600 435 00:20:06,600 --> 00:20:08,580 436 00:20:08,580 --> 00:20:11,610 437 00:20:11,610 --> 00:20:13,380 438 00:20:13,380 --> 00:20:15,330 439 00:20:15,330 --> 00:20:17,370 440 00:20:17,370 --> 00:20:19,020 441 00:20:19,020 --> 00:20:20,430 442 00:20:20,430 --> 00:20:26,340 443 00:20:26,340 --> 00:20:29,370 444 00:20:29,370 --> 00:20:32,909 445 00:20:32,909 --> 00:20:35,520 446 00:20:35,520 --> 00:20:38,159 447 00:20:38,159 --> 00:20:40,680 448 00:20:40,680 --> 00:20:42,720 449 00:20:42,720 --> 00:20:45,120 450 00:20:45,120 --> 00:20:47,310 451 
00:20:47,310 --> 00:20:50,789 452 00:20:50,789 --> 00:20:52,860 453 00:20:52,860 --> 00:20:54,659 454 00:20:54,659 --> 00:20:58,289 455 00:20:58,289 --> 00:21:01,740 456 00:21:01,740 --> 00:21:02,970 457 00:21:02,970 --> 00:21:05,280 458 00:21:05,280 --> 00:21:07,560 459 00:21:07,560 --> 00:21:09,750 460 00:21:09,750 --> 00:21:11,700 461 00:21:11,700 --> 00:21:13,049 462 00:21:13,049 --> 00:21:15,240 463 00:21:15,240 --> 00:21:17,100 464 00:21:17,100 --> 00:21:18,780 465 00:21:18,780 --> 00:21:21,690 466 00:21:21,690 --> 00:21:23,549 467 00:21:23,549 --> 00:21:29,790 468 00:21:29,790 --> 00:21:32,190 469 00:21:32,190 --> 00:21:34,140 470 00:21:34,140 --> 00:21:36,120 471 00:21:36,120 --> 00:21:39,390 472 00:21:39,390 --> 00:21:41,100 473 00:21:41,100 --> 00:21:43,140 474 00:21:43,140 --> 00:21:45,450 475 00:21:45,450 --> 00:21:47,010 476 00:21:47,010 --> 00:21:49,410 477 00:21:49,410 --> 00:21:51,120 478 00:21:51,120 --> 00:21:54,300 479 00:21:54,300 --> 00:21:56,040 480 00:21:56,040 --> 00:21:58,530 481 00:21:58,530 --> 00:22:01,590 482 00:22:01,590 --> 00:22:03,900 483 00:22:03,900 --> 00:22:05,130 484 00:22:05,130 --> 00:22:06,540 485 00:22:06,540 --> 00:22:11,040 486 00:22:11,040 --> 00:22:22,350 487 00:22:22,350 --> 00:22:23,880 488 00:22:23,880 --> 00:22:29,760 489 00:22:29,760 --> 00:22:32,070 490 00:22:32,070 --> 00:22:34,650 491 00:22:34,650 --> 00:22:36,960 492 00:22:36,960 --> 00:22:39,930 493 00:22:39,930 --> 00:22:41,520 494 00:22:41,520 --> 00:22:43,200 495 00:22:43,200 --> 00:22:46,560 496 00:22:46,560 --> 00:22:48,360 497 00:22:48,360 --> 00:22:50,220 498 00:22:50,220 --> 00:22:51,720 499 00:22:51,720 --> 00:22:55,110 500 00:22:55,110 --> 00:22:57,060 501 00:22:57,060 --> 00:22:59,160 502 00:22:59,160 --> 00:23:03,270 503 00:23:03,270 --> 00:23:06,420 504 00:23:06,420 --> 00:23:08,520 505 00:23:08,520 --> 00:23:11,520 506 00:23:11,520 --> 00:23:13,350 507 00:23:13,350 --> 00:23:15,180 508 00:23:15,180 --> 00:23:18,030 509 00:23:18,030 --> 
00:23:20,580 510 00:23:20,580 --> 00:23:22,650 511 00:23:22,650 --> 00:23:25,650 512 00:23:25,650 --> 00:23:26,880 513 00:23:26,880 --> 00:23:29,220 514 00:23:29,220 --> 00:23:30,810 515 00:23:30,810 --> 00:23:32,250 516 00:23:32,250 --> 00:23:34,040 517 00:23:34,040 --> 00:23:36,750 518 00:23:36,750 --> 00:23:37,890 519 00:23:37,890 --> 00:23:41,149 520 00:23:41,149 --> 00:23:43,709 521 00:23:43,709 --> 00:23:45,779 522 00:23:45,779 --> 00:23:48,119 523 00:23:48,119 --> 00:23:50,430 524 00:23:50,430 --> 00:23:52,459 525 00:23:52,459 --> 00:23:55,619 526 00:23:55,619 --> 00:23:57,570 527 00:23:57,570 --> 00:23:59,849 528 00:23:59,849 --> 00:24:03,509 529 00:24:03,509 --> 00:24:05,190 530 00:24:05,190 --> 00:24:07,049 531 00:24:07,049 --> 00:24:09,749 532 00:24:09,749 --> 00:24:11,369 533 00:24:11,369 --> 00:24:13,999 534 00:24:13,999 --> 00:24:17,549 535 00:24:17,549 --> 00:24:19,649 536 00:24:19,649 --> 00:24:22,469 537 00:24:22,469 --> 00:24:24,299 538 00:24:24,299 --> 00:24:29,369 539 00:24:29,369 --> 00:24:32,609 540 00:24:32,609 --> 00:24:34,019 541 00:24:34,019 --> 00:24:35,759 542 00:24:35,759 --> 00:24:37,680 543 00:24:37,680 --> 00:24:40,379 544 00:24:40,379 --> 00:24:42,180 545 00:24:42,180 --> 00:24:46,109 546 00:24:46,109 --> 00:24:48,839 547 00:24:48,839 --> 00:24:52,049 548 00:24:52,049 --> 00:24:55,889 549 00:24:55,889 --> 00:24:57,349 550 00:24:57,349 --> 00:24:59,279 551 00:24:59,279 --> 00:25:01,079 552 00:25:01,079 --> 00:25:03,839 553 00:25:03,839 --> 00:25:05,669 554 00:25:05,669 --> 00:25:08,009 555 00:25:08,009 --> 00:25:09,949 556 00:25:09,949 --> 00:25:13,589 557 00:25:13,589 --> 00:25:23,729 558 00:25:23,729 --> 00:25:25,560 559 00:25:25,560 --> 00:25:27,869 560 00:25:27,869 --> 00:25:29,810 561 00:25:29,810 --> 00:25:31,979 562 00:25:31,979 --> 00:25:38,539 563 00:25:38,539 --> 00:25:40,680 564 00:25:40,680 --> 00:25:42,449 565 00:25:42,449 --> 00:25:45,310 566 00:25:45,310 --> 00:25:48,800 567 00:25:48,800 --> 00:25:50,630 568 
00:25:50,630 --> 00:25:52,430 569 00:25:52,430 --> 00:25:54,380 570 00:25:54,380 --> 00:25:57,350 571 00:25:57,350 --> 00:25:59,000 572 00:25:59,000 --> 00:26:00,560 573 00:26:00,560 --> 00:26:01,880 574 00:26:01,880 --> 00:26:03,380 575 00:26:03,380 --> 00:26:06,440 576 00:26:06,440 --> 00:26:10,520 577 00:26:10,520 --> 00:26:12,440 578 00:26:12,440 --> 00:26:23,040 579 00:26:23,040 --> 00:26:24,720 580 00:26:24,720 --> 00:26:26,550 581 00:26:26,550 --> 00:26:29,850 582 00:26:29,850 --> 00:26:36,750 583 00:26:36,750 --> 00:26:39,000 584 00:26:39,000 --> 00:26:42,900 585 00:26:42,900 --> 00:26:45,690 586 00:26:45,690 --> 00:26:48,510 587 00:26:48,510 --> 00:26:50,220 588 00:26:50,220 --> 00:26:51,630 589 00:26:51,630 --> 00:26:52,800 590 00:26:52,800 --> 00:26:55,620 591 00:26:55,620 --> 00:26:57,630 592 00:26:57,630 --> 00:27:00,360 593 00:27:00,360 --> 00:27:03,240 594 00:27:03,240 --> 00:27:05,430 595 00:27:05,430 --> 00:27:08,130 596 00:27:08,130 --> 00:27:12,450 597 00:27:12,450 --> 00:27:13,920 598 00:27:13,920 --> 00:27:16,230 599 00:27:16,230 --> 00:27:18,750 600 00:27:18,750 --> 00:27:22,020 601 00:27:22,020 --> 00:27:25,020 602 00:27:25,020 --> 00:27:26,910 603 00:27:26,910 --> 00:27:30,510 604 00:27:30,510 --> 00:27:32,250 605 00:27:32,250 --> 00:27:35,850 606 00:27:35,850 --> 00:27:38,760 607 00:27:38,760 --> 00:27:40,170 608 00:27:40,170 --> 00:27:43,530 609 00:27:43,530 --> 00:27:45,150 610 00:27:45,150 --> 00:27:48,240 611 00:27:48,240 --> 00:27:51,420 612 00:27:51,420 --> 00:27:54,510 613 00:27:54,510 --> 00:27:56,610 614 00:27:56,610 --> 00:28:00,410 615 00:28:00,410 --> 00:28:03,000 616 00:28:03,000 --> 00:28:06,120 617 00:28:06,120 --> 00:28:09,900 618 00:28:09,900 --> 00:28:13,020 619 00:28:13,020 --> 00:28:15,480 620 00:28:15,480 --> 00:28:17,630 621 00:28:17,630 --> 00:28:20,370 622 00:28:20,370 --> 00:28:23,370 623 00:28:23,370 --> 00:28:25,440 624 00:28:25,440 --> 00:28:27,600 625 00:28:27,600 --> 00:28:31,680 626 00:28:31,680 --> 
00:28:34,650 627 00:28:34,650 --> 00:28:36,880 628 00:28:36,880 --> 00:28:39,370 629 00:28:39,370 --> 00:28:41,710 630 00:28:41,710 --> 00:28:45,640 631 00:28:45,640 --> 00:28:47,470 632 00:28:47,470 --> 00:28:50,500 633 00:28:50,500 --> 00:28:53,320 634 00:28:53,320 --> 00:28:55,480 635 00:28:55,480 --> 00:29:00,190 636 00:29:00,190 --> 00:29:04,060 637 00:29:04,060 --> 00:29:05,560 638 00:29:05,560 --> 00:29:08,470 639 00:29:08,470 --> 00:29:10,450 640 00:29:10,450 --> 00:29:12,460 641 00:29:12,460 --> 00:29:16,600 642 00:29:16,600 --> 00:29:18,490 643 00:29:18,490 --> 00:29:20,920 644 00:29:20,920 --> 00:29:22,300 645 00:29:22,300 --> 00:29:24,760 646 00:29:24,760 --> 00:29:26,500 647 00:29:26,500 --> 00:29:29,080 648 00:29:29,080 --> 00:29:30,730 649 00:29:30,730 --> 00:29:32,140 650 00:29:32,140 --> 00:29:34,360 651 00:29:34,360 --> 00:29:36,790 652 00:29:36,790 --> 00:29:38,740 653 00:29:38,740 --> 00:29:40,630 654 00:29:40,630 --> 00:29:42,700 655 00:29:42,700 --> 00:29:46,050 656 00:29:46,050 --> 00:29:48,550 657 00:29:48,550 --> 00:29:50,380 658 00:29:50,380 --> 00:29:52,930 659 00:29:52,930 --> 00:29:56,200 660 00:29:56,200 --> 00:29:58,330 661 00:29:58,330 --> 00:30:01,990 662 00:30:01,990 --> 00:30:09,170 663 00:30:09,170 --> 00:30:09,180 664 00:30:09,180 --> 00:30:11,460 665 00:30:11,460 --> 00:30:21,070 666 00:30:21,070 --> 00:30:23,560 667 00:30:23,560 --> 00:30:26,049 668 00:30:26,049 --> 00:30:28,299 669 00:30:28,299 --> 00:30:30,250 670 00:30:30,250 --> 00:30:32,020 671 00:30:32,020 --> 00:30:35,289 672 00:30:35,289 --> 00:30:36,880 673 00:30:36,880 --> 00:30:38,409 674 00:30:38,409 --> 00:30:39,850 675 00:30:39,850 --> 00:30:44,890 676 00:30:44,890 --> 00:30:46,899 677 00:30:46,899 --> 00:30:49,090 678 00:30:49,090 --> 00:30:53,680 679 00:30:53,680 --> 00:30:55,630 680 00:30:55,630 --> 00:30:58,510 681 00:30:58,510 --> 00:31:01,840 682 00:31:01,840 --> 00:31:03,610 683 00:31:03,610 --> 00:31:05,740 684 00:31:05,740 --> 00:31:08,350 685 
00:31:08,350 --> 00:31:10,570 686 00:31:10,570 --> 00:31:12,430 687 00:31:12,430 --> 00:31:14,140 688 00:31:14,140 --> 00:31:16,720 689 00:31:16,720 --> 00:31:20,440 690 00:31:20,440 --> 00:31:21,820 691 00:31:21,820 --> 00:31:23,680 692 00:31:23,680 --> 00:31:27,399 693 00:31:27,399 --> 00:31:29,860 694 00:31:29,860 --> 00:31:31,990 695 00:31:31,990 --> 00:31:34,210 696 00:31:34,210 --> 00:31:36,880 697 00:31:36,880 --> 00:31:39,039 698 00:31:39,039 --> 00:31:42,610 699 00:31:42,610 --> 00:31:44,230 700 00:31:44,230 --> 00:31:45,640 701 00:31:45,640 --> 00:31:47,590 702 00:31:47,590 --> 00:31:51,190 703 00:31:51,190 --> 00:31:52,899 704 00:31:52,899 --> 00:31:57,669 705 00:31:57,669 --> 00:31:59,740 706 00:31:59,740 --> 00:32:02,470 707 00:32:02,470 --> 00:32:04,539 708 00:32:04,539 --> 00:32:06,370 709 00:32:06,370 --> 00:32:08,740 710 00:32:08,740 --> 00:32:12,220 711 00:32:12,220 --> 00:32:13,990 712 00:32:13,990 --> 00:32:18,310 713 00:32:18,310 --> 00:32:19,539 714 00:32:19,539 --> 00:32:22,060 715 00:32:22,060 --> 00:32:24,200 716 00:32:24,200 --> 00:32:26,299 717 00:32:26,299 --> 00:32:27,169 718 00:32:27,169 --> 00:32:29,419 719 00:32:29,419 --> 00:32:31,850 720 00:32:31,850 --> 00:32:34,460 721 00:32:34,460 --> 00:32:38,180 722 00:32:38,180 --> 00:32:40,940 723 00:32:40,940 --> 00:32:43,070 724 00:32:43,070 --> 00:32:44,690 725 00:32:44,690 --> 00:32:46,609 726 00:32:46,609 --> 00:32:48,440 727 00:32:48,440 --> 00:32:51,289 728 00:32:51,289 --> 00:32:57,379 729 00:32:57,379 --> 00:32:58,669 730 00:32:58,669 --> 00:33:00,980 731 00:33:00,980 --> 00:33:03,139 732 00:33:03,139 --> 00:33:16,250 733 00:33:16,250 --> 00:33:16,260 734 00:33:16,260 --> 00:33:22,420 735 00:33:22,420 --> 00:33:22,430 736 00:33:22,430 --> 00:33:24,890 737 00:33:24,890 --> 00:33:30,060 738 00:33:30,060 --> 00:33:31,830 739 00:33:31,830 --> 00:33:35,700 740 00:33:35,700 --> 00:33:49,230 741 00:33:49,230 --> 00:33:57,370 742 00:33:57,370 --> 00:33:59,500 743 00:33:59,500 --> 
00:34:02,190 744 00:34:02,190 --> 00:34:04,960 745 00:34:04,960 --> 00:34:07,120 746 00:34:07,120 --> 00:34:08,889 747 00:34:08,889 --> 00:34:11,710 748 00:34:11,710 --> 00:34:13,270 749 00:34:13,270 --> 00:34:17,399 750 00:34:17,399 --> 00:34:20,169 751 00:34:20,169 --> 00:34:23,050 752 00:34:23,050 --> 00:34:25,570 753 00:34:25,570 --> 00:34:28,000 754 00:34:28,000 --> 00:34:29,889 755 00:34:29,889 --> 00:34:31,389 756 00:34:31,389 --> 00:34:33,730 757 00:34:33,730 --> 00:34:36,700 758 00:34:36,700 --> 00:34:38,590 759 00:34:38,590 --> 00:34:43,680 760 00:34:43,680 --> 00:34:46,139 761 00:34:46,139 --> 00:34:48,280 762 00:34:48,280 --> 00:34:51,070 763 00:34:51,070 --> 00:34:54,010 764 00:34:54,010 --> 00:34:56,230 765 00:34:56,230 --> 00:34:58,150 766 00:34:58,150 --> 00:35:00,490 767 00:35:00,490 --> 00:35:03,340 768 00:35:03,340 --> 00:35:08,650 769 00:35:08,650 --> 00:35:12,120 770 00:35:12,120 --> 00:35:15,220 771 00:35:15,220 --> 00:35:18,550 772 00:35:18,550 --> 00:35:21,490 773 00:35:21,490 --> 00:35:23,350 774 00:35:23,350 --> 00:35:25,720 775 00:35:25,720 --> 00:35:28,270 776 00:35:28,270 --> 00:35:30,820 777 00:35:30,820 --> 00:35:33,640 778 00:35:33,640 --> 00:35:39,230 779 00:35:39,230 --> 00:35:41,390 780 00:35:41,390 --> 00:35:45,079 781 00:35:45,079 --> 00:35:46,670 782 00:35:46,670 --> 00:35:48,470 783 00:35:48,470 --> 00:35:50,630 784 00:35:50,630 --> 00:35:53,180 785 00:35:53,180 --> 00:35:55,010 786 00:35:55,010 --> 00:35:58,460 787 00:35:58,460 --> 00:36:05,000 788 00:36:05,000 --> 00:36:07,670 789 00:36:07,670 --> 00:36:11,089 790 00:36:11,089 --> 00:36:12,470 791 00:36:12,470 --> 00:36:14,630 792 00:36:14,630 --> 00:36:16,670 793 00:36:16,670 --> 00:36:19,820 794 00:36:19,820 --> 00:36:21,710 795 00:36:21,710 --> 00:36:28,520 796 00:36:28,520 --> 00:36:30,260 797 00:36:30,260 --> 00:36:33,170 798 00:36:33,170 --> 00:36:36,320 799 00:36:36,320 --> 00:36:53,770 800 00:36:53,770 --> 00:37:06,900 801 00:37:06,900 --> 00:37:13,710 802 
00:37:13,710 --> 00:37:15,779 803 00:37:15,779 --> 00:37:18,210 804 00:37:18,210 --> 00:37:21,180 805 00:37:21,180 --> 00:37:23,220 806 00:37:23,220 --> 00:37:25,650 807 00:37:25,650 --> 00:37:29,579 808 00:37:29,579 --> 00:37:31,440 809 00:37:31,440 --> 00:37:35,339 810 00:37:35,339 --> 00:37:37,650 811 00:37:37,650 --> 00:37:39,960 812 00:37:39,960 --> 00:37:42,870 813 00:37:42,870 --> 00:37:44,880 814 00:37:44,880 --> 00:37:46,740 815 00:37:46,740 --> 00:37:49,319 816 00:37:49,319 --> 00:37:50,759 817 00:37:50,759 --> 00:37:52,769 818 00:37:52,769 --> 00:37:57,420 819 00:37:57,420 --> 00:37:59,549 820 00:37:59,549 --> 00:38:01,230 821 00:38:01,230 --> 00:38:03,539 822 00:38:03,539 --> 00:38:06,089 823 00:38:06,089 --> 00:38:08,849 824 00:38:08,849 --> 00:38:10,799 825 00:38:10,799 --> 00:38:14,009 826 00:38:14,009 --> 00:38:15,450 827 00:38:15,450 --> 00:38:19,620 828 00:38:19,620 --> 00:38:23,400 829 00:38:23,400 --> 00:38:27,569 830 00:38:27,569 --> 00:38:30,509 831 00:38:30,509 --> 00:38:32,339 832 00:38:32,339 --> 00:38:35,069 833 00:38:35,069 --> 00:38:37,140 834 00:38:37,140 --> 00:38:39,809 835 00:38:39,809 --> 00:38:44,339 836 00:38:44,339 --> 00:38:46,349 837 00:38:46,349 --> 00:38:49,980 838 00:38:49,980 --> 00:38:52,200 839 00:38:52,200 --> 00:38:54,749 840 00:38:54,749 --> 00:38:56,339 841 00:38:56,339 --> 00:38:59,670 842 00:38:59,670 --> 00:39:11,100 843 00:39:11,100 --> 00:39:11,110 844 00:39:11,110 --> 00:39:13,570 845 00:39:13,570 --> 00:39:17,180 846 00:39:17,180 --> 00:39:20,570 847 00:39:20,570 --> 00:39:22,610 848 00:39:22,610 --> 00:39:24,290 849 00:39:24,290 --> 00:39:26,150 850 00:39:26,150 --> 00:39:27,860 851 00:39:27,860 --> 00:39:29,570 852 00:39:29,570 --> 00:39:32,990 853 00:39:32,990 --> 00:39:34,910 854 00:39:34,910 --> 00:39:36,950 855 00:39:36,950 --> 00:39:39,590 856 00:39:39,590 --> 00:39:41,390 857 00:39:41,390 --> 00:39:44,480 858 00:39:44,480 --> 00:39:46,280 859 00:39:46,280 --> 00:39:49,550 860 00:39:49,550 --> 
00:39:51,410 861 00:39:51,410 --> 00:39:53,720 862 00:39:53,720 --> 00:39:56,660 863 00:39:56,660 --> 00:39:58,700 864 00:39:58,700 --> 00:40:03,040 865 00:40:03,040 --> 00:40:05,840 866 00:40:05,840 --> 00:40:09,560 867 00:40:09,560 --> 00:40:12,650 868 00:40:12,650 --> 00:40:14,950 869 00:40:14,950 --> 00:40:17,150 870 00:40:17,150 --> 00:40:18,800 871 00:40:18,800 --> 00:40:20,990 872 00:40:20,990 --> 00:40:24,020 873 00:40:24,020 --> 00:40:26,030 874 00:40:26,030 --> 00:40:30,080 875 00:40:30,080 --> 00:40:33,770 876 00:40:33,770 --> 00:40:35,330 877 00:40:35,330 --> 00:40:38,090 878 00:40:38,090 --> 00:40:41,630 879 00:40:41,630 --> 00:40:44,990 880 00:40:44,990 --> 00:40:47,210 881 00:40:47,210 --> 00:40:50,690 882 00:40:50,690 --> 00:40:53,200 883 00:40:53,200 --> 00:40:57,290 884 00:40:57,290 --> 00:40:59,210 885 00:40:59,210 --> 00:41:01,400 886 00:41:01,400 --> 00:41:03,800 887 00:41:03,800 --> 00:41:06,980 888 00:41:06,980 --> 00:41:10,070 889 00:41:10,070 --> 00:41:13,240 890 00:41:13,240 --> 00:41:13,250 891 00:41:13,250 --> 00:41:14,779 892 00:41:14,779 --> 00:41:16,640 893 00:41:16,640 --> 00:41:18,529 894 00:41:18,529 --> 00:41:21,319 895 00:41:21,319 --> 00:41:23,539 896 00:41:23,539 --> 00:41:25,390 897 00:41:25,390 --> 00:41:31,609 898 00:41:31,609 --> 00:41:33,890 899 00:41:33,890 --> 00:41:36,709 900 00:41:36,709 --> 00:41:38,089 901 00:41:38,089 --> 00:41:41,630 902 00:41:41,630 --> 00:41:45,349 903 00:41:45,349 --> 00:41:47,989 904 00:41:47,989 --> 00:41:49,849 905 00:41:49,849 --> 00:41:52,519 906 00:41:52,519 --> 00:42:04,339 907 00:42:04,339 --> 00:42:06,469 908 00:42:06,469 --> 00:42:08,179 909 00:42:08,179 --> 00:42:11,390 910 00:42:11,390 --> 00:42:13,159 911 00:42:13,159 --> 00:42:14,509 912 00:42:14,509 --> 00:42:18,349 913 00:42:18,349 --> 00:42:20,029 914 00:42:20,029 --> 00:42:23,209 915 00:42:23,209 --> 00:42:25,789 916 00:42:25,789 --> 00:42:27,559 917 00:42:27,559 --> 00:42:30,109 918 00:42:30,109 --> 00:42:32,179 919 
00:42:32,179 --> 00:42:33,859 920 00:42:33,859 --> 00:42:36,739 921 00:42:36,739 --> 00:42:38,299 922 00:42:38,299 --> 00:42:40,579 923 00:42:40,579 --> 00:42:42,739 924 00:42:42,739 --> 00:42:46,009 925 00:42:46,009 --> 00:42:47,359 926 00:42:47,359 --> 00:42:49,999 927 00:42:49,999 --> 00:42:51,919 928 00:42:51,919 --> 00:42:54,319 929 00:42:54,319 --> 00:42:56,120 930 00:42:56,120 --> 00:42:57,709 931 00:42:57,709 --> 00:42:59,989 932 00:42:59,989 --> 00:43:02,569 933 00:43:02,569 --> 00:43:05,209 934 00:43:05,209 --> 00:43:06,709 935 00:43:06,709 --> 00:43:09,229 936 00:43:09,229 --> 00:43:11,239 937 00:43:11,239 --> 00:43:14,809 938 00:43:14,809 --> 00:43:16,309 939 00:43:16,309 --> 00:43:17,719 940 00:43:17,719 --> 00:43:19,130 941 00:43:19,130 --> 00:43:20,719 942 00:43:20,719 --> 00:43:22,249 943 00:43:22,249 --> 00:43:25,009 944 00:43:25,009 --> 00:43:28,000 945 00:43:28,000 --> 00:43:30,250 946 00:43:30,250 --> 00:43:32,680 947 00:43:32,680 --> 00:43:37,210 948 00:43:37,210 --> 00:43:39,010 949 00:43:39,010 --> 00:43:40,570 950 00:43:40,570 --> 00:43:42,520 951 00:43:42,520 --> 00:43:43,840 952 00:43:43,840 --> 00:43:45,970 953 00:43:45,970 --> 00:43:47,860 954 00:43:47,860 --> 00:43:51,330 955 00:43:51,330 --> 00:43:55,150 956 00:43:55,150 --> 00:43:57,760 957 00:43:57,760 --> 00:43:59,800 958 00:43:59,800 --> 00:44:01,540 959 00:44:01,540 --> 00:44:03,310 960 00:44:03,310 --> 00:44:06,370 961 00:44:06,370 --> 00:44:09,370 962 00:44:09,370 --> 00:44:10,720 963 00:44:10,720 --> 00:44:12,760 964 00:44:12,760 --> 00:44:14,710 965 00:44:14,710 --> 00:44:16,300 966 00:44:16,300 --> 00:44:20,350 967 00:44:20,350 --> 00:44:22,060 968 00:44:22,060 --> 00:44:23,620 969 00:44:23,620 --> 00:44:29,920 970 00:44:29,920 --> 00:44:36,130 971 00:44:36,130 --> 00:44:41,110 972 00:44:41,110 --> 00:44:43,060 973 00:44:43,060 --> 00:44:44,770 974 00:44:44,770 --> 00:44:47,080 975 00:44:47,080 --> 00:44:49,060 976 00:44:49,060 --> 00:44:51,910 977 00:44:51,910 --> 
00:44:53,410 978 00:44:53,410 --> 00:44:55,330 979 00:44:55,330 --> 00:44:58,360 980 00:44:58,360 --> 00:45:02,470 981 00:45:02,470 --> 00:45:04,060 982 00:45:04,060 --> 00:45:06,010 983 00:45:06,010 --> 00:45:09,280 984 00:45:09,280 --> 00:45:13,030 985 00:45:13,030 --> 00:45:15,250 986 00:45:15,250 --> 00:45:17,290 987 00:45:17,290 --> 00:45:19,690 988 00:45:19,690 --> 00:45:21,490 989 00:45:21,490 --> 00:45:24,220 990 00:45:24,220 --> 00:45:26,740 991 00:45:26,740 --> 00:45:28,780 992 00:45:28,780 --> 00:45:31,060 993 00:45:31,060 --> 00:45:34,570 994 00:45:34,570 --> 00:45:37,600 995 00:45:37,600 --> 00:45:39,640 996 00:45:39,640 --> 00:45:42,260 997 00:45:42,260 --> 00:45:45,170 998 00:45:45,170 --> 00:45:47,870 999 00:45:47,870 --> 00:45:49,520 1000 00:45:49,520 --> 00:45:52,450 1001 00:45:52,450 --> 00:46:03,349 1002 00:46:03,349 --> 00:46:09,020 1003 00:46:09,020 --> 00:46:12,620 1004 00:46:12,620 --> 00:46:12,630 1005 00:46:12,630 --> 00:46:15,390 1006 00:46:15,390 --> 00:46:21,660 1007 00:46:21,660 --> 00:46:24,539 1008 00:46:24,539 --> 00:46:26,509 1009 00:46:26,509 --> 00:46:30,390 1010 00:46:30,390 --> 00:46:32,099 1011 00:46:32,099 --> 00:46:34,799 1012 00:46:34,799 --> 00:46:37,230 1013 00:46:37,230 --> 00:46:39,390 1014 00:46:39,390 --> 00:46:40,829 1015 00:46:40,829 --> 00:46:43,259 1016 00:46:43,259 --> 00:46:45,120 1017 00:46:45,120 --> 00:46:47,759 1018 00:46:47,759 --> 00:46:50,130 1019 00:46:50,130 --> 00:46:52,319 1020 00:46:52,319 --> 00:46:54,870 1021 00:46:54,870 --> 00:46:58,170 1022 00:46:58,170 --> 00:47:00,299 1023 00:47:00,299 --> 00:47:03,329 1024 00:47:03,329 --> 00:47:06,420 1025 00:47:06,420 --> 00:47:09,289 1026 00:47:09,289 --> 00:47:12,749 1027 00:47:12,749 --> 00:47:15,930 1028 00:47:15,930 --> 00:47:19,410 1029 00:47:19,410 --> 00:47:21,420 1030 00:47:21,420 --> 00:47:24,180 1031 00:47:24,180 --> 00:47:28,200 1032 00:47:28,200 --> 00:47:30,450 1033 00:47:30,450 --> 00:47:34,670 1034 00:47:34,670 --> 00:47:37,650 1035 