The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Good afternoon, everyone. Let's get started. Welcome to the eleventh lecture of 6.172. It seems there are many fewer people here today than on Tuesday. Today we're going to talk about storage allocation. It turns out that storage allocation is about both allocating memory and also freeing it, but in the literature it's just called storage allocation, so that's the term we're going to use. Whenever you do a malloc or a free, you're doing storage allocation. How many of you have used malloc or free before? Hopefully all of you, since you needed them for the projects and homeworks.

The simplest form of storage is a stack. In a stack, you just have an array and a pointer. Here we have an array, which we call A, and some portion of this array is used for memory; the rest of it is free. Then there's a pointer sp that points to the end of the used region of the stack. If you want to allocate x bytes on the stack, all you do is increment the sp pointer by x. Of course, you should also check for overflow to make sure you don't go off the end of the array, because if you do, you'll get a segmentation fault. But nowadays compilers don't really check for stack overflow, because the stack is usually big enough for most programs, and when you do get a stack overflow, you just segfault and then go debug your program. So for efficiency reasons, stack overflow isn't actually checked. Then the allocation returns a pointer to the beginning of the memory you just allocated, which is just sp minus x. So that's pretty simple.
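Bump allocation on a stack, as just described, can be sketched in C like this (a minimal sketch; the names `stack_alloc` and `stack_free` are mine, not from the lecture, and the overflow check is kept in here for safety even though compilers usually omit it):

```c
#include <assert.h>
#include <stddef.h>

#define STACK_SIZE 1024

static char a[STACK_SIZE];   // the array backing the stack
static char *sp = a;         // points just past the used region

// Allocate x bytes by bumping the stack pointer.
void *stack_alloc(size_t x) {
    if (sp + x > a + STACK_SIZE)  // overflow check (often skipped in practice)
        return NULL;
    sp += x;
    return sp - x;  // beginning of the newly allocated region
}

// Free the last x bytes allocated (must follow the stack discipline).
void stack_free(size_t x) {
    assert(sp - x >= a);  // underflow check
    sp -= x;
}
```

Both operations are just pointer arithmetic, which is why this code can be fully inlined with no function calls.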
In fact, this is how the C call stack works; it also uses a stack discipline. When you call a function, you save local variables and registers on the stack, and you also save the return address of the calling function, and when you return, you pop things off the stack.

You can also free things from the stack: you just decrement sp by x if you want to free x bytes. Here we just decremented sp by x, and everything after sp is now considered free. Again, if you were careful, you would check for stack underflow, but the compiler usually doesn't do this, because if you do have a stack underflow, there's a bug in your program; you'll get a segfault, and you should go fix it.

Allocating and freeing on a stack take constant time, because all you have to do is manipulate the stack pointer, so it's pretty efficient. However, you have to free consistently with the stack discipline, so the stack has limited applicability. Does anybody see why you can't do everything with just a stack? What's one limitation of the stack? Yes? So it turns out that you can actually pass a compile-time constant to make the stack bigger if you wanted to, but there's a more fundamental limitation of the stack. Yes? Yeah, the answer is that you can only free the last thing you allocated. There's no way to free anything in the middle of the used region; you have to free the last thing, because the stack doesn't keep track of the objects in the middle of the used region. So the stack has limited applicability, but it's great when it works, because it's very efficient: all of this code can essentially be inlined, there don't have to be any function calls, and it's very fast.

It also turns out that you can allocate on the call stack using a function called alloca. It's actually not a function; it's a keyword that the compiler recognizes and transforms into instructions that manipulate the stack. However, this
function is now deprecated, because it turns out the compiler is more efficient when it's dealing with fixed-size frames, where you just allocate a pointer on the stack that points to some piece of memory on the heap. Nevertheless, if you want to allocate on the call stack, you can call alloca, but you should check that it's doing the right thing, since it's deprecated and the implementation is compiler-dependent.

So what's another type of storage besides the stack? You can't do everything with a stack, so what else can we use? Yes? Yes, we also have the heap, which is more general than the stack. A stack looks very nice and tidy, and it's very efficient to use, but it doesn't work for everything; that's why we have the heap. The heap is much more general, but it's very messy, and it's hard to organize it and work with it efficiently. For the rest of this lecture, I'm going to be talking about how to manage memory in the heap. I found these pictures on Stack Overflow, so maybe they're biased towards stacks.

So how do we do heap allocation? Let's first start with fixed-size heap allocation, where we assume that all of the objects we're dealing with are the same size. In general this isn't true, but let's start with the simpler case first. As I said earlier, if you use malloc and free in C, you're doing heap allocation. C++ has the new and delete operators, which work similarly to malloc and free; they also call the object's constructor and destructor, which the C functions don't do. And unlike Java and Python, C and C++ don't provide a garbage collector, so the programmer has to manage memory him- or herself. This is one of the reasons for the efficiency of C and C++: there's no garbage collector running in the background. However, this makes it much harder to write correct programs, because you have to be careful of memory leaks, dangling pointers, and double freeing. So a memory leak
is when you allocate something and forget to free it. If your program keeps running and allocating more and more stuff without freeing it, eventually you're going to run out of memory, and your program is going to crash. So you need to be careful of memory leaks.

Dangling pointers are pointers to pieces of memory that you have already freed. If you try to dereference a dangling pointer, the behavior is undefined: maybe you'll get a segmentation fault, or maybe you won't see anything until later in your program, because that memory might have been reallocated for something else; either way, it's illegal to dereference that memory. Dangling pointers are very annoying when you're using C. If you're lucky, you'll get a segfault right away and can go fix your bug, but sometimes these are very hard to find.

There's also double freeing. This is when you free something more than once, and again this leads to undefined behavior: maybe you'll get a segfault, or maybe that piece of memory was reallocated for something else, and when you free it again, your program is going to be incorrect. So you need to be careful not to free something more than once.

This is why some people prefer to use a language like Java or Python that provides a built-in garbage collector. However, those languages are less efficient, because they have a general-purpose garbage collector running in the background. In this class we're going to use C, because we want to write the fastest programs possible, so we need to study how to manage memory.

There are tools you can use to reduce the number of memory bugs in your program: memory checkers like AddressSanitizer and Valgrind, which can assist you in finding these pernicious bugs. AddressSanitizer is a compiler instrumentation tool: when you compile your program, you pass a flag, and then when you run your program, it's
going to report possible memory bugs in your program. Valgrind works directly off the binary, so you don't need to do anything special when you compile; you just pass your binary to Valgrind, and if there's a memory bug, it might find it. But Valgrind tends to be slower than AddressSanitizer, and it tends to catch fewer bugs, because it knows less about the program: AddressSanitizer sees the source code and has more information, whereas Valgrind just works off the binary.

Also, don't confuse this heap with the heap data structure that you might have seen in your algorithms or data structures courses; those are two different concepts. The heap data structure in your algorithms course was used to represent a priority queue, where you can efficiently extract the highest-priority element and update the priorities of elements in the set, and it can be used for algorithms like sorting and graph search. Today we're talking about the other heap, the one that's used for storage allocation. So don't get confused. Any questions so far?

All right, we're going to start with fixed-size allocation, since that's the easier case. We're going to assume that every piece of storage has the same size. Some of these blocks are used and some are unused, and among the unused blocks, we keep a list that we call the free list. Each block in the free list has a pointer to the next block in the free list, and since this memory is unused, we can use it to store the pointer as part of our storage allocator implementation. There's actually another way to do fixed-size allocation: instead of using a free list, you can keep a bit for each block saying whether or not it's free, and then when you allocate, you can use bit tricks. But today I'm going to talk about the free list implementation.
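As an aside, the bitmap alternative just mentioned might look like this (my own illustration, not code from the lecture; it assumes 64 fixed-size blocks so the whole bitmap fits in one machine word, and `__builtin_ctzll` is a GCC/Clang intrinsic that counts trailing zeros):

```c
#include <stddef.h>
#include <stdint.h>

#define NBLOCKS 64
#define BLOCK_SIZE 16

static char storage[NBLOCKS * BLOCK_SIZE];
static uint64_t free_bits = ~0ULL;  // bit i set => block i is free

// Allocate one block, using a bit trick to find the lowest free block.
void *bitmap_alloc(void) {
    if (free_bits == 0) return NULL;     // no free blocks
    int i = __builtin_ctzll(free_bits);  // index of lowest set bit
    free_bits &= free_bits - 1;          // clear that bit (mark block used)
    return storage + (size_t)i * BLOCK_SIZE;
}

// Free a block by setting its bit again.
void bitmap_free(void *p) {
    size_t i = (size_t)((char *)p - storage) / BLOCK_SIZE;
    free_bits |= 1ULL << i;
}
```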
To allocate one object from the free list, you set the pointer x to free, where free points to the first object in the free list. Then you set the free pointer to point to the next thing in the free list, so this is doing free = free->next. Finally, you return x, which is a pointer to the first object in the free list. Here's the animation: x is going to point to whatever free points to. You also need to check whether free is equal to null, because if free is null, there are no more free blocks in the free list, and the programmer should know this, so you should return a special value. Otherwise, we set the free pointer to the next thing in the free list and return x to the program, and now the program has a block of memory it can use. There's still a garbage pointer in the block we passed back, because we didn't clear it; the implementation of the storage allocator could decide to zero it out, or it can just pass the block back and leave it up to the programmer to do what they want with it. In the latter case, the programmer should be careful not to dereference that pointer.

How about deallocation? Say we want to free some object x. We just set the next pointer of x equal to free, so it points to the first thing in the free list, and then we set free equal to x. So now free points to x, and the x object that we wanted to free has a pointer to the first object in the original free list. Pretty simple. Any questions on this?

This sort of acts like a stack, in that the last thing you freed is the first thing you allocate, so you get temporal locality in that way. But unlike a stack, you can free any of the blocks, not just the last block you allocated. With a free list, allocating and freeing take constant time, because you're just adjusting some pointers.
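The allocate and free operations just described might look like this in C (a minimal sketch; `fl_alloc` and `fl_free` are illustrative names, and each block is assumed to be large enough to hold a pointer):

```c
#include <stddef.h>

// Each free block stores, in its own memory, a pointer to the next free block.
typedef struct block {
    struct block *next;
} block_t;

static block_t *free_list = NULL;

// Allocate one object: pop the head of the free list.
void *fl_alloc(void) {
    if (free_list == NULL) return NULL;  // no free blocks left
    block_t *x = free_list;
    free_list = free_list->next;
    return x;  // note: contents are garbage, not zeroed
}

// Free object p: push it onto the front of the free list.
void fl_free(void *p) {
    block_t *x = p;
    x->next = free_list;
    free_list = x;
}
```

Note that `fl_alloc` hands back memory whose contents are garbage, matching the discussion above about the leftover next pointer.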
It has good temporal locality, because as I said, the things you freed most recently are the things that will be allocated next. It has poor spatial locality, though, due to external fragmentation, which means your blocks of used memory are spread out all over the place in memory. This can be bad for performance, because it can increase the size of the page table, and it can also cause disk thrashing. Recall that whenever you access a page in virtual memory, the hardware has to do an address translation to the physical memory address. If your memory is spread out across many pages of virtual memory, you're going to have a lot of entries in the page table, because the page table stores the mapping between the virtual address of each page and its physical address. That complicates the page table and makes lookups in it less efficient, and if you have more pages than fit in main memory, it can cause disk thrashing, because you have to move pages in and out of disk.

The translation lookaside buffer, or TLB, can also be a problem. Does anybody know what a TLB is? Yes? Yeah, the TLB is essentially a cache for the page table: it caches the results of the most recent translations from virtual memory addresses to physical memory addresses, and looking up a translation in the TLB is much more efficient than going through the page table. If you have a lot of external fragmentation, there are a lot of pages you might access, which means it's more likely you'll get a TLB miss and have to go to the page table to look up the appropriate address. So that's why external fragmentation is bad.

Let's look at some ways to mitigate external fragmentation. One way is to keep a free list or a bitmap per disk page, and then when you want to allocate something, you allocate from the free list of the fullest page.
So you skew the memory that's being used onto as few pages as possible. When you free a block of storage, you just return it to the page on which that block resides, and if a page becomes completely empty, with no used items left on it, the virtual memory system can page it out without affecting program performance, because you're not going to access that page anyway.

This might seem counterintuitive: why do we want to skew the items onto as few pages as possible? Let's look at a simple example to convince ourselves that this is actually good for dealing with external fragmentation. Here I have two cases. In the first case, 90% of my blocks are on one page and 10% on the other; in the second case, half of my blocks are on each page. Now let's look at the probability that two random accesses hit the same page, assuming every random access goes to one of the two pages. In the first case, the probability that both accesses hit the first page is 0.9 times 0.9, and the probability that both hit the second page is 0.1 times 0.1; summing these gives 0.82, the probability that both random accesses hit the same page. In the other case, the probability that both accesses hit the first page is 0.5 times 0.5, and the same for the second page, which sums to 0.5, so there's only a 50% chance that two random accesses hit the same page. In the first case you have a higher chance that the two random accesses hit the same page, and that's why we want to skew the items as much as possible: it reduces the cost of external fragmentation. Any questions?

Okay, so that was fixed-size heap allocation, and obviously you can't use that for many programs, if you're allocating memory of different sizes.
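The arithmetic in the two-page example above can be checked directly (a small illustrative function, not from the lecture):

```c
// Probability that two independent random accesses hit the same page,
// where a fraction f of the blocks live on one page and 1 - f on the other.
double prob_same_page(double f) {
    return f * f + (1.0 - f) * (1.0 - f);  // both on page 1, or both on page 2
}
```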
So now let's look at variable-size heap allocation. We're going to look at one allocation scheme called binned free lists. The idea is to leverage the efficiency of free lists while accepting a bounded amount of internal fragmentation. Internal fragmentation is wasted space within a block: when you allocate possibly more space than you're using, there's some wasted space inside. In binned free lists, we're going to have a whole bunch of bins, and each bin stores blocks of a particular size. Here, bin k holds memory blocks of size 2^k, so I'm storing blocks whose sizes are powers of two. Why don't I just have a bin for every possible size? Why am I rounding up to powers of two here? Yes? Right: if I wanted a bin for every possible size, I would have way too many bins, and just the pointers to those bins wouldn't fit in memory. That's why I only use bins that store blocks of size 2^k.

Now let's look at how to allocate x bytes from a binned free list. First I look up the bin I should take a block from, which is bin ceiling of lg x; recall that lg is log base two. If that bin is non-empty, I can just return a block from it. However, if that bin is empty, I need to go to the next-highest non-empty bin, take a block from that bin, split it up into smaller chunks, and place them into smaller bins; I'll also get a chunk of the right size. For this example, say I want to allocate 3 bytes. The ceiling of lg 3 is 2, so I look in bin 2, but bin 2 is empty, so I look for the next non-empty bin, which is bin 4, and I'm going to split up this block into smaller powers of two.
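The bin lookup just described can be sketched as follows (a simple loop for clarity; a production allocator would likely use a bit trick such as a leading-zero count instead, and `bin_index` is my own name):

```c
#include <stddef.h>

// Bin k holds free blocks of size 2^k, so a request for x bytes is
// served from bin ceil(lg x): the smallest k with 2^k >= x.
int bin_index(size_t x) {
    int k = 0;
    while (((size_t)1 << k) < x)
        k++;
    return k;
}
```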
In particular, I find a non-empty bin k' greater than k and split up a block from it into sizes 2^(k'-1), 2^(k'-2), all the way down to 2^k. That is, I split it into blocks of all the powers of two less than 2^k' and greater than or equal to 2^k, which actually gives me two blocks of size 2^k, and one of those is returned to the program. So here I split up the block, place one of the smaller blocks in bin 3 and one in bin 2, and then I have another block that I just return to the program. Any questions on how this scheme works?

If no larger blocks exist, meaning all the bins above the one I'm looking at are empty, then I need to ask the OS for more memory, and after I get that memory, I split it up so I can satisfy the allocation request. In practice, this exact scheme isn't used; there are many variants. It turns out that efficiency is very important for small allocations, because there's not much work performed on these small pieces of memory, and the overheads of the storage allocation scheme can become a performance bottleneck. So in practice you usually don't go all the way down to blocks of size one; you might stop at blocks of size eight bytes, so that you don't have as much overhead. This does increase the internal fragmentation a little bit, because now you have some wasted space. You can also group blocks into pages, as I said before, so that all the blocks on the same page have the same size, and then you don't have to store the size of each block. Yes? Yeah, there are two commands you can use: one is called mmap, and the other is called sbrk. Those are system calls; you just call them, and the OS gives you more memory, which your storage allocator can then use.
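Requesting a big chunk from the OS with `mmap`, as just mentioned, might look roughly like this on a POSIX system (a hedged sketch; the chunk size and function names are my own choices):

```c
#define _DEFAULT_SOURCE  // expose MAP_ANONYMOUS on glibc
#include <stddef.h>
#include <sys/mman.h>

#define CHUNK_SIZE (1 << 20)  // ask the OS for 1 MiB at a time

// Get a large chunk of anonymous memory from the OS; the storage
// allocator is then responsible for carving it into smaller blocks.
void *os_get_chunk(void) {
    void *p = mmap(NULL, CHUNK_SIZE,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}

// Return a chunk to the OS when it is no longer needed.
void os_free_chunk(void *p) {
    munmap(p, CHUNK_SIZE);
}
```

`sbrk` is the older interface that grows the data segment; modern allocators mostly use `mmap`, and `munmap` is the corresponding way to give memory back to the OS.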
Yes? So the standard implementation of malloc internally uses these commands, mmap and sbrk, to get memory from the OS. The OS just gives you a huge chunk of memory; it doesn't split it up into smaller blocks. That's up to the storage allocator: it takes the big chunk of memory and breaks it up into smaller blocks. There are also similar commands for freeing memory back to the OS when you're not using it anymore.

So yeah, what I said was that you can keep blocks of different sizes on different pages, and then you don't have to store the size of each block: given a memory address, you can look up which page the block resides in, and for each page you keep one field that stores the size of the blocks on that page. This saves you the overhead of storing per-block size information. Yes? Yeah, if you do change the size of the blocks, you can't use this scheme; this is a variant where you don't change the size of the blocks, and if you do change the size, you have to change it for the entire page. Yes? There are many variants of memory allocators out there; this is just the simplest one I've described, and this exact scheme isn't the one used in practice. For example, instead of powers of two, some allocators use Fibonacci numbers to determine the different bin sizes. Any other questions? You'll actually get a chance to play around with implementing some allocators in Project 3 and Homework 6.

Let's briefly look at the storage layout of a program, that is, how the virtual memory address space is laid out. We have the stack all the way at the top, and the stack grows downward, with the high addresses up top and the low addresses below. Then we have the heap, which grows upward; the heap and the stack basically grow towards each other, and this space is dynamically allocated as the program runs.
Below the heap there are the bss segment, the data segment, and the text segment. The text segment stores the code for your program: when you load your program, the code is put into the text segment. Then there's the data segment, which stores the global variables and static variables, the constants you defined in your program; when you load your program, this data also has to be read from disk and stored into the data segment. Then there's the bss segment, which is used to store the uninitialized variables in your program; it's just initialized to zero at the start of your program, since the program hasn't initialized these variables, so it doesn't matter what we set them to. Then the heap is the memory we use when we're calling malloc and free, and then there's the stack, which we talked about.

In practice, the stack and the heap are never actually going to hit each other, because we're working with 64-bit addresses, so even though they grow towards each other, you don't have to worry about them colliding. Another point to note is that if you do a lot of precomputation in your program, for example generating huge tables of constants, those all have to be read from disk when you start your program and stored in the data segment, so having a lot of these constants makes your program's load time much higher. It's usually okay to do a little bit of precomputation, especially if it saves a lot of computation at run time, but in some cases it might actually be faster overall to just compute things in memory when your program starts, because then you don't have to read them from disk.

So here's a question. Since a 64-bit address space takes over a century to write at a rate of 4 billion bytes per second, we're effectively never going to run out of virtual memory.
So why don't we just allocate out of virtual memory and never free anything? Yes? Yeah, one issue is fragmentation: the blocks of memory you're using are not going to be contiguous, which makes it harder to find large blocks. This is the external fragmentation I mentioned earlier, and if you do this, the external fragmentation is going to be very bad. The performance of the page table will degrade tremendously, because the memory you're using is spread all over virtual memory; you'll use many pages, which leads to disk thrashing, where you have to do a lot of swaps of pages in and out of disk, and your TLB hit rate is going to be very low. Another reason is that you'll also run out of physical memory if you never free anything. So one of the goals of storage allocation is to use as little virtual memory as possible and to try to keep the used portions of memory relatively compact. Any questions so far?

Okay, let's do an analysis of the binned free list storage allocation scheme. Here's a theorem: suppose the maximum amount of heap memory in use at any time by a program is M. If the heap is managed by a binned free list allocator, then the amount of virtual memory consumed by heap storage is upper bounded by O(M lg M). Does anybody have an intuition for why this theorem could be true? How many bins do we have at most? Right: the number of bins is upper bounded by lg M, and each bin uses at most order M memory. Let's look at this more formally. An allocation request for a block of size x consumes 2^(ceiling of lg x) storage, which is upper bounded by 2x, so we're only wasting a factor of two in storage here. Therefore, the amount of virtual memory devoted to blocks of size 2^k is at most 2M, and since there are at most lg M free lists, the theorem holds just by multiplying the two terms.
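Collecting the argument into formulas (following the lecture's notation, with M the maximum memory in use and V the total virtual memory consumed by the heap):

```latex
2^{\lceil \lg x \rceil} \le 2x
\quad\Longrightarrow\quad
\text{(memory devoted to blocks of size } 2^k\text{)} \le 2M
\quad\Longrightarrow\quad
V \;\le\; \sum_{k=0}^{\lg M} 2M \;=\; 2M(\lg M + 1) \;=\; O(M \lg M).
```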
You can only have lg M free lists because M is the maximum amount of memory you're using, and therefore your largest bin is only going to hold blocks of size M. It also turns out that the binned free list allocation scheme is Theta(1)-competitive with the optimal allocator, where the optimal allocator knows all the memory requests in the future, so it can do a lot of clever things to optimize the memory allocation process. But the binned free list is only a constant factor worse than the optimal allocator, assuming we don't coalesce blocks together, which I'll talk about on the next slide. It turns out that this constant is 6; Charles Leiserson has a paper describing this result, and there's also a lower bound of 6, so the bound is tight.

Coalescing is when you splice smaller blocks together into a larger block. You can do this when you have two free blocks that are contiguous in memory, which lets you combine them into one larger block. Binned free lists can sometimes be heuristically improved by coalescing, and there are many clever schemes for trying to find adjacent blocks efficiently; there's one called the buddy system, where each block has a buddy that's contiguous in memory. But it turns out these schemes, especially the buddy system, have pretty high overhead, so they're usually slower than the standard binned free list algorithm. There are no good theoretical bounds that prove the effectiveness of coalescing, but it does seem to work pretty well in practice at reducing fragmentation, because heap storage tends to be deallocated as a stack or in batches. What I mean by this is that the objects you free tend to be close together in memory: if you deallocate as a stack, then all of the freed objects are going to be near the top of the stack.
And when you deallocate in batches, that's when you deallocate a whole bunch of things that you allocated together in your program. For example, if you have a graph data structure and you allocated the data for the vertices all at the same time, then when you deallocate them all together, this gives you a chunk of contiguous memory that you can splice together.

Okay, now let's look at garbage collection, which is going to be slightly different from storage allocation. The idea of garbage collection is to free the programmer from having to free objects. Languages like Java and Python have built-in garbage collectors, so the programmer doesn't have to free things themselves, and this makes it easier to write programs, because you don't have to worry about freeing, dangling pointers, and so forth. Garbage collectors can identify and recycle the objects that the programmer can no longer access, so that these memory objects can be used for future allocations. And besides using a built-in garbage collector, you can also create your own garbage collector in C, which doesn't have one: if you have a particular application, you can create a special-purpose garbage collector that might be more efficient than a general-purpose one.

Yes? Why is it not order M? Because each of the bins could use up to order M memory. If you don't do coalescing, I could do a bunch of small allocations that use up all of my blocks, which then all go into smaller bins; then when I want to allocate something larger, I can't just splice those together, and I have to make another memory allocation. So if you order your memory requests in a certain way, you can make it so that each of the bins holds order M memory.

Okay, for garbage collection, let's go over some terminology. There are three types of memory objects: roots, live objects, and dead objects. Roots are objects that are directly accessible by the program: global variables, things on the stack, and so on.
Then there are live objects, which are reachable from the roots by following pointers, and finally there are dead objects, which are inaccessible via any sequence of pointers; these can be recycled, because the programmer can no longer reach them. In order for garbage collection to work in general, the garbage collector needs to be able to identify pointers, and this requires strong typing. Languages like Python and Java have strong typing, but C doesn't: when you have a value, you don't actually know whether it's a pointer, because a pointer just looks like an integer; it could be either a pointer or an integer, and you can cast things in C. You can also do pointer arithmetic in C. In contrast, in other languages, once you declare something to be a pointer, it's always going to be a pointer, and for languages with strong typing, this makes it much easier to do garbage collection. You also need to prohibit pointer arithmetic on these pointers, because if you do pointer arithmetic and change the location a pointer refers to, the garbage collector no longer knows where the memory region starts. In C you sometimes do pointer arithmetic, and that's why you can't have a general-purpose garbage collector in C that works well.

Let's first look at one simple form of garbage collection, called reference counting. The idea is that for each object, I keep a count of the number of pointers referencing that object, and if the count ever goes to zero, I can free that object, because there are no more pointers through which I can reach it. So here I have a bunch of roots, which are directly accessible by the program, and then a bunch of objects that can be reached by following pointers starting from the roots, and each of them has a reference count indicating how many incoming pointers it has.
Now let's say I change one of these pointers: initially I had a pointer going to here, but now I change it so that it goes down here. What happens is that I have to adjust the reference counts of both objects. This object here no longer has any incoming pointers, so I decrement its reference count, which goes to zero, and for this one I increment its reference count, so now it's three. Now I have an object with a reference count of zero, and with the reference counting algorithm I can free it. But when I free this object, it has pointers to other objects, so I also have to decrement the reference counts of those other objects. I decrement those counts, and it turns out this object now also has a reference count of zero, so I can free it as well. In general, I just keep doing this process until the reference counts stop changing, and whenever I encounter an object with a reference count of zero, I can free it immediately, and the memory I freed can be recycled for future allocations. Questions on how the reference counting procedure works?

There's one issue with reference counting. Does anybody see what it is? Yes? Yes: what if an object has a reference to itself, or more generally, what if there's a cycle? You can't ever garbage collect a cycle when you're using reference counts. Here we have a cycle of length three; they all have a reference count of one, but you can never reach this cycle by following pointers from a root, so you can never delete any object in the cycle, and the reference counts are always going to be nonzero. Furthermore, any object that's pointed to by objects in this cycle cannot be garbage collected either.
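The increment/decrement bookkeeping just walked through can be sketched as a toy illustration (the names are mine; note the recursive decrement when a count hits zero, and note that, exactly as discussed, a cycle of objects would keep each other's counts nonzero forever):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_REFS 4

// A toy reference-counted object: a count plus its outgoing pointers.
typedef struct obj {
    int rc;                     // number of incoming pointers
    struct obj *out[MAX_REFS];  // pointers this object holds (NULL = unused)
    bool freed;                 // stands in for actually recycling the memory
} obj_t;

void rc_inc(obj_t *o) { if (o) o->rc++; }

// Decrement o's count; when it hits zero, "free" o and recursively
// decrement everything o points to.  Objects in a cycle never reach
// zero this way, which is exactly the limitation described above.
void rc_dec(obj_t *o) {
    if (o == NULL) return;
    if (--o->rc == 0) {
        o->freed = true;
        for (int i = 0; i < MAX_REFS; i++)
            rc_dec(o->out[i]);
    }
}
```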
cannot be garbage collected either, because you can't garbage collect a cycle, so all the pointers going out of the objects in the cycle are always going to be there. So there could be a lot of objects downstream from this object here that can't be garbage collected, and this makes it very bad. And as we all know, uncollected garbage stinks, so we don't want that. So let's see if we can come up with another garbage collection scheme. It turns out that reference counting is actually pretty good when it does work, because it's very efficient and simple to implement. So if you know that your program doesn't have cycles among pointers, then you can use a reference counting scheme. There are some languages, like Objective-C, that have two different types of pointers, strong pointers and weak pointers. If you're doing reference counting in a language with these two types of pointers, the reference count only stores the number of incoming strong pointers, and therefore if you define the pointers in a cycle to be weak pointers, they're not going to contribute to the reference count, and you can still garbage collect. However, programming with strong and weak pointers can be kind of tricky, because you need to make sure that you're not dereferencing something that a weak pointer points to, since that thing might have been garbage collected already. So you need to be careful, and C doesn't have these two types of pointers, so we need another method of garbage collection to make sure we can collect these cycles. We're going to look at two more garbage collection schemes: the first one is called mark-and-sweep, and the second one is called stop-and-copy. First we need to define a graph abstraction. Let's say we have a graph with vertices V and edges E. The vertex set V contains all of the objects in memory, and the edges E are directed edges between objects: there's a directed edge from object A to object B if object A
has a pointer to object B. And then, as we said earlier, the live objects are the ones that are reachable from the roots, so we can use a breadth-first-search-like procedure to find all the live objects: we just start our breadth-first search from the roots, and we'll mark all the objects that are reachable from the roots, and then everything else that isn't reached is available to be reclaimed. We're going to have a FIFO queue, a first-in-first-out queue, for our breadth-first search. This is represented as an array, and we have two pointers, one to the head of the queue and one to the tail of the queue. So let's look at this code, which is essentially a breadth-first search. We're first going to go over all of the vertices in our graph, and we're going to check whether each vertex v is a root. If it is a root, we're going to set its mark to be one, and we're going to place the vertex onto the queue; otherwise, we're going to set the mark of v to be zero. Then, while the queue is not empty, we're going to dequeue the first thing from the queue; let that be u. Then we're going to look at all the outgoing neighbors of u, so these are vertices v such that there's a directed edge from u to v. We're going to check whether v's mark is equal to zero; if it is, that means we haven't explored it yet, so we'll set its mark to be one and place it onto the queue. And if the neighbor has already been explored, then we don't have to do anything. So let's illustrate how this algorithm works on this simple graph here. For this example, I'm just going to assume that I have one root vertex, R. In general I can have multiple roots, and I would just place all of them onto the queue at the beginning, but for this example I'm just going to have a single root. So I'm going to place it onto the queue, at the location where the tail pointer points to, and after I enqueue it, I increment the tail pointer. Now I'm going to take the first thing off of my queue, which is R, and
I'll explore its neighbors. The neighbors here are B and C; neither of them has been marked yet, so I'm going to mark them, and I'm going to indicate the marked vertices with the shaded blue, and I'll place them onto the queue. Now I'm going to take the next thing, B, and check its neighbors. It only has the neighbor C, but C is already on the queue and already marked, so I don't have to do anything. Now I dequeue C, and C has neighbors D and E, so I place them onto the queue. D doesn't have any outgoing neighbors, so I don't have to do anything. When I dequeue E, it has the neighbor F; when I dequeue F, it has the neighbor G; and when I dequeue G, it doesn't have any neighbors. So now my queue is empty and my breadth-first search procedure finishes. At this point I've marked all the objects that are accessible from the root, and all the unmarked objects can now be garbage collected, because there's no way I can access them in the program. So the mark-and-sweep procedure has two stages: the first stage is called the mark stage, where I use a breadth-first search to mark all the live objects, and the sweep stage will scan over memory to free the unmarked objects. So this is a pretty simple scheme. There is one issue with this scheme; does anybody see what the possible issue is? Yes? Yeah, so that's one issue: you have to scan over all of memory. There are some variants of mark-and-sweep that keep track of just the allocated objects, so you only have to scan over those instead of the entire memory space. Besides that, are there any other possible issues? Yes? Right, so let's assume that we do have strong typing. Any other possible limitations? Yes? So for the scheme that I described, you have to look over all of the things that don't have references to them, so that is another overhead. So those are all issues. The issue I want to get at is that the mark-and-sweep algorithm that I presented here doesn't deal with fragmentation, so it doesn't
compact the live objects to be contiguous in memory; it just frees the ones that are unreachable, but it doesn't do anything with the ones that are reachable. So let's look at another procedure that does deal with fragmentation; this is called the stop-and-copy garbage collection procedure. At a high level it's pretty similar to the mark-and-sweep algorithm: we're still going to use a breadth-first search to identify all the live objects. But if you look at how this breadth-first search is implemented, is there any information you can use to try to get the live objects to be contiguous in memory? Does anybody see anything here that we can use to try to reduce fragmentation? Yes? Yes, so the answer is that the objects that we visited are contiguous on the queue. In the mark-and-sweep algorithm, I just placed the IDs of the vertices onto the queue, but if I place the actual objects onto the queue instead, then I can just use my queue as my new memory, and all the objects that are unreachable will be implicitly deleted. So this procedure will deal with external fragmentation. Let's see how this works. We're going to have two separate memory spaces, the FROM space and the TO space. In the FROM space, I'm just going to do allocation and freeing until it becomes full: when I allocate something, I place it at the end of this space, and when I free something, I just mark it as free, but I don't compact it out yet. When the FROM space becomes full, I'm going to run my garbage collection algorithm, and I'm going to use the TO space as my queue when I do my breadth-first search. After I run my breadth-first search, all the live objects are going to appear in the TO space in contiguous memory, since I used the TO space as my queue. And then I just keep allocating from the TO space, and also marking things as deleted when I free them, until the TO space becomes full. Then I do the same thing, but I swap the roles of the TO and the FROM
spaces. So this is called the stop-and-copy algorithm. There is one problem with this algorithm which we haven't addressed yet; does anybody see what the potential problem is? Yes? Yeah, so that's one good observation: if nothing is dead, then you're wasting a lot of work, because you have to copy everything. Although with the mark-and-sweep algorithm you still do some copying, you're not copying the entire objects, just the IDs. There's actually a correctness issue here; does anybody see what the correctness issue is? Yes? Yeah, so the answer is that if you had pointers that pointed to objects in the FROM space, then when you move your objects to the TO space, those pointers aren't going to be correct anymore. If I had a pointer to a live object before, and I moved that live object to a different memory address, I need to also update the pointer. So let's see how we can deal with this. The idea is that when an object is copied to the TO space, we're going to store a forwarding pointer in the corresponding object in the FROM space, and this implicitly marks the object as moved. And then, when I dequeue an object from the FIFO queue in my breadth-first search in the TO space, I'm going to update all of its pointers by following these forwarding pointers. So let's look at an example of how this works. Let's say I'm executing the breadth-first search, and this is my current queue. When I dequeue an element from my queue, I'm first going to place the neighboring objects that haven't been explored yet onto the queue. Here it actually has two neighbors, but the first one has already been placed on the queue, so I can ignore it; the second one hasn't been placed on the queue yet, so I place it onto the queue. This object also has a pointer to something in the FROM space, which I'm not going to change at this time, but I am going to store a forwarding pointer from the object that I moved from the FROM space to the TO space. So now
it has a pointer that tells me the new address. And then, for the object that I just dequeued, I'm going to follow the forwarding pointers of its neighbors, and that will give me the correct addresses. So I'm going to update the pointers by just following the forwarding pointers: the first pointer pointed to this object, which has a forwarding pointer to here, so I just make it point to this object in the TO space, and similarly for the other pointer, I'm going to make it point to this object. So that's the idea of how to adjust the pointers. One question is: why can't we just adjust the pointers when we enqueue an object? Why do I have to adjust the pointers when I dequeue an object? Yes? Yeah, so the answer is that when you enqueue an object, you don't actually know where its neighbors are going to reside in the TO space. You only know that when you dequeue the object, because by the time you dequeue the object, you must have explored its neighbors, and therefore you can follow these forwarding pointers. So, any questions on this scheme? So how much time does it take to do the stop-and-copy procedure? Let's say n is the sum of the number of objects and the number of pointers I have. How much time would it take to run this algorithm? Yeah, so it's just going to be linear time, because we're running a breadth-first search, and that takes linear time. You also have to do work in order to copy the objects to the TO space, so you do work proportional to the number of bytes that you're copying over. So it's linear in the number of objects, the number of pointers, and the total amount of space you're copying over. The advantage of this scheme is that you don't actually need to go over the objects that aren't reachable, because those are going to be implicitly deleted, since they're not copied over to the TO space. Whereas in the mark-and-sweep procedure, you have to actually go through your
entire memory and free all the objects that aren't reachable. So this makes the stop-and-copy procedure more efficient, and it also deals with the external fragmentation issue. So what happens when the FROM space becomes full? What you do is request a new heap space equal to the used space; you're just going to double the size of your FROM space, and you consider the FROM space to be full when the newly allocated space becomes full. So essentially what you're going to do is double the space, and when that becomes full, you double it again, and so on. With this method, you can amortize the cost of garbage collection against the size of the new heap space, so it's going to be amortized constant overhead per byte of memory, and this is assuming that the user program is going to touch all of the memory that it allocates. Furthermore, the virtual memory space required by this scheme is just a constant times the optimal, if you locate the FROM and TO spaces in different regions of virtual memory so that they can't interfere with each other. The reason why it's a constant times the optimal is that you lose one factor of two from maintaining two separate spaces, and another factor of two comes from the fact that you're doubling the size of your array when it becomes full, so up to half of it will be unused; but it's a constant times optimal, since we're just multiplying constants together. Similarly, when your FROM space becomes relatively empty, for example if it's less than half full, you can also release memory back to the OS, and the analysis of the amortized constant overhead is similar. Okay, any other questions? Okay, so there's a lot more that's known, and also unknown, about dynamic storage allocation; I've only scratched the surface of dynamic storage allocation today. There are many other topics; for example, there's the buddy system for doing coalescing.
There are many variants of the mark-and-sweep procedure, with optimizations to improve its performance. There's generational garbage collection, which is based on the idea that many objects are short-lived: a lot of objects are going to be freed pretty close to the time when you allocate them, and the ones that aren't freed quickly tend to be pretty long-lived. The idea of generational garbage collection is that instead of scanning your whole memory every time, you do work on just the younger objects most of the time, and once in a while you try to collect the garbage from the older objects, because those tend not to change that often. There's also real-time garbage collection. The methods I talked about today assume that the program isn't running while the garbage collection procedure is running, but in practice you might want your garbage collector running in the background while your program is running. This can lead to correctness issues, because the algorithms I just described assume that the graph of objects and pointers isn't changing, and when the objects and pointers are changing, you need to make sure that you still get a correct answer. Real-time garbage collection tends to be conservative, so it doesn't always free everything that's garbage, but the things that it does decide to free can actually be reclaimed. There are various techniques to make real-time garbage collection efficient; one possible way is that instead of having just one FROM and TO space, you can have multiple FROM and TO spaces, and you work on one of the spaces at a time, so that any single collection doesn't take that long; you do the garbage collection incrementally throughout your program. There's also multithreaded storage allocation and parallel garbage collection: this is when you have multiple threads running, and you have to figure out how to allocate memory and also how to collect garbage in the background.
So the algorithms become much trickier, because there are multiple threads running and you have to deal with races and correctness issues and so forth, and that's actually the topic of the next lecture. So in summary, these are the things that we talked about today. We have the most basic form of storage, which is the stack. The limitation of the stack is that you can only free things at the top of the stack; you can't free arbitrary things in the stack. But it's very efficient when it works, because the code is very simple and it can be inlined, and in fact this is what the C calling procedure uses: it places local variables and the return address of the function on the stack. The heap is the more general form of storage, but it's much more complicated to manage, and we talked about various ways to do allocation and deallocation for the heap: fixed-size allocation using free lists, and variable-size allocation using bin free lists, and many variants of these ideas are used in practice. Then there's garbage collection, which is where you want to free the programmer from having to free objects; garbage collection is supported in languages like Java and Python. We talked about various ways to do this: reference counting, which suffers from the limitation that it can't free cycles, and mark-and-sweep and stop-and-copy, which can free cycles. The mark-and-sweep procedure doesn't deal with external fragmentation, but the stop-and-copy procedure does. We also talked about internal and external fragmentation. External fragmentation is when your blocks of used memory are spread all over the place in virtual memory; this can cause performance issues like disk thrashing and TLB misses. Then there's internal fragmentation, where you're not actually using all of the space in the block that you allocate. For example, in the bin free list algorithm you do have a little bit of internal fragmentation, because you're always rounding up to the nearest power of two greater than the size you want, so
you're wasting up to a factor of two in space. In Project 3 you're going to look much more at these storage allocation schemes, and you'll also get to try some of these in Homework 6. So, any other questions? Okay, so that's all I have for today's lecture.
seg fault in and then you go debug your program so for efficiency reasons the stack overflow isn't actually checked okay and then it returns a pointer to the beginning of the memory that you just allocated so it's just SP minus X so that's pretty pretty simple and it fact this is how the see call stack works it also uses a stack discipline so what you call a function you you save local variables and registers on the stack and you also save the return address of the function that's calling another function and that when you return you pop things off the stack so you can also free things from the stack so what you do is you just decrement SP by XV one of free x bytes so here we just decremented SP by X and everything after SP is now considered to be free and again if you're careful you would check for a stack underflow but again the compiler usually doesn't do this because if you do have a stack underflow there's a bug in your program and you'll get a seg fault and you should go fix it so allocating and freeing in a stack takes constant time because all you have to do is manipulate the stack pointer so it's pretty efficient however you have to free consistently with the stack discipline so the stack has limited applicability does anybody see why you can't do everything with just a stack so what's one limitation of the stack yes I so so it turns out that you can actually a pass a compile-time constant to make the stack bigger if you wanted to there's actually a more fundamental limitation of the stack yes yeah so the answer is that you can only free the last thing that you allocated so there's no way to free anything in the middle of this use region here you have to free the last thing here because the stack doesn't keep track of the objects in the middle of this used region so there's limited applicability but it's great when it works because it's very efficient and all this code can essentially be in line you don't have to have any function calls it's very fast and 
it also turns out that you can allocate on the call stack using this function called aleca it's actually not a function it's just a keyword that the compiler recognizes and it will transform it to instructions that manipulate the stack however this function is now deprecated because it turns out that the compiler is actually more efficient when you're dealing with these fixed size frames if you just allocate a pointer on the stack that points to some piece of memory on the heap but nevertheless if you want to allocate on the call stack you can call this aleca function but you should check that it's doing the right thing since it's now deprecated and the implementation is compiler dependent so what's another type of storage besides the stack so you can't do everything with a stack so what else can we use yes yes so we also have the heap which is more general than the stack so a stack looks very nice and tidy and it's very efficient to use the stack but it doesn't work for everything so that's why we have to heap and the heap is much more general that it's very messy it's very hard to organize this and work with it efficiently and for the rest of this lecture I'm going to be talking about how to manage memory in the heap and I found these pictures on Stack Overflow so maybe they're biased towards stacks okay so how do we do heap allocation so let's first start with fixed size heap allocation where we assume that all of the objects that we're dealing with are of the same size in general this isn't true but let's just start with this simpler case first okay so as I said earlier if you use malloc and free and C then you're doing heap allocation C++ has the new and delete operators which works similarly to malloc and free they also call the object constructor and destructor and the C functions don't do that and unlike Java and Python C and C++ don't provide a garbage collector so the programmer has to manage memory him or herself and this is one of the reasons for the 
efficiency of C and C++ because there's no garbage collector running in a background however this makes it much harder to write correct programs and C because you have to be careful of memory leaks dangling pointers and double freeing so a memory leak is if you allocate something you forget to free it in your program peace of running and allocating more and more stuff but without freeing it eventually you're gonna run out of memory and your program is going to crash so you need to be careful of memory leaks dangling pointers are pointers to pieces of memory that you have already freed and if you try to dereference a dangling pointer the behavior is going to be undefined so maybe you'll get a segmentation fault maybe you won't see any until later on in your program because that memory might have been reallocated for something else and it's actually illegal to dereference that memory so Dan blowing pointers are very annoying when you're using C if you're lucky you'll get a sack fault right away and you can go fix your bug but sometimes these are very hard to find there's also double freeing so this is when you free something more than once and again this will lead to undefined behavior maybe I'll get a segfault or maybe that piece of memory was allocated for something else and then when you free it again it's actually illegal but your program is going to be incorrect so you need to be careful that you don't free something more than once and this is why some people for prefer to use a language like Java and Python that provide these built-in garbage collectors however these languages are less efficient because they have a general-purpose garbage collector running in the background so in this class we're going to use C because we want to be able to write the fastest programs as possible so we need to study how to manage memory and there are some tools you can use to reduce the number of memory bugs you have in your program so there's memory checkers like address 
sanitizer and Val brain which can assist you in finding these pernicious bugs so address sanitizer is a compiler instrumentation tool when you compile your program you plot you pass a flag and then when you run your program it's going to report any it's going to report possible memory bugs you have in your program and then valgrind it works directly off the binary so you don't need to do anything special when you compile it you can just pass your binary to valgrind and if there's a memory bug it will it might find it but valgrind tends to be a slower than now just saying sanitizer and it tends to catch fewer bugs because it knows less about the program then address sanitizer and address sanitizer it sees the source code of the program and has more information whereas valgrind just works directly off the binary also don't confuse the heap with a heap data structure that you might have seen before in your algorithms or data structures courses so these are two different concepts the heap data structure in your algorithms course it was a data structure used to represent a priority queue where you can officially extract the highest priority element and you can also update the priorities of elements in the set and this could be used for algorithms like sorting or graph search but today we're going to be talking about another heap which is the heap that's used for storage allocation so don't get confused so any questions so far okay all right so we're gonna first start with fixed size allocation since that's the easier case so we're gonna assume that every piece of storage has the same size some of these blocks are used and some of them are unused and among the unused blocks we're going to keep a list that we call the free list and each block and this free list has a pointer to the next block in the free list and since this memory is unused we can actually use the memory to store a pointer as part of our storage allocator implementation there's actually another way to do 
fixed size allocations instead of using a free list you could actually place a bit for each block saying whether or not it's free and then when you do allocation you can use bid tricks but today I'm just I'm gonna talk about the free list implementation so to allocate one object from the free list you set the pointer X to be free so free is pointing to the first object in this free list then you set the free pointer to point to the next thing in the free list so this is doing free equal to the next pointer of free and then finally you return X which is a pointer to the first object in the free list so here's an animation so X is gonna point to what free points - you also need to check if free is equal to null because if free is equal to no that means there are no more free blocks in the free list and the programmer should know this you should return a special value otherwise we're going to set the free pointer to point to the next thing in the free list and then finally we return X to the program and now the program has a block of memory that can use there is still a garbage pointer in this block that we pass back to the program because we didn't clear it so the implementation of the storage alligator could design decide to zero this out or can just pass it back to the program and leave it up to the programmer to do whatever it wants with it so that in the latter case the programmer should be careful not to try to dereference this pointer okay so how about D allocation so let's say we want to free some object X what we do is we just set the next pointer of X to be equal to free so it's going to point to the first thing in the free list and then we set free equal to X so right so now free is pointing to X in this X object that we want it to free now as a pointer to the first object in the original free list so pretty simple any any questions on this so this sort of acts like a stack in that the last thing that you freed is going to be the first thing that you 
allocates you get temple locality and that in that way but unlike a stack you can actually free any of the blocks and not just the last block that you allocated so with a free list allocating and freeing take constant time because you're just adjusting some pointers as good temporal locality because as I said the things that you freed most recently are going to be the things that are going to be allocated it has poor spatial locality due to external fragmentation and external fragmentation means that your blocks of used memory are spread out all over the place in the space of all memory and this can be bad for performance because it can increase the size of the page table and it can also cause dis thrashing so if you recall whenever you access a page in virtual memory it has to do an address translation to physical to the physical memory address and if if your memory spread out across many pages in virtual memory then you're gonna have a lot of entries in the page table because the page table is storing this mapping between the virtual memory address of the page and the physical memory address of the page so it's gonna complicate the page table make it less efficient to do lookups in it and then if you have more pages than you can fit in your main memory then this hidden cause disk disk thrashing because you have to move pages in and out of disk the translation lookaside buffer TLB can also be a problem does anybody know what a TLB is yes yeah so the TOB is essentially a cache for the page table so it will cache the results of the translation from virtual memory addresses to physical memory addresses for the most recent translations and looking up a translation in the TLB is much more efficient than going through the page table and if you have if you have a lot of external fragmentation then you have a lot of pages that you might access and this means that when you go to the TLB it's more likely you'll get a TLB miss and you have to go to the page table to look up 
the appropriate address so that's why external fragmentation is bad so let's look at some ways to mitigate external fragmentation so one way to do this is to keep a free list or a bitmap per disk page and then when you want to allocate something you allocate from the free list of the fullest page so you sort of skew the memory that's being used to as few pages as possible and when you free a block of stories you just you just return it to the page on which that block resides and if a page becomes completely empty there are no more items that are used on that page then the virtual memory system can page it out without affecting the program perform is because you're not going to access that page anyways so this might seem counterintuitive why do we want to skew the items to as few pages as possible so let's look at a simple example to convince ourselves why this is actually good for dealing with external fragmentation so here I have two cases in the first case I have 90% of my blocks on one page and 10% of the blocks on the other page in the second case I have half of my blocks on one page in half on the other page so now let's look at the probability that two random accesses will hit the same page so let's assume that all the random accesses are going to go to one of the two pages so in the first case the building up both of that disease hit the first page is going to be 0.9 times 0.9 and then the probability that they both hit the second page is 0.1 times 0.1 and if you sum this up you get point a 2 that's the probability that both of the random access these are going to hit the same page in the other case the probability that both of the accesses hit the same page the first page it's gonna be 0.5 times 0.5 the second page is also gonna be 0.5 times 0.5 so that sums to 0.5 and that means that there's only a 50% chance that two random acts diseases are gonna hit the same page so in the first case you actually have a higher chance that the two random acts disease hit 
And that's why we want to skew the items as much as possible: so that we can reduce the external fragmentation. Any questions? Okay, so that was fixed-size heap allocation, and obviously you can't use that for many programs, if you're allocating memory of different sizes. So now let's look at variable-size heap allocation. We're going to look at one allocation scheme called bin free lists. The idea is to leverage the efficiency of free lists while also accepting a bounded amount of internal fragmentation. Internal fragmentation is wasted space within a block: when you allocate possibly more space than you're using, there's some wasted space in there. In bin free lists, we're going to have a whole bunch of bins, and each bin stores blocks of a particular size; here, bin k holds memory blocks of size 2^k, so I'm storing blocks whose sizes are powers of two. So why don't I just have a bin for every possible size? Does anybody know why I'm rounding up to powers of two here? Yes? Right: if I wanted a bin for every possible size, I would have way too many bins, and just the pointers to these bins wouldn't fit in memory. That's why I'm only using bins that store blocks of size 2^k. Now let's look at how to allocate x bytes from a bin free list. What I'm going to do is look up the bin from which I should take a block, and to get that I take the ceiling of lg x; recall that lg is log base 2. If that bin is non-empty, then I can just return a block from that bin. However, if that bin is empty, then I need to go to the next highest bin that's non-empty, take a block from that bin, split it up into smaller chunks, and place those into the smaller bins; I'll also get a chunk that is of the right size. For this example, let's say I wanted to allocate 3 bytes: the ceiling of log base
two of x is 2, so I'd go to bin 2, but bin 2 is empty, so I need to look for the next bin that's not empty, and that's going to be bin 4, and I'm going to split up this block into smaller powers of two. In particular, I find a non-empty bin k' greater than k and split up a block into sizes 2^(k'-1), 2^(k'-2), all the way down to 2^k. So I split it into blocks of all the powers of two less than 2^k' and greater than or equal to 2^k, and I actually end up with two blocks of size 2^k, one of which is returned to the program. So here I split up this block, place one of the smaller blocks into bin 3 and one into bin 2, and then I have another block here that I just return to the program. Any questions on how this scheme works? Okay. And if no larger blocks exist, meaning all the bins higher than the bin I'm looking at are empty, then I need to go to the OS to request more memory, and after I get that memory I split it up so I can satisfy my allocation request. In practice, this exact scheme isn't used; there are many variants of it. It turns out that efficiency is very important for small allocations, because there's not that much work performed on these small pieces of memory, and the overheads of the storage allocation scheme could become a performance bottleneck. So in practice you usually don't go all the way down to blocks of size 1; you might stop at blocks of size 8 bytes so that you don't have that much overhead. This does increase the internal fragmentation by a little bit, because now you have some wasted space. And then, one second, you can also group blocks into pages, as I said before, so that all of the blocks on the same page have the same size, and then you don't have to store the size of each block. Yes?
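The allocation path just described, take a block from bin ⌈lg x⌉ and split one from a higher bin when that bin is empty, might be sketched like this. It's not the lecture's actual code: the node layout, the minimum block size of one pointer (so every free block can hold its next link), and the function names are my own simplifications, and a real allocator would request memory from the OS when every bin is empty instead of returning NULL:

```c
#include <stddef.h>

#define NUM_BINS 32

// A free block's first word is reused as a pointer to the next free
// block in the same bin.
typedef struct node { struct node *next; } node_t;

static node_t *bins[NUM_BINS];  // bins[k] holds free blocks of size 2^k

// Bin index for a request of x bytes: ceil(lg x), with a minimum
// block size of sizeof(node_t) so every free block can hold a link.
int bin_index(size_t x) {
  if (x < sizeof(node_t)) x = sizeof(node_t);
  int k = 0;
  while (((size_t)1 << k) < x) k++;
  return k;
}

void bin_free(void *p, size_t x) {
  node_t *n = p;
  n->next = bins[bin_index(x)];
  bins[bin_index(x)] = n;
}

// Allocate x bytes: pop from bin k, or split a block from the next
// non-empty bin k' into pieces of size 2^(k'-1), ..., 2^k.
void *bin_alloc(size_t x) {
  int k = bin_index(x);
  int kp = k;
  while (kp < NUM_BINS && bins[kp] == NULL) kp++;
  if (kp == NUM_BINS) return NULL;  // a real allocator would ask the OS here
  node_t *block = bins[kp];
  bins[kp] = block->next;
  while (kp > k) {                  // peel off the upper half repeatedly
    kp--;
    node_t *half = (node_t *)((char *)block + ((size_t)1 << kp));
    half->next = bins[kp];
    bins[kp] = half;
  }
  return block;
}
```

Each split leaves one half in a lower bin and continues splitting the other half, so the pieces land exactly in bins k' - 1 down to k, as in the example.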
Those are mmap and sbrk: system calls that you just call, and then the OS gives you more memory that your storage allocator can use. Yes? No, the standard implementation of malloc internally uses these commands, mmap and sbrk, to get memory from the OS. The OS just gives you a huge chunk of memory; it doesn't split it up into smaller blocks. That's all up to the storage allocator: the OS gives you a big chunk of memory, and the storage allocator breaks it up into smaller blocks. There are also similar commands for freeing memory back to the OS when you're not using it anymore. Yes? So what I said was that you can keep blocks of different sizes on different pages, so then you don't actually have to store the size of each block: you can just look up which page the block resides on from its memory address, and for each page you have one field that stores the size of the blocks on that page. This saves you the overhead of storing per-block size information. Yes? Yeah, if you do change the size of the blocks, then you can't use this scheme; this is a variant where you don't change the size of the blocks, and if you do change the size, you have to change it for the entire page. Yes? So there are many variants of memory allocators out there; this is just the simplest one that I described. This exact scheme isn't the one used in practice; there are many variants, like some allocators that, instead of using powers of two, use Fibonacci numbers to determine the different bins. Any other questions? You actually get a chance to play around with implementing some allocators in project 3 and homework 6. So now let's briefly look at the storage layout of a program.
So this is how our virtual memory address space is laid out. We have the stack all the way at the top, and the stack grows downward, so the high addresses are up top and the low addresses below. Then we have the heap, which grows upward; the heap and the stack basically grow toward each other, and this space is dynamically allocated as the program runs. Then there are the BSS segment, the data segment, and the text segment, which all reside below the heap. The text segment stores the code for your program, so when you load your program, its code goes into the text segment. Then there's the data segment, which stores all the global variables, static variables, and constants that you defined in your program; when you load your program, you also have to read this data from disk and store it into the data segment. Then there's the BSS segment, which is used to store all the uninitialized variables in your program; it's just initialized to zero at the start of your program, since your program hasn't initialized them, so it doesn't matter what we set them to. And then the heap: this is the memory we're using when we're calling malloc and free. And then we have the stack, which we talked about. In practice, the stack and the heap are never actually going to hit each other, because we're working with 64-bit addresses, so even though they're growing toward each other, you don't have to worry about them colliding. Another point to note is that if you're doing a lot of precomputation in your program, for example generating huge tables of constants, those all have to be read from disk when you start your program and stored in the data segment. So if you have a lot of these constants, it's actually going to make your program's loading time much higher. However, it's usually okay to do a little bit of precomputation, especially if you can save a lot of
computation at run time. But in some cases it might actually be faster overall to just compute things in memory when you start your program, because then you don't have to read stuff from disk. So here's a question: since a 64-bit address space takes over a century to write at a rate of 4 billion bytes per second, we're effectively never going to run out of virtual memory, so why don't we just allocate out of virtual memory and never free anything? Yes? Yeah, so one thing is that you have this issue of fragmentation: the blocks of memory that you're using are not going to be contiguous in memory, and that makes it harder to find large blocks. This is called external fragmentation, which I mentioned earlier. If you do this, external fragmentation is going to be very bad, and the performance of the page table is going to degrade tremendously, because the memory that you're using is going to be spread all over virtual memory and you're going to use many pages. This leads to disk thrashing, where you have to do a lot of swaps of pages in and out of disk, and your TLB hit rate is going to be very low. And another reason is that you're also going to run out of physical memory if you never free anything. So one of the goals of storage allocation is to use as little virtual memory as possible and to try to keep the used portions of memory relatively compact. Any questions so far? Okay, so let's do an analysis of the bin free list storage allocation scheme. Here's a theorem: suppose that the maximum amount of heap memory in use at any time by a program is M. If the heap is managed by a bin free list allocator, then the amount of virtual memory consumed by heap storage is upper bounded by M lg M. Does anybody have an intuition for why this theorem could be true? How many bins do we have at most? Right, so the number of bins is upper bounded by lg M, and each bin is going to use order M memory. So let's look at this more
formally. An allocation request for a block of size x consumes 2^⌈lg x⌉ storage, which is upper bounded by 2x, so we're only wasting a factor of 2 in storage here. Therefore, the amount of virtual memory devoted to blocks of size 2^k is at most 2M, and since there are at most lg M free lists, the theorem holds just by multiplying the two terms. You can only have lg M free lists because M is the maximum amount of memory you're using, and therefore your largest bin only holds blocks of size at most M. It also turns out that the bin free list allocation scheme is Theta(1)-competitive with an optimal allocator, where an optimal allocator knows all the memory requests in the future, so it can do a lot of clever things to optimize the memory allocation process; but the bin free list is only going to be a constant factor worse than the optimal allocator. This is assuming that we don't coalesce blocks together, which I'll talk about on the next slide. It turns out that this constant is 6; Charles Leiserson has a paper describing this result, and there's also a lower bound of 6, so this is tight. So, coalescing: coalescing is when you splice together smaller blocks into a larger block. You can do this if you have two free blocks that are contiguous in memory, which allows you to put them together into a larger block. Bin free lists can sometimes be heuristically improved by doing coalescing, and there are many clever schemes for trying to find adjacent blocks efficiently. There's something called the buddy system, where each block has a buddy that's contiguous in memory. However, it turns out that these schemes, especially the buddy system, have pretty high overhead, so they're usually going to be slower than the standard bin free list algorithm. There are no good theoretical bounds that prove the effectiveness of coalescing, but it does seem to work pretty well in practice at reducing fragmentation.
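The buddy system just mentioned has a nice address trick worth seeing: because a block of size 2^k starts at an offset that's a multiple of 2^k, its buddy is found by flipping a single bit of that offset. A small sketch (my own illustration, not the lecture's code):

```c
#include <stddef.h>

// Offset (from the heap base) of the buddy of a 2^k-sized block
// that starts at offset `off`.
size_t buddy_offset(size_t off, int k) {
  return off ^ ((size_t)1 << k);
}

// If a block and its buddy are both free, they coalesce into one
// 2^(k+1)-sized block starting at the lower of the two offsets.
size_t coalesced_offset(size_t off, int k) {
  return off & ~((size_t)1 << k);
}
```

The overhead the lecture mentions comes from all the bookkeeping around this check: finding out whether the buddy really is free and of the same size on every free operation.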
That's because heap storage tends to be deallocated as a stack or in batches. What I mean by this is that the objects you free tend to be pretty close together in memory: if you deallocate as a stack, then all of them are going to be near the top of the stack, and deallocating in batches is when you deallocate a whole bunch of things that you allocated together in your program. For example, if you have a graph data structure and you allocated the data for the vertices all at the same time, then when you deallocate them all together, this gives you a chunk of contiguous memory that you can splice together. Okay, so now let's look at garbage collection; it's going to be slightly different from storage allocation. The idea of garbage collection is to free the programmer from having to free objects. Languages like Java and Python have built-in garbage collectors, so the programmer doesn't have to free things themselves, and this makes it easier to write programs, because you don't have to worry about freeing, dangling pointers, and so forth. Garbage collectors can identify and recycle the objects that the programmer can no longer access, so that those memory objects can be used for future allocations. And in addition to having a built-in garbage collector, you can also create your own garbage collector in C, which doesn't have one: if you have a particular application, you can create a special-purpose garbage collector that might be more efficient than a general garbage collector. Yes? Why is it not order M? Because each of the bins could use up to order M memory. If you don't do coalescing, I could have a bunch of small allocations, so I've split up all of my blocks and they've all gone into the smaller bins, and then when I want to allocate something larger, I can't just splice those together; I have to make another memory allocation. So if you order
your memory requests in a certain way, you can make it so that each of the bins has order M memory. Okay, so for garbage collection, let's go over some terminology. There are three types of memory objects: roots, live objects, and dead objects. Roots are objects that are directly accessible by the program: global variables, things on the stack, and so on. Then there are live objects, which are reachable from the roots by following pointers. And finally there are dead objects, which are inaccessible via any sequence of pointers; these can be recycled, because the programmer can no longer reach them. In order for garbage collection to work in general, the garbage collector needs to be able to identify pointers, and this requires strong typing. Languages like Python and Java have strong typing, but C doesn't. That means that when you have a value in C, you don't actually know whether it's a pointer, because a pointer just looks like an integer; it could be either a pointer or an integer, you can cast things, and you can also do pointer arithmetic. In contrast, in languages with strong typing, once you declare something to be a pointer, it's always going to be a pointer, and that makes it much easier to do garbage collection. You also need to prohibit pointer arithmetic on these pointers, because if you do pointer arithmetic and change the location of the pointer, then the garbage collector no longer knows where the memory region starts. In C you sometimes do do pointer arithmetic, and that's why you can't have a general-purpose garbage collector in C that works well. So first let's look at one simple form of garbage collection, called reference counting. The idea is that for each object, I keep a count of the number of pointers referencing that object, and if the count ever goes to zero, then I can free that object, because no more pointers can reach it.
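The reference counting rule, decrement on release, and if the count hits zero free the object and release everything it pointed to, can be sketched like this (the obj_t layout and function names are hypothetical, not from the lecture):

```c
#include <stdlib.h>

#define MAX_OUT 4

// A reference-counted object: a count plus its outgoing pointers.
typedef struct obj {
  int refcount;
  int nout;
  struct obj *out[MAX_OUT];
} obj_t;

obj_t *rc_new(void) {
  obj_t *o = calloc(1, sizeof(obj_t));
  o->refcount = 1;  // the creator holds the first reference
  return o;
}

obj_t *rc_acquire(obj_t *o) {
  if (o) o->refcount++;
  return o;
}

// Drop a reference; freeing an object cascades to everything it
// points to, which is the "keep decrementing" step from the lecture.
void rc_release(obj_t *o) {
  if (o && --o->refcount == 0) {
    for (int i = 0; i < o->nout; i++) rc_release(o->out[i]);
    free(o);
  }
}
```

Note that on a cycle, every object keeps at least one incoming reference from within the cycle, so rc_release never reaches zero there; that's exactly the limitation discussed next.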
So here I have a bunch of roots; these are directly accessible by my program. And then I have a bunch of objects that can be reached by following pointers starting from the roots, and each of them has a reference count that indicates how many incoming pointers it has. Let's say now I change one of these pointers: initially I had a pointer going to here, but now I change it so that it goes down here. What happens now is that I have to adjust the reference counts of both of these objects. This object here no longer has any incoming pointers, so I decrement its reference count, which goes to zero; and for this one I increment its reference count, so now it's three. Now I have an object with a reference count of zero, and with the reference counting algorithm I can free this object. So let's go ahead and free it. But when I free this object, it actually has pointers to other objects, so I also have to decrement the reference counts of those other objects when I free it. So I decrement the counts, and now it turns out that this object also has a reference count of zero, so I can free it as well. In general, I just keep doing this process until the reference counts of the objects don't change anymore, and whenever I encounter an object with a reference count of zero, I can free it immediately, and the memory that I freed can be recycled for future memory allocations. So, questions on how the reference counting procedure works? Now, there's one issue with reference counting; does anybody see what it is? Yes? Yes, so what if an object has a reference to itself, or more generally, what if there's a cycle? You can never garbage collect a cycle when you're using reference counts. Here we have a cycle of length three; they all have a reference count of one, but you can never reach this cycle by following
pointers from the root, and therefore you can never delete any object in this cycle; the reference counts are always going to be nonzero. And furthermore, any object that's pointed to by objects in this cycle cannot be garbage collected either, because since you can't collect the cycle, all the pointers going out of the objects in the cycle are always going to be there. So there could be a lot of objects downstream from this object here that can't be garbage collected, which makes this very bad. And as we all know, uncollected garbage stinks, so we don't want that. So let's see if we can come up with another garbage collection scheme. It turns out that reference counting is actually pretty good when it does work, because it's very efficient and simple to implement. So if you know that your program doesn't have cycles among its pointers, you can use a reference counting scheme. There are some languages, like Objective-C, that have two different types of pointers: strong pointers and weak pointers. If you're doing reference counting in a language with these two types of pointers, the reference count only stores the number of incoming strong pointers, and therefore if you define the pointers in a cycle to be weak pointers, they're not going to contribute to the reference count, and you can still garbage collect. However, programming with strong and weak pointers can be kind of tricky, because you need to make sure that you're not dereferencing something that a weak pointer points to, since that thing might have been garbage collected already, so you need to be careful. And C doesn't have these two types of pointers, so we need another method of garbage collection to make sure we can collect these cycles. We're going to look at two more garbage collection schemes: the first one is called mark-and-sweep, and the second one is called stop-and-copy. So first we need to
define a graph abstraction. Let's say we have a graph with vertices V and edges E. The vertex set V contains all of the objects in memory, and the edges E are directed edges between objects: there's a directed edge from object a to object b if object a has a pointer to object b. And as we said earlier, the live objects are the ones that are reachable from the roots, so we can use a breadth-first-search-like procedure to find all the live objects. We just start our breadth-first search from the roots and mark all the objects that are reachable from the roots, and then everything else that isn't reached is available to be reclaimed. We're going to have a FIFO (first-in, first-out) queue for our breadth-first search; it's represented as an array, and we have two pointers, one to the head of the queue and one to the tail. So let's look at this code, which is essentially a breadth-first search. We first go over all of the vertices in our graph and check whether each vertex v is a root; if it is a root, we set its mark to 1 and place the vertex onto the queue, and otherwise we set the mark of v to 0. Then, while the queue is not empty, we dequeue the first thing from the queue; call it u. We look at all the outgoing neighbors of u, that is, vertices v such that there's a directed edge from u to v, and check whether v's mark is 0. If it is, that means we haven't explored it yet, so we set its mark to 1 and place it onto the queue; if the neighbor has already been explored, we don't have to do anything. So let's illustrate how this algorithm works on this simple graph here. For this example, I'm just going to assume that I have one root vertex r; in general I can have multiple roots, and I'd just place all of them onto the queue at the beginning.
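The mark-stage pseudocode above, translated into C under some simplifying assumptions of mine (objects are identified by index, and each object records the indices it points to; this encoding is not from the lecture):

```c
#define MAX_OBJS 64

// A toy object graph: each object knows whether it is a root and
// which objects it points to (by index).
typedef struct {
  int is_root;
  int mark;
  int nout;
  int out[MAX_OBJS];
} gcobj_t;

// Mark stage: breadth-first search from the roots, with an array
// serving as the FIFO queue via head and tail indices.
void mark(gcobj_t *objs, int n) {
  int queue[MAX_OBJS];
  int head = 0, tail = 0;
  for (int v = 0; v < n; v++) {
    objs[v].mark = objs[v].is_root;
    if (objs[v].is_root) queue[tail++] = v;  // enqueue every root
  }
  while (head != tail) {
    int u = queue[head++];                   // dequeue u
    for (int i = 0; i < objs[u].nout; i++) {
      int v = objs[u].out[i];
      if (objs[v].mark == 0) {               // unexplored neighbor
        objs[v].mark = 1;
        queue[tail++] = v;
      }
    }
  }
  // The sweep stage (not shown) frees every object still unmarked.
}
```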
But for this example we have a single root, so I place it onto the queue, at the location where the tail pointer points, and after I enqueue it I increment the tail pointer. Now I take the first thing off of my queue, which is r, and I explore its neighbors. The neighbors are b and c here; neither of them has been marked yet, so I mark them, and I'll indicate the marked vertices with blue shading, and place them onto the queue. Now I take the next thing, b, and check its neighbors: it only has the neighbor c, but c is already on the queue, it's already marked, so I don't have to do anything. Now I dequeue c, and c has neighbors d and e, so I place them onto the queue. d doesn't have any outgoing neighbors, so I don't have to do anything. When I dequeue e, it has neighbor f; when I dequeue f, it has a neighbor g; and when I dequeue g, it doesn't have any neighbors. So now my queue is empty, and my breadth-first search procedure finishes. At this point I've marked all the objects that are accessible from the root, and all the unmarked objects can now be garbage collected, because there's no way I can access them in the program. So the mark-and-sweep procedure has two stages: the first is the mark stage, where I use a breadth-first search to mark all the live objects, and the sweep stage scans over memory to free the unmarked objects. So this is a pretty simple scheme, but there is one issue with it; does anybody see what the possible issue is? Yes? Yeah, so that's one issue: you have to scan over all of memory. There are some variants of mark-and-sweep that keep track of just the allocated objects, so you only have to scan over those instead of the entire memory space. Besides that, are there any other possible issues? Yes? Right, so let's assume that we do have strong typing. Any other possible limitations? Anybody else? Yes? So for the scheme that I described, you have to
look over all of the things that don't have references to them, so that is another overhead. So those are all issues; the issue I want to get at is that the mark-and-sweep algorithm I presented here doesn't deal with fragmentation. It doesn't compact the live objects to be contiguous in memory; it just frees the ones that are unreachable, and it doesn't do anything with the ones that are reachable. So let's look at another procedure that does deal with fragmentation; this is called the stop-and-copy garbage collection procedure. At a high level it's pretty similar to the mark-and-sweep algorithm: we're still going to use a breadth-first search to identify all the live objects. But if you look at how this breadth-first search is implemented, is there any information we can use to get the live objects to be contiguous in memory? Does anybody see anything here that we can use to reduce fragmentation? Yes? Yes, so the answer is that the objects we visited are contiguous on the queue. In the mark-and-sweep algorithm I just placed the IDs of the vertices onto the queue, but if I place the actual objects onto the queue instead, then I can just use my queue as my new memory, and all the objects that are unreachable will be implicitly deleted. So this procedure deals with external fragmentation. Let's see how this works. We're going to have two separate memory spaces, the from space and the to space. In the from space, I just do allocation and freeing until it becomes full: when I allocate something, I place it at the end of this space, and when I free something, I just mark it as free but I don't compact it out yet. When the from space becomes full, I run my garbage collection algorithm, and I use the to space as my queue when I do my breadth-first search. So after I run my breadth-first search, all the live objects are going to appear in the to space in contiguous
memory, since I used the to space as my queue. And then I just keep allocating from the to space, and also marking things as deleted when I free them, until the to space becomes full; then I do the same thing, but with the roles of the to and from spaces swapped. So this is called the stop-and-copy algorithm. There is one problem with this algorithm which we haven't addressed yet; does anybody see what the potential problem is? Yes? Yeah, that's one good observation: if nothing is dead, then you're wasting a lot of work, because you have to copy everything, although with the mark-and-sweep algorithm you still have to do some copying; there you're just copying the IDs rather than the entire objects. But there's actually a correctness issue here; does anybody see what it is? Yes? Yeah, so the answer is that if you had pointers that pointed to objects in the from space, then when you move your objects to the to space, those pointers aren't going to be correct anymore. If I had a pointer to a live object before, and I moved my live object to a different memory address, I need to also update that pointer. So let's see how we can deal with this. The idea is that when an object is copied to the to space, we store a forwarding pointer in the corresponding object in the from space, and this implicitly marks the object as moved. Then, when I remove an object from the FIFO queue in my breadth-first search, I update all of its pointers by following these forwarding pointers. So let's look at an example of how this works. Let's say I'm executing the breadth-first search, and this is my current queue right now. What I'm going to do when I dequeue an element from my queue is first place the neighboring objects that haven't been explored yet onto the queue. So here it actually has two neighbors, but the first one has already been placed on the queue, so I can ignore it, and the second one hasn't been placed on the queue yet, so I
place it onto the queue. And I'm also going to, oh, so this object also has a pointer to something in the from space, which I'm not going to change at this time, but I am going to store a forwarding pointer in the object that I moved from the from space to the to space, so now it has a pointer that tells me its new address. And then, for the object that I just dequeued, I follow the forwarding pointers of its neighbors, and those give me the correct addresses now. So I update the pointers by just following the forwarding pointers: the first pointer pointed to this object, which has a forwarding pointer to this one, so I just make it point to this object in the to space; and similarly for the other pointer, I make it point to this object. So that's the idea of how to adjust the pointers. One question is: why can't we just adjust the pointers when we enqueue an object? Why do I have to adjust the pointers when I dequeue an object? Yes? Yeah, so the answer is that when you enqueue an object, you don't actually know where its neighbors are going to reside in the to space; you only know that when you dequeue the object, because by the time you dequeue it, you must have explored its neighbors, and therefore their forwarding pointers have been generated. So, any questions on this scheme? Now, how much time does it take to do the stop-and-copy procedure? Let's say n is the sum of the number of objects and the number of pointers I have; how much time would it take to run this algorithm? Yeah, so it's just going to be linear time, because we're running a breadth-first search, and that takes linear time. You also have to do work to copy these objects to the to space, so you also do work proportional to the number of bytes that you're copying. So it's linear in the number of objects, the number of pointers, and the total amount of space you're copying over.
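Putting the pieces together, a miniature version of the copying collector might look like this. The layout with an explicit forward field, the fixed field count, the single root, and treating the to space directly as the queue are all simplifications of mine:

```c
#include <stddef.h>

#define MAX_FIELDS 2

// A from-space object: a forwarding pointer (NULL until copied)
// plus its pointer fields.
typedef struct cobj {
  struct cobj *forward;
  struct cobj *field[MAX_FIELDS];
} cobj_t;

// Copy o into the to space (which doubles as the BFS queue) if it
// hasn't been copied yet, leaving a forwarding pointer behind, and
// return its to-space address.
cobj_t *evacuate(cobj_t *o, cobj_t *to, int *tail) {
  if (o == NULL) return NULL;
  if (o->forward == NULL) {      // first visit: enqueue a copy
    cobj_t *copy = &to[(*tail)++];
    *copy = *o;
    copy->forward = NULL;
    o->forward = copy;           // forwarding pointer in the from space
  }
  return o->forward;
}

// Stop-and-copy from a single root; when an object is dequeued, its
// fields are fixed up by following forwarding pointers. Returns the
// number of live objects copied.
int stop_and_copy(cobj_t *root, cobj_t *to) {
  int head = 0, tail = 0;
  evacuate(root, to, &tail);
  while (head < tail) {
    cobj_t *u = &to[head++];     // dequeue
    for (int i = 0; i < MAX_FIELDS; i++)
      u->field[i] = evacuate(u->field[i], to, &tail);
  }
  return tail;
}
```

Note that the forwarding pointers make this safe even on cycles: the second time an object is reached, evacuate just returns the address already recorded instead of copying again.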
And the advantage of this scheme is that you don't actually need to go over the objects that aren't reachable, because those are implicitly deleted: they're just not copied over to the to space. Whereas in the mark-and-sweep procedure, you have to actually go through your entire memory and free all the objects that aren't reachable. So this makes the stop-and-copy procedure more efficient, and it also deals with the external fragmentation issue. So what happens when the from space becomes full? What you do is request new heap space equal to the used space, so you're essentially doubling the size of your from space, and then you consider the from space to be full when the newly allocated space becomes full. Essentially, you double the space, and when that becomes full, you double it again, and so on. With this method you can amortize the cost of garbage collection against the size of the new heap space, so it's amortized constant overhead per byte of memory, assuming that the user program is going to touch all of the memory it allocates. Furthermore, the virtual memory space required by this scheme is just a constant times the optimal, if you locate the from and to spaces in different regions of virtual memory so that they can't interfere with each other. The reason it's a constant times the optimal is that you only lose a factor of two from maintaining two separate spaces, and another factor of two comes from doubling the size of your array when it becomes full, since up to half of it will be unused; but it's still a constant times optimal, since we're just multiplying constants together. And similarly, when your from space becomes relatively empty, for example if it's less than half full, you can also release memory back to the OS, and then the analysis of the amortized constant overhead is similar.
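The amortized accounting just sketched can be written in one line. Here H is the amount of heap space added by a doubling; this is my paraphrase of the argument, not the lecture's notation:

```latex
\frac{\overbrace{O(H)}^{\text{cost of one collection (copy at most } H \text{ live bytes)}}}
     {\underbrace{\Omega(H)}_{\text{bytes allocated since the previous collection}}}
\;=\; O(1) \text{ amortized overhead per allocated byte.}
```

The denominator is where the assumption that the program touches all the memory it allocates comes in: between collections, at least H bytes of fresh space must actually be consumed.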
Okay, any other questions? So there's a lot more that's known, and also unknown, about dynamic storage allocation; I've only scratched the surface today. There are many other topics: for example, there's the buddy system for doing coalescing, and there are many variants of the mark-and-sweep procedure with optimizations to improve its performance. There's generational garbage collection, which is based on the idea that many objects are short-lived: a lot of objects are freed pretty close to the time when they're allocated, and the ones that aren't freed quickly tend to be pretty long-lived. The idea of generational garbage collection is that instead of scanning your whole memory every time, you do work on the younger objects most of the time, and only once in a while do you try to collect the garbage from the older objects, because those tend not to change that often. There's also real-time garbage collection. The methods I talked about today assume that the program isn't running while the garbage collection procedure is running, but in practice you might want your garbage collector running in the background while your program runs. This can lead to correctness issues, because the algorithms I just described assume that the graph of objects and pointers isn't changing, and when the objects and pointers are changing, you need to make sure you still get a correct answer. Real-time garbage collection tends to be conservative, so it doesn't always free everything that's garbage, but the things that it does decide to free can actually be reclaimed. There are various techniques to make real-time garbage collection efficient; one possible way is, instead of having just one from and to space, to have multiple from and to spaces, and you work on one of the spaces at a time, so that it doesn't take that long to do
garbage collection; you can do it incrementally throughout your program. There's also multithreaded storage allocation and parallel garbage collection. This is when you have multiple threads running: how do you allocate memory, and also how do you collect garbage in the background? The algorithms become much trickier, because there are multiple threads running and you have to deal with races and correctness issues and so forth, and that's actually the topic of the next lecture. So in summary, these are the things that we talked about today. We have the most basic form of storage, which is a stack. The limitation of the stack is that you can only free things at the top of the stack; you can't free arbitrary things in the stack. But it's very efficient when it works, because the code is very simple and it can be inlined, and in fact this is what the C calling procedure uses: it places local variables and the return address of the function on the stack. The heap is the more general form of storage, but it's much more complicated to manage, and we talked about various ways to do allocation and deallocation for the heap. We have fixed-size allocation using free lists, variable-size allocation using binned free lists, and then many variants of these ideas are used in practice. For garbage collection, this is where you want to free the programmer from having to free objects, and garbage collection algorithms are supported in languages like Java and Python. We talked about various ways to do this: reference counting, which suffers from the limitation that it can't free cycles, and mark-and-sweep and stop-and-copy, which can free cycles. The mark-and-sweep procedure doesn't deal with external fragmentation, but the stop-and-copy procedure does. We also talked about internal and external fragmentation. External fragmentation is when your memory blocks are all over the place in virtual memory; this can cause performance issues like disk thrashing and TLB misses. Then there's internal
fragmentation, where you're not actually using all of the space in the block that you allocate. For example, in the bin free list algorithm you do have a little bit of internal fragmentation, because you're always rounding up to the nearest power of two greater than the size you want, so you're wasting up to a factor of two in space. In project 3 you're going to look much more at these storage allocation schemes, and then you also get to try some of these in homework 6. So, any other questions? That's all I have for today's lecture.
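The heap-doubling policy described for stop-and-copy, where a full space is replaced by one twice as large and the live data is copied over, can be sketched with a simple bump allocator. This is a minimal illustration of the amortized-constant-overhead claim, not a real collector; the `Space` type and function names here are made up for the example.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the doubling policy: when the space fills up,
 * request a space twice as large and copy the used bytes over. */
typedef struct {
    char  *base;   /* start of the space */
    size_t used;   /* bytes bump-allocated so far */
    size_t cap;    /* total capacity of the space */
    size_t copied; /* total bytes ever copied, to track amortized cost */
} Space;

static void grow_space(Space *s) {
    size_t newcap = s->cap * 2;        /* double on every overflow */
    char *newbase = malloc(newcap);
    memcpy(newbase, s->base, s->used); /* "copy the live objects" */
    s->copied += s->used;
    free(s->base);
    s->base = newbase;
    s->cap = newcap;
}

/* Bump-allocate n bytes, growing (and copying) as needed. */
static char *space_alloc(Space *s, size_t n) {
    while (s->used + n > s->cap)
        grow_space(s);
    char *p = s->base + s->used;
    s->used += n;
    return p;
}
```

Because each copy doubles the capacity, the total bytes ever copied form a geometric series bounded by twice the bytes allocated, which is where the amortized constant overhead per byte comes from.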
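The limitation of reference counting mentioned in the summary, that it can't free cycles, is easy to demonstrate directly. The following is a toy sketch with made-up names, not a real reference-counting runtime: two nodes that point at each other keep each other's count at one even after the program drops its own references, so neither is ever freed.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy reference-counted node; names are illustrative only. */
typedef struct Node {
    int refcount;
    struct Node *next;
} Node;

static Node *node_new(void) {
    Node *n = malloc(sizeof(Node));
    n->refcount = 1;   /* one reference: the caller */
    n->next = NULL;
    return n;
}

/* Drop one reference; free the node (and release what it points
 * to) only when the count reaches zero. */
static void node_dec(Node *n) {
    if (n && --n->refcount == 0) {
        node_dec(n->next);
        free(n);
    }
}
```

If `a->next = b` and `b->next = a` each add a reference, then after the program calls `node_dec` on both, each count drops from two to one and never reaches zero: the cycle is unreachable garbage that reference counting alone cannot reclaim.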
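The rounding that causes internal fragmentation in the bin free list can also be sketched in a few lines. This assumes the power-of-two binning described above; `next_pow2` is our own helper, not a standard function, and real allocators use more elaborate bin spacings.

```c
#include <assert.h>
#include <stddef.h>

/* Round a request up to the next power of two, as the binned free
 * list does; up to just under half the returned block may go unused. */
static size_t next_pow2(size_t x) {
    size_t p = 1;
    while (p < x)
        p <<= 1;   /* double until we cover the request */
    return p;
}
```

A request for 65 bytes lands in the 128-byte bin, so 63 bytes are wasted; that worst case of just under half the block is the "factor of two in space" mentioned above.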
00:48:15,940 --> 00:48:18,769 1043 00:48:18,769 --> 00:48:21,919 1044 00:48:21,919 --> 00:48:23,509 1045 00:48:23,509 --> 00:48:27,709 1046 00:48:27,709 --> 00:48:30,289 1047 00:48:30,289 --> 00:48:34,219 1048 00:48:34,219 --> 00:48:38,169 1049 00:48:38,169 --> 00:48:41,329 1050 00:48:41,329 --> 00:48:49,230 1051 00:48:49,230 --> 00:48:51,780 1052 00:48:51,780 --> 00:48:54,660 1053 00:48:54,660 --> 00:48:55,980 1054 00:48:55,980 --> 00:49:00,300 1055 00:49:00,300 --> 00:49:01,860 1056 00:49:01,860 --> 00:49:05,010 1057 00:49:05,010 --> 00:49:06,900 1058 00:49:06,900 --> 00:49:18,480 1059 00:49:18,480 --> 00:49:19,950 1060 00:49:19,950 --> 00:49:25,130 1061 00:49:25,130 --> 00:49:42,360 1062 00:49:42,360 --> 00:49:45,120 1063 00:49:45,120 --> 00:49:47,400 1064 00:49:47,400 --> 00:49:51,240 1065 00:49:51,240 --> 00:49:56,070 1066 00:49:56,070 --> 00:49:58,470 1067 00:49:58,470 --> 00:50:00,360 1068 00:50:00,360 --> 00:50:01,740 1069 00:50:01,740 --> 00:50:04,410 1070 00:50:04,410 --> 00:50:07,350 1071 00:50:07,350 --> 00:50:10,020 1072 00:50:10,020 --> 00:50:11,700 1073 00:50:11,700 --> 00:50:13,350 1074 00:50:13,350 --> 00:50:16,620 1075 00:50:16,620 --> 00:50:19,220 1076 00:50:19,220 --> 00:50:23,910 1077 00:50:23,910 --> 00:50:30,750 1078 00:50:30,750 --> 00:50:34,140 1079 00:50:34,140 --> 00:50:37,110 1080 00:50:37,110 --> 00:50:38,730 1081 00:50:38,730 --> 00:50:44,070 1082 00:50:44,070 --> 00:50:46,080 1083 00:50:46,080 --> 00:50:48,390 1084 00:50:48,390 --> 00:50:50,760 1085 00:50:50,760 --> 00:50:55,050 1086 00:50:55,050 --> 00:50:57,240 1087 00:50:57,240 --> 00:50:58,770 1088 00:50:58,770 --> 00:51:04,230 1089 00:51:04,230 --> 00:51:06,240 1090 00:51:06,240 --> 00:51:10,740 1091 00:51:10,740 --> 00:51:12,570 1092 00:51:12,570 --> 00:51:14,490 1093 00:51:14,490 --> 00:51:16,470 1094 00:51:16,470 --> 00:51:19,170 1095 00:51:19,170 --> 00:51:23,670 1096 00:51:23,670 --> 00:51:25,500 1097 00:51:25,500 --> 00:51:28,560 1098 00:51:28,560 --> 00:51:30,510 1099 
00:51:30,510 --> 00:51:32,910 1100 00:51:32,910 --> 00:51:36,090 1101 00:51:36,090 --> 00:51:38,760 1102 00:51:38,760 --> 00:51:42,150 1103 00:51:42,150 --> 00:51:45,330 1104 00:51:45,330 --> 00:51:47,520 1105 00:51:47,520 --> 00:51:49,170 1106 00:51:49,170 --> 00:51:52,920 1107 00:51:52,920 --> 00:51:55,579 1108 00:51:55,579 --> 00:51:59,029 1109 00:51:59,029 --> 00:52:01,009 1110 00:52:01,009 --> 00:52:04,729 1111 00:52:04,729 --> 00:52:07,579 1112 00:52:07,579 --> 00:52:10,789 1113 00:52:10,789 --> 00:52:13,160 1114 00:52:13,160 --> 00:52:16,670 1115 00:52:16,670 --> 00:52:19,120 1116 00:52:19,120 --> 00:52:23,199 1117 00:52:23,199 --> 00:52:27,559 1118 00:52:27,559 --> 00:52:30,589 1119 00:52:30,589 --> 00:52:32,599 1120 00:52:32,599 --> 00:52:34,579 1121 00:52:34,579 --> 00:52:36,410 1122 00:52:36,410 --> 00:52:37,969 1123 00:52:37,969 --> 00:52:41,029 1124 00:52:41,029 --> 00:52:44,059 1125 00:52:44,059 --> 00:52:46,999 1126 00:52:46,999 --> 00:52:49,039 1127 00:52:49,039 --> 00:52:57,380 1128 00:52:57,380 --> 00:52:58,849 1129 00:52:58,849 --> 00:53:01,249 1130 00:53:01,249 --> 00:53:03,499 1131 00:53:03,499 --> 00:53:05,900 1132 00:53:05,900 --> 00:53:07,609 1133 00:53:07,609 --> 00:53:08,959 1134 00:53:08,959 --> 00:53:10,759 1135 00:53:10,759 --> 00:53:14,150 1136 00:53:14,150 --> 00:53:16,669 1137 00:53:16,669 --> 00:53:27,960 1138 00:53:27,960 --> 00:53:27,970 1139 00:53:27,970 --> 00:53:35,890 1140 00:53:35,890 --> 00:53:38,650 1141 00:53:38,650 --> 00:53:42,999 1142 00:53:42,999 --> 00:53:44,650 1143 00:53:44,650 --> 00:53:46,870 1144 00:53:46,870 --> 00:53:48,039 1145 00:53:48,039 --> 00:53:50,019 1146 00:53:50,019 --> 00:53:53,049 1147 00:53:53,049 --> 00:53:54,999 1148 00:53:54,999 --> 00:53:58,299 1149 00:53:58,299 --> 00:54:02,380 1150 00:54:02,380 --> 00:54:04,599 1151 00:54:04,599 --> 00:54:07,150 1152 00:54:07,150 --> 00:54:10,120 1153 00:54:10,120 --> 00:54:12,029 1154 00:54:12,029 --> 00:54:17,140 1155 00:54:17,140 --> 00:54:19,930 1156 
00:54:19,930 --> 00:54:23,049 1157 00:54:23,049 --> 00:54:24,970 1158 00:54:24,970 --> 00:54:27,390 1159 00:54:27,390 --> 00:54:30,400 1160 00:54:30,400 --> 00:54:30,410 1161 00:54:30,410 --> 00:54:31,589 1162 00:54:31,589 --> 00:54:34,059 1163 00:54:34,059 --> 00:54:37,779 1164 00:54:37,779 --> 00:54:43,089 1165 00:54:43,089 --> 00:54:46,390 1166 00:54:46,390 --> 00:54:50,079 1167 00:54:50,079 --> 00:54:52,029 1168 00:54:52,029 --> 00:54:54,640 1169 00:54:54,640 --> 00:54:56,200 1170 00:54:56,200 --> 00:54:57,729 1171 00:54:57,729 --> 00:54:59,920 1172 00:54:59,920 --> 00:55:01,569 1173 00:55:01,569 --> 00:55:07,509 1174 00:55:07,509 --> 00:55:11,289 1175 00:55:11,289 --> 00:55:14,019 1176 00:55:14,019 --> 00:55:15,339 1177 00:55:15,339 --> 00:55:17,470 1178 00:55:17,470 --> 00:55:19,720 1179 00:55:19,720 --> 00:55:22,420 1180 00:55:22,420 --> 00:55:26,079 1181 00:55:26,079 --> 00:55:30,400 1182 00:55:30,400 --> 00:55:32,589 1183 00:55:32,589 --> 00:55:34,809 1184 00:55:34,809 --> 00:55:36,940 1185 00:55:36,940 --> 00:55:40,299 1186 00:55:40,299 --> 00:55:43,180 1187 00:55:43,180 --> 00:55:45,279 1188 00:55:45,279 --> 00:55:47,140 1189 00:55:47,140 --> 00:55:49,089 1190 00:55:49,089 --> 00:55:50,440 1191 00:55:50,440 --> 00:55:52,690 1192 00:55:52,690 --> 00:55:55,569 1193 00:55:55,569 --> 00:55:57,640 1194 00:55:57,640 --> 00:56:04,089 1195 00:56:04,089 --> 00:56:06,730 1196 00:56:06,730 --> 00:56:09,910 1197 00:56:09,910 --> 00:56:12,250 1198 00:56:12,250 --> 00:56:14,620 1199 00:56:14,620 --> 00:56:22,769 1200 00:56:22,769 --> 00:56:27,190 1201 00:56:27,190 --> 00:56:29,170 1202 00:56:29,170 --> 00:56:30,910 1203 00:56:30,910 --> 00:56:34,120 1204 00:56:34,120 --> 00:56:36,759 1205 00:56:36,759 --> 00:56:39,130 1206 00:56:39,130 --> 00:56:40,420 1207 00:56:40,420 --> 00:56:45,160 1208 00:56:45,160 --> 00:56:53,950 1209 00:56:53,950 --> 00:56:58,150 1210 00:56:58,150 --> 00:57:01,120 1211 00:57:01,120 --> 00:57:04,510 1212 00:57:04,510 --> 00:57:06,069 1213 
00:57:06,069 --> 00:57:07,510 1214 00:57:07,510 --> 00:57:10,809 1215 00:57:10,809 --> 00:57:22,530 1216 00:57:22,530 --> 00:57:24,870 1217 00:57:24,870 --> 00:57:26,370 1218 00:57:26,370 --> 00:57:31,530 1219 00:57:31,530 --> 00:57:33,930 1220 00:57:33,930 --> 00:57:38,220 1221 00:57:38,220 --> 00:57:39,690 1222 00:57:39,690 --> 00:57:41,670 1223 00:57:41,670 --> 00:57:44,880 1224 00:57:44,880 --> 00:57:47,010 1225 00:57:47,010 --> 00:57:50,070 1226 00:57:50,070 --> 00:57:53,970 1227 00:57:53,970 --> 00:57:57,480 1228 00:57:57,480 --> 00:58:00,690 1229 00:58:00,690 --> 00:58:02,340 1230 00:58:02,340 --> 00:58:04,800 1231 00:58:04,800 --> 00:58:06,810 1232 00:58:06,810 --> 00:58:08,730 1233 00:58:08,730 --> 00:58:10,880 1234 00:58:10,880 --> 00:58:13,800 1235 00:58:13,800 --> 00:58:17,400 1236 00:58:17,400 --> 00:58:19,530 1237 00:58:19,530 --> 00:58:21,420 1238 00:58:21,420 --> 00:58:29,760 1239 00:58:29,760 --> 00:58:35,100 1240 00:58:35,100 --> 00:58:37,530 1241 00:58:37,530 --> 00:58:40,440 1242 00:58:40,440 --> 00:58:42,330 1243 00:58:42,330 --> 00:58:45,510 1244 00:58:45,510 --> 00:58:46,920 1245 00:58:46,920 --> 00:58:49,290 1246 00:58:49,290 --> 00:58:51,060 1247 00:58:51,060 --> 00:58:52,320 1248 00:58:52,320 --> 00:58:54,150 1249 00:58:54,150 --> 00:58:55,980 1250 00:58:55,980 --> 00:58:58,290 1251 00:58:58,290 --> 00:59:01,200 1252 00:59:01,200 --> 00:59:02,910 1253 00:59:02,910 --> 00:59:07,350 1254 00:59:07,350 --> 00:59:09,750 1255 00:59:09,750 --> 00:59:12,060 1256 00:59:12,060 --> 00:59:16,310 1257 00:59:16,310 --> 00:59:19,020 1258 00:59:19,020 --> 00:59:21,090 1259 00:59:21,090 --> 00:59:24,540 1260 00:59:24,540 --> 00:59:26,370 1261 00:59:26,370 --> 00:59:28,290 1262 00:59:28,290 --> 00:59:30,450 1263 00:59:30,450 --> 00:59:32,340 1264 00:59:32,340 --> 00:59:35,400 1265 00:59:35,400 --> 00:59:36,270 1266 00:59:36,270 --> 00:59:37,770 1267 00:59:37,770 --> 00:59:40,320 1268 00:59:40,320 --> 00:59:42,930 1269 00:59:42,930 --> 00:59:45,660 1270 
00:59:45,660 --> 00:59:49,520 1271 00:59:49,520 --> 00:59:52,080 1272 00:59:52,080 --> 00:59:53,610 1273 00:59:53,610 --> 00:59:57,270 1274 00:59:57,270 --> 01:00:00,420 1275 01:00:00,420 --> 01:00:02,100 1276 01:00:02,100 --> 01:00:05,190 1277 01:00:05,190 --> 01:00:09,000 1278 01:00:09,000 --> 01:00:16,560 1279 01:00:16,560 --> 01:00:27,060 1280 01:00:27,060 --> 01:00:30,300 1281 01:00:30,300 --> 01:00:32,310 1282 01:00:32,310 --> 01:00:34,230 1283 01:00:34,230 --> 01:00:36,540 1284 01:00:36,540 --> 01:00:38,790 1285 01:00:38,790 --> 01:00:43,140 1286 01:00:43,140 --> 01:00:45,300 1287 01:00:45,300 --> 01:00:46,320 1288 01:00:46,320 --> 01:00:49,320 1289 01:00:49,320 --> 01:00:51,720 1290 01:00:51,720 --> 01:00:54,180 1291 01:00:54,180 --> 01:00:56,490 1292 01:00:56,490 --> 01:00:59,640 1293 01:00:59,640 --> 01:01:01,170 1294 01:01:01,170 --> 01:01:03,150 1295 01:01:03,150 --> 01:01:05,400 1296 01:01:05,400 --> 01:01:08,220 1297 01:01:08,220 --> 01:01:09,690 1298 01:01:09,690 --> 01:01:11,880 1299 01:01:11,880 --> 01:01:14,640 1300 01:01:14,640 --> 01:01:16,440 1301 01:01:16,440 --> 01:01:18,120 1302 01:01:18,120 --> 01:01:19,730 1303 01:01:19,730 --> 01:01:22,350 1304 01:01:22,350 --> 01:01:26,880 1305 01:01:26,880 --> 01:01:29,580 1306 01:01:29,580 --> 01:01:31,830 1307 01:01:31,830 --> 01:01:34,290 1308 01:01:34,290 --> 01:01:37,020 1309 01:01:37,020 --> 01:01:39,000 1310 01:01:39,000 --> 01:01:41,340 1311 01:01:41,340 --> 01:01:42,840 1312 01:01:42,840 --> 01:01:44,760 1313 01:01:44,760 --> 01:01:47,940 1314 01:01:47,940 --> 01:01:49,980 1315 01:01:49,980 --> 01:01:52,260 1316 01:01:52,260 --> 01:01:55,079 1317 01:01:55,079 --> 01:01:56,460 1318 01:01:56,460 --> 01:01:58,520 1319 01:01:58,520 --> 01:02:02,160 1320 01:02:02,160 --> 01:02:04,140 1321 01:02:04,140 --> 01:02:07,560 1322 01:02:07,560 --> 01:02:09,570 1323 01:02:09,570 --> 01:02:11,520 1324 01:02:11,520 --> 01:02:14,460 1325 01:02:14,460 --> 01:02:16,440 1326 01:02:16,440 --> 01:02:20,099 1327 
01:02:20,099 --> 01:02:21,839 1328 01:02:21,839 --> 01:02:24,000 1329 01:02:24,000 --> 01:02:26,460 1330 01:02:26,460 --> 01:02:28,530 1331 01:02:28,530 --> 01:02:30,599 1332 01:02:30,599 --> 01:02:31,680 1333 01:02:31,680 --> 01:02:35,690 1334 01:02:35,690 --> 01:02:37,770 1335 01:02:37,770 --> 01:02:40,290 1336 01:02:40,290 --> 01:02:41,430 1337 01:02:41,430 --> 01:02:44,270 1338 01:02:44,270 --> 01:02:46,980 1339 01:02:46,980 --> 01:02:51,270 1340 01:02:51,270 --> 01:02:52,920 1341 01:02:52,920 --> 01:02:55,500 1342 01:02:55,500 --> 01:02:57,300 1343 01:02:57,300 --> 01:02:59,460 1344 01:02:59,460 --> 01:03:03,920 1345 01:03:03,920 --> 01:03:06,900 1346 01:03:06,900 --> 01:03:09,060 1347 01:03:09,060 --> 01:03:11,070 1348 01:03:11,070 --> 01:03:14,370 1349 01:03:14,370 --> 01:03:17,250 1350 01:03:17,250 --> 01:03:19,050 1351 01:03:19,050 --> 01:03:21,000 1352 01:03:21,000 --> 01:03:23,910 1353 01:03:23,910 --> 01:03:25,770 1354 01:03:25,770 --> 01:03:28,010 1355 01:03:28,010 --> 01:03:31,589 1356 01:03:31,589 --> 01:03:33,570 1357 01:03:33,570 --> 01:03:36,450 1358 01:03:36,450 --> 01:03:38,040 1359 01:03:38,040 --> 01:03:41,190 1360 01:03:41,190 --> 01:03:42,770 1361 01:03:42,770 --> 01:03:44,880 1362 01:03:44,880 --> 01:03:47,370 1363 01:03:47,370 --> 01:03:49,440 1364 01:03:49,440 --> 01:03:52,680 1365 01:03:52,680 --> 01:03:54,960 1366 01:03:54,960 --> 01:03:58,920 1367 01:03:58,920 --> 01:04:01,380 1368 01:04:01,380 --> 01:04:03,420 1369 01:04:03,420 --> 01:04:06,539 1370 01:04:06,539 --> 01:04:08,640 1371 01:04:08,640 --> 01:04:11,250 1372 01:04:11,250 --> 01:04:13,170 1373 01:04:13,170 --> 01:04:15,240 1374 01:04:15,240 --> 01:04:18,029 1375 01:04:18,029 --> 01:04:20,819 1376 01:04:20,819 --> 01:04:22,500 1377 01:04:22,500 --> 01:04:24,180 1378 01:04:24,180 --> 01:04:27,359 1379 01:04:27,359 --> 01:04:29,839 1380 01:04:29,839 --> 01:04:32,010 1381 01:04:32,010 --> 01:04:34,680 1382 01:04:34,680 --> 01:04:36,000 1383 01:04:36,000 --> 01:04:38,519 1384 
01:04:38,519 --> 01:04:40,769 1385 01:04:40,769 --> 01:04:44,190 1386 01:04:44,190 --> 01:04:46,019 1387 01:04:46,019 --> 01:04:48,059 1388 01:04:48,059 --> 01:04:50,339 1389 01:04:50,339 --> 01:04:52,200 1390 01:04:52,200 --> 01:04:53,730 1391 01:04:53,730 --> 01:04:56,039 1392 01:04:56,039 --> 01:04:57,720 1393 01:04:57,720 --> 01:05:00,660 1394 01:05:00,660 --> 01:05:03,210 1395 01:05:03,210 --> 01:05:05,730 1396 01:05:05,730 --> 01:05:09,900 1397 01:05:09,900 --> 01:05:13,440 1398 01:05:13,440 --> 01:05:23,480 1399 01:05:23,480 --> 01:05:23,490 1400 01:05:23,490 --> 01:05:25,550