The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Good afternoon, everyone. Today we have TB Schardl here; he's going to give us the lecture on C to assembly. TB is a research scientist here at MIT working with Charles Leiserson. He also taught this class with me last year, and he got one of the best ratings ever for this class, so I'm really looking forward to his lecture.

All right, great — thank you for the introduction, Julian. So I hear you just submitted the beta for project 1; hopefully that went pretty well. How many of you slept in the last 24 hours? OK, good — that sounds great.

So today we're going to be talking about C to assembly, and this is really a continuation of the topic of last lecture, where you saw computer architecture and x86-64 assembly, that sort of thing. How many of you walked away from that lecture thinking, "Oh yeah, x86-64 assembly, this is easy, this is totally intuitive, everything makes perfect sense, there's no weirdness going on here whatsoever"? How many of you walked away not thinking that — thinking that perhaps this whole assembly language is a little bit strange? Yeah, I'm really in the latter camp. x86 is kind of a strange beast. There are things in there that make no sense: a quadword has eight bytes, "p" stands for integer, that sort of thing. So when we move on to the topic of seeing how C code gets translated into assembly, we're translating into something that's already pretty complicated, and the translation itself isn't going to be that straightforward. We're going to have to find a way to work through that, and I'll outline the strategy we'll be using at the start of this presentation.

But first, let's quickly review why we care about looking at assembly at all. You should have seen this slide in the last lecture, but essentially, assembly is a more precise representation of the program than the C code itself, and looking at the assembly can reveal details about the program that are not obvious when you just look at the C code directly. There are implicit things going on in the C code, such as type casts or the usage of registers versus memory on the machine, and those can have performance implications, so it's valuable to take a look at the assembly code directly. It can also reveal what the compiler did or did not do when it tried to optimize the program. For example, you may have written a division or a multiply operation, but somehow the compiler figured out that it didn't really need a divide or multiply to implement that operation — it could implement it more quickly using simpler, faster operations like addition, subtraction, or shifts — and you'd be able to see that from looking at the assembly. Bugs can also arise only at a low level. For example, there may be a bug in the program that only creates unexpected behavior when you optimize the code at -O3. That means when you're debugging with -Og or -O1 you wouldn't see any unusual behavior, but when you crank up the optimization level, suddenly things start to fall apart.
Because the C code itself didn't change, those bugs can be hard to spot, and looking at the assembly can help out in that regard. And when worse comes to worst, if you really want to make your code fast, it is possible to modify the assembly code by hand.

One of my favorite uses of reading assembly, though, is actually reverse engineering. If you can read the assembly for some code, you can decipher what that program does even when you only have access to the binary, which is kind of a cool thing. It takes some practice to read assembly at that level, but this is one trick that some of us in Professor Leiserson's research group have used in the past to figure out, say, what Intel's Math Kernel Library is doing to multiply matrices.

Now, as I mentioned, at the end of last lecture you saw some computer architecture and the basics of x86-64 assembly, including the instructions, the registers, the various data types, the memory addressing modes, the RFLAGS register with its condition codes, and that sort of thing. Today we want to talk about how C code gets implemented in that assembly language.

Well, if we consider how C code becomes assembly — what that process actually looks like — we know that there's a compiler involved, and the compiler is a pretty sophisticated piece of software. Frankly, the compiler has a lot of work to do in order to translate a C program into assembly. It has to choose which assembly instructions will implement those C operations. It has to implement C conditionals and loops — the if-then-elses and the for and while loops — using jumps and branches. It has to choose registers and memory locations to store all the data in the program, and it may have to move data among the registers and memory locations in order to satisfy various data dependencies. It has to coordinate all the function calls that happen when subroutine A calls B calls C and then returns, and so on and so forth. And on top of all that, these days we expect our compiler to try really hard to make that code fast. So that's a lot of work the compiler has to do, and as a result, if we take a look at the assembly for an arbitrary piece of C code, the mapping from that C code to the assembly is not exactly obvious — which makes it hard to deliver this particular lecture, and hard in general to read the assembly for some program and figure out what's really going on.

So what we're going to do today, to understand this translation process, is take a look at how the compiler actually reasons about translating C code into assembly. Now, this is not a compiler class — 6.172 is not the class you take if you want to learn how to build a compiler — and you're not going to need to know everything about a compiler to follow today's lecture. But what we will see is just a little bit about how the compiler understands a program, and later on, how the compiler can translate that program into assembly code.

When a compiler compiles a program, it does so through a sequence of stages, which are illustrated on this slide. Starting from the C code, it first preprocesses that code, dealing with all the macros, and that produces preprocessed source. Then the compiler translates that source code into an intermediate representation. For the compiler that you're using, that intermediate representation is called LLVM IR — LLVM being the name of the underlying compiler, and IR being the creative name for the intermediate representation.
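If you want to poke at those stages yourself, the clang invocations look roughly like this — fib.c here is just a stand-in for whatever source file you're compiling, and the output file names are only conventions; I'll say more about these flags in a moment:

    clang -E fib.c -o fib.i               # preprocess only: expand the macros
    clang -S -emit-llvm fib.c -o fib.ll   # C source -> LLVM IR
    clang -S fib.ll -o fib.s              # LLVM IR -> x86-64 assembly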
That LLVM IR is really a sort of pseudo-assembly. It's kind of like assembly, but as we'll see, it's actually a lot simpler than x86-64 assembly, and that's why we'll use it to understand this translation process. Now, it turns out that the compiler does a whole lot of work on that intermediate representation; we're not going to worry about that today. We'll just skip to the end of the pipeline, when the compiler translates LLVM IR into assembly code.

The nice thing about taking a look at the LLVM IR is that, if you're curious, you can actually follow along with the compiler. It is possible to ask clang to compile your code and give you the LLVM IR rather than the assembly, and the flags to do that are somewhat familiar. Rather than passing the -S flag — which, hopefully, you've already seen translates C code directly to assembly — you pass -S -emit-llvm, and that produces the LLVM IR. You can also ask clang to translate LLVM IR itself directly into assembly code, and that process is pretty straightforward: you just use the -S flag once again, as in the commands above.

So here is the outline of today's lecture. First we're going to start with a simple primer on LLVM IR. I know LLVM IR sounds like another language — oh gosh, we have to learn another language — but don't worry: this primer, I would say, is simpler than the x86-64 primer. Based on the slides, that primer was twenty-some slides long; this primer is six slides, so maybe a little over a quarter of that. Then we'll take a look at how the various constructs in the C programming language get translated into LLVM IR, including straight-line code, C functions, conditionals — in other words, if-then-else — and loops, and we'll conclude that section with just a brief mention of LLVM IR attributes. Finally, we'll take a look at how LLVM IR gets translated into assembly, and for that we'll have to focus on what's called the Linux x86-64 calling convention. We'll conclude with a case study where we see how this whole process works on a very simple code to compute Fibonacci numbers. Any questions so far? All right, let's get started.

A brief primer on LLVM IR. I've shown this in smaller font on some previous slides, but here is a snippet of LLVM IR code — in particular, one function within an LLVM IR file — and just from looking at this code we can see a couple of the basic components of LLVM IR. In LLVM IR we have functions; that's how code is organized, into chunks called functions. Within each function, the operations are encoded as instructions, and each instruction shows up — at least on this slide — on a separate line. Those functions operate on what are called LLVM IR registers; these are kind of like variables, and each of those variables has some associated type. The types are actually explicit within the IR, and we'll take a look at them in more detail in a couple of slides.

Based on that high-level overview, we can do a little bit of a comparison between LLVM IR and assembly language. The first thing we see is that it looks kind of similar to assembly. It still has a simple instruction format: there's some destination operand, which we're calling a register, then an equals sign, then an opcode — be it add or call or what have you — and then some list of source operands. That's roughly what each instruction looks like.
We can also see that the LLVM IR code adopts a similar structure to the assembly code itself, and control flow, once again, is implemented using conditional branches as well as unconditional branches. But one thing we'll notice is that LLVM IR is simpler than assembly. It has a much smaller instruction set, and unlike assembly language, LLVM IR supports an infinite number of registers — if you can name it, it's a register. In that sense, LLVM's notion of a register is a lot closer to C's notion of a variable, and when you read LLVM IR and see those registers, you should just think about C variables. There's no implicit RFLAGS register, there are no implicit condition codes — everything is pretty explicit in the LLVM IR. There's no explicit stack pointer or frame pointer. There's a type system that's explicit in the IR itself and is C-like in nature, and there are C-like functions for organizing the code overall.

So let's take a look at each of these components, starting with LLVM IR registers. A register is basically LLVM's name for a variable. All the data in LLVM IR is stored in these variables, which are called registers, and the syntax is a percent symbol followed by a name — %0, %1, %2, that sort of thing. As I mentioned, LLVM registers are a lot like C variables: LLVM supports an infinite number of them, and each distinct register is distinguished just by its name, so %0 is different from %1 because they have different names. Register names are also local to each LLVM IR function, and in this regard they're also similar to C variables. If you wrote a C program with two functions, a and b, and each function had a local variable apple, those are two different apples — the apple in a is not the same thing as the apple in b. Similarly, if you had two different LLVM IR functions and they both used some register %v, those are two different variables; they're not automatically aliased. Here's an example IR snippet where we've just highlighted all of the registers: some of them are being assigned, because they're on the left-hand side of an equals symbol, and some of them are being used as arguments, when they show up on the right-hand side. There is one catch, which we'll see later on: the syntax for LLVM registers ends up being hijacked when LLVM needs to refer to different basic blocks. We haven't defined basic blocks yet; we'll see what that's all about in just a couple of slides. Going good so far? All right.

LLVM IR code is organized into instructions, and the syntax for these instructions is pretty straightforward. We have a register name on the left-hand side, then an equals symbol, and then an opcode followed by an operand list. For example, the top highlighted instruction has register %6 equal to an add of some arguments; we'll see a little more about those arguments later. That's the syntax for when an instruction actually returns some value — addition returns the sum of its two operands. Other instructions don't return a value per se, not a value you'd store in a local register, and so the syntax for those instructions is just an opcode followed by a list of operands. Ironically, the return instruction that you'd find at the end of a function doesn't assign a particular register value. And of course the operands can be either registers or constants, or, as we'll see later on, they can identify basic blocks within the function.
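Just to make those two syntactic shapes concrete, here's a tiny illustrative fragment — the register numbers and types here are made up for the example rather than taken from the slide:

      %6 = add i64 %4, %5      ; value-producing form: destination register, "=", opcode, operands
      store i64 %6, i64* %2    ; no destination register: just an opcode and its operands
      ret i64 %6               ; likewise, return assigns nothing; it just hands back %6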
The LLVM IR instruction set is smaller than that of x86. x86 contains hundreds of instructions once you start counting up all the vector instructions; LLVM IR is far more modest in that regard. There are some instructions for data movement, including stack allocation, reading memory, writing memory, and converting between types — and that's pretty much it. There are some instructions for doing arithmetic or logic, including integer arithmetic, floating-point arithmetic, boolean logic, bitwise logic, and address calculations. And then there are a couple of instructions for control flow: unconditional branches, or jumps; conditional branches; subroutines, that's call and return; and then there's this magical phi instruction, which we'll see more of later in these slides.

Finally, as I mentioned, everything in LLVM IR has an explicit type — it's a strongly typed language in that sense — and the type system looks something like this. For integers, whenever there's a variable of an integer type, you'll see an i followed by some number, and the number defines the number of bits in that integer. So a variable of type i64 is a 64-bit integer; a variable of type i1 is a 1-bit integer, in other words a boolean value. There are also floating-point types, such as double and float. There are pointer types, when you follow an integer or floating-point type with a star, much like in C. You can have arrays, which use a square-bracket notation: within the brackets you'll have some number, then an "x", then some other type — maybe a primitive type like an integer or floating point, maybe something more complicated. You can have structs in LLVM IR, which use curly brackets with the types enumerated on the inside. You can have vector types, which use angle brackets and otherwise adopt a similar syntax to the array type. Finally, you can occasionally see a variable that looks like an ordinary register except that its type is label, and that actually refers to a basic block.

Those are the basic components of LLVM IR. Any questions so far? Everything clear? Anything unclear that should be unclear? We'll talk about it. Yeah — is the vector notation used for the vector registers? In a sense, yes. The vector operations within LLVM don't look like SSE or AVX per se; they look more like ordinary operations, except those ordinary operations work on a vector type. That's how vector operations show up in LLVM IR. Does that make some sense? Cool. Anything else? OK, that's the whole primer — that's pretty much all of the language you're going to need to know, at least for this slide deck; we'll cover some of the details as we go along.

Let's start translating C code into LLVM IR. All right, let's start with pretty much the simplest thing we can: straight-line C code. What do I mean by straight-line C code? I mean a blob of C code that contains no conditionals or loops — it's just a sequence of operations, and that sequence of operations in C turns into a sequence of operations in LLVM IR. In this example we have foo(n - 1) + bar(n - 2); that is a sequence of operations, and it turns into the LLVM IR on the right. We can see how that happens. There are a couple of rules of thumb when reading straight-line C code and interpreting it in the IR: the arguments to any operation are evaluated before the operation itself.
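For reference, the IR being described looks roughly like this — the register numbering is illustrative, and I'm assuming foo and bar each take and return a 64-bit integer:

      %4 = add nsw i64 %0, -1       ; compute n - 1  (n arrived in %0)
      %5 = call i64 @foo(i64 %4)    ; foo(n - 1)
      %6 = add nsw i64 %0, -2       ; compute n - 2
      %7 = call i64 @bar(i64 %6)    ; bar(n - 2)
      %8 = add nsw i64 %5, %7       ; the final sum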
What do I mean by that? Well, in this case we need to evaluate n - 1 before we pass the result to foo, and what we see in the LLVM IR is an addition operation that computes n - 1; the result of that, stored into register %4, gets passed to the call instruction on the next line, which calls out to the function foo. Sound good? Similarly, we need to evaluate n - 2 before passing its result to the function bar, and we see that sequence of instructions showing up next in the LLVM IR. Yeah, question — nsw? That is essentially an attribute, which we'll talk about later. These are things that decorate the instructions, as well as the types, within LLVM, basically as the compiler figures stuff out; it helps the compiler along with analysis and optimization. Good. So for the last operation here, we had to evaluate both foo and bar and get their return values before we could add them together, and so the very last operation in this sequence is the addition that takes those return values and computes their sum.

Now, all of that used primitive types — in particular, integers — but it's possible that your code uses aggregate types. By aggregate types I mean arrays or structs, that sort of thing. Aggregate types are harder to store in registers, generally speaking, and so they're typically stored in memory. As a result, if you want to access something within an aggregate type — if you want to read some element out of an array — that involves performing a memory access, or more precisely, computing some address into memory and then loading from or storing to that address. So here, for example, we have an array A of seven integers, and we're going to access A[x]. In LLVM IR that turns into two instructions: this getelementptr instruction, followed by a load. The getelementptr computes an address into memory and stores that address into a register — in this case, register %5. The next instruction, the load, takes the address stored in %5 and simply loads from that memory address, storing the result into another register, in this case %6. Pretty simple. When reading the getelementptr instruction, the basic syntax involves a pointer into memory followed by a sequence of indices, and all getelementptr really does is compute an address by taking that pointer and adding on that sequence of indices. In this case we have a getelementptr instruction that takes the address in register %2 — that's a pointer into memory — and adds onto it two indices: one is the literal value 0, and the other is the value stored in register %4. That just computes the address starting at %2, plus 0, plus whatever was in %4. So that's all for straight-line code. Good so far? Feel free to interrupt if you have questions. Cool.

Functions — let's talk about C functions. When there's a function in your C code, generally speaking you'll have a function within the LLVM code as well, and similarly, when there's a return statement in the C code, you'll end up with a return statement in the LLVM IR. Here we have just the bare-bones C code for this fib routine, which corresponds to this fib function within the LLVM IR, and the function declaration itself looks pretty similar to what you'd get in ordinary C. The return statement is also similar; it may take an argument if you're returning some value to the caller.
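For reference, the shape being described is roughly this — fib takes a 64-bit integer and returns one, %0 is the parameter, and the body is elided here since we'll see the whole thing at the end of the lecture:

    define i64 @fib(i64 %0) {
      ...
      ret i64 %0        ; return the 64-bit integer currently in register %0
    }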
In this case, for the fib routine, we're going to return a 64-bit integer, and so we see that this return statement returns the 64-bit integer stored in register %0 — a lot like in C. Functions can have parameters, and when you have a C function with a list of parameters, in LLVM IR you end up with a similar-looking function with the exact same list of parameters, just translated into LLVM IR. So here we have the C code for the mm_base routine and the corresponding LLVM IR for an mm_base function, and what we see is a pointer to a double as the first parameter, followed by a 32-bit integer, followed by another pointer to a double, another 32-bit integer, another pointer to a double, another 32-bit integer, and one more 32-bit integer. One implicit thing in LLVM IR: if you're looking at a function declaration or definition, the parameters are automatically named %0, %1, %2, and so on. There's one unfortunate thing about LLVM IR: the registers are a lot like C variables, but that implies that reading LLVM IR is a lot like reading the code from your teammate who always insists on naming things with nondescript single-letter variable names — and that teammate doesn't comment their code, either.

OK, so, basic blocks. When you look at the code within a function, that code gets partitioned into chunks called basic blocks. A basic block has the property that it's a sequence of instructions — in other words, a blob of straight-line code — where control can only enter from the first instruction in that block and can only leave from the last instruction in that block. So here we have the C code for this routine, fib.c — we're going to see a lot of this routine fib.c, by the way — and we have the corresponding LLVM IR. What the C code is telling us is that if n is less than 2, you want to do one thing; otherwise you want to do some more complicated computation and then return that result. If we think about that, we've got a branch in our control flow, and we'll end up with three different blocks within the LLVM IR: one block that does the computation "is n less than 2"; another block that says, well, in one case, just go ahead and return something — in this case the input to the function; and, in the other case, a block that does some complicated calculation, some straight-line code, and then returns that result.

Now, when we partition the code of a function into these basic blocks, we actually get connections between the basic blocks based on how control can move between them. The control-flow instructions — in particular the branch instructions, as we'll see — induce edges among these basic blocks. Whenever there's a branch instruction, it can specify that control leaves this basic block and goes to that other basic block, or to that other one, or maybe to one or the other, depending on how the result of some computation unfolded. So for the fib function that we saw before, we had those three basic blocks, and based on whether or not n was less than 2, either we would execute the simple return statement or we would execute the blob of straight-line code shown on the left. So those are basic blocks and functions. Everyone still good so far? Any questions? Clear as mud. All right, let's talk about conditionals.
We've already seen one of these conditionals — it's the one that gave rise to those basic blocks and those control-flow edges — so let's tease that apart a little bit further. When we have a C conditional — in other words, an if-then-else statement, or a switch statement for that matter — it gets translated into a conditional branch instruction, br, in the LLVM IR representation. What we saw before is that we had this "if n less than 2" and this basic block with two outgoing edges. If we take a really close look at that first basic block, we can tease it apart and see what each operation does. First, in order to do this conditional operation, we need to compute whether or not n is less than 2: we need to do a comparison between n and the literal value 2, and that comparison operation turns into an icmp instruction — an integer comparison — in the LLVM IR. The result of that comparison then gets passed to a conditional branch as one of its arguments, and the conditional branch specifies a couple of things beyond that one argument. In particular, the conditional branch takes that one-bit integer — that boolean result — as well as the labels of two different basic blocks. The boolean value is called the predicate, and in this case it's the result of that comparison from before. The two basic block labels say where to go if the predicate is true and where to go if the predicate is false: the first label is the destination when it's true, the second label is the destination when it's false. Pretty straightforward. And if we map this onto the control-flow graph we were looking at before, we can identify the two branches coming out of our first basic block as either the true branch or the false branch, based on whether you follow that edge when the predicate is true or when it's false. This should be straightforward — let me know if it's confusing.

Now, it's also possible to have an unconditional branch in LLVM IR: you can have a branch instruction with just one operand, and that one operand specifies a basic block. There's no predicate, no true or false — the instruction just says, when you get here, now go to that other basic block. This might seem kind of silly, right? Why would we need to jump to another basic block — why not just merge this code with the code of the subsequent basic block? Any thoughts? Correct: other things might also go to that basic block. In general, when we look at the structure we get for any particular conditional in C, we end up with this sort of diamond shape, and in order to implement that diamond shape we need these unconditional branches — so there's a good reason for them to be around.

Here we have an example of a slightly more complicated conditional that creates this diamond shape in our control-flow graph, so let's tease this piece of code apart. In the first block we're going to evaluate some predicate — in this case our predicate is x bitwise-and 1 — and what we see in the first basic block is that we compute the bitwise and, store that result, and do a comparison between that result and the literal value 1. That gives us a boolean value, which is stored in register %3, and we branch conditionally on whether %3 is true or false. In the case that it's true, we'll branch to block 4, and block 4 contains the code for the consequent — the "then" clause of the if-then-else — and in the consequent we just call the function foo.
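To help you follow the rest of this walkthrough, the shape of the IR is roughly this — the block numbers follow the ones mentioned on the slide; I'm assuming foo and bar take no arguments here, and the details of what block 6 actually returns are elided:

      %2 = and i64 %0, 1              ; x & 1
      %3 = icmp eq i64 %2, 1          ; the predicate
      br i1 %3, label %4, label %5    ; true -> block 4, false -> block 5
    4:                                ; consequent: the "then" side
      call void @foo()
      br label %6                     ; leave the conditional
    5:                                ; alternative: the "else" side
      call void @bar()
      br label %6                     ; leave the conditional
    6:                                ; the merge point at the bottom of the diamond
      ...                             ; return the result (elided)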
Then we need to leave the conditional, so we just branch unconditionally. The alternative: if x & 1 is 0 — if the predicate is false — then we execute the function bar, but then we also need to leave the conditional, and so we see in block 5, following the false branch, that we call bar and then just branch to block 6. Finally, in block 6, we return the result. So we end up with this diamond pattern whenever we have a conditional. In general, we may delete certain basic blocks if the conditionals and the code are particularly simple, but in general it's going to be this kind of diamond-looking thing. Everyone good so far?

One last C construct: loops. Unfortunately, this is the most complicated C construct when it comes to the LLVM IR, but things haven't been too bad so far, so let's walk into this with some confidence. The simple part is that the C code for a loop translates into LLVM IR that, in the control-flow-graph representation, is a loop. So a loop in C is literally a loop in this graph representation, which is kind of nice. But to figure out what's really going on with these loops, let's first tease apart the components of a C loop, because we have a couple of different pieces in an arbitrary C loop: we have a loop body, which is what's executed on each iteration, and then we have some loop control, which manages all the iterations of that loop. In this case we have a simple C loop that multiplies each element of an input vector x by some scalar a and stores the results into y. The body gets translated into a blob of straight-line code — I won't step through all of that straight-line code just now; there's plenty of it, and you'll be able to see the slides after this lecture — but that blob of straight-line code corresponds to the loop body, and the rest of the code in the LLVM IR snippet corresponds to the loop control. So the initial assignment of the induction variable, the comparison with the end of the loop, and the increment operation at the end — all of that gets encoded in the stuff highlighted in yellow, the loop-control part.

Now, if we take a look at this code, there's one odd piece that we haven't really understood yet, and it's this phi instruction at the beginning. The phi instruction is weird; it arises pretty commonly when you're dealing with loops, and it's basically there to solve a problem with LLVM's representation of the code. So before we describe the phi instruction, let's take a look at the problem that the phi instruction tries to solve. Let's first tease the loop apart to reveal the problem. The C loop produces this looping pattern in the control-flow graph — literally an edge that goes back to the beginning. If we look at the different basic blocks, we have one block at the beginning which initializes the induction variable and checks whether there are any iterations of the loop that need to be run at all. If there aren't any iterations, it branches directly to the end of the loop — it just skips the loop entirely, no need to try to execute any of that code — and in this case it would simply return. Then, inside the loop block, we have two incoming edges: one from the entry point of the loop, where i has just been set to zero, and another where we're repeating the loop — where we've decided there's one more iteration to execute, and we're going back from the end of the loop to the beginning. And that back edge is what creates the loop structure in the control-flow graph.
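As a reminder, the loop we've been picking apart is something like this — the exact types and variable names on the slide may differ:

    for (int64_t i = 0; i < n; ++i) y[i] = a * x[i];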
Does that structure make sense? All right, I see at least one nod over there, so that's encouraging. OK, so if we take a look at the loop control, there are a couple of components to it: there's the initialization of the induction variable, there's the condition, and there's the increment. The condition says when you exit; the increment updates the value of the induction variable. We can translate each of these components from the C code for the loop control into the LLVM IR code for that loop. For the increment, we'd expect to see some sort of addition, where we add 1 to some register somewhere — and lo and behold, there's an add operation, so we'll call that the increment. For the condition, we expect some comparison operation and a conditional branch based on that comparison — and look at that, right after the increment there's a compare and a conditional branch that will either take us back to the beginning of the loop or out of the loop entirely. And we do see some form of initialization: the initial value of this induction variable is 0, and we do see a 0 among this loop-control code — it's kind of squirreled away in that weird notation sitting next to the phi instruction.

What's not so clear here is: where exactly is the induction variable? We had the single variable i in our C code, and what we're looking at in the LLVM IR is a whole bunch of different registers. We have a register that stores what we're claiming to be i + 1, then we do this compare-and-branch thing, and then we have this phi instruction that takes either 0 or the result of the increment. Where did i actually go? The problem here is that i is really represented across all of those instructions, and that happens because the value of the induction variable changes as you execute the loop. The value of i is different on iteration 0 versus iteration 1 versus iteration 2, and so on and so forth — i is changing as you execute the loop. So if we try to map the induction variable onto the LLVM IR, it kind of maps to all of these locations: it maps to various uses in the loop body, and it maps, roughly speaking, to the return value of this phi instruction — even though we're not sure what that's all about yet — and we can tell it maps there because we're going to increment that value later on and then use it in a comparison. So it kind of maps all over the place, because it changes values with the increment operation.

So why does it keep changing registers? Well, we have this property in LLVM that each instruction defines the value of a register at most once. For any particular register within LLVM, we can identify the unique place in the code of the function that defines that register's value. This invariant is called static single assignment, and it seems a little bit weird, but it turns out to be an extremely powerful invariant within the compiler: it assists with a lot of the compiler's analysis, and it can also help with reading the LLVM IR, if you know to expect it. So this is a nice invariant, but it poses a problem when we're dealing with induction variables, which change as the loop unfolds. What happens when control flow merges — at the entry point of a loop, for example? How do we define what the induction variable is at that location?
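Just so we have the pattern in front of us, the loop control being described looks roughly like this — the register and block numbers are illustrative, following the ones mentioned on the slide, and %1 is standing in for the loop bound n:

    8:                                      ; the loop block
      %9 = phi i64 [ 0, %6 ], [ %14, %8 ]   ; this is the piece we're about to explain
      ...                                   ; the loop body: y[i] = a * x[i]
      %14 = add nsw i64 %9, 1               ; the increment: i + 1
      %15 = icmp eq i64 %14, %1             ; the condition: was that the last iteration?
      br i1 %15, label %16, label %8        ; exit the loop, or take the back edge around again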
It could either be 0, if this is the first time through the loop, or whatever you last incremented — and the solution to that problem is the phi instruction. The phi instruction defines a register that says: depending on how you got to this location in the code, this register will have one of several different values. The phi instruction simply lists what the value of that register will be, depending on which basic block you came from. So in this particular code, the phi instruction says: if you came from block 6 — which was the entry point of the loop, where you initially checked whether there were any loop iterations to perform at all — then this register, %9, is going to adopt the value 0. If, however, you followed the back edge of the loop, then the register is going to adopt the value in, in this case, %14 — and %14, lo and behold, is the increment operation. So this phi instruction says: either you're going to start from 0, or you're going to be i + 1. Just to note, the phi instruction is not a real instruction; it's really a solution to a problem within LLVM, and when you translate this code into assembly, the phi instruction isn't going to map to any particular assembly instruction. It's really a representational trick. Does that make some sense? Any questions about that?

Yeah — why is it called phi? That's a great question. I actually don't know why they chose the name phi; I don't think they had a particular affinity for the golden ratio, but I'm not sure what the rationale was. I don't know if anyone else knows. Google knows all — sort of. Yeah, so it adopts the value 0 from block 6, or %14 from block 8.

So that's all of the basic components of C translating into LLVM IR. The last thing I want to leave you with in this section on LLVM IR is a discussion of these attributes, and we already saw one of them before — that nsw thing attached to the add instruction. In general, these LLVM IR constructs might be decorated with extra keywords, and those are the keywords I'm referring to as attributes. Those attributes can convey a variety of information. In this case, what we have here is C code that performs this memory calculation, which you might have seen in a previous lecture, and what we see in the corresponding LLVM IR is that there's some extra stuff tacked onto the load instruction where you load from memory. One of those pieces of extra information is this "align 4", and that align 4 attribute describes the alignment of that read from memory. If subsequent stages of the compiler can exploit that information — if they can optimize reads that are 4-byte aligned — then this attribute says: here is a load you can go ahead and optimize.

There are a bunch of places where attributes might come from. Some of them are derived directly from the source code: if you write a function that takes a parameter marked const or marked restrict, then in the LLVM IR you might see that the corresponding function parameter is marked noalias — because the restrict keyword said this pointer can never alias — or marked readonly, because the const keyword says you're only ever going to read from this pointer. In that case the source code itself, the C code, was the source of the information for those attributes. There are other attributes that occur simply because the compiler is smart and does some clever analysis.
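Before the next example, here's a little made-up function just to show where these attributes land syntactically — the function itself is hypothetical; noalias and readonly are the kinds of parameter attributes that come from restrict and const, and align 4 annotates the load:

    define void @copy_one(i32* noalias %p, i32* readonly %q) {
      %1 = load i32, i32* %q, align 4    ; "align 4": this read is known to be 4-byte aligned
      store i32 %1, i32* %p, align 4
      ret void
    }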
In this next case, for example, the LLVM IR has a load operation that's 8-byte aligned, and it was really analysis that figured out the alignment of that load. Good so far? Cool.

So let's summarize this part of the discussion — what we've seen about LLVM IR. LLVM IR is similar to assembly, but a lot simpler in many, many ways. All the computed values are stored in registers, and really, when you're reading LLVM IR, you can think of those registers a lot like ordinary C variables. LLVM IR is a little bit funny in that it adopts this static-single-assignment paradigm, the invariant that each register name — each variable — is written by at most one instruction within the LLVM IR code. So if you're ever curious where, say, %14 is defined within this function, just do a search for where %14 is on the left-hand side of an equals sign, and there you go. We can model a function in LLVM IR as a control-flow graph whose nodes correspond to basic blocks — those blobs of straight-line code — and whose edges denote control flow among those basic blocks. And compared to C, LLVM IR is pretty similar, except that all of the operations are explicit. The types are explicit everywhere; the integer sizes are all apparent — you don't have to remember that int really means a 32-bit integer, or that you need int64_t or a long to get a 64-bit integer; it's just i and then a bit width. There are no implicit operations at the LLVM IR level; all the type casts are explicit. In some sense, LLVM IR is like assembly, if assembly were more like C — and that's definitely a statement that would not have made sense forty minutes ago.

All right, so we've seen how to translate C code into LLVM IR. There's one last step: we want to translate the LLVM IR into assembly. It turns out that, structurally speaking, LLVM IR is very similar to assembly — we can more or less map each line of LLVM IR to some sequence of lines in the final assembly code — but there is some additional complexity. The compiler isn't done with its work yet when it's compiling C to LLVM IR to assembly, and there are three main tasks the compiler still has to perform in order to generate x86-64. First, it has to select the actual x86 assembly instructions that will implement the various LLVM IR operations. It has to decide which general-purpose registers will hold which values, and which values need to be squirreled away into memory because it just has no other choice. And it has to coordinate all of the function calls — not just the function calls within this particular source file, but also the calls between that source file and other source files you're compiling, and binary libraries that are just sitting on the system, which the compiler never really gets to touch. Coordinating all of those calls is a bit complicated; it's going to be the reason for a lot of the remaining complexity, and it's what brings our discussion to the Linux x86-64 calling convention. This isn't a very fun convention, but it's useful.

To talk about this convention, let's first take a look at how a program gets laid out in memory when you run it. When a program executes, its virtual memory gets organized into a whole bunch of chunks called segments. There's a segment that corresponds to the stack; that's located near the top of virtual memory, and it grows downward — the stack grows down, remember this.
There is also a heap segment, which grows upward from a middle location in memory, and those two dynamically allocated segments live at the top of the virtual address space. There are then two additional segments: the bss segment, for uninitialized data, and the data segment, for initialized data. Finally, at the bottom of the virtual address space, there is a text segment, and that just stores the code of the program itself.

Now, when you read assembly code directly, you'll see that it contains more than just some labels and some instructions; in fact, it's decorated with a whole bunch of other stuff, and these are called assembler directives. These directives operate on different sections of the assembly code. Some of those directives refer to the various segments of virtual memory, and those segment directives are used to organize the content of the assembly file. For example, the .text directive identifies a chunk of the assembly that is really code and should be located in the text segment when the program runs; the .bss directive identifies stuff that lives in the bss segment; the .data directive identifies stuff in the data segment; and so on. There are also storage directives, which store content of some variety directly into whatever segment was last identified by a segment directive. So if at some point there's a line "x: .space 20", that .space directive says to allocate some amount of memory — in this case, 20 bytes — and we're going to label that location x. The .long directive stores a constant long integer value — 172, in this example — at location y. The .asciz directive similarly stores a string at a particular location; here we're storing the string "6.172" at location z. There's an .align directive that aligns the next content in the assembly file to, in this case, an 8-byte boundary. There are additional directives for the linker to obey, and those are the scope and linkage directives. For example, you might see .globl in front of a label, and that signals to the linker that that particular symbol should be visible to the other files the linker touches. In this case, .globl fib makes fib visible to the other object files, and that allows those other object files to call or refer to this fib location.

Now let's turn our attention to the segment at the top: the stack segment. This segment is used to store data in memory in order to manage function calls and returns. That's a nice high-level description, but what exactly ends up in the stack segment — why do we need a stack; what data will end up going there? Can anyone tell me? Local variables in a function — anything else? You already answered once; maybe I'll call on you again, but go ahead. Function arguments, very good. Anything else? I thought I saw a hand over here — the return address. Anything else? There's at least one other important thing that gets stored on the stack. The return value? Actually, that one's interesting: it might be stored on the stack, but it might not be — good guess, though. Intermediate results — in a manner of speaking, yes. There are more intermediate results than meet the eye when it comes to assembly, compared to C, but by intermediate results let's say register state. There are only so many registers on the machine, and sometimes that's not enough.
So the function may want to squirrel away some data that's in registers and stash it somewhere, in order to read it back later, and the stack is a very natural place to do that — it's the dedicated place to do it. So that's pretty much all of the content of what ends up on the call stack as the program executes.

Now, here's the thing: there are a whole bunch of functions in the program. Some of them may have been defined in the source file that you're compiling right now; some of them might be defined in other source files; some of them might be defined in libraries that were compiled by someone else, possibly using a different compiler with different flags and different parameters — presumably for this architecture, at least, one hopes — but those libraries are completely out of your control. And now we have this problem: all of those object files might define these functions, and those functions want to call each other, regardless of where they happen to be defined. Somehow we need to coordinate all those function calls and make sure that if one function wants to use these registers and another function wants to use the same registers, those functions aren't going to interfere with each other — or if they both want to use stack memory, they're not going to clobber each other's stacks. So how do we deal with this coordination problem? At a high level, what's the strategy we're going to adopt? That will be part of it — that's a component of the higher-level strategy — but what else? Yeah, good: a calling convention. You remembered the title of this section of the talk, great. We're going to make sure that every single function, regardless of where it's defined, abides by the same calling convention: it's a standard that all the functions obey, in order to make sure they all play nicely together.

So let's unpack the Linux x86-64 calling convention — well, not the whole thing, because it's actually pretty complicated, but at least enough to understand the basics of what's going on. At a high level, this calling convention organizes the stack segment into frames, such that each function instantiation — each time you call a function — gets a single frame all to itself. To manage all those stack frames, the calling convention uses these two pointers, %rbp and %rsp, which you should have seen last time: %rbp, the base pointer, points to the top of the current stack frame; %rsp, the stack pointer, points to the bottom of the current stack frame — and remember, the stack grows down. Now, when the code executes call and return instructions, those instructions operate on the stack and on these various stack pointers, as well as the instruction pointer, %rip, in order to manage the return address of each function. In particular, when a call instruction gets executed in x86, that call instruction pushes the current value of %rip onto the stack — that becomes the return address — and then the call instruction jumps to its operand, that operand being the address of some function in the program's memory. Or at least one hopes — perhaps there was buffer-overflow corruption of some kind and your program is in dire straits, but presumably it's the address of a function. The return instruction complements the call, and it undoes the operations of that call instruction: it pops the return address off the stack and puts it into %rip.
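In assembly, that pairing is just the call and ret instructions. Here's a minimal sketch, with fib standing in for whichever function is being called:

        callq fib        # push the return address (the current %rip), then jump to fib
        ...              # after fib returns, execution resumes right here
    fib:
        ...              # the body of fib
        retq             # pop the return address off the stack back into %rip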
That causes execution to return to the caller and resume from the statement right after the original call. So that's the high level of how the stack, as well as the return address, gets managed. How about maintaining registers across all of those calls? Well, there's a bit of a problem, because we might have two different functions that want to use the same registers. Some of this might be review from 6.004, by the way, but if it's not and you have questions, just let me know. So we have this problem where two different functions — function A, which might call another function B — might want to use the same registers. Who's responsible for making sure that when B operates on the same registers as A, A doesn't end up with corrupted state in its registers once B is done? Well, there are two different strategies that could be adopted. One is to have the caller save off the register state before invoking a call, but that has some downsides: the caller might waste work, saying, "I have to save all of this register state in case the function I'm calling wants to use those registers" — and if the called function doesn't use those registers, that was a bunch of wasted work. On the other side, we might say, let's just have the callee save all that register state — but that could waste work if the callee saves off register state the caller wasn't even using. The callee says, "I want to use all these registers, I don't know what the calling function used, so I'm just going to push everything onto the stack" — and that could be a lot of wasted work too.

So what does the x86 calling convention do, if you had to guess? Yeah, that's exactly right: it does a little bit of both. It specifies some of the registers as callee-saved registers, and the rest of the registers are caller-saved. The caller is responsible for saving some stuff, the callee is responsible for saving other stuff, and if either of those functions doesn't need one of those registers, then it can avoid the wasted work. In x86-64, in this calling convention, it turns out that the %rbx, %rbp, and %r12 through %r15 registers are all callee-saved, and the rest of the registers are caller-saved. In particular, the C linkage defined by this calling convention for all the registers looks something like this, and it identifies lots of stuff: a register for storing the return value, registers for storing a bunch of the arguments, caller-saved registers, callee-saved registers, a register just for linking. I don't expect you to memorize this in 12 seconds, and I won't say what the course staff will do on quizzes this year, but there you go — you'll have these slides later, and you can practice memorizing them. Not shown on the slide, there are a couple of other registers used for passing function arguments and return values; in particular, whenever you're passing floating-point values around, the XMM registers 0 through 7 are used to deal with those floating-point values.

Cool. So we have strategies for maintaining the stack, and we have strategies for maintaining register state, but we still have the situation where functions may want to use overlapping parts of stack memory, and so we need to coordinate how all those functions use the stack memory itself. This is a bit hard to describe in the abstract; the cleanest way I know to describe it is to work through an example. So here's the setup.
Let's imagine that we have some function A that has called a function B; we're in the midst of executing function B, and now function B is about to call some other function C. As we've mentioned before, B has a frame all to itself, and that frame contains a whole bunch of stuff: it contains arguments that A passed to B, it contains a return address, it contains a base pointer, it contains some local variables, and, because B is about to call C, it's also going to contain some data for arguments that B will pass to C. So that's our setup: we have one function ready to call another.

Let's take a look at how this stack memory is organized. At the top we have what's called a linkage block. This is the region of stack memory where function B will access the non-register arguments from its caller, function A, and it accesses these by indexing off of the base pointer %rbp using positive offsets — again, the stack grows down. B also has, after the linkage block and the return address and base pointer, a region of its frame for local variables, and it can access those local variables by indexing off of %rbp in the negative direction. Stack grows down — if you don't remember anything else, stack grows down.

Now, B is about to call the function C, and we want to see how all of this unfolds. Before calling C, B places the non-register arguments for C onto a reserved linkage block in its own stack memory, below its local variables, and it accesses those by indexing off %rbp with negative offsets. Those will be the arguments from B to C. Then B calls C, and as we saw before, the call instruction saves the return address onto the stack and then branches control to the entry point of the function C. When the function C starts, it executes what's called a function prologue, and the function prologue consists of a couple of steps. First, it saves off the base pointer of B's stack frame — it squirrels away the value of %rbp onto the stack. Then it sets %rbp equal to %rsp, because we're now entering a brand-new frame, the frame for this invocation of C. And then C can go ahead and allocate the space it needs on the stack: space that C needs for its own local variables, as well as space that C will use for any linkage blocks for the things that it calls.

Now, there is one common optimization the compiler will attempt to perform. If a function never needs to perform stack allocations except to handle these function calls — in other words, if the difference between %rbp and %rsp is a compile-time constant — then the compiler might go ahead and just get rid of %rbp and do all of the indexing off the stack pointer %rsp instead. The reason it does that is that it gets one more general-purpose register out of the deal: now %rbp is general purpose, and the compiler has one extra register to use for all of its calculations. Reading from a register takes unit time; reading from even L1 cache takes significantly more — I think four times that amount — so this is a common optimization the compiler will want to perform.

It turns out there's a lot more to the calling convention than just what's shown on these slides; we're not going to go through all of that today. If you'd like more details, there's a nice document, the System V ABI, that describes the whole calling convention.
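Before we move on, here's the prologue we just described, together with its mirror-image epilogue, which we'll see again at the very end — the $32 is a made-up amount of local space, and some_function is just a placeholder name:

    some_function:
        pushq %rbp           # prologue: save the caller's base pointer
        movq  %rsp, %rbp     # the new frame starts here
        subq  $32, %rsp      # reserve space for locals and linkage blocks
        ...                  # the body of the function
        addq  $32, %rsp      # epilogue: give that space back
        popq  %rbp           # restore the caller's base pointer
        retq                 # and return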
Any questions so far? All right, so let's wrap all of this up with a final case study, and take a look at how all these components fit together when we translate a simple C function to compute Fibonacci numbers all the way down to assembly. As we've been doing this whole time, we're going to take this in two steps.

Let's describe our starting point, fib.c. There should be basically no surprises here at this point. This is a C function, fib, which computes the nth Fibonacci number — in one of the worst computational ways possible, it turns out. It computes the Fibonacci number F(n) recursively, using the formula that F(n) is equal to n when n is 0 or 1, and otherwise it computes F(n - 1) and F(n - 2) and takes their sum. This is an exponential-time algorithm to compute Fibonacci numbers — I would say don't run this at home, except invariably you'll run this at home. There are much faster algorithms to compute Fibonacci numbers, but this is good enough for a didactic example; we're not really worried about how fast we can compute fib today. Now, the C code for fib.c is even simpler than the recurrence implies: we're not even going to bother checking that the input value n is non-negative. What we're going to do is say, look, if n is less than 2, go ahead and return that value; otherwise do the recursive thing. We've already seen this code a couple of times. Everyone good so far? Any questions on these three lines? Great.

All right, so let's translate fib.c into fib.ll. We've seen a lot of these pieces in the lecture so far, and here we've just rewritten fib.c a little bit to make drawing all the lines a little simpler. So here we have the C code for fib.c, and the corresponding LLVM IR looks like this. As we could guess from looking at the code, we have this conditional and then two different things that might occur based on whether or not n is less than 2, and so we end up with three basic blocks within the LLVM IR. The first basic block checks whether n is less than 2 and then branches based on that result — we've seen how all that works previously. If n happens to be less than 2, then the consequent — the true case of that branch — ends up showing up at the end, and all it does is return the input value, which is stored in register %0. Otherwise, it does some straight-line code to compute fib(n - 1) and fib(n - 2), takes those return values, adds them together, and returns that result — that's the nth Fibonacci number. So that gets us from C code to LLVM IR. Questions about that? Figure out whether n is less than 2, call fib twice, add them, return it — we're good.

OK, so one last step: we want to compile the LLVM IR all the way down to assembly. As I alluded to before, roughly speaking, the structure of the LLVM IR resembles the structure of the assembly code; there's just extra stuff in the assembly code. So we're going to translate the LLVM IR more or less line by line into the assembly code and see where that extra stuff shows up. At the beginning, we were defining a function fib, and in the assembly code we make sure that fib is a globally accessible function using some assembler directives — the .globl fib directive — and we do an alignment to make sure the function lands in a nice location in instruction memory. Then we declare the symbol fib, which just defines where this function lives in memory.
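Since the slide itself isn't visible here, this is a rough reconstruction of the assembly being walked through below — the label names, the alignment directive, and the exact instruction selection (especially how n - 2 gets computed) may differ from the slide, but the overall shape is what's described:

        .globl fib               # make fib visible to the linker
        .p2align 4               # align the function in instruction memory
    fib:
        pushq %rbp               # prologue: save the caller's base pointer
        movq  %rsp, %rbp
        pushq %r14               # save the callee-saved registers we're about to use
        pushq %rbx
        movq  %rdi, %rbx         # n arrives in %rdi; stash it in %rbx
        cmpq  $2, %rdi           # is n < 2?
        jge   LBB0_1             # if n >= 2, go do the recursive case
        movq  %rdi, %rax         # true side: the return value is just n
        jmp   LBB0_3
    LBB0_1:                      # false side: the recursive straight-line code
        leaq  -1(%rbx), %rdi     # n - 1 into the argument register
        callq fib                # fib(n - 1), result in %rax
        movq  %rax, %r14         # save it before the next call clobbers %rax
        addq  $-2, %rbx          # n - 2 (the slide computes this with an add)
        movq  %rbx, %rdi
        callq fib                # fib(n - 2)
        addq  %r14, %rax         # fib(n - 1) + fib(n - 2)
    LBB0_3:
        popq  %rbx               # epilogue: restore the callee-saved registers
        popq  %r14
        popq  %rbp
        retq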
The next thing that we see here are these two instructions: a pushq %rbp and a movq %rsp, %rbp. Who can tell me what these do? Yes — cool. Does that sound like a familiar thing we described earlier in this lecture? Yep, it's part of the calling convention. This is part of the function prologue: save off rbp, and then set rbp equal to rsp. So we already have a couple of extra instructions that weren't in the LLVM IR but must be in the assembly in order to coordinate everyone.

OK, so now we have these two instructions, and next we're going to push a couple more registers onto the stack. Why does the assembly do this? Any guesses? Yeah — callee-saved registers. Yes, callee-saved registers. The fib routine, we're guessing, will want to use r14 and rbx during its calculation, and so if there were interesting values in those registers, save them off onto the stack; presumably we'll restore them later. Then we have this move instruction, from rdi into rbx. This requires a little bit more arcane knowledge, but any guesses as to what this is for? rdi is the argument to the function — exactly, that's the arcane knowledge. This is implicit in the assembly, which is why you either have to memorize that huge chart of GPR and C-linkage nonsense or look it up. All this operation does is take whatever that argument was and squirrel it away into the rbx register, for some purpose we'll find out about soon.

Then we have this instruction, and it corresponds to the highlighted instruction on the left, in case that gives any hints. What does this instruction do? Sorry — correct, it evaluates the predicate. It's just gonna do a comparison between the value of n and the literal value 2. Very good. So, based on the result of that comparison — if you recall from last lecture, the result of a comparison will set some bits in this implicit EFLAGS, or RFLAGS, register, and based on the setting of those bits, the various conditional jumps that occur next in the code will have varying behavior. So in case the comparison comes out false — if n is in fact greater than or equal to 2 — then the next instruction, this jge, will jump to the label LBB0_1. You can tell already that reading assembly is super fun. Now, that's a conditional jump, and it's possible that the setting of the bits in RFLAGS doesn't come out true for that condition code, and so it's possible that the code would just fall through past this jge instruction and instead execute these operations, and these operations correspond to the true side of the LLVM branch, when n is less than 2. This will move n into rax and then jump to the label LBB0_3. Any guesses as to why it moves n into rax? Yeah, that's the return value — exactly, the return value. If you can return a value through registers, it'll return it through rax. Very good.

So now we see this label, LBB0_1. That's the label, as we saw before, for the false side of the LLVM branch, and the first thing after that label is this operation: leaq -1(%rbx), %rdi. Any guesses as to what that's for? The corresponding LLVM IR is highlighted on the left, by the way. This is the LEA instruction — LEA means load effective address. All LEA does is an address calculation, but something that compilers really like to do is exploit the LEA instruction to do simple integer arithmetic, as long as that arithmetic fits with the things that LEA can actually compute. So all this instruction is doing is adding negative 1 to rbx — and rbx, as we recall, stores the input value of n — and storing the result into rdi.
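For reference, here is roughly the listing being stepped through — a hand reconstruction in AT&T syntax, so the exact label numbers and instruction selection in the compiler's actual output may differ slightly.

    fib:
        pushq   %rbp               # function prologue: save the caller's base pointer
        movq    %rsp, %rbp         # and start a new frame
        pushq   %r14               # save the callee-saved registers fib wants to use
        pushq   %rbx
        movq    %rdi, %rbx         # rdi holds the argument n; stash it in rbx
        cmpq    $2, %rdi           # compare n with 2
        jge     LBB0_1             # n >= 2: go do the recursive case
        movq    %rdi, %rax         # base case: the return value is n itself
        jmp     LBB0_3
    LBB0_1:
        leaq    -1(%rbx), %rdi     # rdi = n - 1
        callq   fib                # rax = fib(n - 1)
        movq    %rax, %r14         # save it; the next call will clobber rax
        leaq    -2(%rbx), %rdi     # rdi = n - 2
        callq   fib                # rax = fib(n - 2)
        addq    %r14, %rax         # rax = fib(n - 1) + fib(n - 2)
    LBB0_3:
        popq    %rbx               # function epilogue: restore the callee-saved registers
        popq    %r14
        popq    %rbp               # restore the caller's base pointer
        retq                       # return to the caller

Everything here that has no counterpart in the LLVM IR — the prologue, the callee-saved pushes, the epilogue — exists to satisfy the calling convention.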
That's all this instruction does — it computes n minus 1 and stores it into rdi. How about this instruction? This one should be easier. Sorry — is there no add-immediate instruction? There is: you can do an add instruction in x86 and specify an immediate value. The advantage of this instruction is that you can specify a different destination operand — yeah, that's why compilers like to use it. More arcane knowledge; I don't blame you if this kind of thing turns you off from reading x86 — it certainly turns me off from reading x86. In any case, this instruction should be a little bit easier. Guesses as to what it does? Feel free to shout it out, because we're running a little short on time. Calls a function — what function? fib, exactly. Great. Then we have this move operation, which moves rax into r14. Any guesses as to why we do this? Yeah — get the result of the call. So rax is gonna store the return value of that call, and we're just gonna squirrel it away into r14. Question? Sorry — it'll actually store the whole return value from the previous call; that's part of the result, it will be a component in computing the return value for this call of fib, you're exactly right. But we need to save off this result because we're gonna do, as we'll see, another call to fib, and that's gonna clobber rax. Make sense? Cool. So, all right: take the result of the function and save it into r14. Great.

These instructions — since we're running short on time, will anyone tell me really quickly what these instructions do? Just a wild guess if you had to. n minus 2 — exactly. Compute n minus 2 with this addition operation, stash it into rdi, and then call fib on n minus 2, and that will return the result into rax, as we saw before. So now we do this operation, add r14 to rax, and this does what, exactly? Adds them — so rax now stores the result of the last function call added to r14, which is where we stashed the result of fib of n minus 1. Cool. Then we have the label for the true side of the branch, and this is the last pop quiz question I'll ask. Oh — pop quiz. I didn't even intend that one. Why do we do these pop operations? Restore the registers and the stack frame — exactly. In calling-convention terms, that's called the function epilogue, and then finally we return.

So that is how we get from C to assembly. This is just a summary slide of everything we covered today: we took the trip from C to assembly via LLVM IR, we saw how we can represent things in a control flow graph as basic blocks connected by control flow edges, and then there's additional complexity when you get to the actual assembly, mostly to deal with this calling convention. That's all I have for you today. Thanks for your time.
Cool. So we have strategies for maintaining the stack and strategies for maintaining register state, but we still have the situation where functions may want to use overlapping parts of stack memory, so we need to coordinate how all those functions are going to use the stack memory itself. This is a bit hard to describe in the abstract; the cleanest way I know is to work through an example. Here's the setup: imagine we have some function A that has called a function B, we're in the midst of executing B, and B is about to call some other function C. As we've mentioned, B has a frame all to itself, and that frame contains a whole bunch of stuff: arguments that A passed to B, a return address, a base pointer, some local variables, and — because B is about to call C — some space for arguments that B will pass to C. So that's our setup: one function ready to call another. Let's take a look at how this stack memory is organized.

At the top we have what's called a linkage block. This is the region of stack memory where function B accesses non-register arguments from its caller, function A, and it accesses them by indexing off the base pointer RBP with positive offsets — again, the stack grows down. Below the linkage block and the return address and base pointer, B has a region of its frame for local variables, which it accesses by indexing off RBP in the negative direction. Stack grows down — if you don't remember anything else, remember that the stack grows down.

Now B is about to call C, and we want to see how this unfolds. Before calling C, B places any non-register arguments for C into a reserved linkage block in its own stack memory, below its local variables; it accesses those by indexing RBP with negative offsets, and those become the arguments from B to C. Then B calls C, and as we saw before, the call instruction saves the return address onto the stack and branches control to the entry point of the function C. When C starts, it executes what's called a function prologue, which consists of a couple of steps: first it saves off the base pointer of B's stack frame — it squirrels away the value of RBP onto the stack — then it sets RBP equal to RSP, because we're now entering a brand-new frame, the frame for this invocation of C, and then C can go ahead and allocate the space it needs on the stack: space for its own local variables, as well as space for any linkage blocks it creates for the functions that it calls.

Now, there is one common optimization the compiler will attempt to perform. If a function never needs to do stack allocations except to handle these function calls — in other words, if the difference between RBP and RSP is a compile-time constant — then the compiler might get rid of RBP entirely and do all of the indexing off the stack pointer RSP instead. The reason is that it gains one more general-purpose register: RBP becomes general-purpose, and the compiler has one extra register to use for its calculations. Reading from a register takes unit time; reading from even L1 cache takes significantly longer — I think about four times as long — so this is a common optimization the compiler will want to perform.

It turns out there's a lot more to the calling convention than what's shown on these slides; we're not going to go through all of it today. If you'd like more details, there's a nice document, the System V ABI, that describes the whole calling convention. Any questions so far?
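Here is a small sketch of that prologue/epilogue pattern and of the frame-pointer-omission variant; the function names and the frame size are made up for illustration, and real compiler output will differ in the details:

```asm
    .text
with_frame_pointer:
    pushq  %rbp               # prologue: save the caller's base pointer
    movq   %rsp, %rbp         # start a new frame for this invocation
    subq   $16, %rsp          # allocate space for locals / outgoing linkage
    movq   %rdi, -8(%rbp)     # a local variable, at a negative offset from %rbp
    movq   -8(%rbp), %rax
    movq   %rbp, %rsp         # epilogue: release the frame
    popq   %rbp               # restore the caller's base pointer
    retq

# When the frame size is a compile-time constant, the compiler may drop %rbp
# and index everything off %rsp, freeing %rbp as a general-purpose register:
without_frame_pointer:
    subq   $16, %rsp
    movq   %rdi, 8(%rsp)
    movq   8(%rsp), %rax
    addq   $16, %rsp
    retq
```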
All right, so let's wrap all this up with a final case study and look at how all these components fit together when we translate a simple C function that computes Fibonacci numbers all the way down to assembly. As we've been doing this whole time, we'll take this in two steps. Our starting point is fib.c, and there should be basically no surprises here at this point: this is a C function fib which computes the nth Fibonacci number in one of the worst computational ways possible. It computes the Fibonacci number F(n) recursively using the formula F(n) = n when n is 0 or 1, and otherwise F(n) = F(n-1) + F(n-2). This is an exponential-time algorithm for computing Fibonacci numbers — I would say don't run this at home, except invariably you'll run this at home. There are much faster algorithms, but this is good enough for a didactic example; we're not really worried about how fast we can compute fib today. The C code is even simpler than the recurrence implies: we don't even bother checking that the input n is non-negative. We just say, look, if n is less than 2, return n; otherwise do the recursive thing. We've already seen this code a couple of times. Everyone good so far? Any questions on these three lines? Great.

So let's translate fib.c into fib.ll. We've seen a lot of these pieces in lecture so far, and here we've just rewritten fib.c a little bit to make drawing all the lines a bit simpler. Here we have the C code for fib.c, and the corresponding LLVM IR looks like this. As we could guess from looking at the C code, we have this conditional and then two different things that might happen based on whether or not n is less than 2, so we end up with three basic blocks in the LLVM IR. The first basic block checks whether n is less than 2 and then branches on that result — we've seen how all that works previously. If n happens to be less than 2, the consequent — the true case of that branch — ends up at the end, and all it does is return the input value, which is stored in register %0. Otherwise, we do some straight-line code to compute fib of n minus 1 and fib of n minus 2, take those return values, add them together, and return the result — that's the nth Fibonacci number. So that gets us from C code to LLVM IR. Questions about that? Add them, return it — we're good.

Okay, so one last step: we want to compile the LLVM IR all the way down to assembly. As I alluded to before, roughly speaking the structure of the LLVM IR resembles the structure of the assembly code; there's just extra stuff in the assembly. So we're going to translate the LLVM IR more or less line by line into the assembly and see where that extra stuff shows up.
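For reference while we walk through it, here is a sketch approximating the kind of x86-64 assembly this compiles to, with the C source shown in comments. This is not a copy of the slides: the int64_t type, the exact labels, offsets, and register choices are assumptions, and real compiler output will vary with the compiler version and optimization level.

```asm
# C source (fib.c), roughly:
#   int64_t fib(int64_t n) {
#     if (n < 2) return n;
#     return fib(n - 1) + fib(n - 2);
#   }
    .text
    .globl fib                # make fib globally accessible
    .p2align 4                # align the function nicely in instruction memory
fib:
    pushq  %rbp               # function prologue
    movq   %rsp, %rbp
    pushq  %r14               # save the callee-saved registers we plan to use
    pushq  %rbx
    movq   %rdi, %rbx         # squirrel the argument n away in %rbx
    cmpq   $2, %rdi           # compare n with 2, setting RFLAGS
    jge    LBB0_1             # if n >= 2, take the recursive path
    movq   %rdi, %rax         # true side: return value (n) goes in %rax
    jmp    LBB0_3
LBB0_1:
    leaq   -1(%rbx), %rdi     # n - 1 becomes the argument
    callq  fib                # fib(n - 1), result in %rax
    movq   %rax, %r14         # stash it; the next call will clobber %rax
    leaq   -2(%rbx), %rdi     # n - 2 becomes the argument
    callq  fib                # fib(n - 2), result in %rax
    addq   %r14, %rax         # fib(n - 1) + fib(n - 2)
LBB0_3:
    popq   %rbx               # function epilogue: restore callee-saved registers
    popq   %r14
    popq   %rbp
    retq
```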
At the beginning we're defining a function, fib, and in the assembly we make sure fib is a globally accessible function using some assembler directives — the .globl fib directive — and we do an alignment to make sure the function lies at a nice location in instruction memory, and then we declare the symbol fib, which just defines where this function lives in memory. All right, let's take a look at this assembly. The next things we see are these two instructions, a pushq %rbp and a movq %rsp, %rbp. Who can tell me what these do? Yes — does that sound like something we described earlier in this lecture? Yep, it's part of the calling convention; this is part of the function prologue: save off RBP and then set RBP equal to RSP. So we already have a couple of extra instructions that weren't in the LLVM IR but must be in the assembly in order to coordinate everyone.

Okay, so after those two instructions we push a couple more registers onto the stack. Why does the assembly do this? Any guesses? Yeah — callee-saved registers. The fib routine, we're guessing, will want to use R14 and RBX during its calculation, so if there were interesting values in those registers, it saves them onto the stack; presumably we'll restore them later. Then we have this move instruction, %rdi into %rbx. This requires a little more arcane knowledge, but any guesses as to what it's for? RDI is the argument to the function — exactly, that's the arcane knowledge. It's implicit in the assembly, which is why you either have to memorize that huge chart of GPR C-linkage nonsense or look it up. All this operation does is take whatever the argument was and squirrel it away into the RBX register, for some purpose we'll find out about soon.

Then we have this instruction, which corresponds to the highlighted instruction on the left, in case that gives any hints. What does it do? Correct — it evaluates the predicate: it does a comparison between the value of n and the literal value 2. If you recall from last lecture, the result of a comparison sets some bits in the implicit EFLAGS — or RFLAGS — register, and based on the setting of those bits, the conditional jumps that come next in the code will behave differently. In case the comparison comes out false — if n is in fact greater than or equal to 2 — then the next instruction, this jge, will jump to the label LBB0_1. You can tell already that reading assembly is super fun. Now, that's a conditional jump, and it's possible that the setting of bits in RFLAGS doesn't satisfy that condition code, in which case the code just falls through past the jge instruction and instead executes these operations, which correspond to the true side of the LLVM branch, when n is less than 2: move n into %rax and then jump to the label LBB0_3. Any guesses as to why it moves n into %rax? Yeah — that's the return value, exactly. If you can return a value through registers, you return it through %rax. Very good.

So now we see this label, LBB0_1 — that's the label, as we saw before, for the false side of the LLVM branch — and the first thing after it is this operation, leaq -1(%rbx), %rdi. Any guesses as to what that's for? The corresponding LLVM IR is highlighted on the left, by the way. This is the LEA instruction — load effective address. All LEA does is an address calculation, but something compilers really like to do is exploit LEA to do simple integer arithmetic, as long as that arithmetic fits within what LEA can actually compute. So all this instruction does is add negative 1 to %rbx — and %rbx, as we recall, stores the input value n — and store the result into %rdi. That's it: it computes n minus 1 and puts it in %rdi. A question — is there no add-immediate instruction? There is; you can do an add in x86 and specify an immediate value. The advantage of this instruction is that you can specify a different destination operand — yeah, that's why compilers like to use it. More arcane knowledge; I don't blame you if this kind of thing turns you off from reading x86 — it certainly turns me off from reading x86.
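To make that LEA trick concrete, here is a small sketch (not from the slides; the function names are made up) showing that the lea form can put n - 1 straight into a different destination register, where the add form has to write into one of its own operands:

```asm
    .text
    .globl n_minus_one_lea        # hypothetical: long n_minus_one_lea(long n)
n_minus_one_lea:
    leaq   -1(%rdi), %rax         # one instruction: result lands directly in %rax
    retq

    .globl n_minus_one_add        # hypothetical: same thing with add
n_minus_one_add:
    movq   %rdi, %rax             # add writes into one of its operands,
    addq   $-1, %rax              # so copy to the destination first
    retq
```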
In any case, this next instruction should be a little bit easier. Guesses as to what it does? Feel free to shout it out, because we're running a little short on time. It calls a function — which function? fib, exactly, great. Then we have this move operation, which moves %rax into %r14. Any guesses as to why we do this? Yeah — to get the result of the call: %rax holds the return value of that call, and we just squirrel it away into %r14. Question? Sorry — yes, it stores the whole return value from the previous call; it's one component in computing the return value for this call of fib. And you're exactly right that we need to save this result off, because we're about to do, as we'll see, another call to fib, and that's going to clobber %rax. Make sense? Cool. So: store the result of the function call, save it into %r14. These next instructions — since we're running short on time, can anyone tell me really quickly what they do? Just a wild guess. n minus 2 — right: compute n minus 2 with this add-like operation, stash it into %rdi, and then call fib on n minus 2, which returns its result into %rax, as we saw before. Then we do this operation, add %r14 into %rax, and that does what exactly? Adds them — %rax now stores the result of the last call, added to %r14, which is where we stashed the result of fib of n minus 1. Cool. Then we have the label for the true side of the branch, and this is the last pop-quiz question I'll ask — oh, "pop" quiz, I didn't even intend that one — why do we do these pop operations? Restore the registers and the stack frame before returning, exactly. In calling-convention terms, that's called the function epilogue. And then, finally, we return.

So that is how we get from C to assembly. This is just a summary slide of everything we covered today: we took the trip from C to assembly via LLVM IR, we saw how to represent things in a control-flow graph as basic blocks connected by control-flow edges, and then there's additional complexity when you get to the actual assembly, mostly to deal with the calling convention. That's all I have for you today. Thanks for your time.
1596 01:10:45,410 --> 01:10:47,209 1597 01:10:47,209 --> 01:10:51,020 1598 01:10:51,020 --> 01:10:53,570 1599 01:10:53,570 --> 01:10:55,490 1600 01:10:55,490 --> 01:10:59,000 1601 01:10:59,000 --> 01:11:01,220 1602 01:11:01,220 --> 01:11:02,630 1603 01:11:02,630 --> 01:11:04,850 1604 01:11:04,850 --> 01:11:08,030 1605 01:11:08,030 --> 01:11:10,880 1606 01:11:10,880 --> 01:11:12,530 1607 01:11:12,530 --> 01:11:18,229 1608 01:11:18,229 --> 01:11:20,000 1609 01:11:20,000 --> 01:11:22,850 1610 01:11:22,850 --> 01:11:24,979 1611 01:11:24,979 --> 01:11:28,820 1612 01:11:28,820 --> 01:11:30,860 1613 01:11:30,860 --> 01:11:33,229 1614 01:11:33,229 --> 01:11:35,120 1615 01:11:35,120 --> 01:11:37,000 1616 01:11:37,000 --> 01:11:40,760 1617 01:11:40,760 --> 01:11:43,939 1618 01:11:43,939 --> 01:11:45,590 1619 01:11:45,590 --> 01:11:51,800 1620 01:11:51,800 --> 01:11:54,439 1621 01:11:54,439 --> 01:11:56,720 1622 01:11:56,720 --> 01:11:59,709 1623 01:11:59,709 --> 01:12:11,930 1624 01:12:11,930 --> 01:12:11,940 1625 01:12:11,940 --> 01:12:18,050 1626 01:12:18,050 --> 01:12:20,240 1627 01:12:20,240 --> 01:12:23,900 1628 01:12:23,900 --> 01:12:28,850 1629 01:12:28,850 --> 01:12:31,850 1630 01:12:31,850 --> 01:12:34,220 1631 01:12:34,220 --> 01:12:38,240 1632 01:12:38,240 --> 01:12:39,860 1633 01:12:39,860 --> 01:12:41,930 1634 01:12:41,930 --> 01:12:43,010 1635 01:12:43,010 --> 01:12:48,590 1636 01:12:48,590 --> 01:12:50,270 1637 01:12:50,270 --> 01:12:52,850 1638 01:12:52,850 --> 01:12:56,060 1639 01:12:56,060 --> 01:13:04,280 1640 01:13:04,280 --> 01:13:07,670 1641 01:13:07,670 --> 01:13:10,940 1642 01:13:10,940 --> 01:13:13,940 1643 01:13:13,940 --> 01:13:15,830 1644 01:13:15,830 --> 01:13:17,710 1645 01:13:17,710 --> 01:13:22,820 1646 01:13:22,820 --> 01:13:25,930 1647 01:13:25,930 --> 01:13:31,550 1648 01:13:31,550 --> 01:13:33,830 1649 01:13:33,830 --> 01:13:45,609 1650 01:13:45,609 --> 01:13:47,649 1651 01:13:47,649 --> 01:13:47,659 1652 01:13:47,659 --> 01:13:48,799 1653 01:13:48,799 --> 01:13:52,250 1654 01:13:52,250 --> 01:13:54,589 1655 01:13:54,589 --> 01:13:57,169 1656 01:13:57,169 --> 01:14:01,520 1657 01:14:01,520 --> 01:14:05,060 1658 01:14:05,060 --> 01:14:07,160 1659 01:14:07,160 --> 01:14:09,979 1660 01:14:09,979 --> 01:14:12,350 1661 01:14:12,350 --> 01:14:18,109 1662 01:14:18,109 --> 01:14:21,259 1663 01:14:21,259 --> 01:14:23,149 1664 01:14:23,149 --> 01:14:25,640 1665 01:14:25,640 --> 01:14:33,709 1666 01:14:33,709 --> 01:14:36,110 1667 01:14:36,110 --> 01:14:37,820 1668 01:14:37,820 --> 01:14:40,360 1669 01:14:40,360 --> 01:14:48,020 1670 01:14:48,020 --> 01:14:49,670 1671 01:14:49,670 --> 01:14:53,270 1672 01:14:53,270 --> 01:14:55,660 1673 01:14:55,660 --> 01:14:58,670 1674 01:14:58,670 --> 01:15:02,030 1675 01:15:02,030 --> 01:15:04,850 1676 01:15:04,850 --> 01:15:07,430 1677 01:15:07,430 --> 01:15:10,550 1678 01:15:10,550 --> 01:15:12,740 1679 01:15:12,740 --> 01:15:15,830 1680 01:15:15,830 --> 01:15:19,910 1681 01:15:19,910 --> 01:15:22,820 1682 01:15:22,820 --> 01:15:24,260 1683 01:15:24,260 --> 01:15:28,030 1684 01:15:28,030 --> 01:15:32,540 1685 01:15:32,540 --> 01:15:35,780 1686 01:15:35,780 --> 01:15:38,330 1687 01:15:38,330 --> 01:15:40,880 1688 01:15:40,880 --> 01:15:43,430 1689 01:15:43,430 --> 01:15:45,560 1690 01:15:45,560 --> 01:15:47,479 1691 01:15:47,479 --> 01:15:49,310 1692 01:15:49,310 --> 01:15:52,040 1693 01:15:52,040 --> 01:15:55,510 1694 01:15:55,510 --> 01:15:58,550 1695 01:15:58,550 --> 01:16:01,180 1696 01:16:01,180 --> 01:16:10,780 1697 01:16:10,780 
--> 01:16:10,790 1698 01:16:10,790 --> 01:16:13,579 1699 01:16:13,579 --> 01:16:15,739 1700 01:16:15,739 --> 01:16:17,869 1701 01:16:17,869 --> 01:16:20,079 1702 01:16:20,079 --> 01:16:24,770 1703 01:16:24,770 --> 01:16:27,169 1704 01:16:27,169 --> 01:16:28,969 1705 01:16:28,969 --> 01:16:33,290 1706 01:16:33,290 --> 01:16:38,689 1707 01:16:38,689 --> 01:16:41,149 1708 01:16:41,149 --> 01:16:42,889 1709 01:16:42,889 --> 01:16:46,489 1710 01:16:46,489 --> 01:16:49,239 1711 01:16:49,239 --> 01:16:52,719 1712 01:16:52,719 --> 01:16:55,219 1713 01:16:55,219 --> 01:16:58,549 1714 01:16:58,549 --> 01:17:01,459 1715 01:17:01,459 --> 01:17:04,219 1716 01:17:04,219 --> 01:17:07,189 1717 01:17:07,189 --> 01:17:08,839 1718 01:17:08,839 --> 01:17:12,589 1719 01:17:12,589 --> 01:17:15,009 1720 01:17:15,009 --> 01:17:21,109 1721 01:17:21,109 --> 01:17:24,770 1722 01:17:24,770 --> 01:17:28,029 1723 01:17:28,029 --> 01:17:30,319 1724 01:17:30,319 --> 01:17:44,580 1725 01:17:44,580 --> 01:17:47,440 1726 01:17:47,440 --> 01:17:49,390 1727 01:17:49,390 --> 01:17:51,790 1728 01:17:51,790 --> 01:17:54,010 1729 01:17:54,010 --> 01:17:56,110 1730 01:17:56,110 --> 01:17:59,230 1731 01:17:59,230 --> 01:18:02,530 1732 01:18:02,530 --> 01:18:04,810 1733 01:18:04,810 --> 01:18:07,270 1734 01:18:07,270 --> 01:18:09,370 1735 01:18:09,370 --> 01:18:11,680 1736 01:18:11,680 --> 01:18:14,110 1737 01:18:14,110 --> 01:18:16,000 1738 01:18:16,000 --> 01:18:17,290 1739 01:18:17,290 --> 01:18:20,250 1740 01:18:20,250 --> 01:18:23,620 1741 01:18:23,620 --> 01:18:26,530 1742 01:18:26,530 --> 01:18:29,920 1743 01:18:29,920 --> 01:18:34,600 1744 01:18:34,600 --> 01:18:39,430 1745 01:18:39,430 --> 01:18:41,560 1746 01:18:41,560 --> 01:18:43,840 1747 01:18:43,840 --> 01:18:50,450 1748 01:18:50,450 --> 01:18:53,340 1749 01:18:53,340 --> 01:19:00,480 1750 01:19:00,480 --> 01:19:01,800 1751 01:19:01,800 --> 01:19:03,870 1752 01:19:03,870 --> 01:19:05,430 1753 01:19:05,430 --> 01:19:06,060 1754 01:19:06,060 --> 01:19:08,220 1755 01:19:08,220 --> 01:19:11,070 1756 01:19:11,070 --> 01:19:13,320 1757 01:19:13,320 --> 01:19:17,990 1758 01:19:17,990 --> 01:19:20,160 1759 01:19:20,160 --> 01:19:24,660 1760 01:19:24,660 --> 01:19:26,970 1761 01:19:26,970 --> 01:19:27,390 1762 01:19:27,390 --> 01:19:29,250 1763 01:19:29,250 --> 01:19:31,440 1764 01:19:31,440 --> 01:19:39,530 1765 01:19:39,530 --> 01:19:42,270 1766 01:19:42,270 --> 01:19:44,330 1767 01:19:44,330 --> 01:19:48,390 1768 01:19:48,390 --> 01:19:50,520 1769 01:19:50,520 --> 01:19:52,770 1770 01:19:52,770 --> 01:19:56,070 1771 01:19:56,070 --> 01:20:07,120 1772 01:20:07,120 --> 01:20:07,130 1773 01:20:07,130 --> 01:20:08,270 1774 01:20:08,270 --> 01:20:10,640 1775 01:20:10,640 --> 01:20:11,900 1776 01:20:11,900 --> 01:20:14,330 1777 01:20:14,330 --> 01:20:16,510 1778 01:20:16,510 --> 01:20:22,340 1779 01:20:22,340 --> 01:20:26,510 1780 01:20:26,510 --> 01:20:28,940 1781 01:20:28,940 --> 01:20:33,350 1782 01:20:33,350 --> 01:20:45,460 1783 01:20:45,460 --> 01:20:47,540 1784 01:20:47,540 --> 01:20:50,120 1785 01:20:50,120 --> 01:20:51,830 1786 01:20:51,830 --> 01:20:54,790 1787 01:20:54,790 --> 01:21:01,490 1788 01:21:01,490 --> 01:21:03,320 1789 01:21:03,320 --> 01:21:07,280 1790 01:21:07,280 --> 01:21:10,100 1791 01:21:10,100 --> 01:21:13,760 1792 01:21:13,760 --> 01:21:16,910 1793 01:21:16,910 --> 01:21:18,860 1794 01:21:18,860 --> 01:21:20,990 1795 01:21:20,990 --> 01:21:22,390 1796 01:21:22,390 --> 01:21:24,290 1797 01:21:24,290 --> 01:21:26,390 1798 01:21:26,390 --> 01:21:29,690