The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

Hey, everybody, let's get going. Who here has heard of the FFT? That's most of you. So I first met Steven Johnson when he worked with one of my graduate students, now former graduate student, Matteo Frigo, and they came up with a really spectacular piece of performance engineering for the FFT, a system they call FFTW, for the Fastest Fourier Transform in the West. For years and years it has been a staple: anybody doing signal processing will know FFTW. So anyway, it's a great pleasure to welcome Steven Johnson, who is going to talk about some of the work he's been doing on dynamic languages such as Julia and Python.

So, yes, I'm going to talk about high-level dynamic languages and how you get performance in them. Most of you probably use Python or R or MATLAB. These are really popular for people doing technical computing, statistics, anything where you want interactive exploration. You'd like a dynamically typed language where you can just type x = 3 and then three lines later say, oh, x is an array, because you're working interactively and you don't want to be stuck with a particular set of types. There are a lot of choices here, but they usually hit a wall when it comes to writing performance-critical code. So traditionally, people doing serious computing in these languages have a two-language solution: they do high-level exploration and prototyping in Python or whatever, but when they need to write performance-critical code, they drop down to a lower-level language, Fortran or C or Cython or one of these things, and they use Python as the glue for
these low-level kernels. This is workable; I've done it myself, and many of you probably have too. But when you drop down from Python to C, or even to Cython, there's a huge discontinuous jump in the complexity of the coding, and there's usually a loss of generality: when you write code in C, it's specific to a very small set of types, whereas the nice thing about high-level languages is that you can write generic code that works for a lot of different types. At this point someone often pops up and says: oh, for performance programming in Python, everyone knows you just need to vectorize your code. What they mean is that you rely on mature external libraries: you pass them a big block of data, they do a huge amount of computation and come back, and you never write your own loops. And this is good advice: if someone has already written the code you need, you should leverage it as much as possible. But somebody has to write those libraries, and eventually that person will be you, because if you do scientific computing you inevitably run into a problem that you just can't express in terms of existing libraries very easily, or at all.

So this was the state of affairs for a long time. A few years ago, starting in Alan Edelman's group at MIT, there was a proposal for a new language called Julia, which tries to be as high-level and interactive as MATLAB or Python, a dynamically typed, general-purpose language like Python, very productive for technical work, really oriented toward scientific and numerical computing, but in which you can write a loop, write low-level code, that's as fast as C. That was the goal. The first release was in 2013, so it's a pretty young language; the 1.0
release was in August of this year. Before that point, every year there was a new release, 0.1, 0.2, 0.3, and every year it would break all your old code and you'd have to update everything to keep it working. Now they've said, okay, it's stable: we'll add new features and make it faster, but from this point on, at least until a 2.0 many years in the future, it will be backwards compatible.

In my experience, the performance claim pretty much holds up. I haven't found a problem where there was some nice, highly optimized C or Fortran code and I couldn't write equivalently performing code in Julia, given enough time. Obviously, if something is a library with a hundred thousand lines of code, it takes quite a long time to rewrite that in any other language. There are lots of benchmarks that illustrate this. The goal of Julia is usually to stay within a factor of two of C, and my experience is that you can usually get within a few percent if you know what you're doing.

Here's a very simple example that I like to use: generating a Vandermonde matrix. Given a vector of values alpha_1, alpha_2, up to alpha_m, you want to make an m-by-n matrix whose columns are those entries raised element-wise to successive powers: zeroth power, first power, squared, cubed, and so forth. This kind of matrix shows up in a lot of problems, so most matrix and vector libraries have a built-in function for it. In Python, NumPy has a function called numpy.vander that does this. It generates a big matrix, so it could be performance-critical, which means they can't implement it in Python. If you look at the NumPy implementation, it's a little Python shim that calls immediately into C, and if you look at the C code, I won't scroll through it, it's several hundred lines, quite long and complicated, and
all those several hundred lines of code are doing is figuring out what types to work with, what kernel to dispatch to. At the end, it dispatches to a kernel that does the actual work, and that kernel is also C code, but C code that was generated by a special-purpose code-generation program. So it's quite involved to get good performance while still being somewhat type-generic: their goal was something that works for basically any NumPy array and any NumPy scalar type, of which there's a handful, maybe a dozen, that it should work with. If you were implementing this in C for one type, it would be trivial: 20 lines of code, but only for double precision, a pointer to a double-precision array. The difficulty is being type-generic in C.

Here's the implementation in Julia. At first glance it looks much like what a C or Fortran implementation would look like: I just implement it in the most simple way, two nested loops. Basically you loop across, and as you go across you accumulate powers by multiplying repeatedly by x. That's all it is; it just fills in the array. The performance graph here shows the time for the NumPy implementation divided by the time for the Julia implementation, as a function of n, for an n-by-n matrix. For the first data point I think there's something funny going on, it's not really ten thousand times slower, but for a ten-by-ten or twenty-by-twenty array the NumPy version really is about ten times slower, because of the overhead imposed by going through all those layers. Once you get to a 100-by-100 matrix, that overhead doesn't matter, and then, with all that optimized C code and code generation, it's pretty much the same speed as the Julia code.
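To make the example concrete, here is a sketch of the same computation in Python, my own reconstruction of the naive running-powers loop described above, checked against the real numpy.vander function (whose default column order is decreasing powers, so increasing=True is passed to match the description):

```python
import numpy as np

def vander_loop(x, n):
    # Naive construction mirroring the two nested loops described above:
    # each row starts at 1 and accumulates powers by repeated multiplication.
    m = len(x)
    V = np.empty((m, n))
    for i in range(m):
        p = 1.0
        for j in range(n):
            V[i, j] = p
            p = p * x[i]   # accumulate x[i]**j without calling a power function
    return V

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(vander_loop(x, 4), np.vander(x, 4, increasing=True))
```

In a compiled language this inner loop is exactly the cheap kernel you want; the whole point of the example is that NumPy needs hundreds of lines of dispatch code to reach an equivalent kernel.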
The Julia code, as I said, looks much like the C code would, except there are no types: it's vander(x), with no type declaration, so x can be anything. In fact, this works with any container type, as long as it has an indexing operation, and any numeric type: real numbers, complex numbers, quaternions, anything that supports the multiply operation. There's also a call to one: one returns the multiplicative identity for whatever group you're in; you need a one for the first column, and that might be a different kind of one for a different object. The elements might be matrices, for example, and then one is the identity matrix.

In fact, there are even cases where you can do significantly better than optimized C and Fortran code. I found this when I was implementing special functions, things like the error function, the polygamma function, the inverse error function. I've consistently found that I can often get two to three times faster than the optimized C and Fortran libraries out there. Not because I'm smarter than the people who wrote those libraries; mainly it's because in Julia I'm using basically the same expansions, the same series and rational-function approximations that everyone else uses, but Julia has built-in techniques for what's called metaprogramming, or code generation. Special functions usually boil down to lots of polynomial evaluations, and you can write code generation that produces very optimized, inlined evaluation of the specific polynomials for these functions. That would be really awkward in Fortran: you'd have to write it all by hand, or write a separate program that wrote Fortran code for you. So high-level languages allow you to do tricks for performance that would be really hard to do in low-level languages.
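The code-generation trick can be loosely imitated even in Python; this is my own toy sketch, not Julia's actual mechanism: generate source text for a fully unrolled Horner-rule evaluation of one fixed polynomial, so the coefficients and the loop structure are baked in before the function ever runs.

```python
def make_polyeval(coeffs):
    """Generate a function evaluating sum(coeffs[k] * x**k) by Horner's rule,
    with the recurrence fully unrolled into a single expression."""
    expr = repr(coeffs[-1])
    for c in reversed(coeffs[:-1]):
        expr = f"({expr}) * x + {c!r}"   # one Horner step, unrolled into source
    src = f"def p(x):\n    return {expr}\n"
    ns = {}
    exec(src, ns)   # "code generation": compile the specialized source text
    return ns["p"]

p = make_polyeval([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
p(2.0)                               # → 17.0
```

The generated function contains no loop and no coefficient array lookups at run time, which is the same flavor of specialization the lecture attributes to Julia's metaprogramming, though Julia does it with macros and generated functions rather than string pasting.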
So mainly what I want to talk about is to give some idea of why Julia can be fast. To understand that, you also need to understand why Python is slow, and in general what determines performance in a language like this: what do you need in a language so that it can be compiled to fast code while still being completely generic? Like this vander function, which works on any type, even a user-defined numeric type or a user-defined container type, and is just as fast; nothing is privileged. In fact, if you look at Julia, almost all of Julia is implemented in Julia: integer operations, the really basic types, most of that is implemented in Julia. Obviously, if you're multiplying two 32-bit integers, at some point it's executing an assembly-language instruction, but even the code that calls out to that assembly is written in Julia.

At this point I want to switch over to a live calculation. This is from a notebook that I developed as part of a short course with Alan Edelman, who is sitting over there, on performance optimization in high-level languages. I'm going to go through a very simple calculation: a sum function. Usually any language would have a built-in function for this; we just have an array of n numbers and we're going to add them up. If we can't make this fast, then we have real problems, and we're not going to be able to do anything with this list. This is the simple sort of thing where, if someone doesn't provide it for you, you're going to have to write a loop. And I'm going to look at it not just in Julia, but also in Python, in C, and in Python with NumPy, and so forth.

This document that I'm showing you is a Jupyter notebook; some of you may have seen this kind of thing. Jupyter is
a really nice browser-based front end where I can put equations, text, code, results, and graphs all in one Mathematica-notebook-like document, and it can plug in different languages. Initially it was for Python, but we plugged in Julia, and now there's R and a hundred different languages that you can plug into the same front end.

I'll start with the C implementation. This is a Julia notebook, but I can easily compile and call C. I just made a string holding a ten-line C implementation, the most obvious function: it takes a pointer to an array of doubles and its length, loops over them, and sums them up, just what you would write. Then I compile it with gcc -O3, link it into a shared library, load that shared library into Julia, and call it. There's a function called ccall in Julia where I can call out to a C library with basically zero overhead, which is nice because there are lots of existing C libraries out there and you don't want to lose them. So I say ccall, call this c_sum function in my library; it returns a Float64; it takes two parameters, a size_t and a pointer to Float64; and I pass the length of my array and the array itself. A Julia array of Float64 is, of course, just a bunch of numbers in memory, and under the hood it will automatically pass a pointer to that.

I also write a little function relerr that computes the relative error, the fractional difference, between x and y, and I'll check the C sum: I generate 10^7 random numbers in [0, 1) and compare against Julia's built-in function called sum. It's giving the same answer to about thirteen decimal places, so not quite machine precision, but with 10^7 numbers the rounding errors accumulate as you add across the array.
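That accuracy check can be reproduced in plain Python; this is my own sketch, with relerr written out as the fractional-difference helper the notebook describes: left-to-right summation of a million random values agrees with a correctly rounded sum (math.fsum) to many digits, but usually not exactly, because roundoff accumulates.

```python
import math
import random

def relerr(x, y):
    # fractional difference between x and y
    return abs(x - y) / abs(y)

random.seed(0)
a = [random.random() for _ in range(10**6)]

naive = sum(a)           # sequential left-to-right accumulation
accurate = math.fsum(a)  # correctly rounded sum of the same values

err = relerr(naive, accurate)
assert err < 1e-9        # tiny, but typically nonzero: roundoff accumulates
```

The lecture's 10^7-element comparison behaves the same way, just with a somewhat larger accumulated error.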
Okay, so I'm calling it and it's giving the right answer, and now I want to benchmark this implementation and use it as a baseline: this should be pretty fast for an array of floating-point values. There's a Julia package called BenchmarkTools; as you know from this class, benchmarking is a little bit tricky, so this will take an expression, run it lots of times, collect statistics, and return the minimum time, or you can also get the variance and other statistics. @btime is what's called a macro in Julia: it takes an expression and rewrites it into something that has a timing loop and does all that bookkeeping. Okay: it takes about 11 milliseconds to sum 10^7 numbers with this straight C loop compiled with gcc -O3, no special tricks. That's about one gigaflop, a billion operations per second. It's not hitting the peak rate of the CPU, but of course this data doesn't fit in cache, and so forth.

Now, before I do anything in Julia, let's do some Python. But I'll do a trick: I can call Python from Julia, so I can do everything from one notebook, using a package I wrote called PyCall. PyCall calls directly out to libpython, so with virtually no overhead; it's just like calling Python from within Python. I'm calling directly into the libpython functions, and I can pass any type I want, call any function, and do conversions back and forth. So I take that array and convert it to a Python list object ahead of time, because I don't want to time the overhead of converting my array to a Python list. Python has a built-in function called sum, so I'll use the
built-in sum function: I get the PyObject for it, call this pysum on the list, and make sure it gives the right answer. The difference is about 10^-13 again. Now let's benchmark it; it takes a few seconds because it has to run it a few times and collect statistics. Okay, it takes about 40 milliseconds. That's not bad; it's four or five times slower than C, but it's pretty good.

So why is it four or five times slower than C? Is it because, oh well, Python is slow because it's interpreted? But this sum function is actually written in C. Here's the C implementation of the sum function that I'm calling; I'm just linking to the code on GitHub. There's a whole bunch of boilerplate that checks the type of the object, and then it has some loops and so forth, and if you look carefully, it turns out it's actually doing really well. The reason is that it has a fast path for a list where everything is a number type, an optimized implementation for that case. But it's still several times slower than C, and they've put a lot of work into it; it used to be ten times slower than C a couple of years ago, so they've done a lot of optimizing. So why aren't they able to get C speed, given that they have a C implementation of sum? Are they just dumb? No: it's because the semantics of the data type prevent any conceivable compiler from doing better. And this is one of the things you learn when you do performance work in high-level languages: you have to think about data types and what their semantics are, because that greatly constrains what any compiler can do. If the language doesn't give you the ability to express the semantics you want, then you're dead, and that ability is one of the basic things that Julia provides.
So what is a Python list? You can have a list like [3, 4, 2], but the items can be anything; they can be any type, completely heterogeneous. Of course a particular list, like this one, can happen to be homogeneous, but the data structure has to support heterogeneity, because at any point I could assign the third element to be a string; it has to support that. So think about what that means for how it has to be implemented in memory. This is a list of, in this case, three items, but what are those items? They can be of any type; one could be another array, and they could all be of different sizes. You don't want an array where every element has a different size, so it has to be an array of pointers, where the first pointer points to the 3, the next one to the 4, the next one to the 2. But each of those can't just be a pointer to, say, one 64-bit value in memory, because it also has to store what type the object is; there has to be a type tag that says this one is an integer, and that one over there has a type tag that says it's a string. This is sometimes called a box: a type tag plus the value.

So imagine what even the most optimized C implementation has to do given this data structure. First it chases a pointer, then asks what type of object this is, and depending on the type, okay, it initializes the sum to that. Then it reads the next object: chase the second pointer, read the type tag, figure out the type, all at run time. And then, oh, this is another integer; that tells me I
want to use the plus function for two integers. Then I read the next value, which might be a float this time, so which plus function it uses depends on the types of the objects; in a dynamic language I can define my own type with its own plus function, and it should work with sum. So it's looking up the types of the objects at run time, and looking up the plus function at run time. And not only that: each time it does a loop iteration, it has to add two things and allocate a result, and that result in general has to be another box, because the type might change as you're summing: if you start with integers and then you get a floating-point value and then you get an array, the type of the accumulator changes. So each loop iteration allocates another box. Now, this implementation has a fast path: if the elements are all, say, integer types, I believe it avoids reallocating the accumulator box all the time, and it caches the plus function it's using, so it's a bit faster, but it still has to inspect every type tag and chase all these pointers for every element of the array.

Whereas think about what the C implementation of sum compiles down to: each loop iteration increments a pointer to the next element and fetches it; the compiler knows at compile time that the types are all float64, so it loads the value into a register, keeps the running sum in another register, executes one machine instruction to add to that running sum, then checks whether it's done, an if statement, and goes on. Just a few instructions in a very tight loop, whereas each loop iteration on the list needs lots and lots of instructions to chase all these pointers and read the type tags, and that's in the fast case where everything is the same type and it's optimized for that.
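To make the picture concrete, here is a deliberately crude Python simulation of what a boxed sum has to do on every iteration; Box and boxed_sum are illustrative names of my own, not real CPython internals:

```python
class Box:
    """A (type tag, value) pair, like the boxes sketched on the board."""
    __slots__ = ("tag", "value")
    def __init__(self, value):
        self.tag = type(value)   # the run-time type tag
        self.value = value

def boxed_sum(boxes):
    s = Box(0)
    for b in boxes:
        # every iteration: read both tags, dispatch + on them at run time,
        # then allocate a brand-new box for the result
        s = Box(s.value + b.value)
    return s

result = boxed_sum([Box(3), Box(4.0), Box(2)])
result.value  # → 9.0 (the accumulator's tag changed from int to float mid-loop)
```

The tag change mid-loop is exactly why the accumulator must, in general, be reboxed on every iteration: the result type can differ from one addition to the next.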
So, where was I? Right, that's the Python sum function. Now, many of you who use Python know that there's another kind of array, and a whole library called NumPy for working with them. What problem is NumPy addressing? The basic problem is this data structure: as soon as you have a list of items that can each be any type, you're dead; there's no way to make that as fast as a C loop over a double pointer. To make it fast, what you need is a way to say: every element of this array is the same type, so I don't need to store a type tag for every element; I store the type tag once for the whole array. So there's one tag for the whole array that says float64, there's a length, and then there's just a bunch of values, one right after the other: 1.0, 3.7, 8.9, each of them just eight bytes, an 8-byte double in C notation. The sum function reads the type tag once, reads the length, and then dispatches to code that is basically the equivalent of my C code. Once it knows the type and the length, it runs that, and that can be quite fast.

The only problem is that you cannot implement this in Python. Python doesn't give you a way to express that semantics: a list of objects that all have to be the same type. There's no way to enforce that, or to inform the language of it, and then tell it: oh, for this case they're all the same type, so you can throw away the boxes. Every Python object looks like a box, and there's no way to tell Python: these are all the same type, so you don't need to store the type tags, you don't need the pointers, you don't need reference counting, you can just slam the values into memory one after the other. The language doesn't give you that facility.
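In NumPy terms the contrast looks like this (a small sketch of my own): a Python list happily accepts an element of any type, while a homogeneous float64 array refuses an element it can't store as a raw 8-byte double.

```python
import numpy as np

lst = [3, 4, 2]
lst[2] = "hello"          # a Python list happily goes heterogeneous

arr = np.array([3.0, 4.0, 2.0])   # one dtype for the whole block of memory
assert arr.dtype == np.float64
arr[2] = 9.9              # fine: stored in place as a raw double
try:
    arr[2] = "hello"      # not a number: the homogeneous array rejects it
    rejected = False
except ValueError:
    rejected = True
assert rejected
```

This "one type tag for the whole array" contract is exactly what lets NumPy's sum dispatch once and then run a tight C loop over raw memory.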
And so there's no way a fast Python compiler can do this for you; Python lists can't be implemented without boxes. Some of you are familiar with PyPy, which is an attempt to make a fast tracing JIT for Python: when they ported NumPy to PyPy, even then, they could implement more of it in Python, but they had to implement the core in C.

Okay. But given that I can call Python, I can import the numpy module into Julia, get its sum function, and benchmark the NumPy sum. Again it takes a few seconds to run, and: 3.8 milliseconds. The C code was about 10 milliseconds, so it's actually faster than my C code, a little over twice as fast. What's going on is that their C code is better than my C code: their C code is using SIMD instructions. At this point I'm sure you all know about these: you can load two or four numbers into one giant register and add all of them at once with one instruction.

Okay, so what if we go in the other direction, and instead of using Python's sum implemented in C, we write our own sum in Python? Here's my mysum function in Python. It really only works for floating point, because I initialize s to 0.0, so it only accumulates floating-point values properly, but that's okay. And then I just loop: for x in a: s = s + x, and return s; the most obvious thing you would write in Python. I check that it works, and the error is around 10^-13; it's giving the right answer. Now let's time it. Remember, C was about 10 milliseconds, NumPy was about 4 milliseconds, and the built-in Python sum, C code operating on this Python list, was about 50 milliseconds.
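For reference, the pure-Python loop being benchmarked is essentially this (my reconstruction of the notebook's mysum):

```python
def mysum(a):
    s = 0.0              # ties the accumulator to floating point, as noted
    for x in a:
        s = s + x
    return s

mysum([1.0, 2.5, 3.5])   # → 7.0
```

Every one of those s = s + x steps pays the full boxed-dispatch cost described earlier, which is why the timing that follows is so much worse than the C loop.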
Now we have pure Python code operating on this list, and that is 230 milliseconds, so it's quite a bit slower. And that's because, in pure Python, there's no way to implement the fast path that checks that everything is the same type so you can cache the plus function; I don't think it's feasible. So basically, on every loop iteration, it has to look up the plus function dynamically and allocate a new box for the result, and do that 10^7 times.

Now, there's a built-in sum function in Julia, which I benchmarked at the beginning, and it's actually implemented in Julia, not in C. I won't show you the code for the built-in one, because it's a little messy: it actually computes the sum more accurately than the simple loop I've written. It comes in at 3.9 milliseconds, comparable to the NumPy code; it's also using SIMD, so this is also fast. So why can Julia do that? First of all, the array type has the element type attached to it: you can see the type of the array is Array{Float64}, so there's a type tag attached to the array itself, and somehow that's involved. It looks more like a NumPy array in memory than like a Python list. You can make the equivalent of a Python list; that's called an array of Any. If I convert this to Array{Any}, something where the element types can be any Julia type, then it has to be stored as an array of pointers to boxes. And when I do that, let's see, there it is: 355 milliseconds, actually even worse than Python. The Python folks have spent a lot of time optimizing their code paths for things that allocate lots of boxes all the time, whereas in Julia it's generally understood that if you're writing optimized code, you're not going to do it on arrays of pointers to boxes; you're going to write it on homogeneous arrays, or things where the types are known at compile time.

Okay, so let's write our own Julia sum function. Here is the Julia mysum: there's no type declaration, so it works on any container type. I initialize s to zero(eltype(a)), the zero of the element type of a, so it initializes the sum to the additive identity. It works on any container of anything that supports a plus function and has an additive identity, so it's completely generic. It looks a lot like the Python code, except there's no such zero function in Python. I make sure it gives the right answer; it does. And let's benchmark it. This is the code you'd like to be able to write: high-level code, a straight loop, and unlike the C code it's completely generic; it works on anything you can loop over that has a plus function, an array of quaternions or whatever. And the benchmark: 11 milliseconds, the same as the C code I wrote at the beginning. It's not using SIMD instructions, that's where the additional factor is going to come from, but it's the same as the non-SIMD C code. And in fact, if I want SIMD, there's a little annotation you can put on a loop. This is compiled with LLVM, and you can tell LLVM to try to vectorize the loop; sometimes it can, sometimes it can't, but something like this is simple enough that it should be able to vectorize it; you don't need to hand-code SIMD instructions for a loop this simple.

Yeah, good question: why isn't that the default? Because for most code the compiler cannot auto-vectorize, so it increases the compilation time and often bloats the code size for no benefit. It's really only relatively simple loops doing simple operations on arrays that benefit from SIMD, so you don't want it on by default.
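Julia's zero(eltype(a)) has no direct Python equivalent, but the idea can be loosely imitated; this is a toy sketch of my own, assuming the element type's zero-argument constructor T() yields the additive identity, which is true for int, float, and complex:

```python
def mysum_generic(a):
    it = iter(a)
    first = next(it)
    s = type(first)()    # additive identity of the element type: 0, 0.0, 0j, ...
    s = s + first
    for x in it:
        s = s + x
    return s

mysum_generic([1, 2, 3])          # → 6 (stays an int)
mysum_generic([1.5, 2.5])         # → 4.0
mysum_generic([1 + 2j, 3 - 1j])   # → (4+1j)
```

Of course, unlike Julia, Python pays the boxed-dispatch price on every iteration regardless; the sketch only imitates the genericity, not the speed.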
With @simd turned on, it's 4.3 milliseconds, about the same as NumPy; actually a little slower than NumPy, which is interesting. A year ago when I tried this it was almost exactly the same speed as NumPy; since then both NumPy and Julia have gotten better, but NumPy got better by more. There's something going on with how well the compiler can use AVX instructions; we're still investigating, but it looks like an LLVM limitation. And is it still completely type-generic? If I make a random array of complex numbers and sum them, and here I'm calling mysum, the vectorized one: each complex number is two floating-point numbers, so you'd naively think it should take about twice the time, twice the number of operations, to add the same number of complex numbers as real numbers. And it does: it takes about 11 milliseconds for 10^7 complex values, about twice the 5 milliseconds it took for the same number of real numbers. And the code works for everything.

So why? Okay, we saw this mysum function; I'll just take out the @simd for now. It works for any type, and the argument doesn't even have to be an array. For example, there's another container type called a Set in Julia, which is an unordered collection of unique elements, but you can also loop over it, and if it's a set of integers, you can sum it. While I'm waiting for the benchmarks to complete, let me allocate a set. There are no type declarations here: in mysum there was no declaration that a has to be an array of a particular type, or even an array at all. A Set is a different data structure; its elements are unique, so if I add something that's already
in there, it won't be added twice, and it supports fast membership checking: is 2 in the set, is 3 in the set, without looking through all the elements, because internally it's a hash table. But I can call my mysum function on it, and it sums 2 plus 17 plus 6 plus 24, which is hopefully 49.

All right, so what's going on here? There are several keys to making Julia fast, and one of them is this: when you have a function like mysum, or an even simpler function, f(x) = x + 1, and you call it with a particular type of argument, an integer, or an array of integers, or whatever, it compiles a specialized version of that function for that type. So here's f(x) = x + 1; it works on any type supporting plus. If I call f(3), I'm passing a 64-bit integer, so it says: x is a 64-bit integer, and I'm going to compile a specialized version of f with that knowledge. When I call it with a different type, say 3.1, now x is a floating-point number, and it compiles a specialized version for that. If I call it with another integer, it says: that version was already compiled, I'll just reuse it; it only compiles the first time you call it with a particular type. If I call it with a string, it gives an error, because it doesn't know how to apply plus to a string.

So what exactly is going on? We can actually look at the compiled code: there are macros called @code_llvm and @code_native. How many of you know what LLVM is? Okay: the compiler first compiles to LLVM bitcode, its intermediate representation, and that then goes to machine code, and you can look at either one. So here's the LLVM code for f(1): for a 64-bit integer input, it compiles to
The machine instruction is lea — load effective address — which on x86 doubles as a 64-bit addition. So let's think about what had to happen there. You have f(x) = x + 1, and we're compiling it for x being an Int64, a 64-bit integer — in Julia we'd write x::Int64; the :: means "is of this type". First it has to figure out which + function to call: there's a + for two matrices, a + for lots of different things, and depending on the types of the arguments it decides which + to call. So it first realizes: oh, this is an integer, and 1 is also an integer, so I should call the + function for two integers. Then it looks into that function and sees: oh, that one returns an Int64, so that's the return type of my function. And, by the way, that function is so simple that I'm going to inline it. So it's type-specializing, and this process of going from "x is an integer" to figuring out the type of the output is called type inference. In general, type inference means: given the types of the inputs, try to infer the types of the outputs — and, in fact, of all intermediate values as well. Now, what makes Julia a dynamic language is that this can fail. In some languages, like ML-family languages, you don't really declare types either, but they're designed so that given the types of the inputs the compiler can figure out everything — and if it can't figure out everything, it gives an error; it has to succeed for everything. Julia is a dynamic language: type inference can fail, and there's a fallback — if it doesn't know the type, it puts things in a "box" that carries a type tag at run time. But obviously the fast path is when inference succeeds, and one of the key things is that to make this work, you have to design for it.
You have to design the language so that — for all the built-in constructs, the standard library, and in general in the culture of people designing packages — you think about designing things so that this type inference can succeed; I'll give a counterexample in a minute. And this works recursively. Suppose I define a function g(x) = f(x) * 2 and then call g(1). It says: x here is an integer, I'm going to call f with an integer argument — oh, I should compile f for an integer argument, figure out its return type, use that to figure out which * function to call — and it does all of this at compile time, ideally, not at run time. We can look at the LLVM code for this. Remember f(x) adds 1 to x, and then we multiply by 2, so the result computes 2x + 2. f is so simple that it gets inlined, and then LLVM is smart enough to do it with one shift instruction to multiply x by two and then add two — it combines the *2 and the +1; it does constant folding. And you can keep going: if you look at h(x) = g(x) * 2, that compiles to one shift instruction to multiply x by four and then adding four. So this process cascades. You can even do it for a recursive function. Here's a naive recursive implementation of a Fibonacci-number calculation: given n, which must be an integer, if n is less than 3 it returns 1, otherwise it adds the previous two numbers. I can compute this for the first ten integers — here are the first 10 Fibonacci numbers — using the "dot" notation in Julia: if you write fib.(args), it calls the function elementwise on a collection and returns a collection, so fib.(1:10) returns the first 10 Fibonacci numbers.
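The naive Fibonacci and the dot-call just described might look like this (a sketch of the code on screen):

```julia
# naive recursive Fibonacci; the ::Integer annotation acts as a filter on the
# argument, not as a performance hint
fib(n::Integer) = n < 3 ? 1 : fib(n - 1) + fib(n - 2)

fib.(1:10)   # dot-call: apply fib elementwise, returning the first 10 Fibonacci numbers
```

Despite the recursion, type inference still concludes that fib always returns an Int64 when given an Int64.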
And there's a macro called @code_warntype that will show me the output of type inference. It goes through — this is a hard-to-read format, the output after one of the compiler passes, called lowering — but it's figuring out the types of every intermediate expression. Here it's invoking Main.fib recursively, and it's figured out that the return type is always Int64, so it knows everything. Now, you'll notice that here I declared a type: I said that n is an Integer. I don't have to do that for type inference — the declaration doesn't help the compiler at all, because it does type inference based on what I actually pass. What this is, instead, is more like a filter: it says that this function only accepts integers, and if you pass something else, it throws an error. And I want that, because if I passed 3.7, the function would happily run — you can check whether 3.7 is less than 3, you can call it recursively — it would just give nonsense. So I want to prevent someone from passing nonsense to this function. That's one reason to declare a type, but another reason is to do something called dispatch: we can define different versions of a function for different argument types. A nicer example is a factorial function. Here's a naive recursive implementation of factorial: it takes an integer argument and just recursively calls itself on n - 1. You can call it on 10. If I want 100 factorial, I need a different type than 64-bit integers — I need an arbitrary-precision integer. And since I said the argument is an Integer, if I call it with 3.7 it gives an error, which is good.
But now I can define a different version of this. There actually is a generalization of factorial to arbitrary real — in fact, even complex — numbers, called the gamma function, so I can define a fallback method that works for any Number type and calls a gamma function from someplace else. Then I can pass it a floating-point value: if you take the factorial of -1/2, it turns out that's actually the square root of pi, so if I square it, it gives pi. So now I have one function with two methods. About these types: there's a hierarchy of types, and Integer here is what's called an abstract type. There's a type called Number, and underneath it a class of subtypes including Integer, and underneath that, for example, Int64, or Int8 for 8-bit integers. Underneath Number there's also another subtype called Real, and underneath that a couple of subtypes, and then there's Float64, or Float32 for a single-precision 32-bit floating-point number, and so forth. So there's a hierarchy of these things. When I specify that an argument is an Integer, the type is not there to help the compiler — it provides a filter that says this method only works for these types, while my second method works for any Number type. So I have one method that works for any Number and one method that works only for Integers. When I call it with 3, which one does it call, given that both methods apply? It calls the most specific one — the one farthest down the type tree.
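The two-method factorial just described can be sketched as follows — with the caveat that the gamma function is not in Julia's standard library, so this sketch assumes the SpecialFunctions package is available (in the lecture the gamma function likewise comes "from someplace else"), and the function name is mine:

```julia
# naive recursive factorial restricted to integers, plus a fallback for any
# Number via the gamma function, using the identity n! = Γ(n+1)
using SpecialFunctions: gamma   # assumed dependency

myfactorial(n::Integer) = n < 2 ? one(n) : n * myfactorial(n - 1)
myfactorial(x::Number)  = gamma(x + 1)

myfactorial(10)          # 3628800, via the Integer method
myfactorial(big(30))     # arbitrary precision — 30! overflows 64-bit integers
myfactorial(-0.5)^2      # ≈ π, since (-1/2)! = Γ(1/2) = √π
```

Dispatch picks the Integer method for integer arguments and falls back to the gamma-based Number method for everything else.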
So if I have one method defined for Number, one defined for Integer, and one defined specifically for Int8, and I pass an 8-bit integer, it calls that Int8 version. So declarations give you a filter — but in general you can do this on multiple arguments, and this is the key abstraction in Julia: something called multiple dispatch. This was not invented by Julia — I guess it was present in Smalltalk-era systems and in Dylan; it's been floating around in a bunch of languages for a while, but it hasn't been in a lot of mainstream languages, and not in a high-performance way. You can think of it as a generalization of object-oriented programming. I'm sure all of you have done object-oriented programming in Python or C++ or something like that. In OOP you typically think of it as: you have an object, and you write object.method(x, y), and the object's type determines the method. So you could have a method called plus that would actually call a different plus function for a complex number versus a real number, or a method called length, which for a Python list calls a different function than for a NumPy array. In Julia, you would spell the same thing method(object, x, y) rather than object.method(x, y). In OOP you think of the object as owning the method; in Julia the object is just, maybe, the first argument. In fact, under the hood in Python, the object is passed as an implicit first argument called self — so essentially it's just a different spelling of the same thing. But as soon as you write it this way, you realize what Python and other OOP languages are doing: they look at the type of the first argument to determine the method. Why just the first argument? In a multiple-dispatch language, you look at the types of all the arguments.
In Julia, the OOP style is sometimes called single dispatch — determining which method to call, figuring out which function spelled "length" you're actually calling, is called dispatch, dispatching to the right function — and this is called multiple dispatch. It's clearest if you look at something like a plus function. If you do a + b, which + you call really should depend on both a and b; it shouldn't depend on just a or just b. And so it's actually quite awkward in OOP languages like Python, or especially C++, to overload a plus operation that operates on mixed types. As a consequence, for example, in C++ there's a built-in complex number type — you can have complex&lt;float&gt;, complex&lt;double&gt;, complex&lt;int&gt;, complex with different real types — but you can't add a complex&lt;float&gt; to a complex&lt;double&gt;: you can't add a single-precision complex number to a double-precision complex number, or do any mixed complex operation, because the language can't figure out who owns the method; it doesn't have a way of doing that kind of promotion. In Julia, you can have a method for adding a Float32 to a Float32, but also a method for adding, say, a complex number to a real number — or a real number to a complex number — because you want to specialize that case. In fact, we can click on the link here and see the code: adding a complex number to a real number in Julia looks like the most obvious thing — it's implemented in Julia itself. The +(Complex, Real) method creates a new complex number where you only have to add to the real part and can leave the imaginary part alone, and it works on any complex type. Okay — there are too many methods for + to display; let me shrink that.
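The mixed-type method being shown is essentially the following one-liner from Julia's Base (paraphrased here — treat the exact spelling as approximate), and you can watch dispatch pick it based on both argument types:

```julia
# Julia's Base defines, essentially:
#     +(z::Complex, x::Real) = Complex(real(z) + x, imag(z))
# i.e. the mixed method only touches the real part, and it works for any
# Complex{T}. Multiple dispatch selects it from the types of BOTH arguments:
z = 1 + 2im      # Complex{Int64}
z + 0.5          # dispatches to +(::Complex, ::Real); promotes to Complex{Float64}
0.5 + z          # the symmetric +(::Real, ::Complex) method handles this order
```

There is no notion of the left operand "owning" the + method, which is exactly what makes mixed promotion natural.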
There's another type-inference issue I'll just mention briefly. One of the things you have to do to make this type-inference process work is: given the types of the arguments, you have to be able to figure out the type of the return value. That means when you design a function, it has to be what's called type-stable: the type of the result should depend on the types of the arguments, not on the values of the arguments, because the types are known at compile time while the values are only known at run time. In C you have no choice but to obey this, but in a dynamic language like Python or MATLAB, if you're not thinking about it, it's really easy to design things so it doesn't hold. A classic example is the square-root function. Suppose I pass an integer to it: the square root of 5 has to be a floating-point number, 2.23-something. Now, if I take the square root of 4, of course that square root is an integer — but if sqrt returned an integer for that input, it wouldn't be type-stable anymore: the type of the return value would depend on the value of the input, on whether it was a perfect square or not. So sqrt returns a floating-point value even if the input is an integer. [In response to a question about the cost of dispatch:] Well, the method lookup happens at compile time, so it's essentially irrelevant — at least if type inference succeeds. If type inference fails, dispatch happens at run time, which is slower, but it's a table lookup, not a full search, so it's not as slow as you might think. Most of the time you don't worry about it, because if you care about performance you arrange your code so that type inference succeeds. So when you prototype, you don't care about types — you say x = 3, and on the next line x = some array, whatever.
But when you're optimizing your code, you tweak it a little to make sure that things don't change types willy-nilly, and that the types of a function's results depend on the types of the arguments, not on the values. As I mentioned, sqrt is what really confuses people at first: if you take the square root of -1, you might think you should get a complex value — and instead it gives you an error. What are the choices here? It could give you an error, or it could give you a complex value — but if it gave you a complex value, then the return type of sqrt would depend on the value of the input, not just the type. MATLAB, for example, will happily give you a complex number for the square root of -1, and as a result — MATLAB has a compiler, and it faces many, many challenges, but one simple one to understand is this: if the MATLAB compiler sees a sqrt anywhere in your function, even if it knows the inputs are real, it doesn't know whether the outputs are complex or real unless it can prove the inputs were non-negative. That means it has to compile two code paths for the output — but then suppose that result is passed to sqrt again, or some other such function: you quickly get a combinatorial explosion of possible code paths from the possible types, and at some point you just give up and put things in a box. And as soon as you put things in boxes and look at types at run time, you're dead from a performance perspective. So in Julia, if you want a complex result from sqrt, you have to give it a complex input. A complex number in Julia is written with im — they decided i and j were too useful for loop variables, so im is the imaginary unit — and if you take the square root of a complex input, it gives you a complex output. Python actually does the same thing:
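The sqrt behavior just described is easy to see in the REPL (a quick illustration):

```julia
# type stability of sqrt: the result TYPE depends only on the input type
sqrt(5)          # a Float64, ≈ 2.236
sqrt(4)          # 2.0 — still a Float64, never the integer 2
# sqrt(-1)       # throws a DomainError instead of silently going complex
sqrt(-1 + 0im)   # give it a complex input to get a complex output: 0.0 + 1.0im
```

The error on `sqrt(-1)` is the price of keeping the return type predictable from the input type alone.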
in Python, the square root of a negative value gives an error unless you give it a complex input. But Python made other mistakes. For example, in Python an integer is guaranteed never to overflow: if you add 1 + 1 + 1 over and over again, eventually you'll exceed the size of a 64-bit integer, and Python will just switch under the hood to an arbitrary-precision integer. It probably seemed like a nice idea at the time, and the rest of Python is so slow that the performance cost of this test makes no difference in typical Python code — but it makes Python very difficult to compile, because if you have integer inputs and you see x + 1, the compiler just can't compile that to one instruction unless it can somehow prove that x is sufficiently small. In Julia, integer arithmetic will overflow, but the default integer arithmetic is 64 bits, so in practice it never overflows unless you're doing number theory — and you usually know if you're doing number theory, and then you use arbitrary-precision integers. It was much worse in the old days — this is something people worried about a lot before you were born, when there were 16-bit machines: it's really, really easy to overflow 16 bits, since the biggest signed value is 32,767, so you were constantly worrying about overflow. Even with 32 bits, the biggest signed value is about two billion, and that's really easy to overflow even just counting bytes — files bigger than two gigabytes are common nowadays — so people worried about this all the time. But a 64-bit integer will basically never overflow if it's counting objects that exist in the universe, like bytes or loop iterations.
So you just use 64 bits — and if you're doing number theory, you should use BigInts. Okay, let me get to the final thing I want to talk about, let's see how much time we have: defining our own types. This is the real test of the language. A typical language has a certain built-in set of functions and built-in types, and those are fast. For Python, there actually is a compiler called Numba that does exactly what Julia does — it looks at the arguments, type-specializes things, calls LLVM, and compiles to fast code — but it only works if your only container type is a NumPy array and your only scalar type is one of the dozen or so scalar types that NumPy supports. If you have your own user-defined number type or your own user-defined container type, it doesn't work. For user-defined container types it's probably easy to see why that's useful; user-defined number types are extremely useful as well. For example, there's a package in Julia that provides a number type called dual numbers, which have the property that if you pass them into a function, it computes the function and its derivative: a dual number just carries around function and derivative values, with slightly different +, *, and so forth that apply the product rule and so on, so it propagates derivatives — and then if you have Julia code like that Vandermonde function, it'll just compute its derivative as well. Okay, so I want to be able to define my own type. A very simple type I might want to add is a point — a 2D vector in the plane. Of course, I could use an array of two values, but an array is a really heavyweight object for just two values: if I know at compile time that there are two values, I don't need a pointer to a block of memory — I can store them in registers, I can unroll loops over the elements.
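Backing up for a moment, the dual-number idea mentioned above can be sketched in a few lines — a toy version of what packages like DualNumbers.jl and ForwardDiff.jl do properly; all names here are mine:

```julia
# toy forward-mode automatic differentiation: a Dual carries a value and its derivative
struct Dual{T<:Real} <: Real
    val::T   # f(x)
    der::T   # f'(x)
end
import Base: +, *
+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)              # sum rule
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der*b.val + a.val*b.der)  # product rule
+(a::Dual, b::Real) = Dual(a.val + b, a.der)
*(a::Dual, b::Real) = Dual(a.val * b, a.der * b)

# any generic Julia code built from these operations now propagates derivatives:
p(x) = x*x*x + x*2 + 1        # p'(x) = 3x² + 2
d = p(Dual(2.0, 1.0))         # seed dx/dx = 1; d.val == p(2), d.der == p'(2)
```

Because p is generic, the same source compiles to a fast specialized version for Dual arguments, just as it does for Float64.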
Everything should be faster — you can gain an order of magnitude in speed by specializing on the number of elements for small arrays, compared to a general array data structure. So let's make a point type, and I'm going to go through several iterations, starting with a slow one. I'll define a mutable struct — a mutable object, Point1, that has two fields, x and y, which can be of any type. I'll define a + function that adds them — the most obvious thing: it adds the x components and the y components — and a zero function, the additive identity, that just returns the point (0, 0). Then I can construct an object Point1(3, 4), and I can say Point1(3, 4) + Point1(5, 6), and it works. Right now it's very generic — probably too generic: x can be a floating-point number while y is a complex number of two integers, or I could even make x a string and y an array, which doesn't make any sense. So I probably should have restricted the types of x and y a little, just to prevent the user from putting in something nonsensical. Now, since these fields can be anything, this type is not ideal in several ways. Let's think about how it has to be stored in memory. Take the point Point1(1, 3.7): in memory there's an x and a y, but since x and y can be of any type, they have to be pointers to boxes — a pointer to the Int 1 and a pointer to the Float64 3.7. Already we know this is not going to be good news for performance. And it's mutable: "mutable struct" means that if I take p = Point1(...) and then say p.x = 7, I can change the value, which seems like a harmless thing to do
but is actually a big problem. For example, if I make an array of three p's and then say p.y = 8, then looking at that array, the y component has to change. In general, if anything else is referencing that object — if I have another reference q pointing to the same object and I say p.x = 4, then q.x had better also be 4 at that point. To have mutable semantics — the semantics of something you can change, where other references see the change — the object has to be stored in memory on the heap, as a pointer to the data, so that other pointers to the same object see the mutation. It can't just be stuck in a register or something like that; it has to be somewhere that other references can see. So this is bad. Now, I can call Point1.(a, a) — the constructor elementwise — where a is that array of 10^7 random numbers I was benchmarking before, and I can sum the result: I can call the built-in sum function on it, and I can even call my mysum function, since the type supports zero and +. So here I have an array of 10^7 values of type Point1 — the type of the elements is attached to the array. The "1" in Array{Point1, 1} means it's a one-dimensional array; there are also 2D, 3D, and so on. In memory it looks like a Point1 value, a Point1 value, a Point1 value — but each one of those is a pointer to an object whose x and y are themselves pointers to boxes. So operating on this is going to be really slow: it's a lot of pointer chasing, and at run time it has to check what the type of x is, what the type of y is.
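The slow first version just described might look like this (reconstructed from the description; details are approximate):

```julia
# version 1: mutable, fields of any type — maximally flexible but slow,
# because each field is a pointer to a heap-allocated box
mutable struct Point1
    x
    y
end
import Base: +, zero
+(p::Point1, q::Point1) = Point1(p.x + q.x, p.y + q.y)
zero(::Type{Point1}) = Point1(0, 0)

p = Point1(3, 4)
p + Point1(5, 6)   # Point1(8, 10)
p.x = 7            # mutation is allowed — which is part of the problem
```

Nothing stops you from constructing `Point1("hello", [1, 2])`, which is exactly the over-generality the lecture complains about.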
In fact, it took five or six hundred milliseconds instead of about ten. To do better, we need to do two things. First, x and y have to have types the compiler can see — they can't be arbitrary things that have to be pointers to boxes. Second, the point object has to be immutable: something where, if I have p = something and q = something, I can't change p and expect q to see it — because otherwise, to implement those mutable semantics, it has to be stored as a pointer to an object on the heap, and you're dead. So I can just write struct — Point2 now is not mutable, it doesn't have the mutable keyword — and I give the fields a type: x and y are both Float64, 64-bit floating-point numbers. I define + the same way, zero the same way, and now I can add them and so forth. But now if I make one of these and say p.x = 6, it gives an error: you can't mutate it — don't even try — because we can't support those semantics on this type. What that buys us: if you look at what the compiler is allowed to do, and does do, for an array of Point2, the memory layout is just the x value, the y value, the x value, the y value, and so forth — each exactly 8 bytes, with all the types known at compile time. So if I sum them, it should take about twice as long as summing real numbers, since it sums the x's and sums the y's: summing the real numbers took 5 milliseconds, so this should take about 10. Let's benchmark it and see if that's still true — yeah, it took about 10 milliseconds.
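The fast-but-rigid second version might look like this (again reconstructed; names are approximate):

```julia
# version 2: immutable, concrete field types — fast but no longer generic
struct Point2
    x::Float64
    y::Float64
end
import Base: +, zero
+(p::Point2, q::Point2) = Point2(p.x + q.x, p.y + q.y)
zero(::Type{Point2}) = Point2(0.0, 0.0)

a = [Point2(rand(), rand()) for _ in 1:10]
sum(a)   # elements are stored inline: x,y,x,y,… — no boxes, no pointer chasing
# Point2(1.0, 2.0).x = 6.0   # would error: immutable, by design
```

An `Array{Point2,1}` can now be one contiguous block of unboxed 16-byte elements, which is what makes the summation loop tight.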
So the compiler is smart enough that, first of all, it stores the array inline as one big block of consecutive memory, and then, when you sum it — this is the built-in sum, but our mysum function will work the same way — LLVM is smart enough to say: I can load x into a register, load y into a register, and have a tight loop where I basically call one instruction to sum the x's, one instruction to sum the y's, and repeat. So that's about as good as you can get — but you've paid a big price: we've lost all generality. These fields can only be 64-bit floating-point numbers; I can't have two single-precision numbers, or two of anything else. This is like a struct of two doubles in C. If I had to do this to get performance in Julia, that would suck: I'd basically be writing C code in a slightly higher-level syntax, losing the benefit of using a high-level language. The way you get around this is that what you really want is not to define just one point type like this, but a whole family of types: one for two Float64s, one for two Float32s — in fact, an infinite family of types, for two things of any type you want, as long as they're two real types. The way you do that in Julia is a parameterized type — this is called parametric polymorphism, similar to what you see with C++ templates. So now I have a struct — not mutable — Point3{T}: the curly-brace T says it's parameterized by a type T, and x and y are both of type T. I've restricted it slightly here by saying x and y must be the same type; I didn't have to do that — I could have had two parameters, one for the type of x and one for the type of y — but most of the time you'd want them both the same type. They could be both Float64s, both Float32s, both integers, whatever.
And T can be any subtype of Real — the <: means "is a subtype of" — so it could be Float64, it could be Int64, Int8, BigFloat, a user-defined type; it doesn't care. So Point3 here is really a whole hierarchy: I'm not defining one type, I'm defining a whole set of types. There's a Point3{Int64}, a Point3{Float32}, a Point3{Float64}, and so on — infinitely many types, as many as you want — and you basically create more types on the fly just by instantiating them. Otherwise it looks the same: the + function is basically the same, adding the x components and the y components; the zero function is the same, except now I make sure to use zeros of type T, whatever that type is. Now if I say Point3(3, 4), I'm instantiating a particular instance of this — Point3 itself is an abstract type, and the concrete type I get in this case is Point3{Int64}, two 64-bit integers. I can add them — and actually, adding mixed types already works, because the addition function here works for any two Point3s; I didn't say they had to be Point3s of the same type. It determines the type of the result by type inference on the return value: if you add a Point3 of two Int64s and a Point3 of two Float64s, it says: oh, p.x is an Int64, q.x is a Float64 — which + function do I call? There's a + method for that mixed case, and it promotes the result to Float64; so the sums are Float64, and I'm creating a Point3{Float64}. This kind of mixed promotion is done automatically, and you can actually define your own promotion rules in Julia as well.
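The parametric third version might look like this (reconstructed; treat the details as approximate):

```julia
# version 3: parametric — a whole family of fast, concrete point types at once
struct Point3{T<:Real}
    x::T
    y::T
end
import Base: +, zero
+(p::Point3, q::Point3) = Point3(p.x + q.x, p.y + q.y)        # mixed types promote
zero(::Type{Point3{T}}) where {T} = Point3(zero(T), zero(T))

Point3(3, 4)                      # a Point3{Int64}
Point3(3.0f0, 4.0f0)              # a Point3{Float32}
Point3(1, 2) + Point3(0.5, 0.5)   # promotes to Point3{Float64}
```

The + method places no constraint that the two arguments share the same T, so Julia's ordinary scalar promotion rules determine the concrete result type, exactly as described above.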
Now I can make an array. If I have an array of Point3{Float64}, that type is attached to the whole array — each element is not an arbitrary Point3, it's a Point3 of two Float64s — so it gets stored, again, as just 10^7 elements of x, y, x, y, each 8 bytes, one after the other. The compiler knows all the types, so when you sum it, it knows everything at compile time: it can load x into a register, load y into a register, add with one instruction each, and the sum function should be fast. We can call the built-in sum function, or our own mysum function — I didn't put @simd here, so that may be twice as slow. In fact, if you look, the built-in sum function is itself implemented in Julia; it just has @simd on the loop. And LLVM is smart enough that if you give it a structure of two values, and you load them and add these two values to these two values, it will sometimes use SIMD instructions automatically — I think. Oh, maybe not — wait, did my mysum use @simd? I thought I'd removed it. Maybe it's not smart enough to use SIMD here; I've seen cases where it is. Okay, they're the same speed — so I take it back: maybe LLVM is not smart enough in this case to use SIMD automatically. We could try putting the @simd annotation there and try again — we need to find that and just rerun this; it'll notice that I changed the definition and recompile. (@btime runs it multiple times — the first call is slow because it's compiling — and takes the minimum over several runs.) So let's see... yeah, I mean, this is a problem in general with
vectorizing compilers: they're not that smart once you're using anything other than an array of an elementary data type. Yeah — no, it didn't make any difference, so I take it back. So for more complicated data structures you often have to use SIMD instructions explicitly, and there is a way to do that in Julia: there's a higher-level library on top of the intrinsics where you can basically create a tuple, add things, and it will do SIMD acceleration. So anyway — the story of why Julia can be compiled to fast code is a combination of lots of little things, but there are a few big things. One is that it specializes at compile time — but of course, to be able to do that in any language, the language has to be designed so that you can do type inference. It relies on having these kinds of parameterized types, giving you a way to talk about types and to attach types to other types. Now that you understand what the curly braces mean, you can see that Array itself is defined in Julia as another parametrized type: it's parametrized by the type of the element and also by the dimensionality. That's the same mechanism being used to attach types to an array, and you can have your own user-defined array types — Julia's Array is implemented mostly in Julia, and there are other packages that implement their own kinds of arrays with the same performance. One of the goals of Julia is to build in as little as possible, so that there isn't some set of privileged types that the compiler knows about while everything else is second-class: user code is just as good as the built-in code, and in fact the built-in code is mostly just implemented in Julia, with a small core implemented in C for bootstrapping.
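The point about Array using the same curly-brace mechanism is easy to see directly (a quick illustration):

```julia
# Array is itself a parametric type: Array{ElementType, Dimensionality}
v = [1.0, 2.0, 3.0]     # a Vector{Float64}, i.e. Array{Float64, 1}
m = zeros(Int, 2, 3)    # a Matrix{Int64},   i.e. Array{Int64, 2}
eltype(v), ndims(m)     # both parameters are recoverable from the type alone
```

Because the dimensionality is a type parameter, code can be specialized (and, later in the lecture, generated) per number of dimensions at compile time.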
another technicality — all concrete types are final in Julia. A concrete type is something you can actually store in memory: a Float64 is something you can actually have, or an object of two integers — that type is concrete — as opposed to an abstract type, which you can't actually instantiate; you can only have instances of its concrete subtypes. But a concrete type is final: it's not possible to have a subtype of it, because if you could, you'd be dead — if this is an array of these things, the compiler has to know it's actually these things and not some subtype. Whereas in other languages, like Python, you can have subtypes of concrete types, so even if you said this is an array of a particular Python type, you wouldn't really know each element is that type — it might be some subtype of it. That's one of the reasons why you can't implement NumPy in Python: there's no way to say, at the language level, that this is really that type and nothing else. Yeah? Oh, yes — so it's calling LLVM. Basically there are a few passes, and one of the fun things is that you can actually inspect all the passes and intercept practically all of them. So first it gets parsed — and macros, those things beginning with @, are actually functions that are called right after parsing; they can take the parsed code and rewrite it arbitrarily, so they can extend the language that way. So first it's parsed, then maybe rewritten by a macro, and then you get an abstract syntax tree. Then when you call it — say f(3) — it says, oh, x is an Int64, and it runs the type-inference pass: it tries to figure out the type of everything, and which version of + to call, and so forth. Then it decides whether to inline some
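The parse-then-rewrite step he describes for macros has a rough analogue in Python's standard `ast` module. This is only a sketch of the idea — unlike Julia macros, which run automatically right after parsing, here we invoke the rewrite by hand, and the transformer class name is my own:

```python
import ast

class SwapAddToMul(ast.NodeTransformer):
    """Rewrite every `a + b` node in the parsed tree into `a * b`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)          # rewrite children first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.parse("3 + 4", mode="eval")    # source -> abstract syntax tree
tree = ast.fix_missing_locations(SwapAddToMul().visit(tree))
result = eval(compile(tree, "<ast>", "eval"))  # compile the rewritten tree
print(result)                             # 12, not 7
```

The point is the pipeline, not the silly transform: code is just a data structure between parsing and compilation, and a macro is a function over that data structure.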
things, and once it's done all that, it spits out LLVM bitcode, calls LLVM to compile it to machine code, and then caches that someplace, so the next time you call f — f(4), f with another integer — it doesn't repeat the same process; it notices it's cached. But at the lowest level it's just LLVM. So there are tons of things I haven't shown you. I mentioned metaprogramming: it has this macro facility, so you can basically write syntax that rewrites other syntax, which is really cool for code generation. You can also intercept things after the type-inference phase, with something called a generated function. At parse time, it knows how things are spelled, and you can rewrite how they're spelled, but it doesn't know what anything actually means — it just knows x as a symbol, not whether x is an integer or whatever; it just knows the spelling. When you actually compile f(x), at that point it knows x is an integer, and so you can write a generated (or staged) function that basically runs at that time and says: oh, you told me x is an integer; now I'll rewrite the code based on that. This is really useful — there are some cool facilities for multidimensional arrays, because the dimensionality of the array is actually part of the type. So it can say: oh, this is a three-dimensional array, all right, three loops; oh, you have a four-dimensional array, all right, four loops. It can rewrite the code depending on the dimensionality, with code generation, so you can have code that generates any number of nested loops depending on the types of the arguments — and all the generation is done at compile time, after type inference, so it knows the dimensionality of the array. So there are lots of fun things like that. Of course, it has parallel facilities, and they're not quite
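Dimensionality-driven code generation can be sketched in Python too — an analogue, not Julia's actual mechanism: given the number of dimensions, emit source text with that many nested loops, compile it once with `exec`, and cache it per dimensionality (function and cache names here are mine):

```python
# Generate an n-loop summation function from the dimensionality, compile it
# once, and cache it -- a hand-rolled analogue of a Julia generated function.
_cache = {}

def make_nested_sum(ndim):
    """Build a function that sums an ndim-deep nested list using ndim loops."""
    if ndim in _cache:
        return _cache[ndim]
    src = ["def _sum(a):", "    total = 0"]
    var = "a"
    for d in range(ndim):                      # one `for` per dimension
        src.append("    " * (d + 1) + f"for x{d} in {var}:")
        var = f"x{d}"
    src.append("    " * (ndim + 1) + f"total += {var}")
    src.append("    return total")
    namespace = {}
    exec("\n".join(src), namespace)            # compile the generated source
    _cache[ndim] = namespace["_sum"]
    return _cache[ndim]

print(make_nested_sum(2)([[1.0, 2.0], [3.0, 4.0]]))  # 10.0
```

The big difference from Julia: there, the dimensionality lives in the type, so dispatch to the right generated body is automatic and happens at compile time; here we look it up by hand at runtime.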
as advanced as Cilk at this point, but that's sort of the direction they're heading. There's no global interpreter lock like in Python — there's no interpreter — so there's a threading facility and a pool of workers; you can spawn tasks, you can thread a loop, and the garbage collection is threading-aware, so that's safe. They're gradually building more and more powerful runtimes, hopefully eventually looking into some of Professor Leiserson's advanced threading-compiler work. Also — most of what I do in my research is more coarse-grained, distributed-memory parallelism, running on supercomputers and things like that — there's MPI, there's a remote-procedure-call library, there are different flavors of that. So, any other questions? Yeah — the bignum type in Julia is actually calling GMP. Let me just show that; we can do it in a new notebook. So if I say big(3000), and then take the factorial of that — I think there's a built-in factorial function — right. This is called a bignum: something where the number of digits changes at runtime. Of course these are orders of magnitude slower than the hardware types, because arithmetic has to be implemented as a loop over an array of digits in some base, and when you add or multiply you have to loop over those digits at runtime. These bignum libraries are quite large and heavily optimized, so nobody has reimplemented one in Julia; it's just calling out to a C library, the GNU Multiple Precision library. And for floating-point values there's something called BigFloat. So let's do setprecision(BigFloat, 1000) — that's 1,000 binary digits — and then say big(pi), and there it is, to much more precision, by the way.
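He's demonstrating this in Julia, but the same behavior is easy to see from Python, whose built-in integers are also runtime-sized bignums (CPython stores them as an array of digits and loops over it, much as GMP does):

```python
import math

# factorial(3000) is an exact integer with thousands of decimal digits;
# Python grows the digit array at runtime, which is why bignum arithmetic
# is orders of magnitude slower than fixed-width hardware integers.
n = math.factorial(3000)
print(len(str(n)))   # number of decimal digits -- thousands of them
print(n % 1000)      # 0: 3000! is divisible by 10 many times over
```

Python has no stdlib equivalent of `BigFloat`, but the `decimal` module's user-settable precision plays a loosely similar role for decimal floating point.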
Yeah — so I can have a variable α̂₂ = 17; that's allowed. All that's happening here is that Julia allows almost arbitrary Unicode for identifiers, so I can — let me make it bigger — we can have a koala as an identifier. So there are two issues. One is just having a language that allows those things as identifiers: Python 3 also allows Unicode identifiers, although I think Julia, of all the common languages, probably has the widest Unicode support — most languages only allow a very narrow range of Unicode characters in identifiers. So Python 3 would not allow the koala, and it would not allow the α̂₂, because it doesn't allow the numeric-subscript Unicode characters. The other issue is how you type these things, and that's more of an editor thing. In Julia we implemented it initially in the REPL and in Jupyter, and now all the editors support it: you can just do tab completion of LaTeX. I can type \gamma and hit tab, and it completes to the Unicode character; I can say \dot and it puts a dot over it, and \^4 superscripts a four, and that's allowed. So it's quite nice — when I write emails and I want to put equations in them, I go to the Julia REPL and tab-complete all my LaTeX characters, because it's the easiest way to type these Unicode math characters. IPython borrowed this, so they do the same thing in the IPython notebooks as well. It's really fun, finally — because if you read old math codes, especially old Fortran codes, you see lots of variables named alphahat or alpha_hat_3; it's so much nicer to have a variable
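Python 3's identifier rules can be checked directly — a small sketch confirming the subscript-digit limitation just mentioned:

```python
# A plain Greek letter is a legal Python 3 identifier...
α = 17
print(α)   # 17

# ...but the subscript-digit characters are not (they are outside the
# XID_Continue set that Python identifiers are restricted to), so Julia's
# α̂₂ has no direct Python equivalent.
try:
    compile("α₂ = 17", "<test>", "exec")   # '₂' is U+2082 SUBSCRIPT TWO
    subscript_rejected = False
except SyntaxError:
    subscript_rejected = True
print(subscript_rejected)   # True
```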
that's actually the alpha hat sub 3 so that's cute Steve thanks very much thanks this was great we are as he'd mentioned looking actually at a project to merge the Julia Technology with a silk technology and so we're right now in the process of putting together grant proposal and if that gets funded there may be some Europe's [Music] you 2 00:00:03,189 --> 00:00:05,769 the following content is provided under a Creative Commons license your support will help MIT OpenCourseWare continue to offer high quality educational resources for free to make a donation or to view additional materials from hundreds of MIT courses visit MIT opencourseware at ocw.mit.edu hey everybody let's get going who here has heard of the FFT that's most of you so I first met Steve Johnson when he worked with one of my graduate students now former graduate student Mateo frigo yeah and they came up with a really spectacular piece of performance engineering for the FFT a system they call fftw for the fastest fourier transform in the West and it has for over years and years been you know a staple of anybody doing signal processing will know fftw so anyway it's a great pleasure to welcome Steve Johnson who is going to talk about some of the work that he's been doing on dynamic languages such as Giulia and Python so yes I said about high-level dynamic languages and how you get performance in these and so most of you probably use Python or our MATLAB and so these are really popular for people doing technical computing statistics and anything where you want kind of interactive exploration you'd like to have a dynamically typed language where you can just type x equals 3 and then three lines later said Oh X as an array because you're doing things interactively don't you don't have to be stuck with a particular set of types and there's a lot of choices for these but they usually hit a wall when it comes to writing performance critical code in these languages and so traditionally people doing some 
serious computing in these languages have a two language solution so they so they do high-level exploration and pro tippity and so forth and in Python or whatever but when they need to write performance critical code then you drop down to a lower level language Fortran or C or scythe on or one of these things and you use Python as the glue for these low-level kernels and the problem and this is this was work workable you know I've done this myself many of you probably done this but when you drop down from Python to see or even to scythe on there's a huge discontinuous jump in the complexity of the coding there and there's usually a loss of generality like you you know when you write code and C or something like that it's specific to a very small set of types whereas the nice thing about high-level languages is you can write generic code that works for a lot of different types so at this point there's often someone to pops up and says oh well you know I I knew performance programming in Python and it everyone knows you just need to vectorize your code right so basically what they mean is is you you rely on mature external libraries that you pass them a big block of data it does a huge amount of computation and comes back and so you never write your own loops and this is great you know this is good advice if you have a if there's someone who's already written the code that you need you know you should try and leverage that as much as possible but somebody has to write those and eventually that person will be you and and because eventually if you do scientific computing there's a prop you went into a problem inevitably that you just can't express in terms of existing libraries very very easily or at all so so this was the state of affairs for a long time a few years ago starting in Ella needleman's group at MIT there was a proposal for a new language called Giulia which tries to be as high-level and interactive is it's a dynamically typed language you know as my 
MATLAB or Python and so forth but you know general-purpose language like Python very productive for technical works are really oriented towards scientific numerical computing but you can write a loop and you write low-level code in that that's as fast as C that's that was the goal the first release was in 2013 sits pretty not you language the 1.0 release was in august of this year so before that point every year there was a new release point 1 point 2 point 3 and every year would like break all your old old code and you have to update everything to keep it working so now they said ok it's stable we'll add new features will make it faster but well from this point onwards for at least until 2.0 of many years in the future the it'll be backwards compatible so there's lots of it so in my experience this pretty much holds up I haven't found any problem where I could you know where there was a heist nice highly optimized C or Fortran code where I couldn't write equivalent code in or equivalent performance equivalently performing code and Julia you know given enough time right obviously if something is there's a library with a hundred thousand lines of code you know it's quite a long time to rewrite that in any other language so there's lots of benchmarks that that illustrate this the goal of Julia is usually to stay within a factor of two of C and my my experience is usually with an effector a few percent that you can get if you know what you're doing so here's a very simple example that I like to use it it is generating a vandermonde matrix so given a vector of values alpha 1 alpha 2 alpha alpha m and you want to make an M by n matrix whose columns are just those entries to you know due to the power first power squared cubed and so forth element wise I said this this this kind of matrix shows up at a lot of problems so most you know matrix and vector libraries have a built-in function to do this on Python in numpy there is a function called num pi dot Vander to do this 
and if you look at the it's generating a big matrix it could be performance critical so they can't implement it in Python so if you look at the numpad limitation it's a little python shim that calls immediately to see and then if you look at the C code I won't scroll through it but several hundred lines of code it's quite long and complicated and all about 100 several hundred lines of code is doing is just figuring out what types and to work with but like what kernels to dispatch to at the end of that it dispatches to a kernel that does the actual work and that kernel is also C code but that that C code was generated by a special purpose code generation so it's quite you know involved to get to to get good performance for this while still being somewhat type generic so that their goal was to have something that works for basically any numpy array and any numpy type which is like there's a handful like maybe a dozen scalar types that it should work with right so if you're implementing this in C it's really trivial to write you know 20 lines of code that implements this but only for double precision you know a pointer to double position array I know so the difficulty is getting typed into generic in C so in in Julia here's the implementation in Julia it looks at first glance much like what us roughly about like a C or Fortran implementation would look like it's just I just implement in the most simple way it's just two nested loops I suggest just for it from basically you loop across and as you go across you know you accumulate powers by multiplying repeatedly by Vioxx right that's that's all it is and it just fills in the array the performance of the graph here is the time for the numpy implementation divided by the time for the julie implementation as a function of n for an N by n matrix the first data point I think that's there's something funny going on it's not ten thousand times slower but you know for fur like a ten by ten twenty five twenty array the non pipe 
version is actually ten times slower because it's basically the overhead that's imposed by all going through all those layers once you get to like 100 hundred matrix that overhead doesn't matter and then it's all this optimized C code code generated hundred code generation and so forth it's pretty much the same speed as the juliek code except the Juliet code there as I said it looks much like C code would except there's no types its Vandor X there's no type declaration that X can be anything and if it's in fact this works with any container type as long as it has an indexing operation and any numeric type could be real numbers could be complex numbers could be quaternions anything that supports the x operation and there's also a call to one so one returns the multiplicative identity for whatever so whatever group you're in you need to have a one alright that's the first column and that might be a different type of one for for a different object right might be an array of matrices for example and then one is the identity matrix so in fact there are even cases where you can do get significantly faster and then then it's optimized seen Fortran code so I found this when I was implementing special functions so things like the error function or polygamous unction or the inverse the error function I've consistently found that I can get often two to three times faster than then like optimize C and Fortran libraries out there partly because I'm just smarter than the people who wrote those libraries but no mainly because in in julia i'm using basically the same expansions the same series rational functions that everyone else is using the difference is that in julia has built-in techniques for what's called meta programming or cogeneration so in usually the special functions involve lots of polynomial evaluations that's the way they boil down to and you can basically write cogeneration that generates very optimized in-line valuation of the specific polynomials for these 
functions that would be really awkward to write in Fortran you do that to write it all by hand or write like a separate routine separate program that wrote Fortran code for you so you can do this it's a high level languages allow you to do tricks for performance that it would be really hard to do in low level languages so mainly what I want to talk about it's give some idea of why Julia can be fast and to understand this you also need to understand why is is Python slow and and in general what's going on in determining the performance in a language like this what do you need in a language to enable you to compile it to fast code it while still still being completely generic like like this this van der function which works on any any type any user even user-defined in numeric type user-defined container type will be just as fast as there's no privileged in fact if you look at julia almost all of duo's Julia's implemented in julia like integer operations and things like the really basic types most of that is implemented in julia right obviously it if you're multiplying two 32-bit integers at some point it's calling a assembly language instruction but but even not calling out to this assembly is actually in julia so so at this point I want to switch over to sort of a live calculation so this is from a notebook that I developed as part of a short course with Alan Adelman is sitting over there like you know 9 6 on performance optimization high level languages and so I'm gonna go through this is a very simple calculation of course you would never I mean in any language usually you would have a built often have a built in function for this but it's just a son function it's just written written up there so we need to know about we just have a list an array of n numbers we're just gonna add them up right and if we can't make this fast right then then we have real problems and you know we're not gonna be able to do anything in this list so so this is the simple sort of thing 
where if someone didn't doesn't provide this for you you're gonna have to write a loop to do this so I'm gonna look at it not just in Julia but also in Python in C and set some set some in Python with numpy and so forth so this document that I'm showing you here is is a Jupiter notebook some of you may have seen this kind of thing so Jupiter is this really nice they provide this really nice browser-based front-end where I can put in equations and text and code and results and graphs all in like one set of Mathematica notebook like document and they can plug in different languages so initially was for Python but we plugged in julia and now there's our and there's 30 different undred different languages that you can plug in to the same front end okay so I'll start with the C implementation of this so this is a Julia notebook but I can easily compile and call it to see so I just made a string that has just you know there's a ten line C implementation it's just the most obvious function that just takes in a pointer to an array of doubles and its length and it just loops over them and sums them up might just wait what you would what you would do and then I'll call I'll compile it with GCC - o3 and link it to a shared library and load that shared library and Julia and just call it so there's a function called in Julia where I can just call out to a sea library with zero zero overhead basically so which is nice because you have lots of existing sea libraries out there you don't want to lose them so I just say seek call I recall this see some function in my library it returns a float 64 it takes two to two parameters a size T to float 64 and I'm going to pass the length of my array and the array and it will automatically a julia array of course it is just a bunch of numbers and it'll pass a pointer to that under the hood so do that and write a little function to call rail rail air that computes the relative error between the fractional difference between x and y and I'll 
just check it I'll just generate ten to the seventh random numbers and zero one and compare that to the Julia because julie has a built-in function called sum that sums an array and it's giving the same answer to thirteen dot some places that you know so not quite machine precision but there's ten of the seven numbers of the errors kind of accumulated when you add it across okay so it's I'm calling it it's giving the right answer and now I want to just benchmark this implementation use that as kind of a baseline for this this is the this should be pretty fast for an array of floating point values so there's a Julia package called benchmark tools as you probably know from this class benchmarking is a little bit tricky so this this will take something run it lots of times collect some statistics return the minimum Tyne or you can also get the variance and other other things so so I'm gonna get the number be time is something called a macro and Julia so it takes takes an expression rewrites it into something that basically has a loop and times it and does all that stuff okay so it takes eleven milliseconds to some ten to the seventh numbers rooster with this straight seedless e loop compiled with GCC - oh three you know no special tricks okay and so that's you know so that's one gigaflop basically billion operations per second so it's not hitting my the the peak rate of the CPU but of course this this you know there's additional calculations this this doesn't this way doesn't fit in cash and so forth okay so now let's before I do anything Julie let's do some Python but I'll do a trick I can call Python from Julia so that way I can just do all everything from one notebook using a package call I wrote called pike haul and pike call just calls directly out to lib Python so with myth you know no it virtually no overheads it's just like calling Python from within Python I'm calling directly after the to lib Python functions to call I can and I can pass any type I want and 
call any function and do conversions back and forth ok so I take that array I'll kind of I'll convert it to a Python list object so I I don't want any time the overhead of converting that my array to a Python array so I'll just convert ahead of time get and just start with the built Python has a built-in function called sum so I'll use the built-in sum function and I'll get this PI object for it I'll call it PI sum on this list and make sure it's giving the right answer okay there's the differences 10-13 again and now let's benchmark it oops so it takes a few seconds because it it has to run it a few times and custard statistics okay so take 40 milliseconds that's not bad it's actually it's four or five times slower than C but it's it's pretty good so misil is it five times slower than C is it is it because PI Lib answer is oh well python is slow because it's interpreted but the sum function is actually written in C here's the here's the C implementation of the some function that I'm calling you know I'm just blinking to the github code there's a whole bunch of boilerplate that just checks with the type of the object and then it has some loops and so forth and so if you look look carefully it turns out it's actually doing really well and the reason it does really well is it has a fast path if you have a list where everything is a number type so that then it can you know it has a an optimized implementation for that case but it's still five times slower than C and they've spent a lot of work on it used to be ten times slower than C so a couple years ago so that they do a lot of work on optimizing this and so why aren't they able to get C speed since they have a simplement ation of a some are they just done you know no it's because the the semantics of the datatype prevent them from getting anything anything faster than that so and this is one of the the things you when you learn when you do you know high level performance in high level languages you have to think 
about datatypes you need to think about what the semantics are and that that that greatly constrains what any conceivable compiler can do and if the language is if the language doesn't provide you with the with the ability to express the semantics so you want then you're dead and that's that's one of the basic things that julia does so what is a Python list right so yeah you know you can have three four all right all right yep I thought unless it's a bunch of objects Python objects but the Python hundreds can be anything they can be any type so it's completely heterogeneous types okay so of course a particular list like in this case can be homogeneous but the data structure has to be hit Oh genius because in fact I can take that homogeneous thing at any point I could assign the third element to a string right it has to it has to support that so think about what that means for how it has to be implemented in memory so so what this has to be so this is a list of this case three items but what are those items so that's if they can be an item of any type right they could be things that it could be another array right it could be of different sizes and so forth you don't want to have an array where everything is a different size first well so so it has to be this an array of pointers all right we're the first pointer is three it's to not pick three the next one is for the next one is to fine but it can't just be that it can't just be pointer to like if this is a you know 64-bit number it's can't just be pointer to you know to one 64-bit value in memory because it has to know it has to similar store what type is subject is so it has to there has to be a type tag this says this is an integer and this one has to have a type tag that says it's a string so this is sometimes called a box see have a value you have a type tag plus the value and so imagine what even the most optimized seek implementation has to do given this kind of data structure okay here's the first time it 
has to chase the pointer and then ask what type of object is it okay then depending on what type of object is it is okay though I initialize my some to that then I read the next object I have to change the second pointer read the type tag figure out the type this all then at runtime right and then oh this is another integer that tells me I want to use the plus function for two integers okay and then I read the next value which may be but it's a 48 is it so that the Plus which plus function that's using depends upon the type of the object it's not drawing language I can define my own type and which has its own plus function it should work with some so it's looking at the types of the objects at runtime it's looking at the plus function at runtime and not only that but each time it does a loop iteration it has to add two things and allocate a result that result in general might be another has to be a box because it might type my changes you're summing through if you start with integers and then you get a floating point value and then you get an array right the the the type will change so sorry each time you do a loop iteration it allocates another box so what happens is this implementation is a fast path if they're all like integer types it doesn't I think it doesn't reallocate that box for the the summits accumulating all the time cassia's the value of the plus function it's using so it's a little bit faster but still it has to inspect every type tag and and chase all these pointers for every element of the array whereas the C implementation this of some if you imagine what this is this compiles down to write each loop iteration who has to do it increments a pointer to the next element fetches it knows that at compile time the types are all focused before so it flushes closely to for value into a register and then as it keeps has it running some in another register calls one machine instruction to add that running Sun sum and then goes to checks to see if we're done 
has a you know an if statement there and then goes on all right so just a few instructions and in a very tight loop here where as each loop iteration here has to be lots and lots of instructions to chase all these pointers to get the type tag and that's in the fast case where they're all the same type and it's optimized for that so oh where was I wrong wrong thing so that's the Python sum function now most many of you use Python know that there's another type of array and there's a whole library called numpy for working with the Marek's so what problem is that addressing so the basic problem is this data structure this data structure is as soon as you have a list of items that can be any type you're dead right there's no way to make this as fast as a c loop over a double pointer so to make it fast what you need but we haven't have as a way to say oh every element of this array is the same type so I don't need to store type tags for every element I just our type tag once for the whole array so that so there there is a tag for the before that there's a type wait to say float64 okay there is maybe a length of the array and then there's just a bunch of values went after the other so this is just a you know 1.0 three point seven eight point nine and each of these are just eight bytes an 8 byte double in c notation right so it's just one after the other so it reads this once reads the length and then it dispatches to code that says okay now basically dispatches the equivalent of my C code alright so now once it knows the type in the length then okay says it runs this okay and that can be quite fast and the only problem is you cannot write implement this in Python so Python doesn't provide you a way to to have that semantics to have a list of objects we say they all have to be the same type there's no way to enforce that or to inform the language of that in in Python and then and then they tell it Oh for this test these are all the same type you can throw away the boxes 
like every Python object looks like this you can see there's no way to tell Python oh well these are these are all the same type so you don't need to store the type tags you don't need that pointers you don't need to have reference counting you can just slam the values into memory one after the other it doesn't provide you that with that facility and so there's no way to like you can make a fast Python compiler that will do this so nothing is implemented didn't see even with some of you are familiar with pi PI which is an attempt to make a fast like a tracing JIT for Python so they when they poured it to Mata num PI to 2 pi PI or they attempted to even then you know they could they could implement more of it in Python but they had to implement the core in C ok but given that I can do this I can I can I can I can import the numpy module into julia get it's some function and benchmark the numpy some function and okay it's good again it takes a few seconds to run okay and it takes three point eight milliseconds so this C was ten milliseconds so it's actually it's actually doing faster than the C code almost a little over twice as fast actually and and what's going on is it's is there's there C code is better than my C code there C code is using assembly instructions so you party and at this point I'm sure that you guys all know about these things well you you can leave read in two numbers or four numbers into like one giant register and one instruction had all four numbers at once okay so what about if we go in the other direction we write our own Python some function so we don't use the Python some implemented in C I write our own in Python so here's a little my some function in Python only works for floating-point I initialize X to zero point zero so really only is accumulates floating point values but that's okay and then I just loop for X in a s equals s plus X return s is the most obvious thing you would write in Python okay and check that it works and yet arrows 
to the minus 13, so it's giving the right answer. And now let's time it. So remember, C was 10 milliseconds, NumPy was about 5 milliseconds, and the built-in Python sum was 50 milliseconds operating on this list. So we had C code operating on this list at 50 milliseconds, and now we have Python code operating on this list, and that is 230 milliseconds. So it's quite a bit slower, and it's because, basically, in the pure Python code there's no way to implement this fast path that checks "oh, they're all the same type, so I can cache the plus function" — so far I don't think it's feasible to implement that. So basically on every loop iteration it has to look up the plus function dynamically and allocate a new box for the result, and do that 10^7 times. Now, there's a built-in sum function in Julia. I showed this in the benchmark — it's actually implemented in Julia, not in C. I won't show you the code for the built-in one, because it's a little messy — it's actually computing the sum more accurately than the loop that I've done. So that's 3.9 milliseconds — comparable to the NumPy code. It's also using SIMD, so this is also fast. So why can Julia do that? It has to be that the array type, first of all, has the element type attached to it. You can see the type of the array is an Array of Float64 — there's a type tag attached to the array itself, so somehow that's involved. So it looks more like a NumPy array in memory than a Python list. You can make the equivalent of a Python list — that's called an array of Any. An Array{Any} is something where the element types can be any Julia type, and so it has to be stored as something like this: as an array of pointers to boxes. And when I do that, let's see —
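The two memory layouts being contrasted can be sketched like this (my reconstruction; I'm assuming BenchmarkTools.jl is loaded, as in the lecture, and the exact timings depend on the machine):

```julia
# Sketch: the element type is part of the array's type, so the compiler knows
# every element is a Float64 with no per-element boxes or type tags.
a = rand(10^7)            # Vector{Float64}: values stored inline, contiguously
@show typeof(a)

b = Vector{Any}(a)        # the Python-list-like version: pointers to boxed values
@show typeof(b)

using BenchmarkTools      # assumed available, as in the lecture
@btime sum($a)            # fast path: type known at compile time, SIMD-able
@btime sum($b)            # slow path: dynamic dispatch + a box per element
```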
there it is: 355 milliseconds. So it's actually even worse than Python. In Python they spent a lot of time optimizing their code paths for things that have to allocate lots of boxes all the time, and in Julia it's usually understood that if you're writing optimized code, you're not going to do it on arrays of pointers to boxes — you're going to write it on homogeneous arrays, or things where the types are known at compile time. Okay, so let's write our own Julia sum function. This is the Julia mysum function. There's no type declaration; it works on any container type. I initialize s — for this function call — to zero for the element type of a, so it initializes it to the additive identity. So it works on any container of anything that supports a plus function and has an additive identity — it's completely generic. It looks a lot like the Python code, except for this zero — there's no zero function in Python. And let's make sure it gives the right answer — it does — and let's benchmark it. All right, so this is the code you'd like to be able to write: high-level code, a straight loop — and unlike the C code it's completely generic: it works on any container of anything you can loop over, anything that has a plus function — an array of quaternions or whatever. And when I benchmark it, it's 11 milliseconds: the same as the C code I wrote in the beginning. It's not using SIMD instructions — sorry, that's where the additional factor is to come — but it's the same as the non-SIMD C code. And in fact, if I want to use SIMD, there's a little tag you can put on a loop. Since it's compiling this with LLVM, you can tell LLVM to try and vectorize the loop. Sometimes it can, sometimes it can't, but something like this it should be able to vectorize — it's simple enough. You don't need to hand-code SIMD instructions for a loop this simple. Yeah — why isn't it the default? Because in
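The generic Julia mysum being described is roughly this (reconstructed from the description):

```julia
# Sketch of the generic mysum from the lecture: no type declarations,
# works on any container whose elements support + and have an additive identity.
function mysum(a)
    s = zero(eltype(a))   # the additive identity for whatever element type a holds
    for x in a            # putting `@simd` before `for` is the tag that asks
        s += x            #   LLVM to try to vectorize this loop
    end
    return s
end

mysum(rand(10^7))         # works on a Vector{Float64}, compiles to a tight loop
```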
most code the compiler cannot auto-vectorize, so it increases the compilation time and often bloats the code size for no benefit. It's really only relatively simple loops, doing simple operations on arrays, that benefit from SIMD — so you don't want it on everywhere. So now it's 4.3 milliseconds — about the same as the NumPy and so forth; a little slower than the NumPy, actually. It's interesting: a year ago when I tried this it was almost exactly the same speed as NumPy, and since then both NumPy and Julia have gotten better, but NumPy got better more. So there's something going on with basically how well the compiler can use AVX instructions — we're still investigating what that is, but it looks like an LLVM limitation. So — and is it still completely generic? If I make a random array of complex numbers and I sum them — which one am I calling now? mysum — the vectorized one, right. Each complex number is two floating-point numbers, so it should take about twice the time, you'd naively think: twice the number of operations to add the same number of complex numbers as real numbers. And it does — it takes about 11 milliseconds for 10^7, which is about twice the 5 milliseconds it took for the same number of real numbers. And the code works for everything. So why — okay, so what's going on here? So we saw this mysum function — I'll just take out the @simd for now. And it works for any type; it doesn't even have to be an array. For example, there's another container type called a Set in Julia, which is just an unordered collection of unique elements, but you can also loop over it, and if it's a set of integers, you can also sum it. And while I'm waiting for the benchmarks to complete, let me allocate a
set. And there's no type declaration here — in my C sum, a had to be an array of a particular type; here it doesn't even have to be an array at all. A Set is a different data structure — this is a set of integers. Sets are unique, so if I add something that's already in the set, it won't add it twice, and it supports fast membership checking — "is 2 in the set? is 3 in the set?" — it doesn't have to look through all the elements; it's a hash table internally. But I can call my mysum function on it, and it sums 2 plus 17 plus 6 plus 24, which is hopefully 49. All right, so what's going on here? Several things are going on to make Julia fast. One key thing is that when you have a function like this mysum — or, even simpler, here's a function f(x) = x + 1 — when I call it with a particular type of argument, like an integer or an array of integers or whatever, it compiles a specialized version of that function for that type. So here's f(x) = x + 1; it works on any type supporting plus. If I call f(3), here I'm passing a 64-bit integer. When I did that, it said: okay, x is a 64-bit integer; I'm going to compile a specialized version of f with that knowledge. Then when I call it with a different type, 3.1, now x is a floating-point number, and it'll compile a specialized version for that type. If I call it with another integer, it says, "oh, that version was already compiled," and reuses it — so it only compiles the first time you call it with a particular type. If I call it with a string, it'll give an error, because it doesn't know how to apply plus to a string. Okay, so what is going on? We can actually look at the compiled code. There are these macros called @code_llvm and @code_native. Let's say — when I call f(1), what's the LLVM... do people
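The Set example is roughly this (my reconstruction of the numbers used on screen):

```julia
# Reconstruction of the Set example: sum is generic over any iterable with + and zero.
s = Set([2, 17, 6])
push!(s, 24)
push!(s, 2)      # already present: a Set stores unique elements, so this is a no-op
@show 2 in s     # fast hash-based membership test, not a linear scan
@show sum(s)     # 2 + 17 + 6 + 24 == 49
```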
know what LLVM is? You guys — okay. So LLVM compiles to an intermediate representation first, and that goes to machine code, so you can see the LLVM bitcode — or bytecode, or whatever it's called — and you can also see the native machine code. So here's the LLVM IR it compiles to, if everyone can see: it's basically one LLVM instruction, `add i64`, which turns into one machine instruction — `lea`, load effective address, which here is actually doing a 64-bit addition. So let's think about what had to happen there. You have f(x) = x + 1, and I compile it for x an Int64, a 64-bit integer — in Julia we'd say x::Int64; the double colon means "is of this type". So for a 64-bit integer type, what does it have to do? It first has to figure out which plus function to call. There's a plus for two matrices, a plus for lots of different things, and depending on the types of the arguments it decides which plus function to call. So it first realizes: oh, this is an integer, and 1 is also a 64-bit integer, so that means I should call the plus function for two integers. I'm going to look into that function — and oh, that one returns an Int64, so that's the return type of my function. And oh, by the way, this function is so simple that I'm going to inline it. So it's type-specializing, and this process of going from "x is an integer" to figuring out the type of the output is called type inference. In general, type inference means: given the types of the inputs, it tries to infer the types of the outputs — and in fact of all intermediate values as well. Now, what makes it a dynamic language is that this can fail. In some languages, like ML and some of those languages, you don't really declare types, but they're designed so that given the types of the inputs it can figure out everything, and if it can't figure out everything it gives an error, basically; it
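The introspection being demonstrated is roughly this (the exact IR and instructions vary by machine and Julia version):

```julia
# Sketch: inspecting the specialized code Julia generates for f(x) = x + 1.
f(x) = x + 1

f(3)                 # compiles a specialization for Int64; returns 4
f(3.1)               # compiles a separate specialization for Float64

@code_llvm f(1)      # shows roughly:  %1 = add i64 %0, 1 ; ret i64 %1
@code_native f(1)    # shows the machine code, e.g. a single lea instruction
```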
has to do it for everything. So Julia is a dynamic language: this can fail, and it has a fallback — if it doesn't know the type, it sticks things in a box. But obviously the fast path is when inference succeeds, and one of the key things is: to make this kind of thing work in a language, you have to design the language so that — for all the built-in constructs, the standard library, and in general in the culture for people designing packages and so forth — you think to design things so that this type inference can succeed. I'll give a counterexample to that in a minute. And this works recursively. Suppose I define a function g(x) = f(x) * 2, and then I call g(1). It's going to say: okay, x here is an integer; I'm going to call f with an integer argument — oh, I should compile f for an integer argument, figure out its return type, use its return type to figure out which times function to call — and do all this at compile time, not at run time, ideally. So we can look at the LLVM code for this. Remember, f(x) adds 1 to x, and then we're multiplying by 2, so the result computes 2x + 2. And f is so simple that LLVM inlines it, and LLVM is smart enough that it uses one shift instruction to multiply x by 2 and then adds 2 — so it actually combines the times-2 and the plus-1; it does constant folding. And you can continue on: if you look at h(x) = g(x) * 2, that compiles to one shift instruction to multiply x by 4 and then adding 4. So you see, this process cascades. And you can even do it for a recursive function. Here's a stupid implementation of a Fibonacci number calculation — a recursive implementation. Given n — and that must be an integer — if n is less than 3 it returns 1;
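The cascading-inference example can be sketched as (the exact generated IR is version-dependent):

```julia
# Sketch: each layer is inlined and constant-folded through type inference.
f(x) = x + 1
g(x) = f(x) * 2    # inferred as computing 2x + 2 for integer x
h(x) = g(x) * 2    # inferred as computing 4x + 4

@code_llvm h(1)    # compiles down to roughly: shift left by 2, then add 4

h(10)              # 44
```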
otherwise it adds the previous two numbers. I can compute this for the first ten integers — here are the first 10 Fibonacci numbers. There's also this cute notation in Julia: fib-dot. If you do f.(arguments), it calls the function elementwise on a collection and returns a collection, so fib.(1:10) returns the first 10 Fibonacci numbers. And there's a macro called @code_warntype that will tell me the output of type inference. So n is an Int64, and it goes through — this is kind of a hard-to-read format; it's the output after one of the compiler passes, called lowering — but it's figuring out the types of every intermediate call. Here it's invoking Main.fib recursively, and it's figured out that the return type is also Int64, so it knows everything. Okay, so you'll notice that here I declared a type: I said that n is an Integer. I don't have to do that for type inference — this doesn't help the compiler at all, because it does type inference depending on what I pass. What this is, is more like a filter: it says that this function only accepts integers, and if you pass something else it should throw an error. Because if I pass 3.7 — if I don't filter out non-integers — the function would still run: you can check whether 3.7 is less than 3, you can call it recursively; the function would run, it would just give nonsense. So I want to prevent someone from passing nonsense to this. That's one reason to do a type declaration. But another reason is to do something called dispatch. What we can do is define different versions of a function for different argument types. For example, a nicer version of that is a factorial function. Here's a stupid recursive implementation of a factorial function: it
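The recursive Fibonacci being described is roughly:

```julia
# Reconstruction of the recursive Fibonacci example with an Integer "filter":
fib(n::Integer) = n < 3 ? 1 : fib(n - 1) + fib(n - 2)

fib.(1:10)                 # the dot calls fib elementwise: 1 1 2 3 5 8 13 21 34 55
# @code_warntype fib(10)   # shows inference concluding the return type is Int64
# fib(3.7)                 # MethodError: the ::Integer declaration filters this out
```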
takes an integer argument and just recursively calls itself on n - 1. So I can call it on 10: 10 factorial. If I want 100 factorial, I need to use a different type than 64-bit integers — I need an arbitrary-precision integer. And since I said it was an Integer, if I call it with 3.7 it'll give an error — that's good. But now I can define a different version of this. There actually is a generalization of factorial to arbitrary real — in fact even complex — numbers, called the gamma function. So I can define a fallback that works for any type of Number, and it calls a gamma function from someplace else. And then I can pass it a floating-point value: if you take the factorial of minus one-half, it turns out that's actually the square root of pi, so if I square it, it gives pi. All right — so now I have one function, and I have two methods. So these types here — there's a hierarchy of types. This is what's called an abstract type; most of you have probably seen something like this. There's a type called Number, and underneath there's a class of subtypes — Integer — and underneath that there's, for example, Int64, or Int8 for 8-bit integers. Underneath Number there's actually another subtype called Real, and underneath that there are a couple of subtypes, and then there's Float64, or Float32 for a single-precision 32-bit floating-point number, and so forth. So there's a hierarchy of these things. When I specify that something takes an Integer, the type has not helped the compiler — its job is to provide a filter: this method only works for these types, and this other method — my second method — works for any Number type. So I have one method that works for any Number type and one method that only works for integers. So when I call it for 3, which one does it call,
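The two-method factorial can be sketched like this (names are my reconstruction; the lecture pulled gamma in from elsewhere — I'm assuming SpecialFunctions.jl here):

```julia
# Sketch of the two-method factorial: an Integer method plus a Number fallback.
using SpecialFunctions: gamma   # assumed source of the gamma function

myfactorial(n::Integer) = n < 2 ? one(n) : n * myfactorial(n - 1)
myfactorial(x::Number)  = gamma(x + 1)   # factorial generalizes to Γ(x + 1)

myfactorial(10)          # 3628800, via the Integer method
myfactorial(big(100))    # exact BigInt arithmetic: no overflow
myfactorial(-0.5)^2      # ≈ π, via the Number fallback
```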
given that both methods could apply? What it does is it calls the most specific one — the one that's farthest down the tree. So if I have one method defined for Number and one defined for Integer, and I pass an integer, it'll call the Integer one; and if I have one defined for Number, one defined for Integer, and one defined specifically for Int8, and I pass an 8-bit integer, it'll call that version. All right, so it gives you a filter — but in general you can do this on multiple arguments, and this is like the key abstraction in Julia, something called multiple dispatch. This was not invented by Julia — I guess it was present in Smalltalk, in Dylan; it's been in a bunch of languages, it's been floating around for a while — but it's not been in a lot of mainstream languages, and not in a high-performance way. You can think of it as a generalization of object-oriented programming. I'm sure all of you have done object-oriented programming, in Python or C++ or something like this. In object-oriented programming, typically the way you think of it is: you have an object, and it's usually spelled object.method(x, y), for example, and what happens is the object's type determines the method. So you could have a method called plus, and it would actually call a different plus function for a complex number versus a real number, or a method called length, which for a Python list would call a different function than for a NumPy array. In Julia, the way you would spell the same thing is method(object, ...) — you don't think of the object as owning the method; in Julia the object would just be, say, the first argument. In fact, under the hood in Python, the object is passed as an implicit first argument called self. So essentially what Python is doing is just
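The "most specific method wins" rule can be sketched as (a toy example of mine, not from the lecture):

```julia
# Sketch: dispatch picks the most specific applicable method.
describe(x::Number)  = "some number"
describe(x::Integer) = "an integer"
describe(x::Int8)    = "an 8-bit integer"

describe(3.0)        # "some number"      (only the Number method applies)
describe(3)          # "an integer"       (Integer is more specific than Number)
describe(Int8(3))    # "an 8-bit integer" (the most specific method wins)
```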
a different spelling of the same thing. But as soon as you write it this way, you realize what Python and the OOP languages are doing: they're looking at the type of the first argument to determine the method. But why just the first argument? In a multiple-dispatch language, you look at all the types. So in Julia this OOP style will sometimes be called single dispatch, because this determining of the method is called dispatch — figuring out which function spelled "length" you are actually calling, dispatching to the right function. So this is called multiple dispatch, and it's clearest if you look at something like a plus function. If you do a + b, which plus you do really should depend on both a and b; it shouldn't depend on just a or just b. And so it's actually quite awkward in OOP languages like Python, or especially C++, to overload a plus operation that operates on mixed types. As a consequence, for example, in C++ there's a built-in complex number type — you can have a complex<float> or complex<double> or complex<int>, complex with different real types — but you can't add a complex<float> to a complex<double>. You can't add a single-precision complex number to a double-precision complex number, or do any mixed complex operation, because it can't figure out who owns the method; it doesn't have a way of doing that kind of promotion. In Julia, you can have a method for adding a Float32 to a Float32, but also a method for adding — let's see — here's adding a complex number to a real number, for example, or a real number to a complex number: you want to specialize things. In fact, we can click on the link here and see the code: adding a complex number to a real number in Julia looks like
this — it's the most obvious thing, and it's implemented in Julia: plus of a Complex and a Real creates a new complex number, but you only have to add to the real part; you can leave the imaginary part alone. And this works on any complex type. Okay — ah, there are too many methods for that; I can shrink that. Let's see — so there's another type-inference thing I'll just mention briefly. One of the things you have to do to make this type-inference process work is: given the types of the arguments, you have to figure out the type of the return value. And that means when you design a function, it has to be what's called type-stable: the type of the result should depend on the types of the arguments, and not on the values of the arguments — because the types are known at compile time; the values are only known at run time. And it turns out, if you don't have this in mind — in C you have no choice but to obey this, but in a dynamic language like Python or MATLAB, if you're not thinking about this, it's really easy to design things so that it doesn't work. A classic example is the square-root function. Suppose I pass an integer to it: the square root of 5 — the result has to be a floating-point number, it's 2.23-whatever. Now, if I do the square root of 4, of course that square root is an integer — but if I returned an integer for that, then it wouldn't be type-stable anymore: the return type would depend on the value of the input, whether it was a perfect square or not. So it returns a floating-point value even if the input is an integer. Yes? [question] Well, it's just a lookup that happens at compile time, so it's really kind of irrelevant — at least if type inference succeeds. If type inference fails, then it happens at run time and it's slower, but it's not like a dictionary search, so it's not as
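The Base definition being shown is, paraphrased from memory of Base's complex.jl (check the actual source for the exact form):

```julia
# Roughly how Base defines complex-plus-real:
#     +(z::Complex, x::Real) = Complex(real(z) + x, imag(z))
# Only the real part is touched, and it works for any element types:
(1.0 + 2.0im) + 3    # 4.0 + 2.0im
(1 + 2im) + 0.5      # 1.5 + 2.0im  (mixed types promote automatically)
```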
slow as you might think. But most of the time you don't worry about that, because if you care about performance, you want to arrange your code so that type inference succeeds. So maybe it's something you do in performance optimization: when you prototype, you don't care about types — you say x equals 3 and on the next line you say x equals an array, whatever — but when you're optimizing your code, you tweak it a little bit to make sure that things don't change types willy-nilly, and that the types of a function's results depend on the types of the arguments, not on the values. As I mentioned, square root is what really confuses people at first: if you take the square root of minus one, you might think you should get a complex value — and instead it gives you an error. Because what are the choices here? It could give you an error, or it could give you a complex value — but if it gave you a complex value, then the return type of square root would depend upon the value of the input, not just the type. In MATLAB, for example, if you take the square root of minus one, it'll happily give you a complex number. But as a result — MATLAB has a compiler, right, and it has many, many challenges, but one simple thing to understand is this: if the MATLAB compiler sees a square-root function anywhere in your function, even if it knows the inputs are real, it doesn't know if the outputs are complex or real — unless it can prove that the inputs were non-negative. And that means it has to compile two code paths for the output. But then suppose it calls sqrt again, or some other function like that: you quickly get a combinatorial explosion of possible code paths, because of all the possible types, and at some point you just give up and put things in a box. But as soon as you put things in a box and you're looking at types at run time, you're dead, from a performance perspective.
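The type-stability convention around sqrt looks like this in practice:

```julia
# Sketch of the type-stability convention around sqrt:
sqrt(4)            # 2.0 — a Float64, even though 4 is a perfect square
# sqrt(-1)         # DomainError: returning a Complex here would make the
#                  #   return *type* depend on the *value* of the input
sqrt(-1 + 0im)     # 0.0 + 1.0im — ask for a complex result with a complex input
sqrt(complex(-1))  # same thing, spelled with the complex() constructor
```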
So Python actually does the same thing: if you want a complex result from square root, you have to give it a complex input. And in Julia, the imaginary unit is actually spelled `im` — they decided i is too useful for loop variables, i and j both — so `im` is the complex unit, and if you take the square root of a complex input, it gives you a complex output. So in Python, if you take the square root of a negative value, it gives an error unless you give it a complex input. But Python made other mistakes. For example, in Python an integer is guaranteed never to overflow: if you add one plus one plus one over and over again, eventually you'll overflow the size of a 64-bit integer, and Python will just switch, under the hood, to an arbitrary-precision integer. That probably seemed like a nice idea at the time, and the rest of Python is so slow that the performance cost of this test makes no difference in typical Python code — but it makes it very difficult to compile, because it means that if you have integer inputs and you see x + 1 in Python, the compiler can't just compile that to one instruction, unless it can somehow prove that x is sufficiently small. So in Julia, integer arithmetic will overflow — but the default integer arithmetic is 64 bits, so in practice it never overflows unless you're doing number theory, and you usually know if you're doing number theory, and then you use arbitrary-precision integers. It was much worse in the old days — this is something people worried about a lot, you know, before you were born, when there were 16-bit machines — and it's really, really easy to overflow 16 bits: the biggest signed value is, you know, 32,767, so you're constantly worrying about overflow. And even at 32 bits, the biggest signed value is about two billion, and it's really easy to
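The overflow trade-off being described looks like this:

```julia
# Sketch of the overflow trade-off: Int64 wraps, BigInt doesn't.
typemax(Int64)           # 9223372036854775807
typemax(Int64) + 1       # wraps to -9223372036854775808: no hidden bignum check
big(typemax(Int64)) + 1  # 9223372036854775808: opt in to arbitrary precision
typemax(Int16)           # 32767 — why 16-bit machines overflowed constantly
```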
overflow that, even just counting bytes — there are files bigger than two gigabytes easily nowadays — so people worried about this all the time. But with 64-bit integers, basically, a 64-bit integer will never overflow if it's counting objects that actually exist in the universe — bytes, or loop iterations, or something like that. So you just say okay to 64 bits, and if you're doing number theory, you use BigInts. Okay, so the final thing I want to talk about — let's see how much time — is defining our own types. This is where — this is the real test of the language, right? In a typical language there's a certain built-in set of functions and built-in types, and those things are fast. For example, for Python there actually is a compiler called Numba that does exactly what Julia does: it looks at the arguments, type-specializes things, calls LLVM, and compiles to fast code — but it only works if your only container type is a NumPy array and your only scalar type is one of the twelve scalar types that NumPy supports. If you have your own user-defined number type or your own user-defined container type, it doesn't work. With user-defined container types, it's probably easy to understand why that's useful; user-defined number types are extremely useful as well. For example, there's a package in Julia that provides a number type called dual numbers, and those have the property that if you pass them into a function, they compute the function and its derivative. A dual number just carries around function and derivative values, and it has slightly different plus and times and so forth that just apply the product rule and so on — it just propagates derivatives. And then if you have Julia code — like that Vandermonde function — it'll just compute its derivative as well. Okay, so I want to be able to define my own type. A very simple type you might want to add would be
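The dual-number idea can be sketched with a minimal hand-rolled version (illustration only — the real packages, which I believe are DualNumbers.jl and ForwardDiff.jl, are far more complete):

```julia
# Minimal dual-number sketch: a value and its derivative, propagated together.
import Base: +, *
struct Dual
    val::Float64   # function value
    der::Float64   # derivative value
end
+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)  # product rule

f(x) = x * x + x
f(Dual(3.0, 1.0))   # Dual(12.0, 7.0): f(3) = 12 and f'(3) = 2·3 + 1 = 7
```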
points — 2D vectors, points in two-dimensional space. Of course I could use an array of two values, but an array is a really heavyweight object for just two values. If I know at compile time that there are two values, I don't need a pointer to a block of memory; I can actually store them in registers, I can unroll the loops over them — everything should be faster. You can get an order of magnitude in speed by specializing on the number of elements for small arrays, compared to a general array data structure. So let's make a point. I'm going to go through several iterations of this, starting with a slow iteration: define a mutable struct. So this will be a mutable object, a Point that has two values, x and y, which can be of any type. I'll define a plus function that can add them — the most obvious thing: it adds the x components and the y components — and I'll define a zero function, the additive identity, that just returns the point (0, 0). And then I can construct an object Point(3, 4), I can say Point(3, 4) plus Point(5, 6), and it works. Actually, right now it's very generic — probably too generic: the x can be a floating-point number here, and then y can be a complex number of two integers, or I could even make x a string and y an array, which doesn't make sense. So I probably should have restricted the types of x and y a little bit, just to prevent the user from putting in something that makes no sense at all. Okay — so these fields can be anything, and this type is not ideal in several ways. Let's think about how this has to be stored in memory. Take a point Point(1, 3.7): in memory there's an x and there's a y, but x and y can be of any type, so they have to be pointers to boxes — there's a pointer
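The first, slow iteration is roughly this (the name Point1 matches how the lecture refers to it; the details are my reconstruction):

```julia
# Reconstruction of the first, slow iteration: a mutable struct with untyped fields.
import Base: +, zero
mutable struct Point1
    x    # can hold any type: stored as a pointer to a box
    y
end
+(p::Point1, q::Point1) = Point1(p.x + q.x, p.y + q.y)
zero(::Type{Point1}) = Point1(0, 0)

Point1(3, 4) + Point1(5, 6)    # Point1(8, 10)
Point1("oops", [1, 2])         # too generic: nonsense is allowed
```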
to the integer 1, and there's a pointer to a Float64, 3.7 in this case. So already we know this is not going to be good news for performance. And it's mutable: the mutable struct means that if I take p equals some Point and then say p.x = 7, I can change the value — which seems like a harmless thing to do, but is actually a big problem. Because, for example, if I make an array of three p's, and then I say p.y = 8 and look at that array, it has to change the y component. So if I have five references to p — in general, anything that's looking at that object — if this is an object you can mutate, it means that if I have another reference q pointing to the same object, and I say p.x = 4, then q.x had better also be 4 at that point. So to have mutable semantics — the semantics of something you can change, where other references see that change — this object has to be stored in memory on the heap, as a pointer to the object, so that other pointers to the same object see the mutation. It can't just be stuck in a register or something like that; it has to be something other references can see. So this is bad. So I can call Point1.(a, a) — the constructor, elementwise — where a is this array of 10^7 random numbers I was benchmarking on before, the one that was taking 10 milliseconds. And I can sum it — I can call the built-in sum function on this, and I can even call my mysum function on this, because it supports a zero function and it supports plus. So here, if I scroll back up, I have an array of 10^7 values of type Point1, so the type Point1 is attached to the array. (If I have an Array of Point1 and 1, the 1 here means it's a one-dimensional array; there are also 2D, 3D, and so forth.) In memory, that looks like a
Point1 value, a Point1 value, a Point1 value — but each one of those now has to be a pointer to an x and a y, which themselves are pointers to boxes. So operating on something like this is going to be really slow — it's a lot of pointer chasing; it has to check at run time what's the type of x, what's the type of y. And in fact, instead of 10 milliseconds, it took five or six hundred milliseconds. So to do better, we need to do two things. First of all, for x and y, we have to be able to see what type they are — they can't just be any arbitrary old thing that has to be a pointer to a box. And second, the point object has to be immutable: it has to be something where, if I have p equals something and q equals something, I can't change p and expect q to see it — otherwise, if it's mutable, those semantics have to be implemented as a pointer to an object on the heap, and you're dead. So I just say struct — a struct now is not mutable, it doesn't have the mutable keyword — and I can give the fields a type: I can say that x and y are both Float64, both 64-bit floating-point numbers. I'll define plus the same way, zero the same way, and now I can add them and so forth. But now if I make one of these things and say p.x = 6, it will give an error: you can't actually mutate it — don't even try — because we can't support those semantics on this type. But that means — if you look at what the compiler is allowed to do, and what it does do — if you have an array of Point2, it looks like just the x value, the y value, the x value, the y value, and so forth; each of these is exactly eight bytes, a Float64, and all the types are known at compile time. So if I sum them, it should take about twice as long as summing real numbers, which took 10
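The concrete, immutable version is roughly this (again a reconstruction, with the name Point2 following the lecture):

```julia
# Reconstruction of the concrete, immutable version: typed fields, stored inline.
import Base: +, zero
struct Point2
    x::Float64
    y::Float64
end
+(p::Point2, q::Point2) = Point2(p.x + q.x, p.y + q.y)
zero(::Type{Point2}) = Point2(0.0, 0.0)

p = Point2(3.0, 4.0) + Point2(5.0, 6.0)   # Point2(8.0, 10.0)
# p.x = 6.0   # ERROR: immutable — which is what lets an Array{Point2} be
#             # stored as one flat block of x,y,x,y,… Float64 values
```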
How fast? Summing 10^7 plain real numbers took 5 milliseconds, and this is twice as much data, because it has to sum the x's and the y's, so it should take about 10. Let's benchmark it and see if that's still true: yeah, it took about 10. So the compiler is smart enough. First, it stores this inline, as one big block of consecutive memory. And then when you sum them (this is the built-in sum, but our sum function will work the same way), the compiler is smart enough to say: I can load x into a register, load y into a register, and have a tight loop where basically one instruction sums the x's, one instruction sums the y's, and repeat. So that's about as good as you could get. But we paid a big price: we've lost all generality. These can only be two 64-bit floating-point numbers, so I can't have two single-precision numbers or two integers; this is like a struct of two doubles in C. If I had to do this to get performance in Julia, that would suck: I'd basically be writing C code in a slightly higher-level syntax, and I'd be losing the benefit of using a high-level language. The way you get around this is that you don't define just one point type, you define a whole family of types: this thing for two Float64s, for two Float32s, for two Ints; in fact an infinite family of types, for two things of any type you want, as long as they're two real types. The way you do that in Julia is a parameterized type (this is called parametric polymorphism, and it's similar to what you see in C++ templates). So now I have a struct, not mutable, called Point3, and the curly braces say it's parameterized by a type T.
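A hedged sketch of the parametric version; the field and method definitions are reconstructed from the description, not copied from the slides:

```julia
# One definition yields an infinite family of concrete types:
# Point3{Float64}, Point3{Int64}, Point3{Float32}, ...
struct Point3{T<:Real}
    x::T
    y::T
end

Base.:+(a::Point3, b::Point3) = Point3(a.x + b.x, a.y + b.y)
Base.zero(::Type{Point3{T}}) where {T<:Real} = Point3(zero(T), zero(T))

Point3(3, 4)      # instantiates the concrete type Point3{Int64}
Point3(3.0, 4.0)  # a different concrete type: Point3{Float64}
```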
I've restricted things slightly here: I've said x and y have to be the same type. I didn't have to do that (I could have had two parameters, one for the type of x and one for the type of y), but most of the time you'd want them both to be the same type, whether that's both Float64s, both Float32s, both integers, whatever. And T is constrained: the <: means "is a subtype of", so T is any subtype of Real. It could be Float64, it could be Int64, it could be Int8, it could be BigFloat, it could be a user-defined type; it doesn't care. So Point3 here is really a whole hierarchy. I'm not defining one type, I'm defining a whole set of types: there's a Point3{Int64}, a Point3{Float32}, a Point3{Float64}, and so on, infinitely many, as many as you want, and you create more types on the fly just by instantiating. Otherwise it looks the same. The plus function is basically the same: I add the x components and the y components. The zero function is the same, except now I make sure the zeros are of type T, whatever that type is. And now if I say Point3(3, 4), I'm instantiating a particular instance of this. Point3 by itself is an abstract type; a particular instance has one of the concrete types, in this case Point3{Int64}, of two 64-bit integers. And I can add them. In fact, adding mixed types will already work, because the addition function works for any two Point3s; I didn't say they had to be Point3s of the same type. It could be one of these and one of those, and then it determines the type of the result by type inference: if you add a Point3 of two Int64s and a Point3 of two Float64s, it says, oh, p.x is an Int64 and q.x is a Float64, which plus function do I call? There's a plus function for that mixed case, and it promotes the result to Float64, so the sums are Float64s, and I end up creating a Point3 of two Float64s.
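The mixed-type promotion just described can be checked with a small stand-in type. P here is a hypothetical minimal version of the Point3 from the talk, redefined so the snippet is self-contained:

```julia
struct P{T<:Real}
    x::T
    y::T
end
Base.:+(a::P, b::P) = P(a.x + b.x, a.y + b.y)  # note: no same-type requirement

p = P(1, 2)      # P{Int64}
q = P(1.5, 2.5)  # P{Float64}
typeof(p + q)    # P{Float64}: Int64 + Float64 promotes to Float64
```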
So this kind of mixed promotion is done automatically, and you can define your own promotion rules in Julia as well. And I can make an array. If I have an array of Point3{Float64}, that type is attached to the whole array, so an element is not an arbitrary Point3, it's a Point3 of two Float64s. It gets stored, again, as just 10^7 elements laid out x, y, x, y, each one 8 bytes, one after the other. The compiler knows all the types, so when you sum it, it knows everything at compile time: it will load x into a register, load y into a register, add with one instruction, and so the sum function should be fast. We can call the built-in sum function, or our own sum function. I didn't put @simd in ours, so that's going to be twice as slow... in fact, if you look, the built-in sum function is also implemented in Julia, it just has @simd on the sum. Although LLVM is sometimes smart enough, if you give it a struct of two values, to see that adding these two values and these two values can use SIMD instructions... wait, my sum didn't use SIMD? I thought it did; I thought I removed it. Okay, they're the same speed, so I take it back: maybe LLVM is not smart enough in this case to use SIMD automatically. We could try putting the @simd annotation there and trying again. So let's put @simd in; we need to
find that definition and just rerun this; it'll notice that I changed the definition and recompile it. (The @btime macro times it multiple times: the first time it calls the function it's slow because it's compiling, but @btime takes the minimum over several runs.) So let's see... no, it didn't make any difference, so I take that back too. This is a problem in general with vectorizing compilers: they're not that smart if you're using anything other than an array of an elementary data type. So for more complicated data structures you often have to use SIMD structures explicitly, and there is a way to do that in Julia; there's a higher-level library on top of it where you basically create a tuple, add things, and it will do SIMD acceleration. So anyway, the story of why Julia can be compiled to fast code is a combination of lots of little things, but there are a few big ones. One is that it specializes things at compile time; of course you could do that in any language, but it relies on designing the language so that you can do type inference, and it relies on having these kinds of parameterized types, giving you a way to talk about types and to attach types to other types. Now that you understand what those little curly braces mean, you can see that Array itself is defined in Julia as another parameterized type: it's parameterized by the type of the element and also by the dimensionality. That's the same mechanism, used to attach types to an array. And you can have your own user-defined array types: Array in Julia is implemented mostly in Julia, and there are other packages that implement their own kinds of arrays with the same performance.
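The hand-written sum with the @simd annotation experimented with above might look like this. The name mysum is assumed for illustration; @btime comes from the BenchmarkTools.jl package, so it is left commented out:

```julia
function mysum(a)
    s = zero(eltype(a))
    @simd for i in eachindex(a)  # allow reassociation so LLVM can vectorize
        @inbounds s += a[i]
    end
    return s
end

a = rand(10^5)
mysum(a) ≈ sum(a)  # same result, up to floating-point reassociation

# using BenchmarkTools
# @btime mysum($a)  # reports the minimum over many runs; first call compiles
```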
One of the goals of Julia is to build in as little as possible, so that there isn't some set of privileged types that the compiler knows about while everything else is second class. User code is just as good as the built-in code, and in fact the built-in code is mostly just implemented in Julia; there's a small core implemented in C for bootstrapping, basically. So: having parameterized types, and, another technicality, having all concrete types be final in Julia. A concrete type is something you can actually store in memory. Point3{Int64} is concrete: an object of two integers actually has that type. That's as opposed to Point3 itself, which is abstract: you can't actually have one of those, you can only have one of the instances of its concrete types. And concrete types are final: it's not possible to have a subtype of one. If you could, you'd be dead, because if this is an array of these things, the compiler has to know each element is actually one of these things and not some subtype. Whereas in other languages, like Python, you can have subtypes of concrete types, so even if you said "this is an array of a particular Python type", you wouldn't really know an element is exactly that type; it might be some subtype of it. That's one of the reasons you can't implement NumPy in Python: there's no way to say, at the language level, that this is really that type and nothing else.

[In response to a question:] Yeah, so it's calling LLVM. Basically there are a few passes, and one of the fun things is that you can inspect all the passes, and practically intercept almost all of them. So of course, when you type code like this, first it gets parsed. And macros, those things that begin with @, are actually functions that are called right after parsing; they can take the parsed code and rewrite it arbitrarily, so they can extend the language that way.
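The stages mentioned here can be inspected with Julia's standard reflection macros, for example:

```julia
f(x) = 2x + 1

@code_lowered f(3)  # after parsing and macro expansion (lowered AST)
@code_typed   f(3)  # after type inference
# @code_llvm   f(3)  # LLVM IR
# @code_native f(3)  # machine code
```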
So first it's parsed, and maybe rewritten by a macro, and then you get an abstract syntax tree. Then when you call it, say f(3), it says, oh, x is an Int64, and it runs the type-inference pass: it tries to figure out the type of everything, which version of plus to call, and so forth. Then it decides whether to inline some things, and when it's done with all that it spits out LLVM IR, calls LLVM, and compiles it to machine code. And then it caches that someplace, so the next time you call f, say f(4), with another integer, it doesn't repeat the whole process; it notices it's cached. But at the lowest level it's just LLVM.

There are tons of things I haven't shown you. I mentioned metaprogramming: Julia has this macro facility, so you can basically write syntax that rewrites other syntax, which is really cool for code generation. You can also intercept things after the type-inference phase: you can write something called a generated function. At parse time the compiler knows how things are spelled, and a macro can rewrite how they're spelled, but it doesn't know what anything actually means; it just knows x as a symbol, not whether x is an integer or whatever. When you actually compile f(x), at that point it knows x is an integer, and a generated (or staged) function basically runs at that time and says: you told me x is an integer, so now I'll rewrite the code based on that. This is really useful. For example, there are some cool facilities for multidimensional arrays, because the dimensionality of the array is part of the type. So you can say: oh, this is a three-dimensional array, all right, three loops; oh, you have a four-dimensional array, all right, four loops. It can rewrite the code depending on the dimensionality, with code generation.
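A minimal generated function, as a sketch of the staged-compilation idea; the function name is hypothetical:

```julia
# Inside an @generated function body, the argument `x` is bound to the
# *type* of x (not its value), and the returned expression becomes the
# method body compiled for that type.
@generated function describe(x)
    return :( "compiled for " * $(string(x)) )
end

describe(1.5)  # "compiled for Float64"
```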
So you can have code that basically generates any number of nested loops depending on the types of the arguments, and all the generation is done at compile time, after type inference, so it knows the dimensionality of the array. There are lots of fun things like that.

Of course it has parallel facilities. They're not quite as advanced as Cilk at this point, but that's the direction they're heading. There's no global interpreter lock like in Python; there's no interpreter at all. There's a threading facility and a pool of workers, you can spawn tasks, you can thread a loop, and the garbage collection is threading-aware, so that's safe. They're gradually growing a more and more powerful runtime, hopefully eventually looking into some of Professor Leiserson's advanced threading work, the Tapir compiler, or whatever it is. There's also, well, most of what I do in my research is more coarse-grained, distributed-memory parallelism, running on supercomputers and things like that, and for that there's MPI, there's a remote-procedure-call library, there are different flavors of that. Any other questions?

[In response to a question:] Yeah, the BigInt type in Julia is actually calling out to GMP. Let me just open a new notebook. So if I say big(3000), and then take the factorial of that (there's a built-in factorial), this is called a bignum: something where the number of digits changes at runtime. Of course these are orders of magnitude slower than the hardware types, because a bignum basically has to be implemented as a loop over digits in some base, and when you add or multiply you have to loop over those digits at runtime. These bignum libraries are quite large and heavily optimized, so nobody has reimplemented one in Julia.
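For instance (the numbers here are chosen for illustration):

```julia
n = factorial(big(3000))  # big(3000) is a BigInt, so this cannot overflow
ndigits(n)                # the digit count is only known at runtime

# factorial(3000) on a plain Int64 would throw an OverflowError instead.
```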
Julia just calls out to a C library, the GNU Multiple Precision arithmetic library. And for floating-point values there's something called BigFloat. I can set the precision, setprecision(BigFloat, 1000), that's 1000 binary digits, and then say BigFloat(pi), and there it is to all those digits.

By the way, I can have a variable α̂₂ = 17; that's allowed. All that's happening here is that Julia allows almost arbitrary Unicode characters in identifiers, so I can even, let me make it bigger, have an identifier that's a koala. There are two issues here. One is just having a language that allows those characters as identifiers. Python 3 also allows Unicode identifiers, although I think Julia, of all the common languages, probably has the widest Unicode support; most languages allow only a very narrow range of Unicode characters in identifiers. So Python 3 would not allow the koala, and it would not allow the α̂₂, because it doesn't allow the numeric-subscript Unicode characters. The other issue is how you type these things, and that's more of an editor question. In Julia we implemented it initially in the REPL and in Jupyter, and now all the editors support it: you can just do tab-completion of LaTeX. I can type \gamma and hit tab and it completes to the Unicode character; \dot puts a dot over it; backslash-superscript-four makes a superscript four. And that's allowed, so it's quite nice. When I write emails and want to put equations in them, I go to the Julia REPL and tab-complete all my LaTeX characters, because it's the easiest way to type these Unicode math characters.
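For example, in a REPL or notebook:

```julia
# Typed via LaTeX tab-completion: \alpha<tab>\hat<tab>\_2<tab>
α̂₂ = 17
γ = 0.5772156649  # \gamma<tab>

α̂₂ + 1  # a perfectly ordinary integer variable
```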
IPython borrowed this, so you can do the same thing in IPython notebooks as well. It's really fun: if you read old math codes, especially old Fortran codes, you see lots of variables named things like alphahat_3, and it's so much nicer to have a variable that's actually α̂₃.

That's cute. Steve, thanks very much; this was great. We are, as he mentioned, looking at a project to merge the Julia technology with the Cilk technology, and we're right now in the process of putting together a grant proposal; if that gets funded, there may be some UROPs.
00:25:05,400 --> 00:25:07,500 631 00:25:07,500 --> 00:25:09,120 632 00:25:09,120 --> 00:25:11,280 633 00:25:11,280 --> 00:25:12,840 634 00:25:12,840 --> 00:25:14,909 635 00:25:14,909 --> 00:25:16,530 636 00:25:16,530 --> 00:25:18,120 637 00:25:18,120 --> 00:25:20,100 638 00:25:20,100 --> 00:25:21,630 639 00:25:21,630 --> 00:25:24,480 640 00:25:24,480 --> 00:25:26,760 641 00:25:26,760 --> 00:25:28,080 642 00:25:28,080 --> 00:25:30,090 643 00:25:30,090 --> 00:25:32,250 644 00:25:32,250 --> 00:25:33,510 645 00:25:33,510 --> 00:25:36,870 646 00:25:36,870 --> 00:25:38,880 647 00:25:38,880 --> 00:25:41,610 648 00:25:41,610 --> 00:25:43,980 649 00:25:43,980 --> 00:25:47,700 650 00:25:47,700 --> 00:25:49,049 651 00:25:49,049 --> 00:25:51,030 652 00:25:51,030 --> 00:25:54,360 653 00:25:54,360 --> 00:25:57,030 654 00:25:57,030 --> 00:26:00,299 655 00:26:00,299 --> 00:26:04,200 656 00:26:04,200 --> 00:26:08,990 657 00:26:08,990 --> 00:26:11,490 658 00:26:11,490 --> 00:26:18,510 659 00:26:18,510 --> 00:26:20,970 660 00:26:20,970 --> 00:26:23,130 661 00:26:23,130 --> 00:26:24,570 662 00:26:24,570 --> 00:26:26,940 663 00:26:26,940 --> 00:26:29,880 664 00:26:29,880 --> 00:26:32,310 665 00:26:32,310 --> 00:26:35,039 666 00:26:35,039 --> 00:26:38,820 667 00:26:38,820 --> 00:26:40,409 668 00:26:40,409 --> 00:26:41,820 669 00:26:41,820 --> 00:26:43,440 670 00:26:43,440 --> 00:26:46,260 671 00:26:46,260 --> 00:26:48,030 672 00:26:48,030 --> 00:26:53,490 673 00:26:53,490 --> 00:26:55,289 674 00:26:55,289 --> 00:26:57,180 675 00:26:57,180 --> 00:26:59,669 676 00:26:59,669 --> 00:27:01,710 677 00:27:01,710 --> 00:27:04,650 678 00:27:04,650 --> 00:27:06,539 679 00:27:06,539 --> 00:27:08,280 680 00:27:08,280 --> 00:27:09,780 681 00:27:09,780 --> 00:27:12,750 682 00:27:12,750 --> 00:27:15,210 683 00:27:15,210 --> 00:27:19,260 684 00:27:19,260 --> 00:27:21,870 685 00:27:21,870 --> 00:27:23,250 686 00:27:23,250 --> 00:27:28,789 687 00:27:28,789 --> 00:27:31,890 688 00:27:31,890 --> 
00:27:34,169 689 00:27:34,169 --> 00:27:37,740 690 00:27:37,740 --> 00:27:39,600 691 00:27:39,600 --> 00:27:41,610 692 00:27:41,610 --> 00:27:44,610 693 00:27:44,610 --> 00:27:47,010 694 00:27:47,010 --> 00:27:52,970 695 00:27:52,970 --> 00:27:57,000 696 00:27:57,000 --> 00:27:59,520 697 00:27:59,520 --> 00:28:01,530 698 00:28:01,530 --> 00:28:02,789 699 00:28:02,789 --> 00:28:05,010 700 00:28:05,010 --> 00:28:07,049 701 00:28:07,049 --> 00:28:09,390 702 00:28:09,390 --> 00:28:12,000 703 00:28:12,000 --> 00:28:14,159 704 00:28:14,159 --> 00:28:16,880 705 00:28:16,880 --> 00:28:20,240 706 00:28:20,240 --> 00:28:23,030 707 00:28:23,030 --> 00:28:26,630 708 00:28:26,630 --> 00:28:30,620 709 00:28:30,620 --> 00:28:32,450 710 00:28:32,450 --> 00:28:35,660 711 00:28:35,660 --> 00:28:37,040 712 00:28:37,040 --> 00:28:38,450 713 00:28:38,450 --> 00:28:39,590 714 00:28:39,590 --> 00:28:41,870 715 00:28:41,870 --> 00:28:45,590 716 00:28:45,590 --> 00:28:48,710 717 00:28:48,710 --> 00:28:51,770 718 00:28:51,770 --> 00:28:54,050 719 00:28:54,050 --> 00:29:01,340 720 00:29:01,340 --> 00:29:03,170 721 00:29:03,170 --> 00:29:05,600 722 00:29:05,600 --> 00:29:06,860 723 00:29:06,860 --> 00:29:09,380 724 00:29:09,380 --> 00:29:11,570 725 00:29:11,570 --> 00:29:14,920 726 00:29:14,920 --> 00:29:17,930 727 00:29:17,930 --> 00:29:21,650 728 00:29:21,650 --> 00:29:25,190 729 00:29:25,190 --> 00:29:26,690 730 00:29:26,690 --> 00:29:29,570 731 00:29:29,570 --> 00:29:33,620 732 00:29:33,620 --> 00:29:34,880 733 00:29:34,880 --> 00:29:36,950 734 00:29:36,950 --> 00:29:38,060 735 00:29:38,060 --> 00:29:40,610 736 00:29:40,610 --> 00:29:46,100 737 00:29:46,100 --> 00:29:48,440 738 00:29:48,440 --> 00:29:49,940 739 00:29:49,940 --> 00:29:52,970 740 00:29:52,970 --> 00:29:54,500 741 00:29:54,500 --> 00:29:56,360 742 00:29:56,360 --> 00:29:58,220 743 00:29:58,220 --> 00:30:00,050 744 00:30:00,050 --> 00:30:01,850 745 00:30:01,850 --> 00:30:04,070 746 00:30:04,070 --> 00:30:06,130 747 
00:30:06,130 --> 00:30:09,140 748 00:30:09,140 --> 00:30:10,220 749 00:30:10,220 --> 00:30:13,850 750 00:30:13,850 --> 00:30:16,580 751 00:30:16,580 --> 00:30:18,020 752 00:30:18,020 --> 00:30:20,150 753 00:30:20,150 --> 00:30:25,310 754 00:30:25,310 --> 00:30:27,290 755 00:30:27,290 --> 00:30:28,899 756 00:30:28,899 --> 00:30:30,580 757 00:30:30,580 --> 00:30:32,080 758 00:30:32,080 --> 00:30:34,149 759 00:30:34,149 --> 00:30:35,919 760 00:30:35,919 --> 00:30:37,899 761 00:30:37,899 --> 00:30:39,999 762 00:30:39,999 --> 00:30:42,879 763 00:30:42,879 --> 00:30:44,909 764 00:30:44,909 --> 00:30:52,659 765 00:30:52,659 --> 00:30:53,799 766 00:30:53,799 --> 00:30:55,239 767 00:30:55,239 --> 00:30:56,589 768 00:30:56,589 --> 00:30:59,440 769 00:30:59,440 --> 00:31:01,210 770 00:31:01,210 --> 00:31:03,009 771 00:31:03,009 --> 00:31:05,619 772 00:31:05,619 --> 00:31:07,719 773 00:31:07,719 --> 00:31:10,960 774 00:31:10,960 --> 00:31:12,729 775 00:31:12,729 --> 00:31:15,700 776 00:31:15,700 --> 00:31:17,859 777 00:31:17,859 --> 00:31:19,180 778 00:31:19,180 --> 00:31:21,519 779 00:31:21,519 --> 00:31:24,060 780 00:31:24,060 --> 00:31:27,339 781 00:31:27,339 --> 00:31:29,560 782 00:31:29,560 --> 00:31:31,960 783 00:31:31,960 --> 00:31:33,820 784 00:31:33,820 --> 00:31:36,759 785 00:31:36,759 --> 00:31:38,200 786 00:31:38,200 --> 00:31:41,169 787 00:31:41,169 --> 00:31:43,149 788 00:31:43,149 --> 00:31:47,589 789 00:31:47,589 --> 00:31:50,229 790 00:31:50,229 --> 00:31:52,389 791 00:31:52,389 --> 00:31:53,710 792 00:31:53,710 --> 00:31:56,440 793 00:31:56,440 --> 00:31:58,299 794 00:31:58,299 --> 00:32:00,729 795 00:32:00,729 --> 00:32:02,859 796 00:32:02,859 --> 00:32:04,089 797 00:32:04,089 --> 00:32:07,779 798 00:32:07,779 --> 00:32:10,269 799 00:32:10,269 --> 00:32:12,969 800 00:32:12,969 --> 00:32:15,999 801 00:32:15,999 --> 00:32:18,669 802 00:32:18,669 --> 00:32:20,139 803 00:32:20,139 --> 00:32:21,789 804 00:32:21,789 --> 00:32:25,060 805 00:32:25,060 --> 
00:32:26,379 806 00:32:26,379 --> 00:32:27,879 807 00:32:27,879 --> 00:32:28,659 808 00:32:28,659 --> 00:32:30,909 809 00:32:30,909 --> 00:32:33,909 810 00:32:33,909 --> 00:32:35,440 811 00:32:35,440 --> 00:32:38,649 812 00:32:38,649 --> 00:32:39,549 813 00:32:39,549 --> 00:32:41,019 814 00:32:41,019 --> 00:32:42,850 815 00:32:42,850 --> 00:32:48,789 816 00:32:48,789 --> 00:32:50,259 817 00:32:50,259 --> 00:32:51,820 818 00:32:51,820 --> 00:32:55,090 819 00:32:55,090 --> 00:32:57,610 820 00:32:57,610 --> 00:32:59,649 821 00:32:59,649 --> 00:33:01,119 822 00:33:01,119 --> 00:33:03,399 823 00:33:03,399 --> 00:33:04,600 824 00:33:04,600 --> 00:33:06,489 825 00:33:06,489 --> 00:33:11,100 826 00:33:11,100 --> 00:33:13,690 827 00:33:13,690 --> 00:33:15,549 828 00:33:15,549 --> 00:33:18,129 829 00:33:18,129 --> 00:33:21,489 830 00:33:21,489 --> 00:33:28,119 831 00:33:28,119 --> 00:33:29,560 832 00:33:29,560 --> 00:33:37,139 833 00:33:37,139 --> 00:33:39,279 834 00:33:39,279 --> 00:33:42,009 835 00:33:42,009 --> 00:33:43,989 836 00:33:43,989 --> 00:33:47,830 837 00:33:47,830 --> 00:33:50,230 838 00:33:50,230 --> 00:33:52,480 839 00:33:52,480 --> 00:33:54,669 840 00:33:54,669 --> 00:33:57,190 841 00:33:57,190 --> 00:34:01,060 842 00:34:01,060 --> 00:34:08,139 843 00:34:08,139 --> 00:34:09,669 844 00:34:09,669 --> 00:34:11,079 845 00:34:11,079 --> 00:34:13,480 846 00:34:13,480 --> 00:34:15,040 847 00:34:15,040 --> 00:34:18,700 848 00:34:18,700 --> 00:34:22,809 849 00:34:22,809 --> 00:34:25,990 850 00:34:25,990 --> 00:34:27,220 851 00:34:27,220 --> 00:34:28,839 852 00:34:28,839 --> 00:34:30,849 853 00:34:30,849 --> 00:34:33,040 854 00:34:33,040 --> 00:34:34,300 855 00:34:34,300 --> 00:34:35,399 856 00:34:35,399 --> 00:34:39,339 857 00:34:39,339 --> 00:34:42,369 858 00:34:42,369 --> 00:34:43,990 859 00:34:43,990 --> 00:34:46,669 860 00:34:46,669 --> 00:34:51,260 861 00:34:51,260 --> 00:34:53,899 862 00:34:53,899 --> 00:34:56,299 863 00:34:56,299 --> 00:34:58,010 864 
00:34:58,010 --> 00:34:59,690 865 00:34:59,690 --> 00:35:01,549 866 00:35:01,549 --> 00:35:03,079 867 00:35:03,079 --> 00:35:06,410 868 00:35:06,410 --> 00:35:09,589 869 00:35:09,589 --> 00:35:14,690 870 00:35:14,690 --> 00:35:18,019 871 00:35:18,019 --> 00:35:19,099 872 00:35:19,099 --> 00:35:21,289 873 00:35:21,289 --> 00:35:26,180 874 00:35:26,180 --> 00:35:29,990 875 00:35:29,990 --> 00:35:32,569 876 00:35:32,569 --> 00:35:35,059 877 00:35:35,059 --> 00:35:36,559 878 00:35:36,559 --> 00:35:37,730 879 00:35:37,730 --> 00:35:40,400 880 00:35:40,400 --> 00:35:45,710 881 00:35:45,710 --> 00:35:48,019 882 00:35:48,019 --> 00:35:51,620 883 00:35:51,620 --> 00:35:54,289 884 00:35:54,289 --> 00:35:55,789 885 00:35:55,789 --> 00:35:57,410 886 00:35:57,410 --> 00:35:58,579 887 00:35:58,579 --> 00:36:00,829 888 00:36:00,829 --> 00:36:02,120 889 00:36:02,120 --> 00:36:06,019 890 00:36:06,019 --> 00:36:09,140 891 00:36:09,140 --> 00:36:12,349 892 00:36:12,349 --> 00:36:14,750 893 00:36:14,750 --> 00:36:16,609 894 00:36:16,609 --> 00:36:19,010 895 00:36:19,010 --> 00:36:20,690 896 00:36:20,690 --> 00:36:23,240 897 00:36:23,240 --> 00:36:27,170 898 00:36:27,170 --> 00:36:28,519 899 00:36:28,519 --> 00:36:30,620 900 00:36:30,620 --> 00:36:32,089 901 00:36:32,089 --> 00:36:34,010 902 00:36:34,010 --> 00:36:36,079 903 00:36:36,079 --> 00:36:40,789 904 00:36:40,789 --> 00:36:43,579 905 00:36:43,579 --> 00:36:46,309 906 00:36:46,309 --> 00:36:50,269 907 00:36:50,269 --> 00:36:52,940 908 00:36:52,940 --> 00:36:54,980 909 00:36:54,980 --> 00:36:54,990 910 00:36:54,990 --> 00:36:57,320 911 00:36:57,320 --> 00:36:59,700 912 00:36:59,700 --> 00:37:02,550 913 00:37:02,550 --> 00:37:10,470 914 00:37:10,470 --> 00:37:19,710 915 00:37:19,710 --> 00:37:24,870 916 00:37:24,870 --> 00:37:27,060 917 00:37:27,060 --> 00:37:30,180 918 00:37:30,180 --> 00:37:33,780 919 00:37:33,780 --> 00:37:35,550 920 00:37:35,550 --> 00:37:40,589 921 00:37:40,589 --> 00:37:42,510 922 00:37:42,510 --> 
00:37:44,849 923 00:37:44,849 --> 00:37:46,800 924 00:37:46,800 --> 00:37:51,300 925 00:37:51,300 --> 00:37:53,910 926 00:37:53,910 --> 00:37:56,130 927 00:37:56,130 --> 00:38:01,410 928 00:38:01,410 --> 00:38:09,810 929 00:38:09,810 --> 00:38:13,079 930 00:38:13,079 --> 00:38:18,870 931 00:38:18,870 --> 00:38:21,329 932 00:38:21,329 --> 00:38:22,800 933 00:38:22,800 --> 00:38:27,329 934 00:38:27,329 --> 00:38:30,349 935 00:38:30,349 --> 00:38:33,839 936 00:38:33,839 --> 00:38:35,490 937 00:38:35,490 --> 00:38:43,770 938 00:38:43,770 --> 00:38:48,630 939 00:38:48,630 --> 00:38:54,849 940 00:38:54,849 --> 00:39:01,859 941 00:39:01,859 --> 00:39:07,210 942 00:39:07,210 --> 00:39:15,670 943 00:39:15,670 --> 00:39:18,969 944 00:39:18,969 --> 00:39:21,759 945 00:39:21,759 --> 00:39:23,670 946 00:39:23,670 --> 00:39:26,829 947 00:39:26,829 --> 00:39:27,880 948 00:39:27,880 --> 00:39:30,130 949 00:39:30,130 --> 00:39:31,539 950 00:39:31,539 --> 00:39:33,579 951 00:39:33,579 --> 00:39:35,319 952 00:39:35,319 --> 00:39:37,299 953 00:39:37,299 --> 00:39:38,710 954 00:39:38,710 --> 00:39:41,229 955 00:39:41,229 --> 00:39:43,029 956 00:39:43,029 --> 00:39:46,120 957 00:39:46,120 --> 00:39:48,819 958 00:39:48,819 --> 00:39:50,259 959 00:39:50,259 --> 00:39:53,709 960 00:39:53,709 --> 00:39:55,299 961 00:39:55,299 --> 00:39:57,160 962 00:39:57,160 --> 00:39:58,630 963 00:39:58,630 --> 00:40:00,640 964 00:40:00,640 --> 00:40:02,589 965 00:40:02,589 --> 00:40:04,959 966 00:40:04,959 --> 00:40:09,190 967 00:40:09,190 --> 00:40:12,999 968 00:40:12,999 --> 00:40:14,799 969 00:40:14,799 --> 00:40:17,259 970 00:40:17,259 --> 00:40:21,459 971 00:40:21,459 --> 00:40:24,329 972 00:40:24,329 --> 00:40:27,459 973 00:40:27,459 --> 00:40:29,680 974 00:40:29,680 --> 00:40:31,089 975 00:40:31,089 --> 00:40:32,499 976 00:40:32,499 --> 00:40:34,569 977 00:40:34,569 --> 00:40:36,609 978 00:40:36,609 --> 00:40:39,400 979 00:40:39,400 --> 00:40:40,779 980 00:40:40,779 --> 00:40:44,890 981 
00:40:44,890 --> 00:40:48,450 982 00:40:48,450 --> 00:40:52,709 983 00:40:52,709 --> 00:40:55,779 984 00:40:55,779 --> 00:40:57,910 985 00:40:57,910 --> 00:40:59,980 986 00:40:59,980 --> 00:41:03,790 987 00:41:03,790 --> 00:41:05,859 988 00:41:05,859 --> 00:41:07,690 989 00:41:07,690 --> 00:41:10,120 990 00:41:10,120 --> 00:41:12,310 991 00:41:12,310 --> 00:41:14,740 992 00:41:14,740 --> 00:41:17,740 993 00:41:17,740 --> 00:41:21,760 994 00:41:21,760 --> 00:41:23,620 995 00:41:23,620 --> 00:41:26,589 996 00:41:26,589 --> 00:41:30,849 997 00:41:30,849 --> 00:41:32,140 998 00:41:32,140 --> 00:41:33,910 999 00:41:33,910 --> 00:41:35,380 1000 00:41:35,380 --> 00:41:37,540 1001 00:41:37,540 --> 00:41:41,470 1002 00:41:41,470 --> 00:41:43,750 1003 00:41:43,750 --> 00:41:47,140 1004 00:41:47,140 --> 00:41:49,030 1005 00:41:49,030 --> 00:41:50,560 1006 00:41:50,560 --> 00:41:52,270 1007 00:41:52,270 --> 00:41:54,490 1008 00:41:54,490 --> 00:41:57,520 1009 00:41:57,520 --> 00:41:59,050 1010 00:41:59,050 --> 00:42:01,210 1011 00:42:01,210 --> 00:42:04,180 1012 00:42:04,180 --> 00:42:06,820 1013 00:42:06,820 --> 00:42:08,140 1014 00:42:08,140 --> 00:42:10,210 1015 00:42:10,210 --> 00:42:11,710 1016 00:42:11,710 --> 00:42:15,940 1017 00:42:15,940 --> 00:42:17,680 1018 00:42:17,680 --> 00:42:19,750 1019 00:42:19,750 --> 00:42:21,849 1020 00:42:21,849 --> 00:42:24,880 1021 00:42:24,880 --> 00:42:27,070 1022 00:42:27,070 --> 00:42:29,859 1023 00:42:29,859 --> 00:42:32,170 1024 00:42:32,170 --> 00:42:33,640 1025 00:42:33,640 --> 00:42:38,579 1026 00:42:38,579 --> 00:42:41,230 1027 00:42:41,230 --> 00:42:42,970 1028 00:42:42,970 --> 00:42:47,260 1029 00:42:47,260 --> 00:42:48,700 1030 00:42:48,700 --> 00:42:50,470 1031 00:42:50,470 --> 00:42:52,750 1032 00:42:52,750 --> 00:42:54,940 1033 00:42:54,940 --> 00:42:57,190 1034 00:42:57,190 --> 00:42:59,710 1035 00:42:59,710 --> 00:43:01,329 1036 00:43:01,329 --> 00:43:02,710 1037 00:43:02,710 --> 00:43:03,940 1038 00:43:03,940 --> 
00:43:06,700 1039 00:43:06,700 --> 00:43:09,010 1040 00:43:09,010 --> 00:43:10,210 1041 00:43:10,210 --> 00:43:11,530 1042 00:43:11,530 --> 00:43:13,060 1043 00:43:13,060 --> 00:43:14,590 1044 00:43:14,590 --> 00:43:16,810 1045 00:43:16,810 --> 00:43:18,790 1046 00:43:18,790 --> 00:43:21,330 1047 00:43:21,330 --> 00:43:25,240 1048 00:43:25,240 --> 00:43:27,010 1049 00:43:27,010 --> 00:43:30,070 1050 00:43:30,070 --> 00:43:32,200 1051 00:43:32,200 --> 00:43:34,180 1052 00:43:34,180 --> 00:43:36,130 1053 00:43:36,130 --> 00:43:38,680 1054 00:43:38,680 --> 00:43:40,330 1055 00:43:40,330 --> 00:43:42,400 1056 00:43:42,400 --> 00:43:43,930 1057 00:43:43,930 --> 00:43:47,290 1058 00:43:47,290 --> 00:43:50,920 1059 00:43:50,920 --> 00:43:53,050 1060 00:43:53,050 --> 00:43:55,590 1061 00:43:55,590 --> 00:44:02,680 1062 00:44:02,680 --> 00:44:04,600 1063 00:44:04,600 --> 00:44:07,150 1064 00:44:07,150 --> 00:44:09,280 1065 00:44:09,280 --> 00:44:11,320 1066 00:44:11,320 --> 00:44:12,490 1067 00:44:12,490 --> 00:44:16,350 1068 00:44:16,350 --> 00:44:18,430 1069 00:44:18,430 --> 00:44:22,540 1070 00:44:22,540 --> 00:44:26,200 1071 00:44:26,200 --> 00:44:29,860 1072 00:44:29,860 --> 00:44:32,230 1073 00:44:32,230 --> 00:44:34,810 1074 00:44:34,810 --> 00:44:37,240 1075 00:44:37,240 --> 00:44:39,730 1076 00:44:39,730 --> 00:44:41,740 1077 00:44:41,740 --> 00:44:45,910 1078 00:44:45,910 --> 00:44:49,420 1079 00:44:49,420 --> 00:44:54,610 1080 00:44:54,610 --> 00:44:57,040 1081 00:44:57,040 --> 00:44:58,660 1082 00:44:58,660 --> 00:45:00,280 1083 00:45:00,280 --> 00:45:02,980 1084 00:45:02,980 --> 00:45:06,400 1085 00:45:06,400 --> 00:45:09,160 1086 00:45:09,160 --> 00:45:14,950 1087 00:45:14,950 --> 00:45:18,120 1088 00:45:18,120 --> 00:45:20,530 1089 00:45:20,530 --> 00:45:22,180 1090 00:45:22,180 --> 00:45:24,460 1091 00:45:24,460 --> 00:45:24,470 1092 00:45:24,470 --> 00:45:24,970 1093 00:45:24,970 --> 00:45:30,250 1094 00:45:30,250 --> 00:45:33,579 1095 00:45:33,579 --> 
00:45:34,810 1096 00:45:34,810 --> 00:45:37,240 1097 00:45:37,240 --> 00:45:39,550 1098 00:45:39,550 --> 00:45:41,200 1099 00:45:41,200 --> 00:45:42,849 1100 00:45:42,849 --> 00:45:44,530 1101 00:45:44,530 --> 00:45:46,150 1102 00:45:46,150 --> 00:45:48,670 1103 00:45:48,670 --> 00:45:51,220 1104 00:45:51,220 --> 00:45:53,920 1105 00:45:53,920 --> 00:45:55,680 1106 00:45:55,680 --> 00:45:57,910 1107 00:45:57,910 --> 00:45:59,500 1108 00:45:59,500 --> 00:46:01,839 1109 00:46:01,839 --> 00:46:04,540 1110 00:46:04,540 --> 00:46:06,760 1111 00:46:06,760 --> 00:46:08,410 1112 00:46:08,410 --> 00:46:10,540 1113 00:46:10,540 --> 00:46:12,010 1114 00:46:12,010 --> 00:46:13,510 1115 00:46:13,510 --> 00:46:14,829 1116 00:46:14,829 --> 00:46:16,390 1117 00:46:16,390 --> 00:46:18,250 1118 00:46:18,250 --> 00:46:19,780 1119 00:46:19,780 --> 00:46:21,099 1120 00:46:21,099 --> 00:46:23,079 1121 00:46:23,079 --> 00:46:24,579 1122 00:46:24,579 --> 00:46:27,099 1123 00:46:27,099 --> 00:46:28,270 1124 00:46:28,270 --> 00:46:30,300 1125 00:46:30,300 --> 00:46:32,230 1126 00:46:32,230 --> 00:46:35,890 1127 00:46:35,890 --> 00:46:38,440 1128 00:46:38,440 --> 00:46:41,530 1129 00:46:41,530 --> 00:46:42,670 1130 00:46:42,670 --> 00:46:44,140 1131 00:46:44,140 --> 00:46:46,210 1132 00:46:46,210 --> 00:46:48,790 1133 00:46:48,790 --> 00:46:50,920 1134 00:46:50,920 --> 00:46:55,720 1135 00:46:55,720 --> 00:46:58,720 1136 00:46:58,720 --> 00:47:01,270 1137 00:47:01,270 --> 00:47:03,730 1138 00:47:03,730 --> 00:47:07,030 1139 00:47:07,030 --> 00:47:09,579 1140 00:47:09,579 --> 00:47:15,040 1141 00:47:15,040 --> 00:47:19,230 1142 00:47:19,230 --> 00:47:27,460 1143 00:47:27,460 --> 00:47:32,810 1144 00:47:32,810 --> 00:47:35,450 1145 00:47:35,450 --> 00:47:36,500 1146 00:47:36,500 --> 00:47:38,650 1147 00:47:38,650 --> 00:47:40,580 1148 00:47:40,580 --> 00:47:43,460 1149 00:47:43,460 --> 00:47:44,720 1150 00:47:44,720 --> 00:47:50,510 1151 00:47:50,510 --> 00:47:53,480 1152 00:47:53,480 --> 
00:48:05,210 1153 00:48:05,210 --> 00:48:07,130 1154 00:48:07,130 --> 00:48:08,780 1155 00:48:08,780 --> 00:48:12,860 1156 00:48:12,860 --> 00:48:14,780 1157 00:48:14,780 --> 00:48:16,610 1158 00:48:16,610 --> 00:48:18,230 1159 00:48:18,230 --> 00:48:20,060 1160 00:48:20,060 --> 00:48:21,620 1161 00:48:21,620 --> 00:48:24,020 1162 00:48:24,020 --> 00:48:25,730 1163 00:48:25,730 --> 00:48:28,160 1164 00:48:28,160 --> 00:48:30,440 1165 00:48:30,440 --> 00:48:32,360 1166 00:48:32,360 --> 00:48:33,800 1167 00:48:33,800 --> 00:48:36,920 1168 00:48:36,920 --> 00:48:39,590 1169 00:48:39,590 --> 00:48:44,050 1170 00:48:44,050 --> 00:48:56,060 1171 00:48:56,060 --> 00:48:57,710 1172 00:48:57,710 --> 00:49:02,210 1173 00:49:02,210 --> 00:49:03,950 1174 00:49:03,950 --> 00:49:09,170 1175 00:49:09,170 --> 00:49:11,630 1176 00:49:11,630 --> 00:49:13,640 1177 00:49:13,640 --> 00:49:14,990 1178 00:49:14,990 --> 00:49:22,620 1179 00:49:22,620 --> 00:49:25,450 1180 00:49:25,450 --> 00:49:27,760 1181 00:49:27,760 --> 00:49:32,650 1182 00:49:32,650 --> 00:49:36,550 1183 00:49:36,550 --> 00:49:38,320 1184 00:49:38,320 --> 00:49:41,350 1185 00:49:41,350 --> 00:49:42,970 1186 00:49:42,970 --> 00:49:45,580 1187 00:49:45,580 --> 00:49:48,130 1188 00:49:48,130 --> 00:49:50,020 1189 00:49:50,020 --> 00:49:52,600 1190 00:49:52,600 --> 00:49:54,430 1191 00:49:54,430 --> 00:49:56,620 1192 00:49:56,620 --> 00:49:59,170 1193 00:49:59,170 --> 00:50:01,360 1194 00:50:01,360 --> 00:50:04,120 1195 00:50:04,120 --> 00:50:06,910 1196 00:50:06,910 --> 00:50:08,200 1197 00:50:08,200 --> 00:50:09,400 1198 00:50:09,400 --> 00:50:11,410 1199 00:50:11,410 --> 00:50:14,500 1200 00:50:14,500 --> 00:50:16,900 1201 00:50:16,900 --> 00:50:18,790 1202 00:50:18,790 --> 00:50:20,980 1203 00:50:20,980 --> 00:50:24,100 1204 00:50:24,100 --> 00:50:30,490 1205 00:50:30,490 --> 00:50:33,600 1206 00:50:33,600 --> 00:50:39,600 1207 00:50:39,600 --> 00:50:42,460 1208 00:50:42,460 --> 00:50:44,200 1209 00:50:44,200 --> 
00:50:46,270 1210 00:50:46,270 --> 00:50:47,740 1211 00:50:47,740 --> 00:50:50,010 1212 00:50:50,010 --> 00:50:53,070 1213 00:50:53,070 --> 00:50:55,870 1214 00:50:55,870 --> 00:50:58,450 1215 00:50:58,450 --> 00:51:01,330 1216 00:51:01,330 --> 00:51:02,650 1217 00:51:02,650 --> 00:51:03,880 1218 00:51:03,880 --> 00:51:06,940 1219 00:51:06,940 --> 00:51:13,280 1220 00:51:13,280 --> 00:51:18,110 1221 00:51:18,110 --> 00:51:26,360 1222 00:51:26,360 --> 00:51:28,100 1223 00:51:28,100 --> 00:51:31,400 1224 00:51:31,400 --> 00:51:32,750 1225 00:51:32,750 --> 00:51:35,570 1226 00:51:35,570 --> 00:51:38,000 1227 00:51:38,000 --> 00:51:39,140 1228 00:51:39,140 --> 00:51:42,470 1229 00:51:42,470 --> 00:51:44,270 1230 00:51:44,270 --> 00:51:46,490 1231 00:51:46,490 --> 00:51:48,110 1232 00:51:48,110 --> 00:51:50,330 1233 00:51:50,330 --> 00:51:52,160 1234 00:51:52,160 --> 00:51:53,480 1235 00:51:53,480 --> 00:51:56,420 1236 00:51:56,420 --> 00:51:57,800 1237 00:51:57,800 --> 00:52:00,290 1238 00:52:00,290 --> 00:52:01,940 1239 00:52:01,940 --> 00:52:03,560 1240 00:52:03,560 --> 00:52:05,330 1241 00:52:05,330 --> 00:52:08,000 1242 00:52:08,000 --> 00:52:09,620 1243 00:52:09,620 --> 00:52:11,420 1244 00:52:11,420 --> 00:52:15,080 1245 00:52:15,080 --> 00:52:17,930 1246 00:52:17,930 --> 00:52:19,850 1247 00:52:19,850 --> 00:52:24,860 1248 00:52:24,860 --> 00:52:27,560 1249 00:52:27,560 --> 00:52:29,240 1250 00:52:29,240 --> 00:52:31,310 1251 00:52:31,310 --> 00:52:33,410 1252 00:52:33,410 --> 00:52:35,180 1253 00:52:35,180 --> 00:52:37,280 1254 00:52:37,280 --> 00:52:39,590 1255 00:52:39,590 --> 00:52:40,940 1256 00:52:40,940 --> 00:52:42,590 1257 00:52:42,590 --> 00:52:44,060 1258 00:52:44,060 --> 00:52:49,800 1259 00:52:49,800 --> 00:52:49,810 1260 00:52:49,810 --> 00:52:52,130 1261 00:52:52,130 --> 00:52:55,620 1262 00:52:55,620 --> 00:52:57,359 1263 00:52:57,359 --> 00:53:01,260 1264 00:53:01,260 --> 00:53:02,940 1265 00:53:02,940 --> 00:53:04,410 1266 00:53:04,410 --> 
00:53:08,400 1267 00:53:08,400 --> 00:53:10,859 1268 00:53:10,859 --> 00:53:13,410 1269 00:53:13,410 --> 00:53:14,460 1270 00:53:14,460 --> 00:53:16,710 1271 00:53:16,710 --> 00:53:18,029 1272 00:53:18,029 --> 00:53:19,190 1273 00:53:19,190 --> 00:53:22,410 1274 00:53:22,410 --> 00:53:24,029 1275 00:53:24,029 --> 00:53:25,170 1276 00:53:25,170 --> 00:53:26,730 1277 00:53:26,730 --> 00:53:28,319 1278 00:53:28,319 --> 00:53:30,359 1279 00:53:30,359 --> 00:53:31,680 1280 00:53:31,680 --> 00:53:33,120 1281 00:53:33,120 --> 00:53:34,410 1282 00:53:34,410 --> 00:53:37,260 1283 00:53:37,260 --> 00:53:38,579 1284 00:53:38,579 --> 00:53:41,490 1285 00:53:41,490 --> 00:53:43,680 1286 00:53:43,680 --> 00:53:45,809 1287 00:53:45,809 --> 00:53:47,880 1288 00:53:47,880 --> 00:53:51,270 1289 00:53:51,270 --> 00:53:55,470 1290 00:53:55,470 --> 00:53:57,569 1291 00:53:57,569 --> 00:53:59,970 1292 00:53:59,970 --> 00:54:01,920 1293 00:54:01,920 --> 00:54:04,079 1294 00:54:04,079 --> 00:54:05,849 1295 00:54:05,849 --> 00:54:08,940 1296 00:54:08,940 --> 00:54:10,410 1297 00:54:10,410 --> 00:54:11,579 1298 00:54:11,579 --> 00:54:13,980 1299 00:54:13,980 --> 00:54:16,890 1300 00:54:16,890 --> 00:54:19,380 1301 00:54:19,380 --> 00:54:20,609 1302 00:54:20,609 --> 00:54:22,200 1303 00:54:22,200 --> 00:54:24,569 1304 00:54:24,569 --> 00:54:27,240 1305 00:54:27,240 --> 00:54:29,250 1306 00:54:29,250 --> 00:54:31,799 1307 00:54:31,799 --> 00:54:33,390 1308 00:54:33,390 --> 00:54:35,579 1309 00:54:35,579 --> 00:54:39,450 1310 00:54:39,450 --> 00:54:41,460 1311 00:54:41,460 --> 00:54:43,319 1312 00:54:43,319 --> 00:54:45,269 1313 00:54:45,269 --> 00:54:48,240 1314 00:54:48,240 --> 00:54:50,700 1315 00:54:50,700 --> 00:54:52,470 1316 00:54:52,470 --> 00:54:53,970 1317 00:54:53,970 --> 00:54:55,410 1318 00:54:55,410 --> 00:54:56,880 1319 00:54:56,880 --> 00:54:59,880 1320 00:54:59,880 --> 00:55:01,230 1321 00:55:01,230 --> 00:55:03,000 1322 00:55:03,000 --> 00:55:03,930 1323 00:55:03,930 --> 
00:55:08,700 1324 00:55:08,700 --> 00:55:11,130 1325 00:55:11,130 --> 00:55:14,610 1326 00:55:14,610 --> 00:55:16,620 1327 00:55:16,620 --> 00:55:17,910 1328 00:55:17,910 --> 00:55:21,090 1329 00:55:21,090 --> 00:55:22,440 1330 00:55:22,440 --> 00:55:23,550 1331 00:55:23,550 --> 00:55:27,120 1332 00:55:27,120 --> 00:55:31,560 1333 00:55:31,560 --> 00:55:34,040 1334 00:55:34,040 --> 00:55:37,560 1335 00:55:37,560 --> 00:55:39,600 1336 00:55:39,600 --> 00:55:41,900 1337 00:55:41,900 --> 00:55:44,130 1338 00:55:44,130 --> 00:55:45,780 1339 00:55:45,780 --> 00:55:47,520 1340 00:55:47,520 --> 00:55:50,370 1341 00:55:50,370 --> 00:55:52,320 1342 00:55:52,320 --> 00:55:55,080 1343 00:55:55,080 --> 00:55:56,520 1344 00:55:56,520 --> 00:55:58,950 1345 00:55:58,950 --> 00:56:00,750 1346 00:56:00,750 --> 00:56:02,070 1347 00:56:02,070 --> 00:56:03,570 1348 00:56:03,570 --> 00:56:06,480 1349 00:56:06,480 --> 00:56:09,930 1350 00:56:09,930 --> 00:56:11,520 1351 00:56:11,520 --> 00:56:13,860 1352 00:56:13,860 --> 00:56:18,000 1353 00:56:18,000 --> 00:56:19,590 1354 00:56:19,590 --> 00:56:21,600 1355 00:56:21,600 --> 00:56:24,060 1356 00:56:24,060 --> 00:56:26,100 1357 00:56:26,100 --> 00:56:27,570 1358 00:56:27,570 --> 00:56:29,040 1359 00:56:29,040 --> 00:56:30,900 1360 00:56:30,900 --> 00:56:32,880 1361 00:56:32,880 --> 00:56:34,890 1362 00:56:34,890 --> 00:56:36,840 1363 00:56:36,840 --> 00:56:38,820 1364 00:56:38,820 --> 00:56:41,550 1365 00:56:41,550 --> 00:56:43,620 1366 00:56:43,620 --> 00:56:48,000 1367 00:56:48,000 --> 00:56:50,940 1368 00:56:50,940 --> 00:56:53,850 1369 00:56:53,850 --> 00:56:55,050 1370 00:56:55,050 --> 00:56:57,390 1371 00:56:57,390 --> 00:56:59,010 1372 00:56:59,010 --> 00:57:01,740 1373 00:57:01,740 --> 00:57:03,420 1374 00:57:03,420 --> 00:57:04,950 1375 00:57:04,950 --> 00:57:07,650 1376 00:57:07,650 --> 00:57:09,270 1377 00:57:09,270 --> 00:57:11,790 1378 00:57:11,790 --> 00:57:14,250 1379 00:57:14,250 --> 00:57:15,099 1380 00:57:15,099 --> 