All right, so we are finally down to the last lecture in this course, and it's not going to be on any quiz or anything like that, so this is just for fun. You'll learn some facts about the history of the early days of computing, and some very exciting stuff that happened at that time which has actually made your life much, much simpler, because you'll get a feel for what it might have been like to program in those days.

So this phrase is used a lot: stored-program computer, because the program itself is stored in the machine, and that was not always the case. We think of a program as a sequence of instructions. If I gave you a calculator, you would know how to carry out some sequence of instructions: you have a piece of paper on which you write down the temporary results you get, and you keep following some recipe. But the recipe is written outside the calculator, right? Or it's in your head, and you remember it and repeat it. So I think there was a long-standing understanding, from the 18th and 19th centuries, that a program is really a sequence of instructions, even though we didn't have any interesting computers to talk about until the 50s. The technical challenge was how to sequence these instructions. You have a calculator, and we know how to calculate, but now somebody says: build me an automaton which will actually go through these instructions automatically, one by one. That turned out to be a pretty big challenge, and a lot of time and energy was spent on it.

So in the early days, for the first few years, you only had read-only memory. You would have some sort of a diode array — that was actually later — but the thing that stuck for quite a while was paper tape. You had paper tape readers, which were well developed in those days, and if you could encode your program into that, then the machine would read the paper tape, those are the instructions, and it would carry them out. During that period, when people were exploring all kinds of memory technologies, the idea occurred to somebody: why not have read-and-write memory for programs also? This idea is spelled out in the EDVAC paper, which is really a conceptual thing — this machine was never built — but it was enormously influential because the paper is extremely well written. It has this famous name von Neumann on it, and actually Eckert's and Mauchly's names don't appear on the paper, which is a little bit of a travesty, but whatever. I think the central idea here is: you can store program and data in the same storage.

What's the big deal? The big deal is that the moment you think like that, you suddenly realize: oh, a program can be manipulated as data. The moment that realization came to people, computing took off, because then they said: now we can really write something interesting on these machines. It was not possible to do something very clever until this realization was gained, that the program itself can be manipulated as data inside the machine. And the first successful machine that was built using these ideas was EDSAC at Cambridge University — the connection is very, very direct, which I'll show you in a second. Maurice Wilkes built this machine, and I think it lasted about six years. Lots of programs were written on it.
Lots of ideas about assembly-language programming and the design of computers came from the EDSAC days.

So what happened is that Eckert and Mauchly and von Neumann had these ideas and had written this paper, and it was all classified — this was happening during the Second World War, and it was considered the highest level of secret, because the justification for producing these things was that they could be used for calculating munitions ranges and building atom bombs, etc. — though the first atom bomb was built without any computers. So they held a course. The course was held at the University of Pennsylvania — an enormously influential course. Anybody who took it — well, not anybody; there were maybe twenty participants or fewer — everybody went home and built their own computer. It was a big privilege to be invited to this course, and these were already very important people who attended. So at Princeton a machine was built; at Cambridge, EDSAC; then MANIAC, JOHNNIAC, ILLIAC — many, many different machines were built during that time. And MIT wasn't far behind: MIT built Whirlwind, and its justification was that it was a real-time computer — it was going to protect against incoming missiles; it was part of the SAGE defense system. It was an enormous machine, and a lot of work went into it by Forrester and Everett. In some sense it was the first real real-time computer, one which could respond to events as they happened.

Now, there is another figure in computing who is so important: Alan Turing. He was actually not just a theoretician; he was a great thinker. He wanted to build machines, and he wanted to design assembly languages and all kinds of encoding schemes — though they were only good for him; his encodings were so convoluted, because he was optimizing every last bit. But the trouble here is that during the war years it's not clear how these people influenced each other, because it was all secret, and even after the war people haven't been able to find out very much about it. One fact is that Turing did spend a year with von Neumann at Princeton, so they must have influenced each other — except that they didn't like each other. Von Neumann was a very established figure at that time, and Turing was this young logician — if you have seen the movie, you know what I'm talking about.

So here is all this research going on, trying to build these stored-program computers, and there were maybe a dozen of them in the world at that time that had been constructed. So what was happening commercially? Commercially, IBM dominated the scene with calculators — the kind of calculators most of you will have a hard time imagining. Here is a description of one such calculator: it has a 150-word store — not just one or two accumulators, 150 words — and it can store instructions, constants, and tables of data read from paper tapes. Sixty-six paper-tape reading stations! Can you visualize this? You have a calculator which is so big that you need lots of input going into it, so there are 66 reading stations, and data could be output from one stage: you could punch out a paper tape which could be fed into the next stage, and so on. Now read this sentence: tapes could be glued together to form a loop. Loops were a very big deal; people just did not understand how to do loops in computing.
And you will come to appreciate this very quickly — I'm going to show you one or two things, and you'll say, "I know how to do this," and then you'll get stuck.

So this is what was happening, and I think ultimately the first commercial computer that was built was UNIVAC, which was sold in 1951, and they got huge mileage out of it because election results were tabulated on it. They were on television — which was also in its infancy at that time — showing results of the 1952 election on this machine.

Then there was the IBM 701, and I write it in a very dramatic way because it changed everything. It changed everything. Notice the first number: 30 machines were sold in the first year. Before that we are talking about one computer in this city, one computer in that city; you could count all the computers in the world. Suddenly, in one year, you're selling 30 computers. Not only that, they designed a cheaper version, the IBM 650 — an extremely successful machine — and they sold 120 of them in the first year and had back orders for 750 more. This is a very big deal: you are going from ones and twos to saying, no, no, this technology is here for real use. And these machines were not being sold just to engineers; they were being sold to banks and insurance companies, which had a lot of data processing to do. Now, in some sense this took the fun out of it, because nobody was going to fund you to build your own machine anymore — they'd buy one from IBM. There was no reason for you to build your own machine, because the chance that you could outdo IBM at their own game was close to zero at that time. There were other commercial competitors, but in the academic world machine-building largely stopped, because it was a very, very expensive affair to build a computer.

People have often asked the question: why was IBM so late getting into the game? It already knew all this stuff; why was it not producing computers in 1950? How come Eckert and Mauchly's company was the first to do it, and IBM entered the scene three years later? And the reason is: they were making too much money. When a company is making too much money, your vision narrows: "I can make more money by selling these fancy calculators; why are you bothering me with this? I'm already making so much money." So I think their success with calculators is what delayed their entry into the computer business.

Now this is a very important slide, and I want you to pay attention to it. This is an instruction description taken from the IBM 650 manual — as I said, the most popular machine at that time; the number ultimately reached over a thousand 650s sold. So you're going to program this, and you're not a hardware engineer — you're just someone who wants to program this machine — and you open up the manual, and this is the kind of description you find. Can you read it? "Load the contents of location 1234 into the distributor; put it also in the upper accumulator; set the lower accumulator to zero; then go to location 1009 for the next instruction." The main point, the takeaway from this, is that your view of programming was inseparable from the hardware.
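Just to make that concrete, here is a minimal sketch in Python of what that one manual-described instruction does to the machine state. The dictionary layout and function name are my own illustration, not IBM's encoding; only the register names come from the description above:

    # A toy model of the IBM 650 instruction quoted above.
    # The state layout is illustrative, not the actual 650 encoding.
    state = {
        "memory": {1234: 42},
        "distributor": 0,
        "upper_acc": 0,
        "lower_acc": 0,
        "next_addr": 0,
    }

    def load_1234_then_goto_1009(s):
        """Load location 1234 into the distributor and upper accumulator,
        clear the lower accumulator, and name the next instruction's address."""
        value = s["memory"][1234]
        s["distributor"] = value   # staging register for memory traffic
        s["upper_acc"] = value     # also placed in the upper accumulator
        s["lower_acc"] = 0         # lower accumulator set to zero
        s["next_addr"] = 1009      # every 650 instruction names its successor

    load_1234_then_goto_1009(state)
    print(state["upper_acc"], state["next_addr"])  # 42 1009

Notice that the instruction explicitly names its successor's address — that detail will matter in a moment, when we talk about the drum.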
They want you to understand that there is something called an upper accumulator and a lower accumulator, and where to place the next instruction. The programmer's view of the machine was inseparable from the actual hardware. So if you had two different machines, you would program them quite differently, because they were different internally. At that time, if you had said, "Actually, I would like a high-level language — a language in which I don't have to know all these details," you would have been laughed out of the room. People would say: are you crazy or something? You mean you're going to program your machine without knowing how many registers it has? That's the background in which Fortran was developed, and why it became such a big deal: it was the first language in which you didn't have to understand the underlying implementation of your computer.

Now, there is always room for hacking. In this machine the main storage was on a drum, so if you read an instruction here, by the time you are ready to read the next instruction, the drum has moved. If you put the next instruction right next to the current one, execution is very slow. But you can calculate the speed and say: ah, if I put the first instruction here, and the next instruction where the drum will be when I'm ready to read it, then it will be much faster. That was the level of hacking people were doing — working out exactly how the program should be spread out on the drum, so that when the computer is ready to read an instruction, it's right there. A different level of programming in those days.
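As a rough illustration of that drum-placement trick — with made-up timing numbers, not the 650's real ones — the calculation is just modular arithmetic around the drum:

    # Illustrative drum-scheduling arithmetic (numbers are invented).
    WORDS_PER_TRACK = 50      # addresses 0..49 pass under the read head in turn
    WORD_TIME_US = 96         # microseconds for one word to pass the head

    def best_next_address(current_addr: int, execute_time_us: float) -> int:
        """Place the next instruction at the drum address that will be under
        the read head exactly when the current instruction finishes executing."""
        words_elapsed = execute_time_us / WORD_TIME_US
        return (current_addr + 1 + round(words_elapsed)) % WORDS_PER_TRACK

    # If an add takes ~300 us, don't put the next instruction at address 11:
    print(best_next_address(10, 300))   # -> 14: skip ahead, rather than wait
                                        #    a full extra drum revolution

This is exactly why each 650 instruction carried the address of its successor: the assembler (or a patient human) could scatter the program around the drum for speed.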
Okay, so now if we look at the scene in the 50s, what's happening with computers? A couple of things stand out. One: hardware was extremely expensive — extremely expensive — and this continued until the early 70s. If you came for a tour of MIT or any other prestigious school, you would have been taken to the building where the computer sat behind glass cases. It was a thing to display, because it was so expensive; you could be proud of the fact that "my school has this computer." All of that has completely disappeared now, because we put computers in remote, almost inaccessible places — these gigantic data centers. But hardware was expensive.

Two: stores were small. Get used to it — 1,000 words. You're going to write all your wonderful programs, and you have 1,000 words. We're not talking about registers; we're talking about the total amount of what you would now call DRAM — and there was no DRAM at that point. You had a very, very small amount of memory. One corollary of that is: if the storage is so small, there is no question of any resident software. You can't even ask what kind of OS it had — an OS would have consumed all the memory your program needed. The only commonly used utilities were for floating-point calculation: there would be a floating-point library, because in those days floating point was not implemented in hardware; it was emulated using integer instructions. So you might have some shared routines, again to save storage, so that everybody could use the same things.

Memory access time was 10 to 50 times slower than the processor cycle. At that time nobody knew that this kind of disparity between computing speed and memory access speed would continue — if anything, it's worse today: you can compute extremely fast, but accessing storage remains very expensive. It was true even then. Instruction execution time was totally dominated by memory reference time: it didn't matter how long it took to decode or to do your adds; just fetching the instruction was going to take a long time, and if the instruction happened to refer to memory, then you were going to refer to memory again. Memory time just dominated the execution of an instruction.

The other thing that may surprise you — or maybe not — is that the main technical challenge in building these machines was the design of the controller. You're doing pipelined machines for your project and optimizing them; believe me, the controllers in these machines are trivial compared to what you do today — absolutely trivial, I could give them as homework one. But the point is that people did not know how to think about controllers. They knew a little bit about finite state machines, and if you start thinking at that level, you will go crazy trying to design the controller for instruction sequencing.

So what I want to do now is give you a flavor of what these instruction sets looked like. Remember, the dominant computing model in people's heads was a calculator, and the thing that stands out about a calculator is that there is something called an accumulator. Even today when you calculate, there is a number being displayed — the accumulator — and your center of focus is that: add something to it, subtract something from it, multiply something into it. At the end of every instruction you get some new value in the accumulator. This is how computing began, so not surprisingly, the earliest instruction sets are all accumulator-based. Another reason was that registers were very expensive. I'll visualize it for you: in Whirlwind you had an accumulator which was 18 bits wide, and each bit of the accumulator was big enough that if something went wrong, you could stand inside it and fix it. Can you visualize that? Each bit is like a box a human being can get into, to check whether the wiring is right and the vacuum tubes are working. We have come a long way.

Okay, so let's look at the instruction set as spelled out in their paper. What do you see in it? Load x — they're always talking about bringing something from the memory; every instruction deals with memory. You give an instruction to bring something from the memory into the accumulator, and of course you need some instruction to store the accumulator back into the memory. So you have an address, and you have load and store instructions to move values between memory and the accumulator. And what is the add instruction? It takes whatever is in the accumulator, brings something from the memory, adds it, and leaves the result in the accumulator.
This is exactly how people thought of a calculator at that point — but notice that even this add instruction refers to memory, because you're always bringing something from memory, adding it to the accumulator, and leaving it in the accumulator. You can also have multiply and divide; for those you had to provide one extra register, a quotient register. And one of the big contributions of this paper was that it offers a very convincing argument for why all the arithmetic must be done in binary: there is no reason to emulate decimal arithmetic; we can do everything in binary because it's more efficient. If you follow that, then it follows that by shifting you can also do certain kinds of multiplies and divides, and they made full use of it: the earliest computers had shift instructions in them essentially as a way of speeding up multiply and divide. This legacy goes back to Babbage. In the 19th century they already knew that for doing control you need some sort of test, and of course it made sense for the test to be done on the accumulator: if the accumulator was greater than or equal to zero, you put some new address in the program counter; otherwise you went to the next instruction in sequence. Does this instruction set make sense? You can understand it right away. And it will become clear why we need instructions to extract the address part of an instruction: if I gave you that "load x" instruction, I may want to look at just the x in it, so there were special instructions for that. Taking everything into account, there were fewer than two dozen instructions on EDVAC and EDSAC to start with.

Okay, so you're ready — now let's write a program, and the program is going to be very, very simple. All I want to do is add two vectors: I have a vector A and a vector B, and I want to put the result of the addition into vector C. The length of the vectors is n, which is stored in another location, and I will need the constant 1, so I have stored that somewhere too. See if this program makes sense to you. You load the counter — which we initialize to minus n — and you jump if it is greater than or equal to zero; if it is, you go to "done." Then you add one to it and store it back; that tells you how far you have progressed in the computation. Then I do load A, add B to the accumulator, and whatever I get I store back in C. Fantastic. And I say: jump back to "loop." Not so difficult to write — except that this program doesn't work.

What is the problem? You see, every time you traverse this loop you need a different address: the first time we were doing A, B, and C; the next time you want A+1, B+1, C+1; the next time A+2, B+2, C+2. You have to somehow step through the data. Now, I can tell you that if enough hints are not given, this becomes a PhD-level problem — and the hint is: write self-modifying code. What can I do? I can look at F1, F2, and F3 — the load A, add B, and store C instructions — and because these are instructions in a stored program, I can treat each instruction as data. And what can I do to these instructions? I can do surgery on them: I can change A to A+1, B to B+1, C to C+1. That's all arithmetic: fetch that instruction, add 1 to it, put it back; fetch that instruction, add 1 to it, put it back.
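Here is a minimal sketch of that idea in Python — a toy accumulator machine whose program lives in the same memory as its data, so the loop can patch its own address fields. The opcode names, the unconditional JMP, and the memory layout are my own reconstruction for illustration, not EDSAC's actual encoding:

    # Toy accumulator machine: program and data share ONE memory, so
    # instructions can be operated on as data -- that is the whole trick.
    def run(mem, pc=0):
        acc = 0
        while True:
            op, addr = mem[pc]                 # fetch: each cell is [opcode, address]
            pc += 1
            if   op == "LOAD":     acc = mem[addr]
            elif op == "STORE":    mem[addr] = acc
            elif op == "ADD":      acc += mem[addr]
            elif op == "LOADADR":  acc = mem[addr][1]   # read an instruction's address field
            elif op == "STOREADR": mem[addr][1] = acc   # overwrite it: "surgery"
            elif op == "JGE":      pc = addr if acc >= 0 else pc
            elif op == "JMP":      pc = addr
            elif op == "HALT":     return

    A, B, C, CNT, ONE = 18, 21, 24, 27, 28     # data layout after the 18-word program
    mem = [
        ["LOAD", CNT], ["JGE", 17],            #  0: loop: exit when counter reaches 0
        ["ADD", ONE], ["STORE", CNT],          #  2: counter += 1
        ["LOAD", A], ["ADD", B], ["STORE", C], #  4: F1,F2,F3 -- C[i] = A[i] + B[i]
        ["LOADADR", 4], ["ADD", ONE], ["STOREADR", 4],  #  7: F1's address += 1
        ["LOADADR", 5], ["ADD", ONE], ["STOREADR", 5],  # 10: F2's address += 1
        ["LOADADR", 6], ["ADD", ONE], ["STOREADR", 6],  # 13: F3's address += 1
        ["JMP", 0],                            # 16: next iteration
        ["HALT", 0],                           # 17: done
        10, 20, 30,                            # 18: vector A
        1, 2, 3,                               # 21: vector B
        0, 0, 0,                               # 24: vector C
        -3, 1,                                 # 27: counter = -n, the constant 1
    ]
    run(mem)
    print(mem[24:27])                          # [11, 22, 33]

Count the work per iteration in this sketch and you get exactly the numbers on the slide: 17 instruction fetches, 10 operand fetches, 5 stores.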
to it put 24:28 it back fetch that instruction add 1 to it put it back okay so this thing that 24:34 comes 24:40 so load the address part of f1 add one to it and store it into the address part 24:47 of it again so those each instruction is being modified now how many of you are 24:55 following me nobody yeah make sense that 25:06 you have some instructions and the only way I know how to change that addresses 25:11 to do arithmetic on it which is allowed because it's a stored-program computer I 25:18 can operate on instruction like I I can operate on other data so this is how a 25:24 loop was written and everybody thought even I think so this was the best thing 25:30 since sliced bread if sliced bread it exists at that time I don't know ok it was a very very clever 25:38 thing because suddenly all these none of this hunky-dory stuff you know with the wrapping loops and gluing your tapes 25:45 together etc because now I'm giving you a systematic way of doing your loops and you have a test for zero so that you can 25:51 get out of the loop whenever you want in it so let's do just do simple arithmetic 25:59 on this right 17 instruction fetches per 26:05 iteration right 10 operand fetches 5 stores and how many instructions are for 26:12 bookkeeping out of 17 14 instruction 26:19 fetches have nothing to do with the vectors I was trying to add is just setting up the computation right 26:24 bringing it doing surgery on the instruction etc right operand fetches 26:30 I'm fetching it ten times but every instruction fetch has to be regarded as a fetch right and eight of them are 26:38 overhead stores four of them are overhead in other words 26:46 all I wanted to do was to do two reads one ad and store it back but instead 26:51 this is the overhead I'm paying and most 26:58 most of the executed instructions are for bookkeeping so people said this is 27:03 not right we need to do something so this became a research topic what can I 27:08 do so that I don't have to continually do surgery on these instructions okay so 27:16 the clever idea which was introduced by Tom Kilburn at Manchester University 27:22 right was to have more specialized registers which he called index 27:28 registers so the idea would be that you 27:33 know suppose I have four index registers so I can choose I X as one of those four 27:39 registers is taking few bits to specify this register but now the meaning of the instruction is instead of fetching from 27:46 X add the index value to X right so if 27:51 you add the index value to X now I can keep changing it because all I had to do is change the index register I don't 27:59 have to go and change the instruction that is over there and you can add new 28:06 instructions which will be checking if the index register is zero then you jump 28:11 to somewhere otherwise you continue in the same way and you can also load the 28:17 index register and you can store the index register and so on pretty soul 28:25 somebody says you know this is a very clever idea but these index registers are beginning to look like accumulators 28:33 you have given me for index registers you want to increment you want to decrement you want to test for zero you 28:39 are doing all these things what is the difference between index register and 28:45 your general purpose register a very valid question to ask at that point but 28:51 at least your program has become much better because now you're not going to have this you know in the middle of the 28:57 iteration same instruction being brought out modified the 
So using index registers, the code becomes much simpler and the program does not modify itself; efficiency has improved dramatically, because we are not fetching and rewriting the same instructions over and over again. It's fantastic, basically. The point is: there were no index registers before, and their introduction simplified programming and made programs dramatically more efficient. So you can imagine that any machine built after that was definitely going to have index registers — OK, slightly more complex control, but index registers.

Now, if you look at the operations on index registers, these are the obvious ones: you have to be able to bring the index register into the accumulator and back, you have to be able to add constants to it, the AC must be saved and restored, you need to increment index registers, you need to store them. It began to look like an accumulator. So people said: all right, in my machine I'm not going to draw any distinction between index registers and general-purpose registers. Instead of thinking "these are the four index registers and these are the four or eight general-purpose registers," I'm just going to give you a set of registers, and give you addressing modes so that any register can be used as an index register if you want. This is how the idea of a general-purpose register was born. And remember — this is a very important lesson in computer architecture — any time you introduce specialized registers, specialized anything, you can't do anything with them unless you also introduce instructions to manipulate that state. If you don't want your instruction set to keep growing, you either think in terms of general-purpose registers — or, as RISC-V does with its CSRs, a uniformly addressed set of registers — or you think in terms of addresses: that's memory-mapped I/O. For all the I/O devices, we don't want instructions for dealing with every conceivable device; you say, "when I write to location 2043, what I really mean is the printer." You do this kind of address mapping in order to talk with all kinds of things without changing the instruction set all the time; the instruction set remains simply loads, stores, and register-to-register operations.

Okay — support for subroutine calls. They recognized very early on that they needed subroutines, and the idea is that from two different points you're jumping to the same code: you have to go there and come back here; from over there you go and come back over there; and you don't want to copy the code, so it stays in one place. Wheeler, who was one of the EDSAC programmers and later a professor at Cambridge, showed how he could execute this code without anything special — he took self-modifying code to the next level. He did some hacking which I used to give out as a problem set long ago, but I'm not going to bother you with it; there's actually a published paper on how to do this without any special instruction. After that, the jump-to-subroutine instruction was introduced — also his idea — where, when you jump, you remember where you came from: we store it in some register. So you have jump-and-link, and that's how RISC-V and all modern instruction sets work.

Now, even after doing that there was a problem, because I jump into the subroutine and I have to do some relative addressing: the arguments were sitting near the call site, in my code. Sometimes the call came from here, some other time from there, so when the subroutine says "the first argument," it's not at the same address — we're back to square zero: should I go and modify those address fields every time I call the subroutine? That's how the idea of indirect addressing was born: we leave the address somewhere else, in memory, and whenever you want to access something you can say, "no, I don't mean X; I mean the contents of the location X points to." That, combined with indexing, lets you access anything. These two things are so important in modern instruction sets — today you do the indirection through registers rather than through memory locations, but the idea is the same. Indexing and indirection are the backbone of any instruction set; you would be hard-pressed to find an instruction set that doesn't have those two capabilities, as well as some sort of jump-and-link instruction.
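Here is a sketch of both ideas together on the toy machine. It is illustrative: Wheeler's original trick actually planted the return address by self-modification, and the historical machines did the indirection through memory words; the link register below is the modern, RISC-V-jal-style form of the same idea:

    # Jump-and-link plus indirect addressing on the toy machine.
    def run(mem, pc=0):
        acc, link = 0, 0
        while True:
            op, addr = mem[pc]
            pc += 1
            if   op == "LOAD":    acc = mem[addr]
            elif op == "STORE":   mem[addr] = acc
            elif op == "LOADIND": acc = mem[mem[addr]]    # indirect: cell holds a pointer
            elif op == "ADDIND":  acc += mem[mem[addr]]
            elif op == "JAL":     link, pc = pc, addr     # remember where we came from
            elif op == "JR":      pc = link               # return to the caller
            elif op == "HALT":    return

    ARG0, ARG1, PTRU, PTRV, R1, R2 = 12, 13, 14, 15, 16, 17
    mem = [
        ["JAL", 9], ["STORE", R1],            # 0: first call: args point at 5 and 7
        ["LOAD", PTRU], ["STORE", ARG0],      # 2: repoint the argument cells...
        ["LOAD", PTRV], ["STORE", ARG1],
        ["JAL", 9], ["STORE", R2],            # 6: ...and call the SAME code again
        ["HALT", 0],
        ["LOADIND", ARG0], ["ADDIND", ARG1], ["JR", 0],  # 9: the subroutine
        18, 19, 20, 21, 0, 0,                 # 12: ARG0 ARG1 PTRU PTRV R1 R2
        5, 7, 100, 200,                       # 18: the two argument pairs
    ]
    run(mem)
    print(mem[R1], mem[R2])    # 12 300

One copy of the subroutine, two different call sites, two different argument sets — and no surgery on the subroutine's instructions.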
I'm not going to go through all of this, but you can see how the addressing evolved: first you have just the accumulator — "bring X," and I don't even specify where to, because there's only one accumulator; then you have it with index registers; then index registers plus indirection; and the final step is indirection through registers.

So what I've convinced you of so far is that in order to program these machines, you had to know how many registers there were, how wide they were, where the addresses went, and so on — lots of details about the machine. Now, IBM was going gangbusters in the 50s — a very, very successful company — but it had an internal problem: it had four completely different families of computers — the 701, the 650, the 702, the 1401, and so on. Every computer had its own instruction set, its own I/O system, its own assemblers, compilers, and libraries, and they were intended for different markets. IBM said: this cannot continue; we cannot maintain so many different lines of computers, because there is some commonality and we must find a way to exploit it. For example, why shouldn't the I/O be the same for all of them? The same printer should work; the same software should work. So this was the origin of the 360: they were solving an internal problem; they were trying to arrange things so that they would have only one ISA which would serve all their computers, with different implementations allowed underneath. They were way, way ahead of the game in this — no other competitor was thinking remotely like it. Really, this was the first attempt to define an instruction set without referring to an implementation — without saying anything about exactly what's inside, whether it's pipelined or not, what registers exist beyond the architecturally visible ones. This is how the definition was given, and it's a fantastic paper; I highly recommend you read this 1964 paper by Amdahl, Blaauw, and Brooks.
Okay. And they defined many things which we stick to even today. For example, they said there is a notion of processor state — this is what we call architectural state. What is the processor state in RISC-V? Processor state is what is visible from one instruction to the next when it executes. What is that information? Yes — the values in all the registers, plus the PC. And memory also, because we rely on the memory: if this instruction did something to the memory, it has to be visible to the next instruction. So all these abstract concepts — what state you can rely on, which we call architectural state: program counter, accumulator, and so on — were defined in this IBM article.

Then they also said that every instruction assumes a bit pattern: if you say it's an integer, it's implicit at that point whether it's two's complement or one's complement or whatever. So data types were also, in a sense, being defined at the hardware level: if it's a floating-point instruction, then you're going to interpret those bits in a certain way. You don't carry tags; the hardware just goes and looks at the bits. They actually talked a lot in that paper about why not tagging — it was not a good idea as far as they were concerned.

And then look at this: if an instruction can be interrupted, then the hardware must save and restore the state in a transparent manner. Why were they thinking about interrupts in such a big way? Because of I/O. I/O is so slow that if I have started something over there, I don't want to sit here twiddling my thumbs; I'm going to go and do something else on this computer while the I/O is going on, and then I need to restore the program that started the I/O. So I need a very precise notion of interrupts: I have executed every instruction up to this point, and I have done nothing beyond it. The notion of precise interrupts was defined for this machine for the first time, and it was so important that they said it wouldn't matter which model you were running on — you would see the same interrupt behavior. The programmer's machine model is a contract between hardware and software, which is what we have been saying over and over again.

Okay, we don't have much time, but this is the kind of instruction set it was. It's pretty simple, but it's extremely memory-based: unlike the RISC-V instructions you've seen, where almost all instructions deal only with registers, here the basic instructions always specify a register and an operand that comes from the memory side, and do something with the two. And here is something we haven't touched on at all: RISC-V has a style of branching which is extremely popular — you compare two values and, depending on the result, you branch. There is a completely different style of branching, which IBM adopted: I do some arithmetic, and I have another set of bits, called condition codes, which the arithmetic sets; all the branches are based on condition codes — I go and look at those bits and jump based on them. Looked at that way, there are some advantages, because you can handle exceptional conditions in an interesting way: if you get an overflow or an underflow, the hardware can set those bits, you can test them, and all kinds of things can happen as a result.
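A minimal sketch of the two branching styles, using invented flag names (Z for zero, N for negative, V for overflow) rather than the 360's actual two-bit condition-code encoding:

    BITS = 16                  # pretend 16-bit machine arithmetic

    # Style 1 (RISC-V-like): the branch itself compares two register values.
    def branch_if_equal(r1, r2):
        return r1 == r2

    # Style 2 (360-like): arithmetic sets condition codes as a side effect,
    # and a later branch instruction tests only the saved flags.
    flags = {"Z": False, "N": False, "V": False}

    def add_setting_flags(a, b):
        full = a + b
        result = ((full + 2**(BITS - 1)) % 2**BITS) - 2**(BITS - 1)  # wrap to 16 bits
        flags["Z"] = (result == 0)
        flags["N"] = (result < 0)
        flags["V"] = (result != full)     # value wrapped => signed overflow
        return result

    x = add_setting_flags(30000, 10000)   # overflows the 16-bit signed range
    if flags["V"]:
        print("overflow flag set; result wrapped to", x)
    print(branch_if_equal(x, x))          # True

The second style is what makes the overflow example work: the condition is remembered in processor state, so a test (or a trap) can happen well after the arithmetic that caused it.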
So this is the most important part: this paper is published in 1964, and at the same time IBM announces six different models of the computer, with six different speeds and six different price ranges — very, very different from each other — but with the promise that any program that runs on one is guaranteed to run on the others. A promise that IBM has kept to this day. They had really mastered the abstract idea of an instruction set architecture, so that you can keep innovating underneath — you can keep designing better and faster machines. This instruction set is still in use today: here is the picture of a microprocessor that came out a few years ago which is still running the 360 — or its successors', the 370's and 390's — instruction set. Look how far we have come: 1.4 billion transistors, a quad-core design. It's just amazing. Even the IBM people could not have imagined in 1964 that fifty years later this instruction set would still exist and people would still be implementing it. That shows you that if you do a good design — and this is absolutely true — if your instruction set is worth anything, it is going to outlast any implementation, because you will keep building newer and newer implementations, but each one has to run all the old programs.

Now, the RISC-V ISA is much simpler than the ISAs of the 60s, but this is the thing I do not want you to forget. Somebody says RISC stands for "reduced instruction set computer" — in fact, to me it's something of a misnomer, because even EDSAC had only about 20 instructions, so we are certainly not smaller than that. What makes modern, "reduced" instruction sets interesting is that there is an absolute division: there is a whole slew of instructions that manipulate registers — r1 plus r2 goes to r3 — and never talk to memory, and there are only two instructions that talk to memory: a load, which brings something into a general-purpose register, and a store, which takes something from the general-purpose registers to the memory. This separation was achieved first by Seymour Cray in the 60s; he was the first to design machines like that, and now it's just gospel — except for Intel machines: x86 still doesn't follow this philosophy, but that's for compatibility reasons. Everybody agrees it is a great idea to separate load and store instructions from the register-to-register instructions, for the simple reason that register-to-register instructions have a very well-defined boundary: you're doing everything inside. The moment you do a load or a store, you're going outside to get something and bring it back, and to this day it is true that when you go outside to get something, it takes a while — so we spend a lot of time designing caches to create the illusion that it doesn't take so long. There is great complexity in memory systems, but at least at the implementation level we have simplified the problem.
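Here is that division sketched with a tiny register file, restyled RISC-fashion — an illustrative encoding, not actual RISC-V:

    # Load/store discipline: only LW and SW touch memory; ADD works purely
    # inside the register file.
    def run(prog, mem):
        regs = [0] * 4
        for op, a, b, c in prog:
            if   op == "LW":  regs[a] = mem[b]             # the ONLY way memory comes in
            elif op == "SW":  mem[b] = regs[a]             # the ONLY way memory goes out
            elif op == "ADD": regs[a] = regs[b] + regs[c]  # never touches memory

    mem = {100: 5, 104: 7, 108: 0}
    prog = [
        ("LW", 1, 100, 0),     # r1 <- mem[100]
        ("LW", 2, 104, 0),     # r2 <- mem[104]
        ("ADD", 3, 1, 2),      # r3 <- r1 + r2   (register-to-register only)
        ("SW", 3, 108, 0),     # mem[108] <- r3
    ]
    run(prog, mem)
    print(mem[108])            # 12

Because the ADDs never go outside, an implementation can pipeline and reorder them freely, while the LW/SW traffic is handed to a separately engineered memory system.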
We have simplified the problem: we can look inside and pipeline it, we can do all kinds of crazy things — superscalar execution and so on — and separately we can keep designing memory systems that can handle many loads and stores simultaneously. The most important aspect of a memory system is that it should be able to tolerate more than one outstanding miss. If you don't get a hit, you know it's going to take a while; what would be great is if you could issue another load, which either gives you a hit or, if it misses too, means you're now processing two misses. The moment you can do that — which is necessary for all fast machines — things work very well. So the clean divide makes implementation easier.

Well, I'm going to stop here. I hope you have enjoyed this class, especially the labs and the project, because if you have done the labs, you can be very proud of what you have accomplished in terms of design. It's not common; people in other schools and other places would be surprised by the degree of sophistication in your designs. That's the heart of this course — you have to do these labs properly. And we want to thank you, because this course cannot be taught if there are no takers. I know some of you are maybe taking it under duress, but it's very, very important for us: we need the feedback to figure things out. So thanks a lot for taking the course and doing the labs. I'll be happy to take any questions or comments if you have them.

[Applause]

The TAs were extraordinarily strong — we have heard so many lectures from Silvina and Daniel, and that's just the surface; what goes on behind the curtain is much, much more collaboration, and without TAs the course just cannot run. They are your primary interface, and I think you like them generally, because you keep wanting more of their time.

Any other comments? Any announcements for the quiz? The quiz is a week from this Thursday. So now you can go out and play, or if you want, you can prepare for the quiz — whatever; you don't have to worry about this lecture. Very good. Thank you.