0:00 OK, good afternoon everybody, let's get started. Welcome to 6.004. Today we're going to shift gears and start learning about operating systems, which are one of the major topics in 6.004. Operating systems are a big topic, so this will be an introduction, and we will focus on the interaction between hardware and software. This is a topic where it is very important to understand what happens on both sides of the fence. This is also the part of 6.004 that is essentially the prerequisite to 6.033: if you take 6.033, you'll see these concepts as an introduction to the more thorough discussion you will get in the operating systems class there.

0:59 So, broadly speaking, in 6.004 so far we've built machines, we've built processors, that run a single program at a time. There's some hardware, we've designed this processor over here, we've looked at the memory hierarchy, and, as you know, systems have all sorts of I/O devices: hard drives, network cards, displays, keyboards, mice, other peripherals. We haven't talked a lot about those, but we will in a couple of lectures. The key idea is that, in the systems we've seen so far, the program that's running directly on top of the hardware has complete access to all the hardware resources: it's directly accessing the processor, it's directly accessing memory, and so on. Remember that the interface this program is coded against is what we call the instruction set architecture, or ISA. We've seen RISC-V; there are many others, x86, ARM, and so on. So we're basically coding our programs directly against the hardware-software interface.

2:23 Now, some systems actually work like this. For example, what we call embedded systems are systems that do a single thing. The elevator controller for, say, the Stata building is only controlling the elevators, so having a single program run the whole system is a reasonable architecture. But most computer systems do not work like this, and you already know this, because on your laptops you're running multiple programs at the same time, and when you write programs you're not directly accessing the hardware; there's a little bit more abstraction going on.

3:08 So what happens is that most computer systems run with what we call an operating system. Instead of there being a single program on the machine, there are multiple executing programs, right on top over here, that share the machine, and each executing program does not have direct access to hardware. Instead, there is an operating system in the middle, interposing between these executing programs and the hardware, and this gives rise to two different interfaces. The operating system is still coded against the ISA: the operating system is talking to hardware, or at least it believes it's talking to hardware, as we will see later, and it is the only thing in the system that has unrestricted access to the hardware. The running programs above are coded against a different interface.
4:03 That interface is what we call the application binary interface, or ABI. So now you have two interfaces, and this gives rise to a number of system-level issues.

4:20 Before we dive into what operating systems do and why they do it, let me change the nomenclature a little bit, because we need to be precise: when we talk about operating systems, we don't really talk about programs, we talk about processes. So here you see two changes. The first is that instead of "program" I've changed the word to "process", and instead of "operating system" I'm saying "OS kernel".

4:54 First of all, the difference between a program and a process: a program is just the code, a collection of instructions, and a process is a program that is being executed. So it's the code plus some other state: what values does it have in registers, what values does it have in memory, what memory does it have allocated, does it have any other resources, does it have any open files, is it talking to some other process over the network? That's the key difference: a process is a running program.

5:33 Then the operating system kernel is just another process, but it is a process that has special privileges. We call it the kernel rather than the operating system because, generally, when we talk about operating systems we talk about things like Windows or Linux or macOS, and those include a lot of other programs, things like a text editor or a window manager, that are not this thing interposing in the middle; they're just normal programs. So we distinguish between the operating system, which is this whole suite of programs, and the kernel, which is this special process with special privileges that talks directly to hardware. OK.
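To make "code plus state" concrete, here is a minimal sketch in C of the kind of per-process bookkeeping a kernel might keep. The struct name, fields, and limits are illustrative assumptions, not something from the lecture.

```c
#include <stdint.h>

#define NUM_REGS  32   /* RISC-V integer register file */
#define MAX_FILES 16   /* illustrative limit on open files */

/* Possible scheduling states for a process. */
typedef enum { PROC_RUNNING, PROC_READY, PROC_BLOCKED } proc_state_t;

/* Hypothetical per-process record: the program is just the code, but the
 * process is the code plus all of this execution state. */
typedef struct process {
    uint32_t     pc;                    /* where execution will resume       */
    uint32_t     regs[NUM_REGS];        /* saved register values             */
    uint32_t     mem_base, mem_size;    /* memory the kernel allocated to it */
    int          open_files[MAX_FILES]; /* other resources it is holding     */
    proc_state_t state;                 /* running, ready to run, or blocked */
} process_t;
```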
6:16 All right, so there are three key reasons why we want to have operating systems, three key benefits we get when we introduce one. The first is protection and privacy. You're going to have multiple processes here, and these processes don't necessarily trust each other: they might be malicious, they might be buggy, and you do not want them to interfere with each other. So one of the most basic things the operating system does is protect processes from each other: it makes each process isolated, so that it can only access its own data, not data from other processes.

6:52 The second is abstraction, and this is really important. How many hard drives, how many peripherals can you buy for your computer? There are thousands of different devices you can plug into your machine, and it would be insane to write programs that directly interface with each possible device. Imagine if, when you wrote a Python program, you had to say: well, if you have this particular hard drive, then I want to write some contents to this sector over here, to this particular piece of the hard drive. Nothing would work. Instead, the operating system abstracts the details of the underlying hardware and gives you higher-level interfaces. For example, when you write a program you're not manipulating a hard drive directly, you're not issuing commands to the disk; you're opening files. Files are the abstraction we build on top of storage devices so that processes can manipulate data and have long-term, non-volatile storage without concerning themselves with the details of the hardware.

8:13 The third big benefit of having an operating system is resource management. The operating system is in charge of controlling how all these different processes use the different parts of the hardware: who gets access to the CPU, who gets access to what memory, who gets access to different I/O devices, who can open network connections, and so on. OK, any questions so far?

8:45 So there are three key mechanisms that we use to get these three key benefits, and this slide summarizes everything at a very high level; over the next three lectures we're going to dive deeper into each of them. The first one, as I said, is that to protect processes from each other, the OS kernel provides a private address space to each process. Each process is given some amount of physical memory by the operating system, and a process is not allowed to access memory outside of its own address space. Here's how it looks: this is physical memory over here, with addresses going down, and you can see that the OS kernel takes some of the memory of the machine, and then there are blocks of memory dedicated to different processes. But for process 1, all that process 1 sees is this amount of memory over here; everything above or below is out of bounds, it's invisible to it. Yes? [Student question.]

10:06 I mean, eventually they will be compiled down to assembly, and we will see this in more detail in a couple of slides, but they can be written in assembly or they can be written in higher-level languages. [Another student question.]

10:35 We will see virtual memory in detail in the next lecture. For now, at a high level, all you need to understand is that there is some amount of protection, so that a process can see some part of the physical memory but not all of it. This is essentially what we call virtualizing memory, and we will see it next lecture.

11:03 The other thing the operating system kernel needs to take care of is scheduling processes: deciding how different processes share the CPU over time. We have one processor, at least in the systems we've built so far; in the systems you have today there are multiple, but in general you're going to have many more processes than processors. For example, whoever has a laptop: can you tell me how many processes you have on your machine? Can you take a look and see how many are running? [Student answers.]

11:51 Right. In general you'll have hundreds of little processes that are taking care of different things in the machine.
11:56 But only a couple of them are actually running, because you probably have two cores in your machine; most of them are waiting for some event to complete, or sleeping because they have nothing to do. In general, though, we're going to have this problem: when there are more processes that want to run than there are CPUs, and in our case we have a single CPU, we're going to have to do something like this, where we split the CPU over time and give different time slices of the CPU to different processes. In this diagram you see time going this way: in the beginning you have process 1 running, then after a few tens of milliseconds the operating system steps in and decides to give the CPU to process 2, then after a while it steps in again and gives the CPU to process 1, and this keeps going. What happens is that each process believes it's running on its own CPU; it's just a CPU that seems slower than if the process were running alone.

13:03 The final piece of the puzzle is that the operating system kernel lets processes invoke other system services, like files, sockets, all sorts of events, via a mechanism we call system calls, and we will see these two lectures from now.

13:24 So there are multiple ways you could implement operating systems. One way would be to establish calling conventions between applications and the operating system, and make each process very aware that there is an operating system behind and below it. In general we don't do this; we want to keep a very high level of abstraction, and the way operating systems achieve this is by implementing what we call a virtual machine, and giving a different virtual machine to each process. What I mean by this is that each process really does believe that it runs on its own machine, on its own physical hardware, but this machine doesn't actually exist in the physical world. It's just a mirage, something the operating system is providing underneath. In general, every time we have a machine, or something that looks like a machine, that a process runs against but that is not implemented in physical hardware, we call it a virtual machine, by contrast with a physical one.

14:24 So here's how it looks. You have the physical hardware at the bottom, with all these devices; then you have the operating system kernel, this specially privileged process running directly above the hardware; and then you have the different processes. For each process, say process 1 over here, you have this ABI that consists of a virtual processor, some virtual memory, and then different system calls: files, sockets, different events. This is all provided by what we call the virtual machine for this process. The operating system kernel is emulating this virtual machine 1, but there are other processes running on the machine, and for each of those the operating system is emulating a different virtual machine with its own ABI. OK, any questions?
15:18 So virtual machines are essentially a new layer of abstraction. We're basically saying we want to give the illusion of a machine, but we don't necessarily have to implement it in hardware. So far we've seen machines that are directly implemented in hardware, but that's not how we're going to operate from now on. This is a staple of operating systems, but it's actually a very general concept that's used all over the place: it's used in programming languages, it's used in virtualization technologies, and you might have seen the term virtual machine used in multiple contexts. The reason it's used in so many contexts is that it's always referring to the same thing: a virtual machine is an emulation of a computer system. Every time you have some notion of a process, some running program, that's not running against real hardware but against some conceptual machine implemented by a process underneath, what you have is a virtual machine, and the process underneath is implementing that virtual machine. In this case, the operating system kernel is implementing a virtual machine.

16:41 OK, so let's see this in a little more detail. Without looking at your handouts, can you tell me how many virtual machines you used in lab 4? One? Two? Well, clearly we gave you a virtual machine, so at least that's one. OK, so let's go from the top down. First, you wrote some programs in assembly, and then you ran them. When you run one, that's a RISC-V process, running bubble sort or quicksort or what have you, and that process is coded against the RISC-V ISA. But in fact we did not give you hardware that implements a RISC-V processor. What we gave you is a RISC-V emulator, which is itself a process: some Python program called sim.py. That emulator is implementing a RISC-V virtual machine: it makes it look to this process like it's running on a RISC-V processor, but in fact it's doing everything in software. There's no hardware going on here; it's all a simulation, an emulation.

17:56 Now, this is a Python program. In the end, sim.py is a text file: there are no x86 instructions, there's no assembly there. It is coded against the Python language. And what's underneath the Python language? Well, there's another program called the Python interpreter, in this case CPython, which is a program that actually does consist of machine instructions, and it is implementing another virtual machine: the Python virtual machine. It takes the sim.py file, interprets its different statements, and executes them, translating those high-level statements into low-level operations. OK, so that's two virtual machines so far.
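At the heart of an emulator like sim.py there is a fetch-decode-execute loop. sim.py itself is written in Python; the sketch below expresses the same idea in C, with only two opcode cases stubbed out and all names chosen for illustration rather than taken from the lab.

```c
#include <stdint.h>

#define MEM_BYTES 65536

/* Minimal fetch-decode-execute skeleton of an instruction-set emulator.
 * Everything the "processor" has (registers, PC, memory) is just data in
 * this program, which is why the emulated code never touches hardware. */
static uint32_t regs[32];
static uint8_t  mem[MEM_BYTES];
static uint32_t pc;

static uint32_t load_word(uint32_t addr) {
    return (uint32_t)mem[addr]           | (uint32_t)mem[addr + 1] << 8 |
           (uint32_t)mem[addr + 2] << 16 | (uint32_t)mem[addr + 3] << 24;
}

void run(void) {
    for (;;) {
        uint32_t inst   = load_word(pc);   /* fetch the next instruction   */
        uint32_t opcode = inst & 0x7f;     /* decode: low 7 bits in RISC-V */
        switch (opcode) {
        case 0x33:  /* OP: register-register ALU instructions (add, ...) */
            /* decode rd/rs1/rs2/funct3/funct7 and update regs[] here */
            break;
        case 0x73:  /* SYSTEM: ecall and friends; stop the simulation */
            return;
        default:    /* unknown: a real emulator would raise an exception */
            return;
        }
        regs[0] = 0;   /* x0 is hard-wired to zero */
        pc += 4;       /* advance to the next instruction */
    }
}
```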
18:53 CPython, in turn, is a Linux process, so it's coded against the Linux ABI, and behind the Linux ABI lies the Linux operating system kernel, which is implementing a Linux x86 virtual machine. So at this point what you have is an operating system, and a process that's itself implementing a virtual machine, which is running another process that's itself implementing a virtual machine for the thing above it. This Linux operating system kernel talks the x86 ISA: it's running against an x86 processor, and it's talking to hardware directly, right? But that's not the end of the story, because this kernel is not actually running against hardware either. What it's running against is VirtualBox, which is yet another process, one that you downloaded for your own operating system, whatever that is, and that's implementing yet another virtual machine, one that emulates hardware. It makes this Linux OS kernel believe that it is actually writing to a disk, when in fact what VirtualBox is doing is translating those disk commands; it says, actually, I have the whole disk image here in some file, and I'm going to translate these back into file read and write commands. And VirtualBox is coded against, well, there are multiple versions of VirtualBox, one for whatever operating system you might be running: Windows, Linux, macOS, FreeBSD, what have you. So you're running this VirtualBox process on top of your operating system, and that operating system is again implementing another process virtual machine, which talks the x86 ISA, which, unless you set up some other nested virtual machine, is finally talking to hardware. OK, so how many virtual machines do we have? Five.

20:52 This is kind of mind-boggling, right? How many layers of translation there are from a RISC-V instruction until something actually runs on hardware. So, two important things. First, these are different types of virtual machines. At the lowest level, this RISC-V virtual machine and this x86 system virtual machine are emulating hardware; we call these system virtual machines, or hardware virtual machines, or full-system virtual machines, because they make whatever is above them believe it's running directly on hardware. Then you have these operating systems, which implement process virtual machines: they give a more abstracted interface to the processes running above the OS. And finally you have this Python virtual machine, which is what we call a language-level virtual machine. That tends to be more abstracted, further away from hardware: when you're writing a Python program there is a notion of a process, but it's not very related to the specific instructions that will run on your processor. OK, any questions? Yes? [Student question.]

22:13 So you can do multiple things depending on the virtualization software you are using. In general they tend to be restricted to emulating the same underlying ISA, but there are virtualization packages that will emulate a different ISA, like QEMU. For example, if you're doing any iOS development, what you're using is a virtual machine that emulates ARM on x86, so you'll frequently run into some of these.
22:50 OK, so finally, when we get down to the physical world, there's the issue, as I said, of what level of virtual machine we're talking about: language level, process level, or system level. But there's also another factor here that's very important, which is performance. Some of these virtual machines introduce a significant performance slowdown, and some of them actually do not introduce much slowdown at all. For example, take the Python virtual machine: whenever you run a Python program, it's common knowledge that unless you're doing something fairly special, like using NumPy all the time or some library that's implemented very efficiently, you're typically incurring one to two orders of magnitude of slowdown versus running something that's directly machine code. For example, if you took a C program and compiled it, that would be 10 to 100 times faster. The reason is very simple: we're implementing everything in software, so basically everything has to be emulated by that process. But when we're talking about operating systems, it's not acceptable to incur one to two orders of magnitude of overhead, so because we want to support operating systems with minimal overhead, we're going to add some hardware support to our machines. That way, most of the time, the program at the higher level of abstraction runs directly on the underlying hardware resources, and the operating system steps in only when required: whenever the program does something it shouldn't, or the program requires some service from the operating system.

24:48 More specifically, there are four things we're going to see over the next three lectures, four ISA extensions that we're going to introduce to support operating systems and these process-level virtual machines. The first is that we're going to change our processors to have two different modes of execution, user and supervisor. Only the OS kernel runs in this special supervisor mode, and all other processes run in user mode. The reason we introduce these two modes is that we're going to give supervisor mode some superpowers that user mode does not have: privileged instructions and privileged registers that only the process running in supervisor mode has access to. Essentially, there is going to be some hidden state that's not available to most programs, but that the OS has access to. Then the key question is: how do we transition safely from user mode, whatever process is running in user mode, to supervisor mode? How do we switch between these two modes? This is accomplished with what we call exceptions or interrupts, and that's the mechanism we will see today.
26:13 The other key piece of hardware we need to introduce is what we call hardware support for virtual memory, and this is what lets us introduce this notion of private address spaces with very low overhead. We will see the first three mechanisms today, and next lecture we'll dive into virtual memory. But the key takeaway, regardless of the specific mechanisms, is that these ISA extensions only work if hardware and software are very much in sync. The operating system needs to be very aware of what the contract with hardware is, of what the conventions and the interface are. We will see this today as we start diving into specific implementation details, and we'll continue in recitation.

27:07 OK, so the first mechanism we're going to see is exceptions. An exception is an exceptional event: something that's not common, by definition, some event that needs to be processed by the operating system kernel, and it's either unexpected or rare. Basically, you have some process over here running a sequence of instructions: instruction i-1, instruction i, instruction i+1, and on instruction i something unexpected happens and this exception triggers. What we're going to do is redirect the processor to execute instructions from this handler code, operating system code that's in the OS kernel, and we call this code the exception handler. The handler runs through a sequence of instructions, figures out what to do, and then returns control back to the process.

28:08 Now, there are many different reasons why exceptions can happen, and generally we classify them by distinguishing between synchronous and asynchronous events. Synchronous events are what we call exceptions proper, and they're generated by the program itself, by the process. For example, the PC goes to an illegal instruction, or an unsupported instruction, and now we don't know what to do, so that merits a call into the operating system to say: hey, something went wrong, figure it out. But there are many other reasons. A process might issue an illegal memory access, an access to memory it's not supposed to touch, or a memory access that's unaligned: you're trying to access a word, but the address is not aligned to a word boundary, and that will again trigger a call into the OS. There are also arithmetic instructions: if you have a divider, you can get divide-by-zero exceptions, because the hardware doesn't know what the result should be, so that's up to the operating system to decide. And sometimes the process itself wants to invoke the operating system, with the system call mechanism I mentioned before; we will see that in detail in a couple of lectures, but just be aware that the basic mechanism used there is also exceptions.

29:46 On the other hand, we also have what we call interrupts. Interrupts, by contrast with exceptions, are asynchronous events: they're generated by devices, by something other than the program.
29:52 For example, you have a timer that periodically interrupts the processor so that the OS can step in and do something: whenever this timer expires, it raises an interrupt line, the CPU takes the interrupt, and it automatically goes into the OS so that the interrupt can be serviced. If you have a keyboard, every time you press a key an interrupt is raised in the CPU so that the OS can process it. Or when you have some I/O operation, say there's a write going on and the hard drive figures out that it's done, "give me something else," again that raises an interrupt, and the operating system needs to step in and figure out what to do.

30:46 In general, you'll see the terms exception and interrupt used more or less interchangeably. In this course we're going to use exception as the general term, and we will say synchronous exception when we're talking specifically about synchronous events. This is also in line with the RISC-V nomenclature, which uses exception to refer to everything. OK, any questions so far?

31:16 All right. As I said, when an exception happens there's a very specific recipe that we follow. First, we stop execution at the instruction where the exception happened, and we need to implement something called precise exceptions: whenever there is an exception, all the instructions that precede the instruction that caused it must have completed, and the instructions that follow it must not have had any effect. This is intuitive: if you have an exception at one particular instruction, every earlier instruction should have completed and every instruction that comes after should not. But if you start thinking about what that does to pipelining, where we're overlapping multiple instructions and side effects happen at different cycles, you'll see in a couple of lectures exactly what hardware support this requires; there are some nuances in how you can implement exceptions efficiently.

32:33 Forgetting for a second about implementation details, conceptually what the processor does is this. It stops the program at that instruction. It saves the PC of the instruction where the exception happened, and the reason why it happened (was it an interrupt, an illegal operation, something else?) in some privileged registers that only the OS has access to. It toggles the supervisor bit: there's a bit that decides whether we are in supervisor mode or user mode, and it enables supervisor mode. It disables interrupts, so that the OS doesn't get interrupted while it's trying to handle this event. And then it transfers control to a pre-specified PC, a pre-specified address.
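To make that recipe concrete, here it is written out as C pseudocode. This is a conceptual model of what the hardware itself does, atomically, when an exception is raised; it is not code the OS writes, and the variable names only mirror the RISC-V CSRs introduced later in the lecture.

```c
#include <stdint.h>

enum mode { USER = 0, SUPERVISOR = 1 };

/* Architectural state involved in taking an exception (illustrative). */
static uint32_t  pc;
static uint32_t  mepc, mcause, mtvec;   /* privileged registers            */
static enum mode cur_mode = USER;
static enum mode prev_mode;
static int       interrupts_enabled = 1;

/* What the processor does, all at once, when an exception is raised. */
static void take_exception(uint32_t faulting_pc, uint32_t cause) {
    mepc      = faulting_pc;   /* 1. remember where the exception happened  */
    mcause    = cause;         /* 2. remember why, for the handler          */
    prev_mode = cur_mode;      /* 3. remember the mode we came from...      */
    cur_mode  = SUPERVISOR;    /*    ...and switch to supervisor mode       */
    interrupts_enabled = 0;    /* 4. take no further interrupts for now     */
    pc        = mtvec;         /* 5. jump to the handler address the OS set */
}
```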
33:22 So then the exception handler in the OS kernel runs, processes the exception, and figures out what to do, and there are two possible outcomes. One option is that we process this exception, maybe an event that doesn't actually have anything to do with the process, and in that case we return control to the process that was running, at the same instruction. What's nice about this is that it's completely transparent to the process: the user process just sees a little blip in time, but then execution resumes at the same point where it was interrupted, so functionally there's no difference from the process running continuously. The other option is that this exception is due to something the process did that it shouldn't have done. For example, there was a bug in the program and it tried to access memory it shouldn't have, or it's executing data as code and running through garbage instructions. In that case, the operating system aborts the process: it just stops it, and that ensures nothing bad happens. OK, any questions? Yes? [Student question.]

34:55 So sometimes the kernel itself is stratified. Sometimes the kernel has some special bit of code that's very carefully written so that it cannot cause exceptions, and then it re-enables exceptions; but there is some very specific code where you do not want to get re-interrupted while you're trying to handle an interrupt. That's a great question, and in fact, if you look at specific operating systems, you'll see they talk about the top half and the bottom half: the top half is whatever handles interrupts, and it cannot be interrupted, and then there's the bottom half, which can be interrupted.

35:40 OK, so let's see what you can do with exceptions and interrupts. Specifically, we're going to look at two different applications, two different features we can implement with this mechanism, which essentially do what we want with respect to the processor: they virtualize the processor, they give us a virtual CPU. The first application, the first case study, is the scheduling feature I mentioned before. As I said, one of the key features of the OS kernel is that, when you have more processes than CPUs, it needs to split or slice the CPU across processes, so each process is given some fraction of the time on the CPU and cannot take more time than it is given. The enabling technology for this is just a timer, what we call timer interrupts. There is some hardware device that keeps track of time, and the kernel has access to it, maybe through one of these privileged registers; it sets a timer, basically saying "interrupt me in 20 milliseconds," and 20 milliseconds from now that device will raise an interrupt, which will make the processor jump back into the kernel.

37:02 Let's see how this works in more detail. Here you can see time in milliseconds, and which process is running on the CPU. Let's assume that in the beginning the kernel is running on the CPU.
37:15 Now the kernel wants to give control to some process, it wants to schedule some process on the CPU. Before it gives the CPU to that process, let's say process 1, it sets a timer: "in 20 milliseconds, please interrupt me." Then it loads the state of process 1 and gives it control. So from t = 10 milliseconds to t = 30 milliseconds, process 1 is happily running on the CPU. But at t = 30 milliseconds the timer fires, and that interrupt automatically switches the CPU into the exception handler; you can see the kernel running for this small time slice, this small slice of orange here. In this case the kernel figures out that this is a timer interrupt, so this process's time has expired and we need to schedule something else. Let's say there's a process 2 that also wants to run, so the OS kernel decides to schedule process 2. It sets the timer to fire in, say, 30 milliseconds, maybe 30 instead of 20 because process 2 has higher priority and deserves more CPU time, and then it gives process 2 the CPU. The pattern repeats: at t = 60 milliseconds the kernel is invoked again, decides to switch back to process 1, and so on.

38:48 So with timer interrupts, as you can see here, we can safely time-slice, time-multiplex, the CPU among multiple processes, even though these processes have no idea that they're only running on a slice of the CPU at a time. Yes? [Student question.]

39:14 Yes, there is: this is a hardware device, so you'll have some quartz crystal keeping track of how many microseconds or milliseconds have passed. The key thing is that only the operating system has access, through this privileged mode, to this timer. Otherwise process 1 could say: no, I actually want to run for longer, let's set the timer to eight seconds, or to infinity milliseconds, and now the system is mine. So the key point is that you don't allow processes to access this hardware; that hardware is only accessible by the operating system. OK, any other questions? Yes? [Student question.]

40:08 That's a great question. There are a million different scheduling algorithms; in recitation tomorrow you'll see something very simple, what we call round-robin scheduling. As for how long the time slice should be, that depends on your implementation. Because you have caches, you want to let each process run for enough time that you're not just thrashing the caches all over the place; since these processes are not going to share anything, you at least want to keep the caches relatively warm, and that tends to set a lower limit of a few milliseconds. On the other hand, if you schedule a process for four seconds, then no other process gets the CPU for four seconds, so if you're doing anything interactive you also want that interval to be small. It's an interesting trade-off.
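As a rough sketch of the path just described, here is what a timer-interrupt handler doing round-robin scheduling might look like in C. The helper functions and constants are assumptions made for illustration; the real context-switch code is the assembly common handler discussed at the end of the lecture.

```c
#include <stdint.h>

#define NUM_PROCS  2    /* processes competing for the CPU in the example */
#define QUANTUM_MS 20   /* illustrative time slice */

/* Helpers the kernel is assumed to provide (hypothetical, not real APIs). */
void save_context(int pid);      /* spill CPU registers into the process table */
void restore_context(int pid);   /* reload them and eventually mret back       */
void set_timer_ms(uint32_t ms);  /* program the privileged timer device        */

static int current = 0;          /* which process owns the CPU right now */

/* Invoked (via the common handler) each time the timer interrupt fires. */
void timer_interrupt_handler(void) {
    save_context(current);                 /* the running process loses the CPU */
    current = (current + 1) % NUM_PROCS;   /* round-robin: pick the next one    */
    set_timer_ms(QUANTUM_MS);              /* "interrupt me again in 20 ms"     */
    restore_context(current);              /* hand the CPU to that process      */
}
```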
41:07 OK, what else can we do? The other thing we can do with exceptions is emulate instructions that our processor, our hardware, doesn't actually support, and we can do this by taking an illegal-instruction exception and doing something interesting with it. Let's be specific. So far we've implemented these RISC-V processors, and we've implemented the simplest subset, the 32-bit base integer variant of the ISA, RV32I. There are multiple extensions that add more instructions, and one that's very commonly implemented is the M extension, where M stands for multiply. The M extension defines this multiply instruction over here, which takes the contents of x2 and x3, multiplies them, and stores the result in register x1. Of course, if we haven't implemented M, the processor detects that as an illegal instruction. So what happens if we try to run code for an RV32IM machine, a machine with the multiply extension, on a more basic machine like the ones we've built, without a multiplier? We get an illegal-instruction exception, and at that point one simple thing the OS could do is step in and say: you tried to run something I don't understand, that's not in this ISA, and abort the process. But it can also say: oh, this is a multiply instruction, and the hardware doesn't have multiply instructions, but I know how to multiply, so I'm going to emulate the instruction in software.

43:01 Let's see how this happens. Suppose you have this code over here in process 1: there's an add instruction and then a mul instruction. Process 1 is running, and at this particular point it hits the mul instruction, which causes an illegal-instruction exception, which causes the kernel to be called. The kernel runs the illegal-instruction exception handler and sees that this sequence of 32 bits, which our hardware processor doesn't know how to interpret, is actually a multiply instruction, so it emulates it in software. Now, the OS doesn't have a multiplier either, but it can do multiplication by repeated addition with a normal function call. When it's done, it needs to return control to the process at the instruction following the multiply. The result is that the program believes it's running on a processor that has a hardware multiplier; it believes it's running on this more sophisticated ISA, even though in hardware we don't actually have a multiplier. In a sense, we've virtualized the processor: we're making the program believe it has hardware that it doesn't. OK, any problem with this approach? Yes? [Student comment.]

44:35 It's kind of slow, yeah. [Another student question.]

44:45 Right, but this is an exception to that rule: if you're emulating the instruction that caused the exception, you need to return to the next instruction; otherwise you'd constantly re-run into the same exception. OK.
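Here is a sketch of how an illegal-instruction handler might recognize and emulate mul by repeated addition, then resume at the following instruction. The encoding fields (opcode 0x33, funct3 0, funct7 1) are the actual RV32M mul encoding, but the handler's interface and the way it reaches the faulting instruction are simplifying assumptions.

```c
#include <stdint.h>

/* regs points at the interrupted process's saved registers and mepc_p at its
 * saved exception PC; both are assumed to come from the common handler. */
void illegal_instruction_handler(uint32_t *regs, uint32_t *mepc_p) {
    /* Fetch the faulting instruction (simplification: assume the process's
     * code is directly readable at this address). */
    uint32_t inst   = *(const uint32_t *)(uintptr_t)(*mepc_p);
    uint32_t opcode =  inst        & 0x7f;
    uint32_t rd     = (inst >> 7)  & 0x1f;
    uint32_t funct3 = (inst >> 12) & 0x07;
    uint32_t rs1    = (inst >> 15) & 0x1f;
    uint32_t rs2    = (inst >> 20) & 0x1f;
    uint32_t funct7 = (inst >> 25) & 0x7f;

    if (opcode == 0x33 && funct3 == 0 && funct7 == 1) {  /* mul rd, rs1, rs2 */
        uint32_t a = regs[rs1], b = regs[rs2], product = 0;
        for (uint32_t i = 0; i < b; i++)   /* multiply by repeated addition */
            product += a;
        if (rd != 0)                       /* x0 always stays zero */
            regs[rd] = product;
        *mepc_p += 4;                      /* resume at the NEXT instruction */
    } else {
        /* genuinely unknown instruction: abort the process (not shown) */
    }
}
```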
45:03 Yeah, the other problem is that this is still much slower than a hardware multiplier. [Student question.]

45:21 Well, that depends on what the contract with the OS is. If the OS guarantees that it's going to emulate those instructions, then it should; otherwise it should not. That is specific to the OS; in a sense, the OS can choose whether to hide this or not.

45:45 OK, so in the remainder of the lecture we're going to start going into the details of how exceptions are actually implemented in RISC-V. This is something we'll discuss in much more detail in tomorrow's recitation, but let's start taking a look at what RISC-V does so that we can get jump-started on the recitation materials.

46:11 As I said before, in general an ISA has this supervisor mode and this user mode, and supervisor mode has access to privileged state and privileged instructions. In RISC-V this privileged state takes the form of privileged registers called control and status registers, or CSRs for short. Some famous CSRs, although there are many, are the following; they all begin with m, which stands for machine level. mepc is the exception PC, the PC of the instruction where the exception triggered. mcause is the cause of the exception. mtvec is the address of the exception handler; this is something the OS sets so that, when an exception happens, the processor jumps to the address stored in that register. And mstatus has a bunch of status bits telling you things like which privilege mode you are in, user or supervisor, whether interrupts are enabled, and so on. Then there are privileged instructions, csrr and csrw, to read and write these CSRs, and we'll see another instruction called mret that does something fairly special: it returns from the exception handler to the process, which is a fairly delicate dance of atomically updating different pieces of state. And of course, because these are privileged, trying to execute these instructions from user mode causes an exception: if a normal user process tries to run a csrr, it will immediately go into the operating system, which will figure out what to do, probably abort the process.

48:04 The typical way operating systems code these exception handlers is that there's what we call the common handler, shown here in orange, which is typically written in assembly because it needs to do a very delicate dance to switch contexts between the user-level process and the OS kernel, and then there are a lot of other exception handlers, which we write in higher-level C code. Every exception goes directly to the common handler, and the common handler does five things. It saves all the registers to some location in memory. It passes that state, a pointer to the address where the registers are stored, to the particular exception handler we're interested in running. That exception handler returns and tells the common handler which process should run next: normally we'll run the same process, but if this is a timer interrupt we might switch to a different process, and if the process did something bad and the exception handler aborted it, then we also switch to a different process. The common handler then does the reverse: it reloads the registers from memory into the actual registers, and finally it calls mret, which does that fairly delicate dance of updating different pieces of state at the same time: it takes mepc and sets the PC to it, disables supervisor mode, and re-enables interrupts, all at once, so that we jump back into process code.
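Putting those pieces together, the C-level dispatcher that the assembly common handler calls might look something like the sketch below. The mcause values are the ones defined by the RISC-V privileged spec; the trap-frame layout, handler names, and return convention are invented for illustration.

```c
#include <stdint.h>

#define MCAUSE_INTERRUPT   0x80000000u  /* top bit set: asynchronous interrupt */
#define IRQ_MACHINE_TIMER  7u           /* machine timer interrupt             */
#define EXC_ILLEGAL_INSTR  2u           /* illegal instruction                 */
#define EXC_ECALL_FROM_U   8u           /* system call (ecall) from user mode  */

/* State the common handler saved to memory before calling us (illustrative). */
typedef struct { uint32_t regs[32]; uint32_t mepc; } trap_frame_t;

/* Specific handlers, each returning the pid that should run next. */
int handle_timer(trap_frame_t *tf);
int handle_illegal_instruction(trap_frame_t *tf);
int handle_syscall(trap_frame_t *tf);
int handle_device_irq(uint32_t irq, trap_frame_t *tf);
int kill_current_process(void);

/* Called by the assembly common handler; the pid it returns tells the
 * common handler whose registers to reload before executing mret. */
int dispatch_exception(uint32_t mcause, trap_frame_t *tf) {
    if (mcause & MCAUSE_INTERRUPT) {
        uint32_t irq = mcause & ~MCAUSE_INTERRUPT;
        if (irq == IRQ_MACHINE_TIMER)
            return handle_timer(tf);        /* may pick a new process */
        return handle_device_irq(irq, tf);  /* keyboard, disk, ...    */
    }
    switch (mcause) {
    case EXC_ILLEGAL_INSTR: return handle_illegal_instruction(tf);
    case EXC_ECALL_FROM_U:  return handle_syscall(tf);
    default:                return kill_current_process();
    }
}
```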
49:52 The code you'll see tomorrow shows how to implement the common exception handler and also how to implement the dispatcher into the different exception handlers; that's what you'll see in recitation tomorrow.

50:05 So, in summary: the big goals of operating systems are protection, abstraction, and resource management. Today we've seen that supporting this efficiently requires some hardware support, and we've seen how you can do lots of different things with user mode and supervisor mode and with exceptions. Next lecture we'll start our discussion of virtual memory. Thank you.