Tuesday, 9 April 2019

The Operating System - Between it and you

So, you could say that you can divide operating systems into two camps: the Unix-like systems, and Windows. It's true, pretty much any operating system that isn't Windows traces back to the Unix systems developed at Bell Labs in the 1970s. Actually, a lot of them are based on what is called BSD, the Berkeley Software Distribution (from the University of California, Berkeley), and devices from the Apple Mac to the PlayStation 4 run systems descended from it, while Android is built on the Linux kernel, another Unix-like system.

And then there's Windows. Bill Gates had to be different, but we have to at least give him a little credit, namely because he also started working on his systems back in the 70s, and his early DOS was modelled on (or should I say borrowed from) an operating system called CP/M, which was in wide use at the time. However, his systems were proprietary, whereas many of the Unix-like systems were open source, so basically the choice was to pay Bill Gates a lot of money and use his system, or to build your own from another system that had already been developed. Well, you can guess which way people decided to go (actually, until the rise of the smartphone, they mostly just paid Bill Gates a lot of money).

The role of the operating system is basically to enable you to use the computer. Sure, you could use it by simply feeding it a bunch of 0s and 1s, but honestly, you would probably need a PhD to be able to do that, and doing so simply to play a round of Halo is going to be a little time consuming. So, the operating system does all of the grunt work for you: all you need to do is plonk the disc into the Xbox, press play, and you are good to go.

Another way to look at operating systems is through their interface. You have two types: the GUI, or graphical user interface, and the text-based interface, which has pretty much disappeared from everyday use. In retrospect these are referred to as textual user interfaces (TUIs), or command-line interfaces, and you can still access one by opening up the 'Command Prompt' in Windows. Actually, if you know what you are doing, using a TUI can be much faster than all the pointing and clicking that you do with your mouse, though once again, you do need to know what you are doing.


Operating systems perform a number of functions, such as:

Resource Management: The operating system (OS for short) manages the computer's resources, namely the CPU, the graphics card, the memory, the hard drives, and even that little cup warmer that you have plugged into your USB port (well, not really, that's just using the computer's power to keep your coffee warm).

Data Management: The input, output, and arrangement of data on the various devices is also managed by the operating system. When you search through your directories looking for that particular bill you thought you paid, it is actually the OS managing the arrangement, not the hard drive. The directory structure is still saved to the hard drive in a special structure; it is just the OS that interprets it.

Job Management: This probably goes hand in hand with the above, but the OS runs a scheduler that tells the CPU what to do and when. So, when you have multiple things running at the same time (such as that Iron Maiden tune you have playing on Spotify while you are finishing off that essay you have to hand in in two hours' time), it is the OS that allocates CPU time to these tasks.


Let's look at a few more types of operating systems:

Real Time: As the name suggests these operating systems operate in real time, and are required for incredibly precise operations, such as piloting a spacecraft or managing a nuclear reactor. These operating systems usually run without direct user input, and are often locked away in a sealed case. The other thing is that these operating systems are built to be predictable rather than fast.

Single User/Single Task: Basically these operating systems perform a single task at a time, and are designed to be used by a single user, such as the old feature phones that a handful of people still use. On the flip side, you have the single-user, multi-tasking systems, such as your desktop or laptop. In these situations ease of use is critical.

Multi-User: These operating systems are designed to let multiple users access the system and work on it at the same time. They generally don't sit on your desktop but rather on the server that your desktop is connected to. In a lot of cases, the end user will probably be using a single-user system, while the server is running a multi-user system. One of the reasons that this is useful is that if one wishes to update the system, they only need to do it on one machine as opposed to a whole heap of them. Linux and Unix operating systems are examples of this. Reliability is the key factor with these computers.

Distributed Systems: These operating systems sit on multiple computers but actually make them appear as if they were one computer. This allows for system-wide sharing of resources. Users of these operating systems should not need to know which computer they are using, or where their files are stored. Synchronisation of communication is essential for these systems.

Embedded Operating Systems: These operating systems are basically 'embedded' on a device, such as Android on your phone (or iOS). This is also the case with systems like the PlayStation and Xbox, as well as your smart TV, or even your printer (if you have a mega-fancy one, that is). In this situation real-time performance may be critical (though ease of use is also a factor).

On to the sections of the operating system, which are best thought of as a stack of layers:

Kernel: This is the part of the operating system that communicates directly with the hardware of the computer. In a way it acts as a sort of gatekeeper between the upper layers and the devices, so that you don't have everybody trying to get at the same thing at the same time and pretty much crashing the system. This is where the scheduling takes place, and it is what shares the resources across the applications. It needs to work properly, and I mean 100% properly, because a single bug here can lock up the entire system.

Device Drivers: Okay, these also communicate with the devices, but they need to, because it is the drivers that actually allow the computer to use them. If the driver for a particular device is not present, then that device is basically useless. These also need to be properly coded, because if they aren't it is going to affect the entire system.

API Layer: API stands for Application Programming Interface, and this is the layer between the applications (or games, if that's your thing) and the device drivers and kernel. These interfaces allow the applications to communicate with the kernel and the drivers, feeding the various tasks down through to them. There are actually a number of APIs that perform specialised tasks, such as graphics and networking, among others. The APIs break the tasks down into smaller operations that the lower levels are able to interpret and then perform. They have no direct access to the hardware.
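To make the layering a little more concrete, here is a minimal sketch in Python of an application saving a file. The application only talks to the operating system's file API; the kernel and the drivers are the ones that actually touch the disk. The file name is invented for illustration.

```python
import os
import tempfile

def save_note(path, text):
    # os.open / os.write wrap the operating system's file API --
    # the application never addresses the hard drive directly.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, text.encode("utf-8"))   # request passed down to the kernel
    finally:
        os.close(fd)                          # release the kernel-managed descriptor

path = os.path.join(tempfile.gettempdir(), "note.txt")
save_note(path, "hello from the application layer")
with open(path) as f:
    print(f.read())  # -> hello from the application layer
```

Everything below the `os` calls (the file system, the disk driver, the hardware) is the operating system's problem, which is exactly the point of the layering.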

Applications: Or games, if that happens to be your thing. This is the top layer of the operating system, and it is where all the programs you play around with live, such as the browser for those 'special' sites, your word processor, and even that calculator you keep open so that you don't have to think too hard when it comes to arithmetic. Once again, this layer has no direct access to the hardware, and all requests need to be fed down through the APIs.

Desktop Design

So, the best design for a desktop operating system is one that reliably performs the tasks that the user desires, and it needs to mediate seamlessly between the applications and the hardware. The trick is balancing ease of use, speed, and reliability. In this instance, using a proven kernel is the trick, because we simply don't want the computer crashing in the middle of Grandma talking with her daughter overseas, especially if Grandma has no idea how to use a computer.

The device drivers need to configure themselves without any user input, and in some instances the drivers should have kernel-level access to the devices (particularly graphics). In fact, when it comes to device drivers, we really don't want the user to have to reboot the system every time they want to plug in or remove a flash drive. The API needs to be well developed, both to allow multiple applications to be built on it and to provide ease of development. Finally, it should be a graphical interface where the user can simply see what does what.


There are a number of ways that tasks can be scheduled by the process scheduler: first come, first served (or first come, best dressed, if you prefer); shortest job first; priority; and a round robin tournament. While first come, first served is fairly easy to implement, it can result in pretty horrendous wait times, particularly if there is a queue of rather long processes. The shortest-job-first method can be problematic in that a lot of pretty long processes may end up being dumped at the end of the queue. Thus we have priority scheduling, but this isn't usually one where the user assigns a priority; rather, the scheduler assigns the priority based on how important the task happens to be. Then again, you can also assign a priority to a task manually.
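The difference between the first two policies is easy to see with a toy calculation. This sketch uses hypothetical CPU burst times (in, say, milliseconds) to compare first come, first served against shortest job first:

```python
def average_wait(bursts):
    """Average waiting time when jobs run in the given order."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed      # this job waited for everything before it
        elapsed += b
    return wait / len(bursts)

bursts = [24, 3, 3]                      # jobs in arrival order
print(average_wait(bursts))              # FCFS: (0 + 24 + 27) / 3 = 17.0
print(average_wait(sorted(bursts)))      # SJF:  (0 + 3 + 6) / 3 = 3.0
```

One long job at the front of a first-come queue makes everything behind it wait, which is exactly the "horrendous wait times" problem above; sorting by length fixes that, at the cost of long jobs drifting to the back.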

There are a number of different types of priorities: static, which is assigned when the task is created, and dynamic, which depends on factors after creation, such as the task's behaviour within the system. Then you have internal and external priorities, one assigned within the operating system, and the other assigned by outside factors, such as an impatient user (who can't wait the ten nanoseconds for the task to be completed). The major problem with priorities is that they may result in low-priority tasks being starved (that is, waiting in the queue forever).
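A static-priority ready queue can be sketched with a heap, where a lower number means a higher priority. The task names and priority values here are invented for illustration:

```python
import heapq

# (priority, task): lower number = more important, so the heap
# always hands us the highest-priority task first.
ready = [(1, "reactor-check"), (5, "spell-check"), (9, "screensaver")]
heapq.heapify(ready)

order = []
while ready:
    prio, task = heapq.heappop(ready)   # always pops the smallest priority value
    order.append(task)

print(order)  # -> ['reactor-check', 'spell-check', 'screensaver']
```

If high-priority tasks keep arriving, the screensaver at priority 9 never reaches the front of the heap, which is the starvation problem described above.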

If you do decide to set the priority for a specific task, it is for one time and one task only, and will have to be reset if you wish to do it again.

The round robin system is where pretty much every task gets a fair go at using the CPU. Each task is allocated a time slot, and it uses the CPU for that particular slot before moving aside to let the next one have a go. Actually, the slots are called time quanta because, well, scientists.
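Round robin can be sketched with a simple queue: each task runs for at most one quantum, and if it still has work left it goes to the back of the line. The task names and burst times are invented:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the (task, time used) slices in the order they run."""
    queue = deque(bursts.items())        # tasks with their remaining CPU time
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)  # run for one quantum at most
        timeline.append((name, slice_))
        if remaining > slice_:
            queue.append((name, remaining - slice_))  # back of the queue
    return timeline

print(round_robin({"A": 5, "B": 3}, quantum=2))
# -> [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```

No task can hog the CPU, but every task finishes later than it would have on an idle machine, which is the fairness trade-off.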

Another way of doing it is called multi-level queuing. This is where tasks that are similar are gathered together in separate queues, and each of the queues has its own scheduling regime. Priorities are then set between the queues so that each can perform its varied tasks.

Virtual Memory

Now on to paging, which is annoying. The operating system divides a program's memory into fixed-size chunks called pages, and divides physical memory into matching slots called frames (which is basically a page-sized slot, but with another name). The pages are stored in the memory's frames. However, if the memory is close to being full, the OS does something called paging: it sets aside space on the hard drive and moves some pages out there. This section of the hard drive is known as virtual memory.
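The bookkeeping behind pages is just integer division. As a sketch, assuming a (hypothetical) 4 KiB page size, any address splits into a page number and an offset within that page:

```python
PAGE_SIZE = 4096  # assumed page size in bytes; real systems vary

def split_address(addr):
    """Split a flat address into (page number, offset within the page)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

print(split_address(10000))  # -> (2, 1808): byte 1808 of page 2
print(split_address(4096))   # -> (1, 0): the first byte of page 1
```

The page number is what the OS looks up in the page table to find the frame (or discover the page has been moved out to the hard drive); the offset stays the same either way.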

Now, if the operating system is looking for a page and can't find it in memory, this is called a page fault. When that happens, it goes to the virtual memory on the hard drive, and once the page is found it is brought back into memory, where it can be easily accessed. This can lead to thrashing.

Basically thrashing occurs when the operating system is repeatedly sending pages to the virtual memory, and also retrieving them, and because hard drives are ridiculously slow compared to the memory, this leads to a significant drop in performance. In fact, in some cases when thrashing occurs it can only be solved through user intervention (such as shutting down that horrendous number of programs that aren't actually being used, or just upgrading your computer).

So, what happens when a page fault occurs is that the computer puts the program to sleep, searches the virtual memory for the required page, brings it back and places it in memory, and then wakes the program up again so that it can continue executing. The page table is also updated with the new information. Demand paging is where a single page is swapped with a page in memory (usually for the same process), whereas a memory swap is where an entire process in memory is swapped with a process on the hard drive.
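Counting page faults under demand paging is a classic exercise. This sketch uses a FIFO replacement policy (evict the page that has been in memory longest) with a textbook-style invented reference string and three frames of memory:

```python
from collections import deque

def fifo_faults(references, frames):
    """Count page faults for a reference string under FIFO replacement."""
    resident = deque()      # pages currently in memory, oldest first
    faults = 0
    for page in references:
        if page not in resident:
            faults += 1                  # page fault: fetch from the hard drive
            if len(resident) == frames:
                resident.popleft()       # evict the oldest resident page
            resident.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], frames=3))  # -> 9
```

Nine of the twelve accesses miss here; every one of those misses is a trip to the hard drive, which is why a high fault rate turns into thrashing.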

This process is tenable where the pages aren't being accessed at random. Pages placed into virtual memory are normally those that aren't expected to be accessed for a period of time. This is one of the major reasons that in programming we don't replicate code that we have already written: it keeps the working set small and prevents extended trips to the virtual memory. Code that is being used regularly is usually kept in memory. The axiom is that a program spends 90% of its time in 10% of its code. This is also known as locality.

There are two types of locality: temporal and spatial. Temporal locality says that code that has recently been used is likely to be used again in the near future. Spatial locality says that code at addresses close together tends to be used together. Exploiting these patterns reduces the number of trips to the virtual memory to fetch pages.
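To see why locality matters, here is a sketch where the same number of memory accesses produces wildly different miss counts. The "memory" is a tiny LRU cache of four frames with 10-byte pages; all the numbers are invented for illustration:

```python
from collections import OrderedDict

def lru_misses(addresses, frames=4, page_size=10):
    """Count cache misses for an access pattern under LRU replacement."""
    cache = OrderedDict()
    misses = 0
    for addr in addresses:
        page = addr // page_size
        if page in cache:
            cache.move_to_end(page)          # mark as recently used
        else:
            misses += 1
            if len(cache) == frames:
                cache.popitem(last=False)    # evict the least recently used page
            cache[page] = True
    return misses

sequential = list(range(100))                 # good spatial locality
scattered = [p * 10 for p in range(10)] * 10  # cycles through 10 different pages
print(lru_misses(sequential))  # -> 10 (one miss per page, then all hits)
print(lru_misses(scattered))   # -> 100 (every single access misses)
```

Both patterns make 100 accesses, but the sequential one touches each page many times in a row while the scattered one cycles through more pages than will fit, so every page has been evicted by the time it is needed again.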

Virtual Machines

These are actually more common than I originally thought. What virtual machines do is add another layer between the API layer and the device-driver layer, called the virtualisation layer. Basically, a virtual computer is run on your computer. These are used by programming languages like Java, which runs its programs on a virtual machine (the JVM) so that the same code works across various operating systems and computers. However, there are other forms of virtual machine, such as VirtualBox by Oracle, and also VMware. These allow you to install operating systems onto the virtual machine, for instance for trial purposes. Emulators, such as that C64 emulator that you use to play the really cool games from the 80s, or DOSBox for those old-school games, are also a form of virtual machine.

The main problem with virtual machines is that they add another layer to the stack, which has the effect of slowing things down. Anything that is running on a virtual machine is going to be less efficient than if it had been installed directly onto the operating system. However, when you consider that some of the emulators deal with games and programs that are decades old, performance issues really aren't going to be a big problem (unless of course they happen to run at the speed of a modern computer, which makes them unplayable).


Creative Commons License

The Operating System - Between it and you by David Alfred Sarkies is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This license only applies to the text and any image that is within the public domain. Any images or videos that are the subject of copyright are not covered by this license. Use of these images is for illustrative purposes only and is not intended to assert ownership. If you wish to use this work commercially please feel free to contact me.
