Operating System

Kernel

The kernel is the core component of an operating system, responsible for managing system resources and providing low-level services to the other components.

Process Scheduler

The process scheduler decides which process runs on the CPU next and for how long, so that each process receives a fair share of CPU time.

Memory manager

The memory manager is responsible for allocating memory, making sure that each process gets its fair share, and ensuring that processes do not interfere with each other's memory.

Filesystem manager

The filesystem manager is responsible for organizing and managing the storage and retrieval of data from secondary storage devices.

Device drivers

Device drivers are software components that provide an interface between the operating system and the hardware devices. They enable the OS and user programs to use hardware devices without worrying about how they work under the hood.

Syscalls

Syscalls provide an interface to the services made available by the operating system. They are generally invoked through wrapper functions in the C standard library, but sometimes assembly has to be used for lower-level tasks.

Types of structuring

Monolithic/Simple

All of the functionality of the kernel is placed into a single static binary that runs in a single address space. While the case studies below underline the obvious disadvantages of this approach, a monolithic structure still results in low overhead, since kernel components communicate directly rather than through messages.

Examples

MS-DOS

Early operating systems like MS-DOS did not anticipate that they would become so popular. As a result, MS-DOS was written to provide the most functionality in the least space. It was not well structured and had no modular design. There was no proper separation between application programs and the hardware: applications could access the display or write to the disk directly, which meant a misbehaving program could crash the entire system. These were not bad software design decisions in themselves; the designers were limited by the hardware of their time.

UNIX

The early UNIX OS was split into the kernel and the system programs. There was some layering involved, where the shells and other interpreters made use of syscalls. Evidence of this monolithic structuring can still be seen in today's UNIX and Windows OSes.

Layered

In the layered approach, the operating system is broken into smaller layers, and applications can make use of these layers. The bottommost layer, layer 0, is the hardware. An operating system layer is an abstract object with data structures and a set of routines that can be invoked by higher layers.

This makes debugging easier, since debugging a layer only involves that layer and the layers below it. If we start from layer 0 and work upwards, we can debug layer by layer, which is far easier than debugging the entire OS at once. Each layer only has knowledge of the routines provided by the layers below it, and of what those routines do. It does not have to know how those routines are implemented or what data structures they use, only what they are capable of.

The difficulty with this approach is defining the layers. To mitigate this, we prefer a small number of layers, with each layer providing more functionality. This retains the advantages of modularity, while avoiding the unwieldiness of working with a fully monolithic OS.

Microkernel

In the microkernel approach, everything that is not strictly necessary is removed from the kernel and re-implemented as system-level or user-level programs. The main purpose of a microkernel is to provide communication, via message passing, between the client program and the services running in user space. This is advantageous because adding a new service does not require modifying the kernel; the service is simply added as a user-level program. This also makes the OS easier to port to different hardware, and more secure, since fewer services run at the kernel level.

While having everything as a user-level service does help with development, it incurs significant overhead: the OS has to switch from one program to another to copy messages between them, and this performance cost has been the single biggest inhibitor of the microkernel structure's growth.

Loadable kernel modules

This is the most widely adopted method for OS design today. The kernel starts up with a set of core components; additional software components called loadable kernel modules (LKMs) can then be linked into the kernel at runtime, without interfering with the running system. This makes it easy to add support for new hardware: all we have to do is implement a driver for it in a kernel module. When a kernel module is no longer needed, it can safely be removed. This resembles an object-oriented approach, where each module has a well-defined interface.
Example OSes that use this are GNU/Linux, Solaris, Windows, and OS X.
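As an illustration, the canonical minimal Linux kernel module looks like this. It is a sketch only: it must be compiled against the kernel headers with the kernel's build system, and loaded/unloaded with insmod and rmmod, so it cannot run as a stand-alone program.

```c
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

/* Called when the module is loaded, e.g. via `insmod hello.ko`. */
static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;
}

/* Called when the module is removed via `rmmod hello`. */
static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

The messages appear in the kernel log (viewable with dmesg), not on the terminal, since the module runs inside the kernel rather than as a process.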

Hybrid

Most OSes today do not actually use the LKM approach exclusively; they use a mix of approaches.

  • Windows uses a mostly monolithic approach, with a microkernel for different subsystems
  • Linux and Solaris have a monolithic core, with dynamically addable kernel modules
  • OS X has a somewhat layered approach: its kernel, XNU (part of Darwin), combines the Mach microkernel with the BSD UNIX kernel, and so it provides both BSD and Mach system calls on macOS. On iOS, the BSD and Mach APIs are heavily restricted and locked down

Processes

A process in an OS is simply a program in execution. It can be considered the fundamental unit of work in an OS.

Resources

Resources are components and facilities that are managed and allocated to processes, to enable them to run and complete their tasks. At its very core, an OS can be thought of as a resource manager.

Categories

  • Physical: CPU, RAM, I/O, storage devices
  • Virtual: virtual memory like swap space or page files, virtual CPUs
  • Software: files and directories, I/O streams, sockets, semaphores
  • Abstract: processes, threads, synchronization objects like mutexes and locks

Security

Components

Some security design components/issues are discussed below.

  • Access control: The OS manages access to resources like files and devices through mechanisms like permissions, Access Control Lists (ACLs), and Attribute-Based Access Control (ABAC).
  • Authentication: The OS verifies the identity of users using mechanisms like passwords and biometrics.
  • Authorization: The OS determines what actions a user can perform on a resource based on their role and privilege. Take a look at GNU Linux > File permissions and attributes
  • Encryption: An OS provides encryption mechanisms like disk encryption and TLS/SSL to protect the confidentiality of data.
  • Integrity: Integrity of data is preserved through mechanisms that prevent unauthorized tampering.
  • Vulnerability management: The OS should provide mechanisms to identify vulnerabilities and apply patches to remediate them.
  • Auditing and logging: The OS should have logging mechanisms in place to monitor security events and threats.