
OPERATING SYSTEM

 INTRODUCTION AND FUNCTIONS OF OPERATING SYSTEM



What is an Operating System?


An Operating System belongs to the category of system software. It manages all the resources of the computer and acts as an interface between application software and the computer hardware. It is designed to manage the overall resources and operations of the machine.

An Operating System is a fully integrated set of specialized programs that handles the operations of the computer. It controls and monitors the execution of all other programs that reside on the computer, including application programs and other system software. Examples of Operating Systems are Windows, Linux, and macOS.

In short, an Operating System (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is the most important type of system software in a computer system.


What is an Operating System Used for?

The operating system helps make effective use of both the computer's software and hardware. Without an OS, it would be very difficult for any application to be user-friendly; the Operating System provides the interface that makes applications accessible and easy to use.

The Operating System ships with a large number of device drivers that make OS services reachable from the hardware environment. Every application present in the system relies on the Operating System.

The operating system works as a communication channel between system hardware and system software. It lets an application use the hardware without knowing the actual hardware configuration. It is one of the most important parts of the system, and hence it is present in every device, whether large or small.


Need for Operating System -


The fundamental goal of an Operating System is to execute user programs and to make common tasks easier; application programs and the underlying hardware are coordinated by the OS to perform this work.
The Operating System is software that manages and controls the entire set of resources and effectively utilizes every part of the computer.

The figure shows how OS acts as a medium between hardware units and application programs.



OS as a platform for Application programs: The operating system provides a platform on top of which other programs, called application programs, can run. These application programs help users perform specific tasks easily. The OS acts as an interface between the computer and the user and is designed to operate, control, and execute various applications on the computer.
 

Managing Input-Output unit: The operating system also allows the computer to manage its own resources such as memory, monitor, keyboard, printer, etc. Management of these resources is required for effective utilization. The operating system controls the various system input-output resources and allocates them to the users or programs as per their requirements.
 

Multitasking: The operating system manages memory and allows multiple programs to run in their own space and even communicate with each other through shared memory. Multitasking gives users a good experience as they can perform several tasks on a computer at a time.        


A platform for other software applications: Different application programs are needed by users to carry out particular system tasks. These applications are managed and controlled by the OS to ensure their effectiveness. In other words, the OS serves as an interface between the user and the applications.

Controls memory: It helps in controlling the computer's main memory. Additionally, it allocates and deallocates memory for all tasks and applications.

Looks after system files: It helps with system file management. All of the data on the system exists as files, and the OS facilitates simple interaction with those files.




Operating System Generations -


Operating systems have been evolving over the years. We can categorize this evolution into different generations, briefly described below:

0th Generation

The term 0th generation refers to the early period of computing, from Charles Babbage's design of the Analytical Engine up to John Atanasoff's computer of around 1940. The hardware component technology at the end of this period was the electronic vacuum tube. No operating system was available for computers of this generation, and programs were written in machine language. These computers were inefficient and dependent on the varying competencies of the individual programmers who acted as operators.

First Generation (1951-1956)

The first generation marked the beginning of commercial computing including the introduction of Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701.

System operation was performed with the help of expert operators and, for a time, without the benefit of an operating system, even though programs began to be written in higher-level, procedure-oriented languages, which expanded the operator's routine. Later, mono-programmed operating systems were developed, which eliminated some of the human intervention in running jobs and provided programmers with a number of desirable functions. These systems still operated under the control of a human operator, who followed a number of steps to execute a program. Programming languages appeared in this period as well; FORTRAN was developed by a team led by John W. Backus in the mid-1950s.

Second Generation (1956-1964)

The second generation of computer hardware was most notably characterised by transistors replacing vacuum tubes as the hardware component technology. The first operating system, GMOS, was developed by General Motors for IBM machines. GMOS was based on a single-stream batch processing system: it collected similar jobs into groups or batches and submitted them to the operating system on punch cards, completing all jobs in one run. After finishing one job, the operating system cleaned up and then read and initiated the next job from the punch cards.

Researchers began to experiment with multiprogramming and multiprocessing in their computing services called the time-sharing system. A noteworthy example is the Compatible Time Sharing System (CTSS), developed at MIT during the early 1960s.

Third Generation (1964-1979)

The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of computers. Hardware technology began to use integrated circuits (ICs) which yielded significant advantages in both speed and economy.

Operating system development continued with the introduction and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer’s data channel I/O capabilities continued to develop.

Another advance that would lead to the personal computers of the fourth generation was the arrival of minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time, indeed, for the development of both computer hardware and the accompanying operating systems.

Fourth Generation (1979 – Present)

The fourth generation is characterized by the appearance of the personal computer and the workstation. The component technology of the third generation was replaced by very large scale integration (VLSI). Many operating systems that we use today, such as Windows, Linux, and macOS, were developed in the fourth generation.



DEVELOPMENTS LEADING TO MODERN OPERATING SYSTEMS

In recent years, operating systems (OS) have undergone significant transformations to meet the evolving demands of hardware, applications, and security threats. These changes have introduced new design elements in both new OSs and updates to existing ones. Key drivers in hardware include multiprocessor systems, increased processor speed, high-speed network attachments, and a diverse range of memory storage devices. On the application side, multimedia applications, Internet and Web access, and client/server computing have had a profound influence on OS design. Additionally, the escalating security risks associated with Internet access, such as viruses, worms, and hacking techniques, have had a significant impact on OS design, necessitating stronger security measures.

To address these evolving demands, a variety of approaches and design elements have been explored. One such approach is the microkernel architecture, which assigns only essential functions to the kernel and delegates other OS services to processes or servers running in user mode. This separation simplifies implementation, provides flexibility, and lends itself well to distributed environments. By interacting with local and remote server processes in a similar manner, a microkernel facilitates the construction of distributed systems.

Multithreading is another crucial technique where a process is divided into multiple threads that can execute concurrently. This allows for better utilization of resources and improved performance, particularly for applications that involve independent tasks. For example, a database server that handles numerous client requests can benefit from multithreading, as the switching between threads involves less overhead compared to switching between different processes.

Symmetric multiprocessing (SMP) takes advantage of computer hardware architecture and OS behavior. An SMP system schedules processes or threads across multiple processors, resulting in potential benefits such as increased performance, availability (as a single processor failure does not halt the system), incremental growth (by adding additional processors), and scaling (offering a range of products with different price and performance characteristics based on the number of configured processors). Multithreading and SMP are often discussed together, as they complement each other and can be used effectively in combination.

In addition to these developments, object-oriented design has been introduced to OS development. This approach provides discipline in adding modular extensions to a small kernel and enables programmers to customize an OS without compromising system integrity. Object-oriented design also facilitates the development of distributed tools and full-fledged distributed operating systems.

In conclusion, the modern evolution of operating systems has been driven by the need to adapt to advancements in hardware, applications, and security threats. The introduction of design elements such as the microkernel architecture, multithreading, symmetric multiprocessing, distributed systems, and object-oriented design has enabled OSs to enhance performance, scalability, and security while meeting the changing requirements of users and technology.



VIRTUAL MACHINES


A virtual machine (VM) in the context of operating systems refers to the emulation of a computer system within another computer system. It allows multiple operating systems to run on a single physical machine, providing isolation, flexibility, and resource management. Here's an explanation along with an example:

  1. Virtualization: Virtualization is the process of creating a virtual version of something, such as hardware, storage, or an operating system. In the context of operating systems, virtualization often involves the creation of virtual machines.


  2. Hypervisor: A hypervisor, also known as a Virtual Machine Monitor (VMM), is a software layer that enables the creation and management of virtual machines on a physical host machine. There are two types of hypervisors:


    • Type 1 Hypervisor (Bare-metal Hypervisor): It runs directly on the host hardware to control the hardware and to manage guest operating systems. Examples include VMware ESXi and Microsoft Hyper-V.

    • Type 2 Hypervisor (Hosted Hypervisor): It runs on top of the host operating system and allows the creation of virtual machines as applications. Examples include VMware Workstation, Oracle VirtualBox, and Parallels Desktop.

  3. Virtual Machine (VM):

    • A virtual machine is an emulation of a computer system that runs an operating system and applications.
    • Each VM is isolated from other VMs on the same physical machine, and they share the underlying hardware resources.
    • VMs are created and managed by the hypervisor, which allocates resources, provides an abstraction layer, and facilitates communication between VMs and the physical hardware.

Example:

Let's consider an example using a Type 2 Hypervisor (Hosted Hypervisor) like VMware Workstation or Oracle VirtualBox:

  1. Install a Hypervisor:

    • Download and install a Type 2 hypervisor on your physical machine. For this example, let's use Oracle VirtualBox.
  2. Create a Virtual Machine:

    • Open VirtualBox and create a new virtual machine.
    • Specify details such as the amount of RAM, virtual hard disk size, and the ISO file for the guest operating system.
  3. Install Guest Operating System:

    • Start the virtual machine, and it will boot from the specified ISO file.
    • Install the guest operating system (e.g., Linux or Windows) within the virtual machine.
  4. Run Multiple Virtual Machines:

    • Repeat the process to create and run multiple virtual machines on the same physical host.
    • Each virtual machine operates independently and can run a different operating system.
  5. Isolation and Resource Management:

    • Each VM is isolated from others, and their resource usage (CPU, RAM, disk space) can be controlled and configured by the hypervisor.
    • The hypervisor manages the allocation of resources and ensures that each VM gets its fair share.
  6. Flexibility and Portability:

    • Virtual machines can be easily cloned, copied, or moved between different host machines, providing flexibility and portability.
  7. Use Cases:

    • Testing and development: VMs provide a sandbox environment for testing software in different operating systems.
    • Server consolidation: Multiple virtual servers can run on a single physical server, improving resource utilization.
    • Isolation and security: VMs can provide isolation between applications or services for security purposes.
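The GUI steps above can also be scripted. As a hedged sketch, assuming Oracle VirtualBox and its `VBoxManage` command-line tool are installed (the VM name "demo-vm" and all sizes are illustrative values, not prescribed ones):

```shell
#!/bin/sh
# Sketch: create and start a VM from the command line with VBoxManage.
create_demo_vm() {
    VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register
    VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2    # RAM and CPUs
    VBoxManage createmedium disk --filename demo-vm.vdi --size 10240
    VBoxManage storagectl "demo-vm" --name "SATA" --add sata
    VBoxManage storageattach "demo-vm" --storagectl "SATA" \
        --port 0 --device 0 --type hdd --medium demo-vm.vdi
    VBoxManage startvm "demo-vm"
}

# Only attempt this when VirtualBox is actually installed.
if command -v VBoxManage >/dev/null 2>&1; then
    create_demo_vm
else
    echo "VBoxManage not found; skipping VM creation"
fi
```

Attaching an ISO and installing the guest OS then proceed as in the GUI steps above.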

In summary, virtual machines in operating systems enable the running of multiple isolated environments on a single physical machine, allowing for flexibility, resource management, and improved utilization of hardware resources.


INTRODUCTION TO LINUX OS


Linux is a family of open-source, Unix-like operating systems based on the Linux kernel. It is a versatile and powerful operating system that has gained popularity in various computing environments, ranging from servers and mainframes to personal computers and embedded systems.

Architecture of Linux:




  1. Kernel: The kernel is the core of the Linux-based operating system. It virtualizes the common hardware resources of the computer to provide each process with its own virtual resources, making each process appear as if it were the sole process running on the machine. The kernel is also responsible for preventing and mitigating conflicts between different processes. Different types of kernel are:

    • Monolithic kernels
    • Hybrid kernels
    • Exokernels
    • Microkernels

  2. System Library: Linux uses system libraries, also known as shared libraries, to implement various functionalities of the operating system. These libraries contain pre-written code that applications can use to perform specific tasks. By using these libraries, developers save time and effort, as they don’t need to write the same code repeatedly. System libraries act as an interface between applications and the kernel, providing a standardized and efficient way for applications to interact with the underlying system.
  3. Shell: The shell is the user interface of the Linux Operating System. It allows users to interact with the system by entering commands, which the shell interprets and executes. The shell serves as a bridge between the user and the kernel, forwarding the user’s requests to the kernel for processing. It provides a convenient way for users to perform various tasks, such as running programs, managing files, and configuring the system.
  4. Hardware Layer: The hardware layer encompasses all the physical components of the computer, such as RAM (Random Access Memory), HDD (Hard Disk Drive), CPU (Central Processing Unit), and input/output devices. This layer provides the physical resources for the system and applications to function properly. The Linux kernel and system libraries enable communication and control over these hardware components, ensuring that they work harmoniously together.
  5. System Utility: System utilities are essential tools and programs provided by the Linux Operating System to manage and configure various aspects of the system. These utilities perform tasks such as installing software, configuring network settings, monitoring system performance, managing users and permissions, and much more. System utilities simplify system administration tasks, making it easier for users to maintain their Linux systems efficiently.
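The layers above can be observed directly from a shell. A small illustrative sketch (output varies by system; `ldd` is common on Linux but not universal, so it is guarded):

```shell
#!/bin/sh
# Kernel: ask the kernel to identify itself via the uname utility.
kernel=$(uname -r)
echo "Kernel release: $kernel"

# System libraries: list a few shared libraries that a common program
# (here /bin/ls) links against.
if command -v ldd >/dev/null 2>&1; then
    ldd /bin/ls | head -n 3
fi

# Shell: show which program is interpreting this script.
echo "Interpreter: $0"
```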

Key Characteristics:

  1. Open Source:

    • Linux is distributed under open-source licenses, meaning the source code is freely available for anyone to view, modify, and distribute.
    • This open nature fosters collaboration and a large community of developers contributing to its development and improvement.
  2. Kernel:

    • The Linux kernel is the core of the operating system. It manages hardware resources, provides essential services, and acts as an interface between software applications and the hardware.
  3. Multi-User and Multi-Tasking:

    • Linux supports multiple users working on the system simultaneously, each with their own user account and workspace.
    • It allows multitasking, enabling several processes to run concurrently.
  4. Multiplatform:

    • Linux is designed to run on a variety of hardware architectures, including x86, x86_64, ARM, MIPS, and more. This versatility makes it suitable for different types of devices, from servers to embedded systems.
  5. File System:

    • Linux uses a hierarchical file system, where files and directories are organized in a tree-like structure.
    • File systems like Ext4, XFS, and Btrfs are commonly used in Linux.
  6. Shell and Command-Line Interface (CLI):

    • Linux provides a powerful command-line interface (CLI) where users can interact with the system by entering commands.
    • The shell is the command processor that interprets and executes user commands.
  7. Package Management:

    • Linux distributions use package management systems (e.g., APT, YUM, Zypper) to simplify the installation, upgrading, and removal of software packages.
    • Software is often distributed in packages, making it easy to manage dependencies.
  8. Security:

    • Linux has a strong security model, with features such as user permissions, access control lists, and a robust set of security tools.
    • Regular security updates and patches contribute to the overall security of the system.
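The permission model mentioned under Security can be seen with ordinary commands. A small sketch (the file name is illustrative; `chmod 600` restricts a file to its owner):

```shell
#!/bin/sh
# Create a scratch file and restrict it to owner read/write only.
tmpdir=$(mktemp -d)
file="$tmpdir/secret.txt"
echo "confidential" > "$file"

chmod 600 "$file"                     # rw for owner, nothing for group/others
perms=$(ls -l "$file" | cut -c1-10)   # first column of ls -l: the mode string
echo "Permissions: $perms"

rm -r "$tmpdir"                       # clean up the scratch area
```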

Distributions (Distros):

  • Linux comes in various distributions, often referred to as "distros." Each distribution is a variation of the Linux operating system, bundled with a specific package manager, set of default applications, and system configurations. Some popular Linux distributions include Ubuntu, Fedora, Debian, CentOS, Arch Linux, and many more.

Use Cases:

  1. Server Environments:

    • Linux is widely used as a server operating system, powering a significant portion of web servers, cloud servers, and enterprise servers.
  2. Desktop and Laptop Systems:

    • Linux can be used as a desktop operating system, providing a user-friendly interface with various desktop environments (e.g., GNOME, KDE, XFCE).
  3. Development and Programming:

    • Linux is a preferred choice for developers and programmers due to its support for various programming languages, tools, and development environments.
  4. Embedded Systems:

    • Linux is commonly used in embedded systems, providing a customizable and lightweight platform for devices like routers, set-top boxes, and IoT devices.
  5. Educational Purposes:

    • Linux is often used in educational environments for teaching computer science and operating system concepts.

In summary, Linux is a robust, open-source operating system with a rich set of features, making it suitable for a wide range of applications and computing environments. Its flexibility, stability, and strong community support contribute to its widespread adoption.


Advantages of Linux

  • The main advantage of Linux is that it is an open-source operating system: the source code is freely available to everyone, and you are allowed to study, modify, and distribute it without any special permission.
  • In terms of security, Linux is generally more secure than most other operating systems. This does not mean Linux is 100 percent secure; some malware exists for it, but it is less vulnerable than most alternatives, so anti-virus software is usually not required.
  • Software updates in Linux are easy and frequent.
  • Various Linux distributions are available, so you can choose one according to your requirements or taste.
  • Linux is freely available to download from the internet.
  • It has large community support.
  • It provides high stability: it rarely slows down or freezes, and there is no need to reboot it frequently.
  • It maintains the privacy of the user.
  • Linux performs well: it allows a large number of users to work at the same time and handles them efficiently.
  • It is network friendly.
  • The flexibility of Linux is high: there is no need to install a complete Linux suite; you can install only the required components.
  • Linux is compatible with a large number of file formats.
  • It is fast and easy to install from the web, and it can be installed on almost any hardware, even an old computer system.
  • It performs tasks properly even with limited space on the hard disk.

Disadvantages of Linux

  • It is not very user-friendly, so it may be confusing for beginners.
  • It has fewer peripheral hardware drivers compared to Windows.

Introduction to Linux Shell and Shell Scripting


If we are using any major operating system, we are indirectly interacting with a shell. While running Ubuntu, Linux Mint, or any other Linux distribution, we interact with the shell through the terminal. Before discussing Linux shells and shell scripting, we need to be familiar with the following terminologies:

  • Kernel
  • Shell
  • Terminal

What is Kernel?

The kernel is a computer program that is the core of a computer’s operating system, with complete control over everything in the system. It manages the following resources of the Linux system –

  • File management
  • Process management
  • I/O management
  • Memory management
  • Device management etc.

What is Shell?

A shell is a special user program that provides an interface for the user to use operating system services. Shell accepts human-readable commands from users and converts them into something which the kernel can understand. It is a command language interpreter that executes commands read from input devices such as keyboards or from files. The shell gets started when the user logs in or starts the terminal.


                                                                             
Linux Shell

Shell is broadly classified into two categories –

  • Command Line Shell
  • Graphical shell

Command Line Shell

The shell can be accessed by users through a command-line interface. A special program, called Terminal in Linux/macOS or Command Prompt in Windows, is provided to type in human-readable commands such as “cat” and “ls”, which the shell then executes. The result is displayed on the terminal to the user.
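For instance, a short command-line session might look like the following sketch (run in a scratch directory so no real files are touched; the file name is illustrative):

```shell
#!/bin/sh
# Make a scratch directory so the demo does not touch real files.
tmpdir=$(mktemp -d)
cd "$tmpdir"

echo "hello from the shell" > greeting.txt   # create a small file
listing=$(ls)                                # ls lists directory contents
content=$(cat greeting.txt)                  # cat prints file contents

echo "$listing"    # greeting.txt
echo "$content"    # hello from the shell

cd / && rm -r "$tmpdir"
```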


Graphical Shells

Graphical shells provide means for manipulating programs through a graphical user interface (GUI), allowing operations such as opening, closing, moving, and resizing windows, as well as switching focus between windows. Windows and Ubuntu are good examples of systems that provide a GUI for interacting with programs; users do not need to type in commands for every action.


There are several shells available for Linux systems, such as:

  • BASH (Bourne Again SHell) – It is the most widely used shell in Linux systems. It is used as default login shell in Linux systems and in macOS. It can also be installed on Windows OS.
  • CSH (C SHell) – The C shell’s syntax and its usage are very similar to the C programming language.
  • KSH (Korn SHell) – The Korn Shell also served as the base for the POSIX shell standard specification.

Each shell does the same job but understands different commands and provides different built-in functions.
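You can check which shell you are currently running. A quick sketch (output varies by system and configuration):

```shell
#!/bin/sh
# $$ is the PID of the current shell; ps reports the command name for that PID.
current=$(ps -p $$ -o comm=)
echo "Running under: $current"

# $SHELL holds the user's default login shell
# (not necessarily the shell running this script).
echo "Default login shell: $SHELL"
```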


What is a terminal?

A terminal is a program responsible for providing an interface through which a user can access the shell. It allows users to enter commands and see their output in a text-based interface. Large scripts written to automate and perform complex tasks are also executed in the terminal.

To access the terminal, simply search for “terminal” in the search box and double-click it.


Shell Scripting


Usually, shells are interactive, which means they accept commands as input from users and execute them. However, sometimes we want to execute a bunch of commands routinely, and typing all of those commands in the terminal each time is repetitive.

As a shell can also take commands as input from a file, we can write these commands in a file and execute them in the shell to avoid this repetitive work. Such files are called shell scripts or shell programs. Shell scripts are similar to batch files in MS-DOS. Each shell script is typically saved with the `.sh` file extension, e.g., myscript.sh.
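For example, a minimal `myscript.sh` can be created, made executable, and run like this (the file name and its contents are illustrative; a `bash` interpreter is assumed to be available):

```shell
#!/bin/sh
# Write a tiny shell script to a scratch directory.
tmpdir=$(mktemp -d)
cat > "$tmpdir/myscript.sh" <<'EOF'
#!/bin/bash
name="Linux"
echo "Hello from $name"
EOF

chmod +x "$tmpdir/myscript.sh"     # make the script executable
out=$("$tmpdir/myscript.sh")       # run it and capture its output
echo "$out"                        # Hello from Linux

rm -r "$tmpdir"
```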

A shell script has syntax just like any other programming language. If you have prior experience with a programming language such as Python or C/C++, it is very easy to get started.

A shell script comprises the following elements –

  • Shell Keywords – if, else, break etc.
  • Shell commands – cd, ls, echo, pwd, touch etc.
  • Functions
  • Control flow – if..then..else, case and shell loops etc.
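These elements can be combined in a short script. A sketch using a keyword (`if`), a command (`echo`), a function, and a loop:

```shell
#!/bin/sh
# A function: greet one name passed as an argument.
greet() {
    echo "Hi, $1"
}

total=0
# A loop over a fixed list, with an if/else inside it.
for n in 1 2 3 4; do
    if [ $((n % 2)) -eq 0 ]; then
        total=$((total + n))   # keep a running sum of the even numbers
    else
        greet "number $n"
    fi
done

echo "Sum of evens: $total"    # Sum of evens: 6
```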

Here are some commonly used Bash shell commands along with brief explanations:

  1. Navigation:

    • cd [directory]: Change directory.
    • ls [options] [path]: List directory contents.
    • pwd : Print working directory.
    • mkdir [directory]: Create a directory.

  2. File Manipulation:

    • touch [file]: Create an empty file or update file timestamp.
    • cp [source] [destination]: Copy files or directories.
    • mv [source] [destination]: Move or rename files or directories.
    • rm [options] [file]: Remove files or directories.
  3. Viewing and Editing Files:

    • cat [file]: Concatenate and display file content.
    • more [file]: Display file content page by page.
    • less [file]: Display file content interactively.
    • nano [file] or vim [file]: Open a text editor to create or edit files.
  4. File Permissions:

    • chmod [permissions] [file]: Change file permissions.
    • chown [owner]:[group] [file]: Change file owner and group.
  5. Searching:

    • grep [pattern] [file]: Search for a pattern in a file.
    • find [path] -name [filename]: Find files by name.
    • locate [file]: Find files quickly using a pre-built index.
  6. Text Processing:

    • awk: Pattern scanning and processing language.
    • sed: Stream editor for filtering and transforming text.
  7. Processes:

    • ps [options]: Display information about active processes.
    • kill [options] [PID]: Terminate a process.
  8. System Information:

    • uname [options]: Display system information.
    • df -h: Display disk space usage.
    • free -h: Display memory usage.
  9. Networking:

    • ping [host]: Send ICMP Echo Request to a network host.
    • ifconfig or ip a: Display network interface information.
    • netstat [options]: Display network connections and routing tables.

  10. Compress and Archive:

    • tar [options]: Create or extract tar archives.
    • gzip [file] or gunzip [file]: Compress or decompress files using gzip.

  11. User Management:

    • whoami: Display the current username.
    • passwd: Change user password.

  12. Environment Variables:

    • echo $VAR: Print the value of an environment variable.
    • export VAR=value: Set an environment variable.

  13. Job Control:

    • bg [job]: Move a job to the background.
    • fg [job]: Bring a background job to the foreground.
    • jobs: List active jobs.
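Several of the commands above can be seen working together in one short session (run in a throwaway directory; file names are illustrative):

```shell
#!/bin/sh
tmpdir=$(mktemp -d)       # scratch area so nothing real is touched
cd "$tmpdir"

mkdir project             # create a directory
touch project/notes.txt   # create an empty file
echo "todo: learn grep" > project/notes.txt

cp project/notes.txt project/backup.txt    # copy a file
match=$(grep -c "grep" project/notes.txt)  # search: count matching lines
found=$(find . -name "*.txt" | wc -l)      # find the .txt files

echo "grep matches: $match"   # grep matches: 1
echo "txt files:    $found"   # txt files:    2

cd / && rm -r "$tmpdir"
```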



PROCESS MANAGEMENT



 
