System functionality
| system |
| --- |
| User space interfaces, System calls |
| Driver Model |
| modules |
| buses, PCI |
| hardware interfaces, [re]booting |
This article describes the infrastructure used to support and manage other kernel functionality. The chapter is named after system calls and sysfs.
User space communication
User space communication refers to the exchange of data and messages between user space applications and the kernel. User space applications are programs that run in user space, a protected area of memory that provides a safe and isolated environment for applications to run in.
There are several mechanisms available in Linux for user space communication with the kernel. One of the most common mechanisms is through system calls, which are functions that allow user space applications to request services from the kernel, such as opening files, creating processes, and accessing system resources.
Another mechanism for user space communication is device files, which are special files that represent physical or virtual devices, such as storage devices, network interfaces, and various peripheral devices. User space applications can communicate with these devices by reading from and writing to their corresponding device files.
In summary, the Linux kernel provides several mechanisms for user space communication, including system calls, device files, procfs, sysfs, and devtmpfs. These mechanisms enable user space applications to communicate with the kernel and access system resources in a safe and controlled manner.
⚲ APIs:
- kernel space API for user space
- user space API for kernel space
📖 References
System calls
System calls are the fundamental interface between user space applications and the Linux kernel. They provide a way for programs to request services from the operating system, such as opening a file, allocating memory, or creating a new process. In the Linux kernel, system calls are implemented as functions that can be invoked by user space programs using a software interrupt mechanism.
The Linux kernel provides hundreds of system calls, each with its own unique functionality. These system calls are organized into categories such as process management, file management, network communication, and memory management. User space applications can use these system calls to interact with the kernel and access the underlying system resources.
⚲ API
⚙️ Internals
- linux/syscalls.h inc
- syscall_init id installs entry_SYSCALL_64 id
- man 2 syscall ↪
- entry_SYSCALL_64 id ↯ call hierarchy:
📖 References
- System call
- Directory of system calls, man section 2
- Anatomy of a system call, part 1 and part 2
- syscalls ltp
💾 Historical
Device files
Classic UNIX devices are character devices, used as byte streams and controlled with man 2 ioctl.
⚲ API
- ls /dev
- cat /proc/devices
- cat /proc/misc
Examples: misc_fops id usb_fops id memory_fops id
- Allocated devices doc
- drivers/char src - actually byte stream devices
- Chapter 13. I/O Architecture and Device Drivers
hiddev
⚠️ Warning: naming confusion. A hiddev is not necessarily a real human interface device! It reuses the USBHID infrastructure. hiddev is used, for example, for monitor controls and uninterruptible power supplies. This module supports these devices separately, using a separate event interface on /dev/usb/hiddevX (char 180:96 to 180:111) (⚙️ HIDDEV_MINOR_BASE id)
⚲ API
⚙️ Internals
- CONFIG_USB_HIDDEV
- linux/hiddev.h inc
- hiddev_event id
- drivers/hid/usbhid/hiddev.c src, hiddev_fops id
📖 References
Administration
🔧 TODO
📖 References
procfs
The proc filesystem (procfs) is a special filesystem that presents information about processes and other system information in a hierarchical, file-like structure, providing a more convenient and standardized method for dynamically accessing process data held in the kernel than traditional tracing methods or direct access to kernel memory. Typically, it is mapped to a mount point named /proc at boot time. The proc file system acts as an interface to internal data structures in the kernel. It can be used to obtain information about the system and to change certain kernel parameters at runtime.
/proc includes a directory for each running process (including kernel threads) in directories named /proc/PID, where PID is the process number. Each directory contains information about one process, including: the command that originally started the process (/proc/PID/cmdline); the names and values of its environment variables (/proc/PID/environ); a symlink to its working directory (/proc/PID/cwd); another symlink to the original executable file, if it still exists (/proc/PID/exe); a couple of directories with symlinks to each open file descriptor (/proc/PID/fd) and the status (position, flags, ...) of each of them (/proc/PID/fdinfo); information about mapped files and blocks like heap and stack (/proc/PID/maps); a binary image representing the process's virtual memory (/proc/PID/mem); a symlink to the root path as seen by the process (/proc/PID/root); a directory containing hard links to any child process or thread (/proc/PID/task); basic information about a process, including its run state and memory usage (/proc/PID/status); and much more.
📖 References
sysfs
sysfs is a pseudo-file system that exports information about various kernel subsystems, hardware devices, and associated device drivers from the kernel's device model to user space through virtual files. In addition to providing information, these exported virtual files are also used to configure devices and kernel subsystems. Sysfs is designed to export the information present in the device tree, which then no longer clutters up procfs.
Sysfs is mounted under the /sys mount point.
⚲ API
📖 References
devtmpfs
devtmpfs is a hybrid kernel/user-space device filesystem that provides device nodes before udev runs for the first time.
📖 References
Containerization
Containerization is a powerful technology that has revolutionized the way software applications are developed, deployed, and run. At its core, containerization provides an isolated environment for running applications, where the application has all the necessary dependencies and can be easily moved from one environment to another without worrying about any compatibility issues.
Containerization technology has its roots in the chroot command, which was introduced in the Unix operating system in 1979. Chroot provided a way to change the root directory of a process, effectively creating a new isolated environment with its own file system hierarchy. However, this early implementation of containerization had limited functionality, and it was difficult to manage and control the various processes running within the container.
In the early 2000s, the Linux kernel introduced namespaces and control groups to provide a more robust and scalable containerization solution. Namespaces allow processes to have their own isolated view of the system, including the file system, network, and process ID space, while control groups provide fine-grained control over the resources allocated to each container, such as CPU, memory, and I/O.
Using these kernel features, containerization platforms such as Docker and Kubernetes have emerged as popular solutions for building and deploying containerized applications at scale. Containerization has become an essential tool for modern software development, allowing developers to easily package applications and deploy them in a consistent and predictable manner across different environments.
Resources usage and limits
⚲ API
- man 2 chroot – change root directory
- man 2 sysinfo – return system information
- man 2 getrusage – get resource usage
- get/set resource limits:
📖 References
Namespaces
Linux namespaces provide a way to isolate and virtualize different aspects of the operating system. Namespaces allow multiple instances of an application to run in isolation from each other, without interfering with the host system or other instances.
🔧 TODO
⚲ API
- /proc/self/ns
- man 8 lsns, man 2 ioctl_ns ↪ ns_ioctl id
- man 1 unshare, man 2 unshare
- man 1 nsenter, man 2 setns
- man 2 clone3 ↪ clone_args id
- linux/ns_common.h inc
- linux/proc_ns.h inc
- namespaces definition
⚙️ Internals
- init_nsproxy id - struct of namespaces
- kernel/nsproxy.c src
- fs/namespace.c src
- fs/proc/namespaces.c src
- net/core/net_namespace.c src
- kernel/time/namespace.c src
- kernel/user_namespace.c src
- kernel/pid_namespace.c src
- kernel/utsname.c src
- kernel/cgroup/namespace.c src
- ipc/namespace.c src
📖 References
- man 7 namespaces
- man 7 uts_namespaces
- man 7 ipc_namespaces
- man 7 mount_namespace
- man 7 pid_namespaces
- man 7 network_namespaces
- man 7 user_namespaces
- man 7 time_namespaces
- man 7 cgroup_namespaces
Control groups
cgroups are used to limit and control the resource usage of groups of processes. They allow administrators to set limits on CPU usage, memory usage, disk I/O, network bandwidth, and other resources, which can be useful for managing system performance and preventing resource contention.
There are two versions of cgroups. Unlike v1, cgroup v2 has only a single process hierarchy and discriminates between processes, not threads.
Here are some of the key differences between cgroups v1 and v2:
| | cgroups v1 | cgroups v2 |
| --- | --- | --- |
| Hierarchy | each subsystem has its own hierarchy, which can lead to complexity and confusion | unified hierarchy, which simplifies management and enables better resource allocation |
| Controllers | several subsystems controlled by separate controllers, each with its own set of configuration files and parameters | controllers consolidated into a single "cgroup2" controller, which provides a unified interface for managing resources |
| Resource distribution | resources are distributed among groups of processes based on proportional sharing, which can lead to unpredictable results | resources are distributed based on a "weighted fair queuing" algorithm, which provides better predictability and fairness |
Cgroups v2 is not backward compatible with cgroups v1, which means that migrating from v1 to v2 can be challenging and requires careful planning.
🔧 TODO
⚲ API
- linux/cgroup.h inc
- linux/cgroup-defs.h inc
- css_set id – holds set of reference-counted pointers to cgroup_subsys_state id objects
- cgroup_subsys id
- linux/cgroup_subsys.h inc – list of cgroup subsystems
⚙️ Internals
- cg_list id – list of css_set id in task_struct
- kernel/cgroup src
- cgroup_init id
- cgroup2_fs_type id
📖 References
- Control Groups v1 doc
- man 1 systemd-cgtop
- man 5 systemd.slice – slice unit configuration
- man 7 cgroups
- man 7 cgroup_namespaces
- CFS Bandwidth Control for cgroups doc
- Real-Time group scheduling doc
📚 Further reading
💾 Historical
- https://github.com/mk-fg/cgroup-tools for cgroup v1
Driver Model
The Linux driver model (or Device Model, or just DM) is a framework that provides a consistent and standardized way for device drivers to interface with the kernel. It defines a set of rules, interfaces, and data structures that enable device drivers to communicate with the kernel and perform various operations, such as managing resources and device lifecycle.
DM core structure consists of DM classes, DM buses, DM drivers and DM devices.
kobject
In the Linux kernel, a kobject id is a fundamental data structure used to represent kernel objects and provide a standardized interface for interacting with them. A kobject is a generic object that can represent any type of kernel object, including devices, files, modules, and more.
The kobject data structure contains several fields that describe the object, such as its name, type, parent, and operations. Each kobject has a unique name within its parent object, and the parent-child relationships form a hierarchy of kobjects.
Kobjects are managed by the kernel's sysfs file system, which provides a virtual file system that exposes kernel objects as files and directories in the user space. Each kobject is associated with a sysfs directory, which contains files and attributes that can be read or written to interact with the kernel object.
⚲ Infrastructure API
Classes
A class is a higher-level view of a device that abstracts out low-level implementation details. Drivers may see an NVMe storage device or a SATA storage device, but, at the class level, they are all simply block_class id devices. Classes allow user space to work with devices based on what they do, rather than how they are connected or how they work. The general DM class structure matches the composite pattern.
⚲ API
- ls /sys/class/
- class_register id registers class id
- linux/device/class.h inc
👁 Examples: input_class id, block_class id net_class id
Buses
A peripheral bus is a channel between the processor and one or more peripheral devices. A DM bus is a proxy for a peripheral bus. The general DM bus structure matches the composite pattern. For the purposes of the device model, all devices are connected via a bus, even if it is an internal, virtual platform_bus_type id. Buses can plug into each other: a USB controller is usually a PCI device, for example. The device model represents the actual connections between buses and the devices they control. A bus is represented by the bus_type id structure. It contains the name, the default attributes, the bus' methods, PM operations, and the driver core's private data.
⚲ API
- ls /sys/bus/
- bus_register id registers bus_type id
- linux/device/bus.h inc
👁 Examples: usb_bus_type id, hid_bus_type id, pci_bus_type id, scsi_bus_type id, platform_bus_type id
Drivers
⚲ API
- ls /sys/bus/:/drivers/
- module_driver id - simple common driver initializer, 👁 for example used in module_pci_driver id
- driver_register id registers device_driver id - the basic device driver structure, one per driver, shared by all device instances.
- linux/device/driver.h inc
👁 Examples: hid_generic id usb_register_device_driver id
Platform drivers
- module_platform_driver id registers platform_driver id (platform wrapper of device_driver id) with platform_bus_type id
- linux/platform_device.h inc
👁 Examples: gpio_mouse_device_driver id
Devices
⚲ API
- ls /sys/devices/
- device_register id registers device id - the basic device structure, per each device instance
- linux/device.h inc – Device drivers infrastructure doc
- linux/dev_printk.h inc
- Device Resource Management doc, devres, devm ...
👁 Examples: platform_bus id mousedev_create
Platform devices
- platform_device id - platform wrapper of struct device - the basic device structure doc, contains resources associated with the device
- it can be created dynamically with platform_device_register_simple id or platform_device_alloc id, or registered with platform_device_register id.
- platform_device_unregister id - releases device and associated resources
👁 Examples: add_pcspkr id
⚲ API 🔧 TODO
- platform_device_info platform_device_id platform_device_register_full platform_device_add
- platform_device_add_data platform_device_register_data platform_device_add_resources
- attribute_group dev_pm_ops
⚙️ Internals
📖 References
Modules
⚲ API
- lsmod
- cat /proc/modules
⚙️ Internals
📖 References
- LDD3: Building and Running Modules
- http://www.xml.com/ldd/chapter/book/ch02.html
- http://www.tldp.org/LDP/tlk/modules/modules.html
- http://www.tldp.org/LDP/lkmpg/2.6/html/ The Linux Kernel Module Programming Guide
Buses
Peripheral buses are the communication channels used to connect various peripheral devices to a computer system. These buses are used to transfer data between the peripheral devices and the system's processor or memory. In the Linux kernel, peripheral buses are implemented as drivers that enable communication between the operating system and the hardware.
Peripheral buses in the Linux kernel include USB, PCI, SPI, I2C, and more. Each of these buses has its own unique characteristics, and the Linux kernel provides support for a wide range of peripheral devices.
The PCI (Peripheral Component Interconnect) bus is used to connect internal hardware devices in a computer system. It is commonly used to connect graphics cards, network cards, and other expansion cards. The Linux kernel provides a PCI bus driver that enables communication between the operating system and the devices connected to the bus.
The USB (Universal Serial Bus) is one of the most commonly used peripheral buses in modern computer systems. It allows devices to be hot-swapped and supports high-speed data transfer rates.
🔧 TODO: device enumeration
⚲ API
- Shell interface: ls /proc/bus/ /sys/bus/
See also Buses of Driver Model
See Input: keyboard, mouse etc
PCI
⚲ Shell API
- lspci -vv
- column -t /proc/bus/pci/devices
Main article: PCI
USB
⚲ Shell API
- lsusb -v
- ls /sys/bus/usb/
- cat /proc/bus/usb/devices
⚙️ Internals
📖 References
Other buses
Buses for 🤖 embedded devices:
- linux/gpio/driver.h inc linux/gpio.h inc drivers/gpio src tools/gpio src
- drivers/i2c src https://i2c.wiki.kernel.org
SPI
⚲ API
⚙️ Internals
📖 References
Hardware interfaces
Hardware interfaces are a basic part of any operating system, enabling communication between the processor and the other hardware components of a computer system: memory, peripheral devices and buses, and various controllers.
I/O ports and registers
I/O ports and registers are electronic components in computer systems that enable communication between the CPU and other electronic controllers and devices.
⚲ API
linux/regmap.h inc — register map access API
asm-generic/io.h inc — generic I/O port emulation.
- ioread32 id / iowrite32 id ...
- The {in,out}[bwl] macros are for emulating x86-style PCI/ISA IO space:
linux/ioport.h inc — definitions of routines for detecting, reserving and allocating system resources.
Functions for memory mapped registers:
ioremap id ...
Hardware Device Drivers
Keywords: firmware, hotplug, clock, mux, pin
⚙️ Internals
- drivers/acpi src
- drivers/base src
- drivers/sdio src - Secure Digital Input Output
- drivers/virtio src
- drivers/hwmon src
- drivers/thermal src
- drivers/pinctrl src
- drivers/clk src
📖 References
- Pin control subsystem doc
- Linux Hardware Monitoring doc
- Firmware guide doc
- Devicetree doc
- https://hwmon.wiki.kernel.org/
- LDD3:The Linux Device Model
- http://www.tldp.org/LDP/tlk/dd/drivers.html
- http://www.xml.com/ldd/chapter/book/
- http://examples.oreilly.com/linuxdrive2/
Booting and halting
Kernel booting
The kernel is loaded in two stages: in the first stage the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions such as essential hardware setup and basic memory management (memory paging) are set up. Control is then switched one final time to the main kernel start process by calling start_kernel id, which performs the majority of system setup (interrupts, the rest of memory management, device and driver initialization, etc.) before separately spawning the idle process, the scheduler, and the init process (which is executed in user space).
Kernel loading stage
The kernel as loaded is typically an image file, compressed into either zImage or bzImage formats with zlib. A routine at the head of it does a minimal amount of hardware setup, decompresses the image fully into high memory, and takes note of any RAM disk if configured. It then executes kernel startup via startup_64 (for x86_64 architecture).
- arch/x86/boot/compressed/vmlinux.lds.S src - linker script defines entry startup_64 id in
- arch/x86/boot/compressed/head_64.S src - assembly of extractor
- extract_kernel id - extractor in language C
- prints
Decompressing Linux... done. Booting the kernel.
Kernel startup stage
The startup function for the kernel (also called the swapper or process 0) establishes memory management (paging tables and memory paging), detects the type of CPU and any additional functionality such as floating point capabilities, and then switches to non-architecture specific Linux kernel functionality via a call to start_kernel id.
↯ Startup call hierarchy:
- arch/x86/kernel/vmlinux.lds.S src – linker script
- arch/x86/kernel/head_64.S src – assembly of uncompressed startup code
- arch/x86/kernel/head64.c src – platform-dependent startup:
- init/main.c src – main initialization code
- start_kernel id 200 SLOC
- mm_init id
- sched_init id
- rcu_init id – Read-copy-update
- rest_init id
- kernel_init id - deferred kernel thread #1
- kernel_init_freeable id This and the following functions are defined with the attribute __init id
- run_init_process id runs the first process, man 1 init
- kthreadd id – deferred kernel thread #2
- cpu_startup_entry id
start_kernel id executes a wide range of initialization functions. It sets up interrupt handling (IRQs), further configures memory, starts the man 1 init process (the first user-space process), and then starts the idle task via cpu_startup_entry id. Notably, the kernel startup process also mounts the initial ramdisk (initrd) that was loaded previously as the temporary root file system during the boot phase. The initrd allows driver modules to be loaded directly from memory, without reliance upon other devices (e.g. a hard disk) and the drivers that are needed to access them (e.g. a SATA driver). This split of some drivers statically compiled into the kernel and other drivers loaded from initrd allows for a smaller kernel. The root file system is later switched via a call to man 8 pivot_root / man 2 pivot_root which unmounts the temporary root file system and replaces it with the use of the real one, once the latter is accessible. The memory used by the temporary root file system is then reclaimed.
⚙️ Internals
...
📖 References
- Article about booting of the kernel
- Initial RAM disk doc
- Linux startup process
- init
- Linux (U)EFI boot process
- The kernel’s command-line parameters doc
- Boot Configuration doc
- Boot time memory management doc
- Kernel booting process
- Kernel initialization process
📚 Further reading
💾 Historical
- http://tldp.org/HOWTO/Linux-i386-Boot-Code-HOWTO/
- http://www.tldp.org/LDP/lki/lki-1.html
- http://www.tldp.org/HOWTO/KernelAnalysis-HOWTO-4.html
Halting or rebooting
🔧 TODO
⚲ API
- linux/reboot.h inc
- linux/stop_machine.h inc
- reboot_mode id
- sys_reboot id calls
- linux/reboot-mode.h inc
⚙️ Internals
Power management
Keywords: suspend, alarm, hibernation.
⚲ API
- /sys/power/
- /sys/kernel/debug/wakeup_sources
- ⌨️ hands-on:
- sudo awk '{gsub("^ ","?")} NR>1 {if ($6) {print $1}}' /sys/kernel/debug/wakeup_sources
- linux/pm.h inc
- linux/pm_qos.h inc
- linux/pm_clock.h inc
- linux/pm_domain.h inc
- linux/pm_wakeirq.h inc
- linux/pm_wakeup.h inc
- linux/suspend.h inc
- pm_suspend id suspends the system
- Suspend and wakeup depend on
- man 2 timer_create and man 2 timerfd_create with clock ids CLOCK_REALTIME_ALARM id or CLOCK_BOOTTIME_ALARM id will wake the system if it is suspended.
- man 2 epoll_ctl with flag EPOLLWAKEUP id blocks suspend
- See also man 7 capabilities CAP_WAKE_ALARM id, CAP_BLOCK_SUSPEND id
⚙️ Internals
- CONFIG_PM id
- CONFIG_SUSPEND id
- kernel/power src
- alarm_init id
- kernel/time/alarmtimer.c src
- drivers/base/power src: wakeup_sources id
📖 References
- PM administration doc
- CPU and Device PM doc
- Power Management doc
- sysfs power testing ABI doc
- https://lwn.net/Kernel/Index/#Power_management
- PowerTOP
- cpupower
- tlp – apply laptop power management settings
- ACPI – Advanced Configuration and Power Interface
Runtime PM
Keywords: runtime power management, device power management, opportunistic suspend, autosuspend, autosleep.
⚲ API
- /sys/devices/.../power/:
- async autosuspend_delay_ms control runtime_active_kids runtime_active_time runtime_enabled runtime_status runtime_suspended_time runtime_usage
- linux/pm_runtime.h inc
- pm_runtime_mark_last_busy id
- pm_runtime_enable id
- pm_runtime_disable id
- pm_runtime_get id – asynchronous get
- pm_runtime_get_sync id
- pm_runtime_resume_and_get id – preferable synchronous get
- pm_runtime_put id
- pm_runtime_put_noidle id – just decrement usage counter
- pm_runtime_put_sync id
- pm_runtime_put_autosuspend id
- SET_RUNTIME_PM_OPS id
👁 Example: ac97_pm id
⚙️ Internals
📖 References