Contents
Modern Operating Systems
Pearson’s Commitment to Diversity, Equity, and Inclusion
Preface
About the Authors
1
Introduction
1.1
What is an Operating System?
1.1.1
The Operating System as an Extended Machine
1.1.2
The Operating System as a Resource Manager
1.2
History of Operating Systems
1.2.1
The First Generation (1945–1955): Vacuum Tubes
1.2.2
The Second Generation (1955–1965): Transistors and Batch Systems
1.2.3
The Third Generation (1965–1980): ICs and Multiprogramming
1.2.4
The Fourth Generation (1980–Present): Personal Computers
1.2.5
The Fifth Generation (1990–Present): Mobile Computers
1.3
Computer Hardware Review
1.3.1
Processors
Multithreaded and Multicore Chips
1.3.2
Memory
1.3.3
Nonvolatile Storage
1.3.4
I/O Devices
1.3.5
Buses
1.3.6
Booting the Computer
1.4
The Operating System Zoo
1.4.1
Mainframe Operating Systems
1.4.2
Server Operating Systems
1.4.3
Personal Computer Operating Systems
1.4.4
Smartphone and Handheld Computer Operating Systems
1.4.5
The Internet of Things and Embedded Operating Systems
1.4.6
Real-Time Operating Systems
1.4.7
Smart Card Operating Systems
1.5
Operating System Concepts
1.5.1
Processes
1.5.2
Address Spaces
1.5.3
Files
1.5.4
Input/Output
1.5.5
Protection
1.5.6
The Shell
1.5.7
Ontogeny Recapitulates Phylogeny
Large Memories
Protection Hardware
Disks
Virtual Memory
1.6
System Calls
1.6.1
System Calls for Process Management
1.6.2
System Calls for File Management
1.6.3
System Calls for Directory Management
1.6.4
Miscellaneous System Calls
1.6.5
The Windows API
1.7
Operating System Structure
1.7.1
Monolithic Systems
1.7.2
Layered Systems
1.7.3
Microkernels
1.7.4
Client-Server Model
1.7.5
Virtual Machines
VM/370
Virtual Machines Rediscovered
The Java Virtual Machine
Containers
1.7.6
Exokernels and Unikernels
1.8
The World According to C
1.8.1
The C Language
1.8.2
Header Files
1.8.3
Large Programming Projects
1.8.4
The Model of Run Time
1.9
Research on Operating Systems
1.10
Outline of the Rest of This Book
1.11
Metric Units
1.12
Summary
Problems
2
Processes and Threads
2.1
Processes
2.1.1
The Process Model
2.1.2
Process Creation
2.1.3
Process Termination
2.1.4
Process Hierarchies
2.1.5
Process States
2.1.6
Implementation of Processes
2.1.7
Modeling Multiprogramming
2.2
Threads
2.2.1
Thread Usage
2.2.2
The Classical Thread Model
2.2.3
POSIX Threads
2.2.4
Implementing Threads in User Space
2.2.5
Implementing Threads in the Kernel
2.2.6
Hybrid Implementations
2.2.7
Making Single-Threaded Code Multithreaded
2.3
Event-Driven Servers
2.4
Synchronization and Interprocess Communication
2.4.1
Race Conditions
2.4.2
Critical Regions
2.4.3
Mutual Exclusion with Busy Waiting
Disabling Interrupts
Lock Variables
Strict Alternation
Peterson’s Solution
The TSL Instruction
2.4.4
Sleep and Wakeup
The Producer-Consumer Problem
2.4.5
Semaphores
Solving the Producer-Consumer Problem Using Semaphores
The Readers and Writers Problem
2.4.6
Mutexes
Futexes
Mutexes in Pthreads
2.4.7
Monitors
2.4.8
Message Passing
Design Issues for Message-Passing Systems
The Producer-Consumer Problem with Message Passing
2.4.9
Barriers
2.4.10
Priority Inversion
2.4.11
Avoiding Locks: Read-Copy-Update
2.5
Scheduling
2.5.1
Introduction to Scheduling
Process Behavior
When to Schedule
Categories of Scheduling Algorithms
Scheduling Algorithm Goals
2.5.2
Scheduling in Batch Systems
First-Come, First-Served
Shortest Job First
Shortest Remaining Time Next
2.5.3
Scheduling in Interactive Systems
Round-Robin Scheduling
Priority Scheduling
Multiple Queues
Shortest Process Next
Guaranteed Scheduling
Lottery Scheduling
Fair-Share Scheduling
2.5.4
Scheduling in Real-Time Systems
2.5.5
Policy Versus Mechanism
2.5.6
Thread Scheduling
2.6
Research on Processes and Threads
2.7
Summary
Problems
3
Memory Management
3.1
No Memory Abstraction
3.1.1
Running Multiple Programs Without a Memory Abstraction
3.2
A Memory Abstraction: Address Spaces
3.2.1
The Notion of an Address Space
Base and Limit Registers
3.2.2
Swapping
3.2.3
Managing Free Memory
Memory Management with Bitmaps
Memory Management with Linked Lists
3.3
Virtual Memory
3.3.1
Paging
3.3.2
Page Tables
Structure of a Page Table Entry
3.3.3
Speeding Up Paging
Translation Lookaside Buffers
Software TLB Management
3.3.4
Page Tables for Large Memories
Multilevel Page Tables
Inverted Page Tables
3.4
Page Replacement Algorithms
3.4.1
The Optimal Page Replacement Algorithm
3.4.2
The Not Recently Used Page Replacement Algorithm
3.4.3
The First-In, First-Out (FIFO) Page Replacement Algorithm
3.4.4
The Second-Chance Page Replacement Algorithm
3.4.5
The Clock Page Replacement Algorithm
3.4.6
The Least Recently Used (LRU) Page Replacement Algorithm
3.4.7
Simulating LRU in Software
3.4.8
The Working Set Page Replacement Algorithm
3.4.9
The WSClock Page Replacement Algorithm
3.4.10
Summary of Page Replacement Algorithms
3.5
Design Issues for Paging Systems
3.5.1
Local versus Global Allocation Policies
3.5.2
Load Control
3.5.3
Cleaning Policy
3.5.4
Page Size
3.5.5
Separate Instruction and Data Spaces
3.5.6
Shared Pages
3.5.7
Shared Libraries
3.5.8
Mapped Files
3.6
Implementation Issues
3.6.1
Operating System Involvement with Paging
3.6.2
Page Fault Handling
3.6.3
Instruction Backup
3.6.4
Locking Pages in Memory
3.6.5
Backing Store
3.6.6
Separation of Policy and Mechanism
3.7
Segmentation
3.7.1
Implementation of Pure Segmentation
3.7.2
Segmentation with Paging: MULTICS
3.7.3
Segmentation with Paging: The Intel x86
3.8
Research on Memory Management
3.9
Summary
Problems
4
File Systems
4.1
Files
4.1.1
File Naming
4.1.2
File Structure
4.1.3
File Types
4.1.4
File Access
4.1.5
File Attributes
4.1.6
File Operations
4.1.7
An Example Program Using File-System Calls
4.2
Directories
4.2.1
Single-Level Directory Systems
4.2.2
Hierarchical Directory Systems
4.2.3
Path Names
4.2.4
Directory Operations
4.3
File-System Implementation
4.3.1
File-System Layout
Old School: The Master Boot Record
New School: Unified Extensible Firmware Interface
4.3.2
Implementing Files
Contiguous Allocation
Linked-List Allocation
Linked-List Allocation Using a Table in Memory
I-nodes
4.3.3
Implementing Directories
4.3.4
Shared Files
4.3.5
Log-Structured File Systems
4.3.6
Journaling File Systems
4.3.7
Flash-based File Systems
4.3.8
Virtual File Systems
4.4
File-System Management and Optimization
4.4.1
Disk-Space Management
Block Size
Keeping Track of Free Blocks
Disk Quotas
4.4.2
File-System Backups
4.4.3
File-System Consistency
4.4.4
File-System Performance
Caching
Block Read Ahead
Reducing Disk-Arm Motion
4.4.5
Defragmenting Disks
4.4.6
Compression and Deduplication
4.4.7
Secure File Deletion and Disk Encryption
4.5
Example File Systems
4.5.1
The MS-DOS File System
4.5.2
The UNIX V7 File System
4.6
Research on File Systems
4.7
Summary
Problems
5
Input/Output
5.1
Principles of I/O Hardware
5.1.1
I/O Devices
5.1.2
Device Controllers
5.1.3
Memory-Mapped I/O
5.1.4
Direct Memory Access
5.1.5
Interrupts Revisited
Precise and Imprecise Interrupts
5.2
Principles of I/O Software
5.2.1
Goals of the I/O Software
5.2.2
Programmed I/O
5.2.3
Interrupt-Driven I/O
5.2.4
I/O Using DMA
5.3
I/O Software Layers
5.3.1
Interrupt Handlers
5.3.2
Device Drivers
5.3.3
Device-Independent I/O Software
Uniform Interfacing for Device Drivers
Buffering
Error Reporting
Allocating and Releasing Dedicated Devices
Device-Independent Block Size
5.3.4
User-Space I/O Software
5.4
Mass Storage: Disk and SSD
5.4.1
Magnetic Disks
Disk Formatting
Disk Arm Scheduling Algorithms
Error Handling
5.4.2
Solid State Drives (SSDs)
Stable Storage
5.4.3
RAID
5.5
Clocks
5.5.1
Clock Hardware
5.5.2
Clock Software
5.5.3
Soft Timers
5.6
User Interfaces: Keyboard, Mouse, & Monitor
5.6.1
Input Software
Keyboard Software
Mouse Software
Trackpads
5.6.2
Output Software
Text Windows
The X Window System
Graphical User Interfaces
Bitmaps
Fonts
Touch Screens
5.7
Thin Clients
5.8
Power Management
5.8.1
Hardware Issues
5.8.2
Operating System Issues
The Display
The Hard Disk
The CPU
The Memory
Wireless Communication
Thermal Management
Battery Management
Driver Interface
5.8.3
Application Program Issues
5.9
Research on Input/Output
5.10
Summary
Problems
6
Deadlocks
6.1
Resources
6.1.1
Preemptable and Nonpreemptable Resources
6.1.2
Resource Acquisition
6.1.3
The Dining Philosophers Problem
6.2
Introduction to Deadlocks
6.2.1
Conditions for Resource Deadlocks
6.2.2
Deadlock Modeling
6.3
The Ostrich Algorithm
6.4
Deadlock Detection and Recovery
6.4.1
Deadlock Detection with One Resource of Each Type
6.4.2
Deadlock Detection with Multiple Resources of Each Type
6.4.3
Recovery from Deadlock
Recovery through Preemption
Recovery through Rollback
Recovery through Killing Processes
6.5
Deadlock Avoidance
6.5.1
Resource Trajectories
6.5.2
Safe and Unsafe States
6.5.3
The Banker’s Algorithm for a Single Resource
6.5.4
The Banker’s Algorithm for Multiple Resources
6.6
Deadlock Prevention
6.6.1
Attacking the Mutual-Exclusion Condition
6.6.2
Attacking the Hold-and-Wait Condition
6.6.3
Attacking the No-Preemption Condition
6.6.4
Attacking the Circular Wait Condition
6.7
Other Issues
6.7.1
Two-Phase Locking
6.7.2
Communication Deadlocks
6.7.3
Livelock
6.7.4
Starvation
6.8
Research on Deadlocks
6.9
Summary
Problems
7
Virtualization and the Cloud
7.1
History
7.2
Requirements for Virtualization
7.3
Type 1 and Type 2 Hypervisors
7.4
Techniques for Efficient Virtualization
7.4.1
Virtualizing the Unvirtualizable
7.4.2
The Cost of Virtualization
7.5
Are Hypervisors Microkernels Done Right?
7.6
Memory Virtualization
7.7
I/O Virtualization
7.8
Virtual Machines on Multicore CPUs
7.9
Clouds
7.9.1
Clouds as a Service
7.9.2
Virtual Machine Migration
7.9.3
Checkpointing
7.10
OS-Level Virtualization
7.11
Case Study: VMware
7.11.1
The Early History of VMware
7.11.2
VMware Workstation
7.11.3
Challenges in Bringing Virtualization to the x86
7.11.4
VMware Workstation: Solution Overview
Virtualizing the x86 Architecture
A Guest Operating System Centric Strategy
The Virtual Hardware Platform
The Role of the Host Operating System
7.11.5
The Evolution of VMware Workstation
7.11.6
ESX Server: VMware’s Type 1 Hypervisor
7.12
Research on Virtualization and the Cloud
7.13
Summary
Problems
8
Multiple Processor Systems
8.1
Multiprocessors
8.1.1
Multiprocessor Hardware
UMA Multiprocessors with Bus-Based Architectures
UMA Multiprocessors Using Crossbar Switches
UMA Multiprocessors Using Multistage Switching Networks
NUMA Multiprocessors
Multicore Chips
Manycore Chips
Heterogeneous Multicores
Programming with Multiple Cores
Simultaneous Multithreading
8.1.2
Multiprocessor Operating System Types
Each CPU Has Its Own Operating System
Leader-Follower Multiprocessors
Symmetric Multiprocessors
8.1.3
Multiprocessor Synchronization
Spinning vs. Switching
8.1.4
Multiprocessor Scheduling
Time Sharing
Space Sharing
Gang Scheduling
Scheduling for Security
8.2
Multicomputers
8.2.1
Multicomputer Hardware
Interconnection Technology
Network Interfaces
8.2.2
Low-Level Communication Software
Node-to-Network Interface Communication
Remote Direct Memory Access
8.2.3
User-Level Communication Software
Send and Receive
Blocking Versus Nonblocking Calls
8.2.4
Remote Procedure Call
Implementation Issues
8.2.5
Distributed Shared Memory
Replication
False Sharing
Achieving Sequential Consistency
8.2.6
Multicomputer Scheduling
8.2.7
Load Balancing
A Graph-Theoretic Deterministic Algorithm
A Sender-Initiated Distributed Heuristic Algorithm
A Receiver-Initiated Distributed Heuristic Algorithm
8.3
Distributed Systems
8.3.1
Network Hardware
Ethernet
The Internet
8.3.2
Network Services and Protocols
Network Services
Network Protocols
8.3.3
Document-Based Middleware
8.3.4
File-System-Based Middleware
Transfer Model
The Directory Hierarchy
Naming Transparency
Semantics of File Sharing
8.3.5
Object-Based Middleware
8.3.6
Coordination-Based Middleware
Publish/Subscribe
8.4
Research on Multiple Processor Systems
8.5
Summary
Problems
9
Security
9.1
Fundamentals of Operating System Security
9.1.1
The CIA Security Triad
9.1.2
Security Principles
9.1.3
Security of the Operating System Structure
9.1.4
Trusted Computing Base
9.1.5
Attackers
9.1.6
Can We Build Secure Systems?
9.2
Controlling Access to Resources
9.2.1
Protection Domains
9.2.2
Access Control Lists
9.2.3
Capabilities
9.3
Formal Models of Secure Systems
9.3.1
Multilevel Security
The Bell-LaPadula Model
The Biba Model
9.3.2
Cryptography
Secret-Key Cryptography
Public-Key Cryptography
Digital Signatures
9.3.3
Trusted Platform Modules
9.4
Authentication
9.4.1
Passwords
Weak Passwords
UNIX Password Security
One-Time Passwords
Challenge-Response Authentication
9.4.2
Authentication Using a Physical Object
9.4.3
Authentication Using Biometrics
9.5
Exploiting Software
9.5.1
Buffer Overflow Attacks
Stack Canaries
Avoiding Stack Canaries
Data Execution Prevention
Code Reuse Attacks
Address-Space Layout Randomization
Bypassing ASLR
Non-Control-Flow Diverting Attacks
Buffer Overflows—The Not So Final Word
9.5.2
Format String Attacks
9.5.3
Use-After-Free Attacks
9.5.4
Type Confusion Vulnerabilities
9.5.5
Null Pointer Dereference Attacks
9.5.6
Integer Overflow Attacks
9.5.7
Command Injection Attacks
9.5.8
Time of Check to Time of Use Attacks
9.5.9
Double Fetch Vulnerability
9.6
Exploiting Hardware
9.6.1
Covert Channels
9.6.2
Side Channels
9.6.3
Transient Execution Attacks
Transient Execution Attacks Based on Faults
Transient Execution Attacks Based on Speculation
9.7
Insider Attacks
9.7.1
Logic Bombs
9.7.2
Back Doors
9.7.3
Login Spoofing
9.8
Operating System Hardening
9.8.1
Fine-Grained Randomization
9.8.2
Control-Flow Restrictions
9.8.3
Access Restrictions
9.8.4
Code and Data Integrity Checks
9.8.5
Remote Attestation Using a Trusted Platform Module
9.8.6
Encapsulating Untrusted Code
Sandboxing
9.9
Research on Security
9.10
Summary
Problems
10
Case Study 1: UNIX, Linux, and Android
10.1
History of UNIX and Linux
10.1.1
UNICS
10.1.2
PDP-11 UNIX
10.1.3
Portable UNIX
10.1.4
Berkeley UNIX
10.1.5
Standard UNIX
10.1.6
MINIX
10.1.7
Linux
10.2
Overview of Linux
10.2.1
Linux Goals
10.2.2
Interfaces to Linux
10.2.3
The Shell
10.2.4
Linux Utility Programs
10.2.5
Kernel Structure
10.3
Processes in Linux
10.3.1
Fundamental Concepts
10.3.2
Process-Management System Calls in Linux
10.3.3
Implementation of Processes and Threads in Linux
Threads in Linux
10.3.4
Scheduling in Linux
10.3.5
Synchronization in Linux
10.3.6
Booting Linux
10.4
Memory Management in Linux
10.4.1
Fundamental Concepts
10.4.2
Memory Management System Calls in Linux
10.4.3
Implementation of Memory Management in Linux
Physical Memory Management
Memory-Allocation Mechanisms
Virtual Address-Space Representation
10.4.4
Paging in Linux
The Page Replacement Algorithm
10.5
Input/Output in Linux
10.5.1
Fundamental Concepts
10.5.2
Networking
10.5.3
Input/Output System Calls in Linux
10.5.4
Implementation of Input/Output in Linux
10.5.5
Modules in Linux
10.6
The Linux File System
10.6.1
Fundamental Concepts
10.6.2
File-System Calls in Linux
10.6.3
Implementation of the Linux File System
The Linux Virtual File System
The Linux Ext2 File System
The Linux Ext4 File System
The /proc File System
10.6.4
NFS: The Network File System
NFS Architecture
NFS Protocols
NFS Implementation
NFS Version 4
10.7
Security in Linux
10.7.1
Fundamental Concepts
10.7.2
Security System Calls in Linux
10.7.3
Implementation of Security in Linux
10.8
Android
10.8.1
Android and Google
10.8.2
History of Android
Early Development
Android 1.0
Continued Development
10.8.3
Design Goals
10.8.4
Android Architecture
10.8.5
Linux Extensions
Wake Locks
Out-of-Memory Killer
10.8.6
ART
10.8.7
Binder IPC
Binder Kernel Module
Binder User-Space API
Binder Interfaces and AIDL
10.8.8
Android Applications
Activities
Services
Receivers
Content Providers
10.8.9
Intents
10.8.10
Process Model
Starting Processes
Process Lifecycle
Process Dependencies
10.8.11
Security and Privacy
Application Sandboxes
Permissions
SELinux and Defense in Depth
Privacy and Permissions
Evolving Runtime Permissions
10.8.12
Background Execution and Social Engineering
10.9
Summary
Problems
11
Case Study 2: Windows 11
11.1
History of Windows Through Windows 11
11.1.1
1980s: MS-DOS
11.1.2
1990s: MS-DOS-based Windows
11.1.3
2000s: NT-based Windows
11.1.4
Windows Vista
11.1.5
Windows 8
11.1.6
Windows 10
11.1.7
Windows 11
11.2
Programming Windows
11.2.1
Universal Windows Platform
11.2.2
Windows Subsystems
11.2.3
The Native NT Application Programming Interface
11.2.4
The Win32 Application Programming Interface
11.2.5
The Windows Registry
11.3
System Structure
11.3.1
Operating System Structure
The Hypervisor
The Hardware Abstraction Layer
The Kernel Layer
Deferred Procedure Calls
Asynchronous Procedure Calls
Dispatcher Objects
The Executive Layer
The Device Drivers
11.3.2
Booting Windows
11.3.3
Implementation of the Object Manager
Handles
The Object Namespace
Object Types
11.3.4
Subsystems, DLLs, and User-Mode Services
11.4
Processes and Threads in Windows
11.4.1
Fundamental Concepts
Processes
Jobs and Fibers
Thread Pools
Threads
11.4.2
Job, Process, Thread, and Fiber Management API Calls
Interprocess Communication
Synchronization
11.4.3
Implementation of Processes and Threads
Scheduling
11.4.4
WoW64 and Emulation
WoW64 Design
x64 Emulation on arm64
11.5
Memory Management
11.5.1
Fundamental Concepts
Virtual Address Allocation
Pagefiles
11.5.2
Memory-Management System Calls
11.5.3
Implementation of Memory Management
Page-Fault Handling
Page Tables
The Page Replacement Algorithm
Physical Memory Management
Page Combining
11.5.4
Memory Compression
11.5.5
Memory Partitions
11.6
Caching in Windows
11.7
Input/Output in Windows
11.7.1
Fundamental Concepts
11.7.2
Input/Output API Calls
11.7.3
Implementation of I/O
Device Drivers
I/O Request Packets
Device Stacks
11.8
The Windows NT File System
11.8.1
Fundamental Concepts
11.8.2
Implementation of the NT File System
File System Structure
Storage Allocation
File Compression
Journaling
File Encryption
11.9
Windows Power Management
11.10
Virtualization in Windows
11.10.1
Hyper-V
Hypervisor
The Virtualization Stack
Device I/O
VA-backed VMs
11.10.2
Containers
Namespace Virtualization
Hyper-V Isolated Containers
Hardware Isolated Processes
11.10.3
Virtualization-Based Security
11.11
Security in Windows
11.11.1
Fundamental Concepts
11.11.2
Security API Calls
11.11.3
Implementation of Security
11.11.4
Security Mitigations
Eliminating Vulnerabilities
Breaking Exploitation Techniques
Containing Damage
Limiting Window of Time to Exploit
Antimalware
11.12
Summary
Problems
12
Operating System Design
12.1
The Nature of the Design Problem
12.1.1
Goals
12.1.2
Why Is It Hard to Design an Operating System?
12.2
Interface Design
12.2.1
Guiding Principles
Principle 1: Simplicity
Principle 2: Completeness
Principle 3: Efficiency
12.2.2
Paradigms
User-Interface Paradigms
Execution Paradigms
Data Paradigms
12.2.3
The System-Call Interface
12.3
Implementation
12.3.1
System Structure
Layered Systems
Exokernels
Microkernel-Based Client-Server Systems
Kernel Threads
12.3.2
Mechanism vs. Policy
12.3.3
Orthogonality
12.3.4
Naming
12.3.5
Binding Time
12.3.6
Static vs. Dynamic Structures
12.3.7
Top-Down vs. Bottom-Up Implementation
12.3.8
Synchronous vs. Asynchronous Communication
12.3.9
Useful Techniques
Hiding the Hardware
Indirection
Reusability
Reentrancy
Brute Force
Check for Errors First
12.4
Performance
12.4.1
Why Are Operating Systems Slow?
12.4.2
What Should Be Optimized?
12.4.3
Space-Time Trade-offs
12.4.4
Caching
12.4.5
Hints
12.4.6
Exploiting Locality
12.4.7
Optimize the Common Case
12.5
Project Management
12.5.1
The Mythical Man Month
12.5.2
Team Structure
12.5.3
The Role of Experience
12.5.4
No Silver Bullet
Problems
13
Reading List and Bibliography
13.1
Suggestions for Further Reading
13.1.1
Introduction
13.1.2
Processes and Threads
13.1.3
Memory Management
13.1.4
File Systems
13.1.5
Input/Output
13.1.6
Deadlocks
13.1.7
Virtualization and the Cloud
13.1.8
Multiple Processor Systems
13.1.9
Security
13.1.10
Case Study 1: UNIX, Linux, and Android
13.1.11
Case Study 2: Windows
13.1.12
Operating System Design
13.2
Alphabetical Bibliography
Index
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Z
List of Figures
Figure 1-1: Where the operating system fits in.
Figure 1-2: Operating systems turn awful hardware into beautiful abstractions.
Figure 1-3: An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries output tape to 1401. (f) 1401 prints output.
Figure 1-4: Structure of a typical FMS job.
Figure 1-5: A multiprogramming system with three jobs in memory.
Figure 1-6: Some of the components of a simple personal computer.
Figure 1-7: (a) A three-stage pipeline. (b) A superscalar CPU.
Figure 1-8: (a) A quad-core chip with a shared L2 cache. (b) A quad-core chip with separate L2 caches.
Figure 1-9: A typical memory hierarchy. The numbers are very rough approximations.
Figure 1-10: Structure of a disk drive.
Figure 1-11: (a) The steps in starting an I/O device and getting an interrupt. (b) Interrupt processing involves taking the interrupt, running the interrupt handler, and returning to the user program.
Figure 1-12: The structure of a large x86 system.
Figure 1-13: A process tree. Process A created two child processes, B and C. Process B created three child processes, D, E, and F.
Figure 1-14: A file system for a university department.
Figure 1-15: (a) Before mounting, the files on the USB drive are not accessible. (b) After mounting, they are part of the file hierarchy.
Figure 1-16: Two processes connected by a pipe.
Figure 1-17: The 10 steps in making the system call read(fd, buffer, nbytes).
Figure 1-19: A stripped-down shell. Throughout this book, TRUE is assumed to be defined as 1.
Figure 1-20: Processes have three segments: text, data, and stack.
Figure 1-21: (a) Two directories before linking /usr/jim/memo to ast’s directory. (b) The same directories after linking.
Figure 1-22: (a) File system before the mount. (b) File system after the mount.
Figure 1-24: A simple structuring model for a monolithic system.
Figure 1-26: Simplified structure of the MINIX system.
Figure 1-27: The client-server model over a network.
Figure 1-28: The structure of VM/370 with CMS.
Figure 1-29: (a) A type 1 hypervisor. (b) A pure type 2 hypervisor. (c) A practical type 2 hypervisor.
Figure 1-30: The process of compiling C and header files to make an executable.
Figure 2-1: (a) Multiprogramming four programs. (b) Conceptual model of four independent, sequential processes. (c) Only one program is active at once.
Figure 2-2: A process can be in running, blocked, or ready state. Transitions between these states are as shown.
Figure 2-3: The lowest layer of a process-structured operating system handles interrupts and scheduling. Above that layer are sequential processes.
Figure 2-6: CPU utilization as a function of the number of processes in memory.
Figure 2-7: A word processor with three threads.
Figure 2-8: A multithreaded Web server.
Figure 2-9: A rough outline of the code for Fig. 2-8. (a) Dispatcher thread. (b) Worker thread.
Figure 2-10: (a) Three processes each with one thread. (b) One process with three threads.
Figure 2-12: Each thread has its own stack.
Figure 2-14: An example program using threads.
Figure 2-15: (a) A user-level threads package. (b) A threads package managed by the kernel.
Figure 2-16: Multiplexing user-level threads onto kernel-level threads.
Figure 2-17: Conflicts between threads over the use of a global variable.
Figure 2-18: Threads can have private global variables.
Figure 2-19: An event-driven thank-you server (pseudocode).
Figure 2-21: Two processes want to access shared memory at the same time.
Figure 2-22: Mutual exclusion using critical regions.
Figure 2-23: A proposed solution to the critical-region problem. (a) Process 0. (b) Process 1. In both cases, be sure to note the semicolons terminating the while statements.
Figure 2-24: Peterson’s solution for achieving mutual exclusion.
Figure 2-25: Entering and leaving a critical region using the TSL instruction.
Figure 2-26: Entering and leaving a critical region using the XCHG instruction.
Figure 2-27: The producer-consumer problem with a fatal race condition.
Figure 2-28: The producer-consumer problem using semaphores.
Figure 2-29: A solution to the readers and writers problem.
Figure 2-30: Implementation of mutex_lock and mutex_unlock.
Figure 2-33: Using threads to solve the producer-consumer problem.
Figure 2-34: A monitor.
Figure 2-35: An outline of the producer-consumer problem with monitors. Only one monitor procedure at a time is active. The buffer has N slots.
Figure 2-36: A solution to the producer-consumer problem in Java.
Figure 2-37: The producer-consumer problem with N messages.
Figure 2-38: Use of a barrier. (a) Processes approaching a barrier. (b) All processes but one blocked at the barrier. (c) When the last process arrives at the barrier, all of them are let through.
Figure 2-39: Read-Copy-Update: inserting a node in the tree and then removing a branch—all without locks.
Figure 2-40: Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound process. (b) An I/O-bound process.
Figure 2-41: Some goals of the scheduling algorithm under different circumstances.
Figure 2-42: An example of shortest-job-first scheduling. (a) Running four jobs in the original order. (b) Running them in shortest-job-first order.
Figure 2-43: Round-robin scheduling. (a) The list of runnable processes. (b) The list of runnable processes after B uses up its quantum.
Figure 2-44: A scheduling algorithm with four priority classes.
Figure 2-45: (a) Possible scheduling of user-level threads with a 50-msec process quantum and threads that run 5 msec per CPU burst. (b) Possible scheduling of kernel-level threads with the same characteristics as (a).
Figure 3-1: Three simple ways of organizing memory with an operating system and one user process. Other possibilities also exist.
Figure 3-2: Illustration of the relocation problem. (a) A 16-KB program. (b) Another 16-KB program. (c) The two programs loaded consecutively into memory.
Figure 3-3: Base and limit registers can be used to give each process a separate address space.
Figure 3-4: Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory.
Figure 3-5: (a) Allocating space for a growing data segment. (b) Allocating space for a growing stack and a growing data segment.
Figure 3-6: (a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0 in the bitmap) are free. (b) The corresponding bitmap. (c) The same information as a list.
Figure 3-7: Four neighbor combinations for the terminating process, X.
Figure 3-8: The position and function of the MMU. Here the MMU is shown as being a part of the CPU chip because it commonly is nowadays. However, logically it could be a separate chip and was years ago.
Figure 3-9: The relation between virtual addresses and physical memory addresses is given by the page table. Every page begins on a multiple of 4096 and ends 4095 addresses higher, so 4K–8K really means 4096–8191 and 8K–12K means 8192–12287.
Figure 3-10: The internal operation of the MMU with 16 4-KB pages.
Figure 3-11: A typical page table entry.
Figure 3-13: (a) A 32-bit address with two page table fields. (b) Two-level page tables.
Figure 3-14: Comparison of a traditional page table with an inverted page table.
Figure 3-15: Operation of second chance. (a) Pages sorted in FIFO order. (b) Page list if a page fault occurs at time 20 and A has its R bit set. The numbers above the pages are their load times.
Figure 3-16: The clock page replacement algorithm.
Figure 3-17: The aging algorithm simulates LRU in software. Shown are six pages for five clock ticks. The five clock ticks are represented by (a) to (e).
Figure 3-18: The working set is the set of pages used by the k most recent memory references. The function w(k, t) is the size of the working set at time t.
Figure 3-19: The working set algorithm.
Figure 3-20: Operation of the WSClock algorithm. (a) and (b) give an example of what happens when R=1. (c) and (d) give an example of R=0.
Figure 3-22: Local versus global page replacement. (a) Original configuration. (b) Local page replacement. (c) Global page replacement.
Figure 3-23: Page fault rate as a function of the number of page frames assigned.
Figure 3-24: (a) One address space. (b) Separate I and D spaces.
Figure 3-25: Two processes sharing the same program sharing its page tables.
Figure 3-26: A shared library being used by two processes.
Figure 3-27: An instruction causing a page fault.
Figure 3-28: (a) Paging to a static swap area. (b) Backing up pages dynamically.
Figure 3-29: Page fault handling with an external pager.
Figure 3-30: In a one-dimensional address space with growing tables, one table may bump into another.
Figure 3-31: A segmented memory allows each table to grow or shrink independently of the other tables.
Figure 3-32: Comparison of paging and segmentation.
Figure 3-33: (a)–(d) Development of checkerboarding. (e) Removal of the checkerboarding by compaction.
Figure 3-34: The MULTICS virtual memory. (a) The descriptor segment pointed to the page tables. (b) A segment descriptor. The numbers are the field lengths.
Figure 3-35: A 34-bit MULTICS virtual address.
Figure 3-36: Conversion of a two-part MULTICS address into a main memory address.
Figure 3-37: A simplified version of the MULTICS TLB. The existence of two page sizes made the actual TLB more complicated.
Figure 4-2: Three kinds of files. (a) Byte sequence. (b) Record sequence. (c) Tree.
Figure 4-3: (a) An executable file. (b) An archive.
Figure 4-6: A simple program to copy a file.
Figure 4-7: A single-level directory system containing four files.
Figure 4-8: A hierarchical directory system.
Figure 4-9: A UNIX directory tree.
Figure 4-10: A possible file-system layout.
Figure 4-11: Layout for UEFI with partition table.
Figure 4-12: (a) Contiguous allocation of disk space for seven files. (b) The state of the disk after files D and F have been removed.
Figure 4-13: Storing a file as a linked list of disk blocks.
Figure 4-14: Linked-list allocation using a file-allocation table in main memory.
Figure 4-15: An example i-node.
Figure 4-16: (a) A simple directory containing fixed-size entries with the disk addresses and attributes in the directory entry. (b) A directory in which each entry just refers to an i-node.
Figure 4-17: Two ways of handling long file names in a directory. (a) In-line. (b) In a heap.
Figure 4-18: File system containing a shared file.
Figure 4-19: (a) Situation prior to linking. (b) After the link is created. (c) After the original owner removes the file.
Figure 4-20: Components inside a typical flash SSD.
Figure 4-21: Position of the virtual file system.
Figure 4-22: A simplified view of the data structures and code used by the VFS and concrete file system to do a read.
Figure 4-23: The dashed curve (left-hand scale) gives the data rate of a disk. The solid curve (right-hand scale) gives the disk-space efficiency. All files are 4 KB.
Figure 4-24: (a) Storing the free list on a linked list. (b) A bitmap.
Figure 4-25: (a) An almost-full block of pointers to free disk blocks in memory and three blocks of pointers on disk. (b) Result of freeing a three-block file. (c) An alternative strategy for handling the three free blocks. The shaded entries represent pointers to free disk blocks.
Figure 4-26: Quotas are kept track of on a per-user basis in a quota table.
Figure 4-27: A file system to be dumped. The squares are directories and the circles are files. The shaded items have been modified since the last dump. Each directory and file is labeled by its i-node number.
Figure 4-28: Bitmaps used by the logical dumping algorithm.
Figure 4-29: File-system states. (a) Consistent. (b) Missing block. (c) Duplicate block in free list. (d) Duplicate data block.
Figure 4-30: The buffer cache data structures.
Figure 4-31: (a) I-nodes placed at the start of the disk. (b) Disk divided into cylinder groups, each with its own blocks and i-nodes.
Figure 4-32: The MS-DOS directory entry.
Figure 4-34: A UNIX V7 directory entry.
Figure 4-35: A UNIX i-node.
Figure 4-36: The steps in looking up /usr/ast/mbox.
Figure 5-2: (a) Separate I/O and memory space. (b) Memory-mapped I/O. (c) Hybrid.
Figure 5-3: (a) A single-bus architecture. (b) A dual-bus memory architecture.
Figure 5-4: Operation of a DMA transfer.
Figure 5-5: How an interrupt happens. The connections between the devices and the controller actually use interrupt lines on the bus rather than dedicated wires.
Figure 5-6: (a) A precise interrupt. (b) An imprecise interrupt.
Figure 5-7: Steps in printing a string.
Figure 5-8: Writing a string to the printer using programmed I/O.
Figure 5-9: Writing a string to the printer using interrupt-driven I/O. (a) Code executed at the time the print system call is made. (b) Interrupt service procedure for the printer.
Figure 5-10: Printing a string using DMA. (a) Code executed when the print system call is made. (b) Interrupt-service procedure.
Figure 5-11: Layers of the I/O software system.
Figure 5-12: Logical positioning of device drivers. In reality, all communication between drivers and device controllers goes over the bus.
Figure 5-14: (a) Without a standard driver interface. (b) With a standard driver interface.
Figure 5-15: (a) Unbuffered input. (b) Buffering in user space. (c) Buffering in the kernel followed by copying to user space. (d) Double buffering in the kernel.
Figure 5-16: Networking may involve many copies of a packet.
Figure 5-17: Layers of the I/O system and the main functions of each layer.
Figure 5-18: (a) Physical geometry of a disk with two zones. (b) A possible virtual geometry for this disk.
Figure 5-19: A disk sector.
Figure 5-20: An illustration of cylinder skew.
Figure 5-21: (a) No interleaving. (b) Single interleaving. (c) Double interleaving.
Figure 5-22: Shortest Seek First (SSF) disk scheduling algorithm.
Figure 5-23: The elevator algorithm for scheduling disk requests.
Figure 5-24: (a) A disk track with a bad sector. (b) Substituting a spare for the bad sector. (c) Shifting all the sectors to bypass the bad one.
Figure 5-25: Analysis of the influence of crashes on stable writes.
Figure 5-26: RAID levels 0 through 6. Backup and parity drives are shown shaded.
Figure 5-27: A programmable clock.
Figure 5-28: Three ways to maintain the time of day.
Figure 5-29: Simulating multiple timers with a single clock.
Figure 5-33: Clients and servers in the M.I.T. X Window System.
Figure 5-34: A skeleton of an X Window application program.
Figure 5-35: A sample window on the authors’ machine on a 1920×1080 display.
Figure 5-36: A skeleton of a Windows main program.
Figure 5-37: An example rectangle drawn using Rectangle. Each box represents one pixel.
Figure 5-38: Copying bitmaps using BitBlt. (a) Before. (b) After.
Figure 5-39: Some examples of character outlines at different point sizes.
Figure 5-40: (a) Running at full clock speed. (b) Cutting voltage by two cuts clock speed by two and power consumption by four.
Figure 6-1: Using a semaphore to protect resources. (a) One resource. (b) Two resources.
Figure 6-2: (a) Deadlock-free code. (b) Code with a potential deadlock.
Figure 6-3: Lunch time in the Philosophy Department.
Figure 6-4: A nonsolution to the dining philosophers problem.
Figure 6-5: A solution to the dining philosophers problem.
Figure 6-6: Resource-allocation graphs. (a) Holding a resource. (b) Requesting a resource. (c) Deadlock.
Figure 6-7: An example of how deadlock occurs and how it can be avoided.
Figure 6-8: (a) A resource graph. (b) A cycle extracted from (a).
Figure 6-9: The four data structures needed by the deadlock detection algorithm.
Figure 6-10: An example for the deadlock detection algorithm.
Figure 6-11: Two process resource trajectories.
Figure 6-12: Demonstration that the state in (a) is safe.
Figure 6-13: Demonstration that the state in (b) is not safe.
Figure 6-14: Three resource allocation states: (a) safe. (b) safe. (c) unsafe.
Figure 6-15: The banker’s algorithm with multiple resources.
Figure 6-16: (a) Numerically ordered resources. (b) A resource graph.
Figure 6-18: A resource deadlock in a network.
Figure 6-19: Polite processes that may cause livelock.
Figure 7-1: Location of type 1 and type 2 hypervisors.
Figure 7-3: When the operating system in a virtual machine executes a kernel-only instruction, it traps to the hypervisor if virtualization technology is present.
Figure 7-4: The binary translator rewrites the guest operating system running in ring 1, while the hypervisor runs in ring 0.
Figure 7-5: True virtualization and paravirtualization.
Figure 7-6: VMI Linux running on (a) the bare hardware, (b) VMware, and (c) Xen.
Figure 7-7: Extended/nested page tables are walked every time a guest physical address is accessed—including the accesses for each level of the guest’s page tables.
Figure 7-8: High-level components of the VMware virtual machine monitor (in the absence of hardware support).
Figure 7-9: Virtual hardware configuration options of the early VMware Workstation, ca. 2000.
Figure 7-10: The VMware Hosted Architecture and its three components: VMX, VMM driver, and VMM.
Figure 7-11: Difference between a normal context switch and a world switch.
Figure 7-12: ESX Server: VMware’s type 1 hypervisor.
Figure 8-1: (a) A shared-memory multiprocessor. (b) A message-passing multicomputer. (c) A wide area distributed system.
Figure 8-2: Three bus-based multiprocessors. (a) Without caching. (b) With caching. (c) With caching and private memories.
Figure 8-3: (a) An 8×8 crossbar switch. (b) An open crosspoint. (c) A closed crosspoint.
Figure 8-4: (a) A 2×2 switch with two input lines, A and B, and two output lines, X and Y. (b) A message format.
Figure 8-5: An omega switching network.
Figure 8-6: (a) A 256-node directory-based multiprocessor. (b) Division of a 32-bit memory address into fields. (c) The directory at node 36.
Figure 8-7: Partitioning multiprocessor memory among four CPUs, but sharing a single copy of the operating system code. The boxes marked Data are the operating system’s private data for each CPU.
Figure 8-8: A leader-follower multiprocessor model.
Figure 8-9: The SMP multiprocessor model.
Figure 8-10: The TSL instruction can fail if the bus cannot be locked. These four steps show a sequence of events where the failure is demonstrated.
Figure 8-11: Use of multiple locks to avoid cache thrashing.
Figure 8-12: Using a single data structure for scheduling a multiprocessor.
Figure 8-13: A set of 32 CPUs split into four partitions, with two CPUs available.
Figure 8-14: Communication between two threads belonging to thread A that are running out of phase.
Figure 8-15: Gang scheduling.
Figure 8-16: Various interconnect topologies. (a) A single switch. (b) A ring. (c) A grid. (d) A double torus. (e) A cube. (f) A 4D hypercube.
Figure 8-17: Store-and-forward packet switching.
Figure 8-18: Position of the network interface boards in a multicomputer.
Figure 8-19: (a) A blocking send call. (b) A nonblocking send call.
Figure 8-20: Steps in making a remote procedure call. The stubs are shaded.
Figure 8-21: Various layers where shared memory can be implemented. (a) The hardware. (b) The operating system. (c) User-level software.
Figure 8-22: (a) Pages of the address space distributed among four machines. (b) Situation after CPU 0 references page 10 and the page is moved there. (c) Situation if page 10 is read only and replication is used.
Figure 8-23: False sharing of a page containing two unrelated variables.
Figure 8-24: Two ways of allocating nine processes to three nodes.
Figure 8-25: (a) An overloaded node looking for a lightly loaded node to hand off processes to. (b) An empty node looking for work to do.
Figure 8-27: Positioning of middleware in a distributed system.
Figure 8-28: (a) Classic Ethernet. (b) Switched Ethernet.
Figure 8-29: A portion of the Internet.
Figure 8-30: Six different types of network service.
Figure 8-31: Accumulation of packet headers.
Figure 8-32: The Web is a big directed graph of documents.
Figure 8-33: (a) The upload/download model. (b) The remote-access model.
Figure 8-34: (a) Two file servers. The squares are directories and the circles are files. (b) A system in which all clients have the same view of the file system. (c) A system in which different clients have different views of the file system.
Figure 8-35: (a) Sequential consistency. (b) In a distributed system with caching, reading a file may return an obsolete value.
Figure 8-36: Three Linda tuples.
Figure 8-37: The publish/subscribe architecture.
Figure 9-2: A reference monitor.
Figure 9-3: Three protection domains.
Figure 9-4: A protection matrix.
Figure 9-5: A protection matrix with domains as objects.
Figure 9-6: Use of access control lists to manage file access.
Figure 9-8: When capabilities are used, each process has a capability list.
Figure 9-9: A cryptographically protected capability.
Figure 9-10: (a) An authorized state. (b) An unauthorized state.
Figure 9-11: The Bell-LaPadula multilevel security model.
Figure 9-12: Relationship between the plaintext and the ciphertext.
Figure 9-13: (a) Computing a signature block. (b) What the receiver gets.
Figure 9-16: Use of a smart card for authentication.
Figure 9-17: (a) Situation when the main program is running. (b) After the procedure A has been called. (c) Buffer overflow shown in gray.
Figure 9-18: Skipping the stack canary: by modifying len first, the attack is able to bypass the canary and modify the return address directly.
Figure 9-19: Return-oriented programming: linking gadgets.
Figure 9-20: A format string vulnerability.
Figure 9-21: A format string attack. By using exactly the right number of %08x, the attacker can use the first four characters of the format string as an address.
Figure 9-22: Code that might lead to a command injection attack.
Figure 9-23: (a) The client, server, and collaborator processes. (b) The encapsulated server can still leak to the collaborator via covert channels.
Figure 9-24: A covert channel using file locking.
Figure 9-25: Structure of an encryption routine which iterates over the bits in the key, taking different actions depending on the value of the bit.
Figure 9-26: Cache side channel attack on Andy’s Messenger App.
Figure 9-27: Meltdown: a user process accesses kernel memory and uses the value as an index.
Figure 9-28: A value read by a faulting instruction still leaves a trace in the cache.
Figure 9-29: Speculative execution: the CPU mispredicts the condition in Line 1 and executes Lines 2–3 speculatively, accessing memory that should be off-limits.
Figure 9-30: Original Spectre attack on the kernel.
Figure 9-31: (a) Normal code. (b) Code with a back door inserted.
Figure 9-32: (a) Correct login screen. (b) Phony login screen.
Figure 9-33: Address-space layout randomization: (a) coarse-grained. (b) fine-grained.
Figure 9-34: Control-flow integrity (pseudo-code): (a) without CFI. (b) with CFI.
Figure 9-35: Securing and verifying the boot process.
Figure 9-36: (a) Memory divided into 16-MB sandboxes. (b) One way of checking an instruction for validity.
Figure 10-1: The layers in a Linux system.
Figure 10-3: Structure of the Linux kernel.
Figure 10-4: Process creation in Linux.
Figure 10-7: A highly simplified shell.
Figure 10-8: The steps in executing the command ls typed to the shell.
Figure 10-10: Illustration of Linux runqueue data structures for (a) the Linux O(1) scheduler, and (b) the Completely Fair Scheduler.
Figure 10-11: The sequence of processes used to boot some Linux systems.
Figure 10-12: (a) Process A’s virtual address space. (b) Physical memory. (c) Process B’s virtual address space.
Figure 10-13: Two processes can share a mapped file.
Figure 10-15: Linux’s main memory representation.
Figure 10-16: Linux uses four-level page tables.
Figure 10-17: Operation of the buddy algorithm.
Figure 10-18: Page states considered in the page-frame replacement algorithm.
Figure 10-19: The uses of sockets for networking.
Figure 10-22: The Linux I/O system showing one file system in detail.
Figure 10-24: (a) Before linking. (b) After linking.
Figure 10-25: (a) Separate file systems. (b) After mounting.
Figure 10-26: (a) A file with one lock. (b) Adding a second lock. (c) A third one.
Figure 10-31: Disk layout of the Linux ext2 file system.
Figure 10-32: (a) A Linux directory with three files. (b) The same directory after the file voluminous has been removed.
Figure 10-34: The relation between the file-descriptor table, the open-file-description table, and the i-node table.
Figure 10-35: Examples of remote mounted file systems. Directories are shown as squares and files as circles.
Figure 10-36: The NFS layer structure.
Figure 10-39: Android process hierarchy.
Figure 10-40: Publishing and interacting with system services.
Figure 10-41: Creating a new ART process from zygote.
Figure 10-42: Binder IPC architecture.
Figure 10-43: Basic Binder IPC transaction.
Figure 10-44: Binder cross-process object mapping.
Figure 10-45: Transferring Binder objects between processes.
Figure 10-46: Binder user-space API.
Figure 10-47: Simple interface described in AIDL.
Figure 10-48: Binder interface inheritance hierarchy.
Figure 10-49: Full path of an AIDL-based Binder IPC.
Figure 10-50: Basic service manager AIDL interface.
Figure 10-51: Basic structure of AndroidManifest.xml.
Figure 10-52: Starting an email application’s main activity.
Figure 10-53: Starting the camera application after email.
Figure 10-54: Removing the email process to reclaim RAM for the camera.
Figure 10-55: Sharing a camera picture through the email application.
Figure 10-56: Returning to the email application.
Figure 10-57: Starting an application service.
Figure 10-58: Interface for controlling a sync service’s sync interval.
Figure 10-59: Binding to an application service.
Figure 10-60: Sending a broadcast to application receivers.
Figure 10-61: Interacting with a content provider.
Figure 10-62: Steps in launching a new application process.
Figure 10-67: Requesting and using a permission.
Figure 10-68: Accessing data without a permission.
Figure 10-69: Sharing a picture using a content provider.
Figure 10-70: Adding a picture attachment using a content provider.
Figure 10-71: The only thing most users care about when it comes to security.
Figure 10-72: Confirming permissions at install time (circa 2010).
Figure 10-75: Android 6.0 runtime permission prompt.
Figure 10-78: Android 10’s background vs. foreground location prompt.
Figure 10-79: Android 11’s ‘‘only this once’’ prompt.
Figure 10-80: Android 11’s location permission settings.
Figure 10-81: Android 12’s coarse vs. fine prompt.
Figure 10-82: Android 12’s privacy dashboard showing ‘‘location’’ details.
Figure 10-83: Android’s early battery use screen.
Figure 10-84: Doze and maintenance windows.
Figure 11-3: The Win32 API allows programs to run on almost all versions of Windows.
Figure 11-4: The programming layers in Modern Windows.
Figure 11-5: The components used to build NT subsystems.
Figure 11-11: Windows kernel-mode organization.
Figure 11-12: Some of the hardware functions the HAL manages.
Figure 11-13: Dispatcher_header data structure embedded in many executive objects (dispatcher objects).
Figure 11-14: Simplified depiction of device stacks for two NTFS file volumes. The I/O request packet is passed down the stack. The appropriate routines from the associated drivers are called at each level in the stack. The device stacks themselves consist of device objects allocated specifically to each stack.
Figure 11-15: Structure of an executive object managed by the object manager.
Figure 11-16: Handle table data structures for a minimal table using a single page for up to 512 handles.
Figure 11-17: Handle-table data structures for a maximal table of up to 16 million handles.
Figure 11-20: I/O and object manager steps for creating/opening a file and getting back a file handle.
Figure 11-22: The relationship between jobs, processes, threads, and fibers. Jobs and fibers are optional; not all processes are in jobs or contain fibers.
Figure 11-26: Windows supports 32 priorities for threads.
Figure 11-27: An example of priority inversion.
Figure 11-28: Native vs. WoW64 processes on an arm64 machine. Shaded areas indicate emulated code.
Figure 11-29: Comparison of x86 and x64 emulation infrastructure on an arm64 machine. Shaded areas indicate emulated code.
Figure 11-30: Virtual to physical address translation with a 4-level page table scheme implementing 48 bits of virtual address.
Figure 11-31: Virtual address space layout for three 64-bit user processes. The white areas are private per process. The shaded areas are shared among all processes.
Figure 11-33: Mapped regions with their shadow pages on disk. The lib.dll file is mapped into two address spaces at the same time.
Figure 11-34: A page table entry (PTE) for a mapped page on the Intel x86 and AMD x64 architectures.
Figure 11-35: The Windows self-map entries are used to map the physical pages of the page table hierarchy into kernel virtual addresses. This makes conversion between a virtual address and its PTE address very easy.
Figure 11-36: Some of the fields in the page-frame database for a valid page.
Figure 11-37: The various page lists and the transitions between them.
Figure 11-38: Page transitions with memory compression (free/zero lists and mapped files omitted for clarity).
Figure 11-39: Memory partition data structures.
Figure 11-41: A single level in a device stack.
Figure 11-42: The major fields of an I/O Request Packet.
Figure 11-43: Windows allows drivers to be stacked to work with a specific instance of a device. The stacking is represented by device objects.
Figure 11-44: The NTFS master file table.
Figure 11-46: An MFT record for a three-run, nine-block stream.
Figure 11-47: A file that requires three MFT records to store all its runs.
Figure 11-48: The MFT record for a small directory.
Figure 11-49: (a) An example of a 48-block file being compressed to 32 blocks. (b) The MFT record for the file after compression.
Figure 11-50: Hyper-V virtualization components in the root and child partitions.
Figure 11-51: Flow of paravirtualized I/O for an enlightened guest OS.
Figure 11-52: VA-backed VM’s GPA space is logically mapped to virtual address ranges in its vmmem process on the host.
Figure 11-53: The contents of the VHD exposed to the container is backed by a set of host directories that are merged at runtime to make up the container file system contents.
Figure 11-54: Virtual Secure Mode architecture with NT kernel in VTL0 and Secure Kernel in VTL1. VTLs share memory, CPUs, and devices, but each VTL has its own access protections for these resources, controlled by higher VTLs.
Figure 11-55: Structure of an access token.
Figure 11-56: An example security descriptor for a file.
Figure 11-59: Hotpatch application for mylib.dll. Functions foo() and baz() are updated in the patch binary, mylib_patch.dll.
Figure 12-1: (a) Algorithmic code. (b) Event-driven code.
Figure 12-2: One possible design for a modern layered operating system.
Figure 12-3: Client-server computing based on a microkernel.
Figure 12-4: Directories are used to map external names onto internal names.
Figure 12-5: Code for searching the process table for a given PID.
Figure 12-6: (a) CPU-dependent conditional compilation. (b) Word-length-dependent conditional compilation.
Figure 12-7: (a) A procedure for counting bits in a byte. (b) A macro to count the bits. (c) A macro that counts bits by table lookup.
Figure 12-8: (a) Part of an uncompressed image with 24 bits per pixel. (b) The same part compressed with GIF, with 8 bits per pixel. (c) The color palette.
Figure 12-11: (a) Traditional software design progresses in stages. (b) Alternative design produces a working system (that does nothing) starting on day 1.