Operating System Formulas Quick Reference – GATE CS Complete Cheat Sheet
What This Page Covers
- CPU scheduling formulas: TAT, WT, RT, CPU utilisation
- Synchronisation: no formulas, key conditions listed
- Memory management: EAT, TLB, page table size, multilevel paging
- Virtual memory: EAT with page faults, page fault rate
- Disk scheduling: access time, seek distance, RAID capacity
- File systems: inode max file size, disk accesses
Tip: Bookmark this page for GATE revision. Every formula includes a quick worked example.
1. CPU Scheduling Formulas
Turnaround Time (TAT):
TAT = Completion Time (CT) – Arrival Time
Waiting Time (WT):
WT = TAT – Burst Time
(for a single CPU burst; holds for both preemptive and non-preemptive scheduling)
Response Time (RT):
RT = Time of first CPU allocation – Arrival Time
CPU Utilisation:
CPU Util = (Total CPU busy time) / (Total time) × 100%
Throughput:
Throughput = Number of processes completed / Total time
Average TAT / WT / RT:
Average = Sum of individual values / Number of processes
Example: a process arrives at t = 0 with a 5 ms burst, first runs at t = 3, completes at t = 8:
TAT = 8 – 0 = 8 ms
WT = 8 – 5 = 3 ms
RT = (First CPU given at t=3) – 0 = 3 ms
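The three formulas above can be sketched as a quick Python check (the numbers are the assumed example values: arrival 0, burst 5 ms, first scheduled at t = 3, completion at t = 8):

```python
# Sketch: compute TAT, WT, RT for one process from the formulas above.
def metrics(arrival, burst, first_run, completion):
    tat = completion - arrival   # Turnaround Time = CT - AT
    wt = tat - burst             # Waiting Time = TAT - Burst
    rt = first_run - arrival     # Response Time = first allocation - AT
    return tat, wt, rt

print(metrics(0, 5, 3, 8))  # (8, 3, 3) -> matches the worked example
```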
Little’s Law (for queuing)
L = λ × W
L = average number of processes in the system
λ = average arrival rate
W = average time a process spends in the system
CPU Utilisation with I/O (n processes, each with I/O fraction p)
CPU Utilisation = 1 – pⁿ
p = fraction of time a process waits for I/O
n = degree of multiprogramming
Example (p = 0.8): n = 10 → Utilisation = 1 – 0.8¹⁰ ≈ 89.3%
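A minimal sketch of how utilisation grows with the degree of multiprogramming, using the assumed I/O-wait fraction p = 0.8 from the example:

```python
# Utilisation = 1 - p**n: probability that at least one of n
# processes is NOT waiting on I/O at a given instant.
def cpu_utilisation(p, n):
    return 1 - p ** n

for n in (1, 5, 10):
    print(n, round(cpu_utilisation(0.8, n), 3))
# n = 10 gives ~0.893, matching the 89.3% above.
```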
2. Memory Management Formulas
Physical Address = Frame number × Page size + Page offset
Logical Address split:
For logical address space = 2^n bytes, page size = 2^d bytes:
Page number bits = n – d, Offset bits = d
Number of pages:
Pages = Process size / Page size (round up)
Page Table Size:
Page table entries = Number of pages = 2^(n–d)
Page table size = 2^(n–d) × entry size
Internal Fragmentation (Paging):
Max internal fragmentation = Page size – 1 byte
Average internal fragmentation = Page size / 2
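The address-split and page-table-size formulas above can be sketched with assumed example parameters (32-bit logical address, 4 KB pages, 4-byte entries):

```python
# Sketch: paging parameters from n (address bits), d (offset bits),
# and page-table entry size. Numbers here are assumed for illustration.
n, d, entry_size = 32, 12, 4          # 2^32-byte space, 2^12 = 4 KB pages

page_number_bits = n - d              # 20 bits for the page number
num_pages = 2 ** page_number_bits     # 2^20 = 1,048,576 pages
table_size = num_pages * entry_size   # 4 MB single-level page table

print(page_number_bits, num_pages, table_size)
```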
EAT without TLB (single-level page table):
EAT = 2 × ma (one for page table, one for data)
EAT with TLB:
EAT = h × (TLB_time + ma) + (1–h) × (TLB_time + 2×ma)
= ma + TLB_time + (1–h) × ma (exact algebraic expansion)
where h = TLB hit ratio
Example (ma = 100 ns, TLB lookup = 10 ns, h = 0.9):
EAT = 0.9 × (10 + 100) + 0.1 × (10 + 200)
= 0.9 × 110 + 0.1 × 210
= 99 + 21 = 120 ns
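The same TLB calculation as a one-function Python sketch (using the example numbers above):

```python
# EAT with TLB: hit -> TLB lookup + one memory access;
# miss -> TLB lookup + page-table access + data access.
def eat_tlb(h, tlb_ns, ma_ns):
    return h * (tlb_ns + ma_ns) + (1 - h) * (tlb_ns + 2 * ma_ns)

print(round(eat_tlb(0.9, 10, 100), 2))  # ~120 ns, as above
```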
Multilevel Paging
Memory accesses per reference (k-level paging, no TLB) = k + 1
(k page-table lookups + 1 data access)
Two-level: EAT = 3 × ma
Three-level: EAT = 4 × ma
Segmentation
Condition: Offset < Limit[segment] (else segmentation fault)
3. Virtual Memory Formulas
EAT = (1 – p) × ma + p × page_fault_time
p = page fault rate (0 ≤ p ≤ 1)
ma = memory access time
page_fault_time ≈ disk access time + restart overhead
Page Fault Rate for acceptable EAT:
EAT ≤ (1 + x%) × ma
Solve for maximum p
Number of Page Faults:
Depends on reference string and algorithm (trace manually)
Hit Ratio for Page Replacement:
Hit ratio = (References – Page faults) / References
Example (ma = 200 ns, p = 0.001, page fault service time = 8 ms = 8,000,000 ns):
EAT = (1 – 0.001) × 200 + 0.001 × 8,000,000
= 199.8 + 8,000 = 8,199.8 ns ≈ 8.2 microseconds
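Since page-fault counts must be traced per reference string, here is a minimal FIFO trace (the reference string is the classic one that demonstrates Belady’s anomaly, assumed for illustration):

```python
from collections import deque

# Count page faults under FIFO replacement for a reference string.
def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:          # evict the oldest resident page
                mem.discard(order.popleft())
            mem.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 -> Belady's anomaly
```

Note that hit ratio follows directly: (len(refs) – faults) / len(refs).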
Page Replacement Algorithm Performance
| Algorithm | Optimal? | Belady’s Anomaly? | Implementation |
|---|---|---|---|
| OPT (Optimal) | Yes (theoretical minimum) | No | Not possible in practice |
| FIFO | No | Yes | Simple queue |
| LRU | No (approx OPT) | No | Stack/counter |
| LFU | No | No | Frequency counter |
| MFU | No | No | Frequency counter |
| Clock (Second Chance) | No (approx LRU) | No | Circular buffer + reference bit |
Working Set Model
W(t, Δ) = set of pages referenced in the most recent Δ references at time t
Total demand = Σ |W(t, Δ)| for all processes
If total demand > available frames → Thrashing
Degree of multiprogramming for stability:
Number of processes = Available frames / Average working set size
4. Disk Scheduling & I/O Formulas
Access Time = Seek Time + Rotational Latency + Transfer Time
Average Rotational Latency:
Avg Rotational Latency = (1/2) × (60,000 / RPM) ms
= 30,000 / RPM ms
Transfer Time:
Transfer Time = Data size / Transfer rate
Rotation Time (1 full rotation):
T_rot = 60,000 / RPM ms
Transfer Time for one sector:
T_transfer = T_rot / Sectors per track
Example (5400 RPM, seek = 6 ms, 500 sectors/track, one-sector transfer):
T_rot = 60,000 / 5400 = 11.11 ms
Avg latency = 5.56 ms
T_transfer = 11.11 / 500 = 0.022 ms
Access = 6 + 5.56 + 0.022 = 11.58 ms
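The disk-timing formulas above, sketched in Python with the example parameters:

```python
# Access time = seek + average rotational latency + transfer (one sector).
def access_time_ms(rpm, seek_ms, sectors_per_track):
    t_rot = 60_000 / rpm                   # one full rotation, in ms
    avg_latency = t_rot / 2                # half a rotation on average
    t_transfer = t_rot / sectors_per_track # one sector of a full track
    return seek_ms + avg_latency + t_transfer

print(round(access_time_ms(5400, 6, 500), 2))  # ~11.58 ms, as above
```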
RAID Capacity Formulas
RAID 1: Usable = N × D / 2 (mirroring; = D for a two-disk mirror)
RAID 5: Usable = (N – 1) × D
RAID 6: Usable = (N – 2) × D
RAID 10: Usable = N × D / 2
(N = number of disks, D = disk size)
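A small lookup-table sketch of the usable-capacity formulas (N disks of size D each; disk counts below are assumed examples):

```python
# Usable capacity per RAID level, from the formulas above.
def raid_usable(level, n, d):
    return {"0": n * d,            # striping, no redundancy
            "1": n * d // 2,       # mirrored pairs
            "5": (n - 1) * d,      # one disk's worth of parity
            "6": (n - 2) * d,      # two disks' worth of parity
            "10": n * d // 2}[level]

# e.g. four 2 TB disks:
print(raid_usable("5", 4, 2), raid_usable("10", 4, 2))  # 6 4 (TB)
```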
Effective Access Time with Disk Cache
EAT = h × cache access time + (1 – h) × disk access time
h = cache hit ratio
5. File System Formulas
k = Block size / Pointer size
Max file size (Unix inode, 12 direct + 1 SI + 1 DI + 1 TI):
Max size = (12 + k + k² + k³) × Block size
Disk accesses to read a file block (no cache):
Direct block (1–12): 1 (inode) + 1 (data) = 2
Single indirect: 1 + 1 + 1 = 3
Double indirect: 1 + 2 + 1 = 4
Triple indirect: 1 + 3 + 1 = 5
Indexed allocation max file size:
Max size = (Block size / Pointer size) × Block size
FAT size:
FAT size = (Volume size / Cluster size) × FAT entry size
Bitmap size:
Bitmap size (bytes) = Total blocks / 8 (one bit per block)
Example (4 KB blocks, 4-byte pointers):
k = 4096 / 4 = 1024
Max size = (12 + 1024 + 1024² + 1024³) × 4096 bytes ≈ 4 TB
To read block #1036 (first double-indirect block):
Disk accesses = 1 (inode) + 1 (double-indirect block) + 1 (single-indirect block) + 1 (data) = 4
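The inode max-file-size arithmetic above, as a short sketch (4 KB blocks and 4-byte pointers assumed, as in the example):

```python
# Max file size for the classic Unix inode layout:
# 12 direct + 1 single- + 1 double- + 1 triple-indirect pointer.
block, ptr, direct = 4096, 4, 12

k = block // ptr                       # pointers per indirect block = 1024
max_blocks = direct + k + k**2 + k**3  # addressable data blocks
max_size = max_blocks * block          # ~4 TB with these parameters

print(k, max_size)
```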
6. Deadlock Formulas
Minimum instances of a resource type to prevent deadlock:
R ≥ n × (m – 1) + 1
where R = total resource instances, n = processes, m = max needed by one process
Necessary condition check (RAG):
Cycle in RAG is NECESSARY but not sufficient for deadlock
Exception: If each resource type has exactly one instance,
cycle ↔ deadlock (both necessary and sufficient)
Banker’s Algorithm – Safe State:
Need[i] = Max[i] – Allocation[i]
A state is safe if a safe sequence exists:
Work ≥ Need[i] → grant → Work += Allocation[i]
For resources where each process needs at most one instance:
Deadlock is impossible (a process never holds one instance while waiting for another)
R = k instances are enough to let k processes run simultaneously
Example (n = 5 processes, each needing at most m = 3 instances):
R ≥ 5 × (3 – 1) + 1 = 10 + 1 = 11 instances
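The deadlock-prevention bound as a one-liner check (using the example values n = 5, m = 3):

```python
# R >= n*(m-1) + 1: even if every process holds m-1 instances,
# one spare instance lets some process finish and release its resources.
def min_resources(n, m):
    return n * (m - 1) + 1

print(min_resources(5, 3))  # 11, as in the example above
```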
7. Quick Facts for GATE
| Topic | Key Fact |
|---|---|
| FCFS scheduling | Non-preemptive; no starvation; simplest; suffers convoy effect |
| SJF optimal | SJF minimises average waiting time among all non-preemptive algorithms |
| SRTN optimal | SRTN (preemptive SJF) minimises average waiting time among ALL scheduling algorithms |
| Round Robin | Best response time; large quantum → behaves like FCFS; small quantum → high context-switch overhead |
| Priority starvation fix | Aging: gradually increase priority of waiting processes |
| Peterson’s solution | Works for 2 processes; not scalable; requires busy waiting |
| Binary semaphore | Values: 0 or 1; equivalent to mutex; can be used for mutual exclusion |
| Counting semaphore | Values: 0 to N; used for resource pools |
| Monitor advantage | Automatic mutual exclusion; only one process active inside monitor at a time |
| Coffman conditions | All 4 must hold for deadlock: ME, Hold & Wait, No Preemption, Circular Wait |
| Paging: no ext. fragmentation | Paging eliminates external fragmentation; has internal fragmentation (last page) |
| Segmentation: no int. fragmentation | Segmentation eliminates internal fragmentation; has external fragmentation |
| Belady’s anomaly | FIFO only: more frames can cause MORE page faults; LRU/OPT are stack algorithms (no anomaly) |
| OPT page replacement | Replace page that won’t be used for the longest time; minimum page faults but requires future knowledge |
| Thrashing cause | Too many processes, not enough frames; each process’s working set doesn’t fit in allocated frames |
| SSTF starvation | SSTF can starve far requests; SCAN/LOOK do not starve |
| Inode doesn’t store name | File name is in directory entry; inode stores everything else |
| Hard link reference count | Inode deleted only when reference count = 0; symbolic links don’t affect count |
| RAID 0 | Zero fault tolerance; fastest I/O; data lost if any disk fails |
| RAID 5 parity | Can recover from exactly 1 disk failure; usable = (N-1)/N of total capacity |
Critical Section Requirements
1. Mutual Exclusion: Only one process in CS at a time
2. Progress: If no process is in the CS and some processes wish to enter, the choice of which enters cannot be postponed indefinitely
3. Bounded Waiting: A limit exists on how many times others can enter CS before a waiting process gets in
Bonus (not always required): No busy waiting (sleep-based synchronisation)
Scheduling Algorithm Properties
| Algorithm | Preemptive? | Starvation? | Optimal for? |
|---|---|---|---|
| FCFS | No | No | Fairness |
| SJF | No | Yes (long jobs) | Min avg WT (non-preemptive) |
| SRTN | Yes | Yes (long jobs) | Min avg WT (overall) |
| Priority | Both | Yes (low priority) | Importance-based |
| Round Robin | Yes | No | Time-sharing, response time |
| Multilevel Queue | Both | Possible | Mixed workloads |
| Multilevel Feedback | Yes | No (with aging) | General purpose |