Operating System Formulas Quick Reference – GATE CS Complete Cheat Sheet

What This Page Covers

  • CPU scheduling formulas: TAT, WT, RT, CPU utilisation
  • Synchronisation: no formulas, key conditions listed
  • Memory management: EAT, TLB, page table size, multilevel paging
  • Virtual memory: EAT with page faults, page fault rate
  • Disk scheduling: access time, seek distance, RAID capacity
  • File systems: inode max file size, disk accesses

Tip: Bookmark this page for GATE revision. Every formula includes a quick worked example.

1. CPU Scheduling Formulas

Completion Time (CT): Time at which process finishes execution

Turnaround Time (TAT):
TAT = CT – Arrival Time

Waiting Time (WT):
WT = TAT – Burst Time
(Burst Time = total CPU time required; the formula holds for preemptive and non-preemptive schedules alike, provided the process does no I/O)

Response Time (RT):
RT = Time of first CPU allocation – Arrival Time

CPU Utilisation:
CPU Util = (Total CPU busy time) / (Total time) × 100%

Throughput:
Throughput = Number of processes completed / Total time

Average TAT / WT / RT:
Average = Sum of individual values / Number of processes

Example: Process P: Arrival=0, Burst=5, CT=8
TAT = 8 – 0 = 8 ms
WT = 8 – 5 = 3 ms
RT = (First CPU given at t=3) – 0 = 3 ms
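
The formulas above can be verified with a short script (a minimal sketch in Python, using the values from the worked example):

```python
# Scheduling metrics for process P: Arrival=0, Burst=5, CT=8, first CPU at t=3.
arrival, burst, completion, first_cpu = 0, 5, 8, 3

tat = completion - arrival   # Turnaround Time = CT - Arrival Time
wt = tat - burst             # Waiting Time  = TAT - Burst Time
rt = first_cpu - arrival     # Response Time = first allocation - Arrival Time

print(tat, wt, rt)  # 8 3 3
```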

Little’s Law (for queuing)

L = λ × W
L = average number of processes in system
λ = average arrival rate
W = average time a process spends in system
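
As a quick numeric check of Little's Law (the arrival rate and residence time below are assumed example values, not from the text):

```python
# Little's Law: L = lambda * W.
arrival_rate = 5.0    # processes arriving per second (assumed)
time_in_system = 2.0  # average seconds a process spends in the system (assumed)

avg_in_system = arrival_rate * time_in_system
print(avg_in_system)  # 10.0 processes in the system on average
```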

CPU Utilisation with I/O (n processes, each I/O fraction p)

CPU Utilisation = 1 – p^n
p = fraction of time a process waits for I/O
n = degree of multiprogramming
p = 0.8, n = 3: Utilisation = 1 – 0.8³ = 1 – 0.512 = 48.8%
n = 10: Utilisation = 1 – 0.8¹⁰ ≈ 89.3%
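
The two utilisation figures above can be reproduced directly:

```python
# CPU utilisation with degree of multiprogramming n, I/O-wait fraction p: 1 - p^n.
def cpu_utilisation(p, n):
    return 1 - p ** n

print(round(cpu_utilisation(0.8, 3), 3))   # 0.488
print(round(cpu_utilisation(0.8, 10), 3))  # 0.893
```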

2. Memory Management Formulas

Physical Address (Paging):
Physical Address = Frame number × Page size + Page offset

Logical Address split:
For logical address space = 2^n bytes, page size = 2^d bytes:
Page number bits = n – d, Offset bits = d

Number of pages:
Pages = Process size / Page size (round up)

Page Table Size:
Page table entries = Number of pages = 2^(n–d)
Page table size = 2^(n–d) × entry size

Internal Fragmentation (Paging):
Max internal fragmentation = Page size – 1 byte
Average internal fragmentation = Page size / 2
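
Putting the address-split and page-table formulas together (the 32-bit address space, 4 KB pages, and 4 B entries below are assumed example values):

```python
# Page-table sizing for a 2^n-byte logical address space with 2^d-byte pages.
n, page_size, entry_size = 32, 4096, 4   # assumed: 4 GB space, 4 KB pages, 4 B PTEs

d = page_size.bit_length() - 1           # offset bits, since page size = 2^d
page_number_bits = n - d                 # bits that index the page table
entries = 2 ** page_number_bits          # page-table entries = number of pages
table_size = entries * entry_size        # page-table size in bytes

print(d, page_number_bits, entries, table_size)  # 12 20 1048576 4194304 (4 MB)
```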

EAT without TLB (single-level page table):
EAT = 2 × ma (one for page table, one for data)

EAT with TLB:
EAT = h × (TLB_time + ma) + (1–h) × (TLB_time + 2×ma)
= TLB_time + ma + (1–h) × ma (exact simplification, not an approximation)
where h = TLB hit ratio, ma = memory access time

EAT Example: TLB hit ratio = 0.9, TLB access = 10 ns, Memory = 100 ns
EAT = 0.9 × (10 + 100) + 0.1 × (10 + 200)
= 0.9 × 110 + 0.1 × 210
= 99 + 21 = 120 ns
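
The same computation as a reusable function (a sketch; all times in nanoseconds):

```python
# EAT with a TLB: hit -> TLB + one memory access; miss -> TLB + two memory accesses.
def eat_tlb(hit_ratio, tlb_ns, mem_ns):
    return hit_ratio * (tlb_ns + mem_ns) + (1 - hit_ratio) * (tlb_ns + 2 * mem_ns)

print(round(eat_tlb(0.9, 10, 100), 1))  # 120.0 ns
```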

Multilevel Paging

For a k-level page table:
Memory accesses per logical reference = k + 1
(k page-table lookups + 1 data access)

Two-level: EAT = 3 × ma
Three-level: EAT = 4 × ma

Segmentation

Physical Address = Base[segment] + Offset
Condition: Offset < Limit[segment] (else segmentation fault)

3. Virtual Memory Formulas

EAT with Page Faults:
EAT = (1 – p) × ma + p × page_fault_time
p = page fault rate (0 ≤ p ≤ 1)
ma = memory access time
page_fault_time ≈ disk access time + restart overhead

Page Fault Rate for an acceptable slowdown of x%:
EAT ≤ (1 + x/100) × ma
Substitute the EAT formula and solve for the maximum p

Number of Page Faults:
Depends on reference string and algorithm (trace manually)

Hit Ratio for Page Replacement:
Hit ratio = (References – Page faults) / References

EAT Example: ma = 200 ns, page fault time = 8 ms = 8,000,000 ns, p = 0.001
EAT = (1 – 0.001) × 200 + 0.001 × 8,000,000
= 199.8 + 8,000 = 8,199.8 ns ≈ 8.2 microseconds
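
The same page-fault EAT as code (a sketch; all times converted to nanoseconds):

```python
# EAT with page faults: (1 - p) * ma + p * page_fault_time (same units throughout).
def eat_fault(p, ma_ns, fault_ns):
    return (1 - p) * ma_ns + p * fault_ns

print(eat_fault(0.001, 200, 8_000_000))  # ~8199.8 ns, i.e. about 8.2 microseconds
```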

Page Replacement Algorithm Performance

Algorithm | Optimal? | Belady’s Anomaly? | Implementation
OPT (Optimal) | Yes (theoretical minimum) | No | Not possible in practice
FIFO | No | Yes | Simple queue
LRU | No (approximates OPT) | No | Stack/counter
LFU | No | No | Frequency counter
MFU | No | No | Frequency counter
Clock (Second Chance) | No (approximates LRU) | No | Circular buffer + reference bit

Working Set Model

Working Set W(t, Δ) = set of pages referenced in last Δ time units
Total demand = Σ |W(t, Δ)| for all processes
If total demand > available frames → Thrashing

Degree of multiprogramming for stability:
Number of processes = Available frames / Average working set size

4. Disk Scheduling & I/O Formulas

Disk Access Time:
Access Time = Seek Time + Rotational Latency + Transfer Time

Average Rotational Latency:
Avg Rotational Latency = (1/2) × (60,000 / RPM) ms
= 30,000 / RPM ms

Transfer Time:
Transfer Time = Data size / Transfer rate

Rotation Time (1 full rotation):
T_rot = 60,000 / RPM ms

Transfer Time for one sector:
T_transfer = T_rot / Sectors per track

Example: 5400 RPM, 500 sectors/track, seek = 6 ms, block = 1 sector
T_rot = 60,000 / 5400 = 11.11 ms
Avg latency = 5.56 ms
T_transfer = 11.11 / 500 = 0.022 ms
Access = 6 + 5.56 + 0.022 = 11.58 ms
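
The disk example above, scripted (a sketch; all times in milliseconds):

```python
# Access time = seek + average rotational latency + transfer time.
def disk_access_ms(seek_ms, rpm, sectors_per_track, sectors=1):
    t_rot = 60_000 / rpm                              # one full rotation, in ms
    avg_latency = t_rot / 2                           # half a rotation on average
    t_transfer = sectors * t_rot / sectors_per_track  # time to pass the sectors
    return seek_ms + avg_latency + t_transfer

print(round(disk_access_ms(6, 5400, 500), 2))  # 11.58 ms
```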

RAID Capacity Formulas

RAID 0: Usable = N × D
RAID 1: Usable = D (all N disks mirror the same data; one disk’s capacity is usable)
RAID 5: Usable = (N – 1) × D
RAID 6: Usable = (N – 2) × D
RAID 10: Usable = N × D / 2
(N = number of disks, D = disk size)
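
The capacity formulas above, collected into one helper (a sketch; the 8 × 2 TB configuration is an assumed example):

```python
# Usable capacity per RAID level, for N disks of size D each.
def raid_usable(level, n, d):
    return {
        0: n * d,        # striping: everything usable, no redundancy
        1: d,            # N-way mirror: one disk's worth of data
        5: (n - 1) * d,  # one disk's worth of distributed parity
        6: (n - 2) * d,  # two disks' worth of parity
        10: n * d // 2,  # mirrored stripes: half the raw capacity
    }[level]

print([raid_usable(lv, 8, 2) for lv in (0, 1, 5, 6, 10)])  # [16, 2, 14, 12, 8] TB
```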

Effective Access Time with Disk Cache

EAT = h × T_cache + (1 – h) × T_disk
h = cache hit ratio

5. File System Formulas

Pointers per index block:
k = Block size / Pointer size

Max file size (Unix inode, 12 direct + 1 SI + 1 DI + 1 TI):
Max size = (12 + k + k² + k³) × Block size

Disk accesses to read a file block (no cache):
Direct block (1–12): 1 (inode) + 1 (data) = 2
Single indirect: 1 + 1 + 1 = 3
Double indirect: 1 + 2 + 1 = 4
Triple indirect: 1 + 3 + 1 = 5

Indexed allocation max file size:
Max size = (Block size / Pointer size) × Block size

FAT size:
FAT size = (Volume size / Cluster size) × FAT entry size

Bitmap size:
Bitmap size = Total blocks / 8 bytes (one bit per block)
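
For concreteness (the 64 GiB volume, 4 KiB clusters, and 4-byte FAT entries below are assumed example values):

```python
# FAT and free-space bitmap sizing.
volume = 64 * 2**30    # 64 GiB volume (assumed)
cluster = 4 * 2**10    # 4 KiB clusters (assumed)
fat_entry = 4          # 4 bytes per FAT entry, as in FAT32 (assumed)

clusters = volume // cluster          # 2^24 clusters
fat_size = clusters * fat_entry       # 64 MiB FAT
bitmap_size = clusters // 8           # 1 bit per cluster -> 2 MiB bitmap

print(fat_size, bitmap_size)  # 67108864 2097152
```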

Inode Example: Block size = 4 KB, pointer = 4 B
k = 4096 / 4 = 1024
Max size = (12 + 1024 + 1024² + 1024³) × 4096 bytes ≈ 4 TB

To read block #1036 (blocks numbered from 0, so this is the first double-indirect block):
Disk accesses = 1 (inode) + 1 (double-indirect block) + 1 (single-indirect block) + 1 (data) = 4
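
Both inode calculations above can be scripted (a sketch using the 4 KB / 4 B values from the example; blocks are numbered from 0):

```python
# Unix inode: 12 direct pointers + single, double and triple indirect blocks.
block, ptr = 4096, 4
k = block // ptr                             # pointers per index block = 1024

max_size = (12 + k + k**2 + k**3) * block    # ~4 TB maximum file size

def accesses(block_no):
    """Disk accesses (no cache) to read logical file block block_no."""
    if block_no < 12:
        return 2      # inode + data
    if block_no < 12 + k:
        return 3      # inode + SI block + data
    if block_no < 12 + k + k**2:
        return 4      # inode + DI block + SI block + data
    return 5          # inode + TI + DI + SI + data

print(accesses(1036))  # 4 -- block #1036 is the first double-indirect block
```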

6. Deadlock Formulas

Minimum instances of a resource type to guarantee no deadlock
(n processes, each needing at most m instances):
R ≥ n × (m – 1) + 1
where R = total instances of that resource type

Necessary condition check (RAG):
Cycle in RAG is NECESSARY but not sufficient for deadlock
Exception: If each resource type has exactly one instance,
cycle ↔ deadlock (both necessary and sufficient)

Banker’s Algorithm – Safe State:
Need[i] = Max[i] – Allocation[i]
A state is safe if a safe sequence exists:
Work ≥ Need[i] → grant → Work += Allocation[i]

For single-instance needs (each process needs at most 1 instance):
Minimum resources so all n processes can run: n (trivially deadlock-free, since m = 1)
Minimum resources so k processes can hold one simultaneously: R = k

Formula Example: 5 processes each need at most 3 instances of a resource.
Min instances to guarantee no deadlock:
R ≥ 5 × (3 – 1) + 1 = 10 + 1 = 11 instances
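
The guarantee can be written as a one-liner (a sketch):

```python
# Fewest instances of a resource so that n processes, each needing at most m,
# can never all be blocked: R >= n*(m - 1) + 1.
def min_instances(n, m):
    return n * (m - 1) + 1

print(min_instances(5, 3))  # 11
```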

7. Quick Facts for GATE

Topic | Key Fact
FCFS scheduling | Non-preemptive; no starvation; simplest; suffers convoy effect
SJF optimal | SJF minimises average waiting time among all non-preemptive algorithms
SRTN optimal | SRTN (preemptive SJF) minimises average waiting time among ALL scheduling algorithms
Round Robin | Best response time; large quantum q ≈ FCFS; small q → high context-switch overhead
Priority starvation fix | Aging: gradually increase priority of waiting processes
Peterson’s solution | Works for 2 processes; not scalable; requires busy waiting
Binary semaphore | Values: 0 or 1; equivalent to mutex; can be used for mutual exclusion
Counting semaphore | Values: 0 to N; used for resource pools
Monitor advantage | Automatic mutual exclusion; only one process active inside monitor at a time
Coffman conditions | All 4 must hold for deadlock: ME, Hold & Wait, No Preemption, Circular Wait
Paging: no ext. fragmentation | Paging eliminates external fragmentation; has internal fragmentation (last page)
Segmentation: no int. fragmentation | Segmentation eliminates internal fragmentation; has external fragmentation
Belady’s anomaly | FIFO only: more frames can cause MORE page faults; LRU/OPT are stack algorithms (no anomaly)
OPT page replacement | Replace the page that won’t be used for the longest time; minimum page faults but requires future knowledge
Thrashing cause | Too many processes, not enough frames; each process’s working set doesn’t fit in its allocated frames
SSTF starvation | SSTF can starve far requests; SCAN/LOOK do not starve
Inode doesn’t store name | File name is in the directory entry; the inode stores everything else
Hard link reference count | Inode deleted only when reference count = 0; symbolic links don’t affect the count
RAID 0 | Zero fault tolerance; fastest I/O; data lost if any disk fails
RAID 5 parity | Can recover from exactly 1 disk failure; usable = (N–1)/N of total capacity

Critical Section Requirements

Any correct critical section solution must satisfy:
1. Mutual Exclusion: Only one process in CS at a time
2. Progress: If no process is in CS, selection cannot be postponed indefinitely
3. Bounded Waiting: A limit exists on how many times others can enter CS before a waiting process gets in
Bonus (not always required): No busy waiting (sleep-based synchronisation)

Scheduling Algorithm Properties

Algorithm | Preemptive? | Starvation? | Optimal for?
FCFS | No | No | Fairness
SJF | No | Yes (long jobs) | Min avg WT (non-preemptive)
SRTN | Yes | Yes (long jobs) | Min avg WT (overall)
Priority | Both | Yes (low priority) | Importance-based
Round Robin | Yes | No | Time-sharing, response time
Multilevel Queue | Both | Possible | Mixed workloads
Multilevel Feedback | Yes | No (with aging) | General purpose