This article is a comparison chart and technical reference sheet that integrates provenance, algorithmic variants, and pragmatic engineering factors (speed, memory, scope, etc.) for several popular sorting algorithms. Use it as an advanced technical reference.

This is a highly technical, to-the-point reference sheet and comparison chart. For a more introductory treatment, see Sorting Algorithms: A Comprehensive Guide.
Within contemporary computer architecture, ordering operations underpin a broad spectrum of computational workflows—index construction, relational joins, external-memory analytics, cache-oblivious layout transformations, and GPU rendering pipelines. Although the asymptotic landscape is ostensibly settled, micro-architectural mismatches can trigger pathological cache-miss cascades, TLB pressure, or write-amplification on non-volatile media. Conversely, a judiciously selected method can approach the linear-time information-entropy bound for favorable input distributions while respecting stringent energy and memory budgets.
Dimension | Dichotomies | Operational Relevance |
---|---|---|
Comparator Semantics | Comparison-driven (≤, ≥) • distribution-driven (radix, counting) | Decision-tree entropy lower-bounds comparison sorts by Ω(n log n); distribution sorts break that barrier when keys exhibit exploitable structure. |
Residence | Internal (RAM-resident) • external (hierarchical I/O) | Dictates whether the algorithm may rely on O(1) random access or must stream in blocks. |
Stability | Stable • unstable | Stability preserves the relative order of key‑equivalent records—essential for lexicographic, multi‑key pipelines. |
Adaptivity | Data-adaptive • oblivious | Adaptive algorithms (e.g. TimSort) collapse to O(n) on partially ordered inputs; oblivious schemes ignore presortedness. |
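Stability in particular is easy to demonstrate concretely. A minimal Python sketch (the record values are illustrative) showing how a stable sort lets a multi-key ordering be built from successive single-key passes:

```python
# Stable sorts preserve the relative order of equal keys, so a
# lexicographic (dept, then salary) ordering can be built from two
# successive single-key passes: secondary key first, primary key second.
records = [("ops", 90), ("eng", 70), ("eng", 50), ("ops", 60)]

by_salary = sorted(records, key=lambda r: r[1])              # secondary key
by_dept_then_salary = sorted(by_salary, key=lambda r: r[0])  # primary key

# Within each department, records keep their salary order
# because Python's sorted() is guaranteed stable.
```

An unstable sort in the second pass could scramble the salary order within each department, which is why stability is listed as essential for multi-key pipelines.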
Avoid rote complexity tables; instead apply this four-factor calculus:
Pragmatic aphorism — When the distribution is opaque and a few kilobytes of scratchpad are acceptable, TimSort or Musser’s introsort offer consistently favorable latency-throughput profiles.
Canonical descriptions augmented with historiographic notes and micro-architectural caveats.
Origins. Iverson’s APL idioms (1962) catalyzed its didactic debut.
Current status. Pedagogical only; for arrays of fewer than roughly 10 elements its locality sometimes offsets the O(n²) cost, but practical alternatives abound.
procedure bubble_sort(A):
repeat
swapped ← false
for i ← 1 → |A|-1 do
if A[i-1] > A[i] then
swap A[i-1], A[i]
swapped ← true
until not swapped
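The pseudocode above translates directly to runnable Python; this sketch keeps the early-exit `swapped` flag, which gives the Θ(n) best case on already-sorted input:

```python
def bubble_sort(a):
    """In-place bubble sort with early exit when a pass makes no swaps."""
    n = len(a)
    while True:
        swapped = False
        for i in range(1, n):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]  # bubble the larger element up
                swapped = True
        if not swapped:
            return a
```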
Selection sort's endurance rationale. At most n−1 swaps (roughly two writes per element in total) make it suitable for EEPROM or flash with limited write cycles.
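The write-frugality claim can be checked directly; a minimal selection-sort sketch that tallies element writes:

```python
def selection_sort(a):
    """In-place selection sort; returns (sorted list, number of element writes)."""
    writes = 0
    n = len(a)
    for i in range(n - 1):
        m = min(range(i, n), key=a.__getitem__)  # index of the minimum element
        if m != i:
            a[i], a[m] = a[m], a[i]              # one swap = two writes
            writes += 2
    return a, writes
```

Write count is bounded by 2(n−1) regardless of input order, in contrast to bubble or insertion sort whose writes grow quadratically on adversarial inputs.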
Insertion sort's cache intimacy. Linear probing of contiguous cells delivers near-optimal spatial locality; compilers and STL hybrids therefore switch to insertion sort for sub-arrays of ≤ 32 elements.
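A runnable insertion-sort sketch, written over a sub-range since that is how hybrids invoke it on small partitions:

```python
def insertion_sort(a, lo=0, hi=None):
    """Stable, in-place insertion sort over a[lo:hi]."""
    if hi is None:
        hi = len(a)
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
    return a
```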
Shell sort's gap-sequence research. Sedgewick, Tokuda, and Pratt sequences empirically converge near O(n^1.25). The algorithm's in-place nature and low branch-misprediction footprint still justify its niche in legacy embedded toolchains.
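A Shell-sort sketch using Knuth's 3h+1 gaps for brevity; the Sedgewick or Tokuda sequences mentioned above slot into the same loop structure:

```python
def shell_sort(a):
    """In-place Shell sort with Knuth's 3h+1 gap sequence (1, 4, 13, 40, ...)."""
    n = len(a)
    h = 1
    while h < n // 3:
        h = 3 * h + 1
    while h >= 1:
        for i in range(h, n):            # h-gapped insertion pass
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]
                j -= h
            a[j] = key
        h //= 3                          # shrink the gap toward 1
    return a
```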
Merge sort's pointer-level variant. On linked lists, tortoise–hare splitting enables O(1) auxiliary space. Bottom-up blocking harmonizes with SIMD widths and GPU shared-memory tiles.
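A bottom-up (iterative) merge-sort sketch on arrays, mirroring the blocked, run-doubling structure described above:

```python
def merge_sort_bottom_up(a):
    """Stable bottom-up merge sort: merge runs of width 1, 2, 4, ..."""
    n = len(a)
    src, dst = list(a), [None] * n
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):      # stable two-way merge (<= keeps order)
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]; i += 1
                else:
                    dst[k] = src[j]; j += 1
        src, dst = dst, src              # ping-pong buffers between passes
        width *= 2
    return src
```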
Quick sort's canonical formulation. Hoare's 1961 two-index partitioning.
Pivot robustness. Median-of-ninther, median-of-medians, or random pivots mitigate adversarial cases.
Introspective safeguard. Musser’s introsort (1997) monitors recursion depth (≈ 2 log₂ n) and falls back to heap sort, enforcing O(n log n) worst-case bounds.
function partition(A, lo, hi):
pivot ← A[(lo+hi) // 2] # optional median‑of‑three
i ← lo; j ← hi
while true:
while A[i] < pivot: i ← i+1
while A[j] > pivot: j ← j‑1
if i ≥ j: return j
swap A[i], A[j]
i ← i+1; j ← j‑1
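Wired into a recursive driver, the partition routine above yields a complete quick sort; a runnable Python rendering (middle-element pivot, as in the pseudocode):

```python
def hoare_partition(a, lo, hi):
    """Hoare two-index partition around the middle element; returns split point j."""
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while True:
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]
        i += 1; j -= 1

def quick_sort(a, lo=0, hi=None):
    """In-place quick sort; note the split point j stays in the left partition."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quick_sort(a, lo, p)
        quick_sort(a, p + 1, hi)
    return a
```

With Hoare's scheme the returned index belongs to the left partition, so the recursion is over (lo, p) and (p+1, hi), not (lo, p−1).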
Heap sort's linear-time heapification. Floyd's bottom-up sift-down eliminates the naïve O(n log n) build.
Locality penalty. Scattered memory accesses degrade cache behavior; quick sort often wins on wall-clock time unless memory is at a premium.
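A heap-sort sketch showing Floyd's bottom-up construction: sifting down from the last internal node yields the O(n) build, after which the maximum is repeatedly swapped to the tail:

```python
def heap_sort(a):
    """In-place heap sort with Floyd's O(n) bottom-up heap construction."""
    n = len(a)

    def sift_down(root, end):
        while (child := 2 * root + 1) < end:
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                      # pick the larger child
            if a[root] >= a[child]:
                return                          # heap property restored
            a[root], a[child] = a[child], a[root]
            root = child

    for root in range(n // 2 - 1, -1, -1):      # Floyd: heapify leaves-upward
        sift_down(root, n)
    for end in range(n - 1, 0, -1):             # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```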
Design synthesis. Peters (2002) merged natural-run detection, binary insertion, and exponential galloping. The min-run heuristic (≈ 32) targets L1 capacity.
Caveat. A 2015 formal analysis of the Java port exposed a run-stack overflow when the merge invariants were violated, underscoring how subtle those invariants are.
Starts with quick sort, switches to heap sort at a recursion-depth sentinel, and finishes tiny partitions with insertion sort. std::sort in the C++ STL adopts this tri-modal choreography, combining average-case agility with worst-case certainty.
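A condensed, illustrative introsort sketch showing the three modes and the 2·log₂ n depth sentinel; the `heapq`-based fallback stands in for an in-place heap sort, and the cutoff of 16 is one common choice:

```python
import heapq

INSERTION_CUTOFF = 16  # illustrative small-partition threshold

def introsort(a):
    """Tri-modal sort: quick sort, heap-sort fallback at the depth
    sentinel (about 2*log2 n), insertion sort for tiny partitions."""
    def insertion(lo, hi):                       # inclusive range [lo, hi]
        for i in range(lo + 1, hi + 1):
            key, j = a[i], i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]; j -= 1
            a[j + 1] = key

    def heap_fallback(lo, hi):                   # heapq stand-in for heap sort
        heap = a[lo:hi + 1]
        heapq.heapify(heap)
        for k in range(lo, hi + 1):
            a[k] = heapq.heappop(heap)

    def partition(lo, hi):                       # Hoare scheme, middle pivot
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while True:
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i >= j: return j
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1

    def sort(lo, hi, depth):
        if hi - lo < INSERTION_CUTOFF:
            insertion(lo, hi)
        elif depth == 0:
            heap_fallback(lo, hi)                # depth sentinel reached
        else:
            p = partition(lo, hi)
            sort(lo, p, depth - 1)
            sort(p + 1, hi, depth - 1)

    if a:
        sort(0, len(a) - 1, 2 * max(1, len(a)).bit_length())
    return a
```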
Counting sort's auxiliary memory scales with the key domain k; compress sparse domains via perfect hashing or bucket compression first.
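A counting-sort sketch over integer keys in [0, k); the reverse output scan is what makes it stable:

```python
def counting_sort(a, key_bound):
    """Stable counting sort for integer keys in [0, key_bound)."""
    count = [0] * key_bound
    for x in a:
        count[x] += 1
    for k in range(1, key_bound):       # prefix sums: count[k] = # of keys <= k
        count[k] += count[k - 1]
    out = [0] * len(a)
    for x in reversed(a):               # reverse scan preserves stability
        count[x] -= 1
        out[count[x]] = x
    return out
```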
Digit ordering. LSB orientation preserves stability and streamability; MSB orientation yields quick-sort-like partition recursion.
Word-size engineering. On 64-bit integers, 11-bit buckets (2048) need six passes—trading bandwidth for cache occupancy.
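An LSD radix-sort sketch for unsigned integers; it defaults to 8-bit digits for readability, and the text's 11-bit variant corresponds to `bits=11`:

```python
def radix_sort_unsigned(a, bits=8):
    """LSD radix sort for non-negative integers using 2**bits buckets per pass."""
    if not a:
        return a
    mask = (1 << bits) - 1
    passes = (max(a).bit_length() + bits - 1) // bits
    for p in range(passes):
        shift = p * bits
        buckets = [[] for _ in range(1 << bits)]
        for x in a:                       # stable per-digit scatter
            buckets[(x >> shift) & mask].append(x)
        a = [x for b in buckets for x in b]
    return a
```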
Bucket sort's effectiveness degrades when the hash deviates from uniformity; sort within each bucket using insertion sort or quick sort for resilience.
FlashSort (expository)
Hybrid classification plus insertion achieves quasi-linear time on uniformly distributed floats but enjoys little industrial adoption.
Algorithm | Best Time | Average Time | Worst Time | Space | Stable? |
---|---|---|---|---|---|
Bubble Sort | Θ(n) | Θ(n²) | Θ(n²) | Θ(1) | ✓ |
Insertion Sort | Θ(n) | Θ(n²) | Θ(n²) | Θ(1) | ✓ |
Selection Sort | Θ(n²) | Θ(n²) | Θ(n²) | Θ(1) | ✗ |
Shell Sort | Θ(n log n) | Θ(n log² n) | Θ(n log² n) | Θ(1) | ✗ |
Merge Sort | Θ(n log n) | Θ(n log n) | Θ(n log n) | Θ(n) | ✓ |
Quick Sort | Θ(n log n) | Θ(n log n) | Θ(n²) | Θ(log n) | ✗ |
Heap Sort | Θ(n log n) | Θ(n log n) | Θ(n log n) | Θ(1) | ✗ |
Counting Sort | Θ(n+k) | Θ(n+k) | Θ(n+k) | Θ(n+k) | ✓ |
Radix Sort | Θ(n d) | Θ(n d) | Θ(n d) | Θ(n+k) | ✓ |
Bucket Sort | Θ(n+k) | Θ(n+k) | Θ(n²) | Θ(n+k) | ✓ |
IntroSort | Θ(n log n) | Θ(n log n) | Θ(n log n) | Θ(log n) | ✗ |
Timsort | Θ(n) | Θ(n log n) | Θ(n log n) | Θ(n) | ✓ |
External Merge Sort | Θ((n/B) log_{M/B}(n/B)) I/Os | Θ((n/B) log_{M/B}(n/B)) I/Os | Θ((n/B) log_{M/B}(n/B)) I/Os | Θ(n) | ✓ |
Knuth, D. E. The Art of Computer Programming, Vol. 3: Sorting and Searching. Addison-Wesley, 1998.
Bentley, J. L., & McIlroy, M. D. “Engineering a Sort Function.” Software—Practice & Experience 23 (1993): 1249–1265.
Peters, T. "Timsort" (design notes). CPython source, Objects/listsort.txt, 2002.
Blelloch, G. E. et al. “GPU Radix Sort: An Experimental Study.” PPoPP ’16: 556–567.
For the main guide to algorithms, see A Friendly Guide to Computer Science Algorithms by Category.