Types of Sorting Algorithms

July 23, 2024

Sorting Algorithms Cheatsheet

Sorting algorithms are the unsung heroes of computer science, turning chaotic data into organized, usable information. From powering search engines to optimizing databases, they're everywhere, quietly making your digital life faster and smoother. These algorithms vary widely in efficiency, complexity, and use cases, and that efficiency is usually expressed as time complexity: how runtime scales with input size. In this guide, we'll explore the main types of sorting algorithms, break down their strengths and weaknesses, and help you decide which one fits your needs.

What is Time Complexity?

Sorting efficiency hinges on time complexity—how long an algorithm takes as data grows. For a deep dive, check out my Time Complexity blog. It’s packed with details to level up your understanding.
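
As a rough illustration, here's a tiny snippet of my own (not from that post) showing how the dominant operation counts grow for an O(n^2) sort versus an O(n log n) sort as the input gets bigger:
import math

for n in (1_000, 10_000, 100_000):
    # n^2 dwarfs n*log2(n) very quickly as n grows.
    print(f"n={n:>7,}   n^2 ~ {n * n:>15,}   n*log2(n) ~ {int(n * math.log2(n)):>10,}")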

Types of Sorting Algorithms

Sorting algorithms split into two big camps: comparison-based (elements are ordered by comparing them to each other) and non-comparison-based (using counting or digit tricks instead). Let's dive in.

Comparison-Based Sorting Algorithms

1. Bubble Sort – O(n^2)

Bubble Sort is the beginner’s classic. It repeatedly compares adjacent elements and swaps them if they’re out of order, “bubbling” larger values to the end. It’s like sorting a deck of cards by repeatedly swapping neighbors.

  • Time Complexity: O(n^2) (worst and average).
  • Pros: Simple to code, intuitive.
  • Cons: Painfully slow for big lists; with n = 1,000, that's roughly a million comparisons.
  • Best For: Tiny datasets or teaching.
def bubbleSort(array):
    n = len(array)
    for i in range(n):
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - i - 1):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
    return array

2. Selection Sort – O(n^2)

Selection Sort splits the list into sorted and unsorted parts, repeatedly picking the smallest (or largest) unsorted element and adding it to the sorted side. Imagine picking the shortest person from a crowd, one by one.

  • Time Complexity: O(n^2) (all cases).
  • Pros: Minimal swaps, in-place sorting.
  • Cons: Still slow for large data—doesn’t adapt.
  • Best For: Small lists or when swap cost is high.
def selectionSort(array):
    length = len(array)
    for i in range(length):
        # Index of the smallest element in the unsorted part.
        min_index = i
        for j in range(i + 1, length):
            if array[j] < array[min_index]:
                min_index = j
        # Swap it into position i.
        array[i], array[min_index] = array[min_index], array[i]
    return array

3. Insertion Sort – O(n^2)

Insertion Sort builds a sorted list one item at a time, inserting each new element into its proper spot. It’s like sorting a hand of cards as you pick them up.

  • Time Complexity: O(n^2) (worst and average), O(n) (best, on nearly sorted data).
  • Pros: Fast for small or nearly sorted data, stable.
  • Cons: Struggles with large, random lists.
  • Best For: Small datasets or online sorting (data arriving incrementally; see the sketch after the code below).
def insertionSort(array):
    length = len(array)
    for i in range(1, length):
        index = i
        cur = array[i]
        # Shift larger elements one slot to the right...
        for j in range(i - 1, -1, -1):
            if array[j] > cur:
                array[j + 1] = array[j]
                index = j
            else:
                break
        # ...then drop the current element into the gap.
        array[index] = cur
    return array
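
When data really does arrive one item at a time (the "online" case above), Python's bisect module performs the same insert-into-place step; a small sketch with a made-up stream of values:
import bisect

sorted_so_far = []
for value in [5, 1, 4, 1, 5, 9, 2, 6]:   # stand-in for values arriving one at a time
    bisect.insort(sorted_so_far, value)  # insert each value at its sorted position
print(sorted_so_far)                     # [1, 1, 2, 4, 5, 5, 6, 9]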

4. Merge Sort – O(n log n)

Merge Sort divides the list into halves, sorts them recursively, and merges them back together. Picture splitting a messy stack of papers into smaller piles, sorting each, then combining them.

  • Time Complexity: O(n log n) (all cases).
  • Pros: Stable, predictable, great for linked lists or external storage.
  • Cons: Needs extra space (O(n)).
  • Best For: Large datasets when memory isn't tight.
def mergeSort(array):
    if len(array) <= 1:
        return array
    mid = len(array) // 2
    left = mergeSort(array[:mid])
    right = mergeSort(array[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        # "<=" keeps equal elements in their original order, so the sort is stable.
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

5. Quick Sort – O(n log n)

Quick Sort picks a pivot, partitions the array around it, and recursively sorts the partitions. Think of it as organizing a crowd by picking a height and splitting taller and shorter groups.

  • Time Complexity: O(n log n) (average), O(n^2) (worst, with a badly chosen pivot).
  • Pros: Fast in practice; typically implemented in place (see the in-place sketch after the code below).
  • Cons: Worst case is rare but possible; not stable.
  • Best For: General-purpose sorting, large datasets.
def quickSort(array):
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]
    # Readable but not in place: each call builds three new lists.
    left = [x for x in array if x < pivot]
    middle = [x for x in array if x == pivot]
    right = [x for x in array if x > pivot]
    return quickSort(left) + middle + quickSort(right)
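
The version above trades memory for readability. As a contrast, here's a minimal in-place sketch using Lomuto partitioning (one common partition scheme among several; treat it as an illustration rather than the canonical implementation):
def quickSortInPlace(array, low=0, high=None):
    # Sorts array[low..high] in place by partitioning around the last element.
    if high is None:
        high = len(array) - 1
    if low < high:
        pivot = array[high]
        i = low - 1  # boundary of the "less than or equal to pivot" region
        for j in range(low, high):
            if array[j] <= pivot:
                i += 1
                array[i], array[j] = array[j], array[i]
        array[i + 1], array[high] = array[high], array[i + 1]  # move pivot into place
        quickSortInPlace(array, low, i)        # sort the left partition
        quickSortInPlace(array, i + 2, high)   # sort the right partition
    return array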

Non-Comparison-Based Sorting Algorithms

1. Counting Sort – O(n + k)

Counting Sort counts occurrences of each value and uses those counts to place elements in order. It’s like sorting marbles by color into pre-labeled bins.

  • Time Complexity: O(n + k) (k is the value range).
  • Pros: Lightning-fast for small integer ranges.
  • Cons: Useless for floats or strings; needs extra space.
  • Best For: Integers with a tight range (e.g., 0-100).
def countingSort(array):
    if not array:
        return array
    max_val = max(array)
    min_val = min(array)
    range_val = max_val - min_val + 1
    count = [0] * range_val
    output = [0] * len(array)
    # Count occurrences of each value (offset by min_val).
    for num in array:
        count[num - min_val] += 1
    # Prefix sums turn counts into final positions.
    for i in range(1, len(count)):
        count[i] += count[i - 1]
    # Place elements back to front to keep the sort stable.
    for num in reversed(array):
        output[count[num - min_val] - 1] = num
        count[num - min_val] -= 1
    return output
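
A quick usage check (my own example): because values are offset by min_val, this version handles negative integers too:
print(countingSort([4, -2, 0, 3, -2, 1]))   # [-2, -2, 0, 1, 3, 4]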

2. Radix Sort – O(d · (n + k))

Radix Sort sorts digit by digit, from least to most significant, often using Counting Sort as a helper. Imagine sorting phone numbers by area code, then prefix.

  • Time Complexity: O(d · (n + k)) (d is the number of digits, k is the digit range).
  • Pros: Great for large integer sets, stable.
  • Cons: Limited to integers (non-negative, as written here), extra space needed.
  • Best For: Fixed-width integer keys (e.g., IDs).
def radixSort(array):
    if not array:
        return array
    max_num = max(array)
    exp = 1  # current place value: ones, then tens, then hundreds, ...
    while max_num // exp > 0:
        countingSortForRadix(array, exp)
        exp *= 10
    return array

def countingSortForRadix(array, exp):
    n = len(array)
    output = [0] * n
    count = [0] * 10
    # Count how many elements have each digit at the current place value (exp).
    for i in range(n):
        index = array[i] // exp
        count[index % 10] += 1
    # Prefix sums turn digit counts into final positions.
    for i in range(1, 10):
        count[i] += count[i - 1]
    # Walk backwards so equal digits keep their order (stability across passes).
    i = n - 1
    while i >= 0:
        index = array[i] // exp
        output[count[index % 10] - 1] = array[i]
        count[index % 10] -= 1
        i -= 1
    for i in range(n):
        array[i] = output[i]

3. Bucket Sort – O(n + k)

Bucket Sort scatters elements into buckets, sorts each bucket (often with Insertion Sort), and merges them. Picture sorting coins into jars by value, then arranging each jar.

  • Time Complexity: O(n + k) (average, k is the number of buckets).
  • Pros: Fast for uniform data, adaptable.
  • Cons: Poor for skewed data; worst case O(n^2).
  • Best For: Uniformly distributed numbers (e.g., floats 0-1).
def bucketSort(array):
    if not array:
        return array
    max_val, min_val = max(array), min(array)
    bucket_range = (max_val - min_val) / len(array)
    buckets = [[] for _ in range(len(array) + 1)]
    for num in array:
        if max_val == min_val:
            bucket_index = 0
        else:
            bucket_index = int((num - min_val) / bucket_range)
        if bucket_index == len(buckets):
            bucket_index -= 1
        buckets[bucket_index].append(num)
    for bucket in buckets:
        bucket.sort()
    index = 0
    for bucket in buckets:
        for num in bucket:
            array[index] = num
            index += 1
    return array
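
Before comparing them side by side, here's a quick sanity check (my own snippet, assuming every function in this post is defined in the same file); each sort should produce the same result:
data = [170, 45, 75, 90, 2, 802, 24, 66]   # non-negative integers, so radixSort applies too
for sorter in (bubbleSort, selectionSort, insertionSort, mergeSort,
               quickSort, countingSort, radixSort, bucketSort):
    print(f"{sorter.__name__:>13}: {sorter(data.copy())}")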

Comparison Table

Here’s how these algorithms stack up:

Algorithm       Time Complexity (Avg)   Space Complexity   Stable?   Best Use Case
Bubble Sort     O(n^2)                  O(1)               Yes       Teaching, tiny lists
Selection Sort  O(n^2)                  O(1)               No        Low swap cost
Insertion Sort  O(n^2)                  O(1)               Yes       Nearly sorted data
Merge Sort      O(n log n)              O(n)               Yes       Large datasets
Quick Sort      O(n log n)              O(log n)           No        General purpose
Counting Sort   O(n + k)                O(n + k)           Yes       Small integer range
Radix Sort      O(d · (n + k))          O(n + k)           Yes       Large integers
Bucket Sort     O(n + k)                O(n + k)           Yes       Uniform data

Conclusion

Sorting algorithms are the backbone of data organization, each with unique trade-offs. From the slow-but-steady Bubble Sort to the blazing-fast Quick Sort or Radix Sort, your choice depends on your data and goals. Need something simple? Go with Insertion Sort. Handling millions of records? Merge Sort or Quick Sort has your back. Got integers? Counting Sort might be your ace.