[Sorting Algorithm] Common Small Sequence Sorting Method – Insertion Sort

Insertion sort (插入排序法)

Thinking

When dealing with small sequences, there are several reasons to prefer simple algorithms that can approach O(n) time on nearly sorted input:

  1. Adaptability: Algorithms like insertion sort can take advantage of any existing order in the input data. If the data is already partially sorted (which is more likely in small datasets), these algorithms run in close to linear time.
  2. Memory access: Small datasets fit entirely in the CPU cache, reducing the need to access main memory. This significantly speeds up simple algorithms such as insertion sort, which have good spatial locality and make full use of the cache.
  3. Overhead: More complex algorithms (such as quicksort or merge sort) usually carry a larger setup overhead (e.g., recursion and stack frames, or additional memory allocations). For small datasets this overhead may not be amortized, so simple algorithms come out ahead in practice (see the hybrid-sort sketch after this list).
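
To make point 3 concrete, here is a minimal C++ sketch of the common hybrid approach: a quicksort that hands small sub-ranges off to insertion sort, so the recursion overhead is only paid where it actually buys something. The cutoff of 16, the function names, and the Lomuto-style partition are illustrative assumptions, not taken from any particular library.

```cpp
#include <utility>   // std::swap
#include <vector>

// Assumed cutoff: below this size, quicksort's recursion overhead is not
// amortized, so the range is handed to insertion sort instead.
constexpr int kSmallCutoff = 16;

// Sorts a[lo..hi] (inclusive) with insertion sort.
void insertionSort(std::vector<int>& a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; ++i) {
        int key = a[i];
        int j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; --j; }
        a[j + 1] = key;
    }
}

// Quicksort (Lomuto partition) that falls back to insertion sort on small ranges.
void hybridQuickSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo + 1 <= kSmallCutoff) {
        insertionSort(a, lo, hi);
        return;
    }
    int pivot = a[hi];
    int p = lo;                       // boundary of the "< pivot" region
    for (int i = lo; i < hi; ++i) {
        if (a[i] < pivot) std::swap(a[p++], a[i]);
    }
    std::swap(a[p], a[hi]);           // put the pivot in its final slot
    hybridQuickSort(a, lo, p - 1);
    hybridQuickSort(a, p + 1, hi);
}
```

Calling `hybridQuickSort(v, 0, static_cast<int>(v.size()) - 1)` sorts the whole vector; many standard-library sorts apply the same small-range fallback, though with their own cutoffs.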

Comparison Table of Sorting Algorithms

| Sorting Algorithm | Average Time Complexity | Best Case | Worst Case | Space Complexity | Sorting Method | Stability |
| --- | --- | --- | --- | --- | --- | --- |
| Bubble Sort | O(n^2) | O(n) | O(n^2) | O(1) | In-place | Stable |
| Insertion Sort | O(n^2) | O(n) | O(n^2) | O(1) | In-place | Stable |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) | Not in-place | Stable |
| Bucket Sort | O(n + k) | O(n) | O(n^2) | O(n + k) | Not in-place | Stable |

Round 1:

Contestants: Bubble Sort, Insertion Sort, Counting Sort, Bucket Sort

Non-in-place sorting requires additional memory allocation, which is relatively expensive for tiny inputs, so Counting Sort and Bucket Sort are eliminated.
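
As a quick illustration of that allocation cost, here is a minimal counting sort sketch (assuming the keys are small non-negative integers; the code is illustrative, not from the original post). The O(k) count array and the O(n) output buffer must be allocated before any actual sorting work happens:

```cpp
#include <algorithm>  // std::max_element
#include <vector>

// Counting sort sketch for small non-negative integer keys (an assumption).
// The extra buffers below are exactly the "not in-place" cost from the table.
std::vector<int> countingSort(const std::vector<int>& a) {
    if (a.empty()) return {};
    int k = *std::max_element(a.begin(), a.end());
    std::vector<int> count(k + 1, 0);     // O(k) extra memory
    for (int x : a) ++count[x];
    std::vector<int> out;                 // O(n) extra memory
    out.reserve(a.size());
    for (int v = 0; v <= k; ++v)
        out.insert(out.end(), count[v], v);
    return out;
}
```

For a handful of elements, these allocations can easily cost more than all the comparisons an in-place sort would perform.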

Round 2:

Contestants: Bubble Sort, Insertion Sort

Swapping adjacent elements in Bubble Sort involves three assignment operations (using a temporary variable for the swap).
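
For reference, here is a minimal C++ bubble sort sketch (illustrative, not from the original post) with the three assignments of each swap written out explicitly; the early-exit flag is what gives the O(n) best case listed in the table:

```cpp
#include <cstddef>
#include <vector>

// Bubble sort sketch: every adjacent swap costs three assignments.
void bubbleSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        bool swapped = false;
        for (std::size_t j = 0; j + 1 < a.size() - i; ++j) {
            if (a[j] > a[j + 1]) {
                int tmp = a[j];       // assignment 1
                a[j] = a[j + 1];      // assignment 2
                a[j + 1] = tmp;       // assignment 3
                swapped = true;
            }
        }
        if (!swapped) break;          // no swaps: already sorted, O(n) best case
    }
}
```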

In Insertion Sort, elements larger than the current key are simply shifted one position backwards until the insertion point is found, and each shift involves only one assignment operation.
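
A matching insertion sort sketch (again illustrative): the key is saved once, each larger element is shifted one position to the right with a single assignment, and the key is written back exactly once:

```cpp
#include <cstddef>
#include <vector>

// Insertion sort sketch: one assignment per shift instead of a full swap.
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];               // save the key once
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];          // one assignment per shift
            --j;
        }
        a[j] = key;                   // place the key exactly once
    }
}
```

On an already sorted input the inner loop never runs, so the function performs only n - 1 comparisons and no shifts, which is the O(n) best case from the table.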

Winner: Insertion Sort