Computer Science · Data Structures

Sorting Algorithms
Complete Guide for Students

A comprehensive reference covering Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Radix Sort, and Counting Sort — with explanations, step-by-step algorithms, C++ programs, outputs, and complexity analysis.

Introduction to Sorting Algorithms
Why sorting is the backbone of efficient data processing

Sorting algorithms are one of the most fundamental concepts in computer science and programming. Almost every software system you use today relies on sorting in some way. Whether it is arranging student marks, organizing search results, managing product lists in online stores, or displaying contacts in a phonebook, sorting plays a crucial role. Simply put, sorting means arranging data in a specific order, usually ascending (smallest to largest) or descending (largest to smallest).

Imagine a teacher who wants to arrange exam scores from highest to lowest so the top performers can easily be identified. Without sorting, the teacher would have to manually scan through the entire list each time. Sorting algorithms automate this process so computers can quickly organize large amounts of data. This becomes extremely important when dealing with thousands or even millions of records.

Sorting algorithms are also important because they improve efficiency in other algorithms. Many searching techniques, like binary search, only work on sorted data. If the data is not sorted first, searching becomes slower. That is why sorting is often considered the backbone of efficient data processing.

Students learning programming must understand sorting algorithms not only to pass exams but also to build strong problem-solving skills. In this guide, we will explore several popular sorting algorithms including Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Radix Sort, and Counting Sort.

🔍 Enables Fast Searching

Binary search and many other search techniques require data to be sorted first, making sorting essential.

Performance Optimization

Properly sorted data allows faster analysis, filtering, and processing across all application types.

📚 Teaches Core Concepts

Sorting teaches loops, recursion, divide-and-conquer, and data structures — foundational programming skills.

🌐 Used Everywhere

Databases, e-commerce, search engines, social media, and operating systems all rely on sorting internally.

Basic Concepts of Sorting
What sorting means and the two major categories

What is Sorting?

Sorting is the process of arranging elements in a specific order. The order can be ascending, descending, alphabetical, or based on any other criteria depending on the application. In programming, sorting usually involves organizing numbers, strings, or objects inside data structures such as arrays or lists.

Simple Example

Unsorted: 8   3   5   1   9
Sorted:   1   3   5   8   9

This simple rearrangement makes it much easier to perform other operations such as searching, comparing values, or identifying patterns in the data.

Types of Sorting Algorithms

Sorting algorithms can generally be divided into two major categories:

1. Comparison-Based Sorting

  • Bubble Sort
  • Selection Sort
  • Insertion Sort
  • Merge Sort
  • Quick Sort
  • Heap Sort

These algorithms sort data by comparing elements with each other, which makes them flexible: they work with almost any type of data.

2. Non-Comparison Sorting

  • Counting Sort
  • Radix Sort

These algorithms do not compare elements directly. They use mathematical properties of data such as digit positions or value ranges — achieving nearly linear time in specific conditions.

Sorting Algorithms — In Depth
Working principles, step-by-step algorithms, C++ programs, and outputs
1. Bubble Sort

O(n²) Average / Worst  |  O(n) Best

Bubble Sort repeatedly compares adjacent elements and swaps them if they are in the wrong order. After every pass, the largest element moves to the end of the array — like a bubble rising to the top. It is one of the simplest sorting algorithms and often the first algorithm taught to beginners.

Bubble sort is easy to understand but not very efficient for large datasets. However, in the best case (when the list is already sorted), it can run in O(n) time, provided the implementation stops early once a full pass completes without any swaps.

Best O(n) · Avg O(n²) · Worst O(n²) · Space O(1)

Algorithm Steps

  1. Start.
  2. Take an array of n elements.
  3. Repeat from i = 0 to n-1.
  4. Compare the current element with the next element.
  5. If the first element is greater than the second, swap them.
  6. Move to the next pair of elements.
  7. Continue the process until the array is completely sorted.
  8. Stop.

Trace Example — Array: 4, 2, 7, 1

Initial 4   2   7   1
Pass 1 4↔2 → 2,4,7,1  |  4<7 skip  |  7↔1 → 2, 4, 1, 7
Pass 2 2<4 skip  |  4↔1 → 2, 1, 4, 7
Pass 3 2↔1 → 1, 2, 4, 7 ✓ Sorted
C++ · bubble_sort.cpp
#include <iostream>
using namespace std;

void bubbleSort(int arr[], int n)
{
    for(int i=0; i<n-1; i++)
    {
        bool swapped = false;

        for(int j=0; j<n-i-1; j++)
        {
            if(arr[j] > arr[j+1])
            {
                int temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
                swapped = true;
            }
        }

        if(!swapped)   // no swaps in a full pass: already sorted (O(n) best case)
            break;
    }
}

int main()
{
    int arr[] = {5,1,4,2,8};
    int n = 5;

    bubbleSort(arr, n);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 1 2 4 5 8
2. Selection Sort

O(n²) Best / Average / Worst

Selection Sort works by repeatedly selecting the smallest element from the unsorted portion of the array and placing it at the beginning. Unlike bubble sort which swaps many times, selection sort performs fewer swaps — making it useful when memory writing operations are expensive.

The algorithm divides the array into a sorted portion and an unsorted portion. Initially the sorted portion is empty. It scans the entire array to find the smallest element and swaps it with the first position, then searches for the next smallest, and so on.

Best O(n²) · Avg O(n²) · Worst O(n²) · Space O(1)

Algorithm Steps

  1. Start.
  2. Take an array of n elements.
  3. Find the smallest element in the array.
  4. Swap it with the first position.
  5. Find the next smallest element from the remaining array.
  6. Swap it with the second position.
  7. Repeat the process until the array is sorted.
  8. Stop.

Trace Example — Array: 29, 10, 14, 37

Initial 29   10   14   37
Pass 1 Min=10 → swap 29↔10 → 10, 29, 14, 37
Pass 2 Min=14 → swap 29↔14 → 10, 14, 29, 37
Pass 3 Min=29 → already in place → 10, 14, 29, 37 ✓ Sorted
C++ · selection_sort.cpp
#include <iostream>
using namespace std;

void selectionSort(int arr[], int n)
{
    for(int i=0; i<n-1; i++)
    {
        int minIndex = i;

        for(int j=i+1; j<n; j++)
        {
            if(arr[j] < arr[minIndex])
                minIndex = j;
        }

        if(minIndex != i)   // swap only when a smaller element was found
        {
            int temp = arr[i];
            arr[i] = arr[minIndex];
            arr[minIndex] = temp;
        }
    }
}

int main()
{
    int arr[] = {29,10,14,37,13};
    int n = 5;

    selectionSort(arr, n);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 10 13 14 29 37
3. Insertion Sort

O(n) Best  |  O(n²) Average / Worst

Insertion Sort works similarly to how people sort playing cards in their hands. When you pick up a new card, you place it in the correct position among the already sorted cards. The algorithm starts with the second element and compares it with previous elements, shifting larger elements right until the correct position is found.

Insertion sort is efficient for small datasets and nearly sorted lists. Its best-case complexity is O(n) when the list is already sorted, making it adaptive.

Best O(n) · Avg O(n²) · Worst O(n²) · Space O(1)

Algorithm Steps

  1. Start.
  2. Assume the first element is already sorted.
  3. Take the next element (key).
  4. Compare it with the previous elements.
  5. Shift larger elements one position to the right.
  6. Insert the element at its correct position.
  7. Repeat until all elements are sorted.
  8. Stop.

Trace Example — Array: 5, 2, 4, 6

Initial 5   2   4   6
Step 1 Insert 2 into [5] → 2, 5, 4, 6
Step 2 Insert 4 into [2,5] → 2, 4, 5, 6
Step 3 Insert 6 → already correct → 2, 4, 5, 6 ✓ Sorted
C++ · insertion_sort.cpp
#include <iostream>
using namespace std;

void insertionSort(int arr[], int n)
{
    for(int i=1; i<n; i++)
    {
        int key = arr[i];
        int j = i-1;

        while(j>=0 && arr[j] > key)
        {
            arr[j+1] = arr[j];
            j--;
        }

        arr[j+1] = key;
    }
}

int main()
{
    int arr[] = {12,11,13,5,6};
    int n = 5;

    insertionSort(arr, n);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 5 6 11 12 13
4. Merge Sort

O(n log n) Best / Average / Worst

Merge Sort uses a powerful strategy called Divide and Conquer. Instead of sorting the entire array at once, it divides the array into smaller parts, sorts each part separately, and then merges them together. This makes it much faster than the simple O(n²) algorithms.

Merge sort guarantees O(n log n) time complexity in best, average, and worst cases — making it very reliable. However, it requires extra memory because temporary arrays are used during the merging process.

Best O(n log n) · Avg O(n log n) · Worst O(n log n) · Space O(n)

Algorithm Steps

  1. Start.
  2. Divide the array into two halves.
  3. Continue dividing until each subarray has one element.
  4. Compare elements of the subarrays.
  5. Merge them in sorted order.
  6. Continue merging until the whole array becomes sorted.
  7. Stop.

Trace Example — Array: 8, 3, 5, 1

Divide [8, 3]    [5, 1]
Sort halves [3, 8]    [1, 5]
Merge 1, 3, 5, 8 ✓ Sorted
C++ · merge_sort.cpp
#include <iostream>
using namespace std;

void merge(int arr[], int l, int m, int r)
{
    int i=l, j=m+1, k=0;
    int temp[100];   // fixed-size buffer: assumes at most 100 elements

    while(i<=m && j<=r)
    {
        if(arr[i] <= arr[j])   // <= keeps equal elements in order (stable sort)
            temp[k++] = arr[i++];
        else
            temp[k++] = arr[j++];
    }

    while(i<=m)  temp[k++] = arr[i++];
    while(j<=r)  temp[k++] = arr[j++];

    for(i=l, k=0; i<=r; i++, k++)
        arr[i] = temp[k];
}

void mergeSort(int arr[], int l, int r)
{
    if(l < r)
    {
        int m = (l+r)/2;
        mergeSort(arr, l, m);
        mergeSort(arr, m+1, r);
        merge(arr, l, m, r);
    }
}

int main()
{
    int arr[] = {38,27,43,3,9};
    int n = 5;

    mergeSort(arr, 0, n-1);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 3 9 27 38 43
5. Quick Sort

O(n log n) Average  |  O(n²) Worst

Quick Sort is one of the most widely used sorting algorithms because it performs extremely well in practice. It uses the divide-and-conquer technique but works differently from merge sort. The algorithm selects a special element called a pivot. The array is then partitioned so that all elements smaller than the pivot move to the left side, while larger elements move to the right side.

The same process is repeated recursively for both sides until the array is sorted. Quick sort can degrade to O(n²) when pivot choices are poor, for example when the last element is always chosen as pivot on an already sorted array, but good pivot strategies make this rare in practice.

Best O(n log n) · Avg O(n log n) · Worst O(n²) · Space O(log n)

Algorithm Steps

  1. Start.
  2. Select a pivot element from the array.
  3. Place elements smaller than pivot on the left side.
  4. Place elements greater than pivot on the right side.
  5. Repeat the same process for left and right subarrays.
  6. Continue until the array becomes sorted.
  7. Stop.

Trace Example — Array: 9, 3, 7, 1, 5

Initial 9   3   7   1   5
Pivot=5 Left: [3, 1]  |  5  |  Right: [9, 7]
Recurse [1, 3] sorted  |  [7, 9] sorted
Result 1, 3, 5, 7, 9 ✓ Sorted
C++ · quick_sort.cpp
#include <iostream>
using namespace std;

int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;

    for(int j=low; j<high; j++)
    {
        if(arr[j] < pivot)
        {
            i++;
            swap(arr[i], arr[j]);
        }
    }

    swap(arr[i+1], arr[high]);
    return i+1;
}

void quickSort(int arr[], int low, int high)
{
    if(low < high)
    {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi-1);
        quickSort(arr, pi+1, high);
    }
}

int main()
{
    int arr[] = {10,7,8,9,1,5};
    int n = 6;

    quickSort(arr, 0, n-1);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 1 5 7 8 9 10
6. Heap Sort

O(n log n) Best / Average / Worst

Heap Sort is based on a special data structure called a binary heap — a complete binary tree where each parent node is greater than its children (max heap) or smaller (min heap). It works in two phases: first building a max heap from the array, then repeatedly removing the largest element and placing it at the end.

Heap sort always runs in O(n log n) time regardless of the input data, which makes it a reliable choice for large datasets. It sorts in-place, requiring no extra memory beyond O(1).

Best O(n log n) · Avg O(n log n) · Worst O(n log n) · Space O(1)

Algorithm Steps

  1. Start.
  2. Convert the array into a Max Heap.
  3. Swap the first element (largest) with the last element.
  4. Reduce the heap size by one.
  5. Heapify the remaining elements.
  6. Repeat the process until the array is sorted.
  7. Stop.

Trace Example — Array: 12, 11, 13, 5

Build MaxHeap 13 at root: [13, 11, 12, 5]
Swap & Heapify Swap 13↔5 → [5,11,12,13] → heapify → [12,11,5,13]; repeat for the shrinking heap
Result 5, 11, 12, 13 ✓ Sorted
C++ · heap_sort.cpp
#include <iostream>
using namespace std;

void heapify(int arr[], int n, int i)
{
    int largest = i;
    int left  = 2*i + 1;
    int right = 2*i + 2;

    if(left  < n && arr[left]  > arr[largest]) largest = left;
    if(right < n && arr[right] > arr[largest]) largest = right;

    if(largest != i)
    {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n)
{
    for(int i=n/2-1; i>=0; i--)
        heapify(arr, n, i);

    for(int i=n-1; i>0; i--)
    {
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

int main()
{
    int arr[] = {12,11,13,5,6,7};
    int n = 6;

    heapSort(arr, n);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 5 6 7 11 12 13
7. Counting Sort

O(n + k) Best / Average / Worst

Counting Sort is a non-comparison sorting algorithm that works by counting how many times each value appears. It does not compare elements directly — instead it uses the frequency of each value. This makes it extremely fast but it only works when the range of numbers (k) is not very large.

Best O(n+k) · Avg O(n+k) · Worst O(n+k) · Space O(k)

Algorithm Steps

  1. Start.
  2. Find the maximum value in the array.
  3. Create a counting array.
  4. Count how many times each element appears.
  5. Store cumulative counts.
  6. Place elements into their correct positions.
  7. Copy the sorted elements back to the original array.
  8. Stop.

Trace Example — Array: 4, 2, 2, 8, 3

Count 2→×2, 3→×1, 4→×1, 8→×1
Cumulative Build prefix sums of count array
Result 2, 2, 3, 4, 8 ✓ Sorted
C++ · counting_sort.cpp
#include <iostream>
using namespace std;

void countingSort(int arr[], int n)
{
    int output[100];        // assumes at most 100 elements
    int count[100] = {0};   // assumes all values are in the range 0..99

    for(int i=0; i<n; i++)
        count[arr[i]]++;

    for(int i=1; i<100; i++)
        count[i] += count[i-1];

    for(int i=n-1; i>=0; i--)
    {
        output[count[arr[i]]-1] = arr[i];
        count[arr[i]]--;
    }

    for(int i=0; i<n; i++)
        arr[i] = output[i];
}

int main()
{
    int arr[] = {4,2,2,8,3,3,1};
    int n = 7;

    countingSort(arr, n);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 1 2 2 3 3 4 8
8. Radix Sort

O(nk) Best / Average / Worst

Radix Sort is a non-comparison sorting algorithm that sorts numbers digit by digit. Instead of comparing numbers directly, it groups them based on individual digits starting from the least significant digit (units), then tens, then hundreds, and so on. It uses Counting Sort as a subroutine for each digit pass.

Radix sort is very efficient when sorting large lists of integers with limited digit lengths, achieving O(nk) time complexity where k is the number of digit positions.

Best O(nk) · Avg O(nk) · Worst O(nk) · Space O(n+k)

Algorithm Steps

  1. Start.
  2. Find the maximum number in the array.
  3. Sort numbers according to the units digit.
  4. Then sort according to the tens digit.
  5. Then sort according to the hundreds digit.
  6. Continue until all digit positions are processed.
  7. Stop when the array is sorted.

Trace Example — Array: 170, 45, 75, 90

Units digit 170, 90, 45, 75
Tens digit 45, 170, 75, 90
Hundreds digit 45, 75, 90, 170
Result 45, 75, 90, 170 ✓ Sorted
C++ · radix_sort.cpp
#include <iostream>
using namespace std;

int getMax(int arr[], int n)
{
    int mx = arr[0];
    for(int i=1; i<n; i++)
        if(arr[i] > mx) mx = arr[i];
    return mx;
}

void countSort(int arr[], int n, int exp)
{
    int output[100];       // assumes at most 100 elements
    int count[10] = {0};   // one bucket per digit 0..9

    for(int i=0; i<n; i++)
        count[(arr[i]/exp)%10]++;

    for(int i=1; i<10; i++)
        count[i] += count[i-1];

    for(int i=n-1; i>=0; i--)
    {
        output[count[(arr[i]/exp)%10]-1] = arr[i];
        count[(arr[i]/exp)%10]--;
    }

    for(int i=0; i<n; i++)
        arr[i] = output[i];
}

void radixSort(int arr[], int n)
{
    int m = getMax(arr, n);
    for(int exp=1; m/exp>0; exp*=10)
        countSort(arr, n, exp);
}

int main()
{
    int arr[] = {170,45,75,90,802,24,2,66};
    int n = 8;

    radixSort(arr, n);

    cout << "Sorted array: ";
    for(int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Output Sorted array: 2 24 45 66 75 90 170 802
Comparison of Sorting Algorithms
Time and space complexity at a glance
Algorithm       | Best Case  | Average Case | Worst Case | Space
Bubble Sort     | O(n)       | O(n²)        | O(n²)      | O(1)
Selection Sort  | O(n²)      | O(n²)        | O(n²)      | O(1)
Insertion Sort  | O(n)       | O(n²)        | O(n²)      | O(1)
Merge Sort      | O(n log n) | O(n log n)   | O(n log n) | O(n)
Quick Sort      | O(n log n) | O(n log n)   | O(n²)      | O(log n)
Heap Sort       | O(n log n) | O(n log n)   | O(n log n) | O(1)
Counting Sort   | O(n+k)     | O(n+k)       | O(n+k)     | O(k)
Radix Sort      | O(nk)      | O(nk)        | O(nk)      | O(n+k)
Time Complexity Analysis
Best Case, Average Case, and Worst Case explained

🟢 Best Case

Represents the minimum time required by the algorithm. For example, Insertion Sort and Bubble Sort run in O(n) when the list is already sorted, because no swaps or shifts are needed.

🟡 Average Case

Represents expected performance when inputs are random and unpredictable. This is the most realistic measure for practical applications where input order is unknown.

🔴 Worst Case

Represents the maximum time an algorithm could take. Bubble Sort and Selection Sort always degrade to O(n²) in the worst case, regardless of input arrangement.

Why Complexity Matters

When analyzing algorithms, researchers use Big-O notation to represent how execution time grows as the number of elements increases. An algorithm with O(n log n) complexity is far superior to O(n²) for large datasets. For example, sorting 1,000,000 elements with O(n²) requires approximately 1 trillion operations, while O(n log n) requires only about 20 million — a dramatic difference in real-world performance.

Understanding these cases helps programmers evaluate how algorithms perform under different conditions and choose the most suitable algorithm for their specific problem — simple algorithms for small or nearly sorted data, advanced algorithms for large or random datasets.

Conclusion

Sorting algorithms are essential tools in computer science. They allow computers to organize data efficiently so that other operations such as searching, filtering, and analyzing can be performed quickly. From simple algorithms like bubble sort and insertion sort to advanced techniques like merge sort and quick sort, each algorithm has unique advantages.

Simple algorithms are easier to understand and implement, making them ideal for learning and small datasets. Advanced algorithms like heap sort and merge sort handle large datasets more efficiently. Non-comparison algorithms such as counting sort and radix sort provide even faster performance under specific conditions.

Students who understand these algorithms gain a strong foundation in programming and algorithm design. Once the basic principles are clear, it becomes easier to learn more advanced topics like graph algorithms, dynamic programming, and machine learning optimization.

Frequently Asked Questions
Common questions about sorting algorithms
1 What is a sorting algorithm?
A sorting algorithm is a method used to arrange elements of a list or array in a specific order such as ascending or descending. It systematically processes elements to produce an ordered output from an unordered input.
2 Which sorting algorithm is fastest?
There is no single fastest algorithm for all situations. Quick Sort and Merge Sort are generally efficient for large datasets with O(n log n) average complexity. For specific data types, Counting Sort and Radix Sort can be even faster.
3 Why is merge sort faster than bubble sort?
Merge sort uses the divide-and-conquer approach and runs in O(n log n) time, while bubble sort runs in O(n²). For large inputs, this difference is enormous — merge sort will finish in seconds where bubble sort might take hours.
4 What is time complexity in sorting algorithms?
Time complexity measures how the running time of an algorithm increases as the input size grows. It is expressed using Big-O notation and helps programmers predict and compare algorithm performance for different dataset sizes.
5 Where are sorting algorithms used in real life?
Sorting algorithms are used in databases to organize records, e-commerce websites to sort products by price or rating, search engines to rank results, social media platforms to order posts, and operating systems to manage processes and memory.