Sorting Algorithms
Complete Guide for Students
A comprehensive reference covering Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Radix Sort, and Counting Sort — with explanations, step-by-step algorithms, C++ programs, outputs, and complexity analysis.
Sorting algorithms are one of the most fundamental concepts in computer science and programming. Almost every software system you use today relies on sorting in some way. Whether it is arranging student marks, organizing search results, managing product lists in online stores, or displaying contacts in a phonebook, sorting plays a crucial role. Simply put, sorting means arranging data in a specific order, usually ascending (smallest to largest) or descending (largest to smallest).
Imagine a teacher who wants to arrange exam scores from highest to lowest so the top performers can easily be identified. Without sorting, the teacher would have to manually scan through the entire list each time. Sorting algorithms automate this process so computers can quickly organize large amounts of data. This becomes extremely important when dealing with thousands or even millions of records.
Sorting algorithms are also important because they improve efficiency in other algorithms. Many searching techniques, like binary search, only work on sorted data. If the data is not sorted first, searching becomes slower. That is why sorting is often considered the backbone of efficient data processing.
Students learning programming must understand sorting algorithms not only to pass exams but also to build strong problem-solving skills. In this guide, we will explore several popular sorting algorithms including Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Radix Sort, and Counting Sort.
Enables Fast Searching
Binary search and many other search techniques require data to be sorted first, making sorting essential.
Performance Optimization
Properly sorted data allows faster analysis, filtering, and processing across all application types.
Teaches Core Concepts
Sorting teaches loops, recursion, divide-and-conquer, and data structures — foundational programming skills.
Used Everywhere
Databases, e-commerce, search engines, social media, and operating systems all rely on sorting internally.
What is Sorting?
Sorting is the process of arranging elements in a specific order. The order can be ascending, descending, alphabetical, or based on any other criteria depending on the application. In programming, sorting usually involves organizing numbers, strings, or objects inside data structures such as arrays or lists.
Simple Example
Unsorted data: 42, 7, 19, 3
Sorted (ascending): 3, 7, 19, 42
This simple rearrangement makes it much easier to perform other operations such as searching, comparing values, or identifying patterns in the data.
Types of Sorting Algorithms
Sorting algorithms can generally be divided into two major categories:
1. Comparison-Based Sorting
- Bubble Sort
- Selection Sort
- Insertion Sort
- Merge Sort
- Quick Sort
- Heap Sort
These algorithms sort data by comparing elements with each other, which makes them flexible: they work with almost any data type that can be ordered.
2. Non-Comparison Sorting
- Counting Sort
- Radix Sort
These algorithms do not compare elements directly. They exploit mathematical properties of the data, such as digit positions or value ranges, to achieve near-linear time under specific conditions.
Bubble Sort
O(n²) Average / Worst | O(n) Best

Bubble Sort repeatedly compares adjacent elements and swaps them if they are in the wrong order. After every pass, the largest remaining element moves to the end of the array, like a bubble rising to the top. It is one of the simplest sorting algorithms and often the first one taught to beginners.
Bubble sort is easy to understand but inefficient for large datasets. In the best case (when the list is already sorted), it can run in O(n) time, provided the implementation stops early after a pass in which no swaps occur.
Algorithm Steps
- Start.
- Take an array of n elements.
- Repeat from i = 0 to n-1.
- Compare the current element with the next element.
- If the first element is greater than the second, swap them.
- Move to the next pair of elements.
- Continue the process until the array is completely sorted.
- Stop.
Trace Example — Array: 4, 2, 7, 1
- Pass 1: (4,2) swap → 2, 4, 7, 1; (4,7) no swap; (7,1) swap → 2, 4, 1, 7
- Pass 2: (2,4) no swap; (4,1) swap → 2, 1, 4, 7
- Pass 3: (2,1) swap → 1, 2, 4, 7 (sorted)
```cpp
#include <iostream>
using namespace std;

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;               // did this pass swap anything?
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped) break;                // already sorted: gives the O(n) best case
    }
}

int main() {
    int arr[] = {5, 1, 4, 2, 8};
    int n = 5;
    bubbleSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Selection Sort
O(n²) Best / Average / Worst

Selection Sort works by repeatedly selecting the smallest element from the unsorted portion of the array and placing it at the beginning. Unlike bubble sort, which swaps many times, selection sort performs at most n-1 swaps, making it useful when memory write operations are expensive.
The algorithm divides the array into a sorted portion and an unsorted portion. Initially the sorted portion is empty. It scans the entire array to find the smallest element and swaps it with the first position, then searches for the next smallest, and so on.
Algorithm Steps
- Start.
- Take an array of n elements.
- Find the smallest element in the array.
- Swap it with the first position.
- Find the next smallest element from the remaining array.
- Swap it with the second position.
- Repeat the process until the array is sorted.
- Stop.
Trace Example — Array: 29, 10, 14, 37
- Pass 1: smallest is 10, swap with 29 → 10, 29, 14, 37
- Pass 2: smallest of the rest is 14, swap with 29 → 10, 14, 29, 37
- Pass 3: smallest of the rest is 29, already in place → 10, 14, 29, 37 (sorted)
```cpp
#include <iostream>
using namespace std;

void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        // Find the smallest element in the unsorted portion.
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) minIndex = j;
        }
        // Swap it into position i.
        int temp = arr[i];
        arr[i] = arr[minIndex];
        arr[minIndex] = temp;
    }
}

int main() {
    int arr[] = {29, 10, 14, 37, 13};
    int n = 5;
    selectionSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Insertion Sort
O(n) Best | O(n²) Average / Worst

Insertion Sort works similarly to how people sort playing cards in their hands. When you pick up a new card, you place it in the correct position among the already sorted cards. The algorithm starts with the second element and compares it with previous elements, shifting larger elements right until the correct position is found.
Insertion sort is efficient for small datasets and nearly sorted lists. Its best-case complexity is O(n) when the list is already sorted, making it adaptive.
Algorithm Steps
- Start.
- Assume the first element is already sorted.
- Take the next element (key).
- Compare it with the previous elements.
- Shift larger elements one position to the right.
- Insert the element at its correct position.
- Repeat until all elements are sorted.
- Stop.
Trace Example — Array: 5, 2, 4, 6
- key = 2: shift 5 right, insert → 2, 5, 4, 6
- key = 4: shift 5 right, insert → 2, 4, 5, 6
- key = 6: already in position → 2, 4, 5, 6 (sorted)
```cpp
#include <iostream>
using namespace std;

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];          // element to insert into the sorted prefix
        int j = i - 1;
        // Shift larger elements one position to the right.
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;          // insert at the correct position
    }
}

int main() {
    int arr[] = {12, 11, 13, 5, 6};
    int n = 5;
    insertionSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Merge Sort
O(n log n) Best / Average / Worst

Merge Sort uses a powerful strategy called Divide and Conquer. Instead of sorting the entire array at once, it divides the array into smaller parts, sorts each part separately, and then merges them together. This makes it much faster than the simple O(n²) algorithms.
Merge sort guarantees O(n log n) time complexity in best, average, and worst cases — making it very reliable. However, it requires extra memory because temporary arrays are used during the merging process.
Algorithm Steps
- Start.
- Divide the array into two halves.
- Continue dividing until each subarray has one element.
- Compare elements of the subarrays.
- Merge them in sorted order.
- Continue merging until the whole array becomes sorted.
- Stop.
Trace Example — Array: 8, 3, 5, 1
- Divide: [8, 3] and [5, 1], then [8] [3] [5] [1]
- Merge pairs: [3, 8] and [1, 5]
- Final merge: [1, 3, 5, 8]
```cpp
#include <iostream>
using namespace std;

// Merge two sorted subarrays arr[l..m] and arr[m+1..r].
// Note: the fixed-size buffer assumes at most 100 elements.
void merge(int arr[], int l, int m, int r) {
    int i = l, j = m + 1, k = 0;
    int temp[100];
    while (i <= m && j <= r) {
        if (arr[i] <= arr[j])          // <= keeps equal elements in order (stable)
            temp[k++] = arr[i++];
        else
            temp[k++] = arr[j++];
    }
    while (i <= m) temp[k++] = arr[i++];   // copy any leftovers
    while (j <= r) temp[k++] = arr[j++];
    for (i = l, k = 0; i <= r; i++, k++) arr[i] = temp[k];
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(arr, l, m);          // sort left half
        mergeSort(arr, m + 1, r);      // sort right half
        merge(arr, l, m, r);           // merge the two sorted halves
    }
}

int main() {
    int arr[] = {38, 27, 43, 3, 9};
    int n = 5;
    mergeSort(arr, 0, n - 1);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Quick Sort
O(n log n) Average | O(n²) Worst

Quick Sort is one of the most widely used sorting algorithms because it performs extremely well in practice. It uses the divide-and-conquer technique but works differently from merge sort. The algorithm selects a special element called a pivot. The array is then partitioned so that all elements smaller than the pivot move to the left side, while larger elements move to the right side.
The same process is repeated recursively on both sides until the array is sorted. Quick sort can degrade to O(n²) when pivot choices are consistently poor (for example, always picking the last element of an already sorted array), but randomized or median-of-three pivot selection makes this rare in practice.
Algorithm Steps
- Start.
- Select a pivot element from the array.
- Place elements smaller than pivot on the left side.
- Place elements greater than pivot on the right side.
- Repeat the same process for left and right subarrays.
- Continue until the array becomes sorted.
- Stop.
Trace Example — Array: 9, 3, 7, 1, 5 (choosing the last element as pivot)
- Pivot 5: partition → 3, 1, 5, 9, 7 (5 lands in its final position)
- Left side [3, 1], pivot 1 → 1, 3; right side [9, 7], pivot 7 → 7, 9
- Result: 1, 3, 5, 7, 9
```cpp
#include <iostream>
#include <utility>   // std::swap
using namespace std;

// Partition around the last element; return the pivot's final index.
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                   // boundary of the "smaller than pivot" region
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);       // place the pivot between the two regions
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

int main() {
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = 6;
    quickSort(arr, 0, n - 1);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Heap Sort
O(n log n) Best / Average / Worst

Heap Sort is based on a special data structure called a binary heap: a complete binary tree in which each parent node is greater than or equal to its children (max heap) or less than or equal to them (min heap). It works in two phases: first building a max heap from the array, then repeatedly removing the largest element and placing it at the end.
Heap sort always runs in O(n log n) time regardless of the input data, which makes it a reliable choice for large datasets. It sorts in place, using only O(1) auxiliary memory.
Algorithm Steps
- Start.
- Convert the array into a Max Heap.
- Swap the first element (largest) with the last element.
- Reduce the heap size by one.
- Heapify the remaining elements.
- Repeat the process until the array is sorted.
- Stop.
Trace Example — Array: 12, 11, 13, 5
- Build max heap: 13, 11, 12, 5
- Swap 13 to the end, heapify → 12, 11, 5 | 13
- Swap 12 to the end, heapify → 11, 5 | 12, 13
- Swap 11 to the end → 5, 11, 12, 13 (sorted)
```cpp
#include <iostream>
#include <utility>   // std::swap
using namespace std;

// Restore the max-heap property for the subtree rooted at index i.
void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;
    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    // Phase 1: build a max heap from the array.
    for (int i = n / 2 - 1; i >= 0; i--) heapify(arr, n, i);
    // Phase 2: repeatedly move the current largest element to the end.
    for (int i = n - 1; i > 0; i--) {
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

int main() {
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = 6;
    heapSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Counting Sort
O(n + k) Best / Average / Worst

Counting Sort is a non-comparison sorting algorithm that works by counting how many times each value appears. It does not compare elements directly; instead it uses the frequency of each value. This makes it extremely fast, but it only works well when the values are non-negative integers and the range of values (k) is not too large.
Algorithm Steps
- Start.
- Find the maximum value in the array.
- Create a counting array.
- Count how many times each element appears.
- Store cumulative counts.
- Place elements into their correct positions.
- Copy the sorted elements back to the original array.
- Stop.
Trace Example — Array: 4, 2, 2, 8, 3
- Counts: 2 appears twice; 3, 4, and 8 appear once each
- Cumulative counts give each value's final position
- Output: 2, 2, 3, 4, 8
```cpp
#include <iostream>
using namespace std;

// Assumes all values are non-negative integers below 100,
// and that the array holds at most 100 elements.
void countingSort(int arr[], int n) {
    int output[100];
    int count[100] = {0};
    // Count how many times each value appears.
    for (int i = 0; i < n; i++) count[arr[i]]++;
    // Convert counts to cumulative counts (final positions).
    for (int i = 1; i < 100; i++) count[i] += count[i - 1];
    // Walk backwards so equal values keep their order (stable).
    for (int i = n - 1; i >= 0; i--) {
        output[count[arr[i]] - 1] = arr[i];
        count[arr[i]]--;
    }
    for (int i = 0; i < n; i++) arr[i] = output[i];
}

int main() {
    int arr[] = {4, 2, 2, 8, 3, 3, 1};
    int n = 7;
    countingSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
Radix Sort
O(nk) Best / Average / Worst

Radix Sort is a non-comparison sorting algorithm that sorts numbers digit by digit. Instead of comparing numbers directly, it groups them by individual digits, starting from the least significant digit (units), then tens, then hundreds, and so on. It uses Counting Sort as a stable subroutine for each digit pass.
Radix sort is very efficient when sorting large lists of integers with limited digit lengths, achieving O(nk) time complexity where k is the number of digit positions.
Algorithm Steps
- Start.
- Find the maximum number in the array.
- Sort numbers according to the units digit.
- Then sort according to the tens digit.
- Then sort according to the hundreds digit.
- Continue until all digit positions are processed.
- Stop when the array is sorted.
Trace Example — Array: 170, 45, 75, 90
- Sort by units digit: 170, 90, 45, 75
- Sort by tens digit: 45, 170, 75, 90
- Sort by hundreds digit: 45, 75, 90, 170 (sorted)
```cpp
#include <iostream>
using namespace std;

int getMax(int arr[], int n) {
    int mx = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > mx) mx = arr[i];
    return mx;
}

// Stable counting sort on the digit selected by exp (1 = units, 10 = tens, ...).
// The fixed-size buffer assumes at most 100 elements.
void countSort(int arr[], int n, int exp) {
    int output[100];
    int count[10] = {0};
    for (int i = 0; i < n; i++) count[(arr[i] / exp) % 10]++;
    for (int i = 1; i < 10; i++) count[i] += count[i - 1];
    // Walk backwards to keep the sort stable across digit passes.
    for (int i = n - 1; i >= 0; i--) {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }
    for (int i = 0; i < n; i++) arr[i] = output[i];
}

void radixSort(int arr[], int n) {
    int m = getMax(arr, n);
    // One counting-sort pass per digit of the maximum value.
    for (int exp = 1; m / exp > 0; exp *= 10) countSort(arr, n, exp);
}

int main() {
    int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
    int n = 8;
    radixSort(arr, n);
    cout << "Sorted array: ";
    for (int i = 0; i < n; i++) cout << arr[i] << " ";
    return 0;
}
```
| Algorithm | Best Case | Average Case | Worst Case | Space |
|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(k) |
| Radix Sort | O(nk) | O(nk) | O(nk) | O(n+k) |
🟢 Best Case
Represents the minimum time required by the algorithm. For example, Insertion Sort runs in O(n) when the list is already sorted because no shifts are needed, and Bubble Sort does too if it stops early after a pass with no swaps.
🟡 Average Case
Represents expected performance when inputs are random and unpredictable. This is the most realistic measure for practical applications where input order is unknown.
🔴 Worst Case
Represents the maximum time an algorithm could take. Selection Sort takes O(n²) on every input, and Bubble Sort degrades to O(n²) when the input is in reverse order.
Why Complexity Matters
When analyzing algorithms, researchers use Big-O notation to represent how execution time grows as the number of elements increases. An algorithm with O(n log n) complexity is far superior to O(n²) for large datasets. For example, sorting 1,000,000 elements with O(n²) requires approximately 1 trillion operations, while O(n log n) requires only about 20 million — a dramatic difference in real-world performance.
Understanding these cases helps programmers evaluate how algorithms perform under different conditions and choose the most suitable algorithm for their specific problem — simple algorithms for small or nearly sorted data, advanced algorithms for large or random datasets.
Conclusion
Sorting algorithms are essential tools in computer science. They allow computers to organize data efficiently so that other operations such as searching, filtering, and analyzing can be performed quickly. From simple algorithms like bubble sort and insertion sort to advanced techniques like merge sort and quick sort, each algorithm has unique advantages.
Simple algorithms are easier to understand and implement, making them ideal for learning and small datasets. Advanced algorithms like heap sort and merge sort handle large datasets more efficiently. Non-comparison algorithms such as counting sort and radix sort provide even faster performance under specific conditions.
Students who understand these algorithms gain a strong foundation in programming and algorithm design. Once the basic principles are clear, it becomes easier to learn more advanced topics like graph algorithms, dynamic programming, and machine learning optimization.
