Day 1: Introduction to Algorithmic Complexity

Kartik Sharma - Sep 7 - Dev Community


1. Introduction to Algorithmic Complexity

What is Algorithmic Complexity?

Algorithmic complexity refers to how the performance of an algorithm changes with the size of its input. It's a measure of efficiency that helps us understand the resources an algorithm needs (time and space) to solve a problem. Two main resources are considered:

  1. Time complexity – how the runtime of an algorithm grows with the input size.
  2. Space complexity – how much memory an algorithm requires as the input size increases.

Why is Algorithmic Complexity Important?

In real-world scenarios, input sizes can be huge, so we want our programs to handle these efficiently. By analyzing an algorithm’s complexity, we can:

  • Predict performance – Know how an algorithm behaves as the problem size increases.
  • Optimize code – Find and implement the most efficient algorithms to solve a problem.
  • Compare algorithms – Evaluate different algorithms to choose the best one based on their performance and resources used.

For example, two algorithms may solve the same problem, but one finishes in 1 second on a given input while the other takes 10 seconds, and that gap can grow to hours as inputs get larger. Understanding complexity helps us avoid the slower algorithm in production systems.
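The gap between two algorithms can be made concrete by counting the basic operations each performs. A minimal sketch contrasting a linear scan with a quadratic pairwise pass (the names `linearOps` and `quadraticOps` are illustrative, not from the article):

```javascript
// Count the basic steps a single loop over n items performs: O(n).
function linearOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) ops++;
  return ops;
}

// Count the basic steps two nested loops over n items perform: O(n²).
function quadraticOps(n) {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) ops++;
  }
  return ops;
}

// At n = 1,000 the linear version does 1,000 steps,
// while the quadratic one does 1,000,000 — a 1,000× difference.
```

The same 1,000× ratio is why a program that feels instant in testing can stall on production-sized data.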


2. Importance of Efficiency in Programming

Efficiency in programming isn’t just about writing code that "works," but writing code that scales well, both in terms of time (how fast it runs) and space (how much memory it uses).

Why does efficiency matter?

  1. Real-world performance: When handling large datasets (like millions of users or large databases), inefficient algorithms can lead to slow applications.

  2. Scalability: Efficient code scales better as input sizes grow. An algorithm that handles 1,000 inputs in a second can become unusably slow at a billion inputs if its cost grows faster than linearly.

  3. Cost: Inefficient algorithms can consume more memory and processing power, increasing costs for cloud-based applications.

  4. User experience: Fast algorithms provide a better user experience, especially in applications like search engines, data analytics, and real-time systems.


3. Big O Notation Basics

What is Big O Notation?

Big O notation is a mathematical notation that describes an upper bound on an algorithm's growth rate; in practice it is most often used to express worst-case time complexity. It captures how the runtime or space requirements of an algorithm grow as the input size increases, giving a high-level picture of its efficiency.

  • Focus on growth: Big O notation ignores constants and lower-order terms, focusing only on how the algorithm scales as input size grows.

Common Big O Time Complexities:

  1. O(1) - Constant time: The algorithm's runtime doesn't change regardless of the input size.

    • Example: Accessing an element in an array by index.
  2. O(n) - Linear time: The runtime grows linearly with the input size.

    • Example: A loop that iterates through all n elements of an array.
  3. O(n²) - Quadratic time: The runtime grows quadratically with the input size.

    • Example: A nested loop where each element is compared with every other element (as in simple sorts like bubble sort or selection sort).
  4. O(log n) - Logarithmic time: The runtime grows logarithmically as the input size increases. Common in algorithms that repeatedly halve the problem, such as binary search.

    • Example: Searching in a sorted array using binary search.
  5. O(n log n) - Linearithmic time: Common in efficient comparison sorts, such as Merge Sort (always) and Quick Sort (on average).
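Each of these growth rates can be illustrated with a tiny function. The names below (`firstElement`, `sumAll`, `halvings`, `countPairs`) are illustrative examples, not a standard API:

```javascript
// O(1): array indexing touches one element no matter the array size.
const firstElement = arr => arr[0];

// O(n): summing visits every element exactly once.
function sumAll(arr) {
  let total = 0;
  for (const x of arr) total += x;
  return total;
}

// O(log n): count how many times n can be halved before reaching 1 —
// the same halving pattern that makes binary search logarithmic.
function halvings(n) {
  let count = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    count++;
  }
  return count;
}

// O(n²): examining every ordered pair of elements takes n × n steps.
function countPairs(arr) {
  let count = 0;
  for (const a of arr) {
    for (const b of arr) count++;
  }
  return count;
}
```

Running `halvings(1024)` returns 10, matching log₂(1024) — doubling the input adds only one extra step.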

Why Big O?

Big O allows us to abstract away details like the number of lines of code and focus on how an algorithm’s efficiency scales as inputs increase. It helps us make quick decisions about whether an algorithm will work for large inputs.


4. Best-case, Average-case, and Worst-case Scenarios

What do these terms mean?

  • Best-case complexity: The behavior of the algorithm when it performs the minimum possible work. For example, in a sorting algorithm, the best-case might occur when the input is already sorted.

  • Average-case complexity: The expected behavior over all possible inputs. This gives a realistic idea of how the algorithm will perform most of the time.

  • Worst-case complexity: The behavior of the algorithm when it performs the maximum amount of work. This gives an upper bound on how the algorithm performs under the most challenging circumstances.

Examples:

  1. Linear Search:

    • Best case: O(1) – If the target element is the first element.
    • Worst case: O(n) – If the target element is the last or not present.
    • Average case: O(n) – On average the target is found about halfway through the array; the constant factor of 1/2 is dropped, so it is still O(n).
  2. QuickSort:

    • Best case: O(n log n) – When the pivot divides the array into equal halves.
    • Worst case: O(n²) – When the pivot is always the smallest or largest element, resulting in uneven division.
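All three cases for linear search can be seen directly in a straightforward implementation (a sketch; `linearSearch` returns the index of the target, or -1 if absent):

```javascript
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    // Best case: the target sits at index 0 and we return after one comparison — O(1).
    if (arr[i] === target) return i;
  }
  // Worst case: the target is last or missing, so all n elements were checked — O(n).
  return -1;
}
```

The average case sits between these extremes: if the target is equally likely to be anywhere, about n/2 comparisons are needed, which is still O(n).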

Why focus on worst-case?

Worst-case analysis is important because it tells us how bad the performance of an algorithm can get. In critical systems, we often want to design for the worst possible scenario.


5. Simple Time Complexities: Constant and Linear

Constant Time – O(1):

An algorithm runs in constant time if the amount of work it performs does not depend on the size of the input. Whether the input has 1 element or 1 million elements, the runtime stays the same.

Example:

function printFirstElement(arr) {
  console.log(arr[0]);
}

This algorithm always takes the same amount of time to print the first element, no matter how large the array is.

Linear Time – O(n):

An algorithm takes linear time if its runtime grows in direct proportion to the size of the input. If the input size doubles, the runtime will also double.

Example:

function printAllElements(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
}

This algorithm’s runtime grows as the number of elements in the array increases.


6. Chapter 1 of "Introduction to Algorithms" by Cormen et al.

The first chapter of Cormen’s Introduction to Algorithms is a general introduction to algorithms. It defines what an algorithm is and gives the reader an understanding of the formal way to represent and analyze algorithms.

Topics Covered:

  • What is an algorithm?: A sequence of steps or rules for solving a problem.
  • The role of algorithms in computing: Algorithms form the foundation of all computing processes.
  • Different types of algorithms: Sorting, searching, graph traversal, etc.
  • Formal vs informal definitions: Cormen introduces both formal and informal ways to describe algorithms.
  • Importance of asymptotic analysis: A more formal introduction to how we analyze algorithms in terms of time and space complexities (leading into Big O notation).

7. Practice Problems: Complexity Analysis

Solve the following problems to practice identifying algorithmic complexity:

Problem 1:

Given an array of size n, write an algorithm to find the maximum element. What is the time complexity?

Solution:

function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i];
    }
  }
  return max;
}
  • Time complexity: O(n) – You need to iterate through all elements to find the maximum.

Problem 2:

Write an algorithm to print the first 100 elements of an array. What is the time complexity?

Solution:

function printFirst100(arr) {
  for (let i = 0; i < 100 && i < arr.length; i++) {
    console.log(arr[i]);
  }
}
  • Time complexity: O(1) – Constant time, since it prints a maximum of 100 elements, regardless of the size of the input.

Problem 3:

Write a function to check if a number exists in a sorted array using binary search. What is the time complexity?

Solution:

function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;

  while (left <= right) {
    let mid = Math.floor((left + right) / 2);

    if (arr[mid] === target) {
      return true;
    } else if (arr[mid] < target) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return false;
}
  • Time complexity: O(log n) – Since binary search halves the array at each step.
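To see the logarithmic behavior concretely, the same search can be instrumented to count loop iterations (a sketch; `binarySearchSteps` is an illustrative name, not part of the original solution):

```javascript
// Binary search that also reports how many loop iterations it used.
function binarySearchSteps(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  let steps = 0;

  while (left <= right) {
    steps++;
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return { found: true, steps };
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return { found: false, steps };
}

// For a sorted array of 1,024 elements, even a missing target is
// resolved in at most ~11 iterations (≈ log₂(1024) + 1), not 1,024.
```

Doubling the array to 2,048 elements adds only one more iteration in the worst case, which is exactly what O(log n) predicts.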

Problem 4:

Find the complexity of the following function:

function foo(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
  • Time complexity: O(n²) – Two nested loops iterate over n elements.

Problem 5:

Write an algorithm to reverse a string. What is the time complexity?

Solution:

function reverseString(str) {
  let reversed = "";
  for (let i = str.length - 1; i >= 0; i--) {
    reversed += str[i];
  }
  return reversed;
}
  • Time complexity: O(n) – The algorithm goes through all characters in the string.
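As a side note, repeated string concatenation can be costly in some JavaScript engines, since strings are immutable. A common alternative, also O(n) overall, uses the built-in array methods (a sketch, not part of the original solution):

```javascript
// Reverse a string via split/reverse/join — each step is linear in length.
// Note: split("") splits by UTF-16 code units, so characters outside the
// Basic Multilingual Plane (e.g. many emoji) would be garbled.
function reverseStringAlt(str) {
  return str.split("").reverse().join("");
}
```

Example: `reverseStringAlt("hello")` returns `"olleh"`.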


By understanding algorithmic complexity, you can improve the efficiency of your code. Big O notation provides a framework to analyze how well an algorithm will perform as the size of the input grows. Practicing with simple examples like linear search, maximum finding, and string reversal can strengthen your understanding of algorithm analysis basics.
