What comes to your mind when you read the words "Asymptotic analysis"?
No, it's not the analysis of people who have contracted COVID but do not show the symptoms. (Don't know what COVID is? - consider yourself the lucky generation!)
In this blog, let's cover one of the basic yet critical elements to keep in mind when creating or understanding an algorithm. Asymptotic analysis is the method used to determine the efficiency of an algorithm in terms of:
i) Time it takes to run the algorithm
ii) Space (memory) it requires to run the algorithm
And here, we are looking at how these quantities (time & space) vary as the size of the input increases. Specifically, when the input becomes very, very large - hence the name "asymptotic" :)
The WHY?
Before we get into the details, let's see WHY we need to know/determine this. With improving technology, we can see that computing performance and the space (memory) available in a computer are increasing exponentially (read up on Moore's law - even though it has started to slow down recently). Then one might ask: why take the effort to perform asymptotic analysis when we have enough time (due to faster processing) and memory (due to smaller and more efficient packing of chips - not Lays though, they've some scope of improvement :P)? That's a valid and very good question.
Consider the case when the input becomes very, very large. So large that it affects the performance of the program/script, depending on the algorithm used. And this is a practical scenario - all thanks to Moore's law (once again). Chips have become more affordable, meaning more people can use smartphones/laptops/computers or any other electronic device. Due to this, at any given time more than a billion devices could be accessing a server/database or performing some computation. This leads to billions of operations being performed every second. If the algo is not an optimized one, we might be using far more resources (in terms of time and memory) than necessary to make those computations.
This is WHY we need to be able to perform asymptotic analysis of an algo - to identify the right algo to use for a specific task.
Time Complexity
As the name suggests, here we want to determine how the running time of the algorithm increases based on increasing input size.
Asymptotic Notation
There are 3 terms that we use to qualitatively measure the time complexity of an algo:
1. Big O: written as O(x). This is used to show the upper bound on the rate of growth of running time as the input grows. Can be thought of as the worst case scenario.
2. Big Omega: written as Ω(x). This is used to show the lower bound of an algo. The best case scenario.
3. Big Theta: written as θ(x). This is used to show both the upper and lower bound of an algo - a tight bound. (It is sometimes loosely described as the "average case", but strictly speaking it means the upper and lower bounds match.)
PS: The "x" mentioned above is replaced by "how the runtime grows as input becomes very large". For example:
i) If the running time of an algo increases linearly with the input, it would be shown as "O(n)". Here, "n" represents the input size, and the runtime grows in direct proportion to it. [Considering only the Big O notation in this example, but it's valid for all 3 notations.]
ii) If the running time increases in quadratic fashion, we would use "O(n^2)".
iii) If increases in logarithmic fashion, "O(log(n))".
iv) Similarly, if it increases exponentially, "O(k^n)" - where "k" could be any constant value greater than 1. e.g. O(2^n)
Note: We drop the constant terms (if they exist) in the notation. For example, we consider O(2n) & O(3n) as O(n). This is because, when the input "n" is very very large, the difference between "2n" & "3n" is insignificant.
Formal representation:
Here is a more formal description of the asymptotic notations for better understanding.
i) Big O: We say f(n) = O(g(n)) when there exist constants "c" and "n0" such that
"c.g(n)" is always greater than or equal to f(n), for all "n > n0". ["c" is a positive constant; "n0" is the input size above which the bound holds; "."(dot) represents the multiplication operation.]
ii) Big Omega: We say f(n) = Ω(g(n)) when there exist constants "c" and "n0" such that
"c.g(n)" is always less than or equal to f(n), for all "n > n0".
iii) Big Theta: We say f(n) = θ(g(n)) when there exist constants "c1", "c2" and "n0" such that
f(n) is bounded between "c1.g(n)" & "c2.g(n)", for all "n > n0". ["c1" and "c2" are positive constants]
Space Complexity
As you would have guessed already, here we want to determine how much space (memory) the algorithm consumes when it executes. Similar to time complexity, we have the same 3 notations - Big O, Big Omega & Big Theta - to express space complexity. The most commonly used notation is Big O.
Just like in time complexity, the function inside O(x) changes depending on how the space required to execute the algo grows.
For example: O(n), O(n^2), O(2^n), etc.
Note
Asymptotic analysis of an algorithm tells us about its time and space complexity - the qualitative aspect. The actual values (quantitative aspect) of the runtime and space used while running an algo depend on other factors like: the language used, the compiler, and the specs of the machine on which it runs.
Test yourself
Q. What is the time complexity of the algorithm below?
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n - 1; j++)
    {
        if (arr[j] > arr[j+1])
        {
            temp = arr[j];
            arr[j] = arr[j+1];
            arr[j+1] = temp;
        }
    }
}
Ans: Time complexity: O(n^2), Space complexity: O(n)
Explanation: As you may have noticed, this is the algo for "Bubble sort". Here, we are iterating over the given array (arr) to sort it in ascending order. We can see from the implementation that first we iterate "i" from "0" to "n-1"; and for each "i", we iterate "j" over the array to compare and swap adjacent elements. Therefore, for an array of size "n", we perform roughly "n^2" iterations in total. That is, the runtime grows by a factor of "n^2" as the input size increases.
To understand space complexity, we must look at how much space the algo utilizes while running. We can see that the array uses "n" amount of space and the "temp" variable uses a fixed amount of space. When the size of the input array increases, the space used by the algo also increases by a factor of "n". And when the input array size is very, very large, the space utilized by the "temp" variable is insignificant. Therefore, the space complexity is "O(n)". (If we count only the auxiliary space - the extra memory beyond the input array itself - the algo uses just "temp", which is O(1).)