Introduction to Big O Notation
Big O Notation is a way to describe how an algorithm's running time grows as the size of its input grows. It enables a software developer to compare how efficient different approaches to solving a problem are.
Here are some common types of time complexities in Big O Notation.
O(1) - Constant time complexity
O(n) - Linear time complexity
O(log n) - Logarithmic time complexity
O(n^2) - Quadratic time complexity
By the end of this article, you should understand the basics of Big O Notation.
Constant time algorithms always take the same amount of time to execute. Their execution time is independent of the size of the input. A good example of O(1) time is accessing a value by array index.
Other examples include the push() and pop() operations on an array.
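The operations above can be sketched in JavaScript (the array name and values here are just illustrative):

```javascript
const colors = ["red", "green", "blue"];

// Accessing an element by index is O(1): the position is computed
// directly, no matter how long the array is.
const second = colors[1]; // "green"

// push() and pop() only touch the end of the array,
// so they also run in constant time.
colors.push("yellow"); // O(1)
colors.pop();          // O(1)
```

Notice that none of these operations look at the rest of the array, which is why their cost does not depend on its length.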
An algorithm has linear time complexity if its execution time is directly proportional to the input size n: as n grows, the running time grows at the same rate.
A good example is finding a CD in a stack of CDs or reading a book, where n is the number of pages.
A common example of O(n) is linear search:
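A minimal linear search sketch in JavaScript (the function name is ours, not from a library):

```javascript
// Linear search scans the array one element at a time until the
// target is found. In the worst case it inspects all n elements,
// so it is O(n).
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // found: return its index
  }
  return -1; // not found
}
```

Doubling the length of the array doubles the number of comparisons in the worst case, which is exactly what "linear" means here.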
An algorithm has logarithmic time complexity if the time it takes to run the algorithm is proportional to the logarithm of the input size n. An example is binary search, which is often used to search data sets:
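A sketch of binary search in JavaScript (it assumes the input array is already sorted):

```javascript
// Binary search halves the remaining search range on every step,
// so it finds the target in O(log n) comparisons.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid; // found
    if (sorted[mid] < target) lo = mid + 1; // discard the lower half
    else hi = mid - 1;                      // discard the upper half
  }
  return -1; // not found
}
```

Because each comparison discards half of the remaining elements, searching a million-element array takes only about 20 comparisons.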
Other examples of logarithmic time complexity include searching a balanced binary search tree.
An algorithm has quadratic time complexity if its execution time is proportional to the square of the input size. A good example of this is checking a deck of cards for duplicates by comparing every card against every other card.
You will encounter quadratic time complexity in algorithms involving nested iterations, such as nested for loops. Deeper levels of nesting result in O(n^3), O(n^4), and so on.
Other examples of quadratic time complexity include bubble sort, selection sort, and insertion sort.
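The duplicate check described above can be sketched with nested loops (the function name is ours):

```javascript
// Checks whether an array contains any duplicate values by comparing
// every pair of elements. The nested loops perform roughly n^2 / 2
// comparisons in the worst case, so the algorithm is O(n^2).
function hasDuplicates(items) {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true; // found a matching pair
    }
  }
  return false; // every pair was distinct
}
```

The inner loop starts at i + 1 so each pair is compared only once, but that constant-factor saving does not change the O(n^2) growth rate.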