It’s 9:00 a.m. You arrived at work five minutes ago and just had time to fetch your first coffee of the day.

Suddenly, your boss comes in and tells you how hungry he is. You ask him how you can help. He has a meeting with other hungry people, and he wants you to cook an apple pie by 11:00 a.m.

You visit Recipe Overflow to find apple pie recipes and end up on a thread with folks arguing about the best recipe.

You realize there are different ways of cooking an apple pie. Some recipes look easy to cook, with only three or four steps and few ingredients, but users downvote them because they say they are not that tasty.

There is one recipe that has nice ratings and more steps, and people say it is the best they have ever eaten. You don’t want to disappoint your boss, so you pick that super delicious one.

As there are multiple ways to cook an apple pie, there are multiple approaches to solve a coding problem. A badly-written algorithm can slow your app down. How can we measure that?

Code cannot be tasted; that’s where **Big O** comes in.

# Big O Notation, What Is It For?

“Big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows, by denoting an upper bound of its growth rate.” — Wikipedia

In simple words, it helps to tell if an algorithm is efficient enough to deal with a huge volume of data.

- For a small application that processes a small volume of data, it might not be necessary to use Big O. Unless the code is really bad, the difference in the runtime between a good algorithm and a not-too-bad one has a small impact on the application.
- But, for large applications, manipulating a large collection of data, doing this analysis while writing an algorithm can make your app run significantly faster (and use less memory).

That said, understanding how Big O works is absolutely worth it, even for small projects. You will thank yourself later, when your app scales up.

# Why Don’t We Use Time to Measure Efficiency?

Although time is usually closely related to complexity, the two aren’t necessarily identical. Processing operations on different computers with different hardware, operating systems, and programming languages might affect the resulting time.

Also, measured time is harder to analyze than math. Big O notation is used to see how an algorithm scales: it is easier and faster to compare the curves of two different algorithms than the timings of tests.

Finally, Big O notation abstracts the algorithm and the data it processes. It approximates the efficiency of the worst-case performing scenario of an algorithm.

Meaning, if you do a search in a list with 100,000 items, the worst-case scenario would be finding nothing. Measuring the efficiency of an algorithm with only time and real data would be harder and less relevant.

# How Does It Work?

This chart shows you the volume of operations an algorithm needs to process depending on the input’s size (elements). It is divided into five parts, going from the most efficient, *Excellent*, to the least efficient, *Horrible*.

The notation is written this way: *O(efficiency)*, and any algorithm can be noted with this formula. Needless to say, the greener your algorithm is, the better. The red part is a zone to avoid at all costs! An algorithm in the red zone scales very poorly.

Let’s see what kind of algorithm these curves could correspond to…

## Constant time complexity — O(1)

No matter the input’s size, the algorithm always performs a single operation.
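For instance, accessing an array element by its index is a single operation, whatever the array’s length. A minimal sketch in JavaScript (the function name is an assumption for illustration):

```javascript
// O(1): one operation, regardless of the input's size.
function getFirst(array) {
  return array[0];
}

getFirst([5, 8, 13]); // → 5, one operation for 3 elements
getFirst(Array.from({ length: 100000 }, (_, i) => i)); // → 0, still one operation
```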

## Logarithmic time complexity — O(log n)

This represents a function whose complexity increases logarithmically as the input size increases.

Here is the typical shape of a logarithm function.

The abscissa and ordinate values are not really important here; pay attention to the curve itself. With *x* being the size of the input, we can see it scales extremely well.
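The original code sample is not reproduced here; as an assumed reconstruction in JavaScript (over a plain sorted array of numbers), binary search could look like this:

```javascript
// Binary search: O(log n) — the search interval is halved at every step.
// Returns the index of `target` in the sorted array, or -1 if it is absent.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1; // discard the right half
  }
  return -1; // worst case: the target is not in the array
}

binarySearch([1, 3, 5, 7, 9], 7); // → 3
```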

Above is the code of a binary search, an algorithm of *O(log n)* complexity.

The binary search algorithm locates an item in a sorted array by repeatedly dividing the search interval in half.

As we discard one half of the search interval at every step of the binary search and perform the search on the remaining half, this results in a worst-case time complexity of *O(log n)*.

## Linear time complexity — O(n)

It represents the complexity of an algorithm that increases linearly and in direct proportion to the input size.

An example of this would be searching for an item in a list, as we did just above but in a less optimal way.
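A sketch of such a linear search in JavaScript (an assumed example, mirroring the loop described below):

```javascript
// Linear search: O(n) — in the worst case, it scans the whole array.
function linearSearch(array, target) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) return i;
  }
  return -1; // worst case: looped to the end without finding the target
}

linearSearch([4, 8, 15, 16], 15); // → 2
```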

In this case, the function is a loop. In the worst case, it loops until the end of the list without finding the item.

The time of execution directly depends on the input’s size, which results in an *O(n)* complexity.

## Logarithmic linear time complexity — O(n log n)

Merge sort, heap sort, and quick sort are algorithms with a time complexity of *O(n log n)*. They scale in a logarithmic linear way.

Below is an example of the merge sort, a divide and conquer sorting algorithm. It divides the array into two halves, recursively sorts them, and finally merges the two sorted halves.
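The original code sample is not reproduced here; as an assumed reconstruction in JavaScript (over plain arrays of numbers), it could look like this:

```javascript
// Merge sort: O(n log n) — divide, recursively sort, then merge.
function mergeSort(array) {
  if (array.length <= 1) return array; // a single element is already sorted
  const middle = Math.floor(array.length / 2);
  const left = mergeSort(array.slice(0, middle));
  const right = mergeSort(array.slice(middle));
  return merge(left, right);
}

// Merging two sorted slices runs in linear time: O(n).
function merge(left, right) {
  const result = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  // One of the two slices may still contain leftover elements.
  return result.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([5, 1, 4, 2, 3]); // → [1, 2, 3, 4, 5]
```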

Here is a great visualizer to see it with your own eyes.

Let’s explain the time complexity.

- *O(log n)*: it divides the given array into two halves recursively until the size becomes 1. This is the `mergeSort` function.
- *O(n)*: the task of merging two sorted slices runs in linear time. This is the `merge` function.

Multiplying the two together gives *O(n log n)*.

## Quadratic time complexity — O(n²)

*O(n²)* represents an algorithm whose complexity is directly proportional to the square of the input size.

Here is a useless example, but simple to understand:
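A sketch in JavaScript (an assumed example, since the original sample is not shown): counting every pair of elements with a nested loop.

```javascript
// O(n²): a loop inside a loop — n × n operations.
function countPairs(array) {
  let count = 0;
  for (let i = 0; i < array.length; i++) {   // O(n)
    for (let j = 0; j < array.length; j++) { // O(n) for each i
      count++;
    }
  }
  return count; // n * n operations in total
}

countPairs([1, 2, 3]); // → 9
```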

A loop has an *O(n)* complexity; therefore, a loop inside a loop has an *O(n) × O(n)* complexity, which is *O(n²)*.

Bubble sort, insertion sort, and selection sort are examples of *O(n²)* sorting algorithms.

## Exponential time complexity — O(2ᴺ)

*O(2ᴺ)* represents an algorithm with exponential growth: the volume of operations doubles with each element added to the input data set.

An example I found is the recursive calculation of Fibonacci numbers. For those who don’t know, in the Fibonacci sequence, each number is the sum of the two preceding ones.
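Here is a sketch of that naive recursive implementation in JavaScript (an assumed reconstruction, since the original sample is not shown):

```javascript
// Naive recursive Fibonacci: O(2^n) — two recursive calls per input number.
function fibonacci(n) {
  if (n <= 1) return n; // base cases: fibonacci(0) = 0, fibonacci(1) = 1
  return fibonacci(n - 1) + fibonacci(n - 2);
}

fibonacci(10); // → 55
```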

Do not use the code above in a real project, this calculation can be achieved with a lower time complexity. This is only an example.

As we see, the function recursively calls itself twice for each input number until the number is less than or equal to one.

# Time Complexity vs. Space Complexity

So far, we have used Big O to measure the time complexity of an algorithm.

Big O is also used to measure *space complexity*. Space complexity is the measure of the volume of working storage (memory) an algorithm needs, in the worst-case scenario.

Space complexity is as important as time complexity. In fact, there might be cases where you need to prioritize one over the other (speed or memory).

It follows the same logic as we’ve seen before; here is a little example below.
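A sketch in JavaScript (the function and variable names are assumptions for illustration): a function that collects every word starting with the letter A.

```javascript
// Space complexity O(n): in the worst case (every word starts with "A"),
// the `words` array grows to the size of the input.
function getWordsStartingWithA(allWords) {
  const words = [];
  for (const word of allWords) {
    if (word.startsWith("A")) words.push(word);
  }
  return words;
}

getWordsStartingWithA(["Apple", "Banana", "Avocado"]); // → ["Apple", "Avocado"]
```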

Taking the worst-case scenario of this function into consideration (all words starting with the letter A), the `words` array would contain *n* words. Therefore, the space complexity would be *O(n)*.

# Conclusion

Big O helps us understand how an algorithm scales, depending on the input’s size.

It can be used to measure time and space complexity and both are equally important to keep in mind.

Big O is a must-have tool in a programmer’s toolbox.

Source: Big O Complexity