# What is Big O Notation? — Explained in Simple Terms

Every problem comes with many possible approaches.

However, just because an algorithm solves a problem doesn’t necessarily mean it’s the *ideal* solution.

Generally, whenever you’re choosing a solution — that is, an algorithm — for your problem, you want to consider optimization: how long the algorithm will take to run, and how much space it’s going to take up in memory.

It is not enough to just solve problems. The greatest programmers don’t only come up with solutions, they also contemplate why things work the way they do and if there are better ways to implement those solutions. This way of thinking promotes efficient code.

This is when factors like **Big O notation** become relevant.

# What is Big O notation and why is it useful in software development?

Essentially, Big O notation is a *special notation* that tells us how *fast* an algorithm is. However, Big O notation doesn’t tell us how long the algorithm will take in terms of seconds; instead, it *allows us to compare the number of operations the algorithm will make*. It’s all about the **growth** of an algorithm’s running time.

**Let’s consider this example:**

Suppose I have a list of 50 numbers and I ask you to guess which number I’m thinking of. One way you could approach this problem is by going through the entire list of numbers.

So, you’d start guessing this way: 1, 2, 3, 4 and so on… not a very smart approach.

In this scenario, it could take up to 50 guesses to reach the answer, which might not be too much of a hassle. But imagine if the list grew to 50 billion numbers? It could take up to 50 billion guesses. Yikes!

This approach is known as “**simple search**”. Simple search runs in **linear time** because the maximum number of operations equals the size of the list. The runtime for simple search is expressed in Big O notation as O(n).
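The guessing game above can be sketched in a few lines of Python (the function name and the guess counter are just for illustration):

```python
# A minimal sketch of "simple search": check every item in order
# until we find the target. Worst case, that's one guess per item,
# which is why the run time is O(n).
def simple_search(items, target):
    guesses = 0
    for item in items:
        guesses += 1          # one operation per item checked
        if item == target:
            return guesses    # how many guesses it took
    return guesses            # target wasn't in the list

# Worst case for a list of 50 numbers: the answer is the last one.
print(simple_search(list(range(1, 51)), 50))  # 50 guesses
```

Notice that the worst case grows in lockstep with the list: double the list, double the maximum number of guesses.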

Linear time = **O(n)**

*n* refers to the size of the input — here, the number of items in the list — and the capital O in front of the *n* simply tells us that we’re dealing with Big O notation *(note: there are other notations, for example “little o”)*.

So, looking back at my list of 50 numbers, when using the simple search method, the **complexity** of that algorithm would look like this: O(50)

Therefore, we know that it would take at most 50 operations.

Pretty straightforward, right?

Now let’s look at another approach for solving the same problem, this time we’ll use “**binary search**”. With this approach, my list of 50 numbers would take you at most about 6 guesses and my list of 50 billion numbers? That would take you at most 36 guesses.

**HUGE difference, right?**

This is because binary search runs in **logarithmic time**, often referred to as *log time*.

Logarithmic time = **O(log n)**

So, we compute *log base 2 of n* to determine how many operations the algorithm will make in the **worst-case** scenario *(note: Big O focuses on the worst-case scenario)*.

So, my list of 50 numbers, using the binary search approach, would be expressed as O(log 50).

Since log₂ 50 ≈ 5.6, we always round up, so 6 is the answer. Therefore, we know it would take at most 6 guesses or “operations”.

*Logarithms are the flip of exponentials: since 2⁶ = 64, we know that log₂ 64 = 6.*

For my list of 50 billion numbers, using our binary search algorithm, the complexity would look like this: O(log 50,000,000,000).

Since log₂ 50,000,000,000 ≈ 35.5, we round up to 36. It would take at most 36 guesses or “operations”.

It doesn’t matter how large the list (*n*) is, we can always **approximate** its run time.
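You can check both of those worst-case counts directly with Python’s standard `math` module:

```python
import math

# The worst-case guess count for binary search is log2(n),
# rounded UP to the next whole guess.
for n in (50, 50_000_000_000):
    print(n, "->", math.ceil(math.log2(n)), "guesses at most")
# 50 -> 6 guesses at most
# 50000000000 -> 36 guesses at most
```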

Notice how, in these examples, the binary search approach is much faster than the simple search approach. This is because their running times grow at different rates.

# Here are the top 5 most common Big O run times that you’re likely to come across (sorted from fastest to slowest):

- O(log n), known as log time, example: *binary search*
- O(n), known as linear time, example: *simple search*
- O(n * log n), known as linearithmic/quasilinear time, example: *quicksort*
- O(n²), known as quadratic time, example: *selection sort*
- O(n!), known as factorial time, example: an extremely slow algorithm like the *traveling salesperson*
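To get a feel for how differently these run times grow, here is a quick sketch that prints the (approximate) operation counts for a few list sizes — O(n!) is left out because factorials blow up far too quickly to tabulate:

```python
import math

# Approximate operation counts for each common run time, as n grows.
# Watch how O(log n) barely moves while O(n^2) explodes.
for n in (10, 100, 1000):
    print(
        f"n={n}: "
        f"log n ~ {math.ceil(math.log2(n))}, "
        f"n = {n}, "
        f"n log n ~ {math.ceil(n * math.log2(n))}, "
        f"n^2 = {n ** 2}"
    )
```

At n = 1,000 the gap is already dramatic: about 10 operations for log time versus 1,000,000 for quadratic time.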

All in all, caring about the quality of the algorithms you use when writing code makes you a better programmer.

**Code is supposed to be simple and elegant — it’s an art!**

Let’s all create beautiful programs! :)