Big O notation is a measurement of algorithm efficiency. Its purpose is to characterize the running time of an algorithm as the size of its input approaches infinity. For example, suppose an algorithm takes 4n^{2} + 8n + 5 steps on an input of size n. Big O notation tells us how long this algorithm would take to compute a result for a very large input.

As n approaches infinity, the lower-order terms in the expression become irrelevant. Compared to 4n^{2} as n becomes increasingly large, 8n is so small that it doesn't need to be considered. Similarly, the coefficient 4 in 4n^{2} can also be ignored, since Big O describes the rate of growth rather than an exact step count. The result is an algorithm of order O(n^{2}).

The Big O of an algorithm can be calculated by breaking the algorithm down into its components: the complexities of nested components multiply, while those of sequential components add (and the larger term dominates). Consider the following example:

for (int x = 0; x < n; x++)
{
    for (int y = 0; y < n; y++)
    {
        int sum = x + y;
        printf("sum equals %d\n", sum);
    }
}

The inner loop has a complexity of O(n). The outer loop also has a complexity of
O(n). When we multiply the two, we get
the complexity of the algorithm, which is n * n = O(n^{2}).

Algorithms can have widely varying complexity depending on
what they’re trying to calculate, whether it is a single lookup of a value or a
brute force sort of a large set of words. The following are examples of algorithms and their associated Big O
values:

O(1)

Constant

Determine the value associated with a given key in a hash
table

O(log n)

Logarithmic

Binary search – consider the worst-case scenario of searching for a page in a 64-page book. Start the search in the middle, at page 32. If the target page comes before page 32, halve the remaining range and check page 16, then page 8, page 4, page 2, and finally page 1. It took six steps, or log_{2}(64) comparisons, to find the page.

O(n)

Linear

Finding an element in a linked list – in the worst case
scenario, the desired value is the last element of the list and the algorithm
takes n steps to find it.

O(n log n)

Quasilinear

Heapsort – in heapsort, the input elements are first arranged into a heap, a binary tree in which the root holds the greatest value and every node is greater than or equal to its children. To build and maintain this structure, an element may have to sift down the height of the tree, a worst-case O(log n) operation. This operation must be performed for each of the n elements, yielding a complexity of O(n log n).

O(n^{2})

Quadratic

Insertion sort – in insertion sort, an array of arbitrary length is sorted by taking elements one at a time and inserting each into its proper location within an already-sorted prefix. In the worst-case scenario, it can take n shifts to place a single element in its proper location. If this is true for every element, it takes n * n operations to sort the array, or O(n^{2}).

O(n^{c})

Polynomial

An algorithm built from c nested for loops over its input is an example of a polynomial algorithm. Recall from the previous example that each such loop adds a factor of n to the Big O. For instance, consider the following code with three nested loops:

for (int x = 0; x < n; x++)
{
    for (int y = 0; y < n; y++)
    {
        int sum = x + y;
        printf("sum equals %d\n", sum);

        for (int z = 0; z < n; z++)
        {
            int new_sum = x + y + z;
        }
    }
}

In this case, we are processing three for loops, each of which iterates over the entire input, so the complexity is O(n^{3}).

Beyond polynomial, there remain exponential algorithms, O(c^{n}), and factorial algorithms, O(n!). These can approach the lifetime of the universe to compute. For example, consider n = 32: c^{32} and 32! are remarkably large numbers, even for a relatively small input. If you find your algorithms reaching this level of complexity, it might be time to reevaluate your code.