Complexity classes


Asymptotic notation

We will develop a way to measure the efficiency of algorithms that is invariant to non-essential factors such as the speed of the computer hardware and the efficiency of the programming language used.


The measurement function is sensitive only to how the time an algorithm takes to complete grows with respect to the size of the input.

Measure

Using functions as measures of algorithmic complexity

We want to use functions to measure the inherent complexity of algorithms.

This is not entirely straightforward.


We don’t care about:

  - the speed of the computer hardware,
  - the efficiency of the programming language used.

We do care about:

  - how the running time grows with respect to the size of the input.

Capturing the essentials

Suppose that we have an algorithm $P$. Let’s use some function $f(n)$ to represent the time, in the worst case, that $P$ takes to process an input of size $n$.
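
As a concrete illustration (a minimal Python sketch; `linear_search` is a hypothetical example algorithm, not one fixed by the text), searching an array of length $n$ for an element that is absent performs $n$ comparisons, so the worst-case measure is $f(n) = n$:

```python
def linear_search(xs, target):
    """Return the index of target in xs, or -1 if absent."""
    for i, x in enumerate(xs):
        if x == target:  # one comparison per iteration
            return i
    return -1

# Worst case for input size n = 5: the target is absent,
# so all 5 comparisons are performed, i.e. f(5) = 5.
print(linear_search([2, 4, 6, 8, 10], 7))  # -1
```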


Let’s look at the effects of various environmental factors on the measure $f(n)$.

Effects on the measure

But we don’t want such implementation and runtime factors to affect the measure when investigating the complexity of algorithms.


Intuition: given a function $g(n)$, we define a family of (infinitely many) functions that compare to $g(n)$ in a prescribed way.

$\mathcal{O}$-notation - better than

We want to define the set of all functions that measure algorithms which are better than (or the same as) $g(n)$.

Definition: The Big-O notation

$$\mathcal{O}(g(n)) = \{f(n) : \exists n_0, c > 0,\forall n > n_0, \ 0 \leq f(n) \leq c\cdot g(n)\}$$

We call $g(n)$ an asymptotic upper bound for the functions in $\mathcal{O}(g(n))$.
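
For example, $3n + 10 \in \mathcal{O}(n)$; one possible choice of witnesses is $c = 4$ and $n_0 = 10$:

$$3n + 10 \leq 4n \quad \text{whenever } n \geq 10$$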

Significance of asymptotic analysis


Challenge:


Suppose that the running time of an algorithm $A$ is in $\mathcal{O}(g(n))$. Check that the following variations on how the efficiency of $A$ is measured are also in $\mathcal{O}(g(n))$ (a sketch for the first item follows the list).

  1. $T_1(n)$ is the running time of $A$ on a slower processor.

  2. $T_2(n)$ is the running time of $A$ when the input size $n$ is measured as the array length rather than in some other unit.
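
A sketch for the first item, under the assumption that the slow processor is slower by a constant factor $k > 1$, so that $T_1(n) = k\cdot T(n)$ where $T(n) \leq c\cdot g(n)$ for all $n > n_0$:

$$T_1(n) = k\cdot T(n) \leq (kc)\cdot g(n) \quad \text{for all } n > n_0$$

Hence $T_1(n) \in \mathcal{O}(g(n))$, witnessed by $c' = kc$ and the same $n_0$.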

$\Omega$-notation - worse than

Definition: The $\Omega$-notation

$$\Omega(g(n)) = \{f(n) : \exists n_0, c > 0,\forall n > n_0, \ 0 \leq c\cdot g(n) \leq f(n) \}$$

We say that $g(n)$ is an asymptotic lower bound for the functions in $\Omega(g(n))$.
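
For instance, $\frac{1}{2}n^2 \in \Omega(n)$, witnessed by $c = \frac{1}{2}$ and $n_0 = 1$:

$$\frac{1}{2}\, n \leq \frac{1}{2}\, n^2 \quad \text{whenever } n \geq 1$$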

$\Theta$-notation - same as

Definition: The $\Theta$-notation

$$\Theta(g(n)) = \mathcal{O}(g(n)) \cap \Omega(g(n))$$

More precisely,

$$ \begin{eqnarray} && \Theta(g(n)) \\ &=& \{f(n): \exists n_0, c_1, c_2 > 0,\forall n > n_0,\ 0 \leq c_1\cdot g(n) \leq f(n) \leq c_2\cdot g(n)\} \end{eqnarray} $$
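
Continuing the earlier example, $3n + 10 \in \Theta(n)$, witnessed by $c_1 = 3$, $c_2 = 4$ and $n_0 = 10$:

$$3n \leq 3n + 10 \leq 4n \quad \text{whenever } n \geq 10$$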

Recap

Functions we use: the constant function $1$, $\log n$, $n$, $n\log n$, polynomials $n^b$, exponentials $c^n$, and the factorial $n!$.

More about Exponents

More about Logarithms

More about Factorial

Stirling’s approximation:

$$ n! \approx \sqrt{2\pi n} \left(\frac{n}{e}\right)^n $$
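
A quick numeric check (a Python sketch; the helper `stirling` is named here just for illustration) shows the relative error shrinking as $n$ grows:

```python
import math

def stirling(n):
    """Stirling's approximation to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The relative error shrinks as n grows.
for n in (1, 5, 10, 20):
    exact = math.factorial(n)
    rel_err = abs(exact - stirling(n)) / exact
    print(f"n={n:>2}  n!={exact}  rel. error={rel_err:.4%}")
```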

Comparisons

For any $a > 1$, $b > 0$ and $c > 1$, we have:

$$ \mathcal{O}(\log_a(n)) \subseteq \mathcal{O}(n^b) \subseteq \mathcal{O}(c^n) $$
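
To get a feel for these containments, here is a small Python sketch (with the arbitrary choices $a = 2$, $b = 2$, $c = 2$) tabulating the three growth rates:

```python
import math

# log2(n) vs. n^2 vs. 2^n: each column eventually dwarfs the previous one.
for n in (10, 20, 40, 80):
    print(f"n={n:>3}  log2(n)={math.log2(n):6.2f}  n^2={n**2:>6}  2^n={2**n}")
```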

We can show that:

$$ n! \in\mathcal{O}(n^n) $$
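
This follows because each of the $n$ factors of $n!$ is at most $n$, so $c = 1$ and $n_0 = 1$ are witnesses:

$$n! = n\cdot(n-1)\cdots 2\cdot 1 \leq \underbrace{n\cdot n\cdots n}_{n\ \text{factors}} = n^n$$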

In fact:

$$ \mathcal{O}(2^n) \subset \mathcal{O}(n^n) $$
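
One way to see that the containment is strict: $2^n \leq n^n$ for $n \geq 2$, but $n^n \notin \mathcal{O}(2^n)$ because the ratio is unbounded:

$$\frac{n^n}{2^n} = \left(\frac{n}{2}\right)^n \to \infty \quad \text{as } n \to \infty$$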

Notation

It’s convenient to treat complexity classes as variables in equations.

So, we use the following notations: we write $f = \mathcal{O}(g)$ to mean $f \in \mathcal{O}(g)$, and likewise $f = \Omega(g)$ and $f = \Theta(g)$.
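
Under this convention, a class appearing inside an expression stands for some anonymous function belonging to it; for example:

$$2n^2 + 3n + 1 = 2n^2 + \Theta(n) = \Theta(n^2)$$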

Test your understanding

  1. If $f = \mathcal{O}(g)$ and $g = \mathcal{O}(h)$, can you prove that $f = \mathcal{O}(h)$?

    Does it make sense intuitively?

    Does it work out mathematically?

  2. Show that $n^2 + 10000 = \mathcal{O}(n^3)$.

© KEN PU, 2016