
Complexity Theory

1. Introduction

Complexity Theory is a sub-field of computer science in which we estimate algorithm costs mathematically, independently of the underlying hardware.

2. Model of computation

2.1 Definition

To calculate an algorithm's cost, we need a model of computation $\mathcal{M}$.

A model of computation specifies which instructions are allowed in our algorithms, and what each of them costs.

By default we will use the Random Access Machine model, in which we make the following assumptions:

  • Memory access (read/write) takes constant time.
  • Each arithmetic instruction takes at most constant time for standard types.
  • Each bit-wise instruction takes at most constant time for standard types.
  • Each boolean test takes at most constant time for standard types.

Here by standard types, we mean integers with no more than $m_{\text{integer}}=64$ bits, and floating point types with no more than $m_{\text{Float}}=128$ bits.

2.2 Algorithm Cost

Using the model $\mathcal{M}$, any algorithm can be decomposed into a sequence of the elementary instructions described above.

To simplify the analysis, we can require that the cost of each elementary instruction is $1$.

2.3 Example
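As a minimal sketch (code added here for illustration, assuming the unit-cost RAM model above), consider summing an array of $n$ integers while charging every elementary instruction a cost of $1$:

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Sum an array of n integers, charging a cost of 1 to every elementary
// RAM instruction (memory access, arithmetic operation, boolean test).
int64_t sum_with_cost(const std::vector<int64_t>& a, int64_t& cost) {
    int64_t s = 0;                            // 1 write
    cost += 1;
    for (std::size_t i = 0; i < a.size(); ++i) {
        cost += 2;                            // one loop test + one increment of i
        s += a[i];                            // 1 read + 1 addition
        cost += 2;
    }
    cost += 1;                                // final failing loop test
    return s;
}

int main() {
    std::vector<int64_t> a = {3, 1, 4, 1, 5};
    int64_t cost = 0;
    int64_t s = sum_with_cost(a, cost);
    // For n = 5 this prints cost = 22 = 4n + 2: linear in n.
    std::cout << "sum = " << s << ", cost = " << cost << '\n';
}
```

Counting this way gives a total cost of $4n+2$; the exact constants depend on how finely we decompose the loop, but any consistent accounting yields a cost linear in $n$.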

3. Big $\mathcal{O}$-Notation

3.1 Mathematical Definition

This is not a general definition, but for all practical purposes, it will work well in a competitive programming setting.

Let $f,g\in\mathscr{F}(\mathbb{R}_+,\mathbb{R}_+)$ be two real-valued functions with positive values.

By definition, we say that $f$ is asymptotically bounded by $g$ if:

$$\exists K \in\mathbb{R}_+,\ R\in\mathbb{R}_+ \ / \quad \forall x>R, \quad f(x) \le K g(x)$$

If $g$ is eventually strictly positive, this is equivalent to:

$$\exists K \in\mathbb{R}_+^*,\ R\in\mathbb{R}_+ \ / \quad \forall x > R, \quad \frac{f(x)}{g(x)} \le K$$

We say that $f$ is eventually bounded from above by some constant multiple of $g$, or roughly speaking $f$ is a big $\mathcal{O}$ of $g$, and we denote it by:

$$f = \mathcal{O}(g)$$
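As a quick worked example (added for illustration), $f(x)=3x^2+5x$ is $\mathcal{O}(x^2)$: for $x>1$ we have $5x \le 5x^2$, hence

$$\forall x > 1, \quad 3x^2+5x \le 8x^2,$$

so the definition holds with $K=8$ and $R=1$.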

3.2 Properties

| Name | Equation | Utility |
| --- | --- | --- |
| Reflexive | $f=\mathcal{O}(f)$ | Generally not useful |
| Transitive | $f=\mathcal{O}(g)$ and $g=\mathcal{O}(h) \implies f=\mathcal{O}(h)$ | Get an easier upper bound |
| Sum | $f_1+f_2=\mathcal{O}(\max(f_1,f_2))$ | Get an easier upper bound |
| Scaling | $f=\mathcal{O}(g) \implies f=\mathcal{O}(k\cdot g) \quad \forall k\in\mathbb{R}_+^*$ | Remove constants |
| Product | $f_1=\mathcal{O}(g_1)$ and $f_2=\mathcal{O}(g_2) \implies f_1f_2=\mathcal{O}(g_1g_2)$ | Simplify products |
| Cumulative Sum | $f$ increasing $\implies \sum_{i=1}^n f(i)=\mathcal{O}\left(\int_{1}^{n+1} f(x)\,dx\right)$ | Bound sums by integrals |
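As an illustration of the Cumulative Sum rule (example added here), take the increasing function $f(x)=\ln x$:

$$\sum_{i=1}^n \ln i \;\le\; \int_1^{n+1}\ln x\,dx \;=\; (n+1)\ln(n+1)-n \;=\; \mathcal{O}(n\ln n),$$

which is exactly the standard bound $\ln(n!)=\mathcal{O}(n\log n)$.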

4. Big $\Omega$-Notation

4.1 Mathematical Definition

This is not a general definition, but for all practical purposes, it will work well in a competitive programming setting.

Let $f,g\in\mathscr{F}(\mathbb{R}_+,\mathbb{R}_+)$ be two real-valued functions with positive values.

By definition, we say that $f$ is asymptotically bounded from below by $g$ if:

$$\exists K \in\mathbb{R}_+^*,\ R\in\mathbb{R}_+ \ / \quad \forall x>R, \quad f(x) \ge K g(x)$$

If $g$ is eventually strictly positive, this is equivalent to:

$$\exists K \in\mathbb{R}_+^*,\ R\in\mathbb{R}_+ \ / \quad \forall x > R, \quad \frac{f(x)}{g(x)} \ge K$$

We say that $f$ is eventually bounded from below by some constant multiple of $g$, or roughly speaking $f$ is a big $\Omega$ of $g$, and we denote it by:

$$f = \Omega(g)$$
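As a quick worked example (added for illustration), $f(x)=x^2-x$ is $\Omega(x^2)$: for $x>2$ we have $x \le \frac{x^2}{2}$, hence

$$\forall x > 2, \quad x^2-x \ge \frac{x^2}{2},$$

so the definition holds with $K=\frac{1}{2}$ and $R=2$.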

4.2 Properties

| Name | Equation | Utility |
| --- | --- | --- |
| Reflexive | $f=\Omega(f)$ | Generally not useful |
| Transitive | $f=\Omega(g)$ and $g=\Omega(h) \implies f=\Omega(h)$ | Get an easier lower bound |
| Sum | $f_1+f_2=\Omega(\min(f_1,f_2))$ | Get an easier lower bound |
| Scaling | $f=\Omega(g) \implies f=\Omega(k\cdot g) \quad \forall k\in\mathbb{R}_+^*$ | Remove constants |
| Product | $f_1=\Omega(g_1)$ and $f_2=\Omega(g_2) \implies f_1f_2=\Omega(g_1g_2)$ | Simplify products |
| Cumulative Sum | $f$ increasing $\implies \sum_{i=1}^n f(i)=\Omega\left(\int_{1}^{n} f(x)\,dx\right)$ | Bound sums by integrals |
| Duality | $f=\mathcal{O}(g) \iff g=\Omega(f)$ | Convert between $\mathcal{O}$ and $\Omega$ |
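As a quick use of Duality (example added here): since $\log n = \mathcal{O}(n)$, duality immediately gives $n = \Omega(\log n)$, with no need to re-check the definition.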

5. Big $\Theta$-Notation

5.1 Mathematical Definition

This is not a general definition, but for all practical purposes, it will work well in a competitive programming setting.

Let $f,g\in\mathscr{F}(\mathbb{R}_+,\mathbb{R}_+)$ be two real-valued functions with positive values.

By definition, we say that $f=\Theta(g)$ if $f=\mathcal{O}(g)$ and $f=\Omega(g)$.
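Continuing the worked example from section 4 (added for illustration): $x^2-x \le x^2$ gives $x^2-x=\mathcal{O}(x^2)$, and we already showed $x^2-x=\Omega(x^2)$; hence $x^2-x=\Theta(x^2)$.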

6. Examples

| Function | Big $\mathcal{O}$ | Big $\Omega$ | Big $\Theta$ |
| --- | --- | --- | --- |
| $f(n)=n^5+10n-50$ | $\mathcal{O}(n^5)$ | $\Omega(n^5)$ | $\Theta(n^5)$ |
| $f(n)=n-\frac{1}{n}$ | $\mathcal{O}(n)$ | $\Omega(n)$ | $\Theta(n)$ |
| $f(n)=e^n-n$ | $\mathcal{O}(e^n)$ | $\Omega(e^n)$ | $\Theta(e^n)$ |
| $f(n)=2+\sin n$ | $\mathcal{O}(1)$ | $\Omega(1)$ | $\Theta(1)$ |
| $f(n)=\exp(\sin^2 n^{10})+n^2$ | $\mathcal{O}(n^2)$ | $\Omega(n^2)$ | $\Theta(n^2)$ |
| $f(n)=\sum_{k=1}^n 3^k$ | $\mathcal{O}(3^n)$ | $\Omega(3^n)$ | $\Theta(3^n)$ |
| $f(n)=\exp(n^2(\sin n+2))$ | $\mathcal{O}(\exp(3n^2))$ | $\Omega(\exp(n^2))$ | — |

The last function admits no $\Theta$ bound of this form: since $\sin n$ oscillates, $f$ keeps swinging between roughly $\exp(n^2)$ and $\exp(3n^2)$, so its best upper and lower bounds never meet.
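Such bounds can also be sanity-checked numerically (an illustrative sketch, not part of the analysis itself): for the first row, the ratio $f(n)/n^5$ should approach $1$ as $n$ grows, consistent with $f=\Theta(n^5)$.

```cpp
#include <cmath>
#include <cstdio>

// Numerical sanity check for f(n) = n^5 + 10n - 50 = Theta(n^5):
// the ratio f(n) / n^5 should tend to 1 as n grows.
int main() {
    for (double n : {1e1, 1e2, 1e3, 1e4}) {
        double f = std::pow(n, 5) + 10 * n - 50;
        std::printf("n = %8.0f   f(n)/n^5 = %.10f\n", n, f / std::pow(n, 5));
    }
}
```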