Big-O notation is used to measure the worst-case complexity of an algorithm/function/operation. It is considered the ceiling (asymptotic upper bound) for the growth function.
Example: In a linear search over an array of size $n$, the worst case occurs when the target is at the very last index. Therefore the big-O is $O(n)$.
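A minimal sketch of that worst case in Python (the function name `linear_search` and the sample data are illustrative, not from the original):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent.

    Worst case: the target is at the last index (or missing),
    so all n elements are compared -- O(n).
    """
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

data = list(range(10))
print(linear_search(data, 9))   # target at the end: scans all 10 elements
print(linear_search(data, 42))  # missing target: also scans all 10, returns -1
```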
### Formal Definition
Let $f(n)$ and $g(n)$ be two real-valued, positive functions:
$$
\exists\, n_{0}, c > 0 : \forall n > n_{0},\ f(n) \leq c \cdot g(n) \implies f(n) \in O(g(n))
$$
* For the given domain $(n_{0},\infty)$, if $f(n)$ is **never larger than** $g(n)$ times a constant factor, then $f(n)$ belongs to $O(g(n))$
Technically, $g(n)$ can be any function that bounds $f(n)$ from above: $g(n) = n+1$, $g(n) = \log(n) + n^e$, and $g(n) = n^2 - \log(n)$ are all valid big-O bounds, but the tightest upper bound is the most informative.
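As a worked instance of the definition (the specific function $f(n) = 3n + 5$ is chosen here purely for illustration):

$$
f(n) = 3n + 5 \leq 3n + n = 4n \quad \text{for all } n > 5
$$

so with $c = 4$ and $n_{0} = 5$, $f(n) \in O(n)$. Note that $f(n) \in O(n^2)$ also holds with the same constants, since $4n \leq 4n^2$, but $O(n)$ is the tighter and therefore more useful bound.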
### Comparing Big-O
If $f(n) \in O(g(n))$ but $g(n) \not\in O(f(n))$, then $f(n)$ grows strictly slower than $g(n)$, so an algorithm with running time $f(n)$ is eventually faster than one with running time $g(n)$.
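For instance (a standard pair, not from the original), take $f(n) = n$ and $g(n) = n^2$:

$$
n \leq 1 \cdot n^2 \ \text{for all } n > 1 \implies n \in O(n^2)
$$

but $n^2 \in O(n)$ would require some $c$ with $n^2 \leq c \cdot n$, i.e. $n \leq c$, for all $n > n_{0}$, which fails as soon as $n$ exceeds $c$. Hence $n \in O(n^2)$ while $n^2 \not\in O(n)$, so a linear algorithm is eventually faster than a quadratic one.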