### Introduction

A Hidden Markov Model (*HMM*) is a time-series model in which hidden states lie behind the observed sequence. It satisfies two structural assumptions: the Markov assumption and the independence assumption.

**Markov assumption**:

It means that the current hidden state depends only on the immediately preceding state; given that state, it is independent of all earlier states. There are also higher-order Markov assumptions, which take more of the previous states into account when computing the current state's probability.
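For an HMM this is usually stated on the hidden state sequence:

$$

P(Z_t \mid Z_{t-1}, Z_{t-2}, \dots, Z_1) = P(Z_t \mid Z_{t-1})

$$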

**Independence assumption**:

It means that the current observation depends only on the current state: if the state $Z_t$ is given, then $X_t$ is independent of all other observations and states.
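In symbols:

$$

P(X_t \mid Z_1, \dots, Z_t, X_1, \dots, X_{t-1}) = P(X_t \mid Z_t)

$$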

**Hidden Variables**:

If you have some background in statistics, you will know that a hidden variable is a variable that cannot be observed directly from the given data; it is only implicit in it. For example, suppose an observation $X$ follows a mixture of $\mathcal{N}(0, 1)$ and $\mathcal{N}(0, 2)$. We observe $X$, but we do not know whether it came from $\mathcal{N}(0, 1)$ or $\mathcal{N}(0, 2)$, so we introduce a hidden variable $Z$ to indicate $X$'s source distribution. We can then split the probability of $X$ into two parts:

$$

P(X) = P(X, Z=1) + P(X, Z=2)

$$

In more general cases we can represent $P(X)$ as:

$$

P(X) = \sum\limits_{Z}P(X,Z)

$$
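The two-component example can be checked numerically. Here is a minimal Python sketch, assuming equal mixture weights of 0.5 for the two components (the text does not specify the weights):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# Mixture of N(0, 1) and N(0, 2); equal weights are an assumption.
weights = {1: 0.5, 2: 0.5}
variances = {1: 1.0, 2: 2.0}

def p_x(x):
    """P(X) = sum over the hidden source Z of P(X, Z) = P(Z) * P(X | Z)."""
    return sum(weights[z] * normal_pdf(x, 0.0, variances[z]) for z in (1, 2))
```

Marginalizing out $Z$ this way is exactly the sum $P(X) = P(X, Z=1) + P(X, Z=2)$ above.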

### Algorithms

These give rise to three classic algorithms, one for each kind of question we may ask of the model.

**Forward Algorithm**

*This algorithm is used to calculate the probability of a given observation sequence*:

Let $Z_j^t$ denote that the state at time $t$ is the $j$th state. What we want is $P(X_1, \dots, X_T)$, the probability of the whole observation sequence.
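In the standard formulation (the transition and emission terms are written here as conditional probabilities, since the text does not introduce matrix symbols for them), the forward variable and its recursion are:

$$

\alpha_t(j) = P(X_1, \dots, X_t, Z_j^t), \qquad \alpha_{t+1}(j) = P(X_{t+1} \mid Z_j^{t+1}) \sum\limits_{i} \alpha_t(i) \, P(Z_j^{t+1} \mid Z_i^t)

$$

so the sequence probability is $P(X_1, \dots, X_T) = \sum_j \alpha_T(j)$.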

**Backward Algorithm**

*This algorithm is used to calculate the probability of future observation given current state*:

It is a recursion that yields the probability of future observations conditioned on a fixed current state.
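Using the common definition (the notation is assumed, as the text does not fix one), the backward variable is $\beta_t(i) = P(X_{t+1}, \dots, X_T \mid Z_i^t)$ with base case $\beta_T(i) = 1$, and the recursion is:

$$

\beta_t(i) = \sum\limits_{j} P(Z_j^{t+1} \mid Z_i^t) \, P(X_{t+1} \mid Z_j^{t+1}) \, \beta_{t+1}(j)

$$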

**Viterbi Algorithm**

*This algorithm is used to calculate the most probable hidden state sequence given the observation sequence*:

Writing $\delta_t(j)$ for the probability of the most likely state path that ends in state $j$ at time $t$, the probability of the most likely full path is $\max_j \delta_T(j)$, and that path's final state is:

$$

\arg\max\limits_{j} \delta_T(j)

$$
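As a sketch of how $\delta$ and the backpointers are computed in practice, here is a minimal NumPy implementation (the matrix names `pi`, `A`, `B` and the toy numbers used below are illustrative assumptions, not from the text):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable hidden-state path for an observation sequence.

    pi : (K,) initial state distribution
    A  : (K, K) transitions, A[i, j] = P(Z_{t+1} = j | Z_t = i)
    B  : (K, M) emissions,   B[j, x] = P(X_t = x | Z_t = j)
    obs: sequence of observation indices
    """
    K, T = len(pi), len(obs)
    delta = np.zeros((T, K))            # delta[t, j] = max prob of a path ending in state j
    back = np.zeros((T, K), dtype=int)  # backpointers for path recovery
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A  # (K, K): best path into i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # arg max_j delta_T(j) gives the final state; follow backpointers from there.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return list(reversed(path)), float(delta[-1].max())
```

For example, with two states and three possible observations, `viterbi(np.array([0.6, 0.4]), np.array([[0.7, 0.3], [0.4, 0.6]]), np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]), [0, 1, 2])` returns both the best path and its probability $\max_j \delta_T(j)$.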