An HMM also assumes Markovian state-transition dynamics, but the states themselves are not observable. Instead, the observations (data sequence) are a probabilistic function of the hidden states, which emit the observations according to their individual observation probability distributions. A K-state, first-order discrete HMM is fully characterized by the following set of parameters:

- The set of M possible observations (symbols) generated by the underlying discrete process: $V = \{v_1, v_2, \ldots, v_M\}$

- The set of K hidden states in the model: $S = \{s_k\}, \; 1 \le k \le K$

- The state transition probability matrix, A, containing the probabilities of transitioning from every state in the model to every other state. If the current state of the system at time t is $q_t$: $a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i), \; 0 \le a_{ij} \le 1, \; \sum_{j=1}^{K} a_{ij} = 1$. The matrix $A = \{a_{ij}\}$ for $1 \le i, j \le K$.

- The set of observation probability distributions, B, one for each of the hidden states. The discrete HMM is designed to capture the dynamics of a sequence of symbols drawn from a finite library of M possible observations. The observation probability distribution of each hidden state therefore takes the form of an M-dimensional categorical distribution. For a given state k: $b_k(i) = P(v_i \text{ at time } t \mid q_t = s_k), \; 1 \le i \le M$. The parameters of each distribution $b_k(i)$ for $1 \le k \le K$ consist of the set of event probabilities $p_1, p_2, \ldots, p_M$, where $\sum_{i=1}^{M} p_i = 1$; $p_i$ can be interpreted as the likelihood of observing symbol i while the system is in state k.

- The set of initial state probabilities, $Z = \{z_k\}$, where $z_k = P(q_1 = s_k), \; 1 \le k \le K$.

Figure 3 graphically depicts the structure of a 3-state, fully connected HMM. S1, S2, and S3 represent the three hidden states, and O1, O2, and O3 are random variables representing the observations emitted by each state according to its emission probability distribution.

Figure 3 Diagram of a fully connected (ergodic) 3-state Hidden Markov Model.
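The parameter sets defined above can be sketched concretely. This is a minimal illustration, not any particular implementation; the model size (K = 3 states, M = 2 symbols) and all probability values are assumptions chosen for the example.

```python
import numpy as np

# Illustrative K=3 state discrete HMM over M=2 observation symbols.
K, M = 3, 2

# A: K x K transition matrix, a_ij = P(q_{t+1} = s_j | q_t = s_i).
# Each row is a distribution over next states, so each row sums to 1.
A = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

# B: K x M emission matrix, b_k(i) = P(v_i at time t | q_t = s_k).
# Each row is an M-dimensional categorical distribution.
B = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.2, 0.8]])

# Z: initial state distribution, z_k = P(q_1 = s_k).
Z = np.array([0.6, 0.3, 0.1])

# Sanity checks: every distribution in the parameterization sums to 1.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.isclose(Z.sum(), 1.0)
```

Together, the triple (A, B, Z) plus the symbol library V fully specifies the first-order discrete HMM described above.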
S1, S2, and S3 represent the hidden states; O1, O2, and O3 are observations emitted by the hidden states; and $a_{ij}$ represents the probability of transitioning from state i to state j. A key assumption we made when choosing to model the data sequences with HMMs is that a first-order Markovian representation of the underlying dynamics of the stochastic process is adequate: the probability of transitioning to a future state is conditioned only on the current state, and on none that preceded it. We felt that this was a fair assumption to make given the nature of the data, but concede that there could be higher-order underlying dynamics in the data.

The beta-process hidden Markov model

The BP-HMM can be thought of as a kind of latent feature model, in which the "features" represent a (potentially unbounded) set of system states, and each data sequence has been generated by an HMM populated by some subset of these states, with sequence-specific state transition dynamics.
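The latent-feature view of the BP-HMM can be sketched as follows. This is a conceptual illustration only, under assumed dimensions and a random (not beta-process-distributed) feature matrix; it shows the structure of the idea, not the paper's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 4 data sequences sharing a library of 5 states.
num_sequences, num_states = 4, 5

# F[n, k] = 1 if sequence n uses shared state k. In the BP-HMM this binary
# feature matrix is drawn from a beta process; here it is random, with a
# guarantee that every sequence has at least one active state.
F = rng.integers(0, 2, size=(num_sequences, num_states))
F[:, 0] = 1

def sequence_transition_matrix(f_row, rng):
    """Build a random row-stochastic transition matrix restricted to the
    states this sequence actually uses (the nonzero entries of f_row),
    giving the sequence its own transition dynamics over that subset."""
    active = np.flatnonzero(f_row)
    P = rng.random((active.size, active.size))
    return active, P / P.sum(axis=1, keepdims=True)

# Each sequence gets its own dynamics over its own subset of states.
active, P = sequence_transition_matrix(F[0], rng)
assert np.allclose(P.sum(axis=1), 1.0)
```

The key structural point is the sharing: emission distributions belong to the global state library, while each sequence selects a subset of states (a row of F) and carries its own transition matrix over just that subset.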