Questions tagged [markov-chain]

A "memoryless" discrete stochastic process.

A Markov chain is a discrete stochastic process for which the conditional probability of moving from state $x_t$ to state $x_{t+1}$ does not depend on the sequence $x_0, x_1, \ldots, x_{t-1}$ of states preceding $x_t$. It is characterized by a transition matrix: conventionally, each row, corresponding to a possible state, contains the conditional probabilities of transitioning to each state (including itself). Properties of the chain can thereby be computed from the transition matrix.
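For illustration, properties such as the stationary distribution can be read off the transition matrix; a minimal sketch in Python/NumPy, using a made-up two-state chain:

    import numpy as np

    # Hypothetical two-state chain; each row of P sums to 1.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # The stationary distribution is the left eigenvector of P
    # associated with eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    print(pi)  # approximately [0.833, 0.167]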

98 questions
10 votes, 2 answers

Is Markov-chain-based sampling the "best" for Monte Carlo sampling? Are there alternative schemes available?

Markov Chain Monte Carlo is a method based on Markov chains that allows us to obtain samples (in a Monte Carlo setting) from non-standard distributions from which we cannot draw samples directly. My question is why Markov chain is…
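For context, one classical alternative that does not use a Markov chain is plain rejection sampling, which produces independent draws. A minimal sketch, with a hypothetical unnormalized target density standing in for the question's distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    def p_tilde(x):
        # Hypothetical unnormalized target on [0, 1]; bounded above by 1.
        return np.exp(-10.0 * (x - 0.3) ** 2)

    def rejection_sample(n):
        # Uniform(0, 1) proposal with envelope constant M = 1.
        out = []
        while len(out) < n:
            x = rng.uniform()
            if rng.uniform() < p_tilde(x):
                out.append(x)
        return np.array(out)

    samples = rejection_sample(1000)  # i.i.d. draws, unlike MCMC output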
9 votes, 1 answer

Hidden Markov model for event prediction

Question: Is the set-up below a sensible implementation of a Hidden Markov model? I have a data set of 108,000 observations (taken over the course of 100 days) and approximately 2000 events throughout the whole observation time-span. The data looks…
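A bare-bones version of such a setup (a sketch using the third-party hmmlearn package; the synthetic series below is a stand-in for the actual observations, not the question's data):

    import numpy as np
    from hmmlearn import hmm  # third-party: pip install hmmlearn

    rng = np.random.default_rng(1)
    # Synthetic stand-in for the real observation series.
    X = np.concatenate([rng.normal(0.0, 1.0, 500),
                        rng.normal(3.0, 1.0, 500)]).reshape(-1, 1)

    # Two hidden regimes, e.g. "background" vs. "event-prone".
    model = hmm.GaussianHMM(n_components=2, n_iter=100)
    model.fit(X)
    states = model.predict(X)  # most likely hidden state per observation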
9 votes, 1 answer

Given two absorbing Markov chains, what is the probability that one will terminate before the other?

I have two different Markov chains, each with one absorbing state and a known starting position. I want to determine the probability that chain 1 will reach an absorbing state in fewer steps than chain 2. I think I can calculate the probability of…
asked by Jeff
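One way to compute this race probability, assuming the two chains run independently (a sketch with hypothetical 3-state chains, each with the last state absorbing):

    import numpy as np

    def absorption_cdf(P, start, absorbing, n_max):
        """Return P(absorbed by step k) for k = 0..n_max."""
        d = np.zeros(P.shape[0])
        d[start] = 1.0
        cdf = []
        for _ in range(n_max + 1):
            cdf.append(d[absorbing])
            d = d @ P
        return np.array(cdf)

    # Hypothetical chains; state 2 is absorbing in each.
    P1 = np.array([[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.0, 0.0, 1.0]])
    P2 = np.array([[0.6, 0.3, 0.1], [0.3, 0.4, 0.3], [0.0, 0.0, 1.0]])

    n = 500  # truncation horizon; remaining tail mass is negligible here
    F1 = absorption_cdf(P1, 0, 2, n)
    F2 = absorption_cdf(P2, 0, 2, n)
    f1 = np.diff(F1)  # P(T1 = k) for k = 1..n
    # P(T1 < T2) = sum over k of P(T1 = k) * P(T2 > k)
    print(np.sum(f1 * (1.0 - F2[1:])))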
9 votes, 1 answer

Probability of a consecutive pair of values

Let $X=(x_1, x_2, \ldots, x_{20})$, where $x_i\sim N(0,1)$ and $x_i, x_j$ are independent $\forall i\neq j$. What is the probability of obtaining a sample $X$ in which there are at least two consecutive values $x_i$ and $x_{i+1}$ such that $ \begin{cases} …
asked by will198
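The condition in the question is truncated, but once it is specified the probability is straightforward to estimate by Monte Carlo; a sketch with a purely illustrative stand-in condition (both consecutive values exceed 1):

    import numpy as np

    rng = np.random.default_rng(2)

    def pair_condition(a, b):
        # Stand-in for the truncated condition in the question.
        return (a > 1.0) & (b > 1.0)

    trials = 100_000
    X = rng.standard_normal((trials, 20))
    hit = pair_condition(X[:, :-1], X[:, 1:]).any(axis=1)
    print(hit.mean())  # Monte Carlo estimate of the probability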
8 votes, 3 answers

Random walk: kings on a chessboard

I have a question about the random walk of two kings on a 3×3 chessboard. Each king moves randomly with equal probability on this chessboard - vertically, horizontally, and diagonally. The two kings move independently of each other in the…
asked by cube
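The single-king walk is easy to write down explicitly, and while the kings move independently the joint walk is the Kronecker product of two copies (this sketch ignores any constraint that the kings avoid each other):

    import numpy as np

    # Transition matrix of one king moving uniformly at random
    # (horizontally, vertically, or diagonally) on a 3x3 board.
    P = np.zeros((9, 9))
    for s in range(9):
        r, c = divmod(s, 3)
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < 3 and 0 <= c + dc < 3]
        for rr, cc in nbrs:
            P[s, 3 * rr + cc] = 1.0 / len(nbrs)

    assert np.allclose(P.sum(axis=1), 1.0)
    # Joint chain of two independently moving kings: 81 product states.
    P2 = np.kron(P, P)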
8 votes, 1 answer

Gibbs Sampler output: how many Markov chains?

When running a Gibbs sampler (for $n=200$ iterations) with two full conditionals, I get the output $\mathbf{x} = (x_1^{(n)}, x_2^{(n)})_{n=1,\ldots,200}$. So $\mathbf{x}$ is the realization of a Gibbs Markov chain, the so-called Gibbs sequence. But…
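For reference, a two-block Gibbs sampler of exactly this shape (a sketch targeting a standard bivariate normal with correlation $\rho$, chosen only because both full conditionals are available in closed form):

    import numpy as np

    rng = np.random.default_rng(3)
    rho, n = 0.8, 200
    sd = np.sqrt(1.0 - rho ** 2)

    x1, x2 = 0.0, 0.0
    chain = np.empty((n, 2))
    for t in range(n):
        # Full conditionals of a standard bivariate normal:
        # x1 | x2 ~ N(rho * x2, 1 - rho^2), and symmetrically for x2.
        x1 = rng.normal(rho * x2, sd)
        x2 = rng.normal(rho * x1, sd)
        chain[t] = (x1, x2)  # the Gibbs sequence: one Markov chain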
7 votes, 2 answers

Converting a 2nd-order Markov chain to the 1st-order equivalent

Given a 2nd-order Markov chain where each state takes values in the set $\mathcal{X}=\{A,C,G,T\}$, such that all transition probabilities $p(x_t|x_{t-1},x_{t-2})$ are strictly positive, how can we convert it to the equivalent 1st-order one with all the…
asked by xiaohan2012
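The standard construction takes ordered pairs of symbols as composite states, so that $(x_{t-1}, x_t)$ can only move to states of the form $(x_t, x_{t+1})$; a sketch, with a uniform conditional table standing in for the actual probabilities:

    import itertools
    import numpy as np

    symbols = list("ACGT")
    pairs = list(itertools.product(symbols, repeat=2))  # 16 composite states
    idx = {p: i for i, p in enumerate(pairs)}

    # p2[(a, b, c)] = p(x_t = c | x_{t-1} = b, x_{t-2} = a); uniform stand-in.
    p2 = {(a, b, c): 0.25 for a in symbols for b in symbols for c in symbols}

    P = np.zeros((16, 16))
    for a, b in pairs:
        for c in symbols:
            # (a, b) -> (b, c) with probability p(c | b, a);
            # every other transition is structurally impossible (stays 0).
            P[idx[(a, b)], idx[(b, c)]] = p2[(a, b, c)]

    assert np.allclose(P.sum(axis=1), 1.0)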
5 votes, 1 answer

Analyzing output in MCMC

I am using emcee to do inference on some data. I am trying to fit my data to a line of equation $y = mx + b$.

    # Initialize MCMC
    ndim = 2        # number of parameters in the model
    nwalkers = 100  # number of MCMC walkers
    nsteps = 500    # number of MCMC…
asked by aloha
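A runnable completion of the kind of setup quoted above (a sketch: the synthetic data and the flat-prior, unit-variance Gaussian likelihood are assumptions, not taken from the question):

    import numpy as np
    import emcee  # third-party: pip install emcee

    rng = np.random.default_rng(4)
    # Hypothetical data for the line fit y = m*x + b.
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

    def log_prob(theta):
        m, b = theta
        # Log-posterior with flat priors and unit-variance Gaussian noise.
        return -0.5 * np.sum((y - (m * x + b)) ** 2)

    ndim, nwalkers, nsteps = 2, 100, 500
    p0 = rng.normal(0.0, 1.0, (nwalkers, ndim))  # initial walker positions
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, nsteps)
    flat = sampler.get_chain(discard=100, flat=True)  # drop burn-in, flatten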
5 votes, 1 answer

How to estimate the infinitesimal generator of a Markov chain?

I am working on an 8-state (8th state absorbing) multi-state Markov model, and what I am having difficulty understanding is: how sensitive are the results to the initial qmatrix, and what is the best way to go about creating the initial matrix? For…
asked by Mark C
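When the chain is observed continuously (the panel-data setting handled by tools like the msm package is harder), the maximum-likelihood estimate of the generator has a simple closed form: each off-diagonal rate is the number of $i \to j$ jumps divided by the total time spent in state $i$. A sketch:

    import numpy as np

    def estimate_generator(states, times, n_states):
        """MLE of the generator Q from one fully observed path:
        q_ij = N_ij / R_i, with N_ij the i -> j jump count and
        R_i the total holding time in state i."""
        N = np.zeros((n_states, n_states))
        R = np.zeros(n_states)
        for k in range(len(states) - 1):
            R[states[k]] += times[k + 1] - times[k]
            N[states[k], states[k + 1]] += 1.0
        Q = N / np.where(R > 0, R, 1.0)[:, None]
        np.fill_diagonal(Q, 0.0)
        np.fill_diagonal(Q, -Q.sum(axis=1))  # generator rows sum to zero
        return Q

    # Toy path: state sequence and the times at which each state was entered.
    print(estimate_generator([0, 1, 0, 2], [0.0, 1.5, 2.0, 4.0], 3))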
5 votes, 1 answer

Why is the acceptance probability $A(x \rightarrow x')=\min\left[1,\frac{\pi(x')Q(x'\rightarrow x)}{\pi(x)Q(x\rightarrow x')}\right]$ for Metropolis-Hastings?

In the Metropolis-Hastings algorithm we choose the acceptance probability $A(x \rightarrow x')$ as follows: $$A(x \rightarrow x') = \min \left[1, \frac{\pi(x')Q(x' \rightarrow x)}{\pi(x)Q(x \rightarrow x')} \right]$$ To me this formula is not 100%…
asked by Charlie Parker
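A minimal worked instance (a sketch with a symmetric Gaussian proposal, so the $Q$ terms cancel and the rule reduces to the plain Metropolis ratio; the standard-normal target is illustrative):

    import numpy as np

    rng = np.random.default_rng(5)

    def pi_tilde(x):
        # Unnormalized target; normalizing constants cancel in the ratio.
        return np.exp(-0.5 * x ** 2)

    x, chain = 0.0, []
    for _ in range(10_000):
        x_new = rng.normal(x, 1.0)  # symmetric proposal: Q(x->x') = Q(x'->x)
        A = min(1.0, pi_tilde(x_new) / pi_tilde(x))
        if rng.uniform() < A:
            x = x_new  # accept the move
        chain.append(x)  # on rejection, the current point is recorded again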
5 votes, 1 answer

What to do when rejecting a proposed point in MCMC?

I'm writing a simple Metropolis-Hastings MCMC algorithm. Every time a move gets accepted, the point is added to a list of accepted points. I wonder what exactly I should do when a proposed move has been rejected. Should the last accepted point be…
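For what it's worth, the standard answer is that the current point must be recorded again on rejection; keeping only accepted moves targets the wrong distribution. A small sketch that makes the difference visible on a standard-normal target:

    import numpy as np

    rng = np.random.default_rng(6)
    x, kept, accepted_only = 0.0, [], []
    for _ in range(50_000):
        prop = rng.normal(x, 2.0)
        if rng.uniform() < min(1.0, np.exp(0.5 * (x ** 2 - prop ** 2))):
            x = prop
            accepted_only.append(x)  # records accepted moves only (wrong)
        kept.append(x)  # repeats the current point on rejection (right)
    # kept has variance close to 1; accepted_only overweights the tails.
    print(np.var(kept), np.var(accepted_only))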
5 votes, 2 answers

Difference between a graphical model and a Markov chain

Representing causality using fuzzy cognitive maps yields a cognitive model: a graphical model consisting of a weighted directed graph. To me it looks like a state-transition machine. Can somebody explain the difference between a causal map and…
asked by SKM
4 votes, 2 answers

Optimizing through a transition matrix in a non-stepwise manner

I made a sort of transition matrix of the increase in the probability of success, given the allocation of some finite resource (e.g. 4 in this case). At the moment, I consider the allocation in a stepwise manner. That is, for every +1 increase, I…
asked by PascalVKooten
4 votes, 0 answers

Application of likelihood ratio test to test the Markov property

Do you know a reference (freely available on the web) where the likelihood ratio test is applied in order to test for the Markov property? The setting is a directly observed discrete Markov chain with given transition matrices. The concrete…
asked by Richi W
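The classical reference for these tests is Anderson and Goodman (1957), "Statistical Inference about Markov Chains" (Annals of Mathematical Statistics). A sketch of the likelihood-ratio test of a first-order chain against a second-order alternative, built from triple counts:

    import numpy as np
    from scipy import stats

    def lr_test_first_vs_second_order(seq, m):
        """LR test of H0: first-order chain vs. H1: second-order chain."""
        n3 = np.zeros((m, m, m))
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            n3[a, b, c] += 1.0
        p1 = n3.sum(axis=0)                       # pair counts n(b, c)
        p1 = p1 / p1.sum(axis=1, keepdims=True)   # p(c | b) under H0
        p2 = n3 / n3.sum(axis=2, keepdims=True)   # p(c | a, b) under H1
        with np.errstate(divide="ignore", invalid="ignore"):
            g = 2.0 * np.nansum(n3 * np.log(p2 / p1))  # p1 broadcasts over a
        df = m * (m - 1) ** 2  # difference in free parameters
        return g, stats.chi2.sf(g, df)

    seq = np.random.default_rng(7).integers(0, 3, 5000)  # i.i.d., so H0 holds
    print(lr_test_first_vs_second_order(seq, 3))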
4 votes, 2 answers

EM algorithm to estimate discrete Markov chain transition probabilities

I'm surveying algorithms for estimating the transition probabilities of a discrete Markov chain, and I found that an EM approach could be used. However, I am not able to find any simple description of how to implement the EM algorithm to fit a discrete Markov…
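Worth noting: if the chain itself is fully observed, no EM is needed, because the maximum-likelihood estimate is closed-form (count transitions and normalize rows); EM, e.g. Baum-Welch, only enters when the states are observed indirectly, as in an HMM. A sketch of the fully observed case:

    import numpy as np

    def mle_transition_matrix(seq, m):
        """Closed-form MLE for a fully observed chain: count i -> j
        transitions and normalize each row."""
        N = np.zeros((m, m))
        for i, j in zip(seq, seq[1:]):
            N[i, j] += 1.0
        rows = N.sum(axis=1, keepdims=True)
        return N / np.where(rows > 0, rows, 1.0)

    seq = [0, 1, 1, 2, 0, 1, 2, 2, 0]
    print(mle_transition_matrix(seq, 3))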