Questions tagged [markov-process]

A stochastic process with the property that the future is conditionally independent of the past, given the present.

Overview

A Markov process is a stochastic process $Y_{t}$ in which the future is conditionally independent of the past, given the present: the distribution of the next step depends only on where the process is, not on where it has been, $$ P(Y_{t+1}=y_{t+1} \mid Y_t = y_{t}, Y_{t-1} = y_{t-1}, \ldots, Y_{1} = y_{1}) = P(Y_{t+1}=y_{t+1} \mid Y_t = y_{t}). $$ This is known as the Markov property.
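As a concrete illustration of the property, a finite-state chain can be simulated using nothing but the current state; a minimal Python sketch (the two "weather" states and their transition probabilities are made up purely for illustration):

```python
import random

# Two-state chain with made-up "weather" states; the transition
# probabilities are purely illustrative.
P = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def step(state, rng):
    """Sample the next state: it depends only on `state`, never on history."""
    u = rng.random()
    cumulative = 0.0
    for nxt, p in P[state]:
        cumulative += p
        if u < cumulative:
            return nxt
    return nxt  # guard against floating-point underrun

def simulate(start, n_steps, seed=0):
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

path = simulate("sunny", 10)
```

Note that `step` receives only the current state, which is exactly the conditional-independence statement above.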

References

The following threads on math.se provide references to resources on Markov processes:

1130 questions
58 votes, 12 answers

Resources for learning Markov chain and hidden Markov models

I am looking for resources (tutorials, textbooks, webcasts, etc.) to learn about Markov chains and HMMs. My background is in biology, and I'm currently involved in a bioinformatics-related project. Also, what are the necessary mathematical…
bow
54 votes, 3 answers

Do we have a problem of "pity upvotes"?

I know this may sound like it is off-topic, but hear me out. At Stack Overflow and here we get votes on posts, and this is all stored in tabular form, e.g. with columns post id, voter id, vote type, datetime…
44 votes, 3 answers

What are the differences between hidden Markov models and neural networks?

I'm just getting my feet wet in statistics so I'm sorry if this question does not make sense. I have used Markov models to predict hidden states (unfair casinos, dice rolls, etc.) and neural networks to study users clicks on a search engine. Both…
Lostsoul
39 votes, 2 answers

Can somebody explain NUTS to me in plain English?

My understanding of the algorithm is the following: the No-U-Turn Sampler (NUTS) is a Hamiltonian Monte Carlo method. This means that it is not a Markov chain method and thus this algorithm avoids the random-walk part, which is often deemed as…
user3007270
38 votes, 2 answers

A fair die is rolled 1,000 times. What is the probability of rolling the same number 5 times in a row?

A fair die is rolled 1,000 times. What is the probability of rolling the same number 5 times in a row? How do you solve this type of question for variable number of throws and number of repeats?
Teodor Dyakov
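The die question above is a textbook run-length chain: track the length of the current run of identical faces, with the target length as an absorbing state, and propagate the state distribution one roll at a time. A minimal Python sketch (the function name is mine):

```python
def prob_same_face_run(n_rolls=1000, run_len=5, sides=6):
    """P(some face appears run_len times in a row within n_rolls fair rolls)."""
    # State k (1 <= k < run_len): the current run of identical faces has
    # length k.  State run_len is absorbing: the target run has occurred.
    probs = [0.0] * (run_len + 1)
    probs[1] = 1.0  # after the first roll the run has length 1
    for _ in range(n_rolls - 1):
        new = [0.0] * (run_len + 1)
        new[run_len] = probs[run_len]  # stay absorbed once the run happened
        for k in range(1, run_len):
            new[k + 1] += probs[k] / sides            # same face again
            new[1] += probs[k] * (sides - 1) / sides  # a new run starts
        probs = new
    return probs[run_len]
```

Sanity check: with exactly 5 rolls the answer must be $(1/6)^4$, since only the last four rolls need to match the first.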
36 votes, 5 answers

Difference between Bayesian networks and Markov process?

What is the difference between a Bayesian network and a Markov process? I believed I understood the principles of both, but now that I need to compare the two I feel lost. They mean almost the same to me. Surely they are not. Links to other…
35 votes, 2 answers

Calculate Transition Matrix (Markov) in R

Is there a way in R (a built-in function) to calculate the transition matrix for a Markov chain from a set of observations? For example, taking a data set like the following and calculating the first-order transition…
B_Miner
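The estimation the question asks about is just counting consecutive pairs and normalising each row; a Python sketch, assuming a single observed sequence (the function name is mine):

```python
from collections import Counter

def transition_matrix(observations):
    """Estimate first-order transition probabilities from one observed
    sequence by counting consecutive pairs and normalising each row."""
    counts = Counter(zip(observations, observations[1:]))
    states = sorted(set(observations))
    matrix = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        matrix[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0)
                     for t in states}
    return matrix
```

In R itself, if I recall correctly, the `markovchain` package provides `markovchainFit` for the same task.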
27 votes, 3 answers

Intuitive explanation for periodicity in Markov chains

Can someone explain to me in an intuitive way what the periodicity of a Markov chain is? It is defined as follows: for all states $i$ in $S$, $$d_i = \gcd\{n \in \mathbb{N} \mid p_{ii}^{(n)} > 0\} = 1.$$ Thank you for your effort!
Chris
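For small examples, one brute-force way to see the period is to take the gcd of the return times, following the definition above; a Python sketch (checking powers only up to a cutoff is my simplification):

```python
from math import gcd

def period(P, i, max_n=50):
    """gcd of the n <= max_n with (P^n)[i][i] > 0 (the definition, truncated)."""
    size = len(P)
    power = [row[:] for row in P]  # holds P^n, starting at P^1
    d = 0
    for n in range(1, max_n + 1):
        if power[i][i] > 0:
            d = gcd(d, n)
        # multiply the running power by P
        power = [[sum(power[r][k] * P[k][c] for k in range(size))
                  for c in range(size)] for r in range(size)]
    return d
```

For the two-state "flip" chain that deterministically alternates, every return to a state takes an even number of steps, so the period is 2; adding any self-loop probability makes it aperiodic.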
26 votes, 3 answers

What is the difference between "limiting" and "stationary" distributions?

I'm doing a question on Markov chains and the last two parts say this: Does this Markov chain possess a limiting distribution? If your answer is "yes", find the limiting distribution. If your answer is "no", explain why. Does this Markov chain…
Kaish
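A stationary distribution $\pi$ satisfies $\pi = \pi P$; one way to approximate it for a well-behaved chain is power iteration from any starting distribution. A Python sketch (the function name is mine):

```python
def stationary(P, iterations=1000):
    """Approximate pi with pi = pi P by repeatedly applying P to a
    uniform starting distribution (power iteration)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

The distinction the question asks about: a periodic chain, such as the two-state flip chain, has a stationary distribution even though no limiting distribution exists, because the state distribution keeps oscillating instead of converging.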
25 votes, 2 answers

Markov Process that depends on present state and past state

I would just like someone to confirm my understanding, or tell me if I'm missing something. The definition of a Markov process says the next step depends on the current state only, and on no past states. So, let's say we had a state space of a,b,c,d and we go…
mentics
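A process whose next step depends on both the current and the previous state can be made Markov by enlarging the state space to pairs, the standard state-augmentation trick. A Python sketch (function and variable names are mine):

```python
from itertools import product

def to_first_order(states, P2):
    """Turn a second-order chain, given as P2[(prev, cur)][nxt], into a
    first-order chain on pairs: state (prev, cur) moves to (cur, nxt)."""
    pairs = list(product(states, repeat=2))
    P = {p: {q: 0.0 for q in pairs} for p in pairs}
    for (prev, cur), row in P2.items():
        for nxt, prob in row.items():
            P[(prev, cur)][(cur, nxt)] = prob
    return P
```

The resulting chain on pairs satisfies the Markov property exactly, at the cost of squaring the number of states.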
25 votes, 1 answer

Random matrices with constraints on row and column length

I need to generate random non-square matrices with $R$ rows and $C$ columns, elements randomly distributed with zero mean, and constrained such that the length ($L_2$ norm) of each row is $1$ and the length of each column is $\sqrt{\frac{R}{C}}$.…
23 votes, 1 answer

Real-life examples of Markov Decision Processes

I've been watching a lot of tutorial videos and they all look the same. This one, for example: https://www.youtube.com/watch?v=ip4iSMRW5X4 They explain states, actions and probabilities, which are fine. The person explains it OK, but I just can't seem…
Karl Morrison
22 votes, 4 answers

Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique?

Based on the little knowledge that I have of MCMC (Markov chain Monte Carlo) methods, I understand that sampling is a crucial part of the technique. The most commonly used sampling methods are Hamiltonian and Metropolis. Is there a…
21 votes, 6 answers

Examples of hidden Markov models problems?

I have read quite a bit about hidden Markov models and was able to code a pretty basic version of one myself. But there are two main ways I seem to learn. One is to read and implement it in code (which is done), and the second is to understand how it…
Lostsoul
20 votes, 5 answers

How do you see a Markov chain is irreducible?

I have some trouble understanding the Markov chain property of irreducibility. Irreducible is said to mean that the stochastic process can "go from any state to any state". But what determines whether it can go from state $i$ to state $j$, or cannot? The…
mavavilj
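Irreducibility is a reachability statement about the transition graph: $j$ is reachable from $i$ if some path of positive-probability transitions connects them, and the chain is irreducible when every state reaches every other. A Python sketch using depth-first search (names are mine):

```python
def is_irreducible(P):
    """True iff every state can reach every other state through
    positive-probability transitions (DFS on the transition graph)."""
    n = len(P)

    def reachable(start):
        seen = {start}
        stack = [start]
        while stack:
            i = stack.pop()
            for j in range(n):
                if P[i][j] > 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    return all(len(reachable(i)) == n for i in range(n))
```

Only the pattern of zero versus positive entries in $P$ matters here, not the actual probability values.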