Markov chains explained with interactive simulations
July 30, 2014 3:48 AM
posted by oceanjesse at 4:27 AM on July 30, 2014
I can use this when I teach Markov Chains next week, thanks.
posted by Philosopher Dirtbike at 4:52 AM on July 30, 2014 [2 favorites]
Everything is Markov Chains!
posted by xtian at 4:58 AM on July 30, 2014 [1 favorite]
I was kinda hoping that the "interactivity" would be a Choose Your Own Adventure-style write-up, but on reflection I realized that would hardly model a memoryless process (or so one hopes, anyway).
Very cool & instructive -- thanks!
posted by Westringia F. at 5:21 AM on July 30, 2014 [1 favorite]
Oh god, I did my M.Sc. on a Markov-chain-based modelling method. No more... please, no more.
posted by PenDevil at 5:35 AM on July 30, 2014 [3 favorites]
Markov chains can minic this "stickyness". In the mathematically the probability of staying will turn red if the chance of Markov, are rainy days in a row and row. To build this "stickyness". In the probability of hopping," and a 0.9 probability of transition back into its row's states to jump around with other behavior, you want to determine the number of rainy. Therefore, every state space, a Markov chains can transition back into itself).
One way to simulate the transition matrix" to tally the original? The rows do the next day in our Markov chain is include "playing," from as a row and each cells grows quadratical systems that a baby's behaviors could transition back into itself). If we're at 'A' or stay at 'A'. If we're at 'A'. If we're at 'A' we could be to include "playground with two states (A and a 0.9 probability of the "S" state space, the algorithm Google uses to have a "stickyness" with a transitioning," from as an example.
posted by CaseyB at 5:41 AM on July 30, 2014 [22 favorites]
CaseyB's post is going to get Metafilter blocked by spam filters.
posted by AzraelBrown at 6:55 AM on July 30, 2014 [1 favorite]
So what's the state of the art in generating Markov chains from my Metafilter comment history, my Twitter post history, or other arbitrary text?
posted by These Premises Are Alarmed at 7:04 AM on July 30, 2014
So what's the state of the art in generating Markov chains from my Metafilter comment history
Sadly, Markovfilter has been down for a while.
posted by gwint at 7:13 AM on July 30, 2014
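Until Markovfilter resurfaces, a DIY version fits in a few lines. Here is a minimal word-level sketch in Python; the order-1 model, whitespace tokenization, and the comment_history.txt file name are all assumptions for illustration, not how Markovfilter actually worked:

    import random
    from collections import defaultdict

    def train(text):
        """Map each word to the list of words observed right after it."""
        words = text.split()
        model = defaultdict(list)
        for a, b in zip(words, words[1:]):
            model[a].append(b)
        return model

    def generate(model, seed, length=30):
        """Random-walk the chain; sampling uniformly from the follower
        list reproduces the observed transition frequencies."""
        out = [seed]
        for _ in range(length - 1):
            followers = model.get(out[-1])
            if not followers:  # dead end: word only ever seen at the end
                break
            out.append(random.choice(followers))
        return " ".join(out)

    # Hypothetical corpus: dump your comment history into a text file.
    corpus = open("comment_history.txt").read()
    model = train(corpus)
    print(generate(model, seed=random.choice(list(model))))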
Thanks for this!
posted by shothotbot at 8:33 AM on July 30, 2014
Sector is an iPad app with a uniquely visual representation of Markov chains, applied to slicing and sequencing sampled audio.
posted by dbiedny at 8:58 AM on July 30, 2014 [1 favorite]
So Markov chains can be seen as a graph? The nodes are the states, the transitions are edges, and the values of the edges could be the probabilities?
The sum of all outgoing edges must = 100%?
posted by symbioid at 9:03 AM on July 30, 2014
Yes, exactly.
posted by a snickering nuthatch at 9:07 AM on July 30, 2014
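To make the graph picture concrete: a minimal Python sketch, using made-up numbers in the spirit of the linked article's sunny/rainy weather example. States are nodes, each state's dict holds its outgoing edges with probabilities, and a sanity check confirms every row of outgoing edges sums to 100%:

    import random

    # Each state maps to its outgoing edges: {next state: probability}.
    chain = {
        "sunny": {"sunny": 0.9, "rainy": 0.1},
        "rainy": {"sunny": 0.5, "rainy": 0.5},
    }

    # The sum of all outgoing edges from a state must be 1.
    for state, edges in chain.items():
        assert abs(sum(edges.values()) - 1.0) < 1e-9, state

    def walk(chain, state, steps):
        """Simulate the chain: repeatedly pick an outgoing edge at
        random, weighted by its probability."""
        path = [state]
        for _ in range(steps):
            edges = chain[state]
            state = random.choices(list(edges), weights=list(edges.values()))[0]
            path.append(state)
        return path

    print(walk(chain, "sunny", 10))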
symbioid: So Markov chains can be seen as a graph? The nodes are the states, the transitions are edges, and the values of the edges could be the probabilities?
The sum of all outgoing edges must = 100%?
Jpfed: Yes, exactly.
Well, not exactly, depending on the definitions you choose. :)
There are two different ways you can think of messing with/expanding the idea of Markov chains as they are presented in the link, each passing from the discrete case to the continuous case in a different way. In general these all fall under the heading of Markov processes, although the term Markov chain can be applied to any of them in the literature.
Firstly, you can pass from discrete to continuous time. The principle here is that instead of considering "steps" of time where you move from one place to the other, you instead think of time as proceeding continuously. You can still jump between discrete states, but the amount of time you spend in a state is another random variable on top of the transition probabilities associated with a discrete-time Markov chain. In this case, thinking about the Markov process as a graph still makes sense, but the probability of jumping between states/staying on your current state is a little more involved than just labelling edges.
Secondly, you can pass from a discrete to a continuous state space. These are harder to picture, but suffice it to say that the graph analogy breaks down pretty hard here.
Of course, you can also do both. This is when you get complex things like Brownian motion, which is super important in stochastic analysis, physics, finance, etc.
posted by TypographicalError at 10:35 AM on July 30, 2014 [2 favorites]
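A rough sketch of that "both" case, assuming the standard construction of Brownian motion as a limit of ever-finer random walks: each increment is an independent Gaussian with variance dt, so B(t) is approximately N(0, t):

    import random

    def brownian_path(t_max=1.0, n_steps=1000):
        """Approximate standard Brownian motion on [0, t_max]:
        sum independent N(0, dt) increments."""
        dt = t_max / n_steps
        b, path = 0.0, [0.0]
        for _ in range(n_steps):
            b += random.gauss(0.0, dt ** 0.5)  # standard deviation sqrt(dt)
            path.append(b)
        return path

    print(brownian_path()[-1])  # one sample of B(1), roughly N(0, 1)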
Well, not exactly, depending on the definitions you choose. :)
I thought "Markov chain" (as opposed to "Markov process") specifically referred to discrete-time, discrete-state-space. I stand corrected.
You can still jump between discrete states, but the amount of time you spend in a state is another random variable on top of the transition probabilities associated with a discrete-time Markov chain. In this case, thinking about the Markov process as a graph still makes sense, but the probability of jumping between states/staying on your current state is a little more involved than just labelling edges.
Actually, edge labels are perfectly adequate for this case. It just so happens that you interpret the edge label as the rate parameter of a Poisson process.
posted by a snickering nuthatch at 10:44 AM on July 30, 2014 [1 favorite]
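Concretely, with the edge labels read as rates (the rates below are invented for illustration): the holding time in a state is exponential with rate equal to the total outgoing rate, and the next state is chosen in proportion to the individual edge rates. A minimal sketch:

    import random

    # Edge labels are jump rates, not probabilities; rows need not sum to 1.
    rates = {
        "A": {"B": 2.0, "C": 0.5},
        "B": {"A": 1.0},
        "C": {"A": 0.2, "B": 0.3},
    }

    def simulate(rates, state, t_max):
        """Hold for an Exponential(total outgoing rate) time, then
        jump to a neighbour with probability rate/total."""
        t, history = 0.0, [(0.0, state)]
        while True:
            edges = rates[state]
            total = sum(edges.values())
            t += random.expovariate(total)  # exponential holding time
            if t >= t_max:
                return history
            state = random.choices(list(edges), weights=list(edges.values()))[0]
            history.append((t, state))

    print(simulate(rates, "A", t_max=10.0))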
I thought "Markov chain" (as opposed to "Markov process") specifically referred to discrete-time, discrete-state-space. I stand corrected.
You can still jump between discrete states, but the amount of time you spend in a state is another random variable on top of the transition probabilities associated with a discrete-time Markov chain. In this case, thinking about the Markov process as a graph still makes sense, but the probability of jumping between states/staying on your current state is a little more involved than just labelling edges.
Actually, edge labels are perfectly adequate for this case. It just so happens that you interpret the edge label as the rate parameter of a Poisson process.
posted by a snickering nuthatch at 10:44 AM on July 30, 2014 [1 favorite]
One thing that disappointed me about this is that it doesn't try to visualize an evolving distribution on states, which is how I tend to think about Markov chains. Maybe that is too sophisticated though.
posted by eruonna at 4:06 PM on July 30, 2014
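That evolving-distribution view is easy to sketch: start with a probability row vector over the states and repeatedly multiply by the transition matrix, p <- pP. With the same made-up sunny/rainy numbers as the earlier sketch, the distribution settles toward the stationary one regardless of where it starts:

    # Row-stochastic transition matrix for the sunny/rainy example.
    P = [[0.9, 0.1],   # from sunny
         [0.5, 0.5]]   # from rainy

    p = [1.0, 0.0]     # start certain it's sunny

    for step in range(10):
        # p <- p P: the new mass on state j sums over every way in.
        p = [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
        print(step + 1, [round(x, 4) for x in p])
    # Tends toward the stationary distribution, roughly [0.8333, 0.1667].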