Today I came up with an interesting idea about the analysis of time series. Time series are frequently used in pattern recognition, mathematical finance, and forecasting (weather prediction, sales prediction, number of site visitors, etc.).
Apart from various window-based approaches, there are two basic models that by their nature represent the sequential structure in data:
  • hidden Markov chains (HMC)
  • recurrent neural networks (RNN)
Both of the above are actually Markov models and have some internal state (in the first case it is the vector of probabilities of the model's hidden states; in the case of an RNN the state consists of the stored outputs of all units with delay). The notable difference between these models is that the first is generative, while the second is discriminative.

Some pros and cons of both models, as I see them:
  • HMC training is inexpensive, but it requires the predicted values to be binned (continuous observations are possible too, mostly via Gaussian emissions, but that approach is not robust); see the sketch after this list.
  • An RNN requires quite long training and has a hard time finding a stable mapping (since the mapping should be continuous), so it needs some regularization built into its architecture. Training is usually based on predicting one or several next points in the sequence, which is not always an adequate target.
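To make the binning point concrete, here is a minimal sketch using the hmmlearn library; the toy series, the number of hidden states, and all sizes are my illustrative choices, and the `CategoricalHMM` class name is an assumption about the installed version (older releases called it `MultinomialHMM`).

```python
# Minimal sketch, assuming hmmlearn is installed; names and sizes are illustrative.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1000))  # toy random-walk series

# Option A: bin the continuous values so a discrete-emission HMC applies.
cuts = np.quantile(x, np.linspace(0, 1, 11)[1:-1])  # 9 cut points -> 10 equal-mass bins
x_binned = np.digitize(x, cuts).reshape(-1, 1)      # integer symbols 0..9
discrete = hmm.CategoricalHMM(n_components=4, n_iter=50, random_state=0)
discrete.fit(x_binned)

# Option B: Gaussian emissions work on the raw values directly,
# but (as noted above) this tends to be less robust in practice.
continuous = hmm.GaussianHMM(n_components=4, n_iter=50, random_state=0)
continuous.fit(x.reshape(-1, 1))
```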
Apart from setting the next several observations in the sequence as the target, there are plenty of other possible targets, such as the running mean over the next $n$ observations (a small sketch follows). However, I'd prefer machine learning to find these targets for me.
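As an illustration, here is a tiny numpy sketch of such a target; the alignment convention (the target at position $t$ is the mean of the next $n$ points) is my choice:

```python
import numpy as np

def running_mean_targets(x, n):
    """Target for position t: mean of the next n observations x[t+1..t+n].

    The last n positions have no complete future window and are dropped,
    so the result has len(x) - n entries, aligned with x[:-n].
    """
    means = np.convolve(x, np.ones(n) / n, mode="valid")  # means[k] = mean(x[k:k+n])
    return means[1:]  # shift so each window starts one step in the future
```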

A possible approach I came to:
  1. train the HMC in reversed time, so that the HMC predicts the past, not the future
  2. its predictions (probabilities of the hidden states) can be treated as a vector representation of the future, because they are computed from information about the future
  3. an RNN is trained, in forward time, to predict the hidden states of the HMC
  4. later, some other model takes the output of the RNN and predicts the value of interest
So the trick is that we use the HMC's predictions as a reliable, informative target about the future, similar to how auxiliary targets are used in deep learning, for instance.
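Below is a minimal end-to-end sketch of this pipeline, assuming the hmmlearn and PyTorch libraries. The toy sine series, the number of hidden states, the GRU size, and the MSE loss against the soft posteriors are all my illustrative choices, not part of the idea itself.

```python
# End-to-end sketch: backward HMC -> posteriors as targets -> forward RNN.
import numpy as np
import torch
import torch.nn as nn
from hmmlearn import hmm

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.standard_normal(3000)

# 1. Fit the HMC on the time-reversed series, so its state at position t
#    summarizes the "future" of the original series.
backward = x[::-1].reshape(-1, 1)
hmc = hmm.GaussianHMM(n_components=5, n_iter=50, random_state=0)
hmc.fit(backward)

# 2. Posterior hidden-state probabilities, re-reversed so that row t is
#    aligned with original time t; row t now describes x[t:].
future_repr = hmc.predict_proba(backward)[::-1].copy()

# 3. Train an RNN forward in time to predict those posteriors.
inputs = torch.tensor(x[:, None, None], dtype=torch.float32)          # (T, batch=1, 1)
targets = torch.tensor(future_repr[:, None, :], dtype=torch.float32)  # (T, 1, 5)

rnn = nn.GRU(input_size=1, hidden_size=16)
head = nn.Linear(16, 5)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)

for epoch in range(100):
    opt.zero_grad()
    hidden_seq, _ = rnn(inputs)
    pred = torch.softmax(head(hidden_seq), dim=-1)
    loss = ((pred - targets) ** 2).mean()  # MSE against the soft posterior targets
    loss.backward()
    opt.step()

# 4. pred[t] is now a learned vector description of the future at time t;
#    any downstream model can regress the value of interest on it.
print(pred.detach().squeeze(1).shape)  # torch.Size([3000, 5])
```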