Competing with Markov prediction strategies
Abstract
Assuming that the loss function is convex in the prediction, we construct a prediction strategy universal for the class of Markov prediction strategies, not necessarily continuous. Allowing randomization, we remove the requirement of convexity.
1 Introduction
This paper belongs to the area of research known as universal prediction of individual sequences (see [2] for a review): the predictor’s goal is to compete with a wide benchmark class of prediction strategies. In the previous papers [15] and [14] we constructed prediction strategies competitive with the important classes of continuous Markov and continuous stationary prediction strategies, respectively. In this paper we consider competing against possibly discontinuous strategies. Our main results assert the existence of prediction strategies competitive with the Markov strategies.
This paper’s idea of transition from continuous to general benchmark classes was motivated by Skorokhod’s topology for the space of “càdlàg” functions, most of which are discontinuous. Skorokhod’s idea was to allow small deformations not only along the vertical axis but also along the horizontal axis when defining neighborhoods. Skorokhod’s topology was metrized by Kolmogorov so that it became a separable space ([1], Appendix III; [11], p. 913), which allows us to apply one of the numerous algorithms for prediction with expert advice (Kalnishkan and Vyugin’s Weak Aggregating Algorithm in this paper) to construct a universal algorithm.
2 Main results
The game of prediction between two players, called Predictor and Reality, is played according to the following protocol (of perfect information, in the sense that either player can see the other player’s moves made so far).
Prediction protocol
FOR $n = 1, 2, \ldots$:
  Reality announces $x_n \in X$.
  Predictor announces $\gamma_n \in \Gamma$.
  Reality announces $y_n \in Y$.
END FOR.
The game proceeds in rounds numbered by the positive integers $n = 1, 2, \ldots$. At the beginning of each round $n$ Predictor is given some signal $x_n$ relevant to predicting the following observation $y_n$. The signal is taken from the signal space $X$ and the observation from the observation space $Y$. Predictor then announces his prediction $\gamma_n$, taken from the prediction space $\Gamma$, and the prediction’s quality in light of the actual observation is measured by a loss function $\lambda: Y \times \Gamma \to \mathbb{R}$.
We will always assume that the signal space $X$, the prediction space $\Gamma$, and the observation space $Y$ are nonempty sets; $X$ and $\Gamma$ will often be equipped with additional structures.
Markov-universal prediction strategies: deterministic case
Predictor’s strategies in the prediction protocol will be called prediction strategies. Formally, such a strategy is a function

$D: (X \times Y)^* \times X \to \Gamma$;

it maps each history $(x_1, y_1, \ldots, x_{n-1}, y_{n-1}, x_n)$ to the chosen prediction $\gamma_n$. In this paper we will be especially interested in Markov strategies, which are functions $D: X \to \Gamma$; intuitively, $D(x_n)$ is the recommended prediction on round $n$. The restriction to Markov strategies is not a severe one, since the signal $x_n$ can encode as much of the past as we want (cf. [8], footnote 1); in particular, $x_n$ can contain information about the previous observations $y_1, \ldots, y_{n-1}$. In this paper Markov prediction strategies will also be called prediction rules (as in [15]; in a more general context, however, it would be risky to omit “Markov” since “prediction rule” is too easy to confuse with “prediction strategy”).
For both our theorems we will need the notion of an “approximation” $A_m(x)$ to a signal $x$; intuitively, $A_m(x)$ is another signal which is as close to $x$ as possible but carries only $m$ bits of information. If $X = [0, 1]$, a reasonable definition of $A_m(x)$ would be to take the binary expansion of $x$ but remove all the binary digits starting from the $(m+1)$st after the binary dot. In general, we will have to equip $X$ with an “approximation structure”; we will do this following Kolmogorov and Tikhomirov ([12], Section 2, [11], p. 913).
Consider a sequence of mappings $A_m: X \to X$, $m = 1, 2, \ldots$, such that each $A_m$ is idempotent, in the sense that $A_m(A_m(x)) = A_m(x)$ for all $x \in X$, and the image $A_m(X)$ contains at most $2^m$ elements. (Such mappings are coding-theory analogues of projections in linear algebra and retractions in topology; $A_m(x)$ can be thought of as the result of encoding $x$, sending it over an $m$-bit channel, and restoring it as well as possible at the receiving end.) It is the sequence $(A_m)_{m=1}^{\infty}$ that will be referred to as an approximation structure.
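For example, for $X = [0, 1)$ the binary-truncation construction mentioned above yields an approximation structure; a minimal sketch (the function name is ours, for illustration only):

```python
import math

def A(m, x):
    """Keep the first m binary digits of x in [0, 1): at most 2**m possible outputs."""
    return math.floor(x * 2**m) / 2**m

x = 0.6180339887

# Idempotence: A_m(A_m(x)) = A_m(x).
assert A(8, A(8, x)) == A(8, x)

# The image of A_m contains at most 2**m elements ...
values = {A(3, i / 1000) for i in range(1000)}
assert len(values) <= 2**3

# ... and the approximation error is uniformly small: |A_m(x) - x| < 2**-m.
assert abs(A(8, x) - x) < 2**-8
```

The last assertion illustrates property (1) below: the approximation error goes to 0 uniformly as $m$ grows.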
If $X$ is a totally bounded (say, compact) metric space, there is an approximation structure $(A_m)$ such that

(1) $\sup_{x \in X} \rho(A_m(x), x) \to 0$ as $m \to \infty$,

i.e., $\rho(A_m(x), x) \to 0$ uniformly in $x$. (We often let $\rho$ stand for the metric in various metric spaces, always clear from the context.) In fact, the $m$th Kolmogorov diameter of $X$ (the smallest achievable value of $\sup_{x} \rho(A_m(x), x)$) is essentially the inverse function to the $\epsilon$-entropy $H_\epsilon(X)$. See [9] for precise values and estimates of $H_\epsilon(X)$ for numerous totally bounded metric spaces $X$.
A prediction strategy is Markov-universal for a loss function $\lambda$ and an approximation structure $(A_m)$ if it guarantees that for any prediction rule $D$ and any $\epsilon > 0$ there exists a number $m$ such that, for any sequence of Reality’s moves $x_1, y_1, x_2, y_2, \ldots$, its responses $\gamma_1, \gamma_2, \ldots$ satisfy

$\limsup_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \Bigl( \lambda(y_n, \gamma_n) - \lambda\bigl(y_n, D(A_m(x_n))\bigr) \Bigr) \le \epsilon$.
Theorem 1
Suppose $X$ is equipped with an approximation structure $(A_m)$, $\Gamma$ is a closed convex subset of a separable Banach space, and the loss function $\lambda(y, \gamma)$ is bounded, convex in the variable $\gamma$, and uniformly continuous in $\gamma$ uniformly in $y$. There exists a prediction strategy that is Markov-universal for $\lambda$ and $(A_m)$.
A Markov-universal prediction strategy will be constructed in the next section. Theorem 1 says that, under its conditions,

(2) $\limsup_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \Bigl( \lambda(y_n, \gamma_n) - \lambda\bigl(y_n, D(A_m(x_n))\bigr) \Bigr) \le 0$

uniformly in Reality’s moves $x_1, y_1, x_2, y_2, \ldots$ for all prediction rules $D$ and all $m$.
Markov-universal prediction strategies: randomized case
When the loss function $\lambda(y, \gamma)$ is not required to be convex in $\gamma$, the conclusion of Theorem 1 may become false ([6], Theorem 2). The situation changes if we consider randomized prediction strategies.
A randomized prediction strategy is a function

$D: (X \times Y)^* \times X \to \mathcal{P}(\Gamma)$

mapping the past to the probability measures on the prediction space, $\mathcal{P}(\Gamma)$ standing for the set of all probability measures on $\Gamma$. In other words, this is a strategy for Predictor in the extended game of prediction with the prediction space $\mathcal{P}(\Gamma)$. A Markov randomized prediction strategy, or randomized prediction rule for brevity, is a function $D: X \to \mathcal{P}(\Gamma)$.
We will say that a randomized prediction strategy outputting $\mu_1, \mu_2, \ldots$ is Markov-universal for a loss function $\lambda$ and an approximation structure $(A_m)$ if, for any randomized prediction rule $D$ and any $\epsilon > 0$, there exists $m$ such that, for any sequence of Reality’s moves $x_1, y_1, x_2, y_2, \ldots$,

(3) $\limsup_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \bigl( \lambda(y_n, g_n) - \lambda(y_n, d_n) \bigr) \le \epsilon$

with probability at least $1 - \epsilon$, where $g_1, d_1, g_2, d_2, \ldots$ are independent random variables distributed as

(4) $g_n \sim \mu_n$, $d_n \sim D(A_m(x_n))$, $n = 1, 2, \ldots$.
Intuitively, the word “probability” after (3) refers only to the prediction strategies’ internal randomization; it is not assumed that Reality behaves stochastically. We will use this definition only in the case where the loss function is continuous in the prediction, and so (3) will indeed be an event having a probability.
Theorem 2
Suppose the signal space $X$ is equipped with an approximation structure $(A_m)$, $\Gamma$ is a separable topological space, and the loss function $\lambda$ is bounded and such that the set of functions $\{\lambda(y, \cdot) : y \in Y\}$ is equicontinuous. There exists a randomized prediction strategy that is Markov-universal for $\lambda$ and $(A_m)$.
3 Proof of Theorem 1
Let us fix a dense countable subset $\Gamma'$ of $\Gamma$. We will say that a function $D: X \to \Gamma$ is $m$-elementary if it takes values in $\Gamma'$ and depends on $x$ only via $A_m(x)$; a function is elementary if it is $m$-elementary for some $m$. There are countably many elementary functions; let us enumerate them as $D_1, D_2, \ldots$. We will refer to these functions as experts. We will apply a special case of Kalnishkan and Vyugin’s [6] Weak Aggregating Algorithm (WAA) to the sequence of experts $D_1, D_2, \ldots$ (as in [14]).
Let $q_1, q_2, \ldots$ be a sequence of positive numbers summing to 1, $\sum_{k=1}^{\infty} q_k = 1$. Define

$l_k(n) := \lambda(y_n, D_k(x_n))$ and $L_k(N) := \sum_{n=1}^{N} l_k(n)$

to be the instantaneous loss of the $k$th expert on the $n$th round and his cumulative loss over the first $N$ rounds, respectively. For all $k$ and $n$ define

$w_k(n) := q_k e^{-L_k(n-1)/\sqrt{n}}$

($w_k(n)$ are the weights of the experts to use on round $n$) and

$p_k(n) := \frac{w_k(n)}{\sum_{j=1}^{\infty} w_j(n)}$

(the normalized weights; it is obvious that the denominator is positive and finite). The WAA’s prediction on round $n$ is

(5) $\gamma_n := \sum_{k=1}^{\infty} p_k(n) D_k(x_n)$.

To make this series convergent, we may choose the weights $q_k$ so small that $q_k \sup_{x \in X} \|D_k(x)\| \le 2^{-k}$ for all $k$ (each elementary expert is bounded, its range being finite). In this case we will automatically have $\gamma_n \in \Gamma$ ($\Gamma$ being closed and convex) since

(6) $p_k(n) = O(q_k)$

as $k \to \infty$, for each fixed $n$.
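The aggregation step can be sketched numerically. The exponential weighting below (prior weight $q_k$ shrunk by $\exp(-L_k(n-1)/\sqrt{n})$) is our reading of the WAA and is illustrative only; we truncate to finitely many experts for simplicity, whereas the construction above uses a countable pool:

```python
import math

def waa_prediction(q, L_prev, expert_preds, n):
    """One WAA-style aggregation step: weight expert k by q[k] * exp(-L_prev[k] / sqrt(n)),
    normalize the weights, and output the convex mixture of the experts' predictions."""
    w = [qk * math.exp(-Lk / math.sqrt(n)) for qk, Lk in zip(q, L_prev)]
    total = sum(w)
    p = [wk / total for wk in w]
    return sum(pk * g for pk, g in zip(p, expert_preds))

# Three experts with prior weights summing to 1; the expert with the smaller
# past loss gets more weight, pulling the mixture towards its prediction.
q = [0.5, 0.25, 0.25]
gamma = waa_prediction(q, L_prev=[10.0, 0.0, 10.0], expert_preds=[0.0, 1.0, 0.0], n=100)
assert 0.0 < gamma < 1.0 and gamma > 0.25  # the second expert outgrows its prior weight

# If all experts agree, so does the mixture.
assert waa_prediction(q, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 1) == 1.0
```

Note that the mixture is a convex combination, which is why the construction needs $\Gamma$ convex and $\lambda$ convex in the prediction.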
Let $l(n) := \lambda(y_n, \gamma_n)$ be the WAA’s loss on round $n$ and $L(N) := \sum_{n=1}^{N} l(n)$ be its cumulative loss over the first $N$ rounds.
Lemma 1 ([6], Lemma 9)
The WAA guarantees that, for all $N$,

(7) $L(N) \le \sum_{n=1}^{N} \sum_{k=1}^{\infty} p_k(n) l_k(n) + \sum_{n=1}^{N} \sqrt{n} \ln \sum_{k=1}^{\infty} p_k(n) e^{-l_k(n)/\sqrt{n}} - \sqrt{N} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N}}$.
The first two terms on the right-hand side of (7) are sums over the first $N$ rounds of different kinds of mean of the experts’ losses (see, e.g., [5], Chapter III, for a general definition of the mean); we will see later that they nearly cancel each other out. If those two terms are ignored, the remaining part of (7) is identical (except that now the learning rate $1/\sqrt{N}$ depends on $N$) to the main property of the “Aggregating Algorithm” (see, e.g., [13], Lemma 1). All infinite series in (7) are trivially convergent.
In the proof of Lemma 1 we will use the following property of “countable convexity” of $\lambda$: for any probability weights $p_1, p_2, \ldots$ and any convergent series $\sum_k p_k \gamma_k$ with $\gamma_k \in \Gamma$,

(8) $\lambda\Bigl(y, \sum_{k=1}^{\infty} p_k \gamma_k\Bigr) \le \sum_{k=1}^{\infty} p_k \lambda(y, \gamma_k)$.

This property follows from (6) and the finite convexity

$\lambda\Bigl(y, \frac{\sum_{k=1}^{K} p_k \gamma_k}{\sum_{k=1}^{K} p_k}\Bigr) \le \frac{\sum_{k=1}^{K} p_k \lambda(y, \gamma_k)}{\sum_{k=1}^{K} p_k}$

if we let $K \to \infty$ (using the continuity of $\lambda$ in the prediction).

The proof is by induction on $N$. For $N = 1$, (7) follows from the countable convexity (8) and $p_k(1) = q_k$. Assuming (7), we obtain

$L(N+1) \le \sum_{n=1}^{N+1} \sum_{k=1}^{\infty} p_k(n) l_k(n) + \sum_{n=1}^{N} \sqrt{n} \ln \sum_{k=1}^{\infty} p_k(n) e^{-l_k(n)/\sqrt{n}} - \sqrt{N} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N}}$

(the first “$\le$” again used the countable convexity (8)). Therefore, it remains to prove

$- \sqrt{N} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N}} \le \sqrt{N+1} \ln \sum_{k=1}^{\infty} p_k(N+1) e^{-l_k(N+1)/\sqrt{N+1}} - \sqrt{N+1} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N+1)/\sqrt{N+1}}$.

By the definition of $p_k(N+1)$ this can be rewritten as

$- \sqrt{N} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N}} \le \sqrt{N+1} \ln \frac{\sum_{k=1}^{\infty} q_k e^{-L_k(N+1)/\sqrt{N+1}}}{\sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N+1}}} - \sqrt{N+1} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N+1)/\sqrt{N+1}}$,

which after cancellation becomes

(9) $- \sqrt{N} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N}} \le - \sqrt{N+1} \ln \sum_{k=1}^{\infty} q_k e^{-L_k(N)/\sqrt{N+1}}$.

The last inequality follows from the general result about comparison of different means ([5], Theorem 85), but we can also check it directly (following [6]). Let $t_k := e^{-L_k(N)/\sqrt{N}}$ and $\alpha := \sqrt{N/(N+1)}$, where $k = 1, 2, \ldots$. Then (9) can be rewritten as

$\sum_{k=1}^{\infty} q_k t_k^{\alpha} \le \Bigl(\sum_{k=1}^{\infty} q_k t_k\Bigr)^{\alpha}$,

and the last inequality follows from the concavity of the function $t \mapsto t^{\alpha}$.
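The concavity step invoked here is an instance of Jensen’s inequality: for weights $q_k \ge 0$ summing to 1, numbers $t_k > 0$, and $0 < \alpha < 1$,

```latex
\sum_{k} q_k t_k^{\alpha}
  \;=\; \sum_{k} q_k f(t_k)
  \;\le\; f\!\Bigl(\sum_{k} q_k t_k\Bigr)
  \;=\; \Bigl(\sum_{k} q_k t_k\Bigr)^{\alpha},
\qquad f(t) := t^{\alpha},
```

since $f$ is concave on $(0, \infty)$ for $0 < \alpha < 1$.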
Lemma 2 ([6], Lemma 5)
Let $M$ be an upper bound on $|\lambda|$. The WAA guarantees that, for all $N$ and all $k$,

(10) $L(N) \le L_k(N) + \sqrt{N} \bigl( \ln(1/q_k) + M + M^2 \bigr)$.

There is no term $M\sqrt{N}$ in [6] since that paper only considers nonnegative loss functions. (Notice that even without assuming nonnegativity this term is very crude and can be easily improved.)
Now it is easy to prove Theorem 1. The definition of Markov-universality can be restated as follows: a prediction strategy outputting $\gamma_1, \gamma_2, \ldots$ is Markov-universal if and only if for any prediction rule $D$, any $\epsilon > 0$, and any $m$ there exists $N_0$ such that, for any $N \ge N_0$ and any $x_1, y_1, \ldots, x_N, y_N$,

(11) $\frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, \gamma_n) \le \frac{1}{N} \sum_{n=1}^{N} \lambda\bigl(y_n, D(A_m(x_n))\bigr) + \epsilon$.

Let $\gamma_1, \gamma_2, \ldots$ be output by the WAA and let us consider any prediction rule $D$, any $\epsilon > 0$, and any $m$. Choose $\delta > 0$ such that $|\lambda(y, \gamma) - \lambda(y, \gamma')| \le \epsilon/2$ whenever $\|\gamma - \gamma'\| \le \delta$ and choose an $m$-elementary expert $D_k$ such that, for all $x \in X$, $\|D_k(x) - D(A_m(x))\| \le \delta$ (this is possible since $A_m(X)$ is finite and $\Gamma'$ is dense in $\Gamma$). By Lemma 2,

(12) $\frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, \gamma_n) \le \frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, D_k(x_n)) + \frac{\ln(1/q_k) + M + M^2}{\sqrt{N}} \le \frac{1}{N} \sum_{n=1}^{N} \lambda\bigl(y_n, D(A_m(x_n))\bigr) + \frac{\epsilon}{2} + \frac{\ln(1/q_k) + M + M^2}{\sqrt{N}} \le \frac{1}{N} \sum_{n=1}^{N} \lambda\bigl(y_n, D(A_m(x_n))\bigr) + \epsilon$,

the last inequality holding for all $N \ge N_0 := \bigl\lceil \bigl( 2 (\ln(1/q_k) + M + M^2) / \epsilon \bigr)^2 \bigr\rceil$.
4 Proof of Theorem 2
A convenient pseudometric on $\Gamma$ can be defined by

$\rho(\gamma, \gamma') := \sup_{y \in Y} |\lambda(y, \gamma) - \lambda(y, \gamma')|$, $\gamma, \gamma' \in \Gamma$

(cf. [3], Corollary 11.3.4). Let us redefine $\Gamma$ as the quotient space obtained from the original $\Gamma$ by identifying the $\gamma$ and $\gamma'$ for which $\rho(\gamma, \gamma') = 0$ ([4], Section 2.4); in other words, we will not distinguish predictions that always lead to identical losses. Now $\rho$ becomes a metric on $\Gamma$. Let $\Gamma'$ be a countable dense subset of the original topological space $\Gamma$ (which is separable by the conditions of Theorem 2); the condition of equicontinuity implies that $\Gamma'$ (formally defined as the set of equivalence classes containing elements of the original $\Gamma'$) remains a dense subset of $\Gamma$ equipped with the metric $\rho$.
We define the norm of a function $f: \Gamma \to \mathbb{R}$ as

$\|f\|_{\mathrm{BL}} := \sup_{\gamma \in \Gamma} |f(\gamma)| + \sup_{\gamma \ne \gamma'} \frac{|f(\gamma) - f(\gamma')|}{\rho(\gamma, \gamma')}$;

this norm is finite for bounded Lipschitz functions (which form a Banach space under this norm: see [3], Section 11.2). Notice that

(13) $\sup_{y \in Y} \|\lambda(y, \cdot)\|_{\mathrm{BL}} < \infty$

(each $\lambda(y, \cdot)$ is bounded and, by the definition of $\rho$, Lipschitz with coefficient 1).
Next define

(14) $\lambda(y, \mu) := \int_{\Gamma} \lambda(y, \gamma) \, \mu(\mathrm{d}\gamma)$,

where $\mu$ is a probability measure on $\Gamma$. This is the loss function in a new game of prediction with the prediction space $\mathcal{P}(\Gamma)$; it is linear and, therefore, convex in $\mu$. (In general, the role of randomization in this paper is to make the loss function convex in the prediction.)
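For finitely supported measures the extended loss is a weighted sum, and its linearity (hence convexity) in the measure can be checked directly; the loss function and helper names below are ours, for illustration only:

```python
def loss(y, gamma):
    # An illustrative bounded-on-its-domain loss: squared error.
    return (y - gamma) ** 2

def extended_loss(y, mu):
    """lambda(y, mu) = integral of lambda(y, .) with respect to mu, for a
    finitely supported mu given as a list of (prediction, probability) pairs."""
    return sum(p * loss(y, g) for g, p in mu)

def mix(mu1, mu2, a):
    """The mixture a*mu1 + (1 - a)*mu2 as a finitely supported measure."""
    return [(g, a * p) for g, p in mu1] + [(g, (1 - a) * p) for g, p in mu2]

mu1 = [(0.0, 0.5), (1.0, 0.5)]   # fair coin between predictions 0 and 1
mu2 = [(0.5, 1.0)]               # point mass at 0.5
a, y = 0.3, 1.0
lhs = extended_loss(y, mix(mu1, mu2, a))
rhs = a * extended_loss(y, mu1) + (1 - a) * extended_loss(y, mu2)
assert abs(lhs - rhs) < 1e-12    # linear, hence convex, in the measure
```

This is exactly what makes the WAA of the previous section applicable again: even a non-convex loss becomes linear once lifted to $\mathcal{P}(\Gamma)$.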
As a metric on $\mathcal{P}(\Gamma)$ we will take the Fortet–Mourier metric ([3], Section 11.3) defined as

$\rho(\mu, \nu) := \sup_{f: \|f\|_{\mathrm{BL}} \le 1} \Bigl| \int_{\Gamma} f \, \mathrm{d}\mu - \int_{\Gamma} f \, \mathrm{d}\nu \Bigr|$.

The topology on $\mathcal{P}(\Gamma)$ induced by this metric is called the topology of weak convergence ([1]; weak convergence is called simply “convergence” in [3]; for the proof of equivalence of several natural definitions of the topology of weak convergence, see [3], Theorem 11.3.3).
It is easy to see that the space $\mathcal{P}(\Gamma)$ with the metric $\rho$ is separable: e.g., the set of probability measures concentrated on finite subsets of $\Gamma'$ and taking rational values is dense in $\mathcal{P}(\Gamma)$ (cf. [1], Appendix III). Let us enumerate the elements of a dense countable set in $\mathcal{P}(\Gamma)$ as $d_1, d_2, \ldots$; as in the previous section, we will use the WAA to merge all experts $D_1, D_2, \ldots$, the experts now being the elementary functions $D: X \to \mathcal{P}(\Gamma)$, i.e., the functions taking values among $d_1, d_2, \ldots$ and depending on $x$ only via $A_m(x)$ for some $m$.
The convergence of the mixture (5) to a probability measure on $\Gamma$ is now obvious. The countable convexity (8) now holds with equality,

$\lambda\Bigl(y, \sum_{k=1}^{\infty} p_k(n) D_k(x)\Bigr) = \sum_{k=1}^{\infty} p_k(n) \lambda(y, D_k(x))$,

and follows from the general fact that

$\int_{\Gamma} f \, \mathrm{d}\Bigl(\sum_{k=1}^{\infty} c_k \mu_k\Bigr) = \sum_{k=1}^{\infty} c_k \int_{\Gamma} f \, \mathrm{d}\mu_k$

for bounded Borel $f: \Gamma \to \mathbb{R}$, positive $c_k$ summing to 1, and probability measures $\mu_k$ on $\Gamma$ (this is obviously true for simple $f$ and follows for arbitrary integrable $f$ from the definition of the Lebesgue integral: see, e.g., [3], Section 4.1).
Therefore, it is easy to check that the chain (12) still works (with $\mathcal{P}(\Gamma)$ equipped with the metric $\rho$ in place of $\Gamma$) and we can rephrase the previous section’s result as follows. For any randomized prediction rule $D$, any $\epsilon > 0$, and any $m$ there exists $N_0$ such that, for any $N \ge N_0$ and any $x_1, y_1, \ldots, x_N, y_N$, the WAA’s predictions $\mu_1, \mu_2, \ldots$ are guaranteed to satisfy

(15) $\frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, \mu_n) \le \frac{1}{N} \sum_{n=1}^{N} \lambda\bigl(y_n, D(A_m(x_n))\bigr) + \epsilon$

(cf. (11)).
The loss function $\lambda$ is bounded in absolute value by a constant $M$, and so the law of the iterated logarithm (in Kolmogorov’s finitary form, [7], the end of the introductory section; the condition that the cumulative variance tends to infinity is easy to get rid of: see, e.g., [10], (5.8)) implies that for any $\epsilon > 0$ and any $\delta > 0$ there exists $N_1$ such that the conjunction of

$\sup_{N \ge N_1} \frac{1}{N} \sum_{n=1}^{N} \bigl( \lambda(y_n, g_n) - \lambda(y_n, \mu_n) \bigr) \le \epsilon$

and

$\sup_{N \ge N_1} \frac{1}{N} \sum_{n=1}^{N} \Bigl( \lambda\bigl(y_n, D(A_m(x_n))\bigr) - \lambda(y_n, d_n) \Bigr) \le \epsilon$

holds with probability at least $1 - \delta$. Combining the last two inequalities with (15) we can see that for any randomized prediction rule $D$, any $\epsilon > 0$, any $\delta > 0$, and any $m$ there exists $N_2$ such that, for any $N \ge N_2$, the WAA’s responses $\mu_1, \mu_2, \ldots$ to $x_1, y_1, x_2, y_2, \ldots$ are guaranteed to satisfy

$\frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, g_n) \le \frac{1}{N} \sum_{n=1}^{N} \lambda(y_n, d_n) + 3\epsilon$

(simultaneously over all $N \ge N_2$) with probability at least $1 - \delta$. This is equivalent to the WAA, applied to the extended game with prediction space $\mathcal{P}(\Gamma)$, being a Markov-universal randomized prediction strategy.
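The role of the concentration step can be illustrated numerically: the average of the realized losses $\lambda(y_n, g_n)$, with $g_n$ drawn from $\mu_n$, stays close to the average expected loss $\lambda(y_n, \mu_n)$, regardless of how Reality chooses its (non-stochastic) moves. A toy simulation of our own, with a fixed seed and an illustrative loss:

```python
import random

random.seed(0)

# Round n: mu_n puts mass p on prediction 1 and mass 1 - p on prediction 0;
# the loss of prediction g on observation y is |y - g|, bounded by 1.
N = 100_000
realized, expected = 0.0, 0.0
for n in range(N):
    p = 0.3 + 0.4 * ((n % 7) / 6.0)   # some deterministic sequence of measures mu_n
    y = n % 2                          # Reality's moves need not be stochastic
    g = 1 if random.random() < p else 0
    realized += abs(y - g)
    expected += p * abs(y - 1) + (1 - p) * abs(y - 0)

# The internal randomization averages out: the two running means nearly coincide.
assert abs(realized / N - expected / N) < 0.01
```

The gap between the two averages is of order $1/\sqrt{N}$, which is what the law of the iterated logarithm quantifies in the argument above.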
5 Conclusion
An interesting theoretical problem is to state more explicit versions of Theorems 1 and 2: for example, to give an explicit expression for $N_0$ in (11) as a function of $\epsilon$ and $m$.
The field of lossy compression is now well developed, and it would be interesting to apply our prediction algorithms (perhaps with the Weak Aggregating Algorithm replaced by an algorithm based on, say, gradient descent [2] or defensive forecasting [15]) to the approximation structures induced by popular lossy compression algorithms.
Acknowledgments
This work was partially supported by MRC (grant S505/65).
References
 [1] Patrick Billingsley. Convergence of Probability Measures. Wiley, New York, 1968.
 [2] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.
 [3] Richard M. Dudley. Real Analysis and Probability, volume 74 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, England, revised edition, 2002.
 [4] Ryszard Engelking. General Topology, volume 6 of Sigma Series in Pure Mathematics. Heldermann, Berlin, second edition, 1989.
 [5] G. H. Hardy, John E. Littlewood, and George Pólya. Inequalities. Cambridge University Press, Cambridge, second edition, 1952.
 [6] Yuri Kalnishkan and Michael V. Vyugin. The Weak Aggregating Algorithm and weak mixability. In Peter Auer and Ron Meir, editors, Proceedings of the Eighteenth Annual Conference on Learning Theory, volume 3559 of Lecture Notes in Computer Science, pages 188–203, Berlin, 2005. Springer. The journal version is being prepared for the Special Issue of Journal of Machine Learning Research devoted to COLT’2005; all references are to the journal version.
 [7] Andrei N. Kolmogorov. Über das Gesetz des iterierten Logarithmus. Mathematische Annalen, 101:126–135, 1929.
 [8] Andrei N. Kolmogorov. Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Mathematische Annalen, 104:415–458, 1931.
 [9] Andrei N. Kolmogorov and Vladimir M. Tikhomirov. ε-entropy and ε-capacity of sets in functional spaces (in Russian). Uspekhi Matematicheskikh Nauk, 14(2):3–86, 1959.
 [10] Glenn Shafer and Vladimir Vovk. Probability and Finance: It’s Only a Game! Wiley, New York, 2001.
 [11] Albert N. Shiryaev. Kolmogorov: life and creative activities. Annals of Probability, 17:866–944, 1989.
 [12] Vladimir M. Tikhomirov. ε-entropy and ε-capacity (in Russian). In Yury V. Prokhorov and Albert N. Shiryaev, editors, Kolmogorov. Teoriya Informatsii i Teoriya Algoritmov, pages 262–269. Nauka, Moscow, 1987.
 [13] Vladimir Vovk. Competitive online statistics. International Statistical Review, 69:213–248, 2001.
 [14] Vladimir Vovk. Competing with stationary prediction strategies. Technical Report arXiv:cs.LG/0607067, arXiv.org ePrint archive, July 2006.
 [15] Vladimir Vovk. Predictions as statements and decisions. Technical Report arXiv:cs.LG/0606093, arXiv.org ePrint archive, June 2006.