No-Regret Algorithms for Structured Prediction Problems
Abstract: No-regret algorithms are a popular class of online learning rules. Unfortunately, most no-regret algorithms assume that the set Y of allowable hypotheses is small and discrete. Instead, the authors consider prediction problems where Y has internal structure: Y might be the set of strategies in a game like poker, the set of paths in a graph, or the set of configurations of a data structure like a rebalancing binary search tree; or Y might be a given convex set (the "online convex programming" problem), or, in general, an arbitrary bounded set. They derive a family of no-regret learning rules, called Lagrangian Hedging algorithms, to take advantage of this structure. Their algorithms are a direct generalization of known no-regret learning rules, like weighted majority and external-regret matching. In addition to proving regret bounds, they demonstrate one of their algorithms learning to play one-card poker.
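For context, a minimal sketch of the classical Hedge (multiplicative-weights) rule over a small discrete action set, i.e. the kind of no-regret learner the abstract says Lagrangian Hedging generalizes. This is not the paper's algorithm for structured Y; it only illustrates the base case where Y is a small finite set, and the step size `eta` and loss range [0, 1] are illustrative assumptions.

```python
import math

def hedge(n_actions, losses, eta=0.5):
    """Classical Hedge over a discrete action set (a sketch, not the
    paper's Lagrangian Hedging algorithm). Each element of `losses` is
    a per-round loss vector with entries assumed to lie in [0, 1]."""
    weights = [1.0] * n_actions
    total_loss = 0.0
    for loss_vec in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # expected loss of the randomized play this round
        total_loss += sum(p * l for p, l in zip(probs, loss_vec))
        # exponential update: downweight actions that did badly
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss_vec)]
    # regret = learner's cumulative loss minus that of the best fixed action
    best_fixed = min(sum(step[i] for step in losses) for i in range(n_actions))
    return total_loss - best_fixed
```

Running this on any loss sequence of length T yields regret growing only sublinearly in T (roughly log(n_actions)/eta + eta*T/8), which is the "no-regret" property the paper extends from small discrete sets to structured hypothesis sets Y.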