No-Regret Algorithms for Structured Prediction Problems
Geoffrey J. Gordon
Published
Carnegie Mellon University, Center for Automated Learning and Discovery, 2005
URL
http://books.google.com.hk/books?id=gdmoSgAACAAJ&hl=&source=gbs_api
Note
No-regret algorithms are a popular class of online learning rules. Unfortunately, most no-regret algorithms assume that the set Y of allowable hypotheses is small and discrete. Instead, the authors consider prediction problems where Y has internal structure: Y might be the set of strategies in a game like poker, the set of paths in a graph, or the set of configurations of a data structure like a rebalancing binary search tree; or Y might be a given convex set (the "online convex programming" problem), or, in general, an arbitrary bounded set. They derive a family of no-regret learning rules, called Lagrangian Hedging algorithms, to take advantage of this structure. Their algorithms are a direct generalization of known no-regret learning rules, like weighted majority and external-regret matching. In addition to proving regret bounds, they demonstrate one of their algorithms learning to play one-card poker.
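The abstract notes that classical no-regret rules such as weighted majority assume a small, discrete hypothesis set Y. As a point of reference for that classical setting (not the Lagrangian Hedging algorithms themselves), here is a minimal sketch of the exponential-weights (Hedge) rule, with an assumed learning rate `eta` and an illustrative two-hypothesis loss sequence; regret is measured against the best single hypothesis in hindsight:

```python
import math

def hedge(losses, eta=0.5):
    """Exponential-weights (Hedge) update over a small discrete
    hypothesis set. `losses` is a list of per-round loss vectors,
    one entry per hypothesis; returns the mixed strategy played
    each round. `eta` is an assumed, un-tuned learning rate."""
    n = len(losses[0])
    w = [1.0] * n                 # uniform initial weights
    plays = []
    for loss in losses:
        total = sum(w)
        plays.append([wi / total for wi in w])  # normalized play
        # exponentially down-weight hypotheses that suffered loss
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
    return plays

# Illustrative loss sequence: hypothesis 0 is best in hindsight.
losses = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
plays = hedge(losses)
alg_loss = sum(sum(p * l for p, l in zip(play, loss))
               for play, loss in zip(plays, losses))
best = min(sum(col) for col in zip(*losses))
regret = alg_loss - best  # grows sublinearly in the horizon
```

The "structured Y" cases the paper targets (paths in a graph, poker strategies) make the hypothesis set exponentially large, which is exactly why enumerating weights as above stops being feasible.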