Markov Decision Problems with Expected Utility Criteria
David M. Kreps
Stanford University. Department of Operations Research
Publisher
Graduate School of Business, Stanford University, 1975
URL
http://books.google.com.hk/books?id=3c8EAAAAIAAJ&hl=&source=gbs_api
Abstract
Finite state and action Markov decision problems with expected utility criteria are analyzed. A Markov decision chain (or sequential decision process) is defined in the usual manner. But instead of seeking to maximize the expected sum (or product) of rewards, the objective is maximization of the expectation of some cardinal utility function defined on the sequence of rewards.
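The distinction in the abstract can be made concrete with a small numerical sketch: instead of maximizing the expected sum of rewards, the criterion applies a cardinal utility function to the whole reward sequence before taking expectations. The example below is illustrative only (the reward process, horizon, and exponential utility are assumptions, not Kreps's model); it enumerates all reward sequences of a simple i.i.d. process and compares the expected total reward with the expected utility of the total.

```python
import itertools
import math

# Hypothetical reward process (illustration only): at each step the
# reward is +1 or 0, each with probability 1/2, over a 3-step horizon.
step_rewards = [(1.0, 0.5), (0.0, 0.5)]  # (reward, probability)
horizon = 3

def enumerate_sequences(horizon):
    """Yield (reward_sequence, probability) over all outcomes."""
    for outcome in itertools.product(step_rewards, repeat=horizon):
        rewards = tuple(r for r, _ in outcome)
        prob = math.prod(p for _, p in outcome)
        yield rewards, prob

def utility(seq):
    """A cardinal utility defined on the reward sequence: here a
    risk-averse exponential utility of the total reward (an assumed
    choice, used only to show the expected-utility criterion)."""
    return -math.exp(-sum(seq))

# Classical criterion: expectation of the sum of rewards.
expected_sum = sum(p * sum(seq) for seq, p in enumerate_sequences(horizon))

# Expected-utility criterion: expectation of u(reward sequence).
expected_utility = sum(p * utility(seq) for seq, p in enumerate_sequences(horizon))

print(expected_sum)      # expected total reward
print(expected_utility)  # expected utility of the sequence
```

Because the utility is concave, Jensen's inequality gives E[u(total)] < u(E[total]): the two criteria rank risky reward streams differently, which is exactly why the expected-utility objective requires its own analysis rather than reducing to the expected-sum case.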