Basis Construction and Utilization for Markov Decision Processes Using Graphs
The ease or difficulty of solving a problem strongly depends on the way it is represented. For example, consider the task of multiplying the numbers 12 and 24. Now imagine multiplying XII and XXIV. Both tasks can be solved, but it is clearly more difficult to use the Roman numeral representations of...
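The abstract's Roman numeral example can be made concrete with a small sketch (illustrative only, not taken from the dissertation): to multiply XII by XXIV we effectively have to convert to a positional representation first, which is exactly the point about representations.

```python
# Symbol values for standard Roman numerals.
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    """Convert a Roman numeral (subtractive notation) to an integer."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        # A smaller symbol before a larger one is subtracted (e.g. IV = 4).
        if i + 1 < len(s) and ROMAN[s[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

# The abstract's example: XII * XXIV = 12 * 24 = 288.
print(roman_to_int("XII") * roman_to_int("XXIV"))  # 288
```

The arithmetic itself is unchanged; only the representation makes one version easy and the other awkward, which is the motivation the abstract draws on.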
Main Author: Johns, Jeffrey Thomas
Format: Others
Published: ScholarWorks@UMass Amherst, 2010
Subjects:
Online Access:
- https://scholarworks.umass.edu/open_access_dissertations/177
- https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1179&context=open_access_dissertations
Similar Items
- Recent Advances in Deep Reinforcement Learning Applications for Solving Partially Observable Markov Decision Processes (POMDP) Problems: Part 1—Fundamentals and Applications in Games, Robotics and Natural Language Processing
  by: Xuanchen Xiang, et al.
  Published: (2021-07-01)
- State-similarity metrics for continuous Markov decision processes
  by: Ferns, Norman Francis
  Published: (2007)
- Basis construction and utilization for Markov decision processes using graphs
  by: Johns, Jeffrey T.
  Published: (2010)
- Scaling solutions to Markov Decision Problems
  by: Zang, Peng
  Published: (2012)
- The Convergence of a Cooperation Markov Decision Process System
  by: Xiaoling Mo, et al.
  Published: (2020-08-01)