The authors consider time-average Markov decision processes (MDPs), which accumulate a reward and a cost at each decision epoch. A policy meets the sample-path constraint if the time-average cost is below a specified value with probability one. The optimization problem is to maximize the expected average reward over all policies that meet the sample-path constraint. The sample-path constraint is compared with the more commonly studied constraint that the expected average cost be less than a specified value. Although the two criteria are equivalent for certain classes of MDPs, their feasible and optimal policies differ for many nontrivial problems. In general, there do not exist optimal or nearly optimal stationary policies when the expected average-cost constraint is employed. Assuming that a policy exists that meets the sample-path constraint, the authors establish that there exist nearly optimal stationary policies for communicating MDPs. A parametric linear programming algorithm is given to construct nearly optimal stationary policies. The discussion relies on well-known results from the theory of stochastic processes and linear programming. The techniques lead to simple proofs of the existence of optimal and nearly optimal stationary policies for unichain and deterministic MDPs, respectively.
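For context on the linear-programming connection the abstract mentions: the expected average-cost constraint admits the classical occupation-measure LP, in which variables x(s, a) are long-run state-action frequencies and a randomized stationary policy is read off from the optimal solution. The sketch below illustrates that standard formulation on a small hypothetical two-state, two-action MDP (the data, the bound `alpha`, and all names are illustrative assumptions, not taken from the paper; the paper's parametric LP algorithm for the sample-path constraint builds on formulations of this kind rather than being reproduced here).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP (illustrative data only).
nS, nA = 2, 2
P = np.array([                 # P[s, a, s'] = transition probability
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.1, 0.9]],
])
r = np.array([[1.0, 3.0], [0.0, 2.0]])   # rewards r(s, a)
c = np.array([[0.0, 2.0], [0.0, 2.0]])   # costs   c(s, a)
alpha = 1.0                              # expected average-cost bound

# Variables x(s, a): long-run state-action frequencies.
# Maximize sum r*x  <=>  minimize -r*x.
obj = -r.flatten()

# Flow balance: sum_a x(s,a) = sum_{s',a'} P[s',a',s] x(s',a'), for each s,
# plus the normalization sum_{s,a} x(s,a) = 1.
A_eq = np.zeros((nS + 1, nS * nA))
for s in range(nS):
    for s2 in range(nS):
        for a in range(nA):
            A_eq[s, s2 * nA + a] = (1.0 if s2 == s else 0.0) - P[s2, a, s]
A_eq[nS, :] = 1.0
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

# Expected average-cost constraint: sum_{s,a} c(s,a) x(s,a) <= alpha.
A_ub = c.flatten()[None, :]
b_ub = np.array([alpha])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nS * nA), method="highs")
x = res.x.reshape(nS, nA)

# Randomized stationary policy: pi(a|s) proportional to x(s, a);
# states with zero frequency get an arbitrary (uniform) action choice.
totals = x.sum(axis=1, keepdims=True)
pi = np.divide(x, totals, out=np.full_like(x, 1.0 / nA), where=totals > 1e-9)
```

In this toy instance the unconstrained reward-maximizing policy violates the cost bound, so the LP optimum randomizes between actions; the stationary policy `pi` attains the best expected average reward among policies meeting the *expected* cost constraint, which is exactly the criterion the abstract contrasts with the sample-path one.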

  • Corporate Authors:

    Operations Research Society of America

    Mount Royal and Guilford Avenue
    Baltimore, MD  United States  21202
  • Authors:
    • Ross, K W
    • Varadarajan, R
  • Publication Date: 1989-9

Media Info

  • Features: Figures; References
  • Pagination: p. 780-790

Filing Info

  • Accession Number: 00489583
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Nov 30 1989 12:00AM