Document Type

Article

Abstract

We consider control problems whose trajectories are driven both by ordinary measurable control functions and by controls that are measures. The payoff consists of a running cost in time and a running cost against the control measures. In the optimal control problem we minimize this payoff over both controls. In the differential game problem we minimize the cost with the ordinary controls, assuming that the measure controls are chosen to maximize the cost. We characterize the value functions in both cases using viscosity solution theory by deriving the corresponding Bellman and Isaacs equations.
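As a schematic illustration only (the article's precise formulation and notation may differ), a typical setup of this kind pairs an ordinary control $a(\cdot)$ with a measure control $\mu$ in both the dynamics and the payoff; the symbols $f$, $g$, $\ell$, $c$, and the discount rate $\lambda$ below are generic placeholders, not the paper's own definitions:

% trajectory driven by an ordinary control a(.) and a measure control mu
\[ dx(t) = f\bigl(x(t), a(t)\bigr)\,dt + g\bigl(x(t)\bigr)\,d\mu(t), \qquad x(0) = x_0, \]
% payoff: a running cost in time plus a running cost against the control measure
\[ J(x_0; a, \mu) = \int_0^\infty e^{-\lambda t}\,\ell\bigl(x(t), a(t)\bigr)\,dt + \int_0^\infty e^{-\lambda t}\,c\bigl(x(t)\bigr)\,d\mu(t), \]
% optimal control problem: minimize over both controls
\[ V(x_0) = \inf_{a,\,\mu} J(x_0; a, \mu), \]
% differential game: minimize over a(.), with mu chosen to maximize
\[ W(x_0) = \inf_{a}\,\sup_{\mu}\, J(x_0; a, \mu). \]

In this sketch, $V$ and $W$ would then be characterized as viscosity solutions of the associated Bellman and Isaacs equations, respectively, which is the program the abstract describes.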

Disciplines

Numerical Analysis and Computation | Probability

Comments

© 1993. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. The final published version may be accessed at https://dx.doi.org/10.1016/0362-546X(93)90019-O
