Document Type



We consider control problems whose trajectories involve both ordinary measurable control functions and controls that are measures. The payoff consists of a running cost in time and a running cost against the control measures. In the optimal control problem we minimize this payoff over both controls. In the differential game problem we minimize the cost with the ordinary controls, assuming the measure controls are chosen to maximize it. We characterize the value functions in both cases using viscosity solution theory by deriving the associated Bellman and Isaacs equations.
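As a generic illustration of the kind of equations referred to (the symbols below are standard placeholders, not necessarily the paper's notation), suppose the dynamics are \(\dot x = f(x,a,b)\) with ordinary control \(a \in A\) and opposing control \(b \in B\), running cost \(\ell\), and discount rate \(\lambda > 0\). A discounted infinite-horizon value function \(u\) then typically satisfies, in the viscosity sense,

% Bellman equation: a single controller a minimizing the cost
\[
  \lambda u(x) \;+\; \sup_{a \in A}\bigl\{ -f(x,a)\cdot Du(x) \;-\; \ell(x,a) \bigr\} \;=\; 0,
\]
% Isaacs equation: a minimizes while b maximizes; the ordering of
% sup and inf depends on which (upper or lower) value is considered
\[
  \lambda u(x) \;-\; \sup_{b \in B}\,\inf_{a \in A}\bigl\{ f(x,a,b)\cdot Du(x) \;+\; \ell(x,a,b) \bigr\} \;=\; 0.
\]

In the setting of the abstract, the maximizing controls are measures rather than ordinary functions, so the paper's actual Hamiltonians will differ in form; the display above is only the standard template such derivations follow.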


Numerical Analysis and Computation | Probability


© 1993. This manuscript version is made available under the CC-BY-NC-ND 4.0 license. The final published version may be accessed at