We consider control problems whose trajectories are driven by ordinary measurable control functions together with controls that are measures. The payoff consists of a running cost in time and a running cost against the control measures. In the optimal control problem, both controls are chosen to minimize this payoff. In the differential game problem, the ordinary controls are chosen to minimize the cost while the measure controls are chosen to maximize it. We characterize the value functions in both cases using viscosity solution theory by deriving the corresponding Bellman and Isaacs equations.
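As a rough illustration of the setup described above (the precise formulation is in the paper; the symbols and forms below are illustrative assumptions, not taken from the source), one may picture a payoff with a running cost in time and a running cost against a control measure, for example

$$
J(t,x;u,\mu) \;=\; \int_t^T f\bigl(s, x(s), u(s)\bigr)\,ds \;+\; \int_{[t,T]} g\bigl(s, x(s)\bigr)\,\mu(ds),
$$

where $u(\cdot)$ is an ordinary measurable control and $\mu$ is a control measure. In the optimal control problem the value function would be of the form $V(t,x) = \inf_{u,\mu} J(t,x;u,\mu)$, characterized as a viscosity solution of a Bellman equation; in the game, $V(t,x) = \inf_{u} \sup_{\mu} J(t,x;u,\mu)$ (in an appropriate nonanticipating-strategy sense), characterized by an Isaacs equation.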
Numerical Analysis and Computation | Probability
E. N. Barron, R. Jensen, and J.-L. Menaldi, Optimal control and differential games with measures, Nonlinear Analysis: Theory, Methods & Applications 21 (1993), 241-268. doi: 10.1016/0362-546X(93)90019-O