Access Type
Open Access Dissertation
Date of Award
January 2014
Degree Type
Dissertation
Degree Name
Ph.D.
Department
Mathematics
First Advisor
George Yin
Abstract
This dissertation focuses on stability analysis and optimal controls for stochastic dynamic systems. It consists of two parts. The first part gives an in-depth study of the stability of linear jump diffusions, linear Markovian jump diffusions, multi-dimensional jump diffusions, and regime-switching jump diffusions, together with the associated numerical solutions. The second part concerns controls for stochastic dynamic systems; specifically, we concentrate on mean-variance types of control under different formulations. We obtain nearly optimal mean-variance controls under both two-time-scale and hidden Markov chain formulations, and convergence is established in each case.
In Chapter 2, stability analysis of a benchmark linear scalar jump diffusion is studied first. We present conditions for exponential p-stability and almost sure exponential stability of the stochastic differential equation and of its numerical solutions. Note that, owing to the use of Poisson processes, asymptotic expansions as in the usual approach to diffusion processes no longer work. In contrast to existing treatments of Euler-Maruyama methods for solutions of stochastic differential equations, techniques from stochastic approximation are employed in our work. A similar analysis is carried out for Markovian jump diffusions and multi-dimensional jump diffusions. Beyond these, we give a thorough study of regime-switching jump diffusions, in which asymptotic stability in the large and exponential p-stability are examined. The connection between almost sure exponential stability and exponential p-stability is exploited, necessary conditions for exponential p-stability are derived, and criteria for asymptotic stability in distribution are provided.
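To fix ideas, the sketch below simulates a scalar linear jump diffusion of the generic form dX(t) = a X(t) dt + b X(t) dw(t) + c X(t-) dN(t) with an Euler-Maruyama scheme. This is a minimal illustration only; the model form, the coefficients a, b, c, and the jump intensity lam are assumptions made for the example, not quantities taken from the dissertation.

    import numpy as np

    def euler_maruyama_jump(a, b, c, lam, x0, T, n):
        """Euler-Maruyama path for dX = a*X dt + b*X dw + c*X(t-) dN.

        lam is the Poisson jump intensity. All parameters are
        hypothetical placeholders, not values from the dissertation.
        """
        dt = T / n
        rng = np.random.default_rng(0)
        x = np.empty(n + 1)
        x[0] = x0
        for k in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
            dN = rng.poisson(lam * dt)          # Poisson increment
            x[k + 1] = x[k] + a * x[k] * dt + b * x[k] * dw + c * x[k] * dN
        return x

    # Illustrative run with a stabilizing drift.
    path = euler_maruyama_jump(a=-1.0, b=0.5, c=-0.2, lam=2.0, x0=1.0, T=10.0, n=10_000)
    print(path[-1])

Checking whether sample averages of |X(t)|^p decay exponentially along such simulated paths is one simple empirical counterpart of the exponential p-stability conditions studied in the chapter.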
In Chapter 3, we work on the well-known mean-variance problem in which a switching process (say, the market regime) is embedded. We first use a two-time-scale formulation to treat the underlying system, where the time-scale separation is represented by a small parameter. As the small parameter goes to 0, we obtain a limit problem. Using the limit problem as a guide, we construct controls for the original problem and show that the controls so constructed are nearly optimal.
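In the two-time-scale literature, the small parameter typically enters through the generator of the switching process; the display below records one standard form of that setup for orientation. The decomposition and the notation are assumptions made for illustration, not taken from the dissertation itself.

    \[
    Q^{\varepsilon} \;=\; \frac{\widetilde{Q}}{\varepsilon} \;+\; \widehat{Q},
    \]

Here $Q^{\varepsilon}$ generates the switching process $\alpha^{\varepsilon}(t)$, $\widetilde{Q}$ governs the fast transitions, and $\widehat{Q}$ the slow ones. As $\varepsilon \to 0$, the fast states are averaged out with respect to their stationary distributions, which produces the limit mean-variance problem used to build nearly optimal controls.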
In Chapter 4, we revisit the mean-variance control problem in which the switching process is a hidden Markov chain. Instead of assuming full knowledge of the switching process, we assume that only a noisy observation of the switching process, corrupted by white noise, is available, and we focus on minimizing the variance subject to a fixed terminal expectation. Using the Wonham filter, we first convert the partially observable system into a completely observable one. Because closed-form solutions are virtually impossible to obtain, our main effort is devoted to designing a numerical algorithm, and convergence of the algorithm is obtained.
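As a rough illustration of the filtering step (a sketch, not the dissertation's actual algorithm), the following code propagates a discretized Wonham filter for a hidden Markov chain observed in white noise. The generator Q, the observation drifts g, and the noise intensity sigma below are hypothetical.

    import numpy as np

    def wonham_filter(dy, dt, Q, g, sigma):
        """Discretized Wonham filter for a hidden Markov chain.

        dy    : observation increments (array of length n)
        Q     : generator of the hidden chain (m x m)
        g     : observation drift in each state (length m)
        sigma : observation noise intensity
        Returns posterior state probabilities, shape (n + 1, m).
        All inputs are illustrative assumptions.
        """
        m = len(g)
        p = np.full(m, 1.0 / m)        # uniform prior over states
        out = [p.copy()]
        for d in dy:
            gbar = g @ p               # filtered observation drift
            dp = Q.T @ p * dt + p * (g - gbar) * (d - gbar * dt) / sigma**2
            p = np.clip(p + dp, 1e-12, None)
            p /= p.sum()               # renormalize to a probability vector
            out.append(p.copy())
        return np.array(out)

    # Two-state example with synthetic observations from a chain held in state 0.
    rng = np.random.default_rng(1)
    Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
    g = np.array([0.1, -0.1])
    dt, n, sigma = 0.01, 1000, 0.2
    dy = g[0] * dt + sigma * rng.normal(0.0, np.sqrt(dt), n)
    print(wonham_filter(dy, dt, Q, g, sigma)[-1])

Clipping and renormalizing after each step is a common practical safeguard, since the raw Euler step of the filter equation need not keep the probabilities nonnegative.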
Recommended Citation
Yang, Zhixin (harriet), "Stability And Controls For Stochastic Dynamic Systems" (2014). Wayne State University Dissertations. 1061.
https://digitalcommons.wayne.edu/oa_dissertations/1061