Access Type

Open Access Dissertation

Date of Award

January 2021

Degree Type


Degree Name

Department

Industrial and Manufacturing Engineering

First Advisor

Ekrem A. Murat


Abstract

A typical decision problem optimizes one or more objectives subject to a set of constraints on its decision variables. Most real-world decision problems contain uncertain parameters. The exponential growth of data availability, readily accessible computational power, and more efficient optimization techniques have paved the way for machine learning tools to predict these uncertain parameters effectively. Traditional machine learning models measure prediction quality by the closeness between true and predicted values, ignoring the downstream decision problems in which the predicted values are treated as the true values. Standard approaches that pass point estimates from machine learning models into decision problems as replacements for the uncertain parameters lose the connection between the predictive and prescriptive tasks. Recently developed methods to strengthen this bond still rely on either a "first predict, then optimize" strategy or approximation techniques for integrating the predictive and prescriptive tasks.
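The "first predict, then optimize" pipeline critiqued above can be sketched as follows. This is a minimal illustrative example, not the dissertation's own model: the newsvendor decision problem, the linear model, and all data are hypothetical, chosen only to show how the predictive step is trained on prediction error while the decision is ultimately judged against the true parameter value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (predict): fit a linear model x -> demand by least squares,
# ignoring the downstream decision problem entirely.
n, p = 200, 3
X = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.0, 0.5])
demand = X @ true_w + 10.0 + rng.normal(scale=0.5, size=n)
A = np.column_stack([X, np.ones(n)])              # add an intercept column
w_hat, *_ = np.linalg.lstsq(A, demand, rcond=None)

def predict(x):
    """Point estimate of the uncertain parameter (demand)."""
    return np.append(x, 1.0) @ w_hat

# Step 2 (optimize): treat the point prediction as if it were the true
# demand. With a known demand d, the newsvendor's optimal order is q = d.
def prescribe(x):
    return max(predict(x), 0.0)

# The decision is evaluated against the *true* demand -- a loss the
# predictive step never saw, which is the disconnect described above.
def newsvendor_cost(q, d, cu=4.0, co=1.0):
    return cu * max(d - q, 0.0) + co * max(q - d, 0.0)

x_new, d_new = rng.normal(size=p), 9.0
q = prescribe(x_new)
cost = newsvendor_cost(q, d_new)
```

Note that a more accurate predictor does not necessarily yield a lower decision cost here, because the asymmetric underage/overage costs never enter the fitting step.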

We develop an integrated framework for performing predictive and prescriptive analytics concurrently to realize the best prescriptive performance under uncertainty. This framework is applicable to all prescriptive tasks involving uncertainty. Further, it is scalable to handle integrated predictive and prescriptive tasks with reasonable computational effort and enables users to apply decomposition algorithms for large-scale problems. The framework also accommodates prediction tasks ranging from simple regression to more complex black-box neural network models.

The integrated optimization framework comprises two integration approaches. The first integrates a regression-based prediction task and a mathematical-programming-based prescription task as a bilevel program. While the lower-level problem prescribes decisions based on the predicted outcome for a specific observation, the upper-level problem evaluates the quality of those decisions with respect to the true values. The upper-level objective can therefore be viewed as a prescriptive error, and the goal is to minimize this prescriptive error. To achieve comparable performance on external (test) data sets and internal (training) data sets, we offer several approaches to control the "prescription generalization error" associated with out-of-sample observations. We develop a decomposition algorithm for large-scale problems by leveraging progressive hedging to solve the resulting bilevel formulation. The second approach integrates the learning of a neural-network-based prediction task and an optimization task as a nested neural network. While the predictive neural network proposes decisions based on predicted outcomes, the prescriptive neural network evaluates the quality of the predicted decisions with respect to the true values. We also propose a weight-initialization process for nested neural networks and build a decomposition algorithm for large-scale problems.
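The core integrated idea — train the predictor by back-propagating the prescriptive error, i.e., the cost of the resulting decision evaluated at the true value — can be sketched in a few lines. Everything here is an assumed toy setup, not the dissertation's formulation: a linear "predictive network", a smooth hypothetical decision rule q = 0.9z standing in for the lower-level/prescriptive stage, and a quadratic prescriptive cost, chosen so the chain rule through the decision rule is visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = rng.normal(size=(n, p))
w_true = np.array([1.5, -2.0, 0.7])
y = X @ w_true + rng.normal(scale=0.1, size=n)  # true uncertain parameter

w = np.zeros(p)   # weights of the linear "predictive network"
lr = 0.05

def decision(z):
    # Hypothetical smooth prescriptive rule mapping predictions to
    # decisions; a placeholder for the lower-level/prescriptive stage.
    return 0.9 * z

for _ in range(500):
    z = X @ w                  # predictions
    q = decision(z)            # prescribed decisions
    # Upper-level prescriptive error: cost of q against the TRUE y,
    # back-propagated through the decision rule into the predictor.
    grad_q = 2.0 * (q - y) / n
    grad_z = grad_q * 0.9      # chain rule through decision(z)
    w -= lr * X.T @ grad_z

presc_error = np.mean((decision(X @ w) - y) ** 2)
```

The predictor is never fitted to minimize prediction error directly; it learns whatever predictions make the downstream decisions cheap, which is the concurrent predictive-prescriptive training the framework formalizes.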

Our results on example problems validate the performance of the proposed integrated predictive and prescriptive optimization and training frameworks. On custom-generated synthetic data sets, the proposed methods surpass the "first predict, then optimize" approaches and recently developed approximate integration methods on both in-sample and out-of-sample data sets. We also observe how the proposed approach to controlling the generalization error improves results on out-of-sample data sets. Custom-generated synthetic data pairs at different levels of correlation and non-linearity show graphically how the different methods converge to one another.