Access Type

WSU Access

Date of Award

January 2023

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Computer Science

First Advisor

Marco Brocanelli

Abstract

Current Mobile Augmented Reality (MAR) apps incorporate resource-intensive tasks, including rendering high-quality virtual objects (AR tasks) and performing AI model inference (AI tasks) to analyze the environment. However, mobile devices still lack the required battery life, memory capacity, and on-board computational resources, posing challenges for delivering a seamless user experience. Previous studies have proposed methods to reduce GPU power consumption in AR tasks, primarily using static techniques such as simplifying virtual object geometry (e.g., reducing polygon counts); these techniques ignore the unique characteristics of each object, increase storage demands, and place an additional burden on app developers. In addition, current MAR apps that integrate AI capabilities encounter resource usage imbalances between AI and AR tasks. This imbalance arises primarily because both task types concurrently use device hardware resources, such as the CPU and GPU, which degrades overall system performance and leads to either increased AI inference time or reduced virtual object quality. Previous research has addressed concurrent resource utilization in AI tasks through dynamic task reallocation. However, these approaches have yielded only limited improvements in average AI inference time, primarily because they do not account for optimizations in AR rendering tasks and their influence on system performance. Some research has addressed the limited computing power of mobile devices by utilizing cloud/edge computing resources, particularly when edge servers can offer better end-to-end latency. However, current task allocation methods for edge computing offloading do not account for the unique resource needs of individual tasks or explore the potential for data sharing among them.
In reality, many AI tasks within MAR apps operate on the same camera frames, which unlocks an opportunity to optimize network resource usage and reduce energy consumption by considering data sharing among tasks during edge allocation. In this dissertation, we address the challenges of energy efficiency, system performance, and task allocation to servers in MAR apps by leveraging both on-device and edge computing techniques. Our contribution involves the development of comprehensive frameworks for MAR apps that balance the quality of virtual objects, mobile device power usage, and the inference time of on-device AI tasks. Additionally, we have extended an existing data-sharing-aware algorithm to optimize the task allocation rate, thereby maximizing the profit of compute-intensive tasks allocated among edge servers while limiting the growth of network data size.