Access Type
WSU Access
Date of Award
January 2022
Degree Type
Dissertation
Degree Name
Ph.D.
Department
Electrical and Computer Engineering
First Advisor
Mohammed Ismail
Second Advisor
Mohammed Alhawari
Abstract
The increasing demand for high performance and energy efficiency in Artificial Neural Network (ANN) and Deep Learning (DL) accelerators has driven a wide range of application-specific integrated circuits (ASICs). In addition, the rapid deployment of low-power IoT devices requires highly efficient computing. In recent years, the field has begun to deviate from conventional digital implementations of machine learning (ML) accelerators; instead, researchers have investigated implementations in other domains, for two main reasons: (a) better performance and (b) lower power consumption. An emerging trend is to employ time-domain (TD) circuits to implement the multiply-accumulate (MAC) operation. TD accelerators leverage both digital and analog features, thereby enabling energy-efficient computing that scales with CMOS technology. This work reviews state-of-the-art TD accelerators and discusses system considerations and hardware implementations, including the spatially unrolled (SU) and recursive (REC) TD architectures. It also analyzes the energy and area efficiency of the TD architectures for varying input resolutions and network sizes, providing designers with insight into how to choose the appropriate TD approach for a particular application. Furthermore, it presents our proposed scalable SU-TD architecture, synthesized in 65nm CMOS technology, with an efficient digital-to-time converter (DTC) based on a laddered inverter (LI) circuit that consumes 3× less power than an inverter-based DTC. The proposed TD core achieves 116 TOPS/W and occupies 0.201 mm². Compared to prior time-domain accelerators, it improves energy efficiency by 2.4–47× and area efficiency by 6.4–74×. Analog computation is another approach that offers outstanding energy efficiency with real-time parallel processing and learning.
This is mainly due to emerging analog memory technologies that enable local storage and processing. The analog-domain approach faces several challenges, however: it is susceptible to variation and noise, and analog cores require digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), which can account for up to 64% of total energy consumption. This motivates the need for energy- and area-efficient ADC designs. Finally, the work presents our proposed programmable-precision monotonic SAR-ADC in 22nm FDSOI for analog computation. At full precision, the proposed ADC incurs no additional area or energy cost.
Recommended Citation
Al-Maharmeh, Hamza, "Energy-Efficient Mixed-Signal Techniques For Artificial Neural Network Accelerators In Edge Computing" (2022). Wayne State University Dissertations. 3748.
https://digitalcommons.wayne.edu/oa_dissertations/3748