Date Approved

6-6-2023

Embargo Period

6-8-2023

Document Type

Thesis

Degree Name

Master of Science in Computer Science

Department

Computer Science

College

College of Science & Mathematics

Advisor

Nancy Tinkham, Ph.D.

Committee Member 1

Shen-Shyang Ho, Ph.D.

Committee Member 2

Ning Wang, Ph.D.

Keywords

Collaborative computing, Computation offloading, Deep neural network, Input partition, IoT devices, Split layer

Subject(s)

Internet of things; Neural networks (Computer science)

Disciplines

Computer Sciences | Mathematics

Abstract

In recent years, advances in Internet-of-Things (IoT) and Deep Neural Network (DNN) technologies have significantly increased the accuracy and speed of a variety of smart applications. However, one barrier to deploying DNNs on IoT devices is the limited computational capacity of those devices compared with the computationally expensive task of DNN inference. Computation offloading addresses this problem by offloading DNN computation tasks to cloud servers. In this thesis, we propose a collaborative computation offloading solution in which some of the work is done on the IoT device and the remainder is done by the cloud server. This collaborative approach has two components. First, the input image to the DNN is partitioned into multiple pieces, allowing the pieces of the image to be processed in parallel and speeding up inference. Second, the DNN is split between two of its layers, so that layers before the split point are processed on the IoT device and layers after the split point are processed by the cloud server. We investigated several strategies for partitioning the image and splitting the DNN, and we evaluated the results on several commonly used DNNs: LeNet-5, AlexNet, and VGG-16. The results show that collaborative computation offloading sped up inference on IoT devices by 35-40% compared with non-collaborative methods.
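To make the split-layer component concrete, the following is a minimal sketch, not code from the thesis: it cuts a PyTorch AlexNet at a hypothetical split index, runs the layers before the split as the "IoT device" side, and runs the remaining layers as the "cloud server" side. The use of torchvision's AlexNet, the split index, and the function names are illustrative assumptions; a real deployment would serialize the intermediate activation and transmit it over the network between the two parts.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(weights=None).eval()

# Flatten AlexNet into an ordered list of layers so it can be cut anywhere.
layers = (list(model.features)
          + [model.avgpool, nn.Flatten()]
          + list(model.classifier))

split_point = 5  # hypothetical split index; the thesis evaluates several choices

device_part = nn.Sequential(*layers[:split_point])  # runs on the IoT device
server_part = nn.Sequential(*layers[split_point:])  # runs on the cloud server

def offloaded_inference(image: torch.Tensor) -> torch.Tensor:
    """Run the front of the network locally, then hand off the rest."""
    with torch.no_grad():
        intermediate = device_part(image)  # on-device computation
        # In a real system the intermediate tensor would be serialized and
        # sent to the server here; this sketch simply calls the server part.
        return server_part(intermediate)   # server-side computation

output = offloaded_inference(torch.randn(1, 3, 224, 224))
print(output.shape)  # torch.Size([1, 1000])
```

The input-partitioning component described in the abstract would additionally tile the image before the device-side layers; because convolutional receptive fields span tile boundaries, the tiles would need to overlap, a detail this sketch omits.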
