Date Approved


Embargo Period


Document Type


Degree Name

Master of Science (M.S.)

Department

Computer Science

College

College of Science & Mathematics

Thesis Advisor

Silvija Kokalj-Filipovic, Ph.D.

Committee Member 1

Anthony Breitzman, Ph.D.

Committee Member 2

Patrick McKee, M.S.

Keywords

adversarial attack; autoencoder; data compression; neural network; radio frequency; signal processing

Subjects

Data compression (Telecommunication); Neural Networks (Computer Science)

Disciplines

Computer Sciences | Physical Sciences and Mathematics

Abstract

In the era of vast data processing and transmission, sending data over a channel for downstream operations is commonplace. The bandwidth of the channel limits this operation, capping the amount of data that can be sent in a given time period. Therefore, in addition to pursuing advancements in networking technology, there is a need for more efficient means of data compression. Learned compression is the application of machine learning models to the data compression problem; in this study, we leverage the ability of neural networks to learn the underlying structure of the training data to perform more informed compression, achieving a greater compression ratio than algorithmic data compression. Specifically, we analyze the efficacy of a model known as the Hierarchical Quantized Autoencoder (HQA) for lossy data compression across various datasets. This model adds a novel hierarchical architecture to the quantized autoencoder, the current standard for learned data compression, which not only allows for a higher compression ratio but also adds flexibility to the model, as the desired compression ratio can be set at inference time. We evaluate the performance of this model across different image datasets and propose a new model with the same structure as HQA but with a modified encoder, decoder, and loss function suited to compressing radio frequency (RF) data. We find that, using these models, we can achieve a high compression ratio with minimal sacrifice to the performance of the downstream task.
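To illustrate the basic idea behind quantized-autoencoder compression and how a compression ratio is computed, the sketch below uses a random linear projection as a stand-in encoder and its pseudo-inverse as a stand-in decoder, with uniform 4-bit scalar quantization of the latent. This is a toy illustration only, not the HQA architecture described above; all dimensions and bit widths are illustrative choices, and a real model would use learned neural encoder/decoder networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input batch: 8 samples of 64-dim data (e.g., flattened 8x8 patches).
x = rng.normal(size=(8, 64)).astype(np.float32)

# Stand-in linear encoder: project 64 input dims down to a 16-dim latent.
# (An actual HQA uses learned networks here, not a random projection.)
W = rng.normal(size=(64, 16)) / np.sqrt(64)
z = x @ W  # latent codes, shape (8, 16)

# Uniform scalar quantization of the latent to 4 bits per dimension.
bits = 4
levels = 2 ** bits
lo, hi = z.min(), z.max()
q = np.round((z - lo) / (hi - lo) * (levels - 1))  # integer codes in [0, 15]
z_hat = q / (levels - 1) * (hi - lo) + lo          # dequantized latent

# Stand-in decoder: least-squares pseudo-inverse of the encoder projection.
x_hat = z_hat @ np.linalg.pinv(W)

# Compression ratio: bits of the original input vs. bits actually transmitted.
orig_bits = x.size * 32        # float32 input
code_bits = q.size * bits      # quantized latent codes
ratio = orig_bits / code_bits  # here: (512 * 32) / (128 * 4) = 32x
mse = float(np.mean((x - x_hat) ** 2))
print(f"compression ratio: {ratio:.1f}x, reconstruction MSE: {mse:.4f}")
```

Note how the ratio comes from two independent knobs, the latent dimensionality and the bits per latent value; the hierarchical structure of HQA is what lets the effective ratio be chosen at inference time rather than fixed at training time.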