Degree Name

M.S. Computer Science


Department

Computer Science


College of Science & Mathematics


Thesis Advisor

Shen-Shyang Ho, Ph.D.

Committee Member 1

Bo Sun, Ph.D.

Committee Member 2

Ganesh R. Baliga, Ph.D.


Keywords

3D Classification, Deep Learning, Machine Learning, Multi-view, Point Cloud, Sampling


Subjects

Computer vision; Learning classifier systems


Disciplines

Artificial Intelligence and Robotics | Computer Sciences


Abstract

3D classification methods require more training data than 2D image classification methods to achieve good performance. These training data usually take the form of multiple 2D images (e.g., slices in a CT scan) or point clouds (e.g., 3D CAD models) representing volumetric objects. The larger amount of data needed for this higher-dimensional problem comes at the cost of increased processing time and memory, a cost that can be mitigated by reducing the data size (i.e., sampling). In this thesis, we empirically study and compare the classification performance and deep learning training time of PointNet using uniform random sampling and farthest point sampling, SampleNet, which reduces the input using a weighted average of nearest-neighbor points, and the Multi-view Convolutional Neural Network (MVCNN). Contrary to recent research claiming that SampleNet outperforms the simple sampling approaches used by PointNet, our experimental results show that SampleNet may not significantly reduce processing time while also achieving poorer classification performance. Additionally, reducing the resolution of the views in MVCNN yields poor accuracy compared to reducing the number of views. Overall, our experiments show that the simple sampling approaches used by PointNet, as well as simple view reduction for a multi-view classifier, can maintain accuracy while decreasing processing time for the 3D classification task.
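Farthest point sampling, one of the two simple strategies studied with PointNet, greedily picks points that are maximally spread out over the cloud. The sketch below is a generic NumPy illustration of that idea, not the thesis's actual implementation; the function name, array shapes, and starting-point choice are assumptions:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily select k points, each maximizing its distance
    to the set of points already chosen.

    points: (N, 3) array of 3D coordinates; k: sample size to keep.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = np.empty(k, dtype=int)
    chosen[0] = rng.integers(n)  # arbitrary starting point (an assumption)
    # Distance from every point to its nearest already-chosen point.
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    for i in range(1, k):
        chosen[i] = int(np.argmax(dists))  # farthest remaining point
        new_d = np.linalg.norm(points - points[chosen[i]], axis=1)
        dists = np.minimum(dists, new_d)   # update nearest-chosen distances
    return points[chosen]

# Uniform random sampling, by contrast, is a one-liner:
# rng.choice(points, size=k, replace=False)
cloud = np.random.default_rng(1).random((1000, 3))
sampled = farthest_point_sampling(cloud, 64)
print(sampled.shape)  # (64, 3)
```

Both strategies reduce an N-point cloud to k points before classification; farthest point sampling costs O(Nk) distance computations but covers the shape more evenly than uniform random sampling.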