Date of Award
5-17-2025
Document Type
Masters Project
Abstract
Identifying the two active predictors driving a response variable is critical in fields like genomics, medicine, and finance, yet standard methods, such as penalized regression, often fail to isolate these pairs consistently. We propose Dynamic Pairwise Sparse Tuning (DPST), a novel feedforward neural network (FNN) method that enhances sparse predictor selection by augmenting standard backpropagation with custom weight updates using adaptive thresholding, smoothed refinement, and pruning. Through simulations across predictor counts (P = 3, 4, 5) and sample sizes (N = 1800, 3600) using a controlled sparse coefficient matrix defining pair relationships, DPST consistently outperforms a static FNN baseline of our own design that relies on backpropagation alone. For instance, DPST achieves an accuracy of 0.732 versus 0.609 at P = 5, N = 3600, across C = C(P, 2) candidate pairs (3, 6, 10 for P = 3, 4, 5). The baseline excels at P = 3, N = 3600 (accuracy 0.960 vs. 0.684), where DPST's updates limit generalization, and trains faster (e.g., 3.57 s vs. 20.66 s). DPST's precision suits applications like gene-pair detection and financial risk modeling, while the static baseline supports rapid analyses. Our results highlight DPST's potential to advance sparse modeling.
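The abstract describes DPST's update rule only at a high level: a standard backpropagation step followed by adaptive thresholding, smoothed refinement, and pruning. As a minimal illustrative sketch (not the author's implementation; the function name, threshold rule, and constants are all hypothetical), one DPST-style step on a weight vector might look like this, using soft-thresholding as the smoothed shrinkage:

```python
import numpy as np

def dpst_style_update(w, grad, lr=0.01, tau_scale=0.5, prune_eps=1e-3):
    """One hypothetical DPST-style step: gradient descent, then
    adaptive soft-thresholding, then pruning of near-zero weights."""
    w = w - lr * grad                      # standard backpropagation step
    tau = tau_scale * np.mean(np.abs(w))   # adaptive threshold, scaled to current weight magnitude
    w = np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)  # smoothed (soft) shrinkage toward zero
    w[np.abs(w) < prune_eps] = 0.0         # prune negligible weights outright
    return w

# Toy usage: P = 5 input weights where one pair dominates; the small
# entries are shrunk and pruned, leaving a sparse two-predictor pair.
rng = np.random.default_rng(0)
w = np.array([2.0, -1.8, 0.05, -0.04, 0.02])
grad = rng.normal(scale=0.01, size=5)
w_new = dpst_style_update(w, grad)
print(np.count_nonzero(w_new))  # → 2
```

The adaptive threshold here ties the shrinkage strength to the current weight scale, so strong pairwise signals survive while weak predictors are driven to exact zero, which is the selection behavior the abstract attributes to DPST.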
Recommended Citation
Azadda, Raymond Dacosta, "Dynamic Pairwise Sparse Tuning (DPST) vs. Static two-predictor selection: a neural network approach" (2025). Mathematics and Statistics. 68.
https://ualaska.researchcommons.org/uaf_grad_math_stats/68
Handle
http://hdl.handle.net/11122/16268