This study examined how varying levels of artificially injected label and feature noise affect the classification performance of a deep neural network (DNN) in predicting political orientation from ordinal survey data. Using a Dutch political dataset comprising 19 variables related to climate issues, social welfare, and immigration, controlled levels of label and feature noise (5–50%) were systematically injected to simulate corrupted data conditions. The DNN’s performance was compared with logistic regression and random forest baselines using accuracy and macro-F1. Results showed that while the DNN outperformed the simpler baselines under clean and low-noise conditions, its advantage narrowed as the noise level increased, and classification accuracy declined consistently with noise volume, confirming that data corruption impairs the DNN’s learning ability. Label noise had a stronger negative impact than feature noise because it directly corrupts the learning target, whereas feature noise alters input variability and thereby increases the effective rank of the data. Analysis of the data’s effective rank revealed that feature noise increased rank dispersion but did not induce benign overfitting, suggesting that the dataset’s effective rank and signal-to-noise ratio were insufficient for noise-tolerant generalization. This study contributes to the theoretical understanding of DNN behavior under noisy supervision and suggests that improving label reliability and feature diversity may enhance noise tolerance and preserve robust classification performance in survey-based classification tasks.
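The abstract does not specify the exact corruption scheme, but the two noise types and the effective-rank measure it describes can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names, the uniform random-flip label noise, the uniform random-replacement feature noise, and the entropy-based effective-rank definition (one common choice in the literature) are all assumptions, as are the class count (3) and the 5-point ordinal scale.

```python
import numpy as np

def inject_label_noise(y, rate, n_classes, rng):
    """Flip a `rate` fraction of labels to a different class, chosen uniformly."""
    y = y.copy()
    n_flip = int(round(rate * len(y)))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Adding a shift in 1..n_classes-1 (mod n_classes) guarantees the new label differs.
    shifts = rng.integers(1, n_classes, size=n_flip)
    y[idx] = (y[idx] + shifts) % n_classes
    return y

def inject_feature_noise(X, rate, levels, rng):
    """Replace a `rate` fraction of ordinal feature entries with random levels."""
    X = X.copy()
    mask = rng.random(X.shape) < rate
    X[mask] = rng.integers(0, levels, size=mask.sum())
    return X

def effective_rank(X):
    """Entropy-based effective rank: exp of the Shannon entropy of the
    normalized singular-value spectrum of the centered data matrix."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]  # guard against log(0)
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=1000)         # e.g. 3 political-orientation classes (assumed)
X = rng.integers(0, 5, size=(1000, 19))   # 19 ordinal survey items on a 5-point scale

y_noisy = inject_label_noise(y, 0.20, n_classes=3, rng=rng)  # 20% label noise
X_noisy = inject_feature_noise(X, 0.20, levels=5, rng=rng)   # 20% feature noise
```

Under this sketch, label noise corrupts exactly the requested fraction of targets, while feature noise perturbs entries of the input matrix and so changes its singular-value spectrum, which is what `effective_rank` summarizes.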