Abstract:
The proliferation of fake news threatens the integrity of information ecosystems, creating a pressing need for effective and interpretable detection mechanisms. Recent advances in machine learning, particularly with transformer-based models, offer promising solutions due to their superior ability to analyze complex language patterns. However, the practical implementation of these solutions often presents challenges due to their high computational costs and limited interpretability. In this work, we explore using content-based features to enhance the explainability and effectiveness of fake news detection. We propose a comprehensive feature framework encompassing characteristics related to linguistic, affective, cognitive, social, and contextual processes. This framework is evaluated across several public English datasets to identify key differences between fake and legitimate news. We assess the detection performance of these features using various traditional classifiers, including single and ensemble methods, and analyze how feature reduction affects classifier performance. Our results show that, while traditional classifiers may not fully match transformer-based models, they achieve competitive results with significantly lower computational requirements. We also provide an interpretability analysis highlighting the most influential features in classification decisions. This study demonstrates the potential of interpretable features to build efficient, explainable, and accessible fake news detection systems.