Online judges (OJs) are popular tools to support programming learning. However, one major issue with OJs is that problems are often added without any associated meta-information that could, for example, be used to classify them. Such meta-information could be extremely valuable in helping users quickly find the types of problems they need most. To address this problem, several OJ administrators have recently begun manually annotating problems with topics drawn from computer science subjects, such as dynamic programming, graphs, and data structures. Initially, these topics were intended to support programming competitions and experienced learners. However, as OJs are increasingly used to support CS1 classes, such topic annotation needs to be extended to suit CS1 learners and instructors. In this work, we propose and validate what is, to the best of our knowledge, the first predictive model that can automatically detect fine-grained problem topics aligned with the CS1 syllabus. After experimenting with several shallow and deep learning models using different word representations based on state-of-the-art NLP techniques, we found that a convolutional neural network (CNN) performed best, achieving an F1-score of 88.9%. We then show how our model can support various applications, including (i) facilitating problem search for CS1 learners and instructors and (ii) integrating the model into a system that recommends problems in OJs.
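
To make the classification setup concrete, the sketch below shows a minimal CNN text classifier over tokenized problem statements, in the spirit of the model described above. It is illustrative only: the vocabulary size, sequence length, embedding dimension, number of topics, and multi-label output are assumptions for demonstration, not the configuration or results reported in the paper.

```python
# Illustrative sketch of a 1D-CNN topic classifier for problem statements.
# All hyperparameters and the multi-label setup are assumed for demonstration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_TOPICS = 10        # assumed number of CS1 topics (e.g., conditionals, loops, strings)
VOCAB_SIZE = 20000     # assumed vocabulary size
MAX_LEN = 300          # assumed maximum statement length in tokens
EMBED_DIM = 100        # assumed word-embedding dimension

def build_cnn_classifier() -> tf.keras.Model:
    """1D convolution over word embeddings, a common architecture for short-text classification."""
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),         # could be initialized with pre-trained vectors
        layers.Conv1D(128, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_TOPICS, activation="sigmoid"),  # sigmoid: a problem may cover several topics
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; real inputs would be tokenized problem statements and topic labels.
    x = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
    y = np.random.randint(0, 2, size=(32, NUM_TOPICS)).astype("float32")
    model = build_cnn_classifier()
    model.fit(x, y, epochs=1, batch_size=8, verbose=0)
    print(model.predict(x[:2]).shape)  # (2, NUM_TOPICS) topic probabilities
```

In such a setup, the predicted per-topic probabilities could be thresholded to tag each problem with one or more CS1 topics, which is what enables downstream uses such as topic-based search and problem recommendation.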