Naïve Bayes is a tractable and efficient approach to classification. However, as is common in real classification settings, datasets are often characterized by a large number of features, which may complicate the interpretation of the results as well as slow down the method's execution. In addition, the consequences of misclassification may differ substantially across classes, so it is crucial to control the misclassification rates of the most critical ones. In this work we propose a sparse version of Naïve Bayes in which a variable reduction approach, one that takes the dependencies among features into account, is embedded into the classification algorithm. Moreover, constraints on the performance measures of interest are included in order to ensure that the values attained for each individual performance measure under consideration are controlled. Our findings show that, at a reasonable computational cost, the number of variables is significantly reduced while competitive estimates of the performance measures are obtained.
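To make the idea concrete, the following is a minimal sketch, not the paper's method: it pairs a Gaussian Naïve Bayes classifier with a crude feature filter (keeping the k features whose class means differ most) and then reports per-class accuracy, the kind of individual performance measure the proposed constraints would control. The filter here ignores feature dependencies, which the paper's embedded reduction explicitly accounts for; the data, the value of k, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2 informative features plus 8 noise features
# (sizes and distributions are illustrative, not the paper's setup).
n_per_class, n_noise = 100, 8
X0 = np.hstack([rng.normal(0.0, 1.0, (n_per_class, 2)),
                rng.normal(0.0, 1.0, (n_per_class, n_noise))])
X1 = np.hstack([rng.normal(3.0, 1.0, (n_per_class, 2)),
                rng.normal(0.0, 1.0, (n_per_class, n_noise))])
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Crude sparsification: keep the k features whose class means differ most.
# (A stand-in for the paper's dependency-aware variable reduction.)
k = 2
score = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
keep = np.argsort(score)[-k:]
Xs = X[:, keep]

def fit_gnb(X, y):
    """Estimate per-class mean, variance, and prior for Gaussian NB."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(y))
    return params

def predict_gnb(params, X):
    """Assign each row to the class with the highest log posterior."""
    scores = []
    for mu, var, prior in params.values():
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        scores.append(ll + np.log(prior))
    classes = np.array(list(params))
    return classes[np.argmax(scores, axis=0)]

params = fit_gnb(Xs, y)
pred = predict_gnb(params, Xs)
# Per-class accuracy: the individual measures the constraints would bound.
acc_per_class = {c: (pred[y == c] == c).mean() for c in (0, 1)}
print(sorted(keep.tolist()), acc_per_class)
```

On this toy data the filter recovers the two informative features, illustrating the kind of variable reduction the abstract describes; the paper's contribution is doing this jointly with the classifier while constraining each class's performance measure.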