This article describes a heart disease prediction system that combines Federated Learning (FL) with Explainable AI (XAI) techniques. FL trains models across decentralized healthcare datasets while keeping patient data local to each institution, which makes it well suited to collaborative research where sensitive records cannot be centralized. To make the resulting predictions transparent and trustworthy, we integrate XAI methods such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations), which explain how the model reaches its conclusions and highlight the features that contribute most to heart disease risk.

The implementation relies on Python-based frameworks: PySyft for Federated Learning and Scikit-learn for model building, with Random Forests and Logistic Regression serving as the base models. The data comes from the UCI Heart Disease Dataset, with feature engineering and normalization applied before training. Performance is evaluated with accuracy, F1-score, and AUC-ROC, and interpretability is assessed with SHAP and LIME.

The result is a privacy-preserving, interpretable model for heart disease prediction that gives healthcare providers not only predictions but also clear insight into how those predictions are made. The main advantage of combining FL and XAI is that high accuracy and data privacy are maintained while the model's reasoning remains actionable and understandable for medical professionals.
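To make the local training and evaluation pipeline concrete, the sketch below normalizes features, fits the two base models, and reports accuracy, F1-score, and AUC-ROC with standard Scikit-learn calls. The synthetic data from `make_classification` is only a stand-in with the same shape as the 13 UCI Heart Disease features; in practice it would be replaced by the preprocessed UCI table.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 13 UCI Heart Disease features (age, chol, thalach, ...).
X, y = make_classification(n_samples=900, n_features=13, n_informative=8,
                           random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Normalization: fit the scaler on the training split only to avoid leakage.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f} "
          f"AUC-ROC={roc_auc_score(y_test, proba):.3f}")
```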
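Because the PySyft API changes between releases, the next sketch does not reproduce a specific worker interface; instead it illustrates the federated averaging idea that the article relies on: each institution trains on its own partition, only model parameters leave the site, and a coordinator averages them. The three-way split, the sample-count weighting, and the manual sigmoid scoring are illustrative assumptions, not the article's exact setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Same synthetic stand-in for the heart-disease features as in the previous sketch.
X, y = make_classification(n_samples=900, n_features=13, n_informative=8,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Split the training rows into three partitions, one per "institution";
# in a real deployment each partition never leaves its hospital.
partitions = np.array_split(np.arange(len(X_train)), 3)

coefs, intercepts, sizes = [], [], []
for idx in partitions:
    local_model = LogisticRegression(max_iter=1000)
    local_model.fit(X_train[idx], y_train[idx])   # training stays local
    coefs.append(local_model.coef_.ravel())
    intercepts.append(local_model.intercept_[0])
    sizes.append(len(idx))

# Federated averaging: only parameters, never patient rows, are aggregated.
weights = np.array(sizes) / sum(sizes)
global_coef = np.average(np.vstack(coefs), axis=0, weights=weights)
global_intercept = np.average(intercepts, weights=weights)

# Score the aggregated model on the held-out set with a manual sigmoid.
scores = 1.0 / (1.0 + np.exp(-(X_test @ global_coef + global_intercept)))
print(f"federated AUC-ROC: {roc_auc_score(y_test, scores):.3f}")
```

In an actual PySyft deployment the parameter exchange happens over the network between remote workers and the coordinator, but the aggregation logic is the same as above.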
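For interpretability, both SHAP and LIME operate on an already-trained model. The sketch below assumes a Random Forest trained as in the first sketch and uses the standard `shap.TreeExplainer` and `lime.lime_tabular.LimeTabularExplainer` entry points; the generic `feature_0 ... feature_12` names are placeholders for the UCI columns, and the branch on the shape of the SHAP output is only there because different SHAP versions return the per-class values differently.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the preprocessed heart-disease table, as before.
X, y = make_classification(n_samples=900, n_features=13, n_informative=8,
                           random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholders for UCI columns
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

# SHAP: global view of which features drive the model's predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
if isinstance(shap_values, list):      # older SHAP: list of per-class arrays
    positive = shap_values[1]
elif shap_values.ndim == 3:            # newer SHAP: (samples, features, classes)
    positive = shap_values[:, :, 1]
else:
    positive = shap_values
mean_abs = np.abs(positive).mean(axis=0)
for i in np.argsort(mean_abs)[::-1][:5]:
    print(f"{feature_names[i]}: mean |SHAP| = {mean_abs[i]:.4f}")

# LIME: local explanation for a single patient-level prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["no disease", "disease"], mode="classification")
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())           # top feature contributions for this patient
```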