The proliferation of the Internet of Things (IoT) has enabled large-scale, real-time analytics while introducing substantial privacy and security risks stemming from centralized data aggregation. Federated learning (FL) offers a promising alternative in which global models are trained across distributed devices without transferring raw data. However, naive FL pipelines remain vulnerable to gradient inversion, membership inference, and poisoning attacks, and they impose non-trivial communication and energy overheads on constrained devices. This paper proposes FL-ISM, a federated learning–based IoT security model that integrates secure aggregation, calibrated differential privacy, and Byzantine-robust optimization with reputation-aware client selection and communication compression. We formalize the system and threat model, provide privacy and robustness analyses, and evaluate FL-ISM on intrusion and anomaly detection tasks under non-IID data. Results show that FL-ISM maintains competitive accuracy while reducing uplink traffic and substantially mitigating backdoor attacks, thereby advancing deployable privacy-preserving analytics for safety-critical IoT environments.
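To make the client-side pipeline alluded to above more concrete, the following is a minimal sketch assuming a standard DP-SGD-style recipe (L2 clipping plus calibrated Gaussian noise) followed by top-k sparsification for uplink compression; the function name `dp_compress_update` and all hyperparameters are illustrative assumptions rather than FL-ISM's actual mechanism, and the secure-aggregation and Byzantine-robust server-side steps are omitted.

```python
# Illustrative sketch only: one client's update with clipping, calibrated
# Gaussian noise, and top-k sparsification before upload. All names and
# hyperparameters are hypothetical, not the FL-ISM specification.
import numpy as np

def dp_compress_update(delta: np.ndarray,
                       clip_norm: float = 1.0,
                       noise_multiplier: float = 1.1,
                       topk_ratio: float = 0.1,
                       rng: np.random.Generator | None = None) -> np.ndarray:
    """Clip the update to L2 norm `clip_norm`, add Gaussian noise with
    std = noise_multiplier * clip_norm, and keep only the largest-magnitude
    `topk_ratio` fraction of coordinates (the rest are zeroed)."""
    rng = rng or np.random.default_rng()
    # 1) L2 clipping bounds each client's influence (needed for DP accounting).
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / (norm + 1e-12))
    # 2) Gaussian noise calibrated to the clip bound (Gaussian-mechanism style).
    noisy = clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=delta.shape)
    # 3) Top-k sparsification reduces uplink traffic; only k coordinates survive.
    k = max(1, int(topk_ratio * delta.size))
    flat = noisy.ravel().copy()
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(delta.shape)

# Example: a toy 1000-dimensional model delta from one client.
update = dp_compress_update(np.random.default_rng(0).normal(0.0, 0.01, 1000))
print(f"nonzero coordinates uploaded: {np.count_nonzero(update)} / {update.size}")
```

In this sketch the clip bound fixes each client's sensitivity so the added noise can be calibrated to a privacy budget, while the sparsified update is what would feed into secure aggregation on the server side.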