Music has a unique emotional connection with human beings and serves as a medium for unifying people around the globe. At the same time, it is difficult to generalize musical taste and assert that all individuals will enjoy the same kind of music. Emotion-based music recommendation matters because appropriately chosen music can help reduce stress and anxiety; the goal is to identify the user's emotion accurately in real time and play music that matches the user's mood. The system implemented here extracts facial features, such as the lips and eyes, from a real-time webcam feed and classifies them with a convolutional neural network (CNN). The proposed system recognizes seven human emotions: happy, sad, neutral, fearful, disgusted, angry, and surprised. The detected emotion is then used to recommend music from a trained dataset, unlike previous systems that rely on manually curated playlists. By training the CNN on facial-emotion data and a K-means model on music playlist data, the proposed system becomes more efficient and flexible, providing a straightforward, personalized music recommendation based on the user's emotional state.
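
The following is a minimal sketch of the emotion-classification stage. The paper does not specify the network architecture or input format, so this assumes 48x48 grayscale face crops (the convention of the FER-2013 dataset commonly used for this task), an illustrative layer stack, and OpenCV's Haar cascade for face detection; the label ordering must match however the model was trained.

```python
# Sketch of the CNN emotion-detection stage (illustrative assumptions:
# 48x48 grayscale inputs, this particular layer stack, and Haar-cascade
# face detection; the paper only states that a CNN classifies the face).
import cv2
import numpy as np
from tensorflow.keras import layers, models

# Assumed label order; must match the order used when training the model.
EMOTIONS = ["angry", "disgust", "fearful", "happy", "neutral", "sad", "surprise"]

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=7):
    """Small convolutional classifier for the seven emotion labels."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def detect_emotion(model, frame, face_cascade):
    """Detect the largest face in a webcam frame and classify its emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # pick the largest face
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]
```

At runtime, the cascade can be loaded with `cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")` and frames read in a loop from `cv2.VideoCapture(0)`, feeding each frame to `detect_emotion`.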
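The recommendation stage can be sketched as follows. The feature set (valence, energy, tempo) and the emotion-to-cluster mapping are assumptions for illustration; the paper only states that K-means is trained on music playlist data.

```python
# Sketch of the K-means recommendation stage (the audio features and the
# emotion-to-cluster mapping below are hypothetical; the paper does not
# specify which music features are clustered).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_tracks(features: np.ndarray, n_clusters: int = 7, seed: int = 0):
    """Group tracks into mood clusters based on their audio features."""
    scaler = StandardScaler()
    scaled = scaler.fit_transform(features)  # features: one row per track
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = kmeans.fit_predict(scaled)
    return kmeans, scaler, labels

def recommend(emotion, track_names, labels, emotion_to_cluster, k: int = 5):
    """Sample up to k tracks from the cluster mapped to the detected emotion."""
    cluster = emotion_to_cluster[emotion]
    candidates = [name for name, lab in zip(track_names, labels) if lab == cluster]
    if not candidates:
        return []
    rng = np.random.default_rng()
    picks = rng.choice(candidates, size=min(k, len(candidates)), replace=False)
    return list(picks)
```

One way to build the emotion-to-cluster mapping is to inspect each cluster centroid after fitting (for example, assigning the highest-valence, highest-energy cluster to "happy" and the lowest-valence cluster to "sad"), so the playlist selection stays data-driven rather than manually curated.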