Music has a unique emotional connection with human beings and serves as a medium that unifies people across the globe. At the same time, it is difficult to generalize musical taste and assume that all people enjoy the same kind of music [1]. Emotion-aware music playback matters because it can help reduce stress and anxiety [12]. The goal of the proposed system is to identify the user's emotion accurately in real time and play music that matches their mood. The system implemented here extracts facial features, such as the lips and eyes, from a real-time webcam feed and classifies them with a CNN [13]. It recognizes seven human emotions: happy, sad, neutral, fearful, disgust, angry, and surprise. The detected emotion is then used to recommend music from a trained dataset, unlike earlier systems that rely on manually curated playlists [12]. By training a CNN on facial emotion data and applying K-means clustering to music playlist data, the proposed system becomes more efficient and flexible, providing a simple, personalized music recommendation based on the user's emotional state [13].
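The recommendation step described above (detected emotion → K-means cluster of tracks) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the per-track (valence, energy) features, the emotion-to-target mapping in `EMOTION_TARGETS`, the deterministic cluster initialization, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def kmeans(X, init_idx, iters=50):
    """Plain Lloyd's K-means with a fixed, deterministic initialization
    (illustrative; a real system would use k-means++ or scikit-learn)."""
    centers = X[init_idx].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each track to its nearest cluster center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned tracks.
        for j in range(len(centers)):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

# Hypothetical (valence, energy) features in [0, 1] for six tracks.
tracks = np.array([
    [0.90, 0.80], [0.85, 0.90],   # upbeat tracks
    [0.10, 0.20], [0.15, 0.10],   # mellow tracks
    [0.50, 0.50], [0.55, 0.45],   # neutral tracks
])
centers, labels = kmeans(tracks, init_idx=[0, 2, 4])

# Assumed mapping from a CNN-detected emotion to a target feature point.
EMOTION_TARGETS = {"happy": (0.9, 0.85), "sad": (0.1, 0.15), "neutral": (0.5, 0.5)}

def recommend(emotion):
    """Return indices of tracks in the cluster closest to the emotion's target."""
    target = np.array(EMOTION_TARGETS[emotion])
    j = np.argmin(((centers - target) ** 2).sum(-1))
    return np.where(labels == j)[0].tolist()
```

With these toy features, `recommend("happy")` returns the indices of the upbeat cluster, so the detected facial emotion directly selects which group of tracks to play.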