Real Time Sign Language Recognition

Communication is the foundation of society because it allows people to express their thoughts and ideas. Communicating with one's peers is vital to human behavior, which is why, when a disability prevents people from doing so, methods such as sign language step in and create a new way to bring them back into the circle of communication. However, the introduction of sign language creates a gap between people who are proficient in the language and those who are not. This is the problem our project aims to address. Specifically, we propose to build a real-time sign language translator that uses an ordinary laptop video camera, and to improve its performance with parallel processing technologies when they are available. The objective is to allow people with speaking disabilities to communicate with people who do not necessarily understand sign language.

Several milestones will together lead to the completion of the project: image acquisition, image processing, parallel processing, and machine learning. Image acquisition and image processing are specific, concrete stages that will be completed only once. Parallel processing and machine learning will span the entire project, so they are better understood as ongoing efforts than as concrete milestones. This project will draw on a variety of technologies and techniques borrowed from different fields, ranging from signal processing, mathematics, and physics to computer programming, sociology, and communication theory.
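
To make the planned pipeline concrete, the sketch below illustrates how the milestones fit together at run time: frames are acquired from the laptop camera, preprocessed, and handed to a recognition step. This is a minimal sketch assuming Python with the OpenCV library; the classify() function is a hypothetical placeholder for the trained model that the machine-learning milestone would produce, not the project's actual implementation.

```python
# Minimal sketch of the real-time pipeline: acquire frames from the laptop
# camera, preprocess them, and hand them to a (placeholder) classifier.
# Assumes Python with OpenCV (cv2) installed; classify() is a hypothetical
# stand-in for the machine-learning milestone.

import cv2


def preprocess(frame):
    """Image-processing stage: reduce the frame to a fixed-size grayscale image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (64, 64))


def classify(image):
    """Placeholder for the trained sign-language classifier."""
    return "?"  # a real model would return the recognized sign here


def main():
    # Image-acquisition stage: open the default laptop camera.
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        raise RuntimeError("Could not open the camera")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            sign = classify(preprocess(frame))
            # Overlay the recognized sign on the live video feed.
            cv2.putText(frame, sign, (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
            cv2.imshow("Sign Language Translator (sketch)", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

In this sketch the preprocessing and classification steps run serially per frame; the parallel-processing milestone would be the natural place to move those steps onto additional cores or a GPU when the hardware allows it.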