
what inspired us

The seeds of our idea for an AI App for Deaf children were sown way back at our very first Young Coders MeetUp (YCM) in January 2019, when Imran, our first Deaf young coder, joined the event. Our meetups are youth-led and have an ethos of diversity and inclusion, and this first one was about Machine Learning. In the spirit of collaboration, Imran buddied up for the day with Femi, a founding member of the YCM. As the afternoon progressed, Imran told Femi about the difficulties that Deaf people have with their literacy skills, and as they worked through the session they came to the conclusion that, with Machine Learning, they might be able to develop a solution for Imran and other Deaf learners. Making the most of the limited tools to hand, they built a quick prototype using fingerspelling signs, capturing still images through their laptop cameras and using them as training data for a supervised learning model. However, they found this quite hard, as some of the signs require movement.
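To give a flavour of what that first prototype was doing, here is a minimal Python sketch of supervised learning on still fingerspelling images. It is only illustrative: the folder layout (one folder of images per letter, e.g. data/A) and the simple pixel-based classifier are assumptions made for the example, not the actual code Imran and Femi wrote.

```python
# Illustrative sketch: classifying still fingerspelling images with supervised learning.
# Assumes a folder per letter, e.g. data/A/*.png, data/B/*.png (paths are hypothetical).
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

images, labels = [], []
for letter_dir in Path("data").iterdir():          # one sub-folder per fingerspelled letter
    for img_path in letter_dir.glob("*.png"):
        img = Image.open(img_path).convert("L").resize((64, 64))  # greyscale, fixed size
        images.append(np.asarray(img).flatten() / 255.0)          # pixels as one long vector
        labels.append(letter_dir.name)

X, y = np.array(images), np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)        # a very simple supervised learner
model.fit(X_train, y_train)
print("accuracy on held-out images:", model.score(X_test, y_test))
```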


They then took their idea to the iOS community and were invited by Skillsmatter to present it at iOSCon 2019 in March. There they got some great advice from industry professionals on how they could use Swift to create the app for iPhone and iPad, and they made some awesome contacts in the Swift community. Tim Condon, who had worked on the BBC iPlayer app, was so impressed that he offered to help and later ran a two-day workshop for the YCM at the Tate Modern. Although they learned a lot about Swift and why it is so awesome for machine learning, they weren't able to get much support in creating the machine learning side of the app.


In July 2019 Femi was lucky enough to be invited by the UAL Creative Computing Institute to be a guest participant on their Masterclass on Machine Learning for the Creative Industry. There he learned all about neural networks and machine learning models, and the facilitator, Marco Marchesi, gave him some great advice about the project idea. He was pointed towards Rebecca Fiebrink and her amazing Wekinator project, and guided towards Dynamic Time Warping as a way to solve the problem of detecting moving signs. Femi was then able to explore Wekinator and dive deeper into machine learning over the summer, and he got the chance to contact Rebecca Fiebrink for help with the idea. Using Wekinator he made a very simple prototype of the app, able to distinguish a few signs from each other. In the meantime, Imran helped to collect video clips of people signing individual words, to use as training data for a neural network.
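Dynamic Time Warping, in a nutshell, measures how similar two sequences are even when one is performed faster than the other, which is why it came up as a way to handle moving signs. The Python sketch below is only to illustrate the idea; the hand-position features per video frame are made up for the example and stand in for whatever the real app would extract from a sign video.

```python
# Illustrative sketch of Dynamic Time Warping (DTW): comparing two signs that are
# performed at different speeds, so their frame sequences have different lengths.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Smallest total frame-to-frame distance when seq_a is stretched/compressed onto seq_b."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # distance between two frames
            # extend the cheapest option: match frames, or skip a frame in either sequence
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

# Two recordings of the "same" movement, one performed faster than the other.
slow = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.4], [0.3, 0.6], [0.4, 0.8]])
fast = np.array([[0.0, 0.0], [0.2, 0.4], [0.4, 0.8]])
print("DTW distance:", dtw_distance(slow, fast))   # small, despite the different lengths
```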


In November Femi went to a talk by Professor Richard Harvey of Gresham College, an expert in artificial intelligence, machine learning and signal processing, and asked him about the approach they should be taking. He told Femi that Dynamic Time Warping is fairly experimental and not nuanced enough, and advised that Hidden Markov Models would be a better way to go. He also said that this sort of project has been researched before, and that the expert on computer vision, machine learning and Sign Language Recognition is Professor Richard Bowden of the University of Surrey.
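To show why Hidden Markov Models suit sequences like signs, here is a tiny illustrative Python sketch of the forward algorithm: each sign would get its own model, and recognition would pick whichever model gives the observed sequence the highest likelihood. All the probabilities below are made-up numbers chosen for the example, not anything from our project.

```python
# Illustrative sketch of how a Hidden Markov Model scores an observation sequence.
import numpy as np

start = np.array([1.0, 0.0, 0.0])            # always begin in the first hidden state
trans = np.array([[0.7, 0.3, 0.0],           # hidden states move left-to-right through the sign
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
emit = np.array([[0.8, 0.1, 0.1],            # P(observed symbol | hidden state)
                 [0.1, 0.8, 0.1],
                 [0.1, 0.1, 0.8]])

def likelihood(observations):
    """Forward algorithm: probability that this model produced the observed sequence."""
    alpha = start * emit[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

print(likelihood([0, 1, 2]))   # matches the model's left-to-right pattern: high score
print(likelihood([2, 1, 0]))   # runs backwards: much lower score
```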


Inspired by the Nesta Longitude Explorer Prize, Femi and Imran invited other YCM coders to form a team and enter the Sign2Word AI app in the competition. What has been fantastic is that the YCM's focus on collaborative working and knowledge sharing has meant that an awesome team has joined forces, each young person bringing unique strengths, knowledge and skills. They have all been to the YCMs, so they have a shared experience and enjoy being together. Since the news came that they had got through to the semi-finals, they have been meeting regularly, and after brushing up on his machine learning and AI skills, Femi delivered a knowledge-sharing session to the rest of the group to get everyone on a level playing field. Imran is too old to enter the competition, but he still holds the central role of Product Client, representing the Deaf community. Mutsa has been amazing with her design skills, interpreting the user stories from Imran and translating them into wireframes so the Swift app can be designed to meet the needs of its users. Malaika has been collating all the information, building a website and managing the content. Nishka has been documenting the journey, supporting the learning and looking into the mathematics linked to the project (statistics and linear algebra). Thomas has been exploring the options for the build, so we can incorporate the neural network into the Swift app in the most efficient way.

We are currently at the phase of looking at research in the field of Computer Vision and understanding the complexities of Sign Language Recognition. Having made initial contact with academia, and continuing to learn more about Machine Learning, our next stage is to follow up, seek mentoring and learn more about Hidden Markov Models. We are really excited about collaborating on this journey of discovery to make our Sign2Word AI App a reality, providing the very thing that Imran found missing as a young Deaf person struggling with written language.
