
A Neural Captioning Model trained with Keras and deployed in a Flutter App


GUIZ4RD/Pic2Speech


Pic2Speech - Describing the world for the visually impaired

Pic2Speech uses artificial intelligence to describe the content of pictures, helping visually impaired people understand the world around them.

Model

The model is built with Keras and is mostly based on "Show and Tell: A Neural Image Caption Generator" by Vinyals et al. It is trained on the full MS COCO dataset for around 500k steps.
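As a rough illustration, a Show-and-Tell style captioning model pairs a CNN image embedding with an LSTM decoder that predicts the next caption word. The sketch below uses the Keras functional API; all layer sizes, the vocabulary size, and the 2048-dim feature input (e.g. pooled Inception-v3 features, as in the paper) are hypothetical placeholders, not the repo's actual configuration:

```python
import numpy as np
from tensorflow.keras import layers, Model

# Hypothetical sizes -- the real model's dimensions are not given in the README.
VOCAB_SIZE = 1000   # caption vocabulary
MAX_LEN = 20        # max caption length fed to the decoder
EMBED_DIM = 256     # shared embedding size for image and words
FEATURE_DIM = 2048  # pre-extracted CNN feature vector, as in Show and Tell

# Image branch: project the CNN feature vector into the embedding space.
image_input = layers.Input(shape=(FEATURE_DIM,))
img_embed = layers.Dense(EMBED_DIM, activation="relu")(image_input)

# Text branch: embed the partial caption generated so far.
caption_input = layers.Input(shape=(MAX_LEN,))
word_embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(caption_input)

# Decoder: prepend the image embedding as the first "word", then run an LSTM
# and predict a distribution over the next word.
seq = layers.Concatenate(axis=1)(
    [layers.Reshape((1, EMBED_DIM))(img_embed), word_embed]
)
lstm_out = layers.LSTM(256)(seq)
next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(lstm_out)

model = Model([image_input, caption_input], next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

At inference time such a model is run autoregressively: start from a start-of-sentence token, sample or greedily pick the next word, append it to the caption input, and repeat until an end token or `MAX_LEN` is reached.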

Deployment

The model is deployed as a web service on Azure Machine Learning using the Azure ML Python SDK.
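Azure ML web services wrap the model in an entry script with two hooks: `init()`, called once when the container starts, and `run()`, called per request. The sketch below follows that convention; the model name, JSON field names, and the stub model are assumptions for illustration, not the repo's actual scoring script:

```python
# score.py -- sketch of an Azure ML entry script (hypothetical details).
import base64
import json

model = None  # loaded once per container in init()


def init():
    """Called once when the service starts; load the captioning model here."""
    global model
    # A real deployment would load the registered Keras model, e.g.:
    #   from tensorflow import keras
    #   from azureml.core.model import Model
    #   model = keras.models.load_model(Model.get_model_path("pic2speech"))
    # Stub for illustration only:
    model = lambda image_bytes: "a person riding a bike"


def run(raw_data):
    """Called per request; expects JSON with a base64-encoded image."""
    try:
        image_bytes = base64.b64decode(json.loads(raw_data)["image"])
        caption = model(image_bytes)
        return json.dumps({"caption": caption})
    except Exception as exc:
        return json.dumps({"error": str(exc)})
```

With the v1 SDK, a script like this is attached to an `InferenceConfig` and passed to `Model.deploy(...)` together with a deployment configuration (e.g. `AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)`).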

Mobile App

The mobile app is developed with Google's Flutter. It lets people take a picture with their smartphone and hear a spoken description of it.
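The app's round trip to the service can be sketched in a few lines: encode the captured photo as a base64 JSON payload, POST it to the scoring endpoint, and hand the returned caption to the device's text-to-speech engine. The actual app is written in Dart; the Python sketch below only mirrors that flow, and the JSON field names and endpoint are assumptions:

```python
import base64
import json


def build_request(image_bytes: bytes) -> str:
    """Package a captured photo as the JSON payload the service expects."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})


def parse_response(body: str) -> str:
    """Extract the caption to hand off to text-to-speech."""
    return json.loads(body)["caption"]


# The POST itself would be something like:
#   requests.post(scoring_uri, data=build_request(photo_bytes),
#                 headers={"Content-Type": "application/json"})
```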

