Machine Learning model for recognizing American Sign Language (ASL) letters
We used a public ASL dataset provided by David Lee on Roboflow, along with additional sources, containing labeled images for each letter of the ASL alphabet.
- Format: Object Detection (converted to classification; see the sketch after this list)
- Classes: A–Z (26 total)
- Source: Roboflow Public Datasets
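Since the source labels are in object-detection format, each annotated bounding box has to be cropped out and filed into a per-class folder before classification training. A minimal sketch of that conversion, assuming YOLO-style normalized labels and hypothetical directory paths (adjust to the actual Roboflow export format):

```python
from pathlib import Path
from PIL import Image

CLASSES = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # A-Z, 26 classes

def crop_to_classification(images_dir: str, labels_dir: str, out_dir: str) -> None:
    """Crop each labeled bounding box into a per-class folder."""
    for label_path in Path(labels_dir).glob("*.txt"):
        # Assumes each label file pairs with a same-named .jpg image
        image_path = Path(images_dir) / (label_path.stem + ".jpg")
        if not image_path.exists():
            continue
        img = Image.open(image_path)
        w, h = img.size
        for i, line in enumerate(label_path.read_text().splitlines()):
            cls, xc, yc, bw, bh = line.split()  # YOLO: class x_center y_center width height
            xc, yc, bw, bh = (float(v) for v in (xc, yc, bw, bh))
            # Convert normalized center/size coordinates to pixel corners
            box = (
                int((xc - bw / 2) * w),
                int((yc - bh / 2) * h),
                int((xc + bw / 2) * w),
                int((yc + bh / 2) * h),
            )
            target = Path(out_dir) / CLASSES[int(cls)]
            target.mkdir(parents=True, exist_ok=True)
            img.crop(box).save(target / f"{label_path.stem}_{i}.jpg")
```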
The model is based on MobileNetV2, a lightweight convolutional neural network:
- Pretrained on ImageNet
- Fine-tuned on the ASL dataset
- Optimized for mobile performance (ideal for Snap AR)
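A minimal Keras sketch of this transfer-learning setup; the head layers, dropout rate, and learning rate shown here are illustrative assumptions, not the exact training configuration:

```python
import tensorflow as tf

NUM_CLASSES = 26  # A-Z
IMG_SIZE = (224, 224)  # MobileNetV2's default input resolution

# ImageNet-pretrained backbone with the classification head removed
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the backbone for the initial fine-tuning phase

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),            # illustrative regularization
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```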
| Metric | Accuracy | Dataset Size | Change from Previous Run | Status |
|---|---|---|---|---|
| Training Accuracy | 99.95% | 12,789 images | Unchanged (was ~100%) | Excellent |
| Validation Accuracy | 97.66% | 1,411 images | +10.74% | Excellent |
| Test Accuracy | 87.50% | 80 images | +0.86% | Very Good |
Status: Ready to use!
The model generalizes well, maintaining high performance on unseen ASL hand signs.
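For a quick sanity check, inference on a single image might look like the sketch below; the model filename `asl_mobilenetv2.h5` and the 224×224 input size are assumptions, so adjust them to match the actual training artifacts:

```python
import numpy as np
import tensorflow as tf

LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

# Hypothetical checkpoint path -- replace with the file produced by training
model = tf.keras.models.load_model("asl_mobilenetv2.h5")

def predict_letter(image_path: str) -> str:
    """Return the predicted ASL letter for a single image."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)
    # MobileNetV2 expects inputs scaled to [-1, 1]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    probs = model.predict(np.expand_dims(x, axis=0), verbose=0)[0]
    return LETTERS[int(np.argmax(probs))]

print(predict_letter("example_hand_sign.jpg"))  # hypothetical test image
```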
To improve generalization and robustness, the following augmentations were applied during training:
- Rotation
- Width & height shift
- Zoom
- Shear transformation
- Brightness adjustment
These augmentations simulate real-world variations like lighting and camera angles.
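A sketch of how such augmentations could be wired up with Keras's `ImageDataGenerator`; the specific ranges are illustrative assumptions rather than the values used in training:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# All ranges below are illustrative; tune them for your data
train_datagen = ImageDataGenerator(
    rotation_range=15,            # rotation
    width_shift_range=0.1,        # width shift
    height_shift_range=0.1,       # height shift
    zoom_range=0.1,               # zoom
    shear_range=0.1,              # shear transformation
    brightness_range=(0.8, 1.2),  # brightness adjustment
    rescale=1.0 / 255,
)
train_gen = train_datagen.flow_from_directory(
    "data/train",                 # hypothetical training-data directory
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)
```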
Clone this repo:
```bash
git clone https://github.com/your-username/asl-recognition.git
cd asl-recognition
```