Emergrade is a full-stack web application built on Django that integrates Virtual Try-On (VTON) technology with EEG data processing. The core mission is to investigate how a user's real-time cognitive state, captured with a Muse 2 EEG headset, affects a virtual clothing try-on experience.
This project was developed by our multidisciplinary team for NatHacks 2025, focusing on AI, UX, sustainability, and digital well-being.
Check out the live website: Emergrade on Render
Check out the project on Devpost: Devpost Page Here
- Virtual Try-On (VTON) Core: Uses yisol et al.'s IDM-VTON model to composite new garment images onto a photo of the target person. A dedicated HTML page (vton_demo.html) serves as the primary front end for this feature.
- Django Web Application: Provides a robust, scalable web framework for the entire platform, handling user interactions, data storage (db.sqlite3), and the presentation layer.
- EEG Integration: Acquires, handles, and analyzes EEG signals, extracting Delta, Theta, Alpha, Beta, and Gamma band powers. The muselsl dependency confirms support for streaming data from devices like the Muse headset. This data is the unique layer informing the "smart shopper" experience; the feature is not yet fully integrated.
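The band-power extraction described above can be sketched as follows. This is a minimal illustration, not code from the repository: it assumes a single-channel, 2-second window sampled at the Muse 2's 256 Hz rate (as muselsl streams it), and uses conventional band edges, which may differ from whatever the project settles on.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz; the Muse 2 streams EEG at 256 Hz
BANDS = {          # conventional band edges (Hz); an assumption, not project config
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 44),
}

def band_powers(signal: np.ndarray, fs: int = SAMPLE_RATE) -> dict:
    """Return relative power per EEG band for a 1-D signal window."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    powers = {
        name: psd[(freqs >= lo) & (freqs < hi)].sum()
        for name, (lo, hi) in BANDS.items()
    }
    total = sum(powers.values()) or 1.0
    return {name: p / total for name, p in powers.items()}

# Example: a window dominated by a 10 Hz oscillation lands in the alpha band
t = np.arange(0, 2, 1.0 / SAMPLE_RATE)
window = np.sin(2 * np.pi * 10 * t)
powers = band_powers(window)
print(max(powers, key=powers.get))  # → alpha
```

In a live setup, each window would come from the muselsl/LSL stream rather than being synthesized, and the per-band ratios would drive the "smart shopper" logic.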
- HTML, CSS, JS
- Python 3.12
- Django
- HuggingFace Space (IDM-VTON)
- Pillow (image preprocessing)
- Requests → HF inference API
- MuseLSL
- Lab Streaming Layer (LSL)
- Python WebSockets client
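To make the "Pillow (image preprocessing)" step concrete, here is one plausible sketch of how a person or garment photo could be normalized before upload to the IDM-VTON Space. The 768×1024 canvas and the letterbox-style padding are assumptions for illustration, not values documented in this repository; the actual upload call (Requests or @gradio/client) is left out.

```python
from io import BytesIO
from PIL import Image

# Assumed target canvas (width x height) for the VTON model's input;
# not taken from the project -- adjust to whatever the Space expects.
TARGET_SIZE = (768, 1024)

def preprocess(image: Image.Image, size: tuple = TARGET_SIZE) -> bytes:
    """Fit an image onto the target canvas (aspect ratio preserved,
    white padding) and return PNG bytes ready to send to the API."""
    img = image.convert("RGB")
    img.thumbnail(size, Image.LANCZOS)          # downscale in place, keep aspect
    canvas = Image.new("RGB", size, "white")
    offset = ((size[0] - img.width) // 2, (size[1] - img.height) // 2)
    canvas.paste(img, offset)                   # center the image on the canvas
    buf = BytesIO()
    canvas.save(buf, format="PNG")
    return buf.getvalue()

# Example: a 300x500 photo becomes a 768x1024 PNG payload
payload = preprocess(Image.new("RGB", (300, 500), "gray"))
print(Image.open(BytesIO(payload)).size)  # → (768, 1024)
```

Padding instead of stretching keeps body proportions intact, which matters for a try-on model; cropping would be an equally valid choice depending on the inputs.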
This project requires a Python environment managed by Pipenv (using Pipfile).
- Clone the Repository:
git clone https://github.com/JeremelleV/Emergrade.git
- Install Dependencies:
pipenv install
pipenv shell
- Run the Django Server:
python manage.py runserver
All virtual try-on results are generated using the open-source IDM-VTON model, licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License:
🔗 https://huggingface.co/spaces/yisol/IDM-VTON
🔗 https://github.com/yisol/IDM-VTON
This extension does not modify or distribute model weights.
All inference calls go directly through the publicly available Hugging Face Space API using the @gradio/client library.
If you use or extend this project, please credit yisol et al. for their work.