
🧠 Draft2UI

Generative UI Layout Sketching with Stable Diffusion + ControlNet

Draft2UI is a generative system for automatic user interface (UI) layout sketching. Built on Stable Diffusion and ControlNet, it generates high-fidelity UI wireframes from simple structure masks, helping designers rapidly prototype and iterate on layout ideas.

✨ Highlights

  • 🌀 Diffusion-based generation for high-quality, diverse layout sketches
  • 🧩 Structure-guided input using binary masks (see the mask sketch after this list)
  • 🖥️ End-to-end pipeline: mask creation, conditional diffusion, rendering
  • 🚀 Optimized for multi-GPU clusters
  • 📊 Evaluated on the RICO dataset
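
As a quick illustration of the structure-guided input above, a binary mask can be sketched with Pillow. The resolution and block positions below are made-up examples, not values the project prescribes:

# Illustrative only: white rectangles mark regions where UI elements should appear.
from PIL import Image, ImageDraw

mask = Image.new("L", (512, 1024), 0)           # black background
draw = ImageDraw.Draw(mask)
draw.rectangle([32, 32, 480, 160], fill=255)    # header bar
draw.rectangle([32, 200, 480, 640], fill=255)   # main content card
draw.rectangle([32, 900, 480, 992], fill=255)   # bottom navigation
mask.save("mask.png")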

📸 Demo

(Demo image: Input / GAN baseline / Draft2UI / Original comparison)


📦 Installation

1. Clone the repo

git clone https://github.com/AbigaleD/Draft2UI.git
cd Draft2UI

2. Create a Python environment

We recommend using conda or virtualenv:

conda create -n draft2ui python=3.10
conda activate draft2ui

3. Install dependencies

pip install -r requirements.txt

📝 This project is built on top of HuggingFace diffusers, torch, and gradio for UI demoing.
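
For reference, a minimal requirements.txt consistent with that note would contain at least the following (unpinned here; the repo's own file is authoritative):

torch
diffusers
gradio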

🧪 Usage

1. Launch the demo UI

python app.py

This will start a local Gradio interface where you can upload structure masks and preview generated UIs.
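
As a rough sketch of how such an interface can be wired up (the actual app.py may expose more controls, e.g. prompt or seed):

# Hedged sketch of a minimal Gradio front end; not necessarily the repo's exact app.py.
import gradio as gr
from draft2ui.generator import Draft2UIGenerator

generator = Draft2UIGenerator()

def generate(mask_path):
    # The generator is assumed to accept a file path and return a PIL image.
    return generator.generate(mask_path)

demo = gr.Interface(
    fn=generate,
    inputs=gr.Image(type="filepath", label="Structure mask"),
    outputs=gr.Image(label="Generated UI"),
    title="Draft2UI",
)
demo.launch()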

2. Generate with a Python script

from draft2ui.generator import Draft2UIGenerator

generator = Draft2UIGenerator()
ui_image = generator.generate('path/to/mask.png')
ui_image.save('generated_ui.png')
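
Under the hood, a generator like this typically wraps a ControlNet-conditioned Stable Diffusion pipeline from diffusers. The sketch below shows the general pattern; the checkpoint names and prompt are placeholders, not the weights this repo ships in checkpoints/:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Placeholder checkpoints: substitute the project's own weights from checkpoints/.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

mask = load_image("path/to/mask.png")            # binary structure mask
ui_image = pipe(
    "clean mobile app UI wireframe",             # text prompt steering the style
    image=mask,                                  # ControlNet conditioning input
    num_inference_steps=30,
).images[0]
ui_image.save("generated_ui.png")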

🏗 Project Structure

Draft2UI/
├── assets/              # Example images
├── checkpoints/         # Pretrained models and weights
├── configs/             # Model configuration files
├── data/                # Dataset preprocessing scripts
├── draft2ui/            # Core source code
│   ├── generator.py     # UI generation logic
│   └── utils.py         # Utility functions
├── app.py               # Gradio interface
├── train.py             # Training script (for fine-tuning)
├── requirements.txt     # Python dependencies
└── README.md

📊 Evaluation

We evaluate Draft2UI on the RICO dataset using the following metrics:

  • Structural similarity (IoU of generated layouts)
  • Perceptual quality (LPIPS / FID)
  • User studies on coherence and usability
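
For reference, the layout-IoU metric can be computed as below, assuming both layouts are rasterized to equal-sized binary masks (the function name and conventions are ours, not the repo's):

import numpy as np

def layout_iou(pred, target):
    # Intersection-over-union of two binary layout masks of identical shape.
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                               # two empty layouts match trivially
    return np.logical_and(pred, target).sum() / union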

📈 Our model outperforms baseline methods in layout fidelity and visual appeal.

🔧 Fine-tuning

To fine-tune the model with your own UI dataset:

python train.py --config configs/train_config.yaml

Make sure to organize your dataset with corresponding mask/image pairs.
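
The expected directory layout is not spelled out here; one plausible organization of paired files, mirroring the project tree above, is:

data/
├── masks/
│   ├── 0001.png
│   └── 0002.png
└── images/
    ├── 0001.png
    └── 0002.png

where masks/NNNN.png is the binary structure mask for images/NNNN.png.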

🧠 Research Background

This project explores the application of generative diffusion models to UI design, incorporating:

  • Conditional generation with structure masks
  • HCI principles for layout plausibility
  • Vision-language alignment in design tasks

If you use Draft2UI in your research, please consider citing our work (citation coming soon).

📄 License

This project is licensed under the MIT License. See LICENSE for details.

🙌 Acknowledgements

  • Stable Diffusion
  • ControlNet
  • HuggingFace diffusers
  • RICO Dataset

📬 Contact

For questions or collaboration, feel free to reach out via GitHub issues or email.
