# ai-evaluation-framework

Here are 10 public repositories matching this topic...


prompt-evaluator is an open-source toolkit for evaluating, testing, and comparing LLM prompts. It provides a GUI-driven workflow for running prompt tests, tracking token usage, visualizing results, and checking reliability across models from providers such as OpenAI, Anthropic (Claude), and Google (Gemini).

  • Updated Dec 4, 2025
  • TypeScript
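
To give a flavor of what a prompt-evaluation run in such a toolkit involves, here is a minimal, self-contained TypeScript sketch of a prompt test suite that checks model outputs against simple expectations and tallies token usage. The names `PromptCase`, `callModel`, and `runSuite`, and the stubbed model call, are illustrative assumptions for this sketch, not the prompt-evaluator API.

```ts
// Hypothetical shape of a single prompt test case.
interface PromptCase {
  name: string;
  prompt: string;
  mustContain: string[]; // simple substring checks standing in for richer assertions
}

interface ModelResult {
  text: string;
  tokensUsed: number;
}

// Stubbed model call; a real toolkit would dispatch to an actual provider API.
async function callModel(model: string, prompt: string): Promise<ModelResult> {
  return { text: `[${model}] echo: ${prompt}`, tokensUsed: prompt.length };
}

// Run every case against every model, reporting pass/fail and token totals.
async function runSuite(models: string[], cases: PromptCase[]): Promise<void> {
  for (const model of models) {
    let passed = 0;
    let tokens = 0;
    for (const c of cases) {
      const res = await callModel(model, c.prompt);
      tokens += res.tokensUsed;
      const ok = c.mustContain.every((s) => res.text.includes(s));
      if (ok) passed++;
      console.log(`${model} | ${c.name}: ${ok ? "PASS" : "FAIL"}`);
    }
    console.log(`${model}: ${passed}/${cases.length} passed, ${tokens} tokens used`);
  }
}

runSuite(
  ["gpt-style-model", "claude-style-model"],
  [{ name: "greeting", prompt: "Say hello", mustContain: ["hello"] }],
).catch(console.error);
```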

Clinical-trial application for benchmarking AI responses to mental-health scenarios in multi-turn conversations. It guides users in understanding AI interaction patterns and addressing personal mental-health concerns through therapeutic AI assistance.

  • Updated Oct 23, 2025
  • Python
