
Langchain Demo with Gemma Model is a question-answering app built with Streamlit and the gemma2:2b model from Ollama. It uses Langchain for prompt management and Langsmith tracking for real-time logging. Users can input questions and receive intelligent responses.

Osama066/Langchain-Demo-With-Gemma-Model


Langchain Demo with Gemma Model

This repository demonstrates how to build a simple question-answering application using the Langchain framework and the Gemma model. Users input a question, and the app returns a response generated by the gemma2:2b model served locally through Ollama.

Features

  • Streamlit Framework: Interactive web interface to input questions and display responses.
  • Langchain Integration: Uses Langchain prompts to manage the conversation flow.
  • Ollama Model: Utilizes the gemma2:2b model for generating intelligent answers.
  • Langsmith Tracking: Tracks the prompt and responses for logging and debugging.

Getting Started

Prerequisites

  • Python 3.8+
  • Ollama installed locally, with the gemma2:2b model pulled (for example, ollama pull gemma2:2b)
  • Python dependencies, installed with pip:

```bash
pip install langchain-community langchain-core streamlit python-dotenv
```

Environment Variables

Create a .env file in the root of your project and set the following variables:

```
LANGCHAIN_API_KEY=your_langchain_api_key
LANGCHAIN_PROJECT=your_project_name
```
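For reference, this is roughly what the environment looks like to the app after load_dotenv() runs. The values below are stand-ins for illustration, and LANGCHAIN_TRACING_V2 is the flag Langsmith reads to enable tracing:

```python
import os

# Stand-in values for illustration only; in the real app these come from the
# .env file and are loaded by python-dotenv's load_dotenv().
os.environ["LANGCHAIN_API_KEY"] = "your_langchain_api_key"
os.environ["LANGCHAIN_PROJECT"] = "your_project_name"
os.environ["LANGCHAIN_TRACING_V2"] = "true"  # opt in to Langsmith tracing

# Downstream code reads the key back with a plain environment lookup.
api_key = os.environ["LANGCHAIN_API_KEY"]
```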

Run the Application

To run the application, simply execute:

```bash
streamlit run app.py
```

You can then open the application in your browser at http://localhost:8501 and start interacting with the model.

Code Overview

  • Langchain API Key: The API key is loaded using dotenv for Langsmith tracking.
  • Prompt Template: The conversation is structured using a Langchain ChatPromptTemplate, allowing the user to ask a question and receive a response.
  • LLM (Large Language Model): The Ollama model (gemma2:2b) is used to generate answers to user questions.
  • Streamlit Input: The user inputs a question through a simple text field, and the response is displayed in real-time.

Future Enhancements

  • Add more models to switch between different LLMs.
  • Implement additional Langchain tools for better user interaction.
  • Add more complex prompt management for better responses.

