
Commit 33e973d

committed
documentaion
1 parent 2e35abd commit 33e973d

File tree

3 files changed: +380 additions, −55 deletions


Agent.md

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
## Teams AI Agent: System Architecture

This diagram illustrates the high-level components of the application and their relationships. It shows how our services, hosted on Azure, interact with each other and with external services to deliver the full functionality to a user within Microsoft Teams.

### Architectural Summary

1. **Client-Side:** The user interacts with the React application, which is served as a Tab inside the Microsoft Teams client. The frontend's primary responsibilities are rendering the UI, managing client-side state, and initiating authenticated API calls.
2. **Authentication:** Azure Active Directory is the identity provider. The frontend uses the Teams JS SDK to get an SSO token, which is sent with every API request. The backend validates this token on every call to ensure the request is secure and authorized.
3. **Backend:** The Node.js application, hosted on Azure App Service, is the core of the system.
   * It securely loads all of its secrets (API keys, connection strings) from **Azure Key Vault** using a passwordless **Managed Identity**.
   * It exposes a single primary API endpoint (`/v6/mcp/agent/chat`) that uses Server-Sent Events (SSE) for real-time communication.
   * It instantiates a **LangChain Agent** to handle the conversational logic.
4. **AI & Tools:** The LangChain Agent orchestrates calls to external services. It sends the user's prompt and conversation history to **AWS Bedrock** for processing and calls the **Topcoder MCP Gateway** when the AI model decides a tool is needed to answer a question.
5. **Data Persistence:** All conversation history is stored in Azure Cosmos DB through its MongoDB API, providing a scalable and durable memory for the agent.
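The SSE payloads referenced throughout this document (`{type: "chunk"}`, `{type: "tool_start"}`, `{type: "tool_result"}`) can be modeled as a discriminated union. Only the `type` discriminator appears in this document; the other fields below are illustrative assumptions, not the actual wire format:

```typescript
// Hypothetical payload shapes for the /v6/mcp/agent/chat SSE stream.
// Only `type` is documented; `text`, `tool`, and `result` are assumed
// field names for illustration.
type AgentEvent =
  | { type: "chunk"; text: string }                        // streamed model text
  | { type: "tool_start"; tool: string }                   // e.g. "query-tc-challenges"
  | { type: "tool_result"; tool: string; result: unknown };

// Exhaustive handler: returns a short label a UI could render.
function describe(ev: AgentEvent): string {
  switch (ev.type) {
    case "chunk":
      return ev.text;
    case "tool_start":
      return `Running tool: ${ev.tool}…`;
    case "tool_result":
      return `Tool ${ev.tool} returned a result`;
  }
}
```

A discriminated union like this lets TypeScript verify at compile time that every event type streamed by the backend is handled.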
### Sequence Diagram: A Single Chat Message with a Tool Call

This diagram illustrates the step-by-step flow of data and method calls for a "happy path" scenario where a user sends a message, the agent decides to use a tool, and then responds with a summary.

```mermaid
sequenceDiagram
    participant User
    participant Frontend as React Frontend
    participant Backend as Node.js Backend
    participant AzureAD as Azure AD
    participant LangChain as LangChain Agent
    participant CosmosDB as MongoDB
    participant Bedrock as AWS Bedrock
    participant MCP as Topcoder MCP

    User->>Frontend: Types "Show me an active challenge" and clicks Send
    Frontend->>Backend: POST /v6/mcp/agent/chat (with SSO Token)

    rect rgb(230, 240, 255)
        note over Backend: Middleware: `validateToken` runs
        Backend->>AzureAD: Verify Token Signature (using cached public keys)
        AzureAD-->>Backend: OK
    end

    Backend->>LangChain: Create Agent Instance
    LangChain->>CosmosDB: getMessageHistory(sessionId)
    CosmosDB-->>LangChain: Return previous messages

    LangChain->>Bedrock: streamEvents(prompt, history, tools)
    Bedrock-->>LangChain: Stream Chunks (Decides to use a tool)

    loop Streaming Response to Client
        LangChain-->>Backend: Yields 'thinking' chunks
        Backend-->>Frontend: SSE: event: message, data: {type: "chunk", ...}
    end

    LangChain-->>Backend: Yields 'tool_start' event for `query-tc-challenges`
    Backend-->>Frontend: SSE: event: message, data: {type: "tool_start", ...}

    LangChain->>MCP: callTool('query-tc-challenges', {status: 'Active'})
    MCP-->>LangChain: Return JSON result of challenges

    LangChain-->>Backend: Yields 'tool_result' event with data
    Backend-->>Frontend: SSE: event: message, data: {type: "tool_result", ...}

    LangChain->>Bedrock: streamEvents(prompt, history, tool_result)
    Bedrock-->>LangChain: Stream Final Summary Chunks

    loop Streaming Final Response
        LangChain-->>Backend: Yields final 'text' chunks
        Backend-->>Frontend: SSE: event: message, data: {type: "chunk", ...}
    end

    rect rgb(255, 245, 230)
        note over Backend, CosmosDB: Finalization
        LangChain->>CosmosDB: addMessages(user_prompt, final_ai_response)
        CosmosDB-->>LangChain: OK
        Backend-->>Frontend: SSE: event: end
    end
```
### Sequence Summary

1. **Request & Auth:** The user sends a prompt. The frontend sends it to the backend API along with the SSO token, which is validated.
2. **Memory Retrieval:** The LangChain agent is created and immediately fetches the conversation history from Cosmos DB to provide context for the LLM.
3. **First LLM Call:** The agent sends the full context to AWS Bedrock. Bedrock analyzes the request and decides that it needs to use the `query-tc-challenges` tool. It streams back its initial thoughts and this tool-use instruction.
4. **Tool Execution:** The backend streams the "thinking" and "tool_start" status to the frontend. It then makes a direct API call to the Topcoder MCP Gateway.
5. **Second LLM Call:** Once the tool result is received, the agent sends this new information back to AWS Bedrock, asking it to synthesize a final, human-readable answer.
6. **Final Response:** Bedrock streams the final summary. The backend relays these text chunks to the frontend, which displays them to the user.
7. **Finalization:** Once the stream is complete, the agent's memory manager saves the new user message and the final AI response back to Cosmos DB for future conversations. The SSE connection is then closed.
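On the client, the stream above arrives as `text/event-stream` frames. Below is a minimal sketch of parsing raw SSE text into `(event, data)` records, assuming the `event: message` / `data: {...}` framing shown in the diagram; the real frontend may use a library or the browser's streaming APIs instead:

```typescript
// Minimal SSE frame parser: splits a raw text/event-stream buffer into
// { event, data } records. Frames are separated by a blank line.
interface SseFrame {
  event: string;
  data: string;
}

function parseSse(raw: string): SseFrame[] {
  const frames: SseFrame[] = [];
  for (const block of raw.split(/\n\n+/)) {
    let event = "message"; // SSE default event name
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length > 0 || event !== "message") {
      frames.push({ event, data: data.join("\n") });
    }
  }
  return frames;
}
```

When a frame with `event: end` arrives, the client closes the connection, matching the finalization step above.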

README.md

Lines changed: 240 additions & 55 deletions
@@ -1,56 +1,241 @@
# Topcoder Model Context Protocol (MCP) Server

## Authentication Based Access via Guards

Tools, resources, and prompts support authentication via a TC JWT and/or an M2M JWT. Providing a JWT in requests to the MCP server results in specific listings and behavior based on the JWT's access level, roles, and permissions.

#### Using `authGuard` - requires TC JWT presence for access

```ts
@Tool({
  name: 'query-tc-challenges-private',
  description:
    'Returns a list of Topcoder challenges based on the query parameters.',
  parameters: QUERY_CHALLENGES_TOOL_PARAMETERS,
  outputSchema: QUERY_CHALLENGES_TOOL_OUTPUT_SCHEMA,
  annotations: {
    title: 'Query Public Topcoder Challenges',
    readOnlyHint: true,
  },
  canActivate: authGuard,
})
```

#### Using `checkHasUserRole(Role.Admin)` - TC role-based guard

```ts
@Tool({
  name: 'query-tc-challenges-protected',
  description:
    'Returns a list of Topcoder challenges based on the query parameters.',
  parameters: QUERY_CHALLENGES_TOOL_PARAMETERS,
  outputSchema: QUERY_CHALLENGES_TOOL_OUTPUT_SCHEMA,
  annotations: {
    title: 'Query Public Topcoder Challenges',
    readOnlyHint: true,
  },
  canActivate: checkHasUserRole(Role.Admin),
})
```

#### Using `canActivate: checkM2MScope(M2mScope.QueryPublicChallenges)` - M2M based access via scopes

```ts
@Tool({
  name: 'query-tc-challenges-m2m',
  description:
    'Returns a list of Topcoder challenges based on the query parameters.',
  parameters: QUERY_CHALLENGES_TOOL_PARAMETERS,
  outputSchema: QUERY_CHALLENGES_TOOL_OUTPUT_SCHEMA,
  annotations: {
    title: 'Query Public Topcoder Challenges',
    readOnlyHint: true,
  },
  canActivate: checkM2MScope(M2mScope.QueryPublicChallenges),
})
```
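The three guards above share a common `canActivate` shape. As a hypothetical sketch (the `anyOf` helper and the `(context) => boolean` signature below are assumptions for illustration, not part of the actual tc-mcp API), guards of that shape could be composed so a tool accepts, say, either an admin user or an M2M token:

```typescript
// Hypothetical guard signature: a predicate over the request context.
// The real tc-mcp guards (authGuard, checkHasUserRole, checkM2MScope)
// may differ; this only illustrates the composition idea.
type GuardContext = { roles?: string[]; scopes?: string[] };
type Guard = (ctx: GuardContext) => boolean;

const checkHasUserRole =
  (role: string): Guard =>
  (ctx) =>
    ctx.roles?.includes(role) ?? false;

const checkM2MScope =
  (scope: string): Guard =>
  (ctx) =>
    ctx.scopes?.includes(scope) ?? false;

// Passes if any of the given guards passes.
const anyOf =
  (...guards: Guard[]): Guard =>
  (ctx) =>
    guards.some((g) => g(ctx));

// e.g. canActivate: anyOf(checkHasUserRole('Admin'), checkM2MScope('query:challenges'))
```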
# Microsoft Teams AI Agent

A production-grade Microsoft Teams Tab app featuring a conversational AI agent powered by **LangChain.js** and **AWS Bedrock**.
The agent integrates with MCP tools to answer real-time queries beyond its base model knowledge.

---

## ✨ Key Features

* **Conversational AI:** Powered by LangChain.js + AWS Bedrock (Claude 3.5 Sonnet).
* **External Tools:** Fetch live contextual data through external integrations.
* **Real-time Chat Streaming:** Uses SSE for continuous agent thought updates.
* **Conversation History:** Stored and grouped in MongoDB for persistence.
* **Azure AD SSO:** Secure Teams authentication for verified access.
* **Fluent UI + Teams SDK:** Seamless user experience inside Teams.
* **Flexible Deployment:** Manual dev setup + production-ready Docker build.

---

## 🧩 Technology Stack

| Frontend (Vite)              | Backend (Node.js + Express)                |
| ---------------------------- | ------------------------------------------ |
| ✅ React + TypeScript (Vite) | ✅ LangChain.js + AWS Bedrock (Claude 3.5) |
| ✅ Fluent UI + Teams SDK     | ✅ MongoDB (Mongoose ODM)                  |
| ✅ Vite environment support  | ✅ MCP Gateway for tool access             |

| AI & Security                      | Infrastructure                     |
| ---------------------------------- | ---------------------------------- |
| ✅ AWS Bedrock (Claude 3.5 Sonnet) | ✅ Docker-based production build   |
| ✅ Azure Active Directory (SSO)    | ✅ Environment-based configuration |
| ✅ JWT Validation + JWKS-RSA       | ✅ ngrok for local Teams testing   |

---
## 🧠 Local Development Setup (Manual)

You can run the frontend and backend separately for local testing.
This is the preferred approach during active development.

### Prerequisites

* **Node.js:** v22.x
* **MongoDB:** Local instance or MongoDB Atlas
* **ngrok:** To expose your servers for Teams testing
* **Azure AD App Registration:** For SSO
* **AWS Bedrock access**

### Azure App Registration

This step is mandatory to enable SSO and secure access to the agent.

1. Go to the **Azure Portal** -> **Azure Active Directory** -> **App registrations** -> **+ New registration**.
2. **Name:** `Teams AI Agent`
3. **Supported account types:** Select `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts`.
4. Click **Register**.
   * **Save the `Application (client) ID` and `Directory (tenant) ID`.** You will need these for your environment variables.
5. Go to the **Expose an API** blade.
   * Click **Set** next to "Application ID URI". The default `api://<frontend_uri>/<this_azure_app_client_id>` is fine for now. We will update this with our Railway URL later.
6. Click **+ Add a scope**.
   * **Scope name:** `access_as_user`
   * **Who can consent?:** `Admins and users`
   * Fill in the admin/user consent descriptions (e.g., "Allows the app to access the AI agent API as the signed-in user.").
   * Click **Add scope**.
7. Click **+ Add a client application**.
   * Add the Microsoft Teams client ID `1fec8e78-bce4-4aaf-ab1b-5451cc387264`.
   * Select `access_as_user` under **Authorized scopes**.
8. Go to the **API permissions** blade.
   * Click **+ Add a permission** -> **My APIs**.
   * Select your `Teams AI Agent` app.
   * Check the box for the `access_as_user` permission and click **Add permissions**.
   * Click the **Grant admin consent** button.
9. Go to the **Authentication** blade.
   * Click **+ Add a platform** -> **Web**.
   * **Redirect URIs:** You will add your Railway frontend URL here later (e.g., `https://my-frontend.up.railway.app`).
   * Also add the **Mobile and desktop applications** platform and check all of the default redirect URIs.

---
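The backend validates the SSO token issued by this app registration by verifying its signature against Azure AD's published JWKS keys (the README's "JWT Validation + JWKS-RSA"). To see what the backend would check, here is a sketch that only decodes the claims segment of a token; signature verification is deliberately omitted and in production is done with a library such as `jsonwebtoken` plus `jwks-rsa`. The `aud` and `tid` claim names are standard Azure AD token claims; the values below are placeholders:

```typescript
// Decode (NOT verify) the claims segment of a JWT to inspect the
// audience (aud) and tenant (tid) the backend would check. Real
// validation must verify the RS256 signature against Azure AD's JWKS.
function decodeJwtClaims(token: string): Record<string, unknown> {
  const payload = token.split(".")[1];
  if (!payload) throw new Error("not a JWT");
  // base64url -> base64, then decode
  const b64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// Example: a fake, unsigned token for illustration only.
const claims = { aud: "api://example/<client-id>", tid: "<tenant-id>" };
const fake =
  "eyJhbGciOiJub25lIn0." +
  Buffer.from(JSON.stringify(claims)).toString("base64url") +
  ".";
```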
### Step 1: Clone Repository

```bash
git clone https://github.com/topcoder-platform/tc-mcp.git
cd tc-mcp
```

---

### Step 2: Backend Setup

```bash
pnpm install
cp .env.example .env
# Edit .env with MongoDB, Azure, AWS credentials
pnpm start:dev
```

The backend will run at `http://localhost:3000/v6/mcp/*`.

---

### Step 3: Frontend Setup

```bash
cd teamsTab
pnpm install
cp .env.example .env
# Add your API base URL and Teams App config
pnpm run dev
```

The frontend will run at `http://localhost:5173`.

> **Note:**
>
> * During development, you can set `VITE_API_BASE_URL=http://localhost:3000/v6/mcp/agent`.
> * In a production build, Vite reads `VITE_API_BASE_URL` from the value hardcoded in the Dockerfile, baked in at build time.

---
### Step 4: Expose Local Servers for Teams

To test inside Teams, both servers must be publicly reachable.

```bash
# For frontend
ngrok http 5173
# For backend
ngrok http 3000
```

You’ll get two public URLs:

* Frontend → `https://<frontend-id>.ngrok-free.app`
* Backend → `https://<backend-id>.ngrok-free.app`

> **Note:**
> The ngrok frontend URL must be added to the allowed hosts in `teamsTab/vite.config.ts` for the development environment.

---
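The allowed-hosts note above maps to a small addition in the Vite config. A sketch, assuming Vite 6.1+ where `server.allowedHosts` is available; merge it with your existing `teamsTab/vite.config.ts` and replace the hostname with your own tunnel URL:

```typescript
// teamsTab/vite.config.ts (config fragment, not a full build setup)
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    // Placeholder: substitute the hostname ngrok assigned to the frontend.
    allowedHosts: ["<frontend-id>.ngrok-free.app"],
  },
});
```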
### Step 5: Configure Teams Manifest

Edit:

```
teamsTab/appPackageDev/manifest.json
```

Update:

```json
{
  "id": "<azure-client-id>",
  "contentUrl": "https://<frontend-id>.ngrok-free.app",
  "validDomains": [
    "<frontend-id>.ngrok-free.app",
    "<backend-id>.ngrok-free.app"
  ],
  "webApplicationInfo": {
    "id": "<azure-client-id>",
    "resource": "api://<frontend-id>.ngrok-free.app/<azure-client-id>"
  }
}
```

---
### Step 6: Sideload App in Teams

1. Zip the following from `teamsTab/appPackageDev/`:

   * `manifest.json`
   * `color.png`
   * `outline.png`
2. Go to **Microsoft Teams → Apps → Upload a custom app** and upload the zip.

You can now test the full AI agent directly in the Teams client.

---
## 🐳 Production / Deployment (Dockerized)

When you’re ready to deploy (e.g., on **Railway**, **Render**, or **AWS ECS**), use the provided `Dockerfile`.

### Dockerfile Overview

The Docker build:

* Installs dependencies
* Builds the frontend (`teamsTab/`)
* Builds the backend
* Starts the server using `appStartUp.sh`
* Serves the frontend and the backend agent from a single URL:
  - `http://localhost:3000/teamsTab`
  - `http://localhost:3000/v6/mcp/agent`

### Build and Run

```bash
docker build -t teams-ai-agent .
docker run -d -p 3000:3000 --env-file .env teams-ai-agent
```
> **💡 Note:**
>
> * `VITE_API_BASE_URL` is **hardcoded in the Dockerfile** using:
>
>   ```dockerfile
>   ENV VITE_API_BASE_URL=https://api.example.com
>   ```
>
>   This means the frontend is built with that value baked in at build time.
> * Configure environment variables **directly in your hosting platform’s dashboard** (such as **Railway**, **AWS ECS / Lightsail**, or **Render**); no `.env` file is needed there.
> * If you prefer using an `.env` file, include it at runtime with the `--env-file .env` option (as shown above).
---

### Optional: ngrok for Local Preview in Docker

You can still run:

```bash
ngrok http 3000
```

And use that public URL in your Teams manifest for quick cloud-like testing.

---

### Summary

| Mode           | How to Run                    | Notes                                             |
| -------------- | ----------------------------- | ------------------------------------------------- |
| **Local Dev**  | frontend + backend separately | Fast iteration, live reload                       |
| **Production** | `docker build && docker run`  | Uses built Vite files; NestJS serves the frontend |
| **Teams Test** | Use ngrok URLs                | Needed for Teams to access your local servers     |
