Production-oriented, modular chat interface for CortexLTM memory workflows.

- Improved streaming "thinking" UX while waiting for agent responses:
  - shows one live step at a time (no step counters)
  - rotates tool-specific progress copy for Gmail, Calendar, Drive, and Web Search
  - avoids duplicate loader text when the headline and active step match
- Improved reaction reliability for streamed assistant messages:
  - reaction calls now resolve temporary UI IDs to persisted event IDs before POSTing
- Added a production-oriented Supabase auth flow with:
  - email/password sign-in
  - create-account flow
  - OAuth sign-in (Google, GitHub)
- Added secure session handling via HTTP-only cookies.
- Added auth API routes under `src/app/api/auth/*`.
- Added an `/auth/callback` flow for OAuth session finalization.
- Updated chat API proxying to forward bearer auth to CortexLTM.
- Added `AUTH_MODE` support (`dev` and `supabase`) for easier OSS onboarding.
- Added soul-contract injection for local/demo provider calls via `CORTEX_SOUL_SPEC_PATH` or `../CortexLTM/soul/SOUL.md`.
- Improved CortexLTM error propagation so UI routes return upstream status/details instead of generic 503s.
- Added delete-confirmation loading UI using the shared brain loader.
- Hardened assistant message rendering for streamed markdown code blocks:
  - supports fenced blocks that start after list prefixes
  - tolerates trailing text on closing fences
  - preserves code rendering when a model omits a closing fence
- Improved chat thread switching UX:
  - added transition/loading states when changing threads
  - the composer is disabled during transitions to prevent cross-thread sends

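The reaction-reliability fix above can be sketched roughly as follows. All names here (`persistedIds`, `recordPersistedId`, `resolveEventId`) are illustrative, not the actual CortexUI implementation:

```typescript
// Maps optimistic temp IDs (assigned while a message streams) to the
// persisted event IDs returned once the assistant message is stored.
const persistedIds = new Map<string, string>();

// Called when the backend returns the real event ID for a streamed message.
function recordPersistedId(tempId: string, eventId: string): void {
  persistedIds.set(tempId, eventId);
}

// Reaction calls resolve through this before POSTing to the reaction route.
function resolveEventId(messageId: string): string {
  // IDs that are already persisted pass through unchanged.
  return persistedIds.get(messageId) ?? messageId;
}
```

The pass-through case matters: reactions on non-streamed messages already carry a persisted event ID and must not be remapped.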
This project is in early development. APIs, UI behavior, and module boundaries may change quickly.
- Next.js App Router + TypeScript
- Tailwind CSS
- Framer Motion
- CortexLTM HTTP memory backend integration
- Install dependencies:

  ```bash
  npm install
  ```

- Copy the env template:

  ```bash
  # macOS/Linux
  cp .env.example .env.local

  # Windows (cmd)
  copy .env.example .env.local
  ```
- Set required values in `.env.local`:
  - `CORTEX_MEMORY_BACKEND=cortex_http`
  - `CORTEX_API_BASE_URL` (for example: `http://127.0.0.1:8000`)
  - Optional `CORTEX_API_KEY` (must match `CORTEXLTM_API_KEY` when backend auth is enabled)
  - `AUTH_MODE=dev` (or `supabase` when the backend enforces bearer tokens)
  - `APP_ORIGIN` (for example: `http://localhost:3000`, used for OAuth callback URLs)
  - `NEXT_PUBLIC_SUPABASE_URL` and `NEXT_PUBLIC_SUPABASE_ANON_KEY` when using Supabase auth
  - Keep `CHAT_DEMO_MODE=false` for real backend chat (set to `true` only for isolated local UI demos)
  - Optional `CORTEX_SOUL_SPEC_PATH` (absolute or workspace-relative path to `SOUL.md`)
- Start the development server:

  ```bash
  npm run dev
  ```

Open http://localhost:3000.
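The required variables above could be validated once at startup. A minimal sketch, with variable names taken from this README but the loader shape itself purely illustrative (not CortexUI code):

```typescript
type AppConfig = {
  memoryBackend: string;
  apiBaseUrl: string;
  apiKey?: string;
  authMode: "dev" | "supabase";
  demoMode: boolean;
};

type Env = Record<string, string | undefined>;

// Fail fast with a clear message when a required variable is absent.
function requireEnv(env: Env, name: string): string {
  const value = env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

// Call as loadConfig(process.env) at startup.
function loadConfig(env: Env): AppConfig {
  return {
    memoryBackend: requireEnv(env, "CORTEX_MEMORY_BACKEND"), // e.g. "cortex_http"
    apiBaseUrl: requireEnv(env, "CORTEX_API_BASE_URL"),      // e.g. "http://127.0.0.1:8000"
    apiKey: env.CORTEX_API_KEY,                              // optional
    authMode: env.AUTH_MODE === "supabase" ? "supabase" : "dev",
    demoMode: env.CHAT_DEMO_MODE === "true",                 // keep false for real backend chat
  };
}
```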
Before starting CortexUI, run the CortexLTM API:

```bash
uvicorn cortexltm.api:app --host 0.0.0.0 --port 8000
```

| Route | Description |
| --- | --- |
| `GET /api/auth/session` | current auth state |
| `POST /api/auth/sign-in` | email/password sign-in |
| `POST /api/auth/sign-up` | email/password account creation |
| `POST /api/auth/oauth/start` | start OAuth login (Google/GitHub) |
| `POST /api/auth/sign-out` | clear local auth cookies |
| `GET /api/chat/threads` | list threads for the resolved user |
| `POST /api/chat/threads` | create a thread |
| `GET /api/chat/[threadId]/messages` | fetch recent messages |
| `POST /api/chat/[threadId]/messages` | proxy chat requests to CortexLTM (`/v1/threads/{threadId}/chat`) |
| `POST /api/chat/[threadId]/messages/[messageId]/reaction` | save/clear a reaction on assistant messages (`thumbs_up`, `heart`, `angry`, `sad`, `brain`) |
| `PATCH /api/chat/[threadId]` | rename a thread |
| `DELETE /api/chat/[threadId]` | delete a thread |
| `POST /api/chat/[threadId]/promote` | promote a thread to core memory |
| `GET /api/chat/[threadId]/summary` | fetch the active summary (optional) |
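As an example of calling one of these routes from a client, here is a hypothetical helper for the reaction endpoint. The request body shape (`{ reaction }`) and the helper name are assumptions based only on the route description above; the fetch function is injected so the sketch stays self-contained:

```typescript
type Reaction = "thumbs_up" | "heart" | "angry" | "sad" | "brain" | null;

// Narrow fetch-like signature so the helper can be exercised without a server.
type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string },
) => Promise<{ ok: boolean; status: number }>;

async function saveReaction(
  fetchImpl: FetchLike,
  threadId: string,
  messageId: string,
  reaction: Reaction, // null clears the reaction
): Promise<boolean> {
  const res = await fetchImpl(
    `/api/chat/${threadId}/messages/${messageId}/reaction`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ reaction }),
    },
  );
  return res.ok;
}
```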
`POST /api/chat/[threadId]/messages` enforces:

1. `addUserEvent(...)` (`source: "chatui"`) in CortexLTM
2. `buildMemoryContext(...)` in CortexLTM
3. model generation in CortexLTM
4. `addAssistantEvent(...)` (`source: "chatui_llm"`) in CortexLTM

This preserves summary trigger timing that depends on assistant writes.
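The enforced ordering can be expressed as a sequential pipeline. The step functions below are stand-ins for the real CortexLTM calls, not their actual signatures; the point is only that the assistant write runs last, since summary triggers depend on it:

```typescript
type Step = () => Promise<void>;

interface ChatTurnSteps {
  addUserEvent: Step;       // source: "chatui"
  buildMemoryContext: Step;
  generate: Step;           // model generation in CortexLTM
  addAssistantEvent: Step;  // source: "chatui_llm" -- must run last
}

// Each step is awaited before the next begins, so no step can reorder
// around the assistant write.
async function runChatTurn(steps: ChatTurnSteps): Promise<void> {
  await steps.addUserEvent();
  await steps.buildMemoryContext();
  await steps.generate();
  await steps.addAssistantEvent();
}
```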
Assistant message reactions:
- Reactions are behind a compact per-message toggle and expand inline on click/tap.
- Selecting the `brain` reaction triggers an immediate upstream summary write in CortexLTM.
- Reaction updates are optimistic and do not auto-scroll the transcript.
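The optimistic-update behavior above follows a standard apply-then-rollback pattern. A minimal sketch with hypothetical names (not the real CortexUI code):

```typescript
type Reaction = "thumbs_up" | "heart" | "angry" | "sad" | "brain" | null;

async function reactOptimistically(
  current: Reaction,
  next: Reaction,
  applyToUi: (r: Reaction) => void, // updates local state only; no scrolling
  save: () => Promise<boolean>,     // POSTs to the reaction route
): Promise<Reaction> {
  applyToUi(next);                  // optimistic: show the reaction immediately
  const ok = await save();
  if (!ok) applyToUi(current);      // roll back to the previous reaction on failure
  return ok ? next : current;
}
```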
- In `AUTH_MODE=supabase`, users must sign in before chat routes initialize.
- CortexUI stores Supabase access/refresh tokens in HTTP-only cookies and forwards bearer auth to CortexLTM.
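The bearer-forwarding step can be sketched as a small header builder. This is illustrative, not the actual CortexUI code; only the standard `Authorization: Bearer` header is assumed:

```typescript
// Build headers for the upstream CortexLTM request, forwarding the Supabase
// access token (read from the HTTP-only cookie) as a bearer header when present.
function upstreamHeaders(accessToken: string | undefined): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (accessToken) {
    headers["Authorization"] = `Bearer ${accessToken}`;
  }
  return headers;
}
```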
- CortexLTM HTTP integration is isolated in `src/lib/memory/cortex-http-provider.ts`.
- For local/demo provider mode (`CHAT_DEMO_MODE=true` or local threads), CortexUI prepends the soul contract before model calls.
- Additional design/implementation details live in `ARCHITECTURE.md`; active work items are tracked in `TODO.md`.