
An LLM-Augmented Semantic Expansion & Ideation Canvas. Build dynamic mind maps with generative AI autocomplete, Depth-First focus modes, and a hybrid Next.js + Python architecture.

upsilonyc/grafity


Introduction

Grafity is a graph-based, LLM-augmented knowledge canvas that enables non-linear yet organized ideation.

Beyond the basic components (nodes and edges), it also features:

  • Depth-First and Breadth-First views:
    • Clicking "Depth-First" and then selecting a node renders every node semi-transparent except that node and its parents (see the focus-view sketch after this list).
    • This lets users explore their thoughts in a linear manner while still preserving the non-linear nature of the graph.
  • AI Autocomplete:
    • Clicking "AI Autocomplete" asks an LLM to suggest a new node, with a relevant label and content, to connect to the selected node based on that node's content.
    • This feature augments the canvas in both depth and breadth: it helps the user dive deeper into one specific node while also surfacing new, connected ideas.
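
Below is a minimal sketch of the Depth-First focus behaviour, assuming the graph is held as a flat edge list. The names here (Edge, focusSet, opacityFor) are illustrative assumptions, not the repository's actual code.

```typescript
// Minimal sketch of the Depth-First focus view: keep the selected node and
// all of its ancestors opaque, dim everything else. Names are illustrative.

interface Edge {
  source: string; // parent node id
  target: string; // child node id
  label?: string;
}

// Collect the selected node plus all of its ancestors by walking
// incoming edges upward until no more parents are found.
function focusSet(selectedId: string, edges: Edge[]): Set<string> {
  const visible = new Set<string>([selectedId]);
  const stack = [selectedId];
  while (stack.length > 0) {
    const current = stack.pop()!;
    for (const edge of edges) {
      if (edge.target === current && !visible.has(edge.source)) {
        visible.add(edge.source);
        stack.push(edge.source);
      }
    }
  }
  return visible;
}

// Everything outside the focus set is rendered semi-transparent.
function opacityFor(nodeId: string, visible: Set<string>): number {
  return visible.has(nodeId) ? 1.0 : 0.25;
}
```

Scanning the full edge list on each step keeps the sketch short; a real implementation would more likely index parents by child id.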

Demo

Link to Demo Video

Things-to-consider

I. On Systemization

A. How should connections be layered logically and sensibly to minimize search effort?

Think about converging with cognitive mechanisms such as:

  • memory facilitation, e.g. chunking;
  • streams of thought, i.e. convergent / divergent;
  • representation of thoughts, i.e. symbols, analogies — think of the "illogical tunnels" that seem unsystematic but actually make threads of information more intuitive.

Use built-in LLMs to understand and predict connections.

B. What kind of input data is the product designed to handle, and why does the user need it?

Examples could be:

  • handling research papers for conducting literature review,
  • handling ideas for product design,
  • handling tasks for progress-tracking and management,
  • handling class notes & example practices for studying purposes,
  • helping navigate multi-party chats (e.g. reaching out to a professor, an advisor, etc. simultaneously to seek research opportunities).

The open question is: what single logical system, or umbrella of systems, makes all of these doable? Perhaps this should be left user-defined.

C. What forms of output (PDF, JPG, text, code, etc.) can the user take away?

The map can be exported to an image or PDF.

II. On Personalization

A. How can the product take variation in individual thinking patterns into consideration?

Just as with tree/graph traversal, some thinkers with explorer mindsets are intrinsically breadth-first, while others prefer depth-first thinking.

Should the product let the user choose which mode to work in, or should it adaptively optimize the representation? If the latter, how?

III. On Competitive Landscape

A. How can the product differentiate itself from existing mind-map applications?

TBD


Key Features

I. The Core (Addressing Things-to-consider I.B)

A. Infinite Canvas: Pan/Zoom/Drag capabilities.
B. CRUD Nodes: Double-click to create. Markdown support.
C. Fluid Linking: Drag from Node A to Node B to create a directional, named edge.

Edges are named/classified so as to support the Focus/Mining View described in Key Features Section II.B.

D. User Database (Yet to be implemented)

Planned fields: Password, Username, DisplayName, Chat (date, title, tag, type). A rough data-model sketch follows.
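
The sketch below shows what these core entities might look like as TypeScript types. Field names and shapes are assumptions for illustration only; the user database in particular has not been implemented yet.

```typescript
// Rough data-model sketch for the core entities described above.
// All names and types are illustrative assumptions, not the actual schema.

interface GraphNode {
  id: string;
  label: string;
  content: string;   // Markdown body
  x: number;         // canvas position
  y: number;
}

interface GraphEdge {
  id: string;
  source: string;    // id of the parent node
  target: string;    // id of the child node
  label: string;     // named/classified to support the Focus/Mining View
}

interface ChatRecord {
  date: string;      // ISO date of the session
  title: string;
  tag: string;
  type: string;
}

interface UserRecord {
  username: string;
  displayName: string;
  passwordHash: string;  // store a hash, never the plain-text password
  chats: ChatRecord[];
}
```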

II. The Specifics

A. AI Autocomplete (Addressing Things-to-consider Section I.A)

The user selects a node, clicks the button, and an LLM suggests a new connected node. This addresses the "Illogical Tunnel" problem. A request-flow sketch follows.
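
Below is a hedged sketch of the autocomplete round trip from the Next.js client to the Python backend. The /api/autocomplete route, request body, and response shape are hypothetical; the repository's actual API contract may differ.

```typescript
// Sketch of the autocomplete round trip. Route path, request shape, and
// response shape are illustrative assumptions only.

interface SuggestedNode {
  label: string;
  content: string;
}

async function requestAutocomplete(
  selectedLabel: string,
  selectedContent: string
): Promise<SuggestedNode> {
  // Hypothetical endpoint that proxies the request to the LLM on the Python side.
  const response = await fetch("/api/autocomplete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ label: selectedLabel, content: selectedContent }),
  });
  if (!response.ok) {
    throw new Error(`Autocomplete request failed: ${response.status}`);
  }
  // Expected to contain a label and content for the new node to attach.
  return (await response.json()) as SuggestedNode;
}
```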

B. BFS (Exploring) vs. DFS (Mining) Views (Addressing Things-to-consider Section II)
  • Exploration View: for BFS-comfortable users. They see the whole graph, zoom in/out, and organize spatially.
  • Mining/Focus View: for DFS-oriented users. Only the current node and its direct children/parents are shown, simulating a linear stream of thought within a non-linear graph.
C. Multimodal Output (Addressing Things-to-consider Section I.C) (Yet to be implemented)
  • The "Export map to..." option lets the user download their map/thoughts in image format (see the export sketch after this list).
  • The "Extract Summary" option produces an AI-generated summary of the current map. The summary can be stored in, later edited in, and re-exported from the "Note" space for each chat.
