
Conversation

@speakman (Owner)

Summary

  • add __main__ module to allow python -m llmcontext (see the sketch below)
  • switch verbose messaging to logging
  • warn if generated context exceeds ~1M tokens
  • bump version to 0.1.1 and document changes
  • adjust tests for new invocation
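
A minimal sketch of the new entry module, assuming the package's CLI entry point is a main() function in llmcontext.llmcontext (the import path and function name are assumptions, not confirmed by this page):

# llmcontext/__main__.py -- sketch only
import sys

from llmcontext.llmcontext import main  # assumed location of the CLI entry point

if __name__ == "__main__":
    sys.exit(main())  # forward the CLI's exit status to the shell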

Testing

  • pytest -q
  • python -m llmcontext --version

https://chatgpt.com/codex/tasks/task_e_68496b337ef0832b8c4bd6baa9a9b279

@speakman speakman requested a review from Copilot June 11, 2025 12:02
@speakman speakman merged commit f6e1a43 into main Jun 11, 2025
1 check passed

Copilot AI left a comment


Pull Request Overview

This PR adds a __main__ module to allow execution via python -m llmcontext, swaps print statements for logging calls, and warns when the generated context might exceed 1M tokens.

  • Enable CLI invocation via a dedicated __main__ module
  • Replace direct print calls with logging to support verbosity control
  • Update the version and changelog accordingly

Reviewed Changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.

File | Description
tests/test_metadata.py | Updated CLI invocation to use the new __main__ module interface
llmcontext/llmcontext.py | Replaced several print calls with logging calls
llmcontext/__main__.py | Added entry point to support python -m llmcontext
llmcontext/__init__.py | Bumped version to 0.1.1
CHANGELOG.md | Documented the changes in this release
Comments suppressed due to low confidence (1)

llmcontext/llmcontext.py:440

  • Consider using logger.warning instead of logger.info here (and similarly in subsequent exception handling blocks) to better reflect the severity of file access errors.
logger.info("Warning: Could not access/process file %s: %s", filepath_rel_posix, e)
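
A sketch of the suggested change, assuming the flagged line sits in a try/except around per-file processing (the process_file helper and the OSError type are illustrative, not taken from the PR):

try:
    process_file(filepath_rel_posix)  # illustrative helper, not from the PR
except OSError as e:
    # warning rather than info: failing to access a file is an abnormal condition
    logger.warning("Could not access/process file %s: %s", filepath_rel_posix, e)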

output_text = generate_project_context(
    root_dir_abs, args.exclude, output_file_abs_path, args.verbose
)


Copilot AI Jun 11, 2025


The current token estimate is based on splitting the output text by whitespace. Consider documenting that this is an approximation of token count for LLM context purposes.

Suggested change
# Approximate token count based on splitting the text by whitespace.
# Note: This does not reflect the actual tokenization process used by LLMs,
# which may result in a different token count due to subword tokenization and special characters.
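
To make the suggestion concrete, a sketch of how the estimate and the ~1M-token warning might fit together (the threshold value and message wording are assumptions, not taken from the PR):

# Approximate token count based on splitting the text by whitespace.
# Note: this does not reflect the actual tokenization used by LLMs, which
# may yield a different count due to subword tokenization and special characters.
approx_tokens = len(output_text.split())
if approx_tokens > 1_000_000:  # ~1M tokens; assumed threshold
    # logger is the module-level logger already used in llmcontext.py
    logger.warning(
        "Generated context is roughly %d tokens and may exceed typical LLM context windows.",
        approx_tokens,
    )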
