Tell the AI what to do once. Get a Python script you can run forever.
PyBA uses LLMs to autonomously navigate any website, then exports the session as a standalone Playwright script - no API costs on repeat runs.
PyPI • Documentation • OpenHub
Every AI browser agent has the same issue: you pay for every single run.
- Run it 100 times? Pay for 100 LLM calls.
- Same task every day? Pay every day.
- The AI figures out the same clicks over and over.
PyBA is different. Let the AI figure it out once, then export a deterministic script you own forever.
```python
from pyba import Engine

engine = Engine(openai_api_key="sk-...")

# Step 1: AI navigates autonomously
engine.sync_run(
    prompt="Go to Hacker News, click the top story, extract all comments"
)

# Step 2: Export as a standalone Playwright script
engine.generate_code(output_path="hacker_news_scraper.py")
```

Now run `python hacker_news_scraper.py` forever. No AI. No API costs. Just Playwright.
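The exported file is plain Playwright code. The actual generated script depends on the run and on PyBA's code generator, but it is roughly along these lines (illustrative sketch only, with approximate selectors):

```python
# Illustrative sketch - not actual PyBA output; selectors are approximate
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://news.ycombinator.com")
    page.locator(".titleline > a").first.click()            # open the top story
    comments = page.locator(".commtext").all_inner_texts()  # collect every comment
    print("\n\n".join(comments))
    browser.close()
```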
```
pip install py-browser-automation
```

```python
engine.sync_run(
    prompt="Login to my bank, download this month's statement as PDF",
    automated_login_sites=["swissbank"]
)
engine.generate_code("download_statement.py")
```

```python
from pyba import DFS

dfs = DFS(openai_api_key="sk-...")
dfs.sync_run(
    prompt="Find all social media accounts linked to username 'targetuser123'"
)
```

```python
from pydantic import BaseModel

class Product(BaseModel):
    name: str
    price: float
    rating: float

engine.sync_run(
    prompt="Scrape all products from the first 3 pages",
    extraction_format=Product
)
# Data is extracted DURING navigation, stored in your database
```

```python
engine.sync_run(
    prompt="Go to my Instagram DMs and message john Paula 'Running 10 mins late'",
    automated_login_sites=["instagram"]
)
# Credentials come from env vars - never exposed to the LLM
```

| Mode | Use Case | Example |
|---|---|---|
| Normal | Direct task execution | "Fill out this form and submit" |
| DFS | Deep investigation | "Analyze this GitHub user's contribution patterns" |
| BFS | Wide discovery | "Map all pages linked from this homepage" |
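For example, the wide-discovery row above might translate into a run like this (a sketch assuming `BFS` exposes the same `sync_run` interface shown for `Engine` and `DFS`):

```python
from pyba import BFS

bfs = BFS(openai_api_key="sk-...")

# Sketch: assumes BFS shares the sync_run interface used by Engine and DFS
bfs.sync_run(
    prompt="Map all pages linked from this homepage"
)
```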
```python
from pyba import Engine, DFS, BFS

# Normal mode (default)
engine = Engine(openai_api_key="...")

# Depth-first exploration
dfs = DFS(openai_api_key="...")

# Breadth-first discovery
bfs = BFS(openai_api_key="...")
```

- Export any successful run as a standalone Python script. Run it forever without AI.
- Every run generates a Playwright trace.zip — replay exactly what happened in Trace Viewer.
- Anti-fingerprinting, random mouse movements, human-like delays. Bypass common bot detection.
- Works with OpenAI, Google VertexAI, or Gemini.
- Store every action in SQLite, PostgreSQL, or MySQL. Audit trails and replay capability.
- Built-in login handlers for Instagram, Gmail, Facebook. Credentials stay in env vars.
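For the built-in login handlers, you might provide credentials like this (the variable names are illustrative; check the docs for the exact keys PyBA reads):

```python
import os

# Illustrative variable names - check the docs for the exact keys PyBA expects.
# In practice, set these in your shell or a .env file rather than in code.
os.environ["INSTAGRAM_USERNAME"] = "your_username"
os.environ["INSTAGRAM_PASSWORD"] = "your_password"

engine.sync_run(
    prompt="Check my Instagram notifications",
    automated_login_sites=["instagram"]
)
```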
```python
engine.sync_run(
    prompt="Go to this YouTube video and extract: title, view count, like count, channel name, upload date"
)
```

```python
engine.sync_run(
    prompt="Fill out the job application: Name='John Doe', Email='john@email.com', upload resume from ~/resume.pdf, submit"
)
engine.generate_code("job_application.py")  # Replay anytime
```

```python
dfs = DFS(openai_api_key="...")
dfs.sync_run(
    prompt="Find the leadership team, recent news, and funding history for Acme Corp"
)
```

```python
from pyba import Engine, Database

# With database logging
db = Database(engine="sqlite", name="runs.db")

engine = Engine(
    openai_api_key="sk-...",
    headless=False,       # Watch it work
    enable_tracing=True,  # Generate trace.zip
    max_depth=20,         # Max actions per run
    database=db           # Log everything
)
```

See full configuration options in the docs.
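Putting the pieces together, an end-to-end run with this configured engine might look like the sketch below, reusing only the calls shown above (the prompt and output filename are placeholders):

```python
# Run a task with the configured engine; every action is logged to runs.db,
# and enable_tracing=True produces a trace.zip for replay in Trace Viewer.
engine.sync_run(
    prompt="Collect the titles of the ten most recent posts on the example blog"
)

# Export the successful run as a standalone Playwright script
engine.generate_code(output_path="blog_titles.py")
```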
PyBA was built for automated intelligence and OSINT — replicating everything a human analyst can do in a browser, but with reproducibility and speed.
If you're doing security research, competitive intelligence, or just automating tedious browser tasks, this is for you.
v0.3.0 - Active development. First stable release: December 18, 2025.
Breaking changes may occur. Pin your version in production.
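For example:

```
pip install py-browser-automation==0.3.0
```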
If PyBA saved you time, consider giving it a ⭐