
Your Playwright script runs. The selector finds the element. The data comes back clean. Then the site ships a new layout, a cookie banner appears in a language your script doesn't expect, and a modal blocks the next step. You fix the script. The site ships another update. You fix the script again.
At some point the maintenance math stops working. That's when developers start asking about AI agents.
This article is for developers who already use Playwright, Puppeteer, or Selenium and want an honest answer to whether AI agents change their automation stack — and when.
You've used them: Playwright, Puppeteer, Selenium.
A headless browser is a browser engine without a graphical interface, controlled by code. It loads pages, executes JavaScript, renders content, and follows the instructions your script provides. It's a powerful, precise tool — and that precision is both its strength and its constraint.
A headless browser does exactly what your script tells it to do. No more, no less. If your script says "click the button with ID submit-btn," it clicks that button. If the button is now named confirm-btn, the script fails.
That determinism is valuable in the right context. It's also the root of the maintenance problem.
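A minimal sketch of that failure mode in Playwright's Python API; the URL and selector IDs are placeholders for illustration:

```python
# Minimal sketch of the failure mode described above. The URL and
# selector IDs are placeholders for illustration.
from playwright.sync_api import sync_playwright, TimeoutError as PlaywrightTimeout

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/checkout")  # placeholder URL
    try:
        # Works only while the button keeps this exact ID.
        page.click("#submit-btn", timeout=5_000)
    except PlaywrightTimeout:
        # The site renamed it to #confirm-btn. The script cannot
        # recover on its own; it can only fail and wait for a human.
        print("selector #submit-btn no longer matches: time to fix the script")
    browser.close()
```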
The key distinction isn't capability — it's direction.
A headless browser is instruction-following: you write the exact steps, the browser executes them. An AI web agent is goal-directed: you describe the outcome you want, the agent works out the steps itself.
With Playwright: page.click('#submit-btn') — you specify the action.
With a web agent: "Submit the form and return the confirmation number" — you specify the goal.
TinyFish's Web Agent takes a URL and a plain-language goal, navigates the page, makes decisions about what to do next, handles unexpected states, and returns a structured result. The agent handles the selector logic. You handle the goal definition.
This comes with a real trade-off: agents are less predictable than scripts. If you need pixel-perfect determinism — test assertions against specific DOM states, for example — an agent introduces variability a script doesn't. For workflows where the goal matters more than the exact path, that trade-off favors agents.
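To make the contrast concrete, here is a sketch of the goal-directed pattern. The endpoint, field names, and response schema below are assumptions for illustration, not TinyFish's actual API; only the shape of the interaction (goal in, structured result out) is the point.

```python
# Hypothetical sketch of the goal-directed pattern. The endpoint,
# field names, and response schema are ASSUMPTIONS for illustration;
# see docs.tinyfish.ai for the real Web Agent API.
import requests

resp = requests.post(
    "https://api.tinyfish.ai/v1/agent/runs",  # illustrative endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "url": "https://example.com/checkout",  # placeholder target
        "goal": "Submit the form and return the confirmation number",
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # structured result; exact shape is illustrative
```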
For a deeper introduction to what web agents can do in practice, see What Is a Web Agent? The Complete Guide to AI Browser Agents in 2026.
| Feature | Playwright / Puppeteer | TinyFish Web Agent |
|---|---|---|
| **How you control it** | Write exact steps (click, fill, navigate) | Describe the goal in plain language |
| **Setup complexity** | Library install, you manage scripts | API call with goal string |
| **Maintenance over time** | High — selectors break when sites change | Lower — agent adapts to layout changes |
| **Handles unexpected states** | No — script fails on unexpected conditions | Yes — agent reasons about what to do next |
| **Infrastructure required** | You manage browser servers, proxies | Managed by TinyFish |
| **Determinism** | High — predictable, exact | Lower — path may vary between runs |
| **Cost at low volume** | Library free; infra + proxy + maintenance extra | $0.015/step PAYG — no infra overhead. Search & Fetch free. |
| **Cost at high volume** | Server fleet + proxy subscriptions + ongoing maintenance = significant ops cost | $0.012/step (Pro), all infra included |
| **Best for** | UI testing, known stable sites, local dev | Dynamic sites, goal-based tasks, production scale |
Where Playwright wins: control, predictability, local development, low-volume stable sites.
Where TinyFish wins: managed infrastructure, goal flexibility, sites with frequent layout changes.
Where neither tool has a clear edge on a row, it's a genuine draw, not diplomatic hedging.
Playwright isn't going anywhere. For the majority of browser automation tasks, it's still the right answer.
UI and functional testing. Playwright's native test runner, assertion API, and trace viewer are purpose-built for this. You need deterministic behavior, specific element state assertions, and reproducible test runs. Agents aren't designed for this job.
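For instance, a deterministic assertion with pytest-playwright; the URL and selectors are placeholders:

```python
# Deterministic UI assertion with pytest-playwright. The URL and
# selectors are placeholders. Run with: pytest test_checkout.py
from playwright.sync_api import Page, expect

def test_submit_shows_confirmation(page: Page):
    page.goto("https://example.com/checkout")
    page.fill("#email", "user@example.com")
    page.click("#submit-btn")
    # Asserts an exact DOM state, the kind of check that run-to-run
    # agent variability would undermine.
    expect(page.locator("#confirmation-number")).to_be_visible()
```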
Scraping a known site with a stable layout. Documentation sites, static product catalogs, public APIs that require browser rendering — if the structure doesn't change, a Playwright script is cheaper and more predictable than an agent.
Local development and prototyping. Working locally with Playwright gives you full visibility into browser state, easy step-through debugging, and zero API latency. Starting with Playwright and migrating specific workflows to agents is a valid progression.
When you need complete control. Custom browser configurations, low-level network interception, cookie manipulation, specific extension behavior — Playwright's architecture gives you direct access to things an agent abstracts away.
Cost-sensitive early stage. At very low volume on cooperative, stable sites, Playwright's library cost is lower on paper. Once you add browser server infrastructure, proxy costs, and maintenance time, TinyFish's per-step pricing is typically comparable — and often cheaper than the full stack.
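A back-of-envelope version of that comparison. Only the $0.015/step PAYG rate comes from the table above; every self-hosted number is an illustrative assumption to replace with your own figures.

```python
# Back-of-envelope comparison. Only the $0.015/step PAYG rate comes
# from the table above; every self-hosted number is an ILLUSTRATIVE
# ASSUMPTION to substitute with your own figures.
steps_per_month = 50_000

agent_cost = steps_per_month * 0.015  # PAYG rate, $/step

server_cost = 100         # assumed browser-server hosting, $/month
proxy_cost = 150          # assumed proxy subscription, $/month
maintenance_hours = 8     # assumed script-fix time, hours/month
hourly_rate = 75          # assumed engineering cost, $/hour
self_hosted = server_cost + proxy_cost + maintenance_hours * hourly_rate

print(f"agent:       ${agent_cost:,.0f}/month")   # $750/month
print(f"self-hosted: ${self_hosted:,.0f}/month")  # $850/month
```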
If your use case is on this list, you don't need a managed platform. Use Playwright.
Here's the pattern that pushes teams toward agents: the script breaks. Not because of a bug — because the site changed.
A cookie banner appeared in Dutch. A login modal loaded differently on mobile. A dynamic popup blocked the next click. A form added a new required field. A confirmation step now requires checking a box that wasn't there before.
Each of these is a 20-minute fix. Multiplied across dozens of target sites, updated continuously, it becomes a full-time maintenance load that grows with the number of sites you're monitoring.
AI agents absorb this maintenance overhead. The goal stays the same; the agent adapts to the changed path.
Scenarios where agents add clear value:

- Sites that ship layout changes frequently, where selector fixes never stop
- Workflows that hit unexpected states: cookie banners, login modals, dynamic popups, new required fields
- Monitoring dozens of target sites, where per-site script maintenance doesn't scale
- Goal-based tasks where the outcome matters more than the exact path taken
For specifics on where Playwright scripts degrade in production environments, see Scraping Dynamic Websites: When Playwright Breaks.

This isn't a binary choice. There's a spectrum — and TinyFish spans the right half of it.
Library (Playwright/Puppeteer): You write the script, you manage the infrastructure. Full control. Full maintenance responsibility.
Cloud browser (TinyFish Browser API): Same CDP interface, managed infrastructure. Your existing Playwright scripts connect with a one-line change. Browser servers, proxy routing, and reliability handling are managed. You keep the code.
AI Agent (TinyFish Web Agent): Submit a goal, get a structured result. The agent handles navigation, decision-making, extraction. You define what you want, not how to get it.
TinyFish spans the cloud browser and agent layers with one API key and one billing model. Use it as a CDP-compatible cloud browser — your existing Playwright code runs unchanged, you swap the launch endpoint. Or use it as a Web Agent — submit a goal, get a result. The choice is per-task, not per-account.
You're not choosing between Playwright and TinyFish. You're choosing where on this spectrum your specific task sits — and whether the infrastructure overhead of running it yourself is worth it.
The Web Agent API documentation is at docs.tinyfish.ai.
For a view of how teams move from Selenium to agents over time, see From Selenium to AI Agents: A Migration Guide for Web Automation Teams.
Yes. TinyFish's Browser API exposes a CDP (Chrome DevTools Protocol) endpoint. Playwright's connect_over_cdp() method connects to any CDP endpoint. Create a TinyFish browser session, then pass the returned cdp_url to playwright.chromium.connect_over_cdp(session.cdp_url). Everything after that line — selectors, interactions, extraction logic — runs unchanged. You get managed infrastructure without rewriting your automation code.
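In code, that looks like the sketch below. connect_over_cdp() is standard Playwright; how you create the TinyFish session and obtain cdp_url isn't shown in this article, so a placeholder stands in for that step.

```python
# connect_over_cdp() is standard Playwright. How you create the
# TinyFish session and obtain cdp_url isn't shown in this article,
# so the placeholder below stands in for that step.
from playwright.sync_api import sync_playwright

cdp_url = "wss://..."  # session.cdp_url from a TinyFish browser session

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(cdp_url)  # the one-line change
    context = browser.contexts[0] if browser.contexts else browser.new_context()
    page = context.new_page()
    # Everything from here down is your existing automation code, unchanged.
    page.goto("https://example.com")
    print(page.title())
    browser.close()
```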
The Browser API gives you a CDP-accessible managed browser — you write the Playwright or Puppeteer code that controls it. The Web Agent takes a goal in plain language and works out the steps itself, returning a structured result. Both use the same API key and billing pool. Browser API is for teams with existing Playwright code who want managed infrastructure. Web Agent is for goal-based tasks where writing the exact step sequence is the bottleneck.
It depends on why the scripts fail. When sites change layouts frequently, agents are more resilient: they navigate by page semantics rather than fixed selectors, so they adapt to changed paths. If scripts fail because of infrastructure issues (IP reputation, session handling, scale), moving to managed infrastructure helps regardless of whether you use scripts or agents. And if your scripts already work reliably, agents don't add reliability; they add flexibility for goal-based tasks at the cost of some determinism.
The agent handles multi-step workflows, including authenticated sessions: you describe the goal ("log in and extract the invoice list from my account") and the agent manages the navigation. For accounts you are authorized to access, use use_vault: true with stored credential items rather than passing credentials in the goal string.
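A hypothetical request shape for a vaulted-credential workflow. The use_vault flag is described above; the endpoint and the other field names are illustrative assumptions.

```python
# Hypothetical request shape for a vaulted-credential workflow. The
# use_vault flag is described above; the endpoint and the other
# field names are illustrative assumptions.
import requests

resp = requests.post(
    "https://api.tinyfish.ai/v1/agent/runs",  # illustrative endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "url": "https://example.com/login",  # placeholder target
        "goal": "Log in and extract the invoice list from my account",
        "use_vault": True,  # pull stored credentials; keep them out of the goal string
    },
    timeout=300,
)
print(resp.json())
```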
UI and functional testing — you need deterministic assertions against specific DOM states, and agent variability breaks this requirement. Low-volume automation against stable, cooperative sites — the economics don't favor agents. Any workflow where you need byte-for-byte reproducibility across runs. And tasks where you need to debug exact failure points in a selector sequence — agents abstract that information away.
No credit card. No setup. Run your first operation in under a minute.