
Playwright is running fine. Then you scale to 500 concurrent sessions. Or you add sites with strict automation requirements and reliability drops. Or you realize your team spends more time managing browser infrastructure than building the product.
That's usually when the search starts.
This article is for developers who already use Playwright and want to understand exactly where TinyFish fits — and where it doesn't. Not a takedown. A map.
Playwright is one of the best browser automation tools available. It controls real Chromium, Firefox, and WebKit browsers with a clean, well-designed API. The ecosystem is mature: thorough documentation, active maintenance from the Microsoft team, and broad community support.
For developers, the appeal is direct. You write the code. You control everything. Intercept requests, manage cookies, record HAR files, take screenshots, assert DOM elements. The degree of control Playwright gives you over browser behavior is genuinely hard to replicate elsewhere.
The critical thing to understand about Playwright: it's a library. It doesn't provide infrastructure. You write the code, you run the infrastructure. That's not a criticism — it's the design. Playwright does exactly what it was built to do.
The infrastructure model that makes Playwright flexible at small scale becomes operational overhead as you grow.
Browser server management. Running Playwright at scale means managing fleets of headless Chromium instances — 100–300MB memory per instance. At 50 concurrent sessions, you're making decisions about server scaling, spot instance reliability, and container orchestration.
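To make the sizing concrete, here is a back-of-envelope sketch. The per-instance memory figure comes from the range above; the server size and headroom are illustrative assumptions, not recommendations:

```python
# Back-of-envelope fleet sizing for self-hosted headless Chromium.
# mb_per_instance uses the top of the 100-300MB range cited above;
# server_ram_gb and headroom are placeholder assumptions.

def servers_needed(concurrent_sessions: int,
                   mb_per_instance: int = 300,
                   server_ram_gb: int = 16,
                   headroom: float = 0.25) -> int:
    """Estimate how many servers a given session count requires."""
    usable_mb = server_ram_gb * 1024 * (1 - headroom)  # leave room for OS and overhead
    per_server = int(usable_mb // mb_per_instance)
    # Ceiling division: a partial server still has to be provisioned
    return -(-concurrent_sessions // per_server)

print(servers_needed(50))   # a couple of boxes
print(servers_needed(500))  # a fleet, with orchestration to match
```

The absolute numbers matter less than the shape: every 10x in concurrency is another 10x in servers to provision, patch, and monitor.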
Proxy configuration. Sites with strict automation requirements often need residential IP rotation. Building proxy rotation into Playwright means integrating a proxy provider, handling session affinity, managing per-request IP logic, and debugging reliability when rotation fails or IP reputations degrade.
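As a rough illustration of one piece you own when self-hosting, here is a minimal round-robin proxy selector. The endpoints and credentials are placeholders; Playwright does accept a `proxy` option at launch, but a real residential setup layers session affinity, geo targeting, and health checks on top of this:

```python
from itertools import cycle

# Minimal sketch of per-session proxy rotation. The endpoints and
# credentials below are placeholders, not real infrastructure.

PROXIES = cycle([
    {"server": "http://proxy-1.example.com:8080", "username": "user", "password": "secret"},
    {"server": "http://proxy-2.example.com:8080", "username": "user", "password": "secret"},
])

def next_proxy() -> dict:
    """Round-robin selection; one of the components you build and debug yourself."""
    return next(PROXIES)

# Playwright takes the proxy at launch time:
# browser = await p.chromium.launch(proxy=next_proxy())
```

This is the easy part. The hard part is what the paragraph above describes: deciding what to do when rotation fails mid-session or an IP's reputation degrades.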
Maintenance overhead. When a target site changes its structure or tightens its automation requirements, your scraper breaks. Diagnosing and fixing these failures is real engineering work — work that doesn't ship product, but blocks pipelines.
Reliability degrades with volume. A Playwright script that works at 10 requests/hour can fail at 500/hour — not because the code changed, but because infrastructure factors create failure modes that weren't visible at low volume. Setting up reliable automation on sites with strict requirements takes significant infrastructure work with Playwright alone.
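The failure handling that lets a script survive higher volume is code you write yourself. Here is a minimal sketch of that retry machinery, with an assumed policy of three attempts and exponential backoff with jitter:

```python
import random
import time

# Illustrative sketch of the retry/backoff wrapper teams end up writing
# around Playwright tasks at volume. The policy (3 attempts, exponential
# backoff with jitter) is an assumption, not a recommendation.

def with_retries(task, attempts: int = 3, base_delay: float = 1.0):
    """Run task(); on failure, sleep with exponential backoff and retry."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise
            # base, 2x base, 4x base... plus jitter so retries don't synchronize
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))
```

Wrappers like this multiply: timeouts, per-domain rate limits, dead-letter queues. Each is small; together they are the infrastructure problem.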
If your targets are static pages on cooperative domains at modest volume, this overhead is manageable. If you're running production pipelines at scale across varied sites, the infrastructure becomes a full-time engineering problem.
For more on where Playwright's reliability degrades by site type, see Scraping Dynamic Websites: When Playwright Is the Right Tool.
Your existing Playwright scripts work. The change is minimal: create a TinyFish browser session, then connect your Playwright code to it.
TinyFish exposes a Browser API that returns a CDP WebSocket endpoint. Since Playwright's connect_over_cdp() method already speaks CDP, connecting to TinyFish instead of launching a local browser requires two lines — create a session, then connect. Everything after that is unchanged:
```python
import asyncio

from playwright.async_api import async_playwright
from tinyfish import TinyFish


async def main() -> None:
    async with async_playwright() as p:
        # Before: local browser
        # browser = await p.chromium.launch()

        # After: TinyFish managed browser. Create a session, then connect.
        session = TinyFish().browser.sessions.create()
        browser = await p.chromium.connect_over_cdp(session.cdp_url)

        page = await browser.new_page()
        await page.goto("https://example.com")
        content = await page.content()
        await browser.close()


asyncio.run(main())
```

Everything after the connection line — your selectors, page interactions, extraction logic — stays identical.
What changes is the infrastructure layer. Browser servers are managed, residential proxy routing is included, and cold starts come in under 250ms. TinyFish runs a native Chromium-based browser session with request handling built in at the infrastructure layer, the same layer where the browser itself initializes, rather than as a plugin applied after the browser starts.
The biggest difference between TinyFish and Playwright isn't what the browser can do — it's what you no longer have to build. TinyFish removes the infrastructure layer entirely: browser servers, proxy management, reliability handling. Your Playwright code connects unchanged; the operational overhead disappears.
One distinction worth noting: TinyFish also offers a Web Agent layer for goal-based tasks, separate from the Browser API. If your use case is "navigate to X, extract Y, handle the flow autonomously," the Agent endpoint may serve you better than CDP. But for teams with existing Playwright code who want to remove browser infrastructure, the Browser API is the direct path.
See also: [Headless Browser vs AI Agents for Web Automation](LINK_PLACEHOLDER:headless-browser-vs-ai-agents)
| Feature | Playwright | TinyFish Browser API |
|---|---|---|
| Hosting model | Self-hosted (you manage servers) | Cloud-managed |
| Setup | Library install; infra setup varies | API key + one line |
| Maintenance | You own: server ops, proxy config, failure debugging | Managed |
| Browser cold start | Depends on your hardware | <250ms |
| Infrastructure reliability | Depends on your implementation | Engine-level, included |
| Max concurrent (managed) | Limited by your servers | 10 (Starter) / 50 (Pro) |
| Cost at low volume | Low — library is free | $0.015/step PAYG |
| Cost at high volume | Server + proxy + engineering time | $0.012/step (Pro) |
| CDP compatible | Native | ✅ — existing Playwright scripts work |
| Full browser control | ✅ Complete | ✅ via CDP |
Where Playwright wins a row, the table says so. The comparison is meant to be accurate, not promotional.
One context the table doesn't capture: Playwright's "low" cost at low volume is the library cost. The real cost at scale includes server infrastructure, proxy subscriptions, and the engineering time spent on maintenance. That's what TinyFish replaces — and it's why the economics shift faster than the per-step price suggests.
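A rough break-even sketch makes the point. The per-step price comes from the table above; every self-hosted figure (server cost, proxy subscription, engineering hours, hourly rate) is a placeholder assumption to replace with your own numbers, and the managed model here is deliberately simplified, ignoring steps already included in a plan:

```python
# Illustrative monthly cost comparison. Only the $0.012/step and $150/mo
# figures come from the table above; everything else is a placeholder.

def self_hosted_monthly(servers: int, server_cost: float = 80.0,
                        proxy_sub: float = 500.0,
                        eng_hours: float = 20.0, hourly_rate: float = 100.0) -> float:
    """Servers + proxy subscription + maintenance engineering time."""
    return servers * server_cost + proxy_sub + eng_hours * hourly_rate

def managed_monthly(steps: int, per_step: float = 0.012, base: float = 150.0) -> float:
    """Simplified: plan base plus per-step charges for every step."""
    return base + steps * per_step

# Under these assumptions, a 5-server self-hosted fleet runs $2,900/month;
# that same budget covers well over 200,000 managed steps.
print(self_hosted_monthly(servers=5))
print(managed_monthly(steps=100_000))
```

Whatever your real inputs, the engineering-time term is usually the one that dominates, and it is the one the per-step price quietly absorbs.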
Use Playwright here. It's the right tool.
UI and functional testing. Playwright is purpose-built for testing — test runners, assertions, traces, screenshot comparison. TinyFish is a data extraction and automation platform, not a testing framework.
Simple, stable targets. Working with a small number of cooperative sites whose structure doesn't change? If you're comfortable managing the browser process and the maintenance load is genuinely zero, Playwright is the leaner choice. The threshold isn't about volume — it's about whether infrastructure has become a problem yet.
Local development and prototyping. Building and debugging automation flows is faster with local Playwright — full visibility into browser state, easy debugging, no API latency.
When you need complete control. Custom browser configurations, extension loading, low-level network manipulation — Playwright's architecture gives direct access to things the Browser API abstracts.
Early-stage validation. If you're running quick experiments to test whether a data source is useful before committing to a pipeline, Playwright locally is the fastest path to an answer.
TinyFish's value is clearest when the infrastructure problem becomes visible. That moment comes sooner than most teams expect.
Production pipelines at scale. When thousands of concurrent sessions make browser infrastructure an ops problem, managed infrastructure is worth the per-step cost.
Teams that don't want to own browser ops. If browser infrastructure isn't a core competency you want to build, outsourcing it is rational — your engineers ship product, not browser fleet maintenance.
Sites with strict automation requirements. Sites that need infrastructure-level handling, where JavaScript-layer plugins fail at scale, are the cases TinyFish was built for: request handling happens at the layer where the browser itself initializes, not bolted on after the fact.
Goal-based multi-step tasks. When your automation requires reasoning about what to do next rather than a fixed selector sequence, the Web Agent layer handles tasks that would require complex conditional Playwright logic.
Mixed complexity URL lists. If your list includes simple pages alongside sites that require more sophisticated handling, routing to TinyFish selectively while keeping Playwright for simple cases is a valid tiered approach.
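A tiered setup can start as a simple routing function. The domain list here is a placeholder; in practice you would classify targets by observed failure rates rather than by hand:

```python
from urllib.parse import urlparse

# Sketch of the tiered routing described above. STRICT_DOMAINS is a
# hypothetical, hand-maintained list; a real system would derive it
# from per-domain success metrics.

STRICT_DOMAINS = {"strict-site.example.com", "hard-target.example.org"}

def route(url: str) -> str:
    """Return which backend should handle this URL."""
    host = urlparse(url).hostname or ""
    return "tinyfish" if host in STRICT_DOMAINS else "local-playwright"
```

Because both backends speak the same Playwright API, the routing decision is the only code that differs between the two paths.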
Yes. This is the most important point in this article: Playwright and TinyFish are not competing tools. They're different layers.
Playwright is the automation library. TinyFish is the infrastructure layer. You can run them in parallel — Playwright for tests and simple sites, TinyFish for production-scale or complex extraction — without any conflict.
The migration is genuinely minimal. Here's a complete working example:
```python
import asyncio

from playwright.async_api import async_playwright
from tinyfish import TinyFish


async def scrape(url: str) -> str:
    async with async_playwright() as p:
        # Replace launch() with a TinyFish session; the rest of the code is unchanged
        session = TinyFish().browser.sessions.create()
        browser = await p.chromium.connect_over_cdp(session.cdp_url)

        page = await browser.new_page()
        await page.goto(url, wait_until="networkidle")
        content = await page.content()
        await browser.close()
        return content


result = asyncio.run(scrape("https://example.com"))
```

Your selectors, event handlers, extraction logic — unchanged. The browser runs in TinyFish's managed infrastructure instead of your server.
For session configuration, concurrent session management, and authentication handling, the full documentation is at docs.tinyfish.ai/browser-api.
If you want to try it against your current Playwright scripts, the Getting Started guide walks through first connection in under 10 minutes.
Both, depending on what you need. For teams with existing Playwright code, TinyFish's Browser API is a complement — your scripts run unchanged, just on managed infrastructure instead of your server. As a fuller alternative, TinyFish makes sense when you need infrastructure-level reliability at scale or goal-based agent automation. Most teams end up using both: Playwright for testing and simple sites, TinyFish for production-scale or complex extraction.
Yes. TinyFish's Browser API returns a CDP WebSocket endpoint. Playwright's connect_over_cdp() method connects to any CDP endpoint. The migration is minimal: create a TinyFish browser session, then pass session.cdp_url to playwright.chromium.connect_over_cdp(). Everything after that line — selectors, interactions, extraction logic — runs unchanged.
Common signals: you're spending more engineering time on browser infrastructure than on automation logic; reliability issues appear that aren't in your code; proxy management has become its own project; or target sites with strict requirements need ongoing maintenance as they evolve. If these apply at your current volume, the infrastructure overhead has exceeded the cost of a managed alternative.
TinyFish runs a native Chromium-based browser session with request handling built in at the infrastructure layer, the same layer where the browser itself initializes. Plugins applied after the browser starts work at the JavaScript layer, which is detectable at the protocol level; infrastructure-level handling avoids that surface. The platform achieves up to 85% success rate on sites with strict automation requirements. For sites TinyFish can't reliably access, failures are transparent: you won't get silent incorrect results.
Playwright's library is free, but at scale the real costs are server infrastructure, proxy subscriptions, and engineering time for maintenance. TinyFish charges $0.012–$0.015 per step depending on plan, with browser infrastructure, residential proxy, and LLM inference included. Search and Fetch APIs are free on all plans. Playwright wins at low volume and simple sites; the economics shift toward TinyFish when infrastructure overhead becomes a material cost. The Pro plan ($150/mo) includes 16,500 steps with 50 concurrent agents.
No credit card. No setup. Run your first operation in under a minute.