Engineering

From Selenium to Playwright to AI Agents: The Full Migration Story

TinyFishie · TinyFish Observer · Apr 21, 2026 · 9 min read

Every migration guide you've read covers the same journey: Selenium is slow and flaky, Playwright is faster and more reliable, here's how to move your test suite. That story is true and the advice is correct.

What those guides don't cover is what happens next—the teams that migrate to Playwright, run it in production for 18 months, and then face a different set of problems that Playwright also can't solve.

This is the story of all three stages.

Stage 1: Selenium — The Baseline (And Why You Leave)

Selenium's dominance wasn't an accident. It was the first framework to give developers reliable programmatic browser control at scale, and the ecosystem that grew around it—Grid, language bindings for every major language, years of Stack Overflow answers—made it the safe choice for a decade.

The decision to leave usually comes from one of three places:

Flakiness compounds at scale. A single Selenium test that passes 95% of the time isn't a problem. A 500-test suite where each test independently passes 95% of the time is green only 0.95^500 ≈ 7×10⁻¹² of the time, which is to say never. Even a 200-test suite is green only about 0.003% of the time. This math breaks CI/CD pipelines.
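
The compounding arithmetic is easy to check directly; a quick sketch:

```python
# Suite-level pass probability, assuming independent tests:
#   P(green build) = p_test ** n_tests
p_test = 0.95
for n in (1, 50, 200, 500):
    p_suite = p_test ** n
    print(f"{n:>3} tests -> P(green build) = {p_suite:.2e}")
```

At 50 tests the build is already green under 8% of the time; at 500 the probability is on the order of 10⁻¹¹.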

Driver management becomes a job. Keeping ChromeDriver, geckodriver, and the various WebDriver implementations aligned with browser versions is operational work that doesn't ship features. Selenium 4 improved this, but it's still more friction than modern alternatives.

Modern web apps expose the architecture's age. Selenium's synchronous, WebDriver-protocol design predates SPAs, async-heavy frontends, and Shadow DOM. It works, but it requires more explicit waiting, more defensive coding, and more maintenance for the same tests.
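
The "explicit waiting" tax looks roughly like this: a generic polling helper you end up writing and maintaining yourself. This is a framework-agnostic Python sketch, not actual Selenium API:

```python
import time

def wait_until(predicate, timeout=5.0, poll=0.1):
    """Selenium-style explicit wait: poll a condition until it holds or
    the timeout expires. Playwright bakes this into every action;
    with Selenium you write (and maintain) variants of it yourself."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll)
    raise TimeoutError("condition not met within timeout")

# Toy stand-in for "element becomes visible after an async render".
# In a real test the predicate would query the DOM via the driver.
state = {"visible": False}
state["visible"] = True          # pretend the async render completed
assert wait_until(lambda: state["visible"])
```

Multiply this pattern by every async interaction in a suite and the "more explicit waiting, more defensive coding" cost becomes concrete.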

If you're still on Selenium and you need to move, Playwright is the right destination. The migration math checks out.

Stage 2: Playwright — The Right Move (With Honest Limits)

Playwright, at 34M+ weekly npm downloads in 2026, isn't popular by accident. Auto-waiting, cross-browser support, the Trace Viewer, and the parallel execution model solve the specific problems that made Selenium difficult at scale.

Migrations that go well follow a predictable pattern: rebuild the page object infrastructure first, migrate the high-maintenance flaky tests, then move the rest as they're touched for other reasons. Teams that try to translate Selenium code line-by-line into Playwright produce Playwright tests with Selenium habits—explicit waits, brittle locators—and don't see the full benefit.

The teams that migrate cleanly and then run Playwright for 12–18 months learn something: Playwright is the right tool for testing. It's less obviously the right tool for everything else.

The distinction matters because many teams use browser automation for more than testing. They use it for:

  • Data collection from authenticated portals
  • Competitive intelligence scraping
  • Workflow automation across web applications
  • Monitoring that checks actual page content, not just HTTP status

For these use cases, Playwright works—but it exposes a different category of problems.

[Figure: three-stage web automation evolution, Selenium to Playwright to AI agents]

Stage 3: The Problems Playwright Doesn't Solve

Three years into production Playwright usage, certain patterns appear in teams running browser automation at scale:

The selector maintenance tax. Every page.locator() is a dependency on today's DOM structure. When sites redesign—which happens quarterly for most modern web properties—selectors break. A team running 50 scrapers against competitor sites will debug broken selectors several times per month. The Playwright migration solved flakiness from waiting logic; it didn't solve fragility from site changes.
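
One common mitigation is a fallback chain of selectors per field, so a single redesign doesn't break extraction outright. The helper and dict-based page stand-in below are illustrative, not a real Playwright API:

```python
# Candidate selectors per field, ordered from most to least preferred.
# Selector strings and field names here are illustrative.
FALLBACKS = {
    "price": ["[data-testid=price]", ".product-price", "#price"],
}

def first_match(page, field):
    """Return the text of the first selector that still matches."""
    for sel in FALLBACKS[field]:
        node = page.get(sel)       # stand-in for page.locator(sel)
        if node is not None:
            return node
    raise LookupError(f"all selectors for {field!r} are broken")

# Before the redesign the testid exists; after, only the class survives.
before = {"[data-testid=price]": "$19.99"}
after = {".product-price": "$21.49"}
print(first_match(before, "price"))  # $19.99
print(first_match(after, "price"))   # $21.49
```

Fallback chains buy time, but each redesign still consumes one fallback, so the list itself becomes another thing to maintain per site.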

Authentication at breadth. Testing one authenticated application is a solved problem. Monitoring pricing across 40 supplier portals—each with different login flows, session timeouts, and CAPTCHA implementations—is a different problem. You can solve it with Playwright, but the solution is a significant amount of bespoke code per site: login flow management, session persistence, re-authentication handlers, credential rotation. Times 40 sites. Maintained ongoing.
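
The per-site bespoke code tends to start as configuration and grow from there. A minimal sketch, with illustrative field names and URLs:

```python
from dataclasses import dataclass, field

@dataclass
class PortalConfig:
    """What one authenticated portal needs, multiplied by N sites.
    Field names are illustrative; every real portal adds its own quirks."""
    login_url: str
    session_ttl_minutes: int
    has_captcha: bool = False
    extra_steps: list = field(default_factory=list)  # MFA, cookie banners...

PORTALS = {
    "supplier-a": PortalConfig("https://a.example/login", 30),
    "supplier-b": PortalConfig("https://b.example/sso", 15, has_captcha=True),
    # ... times 40 sites, each updated whenever its login flow changes
}

def needs_reauth(site, minutes_since_login):
    """Re-authentication check driven by each site's session timeout."""
    return minutes_since_login >= PORTALS[site].session_ttl_minutes

print(needs_reauth("supplier-b", 20))  # True: 15-minute sessions
```

The config table is the easy part; the login-flow code behind each entry is where the ongoing maintenance actually lives.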

The scale ceiling without infrastructure. Playwright parallelism works well on a single machine. Running 100 concurrent authenticated sessions requires managing session pools, load distribution, and resource allocation. This is infrastructure work that precedes the actual task—collecting data or running checks.
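
The session-pool portion of that infrastructure can be sketched in a few lines; real systems add health checks, eviction, and distribution across machines:

```python
import queue

class SessionPool:
    """Minimal bounded pool: callers check sessions out and back in, so
    concurrency never exceeds the pool size. The strings are stand-ins
    for live browser sessions."""
    def __init__(self, size):
        self._q = queue.Queue()
        for i in range(size):
            self._q.put(f"session-{i}")

    def acquire(self, timeout=1.0):
        # Blocks until a session is free, or raises queue.Empty.
        return self._q.get(timeout=timeout)

    def release(self, session):
        self._q.put(session)

pool = SessionPool(size=3)
s = pool.acquire()
print(s)            # session-0 (FIFO order)
pool.release(s)
```

At 100 concurrent authenticated sessions, this sketch grows keepalive logic, per-site session affinity, and a scheduler, which is exactly the infrastructure work the article describes.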

Detection at volume. Playwright in headed mode or with stealth plugins handles light bot detection. At production volume against DataDome or Kasada, the detection surface area grows. The detection arms race requires ongoing attention.

These aren't Playwright failures—they're signs that the task has changed. Testing a single web application against a known DOM structure is a different problem from monitoring 50 external sites with dynamic content, authentication, and active bot protection.

When the Third Stage Makes Sense

The move from Playwright to AI web agents follows a recognizable trigger: when the engineering overhead of maintaining automation infrastructure exceeds the engineering overhead of using the data.

Specifically, teams that benefit from this move tend to be running automation for data collection, monitoring, or workflow tasks (not testing) against targets they don't control, at a scale where selector maintenance and session management have become the primary engineering investment.

What AI agents change in this context is the layer of abstraction. Instead of describing navigation steps in code—click this, wait for that, extract this element—you describe the goal in plain English. The agent handles navigation, renders JavaScript, handles authentication if needed, and returns structured results. When the site redesigns, you don't update selectors; the agent adapts.
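
In the abstract, a goal-based call replaces the navigation script with a declarative payload. The shape and field names below are invented for illustration; they are not TinyFish's documented API:

```python
import json

def build_agent_request(url, goal, schema):
    """Hypothetical goal-based request: instead of selectors and clicks,
    you ship a URL, a plain-English goal, and the output shape you want.
    All field names here are illustrative."""
    return json.dumps({
        "url": url,
        "goal": goal,
        "output_schema": schema,
    })

payload = build_agent_request(
    "https://portal.example/pricing",
    "Log in if prompted and return the unit price for SKU-123",
    {"sku": "string", "unit_price": "number", "currency": "string"},
)
print(payload)
```

Note what is absent: no selectors, no waits, no login flow. Those concerns move behind the API boundary, which is the abstraction shift the paragraph above describes.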

TinyFish runs this as a managed platform. You pass a URL and a goal; the platform handles browser allocation, rendering, stealth at the C++ layer, session management, and structured data extraction through a single API call. The same code that extracts pricing from one supplier portal works against 50 of them, with no per-site configuration.

The honest trade-off: per-task cost is higher than optimized Playwright scripts. If you're running a high-volume scrape of static public pages, Playwright or Scrapy is cheaper. If the value is in difficult targets—authenticated, bot-protected, dynamic, varied—the cost of maintaining Playwright automation often exceeds the cost of using an agent platform.
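
The break-even is simple arithmetic once you put numbers on both sides; the figures below are placeholders for illustration, not real prices:

```python
def monthly_cost_scripts(eng_hours, hourly_rate, infra=0.0):
    """Maintained-Playwright cost: engineering time dominates."""
    return eng_hours * hourly_rate + infra

def monthly_cost_agent(tasks, price_per_task):
    """Agent-platform cost: pay per task, near-zero maintenance."""
    return tasks * price_per_task

# Illustrative numbers only; plug in your own.
diy = monthly_cost_scripts(eng_hours=25, hourly_rate=100)      # 2500
agent = monthly_cost_agent(tasks=20_000, price_per_task=0.05)  # 1000
print("agent cheaper" if agent < diy else "scripts cheaper")
```

The model makes the article's point explicit: against easy static targets, `eng_hours` stays small and scripts win; against hard targets, maintenance hours climb and the comparison flips.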

The Full Migration Decision

Some teams stop at Playwright. That's the right call when:

  • Browser automation is primarily for testing your own applications
  • Target sites are stable and primarily your own infrastructure
  • Volume is managed and concurrency needs don't require distributed architecture

The further migration makes sense when:

  • Automation is primarily for external data collection or monitoring
  • Targets include authenticated portals or bot-protected sites
  • Selector maintenance is consuming meaningful engineering time
  • Concurrency requirements are high and growing

The Selenium → Playwright move is almost always right when you're ready. The Playwright → agents move is right for some teams and not others. The deciding question is: where does your engineering time actually go?

---

Test TinyFish against your production targets—the ones where Playwright requires the most maintenance. 500 free steps, no credit card.

**Start your free trial →**

---

FAQ

How long does a Selenium to Playwright migration take?

For a mid-sized test suite (100–500 tests), teams typically complete migration in 2–6 weeks depending on test complexity and how much time they invest in rebuilding page objects vs. direct translation. Direct translation is faster but doesn't eliminate Selenium habits like explicit waits. Rebuilding foundation infrastructure first—page objects, shared utilities, authentication helpers—takes longer upfront but makes the actual test migration nearly mechanical. The BrowserStack and TestDino migration guides recommend the infrastructure-first approach for suites larger than 200 tests.

Is Playwright compatible with existing Selenium tests?

Not directly—Playwright and Selenium use different protocols (CDP vs. WebDriver) and different APIs. You'll rewrite tests, not convert them. However, the conceptual mapping is close enough that experienced Selenium developers typically become productive in Playwright within days. The main adjustment is trusting auto-waiting instead of writing explicit waits, and adopting role-based locators instead of XPath.

When should I NOT migrate from Selenium to Playwright?

If you have a large, stable, working Selenium suite that runs reliably and you have no specific pain—your CI isn't failing from flakiness, your team isn't spending significant time on maintenance—the migration ROI may not justify the effort. Playwright doesn't make working tests work better; it solves specific problems. If those problems aren't costing you time, the migration is optional.

What's the difference between using Playwright for scraping vs. for testing?

For testing your own application, you control the target, the DOM structure is known, and changes to the site update your tests as part of the same development cycle. For scraping external sites, you don't control the target, DOM changes break your code without warning, and maintenance is reactive. The same Playwright skills apply, but the reliability characteristics are completely different. Testing a known application is fundamentally more stable than scraping unknown external sites.

How do AI agents fit into a team that already uses Playwright?

Most teams that adopt AI agents for external data collection don't replace Playwright—they run both. Playwright handles testing internal applications, where they want deterministic, selector-level control over interactions. Agents handle external data collection, monitoring, and workflow automation against targets they don't control. The boundary is roughly: use Playwright where you need testing-grade precision and control, use agents where you need resilience against external change and scale without selector maintenance.

Related Reading

  • Pillar: The Best Web Scraping Tools in 2026
  • Best Puppeteer Alternatives for Browser Automation in 2026
  • Scraping Dynamic Websites: When Playwright Is the Right Tool (And When It Isn't)
  • What Is a Web Agent? The Complete Guide