
Roughly 1 in 3 automated browser sessions gets blocked by anti-bot systems. If you're building a product on top of browser automation, that's a failure rate your customers feel.
A person shopping for a camera visits five websites. An AI agent doing the same task visits a thousand. But the web was not built for agents. Not because agents are malicious. Because the web cannot tell the difference between legitimate automation and attacks.
Web pages are built for human eyes, not machine logic: a <div class="action-7b"> tells an agent nothing about what a button does. Security systems treat efficiency as a threat: an agent checking 20 airlines in 5 seconds looks identical to a DDoS. And browsers assume one user, one screen, one click at a time. Being too efficient on today's internet is a signal of being malicious. Even when the intent is benign.

We built the Browser API for legitimate automation. One that executes user intent in a way that respects site integrity while being indistinguishable from direct user interaction.
When you ask an agent to find the best flight price, that is your intent. Your agent should be able to execute it. When you automate vendor portal logins across 300 suppliers, that is legitimate work. When you verify compliance certifications across 50 partners, that is not an attack.
Here is how we built it.
Operating below JavaScript. The automation surface runs under the browser's JavaScript sandbox. Actions and page reads go through the same native paths as real user interaction. Nothing for anti-bot scripts to observe.
WorldModel. A real-time semantic map of every page, built in native code, zero JavaScript executed. Agents understand what's on the page without creating detectable side effects.
Intelligent proxy. Per-domain proxy selection at the network stack level, before any HTTP exchange. The system learns which routes work and adapts automatically.
→ Try Browser API or read on for the full engineering details.
Standard automation tools like Playwright and Puppeteer work well for developer tooling. But they share a fundamental constraint: they operate inside the browser's JavaScript sandbox.
That is the problem.
Anti-bot scripts running on the page can observe injected code, synthetic events, and patched properties. They cannot distinguish between a user's legitimate automation and a malicious bot. They just detect automation and block it.
We measured this directly. On certain anti-bot protected sites, enabling even one standard CDP automation command, Runtime.evaluate, drops pass rate from 100% to 0%. Not gradually. Binary.
Detection systems are not looking for malicious intent. They are looking for the presence of automation at the JavaScript layer. Any signal is enough.
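To make that concrete, here is a simplified model of how JavaScript-layer detection works. This is an illustration, not TinyFish code or any vendor's actual logic; the signal names mirror well-known browser artifacts, and the binary any-match rule reflects the pass-rate cliff described above.

```python
# Illustrative only: a simplified model of the signals JavaScript-layer
# anti-bot scripts check for. Signal names mirror real browser artifacts,
# but this is a sketch, not any vendor's actual detection logic.

KNOWN_AUTOMATION_SIGNALS = (
    "navigator.webdriver",   # set to true under WebDriver/CDP automation
    "cdc_globals",           # ChromeDriver leaves cdc_-prefixed globals behind
    "patched_native_fns",    # overridden native functions reveal injected code
    "synthetic_events",      # events dispatched with isTrusted == false
)

def looks_automated(page_signals: set[str]) -> bool:
    """Return True if any known automation artifact is present.

    Detection is binary: a single signal is enough, which matches the
    100%-to-0% cliff seen when one CDP command is enabled.
    """
    return any(sig in page_signals for sig in KNOWN_AUTOMATION_SIGNALS)

# A stock JS-sandbox automation session exposes several signals at once:
assert looks_automated({"navigator.webdriver", "synthetic_events"}) is True
# Automation running below the JavaScript layer exposes none, so there is
# nothing for the script to match:
assert looks_automated(set()) is False
```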
This led to the core architectural decision behind TinyFish Browser: we moved the entire automation surface below the JavaScript sandbox, into the browser's native layer.
At this level, anti-bot scripts cannot observe automation. Not because we are hiding malicious behavior. Because user-directed actions execute through the same native paths that direct user interaction does.
When our browser dispatches a click, it goes through the browser's native trusted event path. isTrusted is true because it genuinely is a trusted event: initiated by the user, executed by their agent.
When it reads a page, it traverses the DOM in native code without executing a single line of JavaScript. When it authenticates a proxy, it happens at the network stack level where no page script can detect it.
This required reimplementing automation across the browser's internals: input handling, network authentication, fingerprint generation, DOM observation, proxy management.
All rebuilt so user-directed automation looks like what it is: a real user's browser executing their intent.

One of the hardest problems in web automation is answering a simple question: what's on this page?
Standard tools answer it by running JavaScript: querying the DOM, watching for mutations, parsing element attributes. This works, but it's observable.
Anti-bot systems detect MutationObservers, DOM queries, and CDP subscriptions. The act of looking at the page changes how the page behaves toward you.
We built something different.
Our browser maintains what we call a WorldModel, a real-time semantic representation of every page, computed entirely in native code with zero JavaScript execution.
The WorldModel knows what is a button, what is a form field, what is navigation, what is content, and what actions are available. It rebuilds continuously as the page changes. It exposes this as structured data that agents can consume directly.
For agents, this means two things.
First, the browser understands pages the way a human does: by function, not by CSS class name. When a developer redesigns a checkout flow and every class name changes, the WorldModel still identifies the purchase action, the quantity input, the shipping form.
Second, this understanding is invisible. No JavaScript runs. No subscriptions are created. No detectable side effects. The page does not know it is being read.
This is the technology that powers Web Agent API.
When you give our agent a goal like "find pricing plans and compare features," the WorldModel is how it understands pages without triggering anti-bot detection. This is what makes our agent reliable across thousands of different site structures.
We are testing WorldModel internally and will make it available through Browser API in the coming months. Soon, you will be able to use the same semantic understanding our agent uses: browser.click("purchase button") instead of hunting for selectors.
For now, Browser API gives you direct CDP control with native-level anti-detection built in. You write the automation logic. The browser handles legitimate automation at the native layer.

The WorldModel also powers our settle detection. After every navigation or interaction, the browser needs to know: is the page done loading?
Standard tools use JavaScript timers or mutation observers. Both detectable. Our browser monitors DOM activity at the native level and uses a quiet-window heuristic: when no DOM changes have occurred for a brief window after the last mutation, the page is settled.
This works reliably on SPAs, lazy-loaded content, and client-side rendering, which are the scenarios that break waitForNetworkIdle.
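The quiet-window heuristic is simple enough to sketch directly. The real implementation runs in native code against live DOM activity; here mutation times are plain timestamps, and the 0.5-second window is an assumed value for illustration.

```python
# A sketch of the quiet-window settle heuristic: the page counts as settled
# once no DOM mutation has occurred for quiet_window seconds. The window
# length here (0.5s) is an assumption, not TinyFish's actual tuning.

def is_settled(mutation_times: list[float], now: float,
               quiet_window: float = 0.5) -> bool:
    if not mutation_times:
        return True                      # no activity at all counts as settled
    return now - max(mutation_times) >= quiet_window

# A burst of mutations during hydration, then silence:
mutations = [0.00, 0.12, 0.31, 0.34]
assert is_settled(mutations, now=0.40) is False   # 0.06s since last mutation
assert is_settled(mutations, now=0.90) is True    # 0.56s of quiet -> settled
```

Unlike network-idle waiting, this keys on DOM activity itself, so lazy-loaded and client-rendered content that keeps connections open does not stall it.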
Every browser session runs in its own isolated microVM. Not a container, not a shared pool. A fresh machine, booted and ready to browse in about 4 seconds from cold start.

Why isolation matters. No browser session shares state with any other. No cookie leakage, no filesystem residue. When a session ends, the VM is destroyed. Each session is a clean environment, identical to a real user opening a fresh browser. That's not a trick. That's what user-directed automation should look like.
Why headful matters. TinyFish Browser runs headful, with full rendering. This eliminates the last category of fingerprint signals that headless browsers cannot address. The tradeoff is typically speed: headful browsers are slower. We solved this by moving the automation surface into native code. Actions execute in sub-millisecond time. No network round-trips to an external controller, no interpreter overhead, no JavaScript execution bridge. The result is a headful browser that performs like headless.
The architecture after session creation. You connect directly to the microVM over CDP WebSocket. Our API layer steps entirely out of the traffic path. No proxy, no middleman, no added latency between your agent and the browser.
A single browser session can open many tabs for parallel automation. No JavaScript injection, no runtime execution overhead. For workflows like price comparison across 30 airlines or data collection across hundreds of listings, this is the gap between minutes and hours.
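The payoff of multi-tab parallelism is fan-out: per-site tasks that would run sequentially run concurrently instead. The sketch below simulates that shape with threads; in practice each worker would drive one tab over CDP rather than call a local function.

```python
# Illustrative fan-out across many "tabs". Each worker here is a stand-in
# for navigate + extract in one tab; names like check_price are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def check_price(airline: str) -> tuple[str, int]:
    # Stand-in for a real per-tab task (navigate, read, extract a price).
    return airline, 100 + len(airline) * 7

airlines = [f"airline-{i:02d}" for i in range(30)]

with ThreadPoolExecutor(max_workers=10) as pool:
    prices = dict(pool.map(check_price, airlines))

# All 30 lookups complete, and the results can be reduced in one place:
assert len(prices) == 30
best = min(prices, key=prices.get)
```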
A fast, stealth browser with a flagged IP address is still blocked.
We maintain a pool of dedicated ISP IPs with per-domain reputation tracking. Each IP carries a quality score for each domain, and the browser automatically selects the best available proxy before every navigation, at the network stack level, before any HTTP exchange occurs.
When a connection fails on a domain, the score adjusts, and the browser routes through a different IP to maintain session reliability. Over time, the system learns which IPs work for which domains. When the pool is exhausted for a specific domain, we fall back to rotating proxies to keep sessions alive while clean IPs recover.
We benchmark against 20 anti-bot protected sites, the kind that reliably break standard automation tools, running 100 tests total.
85% pass rate across those 100 tests on complex, heavily protected websites. That number climbs every week as we close remaining gaps.
4-second cold start. From API call to a live, isolated, stealth browser session ready for CDP commands.
Sub-millisecond action execution. Click, type, navigate, extract. All handled in native code without leaving the browser process.
Zero JavaScript execution for page observation. The WorldModel reads and understands every page without running a single line of JavaScript or creating any detectable side effects.
These numbers represent where we are today. The pass rate three months ago was well below this. Three months from now, it'll be higher. The speed and architecture advantages are already in place. Everything else is improving week over week.
Browser API is the newest piece of a platform we've been building for the agentic web. Here's the full stack:
Five APIs, one key. Each solves a different layer of the problem: understanding the web, navigating it, extracting from it, controlling it, and authenticating to it.
Browser API is live today. Get a TinyFish API key at agent.tinyfish.ai, docs at docs.tinyfish.ai/browser.
Try it and tell us what you think.
No credit card. No setup. Run your first operation in under a minute.