
For all the progress in large language models and AI tooling, one constraint still defines what agents can actually do: most of the web is still not accessible to them.
When developers try to connect AI agents to the web, they usually start with search and scraping. In practice, that means interacting with what could be called the “index web” — pages that search engines have already crawled and stored. This layer is fast and convenient, but it represents only a small fraction of the internet.
The rest of the web behaves very differently. Much of it is dynamic, rendered through JavaScript, or hidden behind login flows and interactive systems. This includes real-time pricing, availability, and the kinds of workflows businesses actually rely on. These environments are not designed to be read passively; they are designed to be used.
That distinction matters. It means that even as AI agents improve, they are still operating on a narrow slice of the web.

To work around these limitations, developers often stitch together multiple tools. A typical stack might combine search APIs for discovery with headless browsers for interaction. On paper, this can approximate end-to-end automation.
In practice, it rarely holds up.
These systems tend to be fragile. Small changes in a website’s structure can break workflows. JavaScript-heavy interfaces introduce inconsistencies. Authentication flows add another layer of complexity. As a result, teams often spend more time maintaining infrastructure than building actual products.
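The fragility is easy to reproduce. The sketch below is purely illustrative (not TinyFish code, and the page snippets are invented): a scraper hard-codes the markup it saw on day one, and a routine front-end change makes it silently return nothing.

```python
import re

# Two snapshots of the same (hypothetical) product page: the markup
# changed between deploys, as it routinely does on real sites.
PAGE_V1 = '<div class="price">$142.00</div>'
PAGE_V2 = '<div class="price-display" data-amount="142.00">$142.00</div>'

# A typical scraper encodes the day-one markup directly into its selector.
PRICE_RE = re.compile(r'class="price">\$([\d.]+)<')

def extract_price(html: str):
    m = PRICE_RE.search(html)
    return float(m.group(1)) if m else None

print(extract_price(PAGE_V1))  # 142.0
print(extract_price(PAGE_V2))  # None — the selector silently stopped matching
```

Nothing errored, no alert fired; the workflow just started returning `None`. Multiply this by every site, selector, and login flow in a stack and the maintenance burden described above follows.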
The underlying issue is not just tooling. It is that the web itself was not built with machine interaction as a first-class use case.
The alternative is not better tools, but a different assumption about how the web should be accessed.
TinyFish starts from that premise: instead of forcing agents to work around the web, design them to operate within it.
Its core concept is the web-native AI agent. Rather than relying on screenshots or visual interpretation, these agents interact with the underlying structure of web pages directly. They understand elements, navigation, and state in a way that is closer to how a browser operates than how a human sees a page.
This shift has practical consequences. It allows agents to work with dynamic content, follow complex flows, and access parts of the web that are invisible to traditional scraping or search-based methods.

The difference becomes clearer in benchmark environments. In the Mind2Web evaluation, which focuses on real-world web tasks, TinyFish’s agents performed consistently across both simple and complex scenarios.
What stands out is not just task completion, but stability. Many systems perform well on straightforward tasks and degrade quickly as workflows become more involved. Narrowing that gap is what makes a system usable in production, where most tasks are multi-step and context-dependent.

The way developers interact with TinyFish reflects this focus on reliability. A single prompt can trigger multiple agents running in parallel browser sessions. Each agent carries out part of the workflow, whether that involves navigating a site, submitting forms, or extracting data.
The output is structured rather than free-form. Instead of returning loosely formatted text, the system produces data that can be passed directly into other services or workflows. This reduces the amount of post-processing and validation that is usually required when working with language models.
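To make the contrast concrete, here is a minimal sketch of what structured output buys a developer. The payload and field names are hypothetical, not TinyFish's actual response format: the point is that a schema-shaped result maps directly onto typed application code.

```python
import json
from dataclasses import dataclass

# Hypothetical agent response: a JSON payload rather than free-form prose.
raw = '{"hotel": "Sakura Inn", "price": 9800, "currency": "JPY", "available": true}'

@dataclass
class RateQuote:
    hotel: str
    price: int
    currency: str
    available: bool

# Because the shape is fixed, deserialization is one line, not a parsing step.
quote = RateQuote(**json.loads(raw))
print(quote.price, quote.currency)  # 9800 JPY
```

With free-form text, the equivalent step is a brittle mix of regexes and validation; with a defined schema, malformed output fails loudly at the boundary instead of corrupting downstream data.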

One example comes from travel data. Many smaller hotel operators, particularly in regions like Japan, do not expose their inventory through standard APIs. Their sites rely on dynamic rendering and localized interfaces, which makes them difficult to index.
Using TinyFish, agents can navigate these sites directly, retrieve pricing and availability, and return the results in a consistent format. What would normally require manual browsing or brittle scraping pipelines becomes a single automated workflow.
This kind of use case is less about novelty and more about coverage. It extends access to parts of the web that have historically been out of reach for automated systems.
Web agents are only one layer of what TinyFish is building. The broader direction is a unified platform that brings together discovery, navigation, and execution.
The current agent system handles interaction. A browser interface is in development to make these workflows more observable and controllable. A search layer is also planned, one that is designed around how agents retrieve and use information rather than how humans browse.
Taken together, this points toward a different model of web access, one where agents are not limited to pre-indexed content but can move through the web more directly.

From a developer perspective, TinyFish is exposed through a REST API and a Python SDK. The system is instruction-driven, which means outcomes depend heavily on how tasks are specified. Clear constraints and expected formats tend to produce more reliable results.
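Since outcomes depend on how tasks are specified, it helps to treat the instruction itself as a structured object. The sketch below is an assumption about what such a payload might look like; the field names are illustrative and not TinyFish's actual API shape.

```python
import json

def build_task(goal: str, constraints: list[str], schema: dict) -> dict:
    """Compose an instruction payload with explicit constraints and an
    expected output format. Field names here are illustrative only."""
    return {
        "instruction": goal,
        "constraints": constraints,
        "output_schema": schema,
    }

task = build_task(
    goal="Get the nightly rate for a double room on 2025-03-01",
    constraints=["Quote in JPY", "Return null price if no availability"],
    schema={
        "type": "object",
        "properties": {
            "price": {"type": ["number", "null"]},
            "currency": {"type": "string"},
        },
    },
)
print(json.dumps(task, indent=2))
```

Spelling out constraints and the expected schema up front is what turns "instruction-driven" from a liability into a contract: the agent has less room to improvise, and the caller knows exactly what shape to validate against.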
Early applications reflect this pattern. In insurance, for example, developers have used the platform to query multiple provider portals simultaneously and return comparable quotes in a single response. The value here is not just speed, but the ability to coordinate across systems that were never designed to interoperate.
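The fan-out-and-aggregate pattern behind that insurance example can be sketched with stubs. The portal names and rates below are invented, and the stub function stands in for a live agent session; only the coordination shape is the point.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for a live agent session against one provider portal.
def query_portal(portal: str) -> dict:
    fake_rates = {"acme": 120, "globex": 95, "initech": 110}  # invented data
    return {"provider": portal, "monthly_premium": fake_rates[portal]}

portals = ["acme", "globex", "initech"]

# Query every portal in parallel, then normalize into one comparable list.
with ThreadPoolExecutor(max_workers=len(portals)) as pool:
    quotes = list(pool.map(query_portal, portals))

quotes.sort(key=lambda q: q["monthly_premium"])  # cheapest first
print(quotes[0])  # {'provider': 'globex', 'monthly_premium': 95}
```

The hard part in production is not the fan-out, which any thread pool provides, but making each `query_portal` reliable against a portal that was never designed to be queried programmatically; that is the layer the agents supply.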
To encourage this kind of experimentation, TinyFish has also introduced a small accelerator program. The focus is on teams building products where web interaction is central, not incidental.
The idea is straightforward: if agents are going to become a primary interface to the web, then the infrastructure supporting them needs to be reliable enough to build on.

The broader implication is less about any single feature and more about a shift in perspective. For a long time, the web has been treated as something to be indexed and retrieved. That model works well for static information, but it falls short when interaction is required.
As AI agents become more capable, the limitation is no longer understanding language. It is accessing the environments where that understanding needs to be applied.
TinyFish is an attempt to address that gap directly, by treating the web not just as content, but as a system that agents can operate within.
Hear from TinyFish product lead Homer as he walks through why the web is fundamentally difficult for AI agents, and how web-native agents change that.
What is a web-native AI agent?
A web-native AI agent is an AI system designed to interact directly with the structure of websites, rather than relying on screenshots, scraping, or search indexing. It can navigate pages, understand elements, and complete multi-step workflows across the web.
How is this different from traditional web scraping?
Traditional web scraping extracts static HTML content and often breaks when websites change or rely on JavaScript rendering. Web-native AI agents operate on live website structures and can handle dynamic, interactive, and authenticated environments more reliably.
What is the difference between the index web and the deep web?
The index web refers to pages that search engines have already crawled and stored. The deep web includes content behind logins, dynamic systems, and interactive workflows. Most real-world business data exists outside the index web.
Why do AI agents struggle with the modern web?
Most AI agents rely on search APIs or headless browsers, which are fragile when dealing with dynamic websites, JavaScript-heavy interfaces, and authentication flows. These limitations make end-to-end automation unreliable at scale.
What is TinyFish used for?
TinyFish is used to automate real-world web workflows such as extracting pricing data, aggregating information across multiple websites, and interacting with dynamic or login-protected systems that are not accessible through traditional APIs.
What does structured output mean in AI agents?
Structured output refers to machine-readable results that follow a defined schema, rather than free-form text. This allows outputs from AI agents to be directly integrated into applications, APIs, or downstream workflows without additional parsing.
Can AI agents access websites behind logins?
Yes, web-native AI agents are designed to operate within authenticated environments, enabling them to interact with systems that are not accessible through traditional scraping or search-based tools.
