
Most AI web workflows do not fail because search is bad, or browser automation is bad, or extraction is bad.
They fail at the boundaries between them.
Search finds a page your fetch layer cannot render. Fetch returns content your agent cannot trust. Browser automation loses the session context the next step needs. So teams end up writing glue code, fallback logic, session handling, retries, and validation just to make separate tools behave like one system.
That integration work is the hidden tax in AI web automation.

A simple workflow (search for a page, fetch its content, extract the answer) sounds straightforward. In practice, teams often end up stitching together multiple APIs that were never designed to work as one system.
```python
# Search for the page
search_results = search_api.query("notion pricing")
url = search_results[0].url

# Try to fetch content
content = fetch_api.scrape(url)

# Fallback if the page needs JavaScript
if not content or content.get("error"):
    browser = browser_api.launch()
    page = browser.goto(url)
    page.wait_for_selector(".pricing-card")
    content = page.content()
    browser.close()

# Extract the result
result = agent_api.extract(content, "find pricing plans")
```

This is before retries, validation, error handling, session cleanup, rate limiting, and edge cases when page structure changes.
The problem is not just that the code is longer. The problem is that you are now responsible for the seams between separate tools.
That is the assembly tax.
Search gives you a URL. That does not mean the next layer can use it.
You search for a pricing page. The result looks right. Then your fetch layer hits the URL and returns partial content because the real page only appears after JavaScript runs.
Now you need fallback logic.
What looked like one operation turns into multiple control paths.
In a unified system, search can pass execution hints forward.
```json
{
  "url": "https://notion.so/pricing",
  "requires_javascript": true,
  "recommended_execution": "browser",
  "structure_hints": {
    "pricing_selector": ".pricing-card",
    "load_time_estimate": 2.3
  }
}
```

Execution metadata flows from search to fetch, enabling automatic path selection.
That means the right execution path can be chosen automatically instead of forcing your code to guess.
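As an illustration, selecting the path from such a hint payload can be a few lines of code. The `choose_execution_path` helper below is hypothetical, and the field names simply mirror the example payload above; none of this is a documented TinyFish API.

```python
def choose_execution_path(hint: dict) -> str:
    """Pick an execution strategy from search-provided hints.

    The hint schema mirrors the example payload above and is
    illustrative, not a documented format.
    """
    if hint.get("requires_javascript"):
        return "browser"  # plain fetch would return pre-JavaScript HTML
    return hint.get("recommended_execution", "fetch")

hint = {"url": "https://notion.so/pricing", "requires_javascript": True}
choose_execution_path(hint)  # "browser"
```

The point is not the helper itself but where the knowledge lives: the search layer already learned that the page needs JavaScript, so downstream code never has to rediscover it by failing first.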

Even after you render a page, the workflow is often not done.
You may need to interact with the page (click, wait, confirm an action) before the data you want is actually present.
With separate tools, extraction usually has no idea what happened during navigation. It just receives HTML and hopes the page is in the right state.
So teams write more glue.
That is where production workflows start to become fragile.
In a unified system, navigation and extraction share state. The system knows whether the page is ready, whether the intended action succeeded, and whether the result is actually present before extraction runs.
That removes an entire class of failure.
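A rough sketch of what shared state could look like. `PageState` and `extract_pricing` are illustrative names invented for this example, not part of any real API:

```python
from dataclasses import dataclass

@dataclass
class PageState:
    """State a unified system could hand from navigation to extraction."""
    url: str
    ready: bool = False             # did the page finish loading?
    action_succeeded: bool = False  # did the intended click/login complete?
    html: str = ""

def extract_pricing(state: PageState) -> list[str]:
    # Extraction checks shared state instead of guessing from raw HTML.
    if not (state.ready and state.action_succeeded):
        raise RuntimeError(f"page not ready for extraction: {state.url}")
    return [line.strip() for line in state.html.splitlines()
            if "pricing-card" in line]
```

When extraction can see navigation's outcome, "page not ready" becomes an explicit, handleable condition instead of a silent source of garbage output.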
The core benefit of one platform is not just fewer vendors.
It is that context, state, and feedback can move through the workflow without being rebuilt at every step.
Search should not return only a URL. It should return enough context for the next step to make a good decision.
Fetch should not treat every page the same. It should know when browser rendering is needed.
Agents should not start blind. They should inherit page state, execution history, and signals from the steps before them.
When those pieces are disconnected, your application becomes the thing that has to carry context across the workflow.
When the platform is unified, the platform does it for you.
Separate tools often mean separate sessions.
To a site, that can look like multiple unrelated clients touching the same workflow. That increases the odds of blocks, inconsistent behavior, or failed runs.
When search, rendering, browsing, and execution live in one system, requests can stay coordinated: same session, same fingerprint, same workflow context.
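A minimal sketch of the idea, modeling only headers and cookies. The `WorkflowSession` class is hypothetical; a real unified platform would also share browser state and fingerprint:

```python
class WorkflowSession:
    """One identity shared by every step of a workflow (sketch)."""

    def __init__(self, user_agent: str):
        self.headers = {"User-Agent": user_agent}
        self.cookies: dict[str, str] = {}

    def absorb_cookies(self, new: dict[str, str]) -> None:
        # Cookies set during one step (e.g. search) become visible
        # to the next step (e.g. fetch or browse).
        self.cookies.update(new)

session = WorkflowSession(user_agent="workflow-bot/1.0")
session.absorb_cookies({"session_id": "abc123"})  # set during an earlier step
# Later steps reuse session.headers and session.cookies, so the site
# sees one consistent client instead of three unrelated ones.
```

With three vendors, each tool holds its own version of this object and none of them talk; the site sees three strangers instead of one visitor.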
Separate tools usually optimize for isolated metrics.
But your team does not care whether each tool locally optimized its metric.
You care whether the task completed.
In a unified system, the platform can learn from successful runs.
That feedback loop is much harder to build when every tool lives in isolation.
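One way such a feedback loop could work, sketched with a hypothetical `RunMemory` that remembers which execution path succeeded for each domain:

```python
from collections import defaultdict

class RunMemory:
    """Remember which execution path succeeded per domain (sketch)."""

    def __init__(self):
        # domain -> path -> number of successful runs
        self.success = defaultdict(lambda: defaultdict(int))

    def record(self, domain: str, path: str, ok: bool) -> None:
        if ok:
            self.success[domain][path] += 1

    def best_path(self, domain: str, default: str = "fetch") -> str:
        paths = self.success.get(domain)
        if not paths:
            return default
        return max(paths, key=paths.get)

memory = RunMemory()
memory.record("notion.so", "browser", ok=True)
memory.record("notion.so", "fetch", ok=False)
memory.best_path("notion.so")  # "browser"
```

With separate vendors, no single component ever sees both the attempt and the outcome, so there is nowhere for this memory to live.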

Here is the difference in practice.
Before: stitched tools
```python
import os

class PricingMonitor:
    def __init__(self):
        self.search = SearchAPI(key=os.getenv("SEARCH_KEY"))
        self.fetch = FetchAPI(key=os.getenv("FETCH_KEY"))
        self.browser = BrowserAPI(key=os.getenv("BROWSER_KEY"))

    async def get_pricing(self, competitor):
        results = await self.search.query(f"{competitor} pricing")
        url = results[0].url
        try:
            content = await self.fetch.scrape(url)
            if not content.get("pricing"):
                raise ValueError("no pricing in fetched content")
        except Exception:
            # Fall back to a full browser render
            browser = await self.browser.launch()
            page = await browser.goto(url)
            content = await page.content()
            await browser.close()
        return self.parse_pricing(content)
```

30+ lines, 3 API keys, custom error handling across tool boundaries.
You can absolutely build this.
The question is whether this orchestration work is what your team should be doing.
After: one platform
```python
import tinyfish

def get_pricing(competitor):
    return tinyfish.run(
        goal=f"Find {competitor} pricing plans"
    )
```

3 lines, 1 API key, unified error handling.
Same job. Less glue. Fewer boundaries to manage.
Teams running production web workflows on TinyFish include:
DoorDash: 1M+ quarterly web operations powering global data science workflows
Grubhub: production social intelligence extracting restaurant reputation signals at scale
TestSprite: 20M+ autonomous test steps executed monthly with production-grade reliability
This is not a theoretical architecture advantage. It is what production teams need when workflows have to keep running.
The question is not: Can I build this with separate tools?
Of course you can!
The real question is: Should my team spend time integrating web primitives, or solving the problem that sits on top of them?
If you are building a product that depends on web workflows, the integration work is rarely the differentiator.
It is just tax.
TinyFish gives you one platform for search, rendering, browsing, and execution. That means context, state, and feedback move through the workflow without being rebuilt at every step.
The individual pieces matter.
But the bigger advantage is that they were designed to work as one system.
That is what makes production web workflows simpler to build and easier to trust.
Take a workflow you are currently stitching together.
Run it in TinyFish.
The difference is not just fewer APIs.
It is fewer boundaries where things break.
Try the Playground: agent.tinyfish.ai
Start with 500 free credits. No credit card, no setup. Run your first operation in under a minute.