Manual QA is where engineering velocity goes to die.
You ship a feature. Someone writes test cases in a spreadsheet. Someone else clicks through them in a browser. They file bugs in Jira with vague reproduction steps. Three days later, you discover the bug was actually a test environment issue. Repeat forever.
What if you could just describe what you want tested in plain English, and have AI agents spin up real browsers to execute every test case simultaneously — streaming live previews back to your screen?
That's Fast QA. And you can build it in an afternoon.
Fast QA is a no-code QA testing platform with a simple premise: describe tests like you'd explain them to a junior QA engineer, and let AI handle the rest.
The workflow: describe what you want tested in plain English, let AI turn that into structured test cases, fire every case at TinyFish for parallel execution in real browsers, and get AI-generated bug reports for anything that fails.
The key insight: TinyFish doesn't simulate browsers. It runs them. Every test executes in a real browser with real navigation, real form fills, real JavaScript rendering. This is the difference between testing what your code should do and testing what your users actually experience.
The system is deliberately simple. Three API routes, two external services, zero databases.
┌─────────────────────────────────┐
│        Next.js Frontend         │
│  Dashboard · Projects · Tests   │
│ QA Context (State Management)   │
│   LocalStorage (Persistence)    │
└──────────────┬──────────────────┘
               │
    ┌──────────┼──────────┐
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│ /api/  │ │ /api/  │ │ /api/  │
│generate│ │execute │ │generate│
│ -tests │ │ -tests │ │-report │
└───┬────┘ └───┬────┘ └───┬────┘
    │          │          │
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│OpenRou-│ │TinyFish│ │OpenRou-│
│ter AI  │ │  API   │ │ter AI  │
│(MiniMax│ │Browser │ │(MiniMax│
│ M2.1)  │ │Automa- │ │ M2.1)  │
└────────┘ │  tion  │ └────────┘
           └────────┘
Why no database? This is a cookbook recipe, not a SaaS product. LocalStorage keeps things running with zero infrastructure. You want to add Postgres later? Go for it. But the point is you can go from git clone to working QA platform in under 5 minutes.
Why OpenRouter + MiniMax M2.1? For structured test case generation, you don't need GPT-4. MiniMax M2.1 is fast, cheap, and reliable at converting natural language into structured JSON. Swap it for any model — the prompts are model-agnostic.
Why TinyFish API? This is where the magic happens. Every other part of this system is replaceable. TinyFish is not. It's the thing that takes a structured test goal and actually does it in a real browser — handling navigation, waiting for elements, interacting with dynamic content, and streaming results back. More on this below.
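The localStorage persistence layer is small enough to sketch here. The key name and the injectable storage parameter are illustrative assumptions, not taken from the recipe:

```javascript
// Sketch of a zero-infrastructure persistence layer: all projects live
// under one localStorage key. STORAGE_KEY and the `storage` parameter
// (injectable so this runs outside a browser) are assumptions.
const STORAGE_KEY = "fast-qa-projects";

function loadProjects(storage = globalThis.localStorage) {
  try {
    return JSON.parse(storage.getItem(STORAGE_KEY)) || [];
  } catch {
    return []; // corrupt or missing data degrades to an empty list
  }
}

function saveProjects(projects, storage = globalThis.localStorage) {
  storage.setItem(STORAGE_KEY, JSON.stringify(projects));
}
```

Swapping this for Postgres later means replacing two functions, which is the point of keeping the boundary this narrow.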
Let's skip the boilerplate and talk about what makes this work.
Each test case gets sent to TinyFish as a structured goal. TinyFish spins up a real browser, executes the steps, and streams progress back via Server-Sent Events. Here's what that looks like:
const response = await fetch("https://agent.tinyfish.ai/v1/automation/run-sse", {
method: "POST",
headers: {
"X-API-Key": process.env.TINYFISH_API_KEY,
"Content-Type": "application/json",
},
body: JSON.stringify({
url: testCase.targetUrl,
goal: `Execute this QA test: ${testCase.steps.join(". ")}.
Verify: ${testCase.expectedResult}.
Report PASS if the expected result is confirmed, FAIL if not.
Include a screenshot of the final state.`,
}),
});

Parallel execution with live previews
This is the wow factor. When you have 10 test cases, you don't run them sequentially. You fire all 10 simultaneously and stream results back as they come:
// Fire all test cases in parallel
const executions = selectedTests.map((test) =>
executeTestWithTinyFish(test, targetUrl)
);
// Each execution streams SSE events back
// The frontend renders live browser previews as they arrive
const results = await Promise.allSettled(executions);
Each SSE stream sends back status updates, the step currently being executed, and live browser screenshots. Your frontend just listens to these streams and renders them in real time. You literally watch 10 browsers working simultaneously, each running a different test case.
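Folding those tagged events into per-test state is a few lines on the frontend. A minimal sketch, assuming each frame is a standard "data: {json}" line carrying the testIndex field the execute route attaches:

```javascript
// Sketch: route tagged SSE frames into per-test state keyed by
// testIndex. Later fields (status, screenshots) overwrite earlier ones,
// so each card always shows the latest event for its test.
function routeSSEChunk(chunk, testStates) {
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const event = JSON.parse(line.slice("data: ".length));
    const prev = testStates[event.testIndex] || {};
    testStates[event.testIndex] = { ...prev, ...event };
  }
  return testStates;
}
```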
git clone https://github.com/tinyfish-io/tinyfish-cookbook
cd tinyfish-cookbook/fast-qa
npm install
Create .env.local:
TINYFISH_API_KEY=sk-tinyfish-...
OPENROUTER_API_KEY=sk-or-...
The generate endpoint takes a plain English description and returns structured test cases. The prompt engineering here matters — you need the AI to break down vague descriptions into discrete, verifiable steps.
// The system prompt guides the AI to produce executable test cases
const systemPrompt = `You are a QA engineer. Convert the user's plain English
test description into structured test cases. Each test case must have:
- name: Short descriptive name
- steps: Array of specific actions to take
- expectedResult: What should be true after the steps execute
- priority: high/medium/low
Return valid JSON only. No markdown, no explanation.`;
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
method: "POST",
headers: {
Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "minimax/minimax-m2.1",
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: userTestDescription },
],
response_format: { type: "json_object" },
}),
});
A user types: "Check that the homepage loads, the nav has links to Pricing and Docs, and the hero section has a CTA button that scrolls to the demo."
The AI returns:
{
"testCases": [
{
"name": "Homepage Load",
"steps": ["Navigate to the homepage", "Wait for page to fully load"],
"expectedResult": "Page loads without errors, HTTP 200",
"priority": "high"
},
{
"name": "Navigation Links",
"steps": ["Locate the navigation bar", "Check for Pricing link", "Check for Docs link"],
"expectedResult": "Both Pricing and Docs links are visible and clickable",
"priority": "high"
},
{
"name": "Hero CTA Scroll",
"steps": ["Find the CTA button in the hero section", "Click the CTA button"],
"expectedResult": "Page scrolls smoothly to the demo section",
"priority": "medium"
}
]
}
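Model output is still model output, so it's worth validating before offering tests for execution. A small guard, sketched here with the field names from the system prompt (the function itself is illustrative, not part of the recipe):

```javascript
// Sketch: keep only well-formed test cases from the model's JSON
// response. Field names match the system prompt; anything malformed
// is silently dropped rather than sent to TinyFish.
function validateTestCases(payload) {
  const parsed = typeof payload === "string" ? JSON.parse(payload) : payload;
  if (!Array.isArray(parsed.testCases)) {
    throw new Error("Model response is missing a testCases array");
  }
  return parsed.testCases.filter(
    (t) =>
      typeof t.name === "string" &&
      Array.isArray(t.steps) &&
      t.steps.length > 0 &&
      typeof t.expectedResult === "string" &&
      ["high", "medium", "low"].includes(t.priority)
  );
}
```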
This is the core of the system. The execute endpoint takes selected test cases, fires them all to TinyFish in parallel, and consolidates the SSE streams back to the frontend.
export async function POST(request) {
const { tests, targetUrl } = await request.json();
// Create a readable stream to send consolidated SSE events
const stream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder();
// Execute all tests in parallel
const promises = tests.map(async (test, index) => {
try {
const tinyFishResponse = await fetch(
"<https://agent.tinyfish.ai/v1/automation/run-sse>",
{
method: "POST",
headers: {
"X-API-Key": process.env.TINYFISH_API_KEY,
"Content-Type": "application/json",
},
body: JSON.stringify({
url: targetUrl,
goal: buildTestGoal(test),
}),
}
);
// Stream TinyFish's SSE events back, tagged with test index
const reader = tinyFishResponse.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
// Tag each event with the test index for frontend routing
const taggedEvent = `data: ${JSON.stringify({
testIndex: index,
testName: test.name,
...parseSSEEvent(chunk),
})}\n\n`;
controller.enqueue(encoder.encode(taggedEvent));
}
} catch (error) {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({
testIndex: index,
testName: test.name,
status: "error",
error: error.message,
})}\n\n`
)
);
}
});
await Promise.allSettled(promises);
controller.close();
},
});
return new Response(stream, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
Connection: "keep-alive",
},
});
}
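The route above leans on a parseSSEEvent helper. A minimal version might look like this, assuming TinyFish frames arrive as standard "data: {json}" SSE lines (the real event shape may differ):

```javascript
// Sketch of the parseSSEEvent helper used in the execute route.
// Pulls the first `data:` line out of a raw chunk and parses its JSON;
// anything unparseable is passed through under `raw` for debugging.
function parseSSEEvent(chunk) {
  const dataLine = chunk
    .split("\n")
    .find((line) => line.startsWith("data: "));
  if (!dataLine) return { raw: chunk };
  try {
    return JSON.parse(dataLine.slice("data: ".length));
  } catch {
    return { raw: dataLine };
  }
}
```

Note this assumes one event per chunk; a production version would buffer partial frames across reads, since chunk boundaries don't have to align with event boundaries.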
The buildTestGoal function is where test case structure meets TinyFish's natural language interface:
function buildTestGoal(test) {
return `You are executing a QA test called "${test.name}".
Steps to execute:
${test.steps.map((step, i) => `${i + 1}. ${step}`).join("\n")}
Expected result: ${test.expectedResult}
After completing all steps, evaluate whether the expected result was achieved.
Respond with:
- status: "PASS" or "FAIL"
- observation: What you actually saw
- screenshot: Take a final screenshot
If any step fails or produces unexpected results, report FAIL with details.`;
}
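Because the verdict comes back as natural language, you still need to pull PASS or FAIL out of the agent's final response. A defensive sketch (not from the recipe), treating any ambiguous answer as a failure:

```javascript
// Sketch: extract a PASS/FAIL verdict from the agent's final text.
// Assumes the agent follows the instructions in buildTestGoal; when
// no clear verdict appears, we default to FAIL rather than pass.
function parseVerdict(finalText) {
  const match = /\b(PASS|FAIL)\b/i.exec(finalText || "");
  return match ? match[1].toUpperCase() : "FAIL";
}
```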
When tests fail, you don't just want "FAIL". You want context. The report endpoint takes the failure data from TinyFish and generates a structured bug report:
const bugReportPrompt = `Generate a concise bug report from this test failure:
Test: ${failedTest.name}
Steps attempted: ${failedTest.steps.join(", ")}
Expected: ${failedTest.expectedResult}
Actual: ${failedTest.observation}
Format as:
- Summary (1 line)
- Reproduction Steps (numbered)
- Expected vs Actual Behavior
- Severity (Critical/High/Medium/Low)
- Suggested Fix (if obvious)`;
The frontend uses React context for state management and a streaming fetch reader to consume the SSE response (the execute route is a POST, so the browser's EventSource, which only issues GET requests, can't consume it directly). The live preview component is the most satisfying part: you see browser screenshots updating in real time as each parallel agent works through its test:
function TestExecutionGrid({ runningTests }) {
return (
<div className="grid grid-cols-2 lg:grid-cols-3 gap-4">
{runningTests.map((test) => (
<div key={test.index} className="border rounded-lg p-4">
<div className="flex justify-between items-center mb-2">
<span className="font-medium">{test.name}</span>
<StatusBadge status={test.status} />
</div>
{/* Live browser screenshot */}
{test.latestScreenshot && (
<img
src={test.latestScreenshot}
alt={`Live preview: ${test.name}`}
className="w-full rounded border"
/>
)}
{/* Progress steps */}
<div className="mt-2 text-sm text-gray-500">
{test.currentStep || "Waiting..."}
</div>
</div>
))}
</div>
);
}
npm run dev
Open http://localhost:3000. Create a project, add your target URL, write your tests in plain English, and hit run. Watch the browsers go.
→ See the demo video (embed your demo video here)
When you click "Run Tests" with 5 test cases selected, all five browsers launch simultaneously.
The total time is roughly the duration of your slowest test, not the sum of all tests. 5 tests that take 10 seconds each finish in ~10 seconds, not 50. This is what parallel browser agents give you.
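You can see the wall-clock math in toy form, with setTimeout standing in for real browser sessions (delays are milliseconds purely to keep the example quick):

```javascript
// Toy demonstration of the parallel timing claim: total wall time is
// roughly the slowest "test", not the sum of all of them.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runParallel(durationsMs) {
  const start = Date.now();
  await Promise.allSettled(durationsMs.map((ms) => delay(ms)));
  return Date.now() - start;
}
```

Five 50 ms "tests" finish in roughly 50 ms of wall time, well under the 250 ms a serial loop would take.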
Fast QA is a cookbook recipe — it's meant to be forked and extended. But the pattern it demonstrates is genuinely useful:
For startups without dedicated QA: Your founding engineer can write tests in English during standup and run them before merging. No QA hire needed until you're at 20+ people.
For CI/CD pipelines: Swap the frontend for a CLI script that reads test cases from a YAML file and runs them via TinyFish on every push. You now have AI-powered E2E tests that don't break when you refactor your CSS.
For non-technical stakeholders: Your PM can write acceptance criteria in plain English, paste them into Fast QA, and verify features themselves. The gap between "story accepted" and "actually works" shrinks dramatically.
For regression testing at scale: You have 200 test cases across 50 pages? TinyFish runs them all in parallel. What used to be a 4-hour manual regression suite becomes a 5-minute automated run with live visual proof.
The full source code is open source in the TinyFish Cookbook:
→ GitHub: tinyfish-io/tinyfish-cookbook/fast-qa
You'll need a TinyFish API key and an OpenRouter API key, the same two keys from .env.local above.
Clone it, run it, break it, make it yours. If you build something interesting on top of this, open a PR — we'll feature it.
Built with TinyFish API · Part of the TinyFish Cookbook