163 points by anerli 21 hours ago | 40 comments
We know there's a lot of noise about different browser agents. If you've tried any of them, you know they're slow, expensive, and inconsistent. That's why we built an agent specifically for running test cases and optimized it just for that:
- Pure vision instead of an error-prone "set-of-marks" system (the colorful boxes you see in browser-use, for example)
- Use a tiny VLM (Moondream) instead of OpenAI/Anthropic computer use for dramatically faster and cheaper execution
- Use two agents: one for planning and adapting test cases and one for executing them quickly and consistently.
The idea is that the planner builds up a general plan, which the executor runs. We can save this plan and re-run it with only the executor for quick, cheap, and consistent runs. When something goes wrong, execution kicks back out to the planner agent, which re-adjusts the test.
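Here's roughly how that loop fits together, as a minimal TypeScript sketch (Planner, Executor, and Action are illustrative names, not our actual API):

```typescript
// Illustrative planner/executor loop; all names here are hypothetical.
type Action =
  | { variant: 'click'; target: string }
  | { variant: 'type'; target: string; content: string };

interface Planner {
  buildPlan(testCase: string): Promise<Action[]>;
  adjustPlan(plan: Action[], failedAt: number): Promise<Action[]>;
}

interface Executor {
  // Runs one action; resolves false if it can't find the target confidently.
  run(action: Action): Promise<boolean>;
}

async function runTest(
  testCase: string,
  planner: Planner,
  executor: Executor,
  cache: Map<string, Action[]>
): Promise<void> {
  // Reuse the saved plan when we have one; otherwise plan from scratch.
  let plan = cache.get(testCase) ?? (await planner.buildPlan(testCase));
  for (let i = 0; i < plan.length; i++) {
    if (!(await executor.run(plan[i]))) {
      // Executor lost confidence: swap the planner back in and retry the step.
      plan = await planner.adjustPlan(plan, i);
      i--;
    }
  }
  cache.set(testCase, plan); // save for cheap, consistent re-runs
}
```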
It’s completely open source. Would love to have more people try it out and tell us how we can make it great.
NitpickLawyer 20 hours ago
I've recently been thinking about testing/QA with VLMs + LLMs. One area I haven't seen explored (but should 100% be feasible) is to have the first run be LLM + VLM, then have the LLM(s?) write repeatable "cheap" tests with traditional libraries (Playwright, Puppeteer, etc.). On every run you do the "cheap" traditional checks; if any fail, go with the LLM + VLM again and see what broke, and only fail the test if both fail. Makes sense?
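Something like this is what I mean, with Playwright as the cheap tier and agentCheck as a stand-in for the LLM + VLM run (selectors and URL are made up):

```typescript
import { chromium, Page } from 'playwright';

// The "cheap" tier: plain Playwright steps generated on the first agent run.
async function cheapCheck(page: Page): Promise<boolean> {
  try {
    await page.goto('https://example.com/login');
    await page.fill('#email', 'user@example.com');
    await page.click('button[type=submit]');
    await page.waitForSelector('text=Welcome', { timeout: 5000 });
    return true;
  } catch {
    return false;
  }
}

// Stand-in for the expensive LLM + VLM agent run.
declare function agentCheck(page: Page): Promise<boolean>;

async function runTest(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  // Only fail if both the cheap check and the agent say it's broken.
  const ok = (await cheapCheck(page)) || (await agentCheck(page));
  await browser.close();
  if (!ok) throw new Error('failed under both cheap and agent checks');
}
```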
anerli 20 hours ago
Instead of caching actual code, we cache a "plan" of specific web actions that are still described in natural language.
For example, a cached "typing" action might look like: { variant: 'type'; target: string; content: string; }
The target is a natural language description. The content is what to type. Moondream's job is simply to find the target; we then click into that target and type the content. This means it can be full vision and not rely on the DOM at all, while still being very consistent. Moondream is also trivially cheap to run since it's only a 2B model. If it can't find the target, or its confidence changes significantly (using token probabilities), that's an indication the action/plan requires adjustment, and we can dynamically swap in the planner LLM to decide how to adjust the test from there.
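As a rough sketch of executing that cached action — locateTarget here is a hypothetical wrapper around the Moondream call, and the 0.5 threshold is made up:

```typescript
import { Page } from 'playwright';

type TypeAction = { variant: 'type'; target: string; content: string };

// Hypothetical wrapper: ask the VLM to point at `description` in the
// screenshot, returning pixel coordinates plus a confidence derived from
// token probabilities.
declare function locateTarget(
  screenshot: Buffer,
  description: string
): Promise<{ x: number; y: number; confidence: number } | null>;

async function execType(page: Page, action: TypeAction): Promise<boolean> {
  const screenshot = await page.screenshot();
  const found = await locateTarget(screenshot, action.target);
  // Low or significantly shifted confidence means the plan needs adjusting,
  // so signal the caller to swap the planner LLM back in.
  if (!found || found.confidence < 0.5) return false;
  await page.mouse.click(found.x, found.y); // pure vision: no DOM selectors
  await page.keyboard.type(action.content);
  return true;
}
```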
tomatohs 13 hours ago
We have multiple fallbacks to prevent flakes: the "cheap" command, a description of the intended step, and the original prompt.
If any step fails, we fall back to the next source.
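In pseudocode-ish TypeScript, the chain looks something like this (names are illustrative):

```typescript
type StepSource = 'cheapCommand' | 'stepDescription' | 'originalPrompt';

// Try each source in order; only report a failure if every tier fails.
async function runStep(
  attempts: Record<StepSource, () => Promise<boolean>>
): Promise<StepSource | null> {
  const order: StepSource[] = ['cheapCommand', 'stepDescription', 'originalPrompt'];
  for (const source of order) {
    if (await attempts[source]()) return source; // which tier succeeded
  }
  return null;
}
```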
chrisweekly 8 hours ago
1. https://netflixtechblog.com/introducing-safetest-a-novel-app...
o1o1o1 5 hours ago
However, I do not see a big advantage over Cypress tests.
The article mentions shortcomings of Cypress (and Playwright):
> They start a dev server with bootstrapping code to load the component and/or setup code you want, which limits their ability to handle complex enterprise applications that might have OAuth or a complex build pipeline.
The simple solution is to containerise the whole application (including whatever OAuth provider is used), which then lets you launch the whole thing and run the tests against it. Most apps (especially in enterprise) should already be containerised anyway, so most of the time we can just go ahead and run any tests against them.
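For example, after bringing the stack up with something like `docker compose up`, a plain Playwright test can exercise the real OAuth flow (URLs and selectors here are invented):

```typescript
import { test, expect } from '@playwright/test';

test('login through the real OAuth provider', async ({ page }) => {
  await page.goto('http://localhost:8080'); // the containerised app
  await page.click('text=Sign in'); // redirects to the OAuth container, not a mock
  await page.fill('#username', 'testuser');
  await page.fill('#password', 'testpass');
  await page.click('button[type=submit]');
  await expect(page).toHaveURL(/localhost:8080/); // back on the app, logged in
});
```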
How is SafeTest better than that when my goal is to test my application in a real-world scenario?
retreatguru 3 hours ago