Show HN: Web-eval-agent – Let the coding agent debug itself

53 points by neversettles 7 hours ago | 9 comments

Hey HN! We’ve been building an MCP server that helps AI-assisted web-app developers by using browser agents to test whether the changes an AI makes inside an editor actually work. We've been testing it on scenarios like verifying new flows in a UI, or checking that sending a chat request triggers a response. The idea is to let your coding agent both write the code and evaluate whether what it did was correct. Here’s a short demo with Cursor: https://www.youtube.com/watch?v=_AoQK-bwR0w

When building apps, we found the hardest part of AI-assisted coding isn’t the coding—it’s tedious point-and-click testing to see if things work. We got tired of this loop: open the app, click through flows, stare at the network tab, copy console errors to the editor, repeat. It felt obvious this should be AI-assisted too. If you can vibe-code, you should be able to vibe-test!

Some agents like Cline and Windsurf have browser integrations, but Cline’s (via Anthropic Computer Use) felt slow and only reported console logs, and Windsurf’s didn’t work reliably yet. We got so tired of manually testing that we decided to fix it.

Our MCP server sits between your IDE agent (Cursor/Windsurf/Cline/Continue) and a Playwright-powered browser-use agent. It spins up the browser, navigates your app per instructions from the IDE agent, and sends back steps, console events, and network events so the IDE agent can assess the app’s state.
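
Roughly, the wiring looks like this. (A minimal sketch only, assuming the FastMCP helper from the MCP Python SDK, browser-use's Agent, and LangChain's Gemini wrapper; the tool name, prompt, and wiring here are illustrative, not the actual server code.)

  # Illustrative sketch — not the real implementation.
  from mcp.server.fastmcp import FastMCP
  from browser_use import Agent
  from langchain_google_genai import ChatGoogleGenerativeAI

  mcp = FastMCP("web-eval-agent-sketch")

  @mcp.tool()
  async def web_eval_agent(url: str, task: str) -> str:
      """Drive a browser through `task` starting at `url`; return a text report."""
      agent = Agent(
          task=f"Open {url} and {task}. Report what worked and what broke.",
          llm=ChatGoogleGenerativeAI(model="gemini-2.0-flash"),
      )
      history = await agent.run()  # browser-use clicks through the app
      return history.final_result() or "Agent finished without a summary."

  if __name__ == "__main__":
      mcp.run()  # serve the tool to the IDE agent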

We proxy Browser-use’s original Claude calls and swap in Gemini 2.0 Flash, cutting latency from ~8s → ~3s per step. We also cap console/network logs at 10,000 characters to stay within context limits, and filter out irrelevant logs (e.g., noisy XHR requests).
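
The capping/filtering side is simple. (A rough sketch of the idea; the 10,000-character cap is the one above, but the filter list and helper here are illustrative, not the exact code.)

  MAX_LOG_CHARS = 10_000  # context-limit cap mentioned above

  # Illustrative noise filters — the real server's rules may differ.
  NOISY_SUBSTRINGS = ("favicon", "hot-update", "analytics")

  def condense_logs(entries: list[str]) -> str:
      """Drop noisy entries, then truncate the rest to fit the context budget."""
      kept = [e for e in entries if not any(s in e for s in NOISY_SUBSTRINGS)]
      text = "\n".join(kept)
      if len(text) > MAX_LOG_CHARS:
          text = text[:MAX_LOG_CHARS] + "\n…[truncated]"
      return text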

At the end, the browser agent outputs a summary like:

  Web Evaluation Report for http://localhost:5173 
  Task: delete an API key and evaluate UX
  Steps: Home → Login → API Keys → Create Key → Delete Key
  Flow tested successfully; UX had problems X, Y, Z...
  Console (8)...   Network (13)...   Timeline of events (57) …

This gives the coding agent visibility into console and network errors, plus any issues encountered while clicking around, so it can fix them before returning to the user. (There’s a longer example in the README at https://github.com/Operative-Sh/web-eval-agent.)

Try it in Cursor / Cline / Windsurf / Claude Desktop (macOS/Linux):

  curl -LSf https://operative.sh/install.sh -o install.sh
  less -N install.sh   # inspect if you’d like
  bash install.sh      # installs uv + jq + Playwright + server
  # then in Cursor/Cline/Windsurf/Continue: craft a prompt using the web_eval_agent tool

(For Windows, there’s a 4-line manual install in the README.)

What we want to do next: pause/go for OAuth screens; save/load browser auth states; Playwright step recording for automated test and regression-test creation; supporting Lovable / v0 / Bolt.new sites by offering a web version.

We’d love to hear your feedback, especially if you’ve felt the pain of manually testing your web app after making changes from inside your IDE, or if you’ve tried any alternative MCP tools for this that have worked well.

Try it out if you feel it’d be helpful for your workflow: https://github.com/Operative-Sh/web-eval-agent. (Note: the server hits our operative.sh proxy to cover Gemini tokens; the MCP server itself is OSS, and Anthropic base-URL support is coming soon. There’s a free tier; heavy users can grab the $10 plan to offset our model bill.)

Let us know what you think! Thanks for reading!

proc0 4 hours ago

Interesting. I see from the video example it took a lot of steps and there is a lot of output for a simple task. I'm thinking this probably doesn't scale very well and more complex tasks might have performance challenges. I do think it's the right direction for AI coding.

neversettles 3 hours ago

Yeah, I suppose to esafak's point, perhaps a benchmark for browser agent QA testing would be needed.

esafak 6 hours ago

Is there a benchmark for this? If not, you ought to (crowd?)start one for everybody's sake.

neversettles 6 hours ago

We started out using browser-use because they had the best evals: https://browser-use.com/posts/sota-technical-report

But we found that Laminar came out with a better browser agent (and a better eval): https://www.lmnr.ai/, so we're looking to migrate over soon!

nico 6 hours ago

Looks amazing. Congrats on the release

How does this compare to browser mcp (https://browsermcp.io/)?

neversettles 6 hours ago

With Browser MCP, it looks like Cursor controls each action along the way; what we wanted was a single browser agent with a high-quality eval that could perform all the actions independently (browser-use).

GreenGames 6 hours ago

This is very cool! Does your MCP server preserve cookies/localStorage between steps, or would developers need to manually script auth handshakes?

neversettles 6 hours ago

Between steps it preserves cookies, but at the moment, when the Playwright browser launches it starts with a fresh browser state, so you'd have to OAuth/log in each time.

We're adding browser state persistence soon, so that once you sign in with Google, it can stay signed in on your local machine.
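
Roughly what we have in mind is Playwright's storage_state. (Hedged sketch, not shipped yet; the file path and wiring here are made up.)

  # Sketch of how persistence could work — not in the server yet.
  from pathlib import Path
  from playwright.async_api import async_playwright

  STATE_FILE = Path("auth_state.json")  # hypothetical location for saved cookies/localStorage

  async def launch_with_saved_auth():
      async with async_playwright() as p:
          browser = await p.chromium.launch(headless=False)
          context = await browser.new_context(
              storage_state=str(STATE_FILE) if STATE_FILE.exists() else None
          )
          page = await context.new_page()
          # ...sign in once here, then persist the session for the next launch:
          await context.storage_state(path=str(STATE_FILE))
          return browser, context, page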

GreenGames 6 hours ago

Oh okay thanks - that would be fire tbh