You publish an update to your website. Something on the site changed — copy, layout, a CMS field, a few CTAs. Did anything break? Did what you fixed actually show up? Did staging match production? Did the French version look right?
If you’re already working in Claude, Cursor, or Windsurf, you don’t need to switch back to the Sitepager dashboard. Sitepager now connects via MCP, so your assistant can run scans, read results, and tell you what changed — in the same session.
What you can do
- Run a scan and get a clear summary of what changed since your last baseline
- Catch broken links and missing pages before a campaign or release
- Spot visual changes across pages
- Review SEO issues — missing titles, broken meta tags, page-structure problems
- Track performance across updates and see what got worse
- Fix issues and re-run to verify — without leaving your assistant
The loop
- Run a scan
- Ask your assistant to review and fix issues
- Re-run the scan to verify
Repeat this after every website update.
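Concretely, the loop can be as simple as three prompts to your assistant (the wording is illustrative — phrase them however you like):

```
Run a Sitepager scan on staging and summarize what changed since the last baseline.
Review the broken links and SEO issues it found and suggest fixes.
Re-run the scan and confirm the issues are resolved.
```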
Example workflows
Pre-publish check on staging. You’ve made changes on staging and want to check if anything broke before publishing.
Your assistant runs the scan and gives you a clear summary: which pages changed, what was added or removed. If something looks off, you can ask for more detail on that page before publishing.
Catch broken links before a campaign push. Paid traffic is about to hit your launch page, and you want to make sure everything works.
You get a list of broken links, fix them, and re-run to confirm. No need to switch back to the Sitepager dashboard.
Post-publish diff. A teammate made a global style change and you’re not sure what it touched across the site.
This is the workflow that catches the cross-page surprises — a spacing rule that quietly broke every blog post, a section that disappeared on mobile.
SEO health check after a site update. You’ve restructured URLs, migrated to a new CMS, or updated a set of pages and want to make sure nothing is missing.
Your assistant surfaces the issues by page, proposes fixes based on the actual content, and you can re-run after publishing to confirm they’re resolved.
Performance check after an update. You published a new page or added a third-party script and want to see how it’s performing.
Your assistant gives you a clear summary across performance, accessibility, SEO, and best practices — and what’s actually worth acting on.
Install in 60 seconds
Grab an API key from your Sitepager account and connect your client. Works with Claude Desktop, Claude Code, Cursor, Windsurf, and other MCP-compatible tools.
Full setup instructions are in the docs.
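For reference, MCP clients are typically wired up with a small JSON config entry along these lines. The package name (`sitepager-mcp`) and environment variable (`SITEPAGER_API_KEY`) here are illustrative assumptions — check the docs for the exact values for your client:

```json
{
  "mcpServers": {
    "sitepager": {
      "command": "npx",
      "args": ["-y", "sitepager-mcp"],
      "env": {
        "SITEPAGER_API_KEY": "your-api-key"
      }
    }
  }
}
```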
Why we built it this way
A website changed. Did anything break?
That’s the question Sitepager answers — on every update, every publish, every campaign.
For teams working inside AI tools, switching to a dashboard breaks that flow.
MCP is a second front door to the same product — same scans, same baselines, same history, just from where you’re already working.
FAQ
Is this in beta? Yes. Core scan and results are stable. We’re improving how assistants interact with the tools.
Which AI clients does it support? We’ve tested Claude Desktop, Claude Code, Cursor, and Windsurf. Other MCP-compatible tools should work too.
Does this replace the dashboard? No. MCP is for running scans and reviewing results. The dashboard is where you go to see visual diffs and manage baselines.
Does it work for sites built on Webflow / Framer / Shopify / Next.js / anything else? Yes. Sitepager is platform-agnostic — it crawls and compares URLs, so anything you can serve at a URL works.
How do I get help if a prompt doesn’t work the way I expect? Email us or leave a note in the docs. We’re improving how the assistant uses Sitepager based on real prompts, so your feedback helps make it more reliable.
Try it on your next update. If you don’t have a Sitepager account yet, you can start a trial and use MCP from day one.



