
Extract HTML from protected sites with our /unblock API

Use our stealth browsers via the /unblock API with residential proxies to retrieve HTML, take a screenshot, or generate an unlocked endpoint for Playwright or Puppeteer.

Browser automation that appears human

Bypass Cloudflare, DataDome, and other bot detectors

Our /unblock API automatically hides even the most subtle signs that a browser is being automated. It controls browsers at the CDP level, removing typical traces a library leaves behind.
You can enable the content or screenshot options to get back the HTML or a PNG, or use the unlocked WebSocket endpoint with Playwright or Puppeteer.
/unblock API doc
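
For a concrete picture, here is a minimal Node sketch of that flow: request an unlocked WebSocket endpoint from /unblock, then hand it to Puppeteer. The ?token= query parameter and the browserWSEndpoint response field are assumptions based on typical usage, so verify them against the API doc.

// Hypothetical sketch: get an unlocked endpoint from /unblock, then script it with Puppeteer.
// Assumes the API key is passed as ?token= and the JSON response includes browserWSEndpoint.
import puppeteer from 'puppeteer';

const res = await fetch('https://production-sfo.browserless.io/unblock?token=YOUR_API_KEY', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({
    url: 'https://example.com',
    browserWSEndpoint: true, // ask for an endpoint instead of HTML or a screenshot
    content: false,
    screenshot: false
  })
});
const { browserWSEndpoint } = await res.json();

// Connect to the already-unblocked session and continue scripting as usual.
const browser = await puppeteer.connect({ browserWSEndpoint });
const pages = await browser.pages();
const page = pages[0] ?? await browser.newPage(); // reuse the unblocked page if it's still open
console.log(await page.title());
await browser.close();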

Use the HTML with Scrapy or other tools

Our APIs render and evaluate a page with our browsers, then return the HTML or JSON.
You can then use a library such as Scrapy or Beautiful Soup to extract the data if needed. This gives you the advantages of headless browsers, such as JavaScript rendering and captcha avoidance, without having to run the browsers yourself.
Check out the docs
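
As a rough illustration, the sketch below fetches rendered HTML from the /content API in Node and writes it to disk for whatever parser you prefer downstream. The endpoint path, the ?token= parameter, and the plain-HTML response body are assumptions to check against the docs.

// Hypothetical sketch: pull rendered HTML, then feed it to Scrapy, Beautiful Soup,
// or any other extraction tool you already use.
// Assumes the API key is passed as ?token= and the response body is the raw HTML.
import { writeFile } from 'node:fs/promises';

const res = await fetch('https://production-sfo.browserless.io/content?token=YOUR_API_KEY', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ url: 'https://example.com' })
});

const html = await res.text();
await writeFile('page.html', html); // hand this file to your parser of choice
console.log(`Saved ${html.length} bytes of rendered HTML`);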

Use the full power of Puppeteer and Playwright

Unlike many scraping tools, you can also use the standard Puppeteer and Playwright libraries to run any script.
You can click buttons, navigate dynamic content, or do anything else your script needs. Just host the scripts on your own servers and connect them to our browsers.
Getting started docs
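
The same applies to Playwright; here is a minimal sketch, assuming a CDP connection over the wss endpoint below with a ?token= query parameter (check the connection docs for your account's exact URL).

// Hypothetical sketch: point an unmodified Playwright script at a remote browser
// over CDP instead of launching Chromium locally.
import { chromium } from 'playwright';

const browser = await chromium.connectOverCDP('wss://chrome.browserless.io?token=YOUR_API_KEY');
const page = await browser.newPage();
await page.goto('https://example.com/');
console.log(await page.title());
await browser.close();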

Use our API or an unforked library

See the Docs

# Automatically responds with the page's HTML payload
curl --request POST \
  --url 'https://production-sfo.browserless.io/unblock' \
  --header 'content-type: application/json' \
  --data '{
  "url": "https://example.com",
  "browserWSEndpoint": false,
  "cookies": false,
  "content": true,
  "screenshot": true,
  "ttl": 3000
}'
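
Because the request above asks for both content and a screenshot, the response comes back as JSON. Below is a rough sketch of issuing the same request from Node and unpacking it, assuming content holds the HTML string and screenshot is base64-encoded; the field names and ?token= parameter are worth verifying against the API doc.

// Hypothetical sketch: the same /unblock request issued from Node, then unpacked.
// Assumes a ?token= query parameter and content / screenshot response fields.
import { writeFile } from 'node:fs/promises';

const res = await fetch('https://production-sfo.browserless.io/unblock?token=YOUR_API_KEY', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({
    url: 'https://example.com',
    browserWSEndpoint: false,
    cookies: false,
    content: true,
    screenshot: true,
    ttl: 3000
  })
});

const data = await res.json();
await writeFile('page.html', data.content);                          // rendered HTML
await writeFile('page.png', Buffer.from(data.screenshot, 'base64')); // screenshot image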


# Returns the JSON of the specified elements
curl -X POST \
  https://chrome.browserless.io/scrape \
  -H 'Content-Type: application/json' \
  -d '{
  "url": "https://news.ycombinator.com/",
  "elements": [{
    "selector": ".athing .titlelink"
  }]
}'


// From inside your Node application
import puppeteer from 'puppeteer';

// Replace puppeteer.launch with puppeteer.connect
const browser = await puppeteer.connect({
  browserWSEndpoint: 'wss://chrome.browserless.io'
});

// The rest of your script remains the same
const page = await browser.newPage();
await page.goto('https://example.com/');
console.log(await page.title());
await browser.close();


Customer Stories

"We started using another scraping company's headless browsers to run Puppeteer scripts. But, it required a Vercel upgrade due to slow fetch times, and the proxies weren't running correctly. I found Browserless and had our Puppeteer code running within an hour. The scrapes are now 5x faster and 1/3rd of the price, plus the support has been excellent."

Nicklas Smit
Full-Stack Developer, Takeoff Copenhagen

"We built a scraping tool to train our chatbots on public website data, but it quickly got complicated due to edge cases and bot detection. I found Browserless and set aside a day for the integration, but it only took a couple of hours. I didn't need to become an expert in managing proxy servers or virtual computers, so now I can stay focused on core parts of the business."

Mike Heap
Founder, My AskAI

Ready to try the benefits of Browserless?