Full-content web fetcher for AI agents and content workflows. Standard HTTP clients (curl, wget, or an agent's built-in web fetch) often receive truncated or altered responses because servers inspect the client's network fingerprint. agent-fetch uses browser impersonation so servers respond as they would to a real browser, then runs multiple extraction strategies to pull the complete article: every paragraph, heading, and link. It also supports multi-page crawling, persistent cookies, and custom CSS selectors, and runs locally with no API keys or cloud dependencies.
Also useful for:
- NotebookLM can't add a URL as a source — extract the content and paste it as text
- RAG pipelines need clean markdown from web pages, not HTML soup or truncated summaries
- LLM conversations where you need the full article in context, not a 3-paragraph summary
| | Built-in agent fetch | Cloud extraction APIs | agent-fetch |
|---|---|---|---|
| Content | Summary or truncation | Full (usually) | Full article text |
| Structure | Plain text blob | Markdown (varies) | Markdown with headings, links, lists |
| Runs locally | Yes | No | Yes |
| API key required | No | Yes | No |
| Extraction strategies | 1 (basic parse) | 1–2 | Multiple (Readability, JSON-LD, Next.js, RSC, WP API, text-density, CSS selectors) |
| Open source | N/A | Partial | Yes |
```bash
npm install @teng-lin/agent-fetch
```

Or run without installing:
```bash
npx agent-fetch https://example.com/page
```

Install the Agent Skill and your agent will automatically use agent-fetch when it needs to read URLs:
```bash
npx skills add teng-lin/agent-fetch
```

The skill teaches agents when and how to call agent-fetch; no configuration is needed.
```bash
# Extract article as markdown
npx agent-fetch https://example.com/article

# Markdown content only (no metadata header)
npx agent-fetch https://example.com/article -q

# Full JSON output (title, content, markdown, metadata)
npx agent-fetch https://example.com/article --json

# Plain text only
npx agent-fetch https://example.com/article --text

# Raw HTML (no extraction)
npx agent-fetch https://example.com/article --raw

# Custom timeout (default: 20s)
npx agent-fetch https://example.com/article --timeout 30000

# With cookies (inline)
npx agent-fetch https://example.com/article --cookie "sessionId=abc123; theme=dark"

# With cookies (Netscape cookie file)
npx agent-fetch https://example.com/article --cookie-file ~/.cookies.txt
```

Getting cookies: Export a Netscape-format cookie file from your browser using the Get cookies.txt Locally Chrome extension, then pass it with `--cookie-file`. Cookies are useful for maintaining authenticated sessions or accessing content that requires login.
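For context, a Netscape cookie file is a tab-separated format: each data line holds domain, subdomain flag, path, secure flag, expiry, name, and value. A minimal sketch (not agent-fetch's actual parser) of turning such a file into a Cookie header string:

```js
// Sketch: convert Netscape-format cookie lines into a Cookie header value.
// Illustrative only; agent-fetch handles --cookie-file itself.
function netscapeToCookieHeader(fileText) {
  const pairs = [];
  for (let line of fileText.split('\n')) {
    line = line.trim();
    // "#HttpOnly_" is a data prefix used by some exporters, not a comment
    if (line.startsWith('#HttpOnly_')) line = line.slice('#HttpOnly_'.length);
    else if (line === '' || line.startsWith('#')) continue;
    const fields = line.split('\t');
    if (fields.length !== 7) continue; // domain, flag, path, secure, expiry, name, value
    const [, , , , , name, value] = fields;
    pairs.push(`${name}=${value}`);
  }
  return pairs.join('; ');
}
```

The result has the same shape as the inline `--cookie "sessionId=abc123; theme=dark"` form shown above.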
```bash
# Extract only article content, remove navigation
npx agent-fetch https://example.com/article --select "article" --remove "nav, .sidebar"

# Use specific TLS fingerprint (default: chrome-143)
npx agent-fetch https://example.com/article --preset "ios-safari-18"

# Extract from local PDF file
npx agent-fetch ./document.pdf

# Show version
npx agent-fetch --version
```

Default output:
```
Title: Page Title
Author: Author Name
Site: example.com
Published: 2025-01-26T12:00:00Z
Language: en
Fetched in 523ms
---
# Heading
Full content with **formatting**, [links](https://example.com), and structure preserved...
```
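Pipelines that consume this default output can split the metadata header from the markdown body at the first `---` line. A minimal sketch under the format shown above (not part of agent-fetch itself):

```js
// Sketch: split agent-fetch's default output into { meta, body }.
// Metadata lines are "Key: value"; the markdown body follows "---".
function splitOutput(output) {
  const lines = output.split('\n');
  const sep = lines.indexOf('---');
  if (sep === -1) return { meta: {}, body: output };
  const meta = {};
  for (const line of lines.slice(0, sep)) {
    const i = line.indexOf(':');
    // Lines without a colon (e.g. "Fetched in 523ms") are skipped
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { meta, body: lines.slice(sep + 1).join('\n').trim() };
}
```

For machine consumption without parsing, `--json` or `-q` (markdown only) avoids the header entirely.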
```js
import { httpFetch } from '@teng-lin/agent-fetch';

const result = await httpFetch('https://example.com/article');
if (result.success) {
  console.log(result.markdown);    // Full article as markdown
  console.log(result.title);       // "Article Title"
  console.log(result.byline);      // "By John Smith"
  console.log(result.textContent); // Plain text
  console.log(result.latencyMs);   // 523
}

// With options
const result2 = await httpFetch('https://slow-site.com/article', {
  timeout: 30000,       // 30 second timeout (default: 20s)
  preset: 'chrome-143', // TLS preset
});
```

Crawl a site and extract articles from multiple pages with depth control:
```bash
# Crawl with default settings (depth: 3, max 100 pages)
npx agent-fetch crawl https://example.com

# Crawl deeper with strict concurrency
npx agent-fetch crawl https://example.com --depth 5 --limit 50 --concurrency 3

# Crawl only matching URLs
npx agent-fetch crawl https://example.com --include "*/blog/*" --exclude "**/archive/**"

# Allow cross-origin crawling
npx agent-fetch crawl https://example.com --no-same-origin

# Add delay between requests (rate limiting)
npx agent-fetch crawl https://example.com --delay 1000

# With cookies and custom selectors
npx agent-fetch crawl https://example.com --cookie-file ~/.cookies.txt --select "article"

# Output as JSON for programmatic processing
npx agent-fetch crawl https://example.com --json
```

agent-fetch runs multiple extraction strategies in parallel and picks the most complete result. No single method works for every site: modern pages use frameworks, APIs, and structured data that each require different approaches.
| Strategy | What it does | Best for |
|---|---|---|
| Readability | Mozilla's Reader View algorithm (strict + relaxed passes) | Most pages with semantic HTML |
| Text density | Statistical text-to-tag ratio analysis (CETD) | Complex layouts that Readability over-trims |
| JSON-LD | Parses schema.org structured data | Sites with rich metadata |
| Next.js | Extracts from page props (`__NEXT_DATA__`) | Next.js sites (Pages Router) |
| React Server Components | Parses streaming RSC payloads | Next.js sites (App Router) |
| WordPress REST API | Fetches content via `/wp-json/wp/v2/` endpoints | WordPress sites (40%+ of the web) |
| CSS selectors | Probes semantic containers (`<article>`, `.post-content`, etc.) | Fallback for unusual layouts |
Winner selection: Strategies that extract 500+ characters are candidates. If text-density or RSC finds 2x more content than Readability, it wins. Otherwise, the longest result is chosen. Metadata (author, date, site name) is composed from the best source for each field across all strategies.
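The selection rule above can be sketched as follows. This is an illustration of the described logic, not agent-fetch's internal code; the candidate shape (`{ strategy, text }`) and strategy names are assumptions:

```js
// Sketch of winner selection: 500+ character results are candidates;
// text-density or RSC wins outright at 2x Readability's length;
// otherwise the longest candidate wins.
function pickWinner(results) {
  const candidates = results.filter(r => r.text.length >= 500);
  if (candidates.length === 0) return null;
  const readability = candidates.find(r => r.strategy === 'readability');
  if (readability) {
    const challenger = candidates.find(
      r => (r.strategy === 'text-density' || r.strategy === 'rsc') &&
           r.text.length >= 2 * readability.text.length
    );
    if (challenger) return challenger;
  }
  return candidates.reduce((best, r) => (r.text.length > best.text.length ? r : best));
}
```

Metadata composition is separate: each field (author, date, site name) is taken from whichever strategy extracted it best, independent of which strategy won on content.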
Disclaimer: This tool is intended for fetching publicly accessible web content. Users are solely responsible for:
- Complying with each website's Terms of Service and `robots.txt` directives
- Ensuring lawful use under applicable laws (including copyright, computer access, and data protection regulations)
- Obtaining necessary permissions before accessing or extracting content
The authors make no warranties about the legality of any specific use case. This tool does not grant permission to access any website or circumvent any access controls.
MIT