Article link extraction tool. Submit a paywalled media article link; the tool matches existing content and returns the English full text, or queues the article for extraction. Supports 15 outlets: Barron's, Bloomberg, Financial Times, Foreign Policy, Handelsblatt, MarketWatch, New York Times, Reuters, The Atlantic, The Economist, The New Yorker, Wall Street Journal, Washington Post, and Wired. Requires an Import Token for authentication; daily usage limits apply.
- All operations go through `python3 {baseDir}/scripts/article_link.py <command>`. Do not write your own scripts, and do not call the API directly with curl/requests.
- Check `{baseDir}/config.json` to confirm `import_token` is configured.
- Use `quota` to check remaining calls.
- With `--deep`, the script automatically intercepts the request and returns quota information; you must report the quota to the user and obtain explicit confirmation before rerunning with `--yes`.

Before each use, read `{baseDir}/config.json`:
{
"api_base": "https://pick-read.vip/api",
"import_token": "imp-xxx..."
}
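The token check described above can be sketched in a few lines (a minimal sketch; `load_import_token` is a hypothetical helper name, and the path and keys follow the config example):

```python
import json
from pathlib import Path

def load_import_token(config_path):
    """Read config.json and return the import_token, or None if missing/empty."""
    cfg = json.loads(Path(config_path).read_text(encoding="utf-8"))
    token = (cfg.get("import_token") or "").strip()
    return token or None
```

If this returns None, tell the user to generate an Import Token on the pick-read.vip account page and fill it into config.json.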
- If `import_token` is empty → tell the user: generate an Import Token on the pick-read.vip account page and fill it into config.json.
- If `import_token` is filled in → run commands directly; no `--token` argument is needed.

List supported media:
python3 {baseDir}/scripts/article_link.py media
Example response:
{
"type": "media_list",
"total": 15,
"media": [
{"domain": "ft.com", "name": "Financial Times"},
{"domain": "wsj.com", "name": "Wall Street Journal"},
{"domain": "nytimes.com", "name": "New York Times"}
]
}
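A link can be pre-checked against the media list before submitting (a sketch; `is_supported` is a hypothetical helper, and the entries mirror the `media_list` response above):

```python
from urllib.parse import urlparse

def is_supported(url, media_list):
    """Return True if the URL's host matches a supported domain (incl. subdomains)."""
    host = urlparse(url).hostname or ""
    return any(host == m["domain"] or host.endswith("." + m["domain"])
               for m in media_list)
```

This is only a convenience filter; the authoritative check is the 422 error the script returns for unsupported sources.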
python3 {baseDir}/scripts/article_link.py quota
Example response:
{
"type": "quota",
"basic_used": 3,
"basic_limit": 50,
"deep_used": 0,
"deep_limit": 5
}
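Remaining allowance can be derived from the quota response fields (a sketch; `remaining` is a hypothetical helper using the field names shown above):

```python
def remaining(quota):
    """Compute (basic_remaining, deep_remaining) from a quota response."""
    return (quota["basic_limit"] - quota["basic_used"],
            quota["deep_limit"] - quota["deep_used"])
```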
python3 {baseDir}/scripts/article_link.py submit "https://www.wsj.com/articles/some-article"
Example response (matched — English full text returned directly; no further command needed):
{
"type": "submit_matched",
"job_id": "abc123",
"origin_url": "https://www.wsj.com/articles/some-article",
"source_media": "Wall Street Journal",
"mode": "basic",
"status": "matched",
"matched_article_id": "def456",
"title": "Article Title in English",
"content_html": "<p>Full article text in English...</p>",
"original_publish_time": "2026-04-10T08:00:00",
"next_action": "done — show title + content_html (English full text) to the user"
}
→ type=submit_matched means the full text was returned; just show title + content_html to the user
Example response (no match; queued for extraction):
{
"type": "submit_pending",
"job_id": "abc123",
"origin_url": "https://www.wsj.com/articles/some-article",
"source_media": "Wall Street Journal",
"mode": "basic",
"status": "pending_extract",
"next_action": "poll — poll the job status with status \"abc123\""
}
→ Follow the next_action guidance; do not improvise.
Optional flags:
- `--deep` — deep-extraction mode: skip existing matches and re-extract directly.
- `--yes` — confirm deep extraction (only after the user has explicitly confirmed).

When the user requests deep extraction, it must be done in two steps:
Step 1: trigger the confirmation prompt
python3 {baseDir}/scripts/article_link.py submit "https://..." --deep
Example response:
{
"type": "deep_confirm_required",
"message": "Deep extraction is limited to 5 per day; 1 used today, 4 remaining. Confirm, then rerun with --yes.",
"deep_used": 1,
"deep_limit": 5,
"deep_remaining": 4,
"confirm_command": "submit \"https://...\" --deep --yes"
}
→ Show the message to the user and ask whether to proceed
Step 2: run after the user confirms
python3 {baseDir}/scripts/article_link.py submit "https://..." --deep --yes
⚠️ Never skip Step 1 and run --deep --yes directly; the user must see the quota and confirm first
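The two-step gate can be expressed as a small decision function (a sketch; `deep_next_step` is a hypothetical helper operating on the deep_confirm_required response shown above):

```python
def deep_next_step(response, user_confirmed):
    """Decide the next move in the two-step deep flow.

    response: parsed JSON from `submit <url> --deep` (Step 1).
    Returns the confirm command string to run, or None if --yes is not allowed yet.
    """
    if response.get("type") != "deep_confirm_required":
        return None  # a different response type; handle it per its next_action
    if not user_confirmed:
        return None  # show response["message"] and wait for explicit consent
    return response["confirm_command"]
```

Note the asymmetry this encodes: the confirm command is only ever taken from the script's own response, never constructed before Step 1 has run.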
Meaning of the status field:
- matched — matched; the script has already fetched the English full text; show it directly.
- pending_extract — no match; queued for extraction; poll per next_action.
- processing — extraction in progress; keep polling.
- ready — extraction complete; the script has already fetched the full text.
- failed — extraction failed; inform the user.

After submitting, if type=submit_pending, poll per next_action:
python3 {baseDir}/scripts/article_link.py status "abc123"
Example response (extraction complete — English full text returned automatically):
{
"type": "job_ready",
"job_id": "abc123",
"status": "ready",
"matched_article_id": "def456",
"title": "Article Title in English",
"content_html": "<p>Full article text...</p>",
"next_action": "done — show title + content_html (English full text) to the user"
}
Example response (still processing):
{
"type": "job_status",
"job_id": "abc123",
"status": "processing",
"next_action": "poll — wait a few seconds, then query status \"abc123\" again"
}
→ Always follow next_action: done means the full text has been retrieved; poll means keep waiting
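The poll cycle can be driven through the official script, never the API directly (a sketch; `SCRIPT` is a placeholder for `{baseDir}/scripts/article_link.py`, and `classify`/`poll_job` are hypothetical helper names):

```python
import json
import subprocess
import time

# Placeholder path — in practice this is {baseDir}/scripts/article_link.py.
SCRIPT = "scripts/article_link.py"

def classify(resp):
    """Map a status/submit response to an action: 'done', 'failed', or 'poll'."""
    if resp.get("next_action", "").startswith("done"):
        return "done"
    if resp.get("status") == "failed":
        return "failed"
    return "poll"

def poll_job(job_id, interval=5, max_tries=20):
    """Run `status <job_id>` via the official script until done or failed."""
    for _ in range(max_tries):
        out = subprocess.run(
            ["python3", SCRIPT, "status", job_id],
            capture_output=True, text=True, check=True,
        ).stdout
        resp = json.loads(out)
        action = classify(resp)
        if action != "poll":
            return resp  # 'done': full text in title/content_html; 'failed': report it
        time.sleep(interval)  # next_action said poll: wait, then retry
    return None
```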
python3 {baseDir}/scripts/article_link.py jobs
Optional flags: --page 2, --page-size 10
Note:
The submit and status commands already fetch the full text automatically on match/completion. Use this command only as a standalone tool when an article_id is already known.
If a matched_article_id is already available, call it directly:
python3 {baseDir}/scripts/article_link.py article "matched_article_id"
Example response:
{
"type": "article_content",
"id": "def456",
"source_media": "Financial Times",
"title": "Article Title in English",
"content_html": "<p>Full article text in English...</p>",
"origin_url": "https://www.ft.com/content/xxx",
"original_publish_time": "2026-04-10T08:00:00"
}
→ Show the user the English full text from title and content_html (content_html is HTML; parse it and present plain text)
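Flattening content_html to plain text can be done with the standard library alone (a minimal sketch using html.parser; real rendering may want richer block handling):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text content, inserting blank lines after block elements."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def handle_endtag(self, tag):
        if tag in ("p", "div", "h1", "h2", "h3", "li"):
            self.parts.append("\n\n")

def html_to_text(content_html):
    parser = TextExtractor()
    parser.feed(content_html)
    return "".join(parser.parts).strip()
```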
User asks, "What is this FT article about?":
# One step: submit auto-matches and fetches the English full text
python3 {baseDir}/scripts/article_link.py submit "https://www.ft.com/content/xxx"
# → if type=submit_matched, show title + content_html directly
# → if type=submit_pending, poll per next_action
python3 {baseDir}/scripts/article_link.py status "<job_id from the response>"
User asks for a deep extraction of an FT article:
# Step 1: trigger confirmation
python3 {baseDir}/scripts/article_link.py submit "https://www.ft.com/content/xxx" --deep
# → returns deep_confirm_required; show the message to the user
# Step 2: run only after the user confirms "yes"
python3 {baseDir}/scripts/article_link.py submit "https://www.ft.com/content/xxx" --deep --yes
# → handle the result per next_action
next_action handling:
- next_action starts with done → show title + content_html to the user directly
- next_action starts with poll → wait a few seconds, then run the command it contains

| Media | Domain |
|---|---|
| Barron's | barrons.com |
| Bloomberg | bloomberg.com |
| Financial Times | ft.com |
| Foreign Policy | foreignpolicy.com |
| Handelsblatt | handelsblatt.com |
| MarketWatch | marketwatch.com |
| New York Times | nytimes.com |
| Reuters | reuters.com |
| The Atlantic | theatlantic.com |
| The Economist | economist.com |
| The New Yorker | newyorker.com |
| Wall Street Journal | wsj.com |
| Washington Post | washingtonpost.com |
| Wired | wired.com |
| Mode | Daily limit per user | Notes |
|---|---|---|
| Basic (basic) | 50 | Matches existing content first; queues extraction on a miss |
| Deep (deep) | 5 | Skips matching; re-extracts directly |
| Symptom | Cause | Fix |
|---|---|---|
| 401: missing import token | import_token is empty in config.json | Have the user generate a token at pick-read.vip |
| 401: import token invalid or revoked | Token is wrong or was reset | Regenerate the token |
| 403: subscription expired | The user's subscription has lapsed | Tell the user to renew |
| 422: unsupported media source | The link is not on the allowlist | Check the supported list with the media command |
| 429: daily limit reached | Daily quota exhausted | Check with quota; try again tomorrow |
| EOF occurred in violation of protocol | System proxy interfering with TLS | The script has a built-in proxy bypass; simply retry |
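The troubleshooting table above can be encoded as a lookup on the error code prefix (a sketch; `FIXES` and `suggest_fix` are hypothetical names, and matching is on the leading code as shown in the table):

```python
FIXES = {
    "401": "Check or regenerate the Import Token on pick-read.vip and update config.json.",
    "403": "Subscription expired — the user needs to renew.",
    "422": "Unsupported media source — list supported outlets with the `media` command.",
    "429": "Daily limit reached — check with `quota` and retry tomorrow.",
}

def suggest_fix(error_message):
    """Return remediation advice for a known error code, else a generic retry hint."""
    for code, fix in FIXES.items():
        if error_message.startswith(code):
            return fix
    return "Transient error (e.g. TLS/proxy) — the script bypasses the proxy; retry."
```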