Search, summarize, and synthesize economics literature. Find research gaps and position your contribution.
This skill helps economists conduct rigorous literature reviews: actively searching academic databases, building a reference list, summarizing individual papers, synthesizing views and evidence across the literature, identifying research gaps, and positioning the user's contribution.
Before starting any search, Claude must run this check every time the skill is invoked.
Read the file [plugin_root]/.env.
The file should contain:
OPENALEX_API_KEY=your-api-key-here
If the key is absent / still a placeholder:
Use AskUserQuestion to prompt the user:
"Please follow these steps to obtain an OpenAlex API Key, then paste it here: ① Visit https://openalex.org/settings/api → register / log in ② Generate an API Key on the account settings page → copy it ③ Paste the API Key here"
Write OPENALEX_API_KEY=<value> to [plugin_root]/.env (standard KEY=VALUE format, one entry per line — not JSON). If the key already exists in the file, replace that line in place. Confirm: "✅ API Key saved; you will not be asked again." Then proceed. If the key is already a valid non-placeholder string: proceed silently — no prompt needed.
This skill is Phase 2 of the empirical research workflow. The confirmed research question from Phase 1 is the essential input. Claude must follow this decision tree:
Scan your working directory to find the research-question.md document produced by the question skill:
Research question: …
Research subject: …
Identification level: Level [A/B/C]
Data source (preliminary): …
Research hypotheses: H₀ / H₁ / expected direction
Then ask only the two supplementary questions:
"The literature search still needs the following information: ① Time range for the literature: all years / post-2000 / last ten years? ② Are there 2–3 foundational papers (author + title or DOI) to use as search anchors? If not, feel free to skip this."
Once the user answers (or skips ②), proceed directly to Step 2.
If no research-question.md document is found (user jumped directly to literature review without going through Phase 1):
Run the /question command immediately:
"A clear research question is needed before starting the literature search. I will run the /question command to help you complete this step."
Run the full /question workflow. After the user confirms the research question, return to this skill at Step 1.1 (the block will now be present) and continue.
Before searching, expand the user's topic into a full keyword matrix. Do not rely on a single phrase.
| Layer | Primary terms | Synonyms / variants |
|---|---|---|
| Core concept | e.g. "minimum wage" | "wage floor", "wage policy", "statutory wage" |
| Outcome | e.g. "employment" | "jobs", "labor demand", "hours worked", "unemployment" |
| Method | e.g. "difference-in-differences" | "DiD", "diff-in-diff", "natural experiment", "quasi-experiment" |
| Context | e.g. "United States" | "US", "OECD", "developing countries", "low-income countries" |
Map the research question to 3–6 relevant JEL codes (e.g., J31 = Wage Level and Structure; J23 = Labor Demand; C21 = Cross-Sectional Models). These codes can be used in EconLit and NBER searches.
Generate at least 5 distinct query strings covering different angles:
Query 1 (main): ("minimum wage" OR "wage floor") AND (employment OR "labor demand")
Query 2 (method): ("minimum wage") AND ("difference-in-differences" OR "DiD" OR "natural experiment")
Query 3 (mechanism): ("minimum wage") AND (prices OR "profit margins" OR "labor productivity")
Query 4 (subgroup): ("minimum wage") AND ("low-skilled" OR "teen employment" OR "small business")
Query 5 (recent): ("minimum wage") AND (employment) after:2018
Claude must actively execute these searches autonomously.
Primary search engine: OpenAlex API via WebFetch.
Supplementary & optional sources: Semantic Scholar, NBER, SSRN and arXiv.
Search rules:
Use the primary search engine for all rounds.
Use supplementary sources only for the following purposes:
Semantic Scholar cross-validation is mandatory for all Core-tier papers — even when OpenAlex coverage is strong.
For Background / Methodological papers, cross-validation may be skipped if OpenAlex data looks internally consistent.
All OpenAlex requests use the base URL https://api.openalex.org. Append &api_key=YOUR_KEY (from .env) to every request. Without a key, rate limits are tighter, but requests still work for moderate search volumes.
After issuing the first OpenAlex WebFetch request, if a network error occurs (connection timeout, 403, DNS resolution failure, etc.), strictly follow the sequence below. Do not jump directly to the web-search fallback:
Step A — Immediately generate a local script and ask the user to run it on their own machine
Explain to the user:
"The Cowork sandbox cannot reach the OpenAlex API (network restricted). Please run the following Python script in your local terminal to fetch the literature data, then paste the output here, or save it as a file and upload it.
How to run:
```
pip install requests
python fetch_openalex.py
```
Then generate the complete local script fetch_openalex.py (write it into the workspace):
"""
OpenAlex 文献获取脚本 — 在本地终端运行
生成文件:openalex_results.json
"""
import requests, json, time
API_KEY = "YOUR_OPENALEX_API_KEY" # 替换为你的 API Key(或留空)
BASE = "https://api.openalex.org"
HEADERS = {"User-Agent": "mailto:[email protected]"}
def fetch(url):
if API_KEY:
url += f"&api_key={API_KEY}" if "?" in url else f"?api_key={API_KEY}"
r = requests.get(url, headers=HEADERS, timeout=30)
r.raise_for_status()
time.sleep(0.5) # 避免触发速率限制
return r.json()
queries = [
# ── 根据研究问题自动填充的查询(Claude 在生成脚本时替换占位符)──
f"{BASE}/works?search=KEYWORD_1&filter=type:article&sort=cited_by_count:desc&per_page=25",
f"{BASE}/works?search=KEYWORD_2&filter=type:article&sort=cited_by_count:desc&per_page=25",
f"{BASE}/works?search=KEYWORD_3&filter=type:article,publication_year:>2018&sort=cited_by_count:desc&per_page=25",
]
results = {}
for i, url in enumerate(queries, 1):
print(f"查询 {i}/{len(queries)}: {url[:80]}...")
try:
data = fetch(url)
results[f"query_{i}"] = data.get("results", [])
print(f" → 获取 {len(results[f'query_{i}'])} 篇")
except Exception as e:
print(f" ✗ 失败:{e}")
results[f"query_{i}"] = []
with open("openalex_results.json", "w", encoding="utf-8") as f:
json.dump(results, f, ensure_ascii=False, indent=2)
print("\n✅ 结果已保存至 openalex_results.json")
print(f" 总计获取:{sum(len(v) for v in results.values())} 篇文献条目")
The placeholders KEYWORD_1/2/3 in the script are replaced by Claude, at generation time, with the actual keyword query strings produced in Step 2.
Step B — Wait for the user to upload / paste the results
Wait for the user to run the script and return the results. Accept any of the following formats: the openalex_results.json file, or its pasted contents.
Once the data is received, parse it normally and continue with the Round 1 candidate-paper screening process.
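Once openalex_results.json arrives, the per-query results can be flattened into a single candidate list for screening. This is a sketch with an assumed summarize_results helper; the fields read (display_name, publication_year, doi, cited_by_count) are standard OpenAlex work fields.

```python
import json

def summarize_results(path: str = "openalex_results.json") -> list[dict]:
    """Flatten the per-query results into one candidate list, highest-cited first."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    candidates = []
    for query, works in data.items():
        for w in works:
            candidates.append({
                "query": query,
                "title": w.get("display_name"),
                "year": w.get("publication_year"),
                "doi": w.get("doi"),
                "cited_by": w.get("cited_by_count", 0),
            })
    # sort so the most influential papers surface first for screening
    return sorted(candidates, key=lambda c: c["cited_by"], reverse=True)
```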
Step C — Enable the fallback only when the user explicitly says they cannot run the script locally
If the user replies "can't run it" or "no Python environment", only then use the following fallbacks (in priority order):
1. WebSearch against Google Scholar / academic search (degraded; limited coverage)
2. Ask the user to open https://openalex.org/works?search=... in a browser and paste the page contents
⛔ Forbidden: switching to the web-search fallback before prompting the user to try the local script.
Important — Two-step author lookup: Never filter by author name directly (names are ambiguous). Always first search for the author's OpenAlex ID, then filter works by that ID.
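A minimal sketch of the two-step lookup, assuming the hypothetical helper names below and the illustrative ID A123456789. OpenAlex returns each author ID as a full URL (e.g., https://openalex.org/A…); the trailing segment is the short ID used in work filters.

```python
from urllib.parse import urlencode

BASE = "https://api.openalex.org"

def author_search_url(name: str) -> str:
    """Step 1: look up the author's OpenAlex ID by name."""
    return f"{BASE}/authors?{urlencode({'search': name})}"

def works_by_author_url(author_id: str, per_page: int = 25) -> str:
    """Step 2: filter works by the resolved ID, never by the raw name."""
    params = {
        "filter": f"author.id:{author_id}",
        "sort": "cited_by_count:desc",
        "per_page": per_page,
    }
    return f"{BASE}/works?{urlencode(params)}"
```

Flow: fetch the Step 1 URL, take `results[0]["id"]`, strip it to the short ID (the part after the last "/"), then fetch the Step 2 URL built from that ID.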
Pattern 1 — Works keyword search