614 skills
Process and manipulate PDF documents, including text extraction, form filling, and metadata editing
Search arXiv, inspect paper metadata, and download papers by keyword or arXiv ID. Use when the user wants fresh arXiv search results, paper detail lookup, or direct PDF/source downloads.
Fetch and summarize the latest news from trusted sources across politics, finance, society, world, tech, sports, and entertainment. Use when the user asks for latest news, headlines, or news in a specific category.
Read, preview, and summarize text-based files — source code, configs, logs, data files, and plain text documents. Use when the user asks to read, inspect, or summarize local .txt, .md, .json, .yaml, .csv, .log, or source code files.
Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. Use when Claude needs to fill in PDF forms or programmatically process, generate, or analyze PDF documents at scale.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. Use when Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc.) to: (1) create new spreadsheets with formulas and formatting, (2) read or analyze data, (3) modify existing spreadsheets while preserving formulas, (4) perform data analysis and visualization within a spreadsheet, or (5) recalculate formulas.
Extract transcripts from YouTube videos. Use when the user asks for a transcript, subtitles, or captions of a YouTube video and provides a YouTube URL (youtube.com/watch?v=, youtu.be/, or similar). Supports output with or without timestamps.
Use this skill for web search, extraction, mapping, crawling, and research via Tavily’s REST API when web searches are needed and no built-in tool is available, or when Tavily’s LLM-friendly format is beneficial.
Verify claims, statements, or information using multiple authoritative sources
Create and maintain XerahS Improvement Proposals (XIPs) with GitHub as source of truth and docs/proposals/xip folder as backup. Use when creating or editing XIPs, syncing XIPs between GitHub issues and the docs/proposals/xip folder, or when the user mentions XIP, GitHub issues for XIP, or local XIP files.
Rules and workflows for updating docs/CHANGELOG.md, including version grouping, consolidation, and commit-entry attribution.
Convert PDF, DOCX, XLSX, and text files to clean, structured Markdown. CJK-friendly, table-friendly, privacy-first.
Fetch and extract the readable content of a web page. Use for lightweight page access without browser automation.
News assistant. Supports today's headlines, real-time trending topics, and keyword-based news search. Fixed script location: {baseDir}/scripts/news_tool.py.
Search and play music. Use when the user wants to: (1) randomly pick or recommend a song ("play me something"), (2) search for a song by name or play a specific song or artist, (3) get a song's play URL or cover image, or (4) find music on NetEase, KuGou, KuWo, QiShui, or QQ Music.
Search and read WeChat Official Account articles. Fixed script location: workspace/skills/wechat-fetch/scripts/wechat_tool.py. When a WeChat link is encountered, this tool must be the first choice.
GitHub CLI (gh) command reference. Use when working with GitHub repositories, PRs, issues, actions, `gh api`, or any GitHub operations from the command line.
This skill should be used BEFORE running any git commit command. Triggers when about to run `git commit`. Ensures commit messages follow Conventional Commits specification.
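The Conventional Commits check described above can be sketched as a minimal header validation. This is an illustrative approximation only, not the skill's actual implementation: the real specification also covers bodies, footers, and BREAKING CHANGE footers, and the skill's rules may be stricter.

```python
import re

# Minimal Conventional Commits header check (illustrative sketch; the full
# spec also covers commit bodies, footers, and BREAKING CHANGE markers).
CC_RE = re.compile(
    r"^(?P<type>feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(?:\((?P<scope>[^)]+)\))?"  # optional scope, e.g. feat(parser)
    r"(?P<bang>!)?"               # optional breaking-change marker
    r": (?P<desc>.+)$"            # description after "type[(scope)][!]: "
)

def is_conventional(header: str) -> bool:
    """Return True if the first commit-message line matches the pattern."""
    return CC_RE.match(header) is not None
```

For example, `is_conventional("feat(parser): add arXiv ID lookup")` passes, while a bare `"update stuff"` fails.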
This skill should be used when creating a GitHub pull request via `gh pr create`. Defines PR body format with Why/What/Notes sections, ensures proper assignment, and prompts for Jira ticket number.
This skill should be used when the user asks to create a contact in Outlook from freeform text data (e.g., business cards, email signatures, addresses). It handles parsing contact information and creating properly structured Outlook contacts that sync to mobile devices.
Use this skill when working with ~/org/ directory, Denote files (YYYYMMDDTHHMMSS--title__tags.org), or org-mode knowledge bases. Provides scripts for: parsing Denote filenames/metadata, extracting org file TOC, and navigating 3,000+ file PKM systems. Trigger on: ~/org/, ~/org/llmlog/, ~/org/bib/, Denote ID parsing, org heading extraction.
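As a minimal sketch of the Denote naming scheme mentioned above (a hypothetical helper, not the skill's bundled scripts), a filename like `20231015T143000--meeting-notes__work_crm.org` splits into an ID, a hyphenated title, and underscore-separated tags:

```python
import re

# Denote filename pattern: YYYYMMDDTHHMMSS--title__tag1_tag2.org
# (hypothetical parser for illustration; the skill ships its own scripts)
DENOTE_RE = re.compile(
    r"^(?P<id>\d{8}T\d{6})--(?P<title>.+?)(?:__(?P<tags>[^.]+))?\.org$"
)

def parse_denote(filename):
    """Split a Denote filename into id, title, and tag list, or None."""
    m = DENOTE_RE.match(filename)
    if not m:
        return None
    return {
        "id": m.group("id"),
        "title": m.group("title"),
        "tags": m.group("tags").split("_") if m.group("tags") else [],
    }
```

The tag block is optional, so untagged notes such as `20231015T143000--scratch.org` still parse, with an empty tag list.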
This skill should be used when the user wants to download or retrieve a file from the CRM database and save it to the filesystem. Triggers on requests like "download the contract from CRM", "save the database file to disk", "get the PDF from the database", or "retrieve the attachment from CRM". Uses the bel-crm-db skill for schema knowledge.
Download transcripts for all data folders sequentially. Use for overnight batch processing or when you need to download pending transcripts across all channels and collections.
Expert in email content extraction and analysis. **Use whenever the user mentions .eml files, email messages, says "Extract email information", "Using the email information", or requests to extract, parse, analyze, or process email files.** Handles email thread parsing, attachment extraction, and converting emails to structured markdown format for AI processing. (project, gitignored)
Capture a YouTube video transcript as raw material using `ytt`, storing it in the raw/ directory with minimal metadata for later distillation.
Automatically process unprocessed audio and image files in Gastrohem daily WhatsApp folders. This skill should be used when the user asks to transcribe audio files, perform OCR on images, or process media in daily folders (e.g., "Process media in today's folder", "Transcribe audio and OCR images in 24.10 folder"). Handles audio transcription using insanely-fast-whisper (parallelized, creates .json) and image OCR using Claude's vision capabilities (creates natural .md summaries with Gastrohem-relevant info).
Find and repair broken wikilinks in vault. Triggers when user mentions "fix links", "broken links", "repair vault", "fix broken links".
Import existing markdown files into Kurt database. Fix ERROR records, bulk import files, link content to database.
Use when exporting data for ad platforms (Google Ads, Meta) or working with project datasets. Documents exact CSV formats for Enhanced Conversions, Customer Match, and project data schemas.
6 web scraping & data collection skills. Trigger: collecting web data, finding datasets, API access for research. Design: ethical scraping methods with rate limiting and data quality checks.