Conducts efficient research on any topic, synthesizes findings, and extracts actionable insights.
The Quick Research skill helps you gather information systematically, synthesize findings across sources, and extract actionable insights.
Basic Usage (Level 1):
/quick-research [topic] --depth [basic/standard]
/quick-research compare [topic1] [topic2]
/quick-research summarize [research materials]
Advanced Usage (Level 2):
/quick-research --deep-dive [topic] --sources [count]
/quick-research --verify-claims [text] --cross-check
/quick-research --trend-analysis [topic] --timeline [range]
Expert Usage (Level 3):
/quick-research --synthesize [multiple topics] --find-patterns
/quick-research --predictive-analysis [topic] --based-on-research
/quick-research --continuous-monitoring [topics] --alert-threshold
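Internally, a command surface like the one above lends itself to a small token-based dispatcher. A minimal sketch, assuming a simple parser; the function name and the returned dict shape are illustrative, not part of the skill:

```python
import shlex

def parse_quick_research(command: str) -> dict:
    """Parse a /quick-research invocation into a mode, topics, and options.

    Hypothetical sketch: flag and subcommand names come from the usage
    examples above; the dispatch logic itself is an assumption.
    """
    tokens = shlex.split(command)
    assert tokens[0] == "/quick-research"
    result = {"mode": "research", "topics": [], "options": {}}
    i = 1
    while i < len(tokens):
        tok = tokens[i]
        if tok in ("compare", "summarize"):
            result["mode"] = tok
        elif tok.startswith("--"):
            flag = tok[2:]
            # Flags like --cross-check take no value; others consume the next token.
            if i + 1 < len(tokens) and not tokens[i + 1].startswith("--"):
                result["options"][flag] = tokens[i + 1]
                i += 1
            else:
                result["options"][flag] = True
        else:
            result["topics"].append(tok)
        i += 1
    return result
```

For example, `parse_quick_research('/quick-research compare "PersonalOS" "Sidekick"')` yields mode `compare` with both topics collected.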
Research modes:
- Basic Research
- Topic Comparison
- Research Summarization
Step 1: Define Research Goals
Step 2: Gather Information
Step 3: Synthesize Findings
Step 4: Generate Output
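The four steps above can be sketched as a minimal pipeline. This is illustrative only: the data shapes and function names are assumptions, and real gathering would call search or fetch tools rather than accept notes directly:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchRun:
    question: str                                   # Step 1: the goal, framed as a question
    raw_notes: list = field(default_factory=list)   # Step 2: gathered material
    findings: list = field(default_factory=list)    # Step 3: synthesized findings

def gather(run: ResearchRun, sources: list) -> None:
    # Step 2: collect notes (in practice, fetched from search tools or documents).
    run.raw_notes.extend(sources)

def synthesize(run: ResearchRun) -> None:
    # Step 3: deduplicate claims and keep only findings with an attributed source.
    seen = set()
    for note in run.raw_notes:
        key = note["claim"].lower()
        if key not in seen and note.get("source"):
            seen.add(key)
            run.findings.append(note)

def render(run: ResearchRun) -> str:
    # Step 4: generate output in the summary format.
    lines = [f"Research Question: {run.question}", "Key Findings:"]
    for i, f in enumerate(run.findings, 1):
        lines.append(f"{i}. {f['claim']} - {f['source']}")
    return "\n".join(lines)
```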
Research Summary Template:
Research Summary: [Topic]
Research Question:
[What you're trying to answer]
Key Findings:
1. [Finding 1] - [Source]
2. [Finding 2] - [Source]
3. [Finding 3] - [Source]
Data & Statistics:
- [Stat 1] - [Source]
- [Stat 2] - [Source]
Conflicting Information:
- [Conflict 1]: [Source A says X, Source B says Y]
- [Conflict 2]: [Source A says X, Source B says Y]
Key Insights:
- [Insight 1]
- [Insight 2]
- [Insight 3]
Recommendations:
- [Recommendation 1]
- [Recommendation 2]
Sources:
- [Source 1] - [URL or citation]
- [Source 2] - [URL or citation]
Research Quality:
- Sources consulted: [number]
- Source types: [types]
- Credibility: [High/Medium/Low]
- Bias detected: [Yes/No]
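The template's Research Quality block can be represented as a small structure for programmatic use. A sketch only; the class and its fields are assumptions, not part of the skill spec:

```python
from dataclasses import dataclass

@dataclass
class ResearchQuality:
    sources_consulted: int
    source_types: list
    credibility: str          # "High", "Medium", or "Low"
    bias_detected: bool

    def summary_lines(self) -> list:
        # Render in the same order as the template's Research Quality block.
        return [
            f"- Sources consulted: {self.sources_consulted}",
            f"- Source types: {', '.join(self.source_types)}",
            f"- Credibility: {self.credibility}",
            f"- Bias detected: {'Yes' if self.bias_detected else 'No'}",
        ]
```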
Example 1: Basic Research
User: /quick-research "Claude Cowork features" --depth standard
PersonalOS:
Research Summary: Claude Cowork Features
Research Question:
What are the key features and capabilities of Claude Cowork?
Key Findings:
1. Cowork allows users to create custom skills using Markdown files with YAML frontmatter
- Source: https://docs.claude.ai/cowork/skills
2. Skills auto-trigger based on file content and context patterns
- Source: Cowork documentation
3. Cowork supports plugins for extended functionality
- Source: Cowork plugin API docs
4. Built-in Telegram integration for remote access and notifications
- Source: Cowork platform features
5. Computer use capabilities for file management, app launching, form filling, and screenshot analysis
- Source: Computer use documentation
Data & Statistics:
- No quantitative data found in the documentation; notable qualitative points:
  - Skills can be shared as skill packs (collections of related skills)
  - Plugins extend Cowork's capabilities beyond base features
  - Telegram integration enables remote interaction from anywhere
Conflicting Information:
None found - documentation is consistent
Key Insights:
- Cowork's skill system is highly extensible and community-shareable
- Auto-triggering based on context makes it truly adaptive
- Integration with Telegram and computer use enables remote and GUI-based workflows
Recommendations:
- PersonalOS should leverage auto-triggering based on context
- Design skills to be shareable as skill packs
- Integrate with Telegram for remote access
- Use computer use for GUI-based workflows
Sources:
- Claude Cowork Documentation - https://docs.claude.ai/cowork
- Cowork Skills API Reference
- Cowork Plugin Documentation
Research Quality:
- Sources consulted: 5
- Source types: Official documentation
- Credibility: High
- Bias detected: No
Example 2: Topic Comparison
User: /quick-research compare "PersonalOS" "Sidekick" --focus "features and architecture"
PersonalOS:
Comparison: PersonalOS vs. Sidekick
Research Focus:
Feature sets and system architecture comparison
Similarities:
1. Both are personal AI assistant systems for productivity
2. Both use skill-based architecture
3. Both learn from user behavior over time
4. Both focus on workflow optimization and automation
Differences:
| Aspect | PersonalOS | Sidekick |
|--------|------------|----------|
| Platform | Claude Cowork | Standalone agent system |
| Skill Format | Cowork MD + YAML frontmatter | SKILL.md format |
| Distribution | Cowork skill packs | GitHub releases |
| Onboarding | 5-question interview | 5-question interview |
| Memory | File-based persistence | 3-layer architecture |
| Focus | Cowork-native | Platform-agnostic |
PersonalOS Advantages:
- Native Cowork integration (auto-triggering, context awareness)
- Community shareable via Cowork Skills Hub
- Leverages Cowork's Telegram and computer use capabilities
- Smaller footprint, easier to install
Sidekick Advantages:
- Platform-agnostic (works with any agent system)
- More mature feature set (90+ skills)
- Established community and documentation
- Advanced memory architecture
Key Insight:
PersonalOS is optimized for Claude Cowork users who want native integration and community sharing. Sidekick is better for users who want platform-agnostic capabilities or advanced memory systems.
Recommendation:
If the user is already using Cowork → PersonalOS
If the user needs a platform-agnostic system → Sidekick
Example 3: Summarize Materials
User: /quick-research summarize [provided research paper on AI agent systems]
PersonalOS:
Research Summary: AI Agent Systems - Comprehensive Paper Analysis
Research Question:
What are the key insights about AI agent systems from this paper?
Key Findings:
1. Modern AI agent systems need three core components: tool calling, memory, and task orchestration
- Source: Paper Section 2
2. Tool calling is the most critical differentiator between chatbots and agents
- Source: Paper Section 3.1
3. Memory systems should be multi-layered: short-term (context window), medium-term (session persistence), long-term (knowledge base)
- Source: Paper Section 3.2
4. Task orchestration requires planning, execution, and adaptation loops
- Source: Paper Section 3.3
Data & Statistics:
- Agent systems with tool calling outperform chatbots by 73% on complex tasks
- Multi-layer memory systems improve context retention by 45%
- Adaptive task orchestration reduces failure rates by 60%
Conflicting Information:
- One source suggests LLM-only agents can be effective for simple tasks
- Paper argues tool calling is essential for any meaningful agent system
Key Insights:
- Tool calling is the non-negotiable requirement for true agent capabilities
- Memory architecture complexity should scale with task complexity
- Task orchestration must be adaptive, not just rule-based
Recommendations:
- PersonalOS must ensure all skills use tool calling effectively
- Implement 3-layer memory: context (short-term), session (medium-term), knowledge graph (long-term)
- Design task orchestration with adaptive planning and error recovery
Sources:
- "Modern AI Agent Systems: Architecture and Design Patterns" - [Paper citation]
- 8 academic papers cited
- 3 industry case studies referenced
Research Quality:
- Sources consulted: 1 comprehensive paper with 11 references
- Source types: Academic paper with citations
- Credibility: High
- Bias detected: No
❌ Mistake 1: No clear research question
❌ Mistake 2: Relying on single source
❌ Mistake 3: Not evaluating source credibility
❌ Mistake 4: Ignoring conflicting information
❌ Mistake 5: Too much detail, no synthesis
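The five mistakes above can be caught mechanically before a summary ships. A minimal sketch, assuming a dict-shaped summary whose keys are illustrative rather than part of the skill spec:

```python
def check_research_quality(summary: dict) -> list:
    """Flag the five common mistakes listed above. Sketch only; the
    summary dict keys are assumptions."""
    problems = []
    if not summary.get("question"):
        problems.append("No clear research question")
    if len(summary.get("sources", [])) < 2:
        problems.append("Relying on a single source")
    if "credibility" not in summary:
        problems.append("Source credibility not evaluated")
    if "conflicts" not in summary:
        problems.append("Conflicting information not addressed")
    if not summary.get("insights"):
        problems.append("No synthesis into insights")
    return problems
```

An empty return list means none of the five mistakes were detected.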
Advanced modes:
- Deep-Dive Research
- Claim Verification
- Trend Analysis
- Source Evaluation
Example 1: Deep-Dive Research
User: /quick-research --deep-dive "AI agent memory systems" --sources 15
PersonalOS:
Deep-Dive Research: AI Agent Memory Systems
Research Question:
What are the different approaches to memory systems in AI agents, and which are most effective?
Comprehensive Findings:
1. Memory Architecture Types:
- Short-term Memory (Context Window)
* Stores recent conversation history
* Limited by model context length (e.g., 200K tokens for Claude)
* Forgets information beyond capacity
- Medium-term Memory (Session Persistence)
* Persists information across multiple sessions
* Typically file-based or database storage
* Examples: Hermes session logs, Cowork session persistence
- Long-term Memory (Knowledge Base)
* Stores structured knowledge for future retrieval
* Examples: Vector databases, knowledge graphs, hierarchical storage
* Approaches: Vector search (Pinecone, Qdrant), Knowledge graphs (Khoj), Hybrid (both)
2. Leading Implementations:
**Hermes:**
- File-based session persistence
- Trajectory logs with conversation history
- Memory tool for cross-session recall
- Strengths: Simple, transparent, easy to debug
- Limitations: No vector search, limited semantic retrieval
**Cowork:**
- Session-based persistence
- File context awareness (reads project files)
- Skills auto-trigger based on content patterns
- Strengths: Context-aware, adaptive to project
- Limitations: No long-term knowledge base
**Khoj:**
- Vector database for semantic search (Chroma)
- Knowledge graph for interconnected knowledge
- Supports multiple data sources (documents, notes, code)
- Strengths: Powerful semantic retrieval, knowledge relationships
- Limitations: More complex setup
**Sidekick:**
- 3-layer memory: Short-term (context), Medium-term (session), Long-term (knowledge base)
- Structured skill packs with knowledge
- Progressive disclosure based on session count
- Strengths: Mature architecture, proven effectiveness
- Limitations: Platform-specific, more resource-intensive
3. Academic Research Findings:
- "Memory is the critical differentiator between chatbots and agents" (IEEE 2024)
- Multi-layer memory systems improve task completion by 45% (ACL 2023)
- Semantic retrieval (vector search) outperforms keyword search by 67% (EMNLP 2023)
- Knowledge graphs enable reasoning and inference beyond retrieval (NeurIPS 2024)
4. Industry Case Studies:
- **Company A (SaaS):** Implemented vector search memory → 38% reduction in repetitive questions
- **Company B (DevTools):** Knowledge graph memory → 52% improvement in code reuse
- **Company C (Research):** Multi-layer memory → 61% faster information retrieval
5. Emerging Trends:
- Hybrid approaches (vector + knowledge graph) gaining popularity
- Personal memory systems that adapt to individual work patterns
- Decentralized memory (local storage vs cloud) for privacy
- Memory compression and summarization to manage growth
Data & Statistics:
- 87% of advanced agent systems use multi-layer memory
- Vector search is used by 73% of systems with long-term memory
- 62% of systems combine vector search with knowledge graphs
Conflicting Information:
- Some argue memory should be minimal to encourage fresh thinking
- Counter-argument: Memory is essential for continuity and learning
- Resolution: Balance needed; some systems use "session-agnostic" design
Key Insights:
1. Multi-layer memory (short, medium, long-term) is the standard for advanced agents
2. Vector search is now baseline requirement for semantic retrieval
3. Knowledge graphs are emerging for reasoning and inference beyond simple retrieval
4. Hybrid approaches (vector + graph) are becoming most popular
5. Privacy concerns driving trend toward local/offline memory storage
Recommendations for PersonalOS:
- Implement 3-layer memory: context (built-in), session (file-based), knowledge (vector search)
- Use vector search for semantic retrieval (Chroma or Qdrant)
- Consider knowledge graph for advanced reasoning (future enhancement)
- Keep memory local for privacy (no cloud dependencies)
- Design memory to be portable and exportable
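The recommended 3-layer memory could be sketched as follows. Illustrative only: the class and method names are assumptions, and the long-term layer is stubbed with keyword matching rather than a real embedding store such as Chroma or Qdrant:

```python
class ThreeLayerMemory:
    """Sketch of the context / session / knowledge layering described above."""

    def __init__(self, context_limit: int = 5):
        self.context = []        # short-term: recent turns, bounded by capacity
        self.session = []        # medium-term: persists for the session (file-backed in practice)
        self.knowledge = []      # long-term: searchable knowledge base
        self.context_limit = context_limit

    def remember(self, text: str, long_term: bool = False) -> None:
        self.context.append(text)
        if len(self.context) > self.context_limit:
            self.context.pop(0)  # short-term memory forgets beyond capacity
        self.session.append(text)
        if long_term:
            self.knowledge.append(text)

    def recall(self, query: str) -> list:
        # Stand-in for semantic retrieval: naive keyword overlap,
        # where a vector search would go in a real implementation.
        terms = set(query.lower().split())
        return [k for k in self.knowledge if terms & set(k.lower().split())]
```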
Sources:
- Cowork Documentation - https://docs.claude.ai/cowork
- Khoj Documentation - https://github.com/khoj-ai/khoj
- Sidekick GitHub - https://github.com/example/sidekick
- "Memory in AI Agent Systems" (IEEE 2024)
- 11 additional academic papers
- 3 industry case studies
Research Quality:
- Sources consulted: 15
- Source types: Official docs (4), Academic papers (11), Case studies (3)
- Credibility: High
- Bias detected: Minimal (minor preference for semantic memory)
Example 2: Claim Verification
User: /quick-research --verify-claims "AI agents can fully replace human workers in 2025" --cross-check
PersonalOS:
Claim Verification: "AI agents can fully replace human workers in 2025"
Claim: AI agents will be capable of fully replacing human workers across all industries by 2025
Verification Results:
Evidence FOR the claim:
- Some companies are experimenting with agent-based customer service (scale, not replacement)
- AI agents can automate specific tasks (but not entire jobs)
- Progress in tool calling and autonomy is accelerating
Evidence AGAINST the claim:
- No credible source predicts full replacement by 2025
- Leading AI researchers agree "replacement is decades away, not years"
- McKinsey 2024 report: "AI will augment, not replace, most jobs through 2030"
- Goldman Sachs 2024 report: "300 million jobs AFFECTED, not replaced, by AI"
- Current agents lack: reasoning at human level, general adaptability, physical capabilities
- 2025 is 1 year away; current agents still struggle with basic tasks
Expert Opinions:
- Andrew Ng (Stanford): "AI will augment human workers, not replace them"
- Daphne Koller (Coursera founder): "Full automation of complex jobs is unlikely in this decade"
- Sam Altman (OpenAI): "AI will change how we work, not eliminate work"
Fact-Check:
- Claim source: Viral social media posts, some tech blogs
- Credibility: Low (no peer-reviewed sources, no expert consensus)
- Plausibility: Extremely low (current technology limitations)
Confidence Rating:
**CLAIM LIKELY FALSE** (95% confidence)
Why the claim is misleading:
1. Confuses "task automation" with "job replacement"
2. Ignores current agent limitations (reasoning, adaptability, physical tasks)
3. Cherry-picks narrow examples (customer service bots) and generalizes
4. No credible experts or research support this timeline
Corrected Claim:
"AI agents will increasingly augment human workers and automate specific tasks, but full job replacement is unlikely before 2030+ for most roles."
Sources Cross-Checked:
- McKinsey Global Institute Report (2024)
- Goldman Sachs Economic Research (2024)
- IEEE AI Ethics Committee (2024)
- OpenAI Research Blog (2024)
- Stanford AI Lab Research (2024)
- 8 additional expert interviews and reports
Research Quality:
- Sources consulted: 12
- Source types: Industry reports (4), Expert interviews (4), Academic research (4)
- Credibility: High
- Bias detected: No
Example 3: Trend Analysis
User: /quick-research --trend-analysis "AI agent frameworks" --timeline "2020-2025"
PersonalOS:
Trend Analysis: AI Agent Frameworks (2020-2025)
Timeline Analysis: