Perform AI-powered web searches with real-time information using Perplexity models via LiteLLM and OpenRouter. This skill should be used when conducting web searches for current information, finding recent scientific literature, getting grounded answers with source citations, or accessing information beyond the model's knowledge cutoff. Provides access to multiple Perplexity models, including Sonar Pro, Sonar Pro Search (advanced agentic search), and Sonar Reasoning Pro, through a single OpenRouter API key.
Perform AI-powered web searches using Perplexity models through LiteLLM and OpenRouter. Perplexity provides real-time, web-grounded answers with source citations, making it ideal for finding current information, recent scientific literature, and facts beyond the model's training data cutoff.
This skill provides access to all Perplexity models through OpenRouter, requiring only a single API key (no separate Perplexity account needed).
Use this skill when:
- Conducting web searches for current information
- Finding recent scientific literature
- Getting grounded answers with source citations
- Accessing information beyond the model's training data cutoff
Do not use for:
- Questions answerable from the model's existing knowledge, where a live web search adds no value
Get OpenRouter API key:
Configure environment:
# Set API key
export OPENROUTER_API_KEY='sk-or-v1-your-key-here'
# Or use setup script
python scripts/setup_env.py --api-key sk-or-v1-your-key-here
Install dependencies:
uv pip install litellm
Verify setup:
python scripts/perplexity_search.py --check-setup
See references/openrouter_setup.md for detailed setup instructions, troubleshooting, and security best practices.
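The setup check above can also be done programmatically before a batch of searches. A minimal sketch, assuming only that the environment variable name (`OPENROUTER_API_KEY`) and the `sk-or-` key prefix match the setup steps shown earlier:

```python
import os


def openrouter_key_present() -> bool:
    """Return True if an OpenRouter API key appears to be configured."""
    key = os.environ.get("OPENROUTER_API_KEY", "")
    # OpenRouter keys use the "sk-or-" prefix (e.g. sk-or-v1-...).
    return key.startswith("sk-or-")


if not openrouter_key_present():
    print("OPENROUTER_API_KEY is missing or malformed; run scripts/setup_env.py first.")
```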
Simple search:
python scripts/perplexity_search.py "What are the latest developments in CRISPR gene editing?"
Save results:
python scripts/perplexity_search.py "Recent CAR-T therapy clinical trials" --output results.json
Use specific model:
python scripts/perplexity_search.py "Compare mRNA and viral vector vaccines" --model sonar-pro-search
Verbose output:
python scripts/perplexity_search.py "Quantum computing for drug discovery" --verbose
Access models via --model parameter:
Model selection guide:
- sonar-pro
- sonar-pro-search (advanced agentic search)
- sonar-reasoning-pro
- sonar

See references/model_comparison.md for detailed comparison, use cases, pricing, and performance characteristics.
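The short names accepted by `--model` correspond to full OpenRouter model identifiers; the full form for sonar-pro (`openrouter/perplexity/sonar-pro`) appears in the Python module example later in this document, and the others presumably follow the same pattern. A hypothetical sketch of such an alias table:

```python
# Hypothetical alias table: short CLI names -> full OpenRouter model IDs.
# Only the sonar-pro full ID is confirmed by this document; the rest are
# assumed to follow the same openrouter/perplexity/ prefix convention.
MODEL_ALIASES = {
    "sonar": "openrouter/perplexity/sonar",
    "sonar-pro": "openrouter/perplexity/sonar-pro",
    "sonar-pro-search": "openrouter/perplexity/sonar-pro-search",
    "sonar-reasoning-pro": "openrouter/perplexity/sonar-reasoning-pro",
}


def resolve_model(name: str) -> str:
    """Expand a short alias to a full model ID; pass full IDs through unchanged."""
    return MODEL_ALIASES.get(name, name)
```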
Good examples:
Bad examples:
Perplexity searches real-time web data:
For high-quality results, mention source preferences:
Break complex questions into clear components:
Example: "What improvements does AlphaFold3 offer over AlphaFold2 for protein structure prediction, according to research published between 2023 and 2024? Include specific accuracy metrics and benchmarks."
See references/search_strategies.md for comprehensive guidance on query design, domain-specific patterns, and advanced techniques.
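The query-design advice above (clear components: topic, timeframe, preferred sources, requested specifics) can be sketched as a small helper. This is a hypothetical illustration, not part of the skill's scripts:

```python
def build_query(topic: str, timeframe: str = "", sources: str = "", specifics: str = "") -> str:
    """Assemble a focused search query from its components (hypothetical helper)."""
    parts = [topic]
    if timeframe:
        parts.append(f"Focus on work published {timeframe}.")
    if sources:
        parts.append(f"Prefer {sources}.")
    if specifics:
        parts.append(f"Include {specifics}.")
    return " ".join(parts)


query = build_query(
    "What improvements does AlphaFold3 offer over AlphaFold2 for protein structure prediction?",
    timeframe="between 2023 and 2024",
    sources="peer-reviewed studies",
    specifics="specific accuracy metrics and benchmarks",
)
```

The result mirrors the worked example above: a single question with an explicit timeframe, source preference, and requested detail.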
python scripts/perplexity_search.py \
"What does recent research (2023-2024) say about the role of gut microbiome in Parkinson's disease? Focus on peer-reviewed studies and include specific bacterial species identified." \
--model sonar-pro
python scripts/perplexity_search.py \
"How to implement real-time data streaming from Kafka to PostgreSQL using Python? Include considerations for handling backpressure and ensuring exactly-once semantics." \
--model sonar-reasoning-pro
python scripts/perplexity_search.py \
"Compare PyTorch versus TensorFlow for implementing transformer models in terms of ease of use, performance, and ecosystem support. Include benchmarks from recent studies." \
--model sonar-pro-search
python scripts/perplexity_search.py \
"What is the evidence for intermittent fasting in managing type 2 diabetes in adults? Focus on randomized controlled trials and report HbA1c changes and weight loss outcomes." \
--model sonar-pro
python scripts/perplexity_search.py \
"What are the key trends in single-cell RNA sequencing technology over the past 5 years? Highlight improvements in throughput, cost, and resolution, with specific examples." \
--model sonar-pro
Use perplexity_search.py as a module:
from scripts.perplexity_search import search_with_perplexity
result = search_with_perplexity(
    query="What are the latest CRISPR developments?",
    model="openrouter/perplexity/sonar-pro",
    max_tokens=4000,
    temperature=0.2,
    verbose=False,
)

if result["success"]:
    print(result["answer"])
    print(f"Tokens used: {result['usage']['total_tokens']}")
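When `success` is false, the result presumably carries a failure message alongside it. A hedged sketch of defensive handling, assuming an `error` key (the `success`, `answer`, and `usage` fields come from the example above; `error` is an assumption):

```python
def summarize_result(result: dict) -> str:
    """Format a search result for display, tolerating failed searches."""
    if result.get("success"):
        usage = result.get("usage", {})
        return f"{result['answer']}\n[tokens: {usage.get('total_tokens', '?')}]"
    # "error" is an assumed field name for the failure message.
    return f"Search failed: {result.get('error', 'unknown error')}"
```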