One-click market viability assessment for Amazon sellers. Analyzes market size, competition intensity, brand landscape, pricing structure, and consumer pain points to deliver a GO/CAUTION/AVOID recommendation. Uses all 11 APIClaw API endpoints with cross-validation for data-backed decisions. Use when user asks about: market entry, can I sell, should I enter, market viability, is this niche worth it, category analysis, market opportunity, market assessment, niche evaluation, product category research. Requires APICLAW_API_KEY.
One input (keyword/category). Full market viability assessment with sub-market discovery.
- {skill_base_dir}/scripts/apiclaw.py — run --help for params
- {skill_base_dir}/references/reference.md (field names & response structure)
- Required: APICLAW_API_KEY. Get a free key at apiclaw.io/api-keys
- Resolve the category via the categories endpoint, with fallback to top search result. If category_source is inferred_from_search, confirm with user.
- Use sampleAvgMonthlyRevenue (NEVER calculate avgPrice × totalSales — overestimates 30-70%).
- Use monthlySalesFloor (lower bound). Fallback: 300,000 / BSR^0.65, tag 🔍.
- Use sampleOpportunityIndex, sampleTop10BrandSalesRate directly — never reinvent.
- /reviews/analysis needs 50+ reviews. Fallback chain when the sample is insufficient:
  1. realtime/product ratingBreakdown — only star distribution, no themes
  2. Rebuild /reviews/analysis entirely:
a. apiclaw.py reviews-raw --asin X → fetch up to 100 raw reviews (10 credits, ~60s)
b. For each review: render Map prompt via apiclaw.py review-tag-prompt --review '<json>'
and have your own LLM produce JSON tags (sentiment + 11 dimensions)
c. Collect candidate phrases per dimension; for each dimension render
Reduce prompt via apiclaw.py review-reduce-prompt --label-type X --candidates '[...]'
and have your LLM produce semantic clusters
d. apiclaw.py review-aggregate --reviews R --tagged T --clusters C
   → consumerInsights output compatible with /reviews/analysis
- Use a scratch dir: `WORK=/tmp/review_<ASIN>_$(date +%s) && mkdir -p $WORK`
- review-tag-prompt RENDERS the prompt only; YOUR LLM produces the JSON. Render once to learn the schema, then produce tags for all N reviews in one in-context pass (don't call the CLI N times).
- Collect candidate phrases with: `candidates = {d: sorted({el.strip().lower() for t in tagged for el in (t.get(d) or [])}) for d in DIMS}`
- This fallback replaces only the /reviews/analysis aggregation. This skill's primary workflow outputs (GO/CAUTION/AVOID verdict, market size, brand/price analysis) remain valid — do not re-run them.

Run market --category "{path}" --topn 10 --page-size 20, paginate all pages. Score each sub-market (1-100):
| Dimension | Weight | Field | Good→100 | Bad→0 |
|---|---|---|---|---|
| Demand | 25% | sampleAvgMonthlySales | ≥1500 | <200 |
| Profit | 25% | sampleAPlusRate | ≥0.35 | <0.15 |
| New Entrant | 20% | sampleNewSkuRate | ≥0.20 | <0.05 |
| Brand Openness | 20% | topBrandSalesRate | ≤0.50 | ≥0.90 (inverted) |
| Capacity | 10% | totalSkuCount | 300-8000 | extreme |
Fallback (grossMargin=0 for all): redistribute to Demand 30%, New Entrant 25%, Brand 25%, Capacity 20%.
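The weighted score above can be sketched as a linear interpolation between each dimension's Good and Bad anchors. This is a minimal sketch: the linear interpolation and the binary treatment of Capacity ("extreme" → 0) are assumptions; field names follow the table.

```python
def scale(value, good, bad):
    """Map value onto 0-100 between the Bad and Good anchors (clamped)."""
    if good > bad:  # higher is better
        return max(0.0, min(100.0, (value - bad) / (good - bad) * 100))
    # inverted dimension: lower is better
    return max(0.0, min(100.0, (bad - value) / (bad - good) * 100))

def score_submarket(m):
    """Score one sub-market 1-100 using the default dimension weights."""
    parts = [
        (0.25, scale(m["sampleAvgMonthlySales"], 1500, 200)),  # Demand
        (0.25, scale(m["sampleAPlusRate"], 0.35, 0.15)),       # Profit
        (0.20, scale(m["sampleNewSkuRate"], 0.20, 0.05)),      # New Entrant
        (0.20, scale(m["topBrandSalesRate"], 0.50, 0.90)),     # Brand Openness (inverted)
        # Capacity simplified to in-range / extreme (an assumption)
        (0.10, 100.0 if 300 <= m["totalSkuCount"] <= 8000 else 0.0),
    ]
    return round(sum(w * s for w, s in parts), 1)
```

Under the grossMargin=0 fallback, swap in the redistributed weights (0.30/0.25/0.25/0.20) and drop the Profit term.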
Present TOP 10 sub-markets. Ask user which to deep-dive (default: top 3). If ≤3 sub-markets, deep-dive all.
| Dimension | Weight | Good | Medium | Warning |
|---|---|---|---|---|
| Market Size | 15% | >$10M/mo | $5-10M | <$5M |
| Market Trend | 10% | Rising | Stable | Declining |
| Competition | 25% | CR10<40% | 40-60% | >60% |
| Price Opportunity | 15% | oppIndex>1.0 | 0.5-1.0 | <0.5 |
| New Entrant Space | 10% | >15% | 5-15% | <5% |
| Consumer Pain Points | 15% | Clear gaps | Some | None |
| Profit Potential | 10% | >30% | 15-30% | <15% |
| Score | Signal | Action |
|---|---|---|
| 70-100 | ✅ GO | Proceed with product development |
| 40-69 | ⚠️ CAUTION | Possible but needs differentiation |
| 0-39 | 🔴 AVOID | Too competitive or too small |
CR10 dual-level check: category CR10 PASS + sub-market CR10 FAIL → ⚠️ CAUTION; both FAIL → 🔴 AVOID. User criteria override: if the user sets their own thresholds, ANY failed threshold → CAUTION/AVOID. Never override user-set criteria with AI judgment.
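The score bands and the CR10 dual-level check compose as below. A minimal sketch, assuming the CR10 downgrade only caps a GO at CAUTION when the category passes but the sub-market fails (the exact interaction with user-set thresholds is handled separately, as stated above).

```python
def verdict(score, category_cr10_pass, submarket_cr10_pass):
    """Map the 0-100 score to a signal, then apply the CR10 dual-level check."""
    if score >= 70:
        signal = "GO"
    elif score >= 40:
        signal = "CAUTION"
    else:
        signal = "AVOID"
    # Both levels fail -> AVOID regardless of score.
    if not category_cr10_pass and not submarket_cr10_pass:
        return "AVOID"
    # Category PASS + sub-market FAIL caps a GO at CAUTION.
    if category_cr10_pass and not submarket_cr10_pass and signal == "GO":
        return "CAUTION"
    return signal
```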
python3 {skill_base_dir}/scripts/apiclaw.py market-entry --keyword "{kw}" --category "{path}"
Runs all 11 endpoints (~20 calls). Output JSON is large — use targeted extraction, not full read.
Respond in user's language.
Sections: Sub-Market Landscape → Executive Summary → Market Overview → Trend → Brand Landscape → Price Structure → Top 5 Competitors → Consumer Insights → Scoring Breakdown (with "Basis" column) → Entry Strategy → Data Provenance → API Usage → Cross-Market Comparison
If user provides COGS, calculate break-even and profit. If not, prompt for it.
Output language MUST match the user's input language. If the user asks in Chinese, the entire report is in Chinese. If in English, output in English. Exception: API field names (e.g. monthlySalesFloor, categoryPath), endpoint names, technical terms (e.g. ASIN, BSR, CR10, FBA, credits) remain in English.
Data is based on APIClaw API sampling as of [date]. Monthly sales (monthlySalesFloor) are lower-bound estimates. This analysis is for reference only and should not be the sole basis for business decisions. Validate with additional sources before acting.
Rules: Strategy recommendations are NEVER 📊. Anomalies (>200% growth) are always 💡. User criteria override AI judgment.
Aggregate-label rule (applies to ALL report output, not just fallback): NEVER attach 📊 to ANY element that aggregates or groups underlying content when ANY piece of that content is 🔍 or 💡. "Aggregate/grouping elements" include:
- Headings (#, ##, ###, ####) — including top-level summary sections like "Overall Score", "Verdict", "Executive Summary" (e.g. ## Overall Score — 27/100 · Grade F 📊 is WRONG if any Basis row inside is 🔍)
- Table column/row labels (**Target ASIN** 📊 as a column label is WRONG if any cell in that column contains 🔍)

A group-level 📊 implies the whole block/column/row is data-backed, which smuggles inferred/directional content into the 📊 tier via visual grouping. Either (a) omit the group-level label entirely (preferred when content mixes tiers), or (b) use the LOWEST confidence present inside (🔍 if any underlying content is 🔍; 💡 if any is 💡). This is a universal output-quality rule — it applies regardless of which fallback path (if any) was triggered.
Emoji reservation rule (closely related): The three confidence symbols 📊 🔍 💡 are RESERVED for confidence labeling. NEVER use them as decorative prefixes on section headers, table headers, or any aggregate element — even when you also include a correct confidence suffix on the same line. Example:
- WRONG: ## 📊 Overall Score — 27/100 · Grade F 🔍 (the leading 📊 reads as a data-backed claim even though the trailing 🔍 is correct)
- RIGHT: ## Overall Score — 27/100 · Grade F 🔍 (no decorative emoji, just the proper confidence suffix)
- ALSO OK: ## 🎯 Overall Score — 27/100 · Grade F 🔍 (use non-reserved decorative icons like 🎯 🧭 📋 📝 📂 🏁 🚨 🏆 🔔 when a visual prefix is desired)

Decorative emoji ≠ confidence label — but from a reader's perspective, a leading 📊/🔍/💡 is indistinguishable from a confidence claim. Reserve these three symbols EXCLUSIVELY for confidence annotation to avoid ambiguity.
Include a table at the end of every report:
| Data | Endpoint | Key Params | Notes |
|---|---|---|---|
| (e.g. Market Overview) | markets/search | categoryPath, topN=10 | 📊 Top N sampling, sales are lower-bound |
| ... | ... | ... | ... |
Extract endpoint and params from _query in JSON output. Add notes: sampling method, T+1 delay, realtime vs DB, minimum review threshold, etc.
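Building the provenance rows from _query can be sketched as below. The _query shape ({"endpoint": ..., "params": {...}}) is an assumption — check references/reference.md for the actual field names.

```python
def provenance_rows(responses):
    """Render Data Provenance table rows from each response's _query block.

    `responses` maps a human-readable label ("Market Overview") to the raw
    JSON response dict; the _query shape here is assumed, not confirmed.
    """
    rows = []
    for label, resp in responses.items():
        q = resp.get("_query", {})
        params = ", ".join(f"{k}={v}" for k, v in q.get("params", {}).items())
        rows.append(f"| {label} | {q.get('endpoint', '?')} | {params} | |")
    return rows
```

The Notes column is left blank for the model to fill in (sampling method, T+1 delay, etc.).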
| Endpoint | Calls | Credits |
|---|---|---|
| (each endpoint used) | N | N |
| Total | N | N |
Extract from meta.creditsConsumed per response. End with Credits remaining: N.
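Tallying the API Usage table can be sketched as below — a minimal sketch assuming each response carries meta.creditsConsumed and a _query endpoint name (adjust to the actual payload shape).

```python
from collections import Counter

def tally_credits(responses):
    """Sum calls and credits per endpoint from meta.creditsConsumed.

    Returns {endpoint: (calls, credits)}; payload field locations are
    assumptions, not confirmed against the API reference.
    """
    calls, credits = Counter(), Counter()
    for resp in responses:
        ep = resp.get("_query", {}).get("endpoint", "unknown")
        calls[ep] += 1
        credits[ep] += resp.get("meta", {}).get("creditsConsumed", 0)
    return {ep: (calls[ep], credits[ep]) for ep in calls}
```

Sum the per-endpoint tuples for the Total row, then report the remaining balance from the last response.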