Review App Store metadata (fastlane) across 14+ languages for ASO, naturalness, and messaging. Use when reviewing localized metadata before release. Triggers on "ASO review", "ASOレビュー", "ストアレビュー", "metadata review", "メタデータレビュー", "store review".
Review App Store metadata across all locales from three perspectives: ASO optimization, naturalness, and messaging consistency.
Core principle: Localize, don't translate. Metadata must read as if a native speaker wrote it from scratch.
/aso-review
/aso-review path/to/fastlane/metadata
The skill auto-discovers metadata files. No arguments needed if fastlane metadata is in the standard location.
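The discovery step can be sketched as follows (a minimal sketch, assuming the standard fastlane `metadata/<locale>/` layout; the `fastlane/metadata` default path and `*.txt` file pattern are illustrative assumptions):

```python
from pathlib import Path

def discover_locales(root: str = "fastlane/metadata") -> dict[str, list[str]]:
    """Map each locale directory to the metadata files it contains."""
    base = Path(root)
    # Tolerate a missing directory so the caller can fall back to asking the user.
    entries = sorted(base.iterdir()) if base.is_dir() else []
    locales: dict[str, list[str]] = {}
    for child in entries:
        if child.is_dir():
            locales[child.name] = sorted(f.name for f in child.glob("*.txt"))
    return locales
```

With a typical layout, `discover_locales()` would return something like `{"en-US": ["description.txt", "keywords.txt", "subtitle.txt"], "ja": [...], ...}`, which also gives the locale count needed for the "fewer locales than expected" check below.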
At workflow start, create tasks for each phase:
TaskCreate: "Phase 0: Route — verify target is ASO metadata"
TaskCreate: "Phase 1: Discover target files"
TaskCreate: "Phase 2: Build shared context"
TaskCreate: "Phase 3: Form review team & run reviews"
TaskCreate: "Phase 4: Cross-review discussion"
TaskCreate: "Phase 5: Synthesize results"
TaskCreate: "Phase 6: Apply fixes (after user approval)"
Update status as you progress: in_progress when starting, completed when done.
Verify the review target is ASO metadata.
Valid targets:
- `**/metadata/**` — fastlane metadata (subtitle, keywords, description per locale)

Redirect: If .tsx LP files or docusaurus.config.* are detected, tell the user: "LP pages should use /lp-review instead."
No metadata found: If no fastlane metadata directory exists, ask the user for the path before proceeding.
Glob("**/metadata/**") → fastlane metadata
Expected locales: ja, en, ko, zh-Hans, de, es, fr, ...

If fewer locales than expected are found: inform the user how many locales were detected and confirm whether to proceed or wait for the missing locales.
Gather before dispatching to reviewers:
- references/_localization-principles.md

MUST call TeamCreate to create the review team:
TeamCreate("aso-review-team")
Then spawn ALL 3 agents in ONE message using the Task tool. Do NOT launch them sequentially.
| Agent Name | subagent_type | Role |
|---|---|---|
| aso-reviewer | labee-marketing-aso | LEAD — ASO optimization |
| naturalness-reviewer | general-purpose | Naturalness checking |
| messaging-reviewer | labee-pmm-fujimoto-ren | Value proposition & tone |
Each agent receives:
- references/_checklist-aso.md
- references/_localization-principles.md

CRITICAL: Each agent MUST review EVERY detected locale independently, not just JP and EN.
Per-agent instructions:
aso-reviewer (LEAD):
naturalness-reviewer:
messaging-reviewer:
MUST use SendMessage between agents. Do NOT skip this phase.
Each agent responds with agreements, disagreements, and proposed resolutions.
The aso-reviewer (LEAD) makes final call on conflicts — especially ASO keyword vs naturalness trade-offs.
Report format:
# ASO Review Report
## Summary
[Overall assessment across all locales]
## Per-Locale Findings
### ja (Japanese)
- [Finding with specific before/after suggestion]
### ko (Korean)
- [Finding with specific before/after suggestion]
[Repeat for every detected locale]
## Cross-Locale Issues
- [Issues spanning multiple locales]
## Conflicts Resolved
- [Conflict]: [Resolution by lead reviewer]
Apply fixes only after user approval.
Run pnpm run build if applicable.

Bad (keyword stuffing):
タスク管理・時間管理・プロジェクト管理・チーム管理アプリ
Good (natural with keyword):
チームのタスクと時間をまとめて管理
Why: The bad version crams every keyword into the subtitle, sacrificing readability. The good version includes the primary keyword naturally while communicating a clear benefit.
Bad (translated from JP, sounds like a manual):
작업을 관리할 수 있습니다
Good (localized, casual and scannable):
할 일, 한눈에 정리
Why: The bad version uses formal -습니다 endings and technical 작업 (work/task), reading like a translated manual. The good version uses casual 할 일 (to-do) with a noun-ending phrase natural to Korean App Store copy.
Bad (subject omission from JP source):
Verwaltet Aufgaben und Zeit
Good (addresses user directly):
Deine Aufgaben und Zeit im Griff
Why: The bad version omits the subject (natural in Japanese, awkward in German) and uses a bare verb. The good version addresses the user directly with "Deine" (du-form), standard for German consumer apps.
Bad (reads like a spec sheet):
任务管理和时间跟踪
Good (conversational, benefit-oriented):
轻松搞定每日待办
Why: The bad version lists features in formal written register. The good version uses colloquial phrasing native to the Chinese App Store, leading with the benefit.
Bad:
Delve into comprehensive task management
Good:
See your tasks. Check them off.
Why: The bad version uses AI vocabulary ("delve", "comprehensive") that signals machine-generated text. The good version is concrete, action-oriented, and sounds human-written.
/lp-review

| File | Load When |
|---|---|
references/_checklist-aso.md | Auto-loaded: ASO review checklist |
references/_localization-principles.md | Auto-loaded: Localization guidance and per-language red flags |