Use when conducting a formal Salesforce Well-Architected Framework (WAF) review of an org or solution design. Covers all three pillars: Trusted (security, compliance), Easy (user experience, adoption), and Adaptable (scalability, maintainability). Produces a structured assessment with findings and recommendations. Triggers: well-architected review, WAF assessment, org architecture review, architecture health check, trusted easy adaptable. NOT for deep-dives into individual pillars (use security-architecture-review, limits-and-scalability-planning, or technical-debt-assessment) or for implementation guidance.
Use this skill when conducting a formal Salesforce Well-Architected Framework (WAF) review against an org or a solution design. It applies all three high-level pillars — Trusted, Easy, and Adaptable — and produces a structured assessment with scored findings, prioritized recommendations, and a summary scorecard. It does not replace pillar-specific deep-dives; it is the entry point that tells you which deep-dives are needed.
Use when assessing an existing production org holistically. Suitable for annual architecture reviews, post-merger org assessments, or as a baseline before a major transformation program.
Pre-review questionnaire (gather before starting):
Review sequence:
Trusted pillar: Security model audit (sharing model completeness, OWDs, sharing rules), FLS coverage check in Apex (use of WITH SECURITY_ENFORCED, stripInaccessible, or WITH USER_MODE), authentication strength (MFA enforced, SSO configured), Shield assessment (field audit trail, event monitoring, platform encryption for regulated fields), data classification inventory (what data lives where, is it classified, is sensitive data masked in sandboxes).
Easy pillar: Lightning page performance review (identify pages with 10+ components, missing lazy loading, synchronous Apex on load), process complexity assessment (count of active Flows, are any redundant, are users doing manual work that should be automated), adoption signal review (field usage, report and dashboard activity, object record counts vs licence count), mobile readiness (mobile navigation configured, key pages mobile-optimised), accessibility compliance (are custom LWC components meeting WCAG 2.1 AA where possible).
Adaptable pillar: Governor limit headroom scan (daily API usage vs limit, SOQL row limits in peak transactions, heap size in large batch jobs), technical debt level (presence of trigger frameworks, documented architecture decisions, test coverage %, hardcoded IDs), deployment pipeline maturity (sandbox refresh policy, use of scratch orgs, source control, CI/CD pipeline presence), configuration vs code ratio (could logic move from Apex to Flow without sacrificing reliability), dependency management (managed packages with known deprecations, API version currency).
Scoring: Each finding is rated Red (critical gap — immediate action required), Amber (improvement needed — schedule within the quarter), or Green (good practice — document and maintain).
Output: Findings report with pillar, area, finding, severity, and recommendation — plus a summary scorecard and a prioritized backlog.
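The scoring and output steps above can be sketched as a small aggregation routine. This is an illustrative sketch, not part of the skill itself: the finding dictionaries (`pillar`, `severity`, etc.) are an assumed shape, and the RAG ordering follows the scoring rule defined above (Red before Amber; Green items are maintained, not scheduled).

```python
from collections import Counter

# Red sorts ahead of Amber in the prioritized backlog; Green is informational.
SEVERITY_ORDER = {"Red": 0, "Amber": 1, "Green": 2}

def scorecard(findings: list[dict]) -> dict[str, Counter]:
    """Count findings per pillar and severity for the summary scorecard."""
    card: dict[str, Counter] = {}
    for f in findings:
        card.setdefault(f["pillar"], Counter())[f["severity"]] += 1
    return card

def backlog(findings: list[dict]) -> list[dict]:
    """Prioritized backlog: Red first, then Amber; Green items are excluded."""
    actionable = [f for f in findings if f["severity"] != "Green"]
    return sorted(actionable, key=lambda f: SEVERITY_ORDER[f["severity"]])
```

The same findings list feeds both outputs, so the scorecard and the backlog never disagree about what was found.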
Use when reviewing a proposed design before build begins, or before delivery of a new project or feature set into an existing org.
This mode is lighter than a full org review. Focus on the portions of the WAF that the specific solution will affect:
Produce a WAF review section in the solution design document itself rather than a separate deliverable.
Use immediately before go-live for a project or a significant release. This is a checkpoint, not a full review. Confirm:
Document any items that cannot be met at go-live as accepted risks with a named owner and a target remediation date. Do not withhold sign-off for Amber findings that have an agreed remediation plan; withhold it only for Red findings without a plan.
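The sign-off rule above is mechanical enough to express directly. A minimal sketch, assuming each finding carries a `severity` and an optional `remediation_plan` (these field names are illustrative, not mandated by the skill):

```python
def can_sign_off(findings: list[dict]) -> bool:
    """Withhold go-live sign-off only for Red findings with no agreed
    remediation plan. Amber findings with a named owner and a target date
    pass as documented accepted risks."""
    for f in findings:
        if f["severity"] == "Red" and not f.get("remediation_plan"):
            return False
    return True
```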
The Salesforce Well-Architected Framework uses three top-level pillars. Each pillar maps to a subset of the six internal WAF dimensions used in this repository's scoring model.
| WAF Pillar | Internal Dimensions |
|---|---|
| Trusted | Security, Reliability |
| Easy | User Experience |
| Adaptable | Scalability, Performance, Operational Excellence |
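The table above can be held as data so that scored dimensions roll up to the right pillar automatically. A small sketch of that mapping and its inverse:

```python
# Pillar -> internal dimensions, as defined in the mapping table above.
PILLAR_DIMENSIONS = {
    "Trusted": ["Security", "Reliability"],
    "Easy": ["User Experience"],
    "Adaptable": ["Scalability", "Performance", "Operational Excellence"],
}

# Inverse lookup: which pillar a scored dimension rolls up to.
DIMENSION_PILLAR = {dim: pillar
                    for pillar, dims in PILLAR_DIMENSIONS.items()
                    for dim in dims}
```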
Security model completeness covers OWD settings, sharing rules, role hierarchy, and Apex sharing. Every custom object should have a documented OWD justification. Apex that queries or mutates data should enforce FLS using WITH USER_MODE, WITH SECURITY_ENFORCED, or stripInaccessible. Any class using without sharing must have a documented reason.
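The FLS coverage check described above can be approximated with a static scan of Apex source. This is a heuristic sketch, not a parser: it flags classes that contain SOQL or DML but none of the enforcement markers (WITH USER_MODE, WITH SECURITY_ENFORCED, Security.stripInaccessible). How the class bodies are fetched (Tooling API, a repo checkout) is left to the review tooling, and false positives are expected.

```python
import re

# Recognised FLS/CRUD enforcement markers in Apex source.
FLS_PATTERNS = re.compile(
    r"WITH\s+USER_MODE|WITH\s+SECURITY_ENFORCED|Security\.stripInaccessible",
    re.IGNORECASE,
)
# Crude detection of data access: inline SOQL or a DML keyword.
DML_OR_SOQL = re.compile(
    r"\[\s*SELECT\b|\b(insert|update|delete|upsert)\b", re.IGNORECASE
)

def fls_findings(classes: dict[str, str]) -> list[str]:
    """Return names of classes that touch data but show no FLS enforcement."""
    return sorted(
        name for name, body in classes.items()
        if DML_OR_SOQL.search(body) and not FLS_PATTERNS.search(body)
    )
```

Each flagged class becomes a candidate Red or Amber finding for manual confirmation; the scan cannot tell a deliberate, documented system-mode query from an oversight.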
Compliance readiness means understanding what regulated data the org holds and whether appropriate controls are in place. GDPR requires a data map and a right-to-erasure process. HIPAA requires audit logging and access controls on PHI. PCI DSS forbids storing sensitive authentication data (such as CVV) at all and requires cardholder data to be rendered unreadable wherever it is stored — plain Salesforce text fields do not meet that bar without encryption.
Authentication strength means MFA is enforced (not just enabled), SSO is configured where the org has more than a handful of users, and trusted IP ranges are documented and minimal.
Shield considerations apply whenever the org holds regulated or sensitive data. Event Monitoring provides an audit trail for data access. Field Audit Trail extends the standard field history retention window. Platform Encryption encrypts data at rest in Salesforce storage.
User experience quality is assessed by examining page performance (Lightning App Builder page load time, number of components per page, synchronous Apex on page load), clarity of error messages, and whether the UI makes the right path the easy path.
Adoption signals include field usage rates (fields with zero or near-zero population may indicate unused features or poor design), record count trends (objects with no records may indicate failed rollouts), and report and dashboard usage (low engagement suggests the org is not supporting decision-making effectively).
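The field-usage signal above reduces to a threshold check once population rates are extracted. A minimal sketch, assuming a hypothetical `population` map of field API name to the fraction of records with the field populated (the 5% default threshold is an assumption, not a framework rule):

```python
def low_adoption_fields(population: dict[str, float],
                        threshold: float = 0.05) -> list[str]:
    """Flag fields whose population rate (populated records / total records)
    falls below the threshold -- candidates for removal or redesign."""
    return sorted(f for f, rate in population.items() if rate < threshold)
```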
Process simplicity asks: are users doing work that could be automated? Are there manual copy-paste steps between Salesforce and other systems? Are approval processes clearly documented and consistently applied?
Scalability headroom is measured by reviewing peak governor limit consumption. An org running at 80% of its daily API limit or hitting SOQL row limits in production transactions is fragile. Identify the top five governor limit consumption points and document headroom.
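The 80% fragility rule above can be sketched as a simple headroom calculation. The `usage` map shape (limit name to a peak-usage/limit pair) is an assumption for illustration:

```python
def headroom(peak_usage: int, limit: int) -> float:
    """Fraction of a governor limit still unused at peak."""
    return 1 - peak_usage / limit

def fragile_limits(usage: dict[str, tuple[int, int]],
                   floor: float = 0.2) -> list[str]:
    """Limits where peak consumption exceeds 80% (headroom below the floor)."""
    return sorted(name for name, (peak, lim) in usage.items()
                  if headroom(peak, lim) < floor)
```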
Technical debt level is assessed by examining test coverage (target 85%+, never accept below 75%), presence of hardcoded IDs (use Custom Metadata or Custom Settings instead), API version currency of Apex classes and LWC components, and whether architecture decisions are documented.
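The coverage thresholds above map directly onto the RAG scale. A one-function sketch of that rating rule:

```python
def coverage_rating(pct: float) -> str:
    """RAG-rate org-wide Apex test coverage: target 85%+, never accept below 75%."""
    if pct < 75:
        return "Red"
    if pct < 85:
        return "Amber"
    return "Green"
```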
Deployment pipeline maturity covers whether the team uses source control, whether sandbox refresh is on a documented cadence, whether there is a CI/CD pipeline or at minimum a documented manual deployment checklist, and whether scratch org definitions exist for repeatable environment setup.
Step-by-step instructions for an AI agent or practitioner activating this skill: