NIW qualification screening. Pre-filing assessment with field tier, evidence scoring, and go/no-go verdict.
Run this before writing anything. One question: is this case worth the time?
Default stance: conservative. A weak petition does more harm than waiting.
Approval environment (2026):
2025-2026 policy shifts:
EB-2 eligibility gate (hard stop if fails)
↓
Information gathering (no guessing, no assumptions)
↓
Pathway determination (academic / industry / hybrid / entrepreneur)
↓
Field alignment (strip technology labels, evaluate application domain)
↓
Evidence scoring (0-3 scale, 0 = blocking gap)
↓
Verdict (QUALIFIED / LIKELY_QUALIFIED / BORDERLINE / NOT_QUALIFIED)
↓
Petition direction (map to horizon-niw-* skills)
Filing date = evidence deadline. USCIS evaluates eligibility based on the record at time of I-140 filing. An RFE clarifies existing claims — it is not a second chance to submit missing evidence.
Scoring rule: Only count what EXISTS NOW. Future evidence = 0. Multiple zeros = filing is premature.
USCIS checks EB-2 eligibility before the Dhanasar analysis. If this gate fails → NOT_QUALIFIED. Full stop.
Confirm all:
At least 3 of 6 exceptional-ability criteria: official academic record, letters documenting 10+ years of full-time experience, professional license, commanding salary, membership in associations requiring outstanding achievement, recognition from peers/government/professional organizations.
Jan 2025: Exceptional ability must relate to the SAME ENDEAVOR as the NIW request. Mismatch = standalone RFE.
Meeting 3 of 6 alone is insufficient. Kazarian two-step: Step 1 confirms criteria are met. Step 2 evaluates whether ALL evidence together demonstrates "expertise significantly above that ordinarily encountered."
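The gate above reduces to a simple count before the qualitative review. A minimal sketch, assuming the standard advanced-degree-or-exceptional-ability structure; the function and criterion names are illustrative shorthand, not legal terms:

```python
# Minimal sketch of the EB-2 gate; names are illustrative, not legal terms.
EXCEPTIONAL_ABILITY_CRITERIA = [
    "academic_record", "ten_years_experience", "license",
    "commanding_salary", "selective_membership", "peer_recognition",
]

def eb2_gate(has_advanced_degree: bool, meets: dict[str, bool]) -> bool:
    """Pass on an advanced degree OR >= 3 of the 6 exceptional-ability criteria.

    Counting 3 of 6 is only Kazarian step 1; the step-2 totality review
    is qualitative and not modeled here.
    """
    if has_advanced_degree:
        return True
    return sum(meets.get(c, False) for c in EXCEPTIONAL_ABILITY_CRITERIA) >= 3
```

Note the docstring caveat: passing this count is necessary, not sufficient.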
Do not guess. Do not assume. Ask for what is missing. If the user provides a resume, extract automatically and confirm.
Personal: Name, highest degree (field/institution/year/honors), undergraduate degree (if claiming 5yr pathway), current title/employer/years post-degree
Academic (if applicable): Publication count (journal vs conference vs preprint), top venue names, total citations + source (Google Scholar / Scopus / Semantic Scholar), h-index, first-authored count, peer review invitations, patents
Professional: Total years post-graduation, key projects (what was built, measurable outcome, evidence available?), advisory/consulting roles
Evidence inventory (for each major claim):
Field: Primary domain (strip technology labels — see field alignment), connection to U.S. national interest, named federal initiatives/policies
| Criterion | Minimum | Strong |
|---|---|---|
| Degree | Master's or PhD in relevant field | PhD preferred |
| Publications | 5+ peer-reviewed (journal + conference) | Top-venue papers = strong bonus |
| Citations | ~50+ independent (exclude self-citations) | 100+ strong; 200+ very strong |
| Evidence | Google Scholar profile, published papers | ISI/Scopus indexing preferred |
Elite venue exception: 2-3 papers in Nature/Science/Cell/PNAS/NeurIPS/ICML/ICLR with 200+ independent citations can satisfy with fewer total papers.
Bonus factors: First-authored in recognized journals, cited by government reports or standards bodies, 10+ peer review invitations, invited speaker, h-index >= 5, open-source tools adopted by others.
Red flags: All papers in predatory journals, self-citations dominate, no first-authored papers, publications outside claimed field.
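The academic-pathway floor and the elite-venue exception can be expressed as one check. A sketch using the thresholds from the table above; the cutoffs encode this document's rubric, not a USCIS rule:

```python
def academic_minimums(pubs: int, citations: int, elite_papers: int) -> bool:
    """Pathway floor from the table above, plus the elite-venue exception.

    elite_papers counts papers in venues like Nature/Science/Cell/PNAS/
    NeurIPS/ICML/ICLR; citations are independent (self-citations excluded).
    """
    if elite_papers >= 2 and citations >= 200:
        return True  # 2-3 elite-venue papers with 200+ independent citations
    return pubs >= 5 and citations >= 50
```

Red flags (predatory venues, self-citation dominance) are qualitative and must still be screened separately.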
| Criterion | Minimum | Strong |
|---|---|---|
| Degree | Advanced degree OR Bachelor's + 5yr | Advanced + experience ideal |
| Experience | 3+ years relevant | 5+ preferred |
| Project quality | 1+ project reframable to national importance | Multiple reframable projects |
| Objective evidence | 1+ verifiable external source per major claim | Company/govt source preferred |
| Subjective evidence | Reference letter from manager or senior stakeholder | Senior title at credible org |
Project reframability — three questions:
Evidence quality reference:
| Claim | Weak (1) | Strong (3) |
|---|---|---|
| "Reduced fraud" | Own LinkedIn post | CEO press release with metrics |
| "Improved patient outcomes" | Internal slide deck | Peer-reviewed paper or published outcome report |
| "Widely used" | "Thousands of users" (unverified) | GitHub stars + downloads + company blog with numbers |
| "Led critical project" | Job description | Manager letter with outcomes + org chart |
Minimum combination: 2-4 publications (at least 1 first-authored) + 25+ independent citations + 3yr experience + 1 reframable project with objective evidence. OR: strong professional track (5+ years, strong objective evidence) with 1-2 publications that directly validate the work.
Pathway C caps at LIKELY_QUALIFIED, never QUALIFIED. Both sides must have real evidence; if either side is thin → BORDERLINE.
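The two hybrid minimum combinations can be checked mechanically. A sketch encoding this document's thresholds; parameter names are illustrative, and the qualitative parts (reframability, whether publications "directly validate" the work) are reduced to booleans:

```python
def hybrid_minimum(pubs: int, first_authored: int, citations: int,
                   years: int, reframable_projects: int,
                   strong_objective_track: bool) -> bool:
    """Pathway C floor: either minimum combination from the text above."""
    # Combination 1: 2+ publications (1+ first-authored), 25+ independent
    # citations, 3+ years experience, 1+ reframable project with evidence.
    combo1 = (pubs >= 2 and first_authored >= 1 and citations >= 25
              and years >= 3 and reframable_projects >= 1)
    # Combination 2: strong professional track (5+ years, strong objective
    # evidence) with at least one publication validating the work.
    combo2 = (years >= 5 and strong_objective_track and pubs >= 1)
    return combo1 or combo2
```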
If detected (founder, business owner, VC-funded): flag explicitly, recommend entrepreneur-specific evaluation or attorney consultation. Do not continue evaluation here — entrepreneur cases need a dedicated evidence framework. Not every entrepreneur qualifies; general job creation claims are insufficient.
Detection logic: Publications >= 5 with citations → try A. < 5 but significant projects → try B. Borderline both → C. Founder/owner → flag D.
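The detection logic above, as a sketch; the fallback to C simplifies "borderline both," and the real call should remain a judgment:

```python
def detect_pathway(publications: int, citations: int,
                   significant_projects: bool, founder: bool) -> str:
    """Mirror of the detection logic above; thresholds are illustrative."""
    if founder:
        return "D"  # flag for entrepreneur-specific evaluation, do not score here
    if publications >= 5 and citations > 0:
        return "A"  # academic track
    if significant_projects:
        return "B"  # industry track (< 5 publications)
    return "C"      # borderline on both sides: hybrid
```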
Technology labels are not fields. AI, ML, data science, blockchain, cloud computing are methods. Strip the method. Evaluate the application domain.
"AI applied to WHAT?" / "[Technology] used for WHAT PURPOSE, serving WHOM, at WHAT SCALE?"
The answer is the domain. The domain determines tier.
Semiconductor/Microelectronics, Cybersecurity/Critical Infrastructure, Biotechnology/Public Health, Energy Security/Energy Independence, Quantum Information Science, Advanced Manufacturing/Domestic Production, Space/Defense Technology, STEM Education/Workforce Pipeline, Foundational AI/CS Research*, Critical Mineral Supply Chains
*Foundational AI only if: top venue publications (NeurIPS/ICML/ICLR/Nature/Science) AND cross-domain adoption evidence. "I do ML research" alone is not sufficient.
Financial security/Fraud prevention, Transportation safety/Autonomous systems, Agriculture/Food security, Drug safety/Pharmacovigilance, Education technology (must tie to workforce outcomes), Environmental monitoring/Disaster response, Water infrastructure/Water security, Housing/Construction technology, Clinical records NLP, LLM annotation/AI evaluation methodology, Biometric/identity security
Every Tier 2 domain must demonstrate systemic or national-scale impact, not company KPIs.
Marketing/AdTech, Consumer personalization, Retail/E-commerce, Entertainment/Media (unless disinformation, accessibility, or IP protection), Social media engagement, General BI/Analytics, HR/Talent optimization.
Technology labels (AI, ML) do not elevate Tier 3 domains.
| User Claims | Actual Tier | Why |
|---|---|---|
| "AI research" (no domain) | None | AI is a method. No domain = no tier |
| "AI for healthcare" | 1 | Public health |
| "AI for drug discovery" | 1 | Biotechnology |
| "AI for cybersecurity" | 1 | Cybersecurity |
| "AI for fraud detection (banking)" | 2 | Financial security — needs national-scale evidence |
| "AI for marketing personalization" | 3 | Marketing regardless of AI |
| "Computer vision for retail" | 3 | Retail optimization |
| "Computer vision for medical imaging" | 1 | Public health |
| "Data science for finance" | 2-3 | Fraud detection = 2; trading optimization = 3 |
USCIS applies heightened scrutiny to these titles because the occupations are widely available in the domestic labor market.
The following do NOT satisfy Prong 1: classroom teaching without broader field implications, mere occupation shortage claims, consulting in a shortage occupation (alone), general entrepreneur assertions based solely on job creation, benefits limited to specific employers, startup companies without detailed national impact explanation.
Any endeavor falling into these categories → blocking gap.
Dhanasar is administration-agnostic. But "national importance" is inherently policy-sensitive. Frame the same work differently depending on which federal priorities are active.
For each major claim:
| Score | Definition |
|---|---|
| 3 | Objective, verifiable, third-party source with specifics |
| 2 | Objective but general, OR subjective from credible independent source |
| 1 | Self-reported, internal-only, vague, unverifiable |
| 0 | No evidence — blocking gap |
Any claim scoring 0 = blocking gap. Average < 1.5 across a claim's evidence = significant gap.
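The blocking-gap and significant-gap rules reduce to a short check per claim. A sketch of this document's rubric (treating no evidence at all as a blocking gap, consistent with score 0):

```python
def claim_gaps(scores: list[int]) -> str:
    """Apply the 0-3 rubric: any 0 is blocking; average < 1.5 is significant."""
    if not scores or 0 in scores:
        return "BLOCKING_GAP"     # missing evidence -> filing is premature
    if sum(scores) / len(scores) < 1.5:
        return "SIGNIFICANT_GAP"
    return "OK"
```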
2024-2025 AAO updates:
For each major project, three questions:
No quantifiable outcomes in the entire case → flag as significant gap. At 54% approval, a petition without quantifiable evidence faces substantially higher RFE risk.
| Verdict | Criteria |
|---|---|
| QUALIFIED | Pathway met, Tier 1-2, evidence mostly 2-3, no blocking gaps |
| LIKELY_QUALIFIED | Most criteria met, addressable gaps (list specific conditions) |
| BORDERLINE | Meaningful gaps, pathway marginal, Tier 2-3, evidence mix of 1-2 |
| NOT_QUALIFIED | EB-2 gate fails, OR no viable pathway, OR Tier 3 without exceptional reframe |
Conservative rule: When in doubt between two tiers, assign the lower one. Underselling a strong case is recoverable; overselling a weak case wastes years and filing fees.
2024-2025 trend: some officers incorrectly demand NIW petitioners meet the higher EB-1A standard.
If HIGH or MEDIUM: recommend preemptive Dhanasar framing in petition opening. Cite Matter of Dhanasar, 26 I&N Dec. 884 (AAO 2016). Avoid EB-1A language ("original contributions of major significance," "top of the field," "extraordinary ability").
When the scoring rubric alone is ambiguous:
---