Build mock interview simulators with voice, case interviews, behavioral prep, and scorecards.
Instructions for building and improving AI-powered mock interview simulators that adapt dynamically to any company, role, industry, and market based on user input.
The simulator must NEVER be hardcoded to a specific company or market. Instead:
The user provides their target company, role/position, and location/market during the pre-interview setup
The AI interviewer uses this context to dynamically research and adapt: pulling in relevant company facts, industry dynamics, regional economic context, and role-specific technical questions
The system prompt instructs the AI to act as an informed interviewer at that specific company and tailor all questions, scenarios, and feedback accordingly
This means a single simulator can prep someone for a PE Principal role at Goldman Sachs in New York, a consulting Associate at McKinsey in London, or a VP of Finance at a regional bank in Santo Domingo — all driven by what the user enters.
Before starting any interview, show a setup screen collecting:
Company Name — Text input with placeholder (e.g., "Goldman Sachs", "Banco Popular Dominicano")
Role / Position — Text input (e.g., "Private Equity Principal", "Senior Consultant", "VP of Finance")
Interview Type Selector — Card-based selection:
Structured Interview (icon: Briefcase) — Behavioral + technical + firm knowledge, 8–10 questions
Consulting Case Interview (icon: BarChart3) — Business case scenarios with quantitative analysis
Behavioral Only (icon: MessageSquare) — Focused STAR-method practice, 8–10 behavioral questions
Each card shows title, brief description, estimated duration, and question count
Interview Language — Dropdown selection: English (default), Spanish, French, Portuguese, German, Mandarin, Japanese, Arabic, Hindi
Show language name in both English and native script (e.g., "Spanish — Español")
Any language the AI model supports should be available
The Structured Interview adapts question categories to the role and industry:
Finance / PE roles:
Behavioral / STAR (2–3 questions)
Technical (LBO, valuation, capital structure, accounting) (2–3 questions)
Deal / Transaction Experience (1–2 questions)
Firm & Market Knowledge (1–2 questions)
Culture Fit (1 question)
Consulting roles:
Behavioral / STAR (2–3 questions)
Problem Solving / Frameworks (2–3 questions)
Client Experience / Engagement Stories (1–2 questions)
Firm & Market Knowledge (1–2 questions)
Culture Fit (1 question)
Tech roles:
Behavioral / STAR (2–3 questions)
System Design / Architecture (2–3 questions)
Past Projects / Technical Impact (1–2 questions)
Company & Product Knowledge (1–2 questions)
Culture Fit (1 question)
General (all other roles):
Behavioral / STAR (2–3 questions)
Role-Specific Technical (2–3 questions)
Experience & Accomplishments (1–2 questions)
Company Knowledge (1–2 questions)
Culture Fit (1 question)
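One way to map the free-text role input onto one of these tracks is a simple keyword heuristic. The `detectTrack` helper below is a hypothetical sketch, and its keyword lists are illustrative assumptions, not part of the spec:

```typescript
// Hypothetical helper: infer which question track fits a free-text role.
// Keyword lists are illustrative assumptions, not an exhaustive taxonomy.
type Track = "finance" | "consulting" | "tech" | "general";

const TRACK_KEYWORDS: Record<Exclude<Track, "general">, string[]> = {
  finance: ["private equity", "investment", "banking", "finance", "pe ", "m&a"],
  consulting: ["consultant", "consulting", "strategy"],
  tech: ["engineer", "developer", "software", "architect"],
};

export function detectTrack(role: string): Track {
  const normalized = role.toLowerCase();
  for (const [track, keywords] of Object.entries(TRACK_KEYWORDS)) {
    if (keywords.some((kw) => normalized.includes(kw))) {
      return track as Track;
    }
  }
  return "general"; // fall back to the generic question mix
}
```

A real implementation could also let the AI itself classify the role during setup; the heuristic just guarantees a sensible default for the sidebar labels.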
Business-case-style interviews with quantitative and qualitative analysis:
Market Sizing: Top-down / bottom-up estimation relevant to the target company's industry
Profitability Analysis: Revenue/cost decomposition, margin drivers
Market Entry: Go/no-go framework, competitive landscape, regulatory considerations
M&A / Due Diligence: Synergy analysis, integration risk, valuation
Operations Optimization: Process improvement, capacity planning, cost reduction
The AI selects a case scenario relevant to the target company and industry. For example:
Banking company → "Should [Company] enter the digital payments market in [Region]?"
Tech company → "A client's SaaS platform is losing enterprise customers — diagnose and recommend"
Healthcare → "Evaluate the acquisition of a regional hospital chain"
Case flow: Scenario presentation → Clarifying questions → Framework building → Quantitative analysis → Recommendation → Evaluation
Focused STAR storytelling practice:
8–10 behavioral questions across: leadership, teamwork, failure/resilience, initiative, conflict resolution, influence without authority, ambiguity, time pressure
Strict STAR-method feedback after every answer
Scoring on: specificity, quantification, personal ownership ("I" vs "we"), structure, and relevance to the target role
Present the language selector on the setup screen before starting the session
The system prompt must include an explicit language instruction at the TOP: "Conduct this entire interview in [language_name]. All questions, feedback, and the final scorecard must be in [language_name]."
The UI chrome (buttons, labels, sidebar) remains in English unless the user explicitly requests full localization
The AI should use professional, business-appropriate register in the selected language — not casual or overly academic
For non-English interviews, the AI should still understand if the candidate mixes in English technical terms (e.g., "LBO", "IRR", "EBITDA") without penalizing them
Build the system prompt dynamically from the user's setup selections. The frontend constructs the full prompt and passes it to the backend via the systemPrompt field on conversation creation.
[LANGUAGE INSTRUCTION — if non-English]
You are a senior interviewer at {company_name} conducting a {interview_type} interview for the {role_name} position.
COMPANY CONTEXT:
Research and incorporate what you know about {company_name}:
- Industry position, key products/services, competitive advantages
- Recent news, strategic initiatives, financial performance
- Market/region: {location_context}
- Company culture, values, and what they look for in candidates
Use this knowledge to make questions specific and relevant. If the candidate mentions something about the company, validate or challenge their knowledge.
{INTERVIEW TYPE SPECIFIC INSTRUCTIONS}
INTERVIEW GUIDELINES:
- Ask ONE question at a time
- After each answer, provide brief constructive feedback (3–5 sentences max):
* For behavioral: STAR structure quality, specificity, quantification, ownership ("I" vs "we")
* For technical: accuracy, logical flow, assumptions stated
* For cases: framework quality, math accuracy, creativity, communication
- Rate each answer: Strong / Adequate / Needs Improvement
- Then ask the next question
- Be professional, direct, and constructive
- {difficulty_instruction}
FINAL SCORECARD:
After all questions are complete, provide a final scorecard with:
- Overall rating (Strong Hire / Hire / Lean Hire / No Hire)
- Category-by-category scores
- Top 3 strengths observed
- Top 3 areas for improvement
- Specific recommendations for interview day at {company_name}
Start by briefly introducing yourself as the interviewer at {company_name}, explaining the interview format, and asking the first question.
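The template above can be assembled by the lib/prompts.ts builder. The sketch below assumes a setup object with fields like companyName and difficulty; the exact shape is up to the implementation:

```typescript
// Sketch of lib/prompts.ts — assembles the system prompt from setup inputs.
// Field names on InterviewSetup are assumptions; adapt to the real form state.
export interface InterviewSetup {
  companyName: string;
  roleName: string;
  interviewType: "structured" | "case" | "behavioral";
  language: string; // e.g. "Spanish"
  location?: string; // optional market context
  difficulty: "supportive" | "demanding";
}

const DIFFICULTY_INSTRUCTIONS: Record<InterviewSetup["difficulty"], string> = {
  supportive:
    "Be supportive and constructive. Give the candidate time to think. Provide helpful feedback.",
  demanding:
    "Be demanding. Push back on vague answers. Ask pointed follow-ups. Challenge assumptions. Simulate a high-pressure interview environment.",
};

export function buildSystemPrompt(setup: InterviewSetup): string {
  const parts: string[] = [];
  // The language instruction must come FIRST so it governs the whole session.
  if (setup.language !== "English") {
    parts.push(
      `Conduct this entire interview in ${setup.language}. All questions, feedback, and the final scorecard must be in ${setup.language}.`,
    );
  }
  parts.push(
    `You are a senior interviewer at ${setup.companyName} conducting a ${setup.interviewType} interview for the ${setup.roleName} position.`,
  );
  if (setup.location) {
    parts.push(`Market/region context: ${setup.location}.`);
  }
  parts.push(DIFFICULTY_INSTRUCTIONS[setup.difficulty]);
  return parts.join("\n\n");
}
```

The full builder would also splice in the company-context and interview-type sections from the template; the sketch shows only the ordering and interpolation pattern.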
The {difficulty_instruction} placeholder resolves to one of two strings based on the selected difficulty:
Supportive: "Be supportive and constructive. Give the candidate time to think. Provide helpful feedback."
Demanding: "Be demanding. Push back on vague answers. Ask pointed follow-ups. Challenge assumptions. Simulate a high-pressure interview environment."
Add a visible timer in the chat area:
Starts counting when the AI finishes asking a question (streaming ends)
Displays elapsed time next to the input area (e.g., "Response time: 1:32")
Stops when the user submits their answer
Records per-question response times for the final scorecard
Visual cue: Green < 2 min, Yellow 2–4 min, Red > 4 min
Timer helps candidates practice pacing — real interviews penalize overly long or short answers
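Both the readout and the color cue are pure functions of elapsed seconds. A minimal sketch (function names are illustrative):

```typescript
// Format elapsed seconds as m:ss for the "Response time" readout.
export function formatElapsed(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = Math.floor(totalSeconds % 60);
  return `${minutes}:${String(seconds).padStart(2, "0")}`;
}

// Map elapsed time to the visual pacing cue:
// green under 2 min, yellow 2–4 min, red over 4 min.
export function pacingColor(totalSeconds: number): "green" | "yellow" | "red" {
  if (totalSeconds < 120) return "green";
  if (totalSeconds <= 240) return "yellow";
  return "red";
}
```

Keeping these pure makes the timer trivial to drive from a setInterval in ChatArea and to reuse when summarizing per-question times on the scorecard.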
When the AI sends the final scorecard, detect it and render a special scorecard UI:
Parse the scorecard from the AI's markdown response
Display as a styled card with:
Overall rating (color-coded: green for Strong Hire/Hire, yellow for Lean Hire, red for No Hire)
Category-by-category scores in a visual grid
Top 3 strengths (green checkmarks)
Top 3 areas for improvement (amber indicators)
Response time summary (average, fastest, slowest)
Specific recommendations for interview day
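Detecting the final scorecard and extracting the overall rating can be a best-effort parse of the markdown. The helper names below are assumptions; the regex leans on the rating labels the prompt template defines, but the exact markdown the model emits may vary:

```typescript
// Best-effort detection of the final scorecard in an AI markdown message.
// Rating labels come from the prompt template; helper names are illustrative.
export type OverallRating = "Strong Hire" | "Hire" | "Lean Hire" | "No Hire";

const RATING_PATTERN = /\b(Strong Hire|Lean Hire|No Hire|Hire)\b/;

export function parseOverallRating(markdown: string): OverallRating | null {
  if (!/scorecard/i.test(markdown)) return null; // not the final message
  const match = markdown.match(RATING_PATTERN);
  return match ? (match[1] as OverallRating) : null;
}

// Color-code the rating per the spec: green for Strong Hire/Hire,
// yellow for Lean Hire, red for No Hire.
export function ratingColor(rating: OverallRating): "green" | "yellow" | "red" {
  if (rating === "Strong Hire" || rating === "Hire") return "green";
  if (rating === "Lean Hire") return "yellow";
  return "red";
}
```

If the parse fails, the UI should fall back to rendering the message as ordinary markdown rather than dropping it.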
The right sidebar dynamically reflects the interview type and adapts labels to the role:
Structured Interview stages (adapt labels to role/industry):
Finance: Behavioral → Technical/LBO → Deal Experience → Firm Knowledge → Culture Fit
Consulting: Behavioral → Problem Solving → Client Experience → Firm Knowledge → Culture Fit
Tech: Behavioral → System Design → Past Projects → Company Knowledge → Culture Fit
General: Behavioral → Technical → Experience → Company Knowledge → Culture Fit
Each stage shows: number badge, label, active/complete/upcoming state, and a contextual tip for the current stage.
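The active/complete/upcoming states can be derived from the index of the current stage; a minimal sketch (names are illustrative):

```typescript
// Derive each sidebar stage's visual state from the current stage index.
export type StageState = "complete" | "active" | "upcoming";

export function stageStates(currentIndex: number, stageCount: number): StageState[] {
  return Array.from({ length: stageCount }, (_, i) =>
    i < currentIndex ? "complete" : i === currentIndex ? "active" : "upcoming",
  );
}
```

ProgressSidebar can then render a number badge and contextual tip per stage, keyed off the single `currentIndex` value tracked in interview state.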
POST /api/openai/conversations accepts optional systemPrompt, interviewType, and language fields
If systemPrompt is provided, use it instead of any default; otherwise fall back to a generic interview prompt
The streaming endpoint (POST /api/openai/conversations/:id/messages) remains unchanged — reads all messages including system message from DB
Model: use the latest available model; max_completion_tokens: 8192
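The systemPrompt fallback rule can live in a small helper on the route. The request-body shape and the GENERIC_PROMPT wording below are assumptions:

```typescript
// Sketch of the fallback rule for POST /api/openai/conversations.
// GENERIC_PROMPT wording and the body shape are illustrative assumptions.
const GENERIC_PROMPT =
  "You are a professional interviewer. Conduct a structured mock interview and give constructive feedback after each answer.";

interface CreateConversationBody {
  systemPrompt?: string;
  interviewType?: string;
  language?: string;
}

export function resolveSystemPrompt(body: CreateConversationBody): string {
  // A caller-provided prompt always wins over the generic default.
  const provided = body.systemPrompt?.trim();
  return provided && provided.length > 0 ? provided : GENERIC_PROMPT;
}
```

Trimming before the check means a whitespace-only systemPrompt also falls back to the default instead of producing an empty system message.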
Setup screen is the default view (no conversation ID yet)
On "Begin Interview," construct the dynamic system prompt from user inputs, create conversation, then start streaming
The system prompt message is hidden from the chat display — filter by m.role === "system" or content-matching
Use cn() from @/lib/utils for all dynamic classNames — avoid template literals in JSX className props (known design-subagent bug pattern)
SSE parsing: fetch + ReadableStream reader; split chunks on \n, parse data: {...} lines
Use react-markdown+remark-gfm for rendering AI responses
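The chunk-splitting step is easy to get wrong when a data: line straddles two network chunks. A buffered parser sketch (the JSON payload shape and the [DONE] sentinel are assumptions about this API):

```typescript
// Incremental SSE parser: feed it raw text chunks from the ReadableStream
// reader; it buffers partial lines and invokes onEvent with the parsed JSON
// payload of each complete "data:" line. Payload shape is an assumption.
export function createSseParser(onEvent: (payload: unknown) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the trailing partial line for next chunk
    for (const line of lines) {
      const trimmed = line.trim();
      if (!trimmed.startsWith("data:")) continue;
      const data = trimmed.slice("data:".length).trim();
      if (data === "[DONE]") continue; // stream-end sentinel, nothing to parse
      onEvent(JSON.parse(data));
    }
  };
}
```

In ChatArea, call the returned function with `decoder.decode(value, { stream: true })` for each `reader.read()` result so multi-byte characters split across chunks also survive.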
Use a professional, corporate design system — clean typography, muted palette, subtle shadows
Card-based setup screen with clear visual hierarchy
Adapt accent colors if desired, but default to a neutral professional palette
artifacts/mock-interview/src/
├── pages/
│ └── Interview.tsx # Main interview page (setup + chat)
├── components/
│ ├── Layout.tsx # App shell with sidebar navigation
│ ├── SetupScreen.tsx # Pre-interview setup form
│ ├── ChatArea.tsx # Message list + input + timer
│ ├── ProgressSidebar.tsx # Interview stage tracker
│ └── Scorecard.tsx # Final scorecard renderer
├── lib/
│ ├── prompts.ts # System prompt builder (takes setup inputs, returns prompt string)
│ └── utils.ts # cn() and helpers
└── App.tsx
artifacts/api-server/src/routes/openai/index.ts # Streaming endpoint
lib/api-spec/openapi.yaml # API contract
lib/db/src/schema/index.ts # DB schema
When improving the simulator, prioritize in this order:
Pre-interview setup screen (company, role, type, language inputs)
Dynamic system prompt builder from user inputs
Consulting case interview support
Behavioral-only interview mode
Response timer
End-of-session scorecard UI
Difficulty level selector
Focus area filtering (structured interview only)
Session history / past interview review
PDF export of scorecard and transcript