Creates question-answer flashcard pairs from educational content to help students study and memorize concepts. Use this when the user wants flashcards, study cards, or Q&A pairs from learning materials.
Extract key concepts from educational material and transform them into effective question-answer flashcards that help students test their understanding and retention.
The user will provide educational content (lecture notes, articles, documentation, or other learning materials) to turn into flashcards.
Create a markdown file with this structure:
# Flashcards for [Topic Name]
Generated from: [Brief description of source content]
---
## Card 1
Q: What is [concept]?
A: [Clear, concise definition or explanation]
## Card 2
Q: How does [concept] work?
A: [Process explanation with key steps]
## Card 3
Q: What is the difference between [concept A] and [concept B]?
A: [Comparison highlighting key distinctions]
## Card 4
Q: When should you use [technique/tool]?
A: [Use cases and scenarios]
## Card 5
Q: What are the key components of [system]?
A: [Enumeration with brief explanations]
[Continue for remaining cards...]
Example cards:

## Card 1
Q: What is the purpose of the @tool decorator in LangChain?
A: The @tool decorator converts a Python function into a tool that an LLM can call. It automatically generates a JSON schema from the function's type annotations and docstring, making the function's signature and purpose legible to language models.
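The mechanism Card 1 describes can be sketched in plain Python. This is a simplified illustration of deriving a JSON schema from annotations and a docstring, not LangChain's actual implementation; the decorator and type mapping here are hypothetical stand-ins.

```python
from typing import get_type_hints

# Rough mapping from Python annotations to JSON-schema type names
# (a real library handles many more types than this).
_PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool(fn):
    """Attach a JSON schema built from fn's annotations and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the schema describes inputs only
    fn.schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {
                name: {"type": _PY_TO_JSON.get(t, "string")}
                for name, t in hints.items()
            },
            "required": list(hints),
        },
    }
    return fn

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(add.schema["name"])        # → add
print(add.schema["parameters"]["properties"])
```

The decorated function stays callable as ordinary Python; the attached schema is what gets serialized and sent to the model so it knows how to invoke the tool.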
## Card 2
Q: What is the difference between LangChain and LangGraph?
A: LangChain provides building blocks for LLM applications (prompts, chains, tools), while LangGraph extends LangChain with stateful, graph-based workflows. LangGraph is specifically designed for building agents that need persistence, cycles, and complex control flow.
## Card 3
Q: How does the ReAct agent pattern work?
A: The ReAct (Reasoning + Acting) pattern runs in a loop: the LLM receives a question, decides whether to call tools or give a final answer, executes any requested tools, receives the results, and repeats until no more tools are needed. This enables multi-step problem solving.
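The loop that card describes can be sketched with a scripted stand-in for the LLM. All names here are hypothetical; a real agent would call an actual model and real tools, but the control flow is the same.

```python
# Minimal sketch of the ReAct loop: decide → act → observe → repeat.

def search(query: str) -> str:
    """Toy tool standing in for a real search backend."""
    return "Paris" if "capital" in query else "unknown"

TOOLS = {"search": search}

def scripted_llm(messages):
    """Stand-in for the model: call a tool first, answer once a result exists."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool": "search", "args": {"query": messages[0]["content"]}}
    return {"answer": f"The answer is {tool_results[-1]['content']}."}

def react_agent(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        decision = scripted_llm(messages)
        if "answer" in decision:            # no more tools needed: stop
            return decision["answer"]
        tool_fn = TOOLS[decision["tool"]]   # execute the requested tool
        result = tool_fn(**decision["args"])
        messages.append({"role": "tool", "content": result})  # feed result back

print(react_agent("What is the capital of France?"))  # → The answer is Paris.
```

The loop terminates when the model returns an answer instead of a tool call; production agents add a step limit so a model that keeps requesting tools cannot loop forever.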