Converts text notes into visual formats including mind maps (Mermaid), concept maps, flowcharts for processes, timeline visualizations, comparison tables/matrices, and sketchnote patterns. Use when transforming lecture notes, creating study aids, visualizing relationships, or organizing complex information spatially.
Transform text-based notes into visual formats that enhance understanding, retention, and recall through spatial organization and visual encoding.
Converts notes into multiple visual formats:
```bash
node scripts/generate-mindmap.js notes.txt mindmap.md
node scripts/create-timeline.js events.json timeline.md
node scripts/build-comparison-table.js items.json comparison.md
```
```mermaid
graph TD
    A[Text Notes] --> B[Identify Structure]
    B --> C{Note Type?}
    C -->|Hierarchical| D[Mind Map]
    C -->|Relationships| E[Concept Map]
    C -->|Sequential| F[Flowchart/Timeline]
    C -->|Comparative| G[Comparison Table]
    D --> H[Apply Visual Design]
    E --> H
    F --> H
    G --> H
    H --> I[Add Color Coding]
    I --> J[Review & Refine]
```
```mermaid
mindmap
  root((Machine Learning))
    Supervised Learning
      Classification
        Logistic Regression
        Decision Trees
        Neural Networks
      Regression
        Linear Regression
        Polynomial Regression
    Unsupervised Learning
      Clustering
        K-Means
        DBSCAN
      Dimensionality Reduction
        PCA
        t-SNE
    Reinforcement Learning
      Q-Learning
      Deep Q-Networks
```
```mermaid
graph TD
    A[Data Science] --> B[Statistics]
    A --> C[Programming]
    A --> D[Domain Knowledge]
    B --> B1[Descriptive]
    B --> B2[Inferential]
    B --> B3[Predictive]
    C --> C1[Python]
    C --> C2[R]
    C --> C3[SQL]
    C1 --> C1a[NumPy]
    C1 --> C1b[Pandas]
    C1 --> C1c[Scikit-learn]
    D --> D1[Business]
    D --> D2[Healthcare]
    D --> D3[Finance]
    style A fill:#ff9999
    style B fill:#99ccff
    style C fill:#99ff99
    style D fill:#ffcc99
```
```
            Classification
                   |
Linear -------- ML Core -------- Neural Nets
                   |
              Clustering
                   |
            Dimensionality
               Reduction
```
```mermaid
graph LR
    A[Supervised Learning] -->|requires| B[Labeled Data]
    A -->|produces| C[Prediction Model]
    C -->|evaluated by| D[Test Data]
    E[Unsupervised Learning] -->|uses| F[Unlabeled Data]
    E -->|discovers| G[Patterns]
    A -.->|contrasts with| E
    B -->|split into| H[Training Set]
    B -->|split into| D
    style A fill:#e1f5ff
    style E fill:#ffe1e1
```
```mermaid
graph TD
    subgraph Algorithms
        A[Linear Regression]
        B[Logistic Regression]
        C[Neural Networks]
    end
    subgraph Concepts
        D[Supervised Learning]
        E[Classification]
        F[Regression]
    end
    subgraph Data
        G[Training Set]
        H[Test Set]
        I[Features]
    end
    D --> E
    D --> F
    E --> B
    E --> C
    F --> A
    G --> A
    G --> B
    G --> C
    I --> A
    I --> B
    I --> C
```
```mermaid
flowchart TD
    A[Start: Data Problem] --> B{Labeled Data?}
    B -->|Yes| C[Supervised Learning]
    B -->|No| D[Unsupervised Learning]
    C --> E{Output Type?}
    E -->|Continuous| F[Regression]
    E -->|Categories| G[Classification]
    D --> H{Goal?}
    H -->|Group Similar| I[Clustering]
    H -->|Reduce Dimensions| J[PCA/t-SNE]
    F --> K[Linear Regression]
    G --> L[Logistic Regression]
    K --> M[Train Model]
    L --> M
    I --> M
    J --> M
    M --> N[Evaluate]
    N --> O{Good Performance?}
    O -->|No| P[Adjust Parameters]
    P --> M
    O -->|Yes| Q[Deploy]
```
```mermaid
graph LR
    A[Collect Data] --> B[Clean Data]
    B --> C[Explore Data]
    C --> D[Feature Engineering]
    D --> E[Train Model]
    E --> F[Evaluate]
    F --> G[Deploy]
    style A fill:#ff9999
    style G fill:#99ff99
```
```mermaid
graph TB
    subgraph Student
        A[Read Material]
        D[Take Notes]
        G[Create Summary]
    end
    subgraph Professor
        B[Assign Reading]
        E[Give Lecture]
    end
    subgraph System
        C[Provide Resources]
        F[Track Progress]
        H[Generate Quiz]
    end
    B --> A
    A --> C
    C --> D
    E --> D
    D --> F
    F --> G
    G --> H
```
```mermaid
timeline
    title History of Machine Learning
    1950 : Turing Test
         : AI as a field emerges
    1956 : Dartmouth Conference
         : "Artificial Intelligence" term coined
    1997 : Deep Blue defeats Kasparov
    2011 : Watson wins Jeopardy
    2012 : AlexNet breakthrough in ImageNet
    2016 : AlphaGo defeats Lee Sedol
    2022 : ChatGPT released
```
```mermaid
gantt
    title Semester Project Timeline
    dateFormat YYYY-MM-DD
    section Research
    Literature review     :done, 2024-01-15, 14d
    Data collection       :active, 2024-01-29, 7d
    section Analysis
    Data cleaning         :2024-02-05, 3d
    Statistical analysis  :2024-02-08, 7d
    section Writing
    First draft           :2024-02-15, 10d
    Revisions             :2024-02-25, 5d
    Final submission      :milestone, 2024-03-01, 0d
```
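create-timeline.js is likewise only referenced by name above. A hedged sketch of its core step, assuming the input JSON is an array of `{ year, event }` records (that schema is an assumption, not the script's documented format):

```javascript
// Sketch only: turn an array of { year, event } records into a
// Mermaid timeline block like the example above.
// Assumption: events are already sorted chronologically.
function eventsToTimeline(title, events) {
  const lines = ["timeline", `    title ${title}`];
  for (const { year, event } of events) {
    lines.push(`    ${year} : ${event}`); // one entry per event
  }
  return lines.join("\n");
}

// Usage sketch:
// eventsToTimeline("History of Machine Learning", [
//   { year: 1950, event: "Turing Test" },
//   { year: 2012, event: "AlexNet breakthrough in ImageNet" },
// ]);
```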
```
1950s ─────► 1990s ─────► 2010s ─────► 2020s
  │            │            │            │
Symbolic AI  Statistical  Deep         Generative
Expert       ML           Learning     AI
Systems      SVMs         CNNs         GPT/DALL-E
                          RNNs         Stable Diffusion
```
| Algorithm | Type | Data Needs | Interpretability | Speed | Accuracy |
|---|---|---|---|---|---|
| Linear Regression | Supervised | Low | ⭐⭐⭐ | ⚡⚡⚡ | ⭐⭐ |
| Decision Tree | Supervised | Medium | ⭐⭐⭐ | ⚡⚡ | ⭐⭐ |
| Neural Network | Supervised | High | ⭐ | ⚡ | ⭐⭐⭐ |
| K-Means | Unsupervised | Low | ⭐⭐ | ⚡⚡⚡ | ⭐⭐ |
| Aspect | Supervised Learning | Unsupervised Learning |
|--------|--------------------|-----------------------|
| **Data** | Labeled (inputs + outputs) | Unlabeled (inputs only) |
| **Goal** | Predict outputs for new inputs | Discover patterns/structure |
| **Examples** | Classification, Regression | Clustering, Dim. reduction |
| **Algorithms** | Linear Reg., Neural Nets | K-Means, PCA |
| **Evaluation** | Accuracy, precision, recall | Silhouette score, elbow method |
| **Use Cases** | Spam detection, price prediction | Customer segmentation, anomaly detection |
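build-comparison-table.js can be sketched in the same spirit, assuming the input JSON lists column names plus one object per row (again an illustrative schema, not the script's actual format):

```javascript
// Sketch only: emit a Markdown comparison table from column names
// and row objects keyed by column name. Missing cells become blanks.
function buildComparisonTable(columns, rows) {
  const header = `| ${columns.join(" | ")} |`;
  const divider = `|${columns.map(() => "---").join("|")}|`;
  const body = rows.map(
    (row) => `| ${columns.map((c) => row[c] ?? "").join(" | ")} |`
  );
  return [header, divider, ...body].join("\n");
}

// Usage sketch:
// buildComparisonTable(
//   ["Aspect", "Python", "R"],
//   [{ Aspect: "Purpose", Python: "General-purpose", R: "Statistics-focused" }]
// );
```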
```
┌─────────────────────┬────────────────────┬───────────────────┐
│ Algorithm           │ Advantages         │ Disadvantages     │
├─────────────────────┼────────────────────┼───────────────────┤
│ Linear Regression   │ • Fast             │ • Assumes linear  │
│                     │ • Interpretable    │   relationship    │
│                     │ • Low data needs   │ • Sensitive to    │
│                     │                    │   outliers        │
├─────────────────────┼────────────────────┼───────────────────┤
│ Neural Network      │ • High accuracy    │ • "Black box"     │
│                     │ • Handles complex  │ • Needs lots of   │
│                     │   patterns         │   data            │
│                     │ • Versatile        │ • Computationally │
│                     │                    │   expensive       │
└─────────────────────┴────────────────────┴───────────────────┘
```
```
┌────────────────────────────────────────────────────┐
│ Topic: Machine Learning Fundamentals               │
│ Date: Jan 15, 2024                                 │
├──────────────┬─────────────────────────────────────┤
│              │                                     │
│ KEY CONCEPTS │ VISUAL NOTES                        │
│              │                                     │
│ • Supervised │ [Diagram of labeled data]           │
│   Learning   │            ↓                        │
│              │ [Algorithm box]                     │
│              │            ↓                        │
│ • Features   │ [Prediction output]                 │
│              │                                     │
│ • Training   │ Training = Learning patterns        │
│   vs Testing │ Testing = Checking accuracy         │
│              │                                     │
│              │ [Icon: Brain] → [Icon: Computer]    │
│              │                                     │
│ • Models     │ Types of Models:                    │
│              │ 🔹 Linear (simple)                  │
│              │ 🔹 Tree (decisions)                 │
│              │ 🔹 Neural (complex)                 │
│              │                                     │
├──────────────┴─────────────────────────────────────┤
│ SUMMARY:                                           │
│ ML = Computers learning patterns from data         │
│ Supervised = Learn from labeled examples           │
│ Goal = Make accurate predictions on new data       │
└────────────────────────────────────────────────────┘
```
Common Icons for Concepts:
```
📊 Data/Statistics     💡 Idea/Concept
🔄 Process/Cycle       ⚠️ Warning/Important
📈 Growth/Increase     ✓  Success/Correct
📉 Decline/Decrease    ✗  Error/Incorrect
🎯 Goal/Target         🔍 Analysis/Detail
⚡ Speed/Quick         🐌 Slow/Gradual
💰 Cost/Money          ⏰ Time/Deadline
👥 People/Users        🖥️ Computer/System
📚 Learning/Study      🧠 Intelligence/Thinking
```
Visual Connectors:
```
──►  Leads to           ⟷    Bidirectional
┄┄►  Weak connection    ⇉    Strong connection
─┬─  Branches           ═══► Important path
 │
 └─► Alternative
```
```mermaid
graph LR
    A[Concepts] --> B[Examples]
    A --> C[Applications]
    B --> D[Practice Problems]
    style A fill:#e1f5ff
    style B fill:#fff5e1
    style C fill:#e1ffe1
    style D fill:#ffe1f5
```
```
            Topic 2
               |
Topic 1 ── MAIN IDEA ── Topic 3
               |
            Topic 4

            Main Topic
                |
        ┌───────┴───────┐
        |               |
   SubTopic 1      SubTopic 2
        |               |
      ┌─┴─┐           ┌─┴─┐
  Detail   Detail Detail   Detail

Step 1 → Step 2 → Step 3 → Step 4 → Result

         Category A | Category B | Category C
Item 1       ✓      |     ✗      |     ✓
Item 2       ✗      |     ✓      |     ✓
Item 3       ✓      |     ✓      |     ✗
```
```mermaid
graph TD
    A[Use short labels] --> B[Limit to 7-10 nodes]
    B --> C[Choose right diagram type]
    C --> D[Add descriptive title]
    style A fill:#e1f5ff
    style D fill:#ffe1e1
```
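The "limit to 7-10 nodes" guideline can be checked mechanically. A hypothetical helper (not one of the scripts listed earlier) that counts distinct node IDs in a simple Mermaid graph definition:

```javascript
// Sketch only: count distinct node declarations such as A[Label],
// C{Question} or D(Round) in a Mermaid graph definition.
function countMermaidNodes(diagram) {
  const ids = new Set();
  const re = /(\w+)\s*[\[\{\(]/g; // identifier followed by an opening shape
  let m;
  while ((m = re.exec(diagram)) !== null) {
    ids.add(m[1]);
  }
  return ids.size;
}

// Usage sketch: warn when a diagram grows past ~10 nodes.
// if (countMermaidNodes(diagram) > 10) console.warn("Consider splitting this diagram.");
```

This is a rough heuristic, not a parser: node labels that themselves contain brackets or parentheses would inflate the count.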
Diagram Type Selection:

| Note Structure | Best Visual Format |
|---|---|
| Hierarchical | Mind map |
| Relationships between ideas | Concept map |
| Sequential / process | Flowchart or timeline |
| Comparative | Comparison table/matrix |
- **Dual Coding**: Combine text + visual for better retention
- **Spatial Memory**: Use position to encode relationships
- **Color Associations**: Consistent color use creates mental links
- **Personal Icons**: Create your own symbol system
Text Notes:
```
Machine Learning Types:
- Supervised Learning (uses labeled data)
  - Classification (categories)
  - Regression (continuous values)
- Unsupervised Learning (no labels)
  - Clustering (group similar items)
  - Dimensionality Reduction (simplify data)
```
Mind Map:
```mermaid
graph TD
    A[Machine Learning] --> B[Supervised]
    A --> C[Unsupervised]
    B --> D[Classification<br/>categories]
    B --> E[Regression<br/>continuous]
    C --> F[Clustering<br/>group similar]
    C --> G[Dim. Reduction<br/>simplify]
```
Text Notes:
```
Python vs R for Data Science:
Python is general-purpose, easier to learn, better for production.
R is statistics-focused, great for visualization, academic preference.
Both have good ML libraries.
```
Comparison Table:
| Aspect | Python | R |
|---|---|---|
| Purpose | General-purpose | Statistics-focused |
| Learning Curve | Easier | Steeper |
| Production Use | ⭐⭐⭐ | ⭐ |
| Visualization | ⭐⭐ | ⭐⭐⭐ |
| ML Libraries | ⭐⭐⭐ | ⭐⭐⭐ |
| Community | Large, diverse | Academic |
For detailed information:
- resources/visual-note-patterns.md
- resources/mermaid-examples.md
- resources/color-coding-systems.md
- resources/sketchnote-templates.md