15,552 skills
Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns.
Automatically select optimal Claude model (Haiku/Sonnet/Opus) based on task complexity to reduce costs while maintaining quality.
Reads text using a customized, hyper-realistic voice clone.
Step-by-step workflow to identify fallback-driven regressions and replace them with strict source-level validation and fail-closed transitions.
🧠 Activate Yann LeCun's cognitive framework — Father of Convolutional Neural Networks, Meta Chief AI Scientist, one of the three pioneers of deep learning. Applicable scenarios: Computer vision design, convolutional network architecture, self-supervised learning, solving open-ended problems, balancing engineering and theory. Core paradigm: Convolutional inductive bias + self-supervised learning + energy models + engineering pragmatism.
Use this skill when you need to propose a localized, conservative, and reviewable fix.
Deprecated skill kept only for legacy reference. Do not use for new doc visuals; prefer `nano-banana-2`, then `nano-banana`, then `ai-image-generation`, with flow/diagram fallback when image-first is not the right fit.
Use when the task is deep-research or hybrid, when prior internal decisions may matter, when the user asks for comparison or architecture investigation, or when LDR-style research with conditional AutoMem recall is required.
Use this skill when a feature requires coordinated changes in teachmewow-helix and teachmewow-agent. It enforces parallel track planning, per-repo checklists, and final convergence validation.
Use this skill when modifying payment states, implementing verification flows, or calculating late statuses in the Aegis platform.
Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents.
Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
Maintain fixture-based regression testing for deterministic extraction and generation outputs.
Guides AI coding agents through the lmms-eval codebase - a unified evaluation framework for Large Multimodal Models (LMMs). Use when integrating new models, adding evaluation tasks/benchmarks, using the HTTP eval server, or navigating the evaluation pipeline architecture.
GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16× faster), quality filtering (30+ heuristics), semantic deduplication, PII redaction, NSFW detection. Scales across GPUs with RAPIDS. Use for preparing high-quality training datasets, cleaning web data, or deduplicating large corpora.
Machine-learning predictive strategy based on sklearn walk-forward training, feature engineering, and signal generation. Suitable for any OHLCV data.
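The walk-forward training this entry describes can be sketched as a rolling split generator. A minimal pure-Python illustration (not the skill's actual implementation; window sizes are hypothetical), showing the key invariant that every training window strictly precedes its test window:

```python
def walk_forward_splits(n_samples, train_size, test_size, step=None):
    """Yield (train_indices, test_indices) pairs that roll forward in time.

    The model is always refit on a window strictly before the test window,
    mimicking live deployment where future bars are unseen.
    """
    step = step or test_size
    start = 0
    while start + train_size + test_size <= n_samples:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += step

# 10 bars of OHLCV data, 4-bar training window, 2-bar test window
splits = list(walk_forward_splits(10, train_size=4, test_size=2))
```

Each yielded pair would be fed to a fresh sklearn fit/predict cycle, so signals are always generated out-of-sample.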
This skill helps an LLM generate correct AxAgent tuning and evaluation code using @ax-llm/ax. Use when the user asks about agent.optimize(...), judgeOptions, eval datasets, optimization targets, saved optimizedProgram artifacts, or recursive optimization guidance.
This skill helps an LLM generate correct AxLearn code using @ax-llm/ax. Use when the user asks about self-improving agents, trace-backed learning, feedback-aware updates, or AxLearn modes.
This skill helps an LLM generate correct AxGen code using @ax-llm/ax. Use when the user asks about ax(), AxGen, generators, forward(), streamingForward(), assertions, field processors, step hooks, self-tuning, or structured outputs.
This skill should be used for time series machine learning tasks including classification, regression, clustering, forecasting, anomaly detection, segmentation, and similarity search. Use when working with temporal data, sequential patterns, or time-indexed observations requiring specialized algorithms beyond standard ML approaches. Particularly suited for univariate and multivariate time series analysis with scikit-learn compatible APIs.
Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.
Compress large language models using knowledge distillation from teacher to student models. Use when deploying smaller models with retained performance, transferring GPT-4 capabilities to open-source models, or reducing inference costs. Covers temperature scaling, soft targets, reverse KLD, logit distillation, and MiniLLM training strategies.
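The temperature scaling and soft targets mentioned above reduce to a small amount of math. A minimal pure-Python sketch (toy logits, not a real training loop), showing the classic Hinton-style distillation term, with reverse KLD available by swapping the KL arguments as in MiniLLM:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T flattens the distribution,
    # exposing the teacher's relative confidence across wrong classes.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    # Forward KL(p || q); reverse KLD (MiniLLM-style) is kl_div(q, p).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.2]
student_logits = [3.0, 1.5, 0.5]
T = 2.0
soft_targets = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)
# Distillation loss term, scaled by T^2 so gradient magnitudes stay
# comparable across temperatures (as in the original distillation paper).
loss = (T ** 2) * kl_div(soft_targets, student_probs)
```

In practice this term is combined with the ordinary cross-entropy on hard labels, weighted by a mixing coefficient.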
Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.
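Of the merge methods listed, SLERP is the easiest to show in isolation. A minimal pure-Python sketch over flattened weight vectors (mergekit applies this per-tensor with its own configuration; the toy vectors here are illustrative only):

```python
import math

def slerp(w_a, w_b, t, eps=1e-8):
    # Spherical linear interpolation between two weight vectors.
    # Unlike plain linear interpolation, it follows the arc between the
    # vectors, preserving magnitude when the endpoints share a norm.
    norm_a = math.sqrt(sum(x * x for x in w_a))
    norm_b = math.sqrt(sum(x * x for x in w_b))
    dot = sum(a * b for a, b in zip(w_a, w_b)) / (norm_a * norm_b + eps)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(w_a, w_b)]
    sin_t = math.sin(theta)
    ca = math.sin((1 - t) * theta) / sin_t
    cb = math.sin(t * theta) / sin_t
    return [ca * a + cb * b for a, b in zip(w_a, w_b)]

merged = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

At t = 0.5 between two orthogonal unit vectors, SLERP returns a unit vector, whereas linear interpolation would shrink the norm to about 0.71 before renormalization.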
Build predictive models using linear regression, polynomial regression, and regularized regression for continuous prediction, trend forecasting, and relationship quantification.
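The simplest case of the linear regression above has a closed form. A minimal pure-Python sketch for one feature (ordinary least squares; the example data are made up):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, single feature:
    # slope a = cov(x, y) / var(x), intercept b = mean(y) - a * mean(x).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lie exactly on y = 2x + 1
```

Polynomial and regularized variants extend the same idea by expanding the feature matrix and adding a penalty term to the normal equations.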
Skill for implementing the unified RHIVE Tech-Noir design system including Glassmorphism and Circuitry logic.
Create and transform features using encoding, scaling, polynomial features, and domain-specific transformations for improved model performance and interpretability.
Build recommendation systems using collaborative filtering, content-based filtering, matrix factorization, and neural network approaches.
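Two of the transformations named above, one-hot encoding and standard scaling, fit in a few lines. A minimal pure-Python sketch (toy inputs; real pipelines would use fitted transformers so train-time statistics are reused at inference):

```python
def one_hot(values):
    # Map each category to a binary indicator vector.
    # Categories are sorted so the column order is deterministic.
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values], cats

def standardize(xs):
    # Zero-mean, unit-variance scaling (population standard deviation).
    n = len(xs)
    mu = sum(xs) / n
    sd = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return [(x - mu) / sd for x in xs]

encoded, cats = one_hot(["red", "blue", "red"])
scaled = standardize([1.0, 2.0, 3.0])
```

The key production concern is reusing the fitted vocabulary and statistics: categories and means learned on training data must be applied unchanged to new data.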
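The user-based flavor of the collaborative filtering above can be sketched with cosine similarity and a weighted average. A minimal pure-Python illustration (toy ratings matrix, 0 meaning "not rated"; real systems would use sparse structures and matrix factorization at scale):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def predict(ratings, user, item):
    # Similarity-weighted average of other users' ratings for `item`;
    # only users who actually rated the item (rating > 0) contribute.
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue
        sim = cosine(ratings[user], row)
        num += sim * row[item]
        den += abs(sim)
    return num / den if den else 0.0

ratings = [
    [5, 3, 0],   # target user: rating for item 2 is unknown
    [4, 2, 4],
    [1, 5, 2],
]
score = predict(ratings, user=0, item=2)
```

Content-based filtering replaces the user-user similarity with item-feature similarity; matrix factorization learns low-rank user and item embeddings instead.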
Deterministic mechanical gear math for spur, helical, bevel, and worm geometries with CAD-ready dimension outputs.
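For the spur case, the deterministic gear math above follows standard metric proportions. A minimal pure-Python sketch (ISO basic-rack proportions: addendum = m, dedendum = 1.25 m; the helper name and return format are illustrative, not the skill's actual interface):

```python
import math

def spur_gear_dims(module_mm, teeth, pressure_angle_deg=20.0):
    # Standard metric spur gear dimensions from module m and tooth count z:
    #   pitch diameter d  = m * z
    #   tip diameter   da = d + 2 * m          (addendum = m)
    #   root diameter  df = d - 2 * 1.25 * m   (dedendum = 1.25 * m)
    #   base diameter  db = d * cos(pressure angle)
    d = module_mm * teeth
    da = d + 2 * module_mm
    df = d - 2 * 1.25 * module_mm
    db = d * math.cos(math.radians(pressure_angle_deg))
    return {"pitch_d": d, "tip_d": da, "root_d": df, "base_d": db}

dims = spur_gear_dims(2.0, 20)  # module 2 mm, 20 teeth
```

Helical, bevel, and worm geometries layer additional parameters (helix angle, cone distance, lead) onto the same pitch-line relationships.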
CAD-oriented dimensioning guidance and validation patterns for mechanical parts, with emphasis on gear geometry outputs.
Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.