Implement a new objective for jolt-eval
This skill handles all the boilerplate: creating the objective struct, implementing the Objective trait, registering it in the appropriate enum, creating a const key, adding an ObjectiveFunction, creating a Criterion benchmark, and running sync_targets.sh.
</Purpose>
<Execution_Policy>
Constraints:
- `{{ARGUMENTS}}`: must be a valid Rust identifier (lowercase alphanumeric + underscores). Reject otherwise.
- Static analysis objectives implement `description()` and `collect_measurement()`, and use `Setup = ()`.
- Performance objectives implement `setup()` and `run()` (and participate in `diff_paths()` scoping).

Read these files first:
- `jolt-eval/src/objective/mod.rs` to understand the current enums and dispatch methods.
- `jolt-eval/src/objective/objective_fn/mod.rs` to understand objective function registration.

Reference implementations:
- Static analysis: `jolt-eval/src/objective/code_quality/lloc.rs`
- Performance: `jolt-eval/src/objective/performance/binding.rs`

Create `jolt-eval/src/objective/code_quality/<objective_name>.rs`:
```rust
use std::path::Path;

use crate::objective::{
    MeasurementError, Objective, OptimizationObjective, StaticAnalysisObjective,
};

pub const <UPPER_NAME>: OptimizationObjective =
    OptimizationObjective::StaticAnalysis(StaticAnalysisObjective::<VariantName>(<Name>Objective {
        target_dir: "<target_directory>",
    }));

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct <Name>Objective {
    pub(crate) target_dir: &'static str,
}

impl <Name>Objective {
    pub fn collect_measurement_in(&self, repo_root: &Path) -> Result<f64, MeasurementError> {
        let src_dir = repo_root.join(self.target_dir);
        // Implement measurement logic
        todo!()
    }
}

impl Objective for <Name>Objective {
    type Setup = ();

    fn name(&self) -> &str {
        "<objective_name>"
    }

    fn description(&self) -> String {
        format!("Description of measurement in {}", self.target_dir)
    }

    fn setup(&self) {}

    fn collect_measurement(&self) -> Result<f64, MeasurementError> {
        let repo_root = Path::new(env!("CARGO_MANIFEST_DIR")).parent().unwrap();
        self.collect_measurement_in(repo_root)
    }

    fn units(&self) -> Option<&str> {
        Some("units")
    }
}
```
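As an illustration of what the measurement logic inside `collect_measurement_in` might look like (this is a hypothetical metric, not the project's actual one — the helper name and the "non-empty, non-comment lines" rule are assumptions), a standalone sketch that walks a directory and counts lines in `.rs` files:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hypothetical measurement: count non-empty, non-`//`-comment lines in all
/// `.rs` files under `dir`, recursing into subdirectories.
fn count_source_lines(dir: &Path) -> io::Result<f64> {
    let mut total = 0u64;
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            // Recurse into subdirectories and accumulate their counts.
            total += count_source_lines(&path)? as u64;
        } else if path.extension().map_or(false, |e| e == "rs") {
            total += fs::read_to_string(&path)?
                .lines()
                .filter(|l| {
                    let t = l.trim();
                    !t.is_empty() && !t.starts_with("//")
                })
                .count() as u64;
        }
    }
    // Objectives report f64, so convert the count at the boundary.
    Ok(total as f64)
}
```

In the real objective this body would replace the `todo!()` and map any `io::Error` into `MeasurementError`.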
Create `jolt-eval/src/objective/performance/<objective_name>.rs`:

```rust
use crate::objective::{Objective, OptimizationObjective, PerformanceObjective};

pub const <UPPER_NAME>: OptimizationObjective =
    OptimizationObjective::Performance(PerformanceObjective::<VariantName>(<Name>Objective));

pub struct <Name>Setup {
    // Pre-computed data for each iteration
}

#[derive(Clone, Copy, Default, PartialEq, Eq, Hash)]
pub struct <Name>Objective;

impl Objective for <Name>Objective {
    type Setup = <Name>Setup;

    fn name(&self) -> &str {
        "<objective_name>"
    }

    fn description(&self) -> String {
        "What is being benchmarked and at what scale".to_string()
    }

    fn setup(&self) -> <Name>Setup {
        // Use the thread_local! { static SHARED: ... } pattern for expensive one-time
        // init that should be amortized across Criterion iterations.
        // Return a fresh Setup that can be consumed by run().
        todo!()
    }

    fn run(&self, setup: <Name>Setup) {
        // The hot path that Criterion measures.
        // Use std::hint::black_box() to prevent dead-code elimination.
        todo!()
    }

    fn units(&self) -> Option<&str> {
        Some("s")
    }
}
```
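The amortized-setup pattern referenced in the `setup()` comment can be sketched in isolation like this (the names `Shared`, `DemoSetup`, the data size, and the summing workload are all illustrative; the sketch returns the sum so `black_box` has a value to pin, whereas the trait's `run()` returns `()`):

```rust
use std::hint::black_box;

// Expensive one-time state, built once per thread and reused across iterations.
struct Shared {
    data: Vec<u64>,
}

impl Shared {
    fn new() -> Self {
        // Stand-in for costly initialization (e.g. random data generation).
        Shared { data: (0..10_000).collect() }
    }
}

thread_local! {
    static SHARED: Shared = Shared::new();
}

struct DemoSetup {
    data: Vec<u64>,
}

// Per-iteration setup: a cheap clone of the amortized shared state.
fn setup() -> DemoSetup {
    SHARED.with(|s| DemoSetup { data: s.data.clone() })
}

// The hot path Criterion would measure; black_box prevents the compiler
// from optimizing the computation away.
fn run(setup: DemoSetup) -> u64 {
    black_box(setup.data.iter().sum())
}
```

The key property: `Shared::new()` runs once per thread no matter how many iterations Criterion performs, while `setup()` and `run()` execute every iteration.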
Performance objective guidelines:
- Use `thread_local!` with a `Shared` struct for expensive setup (random data generation, etc.) that should be amortized
- The `setup()` method is called per-iteration by Criterion — keep it cheap (clone from shared state)
- The `run()` method is what Criterion measures — this is the hot path
- Use `std::hint::black_box()` on the result to prevent the compiler from optimizing away the computation

Edit the appropriate `mod.rs`:
- `jolt-eval/src/objective/code_quality/mod.rs` — add `pub mod <objective_name>;`
- `jolt-eval/src/objective/performance/mod.rs` — add `pub mod <objective_name>;`

Edit `jolt-eval/src/objective/mod.rs`:
For static analysis, add a variant to `StaticAnalysisObjective` and update:
- `all()` with the `target_dir` field
- `name()`, `description()`, `collect_measurement()`, `collect_measurement_in()`, `units()`

For performance, add a variant to `PerformanceObjective` and update:
- `all()`
- `name()`, `units()`, `description()`
- `diff_paths()` — return the appropriate path slice

Add a `pub use` line in `jolt-eval/src/objective/mod.rs`:
```rust
pub use <submodule>::<objective_name>::<UPPER_NAME>;
```
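The "add a variant, then extend every dispatch method" step can be illustrated with a miniature analog (the variant names and method bodies here are made up for illustration — the real enums live in `jolt-eval/src/objective/mod.rs`):

```rust
// A miniature analog of the dispatch pattern: each enum method matches on
// every variant, so adding a variant means extending each match arm.
#[derive(Clone, Copy)]
enum StaticAnalysisObjective {
    Lloc,
    NewMetric, // the newly added variant
}

impl StaticAnalysisObjective {
    // Every registered objective must also be listed here.
    fn all() -> &'static [StaticAnalysisObjective] {
        &[
            StaticAnalysisObjective::Lloc,
            StaticAnalysisObjective::NewMetric,
        ]
    }

    // Dispatch: forgetting to extend this match is a compile error,
    // which is what makes the exhaustive-enum pattern safe.
    fn name(&self) -> &str {
        match self {
            StaticAnalysisObjective::Lloc => "lloc",
            StaticAnalysisObjective::NewMetric => "new_metric",
        }
    }
}
```

Because Rust `match` on an enum is exhaustive, the compiler points at every dispatch method that still needs the new arm.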
Edit `jolt-eval/src/objective/objective_fn/mod.rs`:
Import the const key:
```rust
use super::{..., <UPPER_NAME>};
```
Add a const ObjectiveFunction:
```rust
pub const MINIMIZE_<UPPER_NAME>: ObjectiveFunction = ObjectiveFunction {
    name: "minimize_<objective_name>",
    inputs: &[<UPPER_NAME>],
    evaluate: |m, _| m.get(&<UPPER_NAME>).copied().unwrap_or(f64::INFINITY),
};
```
Add it to ObjectiveFunction::all().
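The `evaluate` closure's missing-measurement fallback is worth seeing in isolation (the `HashMap` stand-in for the measurement map and the free-function shape are assumptions for illustration):

```rust
use std::collections::HashMap;

// Stand-in for the measurement map passed to `evaluate`:
// objective key -> measured value.
fn evaluate(measurements: &HashMap<&str, f64>, key: &str) -> f64 {
    // A missing measurement evaluates to +inf, so a minimizing
    // optimizer never prefers a candidate it failed to measure.
    measurements.get(key).copied().unwrap_or(f64::INFINITY)
}
```

This is why the template uses `unwrap_or(f64::INFINITY)` rather than `unwrap()`: an absent measurement degrades the score instead of panicking.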
Create `jolt-eval/benches/<objective_name>.rs`:
```rust
use jolt_eval::objective::performance::<objective_name>::<Name>Objective;

jolt_eval::bench_objective!(<Name>Objective);
```
Then run `./jolt-eval/sync_targets.sh` to update `Cargo.toml` with the new `[[bench]]` entry.
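For reference, the entry sync_targets.sh adds should look roughly like the following (the exact shape produced by the script is an assumption; `harness = false` is what Criterion benchmarks require so Cargo's default bench harness doesn't run instead):

```toml
[[bench]]
name = "<objective_name>"
harness = false
```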
Run these commands (all must pass):
```sh
# Format
cargo fmt -q

# Lint
cargo clippy -p jolt-eval -q --all-targets -- -D warnings

# Run tests
cargo nextest run -p jolt-eval --cargo-quiet

# For static analysis objectives, verify the measurement works
cargo run -p jolt-eval --bin measure-objectives -- --objective <objective_name>

# For performance objectives, verify the benchmark compiles
cargo bench -p jolt-eval --bench <objective_name> -- --test
```
If any step fails, fix the issue and re-run.
Task: Implement a new objective for jolt-eval. {{ARGUMENTS}}