Use when working with the scaleinvariance package for multifractal field simulation and analysis - Hurst exponent estimation, fractional Brownian motion (fBm), fractionally integrated flux (FIF) multiplicative cascades, intermittency analysis, and Generalized Scale Invariance (GSI). Also use for scaling analysis of time series, turbulence data, atmospheric fields, or any task involving self-similar, cascade, or multifractal processes or fields.
Simulation and analysis tools for multifractal fields and time series. Version 0.10.0.
Version check: If the installed scaleinvariance version does not match the version above, this skill may be outdated. Fetch the latest skill file from https://raw.githubusercontent.com/thomasdewitt/scaleinvariance/master/agent-skills/scaleinvariance/SKILL.md and save it to your local skill folder.
Documentation: See README.md plus the source docstrings in scaleinvariance/simulation/.
When analyzing multiple arrays (e.g., multiple time series or spatial snapshots), stack them along a NEW axis and pass ALL AT ONCE whenever possible. Analysis functions are, in general, NOT linear operations.
import numpy as np
import scaleinvariance
# Say we have 10 independent time series, each of length 2048
time_series_list = [generate_some_data() for _ in range(10)]
# CORRECT - stack along new axis (axis 0) and compute along original axis (axis 1)
stacked = np.stack(time_series_list, axis=0) # Shape: (10, 2048)
H, err = scaleinvariance.structure_function_hurst(stacked, axis=1) # Analyze along axis 1
# WRONG - do NOT loop and combine
# results = []
# for ts in time_series_list:
# H, err = scaleinvariance.structure_function_hurst(ts) # NO!
# results.append(H)
# H_avg = np.mean(results) # INCORRECT - these are not linear operations!
This applies to all analysis functions: structure_function_hurst, haar_fluctuation_hurst, spectral_hurst, two_point_C1, and their underlying scaling functions (structure_function, haar_fluctuation, power_spectrum_binned).
If you are analyzing a large dataset that does not fit in memory, you MUST carefully consider whether the specific function you are using is linear. In the example above, the Hurst exponent is computed with a linear regression and is NOT linear. Some functions, such as structure_function with order=1 applied to equal-size input arrays, do return outputs that can be combined linearly; you may then have to do your own linear regression on the combined values. Carefully consider when loops are valid (Is the function linear? Is each input chunk the same size? Does NaN prevalence change between chunks?), and when you are unsure, explain the situation to your user!
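To make the distinction concrete, here is a small pure-NumPy sketch (independent of the package; s1 is a local helper, not the library function) showing why order-1 structure-function values combine linearly over equal-size chunks while a Hurst fit does not:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).cumsum()   # Brownian-like walk
a, b = x[:2048], x[2048:]                # two equal-size chunks
lag = 16

def s1(y, r):
    """First-order structure function S_1(r) = mean |y[i+r] - y[i]|."""
    return np.mean(np.abs(y[r:] - y[:-r]))

# Equal-size chunks: the jointly computed statistic equals the average of
# the per-chunk statistics, so looping over chunks is safe for S_1 itself
stacked = np.stack([a, b], axis=0)
joint = np.mean(np.abs(stacked[:, lag:] - stacked[:, :-lag]))
assert np.isclose(joint, 0.5 * (s1(a, lag) + s1(b, lag)))

# But the Hurst exponent is a log-log regression slope of such values;
# averaging per-chunk slopes is not the same as fitting the pooled
# statistics, because the regression is nonlinear in the data.
```

If you must loop over chunks, accumulate the raw scaling values this way and fit the slope yourself at the end.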
| Task | Function | Notes |
|---|---|---|
| Recommended when 0 < H < 1 | structure_function_hurst | First-order structure function scaling, robust |
| Wavelet-based alternative, works for -1<H<1 | haar_fluctuation_hurst | Uses Haar wavelet convolution, first order |
| Spectral method, works for all H | spectral_hurst | Fits the power-spectrum slope β = 2H + 1; uses second-order statistics, so it is not equivalent to the above for multifractal fields |
All three methods estimate the same H parameter but may give slightly different results. Structure function is recommended for most cases.
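For intuition about what these estimators do, here is a minimal pure-NumPy sketch of the structure-function approach on ordinary Brownian motion (true H = 0.5). This illustrates the underlying log-log slope fit only; it is not the package's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1 << 15).cumsum()    # Brownian motion: true H = 0.5

# Log-spaced lags, kept well below the record length for stable statistics
lags = np.unique(np.round(1.2 ** np.arange(0, 45)).astype(int))
lags = lags[(lags >= 1) & (lags <= len(x) // 32)]

# First-order structure function S_1(r) = mean |x(t + r) - x(t)|
s1 = np.array([np.mean(np.abs(x[r:] - x[:-r])) for r in lags])

# H is the slope of log S_1(r) against log r
H_est = np.polyfit(np.log(lags), np.log(s1), 1)[0]
print(f"H_est = {H_est:.3f} (true 0.5)")
```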
*Usually, the _hurst methods above are preferred to the functions in this section, since the underlying spectrum or scaling function can be obtained from them with the return_fit=True kwarg described below.*
| Task | Function | Notes |
|---|---|---|
| Structure function values | structure_function | Returns lags and S_n(r) values |
| Haar fluctuation values | haar_fluctuation | Returns lags and fluctuation values |
| Power spectral density | power_spectrum_binned | Returns binned frequencies and PSD |
| Task | Function | Notes |
|---|---|---|
| Estimate C1 parameter | two_point_C1 | Uses two-point scaling to estimate intermittency |
| Analytic K(q) function | K_analytic | Returns Lovejoy and Schertzer's K(q) = (C1/(alpha-1)) * (q^alpha - q) |
| Empirical K(q) from data | K_empirical | Estimates K(q) scaling function from data across range of q values |
| Task | Function | Notes |
|---|---|---|
| 1D multifractal (FIF) | FIF_1D | Fractionally integrated flux with H, C1, alpha |
| N-D multifractal (FIF) | FIF_ND | 2D, 3D, etc. with GSI support |
| 1D fBm (spectral) | fBm_1D_circulant | Fast spectral synthesis, periodic |
| N-D fBm (spectral) | fBm_ND_circulant | 2D, 3D, 4D, etc. spectral synthesis, isotropic |
| 1D fBm (fractional integration) | fBm_1D | Extended H range (-0.5, 1.5) |
| Spectral fractional integration | fractional_integral_spectral | Isotropic Fourier kernel, periodic, any real H |
| Two-regime (broken) fractional integration | broken_fractional_integral_spectral | Two Hurst exponents joined at a wavelength |
| Task | Function | Notes |
|---|---|---|
| Anisotropic scale metric | canonical_scale_metric | Creates stratified/anisotropic distance metric, for use in FIF and fBm methods above |
| Task | Function | Notes |
|---|---|---|
| Set backend | set_backend('numpy') or set_backend('torch') | Torch is often much faster but is a huge dependency |
| Get backend | get_backend() | Returns current backend name |
| Set threads | set_num_threads(n) | Default: 90% CPU count |
| Set precision | set_numerical_precision('float32') or ('float64') | Default: 'float32'. Applies to both backends. |
| Get precision | get_numerical_precision() | Returns current precision string |
| Set device | set_device('cpu') or set_device('cuda') | GPU acceleration (torch backend only) |
| Get device | get_device() | Returns current device string |
| Convert to numpy | to_numpy(x) | Convert backend result to numpy array |
scaleinvariance.structure_function_hurst(
data, # N-dimensional array
min_sep=None, # Min lag for fit (default: 1 or 4 depending on size)
max_sep=None, # Max lag for fit (default: size/8 or size-1 if small)
axis=0, # Axis along which to compute
return_fit=False # If True, also return lags, values, fit_line
) -> (H, uncertainty) | (H, uncertainty, lags, sf_values, fit_line)
scaleinvariance.haar_fluctuation_hurst(
data, # N-dimensional array
min_sep=None, # Min lag for fit
max_sep=None, # Max lag for fit
axis=0, # Axis along which to compute
return_fit=False # If True, also return lags, values, fit_line
) -> (H, uncertainty) | (H, uncertainty, lags, haar_values, fit_line)
scaleinvariance.spectral_hurst(
data, # N-dimensional array
max_wavelength=None, # Max wavelength for fit (default: size/8 or size)
min_wavelength=None, # Min wavelength for fit (default: 2 or 8)
nbins=50, # Number of log bins for PSD
axis=0, # Axis along which to compute FFT
return_fit=False # If True, also return freqs, psd, fit_line
) -> (H, uncertainty) | (H, uncertainty, freqs, psd_values, fit_line)
scaleinvariance.structure_function(
data, # N-dimensional array
order=1, # Order n of structure function S_n(r)
max_sep=None, # Maximum lag to compute
axis=0, # Axis along which to compute
lags='powers of 1.2' # 'all', 'powers of X', or array of lags
) -> (lags, sf_values)
scaleinvariance.haar_fluctuation(
data, # N-dimensional array
order=1, # Order of fluctuation
max_sep=None, # Maximum lag to compute
axis=0, # Axis along which to compute
lags='powers of 1.2', # 'all', 'powers of X', or array of lags
nan_behavior='raise' # 'raise' or 'ignore'
) -> (lags, haar_values)
scaleinvariance.power_spectrum_binned(
data, # N-dimensional array
max_wavelength=None, # Maximum wavelength to include
min_wavelength=None, # Minimum wavelength to include
nbins=50, # Number of log bins
axis=0 # Axis along which to compute FFT
) -> (binned_freq, binned_psd)
scaleinvariance.two_point_C1(
data, # N-dimensional array
order=2, # Second order for exponent (default: 2)
assumed_alpha=2.0, # Levy stability parameter (default: Gaussian)
scaling_method='structure_function', # or 'haar_fluctuation'
min_sep=None, # Min separation for scaling
max_sep=None, # Max separation for scaling
axis=0 # Axis along which to compute
) -> (C1, uncertainty) # NOTE: uncertainty from error propagation is approximate
scaleinvariance.K_analytic(
q, # Order of moment (float or array)
C1, # Codimension of mean (intermittency parameter)
alpha # Levy stability parameter
) -> K_value # K(q) = (C1/(alpha-1)) * (q^alpha - q)
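The analytic K(q) is simple enough to check directly. A small illustrative sketch (K here is a local helper, not the package function) verifying the standard properties K(0) = K(1) = 0 and K'(1) = C1:

```python
import numpy as np

def K(q, C1, alpha):
    """Lovejoy-Schertzer moment scaling: K(q) = (C1/(alpha-1)) * (q**alpha - q)."""
    q = np.asarray(q, dtype=float)
    return (C1 / (alpha - 1.0)) * (q ** alpha - q)

C1, alpha = 0.1, 1.8
assert np.isclose(K(0.0, C1, alpha), 0.0)   # zeroth moment: no scaling correction
assert np.isclose(K(1.0, C1, alpha), 0.0)   # the mean flux is conserved across scales

# The slope at q = 1 is the codimension of the mean, C1
dq = 1e-6
slope_at_1 = (K(1 + dq, C1, alpha) - K(1 - dq, C1, alpha)) / (2 * dq)
print(slope_at_1)   # numerically ~ C1 = 0.1
```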
scaleinvariance.K_empirical(
data, # N-dimensional array
q_values=None, # Moment orders (default: 0.1 to 2.5 by 0.1)
scaling_method='structure_function', # or 'haar_fluctuation'
hurst_fit_method='fixed', # 'fixed' or 'joint'
min_sep=None, # Min lag for scaling fits
max_sep=None, # Max lag for scaling fits
axis=0, # Axis along which to compute
nan_behavior='raise' # 'raise' or 'ignore' (haar_fluctuation only)
) -> (q_values, K_values, H_fit, C1_fit, alpha_fit)
scaleinvariance.FIF_1D(
size, # Length (must be even, power of 2 recommended)
alpha, # Levy stability parameter in [0.5, 2], != 1
C1, # Codimension of mean (intermittency), must be >= 0
# C1 = 0 routes to fBm (requires causal=False)
H, # Hurst exponent. 'spectral' path: any real (with
# overflow guard). 'LS2010'/'naive'/'spectral_odd'
# paths: H in [-1, 1] (H<0 via diff).
levy_noise=None, # Pre-generated noise for reproducibility
causal=False, # Use causal kernels (must be False for C1=0)
outer_scale=None, # Large-scale cutoff (default: size)
outer_scale_width_factor=2.0, # Transition width control
kernel_construction_method_flux='LS2010', # 'LS2010' or 'naive'
kernel_construction_method_observable='spectral', # 'spectral', 'LS2010', 'naive', or 'spectral_odd'
periodic=True # Full periodic output; set False to double internally and crop
) -> numpy.ndarray # Normalized by mean (or zero mean for H < 0 on non-spectral paths)
scaleinvariance.FIF_ND(
size, # Tuple of dimensions, e.g., (512, 512) or (256, 256, 128)
alpha, # Levy stability parameter in [0.5, 2], != 1
C1, # Codimension of mean (intermittency), must be >= 0
# C1 = 0 routes to fBm_ND_circulant()
H, # Hurst exponent. 'spectral' path: any real
# (with overflow guard). 'LS2010' path: H in [0, 1].
levy_noise=None, # Pre-generated noise for reproducibility
outer_scale=None, # Large-scale cutoff (default: max(size))
outer_scale_width_factor=2.0, # Transition width control
kernel_construction_method_flux='LS2010', # 'LS2010' only
kernel_construction_method_observable='spectral', # 'spectral' or 'LS2010'
periodic=False, # Bool or tuple of bool for per-axis periodicity
scale_metric=None, # Custom GSI distance metric
scale_metric_dim=None # Scaling dimension (default: spatial dimension)
) -> numpy.ndarray # Normalized by mean
The recommended way to produce reproducible FIF simulations is to explicitly generate the Lévy noise with a seed and pass it to the FIF function via the levy_noise parameter:
import scaleinvariance
from scaleinvariance.simulation.FIF import extremal_levy
# Generate reproducible noise
noise = extremal_levy(alpha=1.8, size=4096, seed=42)
# Pass noise to FIF — same noise always gives the same result
fif = scaleinvariance.FIF_1D(4096, alpha=1.8, C1=0.1, H=0.4, levy_noise=noise)
This approach is preferred over setting a global random seed because it makes the reproducibility intent explicit and avoids side effects on other random state.
# extremal_levy signature
scaleinvariance.simulation.FIF.extremal_levy(
alpha, # Lévy stability parameter in (0, 2)
size=1, # Number of samples
seed=None # Optional random seed for reproducibility
) -> numpy.ndarray
The circulant methods are preferred. fBm_1D instead uses a convolution with a power-law kernel, similar to FIF, which is not as accurate. However, fBm_1D can be used to limit the outer scale, control the noise, or make causal simulations, for example.
scaleinvariance.fBm_1D_circulant(
size, # Length (must be power of 2)
H, # Hurst parameter (typical: 0-1, but negative supported)
periodic=True
) -> numpy.ndarray # Zero mean, unit std
scaleinvariance.fBm_ND_circulant(
size, # Tuple of dimensions, e.g., (512, 512) for 2D, (256, 256, 128) for 3D (must be powers of 2)
H, # Hurst parameter (typical: 0-1, but negative supported)
periodic=True # Bool or tuple of bool for per-axis periodicity control
) -> numpy.ndarray # Zero mean, unit std
scaleinvariance.fBm_1D(
size, # Length (must be even, power of 2 recommended)
H, # Hurst exponent in (-0.5, 1.5)
causal=True, # Use causal kernels
outer_scale=None, # Large-scale cutoff
outer_scale_width_factor=2.0,
periodic=True, # Full periodic output; set False to double internally and crop
gaussian_noise=None, # Pre-generated noise
kernel_construction_method='LS2010' # 'naive', 'LS2010', or 'spectral'
) -> numpy.ndarray # Zero mean, unit std
Standalone spectral fractional integrators. Both apply an isotropic Fourier-space power-law kernel via rfftn/irfftn — i.e. circular convolution (periodic boundaries). They work on 1D or N-D real signals. The DC bin of the kernel is filled from the nearest non-DC bin (no spectral notch at DC), so a positive-valued input with significant mean (e.g. a FIF unit-mean flux) stays positive after integration. Note this means they are not mean-preserving — the output mean is scaled relative to the input mean by ~outer_scale**H.
scaleinvariance.fractional_integral_spectral(
signal, # Real 1D or N-D array
H, # Hurst exponent. Any finite real != 0.
# H > 0: integration (redden the spectrum)
# H < 0: differentiation (blue the spectrum)
# H == 0: raises ValueError (identity)
outer_scale=None, # Low-freq regularization scale (defaults to max(signal.shape))
) -> ndarray # Same shape as input. Not mean-preserving; the mean scales by ~outer_scale**H (see note above)
scaleinvariance.broken_fractional_integral_spectral(
signal,
H_small_scale, # Hurst exponent at small scales (|f| > k_t)
H_large_scale, # Hurst exponent at large scales (|f| <= k_t)
transition_wavelength, # Wavelength in grid units where slope changes.
# k_t = 1 / transition_wavelength.
outer_scale=None,
) -> ndarray # Same shape as input. Not mean-preserving (see note above)
The broken variant uses a piecewise power-law kernel: slope -H_large_scale for |f| <= k_t, slope -H_small_scale for |f| > k_t, with a continuity prefactor so the magnitude is continuous at k_t. Raises ValueError if both exponents are zero; accepts one being zero as a legitimate use (flat on one side, sloped on the other).
Overflow guard: both functions raise OverflowError before allocating if the peak kernel magnitude outer_scale ** max_positive_H would exceed the safe range of the active real dtype. Rough ceilings with default outer_scale: float32 tolerates up to H ≈ 6 at size 2^20 and H ≈ 4 at size 2^30; float64 tolerates up to H ≈ 51 at size 2^20. If you need larger H, call scaleinvariance.set_numerical_precision('float64') first. Negative H never overflows (kernel ≤ 1 at Nyquist).
FIF_1D and FIF_ND no longer accept the old combined kernel_construction_method= argument. Use kernel_construction_method_flux= and kernel_construction_method_observable= explicitly.
- Flux kernels: 'LS2010' (and deprecated 'naive' for 1D).
- Observable kernels: 'LS2010', 'spectral', and 'spectral_odd'.
- 'spectral' observable (default): exact Fourier-space power-law transfer function. Raises if causal=True (1D) or scale_metric is set (N-D).
- 'spectral_odd' observable (1D only): odd (antisymmetric) Fourier-space transfer function. Produces zero-mean output. Raises if causal=True.
- 'naive' FIF kernels remain available only for 1D comparison/debugging and emit a warning because their outputs are not remotely accurate.
- fBm_1D still uses a single kernel_construction_method= argument and now defaults to 'LS2010'. fBm_1D(..., kernel_construction_method='spectral') is supported for non-causal periodic runs.
- In FIF_ND, spectral kernels do not support a custom scale_metric.

scaleinvariance.canonical_scale_metric(
size, # (nx, nz) for 2D or (nx, ny, nz) for 3D
ls, # Spheroscale (characteristic horizontal scale), must be > 0
Hz=5/9 # Anisotropy exponent (default: atmospheric stratification)
) -> numpy.ndarray # Scale metric with dx=2 spacing (matches LS2010 kernels)
IMPORTANT for GSI with FIF_ND: When using canonical_scale_metric, you must also specify scale_metric_dim:
- 2D: scale_metric_dim = 1 + Hz
- 3D: scale_metric_dim = 2 + Hz
- 4D: scale_metric_dim = 3 + Hz

# Example: 2D GSI simulation
size = (256, 256)
sim_size = (512, 512) # Doubled for periodic=False
Hz = 0.556
ls = 100.0
metric = scaleinvariance.canonical_scale_metric(sim_size, ls=ls, Hz=Hz)
fif = scaleinvariance.FIF_ND(size, alpha=1.8, C1=0.1, H=0.3, periodic=False,
scale_metric=metric, scale_metric_dim=1 + Hz)
- FIF_1D: the 'spectral' observable path supports any real H (modulo an overflow guard at very large positive H).
- FIF_ND: the 'spectral' path (default) supports any real H; 'LS2010' requires H in [0, 1].

Control periodicity separately for each axis:
# Periodic in x, non-periodic in z (for e.g., horizontal periodicity, vertical boundaries)
fif = scaleinvariance.FIF_ND((512, 256), alpha=1.7, C1=0.1, H=0.3,
periodic=(True, False))
The PyTorch backend often provides significant speedups for simulations:
import scaleinvariance
# Check current backend
print(scaleinvariance.get_backend()) # 'numpy' or 'torch'
# Force numpy backend
scaleinvariance.set_backend('numpy')
# Use torch if available (auto-detected by default)
scaleinvariance.set_backend('torch')
# GPU acceleration (torch backend only)
scaleinvariance.set_device('cuda') # use GPU
scaleinvariance.set_device('cpu') # back to CPU
# Control threading
scaleinvariance.set_num_threads(8)
# Control numerical precision (default 'float32'). Doubles precision
# at the cost of 2x memory. Applies to both backends.
scaleinvariance.set_numerical_precision('float64')
assert scaleinvariance.get_numerical_precision() == 'float64'
All public functions return NumPy arrays regardless of backend. Internally, the torch backend keeps data as native tensors (on the active device) to avoid per-operation CPU<->GPU transfers.
FIF_1D and FIF_ND clip the internal log-flux to a precision-aware exponent range ((-88.7, 88.7) for float32, (-709, 709) for float64) before calling exp. This prevents unbounded cascade amplitudes from producing inf. If any samples hit the clip, a RuntimeWarning is emitted reporting the number of saturated samples; treat it as a signal to lower C1 or raise precision. For large simulations with alpha != 2, expect some clipping: this is the nature of extremal Lévy variables, and little can be done.
User-supplied scale_metric arrays in FIF_ND are validated as finite and strictly positive before being used.
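As a quick sanity check on those clip bounds (plain NumPy, not part of the package):

```python
import numpy as np

# The clip bounds sit just under log(dtype max) for each precision
with np.errstate(over='ignore'):
    ok32 = np.isfinite(np.exp(np.float32(88.7)))   # exp(88.7) ~ 3.37e38 < float32 max
    bad32 = np.isinf(np.exp(np.float32(89.0)))     # exp(89) overflows float32
    ok64 = np.isfinite(np.exp(709.0))              # exp(709) ~ 8.2e307 < float64 max
    bad64 = np.isinf(np.exp(710.0))                # exp(710) overflows float64
assert ok32 and bad32 and ok64 and bad64
```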
import numpy as np
import matplotlib.pyplot as plt
import scaleinvariance
# Generate test data
fif = scaleinvariance.FIF_1D(4096, alpha=1.8, C1=0.1, H=0.4)
# Estimate H with fit details
H, err, lags, sf, fit = scaleinvariance.structure_function_hurst(fif, return_fit=True)
print(f"H = {H:.3f} +/- {err:.3f}")
# Plot
plt.figure(figsize=(6, 4))
plt.loglog(lags, sf, 'o', label='Data')
plt.loglog(lags, fit, '-', label=f'Fit: H={H:.2f}')
plt.xlabel('Lag'); plt.ylabel('S_1(r)')
plt.legend(); plt.show()
import numpy as np
import scaleinvariance
# Generate multiple realizations
n_realizations = 20
size = 2048
H_true = 0.35
# Stack into 2D array: (n_realizations, size)
data = np.stack([
scaleinvariance.FIF_1D(size, alpha=1.8, C1=0.1, H=H_true)
for _ in range(n_realizations)
], axis=0)
# Analyze ALL at once, along axis 1
H_est, H_err = scaleinvariance.structure_function_hurst(data, axis=1)
print(f"Estimated H = {H_est:.3f} +/- {H_err:.3f} (true: {H_true})")
import numpy as np
import matplotlib.pyplot as plt
import scaleinvariance
# Parameters
size = (256, 256)
Hz = 0.556 # Atmospheric stratification
ls = 50.0 # Spheroscale
# Create doubled domain for non-periodic simulation
sim_size = (512, 512)
metric = scaleinvariance.canonical_scale_metric(sim_size, ls=ls, Hz=Hz)
# Generate GSI multifractal
fif = scaleinvariance.FIF_ND(size, alpha=1.7, C1=0.1, H=0.3, periodic=False,
scale_metric=metric, scale_metric_dim=1 + Hz)
plt.figure(figsize=(6, 5))
plt.imshow(np.log(fif), cmap='viridis', aspect='auto')
plt.colorbar(label='log(FIF)')
plt.xlabel('x'); plt.ylabel('z (vertical)')
plt.title('Anisotropic GSI multifractal')
plt.show()