Add a new operator or modify an existing operator in MindSpore. Use when adding a new op, changing op signatures, adding new backend support (Ascend/CPU/GPU), migrating py_method dispatch to pyboost, or modifying infer/kernel logic. Triggers on phrases like "add op", "new operator", "implement op", "add CPU support", "migrate to pyboost", "add GPU kernel", "register operator".
This skill covers the full lifecycle of adding or modifying a MindSpore operator.
Every operator spans multiple files. Start by identifying which already exist:
```
mindspore/ops/api_def/{op}.yaml                                      # API-level dispatch (pyboost vs py_method, per backend)
mindspore/ops/op_def/yaml/{op}_op.yaml                               # Op schema (args, returns, dispatch class names)
mindspore/ops/infer/ops_func_impl/{op}.h/.cc                         # InferShape + InferType (C++)
mindspore/ops/kernel/cpu/pyboost/customize/{op}.h/.cc                # CPU pyboost kernel
mindspore/ops/kernel/gpu/pyboost/customize/{op}.h/.cc                # GPU pyboost kernel
mindspore/ops/kernel/ascend/aclnn/pyboost_impl/customize/{op}.h/.cc  # Ascend pyboost kernel
mindspore/ccsrc/include/pynative/utils/pyboost/customize/{op}.h      # Shared customize header
mindspore/ccsrc/pynative/utils/pyboost/customize/{op}.cc             # Shared customize impl
mindspore/python/mindspore/ops/tensor_method.py                      # py_method fallback (if py_method dispatch)
```
File: mindspore/ops/api_def/{op_name}.yaml
Controls how the op is dispatched at the API level.
```yaml
{op_name}:
  op_yaml: {op_name}_op.yaml    # references op_def YAML
  py_method: tensor_{op_name}   # Python callback function name
  Ascend: pyboost               # or py_method
  CPU: pyboost                  # or py_method or None
  GPU: pyboost                  # or py_method or None
  interface: tensor, function   # tensor = Tensor.xxx(), function = ops.xxx()
```
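As a concrete illustration, here is the template filled in for a hypothetical `rsqrt` entry (illustrative only, not the actual repository file — the real entry may differ):

```yaml
rsqrt:
  op_yaml: rsqrt_op.yaml
  py_method: tensor_rsqrt
  Ascend: pyboost
  CPU: pyboost
  GPU: pyboost
  interface: tensor, function
```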
For ops with multiple overloads (scalar vs tensor input), use a list:
```yaml
{op_name}:
  - op_yaml: {op_name}_tensor_op.yaml
    py_method: tensor_{op_name}_tensor
    Ascend: pyboost
    CPU: pyboost
    GPU: pyboost
    interface: tensor, function
  - op_yaml: {op_name}_scalar_op.yaml
    py_method: tensor_{op_name}_scalar
    ...
```
py_method vs pyboost:
- pyboost = new fast execution path; requires the C++ customize kernel files
- py_method = legacy Python dispatch; just needs a Python function in tensor_method.py

File: mindspore/ops/op_def/yaml/{op_name}_op.yaml
Defines the op's schema and C++ kernel class names.
```yaml
#operator {op_name}
{op_name}:
  args:
    input:
      dtype: tensor
    weight:
      dtype: float
      default: 1.0
    keepdim:
      dtype: bool
      default: False
  returns:
    output:
      dtype: tensor
  bprop_expander: False    # True if gradient is implemented via expander
  function:
    disable: True          # True for func-only ops (not a Primitive class)
  dispatch:
    enable: True
    Ascend: {OpName}Ascend # C++ class name, e.g. RsqrtAscend
    CPU: {OpName}CPU       # CamelCase of op_name
    GPU: {OpName}GPU
```
dtype options: `tensor`, `float`, `int`, `bool`, `tuple[int]`, `list[int]`, `tuple[tensor]`, `number`
For optional args: add `default: None` in the YAML and use `std::optional<TensorPtr>` on the C++ side.
File: mindspore/ops/infer/ops_func_impl/{op_name}.h
```cpp
#ifndef MINDSPORE_CORE_OPS_OPS_FUNC_IMPL_{OP_NAME_UPPER}_H_
#define MINDSPORE_CORE_OPS_OPS_FUNC_IMPL_{OP_NAME_UPPER}_H_

#include <vector>

#include "ops/ops_func_impl/op_func_impl.h"

namespace mindspore::ops {
class OPS_API {OpName}FuncImpl : public OpFuncImpl {
 public:
  BaseShapePtr InferShape(const PrimitivePtr &primitive,
                          const std::vector<AbstractBasePtr> &input_args) const override;
  TypePtr InferType(const PrimitivePtr &primitive,
                    const std::vector<AbstractBasePtr> &input_args) const override;

  // For simple infer (pyboost path):
  ShapeArray InferShape(const PrimitivePtr &primitive, const ValuePtrList &input_values) const override;
  TypePtrList InferType(const PrimitivePtr &primitive, const ValuePtrList &input_values) const override;
};
}  // namespace mindspore::ops
#endif  // MINDSPORE_CORE_OPS_OPS_FUNC_IMPL_{OP_NAME_UPPER}_H_
```
File: mindspore/ops/infer/ops_func_impl/{op_name}.cc
```cpp
#include "infer/ops_func_impl/{op_name}.h"

#include <set>

#include "ops/ops_func_impl/simple_infer.h"

namespace mindspore::ops {
BaseShapePtr {OpName}FuncImpl::InferShape(const PrimitivePtr &primitive,
                                          const std::vector<AbstractBasePtr> &input_args) const {
  // Same shape as input (for elementwise ops):
  return input_args[kIndex0]->GetShape()->Clone();
}

TypePtr {OpName}FuncImpl::InferType(const PrimitivePtr &primitive,
                                    const std::vector<AbstractBasePtr> &input_args) const {
  // Validate and return the output type.
  auto input_type = input_args[kIndex0]->GetType();
  static const std::set<TypeId> valid_types = {kNumberTypeFloat32, kNumberTypeFloat16, ...};
  // ... type checking ...
  return input_type;
}

// For simple infer (ValuePtrList path):
TypePtrList {OpName}FuncImpl::InferType(const PrimitivePtr &, const ValuePtrList &input_values) const {
  const auto &x = input_values[kIndex0]->cast<tensor::TensorPtr>();
  return {x->Dtype()};
}

ShapeArray {OpName}FuncImpl::InferShape(const PrimitivePtr &, const ValuePtrList &input_values) const {
  const auto &x = input_values[kIndex0]->cast<tensor::TensorPtr>();
  return {x->shape()};
}

REGISTER_SIMPLE_INFER(kName{OpName}, {OpName}FuncImpl)
}  // namespace mindspore::ops
```
Common TypeId values: kNumberTypeFloat16, kNumberTypeFloat32, kNumberTypeFloat64, kNumberTypeBFloat16, kNumberTypeInt32, kNumberTypeInt64, kNumberTypeBool
Header: mindspore/ops/kernel/cpu/pyboost/customize/{op_name}.h
```cpp
#ifndef MINDSPORE_MINDSPORE_CCSRC_PLUGIN_DEVICE_CPU_KERNEL_PYBOOST_CUSTOMIZE_{OP_NAME_UPPER}_H_
#define MINDSPORE_MINDSPORE_CCSRC_PLUGIN_DEVICE_CPU_KERNEL_PYBOOST_CUSTOMIZE_{OP_NAME_UPPER}_H_

#include <memory>

#include "ir/tensor.h"
#include "include/pynative/utils/pyboost/op_runner.h"

namespace mindspore::kernel::pyboost {
```