diff --git a/.ai/categories/parachains.md b/.ai/categories/parachains.md index 659908c58..309865038 100644 --- a/.ai/categories/parachains.md +++ b/.ai/categories/parachains.md @@ -1326,310 +1326,515 @@ For reference, Astar's implementation of [`pallet-contracts`](https://github.com --- -Page Title: Benchmarking FRAME Pallets +Page Title: Benchmark Your Pallet - Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md - Canonical (HTML): https://docs.polkadot.com/parachains/customize-runtime/pallet-development/benchmark-pallet/ -- Summary: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. - -# Benchmarking +- Summary: Learn how to benchmark your custom pallet extrinsics to generate accurate weight calculations for production use. ## Introduction -Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. +Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks. 
-The Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. +This guide continues building on what you've learned through the pallet development series. You'll learn how to benchmark the custom counter pallet extrinsics and integrate the generated weights into your runtime. -## The Case for Benchmarking +## Prerequisites -Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights. +Before you begin, ensure you have: -Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability. 
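The weight unit discussed above is two-dimensional: computation (`ref_time`) and storage proof size. As an illustration only, the toy struct below is a simplified stand-in for `frame_support`'s real `Weight` type (the field names mirror it, but the numbers and the type itself are made up for this sketch):

```rust
// Toy model of Polkadot SDK's two-dimensional weight: computation time
// (ref_time, picoseconds) and proof_size (bytes). This is a sketch for
// intuition only, not the real frame_support::weights::Weight type.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Weight {
    pub ref_time: u64,
    pub proof_size: u64,
}

impl Weight {
    // Mirrors the `Weight::from_parts(ref_time, proof_size)` constructor
    // seen in generated weight files.
    pub const fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Weight { ref_time, proof_size }
    }

    // Weights are combined with saturating arithmetic so that a sum can
    // never overflow past the maximum representable cost.
    pub fn saturating_add(self, rhs: Self) -> Self {
        Weight {
            ref_time: self.ref_time.saturating_add(rhs.ref_time),
            proof_size: self.proof_size.saturating_add(rhs.proof_size),
        }
    }
}

fn main() {
    // Hypothetical costs: a measured base cost plus two DB reads.
    let base = Weight::from_parts(12_000_000, 0);
    let read = Weight::from_parts(25_000_000, 1_024);
    let total = base.saturating_add(read).saturating_add(read);
    println!("total ref_time={} proof_size={}", total.ref_time, total.proof_size);
}
```

Fees are then derived from a weight like this, which is why worst-case measurement matters: underestimating either dimension lets a block exceed its execution or proof-size budget.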
+- Completed the previous pallet development tutorials: + - [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\_blank} + - [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\_blank} + - [Unit Test Pallets](/parachains/customize-runtime/pallet-development/pallet-testing/){target=\_blank} +- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}. +- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +- Familiarity setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed. -### Benchmarking and Weight +## Create the Benchmarking Module -In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as: +Create a new file `benchmarking.rs` in your pallet's `src` directory and add the following code: -- Computational complexity. -- Storage complexity (proof size). -- Database reads and writes. -- Hardware specifications. +```rust title="pallets/pallet-custom/src/benchmarking.rs" +#![cfg(feature = "runtime-benchmarks")] -Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model. 
-
-Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of the hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.
+use super::*;
+use frame::deps::frame_benchmarking::v2::*;
+use frame::benchmarking::prelude::RawOrigin;
-Within FRAME, each dispatchable function call must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:
+#[benchmarks]
+mod benchmarks {
+    use super::*;
-```rust hl_lines="2"
-#[pallet::call_index(0)]
-#[pallet::weight(T::WeightInfo::do_something())]
-pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(()) }
-```
+    #[benchmark]
+    fn set_counter_value() {
+        let new_value: u32 = 100;
-The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic.
+        #[extrinsic_call]
+        _(RawOrigin::Root, new_value);
## Benchmarking Process
+        assert_eq!(CounterValue::<T>::get(), new_value);
+    }
-Benchmarking a pallet involves the following steps:
+    #[benchmark]
+    fn increment() {
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 50;
-1. Creating a `benchmarking.rs` file within your pallet's structure.
-2. Writing a benchmarking test for each extrinsic.
-3. Executing the benchmarking tool to calculate weights based on performance metrics.
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);
-The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag.
+        assert_eq!(CounterValue::<T>::get(), amount);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }
-### Prepare Your Environment
+    #[benchmark]
+    fn decrement() {
+        // First, set the counter to a non-zero value
+        CounterValue::<T>::put(100);
-Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool:
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 30;
-```bash
-cargo install frame-omni-bencher
-```
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);
-Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml`, similar to the following:
+        assert_eq!(CounterValue::<T>::get(), 70);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }
-```toml title="Cargo.toml"
-frame-benchmarking = { version = "37.0.0", default-features = false }
+    impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
+}
```
-You must also ensure that you add the `runtime-benchmarks` feature flag under the `[features]` section of your pallet's `Cargo.toml`, as follows:
+This module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet.
-```toml title="Cargo.toml"
-runtime-benchmarks = [
-    "frame-benchmarking/runtime-benchmarks",
-    "frame-support/runtime-benchmarks",
-    "frame-system/runtime-benchmarks",
-    "sp-runtime/runtime-benchmarks",
-]
-```
## Define the Weight Trait
-Lastly, ensure that `frame-benchmarking` is included in `std = []`:
+Add a `weights` module to your pallet that defines the `WeightInfo` trait using the following code:
-```toml title="Cargo.toml"
-std = [
-    # ...
-    "frame-benchmarking?/std",
-    # ...
-]
-```
```rust title="pallets/pallet-custom/src/lib.rs"
+#[frame::pallet]
+pub mod pallet {
+    use frame::prelude::*;
+    pub use weights::WeightInfo;
-Once complete, you have the required dependencies for writing benchmark tests for your pallet.
+    pub mod weights {
+        use frame::prelude::*;
-### Write Benchmark Tests
+        pub trait WeightInfo {
+            fn set_counter_value() -> Weight;
+            fn increment() -> Weight;
+            fn decrement() -> Weight;
+        }
-Create a `benchmarking.rs` file in your pallet's `src/`. Your directory structure should look similar to the following:
+        impl WeightInfo for () {
+            fn set_counter_value() -> Weight {
+                Weight::from_parts(10_000, 0)
+            }
+            fn increment() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+            fn decrement() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+        }
+    }
+    // ... rest of pallet
+}
```
-my-pallet/
-├── src/
-│   ├── lib.rs          # Main pallet implementation
-│   └── benchmarking.rs # Benchmarking
-└── Cargo.toml
+
+The `()` implementation provides placeholder weights for development.
+
+## Add WeightInfo to Config
+
+Update your pallet's `Config` trait to include `WeightInfo` by adding the following code:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::config]
+pub trait Config: frame_system::Config {
+    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;
+
+    #[pallet::constant]
+    type CounterMaxValue: Get<u32>;
+
+    type WeightInfo: weights::WeightInfo;
+}
```
-With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows:
+The [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration.
By making `WeightInfo` an associated type in the `Config` trait, you enable each runtime that uses your pallet to specify which weight implementation to use.
-```rust title="benchmarking.rs (starter template)"
-//! Benchmarking setup for pallet-template
-#![cfg(feature = "runtime-benchmarks")]
## Update Extrinsic Weight Annotations
-use super::*;
-use frame_benchmarking::v2::*;
+Replace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code:
-#[benchmarks]
-mod benchmarks {
-    use super::*;
-    #[cfg(test)]
-    use crate::pallet::Pallet as Template;
-    use frame_system::RawOrigin;
-
-    #[benchmark]
-    fn do_something() {
-        let caller: T::AccountId = whitelisted_caller();
-        #[extrinsic_call]
-        do_something(RawOrigin::Signed(caller), 100);
-
-        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));
-    }
-
-    #[benchmark]
-    fn cause_error() {
-        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });
-        let caller: T::AccountId = whitelisted_caller();
-        #[extrinsic_call]
-        cause_error(RawOrigin::Signed(caller));
-
-        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));
-    }
-
-    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::call]
+impl<T: Config> Pallet<T> {
+    #[pallet::call_index(0)]
+    #[pallet::weight(T::WeightInfo::set_counter_value())]
+    pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(1)]
+    #[pallet::weight(T::WeightInfo::increment())]
+    pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(2)]
+    #[pallet::weight(T::WeightInfo::decrement())]
+    pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+}
```
-In your benchmarking tests, employ these best practices:
+By calling `T::WeightInfo::function_name()` instead of hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production without changing any pallet code.
-- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing.
-- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details.
-- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context.
## Include the Benchmarking Module
-Add the `benchmarking` module to your pallet. In the pallet `lib.rs` file, add the following:
+At the top of your `lib.rs`, add the module declaration as follows:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#![cfg_attr(not(feature = "std"), no_std)]
+
+extern crate alloc;
+use alloc::vec::Vec;
+
+pub use pallet::*;
-```rust
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking;
+
+// Additional pallet code
```
-### Add Benchmarks to Runtime
+The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed, keeping your production runtime efficient.
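The gating described above works because `cfg` is evaluated at compile time: an item behind a feature predicate simply does not exist in a build where the feature is off. A minimal, self-contained sketch (the module and function names are hypothetical, not part of any pallet):

```rust
// Sketch of compile-time feature gating. When the "runtime-benchmarks"
// Cargo feature is not enabled, this module is removed from the build
// entirely, exactly like a pallet's `mod benchmarking;`.
#[cfg(feature = "runtime-benchmarks")]
mod benchmarking {
    pub fn only_in_bench_builds() {}
}

fn main() {
    // cfg!() evaluates the same predicate to a compile-time boolean,
    // which is handy for observing which variant was compiled.
    if cfg!(feature = "runtime-benchmarks") {
        println!("benchmarking code compiled in");
    } else {
        println!("production build: benchmarking code excluded");
    }
}
```

Compiled without the feature (the default), the gated module is absent and only the production branch remains, which is why the benchmark-enabled runtime must be rebuilt separately.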
-Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows:
+## Configure Pallet Dependencies
-1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations:
+Update your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code:
-    ```rust title="benchmarks.rs"
-    frame_benchmarking::define_benchmarks!(
-        [frame_system, SystemBench::<Runtime>]
-        [pallet_parachain_template, TemplatePallet]
-        [pallet_balances, Balances]
-        [pallet_session, SessionBench::<Runtime>]
-        [pallet_timestamp, Timestamp]
-        [pallet_message_queue, MessageQueue]
-        [pallet_sudo, Sudo]
-        [pallet_collator_selection, CollatorSelection]
-        [cumulus_pallet_parachain_system, ParachainSystem]
-        [cumulus_pallet_xcmp_queue, XcmpQueue]
-    );
+```toml title="pallets/pallet-custom/Cargo.toml"
+[dependencies]
+codec = { features = ["derive"], workspace = true }
+scale-info = { features = ["derive"], workspace = true }
+frame = { features = ["experimental", "runtime"], workspace = true }
+
+[features]
+default = ["std"]
+runtime-benchmarks = [
+    "frame/runtime-benchmarks",
+]
+std = [
+    "codec/std",
+    "scale-info/std",
+    "frame/std",
+]
+```
+
+The Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds.
+ +## Update Mock Runtime + +Add the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code: + +```rust title="pallets/pallet-custom/src/mock.rs" +impl pallet_custom::Config for Test { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); +} +``` + +In your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance. + +## Configure Runtime Benchmarking + +To execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration: + +1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows: + + ```toml title="runtime/Cargo.toml" + runtime-benchmarks = [ + "cumulus-pallet-parachain-system/runtime-benchmarks", + "hex-literal", + "pallet-parachain-template/runtime-benchmarks", + "polkadot-sdk/runtime-benchmarks", + "pallet-custom/runtime-benchmarks", + ] ``` - For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown: - ```rust title="benchmarks.rs" hl_lines="3" - frame_benchmarking::define_benchmarks!( + When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included. + +2. **Update runtime configuration**: Run development benchmarks with the placeholder implementation and use the resulting weights file to update benchmark weights as follows: + + ```rust title="runtime/src/configs/mod.rs" + impl pallet_custom::Config for Runtime { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); + } + ``` + +3. 
**Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows:
+    ```rust title="runtime/src/benchmarks.rs"
+    polkadot_sdk::frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
-        [pallet_parachain_template, TemplatePallet]
+        [pallet_balances, Balances]
+        // ... other pallets
+        [pallet_custom, CustomPallet]
    );
    ```
-    !!!warning "Updating `define_benchmarks!` macro is required"
        Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here.
+    The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks.
-2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this:
## Test Benchmark Compilation
-    ```rust title="lib.rs"
-    #[cfg(feature = "runtime-benchmarks")]
-    mod benchmarks;
-    ```
Run the following command to verify your benchmarks compile and run as tests:
-    The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code.
```bash
cargo test -p pallet-custom --features runtime-benchmarks
```
-3. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`:
+You will see terminal output similar to the following as your benchmark tests pass:
-    ```toml
-    runtime-benchmarks = [
-        # ...
-        "pallet_parachain_template/runtime-benchmarks",
-    ]
+ cargo test -p pallet-custom --features runtime-benchmarks + test benchmarking::benchmarks::bench_set_counter_value ... ok + test benchmarking::benchmarks::bench_increment ... ok + test benchmarking::benchmarks::bench_decrement ... ok + +
- ``` +The `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime. -### Run Benchmarks +## Build the Runtime with Benchmarks -You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled: +Compile the runtime with benchmarking enabled to generate the WASM binary using the following command: -1. Run `build` with the feature flag included: +```bash +cargo build --release --features runtime-benchmarks +``` - ```bash - cargo build --features runtime-benchmarks --release - ``` +This command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm` -2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: +The build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. You'll create a different build later for operating your chain in production. - ```bash - touch weights.rs - ``` +## Install the Benchmarking Tool -3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. 
Download the official template from the Polkadot SDK repository and save it in your project folders for future use: +Install the `frame-omni-bencher` CLI tool using the following command: - ```bash - curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ - --output ./pallets/benchmarking/frame-weight-template.hbs - ``` +```bash +cargo install frame-omni-bencher --locked +``` + +[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system. + +## Download the Weight Template + +Download the official weight template file using the following commands: + +```bash +curl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ +--output ./pallets/pallet-custom/frame-weight-template.hbs +``` + +The weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information. + +## Execute Benchmarks + +Run benchmarks for your pallet to generate weight files using the following commands: -4. 
Run the benchmarking tool to measure extrinsic weights: +```bash +frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs +``` + +Benchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions. + +??? note "Additional customization" + + You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below: ```bash frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet INSERT_NAME_OF_PALLET \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output weights.rs + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --steps 50 \ + --repeat 20 \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs ``` + + - `--steps 50`: Number of different input values to test when using linear components (default: 50). More steps provide finer granularity for detecting complexity trends but increase benchmarking time. + - `--repeat 20`: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates. + - `--heap-pages 4096`: WASM heap pages allocation. Affects available memory during execution. 
+ - `--wasm-execution compiled`: WASM execution method. Use `compiled` for performance closest to production conditions. - !!! tip "Flag definitions" - - **`--runtime`**: The path to your runtime's Wasm. - - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`. - - **`--extrinsic`**: Which extrinsic to test. Using `""` implies all extrinsics will be benchmarked. - - **`--template`**: Defines how weight information should be formatted. - - **`--output`**: Where the output of the auto-generated weights will reside. +## Use Generated Weights -The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity: +After running benchmarks, a `weights.rs` file is generated containing measured weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements. -
- frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet "INSERT_NAME_OF_PALLET" \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output ./weights.rs - ... - 2025-01-15T16:41:33.557045Z INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something - 2025-01-15T16:41:33.564644Z INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error - ... - Created file: "weights.rs" - -
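Conceptually, the bencher repeats each benchmark (`--repeat`) across a range of inputs (`--steps`) and fits a linear model to the measurements, then uses the worst-case input to derive the weight. The following toy sketch illustrates that fitting step with made-up numbers; it is not FRAME's actual analysis code:

```rust
// Toy illustration of deriving a linear weight model from benchmark
// samples: weight(n) ~ base + slope * n. Ordinary least squares over
// (input size, measured time) pairs. Numbers are invented.
fn fit_linear(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let sum_x: f64 = samples.iter().map(|(x, _)| x).sum();
    let sum_y: f64 = samples.iter().map(|(_, y)| y).sum();
    let sum_xy: f64 = samples.iter().map(|(x, y)| x * y).sum();
    let sum_xx: f64 = samples.iter().map(|(x, _)| x * x).sum();
    let slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x);
    let base = (sum_y - slope * sum_x) / n;
    (base, slope)
}

fn main() {
    // Simulated measurements: (input size, time units).
    let samples = [(10.0, 1_100.0), (20.0, 2_100.0), (30.0, 3_100.0)];
    let (base, slope) = fit_linear(&samples);
    // Extrapolate to the worst-case input the extrinsic accepts.
    let worst_case = base + slope * 100.0;
    println!("base={} slope={} worst_case={}", base, slope, worst_case);
}
```

More `--steps` gives the fit more points along the input range; more `--repeat` reduces noise in each point, which is why those flags trade benchmarking time for accuracy.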
+Follow these steps to use the generated weights with your pallet: -#### Add Benchmark Weights to Pallet +1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows: -Once the `weights.rs` is generated, you must integrate it with your pallet. + ```rust title="pallets/pallet-custom/src/lib.rs" + #![cfg_attr(not(feature = "std"), no_std)] -1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration: + extern crate alloc; + use alloc::vec::Vec; + + pub use pallet::*; + + #[cfg(feature = "runtime-benchmarks")] + mod benchmarking; - ```rust title="lib.rs" pub mod weights; - use crate::weights::WeightInfo; - - /// Configure the pallet by specifying the parameters and types on which it depends. - #[pallet::config] - pub trait Config: frame_system::Config { - // ... - /// A type representing the weights required by the dispatchables of this pallet. - type WeightInfo: WeightInfo; + + #[frame::pallet] + pub mod pallet { + use super::*; + use frame::prelude::*; + use crate::weights::WeightInfo; + // ... rest of pallet } ``` -2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows: + Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits. - ```rust hl_lines="2" title="lib.rs" - #[pallet::call_index(0)] - #[pallet::weight(T::WeightInfo::do_something())] - pub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) } +2. 
Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:
+
+    ```rust title="runtime/src/configs/mod.rs"
+    impl pallet_custom::Config for Runtime {
+        type RuntimeEvent = RuntimeEvent;
+        type CounterMaxValue = ConstU32<1000>;
+        type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
+    }
-3. Finally, configure the actual weight values in your runtime. In `runtime/src/config/mod.rs`, add the following code:
    ```
-    ```rust title="mod.rs"
-    // Configure pallet.
-    impl pallet_parachain_template::Config for Runtime {
-        // ...
-        type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;
+    This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.
-    }
    ```
??? code "Example generated weight file"
-## Where to Go Next
+    The generated `weights.rs` file will look similar to this:
+
+    ```rust title="pallets/pallet-custom/src/weights.rs"
+    //! Autogenerated weights for `pallet_custom`
+    //!
+    //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
+    //!
DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`
+    #![cfg_attr(rustfmt, rustfmt_skip)]
+    #![allow(unused_parens)]
+    #![allow(unused_imports)]
+    #![allow(missing_docs)]
+
+    use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
+    use core::marker::PhantomData;
+
+    pub trait WeightInfo {
+        fn set_counter_value() -> Weight;
+        fn increment() -> Weight;
+        fn decrement() -> Weight;
+    }
+
+    pub struct SubstrateWeight<T>(PhantomData<T>);
+    impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
+        fn set_counter_value() -> Weight {
+            Weight::from_parts(8_234_000, 0)
+                .saturating_add(T::DbWeight::get().reads(1))
+                .saturating_add(T::DbWeight::get().writes(1))
+        }
+
+        fn increment() -> Weight {
+            Weight::from_parts(12_456_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+
+        fn decrement() -> Weight {
+            Weight::from_parts(11_987_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
    }
    ```

    The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations.

## Run Your Chain Locally

Now that you've added the pallet to your runtime, you can follow these steps to launch your parachain locally to test the new functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}:

1. Before running your chain, rebuild the production runtime without the `runtime-benchmarks` feature using the following command:

    ```bash
    cargo build --release
    ```

    The `runtime-benchmarks` feature flag adds special host functions that are only available in the benchmarking execution environment. A runtime compiled with benchmarking features will fail to start on a production node.
+ + This build produces a production-ready WASM runtime at `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm`. + + !!! note "Compare build types" + - `cargo build --release --features runtime-benchmarks` - Compiles with benchmarking host functions for measurement. Use this ONLY when running benchmarks with `frame-omni-bencher`. + - `cargo build --release` - Compiles production runtime without benchmarking features. Use this for running your chain in production. + +2. Generate a new chain specification file with the updated runtime using the following commands: + + ```bash + chain-spec-builder create -t development \ + --relay-chain paseo \ + --para-id 1000 \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \ + named-preset development + ``` + + This command generates a chain specification file, `chain_spec.json`, for your parachain with the updated runtime, which defines the initial state and configuration of your blockchain, including the runtime WASM code, genesis storage, and network parameters. Generating this new chain spec with your updated runtime ensures nodes starting from this spec will use the correct version of your code with proper weight calculations. -- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}. -- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work. +3. 
Start the parachain node using the Polkadot Omni Node with the generated chain specification by running the following command: + + ```bash + polkadot-omni-node --chain ./chain_spec.json --dev + ``` + + The node will start and display initialization information, including: + + - The chain specification name + - The node identity and peer ID + - Database location + - Network endpoints (JSON-RPC and Prometheus) + +4. Once the node is running, you will see log messages confirming successful production of blocks similar to the following: + +
+ polkadot-omni-node --chain ./chain_spec.json --dev + [Parachain] 🔨 Initializing Genesis block/state (state: 0x47ce…ec8d, header-hash: 0xeb12…fecc) + [Parachain] 🎁 Prepared block for proposing at 1 (3 ms) ... + [Parachain] 🏆 Imported #1 (0xeb12…fecc → 0xee51…98d2) + [Parachain] 🎁 Prepared block for proposing at 2 (3 ms) ... + [Parachain] 🏆 Imported #2 (0xee51…98d2 → 0x35e0…cc32) + +
+ + The parachain will produce new blocks every few seconds. You can now interact with your pallet's extrinsics through the JSON-RPC endpoint at `http://127.0.0.1:9944` using tools like [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank}. + +## Related Resources + +- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\_blank} +- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\_blank} +- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} +- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} --- @@ -2851,7 +3056,7 @@ This command validates all pallet configurations and prepares the build for depl ## Run Your Chain Locally -Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. +Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide. ### Generate a Chain Specification @@ -9665,142 +9870,6 @@ JAM removes many of the opinions and constraints of the current relay chain whil This architectural evolution promises to enhance Polkadot's scalability and flexibility while maintaining robust security guarantees. JAM is planned to be rolled out to Polkadot as a single, complete upgrade rather than a stream of smaller updates. 
This approach seeks to minimize the developer overhead required to address any breaking changes. ---- - -Page Title: Pallet Testing - -- Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md -- Canonical (HTML): https://docs.polkadot.com/parachains/customize-runtime/pallet-development/pallet-testing/ -- Summary: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets operations. - -# Pallet Testing - -## Introduction - -Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries. - -To begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\_blank} guide. - -## Writing Unit Tests - -Once the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `test.rs` file. - -Unit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. Below are the typical steps involved in writing unit tests for a pallet. - -The tests confirm that: - -- **Pallets initialize correctly**: At the start of each test, the system should initialize with block number 0, and the pallets should be in their default states. 
-- **Pallets modify each other's state**: The second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions. -- **State transitions between blocks are seamless**: By simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number. - -Testing pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled. - -This approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable. - -### Test Initialization - -Each test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment. - -```rust -#[test] -fn test_pallet_functionality() { - new_test_ext().execute_with(|| { - // Test logic goes here - }); -} -``` - -### Function Call Testing - -Call the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled. 
-
-```rust
-#[test]
-fn it_works_for_valid_input() {
-    new_test_ext().execute_with(|| {
-        // Call an extrinsic or function
-        assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));
-    });
-}
-
-#[test]
-fn it_fails_for_invalid_input() {
-    new_test_ext().execute_with(|| {
-        // Call an extrinsic with invalid input and expect an error
-        assert_err!(
-            TemplateModule::some_function(Origin::signed(1), invalid_param),
-            Error::<Test>::InvalidInput
-        );
-    });
-}
-```
-
-### Storage Testing
-
-After calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken.
-
-The following example shows how to test the storage behavior before and after the function call:
-
-```rust
-#[test]
-fn test_storage_update_on_extrinsic_call() {
-    new_test_ext().execute_with(|| {
-        // Check the initial storage state (before the call)
-        assert_eq!(Something::<Test>::get(), None);
-
-        // Dispatch a signed extrinsic, which modifies storage
-        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));
-
-        // Validate that the storage has been updated as expected (after the call)
-        assert_eq!(Something::<Test>::get(), Some(42));
-    });
-}
-
-```
-
-### Event Testing
-
-It's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#[pallet::generate_deposit]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\_blank} macro are stored under the system's event storage key (system/events) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\_blank} entries. 
These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\_blank}. - -Here's an example of testing events in a mock runtime: - -```rust -#[test] -fn it_emits_events_on_success() { - new_test_ext().execute_with(|| { - // Call an extrinsic or function - assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param)); - - // Verify that the expected event was emitted - assert!(System::events().iter().any(|record| { - record.event == Event::TemplateModule(TemplateEvent::SomeEvent) - })); - }); -} -``` - -Some key considerations are: - -- **Block number**: Events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\_blank} to ensure events are triggered. -- **Converting events**: Use `.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage. - -## Where to Go Next - -- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\_blank} and [`test.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=_blank}. - -
- -- Guide __Benchmarking__ - - --- - - Explore methods to measure the performance and execution cost of your pallet. - - [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking) - -
- - --- Page Title: Parachains Overview @@ -12399,6 +12468,142 @@ You now know the weight system, how it affects transaction fee computation, and - [Web3 Foundation Research](https://research.web3.foundation/Polkadot/overview/token-economics#relay-chain-transaction-fees-and-per-block-transaction-limits){target=\_blank} +--- + +Page Title: Unit Test Pallets + +- Source (raw): https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md +- Canonical (HTML): https://docs.polkadot.com/parachains/customize-runtime/pallet-development/pallet-testing/ +- Summary: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets operations. + +# Unit Test Pallets + +## Introduction + +Unit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries. + +To begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\_blank} guide. + +## Writing Unit Tests + +Once the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `test.rs` file. + +Unit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. 
Below are the typical steps involved in writing unit tests for a pallet. + +The tests confirm that: + +- **Pallets initialize correctly**: At the start of each test, the system should initialize with block number 0, and the pallets should be in their default states. +- **Pallets modify each other's state**: The second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions. +- **State transitions between blocks are seamless**: By simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number. + +Testing pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled. + +This approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable. + +### Test Initialization + +Each test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment. + +```rust +#[test] +fn test_pallet_functionality() { + new_test_ext().execute_with(|| { + // Test logic goes here + }); +} +``` + +### Function Call Testing + +Call the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled. 
+
+```rust
+#[test]
+fn it_works_for_valid_input() {
+    new_test_ext().execute_with(|| {
+        // Call an extrinsic or function
+        assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));
+    });
+}
+
+#[test]
+fn it_fails_for_invalid_input() {
+    new_test_ext().execute_with(|| {
+        // Call an extrinsic with invalid input and expect an error
+        assert_err!(
+            TemplateModule::some_function(Origin::signed(1), invalid_param),
+            Error::<Test>::InvalidInput
+        );
+    });
+}
+```
+
+### Storage Testing
+
+After calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken.
+
+The following example shows how to test the storage behavior before and after the function call:
+
+```rust
+#[test]
+fn test_storage_update_on_extrinsic_call() {
+    new_test_ext().execute_with(|| {
+        // Check the initial storage state (before the call)
+        assert_eq!(Something::<Test>::get(), None);
+
+        // Dispatch a signed extrinsic, which modifies storage
+        assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));
+
+        // Validate that the storage has been updated as expected (after the call)
+        assert_eq!(Something::<Test>::get(), Some(42));
+    });
+}
+
+```
+
+### Event Testing
+
+It's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#[pallet::generate_deposit]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\_blank} macro are stored under the system's event storage key (system/events) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\_blank} entries. 
These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\_blank}. + +Here's an example of testing events in a mock runtime: + +```rust +#[test] +fn it_emits_events_on_success() { + new_test_ext().execute_with(|| { + // Call an extrinsic or function + assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param)); + + // Verify that the expected event was emitted + assert!(System::events().iter().any(|record| { + record.event == Event::TemplateModule(TemplateEvent::SomeEvent) + })); + }); +} +``` + +Some key considerations are: + +- **Block number**: Events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\_blank} to ensure events are triggered. +- **Converting events**: Use `.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage. + +## Where to Go Next + +- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\_blank} and [`test.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=_blank}. + +
+ +- Guide __Benchmarking__ + + --- + + Explore methods to measure the performance and execution cost of your pallet. + + [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking) + +
+ + --- Page Title: Unlock a Parachain diff --git a/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md b/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md index 748981792..d84e40e0f 100644 --- a/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md +++ b/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md @@ -1,305 +1,510 @@ --- -title: Benchmarking FRAME Pallets -description: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. +title: Benchmark Your Pallet +description: Learn how to benchmark your custom pallet extrinsics to generate accurate weight calculations for production use. categories: Parachains url: https://docs.polkadot.com/parachains/customize-runtime/pallet-development/benchmark-pallet/ --- -# Benchmarking - ## Introduction -Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. +Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks. 
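In FRAME, a weight has two components: reference execution time (measured in picoseconds on standardized hardware) and proof size (the bytes a transaction contributes to the state proof). As a rough, self-contained sketch of that pairing — a plain stand-in struct with illustrative numbers, not the actual `frame_support::weights::Weight` API — note that weight arithmetic saturates rather than overflows:

```rust
// Minimal stand-in for frame_support::weights::Weight: two independent
// components, both of which must stay within the block's limits.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Weight {
    ref_time: u64,   // execution time, in picoseconds on reference hardware
    proof_size: u64, // state-proof (PoV) size, in bytes
}

impl Weight {
    const fn from_parts(ref_time: u64, proof_size: u64) -> Self {
        Self { ref_time, proof_size }
    }

    // Addition saturates instead of overflowing, mirroring the real type.
    fn saturating_add(self, other: Self) -> Self {
        Self {
            ref_time: self.ref_time.saturating_add(other.ref_time),
            proof_size: self.proof_size.saturating_add(other.proof_size),
        }
    }
}

fn main() {
    // Illustrative only: a base execution cost plus the cost of one storage read.
    let base = Weight::from_parts(8_234_000, 0);
    let one_read = Weight::from_parts(25_000_000, 1024);
    let total = base.saturating_add(one_read);
    assert_eq!(total, Weight::from_parts(33_234_000, 1024));
}
```

Because both components are tracked separately, an extrinsic can be cheap in execution time yet expensive in proof size (or vice versa), and a block is full as soon as either dimension hits its limit.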
-The Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. +This guide continues building on what you've learned through the pallet development series. You'll learn how to benchmark the custom counter pallet extrinsics and integrate the generated weights into your runtime. -## The Case for Benchmarking +## Prerequisites -Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights. +Before you begin, ensure you have: -Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability. 
+- Completed the previous pallet development tutorials: + - [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\_blank} + - [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\_blank} + - [Unit Test Pallets](/parachains/customize-runtime/pallet-development/pallet-testing/){target=\_blank} +- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}. +- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +- Familiarity setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed. -### Benchmarking and Weight +## Create the Benchmarking Module -In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as: +Create a new file `benchmarking.rs` in your pallet's `src` directory and add the following code: -- Computational complexity. -- Storage complexity (proof size). -- Database reads and writes. -- Hardware specifications. +```rust title="pallets/pallet-custom/src/benchmarking.rs" +#![cfg(feature = "runtime-benchmarks")] -Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model. 
-
-Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.
+use super::*;
+use frame::deps::frame_benchmarking::v2::*;
+use frame::benchmarking::prelude::RawOrigin;

-Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:
+#[benchmarks]
+mod benchmarks {
+    use super::*;

-```rust hl_lines="2"
-#[pallet::call_index(0)]
-#[pallet::weight(T::WeightInfo::do_something())]
-pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo { Ok(()) }
-```
+    #[benchmark]
+    fn set_counter_value() {
+        let new_value: u32 = 100;

-The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic.
+        #[extrinsic_call]
+        _(RawOrigin::Root, new_value);

-## Benchmarking Process
+        assert_eq!(CounterValue::<T>::get(), new_value);
+    }

-Benchmarking a pallet involves the following steps:
+    #[benchmark]
+    fn increment() {
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 50;

-1. Creating a `benchmarking.rs` file within your pallet's structure.
-2. Writing a benchmarking test for each extrinsic.
-3. Executing the benchmarking tool to calculate weights based on performance metrics.
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);

-The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag. 

+        assert_eq!(CounterValue::<T>::get(), amount);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }

-### Prepare Your Environment
+    #[benchmark]
+    fn decrement() {
+        // First, set the counter to a non-zero value
+        CounterValue::<T>::put(100);

-Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool:
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 30;

-```bash
-cargo install frame-omni-bencher
-```
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);

-Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following:
+        assert_eq!(CounterValue::<T>::get(), 70);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }

-```toml title="Cargo.toml"
-frame-benchmarking = { version = "37.0.0", default-features = false }
+    impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
+}
 ```

-You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:
+This module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet. 
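When you later run these benchmarks, the tool executes each `#[benchmark]` function repeatedly across a range of inputs and fits a linear model — a base cost plus a per-unit slope — to the measurements. The toy example below illustrates that idea with an ordinary least-squares fit over made-up timings; it is a sketch of the concept, not the actual `frame-benchmarking` implementation:

```rust
/// Fit `time ≈ base + slope * n` to (input size, measured time) samples
/// using ordinary least squares.
fn fit_linear(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let sx: f64 = samples.iter().map(|(x, _)| x).sum();
    let sy: f64 = samples.iter().map(|(_, y)| y).sum();
    let sxx: f64 = samples.iter().map(|(x, _)| x * x).sum();
    let sxy: f64 = samples.iter().map(|(x, y)| x * y).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let base = (sy - slope * sx) / n;
    (base, slope)
}

fn main() {
    // Made-up measurements: an extrinsic costing a fixed 1_000_000 ps
    // plus 50_000 ps per processed item, sampled at sizes 0..=50.
    let samples: Vec<(f64, f64)> = (0..=50)
        .map(|n| (n as f64, 1_000_000.0 + 50_000.0 * n as f64))
        .collect();
    let (base, slope) = fit_linear(&samples);
    assert!((base - 1_000_000.0).abs() < 1.0);
    assert!((slope - 50_000.0).abs() < 1.0);
}
```

The fitted base and slope are what end up baked into the generated weight functions, with the worst-case input assumed so that a block never overruns its limits.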
+
+## Define the Weight Trait
+
+Add a `weights` module to your pallet that defines the `WeightInfo` trait using the following code:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[frame::pallet]
+pub mod pallet {
+    use frame::prelude::*;
+    pub use weights::WeightInfo;
+
+    pub mod weights {
+        use frame::prelude::*;
+
+        pub trait WeightInfo {
+            fn set_counter_value() -> Weight;
+            fn increment() -> Weight;
+            fn decrement() -> Weight;
+        }
+
+        impl WeightInfo for () {
+            fn set_counter_value() -> Weight {
+                Weight::from_parts(10_000, 0)
+            }
+            fn increment() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+            fn decrement() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+        }
+    }

-```toml title="Cargo.toml"
-runtime-benchmarks = [
-    "frame-benchmarking/runtime-benchmarks",
-    "frame-support/runtime-benchmarks",
-    "frame-system/runtime-benchmarks",
-    "sp-runtime/runtime-benchmarks",
-]
+    // ... rest of pallet
+}
 ```

-Lastly, ensure that `frame-benchmarking` is included in `std = []`:
+The `()` implementation provides placeholder weights for development.

-```toml title="Cargo.toml"
-std = [
-    # ...
-    "frame-benchmarking?/std",
-    # ...
-]
-```
+## Add WeightInfo to Config

-Once complete, you have the required dependencies for writing benchmark tests for your pallet.
+Update your pallet's `Config` trait to include `WeightInfo` by adding the following code:

-### Write Benchmark Tests
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::config]
+pub trait Config: frame_system::Config {
+    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;

-Create a `benchmarking.rs` file in your pallet's `src/`. 
Your directory structure should look similar to the following:

+    #[pallet::constant]
+    type CounterMaxValue: Get<u32>;

-```
-my-pallet/
-├── src/
-│   ├── lib.rs           # Main pallet implementation
-│   └── benchmarking.rs  # Benchmarking
-└── Cargo.toml
+    type WeightInfo: weights::WeightInfo;
+}
 ```

-With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows:
+The [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration. By making `WeightInfo` an associated type in the `Config` trait, you will enable each runtime that uses your pallet to specify which weight implementation to use.

-```rust title="benchmarking.rs (starter template)"
-//! Benchmarking setup for pallet-template
-#![cfg(feature = "runtime-benchmarks")]
+## Update Extrinsic Weight Annotations

-use super::*;
-use frame_benchmarking::v2::*;
+Replace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code:

-#[benchmarks]
-mod benchmarks {
-    use super::*;
-    #[cfg(test)]
-    use crate::pallet::Pallet as Template;
-    use frame_system::RawOrigin;
-
-    #[benchmark]
-    fn do_something() {
-        let caller: T::AccountId = whitelisted_caller();
-        #[extrinsic_call]
-        do_something(RawOrigin::Signed(caller), 100);
-
-        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));
-    }
-
-    #[benchmark]
-    fn cause_error() {
-        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });
-        let caller: T::AccountId = whitelisted_caller();
-        #[extrinsic_call]
-        cause_error(RawOrigin::Signed(caller));
-
-        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));
-    }
-
-    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), 
crate::mock::Test);
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::call]
+impl<T: Config> Pallet<T> {
+    #[pallet::call_index(0)]
+    #[pallet::weight(T::WeightInfo::set_counter_value())]
+    pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(1)]
+    #[pallet::weight(T::WeightInfo::increment())]
+    pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(2)]
+    #[pallet::weight(T::WeightInfo::decrement())]
+    pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+}
 ```

-In your benchmarking tests, employ these best practices:
+By calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code.

-- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing.
-- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details.
-- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context.
+## Include the Benchmarking Module

-Add the `benchmarking` module to your pallet. 
In the pallet `lib.rs` file add the following: +At the top of your `lib.rs`, add the module declaration by adding the following code: + +```rust title="pallets/pallet-custom/src/lib.rs" +#![cfg_attr(not(feature = "std"), no_std)] + +extern crate alloc; +use alloc::vec::Vec; + +pub use pallet::*; -```rust #[cfg(feature = "runtime-benchmarks")] mod benchmarking; + +// Additional pallet code ``` -### Add Benchmarks to Runtime +The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient. -Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows: +## Configure Pallet Dependencies -1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations: +Update your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code: - ```rust title="benchmarks.rs" - frame_benchmarking::define_benchmarks!( - [frame_system, SystemBench::] - [pallet_parachain_template, TemplatePallet] - [pallet_balances, Balances] - [pallet_session, SessionBench::] - [pallet_timestamp, Timestamp] - [pallet_message_queue, MessageQueue] - [pallet_sudo, Sudo] - [pallet_collator_selection, CollatorSelection] - [cumulus_pallet_parachain_system, ParachainSystem] - [cumulus_pallet_xcmp_queue, XcmpQueue] - ); +```toml title="pallets/pallet-custom/Cargo.toml" +[dependencies] +codec = { features = ["derive"], workspace = true } +scale-info = { features = ["derive"], workspace = true } +frame = { features = ["experimental", "runtime"], workspace = true } + +[features] +default = ["std"] +runtime-benchmarks = [ + "frame/runtime-benchmarks", +] +std = [ + "codec/std", + "scale-info/std", + "frame/std", +] +``` + +The Cargo feature flag system lets you conditionally compile code based on 
which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds. + +## Update Mock Runtime + +Add the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code: + +```rust title="pallets/pallet-custom/src/mock.rs" +impl pallet_custom::Config for Test { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); +} +``` + +In your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance. + +## Configure Runtime Benchmarking + +To execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration: + +1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows: + + ```toml title="runtime/Cargo.toml" + runtime-benchmarks = [ + "cumulus-pallet-parachain-system/runtime-benchmarks", + "hex-literal", + "pallet-parachain-template/runtime-benchmarks", + "polkadot-sdk/runtime-benchmarks", + "pallet-custom/runtime-benchmarks", + ] + ``` + + When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included. + +2. 
**Update runtime configuration**: Configure your pallet in the runtime with the placeholder `()` implementation for now; after running the benchmarks, you will replace it with the generated weights:

    ```rust title="runtime/src/configs/mod.rs"
    impl pallet_custom::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type CounterMaxValue = ConstU32<1000>;
        type WeightInfo = ();
    }
    ```

-    For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown:
-
-    ```rust title="benchmarks.rs" hl_lines="3"
-    frame_benchmarking::define_benchmarks!(
-        [frame_system, SystemBench::<Runtime>]
-        [pallet_parachain_template, TemplatePallet]
-    );
-    ```

3. **Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows:

    ```rust title="runtime/src/benchmarks.rs"
    polkadot_sdk::frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
        [pallet_balances, Balances]
        // ... other pallets
        [pallet_custom, CustomPallet]
    );
    ```

-    !!!warning "Updating `define_benchmarks!` macro is required"
-        Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here.

    The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks.

-2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported.
The import should look like this: +## Test Benchmark Compilation - ```rust title="lib.rs" - #[cfg(feature = "runtime-benchmarks")] - mod benchmarks; - ``` +Run the following command to verify your benchmarks compile and run as tests: - The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code. +```bash +cargo test -p pallet-custom --features runtime-benchmarks +``` -3. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`: +You will see terminal output similar to the following as your benchmark tests pass: - ```toml - runtime-benchmarks = [ - # ... - "pallet_parachain_template/runtime-benchmarks", - ] +
+ cargo test -p pallet-custom --features runtime-benchmarks + test benchmarking::benchmarks::bench_set_counter_value ... ok + test benchmarking::benchmarks::bench_increment ... ok + test benchmarking::benchmarks::bench_decrement ... ok + +
- ``` +The `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime. -### Run Benchmarks +## Build the Runtime with Benchmarks -You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled: +Compile the runtime with benchmarking enabled to generate the WASM binary using the following command: -1. Run `build` with the feature flag included: +```bash +cargo build --release --features runtime-benchmarks +``` - ```bash - cargo build --features runtime-benchmarks --release - ``` +This command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm` -2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: +The build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. You'll create a different build later for operating your chain in production. - ```bash - touch weights.rs - ``` +## Install the Benchmarking Tool -3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. 
Download the official template from the Polkadot SDK repository and save it in your project folders for future use: +Install the `frame-omni-bencher` CLI tool using the following command: - ```bash - curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ - --output ./pallets/benchmarking/frame-weight-template.hbs - ``` +```bash +cargo install frame-omni-bencher --locked +``` -4. Run the benchmarking tool to measure extrinsic weights: +[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system. + +## Download the Weight Template + +Download the official weight template file using the following commands: + +```bash +curl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ +--output ./pallets/pallet-custom/frame-weight-template.hbs +``` + +The weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information. 
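As an illustration of how such a template drives code generation, here is a deliberately simplified fragment in the same Handlebars style. This is an invented sketch to show the mechanism — it is not the contents of the official `frame-weight-template.hbs`:

```handlebars
{{! Simplified sketch: emit one weight function per recorded benchmark. }}
pub trait WeightInfo {
{{#each benchmarks as |benchmark|}}
    fn {{benchmark.name}}() -> Weight;
{{/each}}
}
```

When the benchmarking tool renders the real template, each placeholder is filled from the recorded benchmark results, producing the final Rust source written to `weights.rs`.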
+ +## Execute Benchmarks + +Run benchmarks for your pallet to generate weight files using the following commands: + +```bash +frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs +``` + +Benchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions. + +??? note "Additional customization" + + You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below: ```bash frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet INSERT_NAME_OF_PALLET \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output weights.rs + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --steps 50 \ + --repeat 20 \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs ``` + + - `--steps 50`: Number of different input values to test when using linear components (default: 50). More steps provide finer granularity for detecting complexity trends but increase benchmarking time. + - `--repeat 20`: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates. + - `--heap-pages 4096`: WASM heap pages allocation. 
Affects available memory during execution. + - `--wasm-execution compiled`: WASM execution method. Use `compiled` for performance closest to production conditions. - !!! tip "Flag definitions" - - **`--runtime`**: The path to your runtime's Wasm. - - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`. - - **`--extrinsic`**: Which extrinsic to test. Using `""` implies all extrinsics will be benchmarked. - - **`--template`**: Defines how weight information should be formatted. - - **`--output`**: Where the output of the auto-generated weights will reside. +## Use Generated Weights -The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity: +After running benchmarks, a `weights.rs` file is generated containing measured weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements. -
- frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet "INSERT_NAME_OF_PALLET" \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output ./weights.rs - ... - 2025-01-15T16:41:33.557045Z INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something - 2025-01-15T16:41:33.564644Z INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error - ... - Created file: "weights.rs" - -
+Follow these steps to use the generated weights with your pallet: + +1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows: + + ```rust title="pallets/pallet-custom/src/lib.rs" + #![cfg_attr(not(feature = "std"), no_std)] -#### Add Benchmark Weights to Pallet + extern crate alloc; + use alloc::vec::Vec; -Once the `weights.rs` is generated, you must integrate it with your pallet. + pub use pallet::*; -1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration: + #[cfg(feature = "runtime-benchmarks")] + mod benchmarking; - ```rust title="lib.rs" pub mod weights; - use crate::weights::WeightInfo; - - /// Configure the pallet by specifying the parameters and types on which it depends. - #[pallet::config] - pub trait Config: frame_system::Config { - // ... - /// A type representing the weights required by the dispatchables of this pallet. - type WeightInfo: WeightInfo; + + #[frame::pallet] + pub mod pallet { + use super::*; + use frame::prelude::*; + use crate::weights::WeightInfo; + // ... rest of pallet } ``` -2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows: + Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits. - ```rust hl_lines="2" title="lib.rs" - #[pallet::call_index(0)] - #[pallet::weight(T::WeightInfo::do_something())] - pub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) } +2. 
Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:

    ```rust title="runtime/src/configs/mod.rs"
    impl pallet_custom::Config for Runtime {
        type RuntimeEvent = RuntimeEvent;
        type CounterMaxValue = ConstU32<1000>;
        type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
    }
    ```

    This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.

??? code "Example generated weight file"

    The generated `weights.rs` file will look similar to this:

    ```rust title="pallets/pallet-custom/src/weights.rs"
    //! Autogenerated weights for `pallet_custom`
    //!
    //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
    //! DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`

    #![cfg_attr(rustfmt, rustfmt_skip)]
    #![allow(unused_parens)]
    #![allow(unused_imports)]
    #![allow(missing_docs)]

    use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
    use core::marker::PhantomData;

-    ```rust title="mod.rs"
-    // Configure pallet.
-    impl pallet_parachain_template::Config for Runtime {
-        // ...
-        type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;
-    }
-    ```

    pub trait WeightInfo {
        fn set_counter_value() -> Weight;
        fn increment() -> Weight;
        fn decrement() -> Weight;
    }

    pub struct SubstrateWeight<T>(PhantomData<T>);
    impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
        fn set_counter_value() -> Weight {
            Weight::from_parts(8_234_000, 0)
                .saturating_add(T::DbWeight::get().reads(1))
                .saturating_add(T::DbWeight::get().writes(1))
        }

        fn increment() -> Weight {
            Weight::from_parts(12_456_000, 0)
                .saturating_add(T::DbWeight::get().reads(2))
                .saturating_add(T::DbWeight::get().writes(2))
        }

        fn decrement() -> Weight {
            Weight::from_parts(11_987_000, 0)
                .saturating_add(T::DbWeight::get().reads(2))
                .saturating_add(T::DbWeight::get().writes(2))
        }
    }
    ```

    The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations.

## Run Your Chain Locally

Now that you've added the pallet to your runtime, you can follow these steps to launch your parachain locally to test the new functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}:

1. Before running your chain, rebuild the production runtime without the `runtime-benchmarks` feature using the following command:

    ```bash
    cargo build --release
    ```

    The `runtime-benchmarks` feature flag adds special host functions that are only available in the benchmarking execution environment. A runtime compiled with benchmarking features will fail to start on a production node.

    This build produces a production-ready WASM runtime at `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm`.

    !!!
note "Compare build types" + - `cargo build --release --features runtime-benchmarks` - Compiles with benchmarking host functions for measurement. Use this ONLY when running benchmarks with `frame-omni-bencher`. + - `cargo build --release` - Compiles production runtime without benchmarking features. Use this for running your chain in production. + +2. Generate a new chain specification file with the updated runtime using the following commands: + + ```bash + chain-spec-builder create -t development \ + --relay-chain paseo \ + --para-id 1000 \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \ + named-preset development + ``` + + This command generates a chain specification file, `chain_spec.json`, for your parachain with the updated runtime, which defines the initial state and configuration of your blockchain, including the runtime WASM code, genesis storage, and network parameters. Generating this new chain spec with your updated runtime ensures nodes starting from this spec will use the correct version of your code with proper weight calculations. + +3. Start the parachain node using the Polkadot Omni Node with the generated chain specification by running the following command: + + ```bash + polkadot-omni-node --chain ./chain_spec.json --dev + ``` + + The node will start and display initialization information, including: + + - The chain specification name + - The node identity and peer ID + - Database location + - Network endpoints (JSON-RPC and Prometheus) + +4. Once the node is running, you will see log messages confirming successful production of blocks similar to the following: + +
+ polkadot-omni-node --chain ./chain_spec.json --dev + [Parachain] 🔨 Initializing Genesis block/state (state: 0x47ce…ec8d, header-hash: 0xeb12…fecc) + [Parachain] 🎁 Prepared block for proposing at 1 (3 ms) ... + [Parachain] 🏆 Imported #1 (0xeb12…fecc → 0xee51…98d2) + [Parachain] 🎁 Prepared block for proposing at 2 (3 ms) ... + [Parachain] 🏆 Imported #2 (0xee51…98d2 → 0x35e0…cc32) + +
+ + The parachain will produce new blocks every few seconds. You can now interact with your pallet's extrinsics through the JSON-RPC endpoint at `http://127.0.0.1:9944` using tools like [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank}. + +## Related Resources -- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}. -- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work. +- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\_blank} +- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\_blank} +- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} +- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} diff --git a/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md b/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md index 6b36c500f..6dd373bcb 100644 --- a/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md +++ b/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md @@ -671,7 +671,7 @@ This command validates all pallet configurations and prepares the build for depl ## Run Your Chain Locally -Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. 
+Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide. ### Generate a Chain Specification diff --git a/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md b/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md index b26d07b1f..06a497f7c 100644 --- a/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md +++ b/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md @@ -1,11 +1,11 @@ --- -title: Pallet Testing +title: Unit Test Pallets description: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets operations. 
categories: Parachains url: https://docs.polkadot.com/parachains/customize-runtime/pallet-development/pallet-testing/ --- -# Pallet Testing +# Unit Test Pallets ## Introduction diff --git a/.ai/site-index.json b/.ai/site-index.json index 66fe0f3ff..04c1362da 100644 --- a/.ai/site-index.json +++ b/.ai/site-index.json @@ -2142,14 +2142,14 @@ }, { "id": "parachains-customize-runtime-pallet-development-benchmark-pallet", - "title": "Benchmarking FRAME Pallets", + "title": "Benchmark Your Pallet", "slug": "parachains-customize-runtime-pallet-development-benchmark-pallet", "categories": [ "Parachains" ], "raw_md_url": "https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md", "html_url": "https://docs.polkadot.com/parachains/customize-runtime/pallet-development/benchmark-pallet/", - "preview": "Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.", + "preview": "Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. 
Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks.", "outline": [ { "depth": 2, @@ -2158,52 +2158,97 @@ }, { "depth": 2, - "title": "The Case for Benchmarking", - "anchor": "the-case-for-benchmarking" + "title": "Prerequisites", + "anchor": "prerequisites" }, { - "depth": 3, - "title": "Benchmarking and Weight", - "anchor": "benchmarking-and-weight" + "depth": 2, + "title": "Create the Benchmarking Module", + "anchor": "create-the-benchmarking-module" }, { "depth": 2, - "title": "Benchmarking Process", - "anchor": "benchmarking-process" + "title": "Define the Weight Trait", + "anchor": "define-the-weight-trait" }, { - "depth": 3, - "title": "Prepare Your Environment", - "anchor": "prepare-your-environment" + "depth": 2, + "title": "Add WeightInfo to Config", + "anchor": "add-weightinfo-to-config" }, { - "depth": 3, - "title": "Write Benchmark Tests", - "anchor": "write-benchmark-tests" + "depth": 2, + "title": "Update Extrinsic Weight Annotations", + "anchor": "update-extrinsic-weight-annotations" }, { - "depth": 3, - "title": "Add Benchmarks to Runtime", - "anchor": "add-benchmarks-to-runtime" + "depth": 2, + "title": "Include the Benchmarking Module", + "anchor": "include-the-benchmarking-module" }, { - "depth": 3, - "title": "Run Benchmarks", - "anchor": "run-benchmarks" + "depth": 2, + "title": "Configure Pallet Dependencies", + "anchor": "configure-pallet-dependencies" }, { "depth": 2, - "title": "Where to Go Next", - "anchor": "where-to-go-next" + "title": "Update Mock Runtime", + "anchor": "update-mock-runtime" + }, + { + "depth": 2, + "title": "Configure Runtime Benchmarking", + "anchor": "configure-runtime-benchmarking" + }, + { + "depth": 2, + "title": "Test Benchmark Compilation", + "anchor": "test-benchmark-compilation" + }, + { + "depth": 2, + 
"title": "Build the Runtime with Benchmarks", + "anchor": "build-the-runtime-with-benchmarks" + }, + { + "depth": 2, + "title": "Install the Benchmarking Tool", + "anchor": "install-the-benchmarking-tool" + }, + { + "depth": 2, + "title": "Download the Weight Template", + "anchor": "download-the-weight-template" + }, + { + "depth": 2, + "title": "Execute Benchmarks", + "anchor": "execute-benchmarks" + }, + { + "depth": 2, + "title": "Use Generated Weights", + "anchor": "use-generated-weights" + }, + { + "depth": 2, + "title": "Run Your Chain Locally", + "anchor": "run-your-chain-locally" + }, + { + "depth": 2, + "title": "Related Resources", + "anchor": "related-resources" } ], "stats": { - "chars": 14715, - "words": 1879, - "headings": 9, - "estimated_token_count_total": 3338 + "chars": 22752, + "words": 2813, + "headings": 18, + "estimated_token_count_total": 5191 }, - "hash": "sha256:915bc91edd56cdedd516e871dbe450d70c9f99fb467cc00ff231ea3a74f61d96", + "hash": "sha256:c1b53ecbfc3dea02097104c7a4c652e61ccc42ad56325a9c80684850a198fbdd", "token_estimator": "heuristic-v1" }, { @@ -2349,12 +2394,12 @@ } ], "stats": { - "chars": 26671, - "words": 3041, + "chars": 26958, + "words": 3085, "headings": 26, - "estimated_token_count_total": 6113 + "estimated_token_count_total": 6194 }, - "hash": "sha256:607e283aaa1295de0af191d97de7f6f87afb722c601a447821fde6a09b97f1af", + "hash": "sha256:dad68ea59fd05fd60dc8890c4cf5615243c7ea879830b0dcf3a5e5e53c3ccec7", "token_estimator": "heuristic-v1" }, { @@ -2445,7 +2490,7 @@ }, { "id": "parachains-customize-runtime-pallet-development-pallet-testing", - "title": "Pallet Testing", + "title": "Unit Test Pallets", "slug": "parachains-customize-runtime-pallet-development-pallet-testing", "categories": [ "Parachains" @@ -2491,12 +2536,12 @@ } ], "stats": { - "chars": 6892, - "words": 911, + "chars": 6895, + "words": 912, "headings": 7, "estimated_token_count_total": 1563 }, - "hash": 
"sha256:8568dfa238b9a649a4e6e60510625c2e7879b76a93187b0b8b8dccf6bc467ae6", + "hash": "sha256:041ccd82f0c1ddfb93be05feb6cf9d7d4a7e37af6caa8fa8fdab5d5538017122", "token_estimator": "heuristic-v1" }, { diff --git a/llms-full.jsonl b/llms-full.jsonl index 643cef3d9..9c1a59263 100644 --- a/llms-full.jsonl +++ b/llms-full.jsonl @@ -268,15 +268,24 @@ {"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 3, "depth": 2, "title": "Recompile the Runtime", "anchor": "recompile-the-runtime", "start_char": 10415, "end_char": 10864, "estimated_token_count": 89, "token_estimator": "heuristic-v1", "text": "## Recompile the Runtime\n\nAfter adding and configuring your pallets in the runtime, the next step is to ensure everything is set up correctly. To do this, recompile the runtime with the following command (make sure you're in the project's root directory):\n\n```bash\ncargo build --release\n```\n\nThis command ensures the runtime compiles without errors, validates the pallet configurations, and prepares the build for subsequent testing or deployment."} {"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 4, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 10864, "end_char": 12339, "estimated_token_count": 365, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nLaunch your parachain locally and start producing blocks:\n\n!!!tip\n Generated chain TestNet specifications include development accounts \"Alice\" and \"Bob.\" These accounts are pre-funded with native parachain currency, allowing you to sign and send TestNet transactions. Take a look at the [Polkadot.js Accounts section](https://polkadot.js.org/apps/#/accounts){target=\\_blank} to view the development accounts for your chain.\n\n1. 
Create a new chain specification file with the updated runtime:\n\n ```bash\n chain-spec-builder create -t development \\\n --relay-chain paseo \\\n --para-id 1000 \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\n named-preset development\n ```\n\n2. Start the omni node with the generated chain specification:\n\n ```bash\n polkadot-omni-node --chain ./chain_spec.json --dev\n ```\n\n3. Verify you can interact with the new pallets using the [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\\_blank} interface. Navigate to the **Extrinsics** tab and check that you can see both pallets:\n\n - Utility pallet\n\n ![](/images/parachains/customize-runtime/pallet-development/add-pallet-to-runtime/add-pallets-to-runtime-01.webp)\n \n\n - Custom pallet\n\n ![](/images/parachains/customize-runtime/pallet-development/add-pallet-to-runtime/add-pallets-to-runtime-02.webp)"} {"page_id": "parachains-customize-runtime-pallet-development-add-pallet-to-runtime", "page_title": "Add Pallets to the Runtime", "index": 5, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 12339, "end_char": 13091, "estimated_token_count": 183, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Tutorial __Deploy on Paseo TestNet__\n\n ---\n\n Deploy your Polkadot SDK blockchain on Paseo! Follow this step-by-step guide for a seamless journey to a successful TestNet deployment.\n\n [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/deploy-to-testnet/)\n\n- Tutorial __Pallet Benchmarking (Optional)__\n\n ---\n\n Discover how to measure extrinsic costs and assign precise weights to optimize your pallet for accurate fees and runtime performance.\n\n [:octicons-arrow-right-24: Get Started](/tutorials/polkadot-sdk/parachains/zero-to-hero/pallet-benchmarking/)\n\n
"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 16, "end_char": 1205, "estimated_token_count": 235, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nBenchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.\n\nThe Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. 
You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 1, "depth": 2, "title": "The Case for Benchmarking", "anchor": "the-case-for-benchmarking", "start_char": 1205, "end_char": 1999, "estimated_token_count": 114, "token_estimator": "heuristic-v1", "text": "## The Case for Benchmarking\n\nBenchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights.\n\nBenchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 2, "depth": 3, "title": "Benchmarking and Weight", "anchor": "benchmarking-and-weight", "start_char": 1999, "end_char": 3665, "estimated_token_count": 321, "token_estimator": "heuristic-v1", "text": "### Benchmarking and Weight \n\nIn Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as:\n\n- Computational complexity.\n- Storage complexity (proof size).\n- Database reads and writes.\n- Hardware specifications.\n\nBenchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. 
The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model.\n \nBecause weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.\n\nWithin FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:\n\n```rust hl_lines=\"2\"\n#[pallet::call_index(0)]\n#[pallet::weight(T::WeightInfo::do_something())]\npub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) }\n```\n\nThe `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 3, "depth": 2, "title": "Benchmarking Process", "anchor": "benchmarking-process", "start_char": 3665, "end_char": 4208, "estimated_token_count": 98, "token_estimator": "heuristic-v1", "text": "## Benchmarking Process\n\nBenchmarking a pallet involves the following steps: \n\n1. Creating a `benchmarking.rs` file within your pallet's structure.\n2. Writing a benchmarking test for each extrinsic.\n3. 
Executing the benchmarking tool to calculate weights based on performance metrics.\n\nThe benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 4, "depth": 3, "title": "Prepare Your Environment", "anchor": "prepare-your-environment", "start_char": 4208, "end_char": 5262, "estimated_token_count": 293, "token_estimator": "heuristic-v1", "text": "### Prepare Your Environment\n\nInstall the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\\_blank} command-line tool:\n\n```bash\ncargo install frame-omni-bencher\n```\n\nBefore writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following:\n\n```toml title=\"Cargo.toml\"\nframe-benchmarking = { version = \"37.0.0\", default-features = false }\n```\n\nYou must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:\n\n```toml title=\"Cargo.toml\"\nruntime-benchmarks = [\n \"frame-benchmarking/runtime-benchmarks\",\n \"frame-support/runtime-benchmarks\",\n \"frame-system/runtime-benchmarks\",\n \"sp-runtime/runtime-benchmarks\",\n]\n```\n\nLastly, ensure that `frame-benchmarking` is included in `std = []`: \n\n```toml title=\"Cargo.toml\"\nstd = [\n # ...\n \"frame-benchmarking?/std\",\n # ...\n]\n```\n\nOnce complete, you have the required dependencies for writing benchmark tests for your pallet."} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 5, "depth": 3, "title": "Write Benchmark Tests", "anchor": "write-benchmark-tests", 
"start_char": 5262, "end_char": 7718, "estimated_token_count": 645, "token_estimator": "heuristic-v1", "text": "### Write Benchmark Tests\n\nCreate a `benchmarking.rs` file in your pallet's `src/`. Your directory structure should look similar to the following:\n\n```\nmy-pallet/\n├── src/\n│ ├── lib.rs # Main pallet implementation\n│ └── benchmarking.rs # Benchmarking\n└── Cargo.toml\n```\n\nWith the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\\_blank} to get started as follows:\n\n```rust title=\"benchmarking.rs (starter template)\"\n//! Benchmarking setup for pallet-template\n#![cfg(feature = \"runtime-benchmarks\")]\n\nuse super::*;\nuse frame_benchmarking::v2::*;\n\n#[benchmarks]\nmod benchmarks {\n\tuse super::*;\n\t#[cfg(test)]\n\tuse crate::pallet::Pallet as Template;\n\tuse frame_system::RawOrigin;\n\n\t#[benchmark]\n\tfn do_something() {\n\t\tlet caller: T::AccountId = whitelisted_caller();\n\t\t#[extrinsic_call]\n\t\tdo_something(RawOrigin::Signed(caller), 100);\n\n\t\tassert_eq!(Something::::get().map(|v| v.block_number), Some(100u32.into()));\n\t}\n\n\t#[benchmark]\n\tfn cause_error() {\n\t\tSomething::::put(CompositeStruct { block_number: 100u32.into() });\n\t\tlet caller: T::AccountId = whitelisted_caller();\n\t\t#[extrinsic_call]\n\t\tcause_error(RawOrigin::Signed(caller));\n\n\t\tassert_eq!(Something::::get().map(|v| v.block_number), Some(101u32.into()));\n\t}\n\n\timpl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);\n}\n```\n\nIn your benchmarking tests, employ these best practices:\n\n- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. 
Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing.\n- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\\_blank} docs for more details.\n- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context.\n\nAdd the `benchmarking` module to your pallet. In the pallet `lib.rs` file add the following:\n\n```rust\n#[cfg(feature = \"runtime-benchmarks\")]\nmod benchmarking;\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 6, "depth": 3, "title": "Add Benchmarks to Runtime", "anchor": "add-benchmarks-to-runtime", "start_char": 7718, "end_char": 9847, "estimated_token_count": 418, "token_estimator": "heuristic-v1", "text": "### Add Benchmarks to Runtime\n\nBefore running the benchmarking tool, you must integrate benchmarks with your runtime as follows:\n\n1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. 
This file will contain the macro that registers all pallets for benchmarking along with their respective configurations:\n\n ```rust title=\"benchmarks.rs\"\n frame_benchmarking::define_benchmarks!(\n [frame_system, SystemBench::]\n [pallet_parachain_template, TemplatePallet]\n [pallet_balances, Balances]\n [pallet_session, SessionBench::]\n [pallet_timestamp, Timestamp]\n [pallet_message_queue, MessageQueue]\n [pallet_sudo, Sudo]\n [pallet_collator_selection, CollatorSelection]\n [cumulus_pallet_parachain_system, ParachainSystem]\n [cumulus_pallet_xcmp_queue, XcmpQueue]\n );\n ```\n\n For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown:\n ```rust title=\"benchmarks.rs\" hl_lines=\"3\"\n frame_benchmarking::define_benchmarks!(\n [frame_system, SystemBench::]\n [pallet_parachain_template, TemplatePallet]\n );\n ```\n\n !!!warning \"Updating `define_benchmarks!` macro is required\"\n Any pallet that needs to be benchmarked must be included in the [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\\_blank} macro. The CLI will only be able to access and benchmark pallets that are registered here.\n\n2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this:\n\n ```rust title=\"lib.rs\"\n #[cfg(feature = \"runtime-benchmarks\")]\n mod benchmarks;\n ```\n\n The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code.\n\n3. 
Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`:\n\n ```toml\n runtime-benchmarks = [\n # ...\n \"pallet_parachain_template/runtime-benchmarks\",\n ]\n\n ```"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 7, "depth": 3, "title": "Run Benchmarks", "anchor": "run-benchmarks", "start_char": 9847, "end_char": 14232, "estimated_token_count": 1100, "token_estimator": "heuristic-v1", "text": "### Run Benchmarks\n\nYou can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled:\n\n1. Run `build` with the feature flag included:\n\n ```bash\n cargo build --features runtime-benchmarks --release\n ```\n\n2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations:\n\n ```bash\n touch weights.rs\n ```\n\n3. Before running the benchmarking tool, you'll need a template file that defines how weight information should be formatted. Download the official template from the Polkadot SDK repository and save it in your project folders for future use:\n\n ```bash\n curl https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \\\n --output ./pallets/benchmarking/frame-weight-template.hbs\n ```\n\n4. Run the benchmarking tool to measure extrinsic weights:\n\n ```bash\n frame-omni-bencher v1 benchmark pallet \\\n --runtime INSERT_PATH_TO_WASM_RUNTIME \\\n --pallet INSERT_NAME_OF_PALLET \\\n --extrinsic \"\" \\\n --template ./frame-weight-template.hbs \\\n --output weights.rs\n ```\n\n !!! 
tip \"Flag definitions\"\n - **`--runtime`**: The path to your runtime's Wasm.\n - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`.\n - **`--extrinsic`**: Which extrinsic to test. Using `\"\"` implies all extrinsics will be benchmarked.\n - **`--template`**: Defines how weight information should be formatted.\n - **`--output`**: Where the output of the auto-generated weights will reside.\n\nThe generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity:\n\n
\n frame-omni-bencher v1 benchmark pallet \\\n --runtime INSERT_PATH_TO_WASM_RUNTIME \\\n --pallet \"INSERT_NAME_OF_PALLET\" \\\n --extrinsic \"\" \\\n --template ./frame-weight-template.hbs \\\n --output ./weights.rs\n ...\n 2025-01-15T16:41:33.557045Z INFO polkadot_sdk_frame::benchmark::pallet: [ 0 % ] Starting benchmark: pallet_parachain_template::do_something\n 2025-01-15T16:41:33.564644Z INFO polkadot_sdk_frame::benchmark::pallet: [ 50 % ] Starting benchmark: pallet_parachain_template::cause_error\n ...\n Created file: \"weights.rs\"\n \n
\n\n#### Add Benchmark Weights to Pallet\n\nOnce the `weights.rs` is generated, you must integrate it with your pallet. \n\n1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration:\n\n ```rust title=\"lib.rs\"\n pub mod weights;\n use crate::weights::WeightInfo;\n\n /// Configure the pallet by specifying the parameters and types on which it depends.\n #[pallet::config]\n pub trait Config: frame_system::Config {\n // ...\n /// A type representing the weights required by the dispatchables of this pallet.\n type WeightInfo: WeightInfo;\n }\n ```\n\n2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows:\n\n ```rust hl_lines=\"2\" title=\"lib.rs\"\n #[pallet::call_index(0)]\n #[pallet::weight(T::WeightInfo::do_something())]\n pub fn do_something(origin: OriginFor) -> DispatchResultWithPostInfo { Ok(()) }\n ```\n\n3. Finally, configure the actual weight values in your runtime. 
In `runtime/src/config/mod.rs`, add the following code:\n\n ```rust title=\"mod.rs\"\n // Configure pallet.\n impl pallet_parachain_template::Config for Runtime {\n // ...\n type WeightInfo = pallet_parachain_template::weights::SubstrateWeight;\n }\n ```"} -{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmarking FRAME Pallets", "index": 8, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 14232, "end_char": 14715, "estimated_token_count": 114, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}.\n- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 0, "end_char": 631, "estimated_token_count": 119, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nBenchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks.\n\nThis guide continues building on what you've learned through the pallet development series. 
You'll learn how to benchmark the custom counter pallet extrinsics and integrate the generated weights into your runtime."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 631, "end_char": 1611, "estimated_token_count": 262, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you have:\n\n- Completed the previous pallet development tutorials:\n - [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\\_blank}\n - [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\\_blank}\n - [Unit Test Pallets](/parachains/customize-runtime/pallet-development/pallet-testing/){target=\\_blank}\n- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\\_blank}.\n- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\\_blank}.\n- Familiarity setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\\_blank}. 
Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\\_blank} guide for instructions if needed."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 2, "depth": 2, "title": "Create the Benchmarking Module", "anchor": "create-the-benchmarking-module", "start_char": 1611, "end_char": 3218, "estimated_token_count": 430, "token_estimator": "heuristic-v1", "text": "## Create the Benchmarking Module\n\nCreate a new file `benchmarking.rs` in your pallet's `src` directory and add the following code:\n\n```rust title=\"pallets/pallet-custom/src/benchmarking.rs\"\n#![cfg(feature = \"runtime-benchmarks\")]\n\nuse super::*;\nuse frame::deps::frame_benchmarking::v2::*;\nuse frame::benchmarking::prelude::RawOrigin;\n\n#[benchmarks]\nmod benchmarks {\n use super::*;\n\n #[benchmark]\n fn set_counter_value() {\n let new_value: u32 = 100;\n\n #[extrinsic_call]\n _(RawOrigin::Root, new_value);\n\n assert_eq!(CounterValue::<T>::get(), new_value);\n }\n\n #[benchmark]\n fn increment() {\n let caller: T::AccountId = whitelisted_caller();\n let amount: u32 = 50;\n\n #[extrinsic_call]\n _(RawOrigin::Signed(caller.clone()), amount);\n\n assert_eq!(CounterValue::<T>::get(), amount);\n assert_eq!(UserInteractions::<T>::get(caller), 1);\n }\n\n #[benchmark]\n fn decrement() {\n // First, set the counter to a non-zero value\n CounterValue::<T>::put(100);\n\n let caller: T::AccountId = whitelisted_caller();\n let amount: u32 = 30;\n\n #[extrinsic_call]\n _(RawOrigin::Signed(caller.clone()), amount);\n\n assert_eq!(CounterValue::<T>::get(), 70);\n assert_eq!(UserInteractions::<T>::get(caller), 1);\n }\n\n impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);\n}\n```\n\nThis module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\\_blank} for your pallet."} 
+{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 3, "depth": 2, "title": "Define the Weight Trait", "anchor": "define-the-weight-trait", "start_char": 3218, "end_char": 4168, "estimated_token_count": 201, "token_estimator": "heuristic-v1", "text": "## Define the Weight Trait\n\nAdd a `weights` module to your pallet that defines the `WeightInfo` trait using the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#[frame::pallet]\npub mod pallet {\n use frame::prelude::*;\n pub use weights::WeightInfo;\n\n pub mod weights {\n use frame::prelude::*;\n\n pub trait WeightInfo {\n fn set_counter_value() -> Weight;\n fn increment() -> Weight;\n fn decrement() -> Weight;\n }\n\n impl WeightInfo for () {\n fn set_counter_value() -> Weight {\n Weight::from_parts(10_000, 0)\n }\n fn increment() -> Weight {\n Weight::from_parts(15_000, 0)\n }\n fn decrement() -> Weight {\n Weight::from_parts(15_000, 0)\n }\n }\n }\n\n // ... 
rest of pallet\n}\n```\n\nThe `()` implementation provides placeholder weights for development."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 4, "depth": 2, "title": "Add WeightInfo to Config", "anchor": "add-weightinfo-to-config", "start_char": 4168, "end_char": 4994, "estimated_token_count": 200, "token_estimator": "heuristic-v1", "text": "## Add WeightInfo to Config \n\nUpdate your pallet's `Config` trait to include `WeightInfo` by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#[pallet::config]\npub trait Config: frame_system::Config {\n type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;\n\n #[pallet::constant]\n type CounterMaxValue: Get<u32>;\n\n type WeightInfo: weights::WeightInfo;\n}\n```\n\nThe [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration. By making `WeightInfo` an associated type in the `Config` trait, you will enable each runtime that uses your pallet to specify which weight implementation to use."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 5, "depth": 2, "title": "Update Extrinsic Weight Annotations", "anchor": "update-extrinsic-weight-annotations", "start_char": 4994, "end_char": 6182, "estimated_token_count": 291, "token_estimator": "heuristic-v1", "text": "## Update Extrinsic Weight Annotations\n\nReplace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#[pallet::call]\nimpl<T: Config> Pallet<T> {\n #[pallet::call_index(0)]\n #[pallet::weight(T::WeightInfo::set_counter_value())]\n pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {\n // ... 
implementation\n }\n\n #[pallet::call_index(1)]\n #[pallet::weight(T::WeightInfo::increment())]\n pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {\n // ... implementation\n }\n\n #[pallet::call_index(2)]\n #[pallet::weight(T::WeightInfo::decrement())]\n pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {\n // ... implementation\n }\n}\n```\n\nBy calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 6, "depth": 2, "title": "Include the Benchmarking Module", "anchor": "include-the-benchmarking-module", "start_char": 6182, "end_char": 6719, "estimated_token_count": 141, "token_estimator": "heuristic-v1", "text": "## Include the Benchmarking Module\n\nAt the top of your `lib.rs`, add the module declaration by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/lib.rs\"\n#![cfg_attr(not(feature = \"std\"), no_std)]\n\nextern crate alloc;\nuse alloc::vec::Vec;\n\npub use pallet::*;\n\n#[cfg(feature = \"runtime-benchmarks\")]\nmod benchmarking;\n\n// Additional pallet code\n```\n\nThe `#[cfg(feature = \"runtime-benchmarks\")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 7, "depth": 2, "title": "Configure Pallet 
Dependencies\n\nUpdate your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code:\n\n```toml title=\"pallets/pallet-custom/Cargo.toml\"\n[dependencies]\ncodec = { features = [\"derive\"], workspace = true }\nscale-info = { features = [\"derive\"], workspace = true }\nframe = { features = [\"experimental\", \"runtime\"], workspace = true }\n\n[features]\ndefault = [\"std\"]\nruntime-benchmarks = [\n \"frame/runtime-benchmarks\",\n]\nstd = [\n \"codec/std\",\n \"scale-info/std\",\n \"frame/std\",\n]\n```\n\nThe Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 8, "depth": 2, "title": "Update Mock Runtime", "anchor": "update-mock-runtime", "start_char": 7629, "end_char": 8128, "estimated_token_count": 109, "token_estimator": "heuristic-v1", "text": "## Update Mock Runtime\n\nAdd the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code:\n\n```rust title=\"pallets/pallet-custom/src/mock.rs\"\nimpl pallet_custom::Config for Test {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n type WeightInfo = ();\n}\n```\n\nIn your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 9, "depth": 2, "title": "Configure Runtime Benchmarking", "anchor": "configure-runtime-benchmarking", "start_char": 8128, 
"end_char": 9970, "estimated_token_count": 390, "token_estimator": "heuristic-v1", "text": "## Configure Runtime Benchmarking\n\nTo execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration:\n\n1. **Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows:\n\n ```toml title=\"runtime/Cargo.toml\"\n runtime-benchmarks = [\n \"cumulus-pallet-parachain-system/runtime-benchmarks\",\n \"hex-literal\",\n \"pallet-parachain-template/runtime-benchmarks\",\n \"polkadot-sdk/runtime-benchmarks\",\n \"pallet-custom/runtime-benchmarks\",\n ]\n ```\n\n When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included.\n\n2. **Update runtime configuration**: Configure your pallet in the runtime with the placeholder `()` weights for now; after the benchmarks generate a `weights.rs` file, you can swap in the benchmarked implementation:\n\n ```rust title=\"runtime/src/configs/mod.rs\"\n impl pallet_custom::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n type WeightInfo = ();\n }\n ```\n\n3. **Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows:\n\n ```rust title=\"runtime/src/benchmarks.rs\"\n polkadot_sdk::frame_benchmarking::define_benchmarks!(\n [frame_system, SystemBench::<Runtime>]\n [pallet_balances, Balances]\n // ... 
other pallets\n [pallet_custom, CustomPallet]\n );\n ```\n\n The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 10, "depth": 2, "title": "Test Benchmark Compilation", "anchor": "test-benchmark-compilation", "start_char": 9970, "end_char": 10962, "estimated_token_count": 245, "token_estimator": "heuristic-v1", "text": "## Test Benchmark Compilation\n\nRun the following command to verify your benchmarks compile and run as tests:\n\n```bash\ncargo test -p pallet-custom --features runtime-benchmarks\n```\n\nYou will see terminal output similar to the following as your benchmark tests pass:\n\n
\n cargo test -p pallet-custom --features runtime-benchmarks\n test benchmarking::benchmarks::bench_set_counter_value ... ok\n test benchmarking::benchmarks::bench_increment ... ok\n test benchmarking::benchmarks::bench_decrement ... ok\n \n
\n\nThe `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 11, "depth": 2, "title": "Build the Runtime with Benchmarks", "anchor": "build-the-runtime-with-benchmarks", "start_char": 10962, "end_char": 11657, "estimated_token_count": 123, "token_estimator": "heuristic-v1", "text": "## Build the Runtime with Benchmarks\n\nCompile the runtime with benchmarking enabled to generate the WASM binary using the following command:\n\n```bash\ncargo build --release --features runtime-benchmarks\n```\n\nThis command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm`\n\nThe build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. 
You'll create a different build later for operating your chain in production."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 12, "depth": 2, "title": "Install the Benchmarking Tool", "anchor": "install-the-benchmarking-tool", "start_char": 11657, "end_char": 12222, "estimated_token_count": 121, "token_estimator": "heuristic-v1", "text": "## Install the Benchmarking Tool\n\nInstall the `frame-omni-bencher` CLI tool using the following command:\n\n```bash\ncargo install frame-omni-bencher --locked\n```\n\n[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 13, "depth": 2, "title": "Download the Weight Template", "anchor": "download-the-weight-template", "start_char": 12222, "end_char": 13024, "estimated_token_count": 161, "token_estimator": "heuristic-v1", "text": "## Download the Weight Template\n\nDownload the official weight template file using the following commands:\n\n```bash\ncurl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \\\n--output ./pallets/pallet-custom/frame-weight-template.hbs\n```\n\nThe weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. 
Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 14, "depth": 2, "title": "Execute Benchmarks", "anchor": "execute-benchmarks", "start_char": 13024, "end_char": 15060, "estimated_token_count": 407, "token_estimator": "heuristic-v1", "text": "## Execute Benchmarks\n\nRun benchmarks for your pallet to generate weight files using the following commands:\n\n```bash\nframe-omni-bencher v1 benchmark pallet \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \\\n --pallet pallet_custom \\\n --extrinsic \"\" \\\n --template ./pallets/pallet-custom/frame-weight-template.hbs \\\n --output ./pallets/pallet-custom/src/weights.rs\n```\n\nBenchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions.\n\n??? note \"Additional customization\"\n\n You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below:\n\n ```bash\n frame-omni-bencher v1 benchmark pallet \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \\\n --pallet pallet_custom \\\n --extrinsic \"\" \\\n --steps 50 \\\n --repeat 20 \\\n --template ./pallets/pallet-custom/frame-weight-template.hbs \\\n --output ./pallets/pallet-custom/src/weights.rs\n ```\n \n - `--steps 50`: Number of different input values to test when using linear components (default: 50). 
More steps provide finer granularity for detecting complexity trends but increase benchmarking time.\n - `--repeat 20`: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates.\n - `--heap-pages 4096`: WASM heap pages allocation. Affects available memory during execution.\n - `--wasm-execution compiled`: WASM execution method. Use `compiled` for performance closest to production conditions."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 15, "depth": 2, "title": "Use Generated Weights", "anchor": "use-generated-weights", "start_char": 15060, "end_char": 18735, "estimated_token_count": 831, "token_estimator": "heuristic-v1", "text": "## Use Generated Weights\n\nAfter running benchmarks, a `weights.rs` file is generated containing measured weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements.\n\nFollow these steps to use the generated weights with your pallet:\n\n1. Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows:\n\n ```rust title=\"pallets/pallet-custom/src/lib.rs\"\n #![cfg_attr(not(feature = \"std\"), no_std)]\n\n extern crate alloc;\n use alloc::vec::Vec;\n\n pub use pallet::*;\n\n #[cfg(feature = \"runtime-benchmarks\")]\n mod benchmarking;\n\n pub mod weights;\n\n #[frame::pallet]\n pub mod pallet {\n use super::*;\n use frame::prelude::*;\n use crate::weights::WeightInfo;\n // ... 
rest of pallet\n }\n ```\n\n Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits.\n\n2. Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:\n\n ```rust title=\"runtime/src/configs/mod.rs\"\n impl pallet_custom::Config for Runtime {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;\n }\n ```\n\n This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.\n\n??? code \"Example generated weight file\"\n \n The generated `weights.rs` file will look similar to this:\n\n ```rust title=\"pallets/pallet-custom/src/weights.rs\"\n //! Autogenerated weights for `pallet_custom`\n //!\n //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0\n //! 
DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`\n\n #![cfg_attr(rustfmt, rustfmt_skip)]\n #![allow(unused_parens)]\n #![allow(unused_imports)]\n #![allow(missing_docs)]\n\n use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};\n use core::marker::PhantomData;\n\n pub trait WeightInfo {\n fn set_counter_value() -> Weight;\n fn increment() -> Weight;\n fn decrement() -> Weight;\n }\n\n pub struct SubstrateWeight<T>(PhantomData<T>);\n impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {\n fn set_counter_value() -> Weight {\n Weight::from_parts(8_234_000, 0)\n .saturating_add(T::DbWeight::get().reads(1))\n .saturating_add(T::DbWeight::get().writes(1))\n }\n\n fn increment() -> Weight {\n Weight::from_parts(12_456_000, 0)\n .saturating_add(T::DbWeight::get().reads(2))\n .saturating_add(T::DbWeight::get().writes(2))\n }\n\n fn decrement() -> Weight {\n Weight::from_parts(11_987_000, 0)\n .saturating_add(T::DbWeight::get().reads(2))\n .saturating_add(T::DbWeight::get().writes(2))\n }\n }\n ```\n\n The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\\_blank} accounts for database read and write operations."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 16, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 18735, "end_char": 22211, "estimated_token_count": 795, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nNow that you've added the pallet to your runtime, you can follow these steps to launch your parachain locally to test the new functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\\_blank}: \n\n1. 
Before running your chain, rebuild the production runtime without the `runtime-benchmarks` feature using the following command:\n\n ```bash\n cargo build --release\n ```\n\n The `runtime-benchmarks` feature flag adds special host functions that are only available in the benchmarking execution environment. A runtime compiled with benchmarking features will fail to start on a production node.\n\n This build produces a production-ready WASM runtime at `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm`.\n\n !!! note \"Compare build types\"\n - `cargo build --release --features runtime-benchmarks` - Compiles with benchmarking host functions for measurement. Use this ONLY when running benchmarks with `frame-omni-bencher`.\n - `cargo build --release` - Compiles production runtime without benchmarking features. Use this for running your chain in production.\n\n2. Generate a new chain specification file with the updated runtime using the following commands:\n\n ```bash\n chain-spec-builder create -t development \\\n --relay-chain paseo \\\n --para-id 1000 \\\n --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\n named-preset development\n ```\n\n This command generates a chain specification file, `chain_spec.json`, for your parachain with the updated runtime, which defines the initial state and configuration of your blockchain, including the runtime WASM code, genesis storage, and network parameters. Generating this new chain spec with your updated runtime ensures nodes starting from this spec will use the correct version of your code with proper weight calculations.\n\n3. 
Start the parachain node using the Polkadot Omni Node with the generated chain specification by running the following command:\n\n ```bash\n polkadot-omni-node --chain ./chain_spec.json --dev\n ```\n\n The node will start and display initialization information, including:\n\n - The chain specification name\n - The node identity and peer ID\n - Database location\n - Network endpoints (JSON-RPC and Prometheus)\n\n4. Once the node is running, you will see log messages confirming successful production of blocks similar to the following:\n\n
\n polkadot-omni-node --chain ./chain_spec.json --dev\n [Parachain] 🔨 Initializing Genesis block/state (state: 0x47ce…ec8d, header-hash: 0xeb12…fecc)\n [Parachain] 🎁 Prepared block for proposing at 1 (3 ms) ...\n [Parachain] 🏆 Imported #1 (0xeb12…fecc → 0xee51…98d2)\n [Parachain] 🎁 Prepared block for proposing at 2 (3 ms) ...\n [Parachain] 🏆 Imported #2 (0xee51…98d2 → 0x35e0…cc32)\n \n
\n\n The parachain will produce new blocks every few seconds. You can now interact with your pallet's extrinsics through the JSON-RPC endpoint at `http://127.0.0.1:9944` using tools like [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\\_blank}."} +{"page_id": "parachains-customize-runtime-pallet-development-benchmark-pallet", "page_title": "Benchmark Your Pallet", "index": 17, "depth": 2, "title": "Related Resources", "anchor": "related-resources", "start_char": 22211, "end_char": 22752, "estimated_token_count": 153, "token_estimator": "heuristic-v1", "text": "## Related Resources\n\n- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\\_blank}\n- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\\_blank}\n- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\\_blank}\n- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\\_blank}"} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 26, "end_char": 847, "estimated_token_count": 167, "token_estimator": "heuristic-v1", "text": "## Introduction\n\n[Framework for Runtime Aggregation of Modular Entities (FRAME)](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/frame_runtime/index.html){target=\\_blank} provides a powerful set of tools for blockchain development through modular components called [pallets](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/polkadot_sdk/frame_runtime/pallet/index.html){target=\\_blank}. These Rust-based runtime modules allow you to build custom blockchain functionality with precision and flexibility. 
While FRAME includes a library of pre-built pallets, its true strength lies in creating custom pallets tailored to your specific needs.\n\nIn this guide, you'll learn how to build a custom counter pallet from scratch that demonstrates core pallet development concepts."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 847, "end_char": 1217, "estimated_token_count": 99, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you have:\n\n- [Polkadot SDK dependencies installed](/parachains/install-polkadot-sdk/){target=\\_blank}.\n- A [Polkadot SDK Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\\_blank} set up locally.\n- Basic familiarity with [FRAME concepts](/parachains/customize-runtime/){target=\\_blank}."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 2, "depth": 2, "title": "Core Pallet Components", "anchor": "core-pallet-components", "start_char": 1217, "end_char": 2092, "estimated_token_count": 193, "token_estimator": "heuristic-v1", "text": "## Core Pallet Components\n\nAs you build your custom pallet, you'll work with these key sections:\n\n- **Imports and dependencies**: Bring in necessary FRAME libraries and external modules.\n- **Runtime configuration trait**: Specify types and constants for pallet-runtime interaction.\n- **Runtime events**: Define signals that communicate state changes.\n- **Runtime errors**: Define error types returned from dispatchable calls.\n- **Runtime storage**: Declare on-chain storage items for your pallet's state.\n- **Genesis configuration**: Set initial blockchain state.\n- **Dispatchable functions (extrinsics)**: Create callable functions for user interactions.\n\nFor additional macros beyond those covered 
here, refer to the [pallet_macros](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/index.html){target=\\_blank} section of the Polkadot SDK Docs."} @@ -297,12 +306,12 @@ {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 17, "depth": 3, "title": "Add to Runtime Construct", "anchor": "add-to-runtime-construct", "start_char": 22324, "end_char": 23326, "estimated_token_count": 214, "token_estimator": "heuristic-v1", "text": "### Add to Runtime Construct\n\nIn the `runtime/src/lib.rs` file, locate the [`#[frame_support::runtime]`](https://paritytech.github.io/polkadot-sdk/master/frame_support/attr.runtime.html){target=\\_blank} section and add your pallet with a unique `pallet_index`:\n\n```rust title=\"runtime/src/lib.rs\"\n#[frame_support::runtime]\nmod runtime {\n #[runtime::runtime]\n #[runtime::derive(\n RuntimeCall,\n RuntimeEvent,\n RuntimeError,\n RuntimeOrigin,\n RuntimeTask,\n RuntimeFreezeReason,\n RuntimeHoldReason,\n RuntimeSlashReason,\n RuntimeLockId,\n RuntimeViewFunction\n )]\n pub struct Runtime;\n\n #[runtime::pallet_index(0)]\n pub type System = frame_system;\n\n // ... other pallets\n\n #[runtime::pallet_index(51)]\n pub type CustomPallet = pallet_custom;\n}\n```\n\n!!!warning\n Each pallet must have a unique index. Duplicate indices will cause compilation errors. Choose an index that doesn't conflict with existing pallets."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 18, "depth": 3, "title": "Configure Genesis for Your Runtime", "anchor": "configure-genesis-for-your-runtime", "start_char": 23326, "end_char": 23824, "estimated_token_count": 100, "token_estimator": "heuristic-v1", "text": "### Configure Genesis for Your Runtime\n\nTo set initial values for your pallet when the chain starts, you'll need to configure the genesis in your chain specification. 
Genesis configuration is typically done in the `node/src/chain_spec.rs` file or when generating the chain specification.\n\nFor development and testing, you can use the default values provided by the `#[derive(DefaultNoBound)]` macro. For production networks, you'll want to explicitly set these values in your chain specification."} {"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 19, "depth": 3, "title": "Verify Runtime Compilation", "anchor": "verify-runtime-compilation", "start_char": 23824, "end_char": 24047, "estimated_token_count": 41, "token_estimator": "heuristic-v1", "text": "### Verify Runtime Compilation\n\nCompile the runtime to ensure everything is configured correctly:\n\n```bash\ncargo build --release\n```\n\nThis command validates all pallet configurations and prepares the build for deployment."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 20, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 24047, "end_char": 24235, "estimated_token_count": 47, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nLaunch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\\_blank}."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 21, "depth": 3, "title": "Generate a Chain Specification", "anchor": "generate-a-chain-specification", "start_char": 24235, "end_char": 24644, "estimated_token_count": 92, "token_estimator": "heuristic-v1", "text": "### Generate a Chain Specification\n\nCreate a chain specification file with the updated runtime:\n\n```bash\nchain-spec-builder create -t development \\\n--relay-chain paseo \\\n--para-id 1000 \\\n--runtime 
./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\nnamed-preset development\n```\n\nThis command generates a `chain_spec.json` that includes your custom pallet."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 22, "depth": 3, "title": "Start the Parachain Node", "anchor": "start-the-parachain-node", "start_char": 24644, "end_char": 24827, "estimated_token_count": 44, "token_estimator": "heuristic-v1", "text": "### Start the Parachain Node\n\nLaunch the parachain:\n\n```bash\npolkadot-omni-node --chain ./chain_spec.json --dev\n```\n\nVerify the node starts successfully and begins producing blocks."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 23, "depth": 2, "title": "Interact with Your Pallet", "anchor": "interact-with-your-pallet", "start_char": 24827, "end_char": 25599, "estimated_token_count": 234, "token_estimator": "heuristic-v1", "text": "## Interact with Your Pallet\n\nUse the Polkadot.js Apps interface to test your pallet:\n\n1. Navigate to [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\\_blank}.\n\n2. Ensure you're connected to your local node at `ws://127.0.0.1:9944`.\n\n3. Go to **Developer** > **Extrinsics**.\n\n4. Locate **customPallet** in the pallet dropdown.\n\n5. 
You should see the available extrinsics:\n\n - **`increment(amount)`**: Increase the counter by a specified amount.\n - **`decrement(amount)`**: Decrease the counter by a specified amount.\n - **`setCounterValue(newValue)`**: Set counter to a specific value (requires sudo/root).\n\n![](/images/parachains/customize-runtime/pallet-development/create-a-pallet/create-a-pallet-01.webp)"} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 24, "depth": 2, "title": "Key Takeaways", "anchor": "key-takeaways", "start_char": 25599, "end_char": 26322, "estimated_token_count": 129, "token_estimator": "heuristic-v1", "text": "## Key Takeaways\n\nYou've successfully created and integrated a custom pallet into a Polkadot SDK-based runtime. You have now successfully:\n\n- Defined runtime-specific types and constants via the `Config` trait.\n- Implemented on-chain state using `StorageValue` and `StorageMap`.\n- Created signals to communicate state changes to external systems.\n- Established clear error handling with descriptive error types.\n- Configured initial blockchain state for both production and testing.\n- Built callable functions with proper validation and access control.\n- Added the pallet to a runtime and tested it locally.\n\nThese components form the foundation for developing sophisticated blockchain logic in Polkadot SDK-based chains."} -{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 25, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 26322, "end_char": 26671, "estimated_token_count": 86, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Guide __Mock Your Runtime__\n\n ---\n\n Learn to create a mock runtime environment for testing your pallet in isolation before integration.\n\n [:octicons-arrow-right-24: Continue](/parachains/customize-runtime/pallet-development/mock-runtime/)\n\n
"} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 20, "depth": 2, "title": "Run Your Chain Locally", "anchor": "run-your-chain-locally", "start_char": 24047, "end_char": 24522, "estimated_token_count": 128, "token_estimator": "heuristic-v1", "text": "## Run Your Chain Locally\n\nLaunch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\\_blank} guide."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 21, "depth": 3, "title": "Generate a Chain Specification", "anchor": "generate-a-chain-specification", "start_char": 24522, "end_char": 24931, "estimated_token_count": 92, "token_estimator": "heuristic-v1", "text": "### Generate a Chain Specification\n\nCreate a chain specification file with the updated runtime:\n\n```bash\nchain-spec-builder create -t development \\\n--relay-chain paseo \\\n--para-id 1000 \\\n--runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \\\nnamed-preset development\n```\n\nThis command generates a `chain_spec.json` that includes your custom pallet."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 22, "depth": 3, "title": "Start the Parachain Node", "anchor": "start-the-parachain-node", "start_char": 24931, "end_char": 25114, "estimated_token_count": 44, "token_estimator": "heuristic-v1", "text": "### Start the Parachain Node\n\nLaunch the 
parachain:\n\n```bash\npolkadot-omni-node --chain ./chain_spec.json --dev\n```\n\nVerify the node starts successfully and begins producing blocks."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 23, "depth": 2, "title": "Interact with Your Pallet", "anchor": "interact-with-your-pallet", "start_char": 25114, "end_char": 25886, "estimated_token_count": 234, "token_estimator": "heuristic-v1", "text": "## Interact with Your Pallet\n\nUse the Polkadot.js Apps interface to test your pallet:\n\n1. Navigate to [Polkadot.js Apps](https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/extrinsics){target=\\_blank}.\n\n2. Ensure you're connected to your local node at `ws://127.0.0.1:9944`.\n\n3. Go to **Developer** > **Extrinsics**.\n\n4. Locate **customPallet** in the pallet dropdown.\n\n5. You should see the available extrinsics:\n\n - **`increment(amount)`**: Increase the counter by a specified amount.\n - **`decrement(amount)`**: Decrease the counter by a specified amount.\n - **`setCounterValue(newValue)`**: Set counter to a specific value (requires sudo/root).\n\n![](/images/parachains/customize-runtime/pallet-development/create-a-pallet/create-a-pallet-01.webp)"} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 24, "depth": 2, "title": "Key Takeaways", "anchor": "key-takeaways", "start_char": 25886, "end_char": 26609, "estimated_token_count": 129, "token_estimator": "heuristic-v1", "text": "## Key Takeaways\n\nYou've successfully created and integrated a custom pallet into a Polkadot SDK-based runtime. 
In the process, you:\n\n- Defined runtime-specific types and constants via the `Config` trait.\n- Implemented on-chain state using `StorageValue` and `StorageMap`.\n- Created signals to communicate state changes to external systems.\n- Established clear error handling with descriptive error types.\n- Configured initial blockchain state for both production and testing.\n- Built callable functions with proper validation and access control.\n- Added the pallet to a runtime and tested it locally.\n\nThese components form the foundation for developing sophisticated blockchain logic in Polkadot SDK-based chains."} +{"page_id": "parachains-customize-runtime-pallet-development-create-a-pallet", "page_title": "Create a Custom Pallet", "index": 25, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 26609, "end_char": 26958, "estimated_token_count": 86, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Guide __Mock Your Runtime__\n\n ---\n\n Learn to create a mock runtime environment for testing your pallet in isolation before integration.\n\n [:octicons-arrow-right-24: Continue](/parachains/customize-runtime/pallet-development/mock-runtime/)\n\n
"} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 21, "end_char": 806, "estimated_token_count": 158, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nTesting is a critical part of pallet development. Before integrating your pallet into a full runtime, you need a way to test its functionality in isolation. A mock runtime provides a minimal, simulated blockchain environment where you can verify your pallet's logic without the overhead of running a full node.\n\nIn this guide, you'll learn how to create a mock runtime for the custom counter pallet built in the [Make a Custom Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\\_blank} guide. This mock runtime will enable you to write comprehensive unit tests that verify:\n\n- Dispatchable function behavior.\n- Storage state changes.\n- Event emission.\n- Error handling.\n- Access control and origin validation.\n- Genesis configuration."} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 1, "depth": 2, "title": "Prerequisites", "anchor": "prerequisites", "start_char": 806, "end_char": 1203, "estimated_token_count": 108, "token_estimator": "heuristic-v1", "text": "## Prerequisites\n\nBefore you begin, ensure you have:\n\n- Completed the [Make a Custom Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\\_blank} guide.\n- The custom counter pallet from the Make a Custom Pallet guide. 
Available in `pallets/pallet-custom`.\n- Basic understanding of [Rust testing](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\\_blank}."} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 2, "depth": 2, "title": "Understand Mock Runtimes", "anchor": "understand-mock-runtimes", "start_char": 1203, "end_char": 1737, "estimated_token_count": 90, "token_estimator": "heuristic-v1", "text": "## Understand Mock Runtimes\n\nA mock runtime is a minimal implementation of the runtime environment that:\n\n- Simulates blockchain state to provide storage and state management.\n- Satisfies your pallet's `Config` trait requirements.\n- Allows isolated testing without external dependencies.\n- Supports genesis configuration to set initial blockchain state for tests.\n- Provides instant feedback on code changes for a faster development cycle.\n\nMock runtimes are used exclusively for testing and are never deployed to a live blockchain."} @@ -316,13 +325,13 @@ {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 10, "depth": 2, "title": "Verify Mock Compilation", "anchor": "verify-mock-compilation", "start_char": 7931, "end_char": 10853, "estimated_token_count": 564, "token_estimator": "heuristic-v1", "text": "## Verify Mock Compilation\n\nBefore proceeding to write tests, ensure your mock runtime compiles correctly:\n\n```bash\ncargo test --package pallet-custom --lib\n```\n\nThis command compiles the test code (including the mock and genesis configuration) without running tests yet. Address any compilation errors before continuing.\n\n??? 
code \"Complete mock runtime script\"\n\n Here's the complete `mock.rs` file for reference:\n\n ```rust title=\"src/mock.rs\"\n use crate as pallet_custom;\n use frame::{\n deps::{\n frame_support::{ derive_impl, traits::ConstU32 },\n sp_io,\n sp_runtime::{ traits::IdentityLookup, BuildStorage },\n },\n prelude::*,\n };\n\n type Block = frame_system::mocking::MockBlock;\n\n // Configure a mock runtime to test the pallet.\n frame::deps::frame_support::construct_runtime!(\n pub enum Test\n {\n System: frame_system,\n CustomPallet: pallet_custom,\n }\n );\n\n #[derive_impl(frame_system::config_preludes::TestDefaultConfig)]\n impl frame_system::Config for Test {\n type Block = Block;\n type AccountId = u64;\n type Lookup = IdentityLookup;\n }\n\n impl pallet_custom::Config for Test {\n type RuntimeEvent = RuntimeEvent;\n type CounterMaxValue = ConstU32<1000>;\n }\n\n // Build genesis storage according to the mock runtime.\n pub fn new_test_ext() -> sp_io::TestExternalities {\n let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap();\n\n (pallet_custom::GenesisConfig:: {\n initial_counter_value: 0,\n initial_user_interactions: vec![],\n })\n .assimilate_storage(&mut t)\n .unwrap();\n\n t.into()\n }\n\n // Helper function to create a test externalities with a specific initial counter value\n pub fn new_test_ext_with_counter(initial_value: u32) -> sp_io::TestExternalities {\n let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap();\n\n (pallet_custom::GenesisConfig:: {\n initial_counter_value: initial_value,\n initial_user_interactions: vec![],\n })\n .assimilate_storage(&mut t)\n .unwrap();\n\n t.into()\n }\n\n // Helper function to create a test externalities with initial user interactions\n pub fn new_test_ext_with_interactions(\n initial_value: u32,\n interactions: Vec<(u64, u32)>\n ) -> sp_io::TestExternalities {\n let mut t = frame_system::GenesisConfig::::default().build_storage().unwrap();\n\n 
(pallet_custom::GenesisConfig::<Test> {\n initial_counter_value: initial_value,\n initial_user_interactions: interactions,\n })\n .assimilate_storage(&mut t)\n .unwrap();\n\n t.into()\n }\n ```"} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 11, "depth": 2, "title": "Key Takeaways", "anchor": "key-takeaways", "start_char": 10853, "end_char": 11416, "estimated_token_count": 98, "token_estimator": "heuristic-v1", "text": "## Key Takeaways\n\nYou've successfully created a mock runtime with a genesis configuration for your custom pallet. You can now:\n\n- Test your pallet without a full runtime.\n- Set initial blockchain state for different test scenarios.\n- Create different genesis setups for various testing needs.\n- Use this minimal setup to test all pallet functionality.\n\nThe mock runtime with a genesis configuration is essential for test-driven development, enabling you to verify logic under different initial conditions before integrating it into the actual parachain runtime."} {"page_id": "parachains-customize-runtime-pallet-development-mock-runtime", "page_title": "Mock Your Runtime", "index": 12, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 11416, "end_char": 11766, "estimated_token_count": 87, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n
\n\n- Guide __Pallet Unit Testing__\n\n ---\n\n Learn to write comprehensive unit tests for your pallet using the mock runtime you just created.\n\n [:octicons-arrow-right-24: Continue](/parachains/customize-runtime/pallet-development/pallet-testing/)\n\n
"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 18, "end_char": 672, "estimated_token_count": 123, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nUnit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries.\n\nTo begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\\_blank} guide."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 1, "depth": 2, "title": "Writing Unit Tests", "anchor": "writing-unit-tests", "start_char": 672, "end_char": 2195, "estimated_token_count": 285, "token_estimator": "heuristic-v1", "text": "## Writing Unit Tests\n\nOnce the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `test.rs` file.\n\nUnit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. 
Below are the typical steps involved in writing unit tests for a pallet.\n\nThe tests confirm that:\n\n- **Pallets initialize correctly**: At the start of each test, the system should initialize with block number 0, and the pallets should be in their default states.\n- **Pallets modify each other's state**: The second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions.\n- **State transitions between blocks are seamless**: By simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number.\n\nTesting pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled.\n\nThis approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 2, "depth": 3, "title": "Test Initialization", "anchor": "test-initialization", "start_char": 2195, "end_char": 2507, "estimated_token_count": 68, "token_estimator": "heuristic-v1", "text": "### Test Initialization\n\nEach test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment.\n\n```rust\n#[test]\nfn test_pallet_functionality() {\n new_test_ext().execute_with(|| {\n // Test logic goes here\n });\n}\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 3, "depth": 3, "title": "Function Call Testing", "anchor": "function-call-testing", "start_char": 2507, "end_char": 3280, "estimated_token_count": 167, "token_estimator": "heuristic-v1", "text": "### Function Call 
Testing\n\nCall the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled.\n\n```rust\n#[test]\nfn it_works_for_valid_input() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic or function\n assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));\n });\n}\n\n#[test]\nfn it_fails_for_invalid_input() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic with invalid input and expect an error\n assert_err!(\n TemplateModule::some_function(Origin::signed(1), invalid_param),\n Error::::InvalidInput\n );\n });\n}\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 4, "depth": 3, "title": "Storage Testing", "anchor": "storage-testing", "start_char": 3280, "end_char": 4129, "estimated_token_count": 190, "token_estimator": "heuristic-v1", "text": "### Storage Testing\n\nAfter calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken.\n\nThe following example shows how to test the storage behavior before and after the function call:\n\n```rust\n#[test]\nfn test_storage_update_on_extrinsic_call() {\n new_test_ext().execute_with(|| {\n // Check the initial storage state (before the call)\n assert_eq!(Something::::get(), None);\n\n // Dispatch a signed extrinsic, which modifies storage\n assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));\n\n // Validate that the storage has been updated as expected (after the call)\n assert_eq!(Something::::get(), Some(42));\n });\n}\n\n```"} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 5, "depth": 3, "title": "Event Testing", "anchor": 
"event-testing", "start_char": 4129, "end_char": 6150, "estimated_token_count": 519, "token_estimator": "heuristic-v1", "text": "### Event Testing\n\nIt's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#generate_deposit`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\\_blank} macro are stored under the system's event storage key (system/events) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\\_blank} entries. These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\\_blank}.\n\nHere's an example of testing events in a mock runtime:\n\n```rust\n#[test]\nfn it_emits_events_on_success() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic or function\n assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));\n\n // Verify that the expected event was emitted\n assert!(System::events().iter().any(|record| {\n record.event == Event::TemplateModule(TemplateEvent::SomeEvent)\n }));\n });\n}\n```\n\nSome key considerations are:\n\n- **Block number**: Events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\\_blank} to ensure events are triggered.\n- **Converting events**: Use 
`.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage."} -{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Pallet Testing", "index": 6, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 6150, "end_char": 6892, "estimated_token_count": 211, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\\_blank} and [`test.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=_blank}.\n\n
\n\n- Guide __Benchmarking__\n\n ---\n\n Explore methods to measure the performance and execution cost of your pallet.\n\n [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking)\n\n
"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 21, "end_char": 675, "estimated_token_count": 123, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nUnit testing in the Polkadot SDK helps ensure that the functions provided by a pallet behave as expected. It also confirms that data and events associated with a pallet are processed correctly during interactions. The Polkadot SDK offers a set of APIs to create a test environment to simulate runtime and mock transaction execution for extrinsics and queries.\n\nTo begin unit testing, you must first set up a mock runtime that simulates blockchain behavior, incorporating the necessary pallets. For a deeper understanding, consult the [Mock Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\\_blank} guide."} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 1, "depth": 2, "title": "Writing Unit Tests", "anchor": "writing-unit-tests", "start_char": 675, "end_char": 2198, "estimated_token_count": 285, "token_estimator": "heuristic-v1", "text": "## Writing Unit Tests\n\nOnce the mock runtime is in place, the next step is to write unit tests that evaluate the functionality of your pallet. Unit tests allow you to test specific pallet features in isolation, ensuring that each function behaves correctly under various conditions. These tests typically reside in your pallet module's `test.rs` file.\n\nUnit tests in the Polkadot SDK use the Rust testing framework, and the mock runtime you've defined earlier will serve as the test environment. 
Below are the typical steps involved in writing unit tests for a pallet.\n\nThe tests confirm that:\n\n- **Pallets initialize correctly**: At the start of each test, the system should initialize with block number 0, and the pallets should be in their default states.\n- **Pallets modify each other's state**: The second test shows how one pallet can trigger changes in another pallet's internal state, confirming proper cross-pallet interactions.\n- **State transitions between blocks are seamless**: By simulating block transitions, the tests validate that the runtime responds correctly to changes in the block number.\n\nTesting pallet interactions within the runtime is critical for ensuring the blockchain behaves as expected under real-world conditions. Writing integration tests allows validation of how pallets function together, preventing issues that might arise when the system is fully assembled.\n\nThis approach provides a comprehensive view of the runtime's functionality, ensuring the blockchain is stable and reliable."} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 2, "depth": 3, "title": "Test Initialization", "anchor": "test-initialization", "start_char": 2198, "end_char": 2510, "estimated_token_count": 68, "token_estimator": "heuristic-v1", "text": "### Test Initialization\n\nEach test starts by initializing the runtime environment, typically using the `new_test_ext()` function, which sets up the mock storage and environment.\n\n```rust\n#[test]\nfn test_pallet_functionality() {\n new_test_ext().execute_with(|| {\n // Test logic goes here\n });\n}\n```"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 3, "depth": 3, "title": "Function Call Testing", "anchor": "function-call-testing", "start_char": 2510, "end_char": 3283, "estimated_token_count": 167, "token_estimator": "heuristic-v1", "text": "### Function Call 
Testing\n\nCall the pallet's extrinsics or functions to simulate user interaction or internal logic. Use the `assert_ok!` macro to check for successful execution and `assert_err!` to verify that errors are correctly handled.\n\n```rust\n#[test]\nfn it_works_for_valid_input() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic or function\n assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));\n });\n}\n\n#[test]\nfn it_fails_for_invalid_input() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic with invalid input and expect an error\n assert_err!(\n TemplateModule::some_function(Origin::signed(1), invalid_param),\n Error::<Test>::InvalidInput\n );\n });\n}\n```"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 4, "depth": 3, "title": "Storage Testing", "anchor": "storage-testing", "start_char": 3283, "end_char": 4132, "estimated_token_count": 190, "token_estimator": "heuristic-v1", "text": "### Storage Testing\n\nAfter calling a function or extrinsic in your pallet, it's essential to verify that the state changes in the pallet's storage match the expected behavior to ensure data is updated correctly based on the actions taken.\n\nThe following example shows how to test the storage behavior before and after the function call:\n\n```rust\n#[test]\nfn test_storage_update_on_extrinsic_call() {\n new_test_ext().execute_with(|| {\n // Check the initial storage state (before the call)\n assert_eq!(Something::<Test>::get(), None);\n\n // Dispatch a signed extrinsic, which modifies storage\n assert_ok!(TemplateModule::do_something(RuntimeOrigin::signed(1), 42));\n\n // Validate that the storage has been updated as expected (after the call)\n assert_eq!(Something::<Test>::get(), Some(42));\n });\n}\n\n```"} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 5, "depth": 3, "title": "Event Testing", "anchor": 
"event-testing", "start_char": 4132, "end_char": 6153, "estimated_token_count": 519, "token_estimator": "heuristic-v1", "text": "### Event Testing\n\nIt's also crucial to test the events that your pallet emits during execution. By default, events generated in a pallet using the [`#generate_deposit`](https://paritytech.github.io/polkadot-sdk/master/frame_support/pallet_macros/attr.generate_deposit.html){target=\\_blank} macro are stored under the system's event storage key (system/events) as [`EventRecord`](https://paritytech.github.io/polkadot-sdk/master/frame_system/struct.EventRecord.html){target=\\_blank} entries. These can be accessed using [`System::events()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.events){target=\\_blank} or verified with specific helper methods provided by the system pallet, such as [`assert_has_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_has_event){target=\\_blank} and [`assert_last_event`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.assert_last_event){target=\\_blank}.\n\nHere's an example of testing events in a mock runtime:\n\n```rust\n#[test]\nfn it_emits_events_on_success() {\n new_test_ext().execute_with(|| {\n // Call an extrinsic or function\n assert_ok!(TemplateModule::some_function(Origin::signed(1), valid_param));\n\n // Verify that the expected event was emitted\n assert!(System::events().iter().any(|record| {\n record.event == Event::TemplateModule(TemplateEvent::SomeEvent)\n }));\n });\n}\n```\n\nSome key considerations are:\n\n- **Block number**: Events are not emitted on the genesis block, so you need to set the block number using [`System::set_block_number()`](https://paritytech.github.io/polkadot-sdk/master/frame_system/pallet/struct.Pallet.html#method.set_block_number){target=\\_blank} to ensure events are triggered.\n- **Converting events**: Use 
`.into()` when instantiating your pallet's event to convert it into a generic event type, as required by the system's event storage."} +{"page_id": "parachains-customize-runtime-pallet-development-pallet-testing", "page_title": "Unit Test Pallets", "index": 6, "depth": 2, "title": "Where to Go Next", "anchor": "where-to-go-next", "start_char": 6153, "end_char": 6895, "estimated_token_count": 211, "token_estimator": "heuristic-v1", "text": "## Where to Go Next\n\n- Dive into the full implementation of the [`mock.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/mock.rs){target=\\_blank} and [`test.rs`](https://github.com/paritytech/polkadot-sdk/blob/master/templates/solochain/pallets/template/src/tests.rs){target=\\_blank} files in the [Solochain Template](https://github.com/paritytech/polkadot-sdk/tree/master/templates/solochain){target=_blank}.\n\n
\n\n- Guide __Benchmarking__\n\n ---\n\n Explore methods to measure the performance and execution cost of your pallet.\n\n [:octicons-arrow-right-24: Reference](/develop/parachains/testing/benchmarking)\n\n
"} {"page_id": "parachains-customize-runtime", "page_title": "Overview of FRAME", "index": 0, "depth": 2, "title": "Introduction", "anchor": "introduction", "start_char": 26, "end_char": 754, "estimated_token_count": 146, "token_estimator": "heuristic-v1", "text": "## Introduction\n\nA blockchain runtime is more than just a fixed set of rules—it's a dynamic foundation that you can shape to match your specific needs. With Polkadot SDK's [FRAME (Framework for Runtime Aggregation of Modularized Entities)](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\\_blank}, customizing your runtime is straightforward and modular. Instead of building everything from scratch, you combine pre-built pallets with your own custom logic to create a runtime suited to your blockchain's purpose.\n\nThis overview explains how runtime customization works, introduces the building blocks you'll use, and guides you through the key patterns for extending your runtime."} {"page_id": "parachains-customize-runtime", "page_title": "Overview of FRAME", "index": 1, "depth": 2, "title": "Understanding Your Runtime", "anchor": "understanding-your-runtime", "start_char": 754, "end_char": 1533, "estimated_token_count": 158, "token_estimator": "heuristic-v1", "text": "## Understanding Your Runtime\n\nThe runtime is the core logic of your blockchain—it processes transactions, manages state, and enforces the rules that govern your network. When a transaction arrives at your blockchain, the [`frame_executive`](https://paritytech.github.io/polkadot-sdk/master/frame_executive/index.html){target=\\_blank} pallet receives it and routes it to the appropriate pallet for execution.\n\nThink of your runtime as a collection of specialized modules, each handling a different aspect of your blockchain. Need token balances? Use the Balances pallet. Want governance? Add the Governance pallets. Need something custom? Create your own pallet. 
By mixing and matching these modules, you build a runtime that's efficient, secure, and tailored to your use case."} {"page_id": "parachains-customize-runtime", "page_title": "Overview of FRAME", "index": 2, "depth": 2, "title": "Runtime Architecture", "anchor": "runtime-architecture", "start_char": 1533, "end_char": 2085, "estimated_token_count": 121, "token_estimator": "heuristic-v1", "text": "## Runtime Architecture\n\nThe following diagram shows how FRAME components work together to form your runtime:\n\n![](/images/parachains/customize-runtime/index/frame-overview-01.webp)\n\nThe main components are:\n\n- **`frame_executive`**: Routes all incoming transactions to the correct pallet for execution.\n- **Pallets**: Domain-specific modules that implement your blockchain's features and business logic.\n- **`frame_system`**: Provides core runtime primitives and storage.\n- **`frame_support`**: Utilities and macros that simplify pallet development."} diff --git a/llms.txt b/llms.txt index 1b820afd6..98d224c83 100644 --- a/llms.txt +++ b/llms.txt @@ -84,10 +84,10 @@ Docs: Parachains - [Add Multiple Pallet Instances](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-add-pallet-instances.md): Learn how to implement multiple instances of the same pallet in your Polkadot SDK-based runtime, from adding dependencies to configuring unique instances. - [Add Smart Contract Functionality](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-add-smart-contract-functionality.md): Add smart contract capabilities to your Polkadot SDK-based blockchain. Explore PVM, EVM, and Wasm integration for enhanced chain functionality. 
- [Add Pallets to the Runtime](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-add-pallet-to-runtime.md): Add pallets to your runtime for custom functionality. Learn to configure and integrate pallets in Polkadot SDK-based blockchains. -- [Benchmarking FRAME Pallets](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md): Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. +- [Benchmark Your Pallet](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-benchmark-pallet.md): Learn how to benchmark your custom pallet extrinsics to generate accurate weight calculations for production use. - [Create a Custom Pallet](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-create-a-pallet.md): Learn how to create custom pallets using FRAME, allowing for flexible, modular, and scalable blockchain development. Follow the step-by-step guide. - [Mock Your Runtime](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-mock-runtime.md): Learn how to create a mock runtime environment for testing your custom pallets in isolation, enabling comprehensive unit testing before runtime integration. -- [Pallet Testing](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md): Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets operations. 
+- [Unit Test Pallets](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime-pallet-development-pallet-testing.md): Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallet's operations. - [Overview of FRAME](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-customize-runtime.md): Learn how Polkadot SDK’s FRAME framework simplifies blockchain development with modular pallets and support libraries for efficient runtime design. - [Get Started with Parachain Development](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-get-started.md): Practical examples and tutorials for building and deploying Polkadot parachains, covering everything from launch to customization and cross-chain messaging. - [Opening HRMP Channels Between Parachains](https://raw.githubusercontent.com/polkadot-developers/polkadot-docs/master/.ai/pages/parachains-interoperability-channels-between-parachains.md): Learn how to open HRMP channels between parachains on Polkadot. Discover the step-by-step process for establishing uni- and bidirectional communication. 
diff --git a/parachains/customize-runtime/pallet-development/.nav.yml b/parachains/customize-runtime/pallet-development/.nav.yml index a6c7e9399..43f919978 100644 --- a/parachains/customize-runtime/pallet-development/.nav.yml +++ b/parachains/customize-runtime/pallet-development/.nav.yml @@ -1,6 +1,6 @@ nav: - 'Create a Custom Pallet': create-a-pallet.md - 'Mock Your Runtime': mock-runtime.md - - 'Pallet Unit Testing': pallet-testing.md + - 'Unit Test Pallets': pallet-testing.md - 'Add a Custom Pallet to Your Runtime': add-pallet-to-runtime.md - 'Benchmark a Custom Pallet': benchmark-pallet.md \ No newline at end of file diff --git a/parachains/customize-runtime/pallet-development/benchmark-pallet.md b/parachains/customize-runtime/pallet-development/benchmark-pallet.md index dd02ee0b4..7c9407b07 100644 --- a/parachains/customize-runtime/pallet-development/benchmark-pallet.md +++ b/parachains/customize-runtime/pallet-development/benchmark-pallet.md @@ -1,215 +1,509 @@ --- -title: Benchmarking FRAME Pallets -description: Learn how to use FRAME's benchmarking framework to measure extrinsic execution costs and provide accurate weights for on-chain computations. +title: Benchmark Your Pallet +description: Learn how to benchmark your custom pallet extrinsics to generate accurate weight calculations for production use. categories: Parachains --- -# Benchmarking - ## Introduction -Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise [weight](/reference/glossary/#weight){target=\_blank}, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks. +Benchmarking is the process of measuring the computational resources (execution time and storage) required by your pallet's extrinsics. 
Accurate [weight](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/index.html){target=\_blank} calculations are essential for ensuring your blockchain can process transactions efficiently while protecting against denial-of-service attacks. -The Polkadot SDK leverages the [FRAME](/reference/glossary/#frame-framework-for-runtime-aggregation-of-modularized-entities){target=\_blank} benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's [benchmarking framework](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank}, from setting up your environment to writing and running benchmarks for your custom pallets. You'll understand how to generate accurate weights by the end, ensuring your runtime remains performant and secure. +This guide continues building on what you've learned through the pallet development series. You'll learn how to benchmark the custom counter pallet extrinsics and integrate the generated weights into your runtime. -## The Case for Benchmarking +## Prerequisites -Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries to ensure your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, runtime performance could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights. +Before you begin, ensure you have: -Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. 
This approach discourages abuse while maintaining network reliability. +- Completed the previous pallet development tutorials: + - [Create a Pallet](/parachains/customize-runtime/pallet-development/create-a-pallet/){target=\_blank} + - [Mock Your Runtime](/parachains/customize-runtime/pallet-development/mock-runtime/){target=\_blank} + - [Unit Test Pallets](/parachains/customize-runtime/pallet-development/pallet-testing/){target=\_blank} +- Basic understanding of [computational complexity](https://en.wikipedia.org/wiki/Computational_complexity){target=\_blank}. +- Familiarity with [Rust's testing framework](https://doc.rust-lang.org/book/ch11-00-testing.html){target=\_blank}. +- Familiarity with setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}. Refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide for instructions if needed. -### Benchmarking and Weight +## Create the Benchmarking Module -In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as: +Create a new file `benchmarking.rs` in your pallet's `src` directory and add the following code: -- Computational complexity. -- Storage complexity (proof size). -- Database reads and writes. -- Hardware specifications. +```rust title="pallets/pallet-custom/src/benchmarking.rs" +#![cfg(feature = "runtime-benchmarks")] -Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model. 
-
-Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.
+use super::*;
+use frame::deps::frame_benchmarking::v2::*;
+use frame::benchmarking::prelude::RawOrigin;

-Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:
+#[benchmarks]
+mod benchmarks {
+    use super::*;

-```rust hl_lines="2"
---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/dispatchable-pallet-weight.rs'
-```
+    #[benchmark]
+    fn set_counter_value() {
+        let new_value: u32 = 100;

-The `WeightInfo` file is automatically generated during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic.
+        #[extrinsic_call]
+        _(RawOrigin::Root, new_value);

-## Benchmarking Process
+        assert_eq!(CounterValue::<T>::get(), new_value);
+    }

-Benchmarking a pallet involves the following steps:
+    #[benchmark]
+    fn increment() {
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 50;

-1. Creating a `benchmarking.rs` file within your pallet's structure.
-2. Writing a benchmarking test for each extrinsic.
-3. Executing the benchmarking tool to calculate weights based on performance metrics.
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);

-The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag.
+        assert_eq!(CounterValue::<T>::get(), amount);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }

-### Prepare Your Environment
+    #[benchmark]
+    fn decrement() {
+        // First, set the counter to a non-zero value
+        CounterValue::<T>::put(100);

-Install the [`frame-omni-bencher`](https://crates.io/crates/frame-omni-bencher){target=\_blank} command-line tool:
+        let caller: T::AccountId = whitelisted_caller();
+        let amount: u32 = 30;

-```bash
-cargo install frame-omni-bencher
-```
+        #[extrinsic_call]
+        _(RawOrigin::Signed(caller.clone()), amount);

-Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml` similar to the following:
+        assert_eq!(CounterValue::<T>::get(), 70);
+        assert_eq!(UserInteractions::<T>::get(caller), 1);
+    }

-```toml title="Cargo.toml"
---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/cargo.toml::1'
+    impl_benchmark_test_suite!(Pallet, crate::mock::new_test_ext(), crate::mock::Test);
+}
```

-You must also ensure that you add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:

-```toml title="Cargo.toml"
---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/cargo.toml:2:7'
+This module contains all the [benchmarking definitions](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} for your pallet.
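Conceptually, what the framework does for each `#[benchmark]` function can be pictured in plain Rust. The sketch below uses no FRAME APIs and illustrative names only: it runs the measured call repeatedly, records each execution time, and derives a conservative estimate from the worst observed run, mirroring how the bencher repeats measurements to average out noise.

```rust
use std::time::Instant;

// Stand-in for the extrinsic body being benchmarked (illustrative only).
fn increment(counter: &mut u32, amount: u32) -> u32 {
    *counter = counter.saturating_add(amount);
    *counter
}

// Run the call `repeat` times and record each execution time in nanoseconds.
fn measure(repeat: usize) -> Vec<u128> {
    (0..repeat)
        .map(|_| {
            let mut counter = 0u32;
            let start = Instant::now();
            increment(&mut counter, 50);
            start.elapsed().as_nanos()
        })
        .collect()
}

fn main() {
    let timings = measure(20);
    // A conservative weight is derived from the slowest observed run.
    let worst_case = timings.iter().max().copied().unwrap_or(0);
    println!("worst-case over {} runs: {} ns", timings.len(), worst_case);
}
```

The real framework additionally varies input parameters to fit a linear model, but the repeat-and-take-worst-case idea is the same.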
+
+## Define the Weight Trait
+
+Add a `weights` module to your pallet that defines the `WeightInfo` trait using the following code:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[frame::pallet]
+pub mod pallet {
+    use frame::prelude::*;
+    pub use weights::WeightInfo;
+
+    pub mod weights {
+        use frame::prelude::*;
+
+        pub trait WeightInfo {
+            fn set_counter_value() -> Weight;
+            fn increment() -> Weight;
+            fn decrement() -> Weight;
+        }
+
+        impl WeightInfo for () {
+            fn set_counter_value() -> Weight {
+                Weight::from_parts(10_000, 0)
+            }
+            fn increment() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+            fn decrement() -> Weight {
+                Weight::from_parts(15_000, 0)
+            }
+        }
+    }
+
+    // ... rest of pallet
+}
```

-Lastly, ensure that `frame-benchmarking` is included in `std = []`:
+The `()` implementation provides placeholder weights for development.

-```toml title="Cargo.toml"
---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/cargo.toml:8:12'
-```
+## Add WeightInfo to Config

-Once complete, you have the required dependencies for writing benchmark tests for your pallet.
+Update your pallet's `Config` trait to include `WeightInfo` by adding the following code:

-### Write Benchmark Tests
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::config]
+pub trait Config: frame_system::Config {
+    type RuntimeEvent: From<Event<Self>> + IsType<<Self as frame_system::Config>::RuntimeEvent>;

-Create a `benchmarking.rs` file in your pallet's `src/`. Your directory structure should look similar to the following:
+    #[pallet::constant]
+    type CounterMaxValue: Get<u32>;
+    type WeightInfo: weights::WeightInfo;
+}
```

-my-pallet/
-├── src/
-│   ├── lib.rs          # Main pallet implementation
-│   └── benchmarking.rs # Benchmarking
-└── Cargo.toml
+
+The [`WeightInfo`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/trait.WeightInfo.html){target=\_blank} trait provides an abstraction layer that allows weights to be swapped at runtime configuration.
By making `WeightInfo` an associated type in the `Config` trait, you will enable each runtime that uses your pallet to specify which weight implementation to use.
+
+## Update Extrinsic Weight Annotations
+
+Replace the placeholder weights in your extrinsics with calls to the `WeightInfo` trait by adding the following code:
+
+```rust title="pallets/pallet-custom/src/lib.rs"
+#[pallet::call]
+impl<T: Config> Pallet<T> {
+    #[pallet::call_index(0)]
+    #[pallet::weight(T::WeightInfo::set_counter_value())]
+    pub fn set_counter_value(origin: OriginFor<T>, new_value: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(1)]
+    #[pallet::weight(T::WeightInfo::increment())]
+    pub fn increment(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+
+    #[pallet::call_index(2)]
+    #[pallet::weight(T::WeightInfo::decrement())]
+    pub fn decrement(origin: OriginFor<T>, amount: u32) -> DispatchResult {
+        // ... implementation
+    }
+}
```

-With the directory structure set, you can use the [`polkadot-sdk-parachain-template`](https://github.com/paritytech/polkadot-sdk-parachain-template/tree/master/pallets){target=\_blank} to get started as follows:
+By calling `T::WeightInfo::function_name()` instead of using hardcoded `Weight::from_parts()` values, your extrinsics automatically use whichever weight implementation is configured in the runtime. You can switch between placeholder weights for testing and benchmarked weights for production easily, without changing any pallet code.
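This swap boils down to static polymorphism over a trait: the pallet code names only the trait, and the runtime picks the concrete implementation. A minimal plain-Rust sketch of the pattern (illustrative names and numbers, not the actual FRAME types):

```rust
// A weight source abstracted behind a trait, as in the pallet's `WeightInfo`.
pub trait WeightInfo {
    fn increment() -> u64;
}

// Placeholder weights used during development and in mocks.
pub struct Placeholder;
impl WeightInfo for Placeholder {
    fn increment() -> u64 {
        15_000
    }
}

// Benchmarked weights generated for production (value is made up here).
pub struct Benchmarked;
impl WeightInfo for Benchmarked {
    fn increment() -> u64 {
        12_456
    }
}

// The "pallet" side only references the trait; callers choose the impl.
fn weight_of_increment<W: WeightInfo>() -> u64 {
    W::increment()
}

fn main() {
    println!("placeholder: {}", weight_of_increment::<Placeholder>());
    println!("benchmarked: {}", weight_of_increment::<Benchmarked>());
}
```

Because the choice is a compile-time type parameter, swapping implementations costs nothing at runtime.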
-```rust title="benchmarking.rs (starter template)" ---8<-- 'https://raw.githubusercontent.com/paritytech/polkadot-sdk-parachain-template/refs/tags/v0.0.2/pallets/template/src/benchmarking.rs' -``` +## Include the Benchmarking Module + +At the top of your `lib.rs`, add the module declaration by adding the following code: -In your benchmarking tests, employ these best practices: +```rust title="pallets/pallet-custom/src/lib.rs" +#![cfg_attr(not(feature = "std"), no_std)] -- **Write custom testing functions**: The function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing. -- **Use the `#[extrinsic_call]` macro**: This macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the [`extrinsic_call`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html#extrinsic_call-and-block){target=\_blank} docs for more details. -- **Validate extrinsic behavior**: The `assert_eq` expression ensures that the extrinsic is working properly within the benchmark context. +extern crate alloc; +use alloc::vec::Vec; -Add the `benchmarking` module to your pallet. In the pallet `lib.rs` file add the following: +pub use pallet::*; -```rust #[cfg(feature = "runtime-benchmarks")] mod benchmarking; + +// Additional pallet code +``` + +The `#[cfg(feature = "runtime-benchmarks")]` attribute ensures that benchmarking code is only compiled when explicitly needed to keep your production runtime efficient. 
+ +## Configure Pallet Dependencies + +Update your pallet's `Cargo.toml` to enable the benchmarking feature by adding the following code: + +```toml title="pallets/pallet-custom/Cargo.toml" +[dependencies] +codec = { features = ["derive"], workspace = true } +scale-info = { features = ["derive"], workspace = true } +frame = { features = ["experimental", "runtime"], workspace = true } + +[features] +default = ["std"] +runtime-benchmarks = [ + "frame/runtime-benchmarks", +] +std = [ + "codec/std", + "scale-info/std", + "frame/std", +] ``` -### Add Benchmarks to Runtime +The Cargo feature flag system lets you conditionally compile code based on which features are enabled. By defining a `runtime-benchmarks` feature that cascades to FRAME's benchmarking features, you create a clean way to build your pallet with or without benchmarking support, ensuring all necessary dependencies are available when needed but excluded from production builds. + +## Update Mock Runtime + +Add the `WeightInfo` type to your test configuration in `mock.rs` by adding the following code: + +```rust title="pallets/pallet-custom/src/mock.rs" +impl pallet_custom::Config for Test { + type RuntimeEvent = RuntimeEvent; + type CounterMaxValue = ConstU32<1000>; + type WeightInfo = (); +} +``` + +In your mock runtime for testing, use the placeholder `()` implementation of `WeightInfo`, since unit tests focus on verifying functional correctness rather than performance. + +## Configure Runtime Benchmarking -Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows: +To execute benchmarks, your pallet must be integrated into the runtime's benchmarking infrastructure. Follow these steps to update the runtime configuration: -1. Navigate to your `runtime/src` directory and check if a `benchmarks.rs` file exists. If not, create one. This file will contain the macro that registers all pallets for benchmarking along with their respective configurations: +1. 
**Update `runtime/Cargo.toml`**: Add your pallet to the runtime's `runtime-benchmarks` feature as follows:

    ```toml title="runtime/Cargo.toml"
    runtime-benchmarks = [
        "cumulus-pallet-parachain-system/runtime-benchmarks",
        "hex-literal",
        "pallet-parachain-template/runtime-benchmarks",
        "polkadot-sdk/runtime-benchmarks",
        "pallet-custom/runtime-benchmarks",
    ]
    ```

-    For example, to add a new pallet named `pallet_parachain_template` for benchmarking, include it in the macro as shown:

-    ```rust title="benchmarks.rs" hl_lines="3"
-    --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/frame-benchmark-macro.rs::3'
+    When you build the runtime with `--features runtime-benchmarks`, this configuration ensures all necessary benchmarking code across all pallets (including yours) is included.
+
+2. **Update runtime configuration**: Using the placeholder implementation, run development benchmarks as follows:
+
+    ```rust title="runtime/src/configs/mod.rs"
+    impl pallet_custom::Config for Runtime {
+        type RuntimeEvent = RuntimeEvent;
+        type CounterMaxValue = ConstU32<1000>;
+        type WeightInfo = ();
+    }
+    ```
+
+3. **Register benchmarks**: Add your pallet to the benchmark list in `runtime/src/benchmarks.rs` as follows:
+
+    ```rust title="runtime/src/benchmarks.rs"
+    polkadot_sdk::frame_benchmarking::define_benchmarks!(
+        [frame_system, SystemBench::<Runtime>]
+        [pallet_balances, Balances]
+        // ... other pallets
+        [pallet_custom, CustomPallet]
    );
    ```
+ The [`define_benchmarks!`](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/macro.define_benchmarks.html){target=\_blank} macro creates the infrastructure that allows the benchmarking CLI tool to discover and execute your pallet's benchmarks. -2. Check your runtime's `lib.rs` file to ensure the `benchmarks` module is imported. The import should look like this: +## Test Benchmark Compilation - ```rust title="lib.rs" - #[cfg(feature = "runtime-benchmarks")] - mod benchmarks; - ``` +Run the following command to verify your benchmarks compile and run as tests: - The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code. +```bash +cargo test -p pallet-custom --features runtime-benchmarks +``` -3. Enable runtime benchmarking for your pallet in `runtime/Cargo.toml`: +You will see terminal output similar to the following as your benchmark tests pass: - ```toml - --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/runtime-cargo.toml' - ``` +
+ cargo test -p pallet-custom --features runtime-benchmarks + test benchmarking::benchmarks::bench_set_counter_value ... ok + test benchmarking::benchmarks::bench_increment ... ok + test benchmarking::benchmarks::bench_decrement ... ok + +
+ +The `impl_benchmark_test_suite!` macro generates unit tests for each benchmark. Running these tests verifies that your benchmarks compile correctly, execute without panicking, and pass their assertions, catching issues early before building the entire runtime. -### Run Benchmarks +## Build the Runtime with Benchmarks -You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial as the benchmarking tool will look for this feature being enabled to know when it should run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled: +Compile the runtime with benchmarking enabled to generate the WASM binary using the following command: + +```bash +cargo build --release --features runtime-benchmarks +``` + +This command produces the runtime WASM file needed for benchmarking, typically located at: `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm` + +The build includes all the benchmarking infrastructure and special host functions needed for measurement. The resulting WASM runtime contains your benchmark code and can communicate with the benchmarking tool's execution environment. You'll create a different build later for operating your chain in production. + +## Install the Benchmarking Tool + +Install the `frame-omni-bencher` CLI tool using the following command: + +```bash +cargo install frame-omni-bencher --locked +``` -1. Run `build` with the feature flag included: +[`frame-omni-bencher`](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} is the official Polkadot SDK tool designed explicitly for FRAME pallet benchmarking. It provides a standardized way to execute benchmarks, measure execution times and storage operations, and generate properly formatted weight files with full integration into the FRAME weight system. 
+ +## Download the Weight Template + +Download the official weight template file using the following commands: + +```bash +curl -L https://raw.githubusercontent.com/paritytech/polkadot-sdk/refs/tags/polkadot-stable2412/substrate/.maintain/frame-weight-template.hbs \ +--output ./pallets/pallet-custom/frame-weight-template.hbs +``` + +The weight template is a Handlebars file that transforms raw benchmark data into a correctly formatted Rust source file. It defines the structure of the generated `weights.rs` file, including imports, trait definitions, documentation comments, and formatting. Using the official template ensures your weight files follow the Polkadot SDK conventions and include all necessary metadata, such as benchmark execution parameters, storage operation counts, and hardware information. + +## Execute Benchmarks + +Run benchmarks for your pallet to generate weight files using the following commands: + +```bash +frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs +``` + +Benchmarks execute against the compiled WASM runtime rather than native code because WASM is what actually runs in production on the blockchain. WASM execution can have different performance characteristics than native code due to compilation and sandboxing overhead, so benchmarking against the WASM ensures your weight measurements reflect real-world conditions. + +??? 
note "Additional customization" + + You can customize benchmark execution with additional parameters for more detailed measurements, as shown in the sample code below: ```bash - cargo build --features runtime-benchmarks --release + frame-omni-bencher v1 benchmark pallet \ + --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.wasm \ + --pallet pallet_custom \ + --extrinsic "" \ + --steps 50 \ + --repeat 20 \ + --template ./pallets/pallet-custom/frame-weight-template.hbs \ + --output ./pallets/pallet-custom/src/weights.rs ``` + + - `--steps 50`: Number of different input values to test when using linear components (default: 50). More steps provide finer granularity for detecting complexity trends but increase benchmarking time. + - `--repeat 20`: Number of repetitions for each measurement (default: 20). More repetitions improve statistical accuracy by averaging out variance, reducing the impact of system noise, and providing more reliable weight estimates. + - `--heap-pages 4096`: WASM heap pages allocation. Affects available memory during execution. + - `--wasm-execution compiled`: WASM execution method. Use `compiled` for performance closest to production conditions. -2. Create a `weights.rs` file in your pallet's `src/` directory. This file will store the auto-generated weight calculations: +## Use Generated Weights - ```bash - touch weights.rs +After running benchmarks, a `weights.rs` file is generated containing measured weights based on actual measurements of your code running on real hardware, accounting for the specific complexity of your logic, storage access patterns, and computational requirements. + +Follow these steps to use the generated weights with your pallet: + +1. 
Integrate the generated weights by adding the weights module to your pallet's `lib.rs` as follows:
+
+    ```rust title="pallets/pallet-custom/src/lib.rs"
+    #![cfg_attr(not(feature = "std"), no_std)]
+
+    extern crate alloc;
+    use alloc::vec::Vec;
+
+    pub use pallet::*;
+
+    #[cfg(feature = "runtime-benchmarks")]
+    mod benchmarking;
+
+    pub mod weights;
+
+    #[frame::pallet]
+    pub mod pallet {
+        use super::*;
+        use frame::prelude::*;
+        use crate::weights::WeightInfo;
+        // ... rest of pallet
+    }
+    ```
+
+    Unlike the benchmarking module (which is only needed when running benchmarks), the weights module must be available in all builds because the runtime needs to call the weight functions during regular operation to calculate transaction fees and enforce block limits.
+
+2. Update your runtime configuration to use the generated weights instead of the placeholder `()` implementation by adding the following code:
+
+    ```rust title="runtime/src/configs/mod.rs"
+    impl pallet_custom::Config for Runtime {
+        type RuntimeEvent = RuntimeEvent;
+        type CounterMaxValue = ConstU32<1000>;
+        type WeightInfo = pallet_custom::weights::SubstrateWeight<Runtime>;
+    }
+    ```
+
+    This change activates your benchmarked weights in the production runtime. Now, when users submit transactions that call your pallet's extrinsics, the runtime will use the actual measured weights to calculate fees and enforce block limits.
+
+???
code "Example generated weight file"
+
+    The generated `weights.rs` file will look similar to this:
+
+    ```rust title="pallets/pallet-custom/src/weights.rs"
+    //! Autogenerated weights for `pallet_custom`
+    //!
+    //! THIS FILE WAS AUTO-GENERATED USING THE SUBSTRATE BENCHMARK CLI VERSION 32.0.0
+    //! DATE: 2025-01-15, STEPS: `50`, REPEAT: `20`
+
+    #![cfg_attr(rustfmt, rustfmt_skip)]
+    #![allow(unused_parens)]
+    #![allow(unused_imports)]
+    #![allow(missing_docs)]
+
+    use frame_support::{traits::Get, weights::{Weight, constants::RocksDbWeight}};
+    use core::marker::PhantomData;
+
+    pub trait WeightInfo {
+        fn set_counter_value() -> Weight;
+        fn increment() -> Weight;
+        fn decrement() -> Weight;
+    }
+
+    pub struct SubstrateWeight<T>(PhantomData<T>);
+    impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
+        fn set_counter_value() -> Weight {
+            Weight::from_parts(8_234_000, 0)
+                .saturating_add(T::DbWeight::get().reads(1))
+                .saturating_add(T::DbWeight::get().writes(1))
+        }
+
+        fn increment() -> Weight {
+            Weight::from_parts(12_456_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+
+        fn decrement() -> Weight {
+            Weight::from_parts(11_987_000, 0)
+                .saturating_add(T::DbWeight::get().reads(2))
+                .saturating_add(T::DbWeight::get().writes(2))
+        }
+    }
+    ```
+
+    The actual numbers in your `weights.rs` file will vary based on your hardware and implementation complexity. The [`DbWeight`](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.RuntimeDbWeight.html){target=\_blank} accounts for database read and write operations.
+
+## Run Your Chain Locally
+
+Now that you've added the pallet to your runtime, you can follow these steps to launch your parachain locally to test the new functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}:
+
+1.
Before running your chain, rebuild the production runtime without the `runtime-benchmarks` feature using the following command: ```bash - frame-omni-bencher v1 benchmark pallet \ - --runtime INSERT_PATH_TO_WASM_RUNTIME \ - --pallet INSERT_NAME_OF_PALLET \ - --extrinsic "" \ - --template ./frame-weight-template.hbs \ - --output weights.rs + cargo build --release ``` - !!! tip "Flag definitions" - - **`--runtime`**: The path to your runtime's Wasm. - - **`--pallet`**: The name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks`. - - **`--extrinsic`**: Which extrinsic to test. Using `""` implies all extrinsics will be benchmarked. - - **`--template`**: Defines how weight information should be formatted. - - **`--output`**: Where the output of the auto-generated weights will reside. + The `runtime-benchmarks` feature flag adds special host functions that are only available in the benchmarking execution environment. A runtime compiled with benchmarking features will fail to start on a production node. -The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following. Some output is omitted for brevity: + This build produces a production-ready WASM runtime at `target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm`. ---8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/benchmark-output.html' + !!! note "Compare build types" + - `cargo build --release --features runtime-benchmarks` - Compiles with benchmarking host functions for measurement. Use this ONLY when running benchmarks with `frame-omni-bencher`. + - `cargo build --release` - Compiles production runtime without benchmarking features. Use this for running your chain in production. -#### Add Benchmark Weights to Pallet +2. 
Generate a new chain specification file with the updated runtime using the following command:
+
+    ```bash
+    chain-spec-builder create -t development \
+    --relay-chain paseo \
+    --para-id 1000 \
+    --runtime ./target/release/wbuild/parachain-template-runtime/parachain_template_runtime.compact.compressed.wasm \
+    named-preset development
+    ```

-Once the `weights.rs` is generated, you must integrate it with your pallet.
+
+    This command generates a chain specification file, `chain_spec.json`, for your parachain with the updated runtime, which defines the initial state and configuration of your blockchain, including the runtime WASM code, genesis storage, and network parameters. Generating this new chain spec with your updated runtime ensures nodes starting from this spec will use the correct version of your code with proper weight calculations.

-1. To begin the integration, import the `weights` module and the `WeightInfo` trait, then add both to your pallet's `Config` trait. Complete the following steps to set up the configuration:
+
+3. Start the parachain node using the Polkadot Omni Node with the generated chain specification by running the following command:

-    ```rust title="lib.rs"
-    --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/weight-config.rs'
+    ```bash
+    polkadot-omni-node --chain ./chain_spec.json --dev
    ```

-2. Next, you must add this to the `#[pallet::weight]` annotation in all the extrinsics via the `Config` as follows:
+
+    The node will start and display initialization information, including:

-    ```rust hl_lines="2" title="lib.rs"
-    --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/dispatchable-pallet-weight.rs'
-    ```
+
+    - The chain specification name
+    - The node identity and peer ID
+    - Database location
+    - Network endpoints (JSON-RPC and Prometheus)
Once the node is running, you will see log messages confirming successful production of blocks similar to the following: - ```rust title="mod.rs" - --8<-- 'code/parachains/customize-runtime/pallet-development/benchmark-pallet/runtime-pallet-config.rs' - ``` +
+ polkadot-omni-node --chain ./chain_spec.json --dev + [Parachain] 🔨 Initializing Genesis block/state (state: 0x47ce…ec8d, header-hash: 0xeb12…fecc) + [Parachain] 🎁 Prepared block for proposing at 1 (3 ms) ... + [Parachain] 🏆 Imported #1 (0xeb12…fecc → 0xee51…98d2) + [Parachain] 🎁 Prepared block for proposing at 2 (3 ms) ... + [Parachain] 🏆 Imported #2 (0xee51…98d2 → 0x35e0…cc32) + +
+ + The parachain will produce new blocks every few seconds. You can now interact with your pallet's extrinsics through the JSON-RPC endpoint at `http://127.0.0.1:9944` using tools like [Polkadot.js Apps](https://polkadot.js.org/apps/){target=\_blank}. -## Where to Go Next +## Related Resources -- View the Rust Docs for a more comprehensive, low-level view of the [FRAME V2 Benchmarking Suite](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=_blank}. -- Read the [FRAME Benchmarking and Weights](https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/reference_docs/frame_benchmarking_weight/index.html){target=_blank} reference document, a concise guide which details how weights and benchmarking work. +- [FRAME Benchmarking Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/index.html){target=\_blank} +- [Weight Struct Documentation](https://paritytech.github.io/polkadot-sdk/master/frame_support/weights/struct.Weight.html){target=\_blank} +- [Benchmarking v2 API](https://paritytech.github.io/polkadot-sdk/master/frame_benchmarking/v2/index.html){target=\_blank} +- [frame-omni-bencher Tool](https://paritytech.github.io/polkadot-sdk/master/frame_omni_bencher/index.html){target=\_blank} diff --git a/parachains/customize-runtime/pallet-development/create-a-pallet.md b/parachains/customize-runtime/pallet-development/create-a-pallet.md index 26d75c2b8..f86b7b83b 100644 --- a/parachains/customize-runtime/pallet-development/create-a-pallet.md +++ b/parachains/customize-runtime/pallet-development/create-a-pallet.md @@ -340,7 +340,7 @@ This command validates all pallet configurations and prepares the build for depl ## Run Your Chain Locally -Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. 
+Launch your parachain locally to test the new pallet functionality using the [Polkadot Omni Node](https://crates.io/crates/polkadot-omni-node){target=\_blank}. For instructions on setting up the Polkadot Omni Node and [Polkadot Chain Spec Builder](https://crates.io/crates/staging-chain-spec-builder){target=\_blank}, refer to the [Set Up a Parachain Template](/parachains/launch-a-parachain/set-up-the-parachain-template/){target=\_blank} guide. ### Generate a Chain Specification diff --git a/parachains/customize-runtime/pallet-development/pallet-testing.md b/parachains/customize-runtime/pallet-development/pallet-testing.md index 6dc75eb5e..816a2401a 100644 --- a/parachains/customize-runtime/pallet-development/pallet-testing.md +++ b/parachains/customize-runtime/pallet-development/pallet-testing.md @@ -1,10 +1,10 @@ --- -title: Pallet Testing +title: Unit Test Pallets description: Learn how to efficiently test pallets in the Polkadot SDK, ensuring the reliability and security of your pallets operations. categories: Parachains --- -# Pallet Testing +# Unit Test Pallets ## Introduction