
Conversation

kawahwookiee

This PR adds the feature requested in #32873.

@kawahwookiee changed the title from "added support for more granular storage slot override" to "internal/ethapi/override: added support for more granular storage slot override" on Oct 10, 2025
@kawahwookiee
Author

The current implementation favors full backwards compatibility, sacrificing some performance, e.g. by adding an extra map lookup.

@MariusVanDerWijden
Member

I don't really think this makes sense; you can easily do this on the user side, and I don't see the benefit.
Maybe @s1na can weigh in here.

@s1na
Contributor

s1na commented Oct 13, 2025

I also don't understand the benefit of this mask. In what cases can you describe the storage slots you want to change with a bitmask, exactly? Why can't it be done on the client side?

@kawahwookiee
Author

kawahwookiee commented Oct 13, 2025

by "doing it on the client/user side" do you mean the flow I presented in the description (request storage slot -> overwrite required var -> put the whole new slot into stateOverrides)?
if that is the case, i would say omitting the first step is convenient, if one wants to override an 8byte block number or a timestamp or some single flag in an eth_call to some contract.
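
For illustration, a minimal Go sketch of that existing client-side flow, using go-ethereum's ethclient for the read (the endpoint, contract address, slot key, and patched value below are made up):

```go
package main

import (
	"context"
	"encoding/binary"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	ctx := context.Background()

	// Hypothetical endpoint, contract and storage slot.
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	contract := common.HexToAddress("0x1111111111111111111111111111111111111111")
	slotKey := common.HexToHash("0x0")

	// Step 1: read the current 32-byte slot value (eth_getStorageAt).
	cur, err := client.StorageAt(ctx, contract, slotKey, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: overwrite only the last 8 bytes (e.g. a packed uint64 timestamp).
	patched := common.BytesToHash(cur)
	binary.BigEndian.PutUint64(patched[24:], 0x0123456789abcdef)

	// Step 3: put the whole patched slot into the stateDiff of the eth_call
	// state overrides (e.g. via gethclient.CallContract or a raw eth_call):
	//   "stateDiff": { <slotKey>: <patched> }
	log.Printf("override slot %s -> %s", slotKey.Hex(), patched.Hex())
}
```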

@kawahwookiee
Author

In what cases can you describe the storage slots you want to change

E.g. I know I want to change the last 8 bytes of a storage slot, so I provide a state override 0x.....00000123456789abcdef and a mask 0x....00000ffffffffffffffff in the same eth_call, so that there is no need to call getStorageAt beforehand.
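
To make the proposal concrete, a small sketch of the masked-merge semantics (hypothetical helper name, not the PR's actual code): only the bits selected by the mask are taken from the override value, the rest keep the slot's current value.

```go
package override

import "github.com/ethereum/go-ethereum/common"

// maskedOverride merges an override value into the current slot value,
// replacing only the bits selected by mask:
//
//	result = (original &^ mask) | (override & mask)
func maskedOverride(original, override, mask common.Hash) common.Hash {
	var out common.Hash
	for i := range out {
		out[i] = (original[i] &^ mask[i]) | (override[i] & mask[i])
	}
	return out
}
```

With an all-0xff mask over the last 8 bytes, this reproduces the example above without a prior getStorageAt.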

@s1na
Contributor

s1na commented Oct 13, 2025

This feature is incredibly niche, and I'm not sure about changing the API surface for it. I'm sorry but I'm going to have to close this one.

@s1na closed this on Oct 13, 2025