This PR builds on #29040 and updates it to the new version of the spec.
I filled the EEST tests and they pass.
Link to spec: https://eips.ethereum.org/EIPS/eip-7623
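For context, the gist of EIP-7623 is a calldata floor price: a transaction is charged at least a floor cost derived from its calldata tokens, whichever of the floor and the regular execution gas is higher. A minimal sketch of that calculation, written against the spec's constants rather than geth's actual implementation:

```go
package main

import "fmt"

const (
	txGas             = 21000 // intrinsic transaction cost
	floorCostPerToken = 10    // TOTAL_COST_FLOOR_PER_TOKEN from EIP-7623
)

// floorDataGas computes the EIP-7623 floor:
// tokens = zero_bytes + 4*nonzero_bytes, floor = 21000 + 10*tokens.
func floorDataGas(data []byte) uint64 {
	var tokens uint64
	for _, b := range data {
		if b == 0 {
			tokens++
		} else {
			tokens += 4
		}
	}
	return txGas + floorCostPerToken*tokens
}

// chargedGas is the gas actually charged: the regular execution gas,
// but never less than the calldata floor.
func chargedGas(execGas uint64, data []byte) uint64 {
	if floor := floorDataGas(data); floor > execGas {
		return floor
	}
	return execGas
}

func main() {
	data := []byte{0x00, 0x01, 0x00, 0xff}
	fmt.Println(chargedGas(21000, data)) // floor wins: 21000 + 10*(2+4*2) = 21100
}
```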
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>
Co-authored-by: lightclient <lightclient@protonmail.com>
Same as #31015 but requires the contract to exist. Not compatible with
any verkle testnet up to now.
This adds an `isSystemCall` flag so that it is possible to detect when a
system call is executed, ensuring that the code execution and other
locations are not added to the witness.
---------
Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Ignacio Hagopian <jsign.uy@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
The total difficulty is the sum of all block difficulties from genesis
to a certain block. This value was used in PoW for deciding which chain
is heavier, and thus which chain to select. Since PoS has a different
fork selection algorithm, all blocks since the merge have a difficulty
of 0, and all total difficulties are the same for the past 2 years.
Whilst the TDs are mostly useless nowadays, there was never really a
reason to mess around removing them since they are so tiny. This
reasoning changes when we go down the path of pruned chain history. In
order to reconstruct any TD, we **must** retrieve all the headers from
chain head to genesis and then iterate all the difficulties to compute
the TD.
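To make the cost concrete, here is a sketch of what reconstructing a single TD entails (with a hypothetical `headerByNumber` accessor, not geth's actual API):

```go
package sketch

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// totalDifficulty reconstructs the TD of block n by walking every header
// from genesis and summing the difficulties. Once headers are pruned, the
// accessor returns nil and the TD becomes uncomputable.
func totalDifficulty(headerByNumber func(uint64) *types.Header, n uint64) *big.Int {
	td := new(big.Int)
	for i := uint64(0); i <= n; i++ {
		h := headerByNumber(i)
		if h == nil {
			return nil // pruned segment: cannot reconstruct
		}
		td.Add(td, h.Difficulty)
	}
	return td
}
```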
In a world where we completely prune past chain segments (bodies,
receipts, headers), it is not possible to reconstruct the TD at all. In
a world where we still keep chain headers and prune only the rest,
reconstructing it is possible as long as we process (or download) the
chain forward from genesis, but trying to snap sync the head first and
backfill later hits the same issue: the TD becomes impossible to
calculate until genesis is backfilled.
All in all, the TD is a messy out-of-state, out-of-consensus computed
field that is overall useless nowadays, but code relying on it forces
the client into certain modes of operation and prevents other modes or
other optimizations. This PR completely nukes out the TD from the node.
It doesn't compute it, it doesn't operate on it, it's as if it didn't
even exist.
Caveats:
- Whenever we have APIs that return TD (devp2p handshake, tracer, etc.)
we return a TD of 0.
- For era files, we recompute the TD during export time (fairly quick)
to retain the format content.
- It is not possible to "verify" the merge point (i.e. with TD gone, TTD
is useless). Since we're not verifying PoW any more and just blindly
trust it, not verifying but blindly trusting the many-year-old merge
point amounts to the same trust model.
- Our tests still need to be able to generate pre and post merge blocks,
so they need a new way to split the merge without TTD. The PR introduces
a settable ttdBlock field on the consensus object which is used by tests
as the block where the TTD originally happened. This is not needed for
live nodes; we never want to generate old blocks.
- One merge transition consensus test was disabled. With a
non-operational TD, testing how the client reacts to TTD is pointless;
it cannot react.
Questions:
- Should we also drop the terminal total difficulty from the genesis json?
It's a number we cannot react to any more, so maybe it would be cleaner
to get rid of even more concepts.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
As part of trying to make the inputs and outputs of the evm subcommands
more streamlined and aligned, this PR modifies how `evm t8n` manages
output-files.
Previously, we did a kind of wonky thing where, between each transaction,
we invoked a `getTracer` closure. In that closure, we created a new
output-file and a tracer, and made the tracer stream output to the
file. We also fiddled a bit to ensure that the file was properly
closed.
It was a kind of hacky solution. This PR changes it so that, from the
execution-pipeline point of view, we have just a regular tracer. No
fiddling with re-setting it or closing files.
That particular tracer, however, is a bit special: it takes care of
creating a new file per transaction (in the tx-start hook) and closing
it (in the tx-end hook). It also instantiates the right type of
underlying tracer, which can be a json-logger or a custom tracer.
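A rough sketch of the idea, with illustrative names; its methods would be registered as the corresponding tracing hooks, and it assumes the `logger.NewJSONLogger` constructor that streams to an `io.Writer`:

```go
package sketch

import (
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/eth/tracers/logger"
)

// fileSwappingTracer opens a fresh output-file per transaction and forwards
// events to an underlying tracer that streams into that file.
type fileSwappingTracer struct {
	file  *os.File
	inner *tracing.Hooks
	txNum int
}

func (t *fileSwappingTracer) OnTxStart(vm *tracing.VMContext, tx *types.Transaction, from common.Address) {
	t.file, _ = os.Create(fmt.Sprintf("trace-%d-%s.jsonl", t.txNum, tx.Hash().Hex())) // error handling elided
	t.txNum++
	t.inner = logger.NewJSONLogger(nil, t.file) // could equally be a custom tracer
	if t.inner.OnTxStart != nil {
		t.inner.OnTxStart(vm, tx, from)
	}
}

func (t *fileSwappingTracer) OnTxEnd(receipt *types.Receipt, err error) {
	if t.inner != nil && t.inner.OnTxEnd != nil {
		t.inner.OnTxEnd(receipt, err)
	}
	t.file.Close() // the start/end hook pair guarantees the file gets closed
}
```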
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request delivers the new version of the state history, where
the raw storage key is used instead of the hash.
Before the Cancun fork, the protocol supported destructing a specific
account, and therefore all the storage slots owned by it had to be
wiped in the same transition.
Technically, storage wiping should be performed through storage
iteration, and only the storage key hash will be available for traversal
if the state snapshot is not available. Therefore, the storage key hash
was chosen as the identifier in the old version of the state history.
Fortunately, account self-destruction has been deprecated by the
protocol since the Cancun fork, and there are no empty accounts eligible
for deletion under EIP-158. Therefore, we can conclude that no storage
wiping should occur after the Cancun fork. In this case, it makes no
sense to keep using the hash.
Besides, another big reason for making this change is that the
current-format state history is unusable if verkle is activated. The
verkle tree has a different key derivation scheme (merkle uses
keccak256), so the preimage of the key hash must be provided in order
to make verkle rollback functional.
This pull request is a prerequisite for landing verkle.
Additionally, the raw storage key is more human-friendly for those who
want to manually check the history, even though Solidity already
performs some hashing to derive the storage location.
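To illustrate the difference between the two identifiers, a small sketch (not the migration code itself):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	slot := common.HexToHash("0x01") // raw storage key, e.g. Solidity slot 1

	// Old history format: identified by keccak256(slot), which is all a
	// merkle-trie iteration yields when preimages are unavailable.
	hashed := crypto.Keccak256Hash(slot.Bytes())

	// New history format: identified by the raw slot itself, which verkle
	// key derivation (and a human reader) actually needs.
	fmt.Println("old identifier:", hashed)
	fmt.Println("new identifier:", slot)
}
```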
---
This pull request doesn't bump the database version, as I believe the
database should still be compatible if users downgrade from the new geth
version to an old one; the only side effect is that state history
persisted by the new version will be unusable.
---------
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
This changes the SenderCacher so its goroutines will only be started on first use.
Avoids starting them when package core is just imported but core.BlockChain isn't used.
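The lazy-start pattern looks roughly like this (illustrative names, not the actual SenderCacher code):

```go
package sketch

import "sync"

type senderCacher struct {
	once  sync.Once
	tasks chan func()
}

// start spins up the worker goroutines; it runs at most once.
func (c *senderCacher) start() {
	c.tasks = make(chan func(), 128)
	for i := 0; i < 4; i++ { // worker count is arbitrary in this sketch
		go func() {
			for task := range c.tasks {
				task()
			}
		}()
	}
}

// Recover queues a sender-recovery task, starting the workers on first use,
// so merely importing the package starts nothing.
func (c *senderCacher) Recover(task func()) {
	c.once.Do(c.start)
	c.tasks <- task
}
```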
We still need to decide how to handle a non-specific `chainId` in the JSON
encoding of authorizations. With `chainId` being a uint64, the previous
implementation just used value zero. However, it might actually be more
correct to use the value `null` for this case.
This pull request refactors the genesis setup function, the major
changes are highlighted here:
**(a) Triedb is opened in verkle mode if `EnableVerkleAtGenesis` is
configured in chainConfig or the database has been initialized previously with
`EnableVerkleAtGenesis` configured**.
A new config field `EnableVerkleAtGenesis` has been added to the
chainConfig. This field must be set to true if Geth is to initialize the
genesis in verkle mode.
In the verkle devnet-7, the verkle transition is activated at genesis, so
the verkle rules should be used from genesis onwards. In production
networks (mainnet and public testnets), verkle activation always occurs
after the genesis block, so this flag is only meant for devnets and should
be deprecated later. Besides, the verkle transition at a non-genesis block
hasn't been implemented yet; it should be done in follow-up PRs.
**(b) The genesis initialization condition has been simplified**
Geth supports a special mode in which it can be initialized with an
existing chain segment, which can speed up node sync by retaining the
chain freezer folder.
Originally, if the triedb was regarded as uninitialized and the genesis
block could be found in the chain freezer, the genesis block along with
the genesis state would be committed. This condition has been simplified
to checking for the presence of the chain config in the key-value store:
the existence of the chain config indicates that the genesis has been
committed.
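A sketch of the simplified check, assuming the `rawdb.ReadChainConfig` accessor (the surrounding plumbing is elided):

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// genesisCommitted replaces the triedb+freezer probing: a stored chain
// config for the genesis hash is taken as proof that genesis was committed.
func genesisCommitted(db ethdb.KeyValueReader, genesisHash common.Hash) bool {
	return rawdb.ReadChainConfig(db, genesisHash) != nil
}
```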
- it was failing because the maximum data length (previously `dataSize`)
was set to `txMaxSize - 213` but should have been `txMaxSize - 103`, and
the last call `dataSize+1+uint64(rand.Intn(10*txMaxSize)))` would
sometimes fail depending on rand.Intn.
- the (invalid) maximal-transaction-data-size comment was replaced by
code logic that finds the maximum tx length without its data length
- comments and variable naming improved for clarity
- the 3rd pool add test was replaced to add just 1 above the maximum
length, which is important to ensure the logic is correct
This PR upgrades `golangci-lint` to v1.63.4 and fixes a warning reported
by v1.63.4:
```text
WARN [config_reader] The configuration option `run.skip-dirs-use-default` is deprecated, please use `issues.exclude-dirs-use-default`.
```
Also fixes 2 warnings which are reported by v1.63.4:
```text
core/txpool/blobpool/blobpool.go:1754:12: S1005: unnecessary assignment to the blank identifier (gosimple)
for acct, _ := range p.index {
^
core/txpool/legacypool/legacypool.go:1989:19: S1005: unnecessary assignment to the blank identifier (gosimple)
for localSender, _ := range pool.locals.accounts {
^
```
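The S1005 fix is simply dropping the blank identifier from the range clause, e.g.:

```go
package main

import "fmt"

func main() {
	index := map[string]int{"a": 1, "b": 2}

	// gosimple flags `for acct, _ := range index`; ranging over the keys
	// alone is equivalent and idiomatic.
	for acct := range index {
		fmt.Println(acct)
	}
}
```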
As the node hash schemes in verkle and merkle are totally different, the
original default node hasher in pathdb is no longer suitable. Therefore,
this pull request configures a different node hasher for each scheme.
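Conceptually, the node hasher becomes pluggable, along these lines (illustrative names, not pathdb's actual API):

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// nodeHasher abstracts how a trie node blob is hashed, so pathdb can be
// configured per scheme instead of assuming one default.
type nodeHasher func(blob []byte) common.Hash

// merkleNodeHasher is the classic scheme: keccak256 over the node blob.
func merkleNodeHasher(blob []byte) common.Hash {
	return crypto.Keccak256Hash(blob)
}

// A verkle node hasher would instead compute the verkle commitment for the
// node (a banderwagon point, not a keccak256 hash); elided in this sketch.
```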
Here I am proposing two small changes to the exported API for EIP-7702:
(1) `Authorization` has a very generic name, but it is in fact only used
for one niche use case: authorizing code in a `SetCodeTx`. So I propose
calling it `SetCodeAuthorization` instead. The signing function is
renamed to `SignSetCode` instead of `SignAuth`.
(2) The signing function for authorizations should take key as the first
parameter, and the authorization second. The key will almost always be
in a variable, while the authorization can be given as a literal.
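Usage after the rename would look roughly like this (a sketch assuming a uint64 `ChainID` field, per the JSON discussion above):

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func example() (types.SetCodeAuthorization, error) {
	key, err := crypto.GenerateKey()
	if err != nil {
		return types.SetCodeAuthorization{}, err
	}
	// Key first, authorization second: the key is almost always a variable,
	// so the authorization literal reads naturally as the trailing argument.
	return types.SignSetCode(key, types.SetCodeAuthorization{
		ChainID: 1,
		Address: common.HexToAddress("0x000000000000000000000000000000000000aaaa"),
		Nonce:   1,
	})
}
```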
Fixing some issues I found while regenerating RPC tests for Prague:
- Authorization signature values were not encoded as hex
- `requestsRoot` in block should be `requestsHash`
- `authorizationList` should work for `eth_call`
Noticed this omission while doing some work on goevmlab. We don't
properly type some of the opcodes, but apparently implicit casting works
in all the internal use cases.
Adding some missing functionality I noticed while updating the hivechain
tool for the Prague fork:
- we forgot to process the parent block hash
- added `ConsensusLayerRequests` to get the requests list of the block
This PR implements EIP-7702: "Set EOA account code".
Specification: https://eips.ethereum.org/EIPS/eip-7702
> Add a new transaction type that adds a list of `[chain_id, address,
nonce, y_parity, r, s]` authorization tuples. For each tuple, write a
delegation designator `(0xef0100 ++ address)` to the signing account’s
code. All code reading operations must load the code pointed to by the
designator.
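The designator check itself is tiny; roughly, mirroring what a `ParseDelegation` helper does (details may differ):

```go
package sketch

import (
	"bytes"

	"github.com/ethereum/go-ethereum/common"
)

// delegationPrefix is the 0xef0100 marker from EIP-7702.
var delegationPrefix = []byte{0xef, 0x01, 0x00}

// parseDelegation resolves a delegation designator: code of the exact form
// 0xef0100 ++ address redirects code-reading operations to that address.
func parseDelegation(code []byte) (common.Address, bool) {
	if len(code) != len(delegationPrefix)+common.AddressLength || !bytes.HasPrefix(code, delegationPrefix) {
		return common.Address{}, false
	}
	return common.BytesToAddress(code[len(delegationPrefix):]), true
}
```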
---------
Co-authored-by: Mario Vega <marioevz@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR modifies how the metrics library handles `Enabled`: previously,
the package `init` decided whether to serve real metrics or just
dummy-types.
This has several drawbacks:
- During pkg init, we need to determine whether metrics are enabled or
not. So we first hacked in a check if certain geth-specific
commandline-flags were enabled. Then we added a similar check for
geth-env-vars. Then we almost added a very elaborate check for
toml-config-file, plus toml parsing.
- Using "real" types and dummy types interchangeably means that
everything is hidden behind interfaces. This has a performance penalty,
and also it just adds a lot of code.
This PR removes the interface stuff, uses concrete types, and allows for
the setting of Enabled to happen later. It is still assumed that
`metrics.Enable()` is invoked early on.
The somewhat 'heavy' operations, such as ticking meters and exp-decay,
now check the enable-flag to prevent resource leaks.
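The pattern, sketched with illustrative names (not the package's actual internals):

```go
package sketch

import "sync/atomic"

var enabled atomic.Bool

// Enable switches metrics on; it is expected to be called early on.
func Enable() { enabled.Store(true) }

type Meter struct {
	count atomic.Int64
}

// Mark is a cheap no-op while metrics are disabled, so no ticker goroutines
// or decay bookkeeping leak when a node runs without metrics.
func (m *Meter) Mark(n int64) {
	if !enabled.Load() {
		return
	}
	m.count.Add(n)
}
```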
The change may be large, but it's mostly pretty trivial, and from the
last time I gutted the metrics, I ensured that we have fairly good test
coverage.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This pull request is based on https://github.com/ethereum/go-ethereum/pull/30643.
In this pull request, the partial functional state reader is enabled if the **legacy snapshot
is not enabled**. The tracked flat states in pathdb will be used to serve state
retrievals, as a second implementation to speed up state access.
This pull request should be a noop change in normal cases.
This PR extends the Hooks interface with a new method,
`OnSystemCallStartV2`, which takes `VMContext` as its parameter.
**Motivation**
By including `VMContext` as a parameter, the `OnSystemCallStartV2` hook
achieves parity with the `OnTxStart` hook in terms of provided insights.
This alignment simplifies the inner tracer logic, enabling consistent
handling of state changes and internal calls within the same framework.
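A minimal sketch of a tracer wiring the new hook (field layout per this PR's description):

```go
package sketch

import "github.com/ethereum/go-ethereum/core/tracing"

// newHooks shows the point of the V2 hook: the VMContext gives the
// system-call path the same environment OnTxStart gets, so state changes
// and internal calls can be handled by the same tracer logic.
func newHooks() *tracing.Hooks {
	return &tracing.Hooks{
		OnSystemCallStartV2: func(vm *tracing.VMContext) {
			// vm.StateDB and the block environment are in scope here,
			// just as they are in OnTxStart.
		},
	}
}
```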
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR refactors the structlog a bit, making it so that it can be used
in a streaming mode.
-------------
OBS: this PR makes a change to the `config` input, the third input param
of `debug.traceCall`. Previously, setting it to e.g.
`{"enableMemory": true, "limit": 1024}` would mean that the response
was limited to `1024` items. Since an 'item' may include both memory and
storage, the actual size of the response was undetermined.
After this change, the response will be limited to `1024` __`bytes`__
(or thereabouts).
-----------
The commandline usage of structlog now uses the streaming mode, leaving
the non-streaming mode of operation for `eth_call`.
There are two benefits of streaming mode:
1. Not having to maintain a long list of operations,
2. Not having to duplicate / n-plicate data (e.g. memory / stack /
returndata) so that each entry has its own private slice.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
* unify `staterunner` and `blockrunner` CLI flags, especially around
tracing
* added support for struct logger or json logging (although issue
#30658 remains)
* new `--cross-check` flag to validate that the stateless witness
collection / execution matches the stateful one
* adds support for tracing the stateless execution when a tracer is set
(to more easily debug differences)
* `--human` for a more readable test summary
* directory or file input, so if you pass `tests/spec-tests/fixtures/blockchain_tests` it will execute all
blockchain tests
This change relocates the EVM tx context switching to the ApplyMessage function.
With this change, we can remove a lot of EVM.SetTxContext calls before
message execution.
### Tracing API changes
- This PR replaces the `GasPrice` field of the `VMContext` struct with
`BaseFee`. Users may instead take the effective gas price from
`tx.EffectiveGasTipValue(env.BaseFee)`.
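For tracers that still need a gas price, it can be reconstructed from `BaseFee` and the effective tip; a sketch:

```go
package sketch

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// effectiveGasPrice recovers what the removed GasPrice field carried:
// base fee plus the effective tip (capped by the fee cap).
func effectiveGasPrice(tx *types.Transaction, baseFee *big.Int) *big.Int {
	if baseFee == nil {
		return tx.GasPrice() // pre-London context: the raw gas price
	}
	return new(big.Int).Add(baseFee, tx.EffectiveGasTipValue(baseFee))
}
```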
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR introduces a `ContractCodeReader` interface with the following
functions defined:
```go
type ContractCodeReader interface {
	Code(addr common.Address, codeHash common.Hash) ([]byte, error)
	CodeSize(addr common.Address, codeHash common.Hash) (int, error)
}
```
This interface can be implemented in various ways. Although the codebase
currently includes only one implementation, additional implementations
could be created for different purposes and scenarios, such as a code
reader designed for the Verkle tree approach or one that reads code from
the witness.
*Notably, this interface modifies the function’s semantics. If the
contract code is not found, no error will be returned. An error should
only be returned in the event of an unexpected issue, primarily for
future implementations.*
The original state.Reader interface is extended with the ContractCodeReader
methods, giving us more flexibility to manipulate the reader with additional
logic on top, e.g. Hooks.
```go
type Reader interface {
	ContractCodeReader
	StateReader
}
```
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
The existing implementation is correct when building and verifying
blocks, since we will only collect non-empty requests into the block
requests list.
But it isn't correct for cases where a requests list containing empty
items is sent by the consensus layer on the engine API. We want to
ensure that empty requests do not cause a difference in validation
there, so the commitment computation should explicitly skip them.
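A sketch of the commitment with that skip in place (flat `type ++ data` byte slices assumed as input; per EIP-7685 the commitment is a sha256 over the per-request sha256 hashes):

```go
package sketch

import (
	"crypto/sha256"

	"github.com/ethereum/go-ethereum/common"
)

// calcRequestsHash commits to a requests list, explicitly skipping requests
// whose payload is empty so they cannot alter the commitment.
func calcRequestsHash(requests [][]byte) common.Hash {
	outer := sha256.New()
	for _, req := range requests {
		if len(req) <= 1 { // only the type byte: an empty request, skip it
			continue
		}
		inner := sha256.Sum256(req)
		outer.Write(inner[:])
	}
	return common.BytesToHash(outer.Sum(nil))
}
```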
This workaround is meant to minimize the possibility of snapshot regeneration
once the geth node upgrades to the new version (specifically #30752).
In #30752, the journal format of the state snapshot is modified by removing
the destruct set. Therefore, the existing old format (version = 0) will be
discarded and all in-memory layers will be lost. Unfortunately, the lost
in-memory layers can't be recovered by any other approach, and the
entire state snapshot will be regenerated (which takes about 2.5 hours).
This pull request introduces a workaround to adopt the legacy journal if
the destruct set contained is empty. Since self-destruction has been
deprecated following the Cancun fork, the destruct set is expected to be nil for
layers above the fork block. However, an exception occurs during contract
deployment: pre-funded accounts may self-destruct, causing accounts with
non-zero balances to be removed from the state. See for example
https://etherscan.io/tx/0xa087333d83f0cd63b96bdafb686462e1622ce25f40bd499e03efb1051f31fe49.
For nodes with a fully synced state, the legacy journal is likely compatible with
the updated definition, eliminating the need for regeneration. Unfortunately,
nodes performing a full sync of historical chain segments or encountering
pre-funded account deletions may face incompatibilities, leading to automatic
snapshot regeneration.
This reverts commit 23800122b3.
The original pull request introduced a bug, and some flaky tests were
detected because of this flaw.
```
--- FAIL: TestRecoverSnapshotFromWipingCrash (0.27s)
blockchain_snapshot_test.go:158: The disk layer is not integrated snapshot is not constructed
{"pc":0,"op":88,"gas":"0x7148","gasCost":"0x2","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PC"}
{"pc":1,"op":255,"gas":"0x7146","gasCost":"0x1db0","memSize":0,"stack":["0x0"],"depth":1,"refund":0,"opName":"SELFDESTRUCT"}
{"output":"","gasUsed":"0x0"}
{"output":"","gasUsed":"0x1db2"}
{"pc":0,"op":116,"gas":"0x13498","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH21"}
```
Before the original PR, the snapshot would block the function until the
disk layer was fully generated under the following conditions:
(a) explicitly required by users with `AsyncBuild = false`.
(b) the snapshot was being fully rebuilt or *the disk layer generation
had resumed*.
Unfortunately, with the changes introduced in that PR, the snapshot no
longer waits for disk layer generation to complete if the generation is
resumed. This brings a lot of uncertainty and breaks this tiny debug
feature.
This PR is purely for improved readability; I was doing work involving
the file and think this may help others who are trying to understand
what's going on.
1. `snapshot.Tree.Rebuild()` now returns a function that blocks until
regeneration is complete, allowing `Tree.waitBuild()` to be removed
entirely as all it did was search for the `done` channel behind this new
function.
2. Its usage inside `New()` is also simplified by (a) only waiting if
`!AsyncBuild`; and (b) avoiding the double negative of `if !NoBuild`.
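In sketch form (a stand-in `Tree`, not the actual snapshot code):

```go
package sketch

import "github.com/ethereum/go-ethereum/common"

type Tree struct{} // stand-in for snapshot.Tree

// Rebuild starts regeneration and returns a function that blocks until the
// disk layer is complete, replacing the old waitBuild() channel search.
func (t *Tree) Rebuild(root common.Hash) (wait func()) {
	done := make(chan struct{})
	go func() {
		defer close(done)
		// ... regenerate the disk layer for root ...
	}()
	return func() { <-done }
}
```

so that `New()` can simply do `wait := t.Rebuild(root); if !config.AsyncBuild { wait() }`.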
---------
Co-authored-by: Martin HS <martin@swende.se>