~~Opening this as a draft to have a discussion.~~ Pressed the wrong
button
I had [a previous PR](https://github.com/ethereum/go-ethereum/pull/24616) a long time ago which reduced the peak memory used during reorgs by not accumulating all transactions and logs.
This PR reduces the peak memory further by not storing the blocks in
memory.
However, this means we need to pull the blocks back up from storage multiple times during the reorg.
I collected the following numbers on peak memory usage:
Master:          BenchmarkReorg-8  10000   899591 ns/op  820154 B/op  1440 allocs/op  1549443072 bytes of heap used
WithoutOldChain: BenchmarkReorg-8  10000  1147281 ns/op  943163 B/op  1564 allocs/op  1163870208 bytes of heap used
WithoutNewChain: BenchmarkReorg-8  10000  1018922 ns/op  943580 B/op  1564 allocs/op  1171890176 bytes of heap used
Each block contains a transaction of ~50k bytes and we're doing a 10k-block reorg, so the chain should be ~500MB in size.
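For illustration, here is a minimal sketch (in the spirit of the change, not the PR's actual diff) of the pattern: instead of accumulating `*types.Block` values for the whole old chain, keep only hashes and numbers and re-read each block from disk when needed, trading extra database reads for a lower peak heap.

```go
package reorgsketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethdb"
)

// collectLogs is a hypothetical stand-in for the reorg's per-block work,
// e.g. gathering the logs that need to be marked as removed.
func collectLogs(block *types.Block) {}

// processOldChain re-reads each block from storage by hash and number
// instead of holding the entire old chain in memory at once.
func processOldChain(db ethdb.Reader, hashes []common.Hash, numbers []uint64) {
	for i, hash := range hashes {
		block := rawdb.ReadBlock(db, hash, numbers[i])
		if block == nil {
			return // missing block; real code would surface an error
		}
		collectLogs(block)
		// The block goes out of scope here, so peak memory stays
		// proportional to a single block rather than the whole chain.
	}
}
```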
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This PR integrates witness-enabled block production, witness-creating
payload execution and stateless cross-validation into the `engine` API.
The purpose of the PR is to enable the following use-cases (for API
details, please see next section):
- Cross validating locally created blocks:
- Call `forkchoiceUpdatedWithWitness` instead of `forkchoiceUpdated` to
trigger witness creation too.
- Call `getPayload` as before to retrieve the new block and also the
above created witness.
- Call `executeStatelessPayload` against another client to
cross-validate the block.
- Cross validating locally processed blocks:
- Call `newPayloadWithWitness` instead of `newPayload` to trigger
witness creation too.
- Call `executeStatelessPayload` against another client to
cross-validate the block.
- Block production for stateless clients (local or MEV builders):
- Call `forkchoiceUpdatedWithWitness` instead of `forkchoiceUpdated` to
trigger witness creation too.
- Call `getPayload` as before to retrieve the new block and also the
above created witness.
- Propagate witnesses across the consensus libp2p network for stateless
Ethereum.
- Stateless validator validation:
- Call `executeStatelessPayload` with the propagated witness to
statelessly validate the block.
*Note: the various `WithWitness` methods could also just be an additional boolean flag on the base methods, but this PR keeps the methods separate until a final consensus is reached on how to integrate them in production.*
---
The following `engine` API types are introduced:
```go
// StatelessPayloadStatusV1 is the result of a stateless payload execution.
type StatelessPayloadStatusV1 struct {
	Status          string      `json:"status"`
	StateRoot       common.Hash `json:"stateRoot"`
	ReceiptsRoot    common.Hash `json:"receiptsRoot"`
	ValidationError *string     `json:"validationError"`
}
```
- Add `forkchoiceUpdatedWithWitnessV1,2,3` with the same params and returns as `forkchoiceUpdatedV1,2,3`, but also triggering stateless witness building if block production is requested.
- Extend `getPayloadV2,3` to return `executionPayloadEnvelope` with an additional `witness` field of type `bytes` iff created via `forkchoiceUpdatedWithWitnessV2,3`.
- Add `newPayloadWithWitnessV1,2,3,4` with the same params and returns as `newPayloadV1,2,3,4`, but also triggering stateless witness creation during payload execution to allow cross-validating it.
- Extend `payloadStatusV1` with a `witness` field of type `bytes` if returned by `newPayloadWithWitnessV1,2,3,4`.
- Add `executeStatelessPayloadV1,2,3,4` with the same base params as `newPayloadV1,2,3,4`, plus one additional param (`witness`) of type `bytes`. The method returns `statelessPayloadStatusV1`, which mirrors `payloadStatusV1` but replaces `latestValidHash` with `stateRoot` and `receiptsRoot`.
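For a sense of how a consumer would drive the cross-validation flow, here is a hedged sketch over JSON-RPC. The method name follows the list above; the parameter shapes are simplified, `engine` API JWT authentication is omitted, and the exact argument list of `executeStatelessPayloadV3` (mirroring `newPayloadV3` plus the witness) is an assumption of this sketch.

```go
package enginesketch

import (
	"context"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

// crossValidate asks a second client to statelessly re-execute a payload.
// payload and witness are assumed to come from the executionPayloadEnvelope
// returned by getPayloadV3 after a forkchoiceUpdatedWithWitnessV3 call.
func crossValidate(ctx context.Context, client *rpc.Client,
	payload interface{}, blobHashes []common.Hash,
	beaconRoot common.Hash, witness []byte) (map[string]interface{}, error) {
	// Same base params as newPayloadV3, plus the witness bytes.
	var status map[string]interface{}
	err := client.CallContext(ctx, &status, "engine_executeStatelessPayloadV3",
		payload, blobHashes, beaconRoot, witness)
	return status, err
}
```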
This pull request introduces a state.Reader interface for accessing state.

The interface can be implemented in various ways: as a pure trie-only reader, or as a combination of trie and state snapshot. Moreover, it gives us more flexibility in the future, e.g. an archive reader (for accessing archive state).
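For a concrete picture, a minimal sketch of the shape such an interface could take (the exact method set and semantics live in the PR; this just shows the idea):

```go
package statesketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// Reader abstracts state access so different backends (trie-only,
// trie+snapshot, or a future archive reader) can be used interchangeably.
type Reader interface {
	// Account retrieves the account associated with the given address.
	// A nil account with a nil error is assumed to mean "not present".
	Account(addr common.Address) (*types.StateAccount, error)

	// Storage retrieves the value of the given slot of the given account.
	// An empty hash with a nil error is assumed to mean "not present".
	Storage(addr common.Address, slot common.Hash) (common.Hash, error)
}
```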
Additionally, this pull request removes the following metrics
- `chain/snapshot/account/reads`
- `chain/snapshot/storage/reads`
This PR changes how sidechains are handled.

Before the merge, it was possible to import a chain with a lower total difficulty and not set it as canonical. After the merge, we expect every chain that we get via InsertChain to be canonical. Non-canonical blocks can still be inserted with InsertBlockWithoutSetHead.

If, during InsertChain, the existing chain is no longer canonical, we mark it as a sidechain and send the SideChainEvents normally.
This pull request adds a few more performance metrics, specifically:
- The average time cost of an account read
- The average time cost of a storage read
- The rate of account reads
- The rate of storage reads
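A hedged sketch of how such read metrics are typically wired up with geth's metrics package; the metric names and the wrapper below are illustrative, not necessarily what the PR registers:

```go
package metricsketch

import (
	"time"

	"github.com/ethereum/go-ethereum/metrics"
)

// Resetting timers report both the average latency and the rate of the
// measured operations between two metric flushes.
var (
	accountReadTimer = metrics.NewRegisteredResettingTimer("chain/account/reads", nil)
	storageReadTimer = metrics.NewRegisteredResettingTimer("chain/storage/reads", nil)
)

// timedAccountRead is a hypothetical wrapper that measures one account read.
func timedAccountRead(read func() error) error {
	start := time.Now()
	err := read()
	accountReadTimer.Update(time.Since(start))
	return err
}
```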
* all: add stateless verifications
* all: simplify witness and integrate it into live geth
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core/state: trie prefetcher change: calling trie() doesn't stop the associated subfetcher
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core/state: improve prefetcher
* core/state: restore async prefetcher task scheduling
* core/state: finish prefetching async and process storage updates async
* core/state: don't use the prefetcher for missing snapshot items
* core/state: remove update concurrency for Verkle tries
* core/state: add some termination checks to prefetcher async shutdowns
* core/state: differentiate db tries and prefetched tries
* core/state: teh teh teh
---------
Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
* core/state, internal/workerpool: parallelize parts of state commit
* core, internal: move workerpool into syncx
* core/state: use errgroups, commit accounts concurrently
* core: resurrect detailed commit timers to almost-accuracy
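To show the errgroup pattern mentioned above, a minimal sketch (the helper names are made up, not geth's actual ones) of committing accounts concurrently:

```go
package commitsketch

import (
	"golang.org/x/sync/errgroup"

	"github.com/ethereum/go-ethereum/common"
)

// commitStorage is a hypothetical per-account commit step.
func commitStorage(addr common.Address) error { return nil }

// commitAccounts commits all dirty accounts concurrently, bounded by a
// worker limit, and returns the first error encountered.
func commitAccounts(dirty []common.Address) error {
	var group errgroup.Group
	group.SetLimit(8) // the real code would size this to the machine
	for _, addr := range dirty {
		addr := addr // capture the loop variable (pre-Go 1.22 semantics)
		group.Go(func() error {
			return commitStorage(addr)
		})
	}
	return group.Wait()
}
```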
Here we add a Go API for running tracing plugins within the main block import process.
As an advanced user of geth, you can now create a Go file in eth/tracers/live/, and within
that file register your custom tracer implementation. Then recompile geth and select your tracer
on the command line. Hooks defined in the tracer will run whenever a block is processed.
The hook system is defined in package core/tracing. It uses a struct with callbacks, instead of
requiring an interface, for several reasons:
- We plan to keep this API stable long-term. The core/tracing hook API does not depend on
deep geth internals.
- There are a lot of hooks, and tracers will only need some of them. Using a struct allows you
to implement only the hooks you want to actually use.
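As a taste of the API, here is a hedged sketch of a live tracer file under eth/tracers/live/; the registration entry point and hook field shown follow the design described above, but consult core/tracing for the authoritative hook set.

```go
package live

import (
	"encoding/json"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/eth/tracers"
)

func init() {
	// Register the tracer under a name selectable on the command line.
	tracers.LiveDirectory.Register("txlogger", newTxLogger)
}

// newTxLogger builds a hooks struct with only the callbacks we care
// about; every other hook stays nil and costs nothing.
func newTxLogger(_ json.RawMessage) (*tracing.Hooks, error) {
	return &tracing.Hooks{
		OnTxStart: func(vm *tracing.VMContext, tx *types.Transaction, from common.Address) {
			log.Printf("tx %s from %s", tx.Hash(), from)
		},
	}, nil
}
```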
All existing tracers in eth/tracers/native have been rewritten to use the new hook system.
This change breaks compatibility with the vm.EVMLogger interface that we used to have.
If you are a user of vm.EVMLogger, please migrate to core/tracing, and sorry for breaking
your stuff. But we just couldn't have both the old and new tracing APIs coexist in the EVM.
---------
Co-authored-by: Matthieu Vachon <matthieu.o.vachon@gmail.com>
Co-authored-by: Delweng <delweng@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
* eth: drop support for forward sync triggers and head block packets
* consensus, eth: enforce always merged network
* eth: fix tx looper startup and shutdown
* cmd, core: fix some tests
* core: remove notion of future blocks
* core, eth: drop unused methods and types
This change simplifies the logic for indexing transactions and enhances the UX when a transaction is not found by returning more information to users.

Transaction indexing is now considered part of the initial sync, and `eth.syncing` will thus be `true` if transaction indexing is not yet finished. API consumers can use the syncing status to determine whether the node is ready to serve users.
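For example, a consumer using ethclient could gate requests on the sync status like this (a minimal sketch):

```go
package clientsketch

import (
	"context"

	"github.com/ethereum/go-ethereum/ethclient"
)

// readyToServe reports whether the node has finished syncing, which now
// includes transaction indexing; eth_syncing returns non-nil while either
// block sync or indexing is still running.
func readyToServe(ctx context.Context, client *ethclient.Client) (bool, error) {
	progress, err := client.SyncProgress(ctx)
	if err != nil {
		return false, err
	}
	return progress == nil, nil
}
```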
This change creates a GaugeInfo metrics type for registering informational (textual) metrics, e.g. the geth version number. It also improves the testing for backend exporters, and uses a shared subpackage in 'internal' to provide sample datasets and an ordered registry.
Implements #21783
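Usage looks roughly like this (a sketch; the metric name and fields are illustrative):

```go
package infosketch

import "github.com/ethereum/go-ethereum/metrics"

// registerVersionInfo publishes static, textual build information as a
// GaugeInfo metric instead of abusing a numeric gauge.
func registerVersionInfo(version, commit string) {
	gauge := metrics.NewRegisteredGaugeInfo("geth/info", nil)
	gauge.Update(metrics.GaugeInfoValue{
		"version": version,
		"commit":  commit,
	})
}
```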
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Currently, we trigger the logic to (un)index transactions when the node receives a new
block. However, in some cases the node may not receive new blocks (e.g. when the Geth node
is configured without peer discovery, or when it acts as an RPC node serving only
historical data).

In these situations, the Geth node user may not have configured txlookuplimit beforehand
(leaving the default of around one year), but later realizes they need to index all
historical blocks. However, setting txlookuplimit=0 and restarting geth had no effect. This
change makes Geth check for required indexing work once, on startup, to fix the issue.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR removes the newly added txpool.Transaction wrapper type, and instead adds a way
of keeping the blob sidecar within types.Transaction. It's better this way because most
code in go-ethereum does not care about blob transactions, and probably never will. This
will start mattering especially on the client side of RPC, where all APIs are based on
types.Transaction. Users need to be able to use the same signing flows they already
have.
However, since blobs are allowed in some places but not others, we now need to
add checks to avoid creating invalid blocks. I'm still trying to figure out the best place
to do some of these. The way I have it currently is as follows:
- In block validation (import), txs are verified not to have a blob sidecar.
- In miner, we strip off the sidecar when committing the transaction into the block.
- In TxPool validation, txs must have a sidecar to be added into the blobpool.
- Note there is a special case here: when transactions are re-added because of a chain
reorg, we cannot use the transactions gathered from the old chain blocks as-is,
because they will be missing their blobs. This was previously handled by storing the
blobs into the 'blobpool limbo'. The code has now changed to store the full
transaction in the limbo instead, but it might be confusing for code readers why we're
not simply adding the types.Transaction we already have.
Code changes summary:
- txpool.Transaction removed and all uses replaced by types.Transaction again
- blobpool now stores types.Transaction instead of defining its own blobTx format for storage
- the blobpool limbo now stores types.Transaction instead of storing only the blobs
- checks to validate the presence/absence of the blob sidecar added in certain critical places
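A hedged sketch of the two sidecar rules above, assuming the sidecar accessors that this change attaches to types.Transaction:

```go
package blobsketch

import "github.com/ethereum/go-ethereum/core/types"

// validForBlock mirrors the block-validation rule: a transaction inside
// a block must not carry a blob sidecar.
func validForBlock(tx *types.Transaction) bool {
	return tx.BlobTxSidecar() == nil
}

// stripForBlock mirrors the miner-side step: commit a copy of the tx
// with the sidecar removed, while the pool keeps the full transaction.
func stripForBlock(tx *types.Transaction) *types.Transaction {
	return tx.WithoutBlobTxSidecar()
}
```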
The Go authors updated golang.org/x/exp to change the function signature of the slices sort method.
It's an entire shitshow now because x/exp is not tagged, so everyone's codebase just
picked up a new version that some other dep depends on, causing our code to fail to build.
This PR updates the dep in our code too and does all the refactorings to follow upstream...
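The refactoring boils down to the comparison callback changing from a boolean "less" to a three-way "cmp"; a sketch of before and after (the element type here is made up):

```go
package sortsketch

import "golang.org/x/exp/slices"

type txMeta struct{ nonce uint64 }

func sortByNonce(list []txMeta) {
	// Old signature (no longer compiles against the updated module):
	//   slices.SortFunc(list, func(a, b txMeta) bool { return a.nonce < b.nonce })
	// New signature: return negative/zero/positive like a comparator.
	slices.SortFunc(list, func(a, b txMeta) int {
		switch {
		case a.nonce < b.nonce:
			return -1
		case a.nonce > b.nonce:
			return 1
		default:
			return 0
		}
	})
}
```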
* all: implement path-based state scheme
* all: edits from review
* core/rawdb, trie/triedb/pathdb: review changes
* core, light, trie, eth, tests: reimplement pbss history
* core, trie/triedb/pathdb: track block number in state history
* trie/triedb/pathdb: add history documentation
* core, trie/triedb/pathdb: address comments from Peter's review
Important changes to list:
- Cache trie nodes by path in clean cache
- Remove root->id mappings when history is truncated
* trie/triedb/pathdb: fallback to disk on unexpected node in clean cache
* core/rawdb: fix tests
* trie/triedb/pathdb: rename metrics, change clean cache key
* trie/triedb: manage the clean cache inside of disk layer
* trie/triedb/pathdb: move journal function
* trie/triedb/path: fix tests
* trie/triedb/pathdb: fix journal
* trie/triedb/pathdb: fix history
* trie/triedb/pathdb: try to fix tests on windows
* core, trie: address comments
* trie/triedb/pathdb: fix test issues
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/types: add data gas fields in Receipt
* core/types: use BlobGas method of tx
* core: fix test
* core/types: fix receipt tests, add data gas used field test
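For reference, a hedged sketch of where the new receipt fields get filled in during receipt derivation; field names follow current geth (the "data gas" naming was later unified to "blob gas"), and the surrounding helper is made up:

```go
package receiptsketch

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// populateBlobFields fills the blob ("data gas") receipt fields for blob
// transactions; blobGasPrice is assumed to be derived from the header.
func populateBlobFields(r *types.Receipt, tx *types.Transaction, blobGasPrice *big.Int) {
	if tx.Type() == types.BlobTxType {
		r.BlobGasUsed = tx.BlobGas()
		r.BlobGasPrice = blobGasPrice
	}
}
```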
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
The clean trie cache is persisted periodically, so Geth can quickly warm up the
cache on the next restart.

However, this reduces the robustness of the system. Geth assumes that if a parent
trie node is present, then the entire sub-trie associated with that parent is
present as well.

Imagine a scenario where Geth rewinds itself to a past block and restarts, but then
finds the root node of the "future state" in the clean cache and regards that state
as present on disk, while in fact it is not.

Another example is the offline pruning tool: whenever offline pruning is performed,
the clean cache file has to be removed to avoid hitting the root node of "deleted
states" in the clean cache.

All in all, compared with the minor performance gain, system robustness is
something we care about more.