This pull request introduces a state.Reader interface for state access.
The interface can be implemented in various ways: as a pure trie-only reader, or as a combination of trie and state snapshot. It also gives us more flexibility in the future, e.g. an archive reader (for accessing archive state).
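A minimal sketch of what such a reader might look like (the exact method set in the PR may differ; `common` and `types` refer to go-ethereum's usual packages):
```golang
// Hypothetical sketch only; the method set of the actual state.Reader may differ.
type Reader interface {
    // Account retrieves the account associated with a particular address.
    // A nil account with a nil error means the account is not present.
    Account(addr common.Address) (*types.StateAccount, error)

    // Storage retrieves the value of a particular storage slot. A zero value
    // with a nil error means the slot is not present.
    Storage(addr common.Address, slot common.Hash) (common.Hash, error)
}
```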
Additionally, this pull request removes the following metrics
- `chain/snapshot/account/reads`
- `chain/snapshot/storage/reads`
This PR implements changes related to
[EIP-6800](https://eips.ethereum.org/EIPS/eip-6800) and
[EIP-4762](https://eips.ethereum.org/EIPS/eip-4762) spec updates.
A TL;DR of the changes is that `Version`, `Balance`, `Nonce` and
`CodeSize` are encoded in a single leaf named `BasicData`. For more
details, see the [_Header Values_ table in
EIP-6800](https://eips.ethereum.org/EIPS/eip-6800#header-values).
The motivation for this was simplifying access event patterns, reducing
code complexity, and, as a side effect, saving gas since fewer leaf
nodes must be accessed.
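A rough sketch of how such a `BasicData` leaf could be packed (field offsets per the EIP's Header Values table; treat the layout here as illustrative rather than normative):
```golang
import (
    "encoding/binary"

    "github.com/holiman/uint256"
)

// packBasicData is an illustrative encoder for the BasicData leaf:
//
//  byte  0      : version
//  bytes 1..4   : reserved
//  bytes 5..7   : code size (big-endian)
//  bytes 8..15  : nonce (big-endian)
//  bytes 16..31 : balance (big-endian)
func packBasicData(version byte, codeSize uint32, nonce uint64, balance *uint256.Int) [32]byte {
    var leaf [32]byte
    leaf[0] = version
    leaf[5] = byte(codeSize >> 16) // code size fits in 3 bytes
    leaf[6] = byte(codeSize >> 8)
    leaf[7] = byte(codeSize)
    binary.BigEndian.PutUint64(leaf[8:16], nonce)
    bal := balance.Bytes() // big-endian, no leading zeros; assumed <= 16 bytes
    copy(leaf[32-len(bal):], bal)
    return leaf
}
```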
---------
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR adds bulk verkle witness and proof production at the end of block
production. It reads all data from the tree in one sweep and produces
a verkle proof.
Co-authored-by: Felix Lange <fjl@twurst.com>
This is a follow-up to #29520, and a preparatory PR to a more thorough
change in the journalling system.
### API methods instead of `append` operations
This PR hides the journal implementation details away, so that the
statedb invokes methods like `JournalCreate` instead of explicitly
appending journal events to a list. This means it is up to the
journal whether to implement this as a sequence of events or to
aggregate/merge events.
### Snapshot-management inside the journal
This PR also moves the management of valid snapshots inside the journal,
exposed via the methods `Snapshot() int` and
`RevertToSnapshot(revid int, s *StateDB)`.
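In outline, the journal surface seen by the statedb is something like the following (only the quoted method names are taken from the PR; the rest is an illustrative placeholder):
```golang
// Illustrative outline only; the real journal exposes more methods.
type journalAPI interface {
    // Snapshot/revert lifecycle, now owned by the journal itself.
    Snapshot() int
    RevertToSnapshot(revid int, s *StateDB)

    // Event-style methods invoked by the statedb instead of appending raw
    // journal entries; the journal decides how to record (or merge) them.
    JournalCreate(addr common.Address)
    JournalSetCode(addr common.Address)
    // ... further methods for balance changes, self-destructs, etc.
}
```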
### SetCode
JournalSetCode journals the setting of code: it is implicit that the
previous values were "no code" and emptyCodeHash. Therefore, we can
simplify the setCode journal.
### Selfdestruct
The self-destruct journalling is a bit strange: we allow the
selfdestruct operation to be journalled several times, which also
forces us to store whether the account was already destructed.
What we can do instead is journal only the first destruction, and
after that journal only the balance changes, not the selfdestruct
itself.
This simplifies the journalling, so that internals about state
management do not leak into the journal API.
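Sketched with hypothetical method names, the new behaviour is roughly:
```golang
// Hypothetical sketch: only the first destruction within a transaction is
// journalled; subsequent self-destructs of the same account only journal the
// balance change, since the destruction itself is already recorded.
func (s *StateDB) selfDestruct(obj *stateObject) {
    if !obj.selfDestructed {
        s.journal.JournalDestruct(obj.address) // hypothetical method name
        obj.selfDestructed = true
    }
    s.journal.JournalBalanceChange(obj.address, obj.Balance()) // hypothetical
    obj.setBalance(new(uint256.Int))
}
```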
### Preimages
Preimages were, for some reason, integrated into the journal management,
despite not being a consensus-critical data structure. This PR undoes
that.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request adds a few more performance metrics, specifically:
- The average time cost of an account read
- The average time cost of a storage read
- The rate of account reads
- The rate of storage reads
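For illustration, metrics like these are typically fed from a timer wrapped around the read path, roughly as below (the metric name, the `GetOrRegisterTimer` choice and the `loadAccount` helper are assumptions, not necessarily what the PR does):
```golang
// Illustrative only: one metrics.Timer captures both the event rate and the
// duration distribution (i.e. the average cost per read).
var accountReadTimer = metrics.GetOrRegisterTimer("chain/account/reads", nil)

func readAccountTimed(addr common.Address) (*types.StateAccount, error) {
    start := time.Now()
    acct, err := loadAccount(addr) // stand-in for the actual account read
    accountReadTimer.UpdateSince(start)
    return acct, err
}
```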
This pull request fixes the broken state-override feature in which the entire storage
set of an account is overridden. Originally, the storage-set override was achieved by
marking the associated account as deleted, preventing access to the storage slots on
disk. However, since #29520, this flag is also checked when accessing the account,
rendering the account unreachable.
The fix applied in this pull request re-creates a new state object with all account
metadata inherited.
This pull request adds an additional error check after statedb.IntermediateRoot,
ensuring that no error occurred during the call. This step is essential, as the call
might encounter database errors.
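Roughly, the added check has this shape (assuming the statedb's usual deferred-error accessor):
```golang
// Compute the intermediate root, then make sure no database error was
// swallowed along the way.
root := statedb.IntermediateRoot(deleteEmptyObjects)
if err := statedb.Error(); err != nil {
    return fmt.Errorf("state root computation failed: %w", err)
}
```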
Originally, these metrics were added to track the largest storage wiping.
Since account self-destruction was deprecated with the Cancun fork,
these metrics have become meaningless.
* all: add stateless verifications
* all: simplify witness and integrate it into live geth
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Always prefetch the account trie while starting the prefetcher.
Co-authored-by: steven <steven@stevendeMacBook-Pro.local>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
* core/state: trie prefetcher change: calling trie() doesn't stop the associated subfetcher
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core/state: improve prefetcher
* core/state: restore async prefetcher task scheduling
* core/state: finish prefetching async and process storage updates async
* core/state: don't use the prefetcher for missing snapshot items
* core/state: remove update concurrency for Verkle tries
* core/state: add some termination checks to prefetcher async shutdowns
* core/state: differentiate db tries and prefetched tries
* core/state: teh teh teh
---------
Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com>
Co-authored-by: Martin HS <martin@swende.se>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
* core/state, internal/workerpool: parallelize parts of state commit
* core, internal: move workerpool into syncx
* core/state: use errgroups, commit accounts concurrently
* core: resurrect detailed commit timers to almost-accuracy
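As a sketch of the errgroup-based pattern (simplified; `obj.commit` is a stand-in for the real per-object commit logic):
```golang
import "golang.org/x/sync/errgroup"

// Commit account storage tries concurrently and surface the first error.
func commitStorage(objs []*stateObject) error {
    var eg errgroup.Group
    for _, obj := range objs {
        obj := obj // capture loop variable for the goroutine
        eg.Go(func() error {
            return obj.commit() // stand-in for the real commit routine
        })
    }
    return eg.Wait()
}
```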
This pull request defines a genTrie for snap sync purposes.
The stackTrie is used to generate the Merkle trie nodes upon receiving a state batch. Several additional options have been added to stackTrie to handle incomplete states (states may be missing either before or after the given range).
In this pull request, these options have been relocated from stackTrie to genTrie, which serves as a wrapper for stackTrie specifically for snap sync purposes.
Further, the logic for managing incomplete state has been enhanced in this change. Originally, two cases were handled:
- boundary node filtering
- internal (covered by extension node) node clearing
This change adds one more:
- Clearing leftover nodes on the boundaries.
This feature is necessary if there are leftover trie nodes in the database; otherwise, node inconsistency may break state healing.
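A very rough sketch of the boundary handling (all names hypothetical; the real genTrie carries more cases):
```golang
// Hypothetical node-write callback for a genTrie: nodes on the left/right
// boundary paths of an incomplete range are filtered out (they may be built
// from partial data), and any leftover nodes already in the database on those
// paths are deleted so that state healing never sees inconsistent nodes.
func (t *genTrie) onTrieNode(path []byte, hash common.Hash, blob []byte) {
    if t.onBoundary(path) {
        t.deleteLeftover(path) // clear stale boundary nodes, if present
        return                 // boundary node filtering: don't commit it
    }
    t.writeNode(path, hash, blob) // interior nodes are safe to commit
}
```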
This addresses an edge-case (detailed in the code comment) where the computation of the intermediate trie root would force the unnecessary resolution of a hash node. The change makes it so that when we process changes from a block, we first process trie-updates and afterwards process trie-deletions.
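In outline, the new ordering is simply (sketch with hypothetical pending-change maps; the trie method names are illustrative):
```golang
// Apply all insertions/updates first, then the deletions, so that computing
// the intermediate root never has to resolve a hash node just to delete
// beneath it.
for key, val := range pendingUpdates { // map[common.Hash][]byte
    if err := tr.Update(key[:], val); err != nil {
        return err
    }
}
for key := range pendingDeletes { // map[common.Hash]struct{}
    if err := tr.Delete(key[:]); err != nil {
        return err
    }
}
```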
Here we add a Go API for running tracing plugins within the main block import process.
As an advanced user of geth, you can now create a Go file in eth/tracers/live/, and within
that file register your custom tracer implementation. Then recompile geth and select your tracer
on the command line. Hooks defined in the tracer will run whenever a block is processed.
The hook system is defined in package core/tracing. It uses a struct with callbacks, instead of
requiring an interface, for several reasons:
- We plan to keep this API stable long-term. The core/tracing hook API does not depend
on deep geth internals.
- There are a lot of hooks, and tracers will only need some of them. Using a struct allows you
to implement only the hooks you want to actually use.
All existing tracers in eth/tracers/native have been rewritten to use the new hook system.
This change breaks compatibility with the vm.EVMLogger interface that we used to have.
If you are a user of vm.EVMLogger, please migrate to core/tracing, and sorry for breaking
your stuff. But we just couldn't have both the old and new tracing APIs coexist in the EVM.
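To illustrate the struct-of-callbacks design (hypothetical hook names and signatures; the real definitions live in core/tracing): a tracer fills in only the hooks it needs, and a hook fires only when it is non-nil.
```golang
// Hypothetical illustration of the pattern, not the actual core/tracing API.
type Hooks struct {
    OnBlockStart func(number uint64)
    OnTxStart    func(txHash common.Hash)
    OnOpcode     func(pc uint64, op byte, gas uint64)
    // ... many more hooks in the real API
}

// A tracer implements only what it cares about.
func newTxCounter(count *int) *Hooks {
    return &Hooks{
        OnTxStart: func(common.Hash) { *count++ },
    }
}

// The caller invokes a hook only if the tracer installed it.
func (h *Hooks) fireTxStart(tx common.Hash) {
    if h != nil && h.OnTxStart != nil {
        h.OnTxStart(tx)
    }
}
```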
---------
Co-authored-by: Matthieu Vachon <matthieu.o.vachon@gmail.com>
Co-authored-by: Delweng <delweng@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
As the SELF-DESTRUCT opcode is disabled in the Cancun fork (unless the
account is created within the same transaction, in which case there is
no pre-existing storage to delete), an account will only be deleted in
the following cases:
- The account is created within the same transaction. In this case
the original storage was empty.
- The account is empty (zero nonce, zero balance, zero code) and
is touched within the transaction. Fortunately, accounts of this kind
do not exist on ethereum-mainnet.
All in all, after Cancun we are pretty sure there are no large contract
deletions, and we don't need this mechanism for OOM protection.
This change makes use of uint256 to represent balance in state. It touches primarily upon statedb, stateobject and state processing, trying to avoid changes in transaction pools, core types, rpc and tracers.
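For reference, balance arithmetic with uint256 looks roughly like this (illustrative snippet, not code from the change):
```golang
import "github.com/holiman/uint256"

// uint256.Int is a fixed-size [4]uint64, so balance math avoids the heap
// allocations and arbitrary-precision overhead of big.Int.
func addBalance(balance, amount *uint256.Int) *uint256.Int {
    return new(uint256.Int).Add(balance, amount)
}
```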
This change allows the creation of a genesis block for verkle testnets. This makes for a chunk of code that is easier to review and still touches many discussion points.
This change enhances the stacktrie constructor by introducing an option struct. It also simplifies the `Hash` and `Commit` operations, getting rid of the special handling around the root node.
This change
- Removes the owner-notion from a stacktrie; the owner is only ever needed for committing to the database, but the commit function, the `writeFn`, is provided by the caller, so the caller can just capture the owner in the `writeFn` instead of having it passed through the stacktrie.
- Removes the `encoding.BinaryMarshaler`/`encoding.BinaryUnmarshaler` interface from stacktrie. We're not using it, and it is doubtful whether anyone downstream is either.
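Concretely, the owner removal boils down to the caller capturing the owner in the write callback itself, along these lines (the constructor and write helper names are stand-ins):
```golang
// The stacktrie no longer knows about an owner; the caller bakes it into the
// write function instead.
owner := accountHash // e.g. the hash of the account owning this storage trie
writeFn := func(path []byte, hash common.Hash, blob []byte) {
    writeTrieNode(db, owner, path, hash, blob) // stand-in for the db helper
}
st := newStackTrieWithWriter(writeFn) // stand-in for the actual constructor
```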
This change includes a lot of things, listed below.
### Split up interfaces, write vs read
The interfaces have been split up into one write-interface and one read-interface, with `Snapshot` being the gateway from write to read. This simplifies the semantics _a lot_.
Example of splitting up an interface into one readonly 'snapshot' part, and one updatable writeonly part:
```golang
type MeterSnapshot interface {
    Count() int64
    Rate1() float64
    Rate5() float64
    Rate15() float64
    RateMean() float64
}

// Meters count events to produce exponentially-weighted moving average rates
// at one-, five-, and fifteen-minutes and a mean rate.
type Meter interface {
    Mark(int64)
    Snapshot() MeterSnapshot
    Stop()
}
```
### A note about concurrency
This PR makes the concurrency model clearer. We have actual meters and snapshots of meters. The `meter` is the thing that can be accessed from the registry, and updates can be made to it.
- For all `meters`, (`Gauge`, `Timer` etc), it is assumed that they are accessed by different threads, making updates. Therefore, all `meters` update-methods (`Inc`, `Add`, `Update`, `Clear` etc) need to be concurrency-safe.
- All `meters` have a `Snapshot()` method. This method is _usually_ called from one thread, a backend-exporter. But it's fully possible to have several exporters simultaneously: therefore this method should also be concurrency-safe.
TLDR: `meter`s are accessible via registry, all their methods must be concurrency-safe.
For all `Snapshot`s, it is assumed that an individual exporter-thread has obtained a `meter` from the registry, and called the `Snapshot` method to obtain a readonly snapshot. This snapshot is _not_ guaranteed to be concurrency-safe. There's no need for a snapshot to be concurrency-safe, since exporters should not share snapshots.
Note, though, that by happenstance a lot of the snapshots _are_ concurrency-safe, being immutable minimal representations of a value. Only the more complex ones, those that lazily calculate things like `Variance()` and `Mean()`, are _not_ threadsafe.
Example of how a background exporter typically works, obtaining the snapshot and sequentially accessing the non-threadsafe methods in it:
```golang
ms := metric.Snapshot()
...
fields := map[string]interface{}{
    "count":    ms.Count(),
    "max":      ms.Max(),
    "mean":     ms.Mean(),
    "min":      ms.Min(),
    "stddev":   ms.StdDev(),
    "variance": ms.Variance(),
}
```
TLDR: `snapshots` are not guaranteed to be concurrency-safe (but often are).
### Sample changes
I also changed the `Sample` type: previously, it iterated the samples fully every time `Mean()`, `Sum()`, `Min()` or `Max()` was invoked. Since we now have readonly base data, we can just iterate it once, in the constructor, and set all four values at once.
The same thing has been done for runtimehistogram.
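The single-pass computation is straightforward (sketch with a hypothetical snapshot struct):
```golang
// Compute min, max, sum and mean once, over the now read-only sample values,
// instead of re-iterating on every accessor call.
func newSampleSnapshot(values []int64) *sampleSnapshot {
    var (
        min int64 = math.MaxInt64
        max int64 = math.MinInt64
        sum int64
    )
    for _, v := range values {
        sum += v
        if v < min {
            min = v
        }
        if v > max {
            max = v
        }
    }
    mean := 0.0
    if len(values) > 0 {
        mean = float64(sum) / float64(len(values))
    } else {
        min, max = 0, 0
    }
    return &sampleSnapshot{values: values, min: min, max: max, sum: sum, mean: mean}
}
```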
### ResettingTimer API
Back when ResettingTimer was implemented, as part of https://github.com/ethereum/go-ethereum/pull/15910, Anton implemented a `Percentiles` method on the new type. However, the method did not conform to the other existing types which also had a `Percentiles`.
1. The existing ones, on input, took `0.5` to mean `50%`. Anton used `50` to mean `50%`.
2. The existing ones returned `float64` outputs, thus interpolating between values. A value-set of `0, 10`, at `50%` would return `5`, whereas Anton's would return either `0` or `10`.
This PR removes the 'new' version, and uses only the 'legacy' percentiles, also for the ResettingTimer type.
The resetting timer snapshot was also defined so that it would expose the internal values. This has been removed, and getters for `Max, Min, Mean` have been added instead.
### Unexport types
A lot of types were exported, but do not need to be. This PR unexports quite a lot of them.
This change implements faster post-selfdestruct iteration of storage slots for deletion, by using snapshot storage + stacktrie to recover the trie nodes to be deleted. This mechanism is only implemented for the path-based scheme.
For the hash-based scheme, this change skips the entire post-selfdestruct storage iteration, since hash-based does not actually perform deletion anyway.
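Very roughly, the fast path works like this (all helper names hypothetical):
```golang
// Hypothetical sketch: iterate the destructed account's storage from the flat
// snapshot, feed the slots into a stacktrie, and let the stacktrie's write
// callback report every node path that exists, so those nodes can be marked
// for deletion. This only makes sense for the path-based scheme, where trie
// nodes are keyed by path.
func storageDeleteSet(it SnapStorageIterator, markDeleted func(path []byte)) error {
    st := newStackTrieWithWriter(func(path []byte, hash common.Hash, blob []byte) {
        markDeleted(path) // every committed node is a node to delete
    })
    for it.Next() {
        if err := st.Update(it.Key(), it.Value()); err != nil {
            return err
        }
    }
    st.Commit() // flush the remaining nodes through the callback
    return nil
}
```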
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Context: The UpdateContractCode method was introduced for the state storage commitment
schemes that include the whole code for their commitment computation. It must therefore be called
before the root hash is computed at the end of IntermediateRoot.
This should have no impact on the MPT since, in this context, the method is a no-op.
* all: implement path-based state scheme
* all: edits from review
* core/rawdb, trie/triedb/pathdb: review changes
* core, light, trie, eth, tests: reimplement pbss history
* core, trie/triedb/pathdb: track block number in state history
* trie/triedb/pathdb: add history documentation
* core, trie/triedb/pathdb: address comments from Peter's review
Important changes to list:
- Cache trie nodes by path in clean cache
- Remove root->id mappings when history is truncated
* trie/triedb/pathdb: fallback to disk if unexpected node in clean cache
* core/rawdb: fix tests
* trie/triedb/pathdb: rename metrics, change clean cache key
* trie/triedb: manage the clean cache inside of disk layer
* trie/triedb/pathdb: move journal function
* trie/triedb/path: fix tests
* trie/triedb/pathdb: fix journal
* trie/triedb/pathdb: fix history
* trie/triedb/pathdb: try to fix tests on windows
* core, trie: address comments
* trie/triedb/pathdb: fix test issues
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>