Rebased https://github.com/ethereum/go-ethereum/pull/29766. The
downstream branch appears to have been deleted and I don't have
permissions to push to that fork.
`TerminalTotalDifficultyPassed` is removed. `TerminalTotalDifficulty`
must now be non-nil, and networks are expected to already be merged:
we can only import PoW/Clique chains, not produce blocks on them.
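For illustration, the kind of invariant this implies, as a minimal sketch (the helper name and error message are made up; the field is `params.ChainConfig.TerminalTotalDifficulty`):

```go
// Illustrative only: with TerminalTotalDifficultyPassed gone, a nil terminal
// total difficulty is a configuration error rather than a "pre-merge" state.
func checkMergedConfig(config *params.ChainConfig) error {
	if config.TerminalTotalDifficulty == nil {
		return errors.New("terminal total difficulty must be set (network is expected to be merged)")
	}
	return nil
}
```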
---------
Co-authored-by: stevemilk <wangpeculiar@gmail.com>
Changelog: https://golangci-lint.run/product/changelog/#1610
Removes `exportloopref` (no longer needed now that Go 1.22 gives each
loop iteration its own variable) and replaces it with `copyloopvar`,
which is essentially the opposite check (see the sketch after the list below).
Also adds:
- `durationcheck`
- `gocheckcompilerdirectives`
- `reassign`
- `mirror`
- `tenv`
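As referenced above, a sketch of the loop pattern affected by the `exportloopref`/`copyloopvar` swap (`tasks` and `process` are placeholders):

```go
for _, task := range tasks {
	task := task // needed before Go 1.22 to avoid capturing the shared loop variable;
	// on Go 1.22+ this copy is redundant, which is exactly what copyloopvar reports.
	go func() { process(task) }()
}
```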
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This implements recent changes to EIP-7685, EIP-6110, and
execution-apis.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Shude Li <islishude@gmail.com>
This PR changes how sidechains are handled.
Before the merge, it was possible to import a chain with a lower total difficulty and not set it as canonical. After the merge, we expect every chain that we get via `InsertChain` to be canonical. Non-canonical blocks can still be inserted with `InsertBlockWithoutSetHead`.
If, during `InsertChain`, the existing chain is no longer canonical, we mark it as a sidechain and send the SideChainEvents as usual.
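As a rough usage sketch (error handling and exact signatures elided/assumed):

```go
// Blocks imported via InsertChain are expected to extend the canonical chain.
if _, err := chain.InsertChain(blocks); err != nil {
	log.Error("Canonical import failed", "err", err)
}
// Blocks that should not (yet) become part of the canonical chain are inserted
// without updating the head.
if err := chain.InsertBlockWithoutSetHead(block); err != nil {
	log.Error("Side import failed", "err", err)
}
```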
Here I am adding a discv5 nodes source into the p2p dial iterator. It's
an improved version of #29533.
Unlike discv4, the discv5 random nodes iterator will always provide full
ENRs. This means we can apply filtering to the results and will only try
dialing nodes which explicitly opt into the eth protocol with a matching
chain.
I have also removed the dial iterator from snap. We don't have an
official DNS list for snap anymore, and I doubt anyone else is running
one. While we could potentially filter for snap on discv5, there will be
very few nodes announcing it, and the extra iterator would just stall
the dialer.
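Roughly, the wiring looks like this (`disc` is the discv5 listener; `hasMatchingEthEntry` is a hypothetical filter checking the ENR's eth entry against our chain):

```go
// discv5 always yields full ENRs, so nodes can be filtered before dialing.
it := enode.Filter(disc.RandomNodes(), hasMatchingEthEntry)
mix := enode.NewFairMix(time.Second) // timeout value shown for illustration
mix.AddSource(it)                    // the dialer consumes the mixed iterator
```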
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
This pull request fixes #30229.
During snap sync, large storage will be split into several pieces and
synchronized concurrently. Unfortunately, the tradeoff is that the respective
merkle trie of each storage chunk will be incomplete due to the incomplete
boundaries. The trie nodes on these boundaries will be discarded, and any
dangling nodes on disk will also be removed if they fall on these paths,
ensuring the state healer won't be blocked.
However, the dangling account trie nodes on the path from the root to the
associated account are left untouched. This means the dangling account trie
nodes could potentially stop the state healing and break the assumption that the
entire subtrie should exist if the subtrie root exists. We should consider the
account trie node as the ancestor of the corresponding storage trie node.
In the scenario described in the ticket above, state corruption could occur
if there is a dangling account trie node while some storage trie nodes are
removed due to a synchronization redo.
The fix is straightforward: the trie nodes on the path from the root
to the account should all be explicitly removed if an incomplete storage trie
occurs. Therefore, a `delete` operation has been added into `gentrie` to
explicitly clear the account along with all nodes on this path. The special
thing is that it's a cross-trie clearing: in theory, there may be a dangling
node at any position on this account key, and we have to clear all of them.
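Conceptually, the clearing works like the sketch below; the helper names are illustrative (this is not the literal `gentrie` code), and the rawdb accessor is assumed to follow the path-scheme layout:

```go
// Remove every account trie node on the path from the root down to the account,
// so a dangling ancestor can never hide an incomplete storage subtrie from the healer.
func clearAccountPath(batch ethdb.KeyValueWriter, accountHash common.Hash) {
	path := keybytesToNibbles(accountHash.Bytes()) // hypothetical nibble conversion
	for i := 0; i <= len(path); i++ {
		rawdb.DeleteAccountTrieNode(batch, path[:i])
	}
}
```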
* avoid unnecessary copy
* delete the never-used function ProofList
* eth/protocols/snap, trie/trienode: polish the code
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request defines a gentrie for snap sync purposes.
The stackTrie is used to generate the merkle tree nodes upon receiving a state batch. Several additional options have been added to stackTrie to handle incomplete states (states missing either before or after the delivered range).
In this pull request, these options have been relocated from stackTrie to genTrie, which serves as a wrapper around stackTrie specifically for snap sync purposes.
Further, the logic for managing incomplete state has been enhanced in this change. Originally, two cases were handled:
- boundary node filtering
- internal node clearing (nodes covered by an extension node)
This change adds one more:
- clearing leftover nodes on the boundaries
This feature is necessary if there are leftover trie nodes in the database; otherwise, node inconsistency may break the state healing.
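A loose conceptual sketch of the wrapper (field and method names are illustrative, not the actual `gentrie` API):

```go
// genTrie wraps a stackTrie for snap sync: it feeds ordered key/value pairs into
// the underlying trie and remembers the delivered boundaries, so that incomplete
// boundary nodes can be filtered and leftover nodes on those paths cleared on commit.
type genTrie struct {
	tr          *trie.StackTrie
	first, last []byte // boundaries of the delivered (possibly partial) range
}

func (t *genTrie) update(key, value []byte) error {
	if t.first == nil {
		t.first = common.CopyBytes(key)
	}
	t.last = common.CopyBytes(key)
	return t.tr.Update(key, value)
}
```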
* eth: drop support for forward sync triggers and head block packets
* consensus, eth: enforce always merged network
* eth: fix tx looper startup and shutdown
* cmd, core: fix some tests
* core: remove notion of future blocks
* core, eth: drop unused methods and types
This change makes the legacy transaction pool use `uint256.Int` instead of `big.Int`. The changes are primarily limited to the internal functions of legacypool.
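For illustration, the conversion at the pool boundary (the helper is made up; `uint256.FromBig` reports whether the value overflows 256 bits):

```go
// Convert an incoming value once at the boundary; all pool-internal math then
// stays on uint256.Int.
func txValueU256(tx *types.Transaction) (*uint256.Int, error) {
	value, overflow := uint256.FromBig(tx.Value())
	if overflow {
		return nil, errors.New("transaction value overflows uint256")
	}
	return value, nil
}
```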
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change makes use of uint256 to represent balances in state. It touches primarily upon statedb, stateobject, and state processing, trying to avoid changes in transaction pools, core types, rpc, and tracers.
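Call sites then look roughly like this (a sketch assuming the post-change signatures):

```go
// Balances are read and written as *uint256.Int rather than *big.Int.
func creditOneWei(statedb *state.StateDB, addr common.Address) *uint256.Int {
	statedb.AddBalance(addr, uint256.NewInt(1))
	return statedb.GetBalance(addr)
}
```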
This PR moves our fuzzers from tests/fuzzers into whatever their respective 'native' package is.
The historical reason why they were placed in an external location is that, when they were based on go-fuzz, they could not be "hidden" via the _test.go suffix. So, in order to shove them away from the go-ethereum "production code", they were put aside.
But now we've rewritten them to be based on golang testing, and thus they can be brought back. I've left in tests/ the ones that are not production code (bls12381) or require non-standard imports (secp requires btcec; bn256 requires gnark/google/cloudflare deps).
This PR also adds a fuzzer for precompiled contracts, because why not.
This PR utilizes a newly rewritten replacement for go-118-fuzz-build, namely gofuzz-shim, which makes better use of the inputs from the fuzzing engine.
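For shape, a native testing-based fuzzer now sits next to the code it exercises and runs via `go test -fuzz`; the identity-precompile target below is just an assumed example:

```go
package vm

import (
	"bytes"
	"testing"
)

// Illustrative native fuzzer: the identity precompile must echo its input unchanged.
func FuzzIdentityPrecompile(f *testing.F) {
	f.Add([]byte("seed input"))
	f.Fuzz(func(t *testing.T, input []byte) {
		out, err := (&dataCopy{}).Run(input)
		if err != nil {
			t.Fatal(err)
		}
		if !bytes.Equal(out, input) {
			t.Fatalf("identity precompile mutated input: %x != %x", out, input)
		}
	})
}
```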
This change enhances the stacktrie constructor by introducing an options struct. It also simplifies the `Hash` and `Commit` operations, getting rid of the special handling around the root node.
During snap-sync, we request ranges of values: either a range of accounts or a range of storage values. For any large trie, e.g. the main account trie or a large storage trie, we cannot fetch everything at once.
Short version: we split it up and request in multiple stages. To do so, we use an origin field, to say "give me all storage key/values where key > 0x20000000000000000". When the server fulfils this, it provides the first key after the origin, let's say 0x2e030000000000000 -- never providing the exact origin. However, the client side needs to be able to verify that 0x2e03.. indeed is the first key after 0x2000.., and therefore the attached proof concerns the origin, not the first key.
So, short-short version: the left-hand side of the proof relates to the origin, and is free-standing from the first leaf.
On the other hand (pun intended), on the right-hand side there is no such 'gap' between "along what path does the proof walk" and the last provided leaf. The proof must prove the last element (unless there are no elements).
Therefore, we can simplify the semantics of trie.VerifyRangeProof by removing an argument. This doesn't make much difference in practice, but it means we can remove some tests. The reason I am raising this is that the upcoming stacktrie-based verifier does not support such fancy features as standalone right-hand borders.
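A rough sketch of the resulting call shape (parameter types and return values assumed; the point is only that no standalone last-key argument remains):

```go
// The proof's left edge corresponds to the requested origin; the right edge is
// implicitly the last delivered key.
func verifyRange(root common.Hash, origin []byte, keys, values [][]byte, proof ethdb.KeyValueReader) (bool, error) {
	return trie.VerifyRangeProof(root, origin, keys, values, proof)
}
```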
This change addresses an issue in snap sync, specifically when the entire sync process can be halted due to an encountered empty storage range.
Currently, on the snap sync client side, the response to an empty (partial) storage range is discarded as a non-delivery. However, it can be a valid response when the particular range requested does not contain any slots.
For instance, consider a large contract where the entire key space is divided into 16 chunks, and there are no available slots in the last chunk [0xf] -> [end]. When the node receives a request for this particular range, the response includes:
- the proof with origin [0xf]
- a nil storage slot set
If we simply discard this response, the finalization of the last range will be skipped, halting the entire sync process indefinitely. The test case `TestSyncWithUnevenStorage` reproduces the scenario described above.
In addition, this change defines the common variables `MaxAddress` and `MaxHash`.
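For reference, the new sentinels are the all-ones values (sketched here; see the `common` package for the actual definitions):

```go
var (
	// MaxAddress and MaxHash act as upper bounds when requesting full key ranges.
	MaxAddress = common.HexToAddress("0xffffffffffffffffffffffffffffffffffffffff")
	MaxHash    = common.HexToHash("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff")
)
```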
This change:
- Removes the owner notion from stacktrie; the owner is only ever needed for committing to the database, but the commit function, the `writeFn`, is provided by the caller, so the caller can just bake the owner into the `writeFn` instead of having it passed through the stacktrie (see the sketch below).
- Removes the `encoding.BinaryMarshaler`/`encoding.BinaryUnmarshaler` interfaces from stacktrie. We're not using them, and it is doubtful whether anyone downstream is either.
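A minimal sketch of the first point, baking the owner into the commit callback instead of threading it through the stacktrie (the writeFn-style constructor and the rawdb helper are assumptions about the surrounding API):

```go
// Every finished node is committed under the given owner, so the stacktrie
// itself no longer needs to know who owns the nodes it produces.
func newStorageStackTrie(db ethdb.KeyValueWriter, owner common.Hash) *trie.StackTrie {
	writeFn := func(path []byte, hash common.Hash, blob []byte) {
		rawdb.WriteTrieNode(db, owner, path, hash, blob, rawdb.PathScheme)
	}
	return trie.NewStackTrie(writeFn)
}
```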
This is a minor refactor in preparation for changes to the range verifier. This PR contains no intentional functional changes, but moves (and renames) `light.NodeSet`.
This changes the forkID calculation to ignore time-based forks that occurred before the
genesis block. It's supposed to be done this way because the spec says:
> If a chain is configured to start with a non-Frontier ruleset already in its genesis, that is NOT considered a fork.
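An illustrative sketch of that rule (names are made up for the example):

```go
// forksAfterGenesis drops time-based forks scheduled at or before the genesis
// timestamp: a ruleset that is already active in genesis is not a fork.
func forksAfterGenesis(forkTimes []uint64, genesisTime uint64) []uint64 {
	var active []uint64
	for _, t := range forkTimes {
		if t <= genesisTime {
			continue // already part of the genesis ruleset
		}
		active = append(active, t)
	}
	return active
}
```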