This PR refactors the genesis initialization a bit, such that we only
compute the block hash once instead of twice as before (during hashAlloc
and flushAlloc).
This will significantly reduce the amount of memory allocated during
genesis init.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This pull request fixes #30229.
During snap sync, a large storage trie is split into several chunks and
synchronized concurrently. Unfortunately, the tradeoff is that the Merkle trie
of each storage chunk is incomplete due to its incomplete boundaries. The trie
nodes on these boundaries are discarded, and any dangling nodes on disk that
fall on these paths are also removed, ensuring the state healer won't be
blocked.
However, the dangling account trie nodes on the path from the root to the
associated account are left untouched. These dangling account trie nodes could
stall state healing and break the assumption that the entire subtrie exists if
the subtrie root exists; the account trie node should be regarded as an
ancestor of the corresponding storage trie nodes.
In the scenarios described in the ticket above, state corruption can occur
if a dangling account trie node remains while some storage trie nodes are
removed due to a synchronization redo.
The fix is straightforward: whenever an incomplete storage trie occurs, all
trie nodes on the path from the root to the account are explicitly removed.
To this end, a `delete` operation has been added to `gentrie` that explicitly
clears the account along with all nodes on this path. What makes it special is
that this is a cross-trie clearing: in theory, a dangling node may sit at any
position along the account key, and all of them have to be cleared.
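A minimal sketch of the clearing, assuming the path-scheme helpers in `core/rawdb` (the function name `deleteAccountPath` is illustrative, and the real code operates on hex-nibble paths rather than raw bytes):

```go
package example

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// deleteAccountPath removes every account trie node on the path from the
// trie root down to the given account leaf, so that a later state heal
// re-downloads the whole sub-path instead of trusting a dangling node.
func deleteAccountPath(db ethdb.KeyValueWriter, accountHash common.Hash) {
	path := accountHash.Bytes() // sketch: the real code uses hex nibbles
	for i := 0; i <= len(path); i++ {
		// Clear the node stored at each prefix of the account path,
		// including the root (empty path) and the account leaf itself.
		rawdb.DeleteAccountTrieNode(db, path[:i])
	}
}
```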
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: lightclient <lightclient@protonmail.com>
Fixes a slight miscalculation in the downloader queue, which was not accurately taking block withdrawals into account when calculating the size of queued items.
With the https://github.com/ethereum/tests/releases/tag/v10.1 release, the format
of the TransactionTest fixtures changed, but this was not properly handled, causing
the tests to pass unexpectedly.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Consistently use `uint64` for indices in `Memory` and drop lots of type
conversions from `uint64` to `int64`.
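A simplified sketch of the resulting shape (not the actual `core/vm` code): offsets and sizes stay `uint64` end to end, so call sites no longer convert back and forth.

```go
package example

// Memory is a simplified, uint64-indexed memory model for illustration; the
// real core/vm Memory type has more methods and growth handling.
type Memory struct {
	store []byte
}

// GetCopy returns a copy of store[offset : offset+size] using uint64 indices
// directly, avoiding uint64 <-> int64 conversions at every call site.
func (m *Memory) GetCopy(offset, size uint64) []byte {
	if size == 0 {
		return nil
	}
	cpy := make([]byte, size)
	copy(cpy, m.store[offset:offset+size])
	return cpy
}
```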
---------
Co-authored-by: lmittmann <lmittmann@users.noreply.github.com>
Some chains use hexadecimal network IDs, for example Optimism ("0xa" instead
of "10"). When converting the string to a big.Int we therefore cannot
hard-code base 10; otherwise parsing will fail for hexadecimal network
IDs.
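For illustration, Go's `math/big` infers the base from the prefix when base 0 is passed, so both forms parse:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Base 10 rejects the "0x" prefix outright.
	_, ok := new(big.Int).SetString("0xa", 10)
	fmt.Println(ok) // false

	// Base 0 infers the base from the prefix: "0xa" and "10" both yield 10.
	id, _ := new(big.Int).SetString("0xa", 0)
	fmt.Println(id) // 10
}
```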
The struct-based tracing added in #29189 seems to have caused an issue
with the benchmark `BenchmarkTracerStepVsCallFrame`. On master we see
the following panic:
```console
BenchmarkTracerStepVsCallFrame
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x40 pc=0x1019782f0]
goroutine 37 [running]:
github.com/ethereum/go-ethereum/eth/tracers/js.(*jsTracer).OnOpcode(0x140004c4000, 0x0, 0x10?, 0x989680, 0x1, {0x101ea2298, 0x1400000e258}, {0x1400000e258?, 0x14000155928?, 0x10173020c?}, ...)
/Users/matt/dev/go-ethereum/eth/tracers/js/goja.go:328 +0x140
github.com/ethereum/go-ethereum/core/vm.(*EVMInterpreter).Run(0x14000307da0, 0x140003cc0d0, {0x0, 0x0, 0x0}, 0x0)
...
FAIL github.com/ethereum/go-ethereum/core/vm/runtime 0.420s
FAIL
```
The issue seems to be that `OnOpcode` expects that `OnTxStart` has
already been called to initialize the `env` value in the tracer. The JS
tracer uses it in `OnOpcode` for the `GetRefund()` method.
This patch resolves the issue by reusing the `Call` method already
defined in `runtime_test.go` which correctly calls `OnTxStart`.
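A contrived sketch of the ordering requirement (hypothetical types, not the actual tracing hooks): when the environment is only captured in `OnTxStart`, invoking `OnOpcode` without a preceding `OnTxStart` dereferences a nil pointer, which is what the benchmark hit.

```go
package example

// txEnv is a hypothetical stand-in for the tracer's execution environment.
type txEnv struct{ refund uint64 }

// demoTracer is a simplified tracer illustrating why hook ordering matters.
type demoTracer struct {
	env *txEnv // set in OnTxStart, consumed in OnOpcode
}

func (t *demoTracer) OnTxStart(env *txEnv) { t.env = env }

func (t *demoTracer) OnOpcode(pc uint64) {
	// Panics with a nil pointer dereference if OnTxStart was never called.
	_ = t.env.refund
}
```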
Fixes #30254
It seems like the removed CreateAccount call is very old and not needed anymore.
After removing it, setting a sender that does not exist in the state doesn't seem to cause
an issue.
Supplying the correct accessList parameter when calling a contract can
reduce gas consumption. However, the current version only allows adding
the accessList manually when constructing the transaction. This PR
provides a convenient way to do so, helping to save gas.
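For reference, attaching an access list to a transaction with `core/types` looks roughly like this (addresses, storage keys, and fee values are placeholders):

```go
package example

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

func buildCallTx(contract common.Address, data []byte) *types.Transaction {
	// Declaring the touched contract and storage slots up front turns cold
	// accesses into cheaper warm accesses during execution.
	accessList := types.AccessList{{
		Address:     contract,
		StorageKeys: []common.Hash{common.HexToHash("0x0")},
	}}
	return types.NewTx(&types.DynamicFeeTx{
		ChainID:    big.NewInt(1),
		Nonce:      0,
		GasTipCap:  big.NewInt(1_000_000_000),
		GasFeeCap:  big.NewInt(2_000_000_000),
		Gas:        100_000,
		To:         &contract,
		Data:       data,
		AccessList: accessList,
	})
}
```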
The package `github.com/golang/protobuf/proto` is deprecated in favor of
`google.golang.org/protobuf/proto`. We should update the code to use the
recommended package.
Signed-off-by: Icarus Wu <icaruswu66@qq.com>
This PR fixes an issue in the setMode method of beaconBackfiller where the
log message was not displaying the previous mode correctly. The log message
now shows both the old and new sync modes.
## Issue
If `nextTime` has passed, but all nodes are excluded, `get` returns
`nil` and `run` therefore does not invoke `schedule`. We then schedule
a timer for a time in the past, as neither `nextTime` value has been updated.
This creates a busy loop, as the timer fires immediately.
## Fix
With this PR, revalidation is also rescheduled when all nodes are
excluded.
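A contrived sketch of the busy loop and the fix (hypothetical names, not the actual discovery table code): when every node is excluded, `nextTime` must still be pushed forward, otherwise the caller re-arms its timer for a moment already in the past and it fires again immediately.

```go
package example

import "time"

// scheduler is a hypothetical stand-in for the revalidation list.
type scheduler struct {
	nextTime time.Time
	interval time.Duration
}

// get returns a node to revalidate, or nil when all nodes are excluded.
func (s *scheduler) get(now time.Time) *struct{} { return nil }

// run returns how long the caller should wait before the next attempt.
func (s *scheduler) run(now time.Time) time.Duration {
	if !now.Before(s.nextTime) {
		if n := s.get(now); n != nil {
			// revalidate n ...
		}
		// Fix: advance nextTime even when get returned nil. Before the fix
		// this only happened on a successful get, so the wait below became
		// non-positive and the timer fired again immediately (busy loop).
		s.nextTime = now.Add(s.interval)
	}
	return s.nextTime.Sub(now)
}
```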
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
The test specifies `ListenAddr: ":0"`, which means a random ephemeral
port will be chosen for the TCP listener by the OS. Additionally, since
no `DiscAddr` was specified, the same port that is chosen automatically
by the OS will also be used for the UDP listener in the discovery UDP
setup. This sometimes leads to test failures if the TCP listener picks a
free TCP port that is already taken for UDP. By specifying `DiscAddr:
":0"`, the UDP port will be chosen independently from the TCP port,
fixing the random failure.
See issue #29830.
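The change in the test boils down to the server config looking roughly like this (a sketch; `DiscAddr` is the discovery listen address field in the `p2p` package):

```go
package example

import (
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p"
)

func newTestServer() (*p2p.Server, error) {
	key, err := crypto.GenerateKey()
	if err != nil {
		return nil, err
	}
	return &p2p.Server{Config: p2p.Config{
		PrivateKey: key,
		ListenAddr: ":0", // OS picks a random TCP port
		DiscAddr:   ":0", // UDP discovery port picked independently of TCP
	}}, nil
}
```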
Verified using
```
cd p2p
go test -c -race
stress ./p2p.test -test.run=TestServerPortMapping
...
5m0s: 4556 runs so far, 0 failures
```
The issue described above can technically lead to sporadic failures on
systems where the listen port is set to 0 via the `--port` flag while
`--discovery.port` is not set. Since the default port is `30303` and a
random ephemeral port is rarely used in practice, not addressing the
root cause might be acceptable.
This pull request fixes the broken feature where the entire storage set is overridden.
Originally, the storage set override was achieved by marking the associated account
as deleted, preventing access to the storage slots on disk. However, since #29520, this
flag is also checked when accessing the account, rendering the account unreachable.
The fix applied in this pull request recreates the state object with all account
metadata inherited.
Currently, we have 3 flags to configure the blob pool. However, we don't
read these flags and set the blob pool configuration in the eth config
accordingly. This commit adds a function that checks whether these flags are
provided and sets the blob pool configuration based on them.
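A sketch of that wiring, assuming the existing blob pool flags in `cmd/utils` and the `BlobPool` section of the eth config (flag and field names here are assumptions, not verified against the final code):

```go
package example

import (
	"github.com/ethereum/go-ethereum/cmd/utils"
	"github.com/ethereum/go-ethereum/eth/ethconfig"
	"github.com/urfave/cli/v2"
)

// setBlobPool copies the blob pool CLI flags into the eth config, but only
// when the user actually provided them.
func setBlobPool(ctx *cli.Context, cfg *ethconfig.Config) {
	if ctx.IsSet(utils.BlobPoolDataDirFlag.Name) {
		cfg.BlobPool.Datadir = ctx.String(utils.BlobPoolDataDirFlag.Name)
	}
	if ctx.IsSet(utils.BlobPoolDataCapFlag.Name) {
		cfg.BlobPool.Datacap = ctx.Uint64(utils.BlobPoolDataCapFlag.Name)
	}
	if ctx.IsSet(utils.BlobPoolPriceBumpFlag.Name) {
		cfg.BlobPool.PriceBump = ctx.Uint64(utils.BlobPoolPriceBumpFlag.Name)
	}
}
```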
This pull request adds an additional error check after statedb.IntermediateRoot,
ensuring that errors encountered during this call are not silently ignored. This step
is essential, as the call might encounter database errors.
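Assuming the error is surfaced through the StateDB's accumulated database error, the check looks roughly like this (a sketch, not the exact call site):

```go
package example

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
)

// computeRoot wraps IntermediateRoot with an explicit error check.
// IntermediateRoot itself only returns a hash; database errors hit while
// updating the tries are accumulated inside the StateDB.
func computeRoot(statedb *state.StateDB, deleteEmpty bool) (common.Hash, error) {
	root := statedb.IntermediateRoot(deleteEmpty)
	if err := statedb.Error(); err != nil {
		return common.Hash{}, err
	}
	return root, nil
}
```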
Address recovery is already performed and cached in ValidateTransaction, and
the cached sender is expected to be reused there. However, we currently use
the wrong function, signer.Sender, instead of types.Sender; the former redoes
the full address recovery every time.
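The distinction in a nutshell: `types.Sender` consults the sender cache stored on the transaction and only falls back to full recovery, whereas calling the signer directly always redoes the ECDSA recovery (sketch):

```go
package example

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// sender prefers the cached result of an earlier recovery.
func sender(signer types.Signer, tx *types.Transaction) (common.Address, error) {
	// Cheap if ValidateTransaction already recovered and cached the sender;
	// signer.Sender(tx) would re-run the recovery unconditionally.
	return types.Sender(signer, tx)
}
```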
Here we add distinct error messages for network timeouts and JSON parsing errors.
Note this specifically applies to HTTP connections serving a single RPC request.
Co-authored-by: Felix Lange <fjl@twurst.com>
Originally, these metrics were added to track the largest storage wipes.
Since account self-destruction was deprecated with the Cancun fork,
these metrics have become meaningless.