Fixes an issue missed in #30576 where we sent empty requests when resolving a full
payload, causing a hash mismatch later on when we get the payload back via
`NewPayload`.
Breaking changes:
- The ChainConfig was exposed to tracers via the VMContext passed in
`OnTxStart`. This is unnecessary, especially from the perspective of live
tracers, since the chain config stays the same throughout the lifetime of
the program. It was only there so that native API-invoked tracers could
access it, so it has been moved to the constructor of API tracers instead.
Non-breaking:
- Change the default config of the tracers to be `{}` instead of nil.
This way an extra nil check can be avoided.
Refactoring:
- Rename `supply` struct to `supplyTracer`.
- Un-export some hook definitions.
I had [a previous PR](https://github.com/ethereum/go-ethereum/pull/24616)
a long time ago which reduced the peak memory used during reorgs by not
accumulating all transactions and logs.
This PR reduces the peak memory further by not storing the blocks in
memory.
However, this means we need to read the blocks back from storage
multiple times during the reorg.
I collected the following numbers on peak memory usage:
- Master: `BenchmarkReorg-8 10000 899591 ns/op 820154 B/op 1440 allocs/op` (1549443072 bytes of heap used)
- WithoutOldChain: `BenchmarkReorg-8 10000 1147281 ns/op 943163 B/op 1564 allocs/op` (1163870208 bytes of heap used)
- WithoutNewChain: `BenchmarkReorg-8 10000 1018922 ns/op 943580 B/op 1564 allocs/op` (1171890176 bytes of heap used)
Each block contains a transaction of ~50k bytes and the benchmark performs a
10k-block reorg, so the chain should be ~500MB in size.
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
## Description
Omit null `witness` field from payload envelope.
## Motivation
Currently, JSON encoded payload types always include `"witness": null`,
which, I believe, is not intentional.
Calculating a reasonable tx blob fee cap (`max_blob_fee_per_gas *
total_blob_gas`) only depends on the excess blob gas of the parent
header. The parent header is assumed to be correct, so the method should
not be able to fail, and it does not need to return an error.
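For reference, here is a minimal, self-contained sketch of the arithmetic involved, using the Cancun-era constants from EIP-4844 (the update fraction is fork-dependent). This is not geth's internal helper; it only shows why nothing beyond the parent's excess blob gas is needed:

```go
package main

import (
	"fmt"
	"math/big"
)

// Cancun-era EIP-4844 constants; the update fraction changes in later forks.
const (
	minBlobBaseFee            = 1
	blobBaseFeeUpdateFraction = 3338477
	blobGasPerBlob            = 131072
)

// fakeExponential approximates factor * e**(numerator/denominator) using the
// integer algorithm from EIP-4844.
func fakeExponential(factor, numerator, denominator *big.Int) *big.Int {
	output := new(big.Int)
	accum := new(big.Int).Mul(factor, denominator)
	for i := int64(1); accum.Sign() > 0; i++ {
		output.Add(output, accum)
		accum.Mul(accum, numerator)
		accum.Div(accum, new(big.Int).Mul(denominator, big.NewInt(i)))
	}
	return output.Div(output, denominator)
}

func main() {
	// Only the parent header's excess blob gas is needed, so with a trusted
	// parent there is no error path in the fee-cap calculation.
	parentExcessBlobGas := uint64(0)
	blobBaseFee := fakeExponential(
		big.NewInt(minBlobBaseFee),
		new(big.Int).SetUint64(parentExcessBlobGas),
		big.NewInt(blobBaseFeeUpdateFraction),
	)
	totalBlobGas := uint64(6 * blobGasPerBlob) // e.g. a transaction carrying six blobs
	feeCap := new(big.Int).Mul(blobBaseFee, new(big.Int).SetUint64(totalBlobGas))
	fmt.Println("blob fee cap:", feeCap) // 786432 wei with zero excess blob gas
}
```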
This change brings geth into compliance with the current engine API
specification for the Prague fork. I have moved the assignment of
`ExecutionPayloadEnvelope.Requests` into `BlockToExecutableData` to ensure
there is a single place where the type is removed.
While doing so, I noticed that the handling of requests in the miner was not
quite correct for the empty payload. It would return `nil` requests for
the empty payload even for blocks after the Prague fork. To fix this, I
have added the `emptyRequests` field in `miner.Payload`.
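One reason the nil-versus-empty distinction matters is how Go's `encoding/json` treats slices: a nil slice encodes as `null` while an allocated empty slice encodes as `[]`. A standard-library-only illustration (the real envelope uses hexutil types, so this is not the actual encoding path):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope stands in for the payload envelope; only the requests field matters here.
type envelope struct {
	Requests [][]byte `json:"requests"`
}

func main() {
	var nilRequests [][]byte // a nil slice, as the miner used to return
	a, _ := json.Marshal(envelope{Requests: nilRequests})
	fmt.Println(string(a)) // {"requests":null}

	b, _ := json.Marshal(envelope{Requests: make([][]byte, 0)})
	fmt.Println(string(b)) // {"requests":[]}
}
```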
Changelog: https://golangci-lint.run/product/changelog/#1610
Removes `exportloopref` (no longer needed) and replaces it with
`copyloopvar`, which is basically the opposite.
Also adds:
- `durationcheck`
- `gocheckcompilerdirectives`
- `reassign`
- `mirror`
- `tenv`
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This change makes the trie commit operation concurrent if the number of changes exceeds 100.
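As a rough illustration of the idea (not the actual committer code; the node layout and the `parallelThreshold` name are made up), a threshold-gated fan-out looks like this:

```go
package main

import (
	"fmt"
	"sync"
)

// node is a stand-in for a trie node; dirty counts the changed nodes below it.
type node struct {
	children []*node
	dirty    int
}

const parallelThreshold = 100 // below this, a serial commit is cheaper than spawning goroutines

func commit(n *node) {
	if n == nil {
		return
	}
	// Small change sets: commit children serially.
	if n.dirty <= parallelThreshold {
		for _, c := range n.children {
			commit(c)
		}
		return
	}
	// Large change sets: commit each child subtrie in its own goroutine.
	var wg sync.WaitGroup
	for _, c := range n.children {
		wg.Add(1)
		go func(c *node) {
			defer wg.Done()
			commit(c)
		}(c)
	}
	wg.Wait()
}

func main() {
	root := &node{dirty: 150, children: []*node{{dirty: 80}, {dirty: 70}}}
	commit(root)
	fmt.Println("commit finished")
}
```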
Co-authored-by: stevemilk <wangpeculiar@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This fixes a few issues missed in #29052:
* `requests` must be hex encoded, so a marshaling helper was added (see the sketch after this list).
* The statedb was committed too early and so the result of the system
calls was lost.
* For devnet-4 we need to pull off the type byte prefix from the request
data.
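For illustration, a standalone sketch of such a hex-marshaling helper might look like this (this is not the actual helper added in the PR):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hexRequests is an illustrative wrapper that marshals each request as a
// 0x-prefixed hex string instead of Go's default base64 encoding for []byte.
type hexRequests [][]byte

func (r hexRequests) MarshalJSON() ([]byte, error) {
	out := make([]string, len(r))
	for i, req := range r {
		out[i] = fmt.Sprintf("0x%x", req)
	}
	return json.Marshal(out)
}

func main() {
	reqs := hexRequests{{0x01, 0xaa, 0xbb}} // arbitrary request bytes
	b, _ := json.Marshal(reqs)
	fmt.Println(string(b)) // ["0x01aabb"]
}
```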
This is a redo of #29052 based on newer specs. Here we implement EIPs
scheduled for the Prague fork:
- EIP-7002: Execution layer triggerable withdrawals
- EIP-7251: Increase the MAX_EFFECTIVE_BALANCE
Co-authored-by: lightclient <lightclient@protonmail.com>
A couple of tests set the log level to `TRACE` on stdout,
and all subsequent tests in the same package are also affected
by that, resulting in outputs of tens of megabytes.
This PR removes such calls from two packages where this was prevalent.
This makes getting a summary of failing tests simpler, and possibly
reduces some strain on the CI pipeline.
This implements recent changes to EIP-7685, EIP-6110, and
execution-apis.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Shude Li <islishude@gmail.com>
The bulk of this PR was authored by @lightclient in the original
EOF work. More recently, the code was picked up and reworked for the new EOF
specification by @MariusVanDerWijden in https://github.com/ethereum/go-ethereum/pull/29518, and @shemnon has also contributed fixes.
This PR is an attempt to start eating the elephant one small bite at a
time, by selecting only the EOF validation as a standalone piece which
can be merged without interfering too much with the core stuff.
In this PR:
- [x] Validation of EOF containers, lifted from #29518, along with
test vectors from consensus tests and fuzzing, to ensure that the move
did not lose any functionality.
- [x] Definition of EOF opcodes, which is a prerequisite for validation.
- [x] Addition of `undefined` to a jumptable entry item (see the sketch
after this list). I'm not super happy with this, but for the moment it
seems the least invasive way to do it. A better way might be to go back
and allow nil items or nil execute functions to denote "undefined".
- [x] Benchmarks of EOF validation speed.
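For context, the idea of an explicit `undefined` marker could look roughly like this hypothetical sketch (the real jump table entry in geth has more fields; all names here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// operation is an illustrative jump table entry. An explicit undefined flag
// marks opcodes that are invalid in a given fork or container kind, instead
// of relying on a nil entry or a nil execute function.
type operation struct {
	execute   func() error
	undefined bool
}

func undefinedOp() operation {
	return operation{
		execute:   func() error { return errors.New("undefined opcode") },
		undefined: true,
	}
}

func main() {
	var table [256]operation
	for i := range table {
		table[i] = undefinedOp() // start fully undefined, then fill in the valid opcodes
	}
	table[0x01] = operation{execute: func() error { return nil }} // e.g. a defined opcode
	fmt.Println("0x01 undefined:", table[0x01].undefined)         // false
	fmt.Println("0xef undefined:", table[0xef].undefined)         // true
}
```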
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Danno Ferrin <danno.ferrin@shemnon.com>
This pull request removes the `fsync` of index files in the `freezer.ModifyAncients` function for
a performance gain.
Originally, `fsync` was added after each freezer write operation to ensure
the written data was actually transferred to disk. Unfortunately, it turns
out `fsync` can be relatively slow, especially on
macOS (see https://github.com/ethereum/go-ethereum/issues/28754 for more
information).
In this pull request, the `fsync` for the index file is removed, as it turns out
the index file can be recovered even after an unclean shutdown. The `fsync` for the data file is still kept, as
we have no meaningful way to validate data correctness after an unclean shutdown.
---
**But why do we need the `fsync` in the first place?**
It's necessary for the freezer to survive/recover after a machine crash
(e.g. a power failure).
In Linux, when a file write is performed, the file metadata update and the
data update are not necessarily performed at the same time. Typically, the
metadata will be flushed/journalled ahead of the file data. Therefore, we
make the pessimistic assumption that the file is first extended with invalid
"garbage" data (normally zero bytes) and that afterwards the correct data
replaces the garbage.
We have observed that the index file of the freezer often contains garbage
entries with zero values (filenumber = 0, offset = 0) after a machine power
failure. This proves that the index file was extended without the data being
flushed, and this corruption can eventually destroy the whole freezer.
Performing `fsync` after each write operation reduces the time window for
data to be transferred to disk and ensures the correctness of the on-disk
data to the greatest extent possible.
---
**How can we maintain this guarantee without relying on fsync?**
Because the items in the index file are strictly ordered, we can leverage
this characteristic to detect corruption and truncate the affected items
when the freezer is opened. Specifically, the following validation rules
are applied to each index file.
For two consecutive index items:
- If their file numbers are the same, then the offset of the latter one
MUST NOT be less than that of the former.
- If the file number of the latter one is equal to that of the former
plus one, then the offset of the latter one MUST NOT be 0.
- If their file numbers are not equal, and the latter's file number is
not equal to the former's plus one, the latter one is considered invalid.

Additionally, the first non-head item must refer to the earliest data file,
or to the next file if the earliest file is not large enough to hold the
first item (a very special case, only theoretically possible in tests).
With these validation rules, we can detect invalid items in the index
file with high probability.
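To make the rules concrete, here is a minimal, self-contained sketch of checking consecutive entries; the entry layout and function name are simplified placeholders, not the actual freezer repair code:

```go
package main

import "fmt"

// indexEntry is a simplified stand-in for a freezer index item: the data-file
// number it points into and the end offset of the item within that file.
type indexEntry struct {
	filenum uint32
	offset  uint32
}

// validPrefix returns the number of leading entries that satisfy the ordering
// rules; everything after the first violation would be truncated on open.
func validPrefix(items []indexEntry) int {
	for i := 1; i < len(items); i++ {
		prev, cur := items[i-1], items[i]
		switch {
		case cur.filenum == prev.filenum:
			if cur.offset < prev.offset {
				return i // offsets within one data file must not go backwards
			}
		case cur.filenum == prev.filenum+1:
			if cur.offset == 0 {
				return i // the first item of a new data file cannot end at offset 0
			}
		default:
			return i // any other file-number jump indicates corruption
		}
	}
	return len(items)
}

func main() {
	items := []indexEntry{
		{filenum: 1, offset: 100},
		{filenum: 1, offset: 250},
		{filenum: 0, offset: 0}, // typical garbage left after a power failure
	}
	fmt.Println("valid entries:", validPrefix(items)) // 2
}
```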
---
Unfortunately, the following scenarios are not covered and could still lead
to freezer corruption if they occur:
**All items in the index file have zero value**
It's impossible to distinguish whether they are truly zero (e.g. all data
entries maintained in the freezer have zero size) or just garbage left by
the OS. In this case, these index items will be kept while the entire data
file is truncated; in other words, the freezer is corrupted.
However, the probability of this situation occurring is quite low, and even
if it occurs, the freezer can be considered close to an empty state.
Re-running the state sync should be acceptable.
**Index file is intact while the corresponding data file is corrupted**
It is possible that the data file is corrupted: its size was extended
correctly, but it is filled with garbage (e.g. zero bytes). In this case,
it's impossible to detect the corruption through index validation.
We can either `fsync` the data file, or blindly trust that if the index
file is intact then the data file is very likely intact as well. In this
pull request, the first option is taken.
Reverts ethereum/go-ethereum#30495
You are free to create a proper `Clear` method if that's the best way. But
it needs to do a proper cleanup, not some hacky call to set gas, which
screws up logs, metrics and everything along the way. It also doesn't work
for legacy-pool local transactions.
The current code had a hack in the simulated code; now we have a hack in
the live txpool code. No, that's not acceptable. I want the live code to
be proper: a meaningful API, meaningful comments, a meaningful
implementation.