This PR moves the logging/tracing facilities out of `*state.StateDB` and into a wrapping struct which implements `vm.StateDB` instead.
In most places, it is a pretty straightforward change:
- First, hoisting the invocations from state objects up to the statedb.
- Then making the mutation methods simply return the previous value, so that the external logging layer can log everything.

Some internal code uses the direct object accessors to mutate the state, particularly in testing and in setting up state overrides, which means that those changes are unobservable by the hooked layer. That is acceptable: configuring the overrides is not necessarily part of the API we want to publish.
The trickiest part of the layering is that when self-destructed accounts are finally deleted during `Finalise`, someone may have sent some ether to such an account in the meantime. That ether is burnt at that point, and thus needs to be logged. The hooked layer reaches into the inner layer to figure out these events.
In package `vm`, the conversion from `state.StateDB + hooks` into a
hooked `vm.StateDB` is performed where needed.
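Schematically, the wrapper looks something like this (a minimal sketch; exact names and signatures differ from the real code):

```go
package state

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/holiman/uint256"
)

// innerState is the subset of *state.StateDB used in this sketch; per
// this PR, mutation methods return the previous value.
type innerState interface {
	AddBalance(addr common.Address, amount *uint256.Int, reason tracing.BalanceChangeReason) uint256.Int
}

// hookedStateDB wraps the inner state database and emits tracing hooks
// for every observable mutation.
type hookedStateDB struct {
	inner innerState
	hooks *tracing.Hooks
}

func (s *hookedStateDB) AddBalance(addr common.Address, amount *uint256.Int, reason tracing.BalanceChangeReason) {
	// The inner mutation method returns the previous value, so the
	// logging layer can reconstruct the change without extra reads.
	prev := s.inner.AddBalance(addr, amount, reason)
	if s.hooks.OnBalanceChange != nil {
		newBalance := new(uint256.Int).Add(&prev, amount)
		s.hooks.OnBalanceChange(addr, prev.ToBig(), newBalance.ToBig(), reason)
	}
}
```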
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Way back we added `common.math.BigMin` and `common.math.BigMax`. These were kind of cute helpers, but unfortunate ones, because packages all over our codebase added dependencies on this package just to avoid having to write out 3 lines of code.
Because of this, we also started having package name clashes with the stdlib `math`, which got solved even more badly by moving some helpers over ***from*** the stdlib into our custom lib (e.g. MaxUint64). The latter ones were nuked out in a previous PR, and this PR nukes out BigMin and BigMax, inlining them at all call sites.
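For illustration, the inlined form at a call site amounts to this (function name is illustrative):

```go
package core

import "math/big"

// Instead of calling the removed math.BigMin helper, the three-line
// comparison is simply written out at the call site.
func minOf(a, b *big.Int) *big.Int {
	min := a
	if b.Cmp(a) < 0 {
		min = b
	}
	return min
}
```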
As we're transitioning to uint256, if need be, we can add a min and max
to that.
Clique currently depends on the `accounts` package. This was a bit of a big cannon even in the past, pulled in just to pass a signer "account" to the Clique block producer. Either way, Geth no longer supports clique mining, so by removing that bit of functionality from our code, we can also break this dependency.
Clique should ideally be further torn out, but this at least gets us one
step closer to cleanups.
While looking at some mem profiles from `evm` runs, I noticed that
`goja` compilation of the bigint library was present. The bigint library
compilation happens in a package `init`, whenever the package
`eth/tracers/js` is loaded. This PR changes it to load lazily when
needed.
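The pattern is the usual lazy initialization via `sync.Once`. A sketch (not the exact code; `bigIntegerJS` stands in for the embedded library source):

```go
package js

import (
	"sync"

	"github.com/dop251/goja"
)

// bigIntegerJS holds the bigint library source, embedded elsewhere
// (shown here as a placeholder).
var bigIntegerJS string

var (
	bigIntOnce    sync.Once
	bigIntProgram *goja.Program
)

// getBigIntProgram compiles the bigint library on first use, instead of
// in the package init function.
func getBigIntProgram() *goja.Program {
	bigIntOnce.Do(func() {
		bigIntProgram = goja.MustCompile("bigInt", bigIntegerJS, false)
	})
	return bigIntProgram
}
```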
With this change it becomes slightly faster, and allocates slightly less.
Non-scientific benchmark with 100 executions:
```
time for i in {1..100}; do ./evm --code 6040 run; done;
```
current `master`:
```
real 0m6.634s
user 0m5.213s
sys 0m2.277s
```
Without compiling bigint
```
real 0m5.802s
user 0m4.191s
sys 0m1.965s
```
Fixes an issue missed in #30576 where we sent empty requests for a full payload being resolved, causing a hash mismatch later on when the payload comes back via `NewPayload`.
Breaking changes:
- The ChainConfig was exposed to tracers via the VMContext passed in `OnTxStart`. This is unnecessary, especially through the lens of live tracers, as the chain config remains the same throughout the lifetime of the program. It was there so that native API-invoked tracers could access it. Instead, we have moved it to the constructor of the API tracers (see the sketch below).
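A minimal sketch of the new constructor shape, using the supply tracer as an example (names are illustrative; details differ from the real code):

```go
package native

import (
	"encoding/json"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/params"
)

type supplyTracer struct {
	chainConfig *params.ChainConfig // captured once at construction
}

// The chain config is passed to the constructor, rather than being read
// from the VMContext in OnTxStart on every transaction.
func newSupplyTracer(chainConfig *params.ChainConfig, cfg json.RawMessage) (*tracing.Hooks, error) {
	t := &supplyTracer{chainConfig: chainConfig}
	return &tracing.Hooks{
		OnTxStart: t.onTxStart,
	}, nil
}

func (t *supplyTracer) onTxStart(vm *tracing.VMContext, tx *types.Transaction, from common.Address) {
	// t.chainConfig is available here without going through vm.
}
```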
Non-breaking:
- Change the default config of the tracers to be `{}` instead of nil.
This way an extra nil check can be avoided.
Refactoring:
- Rename `supply` struct to `supplyTracer`.
- Un-export some hook definitions.
~~Opening this as a draft to have a discussion.~~ Pressed the wrong
button
I had [a previous PR](https://github.com/ethereum/go-ethereum/pull/24616) a long time ago which reduced the peak memory used during reorgs by not accumulating all transactions and logs.
This PR reduces the peak memory further by not storing the blocks in memory.
However, this means we need to pull the blocks back up from storage multiple times during the reorg.
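A minimal sketch of the resulting access pattern, assuming a `getBlock(hash, number)` lookup backed by the database (names illustrative):

```go
package core

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// walkChain keeps only hash/number pairs in memory and re-reads each
// block from storage when it is actually needed, instead of holding
// the full []*types.Block for the old and new chains.
func walkChain(hashes []common.Hash, numbers []uint64,
	getBlock func(common.Hash, uint64) *types.Block,
	process func(*types.Block)) {
	for i := range hashes {
		// The block is pulled up from storage on demand and becomes
		// eligible for GC as soon as processing moves on.
		process(getBlock(hashes[i], numbers[i]))
	}
}
```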
I collected the following numbers on peak memory usage:

- Master: `BenchmarkReorg-8  10000  899591 ns/op  820154 B/op  1440 allocs/op` (1549443072 bytes of heap used)
- WithoutOldChain: `BenchmarkReorg-8  10000  1147281 ns/op  943163 B/op  1564 allocs/op` (1163870208 bytes of heap used)
- WithoutNewChain: `BenchmarkReorg-8  10000  1018922 ns/op  943580 B/op  1564 allocs/op` (1171890176 bytes of heap used)

Each block contains a transaction with ~50k bytes, and we're doing a 10k block reorg, so the chain should be ~500MB in size.
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
## Description
Omit null `witness` field from payload envelope.
## Motivation
Currently, JSON encoded payload types always include `"witness": null`,
which, I believe, is not intentional.
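A self-contained illustration of the fix (the field type is simplified; the real envelope uses its own witness type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope mimics the payload envelope: with `omitempty`, a nil witness
// is dropped from the encoding instead of serialized as "witness": null.
type envelope struct {
	Witness *string `json:"witness,omitempty"`
}

func main() {
	b, _ := json.Marshal(envelope{})
	fmt.Println(string(b)) // prints {} rather than {"witness":null}
}
```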
Calculating a reasonable tx blob fee cap (`max_blob_fee_per_gas * total_blob_gas`) only depends on the excess blob gas of the parent header. The parent header is assumed to be correct, so the method should not be able to fail and return an error.
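A sketch of the computation, assuming the `eip4844.CalcBlobFee` helper that derives the per-gas blob fee from the excess blob gas (the helper's signature may differ across versions):

```go
package txpool

import (
	"math/big"

	"github.com/ethereum/go-ethereum/consensus/misc/eip4844"
)

// maxBlobFee returns the worst-case blob fee of a transaction. It only
// depends on the parent's excess blob gas, which is assumed valid, so
// nothing here can fail and no error is returned.
func maxBlobFee(parentExcessBlobGas, totalBlobGas uint64) *big.Int {
	perGas := eip4844.CalcBlobFee(parentExcessBlobGas) // blob fee per gas unit
	return new(big.Int).Mul(perGas, new(big.Int).SetUint64(totalBlobGas))
}
```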
This change brings geth into compliance with the current engine API specification for the Prague fork. I have moved the assignment of `ExecutionPayloadEnvelope.Requests` into `BlockToExecutableData` to ensure there is a single place where the type is removed.
While doing so, I noticed that the handling of requests in the miner was not quite correct for the empty payload: it would return `nil` requests for the empty payload even for blocks after the Prague fork. To fix this, I have added the `emptyRequests` field in `miner.Payload`.
Changelog: https://golangci-lint.run/product/changelog/#1610
Removes `exportloopref` (no longer needed), and replaces it with `copyloopvar`, which is basically the opposite.
Also adds:
- `durationcheck`
- `gocheckcompilerdirectives`
- `reassign`
- `mirror`
- `tenv`
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This change makes the trie commit operation concurrent, if the number of changes exceeds 100.
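A minimal sketch of the idea, with types heavily simplified (the real committer is more involved):

```go
package trie

import "sync"

// node stands in for a trie node; the real type is more involved.
type node struct {
	children []*node
}

const parallelThreshold = 100 // per this change, concurrency kicks in above 100 changes

// commit flushes a dirty trie; when the change count passes the
// threshold, top-level children are committed on separate goroutines.
func commit(root *node, changes int) {
	if changes <= parallelThreshold {
		for _, c := range root.children {
			commitSubtrie(c)
		}
		return
	}
	var wg sync.WaitGroup
	for _, c := range root.children {
		wg.Add(1)
		go func(n *node) {
			defer wg.Done()
			commitSubtrie(n)
		}(c)
	}
	wg.Wait()
}

// commitSubtrie writes one subtrie's nodes to the database.
func commitSubtrie(n *node) {
	for _, c := range n.children {
		commitSubtrie(c)
	}
}
```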
Co-authored-by: stevemilk <wangpeculiar@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This fixes a few issues missed in #29052:
* `requests` must be hex encoded, so a helper was added to marshal them.
* The statedb was committed too early, so the result of the system calls was lost.
* For devnet-4 we need to pull off the type byte prefix from the request data (see the sketch below).
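A sketch combining the marshalling helper and the prefix stripping (whether these live in one function differs in the real code; this only illustrates the two fixes):

```go
package engine

import "github.com/ethereum/go-ethereum/common/hexutil"

// marshalRequests hex-encodes requests for the engine API; per the
// devnet-4 note above, the leading type byte is stripped from each
// request's data first.
func marshalRequests(requests [][]byte) []hexutil.Bytes {
	out := make([]hexutil.Bytes, len(requests))
	for i, req := range requests {
		out[i] = hexutil.Bytes(req[1:]) // drop the type byte prefix
	}
	return out
}
```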
This is a redo of #29052 based on newer specs. Here we implement EIPs
scheduled for the Prague fork:
- EIP-7002: Execution layer triggerable withdrawals
- EIP-7251: Increase the MAX_EFFECTIVE_BALANCE
Co-authored-by: lightclient <lightclient@protonmail.com>
A couple of tests set the debug level to `TRACE` on stdout,
and all subsequent tests in the same package are also affected
by that, resulting in outputs of tens of megabytes.
This PR removes such calls from two packages where it was prevalent.
This makes getting a summary of failing tests simpler, and possibly reduces some strain on the CI pipeline.
This implements recent changes to EIP-7685, EIP-6110, and
execution-apis.
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Shude Li <islishude@gmail.com>
The bulk of this PR was authored by @lightclient in the original EOF work. More recently, the code was picked up and reworked for the new EOF specification by @MariusVanDerWijden in https://github.com/ethereum/go-ethereum/pull/29518, and @shemnon has also contributed fixes.
This PR is an attempt to start eating the elephant one small bite at a time, by selecting only the EOF validation as a standalone piece which can be merged without interfering too much with the core stuff.
In this PR:
- [x] Validation of EOF containers, lifted from #29518, along with test-vectors from consensus-tests and fuzzing, to ensure that the move did not lose any functionality.
- [x] Definition of EOF opcodes, which is a prerequisite for validation.
- [x] Addition of `undefined` to a jumptable entry item (see the sketch below). I'm not super-happy with this, but for the moment it seems the least invasive way to do it. A better way might be to go back and allow nil items or nil execute-functions to denote "undefined".
- [x] Benchmarks of EOF validation speed.
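For illustration, a minimal sketch of the jumptable tweak (types heavily simplified; the real `operation` struct carries more fields):

```go
package vm

type executionFunc func() error

// operation is a simplified jumptable entry; entries without a
// definition are explicitly marked undefined rather than left nil.
type operation struct {
	execute   executionFunc
	undefined bool
}

type JumpTable [256]*operation

// validOpcode reports whether an opcode is defined; EOF validation
// rejects containers that use undefined instructions.
func validOpcode(table *JumpTable, op byte) bool {
	entry := table[op]
	return entry != nil && !entry.undefined
}
```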
---------
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Danno Ferrin <danno.ferrin@shemnon.com>
This pull request removes the `fsync` of index files in the freezer's `ModifyAncients` function for a performance gain.
Originally, an fsync was issued after each freezer write operation to ensure the written data was truly transferred to disk. Unfortunately, it turns out `fsync` can be relatively slow, especially on macOS (see https://github.com/ethereum/go-ethereum/issues/28754 for more information).
In this pull request, the fsync for the index file is removed, as it turns out the index file can be recovered even after an unclean shutdown. The fsync for the data file is still kept, as we have no meaningful way to validate the data's correctness after an unclean shutdown.
---
**But why do we need the `fsync` in the first place?**
It's necessary for the freezer to survive/recover after a machine crash (e.g. power failure).
On Linux, when a file write is performed, the file metadata update and the data update are not necessarily performed at the same time. Typically, the metadata is flushed/journalled ahead of the file data. Therefore, we make the pessimistic assumption that the file is first extended with invalid "garbage" data (normally zero bytes) and that afterwards the correct data replaces the garbage.
We have observed that the index file of the freezer often contains garbage entries with zero values (filenumber = 0, offset = 0) after a machine power failure. This proves that the index file is extended without the data being flushed, and this corruption can eventually destroy the whole freezer. Performing an fsync after each write operation narrows the time window for data to be transferred to disk, ensuring the correctness of the on-disk data to the greatest possible extent.
---
**How can we maintain this guarantee without relying on fsync?**
Because the items in the index file are strictly ordered, we can leverage this characteristic to detect corruption and truncate the invalid items when the freezer is opened. Specifically, these validation rules are applied to each index file, for every two consecutive index items:
- If their file numbers are the same, the offset of the latter one MUST NOT be less than that of the former.
- If the file number of the latter one is equal to that of the former plus one, the offset of the latter one MUST NOT be 0.
- Any other combination of file numbers (i.e. the latter's file number differs from the former's by more than one, or decreases) marks the latter item as invalid.

Also, the first non-head item must refer to the earliest data file, or to the next file if the earliest one does not have room for the first item (a very special case, only theoretically possible in tests).
With these validation rules, we can detect invalid items in the index file with high probability.
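A sketch of the validation pass, with the entry layout simplified to (filenum, offset) pairs (the real code also handles the head item and the earliest-file rule):

```go
package rawdb

// indexEntry is a simplified index item: which data file the item ends
// in, and the end offset within that file.
type indexEntry struct {
	filenum uint32
	offset  uint32
}

// validEntries returns how many leading entries pass the consecutive-item
// rules above; everything after that point is truncated on open.
func validEntries(entries []indexEntry) int {
	for i := 1; i < len(entries); i++ {
		prev, cur := entries[i-1], entries[i]
		switch {
		case cur.filenum == prev.filenum:
			if cur.offset < prev.offset {
				return i // offsets within one file must not go backwards
			}
		case cur.filenum == prev.filenum+1:
			if cur.offset == 0 {
				return i // first item of a new file must not be the zero garbage pattern
			}
		default:
			return i // any other file-number jump is invalid
		}
	}
	return len(entries)
}
```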
---
Unfortunately, the following scenarios are not covered and could still lead to freezer corruption if they occur:

**All items in the index file are zero**
It's impossible to distinguish whether they are truly zero (e.g. all the data entries maintained in the freezer are zero size) or just garbage left by the OS. In this case, these index items are kept while the entire data file is truncated; namely, the freezer is corrupted.
However, the probability of this situation occurring is quite low, and even if it occurs, the freezer can be considered close to an empty state. Re-running the state sync should be acceptable.
**The index file is intact while the corresponding data file is corrupted**
It is possible for the data file to be corrupted with its size extended correctly but filled with garbage (e.g. zero bytes). In this case, the corruption cannot be detected by index validation. We can either `fsync` the data file, or blindly trust that if the index file is intact, then the data file is very likely intact as well. In this pull request, the first option is taken.