* core/txpool: no need to run rotate if no local txs
Signed-off-by: jsvisa <delweng@gmail.com>
* Revert "core/txpool: no need to run rotate if no local txs"
This reverts commit 17fab17388.
Signed-off-by: jsvisa <delweng@gmail.com>
* use Debug if todo is empty
Signed-off-by: jsvisa <delweng@gmail.com>
---------
Signed-off-by: jsvisa <delweng@gmail.com>
* make blobpool reject blob transactions with fee below the minimum
* core/txpool: some minor nitpick polishes and unified error formats
* core/txpool: do less big.Int constructions with the min blob cap
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
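To illustrate the first point above, a hedged sketch of the rejection logic; the constant and helper names here are illustrative, not the blobpool's actual fields:

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// minBlobFee mirrors the protocol's minimum blob gas price of 1 wei
// (illustrative constant, not the pool's actual field).
var minBlobFee = big.NewInt(1)

var errUnderpricedBlobFee = errors.New("blob fee below minimum")

// validateBlobFeeCap is a hypothetical stand-in for the added check: a blob
// transaction whose blob fee cap bids below the minimum is rejected outright.
// Keeping minBlobFee as a single shared big.Int also avoids reconstructing it
// on every validation, which is what the follow-up nitpick commit addresses.
func validateBlobFeeCap(blobFeeCap *big.Int) error {
	if blobFeeCap.Cmp(minBlobFee) < 0 {
		return errUnderpricedBlobFee
	}
	return nil
}

func main() {
	fmt.Println(validateBlobFeeCap(big.NewInt(0))) // blob fee below minimum
	fmt.Println(validateBlobFeeCap(big.NewInt(7))) // <nil>
}
```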
This PR fixes an overflow which could happen if inconsistent blockchain rules were configured. Additionally, it tries to prevent such inconsistencies from occurring by making sure that the merge cannot be enabled unless the previous fork(s) are also enabled.
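As a rough illustration of the second part, a hypothetical fork-order sanity check over a simplified config; the real validation lives in params and is more thorough:

```go
package main

import "fmt"

// chainConfig is a simplified stand-in for the real chain configuration; the
// field names below are illustrative, not the exact params.ChainConfig ones.
type chainConfig struct {
	LondonBlock        *uint64 // an earlier fork that must precede the merge
	MergeNetsplitBlock *uint64 // nil means the merge is not scheduled
}

// checkForkOrder is a hypothetical sanity check in the spirit of the fix:
// the merge cannot be enabled unless the previous fork(s) are also enabled.
func checkForkOrder(c *chainConfig) error {
	if c.MergeNetsplitBlock != nil && c.LondonBlock == nil {
		return fmt.Errorf("merge enabled at block %d, but the preceding fork is not configured", *c.MergeNetsplitBlock)
	}
	return nil
}

func main() {
	merge := uint64(100)
	fmt.Println(checkForkOrder(&chainConfig{MergeNetsplitBlock: &merge})) // error: preceding fork missing
}
```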
* core/txpool, miner: speed up blob pool pending retrievals
* miner: fix test merge issue
* eth: same same
* core/txpool/blobpool: speed up blobtx creation in benchmark a bit
* core/txpool/blobpool: fix linter
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change makes the legacy transaction pool use `uint256.Int` instead of `big.Int`. The changes are primarily confined to the internal functions of legacypool.
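A small sketch of the pattern, with a hypothetical cost helper standing in for the pool's internal functions:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/holiman/uint256"
)

// txCost is a hypothetical helper showing the pattern: gas limit, gas price
// and value are combined entirely in uint256.Int, converting from big.Int
// once at the boundary instead of allocating big.Int intermediates.
func txCost(gas uint64, gasPrice, value *big.Int) *uint256.Int {
	cost := uint256.NewInt(gas)
	price, _ := uint256.FromBig(gasPrice) // overflow handling elided for brevity
	cost.Mul(cost, price)
	val, _ := uint256.FromBig(value)
	return cost.Add(cost, val)
}

func main() {
	// Prints the combined cost of a plain transfer at 2 gwei.
	fmt.Println(txCost(21000, big.NewInt(2_000_000_000), big.NewInt(1)).Uint64())
}
```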
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change adds support for blob transactions in certain API endpoints, e.g. `eth_fillTransaction`. A follow-up PR will add support for signing such transactions.
* all: implement era format, add history importer/export
* internal/era/e2store: refactor e2store to provide ReadAt interface
* internal/era/e2store: export HeaderSize
* internal/era: refactor era to use ReadAt interface
* internal/era: elevate anonymous func to named
* cmd/utils: don't store entire era file in-memory during import / export
* internal/era: better abstraction between era and e2store
* cmd/era: properly close era files
* cmd/era: don't let defers stack
* cmd/geth: add description for import-history
* cmd/utils: better bytes buffer
* internal/era: error if accumulator has more records than max allowed
* internal/era: better doc comment
* internal/era/e2store: rm superfluous reader, rm superfluous testcases, add fuzzer
* internal/era: avoid some repetition
* internal/era: simplify clauses
* internal/era: unexport things
* internal/era,cmd/utils,cmd/era: change to iterator interface for reading era entries
* cmd/utils: better defer handling in history test
* internal/era,cmd: add number method to era iterator to get the current block number
* internal/era/e2store: avoid double allocation during write
* internal/era,cmd/utils: fix lint issues
* internal/era: add ReaderAt func so entry value can be read lazily
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* internal/era: improve iterator interface
* internal/era: fix rlp decode of header and correctly read total difficulty
* cmd/era: fix rebase errors
* cmd/era: clearer comments
* cmd,internal: fix comment typos
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
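To give a feel for the iterator interface mentioned in the list above, here is a heavily simplified sketch; the identifiers are illustrative approximations, not the exact internal/era API:

```go
package main

import "fmt"

// blockIterator is a hypothetical reduction of the era entry iterator
// described above; the method names are illustrative, not the real ones.
type blockIterator interface {
	Next() bool     // advance to the next entry, false when exhausted
	Number() uint64 // block number of the current entry
	Error() error   // first error encountered while iterating
}

// importHistory walks the iterator entry by entry; because entry values are
// read lazily, cheap metadata like the block number is available without
// decoding the whole block.
func importHistory(it blockIterator) error {
	for it.Next() {
		fmt.Println("importing block", it.Number())
	}
	return it.Error()
}

// fakeIterator is a trivial in-memory stand-in used to exercise the loop.
type fakeIterator struct{ next, last uint64 }

func (f *fakeIterator) Next() bool     { f.next++; return f.next <= f.last }
func (f *fakeIterator) Number() uint64 { return f.next }
func (f *fakeIterator) Error() error   { return nil }

func main() {
	if err := importHistory(&fakeIterator{last: 3}); err != nil {
		fmt.Println("import failed:", err)
	}
}
```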
* core/txpool/blobpool: clean up resurrected junk after a crash
* core/txpool/blobpool: track transaction insertions and rejections
* core/txpool/blobpool: linnnnnnnt
This PR fixes an issue in the new simulated backend. The root cause is the fact that the transaction pool has an internal reset operation that runs on a background thread.
When a new transaction is added to the pool via the RPC, the transaction is added to a non-executable queue and will be moved to its final location on a background thread. If the machine is overloaded (or simply due to timing issues), it can happen that the simulated backend will try to produce the next block, whilst the pool has not yet marked the newly added transaction executable. This will cause the block to not contain the transaction. This is an issue because we want determinism from the simulator: add a tx, mine a block. It should be in there.
The PR fixes it by adding a Sync function to the txpool, which waits for the current reset operation (if any) to finish, and then runs an entire round of reset on top. The new round is needed because resets are only triggered by new head events, so newly added transactions will not trigger the outer resets that we can wait on. The transaction pool would eventually internally do a reset even on transaction addition, but there's no easy way to wait on that and there's no meaningful reason to bubble that across everything. A clean outer reset will at worst be a small no-op goroutine.
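A hedged sketch of the synchronization pattern, using a drastically simplified pool rather than the real txpool internals:

```go
package main

import "fmt"

// pool is a drastically simplified stand-in for the transaction pool: a single
// background loop serializes resets, so queued requests run strictly in order.
type pool struct {
	reqReset chan chan struct{}
}

func newPool() *pool {
	p := &pool{reqReset: make(chan chan struct{})}
	go func() {
		for done := range p.reqReset {
			// ... run a full reset, promoting queued transactions ...
			close(done)
		}
	}()
	return p
}

// Sync queues one more reset and waits for it. Because the loop above is
// serial, any reset already in flight (and any transaction added before the
// call) has been processed by the time our request completes; at worst this
// costs one extra no-op reset.
func (p *pool) Sync() {
	done := make(chan struct{})
	p.reqReset <- done
	<-done
}

func main() {
	p := newPool()
	p.Sync()
	fmt.Println("pool is in sync; a freshly added tx would now be pending")
}
```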
This change switches from using the `Hasher` interface to add to and query the bloom filter, to implementing those operations as methods on the type itself.
This significantly reduces the allocations for Search and Rebloom.
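The gist, in a simplified sketch (the filter and key types below are illustrative stand-ins, not the actual ones): rather than allocating a fresh Hasher wrapper for every add/query, the key type exposes its 64-bit digest directly.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// filter is a tiny stand-in for the bloom filter consulted by Search/Rebloom;
// it accepts a precomputed 64-bit digest instead of a Hasher object.
type filter struct{ bits []uint64 }

func (f *filter) add(h uint64)           { f.bits[h%uint64(len(f.bits))] |= 1 << (h % 64) }
func (f *filter) contains(h uint64) bool { return f.bits[h%uint64(len(f.bits))]&(1<<(h%64)) != 0 }

// key is a 32-byte hash whose bloomHash method derives the digest in place,
// so no per-query Hasher wrapper has to be allocated.
type key [32]byte

func (k key) bloomHash() uint64 { return binary.BigEndian.Uint64(k[:8]) }

func main() {
	f := &filter{bits: make([]uint64, 1024)}
	var k key
	f.add(k.bloomHash())
	fmt.Println(f.contains(k.bloomHash())) // true
}
```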
This change makes use of uint256 to represent balances in state. It touches primarily upon statedb, stateobject and state processing, trying to avoid changes in transaction pools, core types, rpc and tracers.
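For instance, a balance update on the state object now stays in `uint256.Int` end to end; a minimal sketch with a hypothetical account struct:

```go
package main

import (
	"fmt"

	"github.com/holiman/uint256"
)

// account is a simplified stand-in for the state object's backing data, with
// the balance held as a uint256.Int rather than a big.Int.
type account struct {
	Balance *uint256.Int
}

// addBalance mirrors the shape of a state-object balance update, staying in
// uint256.Int end to end instead of round-tripping through big.Int.
func (a *account) addBalance(amount *uint256.Int) {
	a.Balance = new(uint256.Int).Add(a.Balance, amount)
}

func main() {
	acct := &account{Balance: uint256.NewInt(100)}
	acct.addBalance(uint256.NewInt(42))
	fmt.Println(acct.Balance.Uint64()) // 142
}
```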
This change simplifies the logic for indexing transactions and enhances the UX when a transaction is not found by returning more information to users.
Transaction indexing is now considered a part of the initial sync, and `eth.syncing` will thus be `true` if transaction indexing is not yet finished. API consumers can use the syncing status to determine if the node is ready to serve users.
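For API consumers, a readiness check can therefore be a plain syncing query; a small example using ethclient (the endpoint is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Placeholder endpoint; point this at your own node.
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	// SyncProgress returns nil once the node is fully synced; with this change
	// that also implies transaction indexing has finished.
	progress, err := client.SyncProgress(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	if progress != nil {
		fmt.Println("node still syncing (or indexing transactions)")
		return
	}
	fmt.Println("node ready to serve transaction lookups")
}
```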
The code to compute a versioned hash was duplicated a couple times, and also had a small
issue: if we ever change params.BlobTxHashVersion, it will most likely also cause changes
to the actual hash computation. So it's a bit useless to have this constant in params.
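For reference, the EIP-4844 versioned hash is the SHA-256 of the KZG commitment with the first byte overwritten by the version; a minimal sketch of that computation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// blobCommitmentVersionKZG is the version byte defined by EIP-4844.
const blobCommitmentVersionKZG = 0x01

// kzgToVersionedHash computes sha256(commitment) and stamps the version byte
// into the first position, as specified by EIP-4844.
func kzgToVersionedHash(commitment [48]byte) [32]byte {
	h := sha256.Sum256(commitment[:])
	h[0] = blobCommitmentVersionKZG
	return h
}

func main() {
	var commitment [48]byte // a real KZG commitment would go here
	fmt.Printf("%x\n", kzgToVersionedHash(commitment))
}
```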
EIP-4844 adds a new transaction type for blobs. Users can submit such transactions via `eth_sendRawTransaction`. In this PR we refrain from adding support to `eth_sendTransaction` and in fact it will fail if the user passes in a blob hash.
However since the chain can handle such transactions it makes sense to allow simulating them. E.g. an L2 operator should be able to simulate submitting a rollup blob and updating the L2 state. Most methods that take in a transaction object should recognize blobs. The change boils down to adding `blobVersionedHashes` and `maxFeePerBlobGas` to `TransactionArgs`. In summary:
- `eth_sendTransaction`: will fail for blob txes
- `eth_signTransaction`: will fail for blob txes
The methods that sign txes do not, as of this PR, add support for the new EIP-4844 transaction types. Resuming the summary:
- `eth_sendRawTransaction`: can send blob txes
- `eth_fillTransaction`: will fill in a blob tx. Note: here we simply fill in normal transaction fields + possibly `maxFeePerBlobGas` when blobs are present. One can imagine a more elaborate set-up where users can submit blobs themselves and we fill in proofs and commitments and such. Left for future PRs if desired.
- `eth_call`: can simulate blob messages
- `eth_estimateGas`: blobs have no effect here. They have a separate unit of gas which is not tunable in the transaction.
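A hedged example of exercising the new fields through the raw RPC client (the endpoint, addresses and the versioned hash are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Placeholder endpoint; point this at a node exposing the eth namespace.
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	// The argument object mirrors TransactionArgs: alongside the usual fields,
	// blobVersionedHashes and maxFeePerBlobGas mark the call as a blob tx.
	// Addresses and the versioned hash below are placeholders.
	args := map[string]interface{}{
		"from":                "0x0000000000000000000000000000000000000001",
		"to":                  "0x0000000000000000000000000000000000000002",
		"maxFeePerBlobGas":    "0x1",
		"blobVersionedHashes": []string{"0x0100000000000000000000000000000000000000000000000000000000000000"},
	}
	var filled map[string]interface{}
	if err := client.CallContext(context.Background(), &filled, "eth_fillTransaction", args); err != nil {
		log.Fatal(err)
	}
	log.Printf("filled blob tx: %v", filled)
}
```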
This pull request improves the condition used to check whether the path state scheme is in use.
Originally, the presence of the root node was used as the indicator of whether the path scheme is in use. However, because the root node is deleted during the initial snap sync, this condition is no longer useful.
If the PersistentStateID is present, it shows that the path scheme has already been configured.
Original problem was caused by #28595, where we made it so that as soon as we start to sync, the root of the disk layer is deleted. That is not wrong per se, but another part of the code uses the "presence of the root" as an init-check for the pathdb. And, since the init-check now failed, the code tried to re-initialize it, which failed since a sync was already ongoing.
The total impact is that after a state-sync has begun, if the node is for some reason shut down, it will refuse to start up again with the error message: `Fatal: Failed to register the Ethereum service: waiting for sync.`
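A simplified reading of the new condition; the actual rawdb logic consults more markers than this:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// isPathScheme sketches the improved condition: instead of testing for the
// disk-layer root node (which is deleted once snap sync starts), a non-zero
// persistent state id is taken as proof that path scheme was configured.
// The real check in rawdb is more involved; this is a simplified reading.
func isPathScheme(db ethdb.Database) bool {
	return rawdb.ReadPersistentStateID(db) != 0
}

func main() {
	db := rawdb.NewMemoryDatabase()
	fmt.Println(isPathScheme(db)) // false on a fresh database
}
```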
This change also modifies how `geth removedb` works, so that the user is prompted for two things: `state data` and `ancient chain`. The former includes both the chaindb as well as any state history stored in ancients.
---------
Co-authored-by: Martin HS <martin@swende.se>