This change adds a code generator tool for creating EncodeRLP method
implementations. The generated methods will behave identically to the
reflect-based encoder, but run faster because there is no reflection overhead.
Package rlp now provides the EncoderBuffer type for incremental encoding. This
is used by generated code, but the new methods can also be useful for
hand-written encoders.
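As a rough illustration, a hand-written EncodeRLP built on EncoderBuffer might look like the following sketch. The Item type is hypothetical, and treating Data as optional is just for demonstration; generated encoders follow the same open-list / write-fields / close-list pattern.

```go
package main

import (
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

// Item is a hypothetical example type.
type Item struct {
	Nonce uint64
	Hash  [32]byte
	Data  []byte
}

// EncodeRLP writes the canonical RLP encoding of Item to w.
func (it *Item) EncodeRLP(w io.Writer) error {
	buf := rlp.NewEncoderBuffer(w)
	idx := buf.List()         // open the outer list, remember its offset
	buf.WriteUint64(it.Nonce) // fields are encoded in declaration order
	buf.WriteBytes(it.Hash[:])
	if len(it.Data) > 0 { // Data is treated as optional; omitted when empty
		buf.WriteBytes(it.Data)
	}
	buf.ListEnd(idx) // patch in the list header now that the size is known
	return buf.Flush()
}
```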
There is also experimental support for generating DecodeRLP, and some new
methods have been added to the existing Stream type to support this. Creating
decoders with rlpgen is not recommended at this time because the generated
methods report errors very poorly.
More detail about package rlp changes:
* rlp: externalize struct field processing / validation
This adds a new package, rlp/internal/rlpstruct, in preparation for the
RLP encoder generator.
I think the struct field rules are subtle enough to warrant extracting
them into their own package, even though it means that a bunch of
adapter code is needed for converting to/from rlpstruct.Type.
* rlp: add more decoder methods (for rlpgen)
This adds new methods on rlp.Stream:
- Uint64, Uint32, Uint16, Uint8, BigInt
- ReadBytes for decoding into []byte
- MoreDataInList - useful for optional list elements (see the decoding sketch below)
* rlp: expose encoder buffer (for rlpgen)
This exposes the internal encoder buffer type for use in EncodeRLP
implementations.
The new EncoderBuffer type is a sort-of 'opaque handle' for a pointer to
encBuffer. It is implemented this way to ensure the global encBuffer pool
is handled correctly.
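Continuing the hypothetical Item example from above, a DecodeRLP built on the new Stream methods might look like this sketch; note how MoreDataInList handles the optional trailing field.

```go
// DecodeRLP reads Item from the stream using the new helper methods.
func (it *Item) DecodeRLP(s *rlp.Stream) error {
	if _, err := s.List(); err != nil {
		return err
	}
	nonce, err := s.Uint64()
	if err != nil {
		return err
	}
	it.Nonce = nonce
	if err := s.ReadBytes(it.Hash[:]); err != nil { // exact-size field
		return err
	}
	// The optional Data field is only present if the list isn't exhausted.
	if s.MoreDataInList() {
		data, err := s.Bytes()
		if err != nil {
			return err
		}
		it.Data = data
	}
	return s.ListEnd()
}
```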
This PR adds an additional API called `NewBatchWithSize` for the db
batcher. It turns out that leveldb batch memory allocation is
very inefficient: the allocation step of a leveldb Batch is too
small when the batch size is large, so building a 100MB batch can
take a few seconds.
Luckily, leveldb also offers another API called MakeBatch which can
pre-allocate the memory area, so if the approximate size of the batch
is known in advance, it can be used to pre-size the allocation.
It's needed by the new state scheme PR, which commits a large set of
trie nodes in a single batch; the feature is implemented here as a
separate PR.
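A minimal sketch of the intended use, assuming the caller can estimate the total payload size up front (writeNodes and its arguments are illustrative, not part of the PR):

```go
package main

import "github.com/ethereum/go-ethereum/ethdb"

// writeNodes commits a set of key/value pairs in one pre-sized batch,
// avoiding leveldb's repeated small reallocations for large batches.
func writeNodes(db ethdb.Batcher, nodes map[string][]byte, approxSize int) error {
	batch := db.NewBatchWithSize(approxSize)
	for key, blob := range nodes {
		if err := batch.Put([]byte(key), blob); err != nil {
			return err
		}
	}
	return batch.Write()
}
```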
This functionality is needed by the new path-based storage scheme, but
can be implemented in a separate PR.
When an account is deleted, all of its storage slots should be
wiped from disk as well. In the hash-based storage scheme they
are left on disk, but in the new scheme they will be iterated
and marked as deleted.
But why is the NodeBlob API needed in this scenario? Because when
a node is marked as deleted, its previous value must also be
recorded in order to construct the reverse diff.
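An illustrative sketch of that reasoning; the names below are hypothetical and stand in for the actual trie API:

```go
// reverseDiffEntry records the pre-deletion state of one trie node.
type reverseDiffEntry struct {
	Path []byte // trie path of the node
	Prev []byte // previous RLP blob; nil means the node did not exist
}

// deleteWithDiff reads a node's current blob before deleting it, so the
// deletion can later be reverted. Reading the previous value is the
// role the NodeBlob API plays here.
func deleteWithDiff(readBlob func(path []byte) ([]byte, error), del func(path []byte) error,
	path []byte, diff *[]reverseDiffEntry) error {
	prev, err := readBlob(path)
	if err != nil {
		return err
	}
	*diff = append(*diff, reverseDiffEntry{Path: path, Prev: prev})
	return del(path)
}
```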
This change makes it so WaitMined no longer logs an error when the receipt
is unavailable. It also changes the simulated backend to return NotFound for
unavailable receipts, just like ethclient does.
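With that, callers can treat both backends uniformly. A small sketch of the now-consistent check (receiptAvailable is an illustrative helper):

```go
import (
	"context"
	"errors"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
)

// receiptAvailable reports whether the receipt for txHash can be
// fetched yet; the check behaves the same against ethclient and the
// simulated backend.
func receiptAvailable(ctx context.Context, b bind.DeployBackend, txHash common.Hash) (bool, error) {
	if _, err := b.TransactionReceipt(ctx, txHash); err != nil {
		if errors.Is(err, ethereum.NotFound) {
			return false, nil // not available yet; keep polling quietly
		}
		return false, err
	}
	return true, nil
}
```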
I believe the sentence is attempting to explain that the URL is "[used] by upper layers to define a sorting order over all wallets from multiple backends."
* eth/tracers: add initial native prestate tracer (a usage sketch follows the list below)
* fix balance hex
* handle prestate for tx from and to
* drop created contract from prestate
* fix sender balance
* use switch instead
Co-authored-by: Martin Holst Swende <martin@swende.se>
* minor fix
* lookup create2 account
* mv code around a bit
* check stackLen for create2
* fix transfer tx for js prestate tracer
* fix create2 addr
* track extcodehash in js prestate tracer
Co-authored-by: Martin Holst Swende <martin@swende.se>
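For reference, a sketch of how the native tracer can be selected over RPC, assuming the debug namespace is enabled on the node (the tx hash argument is a placeholder):

```go
import (
	"encoding/json"

	"github.com/ethereum/go-ethereum/rpc"
)

// tracePrestate asks the node to run the native prestate tracer on a tx.
func tracePrestate(client *rpc.Client, txHash string) (json.RawMessage, error) {
	var result json.RawMessage
	err := client.Call(&result, "debug_traceTransaction", txHash,
		map[string]string{"tracer": "prestateTracer"})
	return result, err
}
```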
When talking to an HTTP/2 server, there are situations where the
transport needs to "rewind" the Request.Body. To allow this, we have to
set up the Request.GetBody function to return a brand-new instance of
the body.
If not set, we can end up with the following error:
http2: Transport: cannot retry err [http2: Transport received Server's graceful shutdown GOAWAY] after Request.Body was written; define Request.GetBody to avoid this error
See this commit for more information: cffdcf672a?visible=2
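A minimal sketch of the fix, assuming an in-memory request body. When the body is wrapped in a type net/http does not recognize, GetBody is not populated automatically and must be set by hand:

```go
import (
	"bytes"
	"context"
	"io"
	"net/http"
)

func newRequest(ctx context.Context, url string, body []byte) (*http.Request, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url,
		io.NopCloser(bytes.NewReader(body)))
	if err != nil {
		return nil, err
	}
	// Let the HTTP/2 transport replay the request after a GOAWAY by
	// handing it a brand-new reader over the same bytes.
	req.GetBody = func() (io.ReadCloser, error) {
		return io.NopCloser(bytes.NewReader(body)), nil
	}
	return req, nil
}
```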
* eth, miner: remove duplicated code
* eth/catalyst: remove unneeded code
* miner: keep updating the pending state even after the Merge has happened
* eth, miner: rebase
* miner: fix tests
* eth, miner: address comments from marius
* miner: use empty zero randomness for pending blocks after the merge
* eth/catalyst: gofmt
* miner: add warning log for state recovery
* miner: ignore uncles for post-merge blocks
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* trie/proof: edge case for VerifyRangeProof
* more consistency with other tests in the file
* trie: fix test todo
Co-authored-by: Martin Holst Swende <martin@swende.se>
This replaces the sketchy and undocumented string context keys for HTTP requests
with a defined interface. Using string keys with context is discouraged because
they may clash with keys created by other packages.
We added these keys to make connection metadata available in the signer, so this
change also updates signer/core to use the new PeerInfo API.
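A short sketch of the new API from the handler side (the Echo service is illustrative):

```go
import (
	"context"

	"github.com/ethereum/go-ethereum/rpc"
)

type service struct{}

// Echo reads connection metadata from the request context via the typed
// PeerInfo API instead of ad-hoc string keys.
func (s *service) Echo(ctx context.Context) string {
	info := rpc.PeerInfoFromContext(ctx)
	return info.Transport + " connection from " + info.RemoteAddr
}
```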
* eth/catalyst: evict old payloads, type PayloadID
* eth/catalyst: added tracing info to engine api
* eth/catalyst: add test for create payload timestamps
* catalyst: better logs
* eth/catalyst: computePayloadId return style
* catalyst: add queue for payloads (an illustrative sketch follows below)
* eth/catalyst: nitpicks
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
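An illustrative sketch of the queue idea (not the actual implementation): a small fixed-size store where each new payload evicts the oldest, so stale payloads age out naturally.

```go
import "sync"

const maxTrackedPayloads = 10 // assumed capacity, for illustration

type payloadItem struct {
	id      [8]byte     // engine-API payload identifier
	payload interface{} // the built execution payload
}

// payloadQueue keeps the most recent payloads, newest first.
type payloadQueue struct {
	items [maxTrackedPayloads]*payloadItem
	lock  sync.RWMutex
}

// put inserts a new payload at the front, evicting the oldest entry.
func (q *payloadQueue) put(id [8]byte, payload interface{}) {
	q.lock.Lock()
	defer q.lock.Unlock()
	copy(q.items[1:], q.items[:]) // shift down; the last element falls off
	q.items[0] = &payloadItem{id: id, payload: payload}
}

// get returns the payload for id, or nil if unknown or already evicted.
func (q *payloadQueue) get(id [8]byte) interface{} {
	q.lock.RLock()
	defer q.lock.RUnlock()
	for _, item := range q.items {
		if item != nil && item.id == id {
			return item.payload
		}
	}
	return nil
}
```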
* cmd/geth: add db cmd to show metadata
* cmd/geth: better output for generator status
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
* cmd: minor
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>