go-ethereum/triedb/pathdb/states_test.go

// Copyright 2024 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package pathdb

import (
"bytes"
"reflect"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rlp"
)
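
// TestStatesMerge checks that merging one state set into another applies the
// incoming account values, treats nil entries as deletion markers and carries
// over the per-account storage slot changes.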
func TestStatesMerge(t *testing.T) {
a := newStates(
map[common.Hash][]byte{
{0xa}: {0xa0},
{0xb}: {0xb0},
{0xc}: {0xc0},
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x10},
common.Hash{0x2}: {0x20},
},
{0xb}: {
common.Hash{0x1}: {0x10},
},
{0xc}: {
common.Hash{0x1}: {0x10},
},
},
false,
)
b := newStates(
map[common.Hash][]byte{
{0xa}: {0xa1},
{0xb}: {0xb1},
{0xc}: nil, // delete account
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x11},
common.Hash{0x2}: nil, // delete slot
common.Hash{0x3}: {0x31},
},
{0xb}: {
common.Hash{0x1}: {0x11},
},
{0xc}: {
common.Hash{0x1}: nil, // delete slot
},
},
false,
)
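	// Merge b into a: values from b take precedence, and nil entries mark
	// deleted accounts or storage slots.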
a.merge(b)
blob, exist := a.account(common.Hash{0xa})
if !exist || !bytes.Equal(blob, []byte{0xa1}) {
t.Error("Unexpected value for account a")
}
blob, exist = a.account(common.Hash{0xb})
if !exist || !bytes.Equal(blob, []byte{0xb1}) {
t.Error("Unexpected value for account b")
}
blob, exist = a.account(common.Hash{0xc})
if !exist || len(blob) != 0 {
t.Error("Unexpected value for account c")
}
// unknown account
blob, exist = a.account(common.Hash{0xd})
if exist || len(blob) != 0 {
t.Error("Unexpected value for account d")
}
blob, exist = a.storage(common.Hash{0xa}, common.Hash{0x1})
if !exist || !bytes.Equal(blob, []byte{0x11}) {
t.Error("Unexpected value for a's storage")
}
blob, exist = a.storage(common.Hash{0xa}, common.Hash{0x2})
if !exist || len(blob) != 0 {
t.Error("Unexpected value for a's storage")
}
blob, exist = a.storage(common.Hash{0xa}, common.Hash{0x3})
if !exist || !bytes.Equal(blob, []byte{0x31}) {
t.Error("Unexpected value for a's storage")
}
blob, exist = a.storage(common.Hash{0xb}, common.Hash{0x1})
if !exist || !bytes.Equal(blob, []byte{0x11}) {
t.Error("Unexpected value for b's storage")
}
blob, exist = a.storage(common.Hash{0xc}, common.Hash{0x1})
if !exist || len(blob) != 0 {
t.Error("Unexpected value for c's storage")
}
// unknown storage slots
blob, exist = a.storage(common.Hash{0xd}, common.Hash{0x1})
if exist || len(blob) != 0 {
t.Error("Unexpected value for d's storage")
}
}
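
// TestStatesRevert checks that reverting a merged state set with the original
// values restores every account and storage slot to its pre-merge content.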
func TestStatesRevert(t *testing.T) {
a := newStates(
map[common.Hash][]byte{
{0xa}: {0xa0},
{0xb}: {0xb0},
{0xc}: {0xc0},
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x10},
common.Hash{0x2}: {0x20},
},
{0xb}: {
common.Hash{0x1}: {0x10},
},
{0xc}: {
common.Hash{0x1}: {0x10},
},
},
false,
)
b := newStates(
map[common.Hash][]byte{
{0xa}: {0xa1},
{0xb}: {0xb1},
{0xc}: nil,
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x11},
common.Hash{0x2}: nil,
common.Hash{0x3}: {0x31},
},
{0xb}: {
common.Hash{0x1}: {0x11},
},
{0xc}: {
common.Hash{0x1}: nil,
},
},
false,
)
a.merge(b)
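	// Revert the changes applied by b, supplying the original values; a nil
	// entry marks a slot that did not exist before the transition.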
a.revertTo(
map[common.Hash][]byte{
{0xa}: {0xa0},
{0xb}: {0xb0},
{0xc}: {0xc0},
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x10},
common.Hash{0x2}: {0x20},
common.Hash{0x3}: nil,
},
{0xb}: {
common.Hash{0x1}: {0x10},
},
{0xc}: {
common.Hash{0x1}: {0x10},
},
},
)
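	// After the revert, the original account and storage contents must be
	// visible again, while unknown entries stay absent.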
blob, exist := a.account(common.Hash{0xa})
if !exist || !bytes.Equal(blob, []byte{0xa0}) {
t.Error("Unexpected value for account a")
}
blob, exist = a.account(common.Hash{0xb})
if !exist || !bytes.Equal(blob, []byte{0xb0}) {
t.Error("Unexpected value for account b")
}
blob, exist = a.account(common.Hash{0xc})
if !exist || !bytes.Equal(blob, []byte{0xc0}) {
t.Error("Unexpected value for account c")
}
// unknown account
blob, exist = a.account(common.Hash{0xd})
if exist || len(blob) != 0 {
t.Error("Unexpected value for account d")
}
blob, exist = a.storage(common.Hash{0xa}, common.Hash{0x1})
if !exist || !bytes.Equal(blob, []byte{0x10}) {
t.Error("Unexpected value for a's storage")
}
blob, exist = a.storage(common.Hash{0xa}, common.Hash{0x2})
if !exist || !bytes.Equal(blob, []byte{0x20}) {
t.Error("Unexpected value for a's storage")
}
blob, exist = a.storage(common.Hash{0xa}, common.Hash{0x3})
if !exist || len(blob) != 0 {
t.Error("Unexpected value for a's storage")
}
blob, exist = a.storage(common.Hash{0xb}, common.Hash{0x1})
if !exist || !bytes.Equal(blob, []byte{0x10}) {
t.Error("Unexpected value for b's storage")
}
blob, exist = a.storage(common.Hash{0xc}, common.Hash{0x1})
if !exist || !bytes.Equal(blob, []byte{0x10}) {
t.Error("Unexpected value for c's storage")
}
// unknown storage slots
blob, exist = a.storage(common.Hash{0xd}, common.Hash{0x1})
if exist || len(blob) != 0 {
t.Error("Unexpected value for d's storage")
}
}

// TestStateRevertAccountNullMarker tests the scenario where account x did not
// exist before and was created during transition w; reverting w will retain
// an x=nil entry in the set.
func TestStateRevertAccountNullMarker(t *testing.T) {
a := newStates(nil, nil, false) // empty initial state
b := newStates(
map[common.Hash][]byte{
{0xa}: {0xa},
},
nil,
false,
)
a.merge(b) // create account 0xa
a.revertTo(
map[common.Hash][]byte{
{0xa}: nil,
},
nil,
) // revert the transition b
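	// The reverted account is expected to remain in the set as a null marker
	// (nil value), denoting that it did not exist in the original state.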
blob, exist := a.account(common.Hash{0xa})
if !exist {
t.Fatal("null marker is not found")
}
if len(blob) != 0 {
t.Fatalf("Unexpected value for account, %v", blob)
}
}

// TestStateRevertStorageNullMarker tests the scenario where slot x did not
// exist before and was created during transition w; reverting w will retain
// an x=nil entry in the set.
func TestStateRevertStorageNullMarker(t *testing.T) {
a := newStates(map[common.Hash][]byte{
{0xa}: {0xa},
}, nil, false) // initial state with account 0xa
b := newStates(
nil,
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x1},
},
},
false,
)
a.merge(b) // create slot 0x1
a.revertTo(
nil,
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: nil,
},
},
) // revert the transition b
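	// The reverted slot is expected to remain in the set as a null marker
	// (nil value), denoting that it did not exist before transition b.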
blob, exist := a.storage(common.Hash{0xa}, common.Hash{0x1})
if !exist {
t.Fatal("null marker is not found")
}
if len(blob) != 0 {
t.Fatalf("Unexpected value for storage slot, %v", blob)
}
}
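
// TestStatesEncode verifies that a plain state set survives an RLP
// encode/decode round trip, in both hashed and raw storage key modes.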
func TestStatesEncode(t *testing.T) {
testStatesEncode(t, false)
testStatesEncode(t, true)
}
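
// testStatesEncode encodes a state set, decodes it back and checks that the
// account data, storage data and rawStorageKey flag are preserved.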
func testStatesEncode(t *testing.T, rawStorageKey bool) {
s := newStates(
map[common.Hash][]byte{
{0x1}: {0x1},
},
map[common.Hash]map[common.Hash][]byte{
{0x1}: {
common.Hash{0x1}: {0x1},
},
},
rawStorageKey,
)
buf := bytes.NewBuffer(nil)
if err := s.encode(buf); err != nil {
t.Fatalf("Failed to encode states, %v", err)
}
var dec stateSet
if err := dec.decode(rlp.NewStream(buf, 0)); err != nil {
t.Fatalf("Failed to decode states, %v", err)
}
if !reflect.DeepEqual(s.accountData, dec.accountData) {
t.Fatal("Unexpected account data")
}
if !reflect.DeepEqual(s.storageData, dec.storageData) {
t.Fatal("Unexpected storage data")
}
if s.rawStorageKey != dec.rawStorageKey {
t.Fatal("Unexpected rawStorageKey flag")
}
}
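
// TestStateWithOriginEncode verifies that a state set carrying the original
// (pre-transition) values survives an RLP encode/decode round trip, in both
// hashed and raw storage key modes.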
func TestStateWithOriginEncode(t *testing.T) {
testStateWithOriginEncode(t, false)
testStateWithOriginEncode(t, true)
}
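
// testStateWithOriginEncode encodes a StateSetWithOrigin, decodes it back and
// checks that the mutated states, the original values and the rawStorageKey
// flag are preserved.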
func testStateWithOriginEncode(t *testing.T, rawStorageKey bool) {
s := NewStateSetWithOrigin(
map[common.Hash][]byte{
{0x1}: {0x1},
},
map[common.Hash]map[common.Hash][]byte{
{0x1}: {
common.Hash{0x1}: {0x1},
},
},
map[common.Address][]byte{
{0x1}: {0x1},
},
map[common.Address]map[common.Hash][]byte{
{0x1}: {
common.Hash{0x1}: {0x1},
},
},
rawStorageKey,
)
buf := bytes.NewBuffer(nil)
if err := s.encode(buf); err != nil {
t.Fatalf("Failed to encode states, %v", err)
}
var dec StateSetWithOrigin
if err := dec.decode(rlp.NewStream(buf, 0)); err != nil {
t.Fatalf("Failed to decode states, %v", err)
}
if !reflect.DeepEqual(s.accountData, dec.accountData) {
t.Fatal("Unexpected account data")
}
if !reflect.DeepEqual(s.storageData, dec.storageData) {
t.Fatal("Unexpected storage data")
}
if !reflect.DeepEqual(s.accountOrigin, dec.accountOrigin) {
t.Fatal("Unexpected account origin data")
}
if !reflect.DeepEqual(s.storageOrigin, dec.storageOrigin) {
t.Fatal("Unexpected storage origin data")
}
if s.rawStorageKey != dec.rawStorageKey {
t.Fatal("Unexpected rawStorageKey flag")
}
}
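
// TestStateSizeTracking verifies that the tracked size of a state set matches
// the expected byte count after construction and after merging another set.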
func TestStateSizeTracking(t *testing.T) {
expSizeA := 3*(common.HashLength+1) + /* account data */
2*(2*common.HashLength+1) + /* storage data of 0xa */
2*common.HashLength + 3 + /* storage data of 0xb */
2*common.HashLength + 1 /* storage data of 0xc */
a := newStates(
map[common.Hash][]byte{
{0xa}: {0xa0}, // common.HashLength+1
{0xb}: {0xb0}, // common.HashLength+1
{0xc}: {0xc0}, // common.HashLength+1
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x10}, // 2*common.HashLength+1
common.Hash{0x2}: {0x20}, // 2*common.HashLength+1
},
{0xb}: {
common.Hash{0x1}: {0x10, 0x11, 0x12}, // 2*common.HashLength+3
},
{0xc}: {
common.Hash{0x1}: {0x10}, // 2*common.HashLength+1
},
},
false,
)
if a.size != uint64(expSizeA) {
t.Fatalf("Unexpected size, want: %d, got: %d", expSizeA, a.size)
}
expSizeB := common.HashLength + 2 + common.HashLength + 3 + common.HashLength + /* account data */
2*common.HashLength + 3 + 2*common.HashLength + 2 + /* storage data of 0xa */
2*common.HashLength + 2 + 2*common.HashLength + 2 + /* storage data of 0xb */
3*2*common.HashLength /* storage data of 0xc */
b := newStates(
map[common.Hash][]byte{
{0xa}: {0xa1, 0xa1}, // common.HashLength+2
{0xb}: {0xb1, 0xb1, 0xb1}, // common.HashLength+3
{0xc}: nil, // common.HashLength, account deletion
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x11, 0x11, 0x11}, // 2*common.HashLength+3
common.Hash{0x3}: {0x31, 0x31}, // 2*common.HashLength+2, slot creation
},
{0xb}: {
common.Hash{0x1}: {0x11, 0x11}, // 2*common.HashLength+2
common.Hash{0x2}: {0x22, 0x22}, // 2*common.HashLength+2, slot creation
},
// The storage of 0xc is entirely removed
{0xc}: {
common.Hash{0x1}: nil, // 2*common.HashLength, slot deletion
common.Hash{0x2}: nil, // 2*common.HashLength, slot deletion
common.Hash{0x3}: nil, // 2*common.HashLength, slot deletion
},
},
false,
)
if b.size != uint64(expSizeB) {
t.Fatalf("Unexpected size, want: %d, got: %d", expSizeB, b.size)
}
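	// Merge the changes of set b into a and verify the aggregated size. Per
	// the accounting noted in the comments above, each storage entry costs
	// 2*common.HashLength plus the length of its value (nil marks a deletion
	// and carries no data), so the deltas below track how every merged entry
	// shifts a.size.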
a.merge(b)
mergeSize := expSizeA + 1 /* account a data change */ + 2 /* account b data change */ - 1 /* account c data change */
mergeSize += 2*common.HashLength + 2 + 2 /* storage a change */
mergeSize += 2*common.HashLength + 2 - 1 /* storage b change */
mergeSize += 2*2*common.HashLength - 1 /* storage data removal of 0xc */
if a.size != uint64(mergeSize) {
t.Fatalf("Unexpected size, want: %d, got: %d", mergeSize, a.size)
}
	// Revert the set to its original status
a.revertTo(
map[common.Hash][]byte{
{0xa}: {0xa0},
{0xb}: {0xb0},
{0xc}: {0xc0},
},
map[common.Hash]map[common.Hash][]byte{
{0xa}: {
common.Hash{0x1}: {0x10},
common.Hash{0x2}: {0x20},
common.Hash{0x3}: nil, // revert slot creation
},
{0xb}: {
common.Hash{0x1}: {0x10, 0x11, 0x12},
common.Hash{0x2}: nil, // revert slot creation
},
{0xc}: {
common.Hash{0x1}: {0x10},
common.Hash{0x2}: {0x20}, // resurrected slot
common.Hash{0x3}: {0x30}, // resurrected slot
},
},
)
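	// After the revert, the nil entries written for a.3 and b.2 are kept as
	// delete-markers (key cost only), while the resurrected slots c.2 and c.3
	// contribute their key cost plus one byte of data each.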
revertSize := expSizeA + 2*common.HashLength + 2*common.HashLength // delete-marker of a.3 and b.2 slot
revertSize += 2 * (2*common.HashLength + 1) // resurrected slot, c.2, c.3
if a.size != uint64(revertSize) {
t.Fatalf("Unexpected size, want: %d, got: %d", revertSize, a.size)
}
}