This enables the following linters:
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a plain string as the key type in context.WithValue is apparently wrong; one should use a custom key type to prevent collisions between different places in the hierarchy of callers (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings for using a weak random generator are squashed, since we use a lot of randomness without needing cryptographic guarantees.
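As a rough sketch of the last two points (the package, type and function names here are made up for illustration, not taken from the geth code), an unexported key type keeps context values collision-free, and an explicit nolint directive is one way a gosec warning about math/rand can be silenced:

package example

import (
	"context"
	"math/rand"
)

// ctxKey is an unexported key type, so values stored by this package cannot
// collide with string keys used elsewhere in the hierarchy of callers.
type ctxKey struct{}

// withPeerID stores an identifier under the package-private key type.
func withPeerID(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, ctxKey{}, id)
}

// peerIDFrom retrieves the identifier; only this package can construct the key.
func peerIDFrom(ctx context.Context) (string, bool) {
	id, ok := ctx.Value(ctxKey{}).(string)
	return id, ok
}

// jitter deliberately uses math/rand: no cryptographic guarantees are needed,
// so the gosec warning about a weak random generator is silenced explicitly.
func jitter(max int) int {
	return rand.Intn(max) //nolint:gosec
}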
Previously the freezer was only used for storing ancient chain data, although it can obviously be used for more. This PR unties the chain data from the freezer, keeps the minimal freezer structure and moves all other logic (like incrementally freezing block data) into a separate structure called ChainFreezer.
This PR also extends the database interface by adding a new ancient-store function, AncientDatadir, which returns the root directory of the ancient store. The ancient root directory can be used when we want to open other ancient stores (e.g. a reverse diff freezer).
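A rough sketch of the shape of this extension (the interface below is trimmed down and illustrative, not the real ethdb method set; the subdirectory layout is hypothetical):

package sketch

import "path/filepath"

// AncientStore is a trimmed-down, illustrative ancient-store interface;
// only the new AncientDatadir method matters for this sketch.
type AncientStore interface {
	// AncientDatadir returns the root directory of the ancient store.
	AncientDatadir() (string, error)
}

// siblingFreezerPath shows how the root directory could be used to locate
// another ancient store, e.g. a reverse diff freezer.
func siblingFreezerPath(db AncientStore, name string) (string, error) {
	root, err := db.AncientDatadir()
	if err != nil {
		return "", err
	}
	return filepath.Join(root, name), nil
}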
This commit replaces ioutil.TempDir with t.TempDir in tests. The
directory created by t.TempDir is automatically removed when the test
and all its subtests complete.
Prior to this commit, temporary directories created using ioutil.TempDir
had to be removed manually by calling os.RemoveAll, which was omitted in
some tests. The error handling boilerplate, e.g.
defer func() {
	if err := os.RemoveAll(dir); err != nil {
		t.Fatal(err)
	}
}()
is also tedious, but t.TempDir handles this for us nicely.
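For comparison, the replacement pattern looks like this (the test name and file contents are placeholders):

package example

import (
	"os"
	"path/filepath"
	"testing"
)

// TestTempDirExample uses t.TempDir: the directory is created for this test
// and removed automatically once the test and all its subtests complete, so
// no manual os.RemoveAll is needed.
func TestTempDirExample(t *testing.T) {
	dir := t.TempDir()

	// Use the directory as usual; cleanup is handled by the testing package.
	if err := os.WriteFile(filepath.Join(dir, "data"), []byte("x"), 0o600); err != nil {
		t.Fatal(err)
	}
}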
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
* cmd,core: add simple legacy receipt converter
* core/rawdb: use forEach in migrate
* core/rawdb: batch reads in forEach
* core/rawdb: make forEach anonymous fn
* cmd/geth: check for legacy receipts on node startup
* fix err msg
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
* fix log
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
* fix some review comments
* add warning to cmd
* drop isLegacy fn from migrateTable params
* add test for windows rename
* test replacing in windows case
* minor fix
* sanity check for tail-deletion
* add log before moving files around
* speed-up hack for mainnet
* fix mainnet check, use networkid instead
* check mainnet genesis
* review fixes
* resume previous migration attempt
* core/rawdb: lint fix
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/rawdb, cmd, ethdb, eth: implement freezer tail deletion
* core/rawdb: address comments from martin and sina
* core/rawdb: fixes cornercase in tail deletion
* core/rawdb: separate metadata into a standalone file
* core/rawdb: remove unused code
* core/rawdb: add random test
* core/rawdb: polish code
* core/rawdb: fsync meta file before manipulating the index
* core/rawdb: fix typo
* core/rawdb: address comments
* freezer: add readonly flag to table
* freezer: enforce readonly in table repair
* freezer: enforce readonly in newFreezer
* minor fix
* minor
* core/rawdb: test that writing during readonly fails
* rm unused log
* check readonly on batch append
* minor
* Revert "check readonly on batch append"
This reverts commit 2ddb5ec4ba.
* review fixes
* minor test refactor
* attempt at fixing windows issue
* add comment re windows sync issue
* k->kind
* open readonly db for genesis check
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change is a rewrite of the freezer code.
When writing ancient chain data to the freezer, the previous version first encoded each
individual item to a temporary buffer, then wrote the buffer. For small item sizes (for
example, in the block hash freezer table), this strategy caused a lot of system calls for
writing tiny chunks of data. It also allocated a lot of temporary []byte buffers.
In the new version, we instead encode multiple items into a reusable batch buffer, which
is then written to the file all at once. This avoids performing a system call for every
inserted item.
To make the internal batching work, the ancient database API had to be changed. While
integrating this new API in BlockChain.InsertReceiptChain, additional optimizations were
also added there.
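As a simplified sketch of the batching idea (this is not the actual freezer code; the type, the length-prefixed framing and the flush policy are purely illustrative), items are appended to one reusable buffer and flushed to the file in a single write instead of one write per item:

package sketch

import (
	"encoding/binary"
	"os"
)

// batchWriter accumulates encoded items in a reusable buffer and writes
// them out with a single system call per flush.
type batchWriter struct {
	file *os.File
	buf  []byte // reused across batches to avoid per-item allocations
}

// append adds one encoded item to the batch buffer.
func (w *batchWriter) append(item []byte) {
	// Illustrative length-prefixed framing; the real freezer tracks item
	// boundaries in separate index files rather than in-band prefixes.
	var lenBuf [4]byte
	binary.BigEndian.PutUint32(lenBuf[:], uint32(len(item)))
	w.buf = append(w.buf, lenBuf[:]...)
	w.buf = append(w.buf, item...)
}

// flush writes the whole batch at once and resets the buffer for reuse.
func (w *batchWriter) flush() error {
	if len(w.buf) == 0 {
		return nil
	}
	_, err := w.file.Write(w.buf)
	w.buf = w.buf[:0] // keep capacity, drop contents
	return err
}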
Co-authored-by: Felix Lange <fjl@twurst.com>