// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

// Package discover implements the Node Discovery Protocol.
//
// The Node Discovery protocol provides a way to find RLPx nodes that
// can be connected to. It uses a Kademlia-like protocol to maintain a
// distributed database of the IDs and endpoints of all listening
// nodes.
package discover

import (
	"context"
	"fmt"
	"net/netip"
	"slices"
	"sync"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/mclock"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/metrics"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/netutil"
)

const (
	alpha           = 3  // Kademlia concurrency factor
	bucketSize      = 16 // Kademlia bucket size
	maxReplacements = 10 // Size of per-bucket replacement list

	// We keep buckets for the upper 1/15 of distances because
	// it's very unlikely we'll ever encounter a node that's closer.
	hashBits          = len(common.Hash{}) * 8
	nBuckets          = hashBits / 15       // Number of buckets
	bucketMinDistance = hashBits - nBuckets // Log distance of closest bucket

	// IP address limits.
	bucketIPLimit, bucketSubnet = 2, 24 // at most 2 addresses from the same /24
	tableIPLimit, tableSubnet   = 10, 24

	seedMinTableTime = 5 * time.Minute
	seedCount        = 30
	seedMaxAge       = 5 * 24 * time.Hour
)
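
// Worked numbers for the constants above: node IDs hash to a 32-byte value,
// so hashBits = 32*8 = 256, nBuckets = 256/15 = 17, and
// bucketMinDistance = 256-17 = 239. Dedicated buckets therefore exist only
// for the 17 largest log distances; closer nodes are extremely unlikely.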

// Table is the 'node table', a Kademlia-like index of neighbor nodes. The table keeps
// itself up-to-date by verifying the liveness of neighbors and requesting their node
// records when announcements of a new record version are received.
type Table struct {
	mutex        sync.Mutex        // protects buckets, bucket content, nursery, rand
	buckets      [nBuckets]*bucket // index of known nodes by distance
	nursery      []*enode.Node     // bootstrap nodes
	rand         reseedingRandom   // source of randomness, periodically reseeded
	ips          netutil.DistinctNetSet
	revalidation tableRevalidation

	db  *enode.DB // database of known nodes
	net transport
	cfg Config
	log log.Logger

	// loop channels
	refreshReq      chan chan struct{}
	revalResponseCh chan revalidationResponse
	addNodeCh       chan addNodeOp
	addNodeHandled  chan bool
	trackRequestCh  chan trackRequestOp
	initDone        chan struct{}
	closeReq        chan struct{}
	closed          chan struct{}

	nodeAddedHook   func(*bucket, *tableNode)
	nodeRemovedHook func(*bucket, *tableNode)
}

// transport is implemented by the UDP transports.
type transport interface {
	Self() *enode.Node
	RequestENR(*enode.Node) (*enode.Node, error)
	lookupRandom() []*enode.Node
	lookupSelf() []*enode.Node
	ping(*enode.Node) (seq uint64, err error)
}
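
// A minimal stub satisfying this interface might look like the sketch below
// (the name nullTransport is hypothetical and is not defined in this package):
//
//	type nullTransport struct{ self *enode.Node }
//
//	func (t nullTransport) Self() *enode.Node                             { return t.self }
//	func (t nullTransport) RequestENR(n *enode.Node) (*enode.Node, error) { return n, nil }
//	func (t nullTransport) lookupRandom() []*enode.Node                   { return nil }
//	func (t nullTransport) lookupSelf() []*enode.Node                     { return nil }
//	func (t nullTransport) ping(*enode.Node) (seq uint64, err error)      { return 0, nil }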

// bucket contains nodes, ordered by their last activity. the entry
// that was most recently active is the first element in entries.
type bucket struct {
	entries      []*tableNode // live entries, sorted by time of last contact
	replacements []*tableNode // recently seen nodes to be used if revalidation fails
	ips          netutil.DistinctNetSet
	index        int
}

// addNodeOp is sent to the table loop to request insertion of a node.
type addNodeOp struct {
	node         *enode.Node
	isInbound    bool
	forceSetLive bool // for tests
}

// trackRequestOp is sent to the table loop to report the outcome of a request
// made to a node, along with any nodes the node returned.
type trackRequestOp struct {
	node       *enode.Node
	foundNodes []*enode.Node
	success    bool
}

func newTable(t transport, db *enode.DB, cfg Config) (*Table, error) {
	cfg = cfg.withDefaults()
	tab := &Table{
		net:             t,
		db:              db,
		cfg:             cfg,
		log:             cfg.Log,
		refreshReq:      make(chan chan struct{}),
		revalResponseCh: make(chan revalidationResponse),
		addNodeCh:       make(chan addNodeOp),
		addNodeHandled:  make(chan bool),
		trackRequestCh:  make(chan trackRequestOp),
		initDone:        make(chan struct{}),
		closeReq:        make(chan struct{}),
		closed:          make(chan struct{}),
		ips:             netutil.DistinctNetSet{Subnet: tableSubnet, Limit: tableIPLimit},
	}
	for i := range tab.buckets {
		tab.buckets[i] = &bucket{
			index: i,
			ips:   netutil.DistinctNetSet{Subnet: bucketSubnet, Limit: bucketIPLimit},
		}
	}
	tab.rand.seed()
	tab.revalidation.init(&cfg)

	// initial table content
	if err := tab.setFallbackNodes(cfg.Bootnodes); err != nil {
		return nil, err
	}
	tab.loadSeedNodes()

	return tab, nil
}
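
// newTable only constructs the table; the owning transport is expected to
// drive it afterwards, typically along the lines of the sketch below
// (illustrative only, the exact call sites live in the UDP transport code):
//
//	tab, err := newTable(t, db, cfg)
//	if err != nil {
//		return err
//	}
//	go tab.loop()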

// Nodes returns all nodes contained in the table.
func (tab *Table) Nodes() [][]BucketNode {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	nodes := make([][]BucketNode, len(tab.buckets))
	for i, b := range &tab.buckets {
		nodes[i] = make([]BucketNode, len(b.entries))
		for j, n := range b.entries {
			nodes[i][j] = BucketNode{
				Node:          n.Node,
				Checks:        int(n.livenessChecks),
				Live:          n.isValidatedLive,
				AddedToTable:  n.addedToTable,
				AddedToBucket: n.addedToBucket,
			}
		}
	}
	return nodes
}
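
// For illustration, a caller (such as a debug endpoint dumping the table)
// might flatten the result of Nodes and keep only liveness-checked entries:
//
//	var live []*enode.Node
//	for _, bucket := range tab.Nodes() {
//		for _, bn := range bucket {
//			if bn.Live {
//				live = append(live, bn.Node)
//			}
//		}
//	}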

func (tab *Table) self() *enode.Node {
	return tab.net.Self()
}

// getNode returns the node with the given ID or nil if it isn't in the table.
func (tab *Table) getNode(id enode.ID) *enode.Node {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	b := tab.bucket(id)
	for _, e := range b.entries {
		if e.ID() == id {
			return e.Node
		}
	}
	return nil
}

// close terminates the network listener and flushes the node database.
func (tab *Table) close() {
	close(tab.closeReq)
	<-tab.closed
}

// setFallbackNodes sets the initial points of contact. These nodes
// are used to connect to the network if the table is empty and there
// are no known nodes in the database.
func (tab *Table) setFallbackNodes(nodes []*enode.Node) error {
	nursery := make([]*enode.Node, 0, len(nodes))
	for _, n := range nodes {
		if err := n.ValidateComplete(); err != nil {
			return fmt.Errorf("bad bootstrap node %q: %v", n, err)
		}
		if tab.cfg.NetRestrict != nil && !tab.cfg.NetRestrict.ContainsAddr(n.IPAddr()) {
			tab.log.Error("Bootstrap node filtered by netrestrict", "id", n.ID(), "ip", n.IPAddr())
			continue
		}
		nursery = append(nursery, n)
	}
	tab.nursery = nursery
	return nil
}

// isInitDone returns whether the table's initial seeding procedure has completed.
func (tab *Table) isInitDone() bool {
	select {
	case <-tab.initDone:
		return true
	default:
		return false
	}
}

func (tab *Table) refresh() <-chan struct{} {
	done := make(chan struct{})
	select {
	case tab.refreshReq <- done:
	case <-tab.closeReq:
		close(done)
	}
	return done
}

// findnodeByID returns the n nodes in the table that are closest to the given id.
// This is used by the FINDNODE/v4 handler.
//
// The preferLive parameter says whether the caller wants liveness-checked results. If
// preferLive is true and the table contains any verified nodes, the result will not
// contain unverified nodes. However, if there are no verified nodes at all, the result
// will contain unverified nodes.
func (tab *Table) findnodeByID(target enode.ID, nresults int, preferLive bool) *nodesByDistance {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	// Scan all buckets. There might be a better way to do this, but there aren't that many
	// buckets, so this solution should be fine. The worst-case complexity of this loop
	// is O(tab.len() * nresults).
	nodes := &nodesByDistance{target: target}
	liveNodes := &nodesByDistance{target: target}
	for _, b := range &tab.buckets {
		for _, n := range b.entries {
			nodes.push(n.Node, nresults)
			if preferLive && n.isValidatedLive {
				liveNodes.push(n.Node, nresults)
			}
		}
	}

	if preferLive && len(liveNodes.entries) > 0 {
		return liveNodes
	}
	return nodes
}

// appendBucketNodes adds nodes at the given distance to the result slice.
// This is used by the FINDNODE/v5 handler.
func (tab *Table) appendBucketNodes(dist uint, result []*enode.Node, checkLive bool) []*enode.Node {
	if dist > 256 {
		return result
	}
	if dist == 0 {
		return append(result, tab.self())
	}

	tab.mutex.Lock()
	for _, n := range tab.bucketAtDistance(int(dist)).entries {
		if !checkLive || n.isValidatedLive {
			result = append(result, n.Node)
		}
	}
	tab.mutex.Unlock()

	// Shuffle result to avoid always returning same nodes in FINDNODE/v5.
	tab.rand.Shuffle(len(result), func(i, j int) {
		result[i], result[j] = result[j], result[i]
	})
	return result
}

// len returns the number of nodes in the table.
func (tab *Table) len() (n int) {
	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	for _, b := range &tab.buckets {
		n += len(b.entries)
	}
	return n
}

// addFoundNode adds a node which may not be live. If the bucket has space available,
// adding the node succeeds immediately. Otherwise, the node is added to the replacements
// list.
//
// The caller must not hold tab.mutex.
func (tab *Table) addFoundNode(n *enode.Node, forceSetLive bool) bool {
	op := addNodeOp{node: n, isInbound: false, forceSetLive: forceSetLive}
	select {
	case tab.addNodeCh <- op:
		return <-tab.addNodeHandled
	case <-tab.closeReq:
		return false
	}
}

// addInboundNode adds a node from an inbound contact. If the bucket has no space, the
// node is added to the replacements list.
//
// There is an additional safety measure: if the table is still initializing the node is
// not added. This prevents an attack where the table could be filled by just sending ping
// repeatedly.
//
// The caller must not hold tab.mutex.
func (tab *Table) addInboundNode(n *enode.Node) bool {
	op := addNodeOp{node: n, isInbound: true}
	select {
	case tab.addNodeCh <- op:
		return <-tab.addNodeHandled
	case <-tab.closeReq:
		return false
	}
}

func (tab *Table) trackRequest(n *enode.Node, success bool, foundNodes []*enode.Node) {
	op := trackRequestOp{n, foundNodes, success}
	select {
	case tab.trackRequestCh <- op:
	case <-tab.closeReq:
	}
}
|
|
|
|
|
|
|
|
// loop is the main loop of Table.
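// It schedules node revalidation, serves addNode and trackRequest operations sent by
// other goroutines, and runs the periodic bucket refresh.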
func (tab *Table) loop() {
	var (
		refresh         = time.NewTimer(tab.nextRefreshTime())
		refreshDone     = make(chan struct{})           // where doRefresh reports completion
		waiting         = []chan struct{}{tab.initDone} // holds waiting callers while doRefresh runs
		revalTimer      = mclock.NewAlarm(tab.cfg.Clock)
		reseedRandTimer = time.NewTicker(10 * time.Minute)
	)
	defer refresh.Stop()
	defer revalTimer.Stop()
	defer reseedRandTimer.Stop()

	// Start initial refresh.
	go tab.doRefresh(refreshDone)

loop:
	for {
		nextTime := tab.revalidation.run(tab, tab.cfg.Clock.Now())
		revalTimer.Schedule(nextTime)

		select {
		case <-reseedRandTimer.C:
			tab.rand.seed()

		case <-revalTimer.C():

		case r := <-tab.revalResponseCh:
			tab.revalidation.handleResponse(tab, r)

		case op := <-tab.addNodeCh:
			tab.mutex.Lock()
			ok := tab.handleAddNode(op)
			tab.mutex.Unlock()
			tab.addNodeHandled <- ok

		case op := <-tab.trackRequestCh:
			tab.handleTrackRequest(op)

		case <-refresh.C:
			if refreshDone == nil {
				refreshDone = make(chan struct{})
				go tab.doRefresh(refreshDone)
			}

		case req := <-tab.refreshReq:
			waiting = append(waiting, req)
			if refreshDone == nil {
				refreshDone = make(chan struct{})
				go tab.doRefresh(refreshDone)
			}

		case <-refreshDone:
			for _, ch := range waiting {
				close(ch)
			}
			waiting, refreshDone = nil, nil
			refresh.Reset(tab.nextRefreshTime())

		case <-tab.closeReq:
			break loop
		}
	}

	if refreshDone != nil {
		<-refreshDone
	}
	for _, ch := range waiting {
		close(ch)
	}
	close(tab.closed)
}

// doRefresh performs a lookup for a random target to keep buckets full. Seed nodes are
// inserted if the table is empty (initial bootstrap or discarded faulty peers).
func (tab *Table) doRefresh(done chan struct{}) {
	defer close(done)

	// Load nodes from the database and insert
	// them. This should yield a few previously seen nodes that are
	// (hopefully) still alive.
	tab.loadSeedNodes()

	// Run self lookup to discover new neighbor nodes.
	tab.net.lookupSelf()

	// The Kademlia paper specifies that the bucket refresh should
	// perform a lookup in the least recently used bucket. We cannot
	// adhere to this because the findnode target is a 512-bit value
	// (not hash-sized) and it is not easily possible to generate a
	// sha3 preimage that falls into a chosen bucket.
	// We perform a few lookups with a random target instead.
	for i := 0; i < 3; i++ {
		tab.net.lookupRandom()
	}
}

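// loadSeedNodes adds previously seen nodes from the database, plus the bootstrap nodes
// (the nursery), to the table.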
func (tab *Table) loadSeedNodes() {
	seeds := tab.db.QuerySeeds(seedCount, seedMaxAge)
	seeds = append(seeds, tab.nursery...)
	for i := range seeds {
		seed := seeds[i]
		if tab.log.Enabled(context.Background(), log.LevelTrace) {
			age := time.Since(tab.db.LastPongReceived(seed.ID(), seed.IPAddr()))
			addr, _ := seed.UDPEndpoint()
			tab.log.Trace("Found seed node in database", "id", seed.ID(), "addr", addr, "age", age)
		}
		tab.mutex.Lock()
		tab.handleAddNode(addNodeOp{node: seed, isInbound: false})
		tab.mutex.Unlock()
	}
}

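// nextRefreshTime returns a random duration in [RefreshInterval/2, RefreshInterval),
// adding jitter to the periodic refresh schedule.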
func (tab *Table) nextRefreshTime() time.Duration {
	half := tab.cfg.RefreshInterval / 2
	return half + time.Duration(tab.rand.Int63n(int64(half)))
}

// bucket returns the bucket for the given node ID hash.
func (tab *Table) bucket(id enode.ID) *bucket {
	d := enode.LogDist(tab.self().ID(), id)
	return tab.bucketAtDistance(d)
}

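// bucketAtDistance returns the bucket for the given log-distance. Distances up to
// bucketMinDistance all map to the first bucket.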
func (tab *Table) bucketAtDistance(d int) *bucket {
	if d <= bucketMinDistance {
		return tab.buckets[0]
	}
	return tab.buckets[d-bucketMinDistance-1]
}

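// addIP registers ip in the table-wide and per-bucket IP limit sets. It returns false
// if one of the limits is exceeded. LAN addresses are exempt from the limits.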
func (tab *Table) addIP(b *bucket, ip netip.Addr) bool {
	if !ip.IsValid() || ip.IsUnspecified() {
		return false // Nodes without IP cannot be added.
	}
	if netutil.AddrIsLAN(ip) {
		return true
	}
	if !tab.ips.AddAddr(ip) {
		tab.log.Debug("IP exceeds table limit", "ip", ip)
		return false
	}
	if !b.ips.AddAddr(ip) {
		tab.log.Debug("IP exceeds bucket limit", "ip", ip)
		tab.ips.RemoveAddr(ip)
		return false
	}
	return true
}

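// removeIP releases ip from the table-wide and per-bucket IP limit sets.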
func (tab *Table) removeIP(b *bucket, ip netip.Addr) {
	if netutil.AddrIsLAN(ip) {
		return
	}
	tab.ips.RemoveAddr(ip)
	b.ips.RemoveAddr(ip)
}

// handleAddNode adds the node in the request to the table, if there is space.
// The caller must hold tab.mutex.
func (tab *Table) handleAddNode(req addNodeOp) bool {
	if req.node.ID() == tab.self().ID() {
		return false
	}
	// For nodes from inbound contact, there is an additional safety measure: if the table
	// is still initializing, the node is not added.
	if req.isInbound && !tab.isInitDone() {
		return false
	}

	b := tab.bucket(req.node.ID())
	n, _ := tab.bumpInBucket(b, req.node, req.isInbound)
	if n != nil {
		// Already in bucket.
		return false
	}
	if len(b.entries) >= bucketSize {
		// Bucket full, maybe add as replacement.
		tab.addReplacement(b, req.node)
		return false
	}
	if !tab.addIP(b, req.node.IPAddr()) {
		// Can't add: IP limit reached.
		return false
	}

	// Add to bucket.
	wn := &tableNode{Node: req.node}
	if req.forceSetLive {
		wn.livenessChecks = 1
		wn.isValidatedLive = true
	}
	b.entries = append(b.entries, wn)
	b.replacements = deleteNode(b.replacements, wn.ID())
	tab.nodeAdded(b, wn)
	return true
}

// addReplacement adds n to the replacement cache of bucket b.
func (tab *Table) addReplacement(b *bucket, n *enode.Node) {
	if containsID(b.replacements, n.ID()) {
		// TODO: update ENR
		return
	}
	if !tab.addIP(b, n.IPAddr()) {
		return
	}

	wn := &tableNode{Node: n, addedToTable: time.Now()}
	var removed *tableNode
	b.replacements, removed = pushNode(b.replacements, wn, maxReplacements)
	if removed != nil {
		tab.removeIP(b, removed.IPAddr())
	}
}

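// nodeAdded performs bookkeeping when n has been added to bucket b: it sets the add
// timestamps, registers the node for revalidation, invokes the nodeAddedHook, and
// updates the bucket metrics.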
func (tab *Table) nodeAdded(b *bucket, n *tableNode) {
	if n.addedToTable == (time.Time{}) {
		n.addedToTable = time.Now()
	}
	n.addedToBucket = time.Now()
	tab.revalidation.nodeAdded(tab, n)
	if tab.nodeAddedHook != nil {
		tab.nodeAddedHook(b, n)
	}
	if metrics.Enabled {
		bucketsCounter[b.index].Inc(1)
	}
}

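// nodeRemoved performs bookkeeping when n has been removed from bucket b: it unregisters
// the node from revalidation, invokes the nodeRemovedHook, and updates the bucket metrics.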
func (tab *Table) nodeRemoved(b *bucket, n *tableNode) {
	tab.revalidation.nodeRemoved(n)
	if tab.nodeRemovedHook != nil {
		tab.nodeRemovedHook(b, n)
	}
	if metrics.Enabled {
		bucketsCounter[b.index].Dec(1)
	}
}

// deleteInBucket removes the node with the given ID from the table.
// If there are replacement nodes in the bucket, the node is replaced.
func (tab *Table) deleteInBucket(b *bucket, id enode.ID) *tableNode {
	index := slices.IndexFunc(b.entries, func(e *tableNode) bool { return e.ID() == id })
	if index == -1 {
		// Entry has been removed already.
		return nil
	}

	// Remove the node.
	n := b.entries[index]
	b.entries = slices.Delete(b.entries, index, index+1)
	tab.removeIP(b, n.IPAddr())
	tab.nodeRemoved(b, n)

	// Add replacement.
	if len(b.replacements) == 0 {
		tab.log.Debug("Removed dead node", "b", b.index, "id", n.ID(), "ip", n.IPAddr())
		return nil
	}
	rindex := tab.rand.Intn(len(b.replacements))
	rep := b.replacements[rindex]
	b.replacements = slices.Delete(b.replacements, rindex, rindex+1)
	b.entries = append(b.entries, rep)
	tab.nodeAdded(b, rep)
	tab.log.Debug("Replaced dead node", "b", b.index, "id", n.ID(), "ip", n.IPAddr(), "r", rep.ID(), "rip", rep.IPAddr())
	return rep
}

// bumpInBucket updates a node record if it exists in the bucket.
// The second return value reports whether the node's endpoint (IP/port) was updated.
func (tab *Table) bumpInBucket(b *bucket, newRecord *enode.Node, isInbound bool) (n *tableNode, endpointChanged bool) {
	i := slices.IndexFunc(b.entries, func(elem *tableNode) bool {
		return elem.ID() == newRecord.ID()
	})
	if i == -1 {
		return nil, false // not in bucket
	}
	n = b.entries[i]

	// For inbound updates (from the node itself) we accept any change, even if it sets
	// back the sequence number. For found nodes (!isInbound), seq has to advance. Note
	// this check also ensures found discv4 nodes (which always have seq=0) can't be
	// updated.
	if newRecord.Seq() <= n.Seq() && !isInbound {
		return n, false
	}

	// Check endpoint update against IP limits.
	ipchanged := newRecord.IPAddr() != n.IPAddr()
	portchanged := newRecord.UDP() != n.UDP()
	if ipchanged {
		tab.removeIP(b, n.IPAddr())
		if !tab.addIP(b, newRecord.IPAddr()) {
			// The new IP doesn't fit within the limit, so put the previous record back.
			tab.addIP(b, n.IPAddr())
			return n, false
		}
	}

	// Apply update.
	n.Node = newRecord
	if ipchanged || portchanged {
		// Ensure node is revalidated quickly for endpoint changes.
		tab.revalidation.nodeEndpointChanged(tab, n)
		return n, true
	}
	return n, false
}

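// handleTrackRequest updates the findnode failure counter of the node in op, drops the
// node from the table if it has failed too many times, and attempts to add the nodes
// found by the request.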
func (tab *Table) handleTrackRequest(op trackRequestOp) {
	var fails int
	if op.success {
		// Reset failure counter because it counts _consecutive_ failures.
		tab.db.UpdateFindFails(op.node.ID(), op.node.IPAddr(), 0)
	} else {
		fails = tab.db.FindFails(op.node.ID(), op.node.IPAddr())
		fails++
		tab.db.UpdateFindFails(op.node.ID(), op.node.IPAddr(), fails)
	}

	tab.mutex.Lock()
	defer tab.mutex.Unlock()

	b := tab.bucket(op.node.ID())
	// Remove the node from the local table if it fails to return anything useful too
	// many times, but only if there are enough other nodes in the bucket. This latter
	// condition specifically exists to make bootstrapping in smaller test networks more
	// reliable.
	if fails >= maxFindnodeFailures && len(b.entries) >= bucketSize/4 {
		tab.deleteInBucket(b, op.node.ID())
	}

	// Add found nodes.
	for _, n := range op.foundNodes {
		tab.handleAddNode(addNodeOp{n, false, false})
	}
}

// pushNode adds n to the front of list, keeping at most max items.
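// The second return value is the item that was evicted to make room, or nil if the list
// still had space.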
func pushNode(list []*tableNode, n *tableNode, max int) ([]*tableNode, *tableNode) {
	if len(list) < max {
		list = append(list, nil)
	}
	removed := list[len(list)-1]
	copy(list[1:], list)
	list[0] = n
	return list, removed
}