mirror of https://github.com/ethereum/go-ethereum
Tree: 06cbc80754

Branches:
ChrisChinchilla-patch-1
ChrisChinchilla-patch-3
abigen2
appveyorfix
buildbot-testing
ci-i386-disable-cache
docs/fix-pkg-url
documentNavScroll
gh-pages
kaustinen-with-shapella
manual-local-cache
master
msgrater
pa
poc8
release-1.14.9
release/0.9.36
release/1.0.0
release/1.0.1
release/1.1.0
release/1.10
release/1.11
release/1.12
release/1.13
release/1.14
release/1.2.1
release/1.3.0
release/1.3.2
release/1.3.3
release/1.3.4
release/1.3.5
release/1.3.6
release/1.4
release/1.5
release/1.6
release/1.7
release/1.8
release/1.9
revert-25955-goja1
revert-26265-console/eth_call
revert-26290-generic-prque-extended
revert-30495-sim-backend-rollback-tip
s1na-patch-1
s1na-patch-2
test_travis
unbork-master
website
website-fix-list-margin
website-matomo
Tags:
0.2.2
0.3.0
0.3.1
0.5.13
0.5.14
0.5.15
0.5.16
0.5.17
0.5.18
0.5.19
0.9.16
0.9.23
1.2.1
2
PoC6
buildbot-testing-tag-3
poc1
poc5-rc1
poc5-rc10
poc5-rc11
poc5-rc12
poc5-rc2
poc5-rc3
poc5-rc4
poc5-rc6
poc5-rc7
poc5-rc8
poc5-rc9
v0.4.1
v0.4.2
v0.4.3
v0.6.0
v0.6.3
v0.6.4
v0.6.5
v0.6.5-1
v0.6.5-2
v0.6.6
v0.6.7
v0.6.8
v0.7.10
v0.7.10-broken
v0.7.11
v0.8.4
v0.8.4-1
v0.8.5
v0.8.5-2
v0.9.17
v0.9.18
v0.9.20
v0.9.21
v0.9.21.1
v0.9.22
v0.9.23
v0.9.24
v0.9.25
v0.9.26
v0.9.28
v0.9.30
v0.9.32
v0.9.34
v0.9.34-1
v0.9.36
v0.9.38
v0.9.39
v1.0.0
v1.0.1
v1.0.1.1
v1.0.1.2
v1.0.2
v1.0.3
v1.0.4
v1.0.5
v1.1.0
v1.1.1
v1.1.2
v1.1.3
v1.10.0
v1.10.1
v1.10.10
v1.10.11
v1.10.12
v1.10.13
v1.10.14
v1.10.15
v1.10.16
v1.10.17
v1.10.18
v1.10.19
v1.10.2
v1.10.20
v1.10.21
v1.10.22
v1.10.23
v1.10.24
v1.10.25
v1.10.26
v1.10.3
v1.10.4
v1.10.5
v1.10.6
v1.10.7
v1.10.8
v1.10.9
v1.11.0
v1.11.1
v1.11.2
v1.11.3
v1.11.4
v1.11.5
v1.11.6
v1.12.0
v1.12.1
v1.12.2
v1.13.0
v1.13.1
v1.13.10
v1.13.11
v1.13.12
v1.13.13
v1.13.14
v1.13.15
v1.13.2
v1.13.3
v1.13.4
v1.13.5
v1.13.6
v1.13.7
v1.13.8
v1.13.9
v1.14.0
v1.14.10
v1.14.11
v1.14.12
v1.14.2
v1.14.3
v1.14.4
v1.14.5
v1.14.6
v1.14.7
v1.14.8
v1.14.9
v1.2.2
v1.2.3
v1.3.1
v1.3.2
v1.3.3
v1.3.4
v1.3.5
v1.3.6
v1.4.0
v1.4.1
v1.4.10
v1.4.11
v1.4.12
v1.4.13
v1.4.14
v1.4.15
v1.4.16
v1.4.17
v1.4.18
v1.4.19
v1.4.2
v1.4.3
v1.4.4
v1.4.5
v1.4.6
v1.4.7
v1.4.8
v1.4.9
v1.5.0
v1.5.1
v1.5.2
v1.5.3
v1.5.4
v1.5.5
v1.5.6
v1.5.7
v1.5.8
v1.5.9
v1.6.0
v1.6.1
v1.6.2
v1.6.3
v1.6.4
v1.6.5
v1.6.6
v1.6.7
v1.7.0
v1.7.1
v1.7.2
v1.7.3
v1.8.0
v1.8.1
v1.8.10
v1.8.11
v1.8.12
v1.8.13
v1.8.14
v1.8.15
v1.8.16
v1.8.17
v1.8.18
v1.8.19
v1.8.2
v1.8.20
v1.8.21
v1.8.22
v1.8.23
v1.8.24
v1.8.25
v1.8.26
v1.8.27
v1.8.3
v1.8.4
v1.8.5
v1.8.6
v1.8.7
v1.8.8
v1.8.9
v1.9.0
v1.9.1
v1.9.10
v1.9.11
v1.9.12
v1.9.13
v1.9.14
v1.9.15
v1.9.16
v1.9.17
v1.9.18
v1.9.19
v1.9.2
v1.9.20
v1.9.21
v1.9.22
v1.9.23
v1.9.24
v1.9.25
v1.9.3
v1.9.4
v1.9.5
v1.9.6
v1.9.7
v1.9.8
v1.9.9
5 Commits (06cbc80754c3930828cc605f151554bc7ec36eb9)
Daniel Knopik | de6d597679 | 4 months ago

p2p/discover: schedule revalidation also when all nodes are excluded (#30239)

Issue: if `nextTime` has passed but all nodes are excluded, `get` would return `nil`, so `run` would never invoke `schedule`. A timer is then scheduled for the past, since neither `nextTime` value has been updated. This creates a busy loop, because the timer fires immediately.

Fix: with this PR, revalidation is also rescheduled when all nodes are excluded.

Co-authored-by: lightclient <lightclient@protonmail.com>
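The busy loop is easiest to see as a tiny scheduling loop. Below is a minimal Go sketch of the idea, not the actual geth code; the names (`revalidationList`, `get`, `schedule`, `run`) and the counter fields are illustrative assumptions drawn from the commit text.

```go
package main

import (
	"fmt"
	"time"
)

// revalidationList is an illustrative stand-in for the real list type.
type revalidationList struct {
	nextTime time.Time
	interval time.Duration
	excluded int // stand-in: number of currently excluded nodes
	total    int
}

// get returns a node index to revalidate, or -1 if none is eligible.
func (l *revalidationList) get(now time.Time) int {
	if now.Before(l.nextTime) {
		return -1
	}
	if l.excluded == l.total {
		return -1 // nextTime has passed, but every node is excluded
	}
	return 0
}

func (l *revalidationList) schedule(now time.Time) {
	l.nextTime = now.Add(l.interval)
}

func (l *revalidationList) run(now time.Time) {
	if idx := l.get(now); idx >= 0 {
		// ... send PING to the chosen node ...
		l.schedule(now)
		return
	}
	// Before the fix: returning here left nextTime in the past, so the
	// next timer fired immediately and the loop spun. Rescheduling even
	// when every node is excluded breaks the busy loop.
	l.schedule(now)
}

func main() {
	l := &revalidationList{interval: 10 * time.Second, excluded: 3, total: 3}
	l.run(time.Now())
	fmt.Println("next revalidation at:", l.nextTime)
}
```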
Felix Lange | 94a8b296e4 | 6 months ago

p2p/discover: refactor node and endpoint representation (#29844)

Here we clean up internal uses of type discover.node, converting most code to use enode.Node instead. The discover.node type used to be the canonical representation of network hosts before ENR was introduced. Most code worked with *node to avoid conversions when interacting with Table methods. Since *node also contains internal state of Table and is a mutable type, using *node outside of Table code is prone to data races. It's also cleaner not having to wrap/unwrap *enode.Node all the time. discover.node has been renamed to tableNode to clarify its purpose.

While here, we also change most uses of net.UDPAddr into netip.AddrPort. While this is technically a separate refactoring from the *node -> *enode.Node change, it is more convenient because *enode.Node handles IP addresses as netip.Addr. The switch to package netip in discovery would've happened very soon anyway.

The change to netip.AddrPort stops at certain interface points. For example, since package p2p/netutil has not been converted to use netip.Addr yet, we still have to convert to net.IP/net.UDPAddr in a few places.
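As a concrete illustration of those boundary conversions, here is a small standalone snippet using only standard-library calls (net.UDPAddrFromAddrPort and UDPAddr.AddrPort); it shows the conversion pattern, not geth code.

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// Modern representation: a comparable value type, no allocation.
	ap := netip.MustParseAddrPort("203.0.113.7:30303")

	// Convert to the legacy *net.UDPAddr at an interface boundary,
	// e.g. when calling a package that still works with net.IP.
	udp := net.UDPAddrFromAddrPort(ap)
	fmt.Println("legacy:", udp) // 203.0.113.7:30303

	// And back again when receiving from legacy code.
	back := udp.AddrPort()
	fmt.Println("round-trip equal:", back == ap) // true: AddrPort is comparable
}
```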
lightclient | cc22e0cdf0 | 6 months ago

p2p/discover: fix update logic in handleAddNode (#29836)

It seems the semantic differences between addFoundNode and addInboundNode were lost in #29572. My understanding is that addFoundNode is for a node you have not contacted directly (and are unsure is available), whereas addInboundNode is for adding nodes that have contacted the local node, so we can verify they are active. handleAddNode seems to be the consolidation of those two methods, yet it bumped the node in the bucket (updating its IP address) even if the node was not inbound. This PR fixes that. It wasn't originally caught in tests like TestTable_addSeenNode because the manipulation of the node object actually modified the node value used by the test.

New logic is added to reject non-inbound updates unless the sequence number of the (signed) ENR increases. Inbound updates, which are published by the updated node itself, are always accepted. If an inbound update changes the endpoint, the node will be revalidated on an expedited schedule.

Co-authored-by: Felix Lange <fjl@twurst.com>
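A hedged sketch of that acceptance rule follows. The types and names here (`record`, `tableNode`, `accept`) are illustrative, not the real handleAddNode signature; only the decision logic mirrors the commit description.

```go
package main

import "fmt"

// record is a stand-in for a signed ENR: a sequence number plus an endpoint.
type record struct {
	Seq      uint64
	Endpoint string
}

type tableNode struct{ rec record }

// accept applies the fixed update logic: inbound updates (published by the
// node itself) are always taken; found-node updates are taken only when the
// signed record's sequence number increases.
func (n *tableNode) accept(newRec record, inbound bool) (updated, revalidateFast bool) {
	if !inbound && newRec.Seq <= n.rec.Seq {
		return false, false // non-inbound update without a newer ENR: reject
	}
	endpointChanged := newRec.Endpoint != n.rec.Endpoint
	n.rec = newRec
	// An endpoint change from an inbound update triggers expedited revalidation.
	return true, inbound && endpointChanged
}

func main() {
	n := &tableNode{rec: record{Seq: 5, Endpoint: "10.0.0.1:30303"}}

	// A found-node update with a stale sequence number is ignored.
	ok, _ := n.accept(record{Seq: 5, Endpoint: "10.0.0.9:30303"}, false)
	fmt.Println("stale found-node update accepted:", ok) // false

	// An inbound update with a new endpoint is accepted and fast-revalidated.
	ok, fast := n.accept(record{Seq: 6, Endpoint: "10.0.0.9:30303"}, true)
	fmt.Println("inbound accepted:", ok, "expedited revalidation:", fast) // true true
}
```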
Felix Lange | af0a3274be | 6 months ago

p2p/discover: fix crash when revalidated node is removed (#29864)

In #29572, I assumed the revalidation list that the node is contained in could only ever be changed by the outcome of a revalidation request. But it turns out that's not true: if the node gets removed due to FINDNODE failure, it will also be removed from whichever list it is in. This caused a crash.

The invariant is: while a node is in the table, it is always in exactly one of the two lists. So it seems best to store a pointer to the current list within the node itself.
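The shape of that fix can be sketched as below, assuming illustrative names (`tableNode.revalList`, `add`, `remove`): the node carries a pointer to whichever list holds it, so any removal path can unlink it safely.

```go
package main

import "fmt"

type tableNode struct {
	id        string
	revalList *revalidationList // the one list this node is currently in
}

type revalidationList struct {
	name  string
	nodes []*tableNode
}

func (l *revalidationList) add(n *tableNode) {
	l.nodes = append(l.nodes, n)
	n.revalList = l // maintain the invariant: exactly one list while in table
}

// remove unlinks n from whichever list it is in, regardless of why it is
// being removed (failed revalidation, FINDNODE failure, ...).
func remove(n *tableNode) {
	l := n.revalList
	for i, m := range l.nodes {
		if m == n {
			l.nodes = append(l.nodes[:i], l.nodes[i+1:]...)
			break
		}
	}
	n.revalList = nil
}

func main() {
	fast := &revalidationList{name: "fast"}
	n := &tableNode{id: "n1"}
	fast.add(n)
	remove(n) // safe even outside a revalidation response
	fmt.Println("fast list size:", len(fast.nodes)) // 0
}
```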
Felix Lange | 6a9158bb1b | 6 months ago

p2p/discover: improved node revalidation (#29572)

Node discovery periodically revalidates the nodes in its table by sending PING, checking if they are still alive. I recently noticed some issues with the implementation of this process, which can cause strange results such as nodes dropping unexpectedly, certain nodes not getting revalidated often enough, and bad results being returned to incoming FINDNODE queries. In this change, the revalidation process is improved with the following logic (sketched in code below):

- We maintain two 'revalidation lists' containing the table nodes, named 'fast' and 'slow'.
- The process chooses random nodes from each list on a randomized interval, the interval being faster for the 'fast' list, and performs revalidation for the chosen node.
- Whenever a node is newly inserted into the table, it goes into the 'fast' list. Once validation passes, it transfers to the 'slow' list. If a request fails, or the node changes endpoint, it transfers back into 'fast'.
- livenessChecks is incremented by one for successful checks. Unlike the old implementation, we will not drop the node on the first failing check; we instead quickly decay livenessChecks and give it another chance.
- Order of nodes in a bucket doesn't matter anymore.

I am also adding a debug API endpoint to dump the node table content.

Co-authored-by: Martin HS <martin@swende.se>
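The fast/slow scheme above can be summarized in a short sketch. The structure and names are assumptions based on the commit message (including the decay factor of 3, which is illustrative), not the actual table code.

```go
package main

import "fmt"

type tableNode struct {
	id             string
	livenessChecks int
}

type table struct {
	fast, slow []*tableNode // new/unstable nodes vs. proven-live nodes
}

// addNode: newly inserted nodes start on the fast list.
func (t *table) addNode(n *tableNode) {
	t.fast = append(t.fast, n)
}

// removeNode drops n from a list if present, returning the updated slice.
func removeNode(list []*tableNode, n *tableNode) []*tableNode {
	for i, m := range list {
		if m == n {
			return append(list[:i], list[i+1:]...)
		}
	}
	return list
}

// moveTo puts n on exactly one list, unlinking it from wherever it was.
func (t *table) moveTo(dst *[]*tableNode, n *tableNode) {
	t.fast = removeNode(t.fast, n)
	t.slow = removeNode(t.slow, n)
	*dst = append(*dst, n)
}

// revalidated records the outcome of one PING round-trip for n.
func (t *table) revalidated(n *tableNode, ok, endpointChanged bool) {
	if ok && !endpointChanged {
		n.livenessChecks++
		t.moveTo(&t.slow, n) // validated: graduate to the slow list
		return
	}
	// Failure or endpoint change: decay quickly instead of dropping
	// on the first failure, and move back to fast for another chance.
	n.livenessChecks /= 3
	t.moveTo(&t.fast, n)
}

func main() {
	t := &table{}
	n := &tableNode{id: "n1"}
	t.addNode(n)
	t.revalidated(n, true, false)
	fmt.Println("fast:", len(t.fast), "slow:", len(t.slow), "checks:", n.livenessChecks)
}
```

Keeping a node on exactly one of the two slices (moveTo always removes before appending) is what makes the ordering of nodes within a bucket irrelevant, as the commit notes.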