This should fix an occasional test failure in ethclient/simulated.TestForkResendTx.
Inspection of logs revealed the cause of the failure to be that the txpool was not done
reorganizing by the time Fork was called.
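For illustration only (this is not the change itself), the race looks roughly like this from the test's side, assuming a simulated backend `sim`, a previously sent transaction, and a `parentHash` to fork from:

```go
// Hypothetical test-side workaround: wait for the txpool to finish its reorg
// (i.e. actually see the pending transaction) before calling Fork.
ctx := context.Background()
start := time.Now()
for {
	count, err := sim.Client().PendingTransactionCount(ctx)
	if err == nil && count > 0 {
		break // the pool has caught up; forking is now safe
	}
	if time.Since(start) > 2*time.Second {
		t.Fatal("txpool never registered the pending transaction")
	}
	time.Sleep(50 * time.Millisecond)
}
if err := sim.Fork(parentHash); err != nil {
	t.Fatal(err)
}
```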
Here we clean up internal uses of type discover.node, converting most code to use
enode.Node instead. The discover.node type used to be the canonical representation of
network hosts before ENR was introduced. Most code worked with *node to avoid conversions
when interacting with Table methods. Since *node also contains internal Table state and
is a mutable type, using *node outside of Table code is prone to data races. It's also
cleaner not having to wrap/unwrap *enode.Node all the time.
discover.node has been renamed to tableNode to clarify its purpose.
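Roughly, the table-internal type now has this shape (a sketch; field names are approximate, not the exact definition):

```go
// tableNode wraps the immutable *enode.Node together with the mutable
// bookkeeping that only the Table itself is allowed to touch.
type tableNode struct {
	*enode.Node
	addedToTable    time.Time // when the node was first added to a bucket
	livenessChecks  uint      // how often liveness was verified
	isValidatedLive bool      // whether the node has passed revalidation
}

// Code outside the Table now receives plain *enode.Node values.
func unwrapNodes(ns []*tableNode) []*enode.Node {
	result := make([]*enode.Node, len(ns))
	for i, n := range ns {
		result[i] = n.Node
	}
	return result
}
```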
While here, we also change most uses of net.UDPAddr into netip.AddrPort. While this is
technically a separate refactoring from the *node -> *enode.Node change, it is more
convenient because *enode.Node handles IP addresses as netip.Addr. The switch to package
netip in discovery would've happened very soon anyway.
The change to netip.AddrPort stops at certain interface points. For example, since package
p2p/netutil has not been converted to use netip.Addr yet, we still have to convert to
net.IP/net.UDPAddr in a few places.
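The conversions at those boundary points are mechanical and use only the standard library:

```go
import (
	"net"
	"net/netip"
)

// Internal code passes netip.AddrPort around; at the edges we convert back
// to the older net types where needed (e.g. for p2p/netutil).
func toAddrPort(a *net.UDPAddr) netip.AddrPort { return a.AddrPort() }

func toUDPAddr(ap netip.AddrPort) *net.UDPAddr { return net.UDPAddrFromAddrPort(ap) }

func toIP(ap netip.AddrPort) net.IP { return net.IP(ap.Addr().AsSlice()) }
```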
Create the directory before NewKeyStore. This ensures the watcher successfully starts on
the first attempt, and waitWatcherStart functions as intended.
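A minimal sketch of the intended setup order in the test (path and scrypt parameters are placeholders):

```go
// Create the keystore directory up front so the fsnotify watcher inside the
// keystore finds it and starts watching on the first attempt.
dir := filepath.Join(t.TempDir(), "keystore")
if err := os.MkdirAll(dir, 0700); err != nil {
	t.Fatal(err)
}
ks := keystore.NewKeyStore(dir, keystore.LightScryptN, keystore.LightScryptP)
_ = ks // waitWatcherStart can now observe a running watcher right away
```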
It seems the semantic differences between addFoundNode and addInboundNode were lost in
#29572. My understanding is that addFoundNode is for a node you have not contacted directly
(and are unsure whether it is available), whereas addInboundNode is for adding nodes that have
contacted the local node and that we can therefore verify are active.
handleAddNode seems to be the consolidation of those two methods, yet it bumps the node in
the bucket (updating its IP address) even if the node was not inbound. This PR fixes
that. It wasn't originally caught in tests like TestTable_addSeenNode because the
manipulation of the node object actually modified the node value used by the test.
New logic is added to reject non-inbound updates unless the sequence number of the
(signed) ENR increases. Inbound updates, which are published by the updated node itself,
are always accepted. If an inbound update changes the endpoint, the node will be
revalidated on an expedited schedule.
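In sketch form, with invented helper names (the real logic lives in the table's add/update path):

```go
// acceptUpdate decides whether a record update for an existing table entry is
// applied. Inbound updates are signed and published by the node itself, so
// they are always accepted; any other update must raise the ENR sequence.
func acceptUpdate(prev, next *enode.Node, inbound bool) bool {
	if inbound {
		return true
	}
	return next.Seq() > prev.Seq()
}
```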
Co-authored-by: Felix Lange <fjl@twurst.com>
In #29572, I assumed the revalidation list that the node is contained in could only ever
be changed by the outcome of a revalidation request. But it turns out that's not true: if the
node gets removed due to FINDNODE failure, it will also be removed from the list it is in.
This causes a crash.
The invariant is: while a node is in the table, it is always in exactly one of the two lists. So
it seems best to store a pointer to the current list within the node itself.
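In sketch form (field and function names approximate):

```go
type revalidationList struct{ nodes []*tableNode }

func (l *revalidationList) remove(n *tableNode) {
	for i := range l.nodes {
		if l.nodes[i] == n {
			l.nodes = append(l.nodes[:i], l.nodes[i+1:]...)
			return
		}
	}
}

type tableNode struct {
	*enode.Node
	revalList *revalidationList // the list currently holding the node; nil if none
}

// deleteNode unlinks the node from whichever list holds it, regardless of why
// it is being removed (failed revalidation, FINDNODE failure, eviction, ...).
func deleteNode(n *tableNode) {
	if n.revalList != nil {
		n.revalList.remove(n)
		n.revalList = nil
	}
}
```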
This pull request fixes the flaky test TestSkeletonSyncRetrievals. In this test, we first
trigger a sync cycle and wait for it to meet certain expectations. We then inject a new
head and potentially also a new peer, then perform a final sync. The test now
performs the newPeer addition before launching the final sync, and waits a bit for that
peer to get registered. This fixes the logic race that occasionally made the test fail.
Co-authored-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
enode.Node has separate accessor functions for getting the IP, UDP port and TCP port.
Previously, each of these methods performed its own checks on the attributes set in the ENR.
With this PR, the accessor methods will now return cached information, and the endpoint is
determined when the node is created. The logic to determine the preferred endpoint is now
more correct, and considers how 'global' each address is when both IPv4 and IPv6 addresses
are present in the ENR.
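Callers keep using the same accessors, which now simply return the cached endpoint (illustrative usage):

```go
// Given an *enode.Node n (e.g. from enode.ParseV4 or a received record), the
// preferred endpoint is computed once at construction, so these accessors
// just return cached values instead of re-reading the ENR.
ip := n.IP()   // preferred IP, chosen across IPv4/IPv6 entries in the record
udp := n.UDP() // UDP port
tcp := n.TCP() // TCP port
fmt.Println(ip, udp, tcp)
```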
Node discovery periodically revalidates the nodes in its table by sending PING, checking
if they are still alive. I recently noticed some issues with the implementation of this
process, which can cause strange results such as nodes dropping unexpectedly, certain
nodes not getting revalidated often enough, and bad results being returned to incoming
FINDNODE queries.
In this change, the revalidation process is improved with the following logic (a rough code sketch follows the list):
- We maintain two 'revalidation lists' containing the table nodes, named 'fast' and 'slow'.
- The process chooses a random node from each list on a randomized interval, the interval being
faster for the 'fast' list, and revalidates the chosen node.
- Whenever a node is newly inserted into the table, it goes into the 'fast' list.
Once validation passes, it transfers to the 'slow' list. If a request fails, or the
node changes endpoint, it transfers back into 'fast'.
- livenessChecks is incremented by one for each successful check. Unlike the old implementation,
we will not drop the node on the first failing check. Instead, we quickly decay its
livenessChecks and give it another chance.
- The order of nodes in a bucket doesn't matter anymore.
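A rough code sketch of the scheme (names, intervals and the randomization are simplified; this is not the real implementation):

```go
import (
	"math/rand"
	"time"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// Two lists, each drained by its own timer. New table nodes start on the fast
// list; a passing check moves a node to the slow list, and a failing check on
// the slow list moves it back to the fast list.
type revalidation struct {
	fast, slow []*enode.Node
}

func (r *revalidation) run(ping func(*enode.Node) bool, quit <-chan struct{}) {
	fastTick := time.NewTicker(10 * time.Second) // placeholder interval
	slowTick := time.NewTicker(5 * time.Minute)  // placeholder interval
	defer fastTick.Stop()
	defer slowTick.Stop()
	for {
		select {
		case <-fastTick.C:
			r.checkOne(&r.fast, &r.slow, ping)
		case <-slowTick.C:
			r.checkOne(&r.slow, &r.fast, ping)
		case <-quit:
			return
		}
	}
}

// checkOne pings a random node from 'from' and moves it to 'other' when the
// outcome no longer matches the list it currently sits on.
func (r *revalidation) checkOne(from, other *[]*enode.Node, ping func(*enode.Node) bool) {
	if len(*from) == 0 {
		return
	}
	i := rand.Intn(len(*from))
	alive := ping((*from)[i])
	if (alive && from == &r.fast) || (!alive && from == &r.slow) {
		n := (*from)[i]
		*from = append((*from)[:i], (*from)[i+1:]...)
		*other = append(*other, n)
	}
}
```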
I am also adding a debug API endpoint to dump the node table content.
Co-authored-by: Martin HS <martin@swende.se>
This fixes an issue with the `debug_traceBlock*` methods where the BASEFEE opcode was always returning 0, which caused the methods to return invalid results.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
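For background, a sketch of why the base fee matters here (variable names like `header` and `chainContext` are assumed from the tracing setup): the BASEFEE opcode pushes the value stored in the EVM's block context, so the context used for re-execution has to be built from the traced block's header.

```go
// Building the context from the header carries the base fee over, so BASEFEE
// no longer evaluates to 0 inside traced calls.
blockCtx := core.NewEVMBlockContext(header, chainContext, nil)
// blockCtx.BaseFee now equals header.BaseFee; a hand-rolled vm.BlockContext
// would have to set the BaseFee field explicitly.
_ = blockCtx
```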
It's a bit confusing to add msg.value to the balanceCheck inside the conditional.
There is no impact on block validation, since GasFeeCap is always set when processing transactions.
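For context, the check boils down to this (simplified sketch, not the exact code in core/state_transition.go; `senderBalance` stands in for the sender's balance as a *big.Int):

```go
// The sender must cover gasLimit * gasFeeCap plus the transferred value.
// GasFeeCap is always non-nil during block processing, so the value can be
// added unconditionally rather than inside a conditional branch.
balanceCheck := new(big.Int).SetUint64(msg.GasLimit)
balanceCheck.Mul(balanceCheck, msg.GasFeeCap)
balanceCheck.Add(balanceCheck, msg.Value)
if senderBalance.Cmp(balanceCheck) < 0 {
	return core.ErrInsufficientFunds
}
```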