This PR adds counter metrics for system CPU and Geth-process CPU usage.
Currently the only metrics available for these measurements are gauges. Gauges
are fine when the consumer scrapes metrics data at the same interval as Geth
produces new values (every 3 seconds), but most consumers will likely not
scrape that often; intervals of 10, 15, or even 30 seconds are probably more
common.
The problem, then, is how the consumer estimates what the CPU was doing
between scrapes. With a counter this is easy: subtract two successive values
and divide by the elapsed time to get an accurate average. With a gauge, you
can't do that. A gauge reading is an instantaneous snapshot of what was
happening at that moment and gives no indication of what went on between
scrapes, so taking an average of gauge values is meaningless.
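
To illustrate the difference, here is a minimal sketch (not geth code; the
function and values are hypothetical) of how a consumer derives an average
from two counter readings:

```go
package main

import (
	"fmt"
	"time"
)

// avgCPUPercent derives the average CPU utilisation over a scrape interval
// from two readings of a monotonically increasing CPU-time counter.
func avgCPUPercent(prev, cur, elapsed time.Duration) float64 {
	// CPU time consumed between scrapes, divided by the wall-clock time
	// that passed, gives the average utilisation over the whole interval.
	return float64(cur-prev) / float64(elapsed) * 100
}

func main() {
	// Two hypothetical scrapes taken 15 seconds apart.
	fmt.Printf("average CPU: %.1f%%\n",
		avgCPUPercent(42*time.Second, 48*time.Second, 15*time.Second))
	// Prints: average CPU: 40.0%
}
```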
This PR also changes metrics collection to measure the actual time interval between collections,
rather than assuming 3 seconds. In some ad hoc profiling on slower hardware (e.g. my Raspberry Pi 4),
I routinely saw intervals of 3.3-3.5 seconds, with some as high as 4.5 seconds. Assuming a fixed
interval generally makes the CPU gauge readings too high, and in some cases produces impossibly
large values for the CPU load metrics (e.g. greater than 400 for a 4-core CPU).
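
A minimal sketch of the idea, assuming a ticker-driven collection loop (the
names here are illustrative, not the actual geth implementation):

```go
package main

import (
	"fmt"
	"time"
)

// collectLoop runs sample on every tick, passing the measured interval since
// the previous collection instead of assuming it equals refresh.
func collectLoop(refresh time.Duration, sample func(elapsed time.Duration)) {
	ticker := time.NewTicker(refresh)
	defer ticker.Stop()

	prev := time.Now()
	for now := range ticker.C {
		// On slow hardware the tick can arrive late; using the real
		// elapsed time keeps derived per-second rates accurate.
		sample(now.Sub(prev))
		prev = now
	}
}

func main() {
	go collectLoop(3*time.Second, func(elapsed time.Duration) {
		fmt.Printf("collection interval: %v\n", elapsed)
	})
	time.Sleep(10 * time.Second)
}
```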
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This changes how we read performance metrics from the Go runtime. Instead
of using runtime.ReadMemStats, we now rely on the API provided by package
runtime/metrics.
runtime/metrics provides more accurate information. For example, the new
interface has better reporting of memory use. In my testing, the reported
value of held memory more accurately reflects the usage reported by the OS.
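
For reference, a minimal sketch of reading values through runtime/metrics
(my own example, not the geth code; the metric names are from the documented
runtime/metrics set):

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	// Each Sample names one metric; Read fills in all the values in one call.
	samples := []metrics.Sample{
		{Name: "/gc/heap/allocs:bytes"},
		{Name: "/gc/heap/frees:bytes"},
		{Name: "/memory/classes/total:bytes"},
	}
	metrics.Read(samples)

	for _, s := range samples {
		if s.Value.Kind() == metrics.KindUint64 {
			fmt.Printf("%s = %d\n", s.Name, s.Value.Uint64())
		}
	}
}
```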
The semantics of metrics system/memory/allocs and system/memory/frees have
changed to report amounts in bytes. ReadMemStats only reported the count of
allocations in number-of-objects. This is imprecise: 'tiny objects' are not
counted because the runtime allocates them in batches; and certain
improvements in allocation behavior, such as struct size optimizations,
will be less visible when the number of allocs doesn't change.
Reporting allocations in bytes will make graphs show much larger values than
before. I don't think that's a problem, since this metric is primarily of
interest to geth developers.
The metric system/memory/pauses has been changed to report statistical
values from the histogram provided by the runtime. Its name in InfluxDB has
changed from geth.system/memory/pauses.meter to
geth.system/memory/pauses.histogram.
We also have a new histogram metric, system/cpu/schedlatency, reporting the
Go scheduler latency.
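
A similar sketch for the histogram-based metrics (again my own example;
/gc/pauses:seconds and /sched/latencies:seconds are documented runtime/metrics
names that back the pause and scheduler-latency metrics):

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/gc/pauses:seconds"},
		{Name: "/sched/latencies:seconds"},
	}
	metrics.Read(samples)

	for _, s := range samples {
		// Both metrics are exposed as float64 histograms by the runtime.
		h := s.Value.Float64Histogram()
		var total uint64
		for _, c := range h.Counts {
			total += c
		}
		fmt.Printf("%s: %d samples across %d buckets\n", s.Name, total, len(h.Counts))
	}
}
```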
This removes the dashboard project. The dashboard was an experimental
browser UI for geth which displayed metrics and chain information in
real time. We are removing it because it has marginal utility and nobody
on the team can maintain it.
Removing the dashboard removes a lot of dependency code and shaves
6 MB off the geth binary size.
* cmd/swarm: minor cli flag text adjustments
* swarm/api/http: sticky footer for swarm landing page using flex
* swarm/api/http: sticky footer for error pages and fix for multiple choices
* cmd/swarm, swarm/storage, swarm: fix mingw on windows test issues
* cmd/swarm: update description of swarm cmd
* swarm: added network ID test
* cmd/swarm: support for smoke tests on the production swarm cluster
* cmd/swarm/swarm-smoke: simplify cluster logic as per suggestion
* swarm: propagate ctx to internal apis (#754)
* swarm/metrics: collect disk measurements
* swarm/bmt: fix io.Writer interface
* Write now tolerates arbitrary variable buffers
* added variable buffer tests
* Write loop and finalise optimisation
* refactor / rename
* add tests for empty input
* swarm/pss: (UPDATE) Generic notifications package (#744)
swarm/pss: Generic package for creating pss notification svcs
* swarm: Adding context to more functions
* swarm/api: change colour of landing page in templates
* swarm/api: change landing page to react to enter keypress
* core: Add metrics collection for transaction events; replace/discard for pending and future queues, as well as invalid transactions
* core: change namespace for txpool metrics
* core: define more metrics (not yet used)
* core: implement more tx metrics for when transactions are dropped
* core: minor formatting tweaks (will squash later)
* core: remove superfluous meter, fix missing pending nofunds
* core, metrics: switch txpool meters to counters
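
For the last item, a hedged sketch of the counter-vs-meter distinction, written
against the go-metrics style API that geth's metrics package derives from (the
metric names here are illustrative, not the ones used by the txpool):

```go
package main

import (
	"fmt"

	metrics "github.com/rcrowley/go-metrics"
)

func main() {
	// A counter only accumulates a running total; consumers derive rates by
	// subtracting successive scrapes, as with the CPU counters above.
	dropped := metrics.NewRegisteredCounter("txpool/dropped", metrics.DefaultRegistry)
	dropped.Inc(1)

	// A meter keeps its own exponentially weighted rate state, which is
	// redundant once the consumer computes rates itself.
	arrived := metrics.NewRegisteredMeter("txpool/arrived", metrics.DefaultRegistry)
	arrived.Mark(1)

	fmt.Println("dropped total:", dropped.Count())
	fmt.Println("arrived rate (1m):", arrived.Rate1())
}
```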