import draft content

pull/26459/head^2
Joe 2 years ago
parent 8a2ea2af9d
commit 69389283fc
  1. BIN  assets/node_architecture.png
  2. BIN  assets/node_basic.png
  3. 86   content/about-geth/about_geth.md
  4. 0    content/about-geth/about_us.md
  5. 34   content/about-geth/contributing.md
  6. 56   content/about-geth/ethereum.md
  7. 74   content/about-geth/faq.md
  8. 458  content/docs/developers/dapp-developer/custom-tracer.md
  9. 323  content/docs/developers/dapp-developer/mobile/mobile-accounts.md
  10. 180  content/docs/developers/dapp-developer/mobile/mobile.md
  11. 223  content/docs/developers/dapp-developer/native-accounts.md
  12. 617  content/docs/developers/dapp-developer/native-bindings.md
  13. 249  content/docs/developers/dapp-developer/native.md
  14. 232  content/docs/developers/dapp-developer/tracing.md
  15. 103  content/docs/developers/geth-developer/Code-Review-Guidelines.md
  16. 481  content/docs/developers/geth-developer/Private-Network.md
  17. 353  content/docs/developers/geth-developer/dev-mode.md
  18. 122  content/docs/developers/geth-developer/devguide.md
  19. 125  content/docs/developers/geth-developer/dns-discovery-setup.md
  20. 57   content/docs/developers/geth-developer/issue-handling-workflow.md
  21. 113  content/docs/developers/geth-developer/vulnerabilities.md
  22. 65   content/docs/fundamentals/Backup--restore.md
  23. 238  content/docs/fundamentals/Command-Line-Options.md
  24. 356  content/docs/fundamentals/Installing-Geth.md
  25. 352  content/docs/fundamentals/Managing-your-accounts.md
  26. 167  content/docs/fundamentals/cross-compile.md
  27. 61   content/docs/fundamentals/les.md
  28. 165  content/docs/fundamentals/peer-to-peer.md
  29. 140  content/docs/getting_started/consensus-clients.md
  30. 474  content/docs/getting_started/index.md
  31. 65   content/docs/install_build/Backup--restore.md
  32. 356  content/docs/install_build/Installing-Geth.md
  33. 167  content/docs/install_build/cross-compile.md
  34. 147  content/docs/interacting_with_geth/JavaScript-Console.md
  35. 38   content/docs/interacting_with_geth/RPC/batch.md
  36. 65   content/docs/interacting_with_geth/RPC/graphql.md
  37. 296  content/docs/interacting_with_geth/RPC/ns-admin.md
  38. 148  content/docs/interacting_with_geth/RPC/ns-clique.md
  39. 947  content/docs/interacting_with_geth/RPC/ns-debug.md
  40. 207  content/docs/interacting_with_geth/RPC/ns-eth.md
  41. 318  content/docs/interacting_with_geth/RPC/ns-les.md
  42. 91   content/docs/interacting_with_geth/RPC/ns-miner.md
  43. 36   content/docs/interacting_with_geth/RPC/ns-net.md
  44. 255  content/docs/interacting_with_geth/RPC/ns-personal.md
  45. 226  content/docs/interacting_with_geth/RPC/ns-txpool.md
  46. 72   content/docs/interacting_with_geth/RPC/objects.md
  47. 168  content/docs/interacting_with_geth/RPC/pubsub.md
  48. 182  content/docs/interacting_with_geth/RPC/server.md
  49. 173  content/docs/monitoring/dashboards.md
  50. BIN  content/docs/monitoring/ethstats-mainnet.png
  51. 46   content/docs/monitoring/ethstats.md
  52. BIN  content/docs/monitoring/grafana1.png
  53. BIN  content/docs/monitoring/grafana2.png
  54. BIN  content/docs/monitoring/grafana3.png
  55. BIN  content/docs/monitoring/grafana4.png
  56. BIN  content/docs/monitoring/grafana5.png
  57. BIN  content/docs/monitoring/grafana6.png
  58. BIN  content/docs/monitoring/grafana7.png
  59. BIN  content/docs/monitoring/grafana8.png
  60. 116  content/docs/monitoring/metrics.md
  61. 393  content/docs/tools/Clef/CliqueSigning.md
  62. 208  content/docs/tools/Clef/Introduction.md
  63. 237  content/docs/tools/Clef/Rules.md
  64. 202  content/docs/tools/Clef/Setup.md
  65. 685  content/docs/tools/Clef/Tutorial.md
  66. 851  content/docs/tools/Clef/apis.md
  67. 229  content/docs/tools/Clef/datatypes.md
  68. 27   content/homepage.md
  69. BIN  misc/page-tracker.xlsx

(New binary image: assets/node_architecture.png, 484 KiB; preview not shown)

(New binary image: assets/node_basic.png, 99 KiB; preview not shown)

@@ -0,0 +1,86 @@
---
title: What is Geth
root: ..
---
## What is Geth?
Geth (go-ethereum) is a [Go](https://go.dev/) implementation of [Ethereum](http://ethereum.org) - a
gateway into the decentralized web.
Running Geth alongside a consensus client turns a computer into an Ethereum node.
Nodes communicate with one another, agreeing on the data they should each add to their local databases.
Ethereum itself is the network of connected nodes running Ethereum software.
## Why run a node?
Running your own node enables you to use Ethereum in a truly private, self-sufficient and trustless
manner. You don't need to trust information you receive because you can verify the data yourself
using your Geth instance.
**"Don't trust, verify"**
![node basic](/assets/node_basic.png)
Your node verifies all changes to its database by itself. This means:
- You don’t have to trust any other nodes in the network.
- You never have to leak your addresses and balances to other nodes.
- You can use Ethereum securely and privately. Most wallet software can be pointed to your own local node.
- You can program your own custom RPC endpoints and make your own modifications to the source code.
- You get low latency, fast access to Ethereum.
A large and diverse set of nodes independently verifying new information is critical for Ethereum’s health,
security and operational resiliency.
**If you run a full node, the whole Ethereum network benefits.**
## Node architecture
Geth is an [execution client](https://ethereum.org/en/developers/docs/nodes-and-clients/#execution-clients).
Originally, an execution client alone was enough to run a full Ethereum node.
However, since Ethereum transitioned from proof-of-work to proof-of-stake,
Geth must be coupled to another piece of software called a
[“consensus client”](https://ethereum.org/en/developers/docs/nodes-and-clients/#consensus-clients).
The execution client is responsible for transaction handling, transaction gossip, state management and
the Ethereum Virtual Machine (EVM). However, Geth is **not** responsible for block building, block gossiping
or handling consensus logic. These are in the remit of the consensus client.
The relationship between the two Ethereum clients is shown in the schematic below. The two clients each
connect to their own respective peer-to-peer (P2P) networks. This is because the execution clients gossip
transactions over their P2P network enabling them to manage their local transaction pool. The consensus clients
gossip blocks over their P2P network, enabling consensus and chain growth.
![node-architecture](/assets/node_architecture.png)
For this two-client structure to work, consensus clients must be able to pass bundles of transactions to
Geth to be executed. Executing the transactions locally is how the client validates that the transactions
do not violate any Ethereum rules and that the proposed update to Ethereum’s state is correct. Likewise,
when the node is selected to be a block producer the consensus client must be able to request bundles of
transactions from Geth to include in the new block. This inter-client communication is handled by a local
RPC connection using the engine API which is part of the JSON-RPC API exposed by Geth.
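As a rough sketch of what this looks like in practice, the authenticated engine API endpoint that a consensus client connects to can be exposed with flags along the following lines (the JWT secret path is just a placeholder):
```sh
# Expose the authenticated engine API on the default port 8551.
# The consensus client must be pointed at the same JWT secret file.
geth --authrpc.addr localhost --authrpc.port 8551 --authrpc.jwtsecret /path/to/jwt.hex
```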
## What does Geth do?
As an execution client, Geth is responsible for creating the execution payloads - the bundles of transactions -
that consensus clients include in their blocks. Geth is also responsible for re-executing transactions that arrive
in new blocks to ensure they are valid. Executing transactions is done on Geth's embedded computer, known as the
Ethereum Virtual Machine (EVM).
Geth also offers a user-interface to Ethereum by exposing a set of RPC methods that enable users to query the
Ethereum blockchain, submit transactions and deploy smart contracts using the command line, programmatically
using Geth's built-in console, web3 development frameworks such as Hardhat and Truffle or via web-apps and wallets.
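As a minimal example, assuming Geth was started with its HTTP server enabled (`--http`) on the default port 8545, the chain can be queried directly over JSON-RPC:
```sh
# Ask the local node for the latest block number via JSON-RPC over HTTP
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545
```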
In summary, Geth is:
- a user gateway to Ethereum
- home to the Ethereum Virtual Machine, Ethereum's state and transaction pool.

@@ -0,0 +1,34 @@
---
title: Contributing
---
We welcome contributions from anyone on the internet, and are grateful for even the smallest of fixes!
## Contributing to the Geth source code
If you'd like to contribute to the Geth source code, please fork the
[Github repository](https://github.com/ethereum/go-ethereum), fix, commit and send a pull request for the
maintainers to review and merge into the main code base. If you wish to submit more complex changes
though, please check up with the core devs first on our Discord Server to ensure those changes are in
line with the general philosophy of the project and/or get some early feedback which can make both your
efforts much lighter as well as our review and merge procedures quick and simple.
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go formatting guidelines (i.e. uses gofmt).
* Code must be documented adhering to the official Go commentary guidelines.
* Pull requests need to be based on and opened against the master branch.
* Commit messages should be prefixed with the package(s) they modify.
E.g. "eth, rpc: make trace configs optional"
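For example, a typical pre-submission check and prefixed commit could look like the following sketch (the package names and message are purely illustrative):
```sh
# Format and vet the packages you touched
gofmt -w eth/ rpc/
go vet ./eth/... ./rpc/...
# Commit using the package-prefix convention described above
git commit -m "eth, rpc: make trace configs optional"
```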
## Contributing to the Geth website
The Geth website is hosted separately from Geth itself, but the contribution guidelines are the same. Please
fork the Geth website Github repository and raise pull requests for the maintainers to review and merge.
## License
The go-ethereum library (i.e. all code outside of the cmd directory) is licensed under the GNU Lesser General Public License v3.0, also included in our repository in the COPYING.LESSER file.
The go-ethereum binaries (i.e. all code inside of the cmd directory) is licensed under the GNU General Public License v3.0, also included in our repository in the COPYING file.

@@ -0,0 +1,56 @@
---
title: Intro to Ethereum
description: A brief introduction to Ethereum.
---
Ethereum is a technology for building apps and organizations, holding assets, transacting and
communicating without being controlled by a central authority. There is no need to hand over all
your personal details to use Ethereum - you keep control of your own data and what is being shared.
Ethereum has its own cryptocurrency, Ether (ETH), which is used to pay for certain activities on
the Ethereum network. In essence, Ethereum is a blockchain with an embedded computer.
## What is a blockchain?
A blockchain is a database of transactions that is updated and shared across many computers in a
network. Every time a new set of transactions is added, it’s called a “block” - hence the name
blockchain. Most blockchains are public and immutable: data can only be added, never removed. If someone
wanted to alter any of the information or cheat the system, they’d need to do so in such a way that the
majority of computers on the network accept it. There are very strong crypto-economic defenses against this
on Ethereum. This makes established blockchains like Ethereum highly secure base-layers for organizations
and applications.
## What are smart contracts?
Smart contracts are computer programs living on the Ethereum blockchain. They only execute when
triggered by a transaction from a user (or another contract). They make Ethereum very flexible in what
it can do and distinguish it from other cryptocurrencies. These programs are what we now call
decentralized apps, or dapps.
Once a smart contract is published to Ethereum, it will be online and operational for as long as Ethereum
exists. Not even the author can take it down. Since smart contracts are automated, they do not discriminate
against any user and are always ready to use.
Popular examples of smart contracts are lending apps, decentralized trading exchanges, insurance,
crowdfunding apps - basically anything you can think of.
## Who runs Ethereum?
Ethereum is not controlled by any one entity. It exists solely through the decentralized participation
and cooperation of the community. Ethereum makes use of nodes (a computer with a copy of the Ethereum
blockchain data) run by volunteers to replace individual server and cloud systems owned by major
internet providers and services.
These distributed nodes, run by individuals and businesses all over the world, provide resiliency to
the Ethereum network infrastructure. It is therefore much less vulnerable to hacks or shutdowns.
Since its launch in 2015, Ethereum has never suffered downtime. There are thousands of individual nodes
running the Ethereum network.
## Learn more about Ethereum
[ethereum.org](https://ethereum.org/)

@@ -0,0 +1,74 @@
---
title: FAQ
permalink: docs/faq
sort_key: C
---
* TOC
{:toc}
#### I noticed my peercount slowly decreasing, and now it is at 0. Restarting doesn't get any peers.
Check and sync your clock with ntp. For example, you can [force a clock update using ntp](https://askubuntu.com/questions/254826/how-to-force-a-clock-update-using-ntp) like so:
```sh
sudo ntpdate -s time.nist.gov
```
#### I would like to run multiple geth instances but got the error "Fatal: blockchain db err: resource temporarily unavailable".
Geth uses a datadir to store the blockchain, accounts and some additional information. This directory cannot be shared between running instances. If you would like to run multiple instances follow [these](getting-started/private-net) instructions.
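As a minimal sketch, each additional instance simply needs its own data directory and non-conflicting ports, for example (paths and port numbers are placeholders):
```sh
# First instance using the default ports
geth --datadir /data/geth-node1

# Second instance with a separate datadir and its own ports
geth --datadir /data/geth-node2 --port 30304 --authrpc.port 8552
```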
#### When I try to use the --password command line flag, I get the error "Could not decrypt key with given passphrase" but the password is correct. Why does this error appear?
Especially if the password file was created on Windows, it may have a Byte Order Mark or other special encoding that the go-ethereum client doesn't currently recognize. You can change this behavior with a PowerShell command like:
```sh
echo "mypasswordhere" | out-file test.txt -encoding ASCII
```
Additional details and/or any updates on more robust handling are at <https://github.com/ethereum/go-ethereum/issues/19905>.
#### How does Ethereum syncing work?
The current default syncing mode used by Geth is called [snap sync](https://github.com/ethereum/devp2p/blob/master/caps/snap.md). Instead of starting from the genesis block and processing all the transactions that ever occurred (which could take weeks), snap sync downloads the blocks and only verifies the associated proof-of-work. Downloading all the blocks is a straightforward and fast procedure and will relatively quickly reassemble the entire chain.
Many people falsely assume that because they have the blocks, they are in sync. Unfortunately this is not the case. Since no transactions were executed, we do not have any account state available (i.e. balances, nonces, smart contract code and data). These need to be downloaded separately and cross-checked with the latest blocks. This phase is called the state trie download phase. Snap sync tries to hasten this process by downloading contiguous chunks of useful state data, instead of doing so one-by-one, as in previous synchronization methods.
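For reference, the sync mode can also be selected explicitly on the command line; snap is already the default, so the flag below only matters when switching modes:
```sh
# Start Geth with snap sync explicitly selected
geth --syncmode snap
```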
#### So, what's the state trie?
In the Ethereum mainnet, there are a ton of accounts already, which track the balance, nonce, etc of each user/contract. The accounts themselves are however insufficient to run a node, they need to be cryptographically linked to each block so that nodes can actually verify that the accounts are not tampered with.
This cryptographic linking is done by creating a tree-like data structure, where each leaf corresponds to an account, and each intermediary level aggregates the layer below it into an ever smaller layer, until you reach a single root. This gigantic data structure containing all the accounts and the intermediate cryptographic proofs is called the state trie.
#### Why does the state trie download phase require a special syncing mode?
The trie data structure is an intricate interlink of hundreds of millions of tiny cryptographic proofs (trie nodes). To truly have a synchronized node, you need to download all the account data, as well as all the tiny cryptographic proofs to verify that no one in the network is trying to cheat you. This itself is already a crazy number of data items.
The part where it gets even messier is that this data is constantly morphing: at every block (roughly 13s), about 1000 nodes are deleted from this trie and about 2000 new ones are added. This means your node needs to synchronize a dataset that is changing more than 200 times per second. Until you actually do gather all the data, your local node is not usable since it cannot cryptographically prove anything about any accounts. But while you're syncing the network is moving forward and most nodes on the network keep the state for only a limited number of recent blocks. Any sync algorithm needs to consider this fact.
#### What happened to fast sync?
Snap syncing was introduced by version [1.10.0](https://blog.ethereum.org/2021/03/03/geth-v1-10-0/) and was adopted as the default mode in version [1.10.4](https://github.com/ethereum/go-ethereum/releases/tag/v1.10.4). Before that, the default was the "fast" syncing mode, which was dropped in version [1.10.14](https://github.com/ethereum/go-ethereum/releases/tag/v1.10.14). Even though support for fast sync was dropped, Geth still serves the relevant `eth` requests to other client implementations still relying on it. The reason being that snap sync relies on an alternative data structure called the [snapshot](https://blog.ethereum.org/2020/07/17/ask-about-geth-snapshot-acceleration/) which not all clients implement.
You can read more about why snap sync replaced fast sync in Geth in the article linked above. Below is a table taken from the article summarising the benefits:
![snap-fast](https://user-images.githubusercontent.com/129561/109820169-6ee0af00-7c3d-11eb-9721-d8484eee4709.png)
#### When doing a fast sync, the node just hangs on importing state entries?!
The node doesn’t hang, it just doesn’t know how large the state trie is in advance so it keeps on going and going and going until it discovers and downloads the entire thing.
The reason is that a block in Ethereum only contains the state root, a single hash of the root node. When the node begins synchronizing, it knows about exactly one node and tries to download it. That node can refer to up to 16 new nodes, so in the next step, we’ll know about 16 new nodes and try to download those. As we go along the download, most of the nodes will reference new ones that we didn’t know about until then. This is why you might be tempted to think it’s stuck on the same numbers. It is not, rather it’s discovering and downloading the trie as it goes along.
During this phase you might see that your node is 64 blocks behind mainnet. You aren't actually synchronized. That's a side-effect of how fast sync works and you need to wait until all state entries are downloaded.
#### I have good bandwidth, so why does downloading the state take so long when using fast sync?
State sync is mostly limited by disk IO, not bandwidth.
The state trie in Ethereum contains hundreds of millions of nodes, most of which take the form of a single hash referencing up to 16 other hashes. This is a horrible way to store data on a disk, because there's almost no structure in it, just random numbers referencing even more random numbers. This makes any underlying database weep, as it cannot optimize storing and looking up the data in any meaningful way. Snap sync solves this issue by adopting the Snapshot data structure.
#### Wait, so I can't use fast sync on an HDD?
Doing a "fast" sync on an HDD will take more time than you're willing to wait, because the data structures used are not optimized for HDDs. Even if you do wait it out, an HDD will not be able to keep up with the read/write requirements of transaction processing on mainnet. You however should be able to run a light client on an HDD with minimal impact on system resources.

@@ -0,0 +1,458 @@
---
title: Custom EVM tracer
sort_key: B
---
In addition to the default opcode tracer and the built-in tracers, Geth offers the possibility to write custom code
that hooks into events in the EVM to process and return the data in a consumable format. Custom tracers can be
written either in Javascript or Go. JS tracers are good for quick prototyping and experimentation as well as for
less intensive applications. Go tracers are performant but require the tracer to be compiled together with the Geth source code.
* TOC
{:toc}
## Custom Javascript tracing
Transaction traces include the complete status of the EVM at every point during the transaction execution, which
can be a very large amount of data. Often, users are only interested in a small subset of that data. Javascript trace
filters are available to isolate the useful information. Detailed information about `debug_traceTransaction` and its
component parts is available in the [reference documentation](/docs/rpc/ns-debug#debug_tracetransaction).
### A simple filter
Filters are Javascript functions that select information from the trace to persist and discard based on some
conditions. The following Javascript function returns only the sequence of opcodes executed by the transaction as a
comma-separated list. The function could be written directly in the Javascript console, but it is cleaner to
write it in a separate re-usable file and load it into the console.
1. Create a file, `filterTrace_1.js`, with this content:
```javascript
tracer = function(tx) {
return debug.traceTransaction(tx, {tracer:
'{' +
'retVal: [],' +
'step: function(log,db) {this.retVal.push(log.getPC() + ":" + log.op.toString())},' +
'fault: function(log,db) {this.retVal.push("FAULT: " + JSON.stringify(log))},' +
'result: function(ctx,db) {return this.retVal}' +
'}'
}) // return debug.traceTransaction ...
} // tracer = function ...
```
2. Run the [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console).
3. Get the hash of a recent transaction from a node or block explorer.
4. Run this command to run the script:
```javascript
loadScript("filterTrace_1.js")
```
5. Run the tracer from the script. Be patient, it could take a long time.
```javascript
tracer("<hash of transaction>")
```
The bottom of the output looks similar to:
```sh
"3366:POP", "3367:JUMP", "1355:JUMPDEST", "1356:PUSH1", "1358:MLOAD", "1359:DUP1", "1360:DUP3", "1361:ISZERO", "1362:ISZERO",
"1363:ISZERO", "1364:ISZERO", "1365:DUP2", "1366:MSTORE", "1367:PUSH1", "1369:ADD", "1370:SWAP2", "1371:POP", "1372:POP", "1373:PUSH1",
"1375:MLOAD", "1376:DUP1", "1377:SWAP2", "1378:SUB", "1379:SWAP1", "1380:RETURN"
```
6. Run this line to get a more readable output with each string in its own line.
```javascript
console.log(JSON.stringify(tracer("<hash of transaction>"), null, 2))
```
More information about the `JSON.stringify` function is available
[here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify).
The commands above worked by calling the same `debug.traceTransaction` function that was previously
explained in [basic traces](https://geth.ethereum.org/docs/dapp/tracing), but with a new parameter, `tracer`.
This parameter takes the JavaScript object formatted as a string. In the case of the trace above, it is:
```javascript
{
retVal: [],
step: function(log,db) {this.retVal.push(log.getPC() + ":" + log.op.toString())},
fault: function(log,db) {this.retVal.push("FAULT: " + JSON.stringify(log))},
result: function(ctx,db) {return this.retVal}
}
```
This object has three member functions:
- `step`, called for each opcode.
- `fault`, called if there is a problem in the execution.
- `result`, called to produce the results that are returned by `debug.traceTransaction` after the execution is done.
In this case, `retVal` is used to store the list of strings to return in `result`.
The `step` function adds to `retVal` the program counter and the name of the opcode there. Then, in `result`, this
list is returned to be sent to the caller.
### Filtering with conditions
For actual filtered tracing we need an `if` statement to only log relevant information. For example, to isolate
the transaction's interaction with storage, the following tracer could be used:
```javascript
tracer = function(tx) {
return debug.traceTransaction(tx, {tracer:
'{' +
'retVal: [],' +
'step: function(log,db) {' +
' if(log.op.toNumber() == 0x54) ' +
' this.retVal.push(log.getPC() + ": SLOAD");' +
' if(log.op.toNumber() == 0x55) ' +
' this.retVal.push(log.getPC() + ": SSTORE");' +
'},' +
'fault: function(log,db) {this.retVal.push("FAULT: " + JSON.stringify(log))},' +
'result: function(ctx,db) {return this.retVal}' +
'}'
}) // return debug.traceTransaction ...
} // tracer = function ...
```
The `step` function here looks at the opcode number of the op, and only pushes an entry if the opcode is
`SLOAD` or `SSTORE` ([here is a list of EVM opcodes and their numbers](https://github.com/wolflo/evm-opcodes)).
We could have used `log.op.toString()` instead, but it is faster to compare numbers rather than strings.
The output looks similar to this:
```javascript
[
"5921: SLOAD",
.
.
.
"2413: SSTORE",
"2420: SLOAD",
"2475: SSTORE",
"6094: SSTORE"
]
```
### Stack Information
The trace above reports the program counter (PC) and whether the program read from storage or wrote to it.
That alone isn't particularly useful. To know more, the `log.stack.peek` function can be used to peek
into the stack. `log.stack.peek(0)` is the stack top, `log.stack.peek(1)` the entry below it, etc.
The values returned by `log.stack.peek` are Go `big.Int` objects. By default they are converted to JavaScript
floating point numbers, so you need `toString(16)` to get them as hexadecimals, which is how 256-bit values such as
storage cells and their content are normally represented.
#### Storage Information
The function below provides a trace of all the storage operations and their parameters. This gives
a more complete picture of the program's interaction with storage.
```javascript
tracer = function(tx) {
return debug.traceTransaction(tx, {tracer:
'{' +
'retVal: [],' +
'step: function(log,db) {' +
' if(log.op.toNumber() == 0x54) ' +
' this.retVal.push(log.getPC() + ": SLOAD " + ' +
' log.stack.peek(0).toString(16));' +
' if(log.op.toNumber() == 0x55) ' +
' this.retVal.push(log.getPC() + ": SSTORE " +' +
' log.stack.peek(0).toString(16) + " <- " +' +
' log.stack.peek(1).toString(16));' +
'},' +
'fault: function(log,db) {this.retVal.push("FAULT: " + JSON.stringify(log))},' +
'result: function(ctx,db) {return this.retVal}' +
'}'
}) // return debug.traceTransaction ...
} // tracer = function ...
```
The output is similar to:
```javascript
[
"5921: SLOAD 0",
.
.
.
"2413: SSTORE 3f0af0a7a3ed17f5ba6a93e0a2a05e766ed67bf82195d2dd15feead3749a575d <- fb8629ad13d9a12456",
"2420: SLOAD cc39b177dd3a7f50d4c09527584048378a692aed24d31d2eabeddb7f3c041870",
"2475: SSTORE cc39b177dd3a7f50d4c09527584048378a692aed24d31d2eabeddb7f3c041870 <- 358c3de691bd19",
"6094: SSTORE 0 <- 1"
]
```
#### Operation Results
One piece of information missing from the function above is the result of an `SLOAD` operation. The
state we get inside `log` is the state prior to the execution of the opcode, so that value is not
known yet. For most other operations we could figure out the result ourselves, but we don't have access to the
storage, so here we can't.
The solution is to have a flag, `afterSload`, which is only true in the opcode right after an
`SLOAD`, when we can see the result at the top of the stack.
```javascript
tracer = function(tx) {
return debug.traceTransaction(tx, {tracer:
'{' +
'retVal: [],' +
'afterSload: false,' +
'step: function(log,db) {' +
' if(this.afterSload) {' +
' this.retVal.push(" Result: " + ' +
' log.stack.peek(0).toString(16)); ' +
' this.afterSload = false; ' +
' } ' +
' if(log.op.toNumber() == 0x54) {' +
' this.retVal.push(log.getPC() + ": SLOAD " + ' +
' log.stack.peek(0).toString(16));' +
' this.afterSload = true; ' +
' } ' +
' if(log.op.toNumber() == 0x55) ' +
' this.retVal.push(log.getPC() + ": SSTORE " +' +
' log.stack.peek(0).toString(16) + " <- " +' +
' log.stack.peek(1).toString(16));' +
'},' +
'fault: function(log,db) {this.retVal.push("FAULT: " + JSON.stringify(log))},' +
'result: function(ctx,db) {return this.retVal}' +
'}'
}) // return debug.traceTransaction ...
} // tracer = function ...
```
The output now contains the result in the line that follows the `SLOAD`.
```javascript
[
"5921: SLOAD 0",
" Result: 1",
.
.
.
"2413: SSTORE 3f0af0a7a3ed17f5ba6a93e0a2a05e766ed67bf82195d2dd15feead3749a575d <- fb8629ad13d9a12456",
"2420: SLOAD cc39b177dd3a7f50d4c09527584048378a692aed24d31d2eabeddb7f3c041870",
" Result: 0",
"2475: SSTORE cc39b177dd3a7f50d4c09527584048378a692aed24d31d2eabeddb7f3c041870 <- 358c3de691bd19",
"6094: SSTORE 0 <- 1"
]
```
### Dealing With Calls Between Contracts
So far the storage has been treated as if there were only 2<sup>256</sup> cells. However, that is not true.
Contracts can call other contracts, and then the storage involved is the storage of the other contract.
We can see the address of the current contract in `log.contract.getAddress()`. This value is the execution
context - the contract whose storage we are using - even when code from another contract is executed (by using
[`CALLCODE` or `DELEGATECALL`][solidity-delcall]).
However, `log.contract.getAddress()` returns an array of bytes. To convert this to the familiar hexadecimal
representation of Ethereum addresses, the `byte2Hex()` and `array2Hex()` functions defined in the tracer below can be used.
```javascript
tracer = function(tx) {
return debug.traceTransaction(tx, {tracer:
'{' +
'retVal: [],' +
'afterSload: false,' +
'callStack: [],' +
'byte2Hex: function(byte) {' +
' if (byte < 0x10) ' +
' return "0" + byte.toString(16); ' +
' return byte.toString(16); ' +
'},' +
'array2Hex: function(arr) {' +
' var retVal = ""; ' +
' for (var i=0; i<arr.length; i++) ' +
' retVal += this.byte2Hex(arr[i]); ' +
' return retVal; ' +
'}, ' +
'getAddr: function(log) {' +
' return this.array2Hex(log.contract.getAddress());' +
'}, ' +
'step: function(log,db) {' +
' var opcode = log.op.toNumber();' +
// SLOAD result (checked first: the value loaded by the previous step's SLOAD is now on top of the stack)
' if (this.afterSload) {' +
' this.retVal.push(" Result: " + ' +
' log.stack.peek(0).toString(16)); ' +
' this.afterSload = false; ' +
' } ' +
// SLOAD
' if (opcode == 0x54) {' +
' this.retVal.push(log.getPC() + ": SLOAD " + ' +
' this.getAddr(log) + ":" + ' +
' log.stack.peek(0).toString(16));' +
' this.afterSload = true; ' +
' } ' +
// SSTORE
' if (opcode == 0x55) ' +
' this.retVal.push(log.getPC() + ": SSTORE " +' +
' this.getAddr(log) + ":" + ' +
' log.stack.peek(0).toString(16) + " <- " +' +
' log.stack.peek(1).toString(16));' +
// End of step
'},' +
'fault: function(log,db) {this.retVal.push("FAULT: " + JSON.stringify(log))},' +
'result: function(ctx,db) {return this.retVal}' +
'}'
}) // return debug.traceTransaction ...
} // tracer = function ...
```
The output is similar to:
```javascript
[
"423: SLOAD 22ff293e14f1ec3a09b137e9e06084afd63addf9:360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc",
" Result: 360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc",
"10778: SLOAD 22ff293e14f1ec3a09b137e9e06084afd63addf9:6",
" Result: 6",
.
.
.
"13529: SLOAD f2d68898557ccb2cf4c10c3ef2b034b2a69dad00:8328de571f86baa080836c50543c740196dbc109d42041802573ba9a13efa340",
" Result: 8328de571f86baa080836c50543c740196dbc109d42041802573ba9a13efa340",
"423: SLOAD f2d68898557ccb2cf4c10c3ef2b034b2a69dad00:360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc",
" Result: 360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc",
"13529: SLOAD f2d68898557ccb2cf4c10c3ef2b034b2a69dad00:b38558064d8dd9c883d2a8c80c604667ddb90a324bc70b1bac4e70d90b148ed4",
" Result: b38558064d8dd9c883d2a8c80c604667ddb90a324bc70b1bac4e70d90b148ed4",
"11041: SSTORE 22ff293e14f1ec3a09b137e9e06084afd63addf9:6 <- 0"
]
```
## Other traces
This tutorial has focused on `debug_traceTransaction()` which reports information about individual transactions. There are
also RPC endpoints that provide different information, including tracing the EVM execution within a block, between two blocks,
for specific `eth_call`s or rejected blocks. The full list of trace functions can be explored in the
[reference documentation][debug-docs].
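As a minimal sketch, assuming the node exposes the `debug` namespace over HTTP (for example started with `--http --http.api eth,debug`), a whole block can be traced with one of the built-in tracers:
```sh
# Trace every transaction in block 0x1000 using the built-in callTracer
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["0x1000",{"tracer":"callTracer"}],"id":1}' \
  http://localhost:8545
```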
## Custom Go tracing
Custom tracers can also be made more performant by writing them in Go. The gain in performance mostly comes from the fact that Geth doesn't need
to interpret JS code and can execute native functions. Geth comes with several built-in [native tracers](https://github.com/ethereum/go-ethereum/tree/master/eth/tracers/native) which can serve as examples. Please note that unlike JS tracers, Go tracing scripts cannot be simply passed as an argument to the API. They will need to be added to and compiled with the rest of the Geth source code.
In this section a simple native tracer that counts the number of opcodes will be covered. First follow the instructions to [clone and build](install-and-build/installing-geth#build-from-source-code) Geth from source code. Next save the following snippet as a `.go` file and add it to `eth/tracers/native`:
```go
package native
import (
"encoding/json"
"math/big"
"sync/atomic"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/eth/tracers"
)
func init() {
// This is how Geth will become aware of the tracer and register it under a given name
register("opcounter", newOpcounter)
}
type opcounter struct {
env *vm.EVM
counts map[string]int // Store opcode counts
interrupt uint32 // Atomic flag to signal execution interruption
reason error // Textual reason for the interruption
}
func newOpcounter(ctx *tracers.Context) tracers.Tracer {
return &opcounter{counts: make(map[string]int)}
}
// CaptureStart implements the EVMLogger interface to initialize the tracing operation.
func (t *opcounter) CaptureStart(env *vm.EVM, from common.Address, to common.Address, create bool, input []byte, gas uint64, value *big.Int) {
t.env = env
}
// CaptureState implements the EVMLogger interface to trace a single step of VM execution.
func (t *opcounter) CaptureState(pc uint64, op vm.OpCode, gas, cost uint64, scope *vm.ScopeContext, rData []byte, depth int, err error) {
// Skip if tracing was interrupted
if atomic.LoadUint32(&t.interrupt) > 0 {
t.env.Cancel()
return
}
name := op.String()
if _, ok := t.counts[name]; !ok {
t.counts[name] = 0
}
t.counts[name]++
}
// CaptureEnter is called when EVM enters a new scope (via call, create or selfdestruct).
func (t *opcounter) CaptureEnter(op vm.OpCode, from common.Address, to common.Address, input []byte, gas uint64, value *big.Int) {}
// CaptureExit is called when EVM exits a scope, even if the scope didn't
// execute any code.
func (t *opcounter) CaptureExit(output []byte, gasUsed uint64, err error) {}
// CaptureFault implements the EVMLogger interface to trace an execution fault.
func (t *opcounter) CaptureFault(pc uint64, op vm.OpCode, gas, cost uint64, scope *vm.ScopeContext, depth int, err error) {}
// CaptureEnd is called after the call finishes to finalize the tracing.
func (t *opcounter) CaptureEnd(output []byte, gasUsed uint64, _ time.Duration, err error) {}
func (*opcounter) CaptureTxStart(gasLimit uint64) {}
func (*opcounter) CaptureTxEnd(restGas uint64) {}
// GetResult returns the json-encoded map of opcode counts, and any
// error arising from the encoding or forceful termination (via `Stop`).
func (t *opcounter) GetResult() (json.RawMessage, error) {
res, err := json.Marshal(t.counts)
if err != nil {
return nil, err
}
return res, t.reason
}
// Stop terminates execution of the tracer at the first opportune moment.
func (t *opcounter) Stop(err error) {
t.reason = err
atomic.StoreUint32(&t.interrupt, 1)
}
```
As can be seen every method of the [EVMLogger interface](https://pkg.go.dev/github.com/ethereum/go-ethereum/core/vm#EVMLogger) needs to be implemented (even if empty). Key parts to notice are the `init()` function which registers the tracer in Geth, the `CaptureState` hook where the opcode counts are incremented and `GetResult` where the result is serialized and delivered. To test this out the source is first compiled with `make geth`. Then in the console it can be invoked through the usual API methods by passing in the name it was registered under:
```console
> debug.traceTransaction('0x7ae446a7897c056023a8104d254237a8d97783a92900a7b0f7db668a9432f384', { tracer: 'opcounter' })
{
ADD: 4,
AND: 3,
CALLDATALOAD: 2,
...
}
```
[solidity-delcall]:https://docs.soliditylang.org/en/v0.8.14/introduction-to-smart-contracts.html#delegatecall-callcode-and-libraries
[debug-docs]: /docs/rpc/ns-debug

@@ -0,0 +1,323 @@
---
title: Mobile Account Management
sort_key: G
---
To provide Ethereum integration for your mobile applications, the very first thing you
should be interested in doing is account management.
Although all current leading Ethereum implementations provide account management built in,
it is ill-advised to keep accounts in any location that is shared between multiple
applications and/or multiple people. Just as you would not entrust your ISP (which is,
after all, your gateway into the internet) with your login credentials, you should not
entrust an Ethereum node (your gateway into the Ethereum network) with your
credentials either.
The proper way to handle user accounts in your mobile applications is to do client side
account management, everything self-contained within your own application. This way you
can ensure as fine grained (or as coarse) access permissions to the sensitive data as
deemed necessary, without relying on any third party application's functionality and/or
vulnerabilities.
To support this, `go-ethereum` provides a simple, yet thorough accounts library that gives
you all the tools to do properly secured account management via encrypted keystores and
passphrase protected accounts. You can leverage all the security of the `go-ethereum`
crypto implementation while at the same time running everything in your own application.
## Encrypted keystores
Although handling your users' accounts locally on their own mobile device does provide
certain security guarantees, access keys to Ethereum accounts should never lay around in
clear-text form. As such, we provide an encrypted keystore that provides the proper
security guarantees for you without requiring a thorough understanding on your part of
the associated cryptographic primitives.
The important thing to know when using the encrypted keystore is that the cryptographic
primitives used within can operate either in *standard* or *light* mode. The former
provides a higher level of security at the cost of increased computational burden and
resource consumption:
* *standard* needs 256MB memory and 1 second processing on a modern CPU to access a key
* *light* needs 4MB memory and 100 millisecond processing on a modern CPU to access a key
As such, *light* is more suitable for mobile applications, but you should be aware of the
trade-offs nonetheless.
*For those interested in the cryptographic and/or implementation details, the key-store
uses the `secp256k1` elliptic curve as defined in the [Standards for Efficient
Cryptography][sec2], implemented by the [`libsecp256k1`][secp256k1] library and wrapped by
[`github.com/ethereum/go-ethereum/accounts`][accounts-go]. Accounts are stored on disk in
the [Web3 Secret Storage][secstore] format.*
### Keystores on Android (Java)
The encrypted keystore on Android is implemented by the `KeyStore` class from the
`org.ethereum.geth` package. The configuration constants (for the *standard* or *light*
security modes described above) are located in the `Geth` abstract class, similarly from
the `org.ethereum.geth` package. Hence to do client side account management on Android,
you'll need to import two classes into your Java code:
```java
import org.ethereum.geth.Geth;
import org.ethereum.geth.KeyStore;
```
Afterwards you can create a new encrypted keystore via:
```java
KeyStore ks = new KeyStore("/path/to/keystore", Geth.LightScryptN, Geth.LightScryptP);
```
The path to the keystore folder needs to be a location that is writable by the local
mobile application but non-readable for other installed applications (for security reasons
obviously), so we'd recommend placing it inside your app's data directory. If you are
creating the `KeyStore` from within a class extending an Android object, you will most
probably have access to the `Context.getFilesDir()` method via `this.getFilesDir()`, so
you could set the keystore path to `this.getFilesDir() + "/keystore"`.
The last two arguments of the `KeyStore` constructor are the crypto parameters defining
how resource-intensive the keystore encryption should be. You can choose between
`Geth.StandardScryptN, Geth.StandardScryptP`, `Geth.LightScryptN, Geth.LightScryptP` or
specify your own numbers (please make sure you understand the underlying cryptography for
this). We recommend using the *light* version.
### Keystores on iOS (Swift 3)
The encrypted keystore on iOS is implemented by the `GethKeyStore` class from the `Geth`
framework. The configuration constants (for the *standard* or *light* security modes
described above) are located in the same namespace as global variables. Hence to do client
side account management on iOS, you'll need to import the framework into your Swift code:
```swift
import Geth
```
Afterwards you can create a new encrypted account manager via:
```swift
let ks = GethNewKeyStore("/path/to/keystore", GethLightScryptN, GethLightScryptP);
```
The path to the keystore folder needs to be a location that is writable by the local
mobile application but non-readable for other installed applications (for security reasons
obviously), so we'd recommend placing it inside your app's document directory. You should
be able to retrieve the document directory via `let datadir =
NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]`, so you
could set the keystore path to `datadir + "/keystore"`.
The last two arguments of the `GethNewKeyStore` factory method are the crypto parameters
defining how resource-intensive the keystore encryption should be. You can choose between
`GethStandardScryptN, GethStandardScryptP`, `GethLightScryptN, GethLightScryptP` or
specify your own numbers (please make sure you understand the underlying cryptography for
this). We recommend using the *light* version.
## Account lifecycle
Having created an encrypted keystore for your Ethereum accounts, you can use this for the
entire account lifecycle requirements of your mobile application. This includes the basic
functionality of creating new accounts and deleting existing ones; as well as the more
advanced functionality of updating access credentials, exporting existing accounts, and
importing them on another device.
Although the keystore defines the encryption strength it uses to store your accounts,
there is no global master password that can grant access to all of them. Rather each
account is maintained individually, and stored on disk in its [encrypted format][secstore]
individually, ensuring a much cleaner and stricter separation of credentials.
This individuality however means that any operation requiring access to an account will
need to provide the necessary authentication credentials for that particular account in
the form of a passphrase:
* When creating a new account, the caller must supply a passphrase to encrypt the account
with. This passphrase will be required for any subsequent access, the lack of which
will forever forfeit using the newly created account.
* When deleting an existing account, the caller must supply a passphrase to verify
ownership of the account. This isn't cryptographically necessary, rather a protective
measure against accidental loss of accounts.
* When updating an existing account, the caller must supply both current and new
passphrases. After completing the operation, the account will not be accessible via the
old passphrase any more.
* When exporting an existing account, the caller must supply both the current passphrase
to decrypt the account, as well as an export passphrase to re-encrypt it with before
returning the key-file to the user. This is required to allow moving accounts between
devices without sharing original credentials.
* When importing a new account, the caller must supply both the encryption passphrase of
the key-file being imported, as well as a new passphrase with which to store the
account. This is required to allow storing accounts with different credentials than those used
for moving them around.
*Please note, there are no recovery mechanisms for lost passphrases. The
cryptographic properties of the encrypted keystore (if using the provided parameters)
guarantee that account credentials cannot be brute forced in any meaningful time.*
### Accounts on Android (Java)
An Ethereum account on Android is implemented by the `Account` class from the
`org.ethereum.geth` package. Assuming we already have an instance of a `KeyStore` called
`ks` from the previous section, we can easily execute all of the described lifecycle
operations with a handful of function calls.
```java
// Create a new account with the specified encryption passphrase.
Account newAcc = ks.newAccount("Creation password");
// Export the newly created account with a different passphrase. The returned
// data from this method invocation is a JSON encoded, encrypted key-file.
byte[] jsonAcc = ks.exportKey(newAcc, "Creation password", "Export password");
// Update the passphrase on the account created above inside the local keystore.
ks.updateAccount(newAcc, "Creation password", "Update password");
// Delete the account updated above from the local keystore.
ks.deleteAccount(newAcc, "Update password");
// Import back the account we've exported (and then deleted) above with yet
// again a fresh passphrase.
Account impAcc = ks.importKey(jsonAcc, "Export password", "Import password");
```
*Although instances of `Account` can be used to access various information about specific
Ethereum accounts, they do not contain any sensitive data (such as passphrases or private
keys), rather act solely as identifiers for client code and the keystore.*
### Accounts on iOS (Swift 3)
An Ethereum account on iOS is implemented by the `GethAccount` class from the `Geth`
framework. Assuming we already have an instance of a `GethKeyStore` called `ks` from the
previous section, we can easily execute all of the described lifecycle operations with a
handful of function calls.
```swift
// Create a new account with the specified encryption passphrase.
let newAcc = try! ks?.newAccount("Creation password")
// Export the newly created account with a different passphrase. The returned
// data from this method invocation is a JSON encoded, encrypted key-file.
let jsonKey = try! ks?.exportKey(newAcc!, passphrase: "Creation password", newPassphrase: "Export password")
// Update the passphrase on the account created above inside the local keystore.
try! ks?.update(newAcc, passphrase: "Creation password", newPassphrase: "Update password")
// Delete the account updated above from the local keystore.
try! ks?.delete(newAcc, passphrase: "Update password")
// Import back the account we've exported (and then deleted) above with yet
// again a fresh passphrase.
let impAcc = try! ks?.importKey(jsonKey, passphrase: "Export password", newPassphrase: "Import password")
```
*Although instances of `GethAccount` can be used to access various information about
specific Ethereum accounts, they do not contain any sensitive data (such as passphrases or
private keys), rather act solely as identifiers for client code and the keystore.*
## Signing authorization
As mentioned above, account objects do not hold the sensitive private keys of the
associated Ethereum accounts, but are merely placeholders to identify the cryptographic
keys with. All operations that require authorization (e.g. transaction signing) are
performed by the account manager after granting it access to the private keys.
There are a few different ways one can authorize the account manager to execute signing
operations, each having its advantages and drawbacks. Since the different methods have
wildly different security guarantees, it is essential to be clear on how each works:
* **Single authorization**: The simplest way to sign a transaction via the keystore is to
provide the passphrase of the account every time something needs to be signed, which
will ephemerally decrypt the private key, execute the signing operation and immediately
throw away the decrypted key. The drawbacks are that the passphrase needs to be queried
from the user every time, which can become annoying if done frequently; or the
application needs to keep the passphrase in memory, which can have security
consequences if not done properly; and depending on the keystore's configured strength,
constantly decrypting keys can result in non-negligible resource requirements.
* **Multiple authorizations**: A more complex way of signing transactions via the
keystore is to unlock the account via its passphrase once, and allow the account
manager to cache the decrypted private key, enabling all subsequent signing requests to
complete without the passphrase. The lifetime of the cached private key may be managed
manually (by explicitly locking the account back up) or automatically (by providing a
timeout during unlock). This mechanism is useful for scenarios where the user may need
to sign many transactions or the application would need to do so without requiring user
input. The crucial aspect to remember is that **anyone with access to the account
manager can sign transactions while a particular account is unlocked** (e.g. device
left unattended; application running untrusted code).
*Note, creating transactions is out of scope here, so the remainder of this section will
assume we already have a transaction to sign, and will focus only on creating an
authorized version of it. Creating an actually meaningful transaction will be covered
later.*
### Signing on Android (Java)
Assuming we already have an instance of a `KeyStore` called `ks` from the previous
sections, we can create a new account to sign transactions with via its already
demonstrated `newAccount` method; and to avoid going into transaction creation for now, we
can hard-code a random transaction to sign instead.
```java
// Create a new account to sign transactions with
Account signer = ks.newAccount("Signer password");
Transaction tx = new Transaction(
1, new Address("0x0000000000000000000000000000000000000000"),
new BigInt(0), new BigInt(0), new BigInt(1), null); // Random empty transaction
BigInt chain = new BigInt(1); // Chain identifier of the main net
```
With the boilerplate out of the way, we can now sign transactions using the authorization
mechanisms described above:
```java
// Sign a transaction with a single authorization
Transaction signed = ks.signTxPassphrase(signer, "Signer password", tx, chain);
// Sign a transaction with multiple manually cancelled authorizations
ks.unlock(signer, "Signer password");
signed = ks.signTx(signer, tx, chain);
ks.lock(signer.getAddress());
// Sign a transaction with multiple automatically cancelled authorizations
ks.timedUnlock(signer, "Signer password", 1000000000);
signed = ks.signTx(signer, tx, chain);
```
### Signing on iOS (Swift 3)
Assuming we already have an instance of a `GethKeyStore` called `ks` from the previous
sections, we can create a new account to sign transactions with via its already
demonstrated `newAccount` method; and to avoid going into transaction creation for now, we
can hard-code a random transaction to sign instead.
```swift
// Create a new account to sign transactions with
var error: NSError?
let signer = try! ks?.newAccount("Signer password")
let to = GethNewAddressFromHex("0x0000000000000000000000000000000000000000", &error)
let tx = GethNewTransaction(1, to, GethNewBigInt(0), GethNewBigInt(0), GethNewBigInt(0), nil) // Random empty transaction
let chain = GethNewBigInt(1) // Chain identifier of the main net
```
*Note, although Swift usually rewrites `NSError` returns to throws, this particular
instance seems to have been missed for some reason (possibly due to it being a
constructor). It will be fixed in a later version of the iOS bindings when the appropriate
fixes are implemented upstream in the `gomobile` project.*
With the boilerplate out of the way, we can now sign transactions using the authorization
methods described above:
```swift
// Sign a transaction with a single authorization
var signed = try! ks?.signTxPassphrase(signer, passphrase: "Signer password", tx: tx, chainID: chain)
// Sign a transaction with multiple manually cancelled authorizations
try! ks?.unlock(signer, passphrase: "Signer password")
signed = try! ks?.signTx(signer, tx: tx, chainID: chain)
try! ks?.lock(signer?.getAddress())
// Sign a transaction with multiple automatically cancelled authorizations
try! ks?.timedUnlock(signer, passphrase: "Signer password", timeout: 1000000000)
signed = try! ks?.signTx(signer, tx: tx, chainID: chain)
```
[sec2]: https://www.secg.org/sec2-v2.pdf
[accounts-go]: https://godoc.org/github.com/ethereum/go-ethereum/accounts
[secp256k1]: https://github.com/bitcoin-core/secp256k1
[secstore]: https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition

@@ -0,0 +1,180 @@
---
title: Mobile API
sort_key: F
---
The Ethereum blockchain along with its two extension protocols Whisper and Swarm was
originally conceptualized to become the supporting pillar of web3, providing the
consensus, messaging and storage backbone for a new generation of distributed (actually,
decentralized) applications called DApps.
The first incarnation towards this dream of web3 was a command line client providing an
RPC interface into the peer-to-peer protocols. The client was soon enough extended with a
web-browser-like graphical user interface, permitting developers to write DApps based on
the tried and proven HTML/CSS/JS technologies.
As many DApps have more complex requirements than what a browser environment can handle,
it became apparent that providing programmatic access to the web3 pillars would open the
door towards a new class of applications. As such, the second incarnation of the web
dream is to open up all our technologies for other projects as reusable components.
Starting with the 1.5 release family of `go-ethereum`, we transitioned away from providing
only a full blown Ethereum client and started shipping official Go packages that could be
embedded into third party desktop and server applications. It took only a small leap from
here to begin porting our code to mobile platforms.
## Quick overview
Similarly to our reusable Go libraries, the mobile wrappers also focus on four main usage
areas:
- Simplified client side account management
- Remote node interfacing via different transports
- Contract interactions through auto-generated bindings
- In-process Ethereum, Whisper and Swarm peer-to-peer node
You can watch a quick overview about these in Peter's (@karalabe) talk titled "Import
Geth: Ethereum from Go and beyond", presented at the Ethereum Devcon2 developer conference
in September, 2016 (Shanghai). Slides are [available
here](https://ethereum.karalabe.com/talks/2016-devcon.html).
[![Peter's Devcon2 talk](https://img.youtube.com/vi/R0Ia1U9Gxjg/0.jpg)](https://www.youtube.com/watch?v=R0Ia1U9Gxjg)
## Library bundles
The `go-ethereum` mobile library is distributed either as an Android `.aar` archive
(containing binaries for `arm-7`, `arm64`, `x86` and `x64`); or as an iOS XCode framework
(containing binaries for `arm-7`, `arm64` and `x86`). We do not provide library bundles
for Windows Phone at the moment.
### Android archive
The simplest way to use `go-ethereum` in your Android project is through a Maven
dependency. We provide bundles of all our stable releases (starting from v1.5.0) through
Maven Central, and also provide the latest develop bundle through the Sonatype OSS
repository.
#### Stable dependency (Maven Central)
To add an Android dependency to the **stable** library release of `go-ethereum`, you'll
need to ensure that the Maven Central repository is enabled in your Android project, and
that the `go-ethereum` code is listed as a required dependency of your application. You
can do both of these by editing the `build.gradle` script in your Android app's folder:
```gradle
repositories {
    mavenCentral()
}

dependencies {
    // All your previous dependencies
    compile 'org.ethereum:geth:1.5.2' // Change the version to the latest release
}
```
#### Develop dependency (Sonatype)
To add an Android dependency to the current version of `go-ethereum`, you'll need to
ensure that the Sonatype snapshot repository is enabled in your Android project, and that
the `go-ethereum` code is listed as a required `SNAPSHOT` dependency of your application.
You can do both of these by editing the `build.gradle` script in your Android app's
folder:
```gradle
repositories {
    maven {
        url "https://oss.sonatype.org/content/groups/public"
    }
}

dependencies {
    // All your previous dependencies
    compile 'org.ethereum:geth:1.5.3-SNAPSHOT' // Change the version to the latest release
}
```
#### Custom dependency
If you prefer not to depend on Maven Central or Sonatype; or would like to access an older
develop build not available any more as an online dependency, you can download any bundle
directly from [our website](https://geth.ethereum.org/downloads/) and insert it into your
project in Android Studio via `File -> New -> New module... -> Import .JAR/.AAR Package`.
You will also need to configure `gradle` to link the mobile library bundle to your
application. This can be done by adding a new entry to the `dependencies` section of your
`build.gradle` script, pointing it to the module you just added (named `geth` by default).
```gradle
dependencies {
    // All your previous dependencies
    compile project(':geth')
}
```
#### Manual builds
Lastly, if you would like to make modifications to the `go-ethereum` mobile code and/or
build it yourself locally instead of downloading a pre-built bundle, you can do so using a
`make` command. This will create an Android archive called `geth.aar` in the `build/bin`
folder that you can import into your Android Studio as described above.
```bash
$ make android
[...]
Done building.
Import "build/bin/geth.aar" to use the library.
```
### iOS framework
The simplest way to use `go-ethereum` in your iOS project is through a
[CocoaPods](https://cocoapods.org/) dependency. We provide bundles of all our stable
releases (starting from v1.5.3) and also latest develop versions.
#### Automatic dependency
To add an iOS dependency to the current stable or latest develop version of `go-ethereum`,
you'll need to ensure that your iOS XCode project is configured to use CocoaPods.
Detailing that is out of scope in this document, but you can find a guide in the upstream
[Using CocoaPods](https://guides.cocoapods.org/using/using-cocoapods.html) page.
Afterwards you can edit your `Podfile` to list `go-ethereum` as a dependency:
```ruby
target 'MyApp' do
  # All your previous dependencies
  pod 'Geth', '1.5.4' # Change the version to the latest release
end
```
Alternatively, if you'd like to use the latest develop version, replace the package
version `1.5.4` with `~> 1.5.5-unstable` to switch to pre-releases and to always pull in
the latest bundle from a particular release family.
#### Custom dependency
If you prefer not to depend on CocoaPods; or would like to access an older develop build
not available any more as an online dependency, you can download any bundle directly from
[our website](https://geth.ethereum.org/downloads/) and insert it into your project in
XCode via `Project Settings -> Build Phases -> Link Binary With Libraries`.
Do not forget to extract the framework from the compressed `.tar.gz` archive. You can do
that either using a GUI tool or from the command line via (replace the archive with your
downloaded file):
```
tar -zxvf geth-ios-all-1.5.3-unstable-e05d35e6.tar.gz
```
#### Manual builds
Lastly, if you would like to make modifications to the `go-ethereum` mobile code and/or
build it yourself locally instead of downloading a pre-built bundle, you can do so using a
`make` command. This will create an iOS XCode framework called `Geth.framework` in the
`build/bin` folder that you can import into XCode as described above.
```bash
$ make ios
[...]
Done building.
Import "build/bin/Geth.framework" to use the library.
```

@ -0,0 +1,223 @@
---
title: Go Account Management
sort_key: D
---
Geth provides a simple, yet thorough accounts package that gives developers the tools
needed to leverage the security of Geth's crypto implementation in a Go native application.
Account management is done client side, with all sensitive data held inside the application.
This gives the user control over access permissions without relying on any third party.
**Note: Geth's built-in account management is convenient and straightforward to use, but
best practice is to use the external tool *Clef* for key management.**
{:toc}
- this will be removed by the toc
## Encrypted keystores
Access keys to Ethereum accounts should never be stored in plain-text. Instead, they should be
stored encrypted so that even if the device holding them is accessed by a malicious third party the
keys are still hidden under an additional layer of security. Geth provides a keystore that enables
developers to store keys securely. The Geth keystore uses [Scrypt][scrypt-docs] to securely store
keys that are defined on the [`secp256k1`][secp256k1] elliptic curve. Accounts are stored on disk in the
[Web3 Secret Storage][wss] format. Developers should be aware of these implementation details
but are not required to deeply understand the cryptographic primitives in order to use the keystore.
One thing that should be understood, though, is that the cryptographic primitives underpinning the
keystore can operate in light or standard mode. Light mode is computationally cheaper, while standard
mode has extra security. Light mode is appropriate for mobile devices, but developers should be
aware that there is a security trade-off.
* standard needs 256MB memory and 1 second processing on a modern CPU to access a key
* light needs 4MB memory and 100 millisecond processing on a modern CPU to access a key
The encrypted keystore is implemented by the [`accounts.Manager`][accounts-manager] struct
from the [`accounts`][accounts-pkg] package, which also contains the configuration constants for the
*standard* or *light* security modes described above. Hence client side account management
simply requires importing the `accounts` package into the application code.
```go
import "github.com/ethereum/go-ethereum/accounts"
import "github.com/ethereum/go-ethereum/accounts/keystore"
import "github.com/ethereum/go-ethereum/common"
```
Afterwards a new encrypted account manager can be created via:
```go
ks := keystore.NewKeyStore("/path/to/keystore", keystore.StandardScryptN, keystore.StandardScryptP)
am := accounts.NewManager(&accounts.Config{InsecureUnlockAllowed: false}, ks)
```
The path to the keystore folder needs to be a location that is writable by the local user
but non-readable for other system users, such as inside the user's home directory.
The last two arguments of [`keystore.NewKeyStore`][keystore] are the crypto parameters defining
how resource-intensive the keystore encryption should be. The options are
[`keystore.StandardScryptN, keystore.StandardScryptP`, `keystore.LightScryptN,
keystore.LightScryptP`][pkg-constants] or custom values (requiring a deeper understanding of the underlying
cryptography). The *standard* version is recommended.
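For illustration only, a keystore configured with the lighter parameters described above might look like the following sketch (the path is a placeholder):
```go
// Create a keystore using the light Scrypt parameters, e.g. for resource-constrained devices.
ksLight := keystore.NewKeyStore("/path/to/keystore", keystore.LightScryptN, keystore.LightScryptP)
_ = ksLight // use it exactly like the standard keystore created above
```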
## Account lifecycle
Once an encrypted keystore for Ethereum accounts exists, it can be used to manage accounts through
their entire lifecycle in a Go native application. This includes the basic functionality
of creating new accounts and deleting existing ones, as well as updating access credentials,
exporting existing accounts, and importing them on other devices.
Although the keystore defines the encryption strength it uses to store accounts, there is no global master
password that can grant access to all of them. Rather, each account is maintained individually and stored on
disk in its own [encrypted format][wss], ensuring a much cleaner and stricter separation of
credentials.
This individuality means that any operation requiring access to an account will need to provide the
necessary authentication credentials for that particular account in the form of a passphrase:
* When creating a new account, the caller must supply a passphrase to encrypt the account
with. This passphrase will be required for any subsequent access, the lack of which
will forever forfeit using the newly created account.
* When deleting an existing account, the caller must supply a passphrase to verify
ownership of the account. This isn't cryptographically necessary, rather a protective
measure against accidental loss of accounts.
* When updating an existing account, the caller must supply both current and new
passphrases. After completing the operation, the account will not be accessible via the
old passphrase any more.
* When exporting an existing account, the caller must supply both the current passphrase
to decrypt the account, as well as an export passphrase to re-encrypt it with before
returning the key-file to the user. This is required to allow moving accounts between
machines and applications without sharing original credentials.
* When importing a new account, the caller must supply both the encryption passphrase of
the key-file being imported, as well as a new passphrase with which to store the
account. This is required to allow storing accounts with different credentials than those
used for moving them around.
***Please note, there are no recovery mechanisms for lost passphrases. The
cryptographic properties of the encrypted keystore (using the provided parameters)
guarantee that account credentials cannot be brute forced in any meaningful time.***
An Ethereum account is implemented by the [`accounts.Account`][accounts-account] struct from
the Geth [accounts][accounts-pkg] package. Assuming the keystore `ks` created
above exists, all of the described lifecycle
operations can be executed with a handful of function calls (error handling omitted).
```go
// Create a new account with the specified encryption passphrase.
newAcc, _ := ks.NewAccount("Creation password")
fmt.Println(newAcc)
// Export the newly created account with a different passphrase. The returned
// data from this method invocation is a JSON encoded, encrypted key-file.
jsonAcc, _ := ks.Export(newAcc, "Creation password", "Export password")
// Update the passphrase on the account created above inside the local keystore.
_ = ks.Update(newAcc, "Creation password", "Update password")
// Delete the account updated above from the local keystore.
_ = ks.Delete(newAcc, "Update password")
// Import back the account we've exported (and then deleted) above with yet
// again a fresh passphrase.
impAcc, _ := ks.Import(jsonAcc, "Export password", "Import password")
```
*Although instances of [`accounts.Account`][accounts-account] can be used to access various information about
specific Ethereum accounts, they do not contain any sensitive data (such as passphrases or private keys),
rather they act solely as identifiers for client code and the keystore.*
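As a small illustrative sketch (not part of the original text), the accounts currently held in the keystore can be enumerated via the keystore's `Accounts` method; each entry exposes only its address and key-file location, never any secret material:
```go
// List the accounts stored in the keystore; only identifiers are available here.
for _, acc := range ks.Accounts() {
	fmt.Println("account:", acc.Address.Hex(), "key file:", acc.URL.Path)
}
```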
## Signing authorization
Account objects do not hold the sensitive private keys of the associated Ethereum accounts.
Account objects are placeholders that identify the cryptographic keys. All operations that
require authorization (e.g. transaction signing) are performed by the account manager after
granting it access to the private keys.
There are a few different ways to authorize the account manager to execute signing
operations, each having its advantages and drawbacks. Since the different methods have
wildly different security guarantees, it is essential to be clear on how each works:
* **Single authorization**: The simplest way to sign a transaction via the account
manager is to provide the passphrase of the account every time something needs to be
signed, which will ephemerally decrypt the private key, execute the signing operation
and immediately throw away the decrypted key. The drawbacks are that the passphrase
needs to be queried from the user every time, which can become annoying if done
frequently or the application needs to keep the passphrase in memory, which can have
security consequences if not done properly. Depending on the keystore's configured
strength, constantly decrypting keys can result in non-negligible resource
requirements.
* **Multiple authorizations**: A more complex way of signing transactions via the account
manager is to unlock the account via its passphrase once, and allow the account manager
to cache the decrypted private key, enabling all subsequent signing requests to
complete without the passphrase. The lifetime of the cached private key may be managed
manually (by explicitly locking the account back up) or automatically (by providing a
timeout during unlock). This mechanism is useful for scenarios where the user may need
to sign many transactions or the application would need to do so without requiring user
input. The crucial aspect to remember is that **anyone with access to the account
manager can sign transactions while a particular account is unlocked** (e.g.
application running untrusted code).
Assuming the keystore `ks` created earlier exists, a new
account can be created to sign transactions using [`NewAccount`][new-account]. Creating transactions
is out of scope for this page, so instead an arbitrary [`common.Hash`][common-hash] will be signed.
For information on creating transactions in Go native applications see the [Go API page](/docs/dapp/native).
```go
// Create a new account to sign transactions with
signer, _ := ks.NewAccount("Signer password")
txHash := common.HexToHash("0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef")
```
With the boilerplate out of the way, the transaction can be signed using the authorization
mechanisms described above:
```go
// Sign a transaction with a single authorization
signature, _ := ks.SignHashWithPassphrase(signer, "Signer password", txHash.Bytes())
// Sign a transaction with multiple manually cancelled authorizations
_ = ks.Unlock(signer, "Signer password")
signature, _ = ks.SignHash(signer, txHash.Bytes())
_ = ks.Lock(signer.Address)
// Sign a transaction with multiple automatically cancelled authorizations
_ = ks.TimedUnlock(signer, "Signer password", time.Second)
signature, _ = ks.SignHash(signer, txHash.Bytes())
```
Note that [`SignWithPassphrase`][sign-w-phrase] takes an [`accounts.Account`][accounts-account] as the
signer, whereas [`Sign`][accounts-sign] takes only a [`common.Address`][common-address]. The reason
for this is that an [`accounts.Account`][accounts-account] object may also contain a custom key-path, allowing
[`SignWithPassphrase`][sign-w-phrase] to sign using accounts outside of the keystore; however
[`Sign`][accounts-sign] relies on accounts already unlocked within the keystore, so it cannot specify custom paths.
## Summary
Account management is a fundamental pillar of Ethereum development. Geth's Go API provides the tools required
to integrate best-practice account security into Go native applications using a simple set of Go functions.
[common-address]: https://godoc.org/github.com/ethereum/go-ethereum/common#Address
[accounts-sign]: https://godoc.org/github.com/ethereum/go-ethereum/accounts#Manager.Sign
[sign-w-phrase]: https://godoc.org/github.com/ethereum/go-ethereum/accounts#Manager.SignWithPassphrase
[secp256k1]: https://www.secg.org/sec2-v2.pdf
[libsecp256k1]: https://github.com/bitcoin-core/secp256k1
[wss]:https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition
[go-accounts]:https://godoc.org/github.com/ethereum/go-ethereum/accounts
[accounts-manager]: https://godoc.org/github.com/ethereum/go-ethereum/accounts#Manager
[accounts-pkg]: https://godoc.org/github.com/ethereum/go-ethereum/accounts
[keystore]: https://godoc.org/github.com/ethereum/go-ethereum/accounts/keystore#NewKeyStore
[pkg-constants]: https://godoc.org/github.com/ethereum/go-ethereum/accounts/keystore#pkg-constants
[accounts-account]:https://godoc.org/github.com/ethereum/go-ethereum/accounts#Account
[new-account]: https://godoc.org/github.com/ethereum/go-ethereum/accounts#Manager.NewAccount
[common-hash]: https://godoc.org/github.com/ethereum/go-ethereum/common#Hash
[scrypt-docs]: https://pkg.go.dev/golang.org/x/crypto/scrypt

@ -0,0 +1,617 @@
---
title: Go Contract Bindings
sort_key: E
---
This page introduces the concept of server-side native dapps. Geth provides the tools required
to generate [Go][go-link] language bindings for any Ethereum contract that are compile-time type-safe,
highly performant and generated completely automatically from a compiled contract.
Interacting with a contract on the Ethereum blockchain from Go is already possible via the
RPC interfaces exposed by Ethereum clients. However, writing the boilerplate code that
translates Go language constructs into RPC calls and back is time consuming and brittle -
implementation bugs can only be detected during runtime and it's almost impossible to evolve
a contract as even a tiny change in Solidity is awkward to port over to Go. Therefore,
Geth provides tools for easily converting contract code into Go code that can be used directly
in Go applications.
This page provides an introduction to generating Go contract bindings and using them in a simple
Go application.
{:toc}
- this will be removed by the toc
## Prerequisites
This page is fairly beginner-friendly and designed for people starting out with
writing Go native dapps. The core concepts will be introduced gradually as a developer
would encounter them. However, some basic familiarity with [Ethereum](https://ethereum.org),
[Solidity](https://docs.soliditylang.org/en/v0.8.15/) and [Go](https://go.dev/) is
assumed.
## What is an ABI?
Ethereum smart contracts have a schema that defines their functions and return types in the form
of a JSON file. This JSON file is known as an *Application Binary Interface*, or ABI. The ABI
acts as a specification for precisely how to encode data sent to a contract and how to
decode the data the contract sends back. The ABI is the only essential piece of information required to
generate Go bindings. Go developers can then use the bindings to interact with the contract
from their Go application without having to deal directly with data encoding and decoding.
An ABI is generated when a contract is compiled.
## Abigen: Go binding generator
Geth includes a source code generator called `abigen` that can convert Ethereum ABI definitions
into easy to use, type-safe Go packages. With a valid Go development environment
set up and the go-ethereum repository checked out correctly, `abigen` can be built as follows:
```
$ cd $GOPATH/src/github.com/ethereum/go-ethereum
$ go build ./cmd/abigen
```
### Generating the bindings
To demonstrate the binding generator a contract is required. The contract `Storage.sol` implements two
very simple functions: `store` writes a user-defined `uint256` to the contract's storage, and `retrieve`
returns the stored value to the user. The Solidity code is as follows:
```solidity
// SPDX-License-Identifier: GPL-3.0
pragma solidity >0.7.0 <0.9.0;

/**
 * @title Storage
 * @dev store or retrieve variable value
 */
contract Storage {

    uint256 value;

    function store(uint256 number) public {
        value = number;
    }

    function retrieve() public view returns (uint256) {
        return value;
    }
}
```
This contract can be pasted into a text file and saved as `Storage.sol`.
The following code snippet shows how an ABI can be generated for `Storage.sol`
using the Solidity compiler `solc`.
```shell
solc --abi Storage.sol -o build
```
The ABI can also be generated in other ways, for example using the `compile` commands in development
frameworks such as [Truffle][truffle-link], [Hardhat][hardhat-link] and [Brownie][brownie-link]
or in the online IDE [Remix][remix-link]. ABIs for existing
verified contracts can be downloaded from [Etherscan](https://etherscan.io).
The ABI for `Storage.sol` (`Storage.abi`) looks as follows:
```json
[{"inputs":[],"name":"retrieve","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"stateMutability":"view","type":"function"},{"inputs":[{"internalType":"uint256","name":"number","type":"uint256"}],"name":"store","outputs":[],"stateMutability":"nonpayable","type":"function"}]
```
The contract binding can then be generated by passing the ABI to `abigen` as follows:
```
$ abigen --abi Storage.abi --pkg main --type Storage --out Storage.go
```
Where the flags are:
* `--abi`: Mandatory path to the contract ABI to bind to
* `--pkg`: Mandatory Go package name to place the Go code into
* `--type`: Optional Go type name to assign to the binding struct
* `--out`: Optional output path for the generated Go source file (not set = stdout)
This will generate a type-safe Go binding for the Storage contract. The generated code will
look something like the snippet below, the full version of which can be viewed
[here](https://gist.github.com/jmcook1186/a78e59d203bb54b06e1b81f2cda79d93).
```go
// Code generated - DO NOT EDIT.
// This file is a generated binding and any manual changes will be lost.

package main

import (
	"errors"
	"math/big"
	"strings"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/event"
)

// Reference imports to suppress errors if they are not otherwise used.
var (
	_ = errors.New
	_ = big.NewInt
	_ = strings.NewReader
	_ = ethereum.NotFound
	_ = bind.Bind
	_ = common.Big1
	_ = types.BloomLookup
	_ = event.NewSubscription
)

// StorageMetaData contains all meta data concerning the Storage contract.
var StorageMetaData = &bind.MetaData{
	ABI: "[{\"inputs\":[],\"name\":\"retrieve\",\"outputs\":[{\"internalType\":\"uint256\",\"name\":\"\",\"type\":\"uint256\"}],\"stateMutability\":\"view\",\"type\":\"function\"},{\"inputs\":[{\"internalType\":\"uint256\",\"name\":\"number\",\"type\":\"uint256\"}],\"name\":\"store\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]",
}

// StorageABI is the input ABI used to generate the binding from.
// Deprecated: Use StorageMetaData.ABI instead.
var StorageABI = StorageMetaData.ABI

// Storage is an auto generated Go binding around an Ethereum contract.
type Storage struct {
	StorageCaller     // Read-only binding to the contract
	StorageTransactor // Write-only binding to the contract
	StorageFilterer   // Log filterer for contract events
}

...
```
`Storage.go` contains all the bindings required to interact with `Storage.sol` from a Go application.
However, this isn't very useful unless the contract is actually deployed on Ethereum or one of
Ethereum's testnets. The following sections will demonstrate how to deploy the contract to
an Ethereum testnet and interact with it using the Go bindings.
### Deploying contracts to Ethereum
In the previous section, the contract ABI was sufficient for generating the contract bindings.
However, deploying the contract requires some additional information in the form of the compiled
bytecode.
The bytecode is obtained by running the compiler again but this time passing the `--bin` flag, e.g.
```shell
solc --bin Storage.sol -o Storage.bin
```
Then `abigen` can be run again, this time passing `Storage.bin`:
```
$ abigen --abi Storage.abi --pkg main --type Storage --out Storage.go --bin Storage.bin
```
This will generate something similar to the bindings generated in the previous section. However,
an additional `DeployStorage` function has been injected:
```go
// DeployStorage deploys a new Ethereum contract, binding an instance of Storage to it.
func DeployStorage(auth *bind.TransactOpts, backend bind.ContractBackend) (common.Address, *types.Transaction, *Storage, error) {
	parsed, err := StorageMetaData.GetAbi()
	if err != nil {
		return common.Address{}, nil, nil, err
	}
	if parsed == nil {
		return common.Address{}, nil, nil, errors.New("GetABI returned nil")
	}

	address, tx, contract, err := bind.DeployContract(auth, *parsed, common.FromHex(StorageBin), backend)
	if err != nil {
		return common.Address{}, nil, nil, err
	}
	return address, tx, &Storage{StorageCaller: StorageCaller{contract: contract}, StorageTransactor: StorageTransactor{contract: contract}, StorageFilterer: StorageFilterer{contract: contract}}, nil
}
```
View the full file [here](https://gist.github.com/jmcook1186/91124cfcbc7f22dcd3bb4f148d2868a8).
The new `DeployStorage()` function can be used to deploy the contract to an Ethereum testnet from a Go application. Doing so
requires incorporating the bindings into a Go application that also handles account management and authorization, and provides an
Ethereum backend through which to deploy the contract. Specifically, this requires:
1. A running Geth node connected to an Ethereum testnet (recommended Goerli)
2. An account in the keystore prefunded with enough ETH to cover gas costs for deploying and interacting with the contract
Assuming these prerequisites exist, a new `ethclient` can be instantiated with the local Geth node's ipc file, providing
access to the testnet from the Go application. The key can be instantiated as a variable in the application by copying the
JSON object from the keyfile in the keystore.
Putting it all together would result in:
```go
package main

import (
	"fmt"
	"log"
	"strings"
	"time"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/ethclient"
)

const key = `<<json object from keystore>>`

func main() {
	// Create an IPC based RPC connection to a remote node and an authorized transactor
	conn, err := ethclient.Dial("/home/go-ethereum/goerli/geth.ipc")
	if err != nil {
		log.Fatalf("Failed to connect to the Ethereum client: %v", err)
	}
	auth, err := bind.NewTransactor(strings.NewReader(key), "<<strong_password>>")
	if err != nil {
		log.Fatalf("Failed to create authorized transactor: %v", err)
	}
	// Deploy the contract passing the newly created `auth` and `conn` vars
	address, tx, instance, err := DeployStorage(auth, conn)
	if err != nil {
		log.Fatalf("Failed to deploy new storage contract: %v", err)
	}
	fmt.Printf("Contract pending deploy: 0x%x\n", address)
	fmt.Printf("Transaction waiting to be mined: 0x%x\n\n", tx.Hash())

	time.Sleep(250 * time.Millisecond) // Allow it to be processed by the local node :P

	// Read the stored value from the pending contract state
	value, err := instance.Retrieve(&bind.CallOpts{Pending: true})
	if err != nil {
		log.Fatalf("Failed to retrieve pending value: %v", err)
	}
	fmt.Println("Pending value:", value)
}
```
Running this code requests the creation of a brand new `Storage` contract on the Goerli blockchain.
The contract functions can be called while the contract is waiting to be mined.
```
Contract pending deploy: 0x46506d900559ad005feb4645dcbb2dbbf65e19cc
Transaction waiting to be mined: 0x6a81231874edd2461879b7280ddde1a857162a744e3658ca7ec276984802183b
Pending value: 0
```
Once mined, the contract exists permanently at its deployment address and can now be interacted with
from other applications without ever needing to be redeployed.
Note that `DeployStorage` returns four variables:
- `address`: the address at which the contract will be deployed
- `tx`: the deployment transaction, whose hash can be queried using Geth or a service like [Etherscan](https://etherscan.io), or awaited directly as sketched below
- `instance`: an instance of the deployed contract whose functions can be called in the Go application
- `err`: an error that is non-nil if the deployment fails
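As an optional, illustrative sketch (not part of the original flow), the fixed `time.Sleep` in the deployment example can be replaced by explicitly waiting for the deployment to be mined using the `bind.WaitDeployed` helper; this assumes the `conn` and `tx` variables from above plus `context`, `fmt` and `log` imports:
```go
// Block until the deployment transaction is mined and the contract code is live on-chain.
deployedAddr, err := bind.WaitDeployed(context.Background(), conn, tx)
if err != nil {
	log.Fatalf("Failed to wait for contract deployment: %v", err)
}
fmt.Println("Contract deployed at:", deployedAddr.Hex())
```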
### Accessing an Ethereum contract
To interact with a contract already deployed on the blockchain, the deployment `address` is required and
a `backend` through which to access Ethereum must be defined. The binding generator provides an RPC
backend out-of-the-box that can be used to attach to an existing Ethereum node via IPC, HTTP or WebSockets.
As in the previous section, a Geth node running on an Ethereum testnet (Goerli is recommended) and an account
with some test ETH to cover gas are required. The `Storage.sol` deployment address is also needed.
Again, an instance of `ethclient` can be created, passing the path to Geth's ipc file. In the example
below this backend is assigned to the variable `conn`.
```go
// Create an IPC based RPC connection to a remote node
// NOTE update the path to the ipc file!
conn, err := ethclient.Dial("/home/go-ethereum/goerli/geth.ipc")
if err != nil {
	log.Fatalf("Failed to connect to the Ethereum client: %v", err)
}
```
The functions available for interacting with the `Storage` contract are defined in `Storage.go`. To create
a new instance of the contract in a Go application, the `NewStorage()` function can be used. The function
is defined in `Storage.go` as follows:
```go
// NewStorage creates a new instance of Storage, bound to a specific deployed contract.
func NewStorage(address common.Address, backend bind.ContractBackend) (*Storage, error) {
	contract, err := bindStorage(address, backend, backend, backend)
	if err != nil {
		return nil, err
	}
	return &Storage{StorageCaller: StorageCaller{contract: contract}, StorageTransactor: StorageTransactor{contract: contract}, StorageFilterer: StorageFilterer{contract: contract}}, nil
}
```
`NewStorage()` takes two arguments: the deployment address and a backend (`conn`) and returns
an instance of the deployed contract. In the example below, the instance is assigned to `store`.
```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Create an IPC based RPC connection to a remote node
	// NOTE update the path to the ipc file!
	conn, err := ethclient.Dial("/home/go-ethereum/goerli/geth.ipc")
	if err != nil {
		log.Fatalf("Failed to connect to the Ethereum client: %v", err)
	}
	// Instantiate the contract
	// NOTE update the deployment address!
	store, err := NewStorage(common.HexToAddress("0x21e6fc92f93c8a1bb41e2be64b4e1f88a54d3576"), conn)
	if err != nil {
		log.Fatalf("Failed to instantiate Storage contract: %v", err)
	}
```
The contract instance is then available to interact with in the Go application. To read a value from
the blockchain, for example the `value` stored in the contract, the contract's `Retrieve()` function
can be called. Again, the function is defined in `Storage.go` as follows:
```go
// Retrieve is a free data retrieval call binding the contract method 0x2e64cec1.
//
// Solidity: function retrieve() view returns(uint256)
func (_Storage *StorageCaller) Retrieve(opts *bind.CallOpts) (*big.Int, error) {
	var out []interface{}
	err := _Storage.contract.Call(opts, &out, "retrieve")

	if err != nil {
		return *new(*big.Int), err
	}

	out0 := *abi.ConvertType(out[0], new(*big.Int)).(**big.Int)

	return out0, err
}
```
Note that the `Retrieve()` function requires a parameter to be passed, even though the
original Solidity contract didn't require any at all. The parameter required is
a `*bind.CallOpts` type, which can be used to fine-tune the call. If no adjustments to the
call are required, pass `nil`. Adjustments to the call include:
* `Pending`: Whether to access pending contract state or the current stable one
* `GasLimit`: Place a limit on the computing resources the call might consume
So to call the `Retrieve()` function in the Go application:
```go
	value, err := store.Retrieve(nil)
	if err != nil {
		log.Fatalf("Failed to retrieve value: %v", err)
	}
	fmt.Println("Value: ", value)
}
```
The output will be something like:
```terminal
Value: 56
```
### Transacting with an Ethereum contract
Invoking a method that changes contract state (i.e. transacting) is a bit more involved,
as a live transaction needs to be authorized and broadcast into the network. **Go bindings
require local signing of transactions and do not delegate this to a remote node.** This is
to keep accounts private within dapps, and not shared (by default) between them.
Thus to allow transacting with a contract, your code needs to implement a method that
given an input transaction, signs it and returns an authorized output transaction. Since
most users have their keys in the [Web3 Secret Storage][web3-ss-link] format, the `bind`
package contains a small utility method (`bind.NewTransactor(keyjson, passphrase)`) that can
create an authorized transactor from a key file and associated password, without the user
needing to implement key signing themselves.
Changing the previous code snippet to update the value stored in the contract:
```go
package main

import (
	"fmt"
	"log"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

const key = `json object from keystore`

func main() {
	// Create an IPC based RPC connection to a remote node and instantiate a contract binding
	conn, err := ethclient.Dial("/home/go-ethereum/goerli/geth.ipc")
	if err != nil {
		log.Fatalf("Failed to connect to the Ethereum client: %v", err)
	}
	store, err := NewStorage(common.HexToAddress("0x21e6fc92f93c8a1bb41e2be64b4e1f88a54d3576"), conn)
	if err != nil {
		log.Fatalf("Failed to instantiate a Storage contract: %v", err)
	}
	// Create an authorized transactor from the key-file and passphrase
	auth, err := bind.NewTransactor(strings.NewReader(key), "strong_password")
	if err != nil {
		log.Fatalf("Failed to create authorized transactor: %v", err)
	}
	// Call the store() function
	tx, err := store.Store(auth, big.NewInt(420))
	if err != nil {
		log.Fatalf("Failed to update value: %v", err)
	}
	fmt.Printf("Update pending: 0x%x\n", tx.Hash())
}
```
And the output:
```terminal
Update pending: 0x4f4aaeb29ed48e88dd653a81f0b05d4df64a86c99d4e83b5bfeb0f0006b0e55b
```
Similar to the method invocations in the previous section which only read contract state,
transacting methods also require a mandatory first parameter, a `*bind.TransactOpts` type,
which authorizes the transaction and potentially fine tunes it:
* `From`: Address of the account to invoke the method with (mandatory)
* `Signer`: Method to sign a transaction locally before broadcasting it (mandatory)
* `Nonce`: Account nonce to use for the transaction ordering (optional)
* `GasLimit`: Place a limit on the computing resources the call might consume (optional)
* `GasPrice`: Explicitly set the gas price to run the transaction with (optional)
* `Value`: Any funds to transfer along with the method call (optional)
The two mandatory fields are automatically set by the `bind` package if the auth options are
constructed using `bind.NewTransactor`. The nonce and gas related fields are automatically
derived by the binding if they are not set. Unset values are assumed to be zero.
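As a brief illustrative sketch (the values below are arbitrary and not from the original text), individual fields of the transact options can be overridden before issuing a call:
```go
// Fine-tune the authorized transactor before sending the transaction.
auth.GasLimit = uint64(300000) // cap the gas the transaction may consume
auth.Value = big.NewInt(0)     // no Ether transferred along with the call
tx, err := store.Store(auth, big.NewInt(7))
if err != nil {
	log.Fatalf("Failed to update value: %v", err)
}
fmt.Printf("Update pending: 0x%x\n", tx.Hash())
```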
### Pre-configured contract sessions
Reading and state-modifying contract calls require a mandatory first parameter which can
authorize and fine-tune some of the internal parameters. However, most of the time the
same accounts and parameters will be used to issue many transactions, so constructing
the call/transact options individually quickly becomes unwieldy.
To avoid this, the generator also creates specialized wrappers that can be pre-configured with
tuning and authorization parameters, allowing all the Solidity defined methods to be invoked
without needing an extra parameter.
These are named similarly to the original contract type name, but suffixed with `Session`:
```go
// Wrap the Storage contract instance into a session
session := &StorageSession{
	Contract: store,
	CallOpts: bind.CallOpts{
		Pending: true,
	},
	TransactOpts: bind.TransactOpts{
		From:     auth.From,
		Signer:   auth.Signer,
		GasLimit: 3141592,
	},
}
// Call the previous methods without the option parameters
session.Store(big.NewInt(69))
```
## Bind Solidity directly
In the past, abigen allowed compilation and binding of a Solidity source file directly to a Go package in a single step.
This feature has been discontinued from [v1.10.18](https://github.com/ethereum/go-ethereum/releases/tag/v1.10.18)
onwards due to maintenance synchronization challenges with the compiler in Geth.
The compilation and binding steps can be joined together into a pipeline, for example:
```
solc Storage.sol --combined-json abi,bin | abigen --pkg main --type storage --out Storage.go --combined-json -
```
### Project integration (`go generate`)
The `abigen` command was made in such a way as to integrate easily into existing
Go toolchains: instead of having to remember the exact command needed to bind an Ethereum
contract into a Go project, `go generate` can handle all the fine details.
Place the binding generation command into a Go source file before the package definition:
```
//go:generate sh -c "solc Storage.sol --combined-json abi,bin | abigen --pkg main --type Storage --out Storage.go --combined-json -"
```
After which whenever the Solidity contract is modified, instead of needing to remember and
run the above command, we can simply call `go generate` on the package (or even the entire
source tree via `go generate ./...`), and it will correctly generate the new bindings for us.
## Blockchain simulator
Being able to deploy and access deployed Ethereum contracts from native Go code is a powerful
feature. However, using public testnets as a backend does not lend itself well to
*automated unit testing*. Therefore, Geth also implements a *simulated blockchain*
that can be set as a backend to native contracts in the same way as a live RPC backend, using the
constructor `backends.NewSimulatedBackend(genesisAlloc, gasLimit)`. The code snippet below shows how this
can be used as a backend in a Go application.
```go
package main

import (
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Generate a new random account and a funded simulator
	key, _ := crypto.GenerateKey()
	auth := bind.NewKeyedTransactor(key)
	alloc := core.GenesisAlloc{auth.From: {Balance: big.NewInt(10000000000)}}
	sim := backends.NewSimulatedBackend(alloc, 8000000)

	// Deploy the Storage contract onto the simulated blockchain
	_, _, store, err := DeployStorage(auth, sim)
	if err != nil {
		log.Fatalf("Failed to deploy Storage contract: %v", err)
	}

	// Call the store() function
	tx, err := store.Store(auth, big.NewInt(420))
	if err != nil {
		log.Fatalf("Failed to update value: %v", err)
	}
	fmt.Printf("Update pending: 0x%x\n", tx.Hash())
}
```
Note that it is not necessary to wait for a local private chain miner or testnet miner to
integrate the currently pending transactions. To mine the next block, simply `Commit()` the simulator.
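To illustrate, a short sketch continuing the example above: after committing a block, the stored value can be read back from the simulated chain immediately:
```go
// Mine the pending transactions into a new block on the simulated chain.
sim.Commit()

// The updated value is now part of the simulated chain state.
value, err := store.Retrieve(nil)
if err != nil {
	log.Fatalf("Failed to retrieve value: %v", err)
}
fmt.Println("Stored value:", value) // 420
```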
## Summary
To make interacting with Ethereum contracts easier for Go developers, Geth provides tools that generate
contract bindings automatically. This makes contract functions available in Go native applications.
[go-link]:https://github.com/golang/go/wiki#getting-started-with-go
[truffle-link]:https://trufflesuite.com/docs/truffle/
[hardhat-link]:https://hardhat.org/
[brownie-link]:https://eth-brownie.readthedocs.io/en/stable/
[remix-link]:https://remix.ethereum.org/
[web3-ss-link]:https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition

@ -0,0 +1,249 @@
---
title: Go API
sort_key: C
---
Ethereum was originally conceptualized to be the base layer for [Web3][web3-link], providing
the backbone for a new generation of decentralized, permissionless and censorship resistant
applications called [dapps][dapp-link]. The first step towards this vision was the development
of clients providing an RPC interface into the peer-to-peer protocols. This allowed users to
transact between accounts and interact with smart contracts using command line tools.
Geth was one of the original clients to provide this type of gateway to the Ethereum network.
Before long, web-browser-like graphical interfaces (e.g. Mist) were created to extend clients, and
client functions were built into websites using the time-tested HTML/CSS/JS stack.
However, to support the most diverse, complex dapps, developers require programmatic access to client
functions through an API. This opens up client technologies as re-usable, composable units that
can be applied in creative ways by a global community of developers.
To support this, Geth ships official Go packages that can be embedded into third party
desktop and server applications. There is also a [mobile API](/docs/dapp/mobile) that can be
used to embed Geth into mobile applications.
This page provides a high-level overview of the Go API.
*Note, this guide will assume some familiarity with Go development. It does not cover general topics
about Go project layouts, import paths or any other standard methodologies. If you are new to Go,
consider reading [Getting Started with Go][go-guide] first.*
## Overview
Geth's reusable Go libraries focus on three main usage areas:
- Simplified client side account management
- Remote node interfacing via different transports
- Contract interactions through auto-generated bindings
The libraries are updated synchronously with the Geth Github repository.
The Go libraries can be viewed in full at [Go Packages][go-pkg-link].
Péter Szilágyi (@karalabe) gave a high level overview of the Go libraries in
a talk at DevCon2 in Shanghai in 2016. The slides are still a useful resource
([available here][peter-slides]) and the talk itself can be viewed by clicking
the image below (it is also archived on [IPFS][ipfs-link]).
[![Peter's Devcon2 talk](/static/images/devcon2_labelled.webp)](https://www.youtube.com/watch?v=R0Ia1U9Gxjg)
## Go packages
The `go-ethereum` library is distributed as a collection of standard Go packages straight from go-ethereum's
GitHub repository. The packages can be used directly via the official Go toolkit, without needing any
third party tools.
The canonical import path for Geth is `github.com/ethereum/go-ethereum`, with all packages residing
underneath. Although there are [lots of them][go-ethereum-dir] most developers will only care about
a limited subset.
All the Geth packages can be downloaded using:
```
$ go get -d github.com/ethereum/go-ethereum/...
```
More Go API support for dapp developers can be found on the [Go Contract Bindings](/docs/dapp/native-bindings)
and [Go Account Management](/docs/dapp/native-accounts) pages.
## Tutorial
This section includes some basic usage examples for the `ethclient` and `gethclient` packages available as
part of the Go API. The `ethclient` package provides a client that implements the full Ethereum JSON-RPC API,
whereas `gethclient` offers the Geth-specific API.
### Instantiating a client
The client is an instance of the `Client` struct which has associated functions that wrap requests to the Ethereum
or Geth RPC API endpoints.
A client is instantiated by passing a raw url or path to an ipc file to the client's `Dial` function. In the following
code snippet the path to the ipc file for a local Geth node is provided to `ethclient.Dial()`.
```go
// create instance of ethclient and assign to cl
cl, err := ethclient.Dial("/tmp/geth.ipc")
if err != nil {
	panic(err)
}
_ = cl
```
### Interacting with the client
The client can now be used to handle requests to the Geth node using the full JSON-RPC API. For example, the function
`BlockNumber()` wraps a call to the `eth_blockNumber` endpoint. The function `SendTransaction` wraps a call to
`eth_sendTransaction`. The full list of client methods can be found [here][ethclient-pkg].
Frequently, the functions take an instance of `context.Context` as their leading argument. This defines context about requests sent from the application, such as deadlines and cancellation signals. More information on this can
be found in the [Go documentation](https://pkg.go.dev/context). An empty context instance can be
created using `context.Background()`.
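For instance, a minimal sketch (reusing the `cl` client created above) that queries the current block number with an empty background context might look like this:
```go
// Query the current head block number using an empty background context.
blockNum, err := cl.BlockNumber(context.Background())
if err != nil {
	panic(err)
}
fmt.Println("Current block number:", blockNum)
```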
### Querying client for data
A simple starting point is to fetch the chain ID from the client. This is needed, for example, when signing a transaction, as shown in a later section.
```go
chainid, err := cl.ChainID(context.Background())
if err != nil {
	return err
}
```
Unlike `ChainID`, many functions require arguments other than context. The Go API accepts and returns the same high-level types that are used in Geth's internals, which simplifies programming and removes the need to know exactly how data must be formatted according to the JSON-RPC API spec. For example, to find out the nonce for an account at a given block, the address needs to be provided as a `common.Address` type and the block number as a `*big.Int`:
```go
addr := common.HexToAddress("0xb02A2EdA1b317FBd16760128836B0Ac59B560e9D")
nonce, err := cl.NonceAt(context.Background(), addr, big.NewInt(14000000))
```
### Querying past events
Contracts emit events during execution which can be queried from the client. The parameters of the events of interest have to be filled out in the `ethereum.FilterQuery` object. This includes which event topics are of interest, which contracts emit them and over which range of blocks. The example below queries `Transfer` events of all ERC-20 tokens for the last 10 blocks:
```go
blockNum, err := cl.BlockNumber(context.Background())
if err != nil {
	return err
}
latest := new(big.Int).SetUint64(blockNum)
q := ethereum.FilterQuery{
	FromBlock: new(big.Int).Sub(latest, big.NewInt(10)),
	ToBlock:   latest,
	Topics:    [][]common.Hash{{common.HexToHash("0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef")}},
}
logs, err := cl.FilterLogs(context.Background(), q)
if err != nil {
	return err
}
```
### Sending a transaction
Sending a transaction is achieved using the `SendTransaction()` function. `SendTransaction` takes an instance of
`context.Context` as its leading argument and a signed transaction as its second argument. The signed transaction
must be generated in advance. Building the signed transaction is a multi-stage
process that requires first generating a key pair if none exists already, retrieving some chain data and defining sender and recipient
addresses. Then these data can be collected into a transaction object and signed. The resulting signed transaction
can then be passed to `SendTransaction`.
The example below assumes the following key pair has already been generated:
```go
// SK and ADDR are the secret key and sender address
var (
	SK   = "0xaf5ead4413ff4b78bc94191a2926ae9ccbec86ce099d65aaf469e9eb1a0fa87f"
	ADDR = "0x6177843db3138ae69679A54b95cf345ED759450d"
)
```
The secret key and address can be used to send a transaction. In the example below 1 ETH is sent from the
address `ADDR` to an arbitrary recipient.
```go
import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/ethereum/go-ethereum/params"
)

// sendTransaction sends a transaction with 1 ETH to a specified address.
func sendTransaction(cl *ethclient.Client) error {
	var (
		sk       = crypto.ToECDSAUnsafe(common.FromHex(SK))
		to       = common.HexToAddress("0xb02A2EdA1b317FBd16760128836B0Ac59B560e9D")
		value    = new(big.Int).Mul(big.NewInt(1), big.NewInt(params.Ether))
		sender   = common.HexToAddress(ADDR)
		gasLimit = uint64(21000)
	)
	// Retrieve the chainid (needed for signer)
	chainid, err := cl.ChainID(context.Background())
	if err != nil {
		return err
	}
	// Retrieve the pending nonce
	nonce, err := cl.PendingNonceAt(context.Background(), sender)
	if err != nil {
		return err
	}
	// Get suggested gas price
	tipCap, _ := cl.SuggestGasTipCap(context.Background())
	feeCap, _ := cl.SuggestGasPrice(context.Background())
	// Create a new transaction
	tx := types.NewTx(
		&types.DynamicFeeTx{
			ChainID:   chainid,
			Nonce:     nonce,
			GasTipCap: tipCap,
			GasFeeCap: feeCap,
			Gas:       gasLimit,
			To:        &to,
			Value:     value,
			Data:      nil,
		})
	// Sign the transaction using our keys
	signedTx, _ := types.SignTx(tx, types.NewLondonSigner(chainid), sk)
	// Send the transaction to our node
	return cl.SendTransaction(context.Background(), signedTx)
}
```
### gethclient
An instance of `gethclient` can be used in exactly the same way as `ethclient`. However, `gethclient`
includes Geth-specific API methods. These additional methods are:
```shell
CallContract()
CreateAccessList()
GCStats()
GetNodeInfo()
GetProof()
MemStats()
SetHead()
SubscribePendingTransactions()
```
*Note that both `ethclient` and `gethclient` have a `CallContract()` function - the difference is that
the `gethclient` version includes an `overrides` argument.*
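As an illustrative, hedged sketch (the IPC path and address are placeholders, and `rpc`, `gethclient`, `common`, `context` and `fmt` imports are assumed), a `gethclient` is constructed by wrapping a raw `rpc.Client`, after which the Geth-specific methods can be called alongside the standard ones:
```go
// Wrap a raw RPC connection to access Geth-specific methods such as GetProof.
rpcClient, err := rpc.Dial("/tmp/geth.ipc")
if err != nil {
	panic(err)
}
gc := gethclient.New(rpcClient)

// Fetch a Merkle proof for an account at the latest block (nil block number, no storage keys).
addr := common.HexToAddress("0xb02A2EdA1b317FBd16760128836B0Ac59B560e9D")
proof, err := gc.GetProof(context.Background(), addr, nil, nil)
if err != nil {
	panic(err)
}
fmt.Println("Proven account balance:", proof.Balance)
```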
Details relating to these endpoints can be found at [pkg.go.dev][go-api-docs] or the Geth [Github][ethclient-link].
The code snippets in this tutorial were adapted from a more in-depth set of examples available on
[Github][web3go-link].
## Summary
There are a wide variety of Go APIs available for dapp developers that abstract away the complexity of interacting with Ethereum
using a set of composable, reusable functions provided by Geth.
[go-guide]: https://github.com/golang/go/wiki#getting-started-with-go
[peter-slides]: https://ethereum.karalabe.com/talks/2016-devcon.html
[go-ethereum-dir]: https://pkg.go.dev/github.com/ethereum/go-ethereum/#section-directories
[ethclient-pkg]:https://pkg.go.dev/github.com/ethereum/go-ethereum/ethclient#Client
[go-pkg-link]: https://pkg.go.dev/github.com/ethereum/go-ethereum#section-directories
[ipfs-link]: https://ipfs.io/ipfs/QmQRuKPKWWJAamrMqAp9rytX6Q4NvcXUKkhvu3kuREKqXR
[dapp-link]: https://ethereum.org/en/glossary/#dapp
[web3-link]: https://ethereum.org/en/web3/
[ethclient-link]: https://github.com/ethereum/go-ethereum/tree/master/ethclient
[go-api-docs]:https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.19/ethclient/gethclient
[web3go-link]:https://github.com/MariusVanDerWijden/web3go

@ -0,0 +1,232 @@
---
title: EVM Tracing
sort_key: A
---
There are two different types of [transactions][transactions]
in Ethereum: simple value transfers and contract executions. A value transfer just
moves Ether from one account to another. If however the recipient of a transaction is
a contract account with associated [EVM][evm] (Ethereum Virtual Machine) bytecode - beside
transferring any Ether - the code will also be executed as part of the transaction.
Having code associated with Ethereum accounts permits transactions to do arbitrarily
complex data storage and enables them to act on the previously stored data by further
transacting internally with outside accounts and contracts. This creates an interlinked
ecosystem of contracts, where a single transaction can interact with tens or hundreds of
accounts.
The downside of contract execution is that it is very hard to say what a transaction
actually did. A transaction receipt does contain a status code to check whether execution
succeeded or not, but there is no way to see what data was modified, nor what external
contracts were invoked. Geth resolves this by re-running transactions locally and collecting
data about precisely what was executed by the EVM. This is known as "tracing" the transaction.
* TOC
{:toc}
## Tracing prerequisites
In its simplest form, tracing a transaction entails requesting the Ethereum node to
reexecute the desired transaction with varying degrees of data collection and have it
return the aggregated summary for post processing. Reexecuting a transaction however has a
few prerequisites to be met.
In order for an Ethereum node to reexecute a transaction, all historical state accessed
by the transaction must be available. This includes:
* Balance, nonce, bytecode and storage of both the recipient as well as all internally invoked contracts.
* Block metadata referenced during execution of both the outer as well as all internally created transactions.
* Intermediate state generated by all preceding transactions contained in the same block as the one being traced.
This means there are limits on the transactions that can be traced imposed by the synchronization and
pruning configuration of a node.
* An **archive** node retains **all historical data** back to genesis. It can therefore
trace arbitrary transactions at any point in the history of the chain. Tracing a single
transaction requires reexecuting all preceding transactions in the same block.
* A **full synced** node retains the most recent 128 blocks in memory, so transactions in
that range are always accessible. Full nodes also store occasional checkpoints back to genesis
that can be used to rebuild the state at any point on-the-fly. This means older transactions
can be traced but if there is a large distance between the requested transaction and the most
recent checkpoint rebuilding the state can take a long time. Tracing a single
transaction requires reexecuting all preceding transactions in the same block
**and** all preceding blocks until the previous stored snapshot.
* A **snap synced** node holds the most recent 128 blocks in memory, so transactions in that
range are always accessible. However, snap-sync only starts processing from a relatively recent
block (as opposed to genesis for a full node). Between the initial sync block and the 128 most
recent blocks, the node stores occasional checkpoints that can be used to rebuild the state on-the-fly.
This means transactions can be traced back as far as the block that was used for the initial sync.
Tracing a single transaction requires reexecuting all preceding transactions in the same block,
**and** all preceding blocks until the previous stored snapshot.
* A **light synced** node retrieving data **on demand** can in theory trace transactions
for which all required historical state is readily available in the network. This is because the data
required to generate the trace is requested from an les-serving full node. In practice, data
availability **cannot** be reasonably assumed.
*There are exceptions to the above rules when running batch traces of entire blocks or
chain segments. Those will be detailed later.*
## Basic traces
The simplest type of transaction trace that Geth can generate are raw EVM opcode
traces. For every VM instruction the transaction executes, a structured log entry is
emitted, containing all contextual metadata deemed useful. This includes the *program
counter*, *opcode name*, *opcode cost*, *remaining gas*, *execution depth* and any
*occurred error*. The structured logs can optionally also contain the content of the
*execution stack*, *execution memory* and *contract storage*.
The entire output of a raw EVM opcode trace is a JSON object having a few metadata
fields: *consumed gas*, *failure status*, *return value*; and a list of *opcode entries*:
```json
{
"gas": 25523,
"failed": false,
"returnValue": "",
"structLogs": []
}
```
An example log for a single opcode entry has the following format:
```json
{
"pc": 48,
"op": "DIV",
"gasCost": 5,
"gas": 64532,
"depth": 1,
"error": null,
"stack": [
"00000000000000000000000000000000000000000000000000000000ffffffff",
"0000000100000000000000000000000000000000000000000000000000000000",
"2df07fbaabbe40e3244445af30759352e348ec8bebd4dd75467a9f29ec55d98d"
],
"memory": [
"0000000000000000000000000000000000000000000000000000000000000000",
"0000000000000000000000000000000000000000000000000000000000000000",
"0000000000000000000000000000000000000000000000000000000000000060"
],
"storage": {
}
}
```
### Generating basic traces
To generate a raw EVM opcode trace, Geth provides a few [RPC API endpoints](/docs/rpc/ns-debug).
The most commonly used is [`debug_traceTransaction`](/docs/rpc/ns-debug#debug_tracetransaction).
In its simplest form, `traceTransaction` accepts a transaction hash as its only argument. It then
traces the transaction, aggregates all the generated data and returns it as a **large**
JSON object. A sample invocation from the Geth console would be:
```js
debug.traceTransaction("0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f")
```
The same call can also be invoked from outside the node via HTTP RPC (e.g. using Curl). In this
case, the HTTP endpoint must be enabled in Geth using the `--http` flag and the `debug` API
namespace must be exposed using `--http.api=debug`.
```
$ curl -H "Content-Type: application/json" -d '{"id": 1, "method": "debug_traceTransaction", "params": ["0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f"]}' localhost:8545
```
To follow along with this tutorial, transaction hashes can be found from a local Geth node (e.g. by
attaching a [Javascript console](/docs/interface/javascript-console) and running `eth.getBlock('latest')`
then passing a transaction hash from the returned block to `debug.traceTransaction()`) or from a block
explorer (for [Mainnet](https://etherscan.io/) or a [testnet](https://goerli.etherscan.io/)).
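As a convenience, the hash can also be pulled straight from the latest block in the attached console. The snippet below is a minimal sketch that assumes the node's most recent block contains at least one transaction:
```js
// Take the first transaction hash from the latest block and trace it.
var txHash = eth.getBlock("latest").transactions[0]
debug.traceTransaction(txHash)
```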
It is also possible to configure the trace by passing Boolean (true/false) values for four parameters
that tweak the verbosity of the trace. By default, the *EVM memory* and *Return data* are not reported
but the *EVM stack* and *EVM storage* are. To report the maximum amount of data:
```shell
enableMemory: true
disableStack: false
disableStorage: false
enableReturnData: true
```
An example call, made in the Geth Javascript console, configured to report the maximum amount of data
looks as follows:
```js
debug.traceTransaction("0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f",{enableMemory: true, disableStack: false, disableStorage: false, enableReturnData: true})
```
Running the above operation on the Rinkeby network (with a node retaining enough history)
will result in this [trace dump](https://gist.github.com/karalabe/c91f95ac57f5e57f8b950ec65ecc697f).
Alternatively, disabling *EVM Stack*, *EVM Memory*, *Storage* and *Return data* (as demonstrated in the Curl request below)
results in the following, much shorter, [trace dump](https://gist.github.com/karalabe/d74a7cb33a70f2af75e7824fc772c5b4).
```
$ curl -H "Content-Type: application/json" -d '{"id": 1, "method": "debug_traceTransaction", "params": ["0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f", {"disableStack": true, "disableStorage": true}]}' localhost:8545
```
### Limits of basic traces
Although the raw opcode traces generated above are useful, having an individual log entry for every single
opcode is too low level for most use cases, and will require developers to create additional tools to
post-process the traces. Additionally, a full opcode trace can easily go into the hundreds of
megabytes, making them very resource intensive to get out of the node and process externally.
To avoid those issues, Geth supports running custom JavaScript tracers *within* the Ethereum node,
which have full access to the EVM stack, memory and contract storage. This means developers only have to
gather the data they actually need, and do any processing at the source.
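As a purely illustrative sketch (not one of Geth's built-in tracers), the `tracer` option of `debug_traceTransaction` can be given a JavaScript object, supplied as a string, that implements the `step`, `fault` and `result` hooks. The example below counts how many times each opcode is executed instead of logging every step:
```js
// Hypothetical opcode-counting tracer: `step` tallies each executed opcode,
// `fault` ignores errors and `result` returns the accumulated counts.
debug.traceTransaction("0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f", {
  tracer: "{counts: {}," +
          " step: function(log, db) { var op = log.op.toString(); this.counts[op] = (this.counts[op] || 0) + 1; }," +
          " fault: function(log, db) {}," +
          " result: function(ctx, db) { return this.counts; }}"
})
```
Writing such tracers is covered in more detail in the [custom tracing](/docs/dapp/custom-tracer) documentation.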
## Pruning
Geth does in-memory state-pruning by default, discarding state entries that it deems
no longer necessary to maintain. This is configured via the `--gcmode` flag. When tracing
on anything other than an archive node, it is common to see an error alerting the user
that the necessary historical state is not available:
```sh
Error: required historical state unavailable (reexec=128)
at web3.js:6365:37(47)
at send (web3.js:5099:62(35))
at <eval>:1:23(13)
```
The pruning behaviour, and consequently the state availability and tracing capability of
a node, depend on its sync and pruning configuration. The oldest block for which state is
immediately available (state for all earlier blocks has to be regenerated) is known as the
"pivot block". There are then several possible cases for a request to trace a transaction
in block `B` on a node whose pivot block is `P`:
1. A fast-sync'd node can regenerate the desired state by replaying blocks from the most recent
checkpoint between `P` and `B`, as long as `P` < `B`. If `P` > `B` there is no available checkpoint
and the state cannot be regenerated without replaying the chain from genesis.
2. A fully sync'd node can regenerate the desired state by replaying blocks from the last available
full state before `B`. A fully sync'd node re-executes all blocks from genesis, so checkpoints are available
across the entire history of the chain. However, database pruning discards older data, moving `P` to a more
recent position in the chain. If `P` > `B` there is no available checkpoint and the state cannot be
regenerated without replaying the chain from genesis.
3. A fully-sync'd node without pruning (i.e. an archive node configured with `--gcmode=archive`)
does not need to replay anything, it can immediately load up any state and serve the request for any `B`.
The time taken to regenerate a specific state increases with the distance between `P` and `B`. If the distance
between `P` and `B` is large, the regeneration time can be substantial.
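If a trace request targets state that has already been pruned, the `reexec` option (the same parameter reported in the error message above) can be raised from its default of 128 to allow Geth to re-execute further back. The following is a minimal sketch, not guaranteed to succeed on every node, since regeneration still requires a stored state within the requested range and can take a long time:
```js
// Allow up to 10,000 blocks of re-execution when regenerating the state
// needed for this trace (the default limit is 128 blocks).
debug.traceTransaction("0xfc9359e49278b7ba99f59edac0e3de49956e46e530a53c15aa71226b7aa92c6f", {reexec: 10000})
```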
## Summary
This page covered the concept of EVM tracing and how to generate traces with the default opcode-based tracers using RPC.
More advanced usage is possible, including using other built-in tracers as well as writing [custom tracing](/docs/dapp/custom-tracer) code in Javascript
and Go. The API as well as the JS tracing hooks are defined in [the reference](/docs/rpc/ns-debug#debug_traceTransaction).
[transactions]: https://ethereum.org/en/developers/docs/transactions
[evm]: https://ethereum.org/en/developers/docs/evm

@ -0,0 +1,103 @@
---
title: Code Review Guidelines
sort_key: B
---
The only way to get code into go-ethereum is to send a pull request. Those pull requests
need to be reviewed by someone. This document is a guide that explains our expectations
around PRs for both authors and reviewers.
## Terminology
* The **author** of a pull request is the entity who wrote the diff and submitted it to
GitHub.
* The **team** consists of people with commit rights on the go-ethereum repository.
* The **reviewer** is the person assigned to review the diff. The reviewer must be a team
member.
* The **code owner** is the person responsible for the subsystem being modified by the PR.
## The Process
The first decision to make for any PR is whether it's worth including at all. This
decision lies primarily with the code owner, but may be negotiated with team members.
To make the decision we must understand what the PR is about. If there isn't enough
description content or the diff is too large, request an explanation. Anyone can do this
part.
We expect that reviewers check the style and functionality of the PR, providing comments
to the author using the GitHub review system. Reviewers should follow up with the PR until
it is in good shape, then **approve** the PR. Approved PRs can be merged by any code owner.
When communicating with authors, be polite and respectful.
### Code Style
We expect `gofmt`ed code. For contributions of significant size, we expect authors to
understand and use the guidelines in [Effective Go][effgo]. Authors should avoid common
mistakes explained in the [Go Code Review Comments][revcomment] page.
### Functional Checks
For PRs that fix an issue, reviewers should try to reproduce the issue and verify that the
pull request actually fixes it. Authors can help with this by including a unit test that
fails without (and passes with) the change.
For PRs adding new features, reviewers should attempt to use the feature and comment on
how it feels to use it. Example: if a PR adds a new command line flag, use the program
with the flag and comment on whether the flag feels useful.
We expect appropriate unit test coverage. Reviewers should verify that new code is covered
by unit tests.
### CI
Code submitted must pass all unit tests and static analysis ("lint") checks. We use Travis
CI to test code on Linux and macOS, and AppVeyor to test code on Microsoft Windows.
For failing CI builds, the issue may not be related to the PR itself. Such failures are
usually related to flaky tests. These failures can be ignored (authors don't need to fix
unrelated issues), but please file a GH issue so the test gets fixed eventually.
### Commit Messages
Commit messages on the master branch should follow the rule below. PR authors are not
required to use any particular style because the message can be modified at merge time.
Enforcing commit message style is the responsibility of the person merging the PR.
The commit message style we use is similar to the style used by the Go project:
The first line of the change description is conventionally a one-line summary of the
change, prefixed by the primary affected Go package. It should complete the sentence "This
change modifies go-ethereum to _____." The rest of the description elaborates and should
provide context for the change and explain what it does.
Template:
```text
package/path: change XYZ
Longer explanation of the change in the commit. You can use
multiple sentences here. It's usually best to include content
from the PR description in the final commit message.
issue notices, e.g. "Fixes #42353".
```
### Special Situations And How To Deal With Them
As a reviewer, you may find yourself in one of the situations below. Here's how to deal
with those:
* The author doesn't follow up: ping them after a while (i.e. after a few days). If there
is no further response, close the PR or complete the work yourself.
* Author insists on including refactoring changes alongside bug fix: We can tolerate small
refactorings alongside any change. If you feel lost in the diff, ask the author to
submit the refactoring as an independent PR, or at least as an independent commit in the
same PR.
* Author keeps rejecting your feedback: reviewers have authority to reject any change for technical reasons. If you're unsure, ask the team for a second opinion. You may close the PR if no consensus can be reached.
[effgo]: https://golang.org/doc/effective_go.html
[revcomment]: https://github.com/golang/go/wiki/CodeReviewComments

@ -0,0 +1,481 @@
---
title: Private Networks
sort_key: D
---
This guide explains how to set up a private network of multiple Geth nodes. An Ethereum network is private if the nodes are not connected to the main network. In this context private only means reserved or isolated, rather than protected or secure. A fully controlled, private Ethereum network is useful as a backend for core developers working on issues relating to networking/blockchain syncing etc. Private networks are also useful for Dapp developers testing multi-block and multi-user scenarios.
## Prerequisites
To follow the tutorial on this page it is necessary to have a working Geth installation (instructions [here](/docs/install-and-build/installing-geth)). It is also helpful to understand Geth fundamentals (see [Getting Started](/docs/getting-started)).
## Private Networks
A private network is composed of multiple Ethereum nodes that can only connect to each other. In order to run multiple nodes locally, each one requires a separate data directory (`--datadir`). The nodes must also know about each other and be able to exchange information, share an initial state and a common consensus algorithm. The remainder of this page will explain how to configure Geth so that these basic requirements are met, enabling a private network to be started.
### Choosing A Network ID
Ethereum Mainnet has Network ID = 1. There are also many other networks that Geth can connect to by providing alternative network IDs; some are testnets and others are alternative networks built from forks of the Geth source code. Providing a network ID that is not already being used by an existing network or testnet means the nodes using that network ID can only connect to each other, creating a private network. A list of current network IDs is available at [Chainlist.org](https://chainlist.org/). The network ID is controlled using the `networkid` flag, e.g.
```shell
geth --networkid 12345
```
### Choosing A Consensus Algorithm
While the main network uses proof-of-work (PoW) to secure the blockchain, Geth also supports the 'Clique' proof-of-authority (PoA) consensus algorithm as an alternative for private networks. Clique is strongly recommended for private testnets because PoA is far less resource-intensive than PoW. Clique is currently used as the consensus algorithm in public testnets such as [Rinkeby](https://www.rinkeby.io) and [Görli](https://goerli.net). The key differences between the consensus algorithms available in Geth are:
#### Ethash
Geth's PoW algorithm, [Ethash](https://ethereum.org/en/developers/docs/consensus-mechanisms/pow/mining-algorithms/ethash), is a system that allows open participation by anyone willing to dedicate resources to mining. While this is a critical property for a public network, the overall security of the blockchain strictly depends on the total amount of resources used to secure it. As such, PoW is a poor choice for private networks with few miners. The Ethash mining 'difficulty' is adjusted automatically so that new blocks are created approximately 12 seconds apart. As more mining resources are deployed on the network, creating a new block becomes harder so that the average block time matches the target block time.
#### Clique
Clique consensus is a PoA system where new blocks can be created by authorized 'signers' only. The clique consensus protocol is specified in [EIP-225][clique-eip]. The initial set of authorized signers is configured in the genesis block. Signers can be authorized and de-authorized using a voting mechanism, thus allowing the set of signers to change while the blockchain operates. Clique can be configured to target any block time (within reasonable limits) since it isn't tied to the difficulty adjustment.
[clique-eip]: https://eips.ethereum.org/EIPS/eip-225
### Creating The Genesis Block
Every blockchain starts with a genesis block. When Geth is run with default settings for the first time, it commits the Mainnet genesis to the database. For a private network, it is generally preferable to use a different genesis block. The genesis block is configured using a _genesis.json_ file whose path must be provided to Geth on start-up. When creating a genesis block, a few initial parameters for the private blockchain must be defined:
- Ethereum platform features enabled at launch (`config`). Enabling and disabling features once the blockchain is running requires scheduling a [hard fork](https://ethereum.org/en/glossary/#hard-fork).
- Initial block gas limit (`gasLimit`). This impacts how much EVM computation can happen within a single block. Mirroring the main Ethereum network is generally a [good choice][gaslimit-chart]. The block gas limit can be adjusted after launch using the `--miner.gastarget` command-line flag.
- Initial allocation of ether (`alloc`). This determines how much ether is available to the addresses listed in the genesis block. Additional ether can be created through mining as the chain progresses.
#### Clique Example
Below is an example of a `genesis.json` file for a PoA network. The `config` section ensures that all known protocol changes are available and configures the 'clique' engine to be used for consensus. Note that the initial signer set must be configured through the `extradata` field. This field is required for Clique to work.
The signer account keys can be generated using the [geth account](./managing-your-accounts) command (this command can be run multiple times to create more than one signer key).
```shell
geth account new --datadir data
```
The Ethereum address printed by this command should be recorded. To encode the signer addresses in `extradata`, concatenate 32 zero bytes, all signer addresses and 65 further zero bytes. The result of this concatenation is then used as the value accompanying the `extradata` key in `genesis.json`. In the example below, `extradata` contains a single initial signer address, `0x7df9a875a174b3bc565e6424a0050ebc1b2d1d82`.
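As an illustration, the concatenation can be produced with a few lines of JavaScript (a sketch run with Node.js; the signer address is the example one above):
```js
// Clique extradata: 32 zero bytes + concatenated signer addresses (20 bytes
// each, no 0x prefix) + 65 zero bytes reserved for the seal.
const signers = ["7df9a875a174b3bc565e6424a0050ebc1b2d1d82"];
const extradata = "0x" + "00".repeat(32) + signers.join("") + "00".repeat(65);
console.log(extradata); // value to paste into the "extradata" field of genesis.json
```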
The `period` configuration option sets the target block time of the chain.
```json
{
"config": {
"chainId": 12345,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"berlinBlock": 0,
"clique": {
"period": 5,
"epoch": 30000
}
},
"difficulty": "1",
"gasLimit": "8000000",
"extradata": "0x00000000000000000000000000000000000000000000000000000000000000007df9a875a174b3bc565e6424a0050ebc1b2d1d820000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"alloc": {
"7df9a875a174b3bc565e6424a0050ebc1b2d1d82": { "balance": "300000" },
"f41c74c9ae680c1aa78f42e5647a62f353b7bdde": { "balance": "400000" }
}
}
```
#### Ethash Example
Since Ethash is the default consensus algorithm, no additional parameters need to be configured in order to use it. The initial mining difficulty is influenced using the `difficulty` parameter, but note that the difficulty adjustment algorithm will quickly adapt to the amount of mining resources deployed on the chain.
```json
{
"config": {
"chainId": 12345,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"berlinBlock": 0,
"ethash": {}
},
"difficulty": "1",
"gasLimit": "8000000",
"alloc": {
"7df9a875a174b3bc565e6424a0050ebc1b2d1d82": { "balance": "300000" },
"f41c74c9ae680c1aa78f42e5647a62f353b7bdde": { "balance": "400000" }
}
}
```
### Initializing the Geth Database
To create a blockchain node that uses this genesis block, first use `geth init` to import and set the canonical genesis block for the new chain. This requires the path to `genesis.json` to be passed as an argument.
```shell
geth init --datadir data genesis.json
```
When Geth is started using `--datadir data` the genesis block defined in `genesis.json` will be used. For example:
```shell
geth --datadir data --networkid 12345
```
### Scheduling Hard Forks
As Ethereum protocol development progresses, new features become available. To enable these features on an existing private network, a hard fork must be scheduled. To do this, a future block number must be chosen which determines precisely when the hard fork will activate. Continuing the `genesis.json` example above and assuming the current block number is 35421, a hard fork might be scheduled for block 40000. This hard fork might upgrade the network to conform to the 'London' specs. First, all the Geth instances on the private network must be recent enough to support the specific hard fork. If so, `genesis.json` can be updated so that the `londonBlock` key gets the value 40000. The Geth instances are then shut down and `geth init` is run to update their configuration. When the nodes are restarted they will pick up where they left off and run normally until block 40000, at which point they will automatically upgrade.
The modification to `genesis.json` is as follows:
```json
{
"config": {
"londonBlock": 40000,
},
}
```
The upgrade command is:
```shell
geth init --datadir data genesis.json
```
### Setting Up Networking
With the node configured and initialized, the next step is to set up a peer-to-peer network. This requires a bootstrap node. The bootstrap node is a normal node that is designated to be the entry point that other nodes use to join the network. Any node can be chosen to be the bootstrap node.
To configure a bootstrap node, the IP address of the machine the bootstrap node will run on must be known. The bootstrap node needs to know its own IP address so that it can broadcast it to other nodes. On a local machine this can be found using tools such as `ifconfig` and on cloud instances such as Amazon EC2 the IP address of the virtual machine can be found in the management console. Any firewalls must allow UDP and TCP traffic on port 30303.
The bootstrap node IP is set using the `--nat` flag (the command below contains an example address - replace it with the correct one).
```shell
geth --datadir data --networkid 12345 --nat extip:172.16.254.4
```
The 'node record' of the bootnode can be extracted using the JS console:
```shell
geth attach data/geth.ipc --exec admin.nodeInfo.enr
```
This command should print a base64 string such as the following example. Other nodes will use the information contained in the bootstrap node record to connect to the peer-to-peer network.
```text
"enr:-Je4QEiMeOxy_h0aweL2DtZmxnUMy-XPQcZllrMt_2V1lzynOwSx7GnjCf1k8BAsZD5dvHOBLuldzLYxpoD5UcqISiwDg2V0aMfGhGlQhqmAgmlkgnY0gmlwhKwQ_gSJc2VjcDI1NmsxoQKX_WLWgDKONsGvxtp9OeSIv2fRoGwu5vMtxfNGdut4cIN0Y3CCdl-DdWRwgnZf"
```
If the nodes are intended to connect across the Internet, the bootnode and all other nodes must have public IP addresses assigned, and their firewalls must allow both TCP and UDP traffic to pass. If Internet connectivity is not required or all member nodes connect using well-known IPs, Geth should be set up to restrict peer-to-peer connectivity to an IP subnet. Doing so further isolates the network and prevents cross-connecting with other blockchain networks in case the nodes are reachable from the Internet. Use the
`--netrestrict` flag to configure a whitelist of IP networks:
```shell
geth <other-flags> --netrestrict 172.16.254.0/24
```
With the above setting, Geth will only allow connections from the 172.16.254.0/24 subnet, and will not attempt to connect to other nodes outside of the set IP range.
### Running Member Nodes
Before running a member node, it must be initialized with the same genesis file as used for the bootstrap node. With the bootnode operational and externally reachable (`telnet <ip> <port>` will confirm that it is indeed reachable), more Geth nodes can be started and connected to the network via the bootstrap node using the `--bootnodes` flag. The process is to start Geth on the same machine as the bootnode, with a separate data directory and listening port, and with the bootnode's node record provided as an argument.
For example, using data directory `data2` and listening port `30305`:
```shell
geth --datadir data2 --networkid 12345 --port 30305 --bootnodes <bootstrap-node-record>
```
With the member node running, it is possible to check that it is connected to the bootstrap node or any other node in the network by attaching a console and running `admin.peers`. It may take up to a few seconds for the nodes to get connected.
```shell
geth attach data2/geth.ipc --exec admin.peers
```
### Running A Signer (Clique)
To set up Geth for signing blocks in Clique, a signer account must be available. The account must already be available as a keyfile in the keystore. To use it for signing blocks, it must be unlocked. The following command, for address `0x7df9a875a174b3bc565e6424a0050ebc1b2d1d82` will prompt for the account password, then start signing blocks:
```shell
geth <other-flags> --unlock 0x7df9a875a174b3bc565e6424a0050ebc1b2d1d82 --mine
```
Mining can be further configured by changing the default gas limit blocks converge to (with `--miner.gastarget`) and the price transactions are accepted at (with `--miner.gasprice`).
### Running A Miner (Ethash)
For PoW in a simple private network, a single CPU miner instance is enough to create a stable stream of blocks at regular intervals. To start a Geth instance for mining, it can be run with all the usual flags plus the following to configure mining:
```shell
geth <other-flags> --mine --miner.threads=1 --miner.etherbase=0xf41c74c9ae680c1aa78f42e5647a62f353b7bdde
```
This will start mining blocks and transactions on a single CPU thread, crediting all block rewards to the account specified by `--miner.etherbase`.
## End-to-end example {#end-to-end-example}
This section will run through the commands for setting up a simple private network of two nodes. Both nodes will run on the local machine using the same genesis block and network ID. The data directories for each node will be named `node1` and `node2`.
```shell
mkdir node1 node2
```
Each node will have an associated account that will receive some ether at launch. The following command creates an account for Node 1:
```shell
geth --datadir node1 account new
```
This command returns a request for a password. Once a password has been provided the following information is returned to the terminal:
```terminal
Your new account is locked with a password. Please give a password. Do not forget this password.
Password:
Repeat password:
Your new key was generated
Public address of the key: 0xC1B2c0dFD381e6aC08f34816172d6343Decbb12b
Path of the secret key file: node1/keystore/UTC--2022-05-13T14-25-49.229126160Z--c1b2c0dfd381e6ac08f34816172d6343decbb12b
- You can share your public address with anyone. Others need it to interact with you.
- You must NEVER share the secret key with anyone! The key controls access to your funds!
- You must BACKUP your key file! Without the key, it's impossible to access account funds!
- You must remember your password! Without the password, it's impossible to decrypt the key!
```
The keyfile and account password should be backed up securely. These steps can then be repeated for Node 2. These commands create keyfiles that are stored in the `keystore` directory in `node1` and `node2` data directories. In order to unlock the accounts later the passwords for each account should be saved to a text file in each node's data directory.
Next, save a copy of the following `genesis.json` to the top-level project directory. The account addresses in the `alloc` field should be replaced with those created for each node in the previous step (without the leading `0x`).
```json
{
"config": {
"chainId": 12345,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"muirGlacierBlock": 0,
"berlinBlock": 0,
"londonBlock": 0,
"arrowGlacierBlock": 0,
"grayGlacierBlock": 0,
"clique": {
"period": 5,
"epoch": 30000
}
},
"difficulty": "1",
"gasLimit": "800000000",
"extradata": "0x00000000000000000000000000000000000000000000000000000000000000007df9a875a174b3bc565e6424a0050ebc1b2d1d820000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"alloc": {
"C1B2c0dFD381e6aC08f34816172d6343Decbb12b": { "balance": "500000" },
"c94d95a5106270775351eecfe43f97e8e75e59e8": { "balance": "500000" }
}
}
```
The nodes can now be set up using `geth init` as follows:
```shell
geth init --datadir node1 genesis.json
```
This should be repeated for both nodes. The following will be returned to the terminal:
```terminal
INFO [05-13|15:41:47.520] Maximum peer count ETH=50 LES=0 total=50
INFO [05-13|15:41:47.520] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [05-13|15:41:47.520] Set global gas cap cap=50,000,000
INFO [05-13|15:41:47.520] Allocated cache and file handles database=/home/go-ethereum/node2/geth/chaindata cache=16.00MiB handles=16
INFO [05-13|15:41:47.542] Writing custom genesis block
INFO [05-13|15:41:47.542] Persisted trie from memory database nodes=3 size=397.00B time="41.246µs" gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [05-13|15:41:47.543] Successfully wrote genesis state database=chaindata hash=c9a158..d415a0
INFO [05-13|15:41:47.543] Allocated cache and file handles database=/home/go-ethereum/node2/geth/chaindata cache=16.00MiB handles=16
INFO [05-13|15:41:47.556] Writing custom genesis block
INFO [05-13|15:41:47.557] Persisted trie from memory database nodes=3 size=397.00B time="81.801µs" gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [05-13|15:41:47.558] Successfully wrote genesis state database=chaindata hash=c9a158..d415a0
```
The next step is to configure a bootnode. This can be any node, but for this tutorial the developer tool `bootnode` will be used to quickly and easily configure a dedicated bootnode. First the bootnode requires a key, which can be created with the following command and saved to `boot.key`:
```shell
bootnode -genkey boot.key
```
This key can then be used to start the bootnode, as follows:
```
bootnode -nodekey boot.key -addr :30305
```
The choice of port passed to `-addr` is arbitrary, but public Ethereum networks use 30303, so this is best avoided. The `bootnode` command returns the following logs to the terminal, confirming that it is running:
```terminal
enode://f7aba85ba369923bffd3438b4c8fde6b1f02b1c23ea0aac825ed7eac38e6230e5cadcf868e73b0e28710f4c9f685ca71a86a4911461637ae9ab2bd852939b77f@127.0.0.1:0?discport=30305
Note: you're using cmd/bootnode, a developer tool.
We recommend using a regular node as bootstrap node for production deployments.
INFO [05-13|15:50:03.645] New local node record seq=1,652,453,403,645 id=a2d37f4a7d515b3a ip=nil udp=0 tcp=0
```
The two nodes can now be started. Open separate terminals for each node, leaving the bootnode running in the original terminal. In each terminal, run the following command (replacing `node1` with `node2` where appropriate, and giving each node a different port). The account address and password file for Node 1 must also be provided:
```shell
./geth --datadir node1 --port 30306 --bootnodes enode://f7aba85ba369923bffd3438b4c8fde6b1f02b1c23ea0aac825ed7eac38e6230e5cadcf868e73b0e28710f4c9f685ca71a86a4911461637ae9ab2bd852939b77f@127.0.0.1:0?discport=30305 --networkid 123454321 --unlock 0xC1B2c0dFD381e6aC08f34816172d6343Decbb12b --password node1/password.txt
```
This will start the node using the bootnode as an entry point. Repeat the same command with the information appropriate to node 2. In each terminal, the following logs indicate success:
```terminal
INFO [05-13|16:17:40.061] Maximum peer count ETH=50 LES=0 total=50
INFO [05-13|16:17:40.061] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [05-13|16:17:40.061] Set global gas cap cap=50,000,000
INFO [05-13|16:17:40.061] Allocated trie memory caches clean=154.00MiB dirty=256.00MiB
INFO [05-13|16:17:40.061] Allocated cache and file handles database=/home/go-ethereum/node1/geth/chaindata cache=512.00MiB handles=524,288
INFO [05-13|16:17:40.094] Opened ancient database database=/home/go-ethereum/node1/geth/chaindata/ancient readonly=false
INFO [05-13|16:17:40.095] Initialised chain configuration config="{ChainID: 123454321 Homestead: 0 DAO: nil DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: nil, Muir Glacier: nil, Berlin: nil, London: nil, Arrow Glacier: nil, MergeFork: nil, Terminal TD: nil, Engine: clique}"
INFO [05-13|16:17:40.096] Initialising Ethereum protocol network=123,454,321 dbversion=8
INFO [05-13|16:17:40.098] Loaded most recent local header number=0 hash=c9a158..d415a0 td=1 age=53y1mo2w
INFO [05-13|16:17:40.098] Loaded most recent local full block number=0 hash=c9a158..d415a0 td=1 age=53y1mo2w
INFO [05-13|16:17:40.098] Loaded most recent local fast block number=0 hash=c9a158..d415a0 td=1 age=53y1mo2w
INFO [05-13|16:17:40.099] Loaded local transaction journal transactions=0 dropped=0
INFO [05-13|16:17:40.100] Regenerated local transaction journal transactions=0 accounts=0
INFO [05-13|16:17:40.100] Gasprice oracle is ignoring threshold set threshold=2
WARN [05-13|16:17:40.100] Unclean shutdown detected booted=2022-05-13T16:16:46+0100 age=54s
INFO [05-13|16:17:40.100] Starting peer-to-peer node instance=Geth/v1.10.18-unstable-8d84a701-20220503/linux-amd64/go1.18.1
INFO [05-13|16:17:40.130] New local node record seq=1,652,454,949,228 id=f1364e6d060c4625 ip=127.0.0.1 udp=30306 tcp=30306
INFO [05-13|16:17:40.130] Started P2P networking self=enode://87606cd0b27c9c47ca33541d4b68cf553ae6765e22800f0df340e9788912b1e3d2759b3d1933b6f739c720701a56ce26f672823084420746d04c25fc7b8c6824@127.0.0.1:30306
INFO [05-13|16:17:40.133] IPC endpoint opened url=/home/go-ethereum/node1/geth.ipc
INFO [05-13|16:17:40.785] Unlocked account address=0xC1B2c0dFD381e6aC08f34816172d6343Decbb12b
INFO [05-13|16:17:42.636] New local node record seq=1,652,454,949,229 id=f1364e6d060c4625 ip=82.11.59.221 udp=30306 tcp=30306
INFO [05-13|16:17:43.309] Mapped network port proto=tcp extport=30306 intport=30306 interface="UPNP IGDv1-IP1"
INFO [05-13|16:17:43.822] Mapped network port proto=udp extport=30306 intport=30306 interface="UPNP IGDv1-IP1"
INFO [05-13|16:17:50.150] Looking for peers peercount=0 tried=0 static=0
INFO [05-13|16:18:00.164] Looking for peers peercount=0 tried=0 static=0
```
In the terminal running the bootnode, logs resembling the following will be displayed, showing the discovery process in action:
```terminal
INFO [05-13|15:50:03.645] New local node record seq=1,652,453,403,645 id=a2d37f4a7d515b3a ip=nil udp=0 tcp=0
TRACE[05-13|16:15:49.228] PING/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:49.229] PONG/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:49.229] PING/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:49.230] PONG/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:49.730] FINDNODE/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:49.731] NEIGHBORS/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:50.231] FINDNODE/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:50.231] NEIGHBORS/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:50.561] FINDNODE/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:50.561] NEIGHBORS/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:50.731] FINDNODE/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:50.731] NEIGHBORS/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:51.231] FINDNODE/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:51.232] NEIGHBORS/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:52.591] FINDNODE/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:52.591] NEIGHBORS/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
TRACE[05-13|16:15:57.767] PING/v4 id=f1364e6d060c4625 addr=127.0.0.1:30306 err=nil
```
It is now possible to attach a Javascript console to either node to query the network properties:
```shell
geth attach node1/geth.ipc
```
Once the Javascript console is running, check that the node is connected to one other peer (node 2):
```shell
net.peerCount
```
The details of this peer can also be queried and used to check that the peer really is Node 2:
```
admin.peers
```
This should return the following:
```terminal
[{
caps: ["eth/66", "snap/1"],
enode: "enode://6a4576fb12004aa13949dbf25de978102483a6521e6d5d87c5b7ccb1944bbf8995dc730303ae891732410b1dd2e684277e9292fc0a17372a789bb4e87bdf366b@127.0.0.1:30307",
id: "d300c59ba301abcb5f4a3866aab6f833857c3ddf2f0febb583410b1dc466f175",
name: "Geth/v1.10.18-unstable-8d84a701-20220503/linux-amd64/go1.18.1",
network: {
inbound: false,
localAddress: "127.0.0.1:56620",
remoteAddress: "127.0.0.1:30307",
static: false,
trusted: false
},
protocols: {
eth: {
difficulty: 1,
head: "0xc9a158a687eff8a46128bd5b9aaf6b2f04f10f0683acbd7f031514db9ad415a0",
version: 66
},
snap: {
version: 1
}
}
}]
```
The account associated with Node 1 should have been funded with some ether at the chain genesis. This can be checked easily using `eth.getBalance()`:
```shell
eth.getBalance(eth.accounts[0])
```
This account can then be unlocked and some ether sent to Node 2, using the following commands:
```javascript
// unlock account
personal.unlock(eth.accounts[0])
// send some Wei
eth.sendTransaction({to: "0xc94d95a5106270775351eecfe43f97e8e75e59e8", from: eth.accounts[0], value: 25000})
//check the transaction was successful by querying Node 2's account balance
eth.getBalance("0xc94d95a5106270775351eecfe43f97e8e75e59e8")
```
The same steps can then be repeated to attach a console to Node 2.
## Summary
This page explored the various options for configuring a local private network. A step by step guide showed how to set up and launch a private network, unlock the associated accounts, attach a console to check the network status and make some basic interactions.
[gaslimit-chart]: https://etherscan.io/chart/gaslimit

@ -0,0 +1,353 @@
---
title: Developer mode
sort_key: B
---
It is often convenient for developers to work in an environment where changes to client or application software can be deployed and tested rapidly and without putting real-world users or assets at risk. For this purpose, Geth has a `--dev` flag that spins up Geth in "developer mode". This creates a single-node Ethereum test network with no connections to any external peers. It exists solely on the local machine. Starting Geth in developer mode does the following:
- Initializes the data directory with a testing genesis block
- Sets max peers to 0 (meaning Geth does not search for peers)
- Turns off discovery by other nodes (meaning the node is invisible to other nodes)
- Sets the gas price to 0 (no cost to send transactions)
- Uses the Clique proof-of-authority consensus engine which allows blocks to be mined as-needed without excessive CPU and memory consumption
- Uses on-demand block generation, producing blocks when transactions are waiting to be mined
This configuration enables developers to experiment with Geth's source code or develop new applications without having to sync to a pre-existing public network. Blocks are only mined when there are pending transactions. Developers can break things on this network without affecting other users. This page will demonstrate how to spin up a local Geth testnet and deploy a simple smart contract to it using the Remix online integrated development environment (IDE).
## Prerequisites
It is assumed that the user has a working Geth installation (see [installation guide](/docs/install-and-build/installing-geth)).
It would also be helpful to have basic knowledge of Geth and the Geth console. See [Getting Started](/docs/getting-started).
Some basic knowledge of [Solidity](https://docs.soliditylang.org/) and [smart contract deployment](https://ethereum.org/en/developers/tutorials/deploying-your-first-smart-contract/) would be useful.
## Start Geth in Dev Mode
Starting Geth in developer mode is as simple as providing the `--dev` flag. It is also possible to create a realistic block creation frequency by setting `--dev.period 13` instead of creating blocks only when transactions are pending. There are also additional configuration options required to follow this tutorial.
First, `http` (or `ws`) must be enabled so that the Javascript console can be attached to the Geth node, and some namespaces must be specified so that certain functions can be executed from the Javascript console, specifically `eth`, `web3` and `personal`. Alternatively, Geth can be started with the `console` command.
Finally, Remix will be used to deploy a smart contract to the node which requires information to be exchanged externally to Geth's own domain. To permit this, the `net` namespace must be enabled and the Remix URL must be provided to `--http.corsdomain`. The full command is as follows:
```shell
geth --dev --http --http.api eth,web3,personal,net --http.corsdomain "http://remix.ethereum.org"
```
The terminal will display the following logs, confirming Geth has started successfully in developer mode:
```terminal
INFO [05-09|10:49:02.951] Starting Geth in ephemeral dev mode...
INFO [05-09|10:49:02.952] Maximum peer count ETH=50 LES=0 total=50
INFO [05-09|10:49:02.952] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [05-09|10:49:02.953] Set global gas cap cap=50,000,000
INFO [05-09|10:49:03.133] Using developer account address=0x7Aa16266Ba3d309e3cb278B452b1A6307E52Fb62
INFO [05-09|10:49:03.196] Allocated trie memory caches clean=154.00MiB dirty=256.00MiB
INFO [05-09|10:49:03.285] Writing custom genesis block
INFO [05-09|10:49:03.286] Persisted trie from memory database nodes=13 size=1.90KiB time="180.524µs" gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [05-09|10:49:03.287] Initialised chain configuration config="{ ChainID: 1337 Homestead: 0 DAO: nil DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Berlin: 0, London: 0, Arrow Glacier: nil, MergeFork: nil, Terminal TD: nil, Engine: clique}"
INFO [05-09|10:49:03.288] Initialising Ethereum protocol network=1337 dbversion= nil
INFO [05-09|10:49:03.289] Loaded most recent local header number=0 hash=c9c3de..579bb8 td=1 age=53y1mo1w
INFO [05-09|10:49:03.289] Loaded most recent local full block number=0 hash=c9c3de..579bb8 td=1 age=53y1mo1w
INFO [05-09|10:49:03.289] Loaded most recent local fast block number=0 hash=c9c3de..579bb8 td=1 age=53y1mo1w
WARN [05-09|10:49:03.289] Failed to load snapshot, regenerating err="missing or corrupted snapshot"
INFO [05-09|10:49:03.289] Rebuilding state snapshot
INFO [05-09|10:49:03.290] Resuming state snapshot generation root=ceb850..0662cb accounts=0 slots=0 storage=0.00B elapsed="778.089µs"
INFO [05-09|10:49:03.290] Regenerated local transaction journal transactions=0 accounts=0
INFO [05-09|10:49:03.292] Gasprice oracle is ignoring threshold set threshold=2
INFO [05-09|10:49:03.292] Generated state snapshot accounts=10 slots=0 storage=412.00B elapsed=2.418ms
WARN [05-09|10:49:03.292] Error reading unclean shutdown markers error="leveldb: not found"
INFO [05-09|10:49:03.292] Starting peer-to-peer node instance=Geth/v1.10.18-unstable-8d84a701-20220503/linux-amd64/go1.18.1
WARN [05-09|10:49:03.292] P2P server will be useless, neither dialing nor listening
INFO [05-09|10:49:03.292] Stored checkpoint snapshot to disk number=0 hash=c9c3de..579bb8
INFO [05-09|10:49:03.312] New local node record seq=1,652,089,743,311 id=bfedca74bea20733 ip=127.0.0.1 udp=0 tcp=0
INFO [05-09|10:49:03.313] Started P2P networking self=enode://0544de6446dd5831daa5a391de8d0375d93ac602a95d6a182d499de31f22f75b6645c3f562932cac8328d51321b676c683471e2cf7b3c338bb6930faf6ead389@127.0.0.1:0
INFO [05-09|10:49:03.314] IPC endpoint opened url=/tmp/geth.ipc
INFO [05-09|10:49:03.315] HTTP server started endpoint=127.0.0.1:8545 auth=false prefix= cors=http:remix.ethereum.org vhosts=localhost
INFO [05-09|10:49:03.315] Transaction pool price threshold updated price=0
INFO [05-09|10:49:03.315] Updated mining threads threads=0
INFO [05-09|10:49:03.315] Transaction pool price threshold updated price=1
INFO [05-09|10:49:03.315] Etherbase automatically configured address=0x7Aa16266Ba3d309e3cb278B452b1A6307E52Fb62
INFO [05-09|10:49:03.316] Commit new sealing work number=1 sealhash=2372a2..7fb8e7 uncles=0 txs=0 gas=0 fees=0 elapsed="202.366µs"
WARN [05-09|10:49:03.316] Block sealing failed err="sealing paused while waiting for transactions"
INFO [05-09|10:49:03.316] Commit new sealing work number=1 sealhash=2372a2..7fb8e7 uncles=0 txs=0 gas=0 fees=0 elapsed="540.054µs"
```
This terminal must be left running throughout the entire tutorial. In a second terminal, attach a Javascript console:
```shell
geth attach http://127.0.0.1:8545
```
The Javascript terminal will open with the following welcome message:
```terminal
Welcome to the Geth Javascript console!
instance: Geth/v1.10.18-unstable-8d84a701-20220503/linux-amd64/go.1.18.1
coinbase: 0x540dbaeb2390f2eb005f7a6dbf3436a0959197a9
at block: 0 (Thu Jan 01 1970 01:00:00 GMT+0100 (BST))
modules: eth:1.0 personal:1.0 rpc:1.0 web3:1.0
To exit, press ctrl-d or type exit
>
```
In the [Getting Started](/docs/getting-started/) tutorial it was explained that using the external signing and account management tool, Clef, was best practice for generating and securing user accounts. However, for simplicity this tutorial will use Geth's built-in account management. First, the existing accounts can be displayed using `eth.accounts`:
```shell
eth.accounts
```
An array containing a single address will be displayed in the terminal, despite no accounts having yet been explicitly created. This is the "coinbase" account. The coinbase address is the recipient of the total amount of ether created at the local network genesis. Querying the ether balance of the coinbase account will return a very large number. The coinbase account can be invoked as `eth.accounts[0]` or as `eth.coinbase`:
```terminal
> eth.coinbase==eth.accounts[0]
true
```
The following command can be used to query the balance. The return value is in units of Wei, which is divided by 10<sup>18</sup> to give units of ether. This can be done explicitly or by calling the `web3.fromWei()` function:
```shell
eth.getBalance(eth.coinbase)/1e18
// or
web3.fromWei(eth.getBalance(eth.coinbase))
```
Using `web3.fromWei()` is less error prone because the correct multiplier is built in. These commands both return the following:
```terminal
1.157920892373162e+59
```
A new account can be created and some of the ether from the coinbase transferred across to it. A new account is generated using the `newAccount` function in the `personal` namespace:
```shell
personal.newAccount()
```
The terminal will display a request for a password, twice. Once provided, a new account will be created and its address printed to the terminal. The account creation is also logged in the Geth terminal, including the location of the keyfile in the keystore. It is a good idea to back up the password somewhere at this point. If this were an account on a live network, intended to own assets of real-world value, it would be critical to back up the account password and the keystore in a secure manner.
To reconfirm the account creation, running `eth.accounts` in the Javascript console should display an array containing two account addresses, one being the coinbase and the other being the newly generated address. The following command transfers 50 ETH from the coinbase to the new account:
```shell
eth.sendTransaction({from: eth.coinbase, to: eth.accounts[1], value: web3.toWei(50, "ether")})
```
A transaction hash will be returned to the console. This transaction hash will also be displayed in the logs in the Geth console, followed by logs confirming that a new block was mined (remember in the local development network blocks are mined when transactions are pending). The transaction details can be displayed in the Javascript console by passing the transaction hash to `eth.getTransaction()`:
```shell
eth.getTransaction("0x62044d2cab405388891ee6d53747817f34c0f00341cde548c0ce9834e9718f27")
```
The transaction details are displayed as follows:
```terminal
{
accessList: [],
blockHash: "0xdef68762539ebfb247e31d749acc26ab5df3163aabf9d450b6001c200d17da8a",
blockNumber: 1,
chainId: "0x539",
from: "0x540dbaeb2390f2eb005f7a6dbf3436a0959197a9",
gas: 21000,
gasPrice: 875000001,
hash: "0x2326887390dc04483d435a6303dc05bd2648086eab15f24d7dcdf8c26e8af4b8",
input: "0x",
maxFeePerGas: 2000000001,
maxPriorityFeePerGas: 1,
nonce: 0,
r: "0x3f7b54f095b837ec13480eab5ac7de082465fc79f43b813cca051394dd028d5d",
s: "0x167ef271ae8175239dccdd970db85e06a044d5039252d6232d0954d803bb4e3e",
to: "0x43e3a14fb8c68caa7eea95a02693759de4513950",
transactionIndex: 0,
type: "0x2",
v: "0x0",
value: 50000000000000000000
}
```
Now that the user account is funded with ether, a contract can be created ready to deploy to the Geth node.
## A simple smart contract
This tutorial will make use of a classic example smart contract, `Storage.sol`. This contract exposes two public functions, one to add a value to the contract storage and one to view the stored value. The contract, written in Solidity, is provided below:
```Solidity
pragma solidity >=0.7.0;
contract Storage{
uint256 number;
function store(uint256 num) public{
number = num;
}
function retrieve() public view returns (uint256){
return number;
}
}
```
Solidity is a high-level language that makes code executable by the Ethereum virtual machine (EVM) readable to humans. This means that there is an intermediate step between writing code in Solidity and deploying it to Ethereum. This step is called "compilation" and it converts human-readable code into EVM-executable byte-code. This byte-code is then included in a transaction sent from the Geth node during contract deployment. This can all be done directly from the Geth Javascript console; however this tutorial uses an online IDE called Remix to handle the compilation and deployment of the contract to the local Geth node.
## Compile and deploy using Remix
In a web browser, open <https://remix.ethereum.org>. This opens an online smart contract development environment. On the left-hand side of the screen there is a side-bar menu that toggles between several toolboxes that are displayed in a vertical panel. On the right hand side of the screen there is an editor and a terminal. This layout is similar to the default layout of many other IDEs such as [VSCode](https://code.visualstudio.com/). The contract defined in the previous section, `Storage.sol` is already available in the `Contracts` directory in Remix. It can be opened and reviewed in the editor.
![Remix](/static/images/remix.png)
The Solidity logo is present as an icon in the Remix side-bar. Clicking this icon opens the Solidity compiler wizard. This can be used to compile `Storage.sol`. With `1_Storage.sol` open in the editor window, simply click the `Compile 1_Storage.sol` button. A green tick will appear next to the Solidity icon to confirm that the contract has compiled successfully. This means the contract bytecode is available.
![Remix-compiler](/static/images/remix-compiler.png)
Below the Solidity icon is a fourth icon that includes the Ethereum logo. Clicking this opens the Deploy menu. In this menu, Remix can be configured to connect to the local Geth node. In the drop-down menu labelled `ENVIRONMENT`, select `Injected Web3`. This will open an information pop-up with instructions for configuring Geth - these can be ignored as they were completed earlier in this tutorial. However, at the bottom of this pop-up is a box labelled `Web3 Provider Endpoint`. This should be set to Geth's 8545 port on `localhost` (`127.0.0.1:8545`). Click OK. The `ACCOUNT` field should automatically populate with the address of the account created earlier using the Geth Javascript console.
![Remix-deploy](/static/images/remix-deploy.png)
To deploy `Storage.sol`, click `DEPLOY`.
The following logs in the Geth terminal confirm that the contract was successfully deployed.
```terminal
INFO [05-09|12:27:09.680] Setting new local account address=0x7Aa16266Ba3d309e3cb278B452b1A6307E52Fb62
INFO [05-09|12:27:09.680] Submitted contract creation hash=0xbf2d2d1c393a882ffb6c90e6d1713906fd799651ae683237223b897d4781c4f2 from=0x7Aa16266Ba3d309e3cb278B452b1A6307E52Fb62 nonce=1 contract=0x4aA11DdfD817dD70e9FF2A2bf9c0306e8EC450d3 value=0
INFO [05-09|12:27:09.681] Commit new sealing work number=2 sealhash=845a53..f22818 uncles=0 txs=1 gas=125,677 fees=0.0003141925 elapsed="335.991µs"
INFO [05-09|12:27:09.681] Successfully sealed new block number=2 sealhash=845a53..f22818 hash=e927bc..f2c8ed elapsed="703.415µs"
INFO [05-09|12:27:09.681] 🔨 mined potential block number=2 hash=e927bc..f2c8ed
```
## Interact with contract using Remix
The contract is now deployed on a local testnet version of the Ethereum blockchain. This means there is a contract address that contains executable bytecode that can be invoked by sending transactions with instructions, also in bytecode, to that address. Again, this can all be achieved by constructing transactions directly in the Geth console or even by making external http requests using tools such as Curl. Here, Remix is used to retrieve the value, then the same action is taken using the Javascript console.
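For illustration, a minimal sketch of the console-only route is shown below. It calls the contract's `retrieve()` function by passing its 4-byte function selector (`0x2e64cec1`, the first four bytes of `keccak256("retrieve()")`) as call data; the contract address is the example deployment address from the logs above and should be replaced with the real one:
```js
// Read-only call to retrieve() on the deployed Storage contract.
eth.call({
  to: "0x4aa11ddfd817dd70e9ff2a2bf9c0306e8ec450d3",
  data: "0x2e64cec1"
})
```
The call returns the stored value as a 32-byte hex string (all zeros until a value has been stored).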
After deploying the contract in Remix, the `Deployed Contracts` tab in the sidebar automatically populates with the public functions exposed by `Storage.sol`. To send a value to the contract storage, type a number in the field adjacent to the `store` button, then click the button.
![Remix-func](/static/images/remix-func.png)
In the Geth terminal, the following logs confirm that the transaction was successful (the actual values will vary from the example below):
```terminal
INFO [05-09|13:41:58.644] Submitted transaction hash=0xfa3cd8df6841c5d3706d3bacfb881d2b985d0b55bdba440f1fdafa4ed5b5cc31 from=0x7Aa16266Ba3d309e3cb278B452b1A6307E52Fb62 nonce=2 recipient=0x4aA11DdfD817dD70e9FF2A2bf9c0306e8EC450d3 value=0
INFO [05-09|13:41:58.644] Commit new sealing work number=3 sealhash=5442e3..f49739 uncles=0 txs=1 gas=43724 fees=0.00010931 elapsed="334.446µs"
INFO [05-09|13:41:58.645] Successfully sealed new block number=3 sealhash=5442e3..f49739 hash=c076c8..eeee77 elapsed="581.374µs"
INFO [05-09|13:41:58.645] 🔨 mined potential block number=3 hash=c076c8..eeee77
```
The transaction hash can be used to retrieve the transaction details using the Geth Javascript console, which will return the following information:
```terminal
{
accessList: [],
blockHash: "0xc076c88200618f4cbbfb4fe7c3eb8d93566724755acc6c4e9a355cc090eeee77",
blockNumber: 3,
chainId: "0x539",
from: "0x7aa16266ba3d309e3cb278b452b1a6307e52fb62",
gas: 43724,
gasPrice: 3172359839,
hash: "0xfa3cd8df6841c5d3706d3bacfb881d2b985d0b55bdba440f1fdafa4ed5b5cc31",
input: "0x6057361d0000000000000000000000000000000000000000000000000000000000000038",
maxFeePerGas: 4032048134,
maxPriorityFeePerGas: 2500000000,
nonce: 2,
r: "0x859b88062715c5d66b9a188886ad51b68a1e4938d5932ce2dac874c104d2b26",
s: "0x61ef6bc454d5e6a76c414f133aeb6321197a61e263a3e270a16bd4a65d94da55",
to: "0x4aa11ddfd817dd70e9ff2a2bf9c0306e8ec450d3",
transactionIndex: 0,
type: "0x2",
v: "0x1",
value: 0
}
```
The `from` address is the account that sent the transaction, the `to` address is the deployment address of the contract. The value entered into Remix is now in storage at that contract address. This can be retrieved using Remix by calling the `retrieve` function - to do this simply click the `retrieve` button. Alternatively, it can be retrieved using `web3.eth.getStorageAt` in the Geth Javascript console. The following command returns the value in the contract storage (replace the given address with the correct one displayed in the Geth logs).
```shell
web3.eth.getStorageAt("0x407d73d8a49eeb85d32cf465507dd71d507100c1", 0)
```
This returns a value that looks like the following:
```terminal
"0x000000000000000000000000000000000000000000000000000000000000000038"
```
The returned value is a left-padded 32-byte hexadecimal value. For example, the return value `0x0000000000000000000000000000000000000000000000000000000000000038` corresponds to a value of `56` entered as a uint256 in Remix. After converting from hexadecimal string to decimal number the returned value should be equal to that provided to Remix in the previous step.
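To double-check the decoding, the hex string can be converted to a decimal number directly in the console. The sketch below reuses the example contract address from the logs above (replace it with the real one):
```js
// Fetch the raw 32-byte storage value and convert it to a decimal number.
var raw = web3.eth.getStorageAt("0x4aa11ddfd817dd70e9ff2a2bf9c0306e8ec450d3", 0)
web3.toDecimal(raw) // e.g. 56 for the value stored earlier
```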
## Reusing --datadir
This tutorial used an ephemeral blockchain that is completely destroyed and started afresh during each dev-mode session. However, it is also possible to create persistent blockchain and account data that can be reused across multiple sessions. This is done by providing the `--datadir` flag and a directory name when starting Geth in dev-mode.
```shell
geth --datadir dev-chain --dev --http --http.api personal,web3,eth,net --http.corsdomain "remix.ethereum.org"
```
## Re-using accounts
Geth will fail to start in dev-mode if keys have been manually created or imported into the keystore in the `--datadir` directory. This is because the account cannot be automatically unlocked. To resolve this issue, the password defined when the account was created can be saved to a text file and its path passed to the `--password` flag on starting Geth, for example if `password.txt` is saved in the top-level `go-ethereum` directory:
```shell
geth --datadir dev-chain --dev --http --http.api personal,web3,eth,net --http.corsdomain "remix.ethereum.org" --password password.txt
```
**Note** that this is an edge-case that applies when both the `--datadir` and `--dev` flags are used and a key has been manually created or imported into the keystore.
## Summary
This tutorial has demonstrated how to spin up a local developer network using Geth. Having started this development network, a simple contract was deployed to the developer network. Then, Remix was connected to the local Geth node and used to deploy and interact with a contract. Remix was used to add a value to the contract storage and then the value was retrieved using Remix and also using the lower level commands in the Javascript console.

@ -0,0 +1,122 @@
---
title: Developer Guide
sort_key: A
---
**NOTE: These instructions are for people who want to contribute Go source code changes.
If you just want to run ethereum, use the regular [Installation Instructions][install-guide].**
This document is the entry point for developers of the Go implementation of Ethereum.
'Developers' here refers to hands-on contributors: anyone interested in building, developing,
debugging, submitting a bug report or pull request, or contributing code to go-ethereum.
## Contributing
Thank you for considering to help out with the source code! We welcome contributions from
anyone on the internet, and are grateful for even the smallest of fixes!
GitHub is used to track issues and contribute code, suggestions, feature requests or
documentation.
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull
request (PR) for the maintainers to review and merge into the main code base. If you wish
to submit more complex changes, though, please check with the core devs in the
go-ethereum [Discord Server][discord] first, to ensure those changes are in line with the
general philosophy of the project and/or to get some early feedback. This can reduce your
effort as well as speed up our review and merge procedures.
PRs need to be based on and opened against the `master` branch (unless by explicit
agreement, you contribute to a complex feature branch).
Your PR will be reviewed according to the [Code Review guidelines][code-review].
We encourage an early-PR approach: open the PR as early as possible, even before the fix or
feature is complete. This lets core devs and other volunteers know you have picked up an
issue. Such early PRs should indicate an 'in progress' status.
## Building and Testing
We assume that you have Go installed. Please use Go version 1.13 or later. We use the go
toolchain for development, which you can get from the [Go downloads page][go-install].
go-ethereum is a Go module, and uses the [Go modules system][go-modules] to manage
dependencies. Using `GOPATH` is not required to build go-ethereum.
### Building Executables
Switch to the go-ethereum repository root directory.
You can build all code using the go tool, placing the resulting binaries in `$GOPATH/bin`.
```text
go install -v ./...
```
go-ethereum executables can be built individually. To build just geth, use:
```text
go install -v ./cmd/geth
```
If you want to compile geth for an architecture that differs from your host, please
consult our [cross compilation guide][cross-compile].
### Testing
Testing a package:
```
go test -v ./eth
```
Running an individual test:
```
go test -v ./eth -run TestMethod
```
**Note**: all tests whose name starts with _TestMethod_ will be run, so if both TestMethod
and TestMethod1 exist, both tests will run.
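If only the exact test should run, the regular expression can be anchored, for example:
```
go test -v ./eth -run '^TestMethod$'
```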
Running benchmarks, e.g.:
```
go test -v -bench . -run BenchmarkJoin
```
For more information, see the [go test flags][testflag] documentation.
### Getting Stack Traces
If `geth` is started with the `--pprof` option, a debugging HTTP server is made available
on port 6060. You can bring up <http://localhost:6060/debug/pprof> to see the heap,
running goroutines, etc. Clicking "full goroutine stack dump" generates a trace that is
useful for debugging.
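The same information can be fetched without a browser. For example, assuming the default pprof port, the full goroutine stack dump is served by the standard `net/http/pprof` endpoint:
```
curl 'http://localhost:6060/debug/pprof/goroutine?debug=2'
```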
Note that if you run multiple instances of `geth`, this port will only work for the first
instance that was launched. If you want to generate stack traces for the other instances,
you need to start them with an alternative pprof port. Make sure you are redirecting
stderr to a logfile.
```
geth -port=30300 -verbosity 5 --pprof --pprof.port 6060 2>> /tmp/00.glog
geth -port=30301 -verbosity 5 --pprof --pprof.port 6061 2>> /tmp/01.glog
geth -port=30302 -verbosity 5 --pprof --pprof.port 6062 2>> /tmp/02.glog
```
Alternatively, if you want to kill the clients (because they hang, have stalled while syncing, etc.)
and still get the stack traces, you can send them the `QUIT` signal with `kill`:
```
killall -QUIT geth
```
This will dump stack traces for each instance to their respective log file.
[install-guide]: ../install-and-build/installing-geth
[code-review]: ../developers/code-review-guidelines
[cross-compile]: ../install-and-build/cross-compile
[go-modules]: https://github.com/golang/go/wiki/Modules
[discord]: https://discord.gg/invite/nthXNEv
[go-install]: https://golang.org/doc/install
[testflag]: https://golang.org/cmd/go/#hdr-Testing_flags

@ -0,0 +1,125 @@
---
title: DNS Discovery Setup Guide
sort_key: C
---
This document explains how to set up an [EIP 1459][dns-eip] node list using the devp2p
developer tool. The focus of this guide is creating a public list for the Ethereum mainnet
and public testnets, but you may also find this helpful if you want to set up DNS-based
discovery for a private network.
DNS-based node lists can serve as a fallback option when connectivity to the discovery DHT
is unavailable. In this guide, we'll create node lists by crawling the discovery DHT, then
publishing the resulting node sets under chosen DNS names.
### Installing the devp2p command
cmd/devp2p is a developer utility and is not included in the Geth distribution. You can
install this command using `go get`:
```shell
go get github.com/ethereum/go-ethereum/cmd/devp2p
```
To create a signing key, you might also need the `ethkey` utility.
```shell
go get github.com/ethereum/go-ethereum/cmd/ethkey
```
### Crawling the v4 DHT
Our first step is to compile a list of all reachable nodes. The DHT crawler in cmd/devp2p
is a batch process which runs for a set amount of time. You should schedule this command
to run at a regular interval. To create a node list, run
```shell
devp2p discv4 crawl -timeout 30m all-nodes.json
```
This walks the DHT and stores the set of all found nodes in the `all-nodes.json` file.
Subsequent runs of the same command will revalidate previously discovered node records,
add newly-found nodes to the set, and remove nodes which are no longer alive. The quality
of the node set improves with each run because the number of revalidations is tracked
alongside each node in the set.
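One way to run the crawl at a regular interval is with `cron`. The schedule and paths below are only an example:
```shell
# Re-crawl the DHT every hour, reusing the same output file
0 * * * * /usr/local/bin/devp2p discv4 crawl -timeout 30m /home/user/all-nodes.json
```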
### Creating sub-lists through filtering
Once `all-nodes.json` has been created and the set contains a sizeable number of nodes,
useful sub-sets of nodes can be extracted using the `devp2p nodeset filter` command. This
command takes a node set file as argument and applies filters given as command-line flags.
To create a filtered node set, first create a new directory to hold the output set. You
can use any directory name, though it's good practice to use the DNS domain name as the
name of this directory.
```shell
mkdir mainnet.nodes.example.org
```
Then, to create the output set containing Ethereum mainnet nodes only, run
```shell
devp2p nodeset filter all-nodes.json -eth-network mainnet > mainnet.nodes.example.org/nodes.json
```
The following filter flags are available:
* `-eth-network ( mainnet | ropsten | rinkeby | goerli )` selects an Ethereum network.
* `-les-server` selects LES server nodes.
* `-ip <mask>` restricts nodes to the given IP range.
* `-min-age <duration>` restricts the result to nodes which have been live for the
given duration.
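Multiple filter flags can be given to narrow the set further. For example, to keep only mainnet nodes that also serve LES (the output directory name here is just an illustration and must be created first):
```shell
mkdir les.mainnet.nodes.example.org
devp2p nodeset filter all-nodes.json -eth-network mainnet -les-server > les.mainnet.nodes.example.org/nodes.json
```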
### Creating DNS trees
To turn a node list into a DNS node tree, the list needs to be signed. To do this, you
need a key pair. To create the key file in the correct format, you can use the cmd/ethkey
utility. Please choose a good password to encrypt the key on disk.
```shell
ethkey generate dnskey.json
```
Now use `devp2p dns sign` to update the signature of the node list. If your list's
directory name differs from the name you want to publish it at, please specify the DNS
name using the `-domain` flag. This command will prompt for the key file password and
update the tree signature.
```shell
devp2p dns sign mainnet.nodes.example.org dnskey.json
```
The resulting DNS tree metadata is stored in the
`mainnet.nodes.example.org/enrtree-info.json` file.
### Publishing DNS trees
Now that the tree is signed, it can be published to a DNS provider. cmd/devp2p currently
supports publishing to CloudFlare DNS and Amazon Route53. You can also export TXT records
as a JSON file and publish them yourself.
To publish to CloudFlare, first create an API token in the management console. cmd/devp2p
expects the API token in the `CLOUDFLARE_API_TOKEN` environment variable. Now use the
following command to upload DNS TXT records via the CloudFlare API:
```shell
devp2p dns to-cloudflare mainnet.nodes.example.org
```
Note that this command uses the domain name specified during signing. Any existing records
below this name will be erased by cmd/devp2p.
### Using DNS trees with Geth
Once your tree is available through a DNS name, you can tell geth to use it with the
`--discovery.dns` command line flag. Node trees are referenced using the `enrtree://` URL
scheme. You can find the URL of your tree in the `enrtree-info.json` file created by
`devp2p dns sign`. Just pass the URL as an argument to the flag in order to make use of
the published tree.
```shell
geth --discovery.dns "enrtree://AMBMWDM3J6UY3M32TMMROUNLX6Y3YTLVC3DC6HN2AVG5NHNSAXDW6@mainnet.nodes.example.org"
```
[dns-eip]: https://eips.ethereum.org/EIPS/eip-1459

@ -0,0 +1,57 @@
---
title: Issue Handling Workflow
sort_key: B
---
### (Draft proposal)
- Keep the number of open issues under 820
- Keep the ratio of open issues to all issues under 13%
- Have 50 issues labelled [help wanted](https://github.com/ethereum/go-ethereum/labels/help%20wanted) and 50 labelled [good first issue](https://github.com/ethereum/go-ethereum/labels/good%20first%20issue).
Use structured labels of the form `<category>:<label>` or, if need be, `<category>:<main>/<sub>`, for example `area: plugins/foobuzzer`.
Use the following labels. Areas and statuses depend on the application and workflow.
- area
- `area: android`
- `area: clef`
- `area: network`
- `area: swarm`
- `area: whisper`
- type
- `type: bug`
- `type: feature`
- `type: documentation`
- `type: discussion`
- status
- `status: PR review`
- `status: community working on it`
- need
- `need: more info`
- `need: steps to reproduce`
- `need: investigation`
- `need: decision`
Use these milestones
- [Future](https://github.com/ethereum/go-ethereum/milestone/80) - Maybe implement one day
- [Coming soon](https://github.com/ethereum/go-ethereum/milestone/81) - Not assigned to a specific release, but to be delivered in one of the upcoming releases
- \<next version\> - Next release with a version number
- \<next-next version\> - The version after the next release with a version number
- \<next major release\> - Optional.
It's OK not to set a due date for a milestone, but once the corresponding release is out, close the milestone. If a few issues are still dangling, consider moving them to the next milestone and closing this one.
Optionally, use a project board to collect issues belonging to a larger effort that has an end state and spans multiple releases.
## Workflow
We have a weekly or bi-weekly triage meeting. Issues are preselected by [labelling them "status:triage" and sorting them oldest first](https://github.com/ethereum/go-ethereum/issues?q=is%3Aopen+is%3Aissue+label%3Astatus%3Atriage+sort%3Acreated-asc). This is when we go through the new issues and do one of the following:
1. Close it.
1. Assign it to the "Coming soon" milestone, which doesn't have an end date.
1. Move it to the "Future" milestone.
1. Change its status to "Need:\<what-is-needed\>".
Optional further activities:
* Label the issue with the appropriate area/component.
* Add a section to the FAQ or add a wiki page. Link to it from the issue.

@ -0,0 +1,113 @@
---
title: Vulnerability disclosure
sort_key: A
---
## About disclosures
In the software world, it is expected that security vulnerabilities are immediately
announced, thus giving operators an opportunity to take protective measures against
attackers.
Vulnerabilities typically take two forms:
1. Vulnerabilities that, if exploited, would harm the software operator. In the case of
go-ethereum, examples would be:
- A bug that would allow remote reading or writing of OS files, or
- Remote command execution, or
- Bugs that would leak cryptographic keys
2. Vulnerabilities that, if exploited, would harm the Ethereum mainnet. In the case of
go-ethereum, examples would be:
- Consensus vulnerabilities, which would cause a chain split,
- Denial-of-service during block processing, whereby a malicious transaction could cause the geth-portion of the network to crash.
- Denial-of-service via p2p networking, whereby portions of the network could be made
inaccessible due to crashes or resource consumption.
In most cases so far, vulnerabilities in `geth` have been of the second type, where the
health of the network is a concern, rather than individual node operators. For such
issues, we reserve the right to silently patch and ship fixes in new releases.
### Why silent patches
In the case of Ethereum, it takes a lot of time (weeks, months) to get node operators to
update even to a scheduled hard fork. If we were to highlight that a release contains
important consensus or DoS fixes, there is always a risk of someone trying to beat node
operators to the punch, and exploit the vulnerability. Delaying a potential attack
sufficiently to make the majority of node operators immune may be worth the temporary loss
of transparency.
The primary goal for the Geth team is the health of the Ethereum network as a whole, and
the decision whether or not to publish details about a serious vulnerability boils down to
minimizing the risk and/or impact of discovery and exploitation.
At certain times, it's better to remain silent. This practice is also followed by other
projects such as
[Monero](https://www.getmonero.org/2017/05/17/disclosure-of-a-major-bug-in-cryptonote-based-currencies.html),
[ZCash](https://electriccoin.co/blog/zcash-counterfeiting-vulnerability-successfully-remediated/)
and
[Bitcoin](https://www.coindesk.com/the-latest-bitcoin-bug-was-so-bad-developers-kept-its-full-details-a-secret).
### Public transparency
As of November 2020, our policy going forward is:
- If we silently fix a vulnerability and include the fix in release `X`, then,
- After 4-8 weeks, we will disclose that `X` contained a security-fix.
- After an additional 4-8 weeks, we will publish the details about the vulnerability.
We hope that this provides a sufficient balance between transparency and the need for
secrecy, and aids node operators and downstream projects in keeping up to date with what
versions to run on their infrastructure.
In keeping with this policy, we have taken inspiration from [Solidity bug disclosure](https://solidity.readthedocs.io/en/develop/bugs.html) - see below.
## Disclosed vulnerabilities
In this folder, you can find a JSON-formatted list
([`vulnerabilities.json`](vulnerabilities.json)) of some of the known security-relevant
vulnerabilities concerning `geth`.
As of `geth` version `1.9.25`, geth has a built-in command to check whether it is affected
by any publicly disclosed vulnerability: `geth version-check`. This command fetches the
latest JSON file (and the accompanying
[signature-file](vulnerabilities.json.minisig)) and cross-checks the data against its own
version number.
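For example, checking the locally installed binary:
```shell
geth version-check
```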
The file itself is hosted in the Github repository, on the `gh-pages` branch. The list was
started in November 2020, and covers mainly `v1.9.7` and later versions.
The JSON file of known vulnerabilities below is a list of objects, one for each
vulnerability, with the following keys:
- `name`
- Unique name given to the vulnerability.
- `uid`
- Unique identifier of the vulnerability. Format `GETH-<year>-<sequential id>`
- `summary`
- Short description of the vulnerability.
- `description`
- Detailed description of the vulnerability.
- `links`
- List of relevant URLs with more detailed information (optional).
- `introduced`
- The first published Geth version that contained the vulnerability (optional).
- `fixed`
- The first published Geth version that did not contain the vulnerability anymore.
- `published`
- The date at which the vulnerability became known publicly (optional).
- `severity`
- Severity of the vulnerability: `low`, `medium`, `high`, `critical`.
- Takes into account the severity of impact and likelihood of exploitation.
- `check`
- This field contains a regular expression, which can be used against the reported `web3_clientVersion` of a node. If the check
matches, the node is most likely affected by the vulnerability.
- `CVE`
- The assigned `CVE` identifier, if available (optional)
### What about Github security advisories?
We prefer not to rely on Github as the only or primary publishing channel for security
advisories, but we plan to use the Github advisory process as a second channel for
disseminating vulnerability information.
Advisories published via Github can be accessed [here](https://github.com/ethereum/go-ethereum/security/advisories?state=published).

@ -0,0 +1,65 @@
---
title: Backup & Restore
sort_key: C
---
Most important info first: **REMEMBER YOUR PASSWORD** and **BACKUP YOUR KEYSTORE**.
## Data Directory
Everything `geth` persists gets written inside its data directory. The default data
directory locations are platform specific:
* Mac: `~/Library/Ethereum`
* Linux: `~/.ethereum`
* Windows: `%LOCALAPPDATA%\Ethereum`
Accounts are stored in the `keystore` subdirectory. The contents of this directory
should be transportable between nodes, platforms and implementations (C++, Go, Python).
To configure the location of the data directory, the `--datadir` parameter can be
specified. See [CLI Options](../interface/command-line-options) for more details.
Note that the [ethash DAG](../interface/mining) is stored at `~/.ethash` (Mac/Linux) or
`%APPDATA%\Ethash` (Windows) so that it can be reused by all clients. It can be stored
in a different location by using a symbolic link.
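For example, on Mac or Linux the DAG directory can be relocated to a larger disk and linked back to its default location (the target path below is only an illustration):
```
mv ~/.ethash /mnt/bigdisk/ethash
ln -s /mnt/bigdisk/ethash ~/.ethash
```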
## Cleanup
Geth's blockchain and state databases can be removed with:
```
geth removedb
```
This is useful for deleting an old chain and syncing to a new one. It only affects data
directories that can be re-created on synchronisation and does not touch the keystore.
## Blockchain Import/Export
Export the blockchain in binary format with:
```
geth export <filename>
```
Or if you want to back up portions of the chain over time, a first and last block can be
specified. For example, to back up the first epoch:
```
geth export <filename> 0 29999
```
Note that when backing up a partial chain, the file will be appended to rather than
truncated.
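Because the file is appended to, successive block ranges can be written to the same backup. For example, the second epoch could be added with:
```
geth export <filename> 30000 59999
```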
Import binary-format blockchain exports with:
```
geth import <filename>
```
_See https://eth.wiki/en/howto/blockchain-import-and-export-instructions for more info_
And finally: **REMEMBER YOUR PASSWORD** and **BACKUP YOUR KEYSTORE**

@ -0,0 +1,238 @@
---
title: Command-line Options
sort_key: A
---
```
$ geth --help
NAME:
geth - the go-ethereum command line interface
Copyright 2013-2022 The go-ethereum Authors
USAGE:
geth [options] [command] [command options] [arguments...]
VERSION:
1.10.19-stable-23bee162
COMMANDS:
account Manage accounts
attach Start an interactive JavaScript environment (connect to node)
console Start an interactive JavaScript environment
db Low level database operations
dump Dump a specific block from storage
dumpconfig Show configuration values
dumpgenesis Dumps genesis block JSON configuration to stdout
export Export blockchain into file
export-preimages Export the preimage database into an RLP stream
import Import a blockchain file
import-preimages Import the preimage database from an RLP stream
init Bootstrap and initialize a new genesis block
js Execute the specified JavaScript files
license Display license information
makecache Generate ethash verification cache (for testing)
makedag Generate ethash mining DAG (for testing)
removedb Remove blockchain and state databases
show-deprecated-flags Show flags that have been deprecated
snapshot A set of commands based on the snapshot
version Print version numbers
version-check Checks (online) whether the current version suffers from any known security vulnerabilities
wallet Manage Ethereum presale wallets
help, h Shows a list of commands or help for one command
ETHEREUM OPTIONS:
--config value TOML configuration file
--datadir.minfreedisk value Minimum free disk space in MB, once reached triggers auto shut down (default = --cache.gc converted to MB, 0 = disabled)
--keystore value Directory for the keystore (default = inside the datadir)
--usb Enable monitoring and management of USB hardware wallets
--pcscdpath value Path to the smartcard daemon (pcscd) socket file
--networkid value Explicitly set network id (integer)(For testnets: use --ropsten, --rinkeby, --goerli instead) (default: 1)
--syncmode value Blockchain sync mode ("snap", "full" or "light") (default: snap)
--exitwhensynced Exits after block synchronisation completes
--gcmode value Blockchain garbage collection mode ("full", "archive") (default: "full")
--txlookuplimit value Number of recent blocks to maintain transactions index for (default = about one year, 0 = entire chain) (default: 2350000)
--ethstats value Reporting URL of a ethstats service (nodename:secret@host:port)
--identity value Custom node name
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--eth.requiredblocks value Comma separated block number-to-hash mappings to require for peering (<number>=<hash>)
--mainnet Ethereum mainnet
--ropsten Ropsten network: pre-configured proof-of-stake test network
--rinkeby Rinkeby network: pre-configured proof-of-authority test network
--goerli Görli network: pre-configured proof-of-authority test network
--sepolia Sepolia network: pre-configured proof-of-work test network
--kiln Kiln network: pre-configured proof-of-work to proof-of-stake test network
--datadir value Data directory for the databases and keystore (default: "~/.ethereum")
--datadir.ancient value Data directory for ancient chain segments (default = inside chaindata)
--remotedb value URL for remote database
LIGHT CLIENT OPTIONS:
--light.serve value Maximum percentage of time allowed for serving LES requests (multi-threaded processing allows values over 100) (default: 0)
--light.ingress value Incoming bandwidth limit for serving light clients (kilobytes/sec, 0 = unlimited) (default: 0)
--light.egress value Outgoing bandwidth limit for serving light clients (kilobytes/sec, 0 = unlimited) (default: 0)
--light.maxpeers value Maximum number of light clients to serve, or light servers to attach to (default: 100)
--ulc.servers value List of trusted ultra-light servers
--ulc.fraction value Minimum % of trusted ultra-light servers required to announce a new head (default: 75)
--ulc.onlyannounce Ultra light server sends announcements only
--light.nopruning Disable ancient light chain data pruning
--light.nosyncserve Enables serving light clients before syncing
DEVELOPER CHAIN OPTIONS:
--dev Ephemeral proof-of-authority network with a pre-funded developer account, mining enabled
--dev.period value Block period to use in developer mode (0 = mine only if transaction pending) (default: 0)
--dev.gaslimit value Initial block gas limit (default: 11500000)
ETHASH OPTIONS:
--ethash.cachedir value Directory to store the ethash verification caches (default = inside the datadir)
--ethash.cachesinmem value Number of recent ethash caches to keep in memory (16MB each) (default: 2)
--ethash.cachesondisk value Number of recent ethash caches to keep on disk (16MB each) (default: 3)
--ethash.cacheslockmmap Lock memory maps of recent ethash caches
--ethash.dagdir value Directory to store the ethash mining DAGs (default: "~/.ethash")
--ethash.dagsinmem value Number of recent ethash mining DAGs to keep in memory (1+GB each) (default: 1)
--ethash.dagsondisk value Number of recent ethash mining DAGs to keep on disk (1+GB each) (default: 2)
--ethash.dagslockmmap Lock memory maps for recent ethash mining DAGs
TRANSACTION POOL OPTIONS:
--txpool.locals value Comma separated accounts to treat as locals (no flush, priority inclusion)
--txpool.nolocals Disables price exemptions for locally submitted transactions
--txpool.journal value Disk journal for local transaction to survive node restarts (default: "transactions.rlp")
--txpool.rejournal value Time interval to regenerate the local transaction journal (default: 1h0m0s)
--txpool.pricelimit value Minimum gas price limit to enforce for acceptance into the pool (default: 1)
--txpool.pricebump value Price bump percentage to replace an already existing transaction (default: 10)
--txpool.accountslots value Minimum number of executable transaction slots guaranteed per account (default: 16)
--txpool.globalslots value Maximum number of executable transaction slots for all accounts (default: 5120)
--txpool.accountqueue value Maximum number of non-executable transaction slots permitted per account (default: 64)
--txpool.globalqueue value Maximum number of non-executable transaction slots for all accounts (default: 1024)
--txpool.lifetime value Maximum amount of time non-executable transaction are queued (default: 3h0m0s)
PERFORMANCE TUNING OPTIONS:
--cache value Megabytes of memory allocated to internal caching (default = 4096 mainnet full node, 128 light mode) (default: 1024)
--cache.database value Percentage of cache memory allowance to use for database io (default: 50)
--cache.trie value Percentage of cache memory allowance to use for trie caching (default = 15% full mode, 30% archive mode) (default: 15)
--cache.trie.journal value Disk journal directory for trie cache to survive node restarts (default: "triecache")
--cache.trie.rejournal value Time interval to regenerate the trie cache journal (default: 1h0m0s)
--cache.gc value Percentage of cache memory allowance to use for trie pruning (default = 25% full mode, 0% archive mode) (default: 25)
--cache.snapshot value Percentage of cache memory allowance to use for snapshot caching (default = 10% full mode, 20% archive mode) (default: 10)
--cache.noprefetch Disable heuristic state prefetch during block import (less CPU and disk IO, more time waiting for data)
--cache.preimages Enable recording the SHA3/keccak preimages of trie keys
--fdlimit value Raise the open file descriptor resource limit (default = system fd limit) (default: 0)
ACCOUNT OPTIONS:
--unlock value Comma separated list of accounts to unlock
--password value Password file to use for non-interactive password input
--signer value External signer (url or path to ipc file)
--allow-insecure-unlock Allow insecure account unlocking when account-related RPCs are exposed by http
API AND CONSOLE OPTIONS:
--ipcdisable Disable the IPC-RPC server
--ipcpath value Filename for IPC socket/pipe within the datadir (explicit paths escape it)
--http Enable the HTTP-RPC server
--http.addr value HTTP-RPC server listening interface (default: "localhost")
--http.port value HTTP-RPC server listening port (default: 8545)
--http.api value API's offered over the HTTP-RPC interface
--http.rpcprefix value HTTP path path prefix on which JSON-RPC is served. Use '/' to serve on all paths.
--http.corsdomain value Comma separated list of domains from which to accept cross origin requests (browser enforced)
--http.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--ws Enable the WS-RPC server
--ws.addr value WS-RPC server listening interface (default: "localhost")
--ws.port value WS-RPC server listening port (default: 8546)
--ws.api value API's offered over the WS-RPC interface
--ws.rpcprefix value HTTP path prefix on which JSON-RPC is served. Use '/' to serve on all paths.
--ws.origins value Origins from which to accept websockets requests
--authrpc.jwtsecret value Path to a JWT secret to use for authenticated RPC endpoints
--authrpc.addr value Listening address for authenticated APIs (default: "localhost")
--authrpc.port value Listening port for authenticated APIs (default: 8551)
--authrpc.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--graphql Enable GraphQL on the HTTP-RPC server. Note that GraphQL can only be started if an HTTP server is started as well.
--graphql.corsdomain value Comma separated list of domains from which to accept cross origin requests (browser enforced)
--graphql.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--rpc.gascap value Sets a cap on gas that can be used in eth_call/estimateGas (0=infinite) (default: 50000000)
--rpc.evmtimeout value Sets a timeout used for eth_call (0=infinite) (default: 5s)
--rpc.txfeecap value Sets a cap on transaction fee (in ether) that can be sent via the RPC APIs (0 = no cap) (default: 1)
--rpc.allow-unprotected-txs Allow for unprotected (non EIP155 signed) transactions to be submitted via RPC
--jspath loadScript JavaScript root path for loadScript (default: ".")
--exec value Execute JavaScript statement
--preload value Comma separated list of JavaScript files to preload into the console
NETWORKING OPTIONS:
--bootnodes value Comma separated enode URLs for P2P discovery bootstrap
--discovery.dns value Sets DNS discovery entry points (use "" to disable DNS)
--port value Network listening port (default: 30303)
--maxpeers value Maximum number of network peers (network disabled if set to 0) (default: 50)
--maxpendpeers value Maximum number of pending connection attempts (defaults used if set to 0) (default: 0)
--nat value NAT port mapping mechanism (any|none|upnp|pmp|extip:<IP>) (default: "any")
--nodiscover Disables the peer discovery mechanism (manual peer addition)
--v5disc Enables the experimental RLPx V5 (Topic Discovery) mechanism
--netrestrict value Restricts network communication to the given IP networks (CIDR masks)
--nodekey value P2P node key file
--nodekeyhex value P2P node key as hex (for testing)
MINER OPTIONS:
--mine Enable mining
--miner.threads value Number of CPU threads to use for mining (default: 0)
--miner.notify value Comma separated HTTP URL list to notify of new work packages
--miner.notify.full Notify with pending block headers instead of work packages
--miner.gasprice value Minimum gas price for mining a transaction (default: 1000000000)
--miner.gaslimit value Target gas ceiling for mined blocks (default: 30000000)
--miner.etherbase value Public address for block mining rewards (default = first account) (default: "0")
--miner.extradata value Block extra data set by the miner (default = client version)
--miner.recommit value Time interval to recreate the block being mined (default: 3s)
--miner.noverify Disable remote sealing verification
GAS PRICE ORACLE OPTIONS:
--gpo.blocks value Number of recent blocks to check for gas prices (default: 20)
--gpo.percentile value Suggested gas price is the given percentile of a set of recent transaction gas prices (default: 60)
--gpo.maxprice value Maximum transaction priority fee (or gasprice before London fork) to be recommended by gpo (default: 500000000000)
--gpo.ignoreprice value Gas price below which gpo will ignore transactions (default: 2)
VIRTUAL MACHINE OPTIONS:
--vmdebug Record information useful for VM and contract debugging
LOGGING AND DEBUGGING OPTIONS:
--fakepow Disables proof-of-work verification
--nocompaction Disables db compaction after import
--verbosity value Logging verbosity: 0=silent, 1=error, 2=warn, 3=info, 4=debug, 5=detail (default: 3)
--vmodule value Per-module verbosity: comma-separated list of <pattern>=<level> (e.g. eth/*=5,p2p=4)
--log.json Format logs with JSON
--log.backtrace value Request a stack trace at a specific logging statement (e.g. "block.go:271")
--log.debug Prepends log messages with call-site location (file and line number)
--pprof Enable the pprof HTTP server
--pprof.addr value pprof HTTP server listening interface (default: "127.0.0.1")
--pprof.port value pprof HTTP server listening port (default: 6060)
--pprof.memprofilerate value Turn on memory profiling with the given rate (default: 524288)
--pprof.blockprofilerate value Turn on block profiling with the given rate (default: 0)
--pprof.cpuprofile value Write CPU profile to the given file
--trace value Write execution trace to the given file
METRICS AND STATS OPTIONS:
--metrics Enable metrics collection and reporting
--metrics.expensive Enable expensive metrics collection and reporting
--metrics.addr value Enable stand-alone metrics HTTP server listening interface (default: "127.0.0.1")
--metrics.port value Metrics HTTP server listening port (default: 6060)
--metrics.influxdb Enable metrics export/push to an external InfluxDB database
--metrics.influxdb.endpoint value InfluxDB API endpoint to report metrics to (default: "http://localhost:8086")
--metrics.influxdb.database value InfluxDB database name to push reported metrics to (default: "geth")
--metrics.influxdb.username value Username to authorize access to the database (default: "test")
--metrics.influxdb.password value Password to authorize access to the database (default: "test")
--metrics.influxdb.tags value Comma-separated InfluxDB tags (key/values) attached to all measurements (default: "host=localhost")
--metrics.influxdbv2 Enable metrics export/push to an external InfluxDB v2 database
--metrics.influxdb.token value Token to authorize access to the database (v2 only) (default: "test")
--metrics.influxdb.bucket value InfluxDB bucket name to push reported metrics to (v2 only) (default: "geth")
--metrics.influxdb.organization value InfluxDB organization name (v2 only) (default: "geth")
ALIASED (deprecated) OPTIONS:
--nousb Disables monitoring for and managing USB hardware wallets (deprecated)
--whitelist value Comma separated block number-to-hash mappings to enforce (<number>=<hash>) (deprecated in favor of --eth.requiredblocks)
MISC OPTIONS:
--snapshot Enables snapshot-database mode (default = enable)
--bloomfilter.size value Megabytes of memory allocated to bloom-filter for pruning (default: 2048)
--ignore-legacy-receipts Geth will start up even if there are legacy receipts in freezer
--help, -h show help
--override.grayglacier value Manually specify Gray Glacier fork-block, overriding the bundled setting (default: 0)
--override.terminaltotaldifficulty value Manually specify TerminalTotalDifficulty, overriding the bundled setting (default: <nil>)
COPYRIGHT:
Copyright 2013-2022 The go-ethereum Authors
```

@ -0,0 +1,356 @@
---
title: Installing Geth
sort_key: A
---
There are several ways to install Geth, including via a package manager, downloading a pre-built bundle, running as a docker container or building from downloaded source code. On this page the various installation options are explained for several major operating systems. Users prioritizing ease of installation should choose to use a package manager or prebuilt bundle. Users prioritizing customization should build from source. It is important to run the latest version of Geth because each release includes bugfixes and improvements over the previous versions. The stable releases are recommended for most users because they have been fully tested. A list of stable releases can be found [here][geth-releases]. Instructions for updating existing Geth installations are also provided in each section.
{:toc}
- this will be removed by the toc
## Package managers
### MacOS via Homebrew
The easiest way to install go-ethereum is to use the Geth Homebrew tap. The first step is to check that Homebrew is installed. The following command should return a version number.
```shell
brew -v
```
If a version number is returned, then Homebrew is installed. If not, Homebrew can be installed by following the instructions [here][brew]. With Homebrew installed, the following commands add the Geth tap and install Geth:
```shell
brew tap ethereum/ethereum
brew install ethereum
```
The previous command installs the latest stable release. Developers that wish to install the most up-to-date version can install the Geth repository's master branch by adding the `--devel` parameter to the install command:
```shell
brew install ethereum --devel
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/local/bin/`. The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation to the latest version can be achieved by stopping the node and running the following commands:
```shell
brew update
brew upgrade
brew reinstall ethereum
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### Ubuntu via PPAs
The easiest way to install Geth on Ubuntu-based distributions is with the built-in launchpad PPAs (Personal Package Archives). A single PPA repository is provided, containing stable and development releases for Ubuntu versions `xenial`, `trusty`, `impish`, `focal`, `bionic`.
The following command enables the launchpad repository:
```shell
sudo add-apt-repository -y ppa:ethereum/ethereum
```
Then, to install the stable version of go-ethereum:
```shell
sudo apt-get update
sudo apt-get install ethereum
```
Or, alternatively the develop version:
```shell
sudo apt-get update
sudo apt-get install ethereum-unstable
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/local/bin/`. The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation to the latest version can be achieved by stopping the node and running the following commands:
```shell
sudo apt-get update
sudo apt-get install ethereum
sudo apt-get upgrade geth
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### Windows
The easiest way to install Geth is to download a pre-compiled binary from the [downloads][geth-dl] page. The page provides an installer as well as a zip file containing the Geth source code. The install wizard offers the user the option to install Geth, or Geth and the developer tools. The installer adds `geth` to the system's `PATH` automatically. The zip file contains the command `.exe` files that can be run from the command prompt. The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation can be achieved by stopping the node, downloading and installing the latest version following the instructions above. When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### FreeBSD via pkg
Geth can be installed on FreeBSD using the package manager `pkg`. The following command downloads and installs Geth:
```shell
pkg install go-ethereum
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`.
The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation to the latest version can be achieved by stopping the node and running the following commands:
```shell
pkg upgrade
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### FreeBSD via ports
Installing Geth using ports simply requires navigating to the `net-p2p/go-ethereum` ports directory and running `make install` as root:
```shell
cd /usr/ports/net-p2p/go-ethereum
make install
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/local/bin/`.
The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation can be achieved by stopping the node and running the following command:
```shell
portsnap fetch
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### Arch Linux via pacman
The Geth package is available from the [community repo][geth-archlinux]. It can be installed by running:
```shell
pacman -S geth
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/bin/`.
The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation can be achieved by stopping the node and running the following command:
```shell
sudo pacman -Sy
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
## Standalone bundle
Stable releases and development builds are provided as standalone bundles. These are useful for users who: a) wish to install a specific version of Geth (e.g., for reproducible environments); b) wish to install on machines without internet access (e.g. air-gapped computers); or c) wish to avoid automatic updates and instead prefer to manually install software.
The following standalone bundles are available:
- 32bit, 64bit, ARMv5, ARMv6, ARMv7 and ARM64 archives (`.tar.gz`) on Linux
- 64bit archives (`.tar.gz`) on macOS
- 32bit and 64bit archives (`.zip`) and installers (`.exe`) on Windows
Some archives contain only Geth, while others contain Geth and the various developer tools (`clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`). More information about these executables is available at the [`README`][geth-readme-exe].
The standalone bundles can be downloaded from the [Geth Downloads][geth-dl] page. To update an existing installation, download and manually install the latest version.
## Docker container
A Docker image with recent snapshot builds from our `develop` branch is maintained on DockerHub to support users who prefer to run containerized processes. There are four different Docker images available for running the latest stable or development versions of Geth.
- `ethereum/client-go:latest` is the latest development version of Geth (default)
- `ethereum/client-go:stable` is the latest stable version of Geth
- `ethereum/client-go:{version}` is the stable version of Geth at a specific version number
- `ethereum/client-go:release-{version}` is the latest stable version of Geth at a specific version family
Pulling an image and starting a node is achieved by running these commands:
```shell
docker pull ethereum/client-go
docker run -it -p 30303:30303 ethereum/client-go
```
There are also four different Docker images for running the latest stable or development versions of miscellaneous Ethereum tools.
- `ethereum/client-go:alltools-latest` is the latest development version of the Ethereum tools
- `ethereum/client-go:alltools-stable` is the latest stable version of the Ethereum tools
- `ethereum/client-go:alltools-{version}` is the stable version of the Ethereum tools at a specific version number
- `ethereum/client-go:alltools-release-{version}` is the latest stable version of the Ethereum tools at a specific version family
The image has the following ports automatically exposed:
- `8545` TCP, used by the HTTP based JSON RPC API
- `8546` TCP, used by the WebSocket based JSON RPC API
- `8547` TCP, used by the GraphQL API
- `30303` TCP and UDP, used by the P2P protocol running the network
**Note:** if you are running an Ethereum client inside a Docker container, you should mount a data volume as the client's data directory (located at `/root/.ethereum` inside the container) to ensure that downloaded data is preserved between restarts and/or container life-cycles.
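For example, a host directory can be mounted at that location when the container is started (the host path below is only an illustration):
```shell
docker run -it -v /home/user/geth-data:/root/.ethereum -p 30303:30303 ethereum/client-go
```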
Updating Geth to the latest version simply requires stopping the container, pulling the latest version from Docker and running it:
```shell
docker stop <container-id>
docker pull ethereum/client-go:latest
docker run -it -p 30303:30303 ethereum/client-go
```
## Build from source code
### Most Linux systems and macOS
Geth is written in [Go][go], so building from source code requires the most recent version of Go to be installed. Instructions for installing Go are available at the [Go installation page][go-install] and necessary bundles can be downloaded from the [Go download page][go-dl].
With Go installed, Geth can be downloaded into a `GOPATH` workspace via:
```shell
go get -d github.com/ethereum/go-ethereum
```
You can also install specific versions via:
```shell
go get -d github.com/ethereum/go-ethereum@v1.9.21
```
The above commands do not build any executables. To do that you can either build one specifically:
```shell
go install github.com/ethereum/go-ethereum/cmd/geth
```
Alternatively, the following command, run in the project root directory (`ethereum/go-ethereum`) in the Go workspace, builds the entire project and installs Geth and all the developer tools:
```shell
go install ./...
```
For macOS users, errors related to macOS header files are usually fixed by installing XCode Command Line Tools with `xcode-select --install`.
Another common error is: `go: cannot use path@version syntax in GOPATH mode`. This and other similar errors can often be fixed by enabling gomodules using `export GO111MODULE=on`.
Updating an existing Geth installation can be achieved using `go get`:
```shell
go get -u github.com/ethereum/go-ethereum
```
### Windows
The Chocolatey package manager provides an easy way to install the required build tools. Chocolatey can be installed by following these [instructions][chocolatey]. Then, to install the build tools, the following commands can be run in an Administrator command prompt:
```
C:\Windows\system32> choco install git
C:\Windows\system32> choco install golang
C:\Windows\system32> choco install mingw
```
Installing these packages sets up the path environment variables. To get the new path, a new command prompt must be opened. To install Geth, a Go workspace directory must first be created, then the Geth source code can be cloned and built.
```
C:\Users\xxx> mkdir src\github.com\ethereum
C:\Users\xxx> git clone https://github.com/ethereum/go-ethereum src\github.com\ethereum\go-ethereum
C:\Users\xxx> cd src\github.com\ethereum\go-ethereum
C:\Users\xxx\src\github.com\ethereum\go-ethereum> go get -u -v golang.org/x/net/context
C:\Users\xxx\src\github.com\ethereum\go-ethereum> go install -v ./cmd/...
```
### FreeBSD
To build Geth from source code on FreeBSD, the Geth Github repository can be cloned into a local directory.
```shell
git clone https://github.com/ethereum/go-ethereum
```
Then, the Go compiler needs to be installed in order to build Geth:
```shell
pkg install go
```
If the Go version currently installed is >= 1.5, Geth can be built using the following command:
```shell
cd go-ethereum
make geth
```
If the installed Go version is &lt; 1.5 (quarterly packages, for example), the following command can be used instead:
```shell
cd go-ethereum
CC=clang make geth
```
To start the node, the following command can be run:
```shell
build/bin/geth
```
### Building without a Go workflow
Geth can also be built without using Go workspaces. In this case, the repository should be cloned into a local directory. Then, the command
`make geth` configures everything for a temporary build and cleans up afterwards. This method of building only works on UNIX-like operating systems, and a Go installation is still required.
```shell
git clone https://github.com/ethereum/go-ethereum.git
cd go-ethereum
make geth
```
These commands create a Geth executable file in the `go-ethereum/build/bin` folder that can be moved and run from another directory if required. The binary is standalone and doesn't require any additional files.
To update an existing Geth installation simply stop the node, navigate to the project root directory and pull the latest version from the Geth Github repository, then rebuild and restart the node.
```shell
cd go-ethereum
git pull
make geth
```
Additionally all the developer tools provided with Geth (`clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`) can be compiled by running `make all`. More information about these tools can be found [here][geth-readme-exe].
Instructions for cross-compiling to another architecture are available in the [cross-compilation guide](./cross-compile).
To build a stable release, e.g. v1.9.21, the command `git checkout v1.9.21` retrieves that specific version. Executing that command before running `make geth` switches Geth to a stable branch.
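For example:
```shell
cd go-ethereum
git checkout v1.9.21
make geth
```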
[brew]: https://brew.sh/
[go]: https://golang.org/
[go-dl]: https://golang.org/dl/
[go-install]: https://golang.org/doc/install
[chocolatey]: https://chocolatey.org
[geth-releases]: https://github.com/ethereum/go-ethereum/releases
[geth-readme-exe]: https://github.com/ethereum/go-ethereum#executables
[geth-cl-options]: https://geth.ethereum.org/docs/interface/command-line-options
[geth-archlinux]: https://www.archlinux.org/packages/community/x86_64/geth/
[geth-dl]: ../../downloads/

@ -0,0 +1,352 @@
---
title: Managing Your Accounts
sort_key: B
---
**WARNING**
Remember your password.
If you lose the password you use to encrypt your account, you will not be able to access that account.
Repeat: It is NOT possible to access your account without a password and there is no _forgot my password_ option here. Do not forget it.
The ethereum CLI `geth` provides account management via the `account` command:
```
$ geth account <command> [options...] [arguments...]
```
The `account` command lets you create new accounts, list all existing accounts, import a private
key into a new account, migrate to the newest key format and change your password.
It supports an interactive mode, where you are prompted for a password, as well as a
non-interactive mode where passwords are supplied via a given password file.
Non-interactive mode is only meant for scripted use on test networks or known safe
environments.
Make sure you remember the password you gave when creating a new account (with `new`, `update`
or `import`). Without it you will not be able to unlock your account.
Note that exporting your key in unencrypted format is NOT supported.
Keys are stored under `<DATADIR>/keystore`. Make sure you back up your keys regularly! See
[DATADIR backup & restore](../install-and-build/backup-restore)
for more information. If both a custom datadir and keystore option are given, the keystore
option takes precedence over the datadir option.
The newest format of the keyfiles is: `UTC--<created_at UTC ISO8601>--<address hex>`. The
order of accounts when listing is lexicographic but, as a consequence of the timestamp
format, it is effectively the order of creation.
It is safe to transfer the entire directory or the individual keys therein between
Ethereum nodes. Note that if you are adding keys to your node from a different node,
the order of accounts may change. So make sure you either do not rely on the account
index in your scripts or code snippets, or double-check and update it.
And again. **DO NOT FORGET YOUR PASSWORD**
```
COMMANDS:
list Print summary of existing accounts
new Create a new account
update Update an existing account
import Import a private key into a new account
```
You can get info about subcommands by `geth account <command> --help`.
```
$ geth account list --help
list [command options] [arguments...]
Print a short summary of all accounts
OPTIONS:
--datadir "/home/bas/.ethereum" Data directory for the databases and keystore
--keystore Directory for the keystore (default = inside the datadir)
```
Accounts can also be managed via the [Javascript Console](../interface/javascript-console)
## Examples
### Interactive use
#### Creating an account
```
$ geth account new
Your new account is locked with a password. Please give a password. Do not forget this password.
Passphrase:
Repeat Passphrase:
Address: {168bc315a2ee09042d83d7c5811b533620531f67}
```
#### Listing accounts in a custom keystore directory
```
$ geth account list --keystore /tmp/mykeystore/
Account #0: {5afdd78bdacb56ab1dad28741ea2a0e47fe41331} keystore:///tmp/mykeystore/UTC--2017-04-28T08-46-27.437847599Z--5afdd78bdacb56ab1dad28741ea2a0e47fe41331
Account #1: {9acb9ff906641a434803efb474c96a837756287f} keystore:///tmp/mykeystore/UTC--2017-04-28T08-46-52.180688336Z--9acb9ff906641a434803efb474c96a837756287f
```
#### Import private key into a node with a custom datadir
```
$ geth account import --datadir /someOtherEthDataDir ./key.prv
The new account will be encrypted with a passphrase.
Please enter a passphrase now.
Passphrase:
Repeat Passphrase:
Address: {7f444580bfef4b9bc7e14eb7fb2a029336b07c9d}
```
#### Account update
```
$ geth account update a94f5374fce5edbc8e2a8697c15331677e6ebf0b
Unlocking account a94f5374fce5edbc8e2a8697c15331677e6ebf0b | Attempt 1/3
Passphrase:
0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b
Account 'a94f5374fce5edbc8e2a8697c15331677e6ebf0b' unlocked.
Please give a new password. Do not forget this password.
Passphrase:
Repeat Passphrase:
0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b
```
### Non-interactive use
You supply a plaintext password file as argument to the `--password` flag. The data in the
file consists of the raw characters of the password, followed by a single newline.
**Note**: Supplying the password directly as part of the command line is not recommended,
but you can always use shell trickery to get around this restriction.
```
$ geth account new --password /path/to/password
$ geth account import --datadir /someOtherEthDataDir --password /path/to/anotherpassword ./key.prv
```
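The password file itself simply contains the password followed by a single newline. For example (the path and password here are placeholders; keep the file readable only by your own user):
```
printf 'mypassword\n' > /path/to/password
chmod 600 /path/to/password
```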
# Creating accounts
## Creating a new account
```
$ geth account new
$ geth account new --password /path/to/passwdfile
$ geth account new --password <(echo $mypassword)
```
Creates a new account and prints the address.
On the console, use:
```
> personal.newAccount()
... you will be prompted for a password ...
or
> personal.newAccount("passphrase")
```
The account is saved in encrypted format. You **must** remember this passphrase to unlock
your account in the future.
For non-interactive use the passphrase can be specified with the `--password` flag:
```
geth account new --password <passwordfile>
```
Note that this is meant for testing only; it is a bad idea to save your
password to a file or expose it in any other way.
## Creating an account by importing a private key
```
geth account import <keyfile>
```
Imports an unencrypted private key from `<keyfile>`, creates a new account and prints
the address.
The keyfile is assumed to contain an unencrypted private key as canonical EC raw bytes
encoded into hex.
The account is saved in encrypted format and you are prompted for a passphrase.
You must remember this passphrase to unlock your account in the future.
For non-interactive use the passphrase can be specified with the `--password` flag:
```
geth account import --password <passwordfile> <keyfile>
```
**Note**: Since you can directly copy your encrypted accounts to another ethereum
instance, this import/export mechanism is not needed when you transfer an account between
nodes.
**Warning:** when you copy keys into an existing node's keystore, the order of accounts
you are used to may change. Therefore make sure you either do not rely on the account
order or double-check and update the indexes used in your scripts.
**Warning:** If you use the password flag with a password file, it is best to make sure the file
is not readable or even listable by anyone but you. You can achieve this with:
```
touch /path/to/password
chmod 700 /path/to/password
cat > /path/to/password
>I type my pass here^D
```
## Updating an existing account
You can update an existing account on the command line with the `update` subcommand, passing
the account address or index as parameter. Multiple accounts can be specified at once.
```
geth account update 5afdd78bdacb56ab1dad28741ea2a0e47fe41331 9acb9ff906641a434803efb474c96a837756287f
geth account update 0 1 2
```
The account is saved in the newest version in encrypted format; you are prompted
for a passphrase to unlock the account and another to save the updated file.
The same command can therefore be used to migrate an account from a deprecated
format to the newest format, or to change the password for an account.
After a successful update, all previous formats/versions of that same key are removed!
# Importing your presale wallet
Importing your presale wallet is very easy (provided you remember your password):
```
geth wallet import /path/to/my/presale.wallet
```
This will prompt for your password and import your ether presale account. It can be used
non-interactively with the `--password` option, which takes a password file as argument
containing the wallet password in cleartext.
# Listing accounts and checking balances
### Listing your current accounts
From the command line, call the CLI with:
```
$ geth account list
Account #0: {5afdd78bdacb56ab1dad28741ea2a0e47fe41331} keystore:///tmp/mykeystore/UTC--2017-04-28T08-46-27.437847599Z--5afdd78bdacb56ab1dad28741ea2a0e47fe41331
Account #1: {9acb9ff906641a434803efb474c96a837756287f} keystore:///tmp/mykeystore/UTC--2017-04-28T08-46-52.180688336Z--9acb9ff906641a434803efb474c96a837756287f
```
to list your accounts in order of creation.
**Note**:
This order can change if you copy keyfiles from other nodes, so either do not rely on indexes or, if you do copy keys, check and update the account indexes used in your scripts.
When using the console:
```
> eth.accounts
["0x5afdd78bdacb56ab1dad28741ea2a0e47fe41331", "0x9acb9ff906641a434803efb474c96a837756287f"]
```
or via RPC:
```
# Request
$ curl -X POST --data '{"jsonrpc":"2.0","method":"eth_accounts","params":[],"id":1}' -H 'Content-type: application/json' http://127.0.0.1:8545
# Result
{
"id":1,
"jsonrpc": "2.0",
"result": ["0x5afdd78bdacb56ab1dad28741ea2a0e47fe41331", "0x9acb9ff906641a434803efb474c96a837756287f"]
}
```
If you want to use an account non-interactively, you need to unlock it. You can do this on
the command line with the `--unlock` option, which takes a comma-separated list of accounts
(as hex addresses or indexes) as its argument, so that you can unlock the accounts
programmatically for one session. This is useful if you want to use your account from Dapps
via RPC. `--unlock` on its own will unlock the first account, which is handy when you created
your account programmatically and do not need to know the actual address to unlock it.
Create account and start node with account unlocked:
```
geth account new --password <(echo this is not secret!)
geth --password <(echo this is not secret!) --unlock primary --rpccorsdomain localhost --verbosity 6 2>> geth.log
```
Instead of the account address, you can use integer indexes, which refer to the address's
position in the account listing (and correspond to the order of creation).
The command line allows you to unlock multiple accounts. In this case the argument to
unlock is a comma delimited list of accounts addresses or indexes.
```
geth --unlock "0x407d73d8a49eeb85d32cf465507dd71d507100c1,0,5,e470b1a7d2c9c5c6f03bbaa8fa20db6d404a0c32"
```
If this construction is used non-interactively, your password file will need to contain
the respective passwords for the accounts in question, one per line.
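As a minimal sketch (the file name and passphrases here are made up), a password file for unlocking the first two accounts could look like this, with the passphrases listed in the same order as the accounts given to `--unlock`:
```
$ cat passwords.txt
passphrase-for-account-0
passphrase-for-account-1
$ geth --unlock "0,1" --password passwords.txt
```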
On the console you can also unlock accounts (one at a time) for a duration (in seconds).
```
personal.unlockAccount(address, "password", 300)
```
Note that we do NOT recommend using the password argument here, since the console history
is logged, so you may compromise your account. You have been warned.
### Checking account balances
To check the balance of the etherbase account:
```
> web3.fromWei(eth.getBalance(eth.coinbase), "ether")
6.5
```
Print all balances with a JavaScript function:
```
function checkAllBalances() {
var totalBal = 0;
for (var acctNum in eth.accounts) {
var acct = eth.accounts[acctNum];
var acctBal = web3.fromWei(eth.getBalance(acct), "ether");
totalBal += parseFloat(acctBal);
console.log(" eth.accounts[" + acctNum + "]: \t" + acct + " \tbalance: " + acctBal + " ether");
}
console.log(" Total balance: " + totalBal + " ether");
};
```
That can then be executed with:
```
> checkAllBalances();
eth.accounts[0]: 0xd1ade25ccd3d550a7eb532ac759cac7be09c2719 balance: 63.11848 ether
eth.accounts[1]: 0xda65665fc30803cb1fb7e6d86691e20b1826dee0 balance: 0 ether
eth.accounts[2]: 0xe470b1a7d2c9c5c6f03bbaa8fa20db6d404a0c32 balance: 1 ether
eth.accounts[3]: 0xf4dd5c3794f1fd0cdc0327a83aa472609c806e99 balance: 6 ether
```
Since this function will disappear after restarting geth, it can be helpful to store
commonly used functions to be recalled later. The
[loadScript](../interface/javascript-console)
function makes this very easy.
First, save the `checkAllBalances()` function definition to a file on your computer. For
example, `/Users/username/gethload.js`. Then load the file from the interactive console:
```
> loadScript("/Users/username/gethload.js")
true
```
The file will modify your JavaScript environment as if you had typed the commands
manually. Feel free to experiment!

@ -0,0 +1,167 @@
---
title: Cross-Compiling Geth
sort_key: C
---
**Note: All of these and much more have been merged into the project Makefile. You can
cross build via `make geth-<os>-<platform>` without needing to know any of these details
from below.**
Developers usually have a preferred platform that they feel most comfortable working in,
with all the necessary tools, libraries and environments set up for an optimal workflow.
However, there's often a need to build for either a different CPU architecture or an
entirely different operating system; but maintaining a development environment for each
and switching between them quickly becomes unwieldy.
Here we present a very simple way to cross compile Ethereum to various operating systems
and architectures using a minimal set of prerequisites and a completely containerized
approach, guaranteeing that your development environment remains clean even after the
complex requirements and mechanisms of a cross compilation.
The currently supported target platforms are:
- ARMv7 Android and iOS
- 32 bit, 64 bit and ARMv5 Linux
- 32 bit and 64 bit Mac OSX
- 32 bit and 64 bit Windows
Please note that cross compilation does not replace a release build. Although the resulting
binaries can usually run perfectly on the desired platform, compiling on a native system
with the specialized tools provided by the official vendor can often result in more
finely optimized code.
## Cross compilation environment
Although the `go-ethereum` project is written in Go, it does include a bit of C code
shared between all implementations to ensure that they all perform equally well, including a
dependency on the GNU Multiple Precision Arithmetic Library. Because of this, Go cannot
by itself compile to a different platform than the host. To overcome this limitation, we
will use [`xgo`](https://github.com/karalabe/xgo), a Go cross compiler package based on
Docker containers that has been architected specifically to allow both embedded C snippets
as well as simpler external C dependencies during compilation.
The `xgo` project has two simple dependencies: Docker (to ensure that the build
environment is completely contained) and Go. On most platforms these should be available
from the official package repositories. For manually installing them, please consult their
install guides at [Docker](https://docs.docker.com/installation/) and
[Go](https://golang.org/doc/install) respectively. This guide assumes that these two
dependencies are met.
To install and/or update xgo, simply type:
```
$ go get -u github.com/karalabe/xgo
```
You can test whether `xgo` is functioning correctly by asking it to cross
compile itself and verifying that all cross compilations succeed.
```
$ xgo github.com/karalabe/xgo
...
$ ls -al
-rwxr-xr-x 1 root root 2792436 Sep 14 16:45 xgo-android-21-arm
-rwxr-xr-x 1 root root 2353212 Sep 14 16:45 xgo-darwin-386
-rwxr-xr-x 1 root root 2906128 Sep 14 16:45 xgo-darwin-amd64
-rwxr-xr-x 1 root root 2388288 Sep 14 16:45 xgo-linux-386
-rwxr-xr-x 1 root root 2960560 Sep 14 16:45 xgo-linux-amd64
-rwxr-xr-x 1 root root 2437864 Sep 14 16:45 xgo-linux-arm
-rwxr-xr-x 1 root root 2551808 Sep 14 16:45 xgo-windows-386.exe
-rwxr-xr-x 1 root root 3130368 Sep 14 16:45 xgo-windows-amd64.exe
```
## Building Ethereum
Cross compiling Ethereum is analogous to the above example, but an additional flag is
required to satisfy the dependencies:
- `--deps` is used to inject arbitrary C dependency packages and pre-build them
Injecting the GNU Arithmetic Library dependency and selecting `geth` would be:
```
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
    github.com/ethereum/go-ethereum/cmd/geth
...
$ ls -al
-rwxr-xr-x 1 root root 23213372 Sep 14 17:59 geth-android-21-arm
-rwxr-xr-x 1 root root 14373980 Sep 14 17:59 geth-darwin-386
-rwxr-xr-x 1 root root 17373676 Sep 14 17:59 geth-darwin-amd64
-rwxr-xr-x 1 root root 21098910 Sep 14 17:59 geth-linux-386
-rwxr-xr-x 1 root root 25049693 Sep 14 17:59 geth-linux-amd64
-rwxr-xr-x 1 root root 20578535 Sep 14 17:59 geth-linux-arm
-rwxr-xr-x 1 root root 16351260 Sep 14 17:59 geth-windows-386.exe
-rwxr-xr-x 1 root root 19418071 Sep 14 17:59 geth-windows-amd64.exe
```
As the cross compiler needs to build all the dependencies as well as the main project
itself for each platform, it may take a while for the build to complete (approximately 3-4
minutes on a Core i7 3770K machine).
### Fine tuning the build
By default Go, and inherently `xgo`, checks out and tries to build the master branch of a
source repository. However, more often than not, you'll probably want to build a different
branch from possibly an entirely different remote repository. These can be controlled via
the `--remote` and `--branch` flags.
To build the `develop` branch of the official `go-ethereum` repository instead of the
default `master` branch, you just need to specify it as an additional command line flag
(`--branch`):
```
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
    --branch=develop \
    github.com/ethereum/go-ethereum/cmd/geth
```
Additionally, during development you will most probably want to not only build a custom
branch, but also one originating from your own fork of the repository instead of the
upstream one. This can be done via the `--remote` flag:
```
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
    --remote=https://github.com/karalabe/go-ethereum \
    --branch=rpi-staging \
    github.com/ethereum/go-ethereum/cmd/geth
```
By default `xgo` builds binaries for all supported platforms and architectures, with
Android binaries defaulting to the highest released Android NDK platform. To limit the
build targets or compile to a different Android platform, use the `--targets` CLI
parameter.
```
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
    --targets=android-16/arm,windows/* \
    github.com/ethereum/go-ethereum/cmd/geth
```
### Building locally
If you would like to cross compile your local development version, simply specify a local
path (starting with `.` or `/`), and `xgo` will use all local code from `GOPATH`, only
downloading missing dependencies. In that case the `--branch`, `--remote` and
`--pkg` arguments are no-ops:
```
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
    ./cmd/geth
```
## Using the Makefile
Having understood the gist of `xgo` based cross compilation, you do not need to actually
memorize and maintain these commands, as they have been incorporated into the official
[Makefile](https://github.com/ethereum/go-ethereum/blob/master/Makefile) and can be
invoked with a trivial `make` request:
* `make geth-cross`: Cross compiles to every supported OS and architecture
* `make geth-<os>`: Cross compiles supported architectures of a particular OS (e.g. `linux`)
* `make geth-<os>-<arch>`: Cross compiles to a specific OS/architecture (e.g. `linux`, `arm`)
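For example, assuming the command is run from the root of the `go-ethereum` repository, a Linux/ARM binary could be produced with:
```
make geth-linux-arm
```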
We advise using the `make` based commands as opposed to manually invoking `xgo`, as we
actively maintain the Makefile, whereas we cannot guarantee that this document will
always be updated with the latest changes.
### Tuning the cross builds
A few of the `xgo` build options have also been surfaced directly into the Makefile to
allow fine tuning builds to work around either upstream Go issues, or to enable some
fancier mechanics.
- `make ... GO=<go>`: Use a specific Go runtime (e.g. `1.5.1`, `1.5-develop`, `develop`)
- `make ... MODE=<mode>`: Build a specific target type (e.g. `exe`, `c-archive`).
Please note that these are not yet fully finalized, so they may or may not change in the
future as our code and the Go runtime features change.

@ -0,0 +1,61 @@
---
title: Light client
sort_key: B
---
Running a full node is the most trustless, private, decentralized and censorship resistant way to interact with Ethereum. It is also the best choice for the health of the network, because a decentralized network relies on having many individual nodes that independently verify the head of the chain. In a full node a copy of the blockchain is stored locally enabling users to verify incoming data against a local source of truth. However, running a full node requires a lot of disk space and non-negligible CPU allocation and takes hours (for snap sync) or days (for full sync) to sync the blockchain from genesis. Geth also offers a light mode that overcomes these issues and provides some of the benefits of running a node but requires only a fraction of the resources.
Read more about the reasons to run nodes on [ethereum.org](https://ethereum.org/en/run-a-node/).
## Light node vs full node
Running Geth in light mode has the following advantages for users:
- Syncing takes minutes rather than hours/days
- Light mode uses significantly less storage
- Light mode is lighter on CPU and other resources
- Light mode is suitable for resource-constrained devices
- Light mode can catch up much quicker after having been offline for a while
However, the cost of this performance increase is that a light Geth node depends heavily on full-node peers that choose, for altruistic reasons, to run light servers. There is no monetary incentive for full nodes to run light servers and it is an opt-in, rather than opt-out function of a Geth full node. For those reasons light servers are rather rare and can quickly become overwhelmed by data requests from light clients. The result of this is that **Geth nodes run in light mode often struggle to find peers**.
A light client can be used to query data from Ethereum and submit transactions, acting as a locally-hosted Ethereum wallet. However they have different security guarantees than full nodes. Because they don't keep local copies of the Ethereum state, light nodes can't validate the blocks in the same way as the full nodes. Instead they fetch block headers by requesting them from full nodes and check their proof-of-work (PoW), assuming the heaviest chain is valid. This means that it is sensible to wait until a few additional blocks have been confirmed before trusting the validity of a recently-mined transaction.
### Running a light server
Full node operators that choose to enable light serving altruistically enable other users to run light clients. This is good for Ethereum because it makes it easier for a wider population of users to interact with Ethereum without using trusted intermediaries. However, there is naturally a limit to how much resource a node operator is able and willing to dedicate to serving light clients. Therefore, the command that enables light serving requires arguments that define the upper bound on resource allocation. The value given is in percent of a processing thread, for example `--light.serve 300` enables light-serving and dedicates three processing threads to it.
Recent versions of Geth (>`1.9.14`) unindex older transactions to save disk space. Indexing is required for looking up transactions in Geth's database. Therefore, unindexing limits the data that can be requested by light clients. This unindexing can be disabled by adding `--txlookuplimit 0` to make the maximum data available to light clients.
The whole command for starting Geth with a light server could look as follows:
```shell
geth --light.serve 50 --txlookuplimit 0
```
### Running a light client
Running a light client simply requires Geth to be started in light mode. It is likely that a user would also want to interact with the light node using, for example, RPC. This can be enabled using the `--http` flag.
```shell
geth --syncmode light --http --http.api "eth,debug"
```
Data can be requested from this light Geth instance in the same way as for a full node (i.e. using the [JSON-RPC-API](/docs/rpc/server) using tools such as [Curl](https://curl.se/) or Geth's [Javascript console](/docs/interface/javascript-console)). Instead of fetching the data from a local database as in a full node, the light Geth instance requests the data from full-node peers.
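As a quick check (assuming the HTTP server was enabled on the default port 8545 as in the command above), the current block number can be requested over JSON-RPC with Curl:
```shell
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545
```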
It's also possible to send transactions. However, light clients are not connected directly to Ethereum Mainnet but to a network of light servers that connect to Ethereum Mainnet. This means a transaction submitted by a light client is received first by a light server that then propagates it to full-node peers on the light-client's behalf. This reliance on honest light-servers is one of the trust compromises that comes along with running a light node instead of a full node.
### Ultra light clients
Geth has an even lighter sync mode called ultra light client (ULC). The difference between light mode and ultra-light mode is that a ULC doesn't check the PoW in block headers. There is an assumption that the ULC has access to one or more trusted light servers. This option has the greatest trust assumptions but the smallest resource requirement.
To start an ultra-light client, the enode addresses of the trusted light servers must be passed to the `--ulc.servers` flag and the sync mode set to `light`:
```sh
geth --syncmode light --ulc.servers "enode://...,enode://..." --http --http.api "eth,debug"
```
## Summary
Running a full node is the most trustless way to interact with Ethereum. However, Geth provides a low-resource "light" mode that can be run on modest computers and requires much less disk space. The trade-offs are additional trust assumptions and a small pool of light-serving peers to connect to.

@ -0,0 +1,165 @@
---
title: Connecting To The Network
sort_key: B
---
The default behaviour for Geth is to connect to Ethereum Mainnet. However, Geth can also connect to public testnets, [private networks](/docs/getting-started/private-net) and [local testnets](/docs/getting-started/dev-mode). Command line flags are provided for connecting to the popular public testnets:
- `--ropsten`, Ropsten proof-of-work test network
- `--rinkeby`, Rinkeby proof-of-authority test network
- `--goerli`, Goerli proof-of-authority test network
- `--sepolia`, Sepolia proof-of-work test network
Providing these flags at startup instructs Geth to connect to the specific public testnet instead of Ethereum Mainnet. Because these are public testnets that have been running for several years, Geth has to download the historical blockchain data from genesis, just the same as for Ethereum Mainnet.
**Note:** network selection is not persisted in the config file. To connect to a pre-defined network you must always enable it explicitly, even when using the `--config` flag to load other configuration values. For example:
```shell
# Generate desired config file. You must specify testnet here.
geth --goerli --syncmode "full" ... dumpconfig > goerli.toml
# Start geth with given config file. Here too the testnet must be specified.
geth --goerli --config goerli.toml
```
## Finding peers
Geth continuously attempts to connect to other nodes on the network until it has enough peers. If UPnP (Universal Plug and Play) is enabled at the router or Ethereum is run on an Internet-facing server, it will also accept connections from other nodes. Geth finds peers using the [discovery protocol](https://ethereum.org/en/developers/docs/networking-layer/#discovery). In the discovery protocol, nodes exchange connectivity details and then establish sessions ([RLPx](https://github.com/ethereum/devp2p/blob/master/rlpx.md)). If the nodes support compatible sub-protocols they can start exchanging Ethereum data [on the wire](https://ethereum.org/en/developers/docs/networking-layer/#wire-protocol).
A new node entering the network for the first time gets introduced to a set of peers by a bootstrap node ("bootnode") whose sole purpose is to connect new nodes to peers. The endpoints for these bootnodes are hardcoded into Geth, but they can also be specified by providing the `--bootnodes` flag along with comma-separated bootnode addresses in the form of [enodes](https://ethereum.org/en/developers/docs/networking-layer/network-addresses/#enode) on startup. For example:
```shell
geth --bootnodes enode://pubkey1@ip1:port1,enode://pubkey2@ip2:port2,enode://pubkey3@ip3:port3
```
There are scenarios where disabling the discovery process is useful, for example for running a local test node or an experimental test network with known, fixed nodes. This can be achieved by passing the `--nodiscover` flag to Geth at startup.
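For example, a throwaway local test node could be started with discovery switched off; peers can then be added manually using `admin.addPeer()` as described below:
```shell
geth --nodiscover
```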
## Connectivity problems
There are occasions when Geth simply fails to connect to peers. The common reasons for this are:
- Local time might be incorrect. An accurate clock is required to participate in the Ethereum network. The local clock can be resynchronized using commands such as `sudo ntpdate -s time.nist.gov` (this will vary depending on operating system).
- Some firewall configurations can prohibit UDP traffic. The static nodes feature or `admin.addPeer()` on the console can be used to configure connections manually.
- Running Geth in [light mode](/docs/interface/les) often leads to connectivity issues because there are few nodes running light servers. There is no easy fix for this except to switch Geth out of light mode.
- The public test network Geth is connecting to might be deprecated or have a low number of active nodes that are hard to find. In this case, the best action is to switch to an alternative test network.
## Checking Connectivity
The `net` module has two attributes that enable checking node connectivity from the [interactive Javascript console](/docs/interface/javascript-console). These are `net.listening`, which reports whether the Geth node is listening for inbound requests, and `net.peerCount`, which returns the number of active peers the node is connected to.
```javascript
> net.listening
true
> net.peerCount
4
```
Functions in the `admin` module provide more information about the connected peers, including their IP address, port number, supported protocols etc. Calling `admin.peers` returns this information for all connected peers.
```
> admin.peers
[{
ID: 'a4de274d3a159e10c2c9a68c326511236381b84c9ec52e72ad732eb0b2b1a2277938f78593cdbe734e6002bf23114d434a085d260514ab336d4acdc312db671b',
Name: 'Geth/v0.9.14/linux/go1.4.2',
Caps: 'eth/60',
RemoteAddress: '5.9.150.40:30301',
LocalAddress: '192.168.0.28:39219'
}, {
ID: 'a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c',
Name: 'Geth/v0.9.15/linux/go1.4.2',
Caps: 'eth/60',
RemoteAddress: '52.16.188.185:30303',
LocalAddress: '192.168.0.28:50995'
}, {
ID: 'f6ba1f1d9241d48138136ccf5baa6c2c8b008435a1c2bd009ca52fb8edbbc991eba36376beaee9d45f16d5dcbf2ed0bc23006c505d57ffcf70921bd94aa7a172',
Name: 'pyethapp_dd52/v0.9.13/linux2/py2.7.9',
Caps: 'eth/60, p2p/3',
RemoteAddress: '144.76.62.101:30303',
LocalAddress: '192.168.0.28:40454'
}, {
ID: 'f4642fa65af50cfdea8fa7414a5def7bb7991478b768e296f5e4a54e8b995de102e0ceae2e826f293c481b5325f89be6d207b003382e18a8ecba66fbaf6416c0',
Name: '++eth/Zeppelin/Rascal/v0.9.14/Release/Darwin/clang/int',
Caps: 'eth/60, shh/2',
RemoteAddress: '129.16.191.64:30303',
LocalAddress: '192.168.0.28:39705'
} ]
```
The `admin` module also includes functions for gathering information about the local node rather than its peers. For example, `admin.nodeInfo` returns the name and connectivity details for the local node.
```
> admin.nodeInfo
{
Name: 'Geth/v0.9.14/darwin/go1.4.2',
NodeUrl: 'enode://3414c01c19aa75a34f2dbd2f8d0898dc79d6b219ad77f8155abf1a287ce2ba60f14998a3a98c0cf14915eabfdacf914a92b27a01769de18fa2d049dbf4c17694@[::]:30303',
NodeID: '3414c01c19aa75a34f2dbd2f8d0898dc79d6b219ad77f8155abf1a287ce2ba60f14998a3a98c0cf14915eabfdacf914a92b27a01769de18fa2d049dbf4c17694',
IP: '::',
DiscPort: 30303,
TCPPort: 30303,
Td: '2044952618444',
ListenAddr: '[::]:30303'
}
```
## Custom Networks
It is often useful for developers to connect to private test networks rather than public testnets or Ethereum Mainnet. These sandbox environments allow block creation without competing against other miners, easy minting of test ether and the freedom to break things without real-world consequences. A private network is started by providing a value to `--networkid` that is not used by any other existing public network ([Chainlist](https://chainlist.org)) and creating a custom `genesis.json` file. Detailed instructions for this are available on the [Private Networks page](/docs/interface/private-network).
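As a minimal sketch (the data directory, network id and `genesis.json` used here are hypothetical examples; see the Private Networks page for full details), the chain is initialised from the genesis file and then started with the chosen network id:
```shell
# write the custom genesis block into a fresh data directory
geth init --datadir ./private-chain genesis.json
# start the node on the private network, without public peer discovery
geth --datadir ./private-chain --networkid 12345 --nodiscover
```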
## Static nodes
Geth also supports static nodes. Static nodes are specific peers that are always connected to. Geth reconnects to these peers automatically when it is restarted. Specific nodes are defined to be static nodes by saving their enode addresses to a json file which must be stored in `datadir/geth/static-nodes.json`. The content of `static-nodes.json` should be formatted as follows:
```javascript
[
"enode://f4642fa65af50cfdea8fa7414a5def7bb7991478b768e296f5e4a54e8b995de102e0ceae2e826f293c481b5325f89be6d207b003382e18a8ecba66fbaf6416c0@33.4.2.1:30303",
"enode://pubkey@ip:port"
]
```
Static nodes can also be added at runtime in the Javascript console by passing an enode address to `admin.addPeer()`:
```javascript
admin.addPeer("enode://f4642fa65af50cfdea8fa7414a5def7bb7991478b768e296f5e4a54e8b995de102e0ceae2e826f293c481b5325f89be6d207b003382e18a8ecba66fbaf6416c0@33.4.2.1:30303")
```
## Peer limit
It is sometimes desirable to cap the number of peers Geth will connect to in order to limit the computational and bandwidth cost associated with running a node. By default, the limit is 50 peers; however, this can be updated by passing a value to `--maxpeers`:
```shell
geth <otherflags> --maxpeers 15
```
## Trusted nodes
Geth supports trusted nodes that are always allowed to reconnect, even if the peer limit is reached. They can be added persistently via a config file `<datadir>/geth/trusted-nodes.json` or temporarily using the Javascript console. The format for the config file is identical to the one used for static nodes.
Nodes can be added using the `admin.addTrustedPeer()` call in the Javascript console and removed using `admin.removeTrustedPeer()` call.
```javascript
admin.addTrustedPeer("enode://f4642fa65af50cfdea8fa7414a5def7bb7991478b768e296f5e4a54e8b995de102e0ceae2e826f293c481b5325f89be6d207b003382e18a8ecba66fbaf6416c0@33.4.2.1:30303")
```
## Summary
Geth connects to Ethereum Mainnet by default. However, this behaviour can be changed using combinations of command line flags and files. This page has described the various options available for connecting a Geth node to Ethereum, public testnets and private networks.

@ -0,0 +1,140 @@
---
title: Connecting to Consensus Clients
sort_key: A3
---
Geth is an [execution client][ex-client-link]. Historically, an execution client alone has been enough to run a full Ethereum node.
However, Ethereum will soon swap its consensus mechanism from [proof-of-work][pow-link] (PoW) to
[proof-of-stake][pos-link] (PoS) in a transition known as [The Merge](/docs/interface/merge).
When that happens, Geth will not be able to track the Ethereum chain on its own. Instead, it will need to
be coupled to another piece of software called a ["consensus client"][con-client-link]. For Geth users that
intend to continue to run full nodes after The Merge, it is sensible to start running a consensus client now,
so that The Merge can happen smoothly. There are five consensus clients available, all of which connect to Geth in the same way.
This page will outline how Geth can be set up with a consensus client in advance of The Merge (or to interact with an already-merged testnet).
{% include note.html content=" It is recommended to practise connecting a consensus client to Geth on a testnet such as Sepolia or Goerli but to
wait until merge-ready releases are available before doing it on Ethereum Mainnet." %}
## Configuring Geth
Geth can be downloaded and installed according to the instructions on the
[Installing Geth](/docs/install-and-build/installing-geth) page. In order to connect to a consensus client,
Geth must expose a port for the inter-client RPC connection.
The RPC connection must be authenticated using a `jwtsecret` file. This is created and saved
to `<datadir>/geth/jwtsecret` by default but can also be created and saved to a custom location or it can be
self-generated and provided to Geth by passing the file path to `--authrpc.jwtsecret`. The `jwtsecret` file
is required by both Geth and the consensus client.
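One common way to self-generate the secret, assuming OpenSSL is available, is to write 32 random bytes as hex to a file that both clients can read:
```shell
# generate a 32 byte hex secret for the authenticated RPC connection
openssl rand -hex 32 > /tmp/jwtsecret
```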
The authorization must then be applied to a specific address/port. This is achieved by passing an address to
`--authrpc.addr` and a port number to `--authrpc.port`. It is also safe to provide either `localhost` or a wildcard
`*` to `--authrpc.vhosts` so that incoming requests from virtual hosts are accepted by Geth, because this setting
only applies to the port authenticated using `jwtsecret`.
The Merge itself will be triggered using a terminal total difficulty (TTD). The specific value for the TTD has not yet
been decided. When it is decided, Geth needs to know what it is in order to merge successfully. This will most likely be
included in a new release, so Geth will have to be stopped, updated and restarted in advance of The Merge.
A complete command to start Geth so that it can connect to a consensus client looks as follows:
```shell
geth --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret /tmp/jwtsecret
```
## Consensus clients
There are currently five consensus clients that can be run alongside Geth. These are:
[Lighthouse](https://lighthouse-book.sigmaprime.io/): written in Rust
[Lodestar](https://lodestar.chainsafe.io/): written in TypeScript
[Nimbus](https://nimbus.team/): written in Nim
[Prysm](https://docs.prylabs.network/docs/getting-started/): written in Go
[Teku](https://pegasys.tech/teku): written in Java
It is recommended to consider [client diversity][client-div-link] when choosing a consensus client. Instructions for installing each client are provided in the documentation linked in the list above.
The consensus client must be started with the right port configuration to establish an RPC connection
to the local Geth instance. In the example above, `localhost:8551` was authorized
for this purpose. The consensus clients all have a command similar to `--http-webprovider` that
takes the exposed Geth port as an argument.
The consensus client also needs the path to Geth's `jwt-secret` in order to authenticate the RPC connection between them.
Each consensus client has a command similar to `--jwt-secret` that takes the file path as an argument. This must
be consistent with the `--authrpc.jwtsecret` path provided to Geth.
The consensus clients all expose a [Beacon API][beacon-api-link] that can be used to check the status
of the Beacon client or download blocks and consensus data by sending requests using tools such as [Curl](https://curl.se).
More information on this can be found in the documentation for each consensus client.
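For example, the sync status of the consensus client can be queried from the standard Beacon API endpoint `/eth/v1/node/syncing` (the port below assumes the API is served on `localhost:5052`; the default differs between clients):
```shell
curl http://localhost:5052/eth/v1/node/syncing
```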
## Validators
After The Merge, miners are no longer responsible for securing the Ethereum blockchain. Instead, this becomes the responsibility
of validators that have staked at least 32 ETH into a deposit contract and run validator software. Each of the consensus clients
have their own validator software that is described in detail in their respective documentation. The easiest way to handle
staking and validator key generation is to use the Ethereum Foundation [Staking Launchpad][launchpad-link]. The launchpad is also
available for [Prater][prater-launchpad-link], [Ropsten][ropsten-launchpad-link] and [Kiln][kiln-launchpad-link] testnets. It is
also highly recommended to review the [Merge readiness checklist][checklist-link].
## Using Geth
After the merge, Geth will follow the head of the chain via its connection to the consensus client. However, Geth is still
the portal for users to send transactions to Ethereum. Overall, Geth will not change very much from a user-perspective.
The Geth Javascript console is still available for this purpose, and the majority of the [JSON-RPC API](/docs/rpc/server) will
remain available via web3js or HTTP requests with commands as json payloads. These options are explained in more detail on the
[Javascript Console page](/docs/interface/javascript-console). The Javascript console can be started using the following command
in a separate terminal (assuming Geth's IPC file is saved in `datadir`):
```shell
geth attach datadir/geth.ipc
```
## Testnets
Ethereum Mainnet has not yet undergone The Merge, but some public testnets have. This means that running Geth alone is no longer
enough to interact with merged testnets. This includes two testnets that were purpose built to test The Merge (Kiln, Kintsugi) and
the long-standing public PoW chain, Ropsten, as well as the relatively new testnet Sepolia. If Geth is connected to these merged networks alone it will simply stall when it syncs as far
as the merge block, awaiting information from a consensus client. Therefore, any activity on these testnets requires Geth to be
connected to a consensus client. There are many instructional articles that explain how to connect to these testnets using Geth in
combination with various consensus clients, for example:
[Connecting to Kiln using Teku](https://github.com/chrishobcroft/TestingTheMerge/blob/main/geku.md)
[Connecting to Kiln using Lighthouse](https://github.com/remyroy/ethstaker/blob/main/merge-devnet.md)
[Connecting to Kiln using Prysm](https://hackmd.io/@prysmaticlabs/B1Q2SluWq)
[Connecting to Ropsten using Lighthouse](https://github.com/remyroy/ethstaker/blob/main/merge-ropsten.md)
The Merge testing will soon progress to merging the Goerli testnet. Once this has happened Geth will require a connection
to a consensus client to work on those networks too.
## Summary
As The Merge approaches it is important for Geth users to prepare by installing and running a consensus client. Otherwise, Geth will stop
following the head of the chain immediately after The Merge. There are five consensus clients to choose from. This page provided an overview
of how to choose a consensus client and configure Geth to connect to it. This pre-emptive action will protect against disruption to users as a
result of The Merge.
[pow-link]:https://ethereum.org/en/developers/docs/consensus-mechanisms/pow
[pos-link]:https://ethereum.org/en/developers/docs/consensus-mechanisms/pos
[con-client-link]:https://ethereum.org/en/glossary/#consensus-client
[ex-client-link]:https://ethereum.org/en/glossary/#execution-client
[beacon-api-link]:https://ethereum.github.io/beacon-APIs
[engine-api-link]: https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md
[client-div-link]:https://ethereum.org/en/developers/docs/nodes-and-clients/client-diversity
[execution-clients-link]: https://ethereum.org/en/developers/docs/nodes-and-clients/client-diversity/#execution-clients
[launchpad-link]:https://launchpad.ethereum.org/
[prater-launchpad-link]:https://prater.launchpad.ethereum.org/
[kiln-launchpad-link]:https://kiln.launchpad.ethereum.org/
[ropsten-launchpad-link]:https://ropsten.launchpad.ethereum.org/
[e-org-link]: https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/
[checklist-link]:https://launchpad.ethereum.org/en/merge-readiness

@ -0,0 +1,474 @@
---
title: Getting Started with Geth
permalink: docs/getting-started
sort_key: A
---
This page explains how to set up Geth and execute some basic tasks using the command line tools. In order to use Geth, the software must first be installed. There are several ways Geth can be installed depending on the operating system and the user's choice of installation method, for example using a package manager, container or building from source. Instructions for installing Geth can be found on the ["Install and Build"](install-and-build/installing-geth) pages. The tutorial on this page assumes Geth and the associated developer tools have been installed successfully.
This page provides step-by-step instructions covering the fundamentals of using Geth. This includes generating accounts, joining an Ethereum network, syncing the blockchain and sending ether between accounts. This tutorial also uses [Clef](clef/tutorial). Clef is an account management tool external to Geth itself that allows users to sign transactions. It is developed and maintained by the Geth team and is intended to eventually replace the account management tool built in to Geth.
## Prerequisites
In order to get the most value from the tutorials on this page, the following skills are necessary:
- Experience using the command line
- Basic knowledge about Ethereum and testnets
- Basic knowledge about HTTP and JavaScript
Users that need to revisit these fundamentals can find helpful resources relating to the command line [here](https://developer.mozilla.org/en-US/docs/Learn/Tools_and_testing/Understanding_client-side_tools/Command_line), Ethereum and its testnets [here](https://ethereum.org/en/developers/tutorials/), http [here](https://developer.mozilla.org/en-US/docs/Web/HTTP) and Javascript [here](https://www.javascript.com/learn).
{% include note.html content="If Geth was installed from source on Linux, `make` saves the binaries for Geth and the associated tools in `/build/bin`. To run these programs it is convenient to move them to the top level project directory (e.g. running `mv ./build/bin/* ./`) from `/go-ethereum`. Then `./` must be prepended to the commands in the code snippets in order to execute a particular program, e.g. `./geth` instead of simply `geth`. If the executables are not moved then either navigate to the `bin` directory to run them (e.g. `cd ./build/bin` and `./geth`) or provide their path (e.g. `./build/bin/geth`). These instructions can be ignored for other installations." %}
## Background
Geth is an Ethereum client written in Go. This means running Geth turns a computer into an Ethereum node. Ethereum is a peer-to-peer network where information is shared directly between nodes rather than being managed by a central server. Nodes compete to generate new blocks of transactions to send to their peers because they are rewarded for doing so in Ethereum's native token, ether (ETH). On receiving a new block, each node checks that it is valid and adds it to its database. The sequence of discrete blocks is called a "blockchain". The information provided in each block is used by Geth to update its "state" - the ether balance of each account on Ethereum. There are two types of account: externally-owned accounts (EOAs) and contract accounts. Contract accounts execute contract code when they receive transactions. EOAs are accounts that users manage locally in order to sign and submit transactions. Each EOA is a public-private key pair, where the public key is used to derive a unique address for the user and the private key is used to protect the account and securely sign messages. Therefore, in order to use Ethereum, it is first necessary to generate an EOA (hereafter, "account"). This tutorial will guide the user through creating an account, funding it with ether and sending some to another address.
Read more about Ethereum accounts [here](https://ethereum.org/en/developers/docs/accounts/).
## Step 1: Generating accounts
There are several methods for generating accounts in Geth. This tutorial demonstrates how to generate accounts using Clef, as this is considered best practice, largely because it decouples the users' key management from Geth, making it more modular and flexible. It can also be run from secure USB sticks or virtual machines, offering security benefits. For convenience, this tutorial will execute Clef on the same computer that will also run Geth, although more secure options are available (see [here](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/docs/setup.md)).
An account is a pair of keys (public and private). Clef needs to know where to save these keys so that they can be retrieved later. This information is passed to Clef as an argument. This is achieved using the following command:
```shell
clef newaccount --keystore geth-tutorial/keystore
```
The specific function from Clef that generates new accounts is `newaccount` and it accepts a parameter, `--keystore`, that tells it where to store the newly generated keys. In this example the keystore location is a new directory that will be created automatically: `geth-tutorial/keystore`. Clef will return the following result in the terminal:
```terminal
WARNING!
Clef is an account management tool. It may, like any software, contain bugs.
Please take care to
- backup your keystore files,
- verify that the keystore(s) can be opened with your password.
Clef is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
Enter 'ok' to proceed:
>
```
This is important information. The `geth-tutorial/keystore` directory will soon contain a secret key that can be used to access any funds held in the new account. If it is compromised, the funds can be stolen. If it is lost, there is no way to retrieve the funds. This tutorial will only use dummy funds with no real world value, but when these steps are repeated on Ethereum Mainnet it is critical that the keystore is kept secure and backed up.
Typing `ok` into the terminal and pressing `enter` causes Clef to prompt for a password. Clef requires a password that is at least 10 characters long, and best practice would be to use a combination of numbers, characters and special characters. Entering a suitable password and pressing `enter` returns the following result to the terminal:
```terminal
-----------------------
DEBUG[02-10|13:46:46.436] FS scan times list="92.081µs" set="12.629µs" diff="2.129µs"
INFO [02-10|13:46:46.592] Your new key was generated address=0xCe8dBA5e4157c2B284d8853afEEea259344C1653
WARN [02-10|13:46:46.595] Please backup your key file! path=keystore:///.../geth-tutorial/keystore/UTC--2022-02-07T17-19-56.517538000Z--ca57f3b40b42fcce3c37b8d18adbca5260ca72ec
WARN [02-10|13:46:46.595] Please remember your password!
Generated account 0xCe8dBA5e4157c2B284d8853afEEea259344C1653
```
It is important to save the account address and the password somewhere secure. They will be used again later in this tutorial. Please note that the account addresses shown in the code snippets above and later in this tutorial are examples - those generated by followers of this tutorial will be different. The account generated above can be used as the main account throughout the remainder of this tutorial. However, in order to demonstrate transactions between accounts it is also necessary to have a second account. A second account can be added to the same keystore by precisely repeating the previous steps, providing the same password.
## Step 2: Start Clef
The previous commands used Clef's `newaccount` function to add new key pairs to the keystore. Clef uses the private key(s) saved in the keystore to sign transactions. In order to do this, Clef needs to be started and left running while Geth is running simultaneously, so that the two programs can communicate with one another.
To start Clef, run the Clef executable passing as arguments the keystore file location, config directory location and a chain ID. The config directory was automatically created inside the `geth-tutorial` directory during the previous step. The [chain ID](https://chainlist.org/) is an integer that defines which Ethereum network to connect to. Ethereum mainnet has chain ID 1. In this tutorial Chain ID 5 is used which is that of the Goerli testnet. It is very important that this chain ID parameter is set to 5. The following command starts Clef on Goerli:
```shell
clef --keystore geth-tutorial/keystore --configdir geth-tutorial/clef --chainid 5
```
After running the command above, Clef requests the user to type “ok” to proceed. On typing "ok" and pressing enter, Clef returns the following to the terminal:
```terminal
INFO [02-10|13:55:30.812] Using CLI as UI-channel
INFO [02-10|13:55:30.946] Loaded 4byte database embeds=146,841 locals=0 local=./4byte-custom.json
WARN [02-10|13:55:30.947] Failed to open master, rules disabled err="failed stat on geth-tutorial/clef/masterseed.json: stat geth-tutorial/clef/masterseed.json: no such file or directory"
INFO [02-10|13:55:30.947] Starting signer chainid=5 keystore=geth-tutorial/keystore light-kdf=false advanced=false
DEBUG[02-10|13:55:30.948] FS scan times list="133.35µs" set="5.692µs" diff="3.262µs"
DEBUG[02-10|13:55:30.970] Ledger support enabled
DEBUG[02-10|13:55:30.973] Trezor support enabled via HID
DEBUG[02-10|13:55:30.976] Trezor support enabled via WebUSB
INFO [02-10|13:55:30.978] Audit logs configured file=audit.log
DEBUG[02-10|13:55:30.981] IPCs registered namespaces=account
INFO [02-10|13:55:30.984] IPC endpoint opened url=geth-tutorial/clef/clef.ipc
------- Signer info -------
* intapi_version : 7.0.1
* extapi_version : 6.1.0
* extapi_http : n/a
* extapi_ipc : geth-tutorial/clef/clef.ipc
```
This result indicates that Clef is running. This terminal should be left running for the duration of this tutorial. If the tutorial is stopped and restarted later Clef must also be restarted by running the previous command.
## Step 3: Start Geth
Geth is the Ethereum client that will connect the computer to the Ethereum network. In this tutorial the network is Goerli, an Ethereum testnet. Testnets are used to test Ethereum client software and smart contracts in an environment where no real-world value is at risk. To start Geth, run the Geth executable file passing arguments that define the data directory (where Geth should save blockchain data), the signer (which points Geth to Clef), the network ID and the sync mode. For this tutorial, snap sync is recommended (see [here](https://blog.ethereum.org/2021/03/03/geth-v1-10-0/) for reasons why). The final argument passed to Geth is the `--http` flag. This enables the http-rpc server that allows external programs to interact with Geth by sending it http requests. By default the http server is only exposed locally using port 8545: `localhost:8545`.
The following command should be run in a new terminal, separate to the one running Clef:
```shell
geth --datadir geth-tutorial --signer=geth-tutorial/clef/clef.ipc --goerli --syncmode snap --http
```
Running the above command starts Geth. The terminal should rapidly fill with status updates, starting with:
```terminal
INFO [02-10|13:59:06.649] Starting Geth on goerli testnet...
INFO [02-10|13:59:06.649] Dropping default light client cache provided=1024 updated=128
INFO [02-10|13:59:06.652] Maximum peer count ETH=50 LES=0 total=50
INFO [02-10|13:59:06.655] Using external signer url=geth-tutorial/clef/clef.ipc
INFO [02-10|13:59:06.660] Set global gas cap cap=50,000,000
INFO [02-10|13:59:06.661] Allocated cache and file handles database=/.../geth-tutorial/geth/chaindata cache=64.00MiB handles=5120
INFO [02-10|13:59:06.855] Persisted trie from memory database nodes=361 size=51.17KiB time="643.54µs" gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [02-10|13:59:06.855] Initialised chain configuration config="{ChainID: 5 Homestead: 0 DAO: <nil> DAOSupport: true EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 1561651, Muir Glacier: <nil>, Berlin: 4460644, London: 5062605, Arrow Glacier: <nil>, MergeFork: <nil>, Engine: clique}"
INFO [02-10|13:59:06.862] Added trusted checkpoint block=5,799,935 hash=2de018..c32427
INFO [02-10|13:59:06.863] Loaded most recent local header number=6,340,934 hash=483cf5..858315 td=9,321,576 age=2d9h29m
INFO [02-10|13:59:06.867] Configured checkpoint oracle address=0x18CA0E045F0D772a851BC7e48357Bcaab0a0795D signers=5 threshold=2
INFO [02-10|13:59:06.867] Gasprice oracle is ignoring threshold set threshold=2
WARN [02-10|13:59:06.869] Unclean shutdown detected booted=2022-02-08T04:25:08+0100 age=2d9h33m
INFO [02-10|13:59:06.870] Starting peer-to-peer node instance=Geth/v1.10.15-stable/darwin-amd64/go1.17.5
INFO [02-10|13:59:06.995] New local node record seq=1,644,272,735,880 id=d4ffcd252d322a89 ip=127.0.0.1 udp=30303 tcp=30303
INFO [02-10|13:59:06.996] Started P2P networking self=enode://4b80ebd341b5308f7a6b61d91aa0ea31bd5fc9e0a6a5483e59fd4ea84e0646b13ecd289e31e00821ccedece0bf4b9189c474371af7393093138f546ac23ef93e@127.0.0.1:30303
INFO [02-10|13:59:06.997] IPC endpoint opened url=/.../geth-tutorial/geth.ipc
INFO [02-10|13:59:06.998] HTTP server started endpoint=127.0.0.1:8545 prefix= cors= vhosts=localhost
WARN [02-10|13:59:06.998] Light client mode is an experimental feature
WARN [02-10|13:59:06.999] Failed to open wallet url=extapi://geth-tutorial/clef/cle.. err="operation not supported on external signers"
INFO [02-10|13:59:08.793] Block synchronisation started
```
This indicates that Geth has started up and is searching for peers to connect to. Once it finds peers it can request block headers from them, starting at the genesis block for the Goerli blockchain. Geth continues to download blocks sequentially, saving the data in files in `/go-ethereum/geth-tutorial/geth/chaindata/`. This is confirmed by the logs printed to the terminal. There should be a rapidly-growing sequence of logs in the terminal with the following syntax:
```terminal
INFO [04-29][15:54:09.238] Looking for peers peercount=2 tried=0 static=0
INFO [04-29][15:54:19.393] Imported new block headers count=2 elapsed=1.127ms number=996288 hash=09f1e3..718c47 age=13h9m5s
INFO [04-29][15:54:19:656] Imported new block receipts count=698 elapsed=4.464ms number=994566 hash=56dc44..007c93 age=13h9m9s
```
These logs indicate that Geth is running as expected. Sending an empty Curl request to the http server provides a quick way to confirm that this too has been started without any issues. In a third terminal, the following command can be run:
```shell
curl http://localhost:8545
```
If there is no error message reported to the terminal, everything is OK. Geth must be running in order for a user to interact with the Ethereum network. If this terminal is closed down then Geth must be restarted in a new terminal. Geth can be started and stopped easily, but it must be running for any interaction with Ethereum to take place. To shut down Geth, simply press `CTRL+C` in the Geth terminal. To start it again, run the previous command `geth --datadir ... ..`.
{% include note.html content="Snap syncing Goerli will take some time and until the sync is finished you can't use the node to transfer funds. You can also try doing a [light sync](interface/les) which will be much quicker but depends on light servers being available to serve your node the data it needs." %}
## Step 4: Get Testnet Ether
In order to make some transactions, the user must fund their account with ether. On Ethereum mainnet, ether can only be obtained in three ways: 1) by receiving it as a reward for mining/validating; 2) receiving it in a transfer from another Ethereum user or contract; 3) receiving it from an exchange, having paid for it with fiat money. On Ethereum testnets, the ether has no real world value so it can be made freely available via faucets. Faucets allow users to request a transfer of testnet ether to their account.
The address generated by Clef in Step 1 can be pasted into the Paradigm Multifaucet faucet [here](https://fauceth.komputing.org/?chain=1115511). This requires a Twitter login as proof of personhood. The faucet adds ether to the given address on multiple testnets simultaneously, including Goerli. In the next steps Geth will be used to check that the ether has been sent to the given address and to send some of it to the second address created earlier.
## Step 5: Interact with Geth via IPC or RPC
For interacting with the blockchain, Geth provides JSON-RPC APIs. [JSON-RPC](https://ethereum.org/en/developers/docs/apis/json-rpc/) is a way to execute specific tasks by sending instructions to Geth in the form of [JSON](https://www.json.org/json-en.html) objects. RPC stands for "Remote Procedure Call" and it refers to the ability to send these JSON-encoded instructions from locations outside of those managed by Geth. It is possible to interact with Geth by sending these JSON encoded instructions directly over Geth's exposed http port using tools like Curl. However, this is somewhat user-unfriendly and error-prone, especially for more complex instructions. For this reason, there are a set of libraries built on top of JSON-RPC that provide a more user-friendly interface for interacting with Geth. One of the most widely used is Web3.js.
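For example (assuming the HTTP server is exposed on the default `localhost:8545` as in Step 3), the chain ID can be requested directly with Curl; a node connected to Goerli should report `"0x5"`:
```shell
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
  http://localhost:8545
```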
Geth provides a Javascript console that exposes the Web3.js API. This means that with Geth running in one terminal, a Javascript environment can be opened in another allowing the user to interact with Geth using Web3.js. There are two transport protocols that can be used to connect the Javascript environment to Geth:
- IPC (Inter-Process Communication): This provides unrestricted access to all APIs, but only works when the console is run on the same host as the geth node.
- HTTP: This connection method by default provides access to the `eth`, `web3` and `net` method namespaces.
This tutorial will use the HTTP option. Note that the terminals running Geth and Clef should both still be active. In a new (third) terminal, the following command can be run to start the console and connect it to Geth using the exposed http port:
```shell
geth attach http://127.0.0.1:8545
```
This command causes the terminal to hang because it is waiting for approval from Clef. Approving the request in the terminal running Clef will lead to the following welcome message being displayed in the Javascript console:
```terminal
Welcome to the Geth JavaScript console!
instance: Geth/v1.10.15-stable/darwin-amd64/go1.17.5
at block: 6354736 (Thu Feb 10 2022 14:01:46 GMT+0100 (WAT))
modules: eth:1.0 net:1.0 rpc:1.0 web3:1.0
To exit, press ctrl-d or type exit
```
The console is now active and connected to Geth. It can now be used to interact with the Ethereum (Goerli) network.
### List of accounts
In this tutorial, the accounts are managed using Clef. This means that requesting information about the accounts requires explicit approval in Clef, which should still be running in its own terminal. Earlier in this tutorial, two accounts were created using Clef. The following command will display the addresses of those two accounts and any others that might have been added to the keystore before or since.
```javascript
eth.accounts
```
The console will hang, because Clef is waiting for approval. The following message will be displayed in the Clef terminal:
```terminal
-------- List Account request--------------
A request has been made to list all accounts.
You can select which accounts the caller can see
[x] 0xca57F3b40B42FCce3c37B8D18aDBca5260ca72EC
URL: keystore:///.../geth-tutorial/keystore/UTC--2022-02-07T17-19-56.517538000Z--ca57f3b40b42fcce3c37b8d18adbca5260ca72ec
[x] 0xCe8dBA5e4157c2B284d8853afEEea259344C1653
URL: keystore:///.../geth-tutorial/keystore/UTC--2022-02-10T12-46-45.265592000Z--ce8dba5e4157c2b284d8853afeeea259344c1653
-------------------------------------------
Request context:
NA -> ipc -> NA
Additional HTTP header data, provided by the external caller:
User-Agent: ""
Origin: ""
Approve? [y/N]:
> y
```
Entering `y` approves the request from the console. In the terminal running the Javascript console, the account addresses are now displayed:
```terminal
["0xca57f3b40b42fcce3c37b8d18adbca5260ca72ec", "0xce8dba5e4157c2b284d8853afeeea259344c1653"]
```
It is also possible for this request to time out if the Clef approval took too long - in this case simply repeat the request and approval.
### Checking account balance
Having confirmed that the two addresses created earlier are indeed in the keystore and accessible through the Javascript console, it is possible to retrieve information about how much ether they own. The Goerli faucet should have sent 1 ETH to the address provided, meaning that the balance of one of the accounts should be 1 ether and the other should be 0. The following command displays the account balance in the console:
```javascript
web3.fromWei(eth.getBalance("0xca57F3b40B42FCce3c37B8D18aDBca5260ca72EC"), "ether")
```
There are actually two instructions sent in the above command. The inner one is the `getBalance` function from the `eth` namespace. This takes the account address as its only argument. By default, this returns the account balance in units of Wei. There are 10<sup>18</sup> Wei to one ether. To present the result in units of ether, `getBalance` is wrapped in the `fromWei` function from the `web3` namespace. Running this command should provide the following result (for the account that received faucet funds):
```terminal
1
```
Repeating the command for the other account should yield:
```terminal
0
```
### Send ether to another account
The command `eth.sendTransaction` can be used to send some ether from one address to another. This command takes three arguments: `from`, `to` and `value`. These define the sender and recipient addresses (as strings) and the amount of Wei to transfer. It is far less error prone to enter the transaction value in units of ether rather than Wei, so the value field can take the return value from the `toWei` function. The following command, run in the Javascript console, sends 0.1 ether from one of the accounts in the Clef keystore to the other. Note that the addresses here are examples - the user must replace the address in the `from` field with the address currently owning 1 ether, and the address in the `to` field with the address currently holding 0 ether.
```javascript
eth.sendTransaction({
from: "0xca57f3b40b42fcce3c37b8d18adbca5260ca72ec",
to: "0xce8dba5e4157c2b284d8853afeeea259344c1653",
value: web3.toWei(0.1, "ether")
})
```
Note that submitting this transaction requires approval in Clef. Clef presents a summary of the transaction request in its terminal so that the sender can review the details and ensure they are correct, then prompts for approval and the account password. If the password is entered correctly, Geth proceeds with the transaction.
```terminal
--------- Transaction request-------------
to: 0xCe8dBA5e4157c2B284d8853afEEea259344C1653
from: 0xca57F3b40B42FCce3c37B8D18aDBca5260ca72EC [chksum ok]
value: 10000000000000000 wei
gas: 0x5208 (21000)
maxFeePerGas: 2425000057 wei
maxPriorityFeePerGas: 2424999967 wei
nonce: 0x3 (3)
chainid: 0x5
Accesslist
Request context:
NA -> ipc -> NA
Additional HTTP header data, provided by the external caller:
User-Agent: ""
Origin: ""
-------------------------------------------
Approve? [y/N]:
> y
Please enter the password for account 0xca57F3b40B42FCce3c37B8D18aDBca5260ca72EC
>
```
After approving the transaction, the following confirmation screen is displayed in the Clef terminal:
```terminal
-----------------------
Transaction signed:
{
"type": "0x2",
"nonce": "0x3",
"gasPrice": null,
"maxPriorityFeePerGas": "0x908a901f",
"maxFeePerGas": "0x908a9079",
"gas": "0x5208",
"value": "0x2386f26fc10000",
"input": "0x",
"v": "0x0",
"r": "0x66e5d23ad156e04363e68b986d3a09e879f7fe6c84993cef800bc3b7ba8af072",
"s": "0x647ff82be943ea4738600c831c4a19879f212eb77e32896c05055174045da1bc",
"to": "0xce8dba5e4157c2b284d8853afeeea259344c1653",
"chainId": "0x5",
"accessList": [],
"hash": "0x99d489d0bd984915fd370b307c2d39320860950666aac3f261921113ae4f95bb"
}
```
In the Javascript console, the transaction hash is displayed. This will be used in the next section to retrieve the transaction details.
```terminal
"0x99d489d0bd984915fd370b307c2d39320860950666aac3f261921113ae4f95bb"
```
It is also advised to check the account balances using Geth by repeating the instructions from earlier. At this point in the tutorial, the two accounts in the Clef keystore should have balances just below 0.9 ether (because 0.1 ether has been transferred out and some small amount paid in transaction gas) and 0.1 ether.
### Checking the transaction hash
The transaction hash is a unique identifier for this specific transaction that can be used later to retrieve the transaction details. For example, the transaction details can be viewed by pasting this hash into the [Goerli block explorer](https://goerli.etherscan.io/). The same information can also be retrieved directly from the Geth node. The hash returned in the previous step can be provided as an argument to `eth.getTransaction` to return the transaction information:
```javascript
eth.getTransaction("0x99d489d0bd984915fd370b307c2d39320860950666aac3f261921113ae4f95bb")
```
This returns the following response (although the actual values for each field will vary because they are specific to each transaction):
```terminal
{
accessList: [],
blockHash: "0x1c5d3f8dd997b302935391b57dc3e4fffd1fa2088ef2836d51f844f993eb39c4",
blockNumber: 6355150,
chainId: "0x5",
from: "0xca57f3b40b42fcce3c37b8d18adbca5260ca72ec",
gas: 21000,
gasPrice: 2425000023,
hash: "0x99d489d0bd984915fd370b307c2d39320860950666aac3f261921113ae4f95bb",
input: "0x",
maxFeePerGas: 2425000057,
maxPriorityFeePerGas: 2424999967,
nonce: 3,
r: "0x66e5d23ad156e04363e68b986d3a09e879f7fe6c84993cef800bc3b7ba8af072",
s: "0x647ff82be943ea4738600c831c4a19879f212eb77e32896c05055174045da1bc",
to: "0xce8dba5e4157c2b284d8853afeeea259344c1653",
transactionIndex: 630,
type: "0x2",
v: "0x0",
value: 10000000000000000
}
```
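Once the transaction has been included in a block, the corresponding receipt can also be queried from the console with `eth.getTransactionReceipt`, reusing the same hash. For example:
```javascript
var receipt = eth.getTransactionReceipt("0x99d489d0bd984915fd370b307c2d39320860950666aac3f261921113ae4f95bb");
receipt.blockNumber; // block in which the transaction was included
receipt.gasUsed;     // gas actually consumed (21000 for a simple transfer)
```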
## Using Curl
Up to this point this tutorial has interacted with Geth using the convenience library Web3.js. This library enables the user to send instructions to Geth using a more user-friendly interface compared to sending raw JSON objects. However, it is also possible for the user to send these JSON objects directly to Geth's exposed HTTP port. Curl is a command line tool that sends HTTP requests. This part of the tutorial demonstrates how to check account balances and send a transaction using Curl.
### Checking account balance
The command below returns the balance of the given account. This is an HTTP POST request to the local port 8545. The `-H` flag is for header information. It is used here to define the format of the incoming payload, which is JSON. The `--data` flag defines the content of the payload, which is a JSON object. That JSON object contains four fields: `jsonrpc` defines the spec version for the JSON-RPC API, `method` is the specific function being invoked, `params` are the function arguments, and `id` is an identifier used to match each response to its request. The two arguments passed to `eth_getBalance` are the account address whose balance to check and the block to query (here `latest` is used to check the balance in the most recently mined block).
```shell
curl -X POST http://127.0.0.1:8545 \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0", "method":"eth_getBalance", "params":["0xca57f3b40b42fcce3c37b8d18adbca5260ca72ec","latest"], "id":1}'
```
A successful call will return a response like the one below:
```terminal
{"jsonrpc":"2.0","id":1,"result":"0xc7d54951f87f7c0"}
```
The balance is in the `result` field in the returned JSON object. However, it is denominated in Wei and presented as a hexadecimal string. There are many options for converting this value to a decimal in units of ether, for example by opening a Python console and running:
```python
0xc7d54951f87f7c0 / 1e18
```
This returns the balance in ether:
```terminal
0.8999684999998321
```
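The same conversion can also be done in Javascript, for example in the Geth console or Node, since `Number` accepts hexadecimal strings (double precision is sufficient for a quick check):
```javascript
// Convert the hex Wei balance returned by eth_getBalance into ether.
Number("0xc7d54951f87f7c0") / 1e18; // approximately 0.89997 ether
```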
### Checking the account list
The curl command below returns the list of all accounts.
```shell
curl -X POST http://127.0.0.1:8545 \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0", "method":"eth_accounts","params":[], "id":1}'
```
This requires approval in Clef. Once approved, the following information is returned to the terminal:
```terminal
{"jsonrpc":"2.0","id":1,"result":["0xca57f3b40b42fcce3c37b8d18adbca5260ca72ec"]}
```
### Sending Transactions
Sending a transaction between accounts can also be achieved using Curl. Notice that the value of the transaction is a hexadecimal string in units of Wei. To transfer 0.1 ether, it is first necessary to convert this to Wei by multiplying by 10<sup>18</sup> then converting to hex. 0.1 ether is `"0x16345785d8a0000"` in hex. As before, update the `to` and `from` fields with the addresses in the Clef keystore.
```shell
curl -X POST http://127.0.0.1:8545 \
-H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0", "method":"eth_sendTransaction", "params":[{"from": "0xca57f3b40b42fcce3c37b8d18adbca5260ca72ec","to": "0xce8dba5e4157c2b284d8853afeeea259344c1653","value": "0x16345785d8a0000"}], "id":1}'
```
This requires approval in Clef. Once the password for the sender account has been provided, Clef will return a summary of the transaction details and the terminal that made the Curl request will display a response containing the transaction hash.
```terminal
{"jsonrpc":"2.0","id":5,"result":"0xac8b347d70a82805edb85fc136fc2c4e77d31677c2f9e4e7950e0342f0dc7e7c"}
```
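For reference, the hexadecimal Wei value used above can be reproduced using the web3 utilities bundled in the Geth console (a quick sketch; any big-number tool gives the same result):
```javascript
var wei = web3.toWei(0.1, "ether");        // "100000000000000000"
"0x" + web3.toBigNumber(wei).toString(16); // "0x16345785d8a0000"
```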
## Summary
This tutorial has demonstrated how to generate accounts using Clef, fund them with testnet ether and use those accounts to interact with Ethereum (Goerli) through a Geth node. Checking account balances, sending transactions and retrieving transaction details were explained using the web3.js library via the Geth console and using the JSON-RPC directly using Curl.

@ -0,0 +1,65 @@
---
title: Backup & Restore
sort_key: C
---
Most important info first: **REMEMBER YOUR PASSWORD** and **BACKUP YOUR KEYSTORE**.
## Data Directory
Everything `geth` persists gets written inside its data directory. The default data
directory locations are platform specific:
* Mac: `~/Library/Ethereum`
* Linux: `~/.ethereum`
* Windows: `%LOCALAPPDATA%\Ethereum`
Accounts are stored in the `keystore` subdirectory. The contents of this directory
should be transportable between nodes, platforms and implementations (C++, Go, Python).
To configure the location of the data directory, the `--datadir` parameter can be
specified. See [CLI Options](../interface/command-line-options) for more details.
Note the [ethash dag](../interface/mining) is stored at `~/.ethash` (Mac/Linux) or
`%APPDATA%\Ethash` (Windows) so that it can be reused by all clients. You can store this
in a different location by using a symbolic link.
## Cleanup
Geth's blockchain and state databases can be removed with:
```
geth removedb
```
This is useful for deleting an old chain and syncing to a new one. It only affects data
directories that can be re-created on synchronisation and does not touch the keystore.
## Blockchain Import/Export
Export the blockchain in binary format with:
```
geth export <filename>
```
Or if you want to back up portions of the chain over time, a first and last block can be
specified. For example, to back up the first epoch:
```
geth export <filename> 0 29999
```
Note that when backing up a partial chain, the file will be appended rather than
truncated.
Import binary-format blockchain exports with:
```
geth import <filename>
```
_See https://eth.wiki/en/howto/blockchain-import-and-export-instructions for more info_
And finally: **REMEMBER YOUR PASSWORD** and **BACKUP YOUR KEYSTORE**

@ -0,0 +1,356 @@
---
title: Installing Geth
sort_key: A
---
There are several ways to install Geth, including via a package manager, downloading a pre-built bundle, running as a docker container or building from downloaded source code. On this page the various installation options are explained for several major operating systems. Users prioritizing ease of installation should choose to use a package manager or prebuilt bundle. Users prioritizing customization should build from source. It is important to run the latest version of Geth because each release includes bugfixes and improvements over the previous versions. The stable releases are recommended for most users because they have been fully tested. A list of stable releases can be found [here][geth-releases]. Instructions for updating existing Geth installations are also provided in each section.
{:toc}
- this will be removed by the toc
## Package managers
### MacOS via Homebrew
The easiest way to install go-ethereum is to use the Geth Homebrew tap. The first step is to check that Homebrew is installed. The following command should return a version number.
```shell
brew -v
```
If a version number is returned, then Homebrew is installed. If not, Homebrew can be installed by following the instructions [here][brew]. With Homebrew installed, the following commands add the Geth tap and install Geth:
```shell
brew tap ethereum/ethereum
brew install ethereum
```
The previous command installs the latest stable release. Developers that wish to install the most up-to-date version can install the Geth repository's master branch by adding the `--devel` parameter to the install command:
```shell
brew install ethereum --devel
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/local/bin/`. The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation to the latest version can be achieved by stopping the node and running the following commands:
```shell
brew update
brew upgrade
brew reinstall ethereum
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### Ubuntu via PPAs
The easiest way to install Geth on Ubuntu-based distributions is with the built-in launchpad PPAs (Personal Package Archives). A single PPA repository is provided, containing stable and development releases for Ubuntu versions `xenial`, `trusty`, `impish`, `focal`, `bionic`.
The following command enables the launchpad repository:
```shell
sudo add-apt-repository -y ppa:ethereum/ethereum
```
Then, to install the stable version of go-ethereum:
```shell
sudo apt-get update
sudo apt-get install ethereum
```
Or, alternatively the develop version:
```shell
sudo apt-get update
sudo apt-get install ethereum-unstable
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/local/bin/`. The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation to the latest version can be achieved by stopping the node and running the following commands:
```shell
sudo apt-get update
sudo apt-get install ethereum
sudo apt-get upgrade geth
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### Windows
The easiest way to install Geth is to download a pre-compiled binary from the [downloads][geth-dl] page. The page provides an installer as well as a zip file containing the Geth source code. The install wizard offers the user the option to install Geth, or Geth and the developer tools. The installer adds `geth` to the system's `PATH` automatically. The zip file contains the command `.exe` files that can be run from the command prompt. The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation can be achieved by stopping the node, downloading and installing the latest version following the instructions above. When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### FreeBSD via pkg
Geth can be installed on FreeBSD using the package manager `pkg`. The following command downloads and installs Geth:
```shell
pkg install go-ethereum
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`.
The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation to the latest version can be achieved by stopping the node and running the following commands:
```shell
pkg upgrade
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### FreeBSD via ports
Installing Geth using ports simply requires navigating to the `net-p2p/go-ethereum` ports directory and running `make install` as root:
```shell
cd /usr/ports/net-p2p/go-ethereum
make install
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/local/bin/`.
The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation can be achieved by stopping the node and running the following command:
```shell
portsnap fetch
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
### Arch Linux via pacman
The Geth package is available from the [community repo][geth-archlinux]. It can be installed by running:
```shell
pacman -S geth
```
These commands install the core Geth software and the following developer tools: `clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`. The binaries for each of these tools are saved in `/usr/bin/`.
The full list of command line options can be viewed [here][geth-cl-options] or in the terminal by running `geth --help`.
Updating an existing Geth installation can be achieved by stopping the node and running the following command:
```shell
sudo pacman -Sy
```
When the node is started again, Geth will automatically use all the data from the previous version and sync the blocks that were missed while the node was offline.
## Standalone bundle
Stable releases and development builds are provided as standalone bundles. These are useful for users who: a) wish to install a specific version of Geth (e.g., for reproducible environments); b) wish to install on machines without internet access (e.g. air-gapped computers); or c) wish to avoid automatic updates and instead prefer to manually install software.
The following standalone bundles are available:
- 32bit, 64bit, ARMv5, ARMv6, ARMv7 and ARM64 archives (`.tar.gz`) on Linux
- 64bit archives (`.tar.gz`) on macOS
- 32bit and 64bit archives (`.zip`) and installers (`.exe`) on Windows
Some archives contain only Geth, while others contain Geth and the various developer tools (`clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`). More information about these executables is available at the [`README`][geth-readme-exe].
The standalone bundles can be downloaded from the [Geth Downloads][geth-dl] page. To update an existing installation, download and manually install the latest version.
## Docker container
A Docker image with recent snapshot builds from our `develop` branch is maintained on DockerHub to support users who prefer to run containerized processes. There are four different Docker images available for running the latest stable or development versions of Geth.
- `ethereum/client-go:latest` is the latest development version of Geth (default)
- `ethereum/client-go:stable` is the latest stable version of Geth
- `ethereum/client-go:{version}` is the stable version of Geth at a specific version number
- `ethereum/client-go:release-{version}` is the latest stable version of Geth at a specific version family
Pulling an image and starting a node is achieved by running these commands:
```shell
docker pull ethereum/client-go
docker run -it -p 30303:30303 ethereum/client-go
```
There are also four different Docker images for running the latest stable or development versions of miscellaneous Ethereum tools.
- `ethereum/client-go:alltools-latest` is the latest development version of the Ethereum tools
- `ethereum/client-go:alltools-stable` is the latest stable version of the Ethereum tools
- `ethereum/client-go:alltools-{version}` is the stable version of the Ethereum tools at a specific version number
- `ethereum/client-go:alltools-release-{version}` is the latest stable version of the Ethereum tools at a specific version family
The image has the following ports automatically exposed:
- `8545` TCP, used by the HTTP based JSON RPC API
- `8546` TCP, used by the WebSocket based JSON RPC API
- `8547` TCP, used by the GraphQL API
- `30303` TCP and UDP, used by the P2P protocol running the network
**Note:** if you are running an Ethereum client inside a Docker container, you should mount a data volume as the client's data directory (located at `/root/.ethereum` inside the container) to ensure that downloaded data is preserved between restarts and/or container life-cycles.
Updating Geth to the latest version simply requires stopping the container, pulling the latest version from Docker and running it:
```shell
docker stop ethereum/client-go
docker pull ethereum/client-go:latest
docker run -it -p 30303:30303 ethereum/client-go
```
## Build from source code
### Most Linux systems and macOS
Geth is written in [Go][go], so building from source code requires the most recent version of Go to be installed. Instructions for installing Go are available at the [Go installation page][go-install] and necessary bundles can be downloaded from the [Go download page][go-dl].
With Go installed, Geth can be downloaded into a `GOPATH` workspace via:
```shell
go get -d github.com/ethereum/go-ethereum
```
You can also install specific versions via:
```shell
go get -d github.com/ethereum/go-ethereum@v1.9.21
```
The above commands do not build any executables. To do that you can either build one specifically:
```shell
go install github.com/ethereum/go-ethereum/cmd/geth
```
Alternatively, the following command, run in the project root directory (`ethereum/go-ethereum`) in the Go workspace, builds the entire project and installs Geth and all the developer tools:
```shell
go install ./...
```
For macOS users, errors related to macOS header files are usually fixed by installing XCode Command Line Tools with `xcode-select --install`.
Another common error is: `go: cannot use path@version syntax in GOPATH mode`. This and other similar errors can often be fixed by enabling Go modules using `export GO111MODULE=on`.
Updating an existing Geth installation can be achieved using `go get`:
```shell
go get -u github.com/ethereum/go-ethereum
```
### Windows
The Chocolatey package manager provides an easy way to install the required build tools. Chocolatey can be installed by following these [instructions][chocolatey]. Then, to install the build tools, the following commands can be run in an Administrator command prompt:
```
C:\Windows\system32> choco install git
C:\Windows\system32> choco install golang
C:\Windows\system32> choco install mingw
```
Installing these packages sets up the path environment variables. To pick up the new path, a new command prompt must be opened. To install Geth, a Go workspace directory must first be created, then the Geth source code can be cloned and built.
```
C:\Users\xxx> mkdir src\github.com\ethereum
C:\Users\xxx> git clone https://github.com/ethereum/go-ethereum src\github.com\ethereum\go-ethereum
C:\Users\xxx> cd src\github.com\ethereum\go-ethereum
C:\Users\xxx\src\github.com\ethereum\go-ethereum> go get -u -v golang.org/x/net/context
C:\Users\xxx\src\github.com\ethereum\go-ethereum> go install -v ./cmd/...
```
### FreeBSD
To build Geth from source code on FreeBSD, the Geth Github repository can be cloned into a local directory.
```shell
git clone https://github.com/ethereum/go-ethereum
```
Then, the Go compiler needs to be installed:
```shell
pkg install go
```
If the Go version currently installed is >= 1.5, Geth can be built using the following command:
```shell
cd go-ethereum
make geth
```
If the installed Go version is < 1.5 (quarterly packages, for example), the following command can be used instead:
```shell
cd go-ethereum
CC=clang make geth
```
To start the node, the following command can be run:
```shell
build/bin/geth
```
### Building without a Go workflow
Geth can also be built without using Go workspaces. In this case, the repository should be cloned into a local directory. Then, the command
`make geth` configures everything for a temporary build and cleans up afterwards. This method of building only works on UNIX-like operating systems, and a Go installation is still required.
```shell
git clone https://github.com/ethereum/go-ethereum.git
cd go-ethereum
make geth
```
These commands create a Geth executable file in the `go-ethereum/build/bin` folder that can be moved and run from another directory if required. The binary is standalone and doesn't require any additional files.
To update an existing Geth installation, simply stop the node, navigate to the project root directory, pull the latest version from the Geth Github repository, then rebuild and restart the node.
```shell
cd go-ethereum
git pull
make geth
```
Additionally all the developer tools provided with Geth (`clef`, `devp2p`, `abigen`, `bootnode`, `evm`, `rlpdump` and `puppeth`) can be compiled by running `make all`. More information about these tools can be found [here][geth-readme-exe].
Instructions for cross-compiling to another architecture are available in the [cross-compilation guide](./cross-compile).
To build a stable release, e.g. v1.9.21, the command `git checkout v1.9.21` retrieves that specific version. Executing that command before running `make geth` switches Geth to a stable branch.
[brew]: https://brew.sh/
[go]: https://golang.org/
[go-dl]: https://golang.org/dl/
[go-install]: https://golang.org/doc/install
[chocolatey]: https://chocolatey.org
[geth-releases]: https://github.com/ethereum/go-ethereum/releases
[geth-readme-exe]: https://github.com/ethereum/go-ethereum#executables
[geth-cl-options]: https://geth.ethereum.org/docs/interface/command-line-options
[geth-archlinux]: https://www.archlinux.org/packages/community/x86_64/geth/
[geth-dl]: ../../downloads/

@ -0,0 +1,167 @@
---
title: Cross-Compiling Geth
sort_key: C
---
**Note: All of these and much more have been merged into the project Makefile. You can
cross build via `make geth-<os>-<platform>` without needing to know any of these details
from below.**
Developers usually have a preferred platform that they feel most comfortable working in,
with all the necessary tools, libraries and environments set up for an optimal workflow.
However, there is often a need to build for either a different CPU architecture or an
entirely different operating system; maintaining a development environment for each
and switching between them quickly becomes unwieldy.
Here we present a very simple way to cross compile Ethereum to various operating systems
and architectures using a minimal set of prerequisites and a completely containerized
approach, guaranteeing that your development environment remains clean even after the
complex requirements and mechanisms of a cross compilation.
The currently supported target platforms are:
- ARMv7 Android and iOS
- 32 bit, 64 bit and ARMv5 Linux
- 32 bit and 64 bit Mac OSX
- 32 bit and 64 bit Windows
Please note that cross compilation does not replace a release build. Although the resulting
binaries can usually run perfectly on the desired platform, compiling on a native system
with the specialized tools provided by the official vendor can often result in more
finely optimized code.
## Cross compilation environment
Although the `go-ethereum` project is written in Go, it does include a bit of C code
shared between all implementations to ensure that all perform equally well, including a
dependency on the GNU Multiple Precision Arithmetic Library. Because of this, Go cannot
by itself compile to a different platform than the host. To overcome this limitation, we
will use [`xgo`](https://github.com/karalabe/xgo), a Go cross compiler package based on
Docker containers that has been architected specifically to allow both embedded C snippets
as well as simpler external C dependencies during compilation.
The `xgo` project has two simple dependencies: Docker (to ensure that the build
environment is completely contained) and Go. On most platforms these should be available
from the official package repositories. For manually installing them, please consult their
install guides at [Docker](https://docs.docker.com/installation/) and
[Go](https://golang.org/doc/install) respectively. This guide assumes that these two
dependencies are met.
To install and/or update xgo, simply type:
$ go get -u github.com/karalabe/xgo
You can test whether `xgo` is functioning correctly by requesting it to cross
compile itself and verifying that all cross compilations succeed.
$ xgo github.com/karalabe/xgo
...
$ ls -al
-rwxr-xr-x 1 root root 2792436 Sep 14 16:45 xgo-android-21-arm
-rwxr-xr-x 1 root root 2353212 Sep 14 16:45 xgo-darwin-386
-rwxr-xr-x 1 root root 2906128 Sep 14 16:45 xgo-darwin-amd64
-rwxr-xr-x 1 root root 2388288 Sep 14 16:45 xgo-linux-386
-rwxr-xr-x 1 root root 2960560 Sep 14 16:45 xgo-linux-amd64
-rwxr-xr-x 1 root root 2437864 Sep 14 16:45 xgo-linux-arm
-rwxr-xr-x 1 root root 2551808 Sep 14 16:45 xgo-windows-386.exe
-rwxr-xr-x 1 root root 3130368 Sep 14 16:45 xgo-windows-amd64.exe
## Building Ethereum
Cross compiling Ethereum is analogous to the above example, but an additional flag is
required to satisfy the dependencies:
- `--deps` is used to inject arbitrary C dependency packages and pre-build them
Injecting the GNU Arithmetic Library dependency and selecting `geth` would be:
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
github.com/ethereum/go-ethereum/cmd/geth
...
$ ls -al
-rwxr-xr-x 1 root root 23213372 Sep 14 17:59 geth-android-21-arm
-rwxr-xr-x 1 root root 14373980 Sep 14 17:59 geth-darwin-386
-rwxr-xr-x 1 root root 17373676 Sep 14 17:59 geth-darwin-amd64
-rwxr-xr-x 1 root root 21098910 Sep 14 17:59 geth-linux-386
-rwxr-xr-x 1 root root 25049693 Sep 14 17:59 geth-linux-amd64
-rwxr-xr-x 1 root root 20578535 Sep 14 17:59 geth-linux-arm
-rwxr-xr-x 1 root root 16351260 Sep 14 17:59 geth-windows-386.exe
-rwxr-xr-x 1 root root 19418071 Sep 14 17:59 geth-windows-amd64.exe
As the cross compiler needs to build all the dependencies as well as the main project
itself for each platform, it may take a while for the build to complete (approximately 3-4
minutes on a Core i7 3770K machine).
### Fine tuning the build
By default Go, and inherently `xgo`, checks out and tries to build the master branch of a
source repository. However, more often than not, you'll probably want to build a different
branch from possibly an entirely different remote repository. These can be controlled via
the `--remote` and `--branch` flags.
To build the `develop` branch of the official `go-ethereum` repository instead of the
default `master` branch, you just need to specify it as an additional command line flag
(`--branch`):
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
--branch=develop \
github.com/ethereum/go-ethereum/cmd/geth
Additionally, during development you will most probably want to not only build a custom
branch, but also one originating from your own fork of the repository instead of the
upstream one. This can be done via the `--remote` flag:
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
--remote=https://github.com/karalabe/go-ethereum \
--branch=rpi-staging \
github.com/ethereum/go-ethereum/cmd/geth
By default `xgo` builds binaries for all supported platforms and architectures, with
Android binaries defaulting to the highest released Android NDK platform. To limit the
build targets or compile to a different Android platform, use the `--targets` CLI
parameter.
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
--targets=android-16/arm,windows/* \
github.com/ethereum/go-ethereum/cmd/geth
### Building locally
If you would like to cross compile your local development version, simply specify a local
path (starting with `.` or `/`), and `xgo` will use all local code from `GOPATH`, only
downloading missing dependencies. In that case, of course, the `--branch`, `--remote` and
`--pkg` arguments are no-ops:
$ xgo --deps=https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2 \
./cmd/geth
## Using the Makefile
Having understood the gist of `xgo` based cross compilation, you do not need to actually
memorize and maintain these commands, as they have been incorporated into the official
[Makefile](https://github.com/ethereum/go-ethereum/blob/master/Makefile) and can be
invoked with a trivial `make` request:
* `make geth-cross`: Cross compiles to every supported OS and architecture
* `make geth-<os>`: Cross compiles supported architectures of a particular OS (e.g. `linux`)
* `make geth-<os>-<arch>`: Cross compiles to a specific OS/architecture (e.g. `linux`, `arm`)
We advise using the `make`-based commands as opposed to manually invoking `xgo`, as the
Makefile is actively maintained, whereas this document may not always be updated to
reflect the latest changes.
### Tuning the cross builds
A few of the `xgo` build options have also been surfaced directly into the Makefile to
allow fine tuning builds to work around either upstream Go issues, or to enable some
fancier mechanics.
- `make ... GO=<go>`: Use a specific Go runtime (e.g. `1.5.1`, `1.5-develop`, `develop`)
- `make ... MODE=<mode>`: Build a specific target type (e.g. `exe`, `c-archive`).
Please note that these are not yet fully finalized, so they may or may not change in the
future as our code and the Go runtime features change.

@ -0,0 +1,147 @@
---
title: JavaScript Console
sort_key: D
---
Geth responds to instructions encoded as JSON objects as defined in the [JSON-RPC-API](/docs/rpc/server). A Geth user can send these instructions directly, for example over HTTP using tools like [Curl](https://github.com/curl/curl). The code snippet below shows a request for an account balance sent to a local Geth node with the HTTP port `8545` exposed.
```
curl --data '{"jsonrpc":"2.0","method":"eth_getBalance", "params": ["0x9b1d35635cc34752ca54713bb99d38614f63c955", "latest"], "id":2}' -H "Content-Type: application/json" localhost:8545
```
This returns a result which is also a JSON object, with values expressed as hexadecimal strings, for example:
```terminal
{"id":2,"jsonrpc":"2.0","result":"0x1639e49bba16280000"}
```
While this approach is valid, it is also a very low level and rather error-prone way to interact with Geth. Most developers prefer to use convenience libraries that abstract away some of the more tedious and awkward tasks such as converting values from hexadecimal strings into numbers, or converting between denominations of ether (Wei, Gwei, etc). One such library is [Web3.js](https://web3js.readthedocs.io/en/v1.7.3/). This is a collection of Javascript libraries for interacting with an Ethereum node at a higher level than sending raw JSON objects to the node. The purpose of Geth's Javascript console is to provide a built-in environment to use a subset of the Web3.js libraries to interact with a Geth node.
{% include note.html content="The web3.js version that comes bundled with Geth is not up to date with the official Web3.js documentation. There are several Web3.js libraries that are not available in the Geth Javascript Console. There are also administrative APIs included in the Geth console that are not documented in the Web3.js documentation. The full list of libraries available in the Geth console is available on the [JSON-RPC API page](/docs/rpc/server)." %}
## Starting the console
There are two ways to start an interactive session using the Geth console. The first is to provide the `console` command when Geth is started up. This starts the node and runs the console in the same terminal. It is therefore convenient to suppress the logs from the node to prevent them from obscuring the console. If the logs are not needed, they can be redirected to `/dev/null`, effectively muting them. Alternatively, if the logs are required they can be redirected to a text file. The level of detail provided in the logs can be adjusted by providing a value between 1-6 to the `--verbosity` flag as in the example below:
```shell
# to mute logs
geth <other flags> console 2> /dev/null
# to save logs to file
geth <other flags> console --verbosity 3 2> geth-logs.log
```
Alternatively, a Javascript console can be attached to an existing Geth instance (i.e. one that is running in another terminal or remotely). In this case, `geth attach` can be used to open a Javascript console connected to the Geth node. It is also necessary to define the method used to connect the console to the node. Geth supports websockets, HTTP or local IPC. To use HTTP or Websockets, these must be enabled at the node by providing the following flags at startup:
```shell
# enable websockets
geth <other flags> --ws
# enable http
geth <other flags> --http
```
The commands above use the default HTTP/WS endpoints and only enable the default JSON-RPC libraries. To update the Websockets or HTTP endpoints used, or to add support for additional libraries, the `.addr`, `.port` and `.api` flags can be used as follows:
```shell
# define a custom http address, custom http port and enable libraries
geth <other commands> --http --http.addr 192.60.52.21 --http.port 8552 --http.api eth,web3,admin
# define a custom Websockets address and enable libraries
geth <other commands> --ws --ws.addr 192.60.52.21 --ws.port 8552 --ws.api eth,web3,admin
```
It is important to note that by default **some functionality, including account unlocking, is forbidden when HTTP or Websockets access is enabled**. This is because an attacker that manages to access the node via the externally-exposed HTTP/WS port can then control the unlocked account. It is possible to force account unlock by including the `--allow-insecure-unlock` flag but this is not recommended if there is any chance of the node connecting to Ethereum Mainnet. This is not a hypothetical risk: **there are bots that continually scan for http-enabled Ethereum nodes to attack**.
The Javascript console can also be connected to a Geth node using IPC. When Geth is started, a `geth.ipc` file is automatically generated and saved to the data directory. This file, or a custom path to a specific ipc file can be passed to `geth attach` as follows:
```shell
geth attach datadir/geth.ipc
```
Once started, the console looks like this:
```terminal
Welcome to the Geth Javascript console!
instance: Geth/v1.10.18-unstable-8d85a701-20220503/linux-amd64/go1.18.1
coinbase: 0x281aabb85c68e1638bb092750a0d9bb06ba103ee
at block: 12305815 (Thu May 26 2022 16:16:00 GMT+0100 (BST))
datadir: /home/go-ethereum/data
modules: admin:1.0 debug:1.0 eth:1.0 ethash:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
To exit, press ctrl-d or type exit
>
```
## Interactive use
Once the console has been started, it can be used to interact with Geth. The console supports Javascript and the full Geth [JSON-RPC API](/docs/rpc/server). For example, to create an account:
```js
personal.newAccount()
```
To check the balance of the first account already existing in the keystore:
```js
eth.getBalance(personal.listAccounts[0])
```
To make a transaction (without global account unlocking):
```js
personal.sendTransaction({from: eth.accounts[0], to: eth.accounts[1], value: web3.toWei(0.5, "ether")})
```
It is also possible to load pre-written Javascript files into the console by passing the `--preload` flag
when starting the console. This is useful for setting up complex contract objects or loading frequently-used
functions.
```shell
geth console --preload "/my/scripts/folder/utils.js"
```
Once the interactive session is over, the console can be closed down by typing `exit` or `CTRL-D`.
## Non-interactive Use: Script Mode
It is also possible to execute JavaScript code non-interactively by passing a Javascript statement to the `--exec` flag of
`geth attach` or `geth console`. The result is displayed directly in the terminal rather than in an interactive Javascript console.
For example, to display the accounts in the keystore and then the current block number:
```shell
geth attach --exec eth.accounts
```
```shell
geth attach --exec eth.blockNumber
```
The same syntax can be used to execute a local script file with more complex statements on a remote node over http, for example:
```shell
geth attach http://geth.example.org:8545 --exec 'loadScript("/tmp/checkbalances.js")'
geth attach http://geth.example.org:8545 --jspath "/tmp" --exec 'loadScript("checkbalances.js")'
```
The `--jspath` flag is used to set a library directory for the Javascript scripts. Any parameters passed to `loadScript()`
that do not explicitly define an absolute path will be interpreted relative to the `jspath` directory.
## Timers
In addition to the full functionality of JS (as per ECMA5), the Ethereum Javascript Runtime Environment (JSRE) is augmented with various timers. It implements `setInterval`, `clearInterval`, `setTimeout`, `clearTimeout` which some users will be familiar with from browser windows. It also provides implementation for `admin.sleep(seconds)` and a block based timer, `admin.sleepBlocks(n)` which sleeps till the number of new blocks added is equal to or greater than `n`.
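As a brief illustration, the following statements can be typed into the console (the callback message is arbitrary):
```javascript
// Schedule a callback one second from now.
setTimeout(function () { console.log("one second has passed"); }, 1000);
// Pause the console for five seconds.
admin.sleep(5);
// Pause the console until two new blocks have been imported.
admin.sleepBlocks(2);
```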
## Caveats
Geth's console is built using the [GoJa JS Virtual Machine](https://github.com/dop251/goja) which is compatible with ECMAScript 5.1. This does not support promises or `async` functions. Web3js depends upon the `bignumber.js` library. This is auto-loaded into the console.

@ -0,0 +1,38 @@
---
title: Batch requests
sort_key: C
---
The JSON-RPC [specification](https://www.jsonrpc.org/specification#batch) outlines how clients can send multiple requests at the same time by filling the request objects in an array. This feature is implemented by Geth's API and can be used to cut network delays. Batching offers visible speed-ups, especially when used for fetching larger amounts of mostly independent data objects. Below is an example for fetching a list of blocks in JS:
```javascript
import fetch from 'node-fetch'
async function main() {
const endpoint = 'http://127.0.0.1:8545'
const from = parseInt(process.argv[2])
const to = parseInt(process.argv[3])
const reqs = []
for (let i = from; i < to; i++) {
reqs.push({
method: 'eth_getBlockByNumber',
params: [`0x${i.toString(16)}`, false],
id: i-from,
jsonrpc: '2.0',
})
}
const res = await fetch(endpoint, {method: 'POST', body: JSON.stringify(reqs), headers: {'Content-Type': 'application/json'}})
const data = await res.json()
}
main().then().catch((err) => console.log(err))
```
In this case there's no dependency between the requests. Often the retrieved data from one request is needed to issue a second one. Let's take the example of fetching all the receipts for a range of blocks. The JSON-RPC API provides `eth_getTransactionReceipt` which takes in a transaction hash and returns the corresponding receipt object, but no method to fetch receipt objects for a whole block. We need to get the list of transactions in a block, and then call `eth_getTransactionReceipt` for each of them. We can break this into 2 batch requests:
- First to download the list of transaction hashes for all of the blocks in our desired range
- And then to download the list of receipt objects for all of the transaction hashes, as sketched below
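The sketch below assumes the same local endpoint and `node-fetch` dependency as the example above; the block range and helper names are illustrative, not part of the Geth API:
```javascript
import fetch from 'node-fetch'

const endpoint = 'http://127.0.0.1:8545'

// POST a batch of JSON-RPC requests and return the array of responses.
async function rpcBatch(reqs) {
  const res = await fetch(endpoint, {
    method: 'POST',
    body: JSON.stringify(reqs),
    headers: {'Content-Type': 'application/json'},
  })
  return res.json()
}

async function receiptsForRange(from, to) {
  // Batch 1: fetch the blocks in the range, with transaction hashes only.
  const blockReqs = []
  for (let i = from; i < to; i++) {
    blockReqs.push({
      method: 'eth_getBlockByNumber',
      params: [`0x${i.toString(16)}`, false],
      id: i - from,
      jsonrpc: '2.0',
    })
  }
  const blocks = await rpcBatch(blockReqs)

  // Batch 2: fetch a receipt for every transaction hash found in batch 1.
  const hashes = blocks.flatMap((b) => b.result.transactions)
  if (hashes.length === 0) return []
  const receiptReqs = hashes.map((hash, i) => ({
    method: 'eth_getTransactionReceipt',
    params: [hash],
    id: i,
    jsonrpc: '2.0',
  }))
  const receipts = await rpcBatch(receiptReqs)
  return receipts.map((r) => r.result)
}

receiptsForRange(6000000, 6000003)
  .then((receipts) => console.log(`fetched ${receipts.length} receipts`))
  .catch((err) => console.log(err))
```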
For use-cases which depend on several JSON-RPC endpoints, the batching approach can easily get complicated. In that case Geth offers a [GraphQL API](./graphql) which is more suitable.

@ -0,0 +1,65 @@
---
title: GraphQL Server
sort_key: C
---
In addition to the [JSON-RPC APIs](../rpc/server), Geth supports the GraphQL API as specified by [EIP-1767][eip-1767]. GraphQL lets you specify which fields of an object you need as part of the query, eliminating the extra load on the client for filling in fields which are not needed. It also allows for combining several traditional JSON-RPC requests into one query which translates into less overhead and more performance.
The GraphQL endpoint piggybacks on the HTTP transport used by JSON-RPC. Hence you'll have to enable and configure the relevant `--http` flags, and the `--graphql` flag itself:
```bash
geth --http --graphql
```
Now you can start querying against `http://localhost:8545/graphql`. To change the port, you'll need to provide `--http.port`, e.g.:
```bash
geth --http --http.port 9545 --graphql
```
### GraphiQL
An easy way to get started right away and try out queries is the GraphiQL interface shipped with Geth. To open it visit `http://localhost:8545/graphql/ui`. To see how this works let's read the sender, recipient and value of all transactions in block number 6000000. Try this out in GraphiQL:
```graphql
query txInfo {
block (number: 6000000) { transactions { hash from { address } to { address } value } }
}
```
GraphiQL also provides a way to explore the schema Geth provides to help you formulate your queries, which you can see on the right sidebar. Under the title `Root Types` click on `Query` to see the high-level types and their fields.
### Query
Reading out data from Geth is the biggest use-case for GraphQL. However, after trying out queries in the UI you may want to do it programmatically. You can consult the official [docs][graphql-code] to find bindings for your language, or use your favorite tool for sending HTTP requests. For the sake of completeness we briefly touch on two approaches here: first via cURL, and second via a JS script.
Here's how you'd get the latest block's number via cURL. Note the use of a JSON object for the data section:
```bash
❯ curl -X POST http://localhost:8545/graphql -H "Content-Type: application/json" --data '{ "query": "query { block { number } }" }'
{"data":{"block":{"number":6004069}}}
```
Alternatively store the JSON-ified query in a file (let's call it `block-num.query`) and do:
```bash
❯ curl -X POST http://localhost:8545/graphql -H "Content-Type: application/json" --data '@block-num.query'
```
Executing a simple query in JS looks like the following. Here we're using the lightweight library `graphql-request` to perform the request. Note the use of variables instead of hardcoding the block number in the query:
```javascript
const { request, gql } = require('graphql-request')
const query = gql`
query blockInfo($number: Long) {
block (number: $number) { hash stateRoot }
}
`
request('http://localhost:8545/graphql', query, { number: '6004067' })
.then((res) => { console.log(res) })
.catch((err) => { console.log(err) })
```
[eip-1767]: https://eips.ethereum.org/EIPS/eip-1767
[graphql-code]: https://graphql.org/code/

@ -0,0 +1,296 @@
---
title: admin Namespace
sort_key: C
---
The `admin` API gives you access to several non-standard RPC methods, which allow fine-grained
control over your Geth instance, including but not limited to network peer and RPC
endpoint management.
* TOC
{:toc}
### admin_addPeer
The `addPeer` administrative method requests adding a new remote node to the list of tracked static
nodes. The node will try to maintain connectivity to these nodes at all times, reconnecting every
once in a while if the remote connection goes down.
The method accepts a single argument, the [`enode`](https://github.com/ethereum/wiki/wiki/enode-url-format)
URL of the remote peer to start tracking and returns a `BOOL` indicating whether the peer was accepted
for tracking or some error occurred.
| Client | Method invocation |
|:--------|------------------------------------------------|
| Go | `admin.AddPeer(url string) (bool, error)` |
| Console | `admin.addPeer(url)` |
| RPC | `{"method": "admin_addPeer", "params": [url]}` |
#### Example
```javascript
> admin.addPeer("enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@52.16.188.185:30303")
true
```
### admin_addTrustedPeer
Adds the given node to a reserved trusted list which allows the
node to always connect, even if the slots are full.
It returns a `BOOL` to indicate whether the peer was successfully added to the list.
| Client | Method invocation |
|:--------|------------------------------------------------|
| Console | `admin.addTrustedPeer(url)` |
| RPC | `{"method": "admin_addTrustedPeer", "params": [url]}` |
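For illustration, the console invocation mirrors `addPeer`; the enode URL below is the same example URL used above and would be replaced with a real peer:
```javascript
> admin.addTrustedPeer("enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@52.16.188.185:30303")
true
```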
### admin_datadir
The `datadir` administrative property can be queried for the absolute path the running Geth node
currently uses to store all its databases.
| Client | Method invocation |
|:--------|-----------------------------------|
| Go      | `admin.Datadir() (string, error)` |
| Console | `admin.datadir` |
| RPC | `{"method": "admin_datadir"}` |
#### Example
```javascript
> admin.datadir
"/home/john/.ethereum"
```
### admin_exportChain
Exports the current blockchain into a local file.
It optionally takes a first and last block number, in which case it exports only that range of blocks.
It returns a boolean indicating whether the operation succeeded.
| Client | Method invocation |
|:--------|---------------------------------------------------------------------- |
| Console | `admin.exportChain(file, first, last)` |
| RPC | `{"method": "admin_exportChain", "params": [string, uint64, uint64]}` |
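For illustration, exporting the first 1000 blocks to a file (the path here is arbitrary) might look like this in the console:
```javascript
> admin.exportChain("/tmp/chain-backup.rlp", 0, 1000)
true
```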
### admin_importChain
Imports an exported list of blocks from a local file. Importing involves processing the blocks and inserting them
into the canonical chain. The state from the parent block of this range is required.
It returns a boolean indicating whether the operation succeeded.
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Console | `admin.importChain(file)` |
| RPC | `{"method": "admin_importChain", "params": [string]}` |
### admin_nodeInfo
The `nodeInfo` administrative property can be queried for all the information known about the running
Geth node at the networking granularity. These include general information about the node itself as a
participant of the [ÐΞVp2p](https://github.com/ethereum/wiki/wiki/%C3%90%CE%9EVp2p-Wire-Protocol) P2P
overlay protocol, as well as specialized information added by each of the running application protocols
(e.g. `eth`, `les`, `shh`, `bzz`).
| Client | Method invocation |
|:--------|-------------------------------------------|
| Go      | `admin.NodeInfo() (*p2p.NodeInfo, error)` |
| Console | `admin.nodeInfo` |
| RPC | `{"method": "admin_nodeInfo"}` |
#### Example
```javascript
> admin.nodeInfo
{
enode: "enode://44826a5d6a55f88a18298bca4773fca5749cdc3a5c9f308aa7d810e9b31123f3e7c5fba0b1d70aac5308426f47df2a128a6747040a3815cc7dd7167d03be320d@[::]:30303",
id: "44826a5d6a55f88a18298bca4773fca5749cdc3a5c9f308aa7d810e9b31123f3e7c5fba0b1d70aac5308426f47df2a128a6747040a3815cc7dd7167d03be320d",
ip: "::",
listenAddr: "[::]:30303",
name: "Geth/v1.5.0-unstable/linux/go1.6",
ports: {
discovery: 30303,
listener: 30303
},
protocols: {
eth: {
difficulty: 17334254859343145000,
genesis: "0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3",
head: "0xb83f73fbe6220c111136aefd27b160bf4a34085c65ba89f24246b3162257c36a",
network: 1
}
}
}
```
### admin_peerEvents
PeerEvents creates an [RPC subscription](/docs/rpc/pubsub) which receives peer events from the node's p2p server.
The type of events emitted by the server are as follows:
- `add`: emitted when a peer is added
- `drop`: emitted when a peer is dropped
- `msgsend`: emitted when a message is successfully sent to a peer
- `msgrecv`: emitted when a message is received from a peer
### admin_peers
The `peers` administrative property can be queried for all the information known about the connected
remote nodes at the networking granularity. These include general information about the nodes themselves
as participants of the [ÐΞVp2p](https://github.com/ethereum/wiki/wiki/%C3%90%CE%9EVp2p-Wire-Protocol)
P2P overlay protocol, as well as specialized information added by each of the running application
protocols (e.g. `eth`, `les`, `shh`, `bzz`).
| Client | Method invocation |
|:--------|------------------------------------------|
| Go      | `admin.Peers() ([]*p2p.PeerInfo, error)` |
| Console | `admin.peers` |
| RPC | `{"method": "admin_peers"}` |
#### Example
```javascript
> admin.peers
[{
caps: ["eth/61", "eth/62", "eth/63"],
id: "08a6b39263470c78d3e4f58e3c997cd2e7af623afce64656cfc56480babcea7a9138f3d09d7b9879344c2d2e457679e3655d4b56eaff5fd4fd7f147bdb045124",
name: "Geth/v1.5.0-unstable/linux/go1.5.1",
network: {
localAddress: "192.168.0.104:51068",
remoteAddress: "71.62.31.72:30303"
},
protocols: {
eth: {
difficulty: 17334052235346465000,
head: "5794b768dae6c6ee5366e6ca7662bdff2882576e09609bf778633e470e0e7852",
version: 63
}
}
}, /* ... */ {
caps: ["eth/61", "eth/62", "eth/63"],
id: "fcad9f6d3faf89a0908a11ddae9d4be3a1039108263b06c96171eb3b0f3ba85a7095a03bb65198c35a04829032d198759edfca9b63a8b69dc47a205d94fce7cc",
name: "Geth/v1.3.5-506c9277/linux/go1.4.2",
network: {
localAddress: "192.168.0.104:55968",
remoteAddress: "121.196.232.205:30303"
},
protocols: {
eth: {
difficulty: 17335165914080772000,
head: "5794b768dae6c6ee5366e6ca7662bdff2882576e09609bf778633e470e0e7852",
version: 63
}
}
}]
```
### admin_removePeer
Disconnects from a remote node if the connection exists.
It returns a boolean indicating whether the validation succeeded. Note that a `true` value doesn't necessarily mean
that there was a connection which was disconnected.
| Client | Method invocation |
|:--------|----------------------------------------------------- |
| Console | `admin.removePeer(url)` |
| RPC | `{"method": "admin_removePeer", "params": [string]}` |
### admin_removeTrustedPeer
Removes a remote node from the trusted peer set, but it does not disconnect it automatically.
It returns a boolean indicating whether the validation succeeded.
| Client | Method invocation |
|:--------|----------------------------------------------------- |
| Console | `admin.removeTrustedPeer(url)` |
| RPC | `{"method": "admin_removeTrustedPeer", "params": [string]}` |
### admin_startHTTP
The `startHTTP` administrative method starts an HTTP based JSON-RPC [API](/docs/rpc/server)
webserver to handle client requests. All the parameters are optional:
* `host`: network interface to open the listener socket on (defaults to `"localhost"`)
* `port`: network port to open the listener socket on (defaults to `8545`)
* `cors`: [cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) header to use (defaults to `""`)
* `apis`: API modules to offer over this interface (defaults to `"eth,net,web3"`)
The method returns a boolean flag specifying whether the HTTP RPC listener was opened or not. Please note, only one HTTP endpoint is allowed to be active at any time.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------------------------------------|
| Go | `admin.StartHTTP(host *string, port *rpc.HexNumber, cors *string, apis *string) (bool, error)` |
| Console | `admin.startHTTP(host, port, cors, apis)` |
| RPC | `{"method": "admin_startHTTP", "params": [host, port, cors, apis]}` |
#### Example
```javascript
> admin.startHTTP("127.0.0.1", 8545)
true
```
### admin_startWS
The `startWS` administrative method starts a WebSocket based [JSON RPC](https://www.jsonrpc.org/specification)
API webserver to handle client requests. All the parameters are optional:
* `host`: network interface to open the listener socket on (defaults to `"localhost"`)
* `port`: network port to open the listener socket on (defaults to `8546`)
* `cors`: [cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) header to use (defaults to `""`)
* `apis`: API modules to offer over this interface (defaults to `"eth,net,web3"`)
The method returns a boolean flag specifying whether the WebSocket RPC listener was opened or not. Please note, only one WebSocket endpoint is allowed to be active at any time.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------------------------------------|
| Go | `admin.StartWS(host *string, port *rpc.HexNumber, cors *string, apis *string) (bool, error)` |
| Console | `admin.startWS(host, port, cors, apis)` |
| RPC | `{"method": "admin_startWS", "params": [host, port, cors, apis]}` |
#### Example
```javascript
> admin.startWS("127.0.0.1", 8546)
true
```
### admin_stopHTTP
The `stopHTTP` administrative method closes the currently open HTTP RPC endpoint. As the node can only have a single HTTP endpoint running, this method takes no parameters, returning a boolean whether the endpoint was closed or not.
| Client | Method invocation |
|:--------|---------------------------------|
| Go      | `admin.StopHTTP() (bool, error)` |
| Console | `admin.stopHTTP()` |
| RPC     | `{"method": "admin_stopHTTP"}` |
#### Example
```javascript
> admin.stopHTTP()
true
```
### admin_stopWS
The `stopWS` administrative method closes the currently open WebSocket RPC endpoint. As the node can only have a single WebSocket endpoint running, this method takes no parameters, returning a boolean whether the endpoint was closed or not.
| Client | Method invocation |
|:--------|--------------------------------|
| Go      | `admin.StopWS() (bool, error)` |
| Console | `admin.stopWS()` |
| RPC     | `{"method": "admin_stopWS"}` |
#### Example
```javascript
> admin.stopWS()
true
```

@ -0,0 +1,148 @@
---
title: clique Namespace
sort_key: C
---
The `clique` API provides access to the state of the clique consensus engine. You can use
this API to manage signer votes and to check the health of a private network.
* TOC
{:toc}
### clique_getSnapshot
Retrieves a snapshot of all clique state at a given block.
| Client | Method invocation |
|:--------|------------------------------------------------------------|
| Console | `clique.getSnapshot(blockNumber)` |
| RPC | `{"method": "clique_getSnapshot", "params": [blockNumber]}` |
Example:
```javascript
> clique.getSnapshot(5463755)
{
hash: "0x018194fc50ca62d973e2f85cffef1e6811278ffd2040a4460537f8dbec3d5efc",
number: 5463755,
recents: {
5463752: "0x42eb768f2244c8811c63729a21a3569731535f06",
5463753: "0x6635f83421bf059cd8111f180f0727128685bae4",
5463754: "0x7ffc57839b00206d1ad20c69a1981b489f772031",
5463755: "0xb279182d99e65703f0076e4812653aab85fca0f0"
},
signers: {
0x42eb768f2244c8811c63729a21a3569731535f06: {},
0x6635f83421bf059cd8111f180f0727128685bae4: {},
0x7ffc57839b00206d1ad20c69a1981b489f772031: {},
0xb279182d99e65703f0076e4812653aab85fca0f0: {},
0xd6ae8250b8348c94847280928c79fb3b63ca453e: {},
0xda35dee8eddeaa556e4c26268463e26fb91ff74f: {},
0xfc18cbc391de84dbd87db83b20935d3e89f5dd91: {}
},
tally: {},
votes: []
}
```
### clique_getSnapshotAtHash
Retrieves the state snapshot at a given block.
| Client | Method invocation |
|:--------|----------------------------------------------------------|
| Console | `clique.getSnapshotAtHash(blockHash)` |
| RPC | `{"method": "clique_getSnapshotAtHash", "params": [blockHash]}` |
### clique_getSigner
Returns the signer for a specific clique block. Can be called with either a blocknumber, blockhash or an rlp encoded blob.
The RLP encoded blob can either be a block or a header.
| Client | Method invocation |
|:--------|------------------------------------------------------|
| Console | `clique.getSigner(blockNrOrHashOrRlp)` |
| RPC | `{"method": "clique_getSigner", "params": [string]}` |
### clique_getSigners
Retrieves the list of authorized signers at the specified block number.
| Client | Method invocation |
|:--------|------------------------------------------------------------|
| Console | `clique.getSigners(blockNumber)` |
| RPC | `{"method": "clique_getSigners", "params": [blockNumber]}` |
### clique_getSignersAtHash
Retrieves the list of authorized signers at the specified block hash.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Console | `clique.getSignersAtHash(blockHash)` |
| RPC | `{"method": "clique_getSignersAtHash", "params": [string]}` |
### clique_proposals
Returns the current proposals the node is voting on.
| Client | Method invocation |
|:--------|------------------------------------------------|
| Console | `clique.proposals()` |
| RPC | `{"method": "clique_proposals", "params": []}` |
### clique_propose
Adds a new authorization proposal that the signer will attempt to push through. If the
`auth` parameter is true, the local signer votes for the given address to be included in
the set of authorized signers. With `auth` set to `false`, the vote is against the
address.
| Client | Method invocation |
|:--------|-----------------------------------------------------------|
| Console | `clique.propose(address, auth)` |
| RPC | `{"method": "clique_propose", "params": [address, auth]}` |
### clique_discard
This method drops a currently running proposal. The signer will not cast
further votes (either for or against) the address.
| Client | Method invocation |
|:--------|-----------------------------------------------------|
| Console | `clique.discard(address)` |
| RPC | `{"method": "clique_discard", "params": [address]}` |
### clique_status
This is a debugging method which returns statistics about signer activity
for the last 64 blocks. The returned object contains the following fields:
- `inturnPercent`: percentage of blocks signed in-turn
- `sealerActivity`: object containing signer addresses and the number
of blocks signed by them
- `numBlocks`: number of blocks analyzed
| Client | Method invocation |
|:--------|-----------------------------------------------------|
| Console | `clique.status()` |
| RPC | `{"method": "clique_status", "params": []}` |
Example:
```
> clique.status()
{
inturnPercent: 100,
numBlocks: 64,
sealerActivity: {
0x42eb768f2244c8811c63729a21a3569731535f06: 9,
0x6635f83421bf059cd8111f180f0727128685bae4: 9,
0x7ffc57839b00206d1ad20c69a1981b489f772031: 9,
0xb279182d99e65703f0076e4812653aab85fca0f0: 10,
0xd6ae8250b8348c94847280928c79fb3b63ca453e: 9,
0xda35dee8eddeaa556e4c26268463e26fb91ff74f: 9,
0xfc18cbc391de84dbd87db83b20935d3e89f5dd91: 9
}
}
```

@ -0,0 +1,947 @@
---
title: debug Namespace
sort_key: C
---
The `debug` API gives you access to several non-standard RPC methods, which will allow you
to inspect, debug and set certain debugging flags during runtime.
* TOC
{:toc}
### debug_accountRange
Enumerates all accounts at a given block with paging capability. At most `maxResults` items are returned per page, and the items have keys that come after the `start` key (hashed address).
If `incompletes` is false, accounts for which the key preimage (i.e. the address) doesn't exist in the database are skipped. Note that geth does not store preimages by default.
| Client | Method invocation |
|:--------|------------------------------------------------------------------------------------------------------------------|
| Console | `debug.accountRange(blockNrOrHash, start, maxResults, nocode, nostorage, incompletes)` |
| RPC | `{"method": "debug_getHeaderRlp", "params": [blockNrOrHash, start, maxResults, nocode, nostorage, incompletes]}` |
### debug_backtraceAt
Sets the logging backtrace location. When a backtrace location
is set and a log message is emitted at that location, the stack
of the goroutine executing the log statement will be printed to stderr.
The location is specified as `<filename>:<line>`.
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Console | `debug.backtraceAt(string)` |
| RPC | `{"method": "debug_backtraceAt", "params": [string]}` |
Example:
``` javascript
> debug.backtraceAt("server.go:443")
```
### debug_blockProfile
Turns on block profiling for the given duration and writes
profile data to disk. It uses a profile rate of 1 for most
accurate information. If a different rate is desired, set
the rate and write the profile manually using
`debug_writeBlockProfile`.
| Client | Method invocation |
|:--------|----------------------------------------------------------------|
| Console | `debug.blockProfile(file, seconds)` |
| RPC | `{"method": "debug_blockProfile", "params": [string, number]}` |
### debug_chaindbCompact
Flattens the entire key-value database into a single level, removing all unused slots and merging all keys.
| Client | Method invocation |
|:--------|----------------------------------------------------|
| Console | `debug.chaindbCompact()` |
| RPC | `{"method": "debug_chaindbCompact", "params": []}` |
### debug_chaindbProperty
Returns leveldb properties of the key-value database.
| Client | Method invocation |
|:--------|----------------------------------------------------------------|
| Console | `debug.chaindbProperty(property string)` |
| RPC | `{"method": "debug_chaindbProperty", "params": [property]}` |
### debug_cpuProfile
Turns on CPU profiling for the given duration and writes
profile data to disk.
| Client | Method invocation |
|:--------|--------------------------------------------------------------|
| Console | `debug.cpuProfile(file, seconds)` |
| RPC | `{"method": "debug_cpuProfile", "params": [string, number]}` |
### debug_dbAncient
Retrieves an ancient binary blob from the freezer. The freezer is a collection of append-only immutable files.
The first argument `kind` specifies which table to look up data from. The list of all table kinds are as follows:
- `headers`: block headers
- `hashes`: canonical hash table (block number -> block hash)
- `bodies`: block bodies
- `receipts`: block receipts
- `diffs`: total difficulty table (block number -> td)
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Console | `debug.dbAncient(kind string, number uint64)` |
| RPC | `{"method": "debug_dbAncient", "params": [string, number]}` |
### debug_dbAncients
Returns the number of ancient items in the ancient store.
| Client | Method invocation |
|:--------|----------------------------------|
| Console | `debug.dbAncients()` |
| RPC | `{"method": "debug_dbAncients"}` |
### debug_dbGet
Returns the raw value of a key stored in the database.
| Client | Method invocation |
|:--------|----------------------------------------------------------------|
| Console | `debug.dbGet(key string)` |
| RPC | `{"method": "debug_dbGet", "params": [key]}` |
### debug_dumpBlock
Retrieves the state that corresponds to the block number and returns a list of accounts (including
storage and code).
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Go | `debug.DumpBlock(number uint64) (state.World, error)` |
| Console | `debug.dumpBlock(number)`                              |
| RPC | `{"method": "debug_dumpBlock", "params": [number]}` |
#### Example
```javascript
> debug.dumpBlock(10)
{
fff7ac99c8e4feb60c9750054bdc14ce1857f181: {
balance: "49358640978154672",
code: "",
codeHash: "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470",
nonce: 2,
root: "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
storage: {}
},
fffbca3a38c3c5fcb3adbb8e63c04c3e629aafce: {
balance: "3460945928",
code: "",
codeHash: "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470",
nonce: 657,
root: "56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421",
storage: {}
}
},
root: "19f4ed94e188dd9c7eb04226bd240fa6b449401a6c656d6d2816a87ccaf206f1"
}
```
### debug_freeOSMemory
Forces garbage collection.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Go | `debug.FreeOSMemory()` |
| Console | `debug.freeOSMemory()` |
| RPC | `{"method": "debug_freeOSMemory", "params": []}` |
### debug_freezeClient
Forces a temporary client freeze, normally happening when the server is overloaded. Available as part of the LES light server.
| Client | Method invocation |
|:--------|------------------------------------------------------|
| Console | `debug.freezeClient(node string)` |
| RPC | `{"method": "debug_freezeClient", "params": [node]}` |
### debug_gcStats
Returns garbage collection statistics.
See https://golang.org/pkg/runtime/debug/#GCStats for information about
the fields of the returned object.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.gcStats()` |
| RPC | `{"method": "debug_gcStats", "params": []}` |
### debug_getAccessibleState
Returns the first number where the node has accessible state on disk.
This is the post-state of that block and the pre-state of the next
block. The (from, to) parameters are the sequence of blocks
to search, which can go either forwards or backwards.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------------|
| Console | `debug.getAccessibleState(from, to rpc.BlockNumber)` |
| RPC | `{"method": "debug_getAccessibleState", "params": [from, to]}` |
### debug_getBadBlocks
Returns a list of the last 'bad blocks' that the client has seen on
the network and returns them as a JSON list of block-hashes.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.getBadBlocks()` |
| RPC | `{"method": "debug_getBadBlocks", "params": []}` |
### debug_getBlockRlp
Retrieves and returns the RLP encoded block by number.
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Go | `debug.GetBlockRlp(number uint64) (string, error)` |
| Console | `debug.getBlockRlp(number, [options])` |
| RPC | `{"method": "debug_getBlockRlp", "params": [number]}` |
References: [RLP](https://github.com/ethereum/wiki/wiki/RLP)
### debug_getHeaderRlp
Returns an RLP-encoded header.
| Client | Method invocation |
|:--------|-----------------------------------------------------|
| Console | `debug.getHeaderRlp(blockNum)` |
| RPC | `{"method": "debug_getHeaderRlp", "params": [num]}` |
### debug_getModifiedAccountsByHash
Returns all accounts that have changed between the two blocks specified. A change is defined as a difference in nonce, balance, code hash, or storage hash. With one parameter, returns the list of accounts modified in the specified block.
| Client | Method invocation |
|:--------|---------------------------------------------------------------------------------|
| Console | `debug.getModifiedAccountsByHash(startHash, endHash)` |
| RPC | `{"method": "debug_getModifiedAccountsByHash", "params": [startHash, endHash]}` |
### debug_getModifiedAccountsByNumber
Returns all accounts that have changed between the two blocks specified.
A change is defined as a difference in nonce, balance, code hash or
storage hash.
| Client | Method invocation |
|:--------|---------------------------------------------------------------------------------|
| Console | `debug.getModifiedAccountsByNumber(startNum uint64, endNum uint64)` |
| RPC | `{"method": "debug_getModifiedAccountsByNumber", "params": [startNum, endNum]}` |
### debug_getRawReceipts
Returns the consensus-encoding of all receipts in a single block.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------|
| Console | `debug.getRawReceipts(blockNrOrHash)` |
| RPC | `{"method": "debug_getRawReceipts", "params": [blockNrOrHash]}` |
### debug_goTrace
Turns on Go runtime tracing for the given duration and writes
trace data to disk.
| Client | Method invocation |
|:--------|-----------------------------------------------------------|
| Console | `debug.goTrace(file, seconds)` |
| RPC | `{"method": "debug_goTrace", "params": [string, number]}` |
### debug_intermediateRoots
Executes a block (whether it is a bad, canonical or side-chain block) and returns a list of intermediate roots: the state root after each transaction.
| Client | Method invocation |
|:--------|--------------------------------------------------------------------|
| Console | `debug.intermediateRoots(blockHash, [options])` |
| RPC | `{"method": "debug_intermediateRoots", "params": [blockHash, {}]}` |
### debug_memStats
Returns detailed runtime memory statistics.
See https://golang.org/pkg/runtime/#MemStats for information about
the fields of the returned object.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.memStats()` |
| RPC | `{"method": "debug_memStats", "params": []}` |
### debug_mutexProfile
Turns on mutex profiling for nsec seconds and writes profile data to file. It uses a profile rate of 1 for most accurate information. If a different rate is desired, set the rate and write the profile manually.
| Client | Method invocation |
|:--------|------------------------------------------------------------|
| Console | `debug.mutexProfile(file, nsec)` |
| RPC | `{"method": "debug_mutexProfile", "params": [file, nsec]}` |
### debug_preimage
Returns the preimage for a sha3 hash, if known.
| Client | Method invocation |
|:--------|--------------------------------------------------|
| Console | `debug.preimage(hash)` |
| RPC | `{"method": "debug_preimage", "params": [hash]}` |
### debug_printBlock
Retrieves a block and returns its pretty printed form.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.printBlock(number uint64)` |
| RPC | `{"method": "debug_printBlock", "params": [number]}` |
### debug_seedHash
Fetches the seed hash of a block by number.
| Client | Method invocation |
|:--------|----------------------------------------------------|
| Go | `debug.SeedHash(number uint64) (string, error)` |
| Console | `debug.seedHash(number, [options])` |
| RPC | `{"method": "debug_seedHash", "params": [number]}` |
### debug_setBlockProfileRate
Sets the rate (in samples/sec) of goroutine block profile
data collection. A non-zero rate enables block profiling,
setting it to zero stops the profile. Collected profile data
can be written using `debug_writeBlockProfile`.
| Client | Method invocation |
|:--------|---------------------------------------------------------------|
| Console | `debug.setBlockProfileRate(rate)` |
| RPC | `{"method": "debug_setBlockProfileRate", "params": [number]}` |
### debug_setGCPercent
Sets the garbage collection target percentage. A negative value disables garbage
collection.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Go | `debug.SetGCPercent(v int)` |
| Console | `debug.setGCPercent(v)` |
| RPC | `{"method": "debug_setGCPercent", "params": [v]}` |
### debug_setHead
Sets the current head of the local chain by block number. **Note**, this is a
destructive action and may severely damage your chain. Use with *extreme* caution.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Go | `debug.SetHead(number uint64)` |
| Console | `debug.setHead(number)` |
| RPC | `{"method": "debug_setHead", "params": [number]}` |
References:
[Ethash](https://eth.wiki/en/concepts/ethash/ethash)
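For example, to rewind the local chain to block 1000 (assuming the block number is passed as a hex quantity, and keeping the warning above in mind):
```javascript
// Destructive: rewinds the chain to block 1000 (0x3e8)
> debug.setHead("0x3e8")
null
```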
### debug_setMutexProfileFraction
Sets the rate of mutex profiling.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------|
| Console | `debug.setMutexProfileFraction(rate int)` |
| RPC | `{"method": "debug_setMutexProfileFraction", "params": [rate]}` |
### debug_stacks
Returns a printed representation of the stacks of all goroutines.
Note that the web3 wrapper for this method takes care of the printing
and does not return the string.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.stacks()` |
| RPC | `{"method": "debug_stacks", "params": []}` |
### debug_standardTraceBlockToFile
When JS-based tracing (see below) was first implemented, the intended use case was to enable long-running tracers that could stream results back via a subscription channel.
This method works a bit differently. (For full details, see [PR](https://github.com/ethereum/go-ethereum/pull/17914))
- It streams output to disk during the execution, to not blow up the memory usage on the node
- It uses `jsonl` as output format (to allow streaming)
- Uses a cross-client standardized output, so-called 'standard json'
  * Uses `op` for the string representation of the opcode, instead of `op`/`opName` for numeric/string, and other similar small differences.
  * has `refund`
  * Represents memory as a contiguous chunk of data, as opposed to a list of `32`-byte segments like `debug_traceTransaction`
This means that this method is only 'useful' for callers who control the node -- at least sufficiently to be able to read the artefacts from the filesystem after the fact.
The method can be used to dump a certain transaction out of a given block:
```
> debug.standardTraceBlockToFile("0x0bbe9f1484668a2bf159c63f0cf556ed8c8282f99e3ffdb03ad2175a863bca63", {txHash:"0x4049f61ffbb0747bb88dc1c85dd6686ebf225a3c10c282c45a8e0c644739f7e9", disableMemory:true})
["/tmp/block_0x0bbe9f14-14-0x4049f61f-099048234"]
```
Or all txs from a block:
```
> debug.standardTraceBlockToFile("0x0bbe9f1484668a2bf159c63f0cf556ed8c8282f99e3ffdb03ad2175a863bca63", {disableMemory:true})
["/tmp/block_0x0bbe9f14-0-0xb4502ea7-409046657", "/tmp/block_0x0bbe9f14-1-0xe839be8f-954614764", "/tmp/block_0x0bbe9f14-2-0xc6e2052f-542255195", "/tmp/block_0x0bbe9f14-3-0x01b7f3fe-209673214", "/tmp/block_0x0bbe9f14-4-0x0f290422-320999749", "/tmp/block_0x0bbe9f14-5-0x2dc0fb80-844117472", "/tmp/block_0x0bbe9f14-6-0x35542da1-256306111", "/tmp/block_0x0bbe9f14-7-0x3e199a08-086370834", "/tmp/block_0x0bbe9f14-8-0x87778b88-194603593", "/tmp/block_0x0bbe9f14-9-0xbcb081ba-629580052", "/tmp/block_0x0bbe9f14-10-0xc254381a-578605923", "/tmp/block_0x0bbe9f14-11-0xcc434d58-405931366", "/tmp/block_0x0bbe9f14-12-0xce61967d-874423181", "/tmp/block_0x0bbe9f14-13-0x05a20b35-267153288", "/tmp/block_0x0bbe9f14-14-0x4049f61f-606653767", "/tmp/block_0x0bbe9f14-15-0x46d473d2-614457338", "/tmp/block_0x0bbe9f14-16-0x35cf5500-411906321", "/tmp/block_0x0bbe9f14-17-0x79222961-278569788", "/tmp/block_0x0bbe9f14-18-0xad84e7b1-095032683", "/tmp/block_0x0bbe9f14-19-0x4bd48260-019097038", "/tmp/block_0x0bbe9f14-20-0x1517411d-292624085", "/tmp/block_0x0bbe9f14-21-0x6857e350-971385904", "/tmp/block_0x0bbe9f14-22-0xbe3ae2ca-236639695"]
```
Files are created in a temp-location, with the naming standard `block_<blockhash:4>-<txindex>-<txhash:4>-<random suffix>`. Each opcode immediately streams to file, with no in-geth buffering aside from whatever buffering the OS normally does.
On the server side, it also adds some more info when regenerating historical state, namely the reexec-number if `required historical state is not available` is encountered, so a user can experiment with increasing that setting. It also prints out the remaining blocks until it reaches the target:
```
INFO [10-15|13:48:25.263] Regenerating historical state block=2385959 target=2386012 remaining=53 elapsed=3m30.990537767s
INFO [10-15|13:48:33.342] Regenerating historical state block=2386012 target=2386012 remaining=0 elapsed=3m39.070073163s
INFO [10-15|13:48:33.343] Historical state regenerated block=2386012 elapsed=3m39.070454362s nodes=10.03mB preimages=652.08kB
INFO [10-15|13:48:33.352] Wrote trace file=/tmp/block_0x14490c57-0-0xfbbd6d91-715824834
INFO [10-15|13:48:33.352] Wrote trace file=/tmp/block_0x14490c57-1-0x71076194-187462969
INFO [10-15|13:48:34.421] Wrote trace file=/tmp/block_0x14490c57-2-0x3f4263fe-056924484
```
The `options` object is as follows:
```
type StdTraceConfig struct {
*vm.LogConfig
Reexec *uint64
TxHash *common.Hash
}
```
### debug_standardTraceBadBlockToFile
This method is similar to `debug_standardTraceBlockToFile`, but can be used to obtain info about a block which has been _rejected_ as invalid (for some reason).
### debug_startCPUProfile
Turns on CPU profiling indefinitely, writing to the given file.
| Client | Method invocation |
|:--------|-----------------------------------------------------------|
| Console | `debug.startCPUProfile(file)` |
| RPC | `{"method": "debug_startCPUProfile", "params": [string]}` |
### debug_startGoTrace
Starts writing a Go runtime trace to the given file.
| Client | Method invocation |
|:--------|--------------------------------------------------------|
| Console | `debug.startGoTrace(file)` |
| RPC | `{"method": "debug_startGoTrace", "params": [string]}` |
### debug_stopCPUProfile
Stops an ongoing CPU profile.
| Client | Method invocation |
|:--------|----------------------------------------------------|
| Console | `debug.stopCPUProfile()` |
| RPC | `{"method": "debug_stopCPUProfile", "params": []}` |
### debug_stopGoTrace
Stops writing the Go runtime trace.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.stopGoTrace()`                              |
| RPC | `{"method": "debug_stopGoTrace", "params": []}` |
### debug_storageRangeAt
Returns the storage at the given block height and transaction index. The result can be paged by providing a `maxResult` to cap the number of storage slots returned as well as specifying the offset via `keyStart` (hash of storage key).
| Client | Method invocation |
|:--------|----------------------------------------------------------------------------------------------------------|
| Console | `debug.storageRangeAt(blockHash, txIdx, contractAddress, keyStart, maxResult)` |
| RPC | `{"method": "debug_storageRangeAt", "params": [blockHash, txIdx, contractAddress, keyStart, maxResult]}` |
### debug_traceBadBlock
Returns the structured logs created during the execution of EVM against a block pulled from the pool of bad ones and returns them as a JSON object.
| Client | Method invocation |
|:--------|----------------------------------------------------------------|
| Console | `debug.traceBadBlock(blockHash, [options])` |
| RPC | `{"method": "debug_traceBadBlock", "params": [blockHash, {}]}` |
### debug_traceBlock
The `traceBlock` method will return a full stack trace of all invoked opcodes of all transaction
that were included in this block. **Note**, the parent of this block must be present or it will
fail.
| Client | Method invocation |
|:--------|--------------------------------------------------------------------------|
| Go      | `debug.TraceBlock(blockRlp []byte, config *vm.Config) BlockTraceResult`  |
| Console | `debug.traceBlock(blockRlp, [options])`                                   |
| RPC | `{"method": "debug_traceBlock", "params": [blockRlp, {}]}` |
References:
[RLP](https://github.com/ethereum/wiki/wiki/RLP)
#### Example
```javascript
> debug.traceBlock("0xblock_rlp")
{
gas: 85301,
returnValue: "",
structLogs: [{
depth: 1,
error: "",
gas: 162106,
gasCost: 3,
memory: null,
op: "PUSH1",
pc: 0,
stack: [],
storage: {}
},
/* snip */
{
depth: 1,
error: "",
gas: 100000,
gasCost: 0,
memory: ["0000000000000000000000000000000000000000000000000000000000000006", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000060"],
op: "STOP",
pc: 120,
stack: ["00000000000000000000000000000000000000000000000000000000d67cbec9"],
storage: {
0000000000000000000000000000000000000000000000000000000000000004: "8241fa522772837f0d05511f20caa6da1d5a3209000000000000000400000001",
0000000000000000000000000000000000000000000000000000000000000006: "0000000000000000000000000000000000000000000000000000000000000001",
f652222313e28459528d920b65115c16c04f3efc82aaedc97be59f3f377c0d3f: "00000000000000000000000002e816afc1b5c0f39852131959d946eb3b07b5ad"
}
}]
```
### debug_traceBlockByNumber
Similar to [debug_traceBlock](#debug_traceblock), `traceBlockByNumber` accepts a block number and will replay the
block that is already present in the database.
| Client | Method invocation |
|:--------|--------------------------------------------------------------------------------|
| Go      | `debug.TraceBlockByNumber(number uint64, config *vm.Config) BlockTraceResult`  |
| Console | `debug.traceBlockByNumber(number, [options])` |
| RPC | `{"method": "debug_traceBlockByNumber", "params": [number, {}]}` |
References:
[RLP](https://github.com/ethereum/wiki/wiki/RLP)
### debug_traceBlockByHash
Similar to [debug_traceBlock](#debug_traceblock), `traceBlockByHash` accepts a block hash and will replay the
block that is already present in the database.
| Client | Method invocation |
|:--------|---------------------------------------------------------------------------------|
| Go      | `debug.TraceBlockByHash(hash common.Hash, config *vm.Config) BlockTraceResult`  |
| Console | `debug.traceBlockByHash(hash, [options])` |
| RPC | `{"method": "debug_traceBlockByHash", "params": [hash {}]}` |
References:
[RLP](https://github.com/ethereum/wiki/wiki/RLP)
### debug_traceBlockFromFile
Similar to [debug_traceBlock](#debug_traceblock), `traceBlockFromFile` accepts a file containing the RLP of the block.
| Client | Method invocation |
|:--------|----------------------------------------------------------------------------------|
| Go      | `debug.TraceBlockFromFile(fileName string, config *vm.Config) BlockTraceResult`  |
| Console | `debug.traceBlockFromFile(fileName, [options])` |
| RPC | `{"method": "debug_traceBlockFromFile", "params": [fileName, {}]}` |
References:
[RLP](https://github.com/ethereum/wiki/wiki/RLP)
### debug_traceCall
The `debug_traceCall` method lets you run an `eth_call` within the context of the given block execution using the final state of parent block as the base. The first argument (just as in `eth_call`) is a [transaction object](/docs/rpc/objects#transaction-call-object). The block can be specified either by hash or by number as the second argument. A tracer can be specified as a third argument, similar to `debug_traceTransaction`. It returns the same output as `debug_traceTransaction`.
| Client | Method invocation |
|:-------:|-----------------------------------|
| Go | `debug.TraceCall(args ethapi.CallArgs, blockNrOrHash rpc.BlockNumberOrHash, config *TraceConfig) (*ExecutionResult, error)` |
| Console | `debug.traceCall(object, blockNrOrHash, [options])` |
| RPC | `{"method": "debug_traceCall", "params": [object, blockNrOrHash, {}]}` |
#### Example
No specific call options:
```
> debug.traceCall(null, "0x0")
{
failed: false,
gas: 53000,
returnValue: "",
structLogs: []
}
```
Tracing a call with a destination and specific sender, disabling the storage and memory output (less data returned over RPC):
```
debug.traceCall({
"from": "0xdeadbeef29292929192939494959594933929292",
"to": "0xde929f939d939d393f939393f93939f393929023",
"gas": "0x7a120",
"data": "0xf00d4b5d00000000000000000000000001291230982139282304923482304912923823920000000000000000000000001293123098123928310239129839291010293810"
},
"latest", {"disableStorage": true, "disableMemory": true})
```
It is possible to supply 'overrides' for both state-data (accounts/storage) and block data (number, timestamp etc). In the example below,
a call which executes `NUMBER` is performed, and the overridden number is placed on the stack:
```
> debug.traceCall({
from: eth.accounts[0],
value:"0x1",
gasPrice: "0xffffffff",
gas: "0xffff",
input: "0x43"},
"latest",
{"blockoverrides":
{"number": "0x50"}
})
{
failed: false,
gas: 53018,
returnValue: "",
structLogs: [{
depth: 1,
gas: 12519,
gasCost: 2,
op: "NUMBER",
pc: 0,
stack: []
}, {
depth: 1,
gas: 12517,
gasCost: 0,
op: "STOP",
pc: 1,
stack: ["0x50"]
}]
}
```
Curl example:
```
> curl -H "Content-Type: application/json" -X POST localhost:8545 --data '{"jsonrpc":"2.0","method":"debug_traceCall","params":[null, "pending"],"id":1}'
{"jsonrpc":"2.0","id":1,"result":{"gas":53000,"failed":false,"returnValue":"","structLogs":[]}}
```
### debug_traceChain
Returns the structured logs created during the execution of EVM between two blocks (excluding start) as a JSON object.
This endpoint must be invoked via `debug_subscribe` as follows:
`const res = provider.send('debug_subscribe', ['traceChain', '0x3f3a2a', '0x3f3a2b'])`
Please refer to the [subscription page](https://geth.ethereum.org/docs/rpc/pubsub) for more details.
### debug_traceTransaction
**Note:** In most scenarios, `debug.standardTraceBlockToFile` is better suited for tracing!
The `traceTransaction` debugging method will attempt to run the transaction in the exact same manner
as it was executed on the network. It will replay any transaction that may have been executed prior
to this one before it will finally attempt to execute the transaction that corresponds to the given
hash.
In addition to the hash of the transaction you may give it a secondary *optional* argument, which
specifies the options for this specific call. The possible options are:
* `disableStorage`: `BOOL`. Setting this to true will disable storage capture (default = false).
* `disableStack`: `BOOL`. Setting this to true will disable stack capture (default = false).
* `enableMemory`: `BOOL`. Setting this to true will enable memory capture (default = false).
* `enableReturnData`: `BOOL`. Setting this to true will enable return data capture (default = false).
* `tracer`: `STRING`. Setting this will enable JavaScript-based transaction tracing, described below. If set, the previous four arguments will be ignored.
* `timeout`: `STRING`. Overrides the default timeout of 5 seconds for JavaScript-based tracing calls. Valid values are described [here](https://golang.org/pkg/time/#ParseDuration).
| Client | Method invocation |
|:--------|----------------------------------------------------------------------------------------------|
| Go | `debug.TraceTransaction(txHash common.Hash, logger *vm.LogConfig) (*ExecutionResult, error)` |
| Console | `debug.traceTransaction(txHash, [options])` |
| RPC | `{"method": "debug_traceTransaction", "params": [txHash, {}]}` |
#### Example
```javascript
> debug.traceTransaction("0x2059dd53ecac9827faad14d364f9e04b1d5fe5b506e3acc886eff7a6f88a696a")
{
gas: 85301,
returnValue: "",
structLogs: [{
depth: 1,
error: "",
gas: 162106,
gasCost: 3,
memory: null,
op: "PUSH1",
pc: 0,
stack: [],
storage: {}
},
/* snip */
{
depth: 1,
error: "",
gas: 100000,
gasCost: 0,
memory: ["0000000000000000000000000000000000000000000000000000000000000006", "0000000000000000000000000000000000000000000000000000000000000000", "0000000000000000000000000000000000000000000000000000000000000060"],
op: "STOP",
pc: 120,
stack: ["00000000000000000000000000000000000000000000000000000000d67cbec9"],
storage: {
0000000000000000000000000000000000000000000000000000000000000004: "8241fa522772837f0d05511f20caa6da1d5a3209000000000000000400000001",
0000000000000000000000000000000000000000000000000000000000000006: "0000000000000000000000000000000000000000000000000000000000000001",
f652222313e28459528d920b65115c16c04f3efc82aaedc97be59f3f377c0d3f: "00000000000000000000000002e816afc1b5c0f39852131959d946eb3b07b5ad"
}
}]
```
#### JavaScript-based tracing
Specifying the `tracer` option in the second argument enables JavaScript-based tracing. In this mode, `tracer` is interpreted as a JavaScript expression that is expected to evaluate to an object which must expose the `result` and `fault` methods. There exist 3 additional methods, namely: `step`, `enter` and `exit`. You must provide either `step`, or `enter` AND `exit` (i.e. these two must be exposed together). You may expose all three if you choose to do so.
##### Step
`step` is a function that takes two arguments, `log` and `db`, and is called for each step of the EVM, or when an error occurs, as the specified transaction is traced.
`log` has the following fields:
- `op`: Object, an OpCode object representing the current opcode
- `stack`: array[big.Int], the EVM execution stack
- `memory`: Object, a structure representing the contract's memory space
- `contract`: Object, an object representing the account executing the current operation
and the following methods:
- `getPC()` - returns a Number with the current program counter
- `getGas()` - returns a Number with the amount of gas remaining
- `getCost()` - returns the cost of the opcode as a Number
- `getDepth()` - returns the execution depth as a Number
- `getRefund()` - returns the amount to be refunded as a Number
- `getError()` - returns information about the error if one occurred, otherwise returns `undefined`
If error is non-empty, all other fields should be ignored.
For efficiency, the same `log` object is reused on each execution step, updated with current values; make sure to copy values you want to preserve beyond the current call. For instance, this step function will not work:

    function(log) {
      this.logs.append(log);
    }

But this step function will:

    function(log) {
      this.logs.append({gas: log.getGas(), pc: log.getPC(), ...});
    }

`log.op` has the following methods:
- `isPush()` - returns true iff the opcode is a PUSHn
- `toString()` - returns the string representation of the opcode
- `toNumber()` - returns the opcode's number
`log.memory` has the following methods:
- `slice(start, stop)` - returns the specified segment of memory as a byte slice
- `getUint(offset)` - returns the 32 bytes at the given offset
`log.stack` has the following methods:
- `peek(idx)` - returns the idx-th element from the top of the stack (0 is the topmost element) as a big.Int
- `length()` - returns the number of elements in the stack
`log.contract` has the following methods:
- `getCaller()` - returns the address of the caller
- `getAddress()` - returns the address of the current contract
- `getValue()` - returns the amount of value sent from caller to contract as a big.Int
- `getInput()` - returns the input data passed to the contract
`db` has the following methods:
- `getBalance(address)` - returns a `big.Int` with the specified account's balance
- `getNonce(address)` - returns a Number with the specified account's nonce
- `getCode(address)` - returns a byte slice with the code for the specified account
- `getState(address, hash)` - returns the state value for the specified account and the specified hash
- `exists(address)` - returns true if the specified address exists
If the step function throws an exception or executes an illegal operation at any point, it will not be called on any further VM steps, and the error will be returned to the caller.
##### Result
`result` is a function that takes two arguments `ctx` and `db`, and is expected to return a JSON-serializable value to return to the RPC caller.
`ctx` is the context in which the transaction is executing and has the following fields:
- `type` - String, one of the two values `CALL` and `CREATE`
- `from` - Address, sender of the transaction
- `to` - Address, target of the transaction
- `input` - Buffer, input transaction data
- `gas` - Number, gas budget of the transaction
- `value` - big.Int, amount to be transferred in wei
- `block` - Number, block number
- `output` - Buffer, value returned from EVM
- `gasUsed` - Number, amount of gas used in executing the transaction (excludes txdata costs)
- `time` - String, execution runtime
##### Fault
`fault` is a function that takes two arguments, `log` and `db`, just like `step` and is invoked when an error happens during the execution of an opcode which wasn't reported in `step`. The method `log.getError()` has information about the error.
##### Enter & Exit
`enter` and `exit` are respectively invoked on stepping in and out of an internal call. More specifically they are invoked on the `CALL` variants, `CREATE` variants and also for the transfer implied by a `SELFDESTRUCT`.
`enter` takes a `callFrame` object as argument which has the following methods:
- `getType()` - returns a string which has the type of the call frame
- `getFrom()` - returns the address of the call frame sender
- `getTo()` - returns the address of the call frame target
- `getInput()` - returns the input as a buffer
- `getGas()` - returns a Number which has the amount of gas provided for the frame
- `getValue()` - returns a `big.Int` with the amount to be transferred only if available, otherwise `undefined`
`exit` takes in a `frameResult` object which has the following methods:
- `getGasUsed()` - returns amount of gas used throughout the frame as a Number
- `getOutput()` - returns the output as a buffer
- `getError()` - returns an error if one occurred during execution and `undefined` otherwise
##### Usage
Note that several values are Go big.Int objects, not JavaScript numbers or JS bigints. As such, they have the same interface as described in the godocs. Their default serialization to JSON is as a JavaScript number; to serialize large numbers accurately call `.String()` on them. For convenience, `big.NewInt(x)` is provided, and will convert a uint to a Go BigInt.
Usage example, returns the top element of the stack at each CALL opcode only:

    debug.traceTransaction(txhash, {tracer: '{data: [], fault: function(log) {}, step: function(log) { if(log.op.toString() == "CALL") this.data.push(log.stack.peek(0)); }, result: function() { return this.data; }}'});

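For completeness, a sketch of an `enter`/`exit` based tracer that records every internal call frame and its gas usage. It is built only from the `callFrame`/`frameResult` methods documented above; the `frames` and `open` field names are arbitrary, and this is a minimal illustration rather than a full call tracer:
```javascript
debug.traceTransaction(txhash, {tracer: '{frames: [], open: [], fault: function(log) {}, enter: function(frame) { this.open.push(this.frames.length); this.frames.push({type: frame.getType(), gas: frame.getGas()}); }, exit: function(res) { this.frames[this.open.pop()].gasUsed = res.getGasUsed(); }, result: function() { return this.frames; }}'});
```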
### debug_verbosity
Sets the logging verbosity ceiling. Log messages with level
up to and including the given level will be printed.
The verbosity of individual packages and source files
can be raised using `debug_vmodule`.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.verbosity(level)` |
| RPC | `{"method": "debug_vmodule", "params": [number]}` |
### debug_vmodule
Sets the logging verbosity pattern.
| Client | Method invocation |
|:--------|---------------------------------------------------|
| Console | `debug.vmodule(string)` |
| RPC | `{"method": "debug_vmodule", "params": [string]}` |
#### Examples
If you want to see messages from a particular Go package (directory)
and all subdirectories, use:
``` javascript
> debug.vmodule("eth/*=6")
```
If you want to restrict messages to a particular package (e.g. p2p)
but exclude subdirectories, use:
``` javascript
> debug.vmodule("p2p=6")
```
If you want to see log messages from a particular source file, use
``` javascript
> debug.vmodule("server.go=6")
```
You can compose these basic patterns. If you want to see all
output from peer.go in a package below eth (eth/peer.go,
eth/downloader/peer.go) as well as output from package p2p
at level <= 5, use:
``` javascript
debug.vmodule("eth/*/peer.go=6,p2p=5")
```
### debug_writeBlockProfile
Writes a goroutine blocking profile to the given file.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Console | `debug.writeBlockProfile(file)` |
| RPC | `{"method": "debug_writeBlockProfile", "params": [string]}` |
### debug_writeMemProfile
Writes an allocation profile to the given file.
Note that the profiling rate cannot be set through the API,
it must be set on the command line using the `--pprof.memprofilerate`
flag.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Console | `debug.writeMemProfile(file string)` |
| RPC | `{"method": "debug_writeBlockProfile", "params": [string]}` |
### debug_writeMutexProfile
Writes a mutex contention profile to the given file.
| Client | Method invocation |
|:--------|-----------------------------------------------------------|
| Console | `debug.writeMutexProfile(file)` |
| RPC | `{"method": "debug_writeMutexProfile", "params": [file]}` |

@ -0,0 +1,207 @@
---
title: eth Namespace
sort_key: C
---
Geth provides several extensions to the standard "eth" JSON-RPC namespace.
* TOC
{:toc}
### eth_subscribe, eth_unsubscribe
These methods are used for real-time events through subscriptions. See the [subscription
documentation](./pubsub) for more information.
### eth_call
Executes a new message call immediately, without creating a transaction on the block
chain. The `eth_call` method can be used to query internal contract state, to execute
validations coded into a contract or even to test what the effect of a transaction would
be without running it live.
#### Parameters
The method takes 3 parameters: an unsigned transaction object to execute in read-only
mode; the block number to execute the call against; and an optional state override-set to
allow executing the call against a modified chain state.
##### 1. `Object` - Transaction call object
The *transaction call object* is mandatory. Please see [here](/docs/rpc/objects#transaction-call-object) for details.
##### 2. `Quantity | Tag` - Block number or the string `latest` or `pending`
The *block number* is mandatory and defines the context (state) against which the
specified transaction should be executed. It is not possible to execute calls against
reorged blocks, or against blocks older than 128 blocks (unless the node is an archive node).
##### 3. `Object` - State override set
The *state override set* is an optional address-to-state mapping, where each entry
specifies some state to be ephemerally overridden prior to executing the call. Each
address maps to an object containing:
| Field | Type | Bytes | Optional | Description |
|:------------|:-----------|:------|:---------|:------------|
| `balance` | `Quantity` | <32 | Yes | Fake balance to set for the account before executing the call. |
| `nonce` | `Quantity` | <8 | Yes | Fake nonce to set for the account before executing the call. |
| `code` | `Binary` | any | Yes | Fake EVM bytecode to inject into the account before executing the call. |
| `state` | `Object` | any | Yes | Fake key-value mapping to override **all** slots in the account storage before executing the call. |
| `stateDiff` | `Object` | any | Yes | Fake key-value mapping to override **individual** slots in the account storage before executing the call. |
The *state override set* serves multiple purposes:
* It can be used by DApps to reduce the amount of contract code needed to be deployed on
chain. Code that simply returns internal state or does pre-defined validations can be
kept off chain and fed to the node on-demand.
* It can be used for smart contract analysis by extending the code deployed on chain with
custom methods and invoking them. This avoids having to download and reconstruct the
entire state in a sandbox to run custom code against.
* It can be used to debug smart contracts in an already deployed large suite of contracts
by selectively overriding some code or state and seeing how execution changes.
Specialized tooling will probably be necessary.
Example:
```json
{
"0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3": {
"balance": "0xde0b6b3a7640000"
},
"0xebe8efa441b9302a0d7eaecc277c09d20d684540": {
"code": "0x...",
"state": {
""
}
}
}
```
#### Return Values
The method returns a single `Binary` consisting of the return value of the executed contract call.
#### Simple example
With a synced Rinkeby node with RPC exposed on localhost (`geth --rinkeby --http`) we can
make a call against the [Checkpoint
Oracle](https://rinkeby.etherscan.io/address/0xebe8efa441b9302a0d7eaecc277c09d20d684540)
to retrieve the list of administrators:
```
$ curl --data '{"method":"eth_call","params":[{"to":"0xebe8efa441b9302a0d7eaecc277c09d20d684540","data":"0x45848dfc"},"latest"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
```
And the result is an Ethereum ABI encoded list of accounts:
```json
{
"id": 1,
"jsonrpc": "2.0",
"result": "0x00000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000004000000000000000000000000d9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f300000000000000000000000078d1ad571a1a09d60d9bbf25894b44e4c8859595000000000000000000000000286834935f4a8cfb4ff4c77d5770c2775ae2b0e7000000000000000000000000b86e2b0ab5a4b1373e40c51a7c712c70ba2f9f8e"
}
```
For the sake of completeness, the decoded response is:
```
0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3,
0x78d1ad571a1a09d60d9bbf25894b44e4c8859595,
0x286834935f4a8cfb4ff4c77d5770c2775ae2b0e7,
0xb86e2b0ab5a4b1373e40c51a7c712c70ba2f9f8e
```
#### Override example
The above *simple example* showed how to call a method already exposed by an on-chain
smart contract. What if we want to access some data not exposed by it?
We can replace the
[original](https://github.com/ethereum/go-ethereum/blob/master/contracts/checkpointoracle/contract/oracle.sol)
checkpoint oracle contract with one that retains the same fields (so the storage layout
stays the same), but exposes a different method set:
```
pragma solidity ^0.5.10;
contract CheckpointOracle {
mapping(address => bool) admins;
address[] adminList;
uint64 sectionIndex;
uint height;
bytes32 hash;
uint sectionSize;
uint processConfirms;
uint threshold;
function VotingThreshold() public view returns (uint) {
return threshold;
}
}
```
With a synced Rinkeby node with RPC exposed on localhost (`geth --rinkeby --http`) we can
make a call against the live [Checkpoint
Oracle](https://rinkeby.etherscan.io/address/0xebe8efa441b9302a0d7eaecc277c09d20d684540),
but override its byte code with our own version that has an accessor for the voting
threshold field:
```
$ curl --data '{"method":"eth_call","params":[{"to":"0xebe8efa441b9302a0d7eaecc277c09d20d684540","data":"0x0be5b6ba"}, "latest", {"0xebe8efa441b9302a0d7eaecc277c09d20d684540": {"code":"0x6080604052348015600f57600080fd5b506004361060285760003560e01c80630be5b6ba14602d575b600080fd5b60336045565b60408051918252519081900360200190f35b6007549056fea265627a7a723058206f26bd0433456354d8d1228d8fe524678a8aeeb0594851395bdbd35efc2a65f164736f6c634300050a0032"}}],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
```
And the result is the Ethereum ABI encoded threshold number:
```json
{
"id": 1,
"jsonrpc": "2.0",
"result": "0x0000000000000000000000000000000000000000000000000000000000000002"
}
```
For the sake of completeness, the decoded response is: `2`.
### eth_createAccessList
This method creates an [EIP2930](https://eips.ethereum.org/EIPS/eip-2930) type `accessList` based on a given `Transaction`.
The `accessList` contains all storage slots and addresses read and written by the transaction, except for the sender account and the precompiles.
This method uses the same `transaction` call [object](/docs/rpc/objects#transaction-call-object) and `blockNumberOrTag` object as `eth_call`.
An `accessList` can be used to unstick contracts that became inaccessible due to gas cost increases.
#### Parameters
| Field | Type | Description |
|:-------------------|:-----------|:---------------------|
| `transaction` | `Object` | `TransactionCall` object |
| `blockNumberOrTag` | `Object` | Optional, blocknumber or `latest` or `pending` |
#### Usage
```
curl --data '{"method":"eth_createAccessList","params":[{"from": "0x8cd02c6cbd8375b39b06577f8d50c51d86e8d5cd", "data": "0x608060806080608155"}, "pending"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
```
#### Response
The method `eth_createAccessList` returns the list of addresses and storage keys used by the transaction, plus the gas consumed when the access list is included. Like `eth_estimateGas`, this is an estimate; the list could change when the transaction is actually mined.
Adding an `accessList` to a transaction does not necessarily result in lower gas usage compared to the same transaction without an access list.
Example:
```json
{
  "accessList": [
    {
      "address": "0xa02457e5dfd32bda5fc7e1f1b008aa5979568150",
      "storageKeys": [
        "0x0000000000000000000000000000000000000000000000000000000000000081"
      ]
    }
  ],
  "gasUsed": "0x125f8"
}
```

@ -0,0 +1,318 @@
---
title: les Namespace
sort_key: C
---
The `les` API allows you to manage LES server settings, including client parameters and payment settings for prioritized clients. It also provides functions to query checkpoint information in both server and client mode.
* TOC
{:toc}
### les_serverInfo
Get information about currently connected and total/individual allowed connection capacity.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Go | `les.ServerInfo() map[string]interface{}` |
| Console | `les.serverInfo()` |
| RPC | `{"method": "les_serverInfo", "params": []}` |
#### Example
```javascript
> les.serverInfo
{
freeClientCapacity: 16000,
maximumCapacity: 1600000,
minimumCapacity: 16000,
priorityConnectedCapacity: 180000,
totalCapacity: 1600000,
totalConnectedCapacity: 180000
}
```
### les_clientInfo
Get individual client information (connection, balance, pricing) on the specified list of clients or for all connected clients if the ID list is empty.
| Client | Method invocation |
|:--------|---------------------------------------------------------------------------|
| Go | `les.ClientInfo(ids []enode.ID) map[enode.ID]map[string]interface{}` |
| Console | `les.clientInfo([id, ...])` |
| RPC | `{"method": "les_clientInfo", "params": [[id, ...]]}` |
#### Example
```javascript
> les.clientInfo([])
{
37078bf8ea160a2b3d129bb4f3a930ce002356f83b820f467a07c1fe291531ea: {
capacity: 16000,
connectionTime: 11225.335901136,
isConnected: true,
pricing/balance: 998266395881,
pricing/balanceMeta: "",
pricing/negBalance: 501657912857,
priority: true
},
6a47fe7bb23fd335df52ef1690f37ab44265a537b1d18eb616a3e77f898d9e77: {
capacity: 100000,
connectionTime: 9874.839293082,
isConnected: true,
pricing/balance: 2908840710198,
pricing/balanceMeta: "qwerty",
pricing/negBalance: 206242704507,
priority: true
},
740c78f7d914e5c763731bc751b513fc2388ffa0b47db080ded3e8b305e68c75: {
capacity: 16000,
connectionTime: 3089.286712188,
isConnected: true,
pricing/balance: 998266400174,
pricing/balanceMeta: "",
pricing/negBalance: 55135348863,
priority: true
},
9985ade55b515f79f64274bf2ae440ca8c433cfb0f283fb6010bf46f796b2a3b: {
capacity: 16000,
connectionTime: 11479.335479545,
isConnected: true,
pricing/balance: 998266452203,
pricing/balanceMeta: "",
pricing/negBalance: 564116425655,
priority: true
},
ce65ada2c3e17d6da00cec0b3cc4c8ed5e74428b60f42fa287eaaec8cca62544: {
capacity: 16000,
connectionTime: 7095.794385419,
isConnected: true,
pricing/balance: 998266448492,
pricing/balanceMeta: "",
pricing/negBalance: 214617753229,
priority: true
},
e1495ceb6db842f3ee66428d4bb7f4a124b2b17111dae35d141c3d568b869ef1: {
capacity: 16000,
connectionTime: 8614.018237937,
isConnected: true,
pricing/balance: 998266391796,
pricing/balanceMeta: "",
pricing/negBalance: 185964891797,
priority: true
}
}
```
### les_priorityClientInfo
Get individual client information on clients with a positive balance in the specified ID range, `start` included, `stop` excluded. If `stop` is zero then results are returned until the last existing balance entry. `maxCount` limits the number of returned results. If the count limit is reached but there are more IDs in the range then the first missing ID is included in the result with an empty value assigned to it.
| Client | Method invocation |
|:--------|----------------------------------------------------------------------------------------------------|
| Go | `les.PriorityClientInfo(start, stop enode.ID, maxCount int) map[enode.ID]map[string]interface{}` |
| Console | `les.priorityClientInfo(id, id, number)` |
| RPC | `{"method": "les_priorityClientInfo", "params": [id, id, number]}` |
#### Example
```javascript
> les.priorityClientInfo("0x0000000000000000000000000000000000000000000000000000000000000000", "0x0000000000000000000000000000000000000000000000000000000000000000", 100)
{
37078bf8ea160a2b3d129bb4f3a930ce002356f83b820f467a07c1fe291531ea: {
capacity: 16000,
connectionTime: 11128.247204027,
isConnected: true,
pricing/balance: 999819815030,
pricing/balanceMeta: "",
pricing/negBalance: 501657912857,
priority: true
},
6a47fe7bb23fd335df52ef1690f37ab44265a537b1d18eb616a3e77f898d9e77: {
capacity: 100000,
connectionTime: 9777.750592047,
isConnected: true,
pricing/balance: 2918549830576,
pricing/balanceMeta: "qwerty",
pricing/negBalance: 206242704507,
priority: true
},
740c78f7d914e5c763731bc751b513fc2388ffa0b47db080ded3e8b305e68c75: {
capacity: 16000,
connectionTime: 2992.198001116,
isConnected: true,
pricing/balance: 999819845102,
pricing/balanceMeta: "",
pricing/negBalance: 55135348863,
priority: true
},
9985ade55b515f79f64274bf2ae440ca8c433cfb0f283fb6010bf46f796b2a3b: {
capacity: 16000,
connectionTime: 11382.246766963,
isConnected: true,
pricing/balance: 999819871598,
pricing/balanceMeta: "",
pricing/negBalance: 564116425655,
priority: true
},
ce65ada2c3e17d6da00cec0b3cc4c8ed5e74428b60f42fa287eaaec8cca62544: {
capacity: 16000,
connectionTime: 6998.705683407,
isConnected: true,
pricing/balance: 999819882177,
pricing/balanceMeta: "",
pricing/negBalance: 214617753229,
priority: true
},
e1495ceb6db842f3ee66428d4bb7f4a124b2b17111dae35d141c3d568b869ef1: {
capacity: 16000,
connectionTime: 8516.929533901,
isConnected: true,
pricing/balance: 999819891640,
pricing/balanceMeta: "",
pricing/negBalance: 185964891797,
priority: true
}
}
> les.priorityClientInfo("0x4000000000000000000000000000000000000000000000000000000000000000", "0xe000000000000000000000000000000000000000000000000000000000000000", 2)
{
6a47fe7bb23fd335df52ef1690f37ab44265a537b1d18eb616a3e77f898d9e77: {
capacity: 100000,
connectionTime: 9842.11178361,
isConnected: true,
pricing/balance: 2912113588853,
pricing/balanceMeta: "qwerty",
pricing/negBalance: 206242704507,
priority: true
},
740c78f7d914e5c763731bc751b513fc2388ffa0b47db080ded3e8b305e68c75: {
capacity: 16000,
connectionTime: 3056.559199029,
isConnected: true,
pricing/balance: 998790060237,
pricing/balanceMeta: "",
pricing/negBalance: 55135348863,
priority: true
},
9985ade55b515f79f64274bf2ae440ca8c433cfb0f283fb6010bf46f796b2a3b: {}
}
```
### les_addBalance
Add signed value to the token balance of the specified client and update its `meta` tag. The balance cannot go below zero or over `2^63-1`. The balance values before and after the update are returned. The `meta` tag can be used to store a sequence number or reference to the last processed incoming payment, token expiration info, balance in other currencies or any application-specific additional information.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------------------------|
| Go      | `les.AddBalance(id enode.ID, value int64, meta string) ([2]uint64, error)`        |
| Console | `les.addBalance(id, number, string)` |
| RPC | `{"method": "les_addBalance", "params": [id, number, string]}` |
#### Example
```javascript
> les.addBalance("0x6a47fe7bb23fd335df52ef1690f37ab44265a537b1d18eb616a3e77f898d9e77", 1000000000, "qwerty")
[968379616, 1968379616]
```
### les_setClientParams
Set capacity and pricing factors for the specified list of connected clients or for all connected clients if the ID list is empty.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------------------------|
| Go | `les.SetClientParams(ids []enode.ID, params map[string]interface{}) error` |
| Console | `les.setClientParams([id, ...], {string: value, ...})` |
| RPC | `{"method": "les_setClientParams", "params": [[id, ...], {string: value, ...}]}` |
#### Example
```javascript
> les.setClientParams(["0x6a47fe7bb23fd335df52ef1690f37ab44265a537b1d18eb616a3e77f898d9e77"], {
"capacity": 100000,
"pricing/timeFactor": 0,
"pricing/capacityFactor": 1000000000,
"pricing/requestCostFactor": 1000000000,
"pricing/negative/timeFactor": 0,
"pricing/negative/capacityFactor": 1000000000,
"pricing/negative/requestCostFactor": 1000000000,
})
null
```
### les_setDefaultParams
Set default pricing factors for subsequently connected clients.
| Client | Method invocation |
|:--------|-----------------------------------------------------------------------------------|
| Go | `les.SetDefaultParams(params map[string]interface{}) error` |
| Console | `les.setDefaultParams({string: value, ...})` |
| RPC | `{"method": "les_setDefaultParams", "params": [{string: value, ...}]}` |
#### Example
```javascript
> les.setDefaultParams({
"pricing/timeFactor": 0,
"pricing/capacityFactor": 1000000000,
"pricing/requestCostFactor": 1000000000,
"pricing/negative/timeFactor": 0,
"pricing/negative/capacityFactor": 1000000000,
"pricing/negative/requestCostFactor": 1000000000,
})
null
```
### les_latestCheckpoint
Get the index and hashes of the latest known checkpoint.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Go | `les.LatestCheckpoint() ([4]string, error)` |
| Console | `les.latestCheckpoint()` |
| RPC | `{"method": "les_latestCheckpoint", "params": []}` |
#### Example
```javascript
> les.latestCheckpoint
["0x110", "0x6eedf8142d06730b391bfcbd32e9bbc369ab0b46ae226287ed5b29505a376164", "0x191bb2265a69c30201a616ae0d65a4ceb5937c2f0c94b125ff55343d707463e5", "0xf58409088a5cb2425350a59d854d546d37b1e7bef8bbf6afee7fd15f943d626a"]
```
### les_getCheckpoint
Get checkpoint hashes by index.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Go | `les.GetCheckpoint(index uint64) ([3]string, error)` |
| Console | `les.getCheckpoint(number)` |
| RPC | `{"method": "les_getCheckpoint", "params": [number]}` |
#### Example
```javascript
> les.getCheckpoint(256)
["0x93eb4af0b224b1097e09181c2e51536fe0a3bf3bb4d93e9a69cab9eb3e28c75f", "0x0eb055e384cf58bc72ca20ca5e2b37d8d4115dce80ab4a19b72b776502c4dd5b", "0xda6c02f7c51f9ecc3eca71331a7eaad724e5a0f4f906ce9251a2f59e3115dd6a"]
```
### les_getCheckpointContractAddress
Get the address of the checkpoint oracle contract.
| Client | Method invocation |
|:--------|-------------------------------------------------------------------|
| Go | `les.GetCheckpointContractAddress() (string, error)` |
| Console | `les.checkpointContractAddress()` |
| RPC | `{"method": "les_getCheckpointContractAddress", "params": []}` |
#### Example
```javascript
> les.checkpointContractAddress
"0x9a9070028361F7AAbeB3f2F2Dc07F82C4a98A02a"
```

@ -0,0 +1,91 @@
---
title: miner Namespace
sort_key: C
---
The `miner` API allows you to remotely control the node's mining operation and set various
mining-specific settings.
* TOC
{:toc}
### miner_getHashrate
Get your hashrate in H/s (Hash operations per second).
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Console | `miner.getHashrate()` |
| RPC | `{"method": "miner_getHashrate", "params": []}` |
### miner_setExtra
Sets the extra data a miner can include when mining blocks. This is capped at
32 bytes.
| Client | Method invocation |
|:--------|----------------------------------------------------|
| Go | `miner.setExtra(extra string) (bool, error)` |
| Console | `miner.setExtra(string)` |
| RPC | `{"method": "miner_setExtra", "params": [string]}` |
### miner_setGasPrice
Sets the minimal accepted gas price when mining transactions. Any transactions that are
below this limit are excluded from the mining process.
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Go | `miner.setGasPrice(number *rpc.HexNumber) bool` |
| Console | `miner.setGasPrice(number)` |
| RPC | `{"method": "miner_setGasPrice", "params": [number]}` |
### miner_setRecommitInterval
Updates the interval for recommitting the miner sealing work.
| Client | Method invocation |
|:--------|---------------------------------------------------------------|
| Console | `miner.setRecommitInterval(interval int)` |
| RPC | `{"method": "miner_setRecommitInterval", "params": [number]}` |
### miner_start
Start the CPU mining process with the given number of threads and generate a new DAG
if need be.
| Client | Method invocation |
|:--------|-----------------------------------------------------|
| Go | `miner.Start(threads *rpc.HexNumber) (bool, error)` |
| Console | `miner.start(number)` |
| RPC | `{"method": "miner_start", "params": [number]}` |
### miner_stop
Stop the CPU mining operation.
| Client | Method invocation |
|:--------|----------------------------------------------|
| Go | `miner.Stop() bool` |
| Console | `miner.stop()` |
| RPC | `{"method": "miner_stop", "params": []}` |
### miner_setEtherbase
Sets the etherbase, where mining rewards will go.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Go | `miner.SetEtherbase(common.Address) bool` |
| Console | `miner.setEtherbase(address)` |
| RPC | `{"method": "miner_setEtherbase", "params": [address]}` |
### miner_setGasLimit
Sets the gas limit the miner will target when mining. Note: on networks where EIP-1559 is activated, this should be set to twice what you want the gas target (i.e. the effective gas used on average per block) to be.
| Client | Method invocation |
|:--------|-------------------------------------------------------------|
| Go | `miner.SetGasLimit(number *rpc.HexNumber) bool` |
| Console | `miner.setGasLimit(number)`                                  |
| RPC | `{"method": "miner_setGasLimit", "params": [number]}` |

@ -0,0 +1,36 @@
---
title: net Namespace
sort_key: C
---
The `net` API provides insight about the networking aspect of the client.
* TOC
{:toc}
### net_listening
Returns a boolean indicating whether the node is listening for network connections.
| Client | Method invocation |
|:--------|-------------------------------|
| Console | `net.listening` |
| RPC | `{"method": "net_listening"}` |
### net_peerCount
Returns the number of connected peers.
| Client | Method invocation |
|:--------|-------------------------------|
| Console | `net.peerCount` |
| RPC | `{"method": "net_peerCount"}` |
### net_version
Returns the devp2p network ID (e.g. 1 for mainnet, 5 for goerli).
| Client | Method invocation |
|:--------|-----------------------------|
| Console | `net.version` |
| RPC | `{"method": "net_version"}` |

@ -0,0 +1,255 @@
---
title: personal Namespace
sort_key: C
---
The personal API manages private keys in the key store.
* TOC
{:toc}
### personal_deriveAccount
Requests an HD wallet to derive a new account, optionally pinning it for later reuse.
| Client | Method invocation |
| :--------| ------------------------------------------------------------------------ |
| Console | `personal.deriveAccount(url, path, pin)` |
| RPC | `{"method": "personal_deriveAccount", "params": [string, string, bool]}` |
### personal_importRawKey
Imports the given unencrypted private key (hex string) into the key store,
encrypting it with the passphrase.
Returns the address of the new account.
| Client | Method invocation |
| :--------| ----------------------------------------------------------------- |
| Console | `personal.importRawKey(keydata, passphrase)` |
| RPC | `{"method": "personal_importRawKey", "params": [string, string]}` |
### personal_initializeWallet
Initializes a new wallet at the provided URL by generating and returning a new private key.
| Client | Method invocation |
| :--------| ------------------------------------------------------------- |
| Console | `personal.initializeWallet(url)` |
| RPC | `{"method": "personal_initializeWallet", "params": [string]}` |
### personal_listAccounts
Returns all the Ethereum account addresses of all keys
in the key store.
| Client | Method invocation |
| :--------| --------------------------------------------------- |
| Console | `personal.listAccounts` |
| RPC | `{"method": "personal_listAccounts", "params": []}` |
#### Example
``` javascript
> personal.listAccounts
["0x5e97870f263700f46aa00d967821199b9bc5a120", "0x3d80b31a78c30fc628f20b2c89d7ddbf6e53cedc"]
```
### personal_listWallets
Returns a list of wallets this node manages.
| Client | Method invocation |
| :--------| --------------------------------------------------- |
| Console | `personal.listWallets` |
| RPC | `{"method": "personal_listWallets", "params": []}` |
#### Example
``` javascript
> personal.listWallets
[{
accounts: [{
address: "0x51594065a986c58d4698c23e3d932b68a22c4d21",
url: "keystore:///var/folders/cp/k3x0xm3959qf9l0pcbbdxdt80000gn/T/go-ethereum-keystore65174700/UTC--2022-06-28T10-31-09.477982000Z--51594065a986c58d4698c23e3d932b68a22c4d21"
}],
status: "Unlocked",
url: "keystore:///var/folders/cp/k3x0xm3959qf9l0pcbbdxdt80000gn/T/go-ethereum-keystore65174700/UTC--2022-06-28T10-31-09.477982000Z--51594065a986c58d4698c23e3d932b68a22c4d21"
}]
```
### personal_lockAccount
Removes the private key with given address from memory.
The account can no longer be used to send transactions.
| Client | Method invocation |
| :--------| -------------------------------------------------------- |
| Console | `personal.lockAccount(address)` |
| RPC | `{"method": "personal_lockAccount", "params": [string]}` |
### personal_newAccount
Generates a new private key and stores it in the key store directory.
The key file is encrypted with the given passphrase.
Returns the address of the new account.
At the geth console, `newAccount` will prompt for a passphrase when
it is not supplied as the argument.
| Client | Method invocation |
| :--------| --------------------------------------------------- |
| Console | `personal.newAccount()` |
| RPC | `{"method": "personal_newAccount", "params": [string]}` |
#### Example
``` javascript
> personal.newAccount()
Passphrase:
Repeat passphrase:
"0x5e97870f263700f46aa00d967821199b9bc5a120"
```
The passphrase can also be supplied as a string.
``` javascript
> personal.newAccount("h4ck3r")
"0x3d80b31a78c30fc628f20b2c89d7ddbf6e53cedc"
```
### personal_openWallet
Initiates a hardware wallet opening procedure by establishing a USB
connection and then attempting to authenticate via the provided passphrase. Note,
the method may return an extra challenge requiring a second open (e.g. the
Trezor PIN matrix challenge).
| Client | Method invocation |
| :--------| ----------------------------------------------------------- |
| Console | `personal.openWallet(url, passphrase)` |
| RPC | `{"method": "personal_openWallet", "params": [string, string]}` |
### personal_unlockAccount
Decrypts the key with the given address from the key store.
Both passphrase and unlock duration are optional when using the JavaScript console.
If the passphrase is not supplied as an argument, the console will prompt for
the passphrase interactively.
The unencrypted key will be held in memory until the unlock duration expires.
The unlock duration defaults to 300 seconds. An explicit duration
of zero seconds unlocks the key until geth exits.
The account can be used with `eth_sign` and `eth_sendTransaction` while it is unlocked.
| Client | Method invocation |
| :--------| -------------------------------------------------------------------------- |
| Console | `personal.unlockAccount(address, passphrase, duration)` |
| RPC | `{"method": "personal_unlockAccount", "params": [string, string, number]}` |
#### Examples
``` javascript
> personal.unlockAccount("0x5e97870f263700f46aa00d967821199b9bc5a120")
Unlock account 0x5e97870f263700f46aa00d967821199b9bc5a120
Passphrase:
true
```
Supplying the passphrase and unlock duration as arguments:
``` javascript
> personal.unlockAccount("0x5e97870f263700f46aa00d967821199b9bc5a120", "foo", 30)
true
```
If you want to type in the passphrase and still override the default unlock duration,
pass `null` as the passphrase.
``` javascript
> personal.unlockAccount("0x5e97870f263700f46aa00d967821199b9bc5a120", null, 30)
Unlock account 0x5e97870f263700f46aa00d967821199b9bc5a120
Passphrase:
true
```
### personal_unpair
Deletes a pairing between wallet and geth.
| Client | Method invocation |
| :--------| ----------------------------------------------------------- |
| Console | `personal.unpair(url, pin)` |
| RPC | `{"method": "personal_unpair", "params": [string, string]}` |
### personal_sendTransaction
Validates the given passphrase and submits the transaction.
The transaction is the same argument as for `eth_sendTransaction` (i.e. a [transaction object](/docs/rpc/objects#transaction-call-object)) and contains the `from` address. If the passphrase can be used to decrypt the private key belonging to `tx.from`, the transaction is verified, signed and sent to the network. The account is not unlocked globally in the node and cannot be used in other RPC calls.
| Client | Method invocation |
| :--------| -----------------------------------------------------------------|
| Console | `personal.sendTransaction(tx, passphrase)` |
| RPC | `{"method": "personal_sendTransaction", "params": [tx, string]}` |
#### Examples
``` javascript
> var tx = {from: "0x391694e7e0b0cce554cb130d723a9d27458f9298", to: "0xafa3f8684e54059998bc3a7b0d2b0da075154d66", value: web3.toWei(1.23, "ether")}
undefined
> personal.sendTransaction(tx, "passphrase")
0x8474441674cdd47b35b875fd1a530b800b51a5264b9975fb21129eeb8c18582f
```
### personal_sign
The sign method calculates an Ethereum-specific signature with:
`sign(keccak256("\x19Ethereum Signed Message:\n" + len(message) + message))`.
Adding a prefix to the message makes the calculated signature recognisable as an Ethereum-specific signature. This prevents misuse where a malicious dapp could sign arbitrary data (e.g. a transaction) and use the signature to impersonate the victim.
See ecRecover to verify the signature.
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Console | `personal.sign(message, account, [password])` |
| RPC | `{"method": "personal_sign", "params": [message, account, password]}` |
#### Examples
``` javascript
> personal.sign("0xdeadbeaf", "0x9b2055d370f73ec7d8a03e965129118dc8f5bf83", "")
"0xa3f20717a250c2b0b729b7e5becbff67fdaef7e0699da4de7ca5895b02a170a12d887fd3b17bfdce3481f10bea41f45ba9f709d39ce8325427b57afcfc994cee1b"
```
### personal_signTransaction
SignTransaction creates a transaction from the given arguments and tries to sign it with the key associated with `tx.from`. If the given password is unable to decrypt the key, it fails. The transaction is returned in RLP form and not broadcast to other nodes. The first argument is a [transaction object](/docs/rpc/objects#transaction-call-object) and the second argument is the password, similar to `personal_sendTransaction`.
| Client | Method invocation |
| :--------| -----------------------------------------------------------------|
| Console | `personal.signTransaction(tx, passphrase)` |
| RPC | `{"method": "personal_signTransaction", "params": [tx, string]}` |
### personal_ecRecover
`ecRecover` returns the address associated with the private key that was used to calculate the signature in `personal_sign`.
| Client | Method invocation |
|:--------|-------------------------------------------------------|
| Console | `personal.ecRecover(message, signature)` |
| RPC | `{"method": "personal_ecRecover", "params": [message, signature]}` |
#### Examples
``` javascript
> personal.sign("0xdeadbeaf", "0x9b2055d370f73ec7d8a03e965129118dc8f5bf83", "")
"0xa3f20717a250c2b0b729b7e5becbff67fdaef7e0699da4de7ca5895b02a170a12d887fd3b17bfdce3481f10bea41f45ba9f709d39ce8325427b57afcfc994cee1b"
> personal.ecRecover("0xdeadbeaf", "0xa3f20717a250c2b0b729b7e5becbff67fdaef7e0699da4de7ca5895b02a170a12d887fd3b17bfdce3481f10bea41f45ba9f709d39ce8325427b57afcfc994cee1b")
"0x9b2055d370f73ec7d8a03e965129118dc8f5bf83"
```

@ -0,0 +1,226 @@
---
title: txpool Namespace
sort_key: C
---
The `txpool` API gives you access to several non-standard RPC methods to inspect the contents of the
transaction pool containing all the currently pending transactions as well as the ones queued for
future processing.
* TOC
{:toc}
### txpool_content
The `content` inspection property can be queried to list the exact details of all the transactions
currently pending for inclusion in the next block(s), as well as the ones that are being scheduled
for future execution only.
The result is an object with two fields `pending` and `queued`. Each of these fields are associative
arrays, in which each entry maps an origin-address to a batch of scheduled transactions. These batches
themselves are maps associating nonces with actual transactions.
Please note, there may be multiple transactions associated with the same account and nonce. This can
happen if the user broadcast multiple ones with varying gas allowances (or even completely different
transactions).
| Client | Method invocation |
|:-------:|-----------------------------------------------------------------------|
| Go | `txpool.Content() (map[string]map[string]map[string]*RPCTransaction)` |
| Console | `txpool.content` |
| RPC | `{"method": "txpool_content"}` |
#### Example
```javascript
> txpool.content
{
pending: {
0x0216d5032f356960cd3749c31ab34eeff21b3395: {
806: {
blockHash: "0x0000000000000000000000000000000000000000000000000000000000000000",
blockNumber: null,
from: "0x0216d5032f356960cd3749c31ab34eeff21b3395",
gas: "0x5208",
gasPrice: "0xba43b7400",
hash: "0xaf953a2d01f55cfe080c0c94150a60105e8ac3d51153058a1f03dd239dd08586",
input: "0x",
nonce: "0x326",
to: "0x7f69a91a3cf4be60020fb58b893b7cbb65376db8",
transactionIndex: null,
value: "0x19a99f0cf456000"
}
},
0x24d407e5a0b506e1cb2fae163100b5de01f5193c: {
34: {
blockHash: "0x0000000000000000000000000000000000000000000000000000000000000000",
blockNumber: null,
from: "0x24d407e5a0b506e1cb2fae163100b5de01f5193c",
gas: "0x44c72",
gasPrice: "0x4a817c800",
hash: "0xb5b8b853af32226755a65ba0602f7ed0e8be2211516153b75e9ed640a7d359fe",
input: "0xb61d27f600000000000000000000000024d407e5a0b506e1cb2fae163100b5de01f5193c00000000000000000000000000000000000000000000000053444835ec580000000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
nonce: "0x22",
to: "0x7320785200f74861b69c49e4ab32399a71b34f1a",
transactionIndex: null,
value: "0x0"
}
}
},
queued: {
0x976a3fc5d6f7d259ebfb4cc2ae75115475e9867c: {
3: {
blockHash: "0x0000000000000000000000000000000000000000000000000000000000000000",
blockNumber: null,
from: "0x976a3fc5d6f7d259ebfb4cc2ae75115475e9867c",
gas: "0x15f90",
gasPrice: "0x4a817c800",
hash: "0x57b30c59fc39a50e1cba90e3099286dfa5aaf60294a629240b5bbec6e2e66576",
input: "0x",
nonce: "0x3",
to: "0x346fb27de7e7370008f5da379f74dd49f5f2f80f",
transactionIndex: null,
value: "0x1f161421c8e0000"
}
},
0x9b11bf0459b0c4b2f87f8cebca4cfc26f294b63a: {
2: {
blockHash: "0x0000000000000000000000000000000000000000000000000000000000000000",
blockNumber: null,
from: "0x9b11bf0459b0c4b2f87f8cebca4cfc26f294b63a",
gas: "0x15f90",
gasPrice: "0xba43b7400",
hash: "0x3a3c0698552eec2455ed3190eac3996feccc806970a4a056106deaf6ceb1e5e3",
input: "0x",
nonce: "0x2",
to: "0x24a461f25ee6a318bdef7f33de634a67bb67ac9d",
transactionIndex: null,
value: "0xebec21ee1da40000"
},
6: {
blockHash: "0x0000000000000000000000000000000000000000000000000000000000000000",
blockNumber: null,
from: "0x9b11bf0459b0c4b2f87f8cebca4cfc26f294b63a",
gas: "0x15f90",
gasPrice: "0x4a817c800",
hash: "0xbbcd1e45eae3b859203a04be7d6e1d7b03b222ec1d66dfcc8011dd39794b147e",
input: "0x",
nonce: "0x6",
to: "0x6368f3f8c2b42435d6c136757382e4a59436a681",
transactionIndex: null,
value: "0xf9a951af55470000"
}
}
}
}
```
### txpool_contentFrom
Retrieves the transactions contained within the txpool,
returning pending as well as queued transactions of this address, grouped by nonce.
| Client | Method invocation |
|:-------:|--------------------------------------------------------|
| Console | `txpool.contentFrom(address)` |
| RPC | `{"method": "txpool_contentFrom, "params": [string]"}` |
### txpool_inspect
The `inspect` inspection property can be queried to list a textual summary of all the transactions
currently pending for inclusion in the next block(s), as well as the ones that are being scheduled
for future execution only. This is a method specifically tailored to developers to quickly see the
transactions in the pool and find any potential issues.
The result is an object with two fields `pending` and `queued`. Each of these fields are associative
arrays, in which each entry maps an origin-address to a batch of scheduled transactions. These batches
themselves are maps associating nonces with transactions summary strings.
Please note, there may be multiple transactions associated with the same account and nonce. This can
happen if the user broadcast multiple ones with varying gas allowances (or even completely different
transactions).
| Client | Method invocation |
|:-------:|--------------------------------------------------------------|
| Go | `txpool.Inspect() (map[string]map[string]map[string]string)` |
| Console | `txpool.inspect` |
| RPC | `{"method": "txpool_inspect"}` |
#### Example
```javascript
> txpool.inspect
{
pending: {
0x26588a9301b0428d95e6fc3a5024fce8bec12d51: {
31813: "0x3375ee30428b2a71c428afa5e89e427905f95f7e: 0 wei + 500000 × 20000000000 wei"
},
0x2a65aca4d5fc5b5c859090a6c34d164135398226: {
563662: "0x958c1fa64b34db746925c6f8a3dd81128e40355e: 1051546810000000000 wei + 90000 gas × 20000000000 wei",
563663: "0x77517b1491a0299a44d668473411676f94e97e34: 1051190740000000000 wei + 90000 gas × 20000000000 wei",
563664: "0x3e2a7fe169c8f8eee251bb00d9fb6d304ce07d3a: 1050828950000000000 wei + 90000 gas × 20000000000 wei",
563665: "0xaf6c4695da477f8c663ea2d8b768ad82cb6a8522: 1050544770000000000 wei + 90000 gas × 20000000000 wei",
563666: "0x139b148094c50f4d20b01caf21b85edb711574db: 1048598530000000000 wei + 90000 gas × 20000000000 wei",
563667: "0x48b3bd66770b0d1eecefce090dafee36257538ae: 1048367260000000000 wei + 90000 gas × 20000000000 wei",
563668: "0x468569500925d53e06dd0993014ad166fd7dd381: 1048126690000000000 wei + 90000 gas × 20000000000 wei",
563669: "0x3dcb4c90477a4b8ff7190b79b524773cbe3be661: 1047965690000000000 wei + 90000 gas × 20000000000 wei",
563670: "0x6dfef5bc94b031407ffe71ae8076ca0fbf190963: 1047859050000000000 wei + 90000 gas × 20000000000 wei"
},
0x9174e688d7de157c5c0583df424eaab2676ac162: {
3: "0xbb9bc244d798123fde783fcc1c72d3bb8c189413: 30000000000000000000 wei + 85000 gas × 21000000000 wei"
},
0xb18f9d01323e150096650ab989cfecd39d757aec: {
777: "0xcd79c72690750f079ae6ab6ccd7e7aedc03c7720: 0 wei + 1000000 gas × 20000000000 wei"
},
0xb2916c870cf66967b6510b76c07e9d13a5d23514: {
2: "0x576f25199d60982a8f31a8dff4da8acb982e6aba: 26000000000000000000 wei + 90000 gas × 20000000000 wei"
},
0xbc0ca4f217e052753614d6b019948824d0d8688b: {
0: "0x2910543af39aba0cd09dbb2d50200b3e800a63d2: 1000000000000000000 wei + 50000 gas × 1171602790622 wei"
},
0xea674fdde714fd979de3edf0f56aa9716b898ec8: {
70148: "0xe39c55ead9f997f7fa20ebe40fb4649943d7db66: 1000767667434026200 wei + 90000 gas × 20000000000 wei"
}
},
queued: {
0x0f6000de1578619320aba5e392706b131fb1de6f: {
6: "0x8383534d0bcd0186d326c993031311c0ac0d9b2d: 9000000000000000000 wei + 21000 gas × 20000000000 wei"
},
0x5b30608c678e1ac464a8994c3b33e5cdf3497112: {
6: "0x9773547e27f8303c87089dc42d9288aa2b9d8f06: 50000000000000000000 wei + 90000 gas × 50000000000 wei"
},
0x976a3fc5d6f7d259ebfb4cc2ae75115475e9867c: {
3: "0x346fb27de7e7370008f5da379f74dd49f5f2f80f: 140000000000000000 wei + 90000 gas × 20000000000 wei"
},
0x9b11bf0459b0c4b2f87f8cebca4cfc26f294b63a: {
2: "0x24a461f25ee6a318bdef7f33de634a67bb67ac9d: 17000000000000000000 wei + 90000 gas × 50000000000 wei",
6: "0x6368f3f8c2b42435d6c136757382e4a59436a681: 17990000000000000000 wei + 90000 gas × 20000000000 wei",
7: "0x6368f3f8c2b42435d6c136757382e4a59436a681: 17900000000000000000 wei + 90000 gas × 20000000000 wei"
}
}
}
```
### txpool_status
The `status` inspection property can be queried for the number of transactions currently pending for
inclusion in the next block(s), as well as the ones that are being scheduled for future execution only.
The result is an object with two fields `pending` and `queued`, each of which is a counter representing
the number of transactions in that particular state.
| Client | Method invocation |
|:--------|-----------------------------------------------|
| Go | `txpool.Status() (map[string]*rpc.HexNumber)` |
| Console | `txpool.status` |
| RPC | `{"method": "txpool_status"}` |
#### Example
```javascript
> txpool.status
{
pending: 10,
queued: 7
}
```

@ -0,0 +1,72 @@
---
title: Objects
sort_key: D
---
The following are data structures which are used for various RPC methods.
### Transaction call object
The *transaction call object* contains all the necessary parameters for executing an EVM contract method.
| Field | Type | Bytes | Optional | Description |
|:-----------|:-----------|:------|:---------|:------------|
| `from` | `Address` | 20 | Yes | Address the transaction is simulated to have been sent from. Defaults to first account in the local keystore or the `0x00..0` address if no local accounts are available. |
| `to` | `Address` | 20 | No | Address the transaction is sent to. |
| `gas` | `Quantity` | <8 | Yes | Maximum gas allowance for the code execution to avoid infinite loops. Defaults to `2^63` or whatever value the node operator specified via `--rpc.gascap`. |
| `gasPrice` | `Quantity` | <32 | Yes | Number of `wei` to simulate paying for each unit of gas during execution. Defaults to `1 gwei`. |
| `maxFeePerGas` | `Quantity` | <32 | Yes | Maximum fee per gas the transaction should pay in total. Relevant for type-2 transactions. |
| `maxPriorityFeePerGas` | `Quantity` | <32 | Yes | Maximum tip per gas that's given directly to the miner. Relevant for type-2 transactions. |
| `value` | `Quantity` | <32 | Yes | Amount of `wei` to simulate sending along with the transaction. Defaults to `0`. |
| `nonce` | `Quantity` | <8 | Yes | Nonce of sender account. |
| `input` | `Binary` | any | Yes | Binary data to send to the target contract. Generally the 4 byte hash of the method signature followed by the ABI encoded parameters. For details please see the [Ethereum Contract ABI](https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI). This field was previously called `data`. |
| `accessList` | `AccessList` | any | Yes | A list of addresses and storage keys that the transaction plans to access. Used in non-legacy, i.e. type 1 and 2 transactions. |
| `chainId` | `Quantity` | <32 | Yes | Transaction only valid on networks with this chain ID. Used in non-legacy, i.e. type 1 and 2 transactions. |
Example for a legacy transaction:
```json
{
"from": "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3",
"to": "0xebe8efa441b9302a0d7eaecc277c09d20d684540",
"gas": "0x1bd7c",
"data": "0xd459fc46000000000000000000000000000000000000000000000000000000000046c650dbb5e8cb2bac4d2ed0b1e6475d37361157738801c494ca482f96527eb48f9eec488c2eba92d31baeccfb6968fad5c21a3df93181b43b4cf253b4d572b64172ef000000000000000000000000000000000000000000000000000000000000008c00000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000014000000000000000000000000000000000000000000000000000000000000001a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000001c0000000000000000000000000000000000000000000000000000000000000002b85c0c828d7a98633b4e1b65eac0c017502da909420aeade9a280675013df36bdc71cffdf420cef3d24ba4b3f9b980bfbb26bd5e2dcf7795b3519a3fd22ffbb2000000000000000000000000000000000000000000000000000000000000000238fb6606dc2b5e42d00c653372c153da8560de77bd9afaba94b4ab6e4aa11d565d858c761320dbf23a94018d843772349bd9d92301b0ca9ca983a22d86a70628",
}
```
Example for a type-1 transaction:
```json
{
"from": "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3",
"to": "0xebe8efa441b9302a0d7eaecc277c09d20d684540",
"gas": "0x1bd7c",
"data": "0xd459fc46000000000000000000000000000000000000000000000000000000000046c650dbb5e8cb2bac4d2ed0b1e6475d37361157738801c494ca482f96527eb48f9eec488c2eba92d31baeccfb6968fad5c21a3df93181b43b4cf253b4d572b64172ef000000000000000000000000000000000000000000000000000000000000008c00000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000014000000000000000000000000000000000000000000000000000000000000001a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000001c0000000000000000000000000000000000000000000000000000000000000002b85c0c828d7a98633b4e1b65eac0c017502da909420aeade9a280675013df36bdc71cffdf420cef3d24ba4b3f9b980bfbb26bd5e2dcf7795b3519a3fd22ffbb2000000000000000000000000000000000000000000000000000000000000000238fb6606dc2b5e42d00c653372c153da8560de77bd9afaba94b4ab6e4aa11d565d858c761320dbf23a94018d843772349bd9d92301b0ca9ca983a22d86a70628",
"chainId": "0x1",
"accessList": [
{
"address": "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",
"storageKeys": ["0xda650992a54ccb05f924b3a73ba785211ba39a8912b6d270312f8e2c223fb9b1", "0x10d6a54a4754c8869d6886b5f5d7fbfa5b4
522237ea5c60d11bc4e7a1ff9390b"]
}, {
"address": "0xa2327a938febf5fec13bacfb16ae10ecbc4cbdcf",
"storageKeys": []
},
]
}
```
Example for a type-2 transaction:
```json
{
"from": "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3",
"to": "0xebe8efa441b9302a0d7eaecc277c09d20d684540",
"gas": "0x1bd7c",
"maxFeePerGas": "0x6b44b0285",
"maxPriorityFeePerGas": "0x6b44b0285",
"data": "0xd459fc46000000000000000000000000000000000000000000000000000000000046c650dbb5e8cb2bac4d2ed0b1e6475d37361157738801c494ca482f96527eb48f9eec488c2eba92d31baeccfb6968fad5c21a3df93181b43b4cf253b4d572b64172ef000000000000000000000000000000000000000000000000000000000000008c00000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000014000000000000000000000000000000000000000000000000000000000000001a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000001c0000000000000000000000000000000000000000000000000000000000000002b85c0c828d7a98633b4e1b65eac0c017502da909420aeade9a280675013df36bdc71cffdf420cef3d24ba4b3f9b980bfbb26bd5e2dcf7795b3519a3fd22ffbb2000000000000000000000000000000000000000000000000000000000000000238fb6606dc2b5e42d00c653372c153da8560de77bd9afaba94b4ab6e4aa11d565d858c761320dbf23a94018d843772349bd9d92301b0ca9ca983a22d86a70628",
"chainId": "0x1",
"accessList": []
}
```
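As a hedged illustration of how the object is consumed, the sketch below passes a minimal call object (only `from`, `to` and `value`, reusing the example addresses above) to `eth_estimateGas`; `eth_call` accepts the same object followed by a block parameter.
```sh
# estimate gas for a plain 1 ether transfer described by a minimal transaction call object
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_estimateGas","params":[{"from":"0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3","to":"0xebe8efa441b9302a0d7eaecc277c09d20d684540","value":"0xde0b6b3a7640000"}]}' \
  http://127.0.0.1:8545
```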

@ -0,0 +1,168 @@
---
title: Real-time Events
sort_key: B
---
Geth v1.4 and later support publish / subscribe using JSON-RPC notifications. This allows
clients to wait for events instead of polling for them.
It works by subscribing to particular events. The node will return a subscription id. For
each event that matches the subscription, a notification with relevant data is sent
together with the subscription id.
Example:
```
// create subscription
>> {"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
<< {"jsonrpc":"2.0","id":1,"result":"0xcd0c3e8af590364c09d0fa6a1210faf5"}
// incoming notifications
<< {"jsonrpc":"2.0","method":"eth_subscription","params":{"subscription":"0xcd0c3e8af590364c09d0fa6a1210faf5","result":{"difficulty":"0xd9263f42a87",<...>, "uncles":[]}}}
<< {"jsonrpc":"2.0","method":"eth_subscription","params":{"subscription":"0xcd0c3e8af590364c09d0fa6a1210faf5","result":{"difficulty":"0xd90b1a7ad02", <...>, "uncles":["0x80aacd1ea4c9da32efd8c2cc9ab38f8f70578fcd46a1a4ed73f82f3e0957f936"]}}}
// cancel subscription
>> {"id": 1, "method": "eth_unsubscribe", "params": ["0xcd0c3e8af590364c09d0fa6a1210faf5"]}
<< {"jsonrpc":"2.0","id":1,"result":true}
```
### Considerations
1. notifications are sent for current events and not for past events. If your use case
requires you not to miss any notifications then subscriptions are probably not the best
option.
2. subscriptions require a full duplex connection. Geth offers such connections in the
form of WebSocket and IPC (enabled by default).
3. subscriptions are coupled to a connection. If the connection is closed all
subscriptions that are created over this connection are removed.
4. notifications are stored in an internal buffer and sent from this buffer to the client.
If the client is unable to keep up and the number of buffered notifications reaches a
limit (currently 10k) the connection is closed. Keep in mind that subscribing to some
events can cause a flood of notifications, e.g. listening for all logs/blocks when the
node starts to synchronize.
## Create subscription
Subscriptions are created with a regular RPC call with `eth_subscribe` as method and the
subscription name as first parameter. If successful it returns the subscription id.
### Parameters
1. subscription name
2. optional arguments
### Example
```
>> {"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
<< {"id": 1, "jsonrpc": "2.0", "result": "0x9cef478923ff08bf67fde6c64013158d"}
## Cancel subscription
Subscriptions are cancelled with a regular RPC call with `eth_unsubscribe` as method and
the subscription id as first parameter. It returns a bool indicating if the subscription
was cancelled successfully.
### Parameters
1. subscription id
### Example
```
>> {"id": 1, "method": "eth_unsubscribe", "params": ["0x9cef478923ff08bf67fde6c64013158d"]}
<< {"jsonrpc":"2.0","id":1,"result":true}
```
## Supported Subscriptions
### newHeads
Fires a notification each time a new header is appended to the chain, including chain reorganizations. Users can use the bloom filter to determine if the block contains logs that are of interest to them. Note that if geth receives multiple blocks simultaneously, e.g. catching up after being out of sync, only the last block is emitted.
In case of a chain reorganization the subscription will emit the last header in the new
chain. Therefore the subscription can emit multiple headers on the same height.
#### Example
```
>> {"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
<< {"jsonrpc":"2.0","id":2,"result":"0x9ce59a13059e417087c02d3236a0b1cc"}
<< {
"jsonrpc": "2.0",
"method": "eth_subscription",
"params": {
"result": {
"difficulty": "0x15d9223a23aa",
"extraData": "0xd983010305844765746887676f312e342e328777696e646f7773",
"gasLimit": "0x47e7c4",
"gasUsed": "0x38658",
"logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"miner": "0xf8b483dba2c3b7176a3da549ad41a48bb3121069",
"nonce": "0x084149998194cc5f",
"number": "0x1348c9",
"parentHash": "0x7736fab79e05dc611604d22470dadad26f56fe494421b5b333de816ce1f25701",
"receiptRoot": "0x2fab35823ad00c7bb388595cb46652fe7886e00660a01e867824d3dceb1c8d36",
"sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
"stateRoot": "0xb3346685172db67de536d8765c43c31009d0eb3bd9c501c9be3229203f15f378",
"timestamp": "0x56ffeff8",
"transactionsRoot": "0x0167ffa60e3ebc0b080cdb95f7c0087dd6c0e61413140e39d94d3468d7c9689f"
},
"subscription": "0x9ce59a13059e417087c02d3236a0b1cc"
}
}
```
### logs
Returns logs that are included in new imported blocks and match the given filter criteria.
In case of a chain reorganization, previously sent logs that are on the old chain will be resent with the `removed` property set to true. Logs from transactions that ended up in the new chain are emitted. Therefore a subscription can emit logs for the same transaction multiple times.
#### Parameters
1. `object` with the following (optional) fields
- **address**, either an address or an array of addresses. Only logs that are created from these addresses are returned (optional)
- **topics**, only logs which match the specified topics (optional)
#### Example
```
>> {"id": 1, "method": "eth_subscribe", "params": ["logs", {"address": "0x8320fe7702b96808f7bbc0d4a888ed1468216cfd", "topics": ["0xd78a0cb8bb633d06981248b816e7bd33c2a35a6089241d099fa519e361cab902"]}]}
<< {"jsonrpc":"2.0","id":2,"result":"0x4a8a4c0517381924f9838102c5a4dcb7"}
<< {"jsonrpc":"2.0","method":"eth_subscription","params": {"subscription":"0x4a8a4c0517381924f9838102c5a4dcb7","result":{"address":"0x8320fe7702b96808f7bbc0d4a888ed1468216cfd","blockHash":"0x61cdb2a09ab99abf791d474f20c2ea89bf8de2923a2d42bb49944c8c993cbf04","blockNumber":"0x29e87","data":"0x00000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000003","logIndex":"0x0","topics":["0xd78a0cb8bb633d06981248b816e7bd33c2a35a6089241d099fa519e361cab902"],"transactionHash":"0xe044554a0a55067caafd07f8020ab9f2af60bdfe337e395ecd84b4877a3d1ab4","transactionIndex":"0x0"}}}
```
### newPendingTransactions
Returns the hash for all transactions that are added to the pending state and are signed with a key that is available in the node.
When a transaction that was previously part of the canonical chain isn't part of the new canonical chain after a reorganization, it is emitted again.
#### Parameters
none
#### Example
```
>> {"id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
<< {"jsonrpc":"2.0","id":2,"result":"0xc3b33aa549fb9a60e95d21862596617c"}
<< {
"jsonrpc":"2.0",
"method":"eth_subscription",
"params":{
"subscription":"0xc3b33aa549fb9a60e95d21862596617c",
"result":"0xd6fdc5cc41a9959e922f30cb772a9aef46f4daea279307bc5f7024edc4ccd7fa"
}
}
```
### syncing
Indicates when the node starts or stops synchronizing. The result can either be a boolean
indicating that the synchronization has started (true) or finished (false), or an object with
various progress indicators.
#### Parameters
none
#### Example
```
>> {"id": 1, "method": "eth_subscribe", "params": ["syncing"]}
<< {"jsonrpc":"2.0","id":2,"result":"0xe2ffeb2703bcf602d42922385829ce96"}
<< {"subscription":"0xe2ffeb2703bcf602d42922385829ce96","result":{"syncing":true,"status":{"startingBlock":674427,"currentBlock":67400,"highestBlock":674432,"pulledStates":0,"knownStates":0}}}}

@ -0,0 +1,182 @@
---
title: JSON-RPC Server
sort_key: A
---
Interacting with Geth requires sending requests to specific JSON-RPC API
methods. Geth supports all standard [JSON-RPC API][web3-rpc] endpoints.
The RPC requests must be sent to the node and the response returned to the client
using some transport protocol. This page outlines the available transport protocols
in Geth, providing the information users require to choose a transport protocol for
a specific user scenario.
- this will be removed by the toc
{:toc}
## Introduction
JSON-RPC is provided on multiple transports. Geth supports JSON-RPC over HTTP,
WebSocket and Unix Domain Sockets. Transports must be enabled through
command-line flags.
Ethereum JSON-RPC APIs use a name-space system. RPC methods are grouped into
several categories depending on their purpose. All method names are composed of
the namespace, an underscore, and the actual method name within the namespace.
For example, the `eth_call` method resides in the `eth` namespace.
Access to RPC methods can be enabled on a per-namespace basis. Find
documentation for individual namespaces in the sidebar.
## Transports
There are three transport protocols available in Geth: IPC, HTTP and Websockets.
### HTTP Server
[HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP) is a unidirectional transport protocol
that connects a client and server. The client sends a request to the server, and the server
returns a response back to the client. An HTTP connection is closed after the response for a given
request is sent.
HTTP is supported in every browser as well as almost all programming toolchains. Due to its ubiquity
it has become the most widely used transport for interacting with Geth. To start a HTTP server in Geth, include the `--http` flag:
```sh
geth --http
```
If no other flags are provided, Geth falls back to its default behaviour of accepting connections
from the local loopback interface (127.0.0.1). The default listening port is 8545. The IP address and
listening port can be customized using the `--http.addr` and `--http.port` flags:
```sh
geth --http --http.port 3334
```
Not all of the JSON-RPC method namespaces are enabled for HTTP requests by default.
Instead, they have to be whitelisted explicitly when Geth is started. Calling non-whitelisted
RPC namespaces returns an RPC error with code `-32602`.
The default whitelist allows access to the `eth`, `net` and `web3` namespaces. To enable access
to other APIs like account management (`personal`) and debugging (`debug`), they must be configured
using the `--http.api` flag. Enabling these APIs over HTTP is **not recommended** because access
to these methods increases the attack surface.
```sh
geth --http --http.api personal,eth,net,web3
```
Since the HTTP server is reachable from any local application, additional protection is built into
the server to prevent misuse of the API from web pages. To enable access to the API from a web page
(for example to use the online IDE, [Remix](https://remix.ethereum.org)), the server needs to be
configured to accept Cross-Origin requests. This is achieved using the `--http.corsdomain` flag.
```sh
geth --http --http.corsdomain https://remix.ethereum.org
```
The `--http.corsdomain` flag also accepts wildcards that enable access to the RPC from any
origin:
```sh
--http.corsdomain '*'
```
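Once the HTTP server is running, requests are plain JSON-RPC payloads sent via POST. For example, querying the latest block number against the default endpoint:
```sh
# returns the current block number as a hex quantity
curl -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  http://127.0.0.1:8545
```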
### WebSocket Server
Websocket is a bidirectional transport protocol. A Websocket connection is maintained by client and server
until it is explicitly terminated by one. Most modern browsers support Websocket which means
it has good tooling.
Because Websocket is bidirectional, servers can push events to clients. That makes Websocket a good
choice for use-cases involving [event subscription](https://geth.ethereum.org/docs/rpc/pubsub). Another
benefit of Websocket is that after the handshake procedure, the overhead of individual messages is low,
making it good for sending high number of requests.
Configuration of the WebSocket endpoint in Geth follows the same pattern as the HTTP transport.
WebSocket access can be enabled using the `--ws` flag. If no additional information is provided,
Geth falls back to its default behaviour which is to establish the Websocket on port 8546.
The `--ws.addr`, `--ws.port` and `--ws.api` flags can be used to customize settings
for the WebSocket server. For example, to start Geth with a Websocket connection for RPC using
the custom port 3334 and whitelisting the `eth`, `net` and `web3` namespaces:
```sh
geth --ws --ws.port 3334 --ws.api eth,net,web3
```
Cross-Origin request protection also applies to the WebSocket server. The
`--ws.origins` flag can be used to allow access to the server from web pages:
```sh
geth --ws --ws.origins http://myapp.example.com
```
As with `--http.corsdomain`, using the wildcard `--ws.origins '*'` allows access from any origin.
{% include note.html content=" By default, **account unlocking is forbidden when HTTP or
Websocket access is enabled** (i.e. by passing the `--http` or `--ws` flag). This is because an
attacker that manages to access the node via the externally-exposed HTTP/WS port can then
control the unlocked account. It is possible to force account unlock by including the
`--allow-insecure-unlock` flag but this is unsafe and **not recommended** except for expert
users that completely understand how it can be used safely.
This is not a hypothetical risk: **there are bots that continually scan for http-enabled
Ethereum nodes to attack**" %}
### IPC Server
IPC is normally available for use in local environments where the node and the console
exist on the same machine. Geth creates a pipe in the computer's local file system
(at `ipcpath`) that configures a connection between node and console. The `geth.ipc` file can
also be used by other processes on the same machine to interact with Geth.
On UNIX-based systems (Linux, OSX) the IPC is a UNIX domain socket. On Windows IPC is
provided using named pipes. The IPC server is enabled by default and has access to all
JSON-RPC namespaces.
The listening socket is placed into the data directory by default. On Linux and macOS,
the default location of the geth socket is
```sh
~/.ethereum/geth.ipc
```
On Windows, IPC is provided via named pipes. The default location of the geth pipe is:
```sh
\\.\pipe\geth.ipc
```
The location of the socket can be customized using the `--ipcpath` flag. IPC can be disabled
using the `--ipcdisable` flag.
## Choosing a transport protocol
The following table summarizes the relative strengths and weaknesses of each transport
protocol so that users can make informed decisions about which to use.
| | HTTP | WS | IPC |
| :----------------------------------:|:-----------:|:--------:|:-------:|
| Event subscription | N | **Y** | **Y** |
| Remote connection | **Y** | **Y** | N |
| Per-message metadata overhead | high | low | low |
As a general rule IPC is most secure because it is limited to interactions on the
local machine and cannot be exposed to external traffic. It can also be used
to subscribe to events. HTTP is a familiar and idempotent transport that closes
connections between requests and can therefore have lower overall overheads if the number
of requests is fairly low. Websockets provides a continuous open channel that can enable
event subscriptions and streaming and handle large volumes of requests with smaller per-message
overheads.
## Summary
RPC requests to a Geth node can be made using three different transport protocols. The
protocols are enabled at startup using their respective flags. The right choice of transport
protocol depends on the specific use case.
[web3-rpc]: https://github.com/ethereum/execution-apis
[remix]: https://remix.ethereum.org
[rpc]: https://www.ibm.com/docs/en/aix/7.1?topic=concepts-remote-procedure-call

@ -0,0 +1,173 @@
---
title: Monitoring Geth with InfluxDB and Grafana
---
There are several ways to monitor the performance of a Geth node. Insights into a node's
performance are useful for debugging, tuning and understanding what is really happening when
Geth is running.
## Prerequisites {#prerequisites}
To follow along with the instructions on this page it will be useful to have:
- a running Geth instance.
- basic working knowledge of bash/terminal.
[This video](https://www.youtube.com/watch?v=cOBab8IJMYI) provides an excellent introduction
to Geth monitoring.
## Monitoring stack {#monitoring-stack}
An Ethereum client collects lots of data which can be read in the form of a chronological
database. To make monitoring easier, this data can be fed into data visualisation software.
There are many options available:
- [Prometheus](https://prometheus.io/) (pull model)
- [InfluxDB](https://www.influxdata.com/get-influxdb/) (push model)
- [Telegraf](https://www.influxdata.com/time-series-platform/telegraf/)
- [Grafana](https://www.grafana.com/)
- [Datadog](https://www.datadoghq.com/)
- [Chronograf](https://www.influxdata.com/time-series-platform/chronograf/)
There's also [Geth Prometheus Exporter](https://github.com/hunterlong/gethexporter), an option
preconfigured with InfluxDB and Grafana. You can set it up easily using docker and
[Ethbian OS](https://ethbian.org/index.html) for RPi 4.
On this page, a Geth client will be configured to push data into a InfluxDB database and
Grafana will be used to visualize the data.
## Setting up InfluxDB {#setting-up-influxdb}
InfluxDB can be downloaded from the [Influxdata release page](https://portal.influxdata.com/downloads/).
It can also be installed from a [repository](https://repos.influxdata.com/).
For example for a Debian based Linux operating system:
```sh
curl --tlsv1.3 --proto =https -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt update
sudo apt install influxdb -y
sudo systemctl enable influxdb
sudo systemctl start influxdb
sudo apt install influxdb-client
```
By default, InfluxDB is reachable at `localhost:8086`. Before using the `influx` client, a new user with admin privileges
needs to be created. This user is used for high-level management such as creating databases and users.
```sh
curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER username WITH PASSWORD 'password' WITH ALL PRIVILEGES"
```
Now the influx client can be used to enter [InfluxDB shell](https://docs.influxdata.com/influxdb/v1.8/tools/shell/) with the new user.
```sh
influx -username 'username' -password 'password'
```
A database and user for geth metrics can be created by communicating with it directly via its shell.
```sh
create database geth
create user geth with password 'choosepassword'
```
Verify created entries with:
```
show databases
show users
```
Leave InfluxDB shell.
```sh
exit
```
InfluxDB is running and configured to store metrics from Geth.
## Preparing Geth {#preparing-geth}
After setting up the database, metrics need to be enabled in Geth. Various options are available,
as documented in the `METRICS AND STATS OPTIONS` in `geth --help` and in our [metrics page]().
In this case Geth will be configured to push data into InfluxDB. Basic setup specifies the endpoint
where InfluxDB is reachable and authenticates the database.
```sh
geth --metrics --metrics.influxdb --metrics.influxdb.endpoint "http://0.0.0.0:8086" --metrics.influxdb.username "geth" --metrics.influxdb.password "chosenpassword"
```
These flags can be provided when Geth is started or saved to the configuration file.
Listing the metrics in the database verifies that Geth is pushing data correctly. In InfluxDB shell:
```sh
use geth
show measurements
```
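Individual measurements can also be spot-checked from the same shell. The measurement name below is only a hypothetical example; substitute one taken from the `show measurements` output, since the exact names depend on the Geth version and the metrics that are enabled:
```sh
select * from "geth.system/cpu/sysload.gauge" limit 5
```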
## Setting up Grafana {#setting-up-grafana}
With the InfluxDB database setup and successfully receiving data from Geth, the next step is to
install Grafana so that the data can be visualized. Instructions for specific operating systems
are available on the Grafana [downloads page](https://grafana.com/grafana/download?pg=get&plcmt=selfmanaged-box1-cta1).
Alternatively, the following code snippet shows how to download, install and run Grafana on a Debian
based Linux system:
```sh
curl --tlsv1.3 --proto =https -sL https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
```
When Grafana is up and running, it should be reachable at `localhost:3000`. A browser can be pointed to that URL
to access a visualization dashboard. The browser will prompt for login credentials (user: `admin` and password: `admin`).
When prompted, the default password should be changed and saved.
![](./grafana1.png)
The browser first redirects to the Grafana home page to set up the source data.
Click on the configuration icon in the left bar and select "Data sources".
![](./grafana2.png)
There aren't any data sources yet, click on "Add data source" to define one.
![](./grafana3.png)
Select "InfluxDB" and proceed.
![](./grafana4.png)
Data source configuration is straightforward if the tools are running on the same machine. Set the
InfluxDB address and the details for accessing the database. Refer to the picture below.
![](./grafana5.png)
If everything is complete and InfluxDB is reachable, click on "Save and test" and wait for the confirmation to pop up.
![](./grafana6.png)
Grafana is now set up to read data from InfluxDB. Now you need to create a dashboard which will interpret and display it.
Dashboard properties are encoded in JSON files which can be created by anybody and easily imported. On the left bar,
click on "Create and Import".
![](./grafana7.png)
For a Geth monitoring dashboard, copy the ID of [this dashboard](https://grafana.com/grafana/dashboards/13877/)
and paste it in the "Import page" in Grafana. After saving the dashboard, it should look like this:
![](./grafana8.png)
The dashboards can be customized further. Each panel can be edited, moved, removed or added.
To learn more about how dashboards work, refer to [Grafana's documentation](https://grafana.com/docs/grafana/latest/dashboards/).
Some users might also be interested in automatic [alerting](https://grafana.com/docs/grafana/latest/alerting/), which
sets up alert notifications that are sent automatically when metrics reach certain values. Various communication channels are supported.

@ -0,0 +1,46 @@
---
title: Monitoring with Ethstats
---
Ethstats is a service that displays real time and historical statistics about individual
nodes connected to a network and about the network itself. Individual node statistics include
the last received block, block time, propagation time, connected peers, latency etc. Network
metrics include the number of nodes, average block times, node geolocation,
transaction counts etc.
These statistics are presented to the user in the form of a dashboard served to a web browser.
This can be configured using the public Ethstats server for Ethereum mainnet or some
public testnets, or using a local copy of Ethstats for private networks. This page will
demonstrate how to set up an Ethstats dashboard for private and public networks.
## Prerequisites
To follow the instructions on this page the following are required:
* Geth
* Node
* NPM
* Git
## Ethstats
Ethstats has three components:
* a server that consumes data sent to it by each individual node on a network and serves
statistics generated from that data.
* a client that queries a node and sends its data to the server
* a dashboard that displays the statistics generated by the server
The summary dashboard for Ethereum Mainnet can be viewed at [ethstats.net](https://ethstats.net/).
![Ethstats](/ethstats-mainnet.png)
Note that the Ethstats dashboard is not a reliable source of information about the entire Ethereum
network because submitting data to the Ethstats server is voluntary and has to be configured by
individual nodes. Therefore, many nodes are omitted from the summary statistics.
.. UNFINISHED
Reporting URL of a ethstats service (nodename:secret@host:port)
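As a placeholder sketch until this section is completed, the reporting URL described above is passed to Geth with the `--ethstats` flag; the node name, secret and host below are hypothetical:
```sh
# report this node's stats to an Ethstats server (nodename:secret@host:port)
geth --ethstats mynode:mysecret@ethstats.example.com:3000
```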

@ -0,0 +1,116 @@
---
title: Metrics
sort_key: G
---
Geth includes a variety of optional metrics that can be reported to the user. However, metrics are disabled by default to save on the computational overhead for the average user. Users that choose to see more detailed metrics can enable them using the `--metrics` flag when starting Geth. Some metrics are classed as especially expensive and are only enabled when the `--metrics.expensive` flag is supplied. For example, per-packet network traffic data is considered expensive.
The goal of the Geth metrics system is that - similar to logs - arbitrary metric collections can be added to any part of the code without requiring fancy constructs to analyze them (counter variables, public interfaces, crossing over the APIs, console hooks, etc). Instead, metrics should be "updated" whenever and wherever needed and be automatically collected, surfaced through the APIs, queryable and visualizable for analysis.
## Metric types
Geth's metrics can be classified into four types: meters, timers, counters and gauges.
### Meters
Analogous to physical meters (electricity, water, etc), Geth's meters are capable of measuring the *amount* of "things" that pass through and at the *rate* at which they do. A meter doesn't have a specific unit of measure (byte, block, malloc, etc), it just counts arbitrary *events*. At any point in time a meter can report:
* *Total number of events* that passed through the meter
* *Mean throughput rate* of the meter since startup (events / second)
* *Weighted throughput rate* in the last *1*, *5* and *15* minutes (events / second)
("weighted" means that recent seconds count more that in older ones*)
### Timers
Timers are extensions of *meters*: the *duration* of an event is collected alongside a log of its occurrence. As with meters, a timer can measure arbitrary events, but each one requires a duration to be assigned individually. In addition to generating all of the meter report types, a timer also reports:
* *Percentiles (5, 20, 50, 80, 95)*, reporting that some percentage of the events took less than the reported time to execute (*e.g. Percentile 20 = 1.5s would mean that 20% of the measured events took less time than 1.5 seconds to execute; inherently 80% (=100%-20%) took more than 1.5s*)
* Percentile 5: minimum durations (this is as fast as it gets)
* Percentile 50: well behaved samples (boring, just to give an idea)
* Percentile 80: general performance (these should be optimised)
* Percentile 95: worst case outliers (rare, just handle gracefully)
### Counters
A counter is a single int64 value that can be incremented and decremented. The current value of the counter can be queried.
### Gauges
A gauge is a single int64 value. Its value can increment and decrement - as with a counter - but can also be set arbitrarily.
## Querying metrics
Geth collects metrics if the `--metrics` flag is provided at startup. Those metrics are available via an HTTP server if the `--metrics.addr` flag is also provided. By default the metrics are served at `127.0.0.1:6060/debug/metrics`, but a custom IP address and port can be set using the `--metrics.addr` and `--metrics.port` flags. More computationally expensive metrics are toggled on or off by providing or omitting the `--metrics.expensive` flag. For example, to serve all metrics at the default address and port:
```
geth <other commands> --metrics --metrics.addr 127.0.0.1 --metrics.expensive
```
Navigating the browser to the given metrics address displays all the available metrics in the form
of JSON data that looks similar to:
```
chain/account/commits.50-percentile: 374072
chain/account/commits.75-percentile: 830356
chain/account/commits.95-percentile: 1783005.3999976
chain/account/commits.99-percentile: 3991806
chain/account/commits.99.999-percentile: 3991806
chain/account/commits.count: 43
chain/account/commits.fifteen-minute: 0.029134344092314267
chain/account/commits.five-minute: 0.029134344092314267
...
```
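The same data can also be fetched from the command line, for example with `curl` (assuming the metrics server is running at the default address shown above):
```
curl http://127.0.0.1:6060/debug/metrics
```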
Any developer is free to add, remove or modify the available metrics as they see fit. The precise list of metrics can always be found by opening the metrics server in a browser.
Geth also supports pushing metrics directly into an InfluxDB database. To activate this, the `--metrics.influxdb` flag must be provided at startup. The API endpoint, username, password and other InfluxDB options can also be provided. The available options are:
```
--metrics.influxdb.endpoint value InfluxDB API endpoint to report metrics to (default: "http://localhost:8086")
--metrics.influxdb.database value InfluxDB database name to push reported metrics to (default: "geth")
--metrics.influxdb.username value Username to authorize access to the database (default: "test")
--metrics.influxdb.password value Password to authorize access to the database (default: "test")
--metrics.influxdb.tags value Comma-separated InfluxDB tags (key/values) attached to all measurements (default: "host=localhost")
--metrics.influxdbv2 Enable metrics export/push to an external InfluxDB v2 database
--metrics.influxdb.token value Token to authorize access to the database (v2 only) (default: "test")
--metrics.influxdb.bucket value InfluxDB bucket name to push reported metrics to (v2 only) (default: "geth")
--metrics.influxdb.organization value InfluxDB organization name (v2 only) (default: "geth")
```
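For example, a node might push its metrics to a local InfluxDB v1 instance as follows (the database name, username and password are placeholders that must match an existing InfluxDB setup):
```
geth <other commands> --metrics --metrics.influxdb \
  --metrics.influxdb.endpoint "http://localhost:8086" \
  --metrics.influxdb.database "geth" \
  --metrics.influxdb.username "geth-user" \
  --metrics.influxdb.password "geth-password"
```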
## Creating and updating metrics
Metrics can be added easily in the Geth source code:
```go
meter := metrics.NewMeter("system/memory/allocs")
timer := metrics.NewTimer("chain/inserts")
```
In order to use the same meter from two different packages without creating dependency cycles, the metrics can be created using `NewOrRegisteredX()` functions. This creates a new meter if no meter with this name is available or returns the existing meter.
```go
meter := metrics.NewOrRegisteredMeter("system/memory/allocs")
timer := metrics.NewOrRegisteredTimer("chain/inserts")
```
The name given to the metric can be any arbitrary string. However, since Geth assumes it to be some meaningful sub-system hierarchy, it should be named accordingly.
Metrics can then be updated:
```go
meter.Mark(n) // Record the occurrence of `n` events
timer.Update(duration) // Record an event that took `duration`
timer.UpdateSince(time) // Record an event that started at `time`
timer.Time(function) // Measure and record the execution of `function`
```
## Summary
Geth can be configured to report metrics to an HTTP server or an InfluxDB database. These functions are disabled by default but can be enabled by passing the appropriate flags at startup. Developers can easily create custom metrics by adding them to the Geth source code, following the instructions on this page.

@ -0,0 +1,393 @@
---
title: Clique-signing with Clef
sort_key: C
---
The 'classic' way to sign PoA blocks is to use the "unlock" feature of `geth`. This is a highly dangerous thing to do, because "unlock" is totally indiscriminate: if an account is unlocked and an attacker obtains access to the RPC API, the attacker can have anything signed by that account without supplying a password.
The idea with `clef` was to remove the `unlock` capability, yet still provide sufficient usability to make it possible to automate some things while maintaining a high level of security. This page shows how to integrate `clef` as a sealer of Clique blocks.
## Part 0: Prepping a Clique network
Feel free to skip this section if you already have a Clique network.
First of all, we'll set up a rudimentary testnet to have something to sign on. We create a new keystore (password `testtesttest`):
```
$ geth account new --datadir ./ddir
INFO [06-16|11:10:39.600] Maximum peer count ETH=50 LES=0 total=50
Your new account is locked with a password. Please give a password. Do not forget this password.
Password:
Repeat password:
Your new key was generated
Public address of the key: 0x9CD932F670F7eDe5dE86F756A6D02548e5899f47
Path of the secret key file: ddir/keystore/UTC--2022-06-16T09-10-48.578523828Z--9cd932f670f7ede5de86f756a6d02548e5899f47
- You can share your public address with anyone. Others need it to interact with you.
- You must NEVER share the secret key with anyone! The key controls access to your funds!
- You must BACKUP your key file! Without the key, it's impossible to access account funds!
- You must REMEMBER your password! Without the password, it's impossible to decrypt the key!
```
And create a genesis with that account as a sealer:
```json
{
"config": {
"chainId": 15,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"clique": {
"period": 30,
"epoch": 30000
}
},
"difficulty": "1",
"gasLimit": "8000000",
"extradata": "0x00000000000000000000000000000000000000000000000000000000000000009CD932F670F7eDe5dE86F756A6D02548e5899f470000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"alloc": {
"0x9CD932F670F7eDe5dE86F756A6D02548e5899f47": {
"balance": "300000000000000000000000000000000"
}
}
}
```
And run `geth init`:
```
$ geth --datadir ./ddir init genesis.json
...
INFO [06-16|11:14:54.123] Writing custom genesis block
INFO [06-16|11:14:54.125] Persisted trie from memory database nodes=1 size=153.00B time="64.715µs" gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [06-16|11:14:54.125] Successfully wrote genesis state database=lightchaindata hash=187412..4deb98
```
At this point, we have a Clique network which we can start sealing on.
## Part 1: Prepping Clef
In order to make use of `clef` for signing, we need to do a couple of things.
1. Make sure that `clef` knows the password for the keystore.
2. Make sure that `clef` auto-approves clique signing requests.
These two things are independent of each other. First of all, however, we need to `init` clef (for this test we use the password `clefclefclef`):
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn init
The master seed of clef will be locked with a password.
Please specify a password. Do not forget this password!
Password:
Repeat password:
A master seed has been generated into clef/masterseed.json
This is required to be able to store credentials, such as:
* Passwords for keystores (used by rule engine)
* Storage for JavaScript auto-signing rules
* Hash of JavaScript rule-file
You should treat 'masterseed.json' with utmost secrecy and make a backup of it!
* The password is necessary but not enough, you need to back up the master seed too!
* The master seed does not contain your accounts, those need to be backed up separately!
```
After this operation, `clef` has its own vault where it can store secrets and attestations, which we will utilize going forward.
### Storing passwords in `clef`
With that done, we can now make `clef` aware of the password. We invoke `setpw <address>` to store a password for a given address. `clef` asks for the password, and it also asks for the clef master password, in order to update and store the new secrets inside the clef vault.
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn setpw 0x9CD932F670F7eDe5dE86F756A6D02548e5899f47
Please enter a password to store for this address:
Password:
Repeat password:
Decrypt master seed of clef
Password:
INFO [06-16|11:27:09.153] Credential store updated set=0x9CD932F670F7eDe5dE86F756A6D02548e5899f47
```
At this point, if we were to use clef as a sealer, we would be forced to manually approve each block, but we would not be required to provide the password.
#### Testing stored password
Let's test using the stored password when sealing Clique blocks. Start `clef` with:
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn
```
And start `geth` with
```
$ geth --datadir ./ddir --signer ./clef/clef.ipc --mine
```
Geth will ask which accounts are present, and we need to manually enter `y` in the Clef terminal to approve the listing:
```
-------- List Account request--------------
A request has been made to list all accounts.
You can select which accounts the caller can see
[x] 0x9CD932F670F7eDe5dE86F756A6D02548e5899f47
URL: keystore:///home/user/tmp/clique_clef/ddir/keystore/UTC--2022-06-16T09-10-48.578523828Z--9cd932f670f7ede5de86f756a6d02548e5899f47
-------------------------------------------
Request context:
NA -> ipc -> NA
Additional HTTP header data, provided by the external caller:
User-Agent: ""
Origin: ""
Approve? [y/N]:
> y
DEBUG[06-16|11:36:42.499] Served account_list reqid=2 duration=3.213768195s
```
After this, `geth` will start asking `clef` to sign things:
```
-------- Sign data request--------------
Account: 0x9CD932F670F7eDe5dE86F756A6D02548e5899f47 [chksum ok]
messages:
  Clique header [clique]: "clique header 1 [0x9b08fa3705e8b6e1b327d84f7936c21a3cb11810d9344dc4473f78f8da71e571]"
raw data:
"\xf9\x02\x14\xa0\x18t\x12:\x91f\xa2\x90U\b\xf9\xac\xc02i\xffs\x9f\xf4\xc9⮷!\x0f\x16\xaa?#M똠\x1d\xccM\xe8\xde\xc7]z\xab\x85\xb5g\xb6\xcc\xd4\x1a\xd3\x12E\x1b\x94\x8at\x13\xf0\xa1B\xfd@ԓG\x94\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa0]1%\n\xfc\xee'\xd0e\xce\xc7t\xcc\\?\t4v\x8f\x06\xcb\xf8\xa0P5\xfeN\xea\x0ff\xfe\x9c\xa0V\xe8\x1f\x17\x1b\xccU\xa6\xff\x83E\xe6\x92\xc0\xf8n[H\xe0\x1b\x99l\xad\xc0\x01b/\xb5\xe3c\xb4!\xa0V\xe8\x1f\x17\x1b\xccU\xa6\xff\x83E\xe6\x92\xc0\xf8n[H\xe0\x1b\x99l\xad\xc0\x01b/\xb5\xe3c\xb4!\xb9\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x01\x83z0\x83\x80\x84b\xaa\xf9\xaa\xa0\u0603\x01\n\x14\x84geth\x88go1.18.1\x85linux\x00\x00\x00\x00\x00\x00\x00\xa0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x88\x00\x00\x00\x00\x00\x00\x00\x00"
data hash: 0x9589ed81e959db6330b3d70e5f8e426fb683d03512f203009f7e41fc70662d03
-------------------------------------------
Request context:
NA -> ipc -> NA
Additional HTTP header data, provided by the external caller:
User-Agent: ""
Origin: ""
Approve? [y/N]:
> y
```
And indeed, after approving with `y`, we are not required to provide the password -- the signed block is returned to geth:
```
INFO [06-16|11:36:46.714] Successfully sealed new block number=1 sealhash=9589ed..662d03 hash=bd20b9..af8b87 elapsed=4.214s
```
This mode of operation is not very practical, since we would need to keep approving each block to be sealed. So let's fix that too.
### Using rules to approve blocks
The basic idea with clef rules is to let a piece of Javascript take over the approve/deny decision. The Javascript snippet has access to the same information as the manual operator.
Let's try a simplistic first approach which approves listing and prints out the request data that is passed to `ApproveSignData`:
```js
function ApproveListing(){
return "Approve"
}
function ApproveSignData(r){
console.log("In Approve Sign data")
console.log(JSON.stringify(r))
}
```
In order to use a certain rule-file, we must first `attest` it. This is to prevent someone from modifying a ruleset-file on disk after creation.
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn attest `sha256sum rules.js | cut -f1`
Decrypt master seed of clef
Password:
INFO [06-16|13:49:00.298] Ruleset attestation updated sha256=54aae496c3f0eda063a62c73ee284ca9fae3f43b401da847ef30ea30e85e35d1
```
And then we can start clef, pointing it at the `rules.js` file. Note: if you later modify this file, you need to redo the `attest` step.
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn --rules ./rules.js
```
Once `geth` starts asking clef to seal blocks, the request data will be printed to the console. From that, we can decide how to write a rule which allows signing Clique headers but nothing else.
The actual data that gets passed to the js environment (and which our ruleset spit out to the console) looks like this:
```json
{
"content_type": "application/x-clique-header",
"address": "0x9CD932F670F7eDe5dE86F756A6D02548e5899f47",
"raw_data": "+QIUoL0guY+66jZpzZh1wDX4Si/ycX4zD8FQqF/1Apy/r4uHoB3MTejex116q4W1Z7bM1BrTEkUblIp0E/ChQv1A1JNHlAAAAAAAAAAAAAAAAAAAAAAAAAAAoF0xJQr87ifQZc7HdMxcPwk0do8Gy/igUDX+TuoPZv6coFboHxcbzFWm/4NF5pLA+G5bSOAbmWytwAFiL7XjY7QhoFboHxcbzFWm/4NF5pLA+G5bSOAbmWytwAFiL7XjY7QhuQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAICg3pPDoCEYqsY1qDYgwEKFIRnZXRoiGdvMS4xOC4xhWxpbnV4AAAAAAAAAKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIgAAAAAAAAAAA==",
"messages": [
{
"name": "Clique header",
"value": "clique header 2 [0xae525b65bc7f711bc136f502650039cd6959c3abc28fdf0ebfe2a5f85c92f3b6]",
"type": "clique"
}
],
"call_info": null,
"hash": "0x8ca6c78af7d5ae67ceb4a1e465a8b639b9fbdec4b78e4d19cd9b1232046fbbf4",
"meta": {
"remote": "NA",
"local": "NA",
"scheme": "ipc",
"User-Agent": "",
"Origin": ""
}
}
```
If we wanted our js to be extremely trustless/paranoid, we could (inside the javascript) take the `raw_data` and verify that it is the RLP structure of a Clique header:
```
echo "+QIUoL0guY+66jZpzZh1wDX4Si/ycX4zD8FQqF/1Apy/r4uHoB3MTejex116q4W1Z7bM1BrTEkUblIp0E/ChQv1A1JNHlAAAAAAAAAAAAAAAAAAAAAAAAAAAoF0xJQr87ifQZc7HdMxcPwk0do8Gy/igUDX+TuoPZv6coFboHxcbzFWm/4NF5pLA+G5bSOAbmWytwAFiL7XjY7QhoFboHxcbzFWm/4NF5pLA+G5bSOAbmWytwAFiL7XjY7QhuQEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAICg3pPDoCEYqsY1qDYgwEKFIRnZXRoiGdvMS4xOC4xhWxpbnV4AAAAAAAAAKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIgAAAAAAAAAAA==" | base64 -d | rlpdump
[
bd20b98fbaea3669cd9875c035f84a2ff2717e330fc150a85ff5029cbfaf8b87,
1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347,
0000000000000000000000000000000000000000,
5d31250afcee27d065cec774cc5c3f0934768f06cbf8a05035fe4eea0f66fe9c,
56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421,
56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421,
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000,
02,
02,
7a4f0e,
"",
62ab18d6,
d883010a14846765746888676f312e31382e31856c696e757800000000000000,
0000000000000000000000000000000000000000000000000000000000000000,
0000000000000000,
]
```
However, we can also use the `messages`. They do not come from the external caller, but are generated by the `clef` internals: `clef` parsed the incoming request and verified the Clique well-formedness of the content. So let's just check for such a message:
```js
function OnSignerStartup(info){}
function ApproveListing(){
return "Approve"
}
function ApproveSignData(r){
if (r.content_type == "application/x-clique-header"){
for(var i = 0; i < r.messages.length; i++){
var msg = r.messages[i]
if (msg.name=="Clique header" && msg.type == "clique"){
return "Approve"
}
}
}
return "Reject"
}
```
Attest the new rule file:
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn attest `sha256sum rules.js | cut -f1`
Decrypt master seed of clef
Password:
INFO [06-16|14:18:53.476] Ruleset attestation updated sha256=7d5036d22d1cc66599e7050fb1877f4e48b89453678c38eea06e3525996c2379
```
Run clef
```
$ clef --keystore ./ddir/keystore --configdir ./clef --chainid 15 --suppress-bootwarn --rules ./rules.js
```
Run geth
```
$ geth --datadir ./ddir --signer ./clef/clef.ipc --mine
```
And you should now see `clef` happily signing blocks:
```
DEBUG[06-16|14:20:02.136] Served account_version reqid=1 duration="131.38µs"
INFO [06-16|14:20:02.289] Op approved
DEBUG[06-16|14:20:02.289] Served account_list reqid=2 duration=4.672441ms
INFO [06-16|14:20:02.303] Op approved
DEBUG[06-16|14:20:03.450] Served account_signData reqid=3 duration=1.152074109s
INFO [06-16|14:20:03.456] Op approved
DEBUG[06-16|14:20:04.267] Served account_signData reqid=4 duration=815.874746ms
INFO [06-16|14:20:32.823] Op approved
DEBUG[06-16|14:20:33.584] Served account_signData reqid=5 duration=766.840681ms
```
### Further refinements
If an attacker finds the clef "external" interface (which would only happen if you start it with `http` enabled), they
- cannot make it sign arbitrary transactions,
- cannot make it sign arbitrary data messages.
However, they could still make it sign e.g. 1000 versions of a certain block height, making the chain very unstable.
It is possible for rule execution to be stateful -- storing data. In this case, one could for example store what block heights have been sealed, and thus reject sealing a particular block height twice. In other words, we can use these rules to build our own version of an Execution-Layer slashing-db.
We simply split the `clique header 2 [0xae525b65bc7f711bc136f502650039cd6959c3abc28fdf0ebfe2a5f85c92f3b6]` line, and store/check the number, using `storage.get` and `storage.put`:
```js
function OnSignerStartup(info){}
function ApproveListing(){
return "Approve"
}
function ApproveSignData(r){
if (r.content_type != "application/x-clique-header"){
return "Reject"
}
for(var i = 0; i < r.messages.length; i++){
var msg = r.messages[i]
if (msg.name=="Clique header" && msg.type == "clique"){
var number = parseInt(msg.value.split(" ")[2])
var latest = storage.get("lastblock") || 0
console.log("number", number, "latest", latest)
if ( number > latest ){
storage.put("lastblock", number)
return "Approve"
}
}
}
return "Reject"
}
```
Running with this ruleset:
```
JS:> number 45 latest 44
INFO [06-16|22:26:43.023] Op approved
DEBUG[06-16|22:26:44.305] Served account_signData reqid=3 duration=1.287465394s
JS:> number 46 latest 45
INFO [06-16|22:26:44.313] Op approved
DEBUG[06-16|22:26:45.317] Served account_signData reqid=4 duration=1.010612774s
```
This might be a bit over-the-top security-wise, and may cause problems if, for some reason, a Clique deadlock needs to be resolved by rolling back and continuing on a side-chain. It is mainly meant as a demonstration that rules can use Javascript and statefulness to construct very intricate signing logic.
### TLDR quick-version
Creation and attestation is a one-off event:
```bash
## Create the rules-file
cat << END > rules.js
function OnSignerStartup(info){}
function ApproveListing(){
return "Approve"
}
function ApproveSignData(r){
if (r.content_type == "application/x-clique-header"){
for(var i = 0; i < r.messages.length; i++){
var msg = r.messages[i]
if (msg.name=="Clique header" && msg.type == "clique"){
return "Approve"
}
}
}
return "Reject"
}
END
## Attest it, assumes clef master password is in `./clefpw`
clef --keystore ./ddir/keystore \
--configdir ./clef --chainid 15 \
--suppress-bootwarn --signersecret ./clefpw \
attest `sha256sum rules.js | cut -f1`
```
The normal startup command for `clef`:
```bash
clef --keystore ./ddir/keystore \
--configdir ./clef --chainid 15 \
--suppress-bootwarn --signersecret ./clefpw --rules ./rules.js
```
For `geth`, the only change is to provide `--signer <path to clef ipc>`.
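For example, reusing the data directory and Clef IPC path from earlier in this guide:
```bash
geth --datadir ./ddir --signer ./clef/clef.ipc --mine
```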

@ -0,0 +1,208 @@
---
title: Introduction to Clef
sort_key: A
---
{:toc}
- this will be removed by the toc
## What is Clef?
Clef is a tool for **signing transactions and data** in a secure local environment.
It is intended to become a more composable and secure replacement for Geth's built-in
account management. Clef decouples key management from Geth itself, meaning it can be
used as an independent, standalone key management and signing application, or it
can be integrated into Geth. This provides a more flexible modular tool compared to
Geth's account manager. Clef can be used safely in situations where access to Ethereum is
via a remote and/or untrusted node because signing happens locally, either manually or
automatically using custom rulesets. The separation of Clef from the node itself enables it
to run as a daemon on the same machine as the client software, on a USB-stick-like device such as
[USB armory](https://inversepath.com/usbarmory), or even in a separate VM in a
[QubesOS](https://www.qubes-os.org/)-type setup.
## Installing and starting Clef
Clef comes bundled with Geth and can be built along with Geth and the other bundled tools using:
`make all`
However, Clef is not bound to Geth and can be built on its own using:
`make clef`
Once built, Clef must be initialized. This includes storing some data, some of which is sensitive
(such as passwords, account data, signing rules etc). Initializing Clef takes that data and
encrypts it using a user-defined password.
`clef init`
```terminal
WARNING!
Clef is an account management tool. It may, like any software, contain bugs.
Please take care to
- backup your keystore files,
- verify that the keystore(s) can be opened with your password.
Clef is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
Enter 'ok' to proceed:
> ok
The master seed of clef will be locked with a password.
Please specify a password. Do not forget this password!
Password:
Repeat password:
A master seed has been generated into /home/martin/.clef/masterseed.json
This is required to be able to store credentials, such as:
* Passwords for keystores (used by rule engine)
* Storage for JavaScript auto-signing rules
* Hash of JavaScript rule-file
You should treat 'masterseed.json' with utmost secrecy and make a backup of it!
* The password is necessary but not enough, you need to back up the master seed too!
* The master seed does not contain your accounts, those need to be backed up separately!
```
## Security model
One of the major benefits of Clef is that it is decoupled from the client software,
meaning it can be used by users and dapps to sign data and transactions in a secure,
local environment and send the signed packet to an arbitrary Ethereum entry-point, which
might include, for example, an untrusted remote node. Alternatively, Clef can simply be
used as a standalone, composable signer that can be a backend component for decentralized
applications. This requires a secure architecture that separates cryptographic operations
from user interactions and internal/external communication.
The security model of Clef is as follows:
* A self-contained binary controls all cryptographic operations including encryption,
decryption and storage of keystore files, and signing data and transactions.
* A well defined, deliberately minimal "external" API is used to communicate with the
Clef binary - Clef considers this external traffic to be UNTRUSTED. This means Clef
does not accept any credentials and does not recognize authority of requests received
over this channel. Clef listens on `http.addr:http.port` or `ipcpath` - the same as Geth -
and expects messages to be formatted using the [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Some of the external API calls require user interaction (manual approve/deny); if it is
not received, responses can be delayed indefinitely. A minimal example request is shown after this list.
* Clef communicates with the process that invoked the binary using stdin/stdout. The process
invoking the binary is usually the native console-based user interface (UI) but there is
also an API that enables communication with an external UI. This has to be enabled using `--stdio-ui`
at startup. This channel is considered TRUSTED and is used to pass approvals and passwords between
the user and Clef.
* Clef does not store keys - the user is responsible for securely storing and backing up keyfiles.
Clef does store account passwords in its encrypted vault if they are explicitly provided to
Clef by the user to enable automatic account unlocking.
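As a minimal illustration of the external API (assuming Clef is listening on its default IPC path, `~/.clef/clef.ipc`), a JSON-RPC 2.0 request can be piped to the socket using netcat; Clef will prompt for approval before answering:
```sh
# ask Clef to list the accounts it is willing to expose to external callers
echo '{"id": 1, "jsonrpc": "2.0", "method": "account_list"}' | nc -U ~/.clef/clef.ipc
```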
The external API never handles any sensitive data directly, but it can be used to request Clef to
sign some data or a transaction. It is the internal API that controls signing and triggers requests for
manual approval (or automatically approves actions that conform to attested rulesets) and for passwords.
The general flow for a basic transaction-signing operation using Clef and an Ethereum node such as
Geth is as follows:
![Clef signing logic](/static/images/clef_sign_flow.png)
In the case illustrated in the schematic above, Geth would be started with `--signer <addr>:<port>` and
would relay requests to `eth.sendTransaction`. Text in `mono` font positioned along arrows shows the objects
passed between each component.
Most users use Clef by manually approving transactions through the UI as in the schematic above, but it is also
possible to configure Clef to sign transactions without always prompting the user. This requires defining the
precise conditions under which a transaction will be signed. These conditions are known as `Rules` and they are
small Javascript snippets that are *attested* by the user by injecting the snippet's hash into Clef's secure
whitelist. Clef is then started with the rule file, so that requests that satisfy the conditions in the whitelisted
rule files are automatically signed. This is covered in detail on the [Rules page](/docs/_clef/Rules.md).
## Basic usage
Clef is started on the command line using the `clef` command. Clef can be configured by providing flags and
commands to `clef` on startup. The full list of command line options is available [below](#command-line-options).
Frequently used options include `--keystore` and `--chainid` which configure the path to an existing keystore
and a network to connect to. These options default to `$HOME/.ethereum/keystore` and `1` (corresponding to
Ethereum Mainnet) respectively. The following code snippet starts Clef, providing a custom path to an existing
keystore and connecting to the Goerli testnet:
```sh
clef --keystore /my/keystore --chainid 5
```
On starting Clef, the following welcome message is displayed in the terminal:
```terminal
WARNING!
Clef is an account management tool. It may, like any software, contain bugs.
Please take care to
- backup your keystore files,
- verify that the keystore(s) can be opened with your password.
Clef is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY.
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
Enter 'ok' to proceed:
>
```
Requests requiring account access or signing now require explicit consent in this terminal.
Activities such as sending transactions via a local Geth node's attached Javascript console or
over RPC will hang indefinitely, awaiting approval here.
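For example, assuming a Geth node is running with `--signer` pointed at this Clef instance and with its HTTP-RPC server enabled on the default port 8545, the following request will not return until the listing is approved or denied in the Clef terminal:
```sh
# blocks until the account-listing request is handled in Clef
curl -H "Content-Type: application/json" -X POST \
  --data '{"jsonrpc":"2.0","method":"eth_accounts","params":[],"id":1}' \
  http://localhost:8545
```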
A much more detailed Clef tutorial is available on the [Tutorial page](/docs/clef/tutorial).
## Command line options
```sh
COMMANDS:
init Initialize the signer, generate secret storage
attest Attest that a js-file is to be used
setpw Store a credential for a keystore file
delpw Remove a credential for a keystore file
newaccount Create a new account
gendoc Generate documentation about json-rpc format
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--loglevel value log level to emit to the screen (default: 4)
--keystore value Directory for the keystore (default: "$HOME/.ethereum/keystore")
--configdir value Directory for Clef configuration (default: "$HOME/.clef")
--chainid value Chain id to use for signing (1=mainnet, 3=Ropsten, 4=Rinkeby, 5=Goerli) (default: 1)
--lightkdf Reduce key-derivation RAM & CPU usage at some expense of KDF strength
--nousb Disables monitoring for and managing USB hardware wallets
--pcscdpath value Path to the smartcard daemon (pcscd) socket file (default: "/run/pcscd/pcscd.comm")
--http.addr value HTTP-RPC server listening interface (default: "localhost")
--http.vhosts value Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. (default: "localhost")
--ipcdisable Disable the IPC-RPC server
--ipcpath value Filename for IPC socket/pipe within the datadir (explicit paths escape it)
--http Enable the HTTP-RPC server
--http.port value HTTP-RPC server listening port (default: 8550)
--signersecret value A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash
--4bytedb-custom value File used for writing new 4byte-identifiers submitted via API (default: "./4byte-custom.json")
--auditlog value File used to emit audit logs. Set to "" to disable (default: "audit.log")
--rules value Path to the rule file to auto-authorize requests with
--stdio-ui Use STDIN/STDOUT as a channel for an external UI. This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user interface, and can be used when Clef is started by an external process.
--stdio-ui-test Mechanism to test interface between Clef and UI. Requires 'stdio-ui'.
--advanced If enabled, issues warnings instead of rejections for suspicious requests. Default off
--suppress-bootwarn If set, does not show the warning during boot
```
## Summary
Clef is an external key management and signer tool that comes bundled with Geth but can either be used
as a backend account manager and signer for Geth or as a completely separate standalone application. Being
modular and composable, it can be used as a component in decentralized applications or to sign data and
transactions in untrusted environments. Clef is intended to eventually replace Geth's built-in account
management tools.

@ -0,0 +1,237 @@
---
title: Rules
sort_key: B
---
The `signer` binary contains a ruleset engine, implemented with [OttoVM](https://github.com/robertkrimen/otto).
It enables use cases like the following:
* I want to auto-approve transactions to contract `CasinoDapp`, with value up to `0.05 ether` per transaction and a maximum of `1 ether` per 24h period
* I want to auto-approve transactions to contract `EthAlarmClock` with `data`=`0xdeadbeef`, if `value=0`, `gas < 44k` and `gasPrice < 40Gwei`
The two main features that are required for this to work well are:
1. Rule implementation: how to create, manage and interpret rules in a flexible but secure manner
2. Credential management: how to provide auto-unlock without exposing keys unnecessarily
The sections below deal with both of them.
## Rule Implementation
A ruleset file is implemented as a `js` file. Under the hood, the ruleset-engine is a `SignerUI`, implementing the same methods as the `json-rpc` methods
defined in the UI protocol. Example:
```js
function asBig(str) {
if (str.slice(0, 2) == "0x") {
return new BigNumber(str.slice(2), 16)
}
return new BigNumber(str)
}
// Approve transactions to a certain contract if value is below a certain limit
function ApproveTx(req) {
var limit = asBig("0xb1a2bc2ec50000")
var value = asBig(req.transaction.value);
if (req.transaction.to.toLowerCase() == "0xae967917c465db8578ca9024c205720b1a3651a9" && value.lt(limit)) {
return "Approve"
}
// If we return "Reject", it will be rejected.
// By not returning anything, it will be passed to the next UI, for manual processing
}
// Approve listings if request made from IPC
function ApproveListing(req){
if (req.metadata.scheme == "ipc"){ return "Approve"}
}
```
Whenever the external API is called (and the ruleset is enabled), the `signer` calls the UI, which is an instance of a ruleset-engine. The ruleset-engine
invokes the corresponding method. In doing so, there are three possible outcomes:
1. JS returns "Approve"
* Auto-approve request
2. JS returns "Reject"
* Auto-reject request
3. Error occurs, or something else is returned
* Pass on to `next` ui: the regular UI channel.
A more advanced example can be found below, "Example 1: ruleset for a rate-limited window", using `storage` to `Put` and `Get` `string`s by key.
* At the time of writing, storage only exists as an ephemeral unencrypted implementation, to be used during testing.
### Things to note
The Otto vm has a few [caveats](https://github.com/robertkrimen/otto):
* "use strict" will parse, but does nothing.
* The regular expression engine (re2/regexp) is not fully compatible with the ECMA5 specification.
* Otto targets ES5. ES6 features (eg: Typed Arrays) are not supported.
Additionally, a few more constraints apply to the rule engine:
* The rule execution cannot load external javascript files.
* The only preloaded library is [`bignumber.js`](https://github.com/MikeMcl/bignumber.js) version `2.0.3`. This one is fairly old, and is not aligned with the documentation at the github repository.
* Each invocation is made in a fresh virtual machine. This means that you cannot store data in global variables between invocations. This is a deliberate choice -- if you want to store data, use the disk-backed `storage`, since rules should not rely on ephemeral data.
* Javascript API parameters are _always_ an object. This is also a design choice, to ensure that parameters are accessed by _key_ and not by order. This is to prevent mistakes due to missing parameters or parameter changes.
* The JS engine has access to `storage` and `console`.
#### Security considerations
##### Security of ruleset
Some security precautions can be made, such as:
* Never load `ruleset.js` unless the file is `readonly` (`r-??-??-?`). If the user wishes to modify the ruleset, they must make it writeable and then set it back to readonly (see the sketch after this list).
  * This is to prevent attacks where files are dropped on the user's disk.
* Since we're going to have to have some form of secure storage (not defined in this section), we could also store the `sha3` of the `ruleset.js` file in there.
  * If the user wishes to modify the ruleset, they'd then have to perform e.g. `signer --attest /path/to/ruleset --credential <creds>`
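A minimal sketch of the file-permission precaution above (the filename is an example):
```sh
# make the ruleset read-only
chmod 444 rules.js
# to edit it later: make it writeable, edit, set it back to read-only, then re-attest it
chmod 644 rules.js
```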
##### Security of implementation
The drawback of this very flexible solution is that the `signer` needs to contain a javascript engine. This is pretty simple to implement, since it's already
implemented for `geth`. There are no known security vulnerabilities in it, nor have we had any security problems with it so far.
The javascript engine would be an added attack surface; but if the validation of `rulesets` is done properly (with hash-based attestation), the actual javascript cannot be considered
an attack surface -- if an attacker can control the ruleset, a much simpler attack would be to implement an "always-approve" rule instead of exploiting the js vm. The only benefit
to be gained from attacking the actual `signer` process from the `js` side would be if it could somehow extract cryptographic keys from memory.
##### Security in usability
Javascript is flexible, but also easy to get wrong, especially when users assume that `js` can handle large integers natively. Typical errors
include trying to multiply `gasCost` with `gas` without using big-number arithmetic.
It's unclear whether any other DSL could be more secure, since there's always the possibility of erroneously implementing a rule.
## Credential management
The ability to auto-approve transactions means that the signer needs to have the necessary credentials to decrypt keyfiles. These passwords are hereafter called `ksp` (keystore pass).
### Example implementation
Upon startup of the signer, the signer is given a switch: `--seed <path/to/masterseed>`
The `seed` contains a blob of bytes, which is the master seed for the `signer`.
The `signer` uses the `seed` to:
* Generate the `path` where the settings are stored.
* `./settings/1df094eb-c2b1-4689-90dd-790046d38025/vault.dat`
* `./settings/1df094eb-c2b1-4689-90dd-790046d38025/rules.js`
* Generate the encryption password for `vault.dat`.
The `vault.dat` would be an encrypted container storing the following information:
* `ksp` entries
* `sha256` hash of `rules.js`
* Information about paired callers (not yet specified)
### Security considerations
This would leave it up to the user to ensure that the `path/to/masterseed` is handled in a secure way. It's difficult to get around this, although one could
imagine leveraging OS-level keychains where supported. The setup is however in general similar to how ssh-keys are stored in `.ssh/`.
## Implementation status
This is now implemented (with ephemeral non-encrypted storage for now, so not yet enabled).
## Example 1: ruleset for a rate-limited window
```js
function big(str) {
if (str.slice(0, 2) == "0x") {
return new BigNumber(str.slice(2), 16)
}
return new BigNumber(str)
}
// Time window: 1 week
var window = 1000* 3600*24*7;
// Limit : 1 ether
var limit = new BigNumber("1e18");
function isLimitOk(transaction) {
var value = big(transaction.value)
// Start of our window function
var windowstart = new Date().getTime() - window;
var txs = [];
var stored = storage.get('txs');
if (stored != "") {
txs = JSON.parse(stored)
}
// First, remove all that have passed out of the time-window
var newtxs = txs.filter(function(tx){return tx.tstamp > windowstart});
console.log(txs, newtxs.length);
// Secondly, aggregate the current sum
var sum = new BigNumber(0)
sum = newtxs.reduce(function(agg, tx){ return big(tx.value).plus(agg)}, sum);
console.log("ApproveTx > Sum so far", sum);
console.log("ApproveTx > Requested", value.toNumber());
// Would we exceed weekly limit ?
return sum.plus(value).lt(limit)
}
function ApproveTx(r) {
if (isLimitOk(r.transaction)) {
return "Approve"
}
return "Nope"
}
/**
* OnApprovedTx(str) is called when a transaction has been approved and signed. The parameter
* 'response_str' contains the return value that will be sent to the external caller.
* The return value from this method is ignored - the reason for having this callback is to allow the
* ruleset to keep track of approved transactions.
*
* When implementing rate-limited rules, this callback should be used.
* If a rule responds with neither 'Approve' nor 'Reject' - the tx goes to manual processing. If the user
* then accepts the transaction, this method will be called.
*
* TLDR; Use this method to keep track of signed transactions, instead of using the data in ApproveTx.
*/
function OnApprovedTx(resp) {
var value = big(resp.tx.value)
var txs = []
// Load stored transactions
var stored = storage.get('txs');
if (stored != "") {
txs = JSON.parse(stored)
}
// Add this to the storage
txs.push({tstamp: new Date().getTime(), value: value});
storage.put("txs", JSON.stringify(txs));
}
```
## Example 2: allow destination
```js
function ApproveTx(r) {
if (r.transaction.from.toLowerCase() == "0x0000000000000000000000000000000000001337") {
return "Approve"
}
if (r.transaction.from.toLowerCase() == "0x000000000000000000000000000000000000dead") {
return "Reject"
}
// Otherwise goes to manual processing
}
```
## Example 3: Allow listing
```js
function ApproveListing() {
return "Approve"
}
```

@ -0,0 +1,202 @@
---
title: Advanced setup
sort_key: D
---
This document describes how Clef can be used in a more secure manner than executing it from your everyday laptop,
in order to ensure that the keys remain safe in the event that your computer is compromised.
## Qubes OS
### Background
The Qubes operating system is based around virtual machines (qubes), where a set of virtual machines are configured, typically for
different purposes, such as:
- personal
  - Your personal email, browsing etc.
- work
  - Work email etc.
- vault
  - a VM without network access, where gpg-keys and/or keepass credentials are stored.
A couple of dedicated virtual machines handle externalities:
- sys-net provides networking to all other (network-enabled) machines
- sys-firewall handles firewall rules
- sys-usb handles USB devices, and can map usb-devices to certain qubes.
The goal of this document is to describe how we can set up Clef to provide secure transaction
signing from a `vault` vm to another networked qube which runs Dapps.
### Setup
There are two ways that this can be achieved: integrated via Qubes or integrated via networking.
#### 1. Qubes Integrated
Qubes provides a facility for inter-qubes communication via `qrexec`. A qube can request to make a cross-qube RPC request
to another qube. The OS then asks the user if the call is permitted.
![Example](qrexec-example.png)
A policy-file can be created to allow such interaction. On the `target` domain, a service is invoked which can read the
`stdin` from the `client` qube.
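For example, a policy permitting the calling qube to invoke the Clef service (after a confirmation prompt) could look something like the following, run in `dom0`; the qube names are placeholders and the exact policy syntax depends on the Qubes version:
```bash
# allow the "work" qube to call qubes.Clefsign on "debian-work", asking the user each time
echo "work debian-work ask" | sudo tee /etc/qubes-rpc/policy/qubes.Clefsign
```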
This is how [Split GPG](https://www.qubes-os.org/doc/split-gpg/) is implemented. We can set up Clef the same way:
##### Server
![Clef via qrexec](clef_qubes_qrexec.png)
On the `target` qubes, we need to define the RPC service.
[qubes.Clefsign](qubes.Clefsign):
```bash
#!/bin/bash
SIGNER_BIN="/home/user/tools/clef/clef"
SIGNER_CMD="/home/user/tools/gtksigner/gtkui.py -s $SIGNER_BIN"
# Start clef if not already started
if [ ! -S /home/user/.clef/clef.ipc ]; then
$SIGNER_CMD &
sleep 1
fi
# Should be started by now
if [ -S /home/user/.clef/clef.ipc ]; then
# Post incoming request to HTTP channel
curl -H "Content-Type: application/json" -X POST -d @- http://localhost:8550 2>/dev/null
fi
```
This RPC service is not complete (see notes about HTTP headers below), but works as a proof-of-concept.
It will forward the data received on `stdin` (forwarded by the OS) to Clef's HTTP channel.
It would have been possible to send data directly to the `/home/user/.clef/clef.ipc`
socket via e.g. `nc -U /home/user/.clef/clef.ipc`, but the reason for sending the request
data over `HTTP` instead of `IPC` is that we want the ability to forward `HTTP` headers.
To enable the service:
``` bash
sudo cp qubes.Clefsign /etc/qubes-rpc/
sudo chmod +x /etc/qubes-rpc/qubes.Clefsign
```
This setup uses [gtksigner](https://github.com/holiman/gtksigner), which is a minimal GTK-based UI that works well for approving Clef requests.
##### Client
On the `client` qube, we need to create a listener which will receive the request from the Dapp, and proxy it.
[qubes-client.py](qubes-client.py):
```python
"""
This implements a dispatcher which listens to localhost:8550, and proxies
requests via qrexec to the service qubes.EthSign on a target domain
"""
import http.server
import socketserver,subprocess
PORT=8550
TARGET_DOMAIN= 'debian-work'
class Dispatcher(http.server.BaseHTTPRequestHandler):
def do_POST(self):
post_data = self.rfile.read(int(self.headers['Content-Length']))
p = subprocess.Popen(['/usr/bin/qrexec-client-vm',TARGET_DOMAIN,'qubes.Clefsign'],stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output = p.communicate(post_data)[0]
self.wfile.write(output)
with socketserver.TCPServer(("",PORT), Dispatcher) as httpd:
print("Serving at port", PORT)
httpd.serve_forever()
```
#### Testing
To test the flow, if we have set up `debian-work` as the `target`, we can do
```bash
$ cat newaccnt.json
{ "id": 0, "jsonrpc": "2.0","method": "account_new","params": []}
$ cat newaccnt.json| qrexec-client-vm debian-work qubes.Clefsign
```
A dialog should pop up first to allow the IPC call:
![one](qubes_newaccount-1.png)
Followed by a GTK-dialog to approve the operation:
![two](qubes_newaccount-2.png)
To test the full flow, we use the client wrapper. Start it on the `client` qube:
```
[user@work qubes]$ python3 qubes-client.py
```
Make the request over http (`client` qube):
```
[user@work clef]$ cat newaccnt.json | curl -X POST -d @- http://localhost:8550
```
And it should show the same popups again.
##### Pros and cons
The benefits of this setup are:
- This is the qubes-os intended model for inter-qube communication,
- and thus benefits from qubes-os dialogs and policies for user approval
However, it comes with a couple of drawbacks:
- The client dispatcher (`qubes-client.py`) must forward the http request via RPC to the `target` qube. When doing so, the proxy
will either drop important headers, or replace them.
- The `Host` header is most likely `localhost`
- The `Origin` header must be forwarded
- Information about the remote ip must be added as a `X-Forwarded-For`. However, Clef cannot always trust an `XFF` header,
since malicious clients may lie about `XFF` in order to fool the http server into believing it comes from another address.
- Even with a policy in place to allow RPC calls between `caller` and `target`, there will be several popups:
- One qubes-specific where the user specifies the `target` vm
- One clef-specific to approve the transaction
#### 2. Network integrated
The second way to set up Clef on a qubes system is to allow networking, and have Clef listen to a port which is accessible
from other qubes.
![Clef via http](clef_qubes_http.png)
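A minimal sketch of this mode, using the Clef options listed on the introduction page (the keystore path and chain id are placeholders, and firewall rules should restrict which qubes can reach the port):
```bash
# expose Clef's HTTP external API so that other qubes can reach it on port 8550
clef --keystore /path/to/keystore --chainid 5 \
  --http --http.addr 0.0.0.0 --http.port 8550 \
  --rules rules.js
```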
## USBArmory
The [USB armory](https://inversepath.com/usbarmory) is an open source hardware design with an 800 MHz ARM processor. It is a pocket-size
computer. When inserted into a laptop, it identifies itself as a USB network interface, basically adding another network
to your computer. Over this new network interface, you can SSH into the device.
Running Clef off a USB armory means that you can use the armory as a very versatile offline computer, which only
ever connects to a local network between your computer and the device itself.
Needless to say, while this model should be fairly secure against remote attacks, an attacker with physical access
to the USB Armory would trivially be able to extract the contents of the device filesystem.

@ -0,0 +1,685 @@
---
title: Tutorial
sort_key: A
---
This page provides a step-by-step tutorial demonstrating some common uses of Clef. This
includes manual approvals and automated rules. Clef is presented both as a standalone general signer
with requests made via RPC and also as a backend signer for Geth.
{:toc}
- this will be removed by the toc
## Initializing Clef
First things first, Clef needs to store some data itself. Since that data might be sensitive
(passwords, signing rules, accounts), Clef's entire storage is encrypted. To support encrypting data,
the first step is to initialize Clef with a random master seed, itself too encrypted with your chosen
password:
```text
$ clef init
WARNING!
Clef is an account management tool. It may, like any software, contain bugs.
Please take care to
- backup your keystore files,
- verify that the keystore(s) can be opened with your password.
Clef is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
Enter 'ok' to proceed:
> ok
The master seed of clef will be locked with a password.
Please specify a password. Do not forget this password!
Password:
Repeat password:
A master seed has been generated into /home/martin/.clef/masterseed.json
This is required to be able to store credentials, such as:
* Passwords for keystores (used by rule engine)
* Storage for JavaScript auto-signing rules
* Hash of JavaScript rule-file
You should treat 'masterseed.json' with utmost secrecy and make a backup of it!
* The password is necessary but not enough, you need to back up the master seed too!
* The master seed does not contain your accounts, those need to be backed up separately!
```
*For readability purposes, we'll remove the WARNING printout, user confirmation and the unlocking of the master seed in the rest of this document.*
## Remote interactions
This tutorial will use Clef with Geth on the Goerli testnet. The accounts used will be in the
Goerli keystore with the path `~/go-ethereum/goerli-data/keystore`. The tutorial assumes there
are two accounts in this keystore. Instructions for creating accounts can be found on the
[Account managament page](/docs/interface/managing-your-accounts). Note that Clef can also interact
with hardware wallets, although that is not demonstrated here.
Clef should be started before Geth, otherwise Geth will complain that it cannot find a Clef
instance to connect to. Clef should be started with the correct `chainid` for Goerli. Clef
itself does not connect to a blockchain, but the `chainID` parameter is included in the data
that is aggregated to form a signature. Clef also needs a path to the correct keystore passed to
the `--keystore` flag. A custom path to the config directory can also be provided. This is where the
`ipc` file will be saved which is needed to connect Clef to Geth:
```sh
clef --keystore ~/go-ethereum/goerli-data/keystore --configdir ~/go-ethereum/goerli-data/clef --chainid=5
```
The following logs will be displayed in the console:
```terminal
INFO [07-01|11:00:46.385] Starting signer chainid=4 keystore= go-ethereum/goerli-data/keystore light-kdf=false advanced=false
DEBUG[07-01|11:00:46.389] FS scan times list=3.521941ms set=9.017µs diff=4.112µs
DEBUG[07-01|11:00:46.391] Ledger support enabled
DEBUG[07-01|11:00:46.391] Trezor support enabled via HID
DEBUG[07-01|11:00:46.391] Trezor support enabled via WebUSB
INFO [07-01|11:00:46.391] Audit logs configured file=audit.log
DEBUG[07-01|11:00:46.392] IPC registered namespace=account
INFO [07-01|11:00:46.392] IPC endpoint opened url=go-ethereum/goerli-data/clef/clef.ipc
------- Signer info -------
* intapi_version : 7.0.1
* extapi_version : 6.1.0
* extapi_http : n/a
* extapi_ipc : go-ethereum/goerli-data/clef/clef.ipc
```
Clef starts up in CLI (Command Line Interface) mode by default. Arbitrary remote
processes may *request* account interactions (e.g. sign a transaction), which the user
can individually *confirm* or *deny*.
The code snippet below shows a request made to Clef via its *External API endpoint* using
[NetCat](http://netcat.sourceforge.net/). The request invokes the
["account_list"](/docs/_clef/apis#accountlist) endpoint which lists the accounts in the keystore.
This command should be run in a new terminal.
```sh
echo '{"id": 1, "jsonrpc": "2.0", "method": "account_list"}' | nc -U ~/.clef/clef.ipc
```
The terminal used to send the command will now hang. This is because the process is awaiting
confirmation from Clef. Switching to the Clef console reveals Clef's prompt to the user to
confirm or deny the request:
```terminal
-------- List Account request--------------
A request has been made to list all accounts.
You can select which accounts the caller can see
[x] 0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3
URL: keystore://go-ethereum/goerli-data/keystore/UTC--2017-04-14T15-15-00.327614556Z--d9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
[x] 0x086278A6C067775F71d6B2BB1856Db6E28c30418
URL: keystore://go-ethereum/goerli-data/keystore/UTC--2018-02-06T22-53-11.211657239Z--086278a6c067775f71d6b2bb1856db6e28c30418
-------------------------------------------
Request context:
NA - ipc - NA
Additional HTTP header data, provided by the external caller:
User-Agent:
Origin:
Approve? [y/N]:
```
Depending on whether the request is approved or denied, the NetCat process in the other terminal
will receive one of the following responses:
```terminal
{"jsonrpc":"2.0","id":1,"result":["0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3","0x086278a6c067775f71d6b2bb1856db6e28c30418"]}
```
or
```terminal
{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"Request denied"}}
```
Apart from listing accounts, you can also *request* creating a new account, signing transactions
and data or recovering signatures. The available methods are documented in the Clef
[External API Spec](https://github.com/ethereum/go-ethereum/tree/master/cmd/clef#external-api-1)
and the [External API Changelog](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/extapi_changelog.md).
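For example, creating a new account can be requested in the same way as the listing above (this also requires approval, and a password for the new account, in the Clef terminal):
```sh
# ask Clef to create a new account
echo '{"id": 2, "jsonrpc": "2.0", "method": "account_new", "params": []}' | nc -U ~/.clef/clef.ipc
```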
*Note, the number of things you can do from the External API is deliberately small to limit
the power of remote calls as much as possible! Clef has an
[Internal API](https://github.com/ethereum/go-ethereum/tree/master/cmd/clef#ui-api-1)
too for the UI (User Interface) which is much richer and can support custom interfaces on top.
But that's out of scope here.*
The example above used Clef completely independently of Geth. However, defining Clef as
the signer when Geth is started imposes Clef's `request - confirm - result` pattern on any
interaction with the local Geth node that touches accounts, including requests made using
RPC or an attached Javascript console. To demonstrate this, Geth can be started,
with Clef as the signer:
```sh
geth --goerli --datadir goerli-data --signer=goerli-data/clef/clef.ipc
```
With Geth running, open a new terminal and attach a Javascript console:
```sh
geth attach goerli-data/geth.ipc
```
A simple request to list the accounts in the keystore will cause the Javascript console to hang.
```js
eth.accounts
```
Switching to the Clef terminal reveals that this is because the request is awaiting explicit
confirmation from the user. The log is identical to the one shown above, when the same request
for account information was made to Clef via Netcat:
```terminal
-------- List Account request--------------
A request has been made to list all accounts.
You can select which accounts the caller can see
[x] 0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3
URL: keystore://go-ethereum/goerli-data/keystore/UTC--2017-04-14T15-15-00.327614556Z--d9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
[x] 0x086278A6C067775F71d6B2BB1856Db6E28c30418
URL: keystore://go-ethereum/goerli-data/keystore/UTC--2018-02-06T22-53-11.211657239Z--086278a6c067775f71d6b2bb1856db6e28c30418
-------------------------------------------
Request context:
NA - ipc - NA
Additional HTTP header data, provided by the external caller:
User-Agent:
Origin:
Approve? [y/N]:
```
In this mode, the user is required to manually confirm every action that touches account data,
including querying accounts, signing and sending transactions.
The example below shows an ether transaction between the two accounts in the keystore
using `eth.sendTransaction` in the attached Javascript console.
```js
// this command requires 2x approval in Clef because it loads account data via eth.accounts[0]
// and eth.accounts[1]
var tx = {from: eth.accounts[0], to: eth.accounts[1], value: web3.toWei(0.1, "ether")}
// then send the transaction
eth.sendTransaction(tx)
```
This example demonstrates the power of Clef much more clearly than the account-listing example.
In the Clef terminal, all the details of the transaction are presented to the user so that they
can be reviewed before being confirmed. This gives the user an opportunity to review the fine
details and make absolutely sure they really want to sign the transaction. `eth.sendTransaction`
returns the following confirmation prompt in the Clef terminal:
```terminal
-------- Transaction request----------------
to: 0x086278A6C067775F71d6B2BB1856Db6E28c30418
from: 0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3 [chksum ok]
value: 100000000000000000 wei
gas: 0x5208 (21000)
maxFeePerGas: 1500000016 wei
maxPriorityFeePerGas: 1500000000 wei
nonce: 0x0 (0)
chainid: 0x5
Accesslist
Request context:
NA - ipc - NA
Additional HTTP header data, provided by the external caller:
User-Agent: ""
Origin: ""
---------------------------------------------
Approve? [y/N]
```
Approving this transaction causes Clef to prompt the user to provide the password for
the sender account. Providing the password enables the transaction to be signed and sent to
Geth for broadcasting to the network. The details of the signed transaction are displayed
in the console. Account passwords can also be stored in Clef's encrypted vault so that they
do not have to be manually entered - [more on this below](#account-passwords).
## Automatic rules
For most users, manually confirming every transaction is the right way to use Clef because a
human-in-the-loop can review every action. However, there are cases when it makes sense to
set up some rules which permit Clef to sign a transaction without prompting the user.
For example, well defined rules such as:
* Auto-approve transactions with Uniswap v2, with value between 0.1 and 0.5 ETH
per 24h period
* Auto-approve transactions to address `0xD9C9Cd5f6779558b6e0eD4e6Acf6b1947E7fA1F3`
as long as gas < 44k and gasPrice < 80Gwei
can be encoded and interpreted by Clef's built-in ruleset engine.
### Rule files
Rules are implemented as Javascript code in `js` files. The ruleset engine includes the
same methods as the JSON-RPC API defined in the [UI Protocol](/docs/_clef/datatypes.md).
The following code snippet demonstrates a rule file that approves a transaction if it
satisfies the following conditions:
* the recipient is `0xae967917c465db8578ca9024c205720b1a3651a9`
* the value is less than 50000000000000000 wei (0.05 ETH)
and approves account listing if:
* the request has arrived via ipc
```js
// ancillary function for converting a hex or decimal string into a BigNumber
function asBig(str) {
    if (str.slice(0, 2) == "0x") {
        return new BigNumber(str.slice(2), 16)
    }
    return new BigNumber(str)
}

// Approve transactions to a certain contract if the value is below a certain limit
function ApproveTx(req) {
    var limit = big.Newint("0xb1a2bc2ec50000")
    var value = asBig(req.transaction.value)
    if (req.transaction.to.toLowerCase() == "0xae967917c465db8578ca9024c205720b1a3651a9"
        && value.lt(limit)) {
        return "Approve"
    } else {
        return "Reject"
    }
}

// Approve listings if the request was made via IPC
function ApproveListing(req) {
    if (req.metadata.scheme == "ipc") { return "Approve" }
}

// returning nothing passes the decision to the next UI for manual assessment
```
A rule file can produce three kinds of outcome (auto-approve, auto-reject, or defer to the UI), determined by its return value as follows:
| Return value | Action |
| ----------- | ----------- |
| "Approve" | Auto-approve request |
| "Reject" | Auto-reject request |
| Error | Pass decision to UI for manual approval |
| Unexpected value | Pass decision to UI for manual approval |
| Nothing | Pass decision to UI for manual approval |
### Attestations
Clef will not just accept and run arbitrary scripts - that would create an attack vector because a malicious party could
change the rule file. Instead, the user explicitly *attests* to a rule file, which involves injecting the file's SHA256
hash into Clef's secure store. The following code snippet shows how to calculate a SHA256 hash for a file named `rules.js`
and pass it to Clef. Note that Clef will prompt the user to provide the master password because the Clef store has to
be decrypted in order to add the attestation to it.
```sh
# calculate hash
sha256sum rules.js
# attest to rules.js in Clef
clef attest 645b58e4f945e24d0221714ff29f6aa8e860382ced43490529db1695f5fcc71c
```
Once this attestation has been added to the Clef store, it can be used to automatically approve
interactions that satisfy the conditions encoded in `rules.js` in Clef.
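As a convenience, the two steps can be combined into a single command by substituting the hash directly into the `attest` call (a sketch, assuming standard coreutils are available):
```sh
clef attest $(sha256sum rules.js | cut -f1 -d' ')
```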
### Account passwords
The rules described in `rules.js` above require access to the accounts in the Clef keystore which
are protected by user-defined passwords. The signer therefore requires access to these passwords
in order to automatically unlock the keystore and sign data and transactions using the accounts.
This is done using `clef setpw`, passing the account address as the sole argument:
```sh
clef setpw 0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
```
which displays the following in the terminal:
```terminal
Please enter a password to store for this address:
Password:
Repeat password:
Decrypt master seed of clef
Password:
INFO [07-01|14:05:56.031] Credential store updated key=0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3
```
Note that Clef does not really 'unlock' an account; it just abstracts the process of providing the
password away from the end-user in specific, predefined scenarios. If an account password
exists in the Clef vault and the rule evaluates to "Approve" then Clef decrypts the password,
uses it to decrypt the key, does the requested signing and then re-locks the account.
### Implementing rules
Clef can be instructed to run an attested rule file simply by passing the path to `rules.js`
to the `--rules` flag:
```sh
clef --keystore go-ethereum/goerli-data/ --configdir go-ethereum/goerli-data/clef --chainid 5 --rules rules.js
```
The following logs will be displayed in the terminal:
```
INFO [07-01|13:39:49.726] Rule engine configured file=rules.js
INFO [07-01|13:39:49.726] Starting signer chainid=5 keystore=$go-ethereum/goerli-data/ light-kdf=false advanced=false
DEBUG[07-01|13:39:49.726] FS scan times list=35.15µs set=4.251µs diff=2.766µs
DEBUG[07-01|13:39:49.727] Ledger support enabled
DEBUG[07-01|13:39:49.727] Trezor support enabled via HID
DEBUG[07-01|13:39:49.727] Trezor support enabled via WebUSB
INFO [07-01|13:39:49.728] Audit logs configured file=audit.log
DEBUG[07-01|13:39:49.728] IPC registered namespace=account
INFO [07-01|13:39:49.728] IPC endpoint opened url=go-ethereum/goerli-data/clef/clef.ipc
------- Signer info -------
* intapi_version : 7.0.0
* extapi_version : 6.0.0
* extapi_http : n/a
* extapi_ipc : go-ethereum/goerli-data/clef/clef.ipc
```
Any request that satisfies the ruleset will now be auto-approved by the rule file, for example
the following request to sign a transaction made using the Geth Javascript console
(note that the password for account `0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3`
has already been provided to `setpw` and the recipient and value comply with the rules in `rules.js`):
```js
var tx = {to: "0xae967917c465db8578ca9024c205720b1a3651a9", from: "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3", value: web3.toWei(0.01, "ether")}
eth.sendTransaction(tx)
```
By contrast, the following transactions *do not* satisfy the rules in `rules.js`:
```js
// violate maximum transaction value condition
var tx = {to: "0xae967917c465db8578ca9024c205720b1a3651a9", from: "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3", value: web3.toWei(1, "ether")}
eth.sendTransaction(tx)
```
```js
// violate recipient condition
var tx = {to: "0xae967917c465db8578ca9024c205720b1a3651a9", from: "0xd4c4bb7d6889453c6c6ea3e9eab3c4177b4fbcc3", value: web3.toWei(0.01, "ether")}
eth.sendTransaction(tx)
```
These latter two transactions do not satisfy the rules encoded in `rules.js`, so they are not automatically approved; instead, the
decision is passed back to the UI for manual approval by the user.
### Summary of basic usage
To summarize, the steps required to run Clef with an automated ruleset that requires account access are as follows:
**1)** Define rules as Javascript and save as a `.js` file, e.g. `rules.js`
**2)** Calculate hash of rule file using `sha256sum rules.js`
**3)** Attest the rules in Clef using `clef attest <hash>`
**4)** Set account passwords in Clef using `clef setpw <address>`
**5)** Start Clef with rule file enabled using `clef --keystore <path-to-keystore> --chainid <chainID> --rules rules.js`
**6)** Make requests directly to Clef using the external API or connect to Geth by passing `--signer=<path to clef.ipc>` at Geth startup
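Using the placeholders from the list above, the corresponding shell commands might look something like the following sketch (paths, chain ID, hash and addresses are placeholders to be replaced with real values):
```sh
# 2) hash the rule file
sha256sum rules.js
# 3) attest to the rule file (prompts for Clef's master password)
clef attest <hash>
# 4) store the password for each account the rules need to use (prompts for the account password)
clef setpw <address>
# 5) start Clef with the attested ruleset
clef --keystore <path-to-keystore> --configdir <path-to-configdir> --chainid <chainID> --rules rules.js
# 6) optionally, point Geth at Clef's IPC socket
geth --goerli --datadir <path-to-datadir> --signer=<path-to-clef.ipc>
```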
## More rules
Since rules are defined as Javascript code, rulesets of arbitrary complexity can be created and they can
impose conditions on any part of a transaction, not only the recipient and value.
A simple example is a "whitelist" of recipients, where transactions that have those
accounts in the `to` field are automatically signed (for example, transactions between
a user's own accounts might be whitelisted):
```js
function ApproveTx(r) {
    if (r.transaction.to.toLowerCase() == "0xd4c4bb7d6889453c6c6ea3e9eab3c4177b4fbcc3") {
        return "Approve"
    }
    if (r.transaction.to.toLowerCase() == "0xae967917c465db8578ca9024c205720b1a3651a9") {
        return "Reject"
    }
    // Otherwise goes to manual processing
}
```
In addition to addresses and values, other properties of a request can also be incorporated
into a ruleset. The example below demonstrates a ruleset for `ApproveSignData` imposing
the following conditions on the signing account and the message data:
1. The signing account must be `0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3`
2. The message to be signed must include the text `wen-merge`, which is `77656E2D6D65726765` in hex.
If these conditions are satisfied then the request is auto-approved (assuming the password for
`0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3` has been provided to `setpw`).
```js
function ApproveListing() {
    return "Approve"
}

function ApproveSignData(req) {
    if (req.address.toLowerCase() == "0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3") {
        if (req.messages[0].value.indexOf("wen-merge") >= 0) {
            return "Approve"
        }
        return "Reject"
    }
    // Otherwise goes to manual processing
}
```
This file should be saved as a `.js` file, hashed and attested in Clef:
```sh
sha256sum rules.js
```
which returns:
```terminal
84d9e70aa30d0e5ffb3c4b376c9490f428390a196bfdc1d36770ffd2bbe66845 rules.js
```
then:
```sh
clef attest 84d9e70aa30d0e5ffb3c4b376c9490f428390a196bfdc1d36770ffd2bbe66845
```
which returns:
```terminal
Decrypt master seed of clef
Password:
INFO [07-01|14:11:28.509] Ruleset attestation updated sha256=84d9e70aa30d0e5ffb3c4b376c9490f428390a196bfdc1d36770ffd2bbe66845
```
Then, Clef can be restarted with the new rules in place:
```sh
clef --keystore go-ethereum/goerli-data/clef --configdir go-ethereum/goerli-data/clef --chainid 5 --rules rules.js
```
```terminal
INFO [07-01|14:12:41.636] Rule engine configured file=rules.js
INFO [07-01|14:12:41.636] Starting signer chainid=5 keystore=go-ethereum/goerli-data/clef/keystore light-kdf=false advanced=false
DEBUG[07-01|14:12:41.636] FS scan times list=46.722µs set=4.47µs diff=2.157µs
DEBUG[07-01|14:12:41.637] Ledger support enabled
DEBUG[07-01|14:12:41.637] Trezor support enabled via HID
DEBUG[07-01|14:12:41.638] Trezor support enabled via WebUSB
INFO [07-01|14:12:41.638] Audit logs configured file=audit.log
DEBUG[07-01|14:12:41.638] IPC registered namespace=account
INFO [07-01|14:12:41.638] IPC endpoint opened url=go-ethereum/goerli-data/clef/clef.ipc
------- Signer info -------
* intapi_version : 7.0.0
* extapi_version : 6.0.0
* extapi_http : n/a
* extapi_ipc : go-ethereum/goerli-data/clef/clef.ipc
```
Finally, a request can be submitted to test that the rules are being applied as expected.
Here, Clef is used independently of Geth by making a request via RPC, but the same logic
would be imposed if the request were made via a connected Geth node. The message data will contain
some arbitrary text that includes the term `wen-merge`. The plaintext
`clefdemotextthatincludeswen-merge` is `636c656664656d6f7465787474686174696e636c7564657377656e2d6d65726765`
when represented as a hexadecimal string. This can be passed as data to an `account_signData`
request as follows:
```sh
echo '{"id": 1, "jsonrpc":"2.0", "method":"account_signData", "params":["data/plain", "0x636c656664656d6f7465787474686174696e636c7564657377656e2d6d65726765"]}' | nc -U ~/go-ethereum.goerli-data/clef/clef.ipc
```
This will be automatically signed, returning a result that looks like the following:
```terminal
{"jsonrpc":"2.0","id":1,"result":"0x4f93e3457027f6be99b06b3392d0ebc60615ba448bb7544687ef1248dea4f5317f789002df783979c417d969836b6fda3710f5bffb296b4d51c8aaae6e2ac4831c"}
```
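For reference, a hex payload like the one above can be generated from plaintext using standard shell tools (this sketch assumes `xxd` is installed):
```sh
printf 'clefdemotextthatincludeswen-merge' | xxd -p -c 256
# 636c656664656d6f7465787474686174696e636c7564657377656e2d6d65726765
```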
Alternatively, a request that does not include the phrase `wen-merge` will not be automatically approved. For example, the following request passes the hexadecimal
string representing the plaintext `clefdemotextwithoutspecialtext`:
```sh
echo '{"id": 1, "jsonrpc":"2.0", "method":"account_signData", "params":["data/plain", "0x636c656664656d6f74657874776974686f75747370656369616c74657874"]}' | nc -U ~/go-ethereum.goerli-data/clef/clef.ipc
```
This returns a `Request denied` message as follows:
```terminal
{"jsonrpc":"2.0","id":1,"error":{"code":-32000,"message":"Request denied"}}
```
Meanwhile, in the output logs in the Clef terminal you can see:
```text
INFO [02-21|14:42:41] Op approved
INFO [02-21|14:42:56] Op rejected
```
The signer also stores all traffic over the external API in a log file.
The last 4 lines show the two requests and their responses:
```text
$ tail -n 4 audit.log
t=2022-07-01T15:52:14+0300 lvl=info msg=SignData api=signer type=request metadata="{\"remote\":\"NA\",\"local\":\"NA\",\"scheme\":\"NA\",\"User-Agent\":\"\",\"Origin\":\"\"}" addr="0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3 [chksum INVALID]" data=0x202062617a6f6e6b2062617a2067617a0a content-type=data/plain
t=2022-07-01T15:52:14+0300 lvl=info msg=SignData api=signer type=response data=0x636c656664656d6f7465787474686174696e636c7564657377656e2d6d65726765 error=nil
t=2022-07-01T15:52:23+0300 lvl=info msg=SignData api=signer type=request metadata="{\"remote\":\"NA\",\"local\":\"NA\",\"scheme\":\"NA\",\"User-Agent\":\"\",\"Origin\":\"\"}" addr="0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3 [chksum INVALID]" data=0x636c656664656d6f74657874776974686f75747370656369616c74657874 content-type=data/plain
t=2022-07-01T15:52:23+0300 lvl=info msg=SignData api=signer type=response data= error="Request denied"
```
More examples, including a ruleset for a rate-limited window, are available on the [Clef Github][rate-limited-window-example]
and on the [Rules page](/docs/clef/rules).
## Under the hood
The examples on this page have provided step-by-step instructions for various operations using Clef.
However, they have not provided much detail as to what is happening under the hood.
This section will provide some more details about how Clef organizes itself locally.
Initializing Clef with a master password, providing an account password to `clef setpw`
and attesting a ruleset creates the following files in the directory `~/.clef/`
(this path is independent of the paths provided to `--keystore` and `--configdir` on startup):
```terminal
# displayed using $ ls -laR ~/.clef/
/home/user/.clef/:
total 24
drwxr-x--x 3 user user 4096 Jul 1 13:45 .
drwxr-xr-x 102 user user 12288 Jul 1 13:39 ..
drwx------ 2 user user 4096 Jul 1 13:25 02f90c0603f4f2f60188
-r-------- 1 user user 868 Jun 28 13:55 masterseed.json
/home/user/.clef/02f90c0603f4f2f60188:
total 12
drwx------ 2 user user 4096 Jul 1 13:25 .
drwxr-x--x 3 user user 4096 Jul 1 13:45 ..
-rw------- 1 user user 159 Jul 1 13:25 config.json
-rw------- 1 user user 115 Jul 1 13:35 credentials.json
```
The file `masterseed.json` includes a json object containing the masterseed which was used to derive
the vault directory (in this case `02f90c0603f4f2f60188`). The vault is encrypted using a password
which is also derived from the masterseed. Inside the vault are two files:
* `credentials.json`
* `config.json`
The `credentials.json` file contains the confidential `ksp` data ("ksp" stands for "keystore pass"; these
are the account passwords used to unlock the keystore).
The `config.json` file contains encrypted key/value pairs for configuration data. Usually
these are just the `sha256` hashes of any attested rulesets.
Vault locations map uniquely to masterseeds so that multiple instances of Clef can co-exist
each with its own attested rules and its own set of keystore passwords. This is useful,
for example, for maintaining separate setups for Mainnet and testnets.
The contents of each of these json files can be viewed using `cat` and should look something
like the following:
For `config.json`:
```sh
cat ~/.clef/02f90c0603f4f2f60188/config.json
```
```terminal
{"ruleset_sha256":{"iv":"SWWEtnl+R+I+wfG7","c":"I3fjmwmamxVcfGax7D0MdUOL29/rBWcs73WBILmYK0o1CrX7wSMc3y37KsmtlZUAjp0oItYq01Ow8VGUOzilG91tDHInB5YHNtm/YkufEbo="}}
```
and for `credentials.json`:
```sh
cat ~/.clef/02f90c0603f4f2f60188/credentials.json
```
```terminal
{"0xd9c9cd5f6779558b6e0ed4e6acf6b1947e7fa1f3": {"iv": "6SC062CfaUW8uSqH","c":"C+S5kaJyrarrxrAESs4EmPjL5zmg5tRh0Q=="}}
```
## Geth integration
This tutorial has moved back and forth between demonstrating Clef as a standalone tool, by making
'manual' JSON-RPC requests from the terminal, and integrating it as a backend signer for Geth.
Using Clef for account management is considered best practice for Geth users because of the additional
security benefits it offers over and above what is offered by Geth's built-in accounts module. Clef is
far more flexible and composable than Geth's built-in account management tool and can interface directly
with hardware wallets, while apps and wallets can request signatures directly from Clef.
Ultimately, the goal is to deprecate Geth's account management tools completely and replace them with
Clef. Until then, users are simply encouraged to use Clef as an optional backend signer for Geth.
In addition to the examples on this page, the [Getting started tutorial](/docs/_getting-started/index.md)
also demonstrates Clef/Geth integration.
## Summary
This page includes step-by-step instructions for basic and intermediate uses of Clef, including using
it as a standalone app and a backend signer for Geth. Further information is available on our other
Clef pages, including [Introduction](/docs/clef/introduction), [Setup](/docs/clef/setup),
[Rules](/docs/clef/rules), [Communication Datatypes](/docs/clef/datatypes) and [Communication APIs](/docs/clef/apis).
Also see the [Clef Github](https://github.com/ethereum/go-ethereum/tree/master/cmd/clef) for further reading.
[rate-limited-window-example]:https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/rules.md#example-1-ruleset-for-a-rate-limited-window

@ -0,0 +1,851 @@
---
title: Communication APIs
sort_key: E
---
### External API
Clef listens to HTTP requests on `http.addr`:`http.port` (or to IPC on `ipcpath`), with the same JSON-RPC standard as Geth. The messages are expected to be [JSON-RPC 2.0 standard](https://www.jsonrpc.org/specification).
Some of these calls can require user interaction. Clients must be aware that responses may be delayed significantly or may never be received if a user decides to ignore the confirmation request.
The External API is **untrusted**: it does not accept credentials, nor does it expect that requests have any authority.
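For example, assuming Clef is serving its HTTP external API on the default `localhost:8550`, an `account_list` request can be made with curl (a sketch; requests over IPC work equivalently, as shown elsewhere in these docs):
```sh
curl -H "Content-Type: application/json" -X POST \
  --data '{"id": 1, "jsonrpc": "2.0", "method": "account_list", "params": []}' \
  http://localhost:8550/
```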
### Internal UI API
Clef has one native console-based UI, for operation without any standalone tools. However, there is also an API to communicate with an external UI. To enable that UI, the signer needs to be executed with the `--stdio-ui` option, which allocates `stdin` / `stdout` for the UI API.
An example (insecure) proof-of-concept has been implemented in `pythonsigner.py`.
The model is as follows:
* The user starts the UI app (`pythonsigner.py`).
* The UI app starts `clef` with `--stdio-ui`, and listens to the
process output for confirmation-requests.
* `clef` opens the external HTTP API.
* When the `signer` receives requests, it sends a JSON-RPC request via `stdout`.
* The UI app prompts the user accordingly, and responds to `clef`.
* `clef` signs (or not), and responds to the original request.
### More resources
* Changelog for [External API](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/extapi_changelog.md)
* Changelog for [UI API](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/intapi_changelog.md)
* Documentation about [Datatypes](datatypes)
## External API
See the [external API changelog](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/extapi_changelog.md) for information about changes to this API.
### Encoding
- number: positive integers that are hex encoded
- data: hex encoded data
- string: ASCII string
All hex encoded values must be prefixed with `0x`.
### account_new
#### Create new password protected account
The signer will generate a new private key, encrypt it according to [web3 keystore spec](https://github.com/ethereum/wiki/wiki/Web3-Secret-Storage-Definition) and store it in the keystore directory.
The client is responsible for creating a backup of the keystore. If the keystore is lost there is no method of retrieving lost accounts.
#### Arguments
None
#### Result
- address [string]: account address that is derived from the generated key
#### Sample call
```json
{
"id": 0,
"jsonrpc": "2.0",
"method": "account_new",
"params": []
}
```
Response
```json
{
"id": 0,
"jsonrpc": "2.0",
"result": "0xbea9183f8f4f03d427f6bcea17388bdff1cab133"
}
```
### account_list
#### List available accounts
List all accounts that this signer currently manages
#### Arguments
None
#### Result
- array with account records:
- account.address [string]: account address that is derived from the generated key
#### Sample call
```json
{
"id": 1,
"jsonrpc": "2.0",
"method": "account_list"
}
```
Response
```json
{
"id": 1,
"jsonrpc": "2.0",
"result": [
"0xafb2f771f58513609765698f65d3f2f0224a956f",
"0xbea9183f8f4f03d427f6bcea17388bdff1cab133"
]
}
```
### account_signTransaction
#### Sign transactions
Signs a transaction and responds with the signed transaction in RLP-encoded and JSON forms. Supports both legacy and EIP-1559-style transactions.
#### Arguments
1. transaction object (legacy):
- `from` [address]: account to send the transaction from
- `to` [address]: receiver account. If omitted or `0x`, will cause contract creation.
- `gas` [number]: maximum amount of gas to burn
- `gasPrice` [number]: gas price
- `value` [number:optional]: amount of Wei to send with the transaction
- `data` [data:optional]: input data
- `nonce` [number]: account nonce
2. transaction object (1559):
- `from` [address]: account to send the transaction from
- `to` [address]: receiver account. If omitted or `0x`, will cause contract creation.
- `gas` [number]: maximum amount of gas to burn
- `maxPriorityFeePerGas` [number]: maximum priority fee per unit of gas for the transaction
- `maxFeePerGas` [number]: maximum fee per unit of gas for the transaction
- `value` [number:optional]: amount of Wei to send with the transaction
- `data` [data:optional]: input data
- `nonce` [number]: account nonce
3. method signature [string:optional]
- The method signature, if present, is to aid decoding the calldata. It should consist of `methodname(paramtype,...)`, e.g. `transfer(uint256,address)`. The signer may use this data to parse the supplied calldata and show the result to the user. The data, however, is considered totally untrusted, and reliability is not expected.
#### Result
- raw [data]: signed transaction in RLP encoded form
- tx [json]: signed transaction in JSON form
#### Sample call (legacy)
```json
{
"id": 2,
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db",
"gas": "0x55555",
"gasPrice": "0x1234",
"input": "0xabcd",
"nonce": "0x0",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x1234"
}
]
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1234",
"gas": "0x55555",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x1234",
"input": "0xabcd",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
}
```
#### Sample call (1559)
```json
{
"id": 2,
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0xd1a9C60791e8440AEd92019a2C3f6c336ffefA27",
"to": "0x8A8eAFb1cf62BfBeb1741769DAE1a9dd47996192",
"gas": "0x33333",
"maxPriorityFeePerGas": "0x174876E800",
"maxFeePerGas": "0x174876E800",
"nonce": "0x0",
"value": "0x10",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
}
]
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"raw": "0x02f891018085174876e80085174876e80083033333948a8eafb1cf62bfbeb1741769dae1a9dd4799619210a44401a6e40000000000000000000000000000000000000000000000000000000000000012c080a0c8b59180c6e0c154284402b52d772f1afcf8ec2d245cf75bfb3212ebe676135ba02c660aaebf92d5e314fc2ba4c70f018915d174c3c1fc6e4e38d00ebf1a5bb69f",
"tx": {
"type": "0x2",
"nonce": "0x0",
"gasPrice": null,
"maxPriorityFeePerGas": "0x174876e800",
"maxFeePerGas": "0x174876e800",
"gas": "0x33333",
"value": "0x10",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x0",
"r": "0xc8b59180c6e0c154284402b52d772f1afcf8ec2d245cf75bfb3212ebe676135b",
"s": "0x2c660aaebf92d5e314fc2ba4c70f018915d174c3c1fc6e4e38d00ebf1a5bb69f",
"to": "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192",
"chainId": "0x1",
"accessList": [],
"hash": "0x8e096eb11ea89aa83900e6816fb182ff0adb2c85d270998ca2dd2394ec6c5a73"
}
}
}
```
#### Sample call with ABI-data
```json
{
"id": 67,
"jsonrpc": "2.0",
"method": "account_signTransaction",
"params": [
{
"from": "0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"gas": "0x333",
"gasPrice": "0x1",
"nonce": "0x0",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"
},
"safeSend(address)"
]
}
```
Response
```json
{
"jsonrpc": "2.0",
"id": 67,
"result": {
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1",
"gas": "0x333",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
}
```
Bash example:
```bash
> curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
{"jsonrpc":"2.0","id":67,"result":{"raw":"0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","tx":{"nonce":"0x0","gasPrice":"0x1","gas":"0x333","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0","value":"0x0","input":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012","v":"0x26","r":"0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e","s":"0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663","hash":"0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"}}}
```
### account_signData
#### Sign data
Signs a chunk of data and returns the calculated signature.
#### Arguments
- content type [string]: type of signed data
- `text/validator`: hex data with custom validator defined in a contract
- `application/clique`: [clique](https://github.com/ethereum/EIPs/issues/225) headers
- `text/plain`: simple hex data validated by `account_ecRecover`
- account [address]: account to sign with
- data [object]: data to sign
#### Result
- calculated signature [data]
#### Sample call
```json
{
"id": 3,
"jsonrpc": "2.0",
"method": "account_signData",
"params": [
"data/plain",
"0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db",
"0xaabbccdd"
]
}
```
Response
```json
{
"id": 3,
"jsonrpc": "2.0",
"result": "0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
}
```
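The same call can be made from the shell, for example against the HTTP endpoint used elsewhere in this document (a sketch, assuming Clef is listening on `localhost:8550`):
```sh
curl -H "Content-Type: application/json" -X POST \
  --data '{"id": 3, "jsonrpc": "2.0", "method": "account_signData", "params": ["data/plain", "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db", "0xaabbccdd"]}' \
  http://localhost:8550/
```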
### account_signTypedData
#### Sign data
Signs a chunk of structured data conformant to [EIP-712](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-712.md) and returns the calculated signature.
#### Arguments
- account [address]: account to sign with
- data [object]: data to sign
#### Result
- calculated signature [data]
#### Sample call
```json
{
"id": 68,
"jsonrpc": "2.0",
"method": "account_signTypedData",
"params": [
"0xcd2a3d9f938e13cd947ec05abc7fe734df8dd826",
{
"types": {
"EIP712Domain": [
{
"name": "name",
"type": "string"
},
{
"name": "version",
"type": "string"
},
{
"name": "chainId",
"type": "uint256"
},
{
"name": "verifyingContract",
"type": "address"
}
],
"Person": [
{
"name": "name",
"type": "string"
},
{
"name": "wallet",
"type": "address"
}
],
"Mail": [
{
"name": "from",
"type": "Person"
},
{
"name": "to",
"type": "Person"
},
{
"name": "contents",
"type": "string"
}
]
},
"primaryType": "Mail",
"domain": {
"name": "Ether Mail",
"version": "1",
"chainId": 1,
"verifyingContract": "0xCcCCccccCCCCcCCCCCCcCcCccCcCCCcCcccccccC"
},
"message": {
"from": {
"name": "Cow",
"wallet": "0xCD2a3d9F938E13CD947Ec05AbC7FE734Df8DD826"
},
"to": {
"name": "Bob",
"wallet": "0xbBbBBBBbbBBBbbbBbbBbbbbBBbBbbbbBbBbbBBbB"
},
"contents": "Hello, Bob!"
}
}
]
}
```
Response
```json
{
"id": 1,
"jsonrpc": "2.0",
"result": "0x4355c47d63924e8a72e509b65029052eb6c299d53a04e167c5775fd466751c9d07299936d304c153f6443dfa05f40ff007d72911b6f72307f996231605b915621c"
}
```
### account_ecRecover
#### Recover the signing address
Derive the address from the account that was used to sign data with content type `text/plain` and the signature.
#### Arguments
- data [data]: data that was signed
- signature [data]: the signature to verify
#### Result
- derived account [address]
#### Sample call
```json
{
"id": 4,
"jsonrpc": "2.0",
"method": "account_ecRecover",
"params": [
"0xaabbccdd",
"0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"
]
}
```
Response
```json
{
"id": 4,
"jsonrpc": "2.0",
"result": "0x1923f626bb8dc025849e00f99c25fe2b2f7fb0db"
}
```
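For example, the signature returned by the `account_signData` sample above can be checked over Clef's IPC socket (a sketch, assuming the default socket path `~/.clef/clef.ipc`):
```sh
echo '{"id": 4, "jsonrpc": "2.0", "method": "account_ecRecover", "params": ["0xaabbccdd", "0x5b6693f153b48ec1c706ba4169960386dbaa6903e249cc79a8e6ddc434451d417e1e57327872c7f538beeb323c300afa9999a3d4a5de6caf3be0d5ef832b67ef1c"]}' | nc -U ~/.clef/clef.ipc
```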
### account_version
#### Get external API version
Get the version of the external API used by Clef.
#### Arguments
None
#### Result
* external API version [string]
#### Sample call
```json
{
"id": 0,
"jsonrpc": "2.0",
"method": "account_version",
"params": []
}
```
Response
```json
{
"id": 0,
"jsonrpc": "2.0",
"result": "6.0.0"
}
```
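The version can also be queried directly over Clef's IPC socket (a sketch, assuming the default socket path `~/.clef/clef.ipc`):
```sh
echo '{"id": 0, "jsonrpc": "2.0", "method": "account_version", "params": []}' | nc -U ~/.clef/clef.ipc
```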
## UI API
These methods need to be implemented by a UI listener.
If the signer is started with the switch `--stdio-ui-test`, it will invoke all known methods and expect the UI to respond with
denials. This can be used during development to ensure that the API is (at least somewhat) correctly implemented.
See `pythonsigner`, which can be invoked via `python3 pythonsigner.py test` to perform the 'denial-handshake-test'.
All methods in this API use object-based parameters, so that there can be no mixup of parameters: each piece of data is accessed by key.
See the [ui API changelog](https://github.com/ethereum/go-ethereum/blob/master/cmd/clef/intapi_changelog.md) for information about changes to this API.
OBS! A slight deviation from the `json` standard is in place: every request and response should be confined to a single line.
Whereas the `json` specification allows for linebreaks, linebreaks __should not__ be used in this communication channel, to make
things simpler for both parties.
### ApproveTx / `ui_approveTx`
Invoked when there's a transaction for approval.
#### Sample call
Here's a method invocation:
```bash
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
```
Results in the following invocation on the UI:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_approveTx",
"params": [
{
"transaction": {
"from": "0x0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"to": "0x0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x1",
"value": "0x0",
"nonce": "0x0",
"data": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "Info",
"message": "safeSend(address: 0x0000000000000000000000000000000000000012)"
}
],
"meta": {
"remote": "127.0.0.1:48486",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
The same method invocation, but with invalid data:
```bash
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x694267f14675d7e1b9494fd8d72fefe1755710fa","gas":"0x333","gasPrice":"0x1","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x0", "data":"0x4401a6e40000000000000002000000000000000000000000000000000000000000000012"},"safeSend(address)"],"id":67}' http://localhost:8550/
```
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_approveTx",
"params": [
{
"transaction": {
"from": "0x0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"to": "0x0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"gas": "0x333",
"gasPrice": "0x1",
"value": "0x0",
"nonce": "0x0",
"data": "0x4401a6e40000000000000002000000000000000000000000000000000000000000000012",
"input": null
},
"call_info": [
{
"type": "WARNING",
"message": "Invalid checksum on to-address"
},
{
"type": "WARNING",
"message": "Transaction data did not match ABI-interface: WARNING: Supplied data is stuffed with extra data. \nWant 0000000000000002000000000000000000000000000000000000000000000012\nHave 0000000000000000000000000000000000000000000000000000000000000012\nfor method safeSend(address)"
}
],
"meta": {
"remote": "127.0.0.1:48492",
"local": "localhost:8550",
"scheme": "HTTP/1.1"
}
}
]
}
```
An example with a missing `to` field and no `data`:
```json
{
"jsonrpc": "2.0",
"id": 3,
"method": "ui_approveTx",
"params": [
{
"transaction": {
"from": "",
"to": null,
"gas": "0x0",
"gasPrice": "0x0",
"value": "0x0",
"nonce": "0x0",
"data": null,
"input": null
},
"call_info": [
{
"type": "CRITICAL",
"message": "Tx will create contract with empty code!"
}
],
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveListing / `ui_approveListing`
Invoked when a request for account listing has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 5,
"method": "ui_approveListing",
"params": [
{
"accounts": [
{
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-20T14-44-54.089682944Z--123409812340981234098123409812deadbeef42",
"address": "0x123409812340981234098123409812deadbeef42"
},
{
"url": "keystore:///home/bazonk/.ethereum/keystore/UTC--2017-11-23T21-59-03.199240693Z--cafebabedeadbeef34098123409812deadbeef42",
"address": "0xcafebabedeadbeef34098123409812deadbeef42"
}
],
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveSignData / `ui_approveSignData`
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 4,
"method": "ui_approveSignData",
"params": [
{
"address": "0x123409812340981234098123409812deadbeef42",
"raw_data": "0x01020304",
"messages": [
{
"name": "message",
"value": "\u0019Ethereum Signed Message:\n4\u0001\u0002\u0003\u0004",
"type": "text/plain"
}
],
"hash": "0x7e3a4e7a9d1744bc5c675c25e1234ca8ed9162bd17f78b9085e48047c15ac310",
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ApproveNewAccount / `ui_approveNewAccount`
Invoked when a request for creating a new account has been made.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 4,
"method": "ui_approveNewAccount",
"params": [
{
"meta": {
"remote": "signer binary",
"local": "main",
"scheme": "in-proc"
}
}
]
}
```
### ShowInfo / `ui_showInfo`
The UI should show the info (a single message) to the user. Does not expect a response.
#### Sample call
```json
{
"jsonrpc": "2.0",
"id": 9,
"method": "ui_showInfo",
"params": [
"Tests completed"
]
}
```
### ShowError / `ui_showError`
The UI should show the error (a single message) to the user. Does not expect a response.
```json
{
"jsonrpc": "2.0",
"id": 2,
"method": "ui_showError",
"params": [
"Something bad happened!"
]
}
```
### OnApprovedTx / `ui_onApprovedTx`
`OnApprovedTx` is called when a transaction has been approved and signed. The call contains the return value that will be sent to the external caller. The return value from this method is ignored - the reason for having this callback is to allow the ruleset to keep track of approved transactions.
When implementing rate-limited rules, this callback should be used.
TL;DR: Use this method to keep track of signed transactions, instead of using the data in `ApproveTx`.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onApprovedTx",
"params": [
{
"raw": "0xf88380018203339407a565b7ed7d7a678680a4c162885bedbb695fe080a44401a6e4000000000000000000000000000000000000000000000000000000000000001226a0223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20ea02aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"tx": {
"nonce": "0x0",
"gasPrice": "0x1",
"gas": "0x333",
"to": "0x07a565b7ed7d7a678680a4c162885bedbb695fe0",
"value": "0x0",
"input": "0x4401a6e40000000000000000000000000000000000000000000000000000000000000012",
"v": "0x26",
"r": "0x223a7c9bcf5531c99be5ea7082183816eb20cfe0bbc322e97cc5c7f71ab8b20e",
"s": "0x2aadee6b34b45bb15bc42d9c09de4a6754e7000908da72d48cc7704971491663",
"hash": "0xeba2df809e7a612a0a0d444ccfa5c839624bdc00dd29e3340d46df3870f8a30e"
}
}
]
}
```
### OnSignerStartup / `ui_onSignerStartup`
This method provides the UI with information about the API versions the signer uses (both internal and external), as well as build info and the external API endpoints,
in key/value form.
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onSignerStartup",
"params": [
{
"info": {
"extapi_http": "http://localhost:8550",
"extapi_ipc": null,
"extapi_version": "2.0.0",
"intapi_version": "1.2.0"
}
}
]
}
```
### OnInputRequired / `ui_onInputRequired`
Invoked when Clef requires user input (e.g. a password).
Example call:
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "ui_onInputRequired",
"params": [
{
"title": "Account password",
"prompt": "Please enter the password for account 0x694267f14675d7e1b9494fd8d72fefe1755710fa",
"isPassword": true
}
]
}
```

@ -0,0 +1,229 @@
---
title: Communication data types
sort_key: F
---
## UI Client interface
These data types are defined for the channel between Clef and the UI.
### SignDataRequest
SignDataRequest contains information about a pending request to sign some data. The data to be signed can be of various types, defined by content-type. Clef has done most of the work in canonicalizing and making sense of the data, and it's up to the UI to present the user with the contents of the `message`.
Example:
```json
{
"content_type": "text/plain",
"address": "0xDEADbEeF000000000000000000000000DeaDbeEf",
"raw_data": "GUV0aGVyZXVtIFNpZ25lZCBNZXNzYWdlOgoxMWhlbGxvIHdvcmxk",
"messages": [
{
"name": "message",
"value": "\u0019Ethereum Signed Message:\n11hello world",
"type": "text/plain"
}
],
"hash": "0xd9eba16ed0ecae432b71fe008c98cc872bb4cc214d3220a36f365326cf807d68",
"meta": {
"remote": "localhost:9999",
"local": "localhost:8545",
"scheme": "http",
"User-Agent": "Firefox 3.2",
"Origin": "www.malicious.ru"
}
}
```
### SignDataResponse - approve
Response to SignDataRequest
Example:
```json
{
"approved": true
}
```
### SignDataResponse - deny
Response to SignDataRequest
Example:
```json
{
"approved": false
}
```
### SignTxRequest
SignTxRequest contains information about a pending request to sign a transaction. Aside from the transaction itself, there is also a `call_info` struct containing messages of various types that the user should be informed of.
As in any request, it's important to consider that the `meta` info also contains untrusted data.
The `transaction` (on input into Clef) can have either `data` or `input` -- if both are set, they must be identical, otherwise an error is generated. However, Clef will always use `data` when passing this struct on (if Clef does otherwise, please file a ticket).
Example:
```json
{
"transaction": {
"from": "0xDEADbEeF000000000000000000000000DeaDbeEf",
"to": null,
"gas": "0x3e8",
"gasPrice": "0x5",
"value": "0x6",
"nonce": "0x1",
"data": "0x01020304"
},
"call_info": [
{
"type": "Warning",
"message": "Something looks odd, show this message as a warning"
},
{
"type": "Info",
"message": "User should see this aswell"
}
],
"meta": {
"remote": "localhost:9999",
"local": "localhost:8545",
"scheme": "http",
"User-Agent": "Firefox 3.2",
"Origin": "www.malicious.ru"
}
}
```
### SignTxResponse - approve
Response to request to sign a transaction. This response needs to contain the `transaction`, because the UI is free to make modifications to the transaction.
Example:
```json
{
"transaction": {
"from": "0xDEADbEeF000000000000000000000000DeaDbeEf",
"to": null,
"gas": "0x3e8",
"gasPrice": "0x5",
"value": "0x6",
"nonce": "0x4",
"data": "0x04030201"
},
"approved": true
}
```
### SignTxResponse - deny
Response to SignTxRequest. When denying a request, there is no need to provide the transaction in return.
Example:
```json
{
"transaction": {
"from": "0x",
"to": null,
"gas": "0x0",
"gasPrice": "0x0",
"value": "0x0",
"nonce": "0x0",
"data": null
},
"approved": false
}
```
### OnApproved - SignTransactionResult
SignTransactionResult is used in the call `clef` -> `OnApprovedTx(result)`
This occurs _after_ successful completion of the entire signing procedure, but right before the signed transaction is passed to the external caller. This method (and data) can be used by the UI to signal to the user that the transaction was signed, but it is primarily useful for ruleset implementations.
A ruleset that implements a rate limitation needs to know what transactions are sent out to the external interface. By hooking into this method, the ruleset can keep track of that count.
**OBS:** Note that if an attacker can restore your `clef` data to a previous point in time (e.g. through a backup), the attacker can reset such windows, even if they are unable to decrypt the content.
The `OnApproved` method cannot be responded to; it is purely informative.
Example:
```json
{
"raw": "0xf85d640101948a8eafb1cf62bfbeb1741769dae1a9dd47996192018026a0716bd90515acb1e68e5ac5867aa11a1e65399c3349d479f5fb698554ebc6f293a04e8a4ebfff434e971e0ef12c5bf3a881b06fd04fc3f8b8a7291fb67a26a1d4ed",
"tx": {
"nonce": "0x64",
"gasPrice": "0x1",
"gas": "0x1",
"to": "0x8a8eafb1cf62bfbeb1741769dae1a9dd47996192",
"value": "0x1",
"input": "0x",
"v": "0x26",
"r": "0x716bd90515acb1e68e5ac5867aa11a1e65399c3349d479f5fb698554ebc6f293",
"s": "0x4e8a4ebfff434e971e0ef12c5bf3a881b06fd04fc3f8b8a7291fb67a26a1d4ed",
"hash": "0x662f6d772692dd692f1b5e8baa77a9ff95bbd909362df3fc3d301aafebde5441"
}
}
```
### UserInputRequest
Sent when Clef needs the user to provide data. If `isPassword` is true, the input field should be treated accordingly (echo-free).
Example:
```json
{
"prompt": "The question to ask the user",
"title": "The title here",
"isPassword": true
}
```
### UserInputResponse
Response to UserInputRequest
Example:
```json
{
"text": "The textual response from user"
}
```
### ListRequest
Sent when a request has been made to list addresses. The UI is provided with the full `account`s, including local directory names. Note: this information is not passed back to the external caller, who only sees the `address`es.
Example:
```json
{
"accounts": [
{
"address": "0xdeadbeef000000000000000000000000deadbeef",
"url": "keystore:///path/to/keyfile/a"
},
{
"address": "0x1111111122222222222233333333334444444444",
"url": "keystore:///path/to/keyfile/b"
}
],
"meta": {
"remote": "localhost:9999",
"local": "localhost:8545",
"scheme": "http",
"User-Agent": "Firefox 3.2",
"Origin": "www.malicious.ru"
}
}
```
### ListResponse
Response to a list request. The response contains a list of all addresses to show to the caller. Note: the UI is free to respond with any address to the caller, regardless of whether it exists or not.
Example:
```json
{
"accounts": [
{
"address": "0x0000000000000000000000000000000000000000",
"url": ".. ignored .."
},
{
"address": "0xffffffffffffffffffffffffffffffffffffffff",
"url": ""
}
]
}
```

@ -0,0 +1,27 @@
---
title: Home
root: ..
---
## What is Geth?
Geth (go-ethereum) is a [Go](https://go.dev/) implementation of [Ethereum](http://ethereum.org) - a
gateway into the decentralized web.
Geth has been a core part of Ethereum since the very beginning. Geth was one of the original
Ethereum implementations, making it the most battle-hardened and tested client.
Geth is an Ethereum *execution client*, meaning it handles transactions, deployment and execution
of smart contracts and contains an embedded computer known as the *Ethereum Virtual Machine*.
## What is Ethereum?
Ethereum is a technology for building apps and organizations, holding assets, transacting and
communicating without being controlled by a central authority. It is the base of a new, decentralized
internet.
Read more on our [Ethereum page](/ethereum) or on [ethereum.org](http://ethereum.org).
