A typical verifier life cycle

Defines a typical verifier life cycle and how the verifier interacts with the network


The verifier

As was previously explained in the Knowledgebase article 'How the network works', the entirety of the Nyzo network is called 'The mesh' and nodes on the network are separated into two groups:

  • Cycle candidates
  • In-cycle verifiers

To go through the verifier life cycle in detail, it is necessary that we first set up a Nyzo verifier on a new Virtual Private Server (VPS).

Details on how to accomplish this have been laid out on the 'How to set up a verifier' page, found in the 'Getting started' section of the Knowledgebase.

Initial configuration of the verifier

During the installation process, various events take place:

  • a directory /home/ubuntu is created (non-AWS installation only)
  • port 9444 is opened for incoming and outgoing TCP connections
  • port 9446 is opened for incoming and outgoing UDP connections
  • the Linux distribution is updated through apt
  • haveged is installed on the machine
  • openjdk-8 is installed on the machine
  • supervisor is installed on the machine
  • git is installed on the machine
  • the nyzoVerifier repository is cloned
  • the cloned files now residing in /home/ubuntu/nyzoVerifier are compiled using gradlew
  • a new directory is created: /var/lib/nyzo/production
  • the trusted_entry_points file is copied to the newly generated directory
  • the nyzoVerifier.conf file is copied to the /etc/supervisor/conf.d directory to configure supervisor
  • the nickname argument is echoed into the /var/lib/nyzo/production/nickname file
  • supervisor is initiated using the sudo supervisorctl reload command
  • the status of the supervisor application is displayed using the sudo supervisorctl status command
  • an entry for the sudo supervisorctl reload command is added as a cronjob, to enable automatic execution after reboot
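
The file-system portion of the steps above can be sketched as a few shell commands. Package installation, compilation, and the supervisor steps are omitted; the ROOT prefix is an assumption added here so the sketch can run without root privileges (on a real VPS it would be empty, giving the absolute paths from the list), and the entry-point hostname is a placeholder rather than the real default file contents.

```shell
# Sandbox prefix so this sketch runs unprivileged; empty on a real VPS.
ROOT="${ROOT:-./sandbox}"

# Create the production data directory (/var/lib/nyzo/production).
mkdir -p "$ROOT/var/lib/nyzo/production"

# Place the trusted_entry_points file into it. Placeholder content here;
# the real file ships with the nyzoVerifier repository.
printf 'trusted.node.example:9444\n' > "$ROOT/var/lib/nyzo/production/trusted_entry_points"
```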


The nyzoVerifier instance needs nodes to exist in the trusted_entry_points file to make an initial connection to the network.
By default, the entries in this file are FQDNs under the developer's official domain.

These FQDNs are configured to point to random nodes in the network. While this introduces a small element of trust, anyone can instead randomly select nodes from the mesh page and use those IPs.
Alternatively, someone who already has a fair number of nodes to their name can rely solely on their own set of nodes to synchronize with the network, if they wish to do so.
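
A trusted_entry_points file built from self-selected mesh nodes would simply list one node per line in host:port form. The hostname and IP below are placeholders for illustration, not real network nodes:

```
verifier.example.com:9444
203.0.113.25:9444
```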


The nyzoVerifier.conf file is used to configure supervisor.
The supervisor package available for Linux distributions, as the name suggests, supervises a process under the conditions specified in its configuration file.

The default parameters in the file have the following effect on the process:

  • autostart is enabled
  • autorestart is enabled
  • the minimum time the process must stay up and running to be considered a successful start is 10 seconds, configured using the startsecs parameter
  • the maximum number of consecutive start attempts that fail to satisfy the startsecs condition is 20 by default, configured using the startretries parameter; once this limit is reached, the process enters a FATAL state
  • regular output by the process is by default written to /var/log/nyzo-verifier-stdout.log and is governed by the stdout_logfile parameter
  • error output by the process is by default written to /var/log/nyzo-verifier-stderr.log and is governed by the stderr_logfile parameter
  • both files have a default cap of 10MB per log file to ensure that writing to them doesn't consume excessive disk space.
    This limit is imposed by the stdout_logfile_maxbytes and stderr_logfile_maxbytes parameters.
  • the command necessary for executing the compiled jar file is added to the command parameter
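
Putting the parameters above together, a nyzoVerifier.conf along these lines would reproduce the described behavior. The program name, jar path, and memory flag shown are assumptions for illustration; the actual file shipped with the repository may differ.

```
[program:nyzo_verifier]
; Assumed jar path and JVM options; check the repository's own conf file.
command=java -jar -Xmx3G /home/ubuntu/nyzoVerifier/build/libs/nyzoVerifier-1.0.jar
autostart=true
autorestart=true
startsecs=10
startretries=20
stdout_logfile=/var/log/nyzo-verifier-stdout.log
stderr_logfile=/var/log/nyzo-verifier-stderr.log
stdout_logfile_maxbytes=10MB
stderr_logfile_maxbytes=10MB
```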


The nickname file residing in the /var/lib/nyzo/production directory is straightforward: any text stored in this file (subject to a size limit imposed by the verifier) will be used as the verifier's nickname and relayed to other nodes. Examples of nicknames can be found on both the mesh and cycle candidate pages.
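
Setting the nickname is a single write to that file. The sandbox prefix is an assumption so the sketch runs unprivileged (the real path is /var/lib/nyzo/production), and the name itself is arbitrary:

```shell
# /var/lib/nyzo/production on a real VPS; sandboxed here so the
# sketch can run without root privileges.
PROD="${PROD:-./sandbox/var/lib/nyzo/production}"
mkdir -p "$PROD"

# Any short text works, subject to the size limit enforced by the verifier.
echo "MyVerifier" > "$PROD/nickname"
```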


After the sudo supervisorctl reload command has been invoked, the verifier contacts the trusted entry points and determines the current state of the network.

If the always_track_blockchain=1 parameter is set in the /var/lib/nyzo/production/preferences file, the verifier will contact the in-cycle verifiers in a similar fashion to an in-cycle verifier and track the frozen edge and chain state on a per-block basis.

If the always_track_blockchain parameter has not been configured, the node will instead periodically catch up with the network. This parameter exists to spare the in-cycle verifiers the unnecessary burden of cycle candidates continuously requesting information they don't actually need.
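
Enabling continuous tracking is a one-line addition to the preferences file. The sandbox prefix is an assumption so the sketch runs unprivileged; on a real verifier the file is /var/lib/nyzo/production/preferences.

```shell
# /var/lib/nyzo/production on a real VPS; sandboxed here so the
# sketch can run without root privileges.
PROD="${PROD:-./sandbox/var/lib/nyzo/production}"
mkdir -p "$PROD"

# Ask the verifier to track the frozen edge block by block.
echo "always_track_blockchain=1" >> "$PROD/preferences"
```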

Waiting period

Cycle candidates must remain active and periodically check in with every in-cycle verifier to remain valid cycle candidates.
This waiting period lasts for 30 days and largely prevents botnets and illicitly funded servers from gaining an immediate and significant advantage in joining the network.

Selection procedure

As explained in detail on the 'How the network works' page, a cycle candidate can be selected by the network to become an in-cycle verifier.

After jumping through the necessary hoops, the candidate joins the network and becomes an in-cycle verifier.

Part of the cycle

Great. Your candidate is now an in-cycle verifier.
It will now perform a multitude of tasks automatically:

  • produce blocks once per cycle
  • vote for every block produced by other verifiers
  • vote for new verifiers
  • keep a list of candidate nodes and their timestamps
  • keep a list of in-cycle verifiers and their performance scores
  • vote for in-cycle verifiers to be removed from the cycle if their performance score is too high
  • store metadata about important processes such as verifier votes on-chain

It's also possible to manually perform the following actions:

  • manually vote for a new verifier
  • manually vote for a different block hash
  • broadcast a cycle transaction to be stored on-chain
  • export the current voting state of the network
  • export the current removal voting state of the network
  • send and receive a variety of different messages to different nodes

Now that you're part of the cycle, your node will stay in the cycle as long as its performance score isn't regarded as bad (too high) by a majority of other participating nodes and as long as it produces blocks. If your verifier malfunctions for a long period of time (days) or misses one block production event (once per cycle), your node will leave the cycle.

Since leaving the cycle is a costly event due to the competition, and thus the cost of entry, associated with a single node, a separate run mode of the verifier software exists to protect verifiers that happen to be malfunctioning. A verifier can malfunction in many different ways, with VPS host maintenance, DDoS attacks, and zero-day exploits among the most common causes.

Since the sentinel is an important component of the network, its intricacies are laid out in a separate article.
