
Downloads:

101

Downloads of v 1.1.0:

71

Last Update:

24 May 2023

Package Maintainer(s):

Software Author(s):

  • EOS Network Foundation

Tags:

antelopeio-dune package

AntelopeIO DUNE


AntelopeIO DUNE 1.1.0

Legal Disclaimer: Neither this package nor Chocolatey Software, Inc. are affiliated with or endorsed by EOS Network Foundation. The inclusion of EOS Network Foundation trademark(s), if any, upon this webpage is solely to identify EOS Network Foundation goods or services and not for commercial purposes.


Some Checks Have Failed or Are Not Yet Complete

Not All Tests Have Passed


Validation Testing Passed


Verification Testing Failed

Scan Testing Successful:

No detections found in any package files

Deployment Method: Individual Install, Upgrade, & Uninstall

To install AntelopeIO DUNE, run the following command from the command line or from PowerShell:

> choco install antelopeio-dune --version=1.1.0

To upgrade AntelopeIO DUNE, run the following command from the command line or from PowerShell:

> choco upgrade antelopeio-dune --version=1.1.0

To uninstall AntelopeIO DUNE, run the following command from the command line or from PowerShell:

> choco uninstall antelopeio-dune

Deployment Method:

NOTE

This applies to both open source and commercial editions of Chocolatey.

1. Enter Your Internal Repository Url

(this should look similar to https://community.chocolatey.org/api/v2/)


2. Setup Your Environment

1. Ensure you are set for organizational deployment

Please see the organizational deployment guide

2. Get the package into your environment

  • Open Source or Commercial:
    • Proxy Repository - Create a proxy nuget repository on Nexus, Artifactory Pro, or a proxy Chocolatey repository on ProGet. Point your upstream to https://community.chocolatey.org/api/v2/. Packages cache on first access automatically. Make sure your choco clients are using your proxy repository as a source and NOT the default community repository. See source command for more information.
    • You can also just download the package and push it to a repository.

3. Copy Your Script

choco upgrade antelopeio-dune -y --source="'INTERNAL REPO URL'" [other options]

See options you can pass to upgrade.

See best practices for scripting.

Add this to a PowerShell script or use a Batch script with tools and in places where you are calling directly to Chocolatey. If you are integrating, keep in mind enhanced exit codes.

If you do use a PowerShell script, use the following to ensure bad exit codes are shown as failures:


choco upgrade antelopeio-dune -y --source="'INTERNAL REPO URL'" 
$exitCode = $LASTEXITCODE

Write-Verbose "Exit code was $exitCode"
$validExitCodes = @(0, 1605, 1614, 1641, 3010)
if ($validExitCodes -contains $exitCode) {
  Exit 0
}

Exit $exitCode

- name: Install antelopeio-dune
  win_chocolatey:
    name: antelopeio-dune
    version: '1.1.0'
    source: INTERNAL REPO URL
    state: present

See docs at https://docs.ansible.com/ansible/latest/modules/win_chocolatey_module.html.


chocolatey_package 'antelopeio-dune' do
  action    :install
  source   'INTERNAL REPO URL'
  version  '1.1.0'
end

See docs at https://docs.chef.io/resource_chocolatey_package.html.


cChocoPackageInstaller antelopeio-dune
{
    Name     = "antelopeio-dune"
    Version  = "1.1.0"
    Source   = "INTERNAL REPO URL"
}

Requires cChoco DSC Resource. See docs at https://github.com/chocolatey/cChoco.


package { 'antelopeio-dune':
  ensure   => '1.1.0',
  provider => 'chocolatey',
  source   => 'INTERNAL REPO URL',
}

Requires Puppet Chocolatey Provider module. See docs at https://forge.puppet.com/puppetlabs/chocolatey.


4. If applicable - Chocolatey configuration/installation

See infrastructure management matrix for Chocolatey configuration elements and examples.

Package Approved

This package was approved by moderator flcdrg on 16 Dec 2023.

Description

antelopeio-dune eos cleos nodeos leap cdt antelope antelopeio

Docker Utilities for Node Execution (DUNE) is a tool to abstract over Leap programs, CDT and other services/tools related to Antelope blockchains.


tools\bootstrap.bat
@echo off

SET mypath=%~dp0
docker build --no-cache -f Dockerfile.win -t dune %mypath%
tools\bootstrap.sh
#!/usr/bin/env bash
set -o errexit -o pipefail -o noclobber -o nounset

getopt --test && echo "'getopt --test' failed in this environment." && exit 1

LONGOPTS=leap:,cdt:,help
OPTIONS=l:c:h

PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")

eval set -- "$PARSED"
LEAP_ARGUMENT=
CDT_ARGUMENT=

usage="$(basename "$0") [-l|--leap=version] [-c|--cdt=version]
where:
  -l, --leap=version
    sets the leap version
  -c, --cdt=version
    sets the CDT version"

while true; do
    case "$1" in
        -l|--leap) # Specifies leap version, i.e. --leap=3.1.0
            LEAP_ARGUMENT="--build-arg LEAP_VERSION=$2"
            shift 2
            ;;
        -c|--cdt) # Specifies cdt version, i.e. --cdt=3.0.0
            CDT_ARGUMENT="--build-arg CDT_VERSION=$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        -h | --help) # Display help.
            echo -e "$usage"
            exit 0
            ;;
    esac
done

SDIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

GROUP_ID=0
# for mac users
if [[ $(uname) == "Darwin" ]]; then
  GROUP_ID=200
fi

docker build --no-cache --build-arg USER_ID=0 --build-arg GROUP_ID="$GROUP_ID" $LEAP_ARGUMENT $CDT_ARGUMENT -f Dockerfile.unix -t dune "$SDIR"
tools\Dockerfile.unix
 
tools\Dockerfile.win
 
tools\dune
 
tools\dune.bat
@echo off

SET mypath=%~dp0

where python.exe >nul
if %ERRORLEVEL% EQU 0 (
  python %mypath%\src\dune %*
  ) else (
  where python3.exe >nul
  if %ERRORLEVEL% EQU 0 (
    python3 %mypath%\src\dune %*
  ) else (
    echo "Python/3 was not found, please install and add to PATH."
	exit /b
  )
)
tools\LICENSE.txt
MIT License

Copyright (c) 2022 Bucky Kittinger

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

It is permissible to redistribute the software by embedding it in the Chocolatey package.
tools\README.md
# Docker Utilities for Node Execution (DUNE)

![Logo](docs/images/logo.png)

Docker Utilities for Node Execution (DUNE) is a tool to abstract over [Leap](https://github.com/AntelopeIO/leap) programs, 
[CDT](https://github.com/AntelopeIO/cdt), and other services/tools to perform the functions of node management, compiling smart contracts, 
running tests, and several other common tasks required to develop smart contracts on [Antelope](https://github.com/AntelopeIO) blockchains.

## Getting started

First we must install [Docker](https://docs.docker.com/get-docker/).

![Get Docker](docs/images/get-docker.png)

Once you select the Docker Desktop build for your operating system, the installation process is very straightforward.

### Linux - docker setup

Start your Docker Desktop, visit its settings and add the root directory of the host system to shared directories.

![Docker Desktop settings](docs/images/linux_docker_settings_shares.png)

Check the installation with the following command:

```console
$ docker --help
```

This should display the list of commands and features. If it fails with an unknown-command error, the installation did not complete correctly.

### Several useful facts for working with Docker in Linux

- The Docker installation package consists of two independent products which install together: Docker Engine and Docker Desktop.
- It is easy to confuse which system you are currently working with.
- Docker Desktop has a built-in daemon which works in parallel with the `dockerd` daemon.
- Docker Desktop keeps its settings and its storage of images and containers in the user's home directory.
- Docker Engine keeps its settings and storage in the system directories.
- Docker Desktop only works with userns-remap turned on; this is hardcoded and can't be changed.

Therefore, to successfully work with DUNE you should:

>+ Download the latest DUNE release on [Windows](#dune-windows) or on [Linux](#dune-linux "Linux")
>+ Add a root directory of the host system to the list of shared directories in Docker Desktop settings.
>+ Keep Docker Desktop running all the time when you work with DUNE.


#### Python 3

The distro you are using determines which `python3` package to install.

| Distro | Package Name                                              |
| :----- | :-------------------------------------------------------- |
| Ubuntu | python3                                                   |
| RHEL   | rh-python36 * (need to use `scl enable rh-python36 bash`) |
| Centos | python3                                                   |
| Arch   | python                                                    |
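
For example, on Ubuntu the installation might look like this (a minimal sketch; substitute the package name from the table above for your distro):

```console
$ sudo apt-get update
$ sudo apt-get install -y python3
```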

### DUNE installation on Linux <a name="dune-linux"></a>

This is the fastest way to get started. From the [latest release](https://github.com/AntelopeIO/DUNE/releases/latest) page, download the DUNE `*.deb` file, or visit the [release tags](https://github.com/AntelopeIO/DUNE/releases) page to download a specific version of the DUNE deb package.

Once you have a `*.deb` file downloaded, you can install it as follows:
```bash
sudo apt-get update
sudo apt-get install -y ~/Downloads/antelopeio-dune*.deb
```
Your download path may vary.

#### Alternative: DUNE installation using RPM package

From the [latest release](https://github.com/AntelopeIO/DUNE/releases/latest) page, download the DUNE `*.rpm` file.

Once you have a `*.rpm` file downloaded, you can install it as follows:
```bash
sudo rpm -i ~/Downloads/antelopeio-dune*.rpm
```
Your download path may vary.

#### DUNE installation - verification

Finally, verify that DUNE was installed correctly in `/usr/opt/DUNE/`. First [add DUNE to PATH](#add-dune-to-path), then check:
```bash
dune --version
```
You should see a DUNE version number. For example:
```
v1.0.0
```

The latest DUNE Docker image will be downloaded automatically when starting DUNE, as described in [Node management](#node-management).

#### Add DUNE to PATH

To keep from having to install files to the user's system, the preferred method of usage is to add this directory to your `PATH`.

```console
$ echo "PATH=<LocationOfDUNE>:$PATH" >> .bashrc
```
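
After appending that line, reload your shell configuration so the change takes effect in the current session (a minimal sketch, assuming bash):

```console
$ source ~/.bashrc
```
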
### Rebuild the DUNE image

If you want to rebuild the DUNE image, open your preferred terminal application and run the following command:

```console
<PathToDUNE>/DUNE$ ./bootstrap.sh
```

### Windows 10 & 11

In some cases (e.g. running Docker from VirtualBox) you might need to turn on hardware virtualization in the BIOS of your computer. Docker should give an error stating this failure.
Because of the variance of motherboards and BIOS implementations we can't give a clear description of how to turn this on,
but a quick Google search for your PC make and model should find the information you need.

You can then start the program `Docker Desktop`.

You should see the following:

![docker-desktop](docs/images/docker-desktop.png)

#### Python 3

Installing Python 3 on Windows is pretty straightforward.

Visit the download page for [Python 3](https://python.org/downloads). You should see the link to the latest Python 3:

![win-python](docs/images/win-python.png)

Make sure you mark "Add Python to PATH" during installation.
After installation, open `cmd.exe` and verify that `python --version` returns the current Python version.

#### DUNE installation on Windows <a name="dune-windows"></a>

1. Install [Chocolatey](https://docs.chocolatey.org/en-us/choco/setup).
2. In a PowerShell session run as administrator, run the following command:
`choco install antelopeio-dune --version=1.1.0`
3. Restart your computer (this is because `%PATH%` has to be reloaded; in `cmd.exe` it is enough to run `refreshenv`).
4. Open PowerShell / `cmd.exe` and verify that the following command works: `dune.bat --version`.

**NOTE**: Chocolatey does not detect python / docker-desktop installations performed by other means, so it is possible some dependencies will be installed twice. Usually this is not an issue, but if it is, you can follow [this solution](https://stackoverflow.com/a/71605170).

### Mac OS

When the installation finishes, check it with the following command:
```console
$ docker --help
```
#### Python 3

Python 3 should already be installed.

#### Add DUNE to PATH

To keep from having to install files to the user's system, the preferred method of usage is to add this directory to your `PATH`.

```console
$ echo "PATH=<LocationOfDUNE>:$PATH" >> .bashrc
```

## DUNE commands

---

**-h** or **--help** 
This will display the available commands, a small description of each, and their argument values.

---

**--start** 
This will start a new node for deploying smart contracts and send actions against. 
This command takes a name of your choosing and an optional `config.ini` (look at scripts/config.ini for reference).

---

**--stop** 
This will stop a node that is running. This command takes a name of a node that was previously started.

---

**--remove** 
This will remove a node from the system. This command takes a name of a node that was previously started.

---

**--list** 
This will print the status of all nodes currently in the system. 
It will display if the node is active, running, and the ports for http/p2p/SHiP.

---

**--simple-list** 
Does the same thing as `--list` but does not use unicode and other formatting for use with scripts or plugins.

---

**--set-active** 
This will set a node as the current active node. This command takes a name of a node that was previously started.

---

**--get-active** 
This will return the name of the currently active node.

---

**--export-node** 
This will create a snapshot, tar up the state snapshot, block log, and index, and export them to the desired location. 
This command takes a name of a node that was previously started, and a directory to save the exported node contents.
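
For example, exporting a node to a local directory (a minimal sketch; the node name and path are placeholders):

```console
$ dune --export-node test_node ./exports/
```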

---

**--import-node** 
This will import a previously exported node.tgz. 
This command takes the path of the export and the name of what you want to name the imported node.

---

**--monitor** 
This will return information from the currently active node.

---

**--import-dev-key** 
This will import a private key into the pre-created developer wallet. This command takes a private key.

---

**--create-key** 
This will produce a public key and private key pair for development uses.

---

**--export-wallet** 
This will export the system wallet to your current directory.

---

**--import-wallet** 
This will import an exported wallet from a given location. This command takes a path that points to the exported wallet.

---

**--create-account** 
This will create a new on-chain account. 
This command takes a compatible name for the new Antelope account and an optional creator (also a valid Antelope account name).

---

**--create-cmake-app** 
This will produce a new smart contract project that utilizes CMake as its build system. This command takes a project name and a directory.

---

**--create-bare-app** 
This will produce a new smart contract project that is bare, i.e. uses only `cdt-cpp` tools. This command takes a project name and a directory.

---

**--cmake-build** 
This will build a given CMake app project. This command takes a directory to the project and optional CMake flags.
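
For example, passing extra CMake flags through after `--` (a sketch; the flag shown is only an illustration):

```console
$ dune --cmake-build ./hello -- -DCMAKE_BUILD_TYPE=Release
```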

---

**--destroy-container** 
This will destroy and remove the currently running container. 
WARNING! This will delete all data that is running. 
This is useful if you need to update to a new version of DUNE or if you corrupt the container somehow.

---

**--stop-container** 
This will stop the currently running container.

---

**--start-container** 
This will start the `dune` container.

---

**--set-core-contract** 
This will deploy the core contract to an account. This command takes a valid Antelope account name.

---

**--set-bios-contract** 
This will deploy the bios contract to an account. This command takes a valid Antelope account name.

---

**--set-token-contract** 
This will deploy the token contract to an account. This command takes a valid Antelope account name.

---

**--bootstrap-system** 
This will install the boot contract to the `eosio` account and activate all protocol features.

---

**--bootstrap-system-full** 
This will install the boot contract to the `eosio` account and activate all protocol features. 
It will also create all the accounts needed by the core, token, and multisig contracts from [reference-contracts](https://github.com/AntelopeIO/reference-contracts) and deploy those three contracts to their corresponding accounts. Note that the core contract will replace the boot contract on the `eosio` account.

---
**--send-action** 
This will send an action to an account. 
This command takes a valid Antelope account name, a valid Antelope action name, the data payload needed and the permission.

---

**--get-table** 
This will get table data from the specified table. 
This command takes a valid Antelope account name, table scope, and table name.

---

**--activate-feature** 
This will activate a protocol feature. This command takes a code name for the protocol feature.

---

**--list-features** 
This will list the available protocol feature code names.

---

**--leap** 
Sets a specific version of Leap

---

**--cdt** 
Sets a specific version of CDT (Contract Development Toolkit)

---

**--version-all** 
Lists the versions of DUNE, Leap, and CDT

---

**--upgrade** 
Upgrades the DUNE Docker image to the latest version

---

**--** 
(Not listed with help) This will allow you to call the tool and pass through to the underlying system.

---

<br/><br/>

## Concepts and operations

The core concept of this utility is to abstract over Leap programs such as `nodeos` and `cleos`, CDT, etc. 
As such, some of the commands might seem restrictive. If you find any of them too restrictive, 
you can use `--` followed by whatever `cleos`, `nodeos`, CDT, and OS commands you need.
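
For example, you can run a `cleos` query directly inside the container this way (a minimal sketch):

```console
$ dune -- cleos get info
```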

When you run any command with DUNE, a container will automatically be created for you if one has not been created yet. 
The `start-container` command shouldn't normally be needed during regular operation.

A developer wallet is automatically created for you and is kept unlocked, 
so none of the commands will ever ask you to unlock the wallet. 
If you need to run any `cleos` wallet commands or `keosd` commands via `--` and the wallet is locked, 
simply run one of the wallet commands from DUNE first and it will unlock the wallet.

If you deploy a smart contract to an account it will automatically add the `code` permission to that account for you.

The drive/directory that your workspace is in is mapped into the container and prefixed with `/host`. 
So on Windows this would be `/host/Users/<name>/<some path>`. 
On Linux and Mac this would be something like `/host/home/<name>/<some path>`.

## Node management

For all of the deployment commands and most of the commands in general you have to have at least one node up and running.

Let's create a new node.

```console
$ dune --start test_node
```

If you run this for the first time, the DUNE Docker image will be downloaded. The command will create a new Leap node and start it up.

If you have custom ports or options you need for the node, a config.ini file can be provided.

```console
$ dune --start test_node <path-to-config>/config.ini
```

Now, let's say we are done with that node for a while; we can stop any node we want.

```console
$ dune --stop test_node
```

From here we can also remove any node via `--remove <node name>`.

The command `--list` will provide a listing of nodes in the current container.

```console
$ dune --list
```

<img src="docs/images/node-list.png" alt="node-list" width="600">

This gives us information about the nodes: their particular ports, whether each node is running, and a new concept, whether the node is `active`.

DUNE operates in a state-based way: you set whichever node you want as the active node and fire away at the commands; 
any command that is directed towards a node, or listens to one, will pick up the correct URL information immediately. 
This state persists after shutting down the software.

When you create a new node, as long as it is successful, it will automatically switch to that node as the active node.

You can manually set the active node with the command `--set-active <node name>`.

### Multiple nodes

When creating multiple nodes, if the ports clash you can either stop the currently running node, 
or change the ports via the config.ini and start the new node in parallel.

If we start them in parallel, we can create complex node topologies via the config.ini 
and try to replicate something like the EOS mainnet.

These topologies are out of the scope of this README, but please look at the documentation for Leap node configurations.

## Contract development

### CMake contract development

Let's start by creating a new project in our workspace.

```console
$ dune --create-cmake-app hello ./
```

This should produce a file structure like the picture below:

<img src="docs/images/cmake-init.png" alt="cmake-init" width="800">

Modify the source code how you like.

Then, let's compile the contract.

```console
$ dune --cmake-build ./
```

<img src="docs/images/cmake-build.png" alt="cmake-build" width="400">

### Bare contract development

Let's start by creating a new bare project in our workspace.

```console
$ dune --create-bare-app hello ./
```

This should produce a file structure like the picture below:

<img src="docs/images/bare-init.png" alt="bare-init" width="400">

Modify the source code how you like.

Then, let's compile the contract.

```console
$ dune -- cdt-cpp /host/<path>/hello/hello.cpp -o /host/<path>/hello/hello.wasm
```

### Creating accounts and deploying smart contracts

Let's start off by creating some accounts.

```console
$ dune --create-account bucky
$ dune --create-account test
$ dune --create-account areg
```

From here we can deploy built smart contracts.

```console
$ dune --deploy ./hello bucky
$ dune --deploy ./example/talk/build/talk test
$ dune --deploy ./sudo/build/sudo areg
```

### Sending actions

Let's send some actions to the accounts.

```console
$ dune --send-action bucky hi '[bucky]' bucky@active
$ dune --send-action test post '[1, 0, bucky, "message"]' test
$ dune --send-action areg wrap ...
```

### Table information

The only table command currently exposed is `--get-table`, which is analogous to `cleos get table`.

```console
$ dune --get-table <ACCOUNT> <SCOPE> <TABLE NAME>
```
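
A concrete call might look like the following (a sketch using the conventional `eosio.token` layout, where the contract account is `eosio.token`, the scope is the token holder, and the table is `accounts`; the account names are placeholders):

```console
$ dune --get-table eosio.token bucky accounts
```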

This will allow for all of the same utilities from cleos itself. 
As we move forward we hope to greatly expand upon these utilities.

### DApp/WebApp

The services through the docker container are exposed at 8888 for http, 9876 for p2p and 8080 for SHiP. 
You will need to ensure the running node is using those ports.
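
As a quick connectivity check from the host, you can query the standard chain API endpoint on the HTTP port (a sketch; it assumes `curl` is installed and an active node is running):

```console
$ curl http://127.0.0.1:8888/v1/chain/get_info
```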

## Account management

Earlier we saw a simple way to create accounts using `--create-account <ACCOUNT NAME>`.

But we can also supply the creator of the account and, optionally, the public and private keys.

```console
$ dune --create-account bucky foo
```

The above command will use the existing `foo` account to create a new account called `bucky`; 
it will automatically generate a public/private key pair for this account and import the private key into the development wallet.

Or, to explicitly specify the public and private key pair of the new account:

```console
$ dune --create-account bucky foo EOS7qPSKJhqygQTSNjMy8aH6TL6NtsYJnBJ7fxh7Y4SFLiXYdhjGD 5KNYGzaLo9aTjiXG7oeKGy5JWkQVkAha1Xi9DXNedvojovPhnLC
```

Clearly you don't want to do this with real private keys or sensitive accounts.

## Bootstrapping nodes

At some point you will want to activate protocol features for your chain.

This can be achieved in a few ways with DUNE.

The first is by using the command `--activate-feature`.

This will require you to know which protocol features you want to enable. 
The available features can be listed via `--list-features`.

It will try to preactivate the protocol features if that hasn't already been done, so you shouldn't have to worry about that step.
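
For example (a sketch; substitute a real code name from the `--list-features` output for the placeholder):

```console
$ dune --list-features
$ dune --activate-feature <FEATURE_CODE_NAME>
```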

Next is using one of the two `bootstrapping` commands.

The first is `bootstrap-system`.

```console
$ dune --bootstrap-system
```

This will preactivate protocol features, set the boot contract and activate all protocol features.

The second is `bootstrap-system-full`.

```console
$ dune --bootstrap-system-full
```

This will do the same as `--bootstrap-system` but additionally set the contracts 
from [reference-contracts](https://github.com/AntelopeIO/reference-contracts)
and create the correct accounts needed for those.

## System-level commands

### Wallet

The default wallet is created for you and is always unlocked when using this system. 
The DUNE wallet is not in any way designed to be a `secure` wallet.

During testing or replication of state we sometimes might want to import a previous wallet.

DUNE exposes two commands `--export-wallet` and `--import-wallet <WALLET DIR>`.

`export-wallet` will produce a `.tgz` at the current location called `wallet.tgz`.

<img src="docs/images/export-wallet.png" alt="export-wallet" width="400">

`import-wallet` will take the directory of the `wallet.tgz` and import it over the current wallet of the system.
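
A typical round trip looks like this (a minimal sketch; it assumes the wallet was exported into, and is imported from, the current directory):

```console
$ dune --export-wallet
$ dune --import-wallet ./
```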

You can also create a public/private key pair with the command `--create-key`.

And lastly, we can import a development key if we need to manually do so with the command `--import-dev-key`.

### Container

Sometimes the running container can get corrupted or overly large and you will want to purge it and start fresh.

To do this use the command `destroy-container`.

```console
$ dune --destroy-container
```

This will stop the running container and erase it.

When you are done for the day it is best practice to stop the container, which is exposed via `stop-container`.

```console
$ dune --stop-container
```

This will stop all running nodes safely, and then stop the running container.

And lastly, if you are building some IDE plugin support or ancillary tooling you will want to start the container.
```console
$ dune --start-container
```

As mentioned above, all commands that use the container will automatically create a new container if one does not exist 
and automatically start the container if it is stopped.

# DUNE plugins

DUNE can be extended with custom functionality using plugins: [Documentation of DUNE plugins](docs/PLUGIN.md)

# Preparing DUNE release

[Steps for preparing a new DUNE release](docs/RELEASE.md)
tools\VERIFICATION.txt
VERIFICATION
Verification is intended to assist the Chocolatey moderators and community
in verifying that this package's contents are trustworthy.

Download the *.nupkg file from the GitHub releases page: https://github.com/AntelopeIO/DUNE/releases
Compute its SHA256 checksum.
If the package in the Chocolatey repo has the same SHA256, it's authentic.

tools\docs\PLUGIN.md
# DUNE plugins

DUNE allows users to extend its functionality through the use of plugins. DUNE plugins are simply Python scripts that fulfill the specific requirements explained below.

## Plugin requirements
1. The plugin needs to be placed in a subdirectory of [../src/plugin/](../src/plugin/).
2. In that subdirectory you need to create a script named `main.py`.
3. `main.py` needs to define 3 functions:
   1. `add_parsing(parser)` - a function that receives an instance of [argparse.ArgumentParser](https://docs.python.org/3/library/argparse.html). It is used to add new DUNE command parsing arguments.
   2. (optionally) `set_dune(dune)` - a function that receives the DUNE instance so the user can interact with DUNE. It can be stored for later use if needed.
   3. `handle_args(args)` - a function that receives the populated namespace returned by [ArgumentParser.parse_args](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.parse_args). It is used to handle new DUNE command arguments.
   

## Plugin examples
You can find example plugins in [plugin_example directory](../plugin_example/).
To test the example plugins, copy or symbolically link the contents of the [../plugin_example/](../plugin_example) directory into the [../src/plugin/](../src/plugin/) directory. This way, DUNE will automatically discover the new plugins.

### dune_hello
The simplest plugin, which adds `--hello` to the DUNE commands. When the command `dune --hello` is executed, an example output is printed.

### account_setup
The plugin adds the command `--bootstrap-account` to DUNE. When it is executed, 3 example accounts are created: `alice`, `bob`, and `cindy`.
Additionally, the contract `eosio.token` is deployed to all of the above accounts.

In this example you can see how the `set_dune` function is used to store the `dune` instance and later use it to create and prepare the accounts.

## Implementation details
DUNE starts with auto-discovering the plugins in the `src/plugin` subdirectories and dynamically loading each `main.py` file. The functions from each plugin are then called in the following order:
1. `add_parsing(parser)` - this function is called first to add parsing arguments. Users can also initialize their plugin at this stage, however, it should be noted that at this point it is not known if the plugin will be used.
2. (optionally) `set_dune(dune)` - if the user wants to interact with DUNE, they should store the DUNE object in this function.
3. `handle_args(args)` - the user should check if their parsing arguments are being used and handle them in this function. This is the main function where the plugin does its job. The DUNE object is usually needed in this function.
tools\docs\RELEASE.md
# How to prepare new DUNE release?

## How to generate a package on Windows?
1. Edit `packaging\antelopeio-dune\antelopeio-dune.nuspec` and update the current version in the XML tag `version`.
2. If you do not yet have [Chocolatey](https://chocolatey.org/) installed, open a Windows console as administrator.
Otherwise you can open a Windows console as a regular user.
3. Go to the `packaging` directory and run `generate_chocolatey.bat` (if you run it for the first time, Chocolatey will be installed).
4. A `*.nupkg` file will be created in your current directory.


## How to prepare new DUNE docker image?
1. Make sure you are logged in with docker login, example command: `docker login ghcr.io -u <your_username> --password-stdin`
2. Change version in `packaging/generate_package.sh`
3. Run `./bootstrap.sh` in your DUNE directory
4. Change `X.Y.Z` below to your version and, in the DUNE directory, run:
```
docker tag dune ghcr.io/antelopeio/dune:latest
docker push ghcr.io/antelopeio/dune:latest
docker tag dune ghcr.io/antelopeio/dune:X.Y.Z
docker push ghcr.io/antelopeio/dune:X.Y.Z
```
tools\docs\images\bare-init.png
 
tools\docs\images\cmake-build.png
 
tools\docs\images\cmake-init.png
 
tools\docs\images\docker-desktop.png
 
tools\docs\images\export-wallet.png
 
tools\docs\images\get-docker.png
 
tools\docs\images\linux_docker_settings_shares.png
 
tools\docs\images\logo.png
 
tools\docs\images\node-list.png
 
tools\docs\images\table-add.png
 
tools\docs\images\win-python.png
 
tools\plugin_example\README.md
## DUNE plugins

This directory contains DUNE example plugins. To test the example plugins, copy or symbolically link the contents of the [../plugin_example/](../plugin_example) directory into the [../src/plugin/](../src/plugin/) directory. This way, DUNE will automatically discover the new plugins.

For more information please check [plugin documentation](../docs/PLUGIN.md)
tools\plugin_example\account_setup\main.py
class account_setup_plugin:
    _dune = None

    @staticmethod
    def set_dune(in_dune):
        account_setup_plugin._dune = in_dune

    @staticmethod
    def create_accounts():
        account_setup_plugin._dune.create_account('alice')
        account_setup_plugin._dune.create_account('bob')
        account_setup_plugin._dune.create_account('cindy')

    @staticmethod
    def deploy_contracts():
        account_setup_plugin._dune.deploy_contract(
            '/app/reference-contracts/build/contracts/eosio.token',
            'alice')
        account_setup_plugin._dune.deploy_contract(
            '/app/reference-contracts/build/contracts/eosio.token',
            'bob')
        account_setup_plugin._dune.deploy_contract(
            '/app/reference-contracts/build/contracts/eosio.token',
            'cindy')

def handle_args(args):
    if args.bootstrap_account:
        print('Starting account bootstrapping')
        account_setup_plugin.create_accounts()
        account_setup_plugin.deploy_contracts()
        print('Created accounts and deployed contracts')

def set_dune(in_dune):
    account_setup_plugin.set_dune(in_dune)

def add_parsing(parser):
    parser.add_argument('--bootstrap-account', action='store_true',
                        help='Set up 3 example accounts together with their token contracts')
tools\plugin_example\dune_hello\main.py
def handle_args(args):
    if args.hello:
        print('Hello from DUNE plugin!')

def add_parsing(parser):
    parser.add_argument('--hello', action='store_true',
                                  help='outputs "Hello World"')
tools\scripts\bootstrap_cdt.sh
#! /bin/sh -e
. ./bootstrap_common.sh

CDT_VERSION=$1

if [ -n "$CDT_VERSION" ]; then
   FINAL_CDT_VERSION="$CDT_VERSION"
else
   FINAL_CDT_VERSION=$(wget -q -O- https://api.github.com/repos/"$ORG"/cdt/releases/latest | jq -r '.tag_name' | cut -c2-)
fi


if [ "${ARCH}" = "x86_64" ]; then
   wget https://github.com/"${ORG}"/cdt/releases/download/v"${FINAL_CDT_VERSION}"/cdt_"${FINAL_CDT_VERSION}"_amd64.deb
   apt --assume-yes --allow-downgrades install ./cdt_"${FINAL_CDT_VERSION}"_amd64.deb
else
   wget https://github.com/"${ORG}"/cdt/releases/download/v"${FINAL_CDT_VERSION}"/cdt_"${FINAL_CDT_VERSION}"_arm64.deb
   apt --assume-yes --allow-downgrades install ./cdt_"${FINAL_CDT_VERSION}"_arm64.deb
fi
tools\scripts\bootstrap_common.sh
#! /bin/sh -e

ARCH=$(uname -m)
export ARCH

ORG="AntelopeIO"
export ORG
tools\scripts\bootstrap_contracts.sh
#! /bin/sh -e
. ./bootstrap_common.sh

rm -rf reference-contracts || true
git clone https://github.com/"${ORG}"/reference-contracts
cd reference-contracts
git checkout 074bc11394d13395e82015f6c41db32a67170d73
mkdir build
cd build
cmake ..
make -j4
tools\scripts\bootstrap_env.sh
#! /bin/sh -e

./bootstrap_leap.sh "$LEAP_VERSION"
./bootstrap_cdt.sh "$CDT_VERSION"
./bootstrap_contracts.sh
tools\scripts\bootstrap_leap.sh
#! /bin/sh -e
. ./bootstrap_common.sh

LEAP_VERSION=$1

if [ -n "$LEAP_VERSION" ]; then
   FINAL_LEAP_VERSION="$LEAP_VERSION"
else
   FINAL_LEAP_VERSION=$(wget -q -O- https://api.github.com/repos/"$ORG"/leap/releases/latest | jq -r '.tag_name' | cut -c2-)
fi

CONTAINER_PACKAGE=AntelopeIO/experimental-binaries
GH_ANON_BEARER=$(curl -s "https://ghcr.io/token?service=registry.docker.io&scope=repository:${CONTAINER_PACKAGE}:pull" | jq -r .token)
curl -s -L -H "Authorization: Bearer ${GH_ANON_BEARER}" https://ghcr.io/v2/${CONTAINER_PACKAGE}/blobs/"$(curl -s -L -H "Authorization: Bearer ${GH_ANON_BEARER}" https://ghcr.io/v2/${CONTAINER_PACKAGE}/manifests/v"${FINAL_LEAP_VERSION}" | jq -r .layers[0].digest)" | tar -xz

case $FINAL_LEAP_VERSION in
   #up to 3.1
   "3.1"*) if [ "${ARCH}" = "x86_64" ]; then
      wget https://github.com/"${ORG}"/leap/releases/download/v"${FINAL_LEAP_VERSION}"/leap-"${FINAL_LEAP_VERSION}"-ubuntu20.04-x86_64.deb
      apt --assume-yes --allow-downgrades install ./leap-"${FINAL_LEAP_VERSION}"-ubuntu20.04-x86_64.deb
      apt --assume-yes --allow-downgrades install ./leap-dev-"${FINAL_LEAP_VERSION}"-ubuntu20.04-x86_64.deb
   else
      apt --assume-yes --allow-downgrades install ./leap-"${FINAL_LEAP_VERSION}"-ubuntu20.04-aarch64.deb
      apt --assume-yes --allow-downgrades install ./leap-dev-"${FINAL_LEAP_VERSION}"-ubuntu20.04-aarch64.deb
   fi;;

   #from 3.2
   *) if [ "${ARCH}" = "x86_64" ]; then
      wget https://github.com/"${ORG}"/leap/releases/download/v"${FINAL_LEAP_VERSION}"/leap_"${FINAL_LEAP_VERSION}"-ubuntu20.04_amd64.deb
      apt --assume-yes --allow-downgrades install ./leap_"${FINAL_LEAP_VERSION}"-ubuntu20.04_amd64.deb
      apt --assume-yes --allow-downgrades install ./leap-dev_"${FINAL_LEAP_VERSION}"-ubuntu20.04_amd64.deb
   else
      apt --assume-yes --allow-downgrades install ./leap_"${FINAL_LEAP_VERSION}"-ubuntu20.04_arm64.deb
      apt --assume-yes --allow-downgrades install ./leap-dev_"${FINAL_LEAP_VERSION}"-ubuntu20.04_arm64.deb
   fi;;
esac
tools\scripts\config.ini
wasm-runtime = eos-vm
abi-serializer-max-time-ms = 15
chain-state-db-size-mb = 65536
# chain-threads = 2
contracts-console = true
http-server-address = 127.0.0.1:8888
p2p-listen-endpoint = 0.0.0.0:9876
state-history-endpoint = 127.0.0.1:8080
verbose-http-errors = true
# http-threads = 2
agent-name = "DUNE Test Node"
net-threads = 2
max-transaction-time = 100
producer-name = eosio
enable-stale-production = true
# producer-threads = 2
# trace-history = false
# chain-state-history = false
resource-monitor-not-shutdown-on-threshold-exceeded=true

plugin = eosio::chain_api_plugin
plugin = eosio::http_plugin
plugin = eosio::producer_plugin
plugin = eosio::producer_api_plugin
tools\scripts\my_init
 
tools\scripts\setup_system.sh
#! /bin/sh

cleos wallet create --file .wallet.pw
cat .wallet.pw | cleos wallet unlock --password
# import main EOSIO account private key
cleos wallet import --private-key 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
tools\scripts\start_node.sh
#! /bin/sh

# 1 - data directory
# 2 - config.ini directory
# 3 - snapshot
# 4 - node name

nodeos --data-dir=$1 --config-dir=$2 $3 &> /app/$4.out
tools\scripts\write_context.sh
#! /bin/sh

echo $1 > /app/.dune.ctx
tools\src\dune\args.py
import argparse
import sys


class fix_action_data(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        fixed_list = [values[0], values[1], values[2].strip(), values[3]]
        setattr(namespace, self.dest, fixed_list)


def fix_args(args):
    arg_list = []
    arg_so_far = ""
    state = False
    for arg in args:
        if not state:
            if arg.startswith('['):
                state = True
                arg_so_far = arg[1:]
                continue
            arg_list.append(arg)
            continue
        if state:
            if arg.endswith(']'):
                arg_so_far = arg_so_far + arg[:-1]
                state = False
                arg_list.append(arg_so_far)
                arg_so_far = ""
                continue
            arg_so_far = arg_so_far + arg

    return arg_list


def parse_optional(cmd):
    if cmd is not None:
        return cmd[1:]  # remove leading --
    return cmd  # empty list


class arg_parser:

    def __init__(self):
        self._parser = argparse.ArgumentParser(
            description='''DUNE: Docker Utilities for Node Execution.
                        -- [COMMANDS] run any number of commands in the container.
                        Example: dune -- cleos --help''')
        self._parser.add_argument('-s', '--start', nargs=1, metavar=("NODE"),
                                  help='start a new node with a given name')
        self._parser.add_argument('-c', '--config', nargs=1, metavar=("CONFIG_DIR"),
                                  help='optionally used with --start, a path containing'
                                  ' the config.ini file to use')
        self._parser.add_argument(
            '--stop', metavar="NODE", help='stop a node with a given name')
        self._parser.add_argument('--remove', metavar="NODE",
                                  help='a node with a given name, will stop the node if running')
        self._parser.add_argument('--list', action='store_true',
                                  help='list all nodes available and their statuses')
        self._parser.add_argument('--simple-list', action='store_true',
                                  help='list all nodes available and their statuses without '
                                       'formatting and unicode')
        self._parser.add_argument('--set-active', metavar="NODE",
                                  help='set a node to active status')
        self._parser.add_argument('--get-active', action='store_true',
                                  help='get the name of the node that is currently active')
        self._parser.add_argument('--export-node', metavar=("NODE", "PATH"), nargs=2,
                                  help='export state and blocks log for the given node. '
                                  'PATH may be a directory or a filename with `.tgz` extension.')
        self._parser.add_argument('--import-node', metavar=("NODE", "PATH"), nargs=2,
                                  help='import state and blocks log to a given node'
                                  'PATH *must* be a previously exported node ending in `.tgz`.')
        self._parser.add_argument('--monitor', action='store_true',
                                  help='monitor the currently active node')
        self._parser.add_argument('--import-dev-key', metavar="KEY",
                                  help='import a private key into developement wallet')
        self._parser.add_argument('--create-key', action='store_true',
                                  help='create an public key private key pair')
        self._parser.add_argument('--export-wallet', action='store_true',
                                  help='export the internal development wallet')
        self._parser.add_argument('--import-wallet', metavar="DIR",
                                  help='import a development wallet')
        self._parser.add_argument('--create-account', nargs='+',
                                  metavar=["NAME", "CREATOR (Optional)", "PUB_KEY (Optional)",
                                           "PRIV_KEY (Optional)"],
                                  help='create an EOSIO account and an optional creator (the '
                                       'default is eosio)')
        self._parser.add_argument('--create-cmake-app', nargs=2, metavar=["PROJ_NAME", "DIR"],
                                  help='create a smart contract project at from a specific host '
                                       'location')
        self._parser.add_argument('--create-bare-app', nargs=2, metavar=["PROJ_NAME", "DIR"],
                                  help='create a smart contract project at from a specific host '
                                       'location')
        self._parser.add_argument('--cmake-build', nargs=1, metavar=["DIR", "-- FLAGS (Optional)"],
                                  help='build a smart contract project at the directory given '
                                       'optional flags are of the form -- -DFLAG1=On '
                                       '-DFLAG2=Off]')
        self._parser.add_argument('--ctest', nargs=1, metavar=["DIR", "-- FLAGS (Optional)"],
                                  help='run the ctest tests for a smart contract project at the '
                                       'directory given. Optional flags are of the form -- -VV')
        self._parser.add_argument('--gdb', nargs=1, metavar=["PROGRAM", "-- FLAGS (Optional)"],
                                  help='start gdb into the container with given executive binary'
                                       'Optional flags are of the form -- -VV')
        self._parser.add_argument('--deploy', nargs=2, metavar=["DIR", "ACCOUNT"],
                                  help='deploy a smart contract and ABI to account given')
        self._parser.add_argument('--destroy-container', action='store_true',
                                  help='destroy context container <Warning, this will destroy '
                                       'your state and block log>')
        self._parser.add_argument('--stop-container', action='store_true',
                                  help='stop the context container')
        self._parser.add_argument('--start-container', action='store_true',
                                  help='start the context container')
        self._parser.add_argument('--set-core-contract', metavar="ACCOUNT",
                                  help='set the core contract to an account given (default '
                                       'normally is `eosio`)')
        self._parser.add_argument('--set-bios-contract', metavar="ACCOUNT",
                                  help='set the bios contract to an account given (default '
                                       'normally is `eosio`)')
        self._parser.add_argument('--set-token-contract', metavar="ACCOUNT",
                                  help='set the token contract to an account given (default '
                                       'normally is`eosio.token`)')
        self._parser.add_argument('--bootstrap-system', action='store_true',
                                  help='install boot contract to eosio and activate all protocol '
                                       'features')
        self._parser.add_argument('--bootstrap-system-full', action='store_true',
                                  help='same as `--bootstrap-system` but also creates accounts '
                                       'needed for core contract and deploys core, token, '
                                       'and multisig contracts')
        self._parser.add_argument('--send-action', nargs=4, action=fix_action_data,
                                  metavar=["ACCOUNT", "ACTION", "DATA", "PERMISSION"],
                                  help='send action to account with data given and permission')
        self._parser.add_argument('--get-table', nargs=3, metavar=["ACCOUNT", "SCOPE", "TABLE"],
                                  help='get the data from the given table')
        self._parser.add_argument('--activate-feature', metavar="CODENAME",
                                  help='active protocol feature')
        self._parser.add_argument('--list-features', action='store_true',
                                  help='list available protocol feature code names')
        self._parser.add_argument('--version', action='store_true',
                                  help='display the current version of DUNE')
        self._parser.add_argument('--version-all', action='store_true',
                                  help='display the current version of DUNE, CDT and leap')
        self._parser.add_argument('--debug', action='store_true', help='print additional info '
                                                                       'useful for debugging, '
                                                                       'like running docker '
                                                                       'commands')
        self._parser.add_argument(
            '--upgrade', action='store_true', help='upgrades DUNE image to the latest version')
        self._parser.add_argument(
            '--leap', metavar="LEAP_VERSION", help='sets the version of leap')
        self._parser.add_argument('--cdt', metavar="CDT_VERSION", help='sets the version of CDT (Contract '
                                  'Development Toolkit)')
        # used to store arguments to individual programs, starting with --
        self._parser.add_argument('remainder',
                                  nargs=argparse.REMAINDER)
        # pylint: disable=fixme
        # TODO readdress after the launch
        # self._parser.add_argument('--start-webapp', metavar=["DIR"], help='start a webapp with ')

    @staticmethod
    def is_forwarding():
        return len(sys.argv) > 1 and sys.argv[1] == '--'

    @staticmethod
    def get_forwarded_args():
        return sys.argv[2:]

    def parse(self):
        return self._parser.parse_args()

    def get_parser(self):
        return self._parser

    def exit_with_help_message(self, *args, return_value=1):
        self._parser.print_help(sys.stderr)
        print("\nError: ", *args, file=sys.stderr)
        sys.exit(return_value)
tools\src\dune\context.py
class ctx:
    active = ""  # active node
    http_port = ""  # active node's http port
    p2p_port = ""  # active node's p2p port
    ship_port = ""  # active node's ship port


class context:
    _file_name = ".dune.ctx"
    _dir = '/app/'
    _docker = None
    _ctx = ctx()

    def __init__(self, dockr):
        self._docker = dockr
        if self._docker.file_exists(self._dir + self._file_name):
            self.read_ctx()

    def read_ctx(self):
        arr = self._docker.execute_cmd(['cat', self._dir + self._file_name])[0].splitlines()
        self._ctx.active = arr[0]
        self._ctx.http_port = arr[1]
        self._ctx.p2p_port = arr[2]
        self._ctx.ship_port = arr[3]

    def write_ctx(self):
        ctx_str = self._ctx.active + "\\n"
        ctx_str = ctx_str + self._ctx.http_port + "\\n"
        ctx_str = ctx_str + self._ctx.p2p_port + "\\n"
        ctx_str = ctx_str + self._ctx.ship_port
        stdout, stderr, exit_code = self._docker.execute_cmd(['/app/write_context.sh', ctx_str])
        print(stdout)
        print(stderr)

    @staticmethod
    def is_commented(string):
        for char in string:
            if char == '#':
                return True
            if char == ' ':
                continue
        return False

    def get_ctx(self):
        return self._ctx

    def get_active(self):
        return self._ctx.active

    def get_config_args(self, nod):
        conf, stderr, exit_code = self._docker.execute_cmd(
            ['cat', '/app/nodes/' + nod.name() + '/config.ini'])

        http_port = None
        p2p_port = None
        ship_port = None

        for line in conf.splitlines():
            if not self.is_commented(line):
                if "http-server-address" in line:
                    http_port = line.split('=')[1][1:]
                elif "p2p-listen-endpoint" in line:
                    p2p_port = line.split('=')[1][1:]
                elif "state-history-endpoint" in line:
                    ship_port = line.split('=')[1][1:]

        # if they don't exist just set to normal default values
        if http_port is None:
            http_port = "127.0.0.1:8888"
        if p2p_port is None:
            p2p_port = "0.0.0.0:9876"
        if ship_port is None:
            ship_port = "127.0.0.1:8080"
        return [http_port, p2p_port, ship_port]

    def set_active(self, nod):
        self._ctx.active = nod.name()
        self._ctx.http_port, self._ctx.p2p_port, self._ctx.ship_port = self.get_config_args(nod)
        self.write_ctx()
tools\src\dune\docker.py
import os
import platform
import subprocess


class docker:

    _container = ""
    _image = ""
    _cl_args = None
    _dune_url = 'ghcr.io/antelopeio/dune:latest'

    def __init__(self, container, image, cl_args):
        self._container = container
        self._image = image
        self._cl_args = cl_args

        # check if container is running
        stdout, stderr, exit_code = self.execute_docker_cmd(['container', 'ls'])

        # if container is not in the list then create one
        if self._container not in stdout:
            # check if container is stopped
            stdout, stderr, exit_code = self.execute_docker_cmd(
                ['container', 'ls', '-a'])
            if self._container in stdout:
                self.execute_docker_cmd(
                    ['container', 'start', self._container])
            else:
                # download dune image
                dune_image = subprocess.check_output(['docker', 'images', '-q', self._image], stderr=None, encoding='utf-8')

                if dune_image == '':
                    print('Downloading Dune image')
                    self.upgrade()
                    with subprocess.Popen(['docker', 'tag', self._dune_url, 'dune:latest']) as proc:
                        proc.communicate()


                # start a new container
                print("Creating docker container [" + self._container + "]")
                host_dir = '/'
                if platform.system() == 'Windows':
                    host_dir = 'C:/'

                stdout, stderr, exit_code = self.execute_docker_cmd(
                    ['run', '-p', '8888:8888', '-p', '9876:9876', '-p',
                     '8080:8080', '-p', '3000:3000', '-p', '8000:8000', '-v',
                     host_dir + ':/host', '-d', '--name=' + self._container,
                     self._image, 'tail', '-f', '/dev/null'])

    @staticmethod
    def abs_host_path(directory):
        abs_path = os.path.abspath(directory)
        if platform.system() == 'Windows':
            # remove the drive letter prefix and replace the separators
            abs_path = abs_path[3:].replace('\\', '/')
        else:
            abs_path = abs_path[1:]

        return '/host/' + abs_path

    def get_container(self):
        return self._container

    def get_image(self):
        return self._image

    def execute_docker_cmd(self, cmd):
        with subprocess.Popen(['docker'] + cmd,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc:
            stdout, stderr = proc.communicate()
            if self._cl_args.debug:
                print('docker '+' '.join(cmd))
                print(stdout.decode('UTF-8'))
                print(stderr.decode('UTF-8'))
        return [stdout.decode('UTF-8'), stderr.decode('UTF-8'), proc.poll()]

    def file_exists(self, file_name):
        return self.execute_cmd(['test', '-f', file_name])[2] == 0

    def dir_exists(self, directory):
        return self.execute_cmd(['test', '-d', directory])[2] == 0

    def tar_dir(self, file_name, directory):
        return self.execute_cmd(['tar', 'cvzf', file_name + '.tgz', directory])

    def untar(self, directory):
        return self.execute_cmd(['tar', 'xvzf', directory])

    def cp_to_host(self, container_file, host_file):
        return self.execute_docker_cmd(['cp', self._container + ":" + container_file, host_file])

    def cp_from_host(self, host_file, container_file):
        return self.execute_docker_cmd(['cp', host_file, self._container + ":" + container_file])

    def rm_file(self, file_name):
        self.execute_cmd(['rm', '-rf', file_name])

    def find_pid(self, process_name):
        stdout, stderr, exit_code = self.execute_cmd(['ps', 'ax'])
        for line in stdout.splitlines(True):
            if "PID TTY" in line:
                continue
            if process_name in line:
                return line.split()[0]
        return -1

    def get_container_name(self):
        return self._container

    def commit(self, name):
        self.execute_docker_cmd(['commit', 'dune', 'dune'])

    def start(self):
        print("Starting docker container [" + self._container + "]")
        self.execute_docker_cmd(['container', 'start', self._container])

    def stop(self):
        print("Stopping docker container [" + self._container + "]")
        self.execute_docker_cmd(['container', 'stop', self._container])

    def destroy(self):
        print("Destroying docker container [" + self._container + "]")
        self.execute_docker_cmd(['container', 'stop', self._container])
        self.execute_docker_cmd(['container', 'rm', self._container])

    def execute_cmd_at(self, directory, cmd):
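        # Run cmd inside the container with the given working directory; output streams to the terminal.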
        with subprocess.Popen(['docker', 'container', 'exec', '-w', directory,
                               self._container] + cmd) as proc:
            proc.communicate()

    def execute_cmd(self, cmd):
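        # Run cmd inside the container and return [stdout, stderr, exit_code].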
        return self.execute_docker_cmd(
            ['container', 'exec', self._container] + cmd)

    def execute_interactive_cmd(self, cmd):
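        # Run cmd inside the container with stdin attached (docker exec -i).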
        with subprocess.Popen(['docker', 'container',
                               'exec', '-i', self._container] + cmd) as proc:
            proc.communicate()

    def execute_cmd2(self, cmd):
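        # Like execute_cmd, but output streams to the terminal instead of being captured.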
        with subprocess.Popen(['docker', 'container',
                               'exec', self._container] + cmd) as proc:
            proc.communicate()

    def execute_bg_cmd(self, cmd):
        return self.execute_cmd(cmd + ['&'])

    def upgrade(self):
        with subprocess.Popen(['docker', 'pull', self._dune_url]) as proc:
            proc.communicate()
tools\src\dune\dune.py
# pylint: disable=missing-function-docstring, missing-module-docstring
import os
import sys                      # sys.stderr
from context import context
from docker import docker
from node_state import node_state

# VERSION INFORMATION
def version_major():
    return 1


def version_minor():
    return 1


def version_patch():
    return 0


def version_suffix():
    return ""


def version_full():
    main_version = "v" + str(version_major()) + "." + str(
        version_minor()) + "." + str(version_patch())
    if version_suffix() == "":
        return main_version
    return main_version + "." + version_suffix()


class dune_error(Exception):
    pass


class dune_node_not_found(dune_error):
    _name = ""

    def __init__(self, n):
        self._name = n

    def name(self):
        return self._name


class node:
    _name = ""
    _cfg = ""

    def __init__(self, name, cfg=None):
        self._name = name
        self._cfg = cfg

    def name(self):
        return self._name

    def config(self):
        return self._cfg

    def set_config(self, cfg):
        self._cfg = cfg

    def data_dir(self):
        return '/app/nodes/' + self.name()

    def config_dir(self):
        return '/app/nodes/' + self.name()


class dune:
    _docker = None
    _wallet_pw = None
    _context = None
    _cl_args = None
    _token_priv_key = "5JPJoZXizFVi19wHkboX5fwwEU2jZVvtSJpQkQu3uqgNu8LNdQN"
    _token_pub_key = "EOS6v86d8DAxjfGu92CLrnEzq7pySpVWYV2LjaxPaDJJvyf9Vpx5R"

    def __init__(self, cl_args):
        self._cl_args = cl_args
        self._docker = docker('dune_container', 'dune:latest', cl_args)
        self._wallet_pw = self.get_wallet_pw()
        self._context = context(self._docker)

    def node_exists(self, nod):
        return self._docker.dir_exists('/app/nodes/' + nod.name())

    def is_node_running(self, nod):
        return self._docker.find_pid('/app/nodes/' + nod.name() + ' ') != -1

    def set_active(self, nod):
        if self.node_exists(nod):
            self._context.set_active(nod)
        else:
            raise dune_node_not_found(nod.name())

    def get_active(self):
        return self._context.get_active()

    def create_node(self, nod):
        print("Creating node [" + nod.name() + "]")
        self._docker.execute_cmd(['mkdir', '-p', nod.data_dir()])

    def start_node(self, nod, snapshot=None):
        stdout, stderr, exit_code = self._docker.execute_cmd(['ls', '/app/nodes'])

        if self.is_node_running(nod):
            print("Node [" + nod.name() + "] is already running.")
            return

        cmd = ['sh', 'start_node.sh', nod.data_dir(), nod.config_dir()]

        if snapshot is not None:
            cmd = cmd + ['--snapshot /app/nodes/' + nod.name() + '/snapshots/' + snapshot + ' -e']
        else:
            cmd = cmd + [' ']

        # if node name is not found we need to create it
        is_restart=True
        if not nod.name() in stdout:
            is_restart=False
            self.create_node(nod)

        # copy config.ini to config-dir
        if not is_restart and nod.config() is None:
            nod.set_config('/app/config.ini')

        if nod.config() is not None:
            self._docker.execute_cmd(['cp', nod.config(), nod.config_dir()])
            print("Using Configuration [" + nod.config() + "]")

        ctx = self._context.get_ctx()
        cfg_args = self._context.get_config_args(nod)

        if self.node_exists(node(ctx.active)):
            if cfg_args[0] == ctx.http_port:
                print("Currently active node [" + ctx.active + "] http port is the same as this nodes [" + nod.name() + "]")
                self.stop_node(node(ctx.active))
            elif cfg_args[1] == ctx.p2p_port:
                print("Currently active node [" + ctx.active + "] p2p port is the same as this nodes [" + nod.name() + "]")
                self.stop_node(node(ctx.active))
            elif cfg_args[2] == ctx.ship_port:
                print("Currently active node [" + ctx.active + "] ship port is the same as this nodes [" + nod.name() + "]")
                self.stop_node(node(ctx.active))

        stdout, stderr, exit_code = self._docker.execute_cmd(cmd + [nod.name()])

        if exit_code == 0:
            self.set_active(nod)
            print("Active [" + nod.name() + "]")
            print(stdout)
            print(stderr)
        else:
            print(stderr)

    def cleos_cmd(self, cmd, quiet=True):
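        # Unlock the wallet and run cleos against the active node's HTTP endpoint;
        # quiet=True captures and returns the output, otherwise it streams to the terminal.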
        self.unlock_wallet()
        ctx = self._context.get_ctx()
        if quiet:
            return self._docker.execute_cmd(
                ['cleos', '--verbose', '-u', 'http://' + ctx.http_port] + cmd)
        return self._docker.execute_cmd2(
            ['cleos', '--verbose', '-u', 'http://' + ctx.http_port] + cmd)

    def monitor(self):
        stdout, stderr, exit_code = self.cleos_cmd(['get', 'info'])
        print(stdout)
        if exit_code != 0:
            print(stderr)
            raise dune_error

    def stop_node(self, nod):
        if self.node_exists(nod):
            if self.is_node_running(nod):
                pid = self._docker.find_pid(
                    '/app/nodes/' + nod.name() + ' ')
                print("Stopping node [" + nod.name() + "]")
                self._docker.execute_cmd(['kill', pid])
            else:
                print("Node [" + nod.name() + "] is not running")
        else:
            raise dune_node_not_found(nod.name())

    def remove_node(self, nod):
        self.stop_node(nod)
        print("Removing node [" + nod.name() + "]")
        self._docker.execute_cmd(
            ['rm', '-rf', '/app/nodes/' + nod.name()])

    def destroy(self):
        self._docker.destroy()

    def stop_container(self):
        stdout, stderr, exit_code = self._docker.execute_cmd(
            ['ls', '/app/nodes'])
        for string in stdout.split():
            if self.is_node_running(node(string)):
                self.stop_node(node(string))

        self._docker.stop()

    def start_container(self):
        self._docker.start()


    def state_list(self):
        # [(node_name, active, running, ports),...]
        rv=[]
        stdout, stderr, exit_code = self._docker.execute_cmd(['ls', '/app/nodes'])
        ctx = self._context.get_ctx()
        for node_name in stdout.split():
            active = False
            if node_name == ctx.active:
                active=True
            running = self.is_node_running(node(node_name))
            addrs = self._context.get_config_args(node(node_name))
            rv.append( node_state(node_name, active, running, addrs[0], addrs[1], addrs[2]) )
        return rv


    # pylint: disable=too-many-branches
    def list_nodes(self, simple=False, sep='|'):

        buffer = 3
        node_name = "Node Name"

        states=self.state_list()
        name_width = len(node_name) + buffer
        if not simple:
            for state in states:
                name_width = max( len(state.name) + buffer, name_width)

        if simple:
            print("Node|Active|Running|HTTP|P2P|SHiP")
        else:
            header = '| Active? | Running? | HTTP           | P2P          | SHiP          '
            print(f'{node_name : <{name_width}}{header}')
            print(f'{"":{"-"}<{name_width + len(header)}}')

        for state in states:
            print( state.string(sep=sep, simple=simple, name_width=name_width) )

    # pylint: disable=too-many-locals,too-many-statements
    def export_node(self, nod, path):
        # Sanity check
        if not self.node_exists(nod):
            raise dune_node_not_found(nod.name())

        ctx = self._context.get_ctx()

        is_active=nod.name() == ctx.active
        is_running=self.is_node_running(nod)
        my_addrs=self._context.get_config_args(nod)

        was_running=[]
        was_active=None

        initial_states=[]

        if not is_active or not is_running:
            # Get the current states.
            initial_states=self.state_list()

            # For each state, make decisions based on its status.
            for state in initial_states:
                # Don't operate on our node.
                if state.name == nod.name():
                    continue
                if state.is_active:
                    was_active = state.name
                if state.is_running:
                    # We only need to stop a running node if there are address collisions.
                    if state.http in my_addrs or state.p2p in my_addrs or state.ship in my_addrs:
                        was_running.append(state.name)
                        self.stop_node(node(state.name))
                        print("\t", state.name, "was stopped due to address collision.")

        # Get this node ready for export.
        if not is_active:
            self.set_active(nod)
        if not is_running:
            self.start_node(nod)


        # Paths:
        directory=path
        filename=nod.name()+".tgz"

        # Update paths based on input.
        if os.path.splitext(path)[1].lower() == ".tgz":
            directory=os.path.split(path)[0]
            filename=os.path.split(path)[1]

        # Ensure the directory is absolute and it exists.
        directory=os.path.realpath(directory)
        if not os.path.exists(directory):
            os.makedirs(directory)

        # Determine the final full path.
        fullpath=os.path.join(directory,filename)

        src_path='/app/nodes/' + nod.name()
        dst_path='/app/tmp/' + nod.name()


        print("Exporting data from node [" + nod.name() + "] to location " + fullpath)

        # Create the snapshot
        self.create_snapshot()
        # Stop the node for copy.
        self.stop_node(nod)

        self._docker.execute_cmd(['mkdir', '-p', dst_path])
        self._docker.execute_cmd(['cp', '-R', src_path + '/blocks', dst_path + '/blocks'])
        self._docker.execute_cmd(['cp', src_path + '/config.ini', dst_path + '/config.ini'])
        self._docker.execute_cmd(['cp', '-R', src_path + '/protocol_features', dst_path + '/protocol_features'])
        self._docker.execute_cmd(['cp', '-R', src_path + '/snapshots', dst_path + '/snapshots'])

        self._docker.tar_dir(nod.name(), 'tmp/' + nod.name())
        self._docker.cp_to_host('/app/' + nod.name() + '.tgz', fullpath)
        self._docker.rm_file('/app/' + nod.name() + '.tgz')
        self._docker.rm_file(dst_path)

        # Restore previously active node.
        if not is_active and was_active is not None:
            self.set_active(node(was_active))

        # Restart the node if necessary.
        if is_running:
            self.start_node(nod)

        # Restart any nodes that were previously running.
        for old_runner in was_running:
            self.start_node(node(old_runner))


    def import_node(self, path, nod):

        # Sanity check path
        if not os.path.exists(path):
            print("File not found: ", path, file=sys.stderr)
            raise dune_error
        if os.path.splitext(path)[1].lower() != ".tgz":
            print("Path extension must be `.tgz`: ", path, file=sys.stderr)
            raise dune_error

        print("Importing node data [" + nod.name() + "]")

        # If the node already exists we delete it.
        if self.node_exists(nod):
            self.remove_node(nod)

        # Copy the tgz file.
        stdout, stderr, exit_code = self._docker.cp_from_host(path, '/app/tmp.tgz')
        if exit_code != 0:
            print(stderr)
            raise dune_error

        # Remove any stale tmp directory, untar the archive, then remove the archive.
        self._docker.rm_file('/app/tmp')  # remove any existing tmp directory
        self._docker.untar('/app/tmp.tgz')
        self._docker.rm_file('/app/tmp.tgz')

        # Find the path inside temp of the import data.
        stdout, stderr, exit_code = self._docker.execute_cmd(['ls', '/app/tmp'])
        src_name=stdout.split()[0]
        src_path='/app/tmp/' + src_name

        # Calculate and create the destination path.
        dst_path='/app/nodes/' + nod.name()
        self._docker.execute_cmd(['mkdir', '-p', dst_path + '/blocks'])

        # Move data to the destination.
        self._docker.execute_cmd(['mv', src_path + '/blocks/blocks.index', dst_path + '/blocks/blocks.index'])
        self._docker.execute_cmd(['mv', src_path + '/blocks/blocks.log',   dst_path + '/blocks/blocks.log'])
        self._docker.execute_cmd(['mv', src_path + '/config.ini',          dst_path + '/config.ini'])
        self._docker.execute_cmd(['mv', src_path + '/protocol_features',   dst_path + '/protocol_features'])
        self._docker.execute_cmd(['mv', src_path + '/snapshots',           dst_path + '/snapshots'])
        # Clean up the temp.
        self._docker.rm_file('/app/tmp')

        # Ensure a snapshot exists
        stdout, stderr, exit_code = self._docker.execute_cmd(['ls', dst_path + '/snapshots'])
        if len(stdout) == 0:
            print('No snapshot found for ', nod.name(), ' sourced from: \n\t', path, file=sys.stderr)
            raise dune_error

        # Start and activate the node...
        self.start_node(nod, stdout.split()[0])
        self.set_active(nod)

    def get_wallet_pw(self):
        stdout, stderr, exit_code = self._docker.execute_cmd(['cat', '.wallet.pw'])
        return stdout

    def unlock_wallet(self):
        stdout, stderr, exit_code = self._docker.execute_cmd(
            ['cleos', 'wallet', 'unlock', '--password', self.get_wallet_pw()])

    def import_key(self, key):
        self.unlock_wallet()
        return self.cleos_cmd(['wallet', 'import', '--private-key', key])

    def create_key(self):
        stdout, stderr, exit_code = self.cleos_cmd(['create', 'key', '--to-console'])
        return stdout

    def export_wallet(self):
        self._docker.execute_cmd(['mkdir', '/app/_wallet'])
        self._docker.execute_cmd(['cp', '-R', '/root/eosio-wallet/', '/app/_wallet/eosio-wallet'])
        self._docker.execute_cmd(['cp', '-R', '/app/.wallet.pw', '/app/_wallet/.wallet.pw'])
        self._docker.tar_dir("wallet", "_wallet")
        self._docker.cp_to_host("/app/wallet.tgz", "wallet.tgz")

    def import_wallet(self, path):
        self._docker.cp_from_host(path, "/app/wallet.tgz")
        self._docker.untar("/app/wallet.tgz")
        self._docker.execute_cmd(["mv", "/app/_wallet/.wallet.pw", "/app"])
        self._docker.execute_cmd(["cp", "-R", "/app/_wallet/eosio-wallet/", "/root"])
        self._docker.execute_cmd(["rm", "-R", "/app/_wallet/"])
        self._docker.execute_cmd(["rm", "/app/wallet.tgz"])

    # pylint: disable=fixme
    # TODO cleos has a bug displaying keys for K1, so we need the public key
    #  when providing the private key.
    # Remove that requirement when we fix cleos.
    def create_account(self, name, creator=None, pub=None, private=None):
        if private is None:
            keys = self.create_key()
            private = keys.splitlines()[0].split(':')[1][1:]
            pub = keys.splitlines()[1].split(':')[1][1:]
            print(
                "Creating account [" + name + "] with key pair [Private: " +
                private + ", Public: " + pub + "]")

        if creator is None:
            stdout, stderr, exit_code = self.cleos_cmd(
                ['create', 'account', 'eosio', name, pub])
        else:
            stdout, stderr, exit_code = self.cleos_cmd(
                ['create', 'account', creator, name, pub])
        self.import_key(private)
        print(stderr)

    def execute_cmd(self, args):
        self._docker.execute_cmd2(args)

    def execute_cmd_at(self, args, at_dir):
        # docker.execute_cmd_at expects (directory, cmd)
        self._docker.execute_cmd_at(at_dir, args)

    def execute_interactive_cmd(self, args):
        self._docker.execute_interactive_cmd(args)

    def build_cmake_proj(self, directory, flags):
        container_dir = self._docker.abs_host_path(directory)
        build_dir = container_dir + '/build'
        if not self._docker.dir_exists(build_dir):
            self._docker.execute_cmd(['mkdir', '-p', build_dir])
        self._docker.execute_cmd2(
            ['cmake', '-S', container_dir, '-B', build_dir] + flags)
        self._docker.execute_cmd2(['cmake', '--build', build_dir])

    def ctest_runner(self, directory, flags):
        container_dir = self._docker.abs_host_path(directory)
        self._docker.execute_cmd_at(container_dir, ['ctest'] + flags)

    def gdb(self, executable, flags):
        container_exec = self._docker.abs_host_path(executable)
        self._docker.execute_interactive_cmd(['gdb', container_exec] + flags)

    def build_other_proj(self, cmd):
        self._docker.execute_cmd2([cmd])

    def init_project(self, name, directory, cmake=True):
        if cmake:
            bare = []
        else:
            bare = ["--bare"]

        stdout, stderr, exit_code = self._docker.execute_cmd(
            ['cdt-init', '-project', name, '-path', directory] + bare)
        if exit_code != 0:
            print(stdout)
            raise dune_error()

    def create_snapshot(self):
        ctx = self._context.get_ctx()
        url = "http://" + ctx.http_port + "/v1/producer/create_snapshot"
        stdout, stderr, exit_code = self._docker.execute_cmd(['curl', '-X', 'POST', url])
        print(stdout)
        print(stderr)
        print(url)

    def deploy_contract(self, directory, acnt):
        stdout = ""
        stderr = ""
        exit_code = 0
        count = 10
        while count > 0:
            self.cleos_cmd(
                ['set', 'account', 'permission', acnt, 'active', '--add-code'])

            stdout, stderr, exit_code = self.cleos_cmd(
                ['set', 'contract', acnt, directory])

            if exit_code:
                count = count - 1
                print('*** Retry')
            else:
                break

        if exit_code == 0:
            print(stdout)
        else:
            print(stderr)
            raise dune_error()

    def preactivate_feature(self):
        ctx = self._context.get_ctx()
        stdout, stderr, exit_code = \
            self._docker.execute_cmd(
                ['curl', '--noproxy', '*', '-X', 'POST',
                 ctx.http_port +
                 '/v1/producer/schedule_protocol_feature_activations',
                 '-d',
                 '{"protocol_features_to_activate": ['
                 '"0ec7e080177b2c02b278d5088611686b49'
                 'd739925a92d9bfcacd7fc6b74053bd"]}'])

        if exit_code != 0:
            print(stderr)
            raise dune_error()
        print("Preactivate Features: " + stdout)

    def send_action(self, action, acnt, data, permission='eosio@active'):
        self.cleos_cmd(
            ['push', 'action', acnt, action, data, '-p', permission], False)

    def get_table(self, acnt, scope, tab):
        self.cleos_cmd(['get', 'table', acnt, scope, tab], False)

    @staticmethod
    def features():
        return ["GET_CODE_HASH",
                "CRYPTO_PRIMITIVES",
                "GET_BLOCK_NUM",
                "ACTION_RETURN_VALUE",
                "CONFIGURABLE_WASM_LIMITS2",
                "BLOCKCHAIN_PARAMETERS",
                "GET_SENDER",
                "FORWARD_SETCODE",
                "ONLY_BILL_FIRST_AUTHORIZER",
                "RESTRICT_ACTION_TO_SELF",
                "DISALLOW_EMPTY_PRODUCER_SCHEDULE",
                "FIX_LINKAUTH_RESTRICTION",
                "REPLACE_DEFERRED",
                "NO_DUPLICATE_DEFERRED_ID",
                "ONLY_LINK_TO_EXISTING_PERMISSION",
                "RAM_RESTRICTIONS",
                "WEBAUTHN_KEY",
                "WTMSIG_BLOCK_SIGNATURES"]

    def activate_feature(self, code_name, preactivate=False):
        if preactivate:
            self.preactivate_feature()
            self.deploy_contract(
                '/app/reference-contracts/build/contracts/eosio.boot', 'eosio')

        if code_name == "ACTION_RETURN_VALUE":
            self.send_action('activate', 'eosio',
                             '["c3a6138c5061cf291310887c0b5c71'
                             'fcaffeab90d5deb50d3b9e687cead45071"]',
                             'eosio@active')
        elif code_name == "GET_CODE_HASH":
            self.send_action('activate', 'eosio',
                             '["bcd2a26394b36614fd4894241d3c451ab0f6fd110958c3423073621a70826e99"]',
                             'eosio@active')
        elif code_name == "GET_BLOCK_NUM":
            self.send_action('activate', 'eosio',
                             '["35c2186cc36f7bb4aeaf4487b36e57039ccf45a9136aa856a5d569ecca55ef2b"]',
                             'eosio@active')
        elif code_name == "CRYPTO_PRIMITIVES":
            self.send_action('activate', 'eosio',
                             '["6bcb40a24e49c26d0a60513b6aeb8551d264e4717f306b81a37a5afb3b47cedc"]',
                             'eosio@active')
        elif code_name == "CONFIGURABLE_WASM_LIMITS2":
            self.send_action('activate', 'eosio',
                             '["d528b9f6e9693f45ed277af93474fd47'
                             '3ce7d831dae2180cca35d907bd10cb40"]',
                             'eosio@active')
        elif code_name == "BLOCKCHAIN_PARAMETERS":
            self.send_action('activate', 'eosio',
                             '["5443fcf88330c586bc0e5f3dee10e7f'
                             '63c76c00249c87fe4fbf7f38c082006b4"]',
                             'eosio@active')
        elif code_name == "GET_SENDER":
            self.send_action('activate', 'eosio',
                             '["f0af56d2c5a48d60a4a5b5c903edfb7db3a'
                             '736a94ed589d0b797df33ff9d3e1d"]',
                             'eosio@active')
        elif code_name == "FORWARD_SETCODE":
            self.send_action('activate', 'eosio',
                             '["2652f5f96006294109b3dd0bbde63693f'
                             '55324af452b799ee137a81a905eed25"]',
                             'eosio@active')
        elif code_name == "ONLY_BILL_FIRST_AUTHORIZER":
            self.send_action('activate', 'eosio',
                             '["8ba52fe7a3956c5cd3a656a3174b931d'
                             '3bb2abb45578befc59f283ecd816a405"]',
                             'eosio@active')
        elif code_name == "RESTRICT_ACTION_TO_SELF":
            self.send_action('activate', 'eosio',
                             '["ad9e3d8f650687709fd68f4b90b41f7d8'
                             '25a365b02c23a636cef88ac2ac00c43"]',
                             'eosio@active')
        elif code_name == "DISALLOW_EMPTY_PRODUCER_SCHEDULE":
            self.send_action('activate', 'eosio',
                             '["68dcaa34c0517d19666e6b33add67351d8'
                             'c5f69e999ca1e37931bc410a297428"]',
                             'eosio@active')
        elif code_name == "FIX_LINKAUTH_RESTRICTION":
            self.send_action('activate', 'eosio',
                             '["e0fb64b1085cc5538970158d05a009c24e2'
                             '76fb94e1a0bf6a528b48fbc4ff526"]',
                             'eosio@active')
        elif code_name == "REPLACE_DEFERRED":
            self.send_action('activate', 'eosio',
                             '["ef43112c6543b88db2283a2e077278c315ae'
                             '2c84719a8b25f25cc88565fbea99"]',
                             'eosio@active')
        elif code_name == "NO_DUPLICATE_DEFERRED_ID":
            self.send_action('activate', 'eosio',
                             '["4a90c00d55454dc5b059055ca213579c6ea85'
                             '6967712a56017487886a4d4cc0f"]',
                             'eosio@active')
        elif code_name == "ONLY_LINK_TO_EXISTING_PERMISSION":
            self.send_action('activate', 'eosio',
                             '["1a99a59d87e06e09ec5b028a9cbb7749b4a5ad'
                             '8819004365d02dc4379a8b7241"]',
                             'eosio@active')
        elif code_name == "RAM_RESTRICTIONS":
            self.send_action('activate', 'eosio',
                             '["4e7bf348da00a945489b2a681749eb56f5de00'
                             'b900014e137ddae39f48f69d67"]',
                             'eosio@active')
        elif code_name == "WEBAUTHN_KEY":
            self.send_action('activate', 'eosio',
                             '["4fca8bd82bbd181e714e283f83e1b45d95ca5af'
                             '40fb89ad3977b653c448f78c2"]',
                             'eosio@active')
        elif code_name == "WTMSIG_BLOCK_SIGNATURES":
            self.send_action('activate', 'eosio',
                             '["299dcb6af692324b899b39f16d5a530a3306280'
                             '4e41f09dc97e9f156b4476707"]',
                             'eosio@active')
        else:
            print("Feature Not Found")
            raise dune_error()

    def bootstrap_system(self, full):
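        # Preactivate protocol features, deploy the boot contract, and activate every
        # supported feature; with full=True also create the standard system accounts
        # and deploy the msig, token, and system contracts.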
        self.preactivate_feature()
        if full:
            # create account for multisig contract
            self.create_account('eosio.msig', 'eosio')
            # create account for token contract
            self.create_account('eosio.token', 'eosio')
            # create accounts needed by core contract
            self.create_account('eosio.bpay', 'eosio')
            self.create_account('eosio.names', 'eosio')
            self.create_account('eosio.ram', 'eosio')
            self.create_account('eosio.ramfee', 'eosio')
            self.create_account('eosio.saving', 'eosio')
            self.create_account('eosio.stake', 'eosio')
            self.create_account('eosio.vpay', 'eosio')
            self.create_account('eosio.rex', 'eosio')

        # activate features
        self.deploy_contract(
            '/app/reference-contracts/build/contracts/eosio.boot', 'eosio')

        for feature in self.features():
            self.activate_feature(feature)

        if full:
            self.deploy_contract(
                '/app/reference-contracts/build/contracts/eosio.msig',
                'eosio.msig')
            self.deploy_contract(
                '/app/reference-contracts/build/contracts/eosio.token',
                'eosio.token')
            self.deploy_contract(
                '/app/reference-contracts/build/contracts/eosio.system',
                'eosio')

    def start_webapp(self, directory):
        # pylint: disable=fixme
        # TODO readdress after the launch
        pass

    @property
    def docker(self):
        return self._docker
tools\src\dune\node_state.py
import sys
#from typing import NamedTuple


class node_state:
    """A simple class for reporting node state."""

    name: str
    is_active: bool
    is_running: bool
    http: str
    p2p: str
    ship: str


    # pylint: disable=too-many-arguments
    def __init__(self, name, is_active, is_running, http, p2p, ship):
        self.name=name
        self.is_active=is_active
        self.is_running=is_running
        self.http=http
        self.p2p=p2p
        self.ship=ship


    def __str__(self):
        active_str='inactive'
        if self.is_active:
            active_str='active'
        running_str='halted'
        if self.is_running:
            running_str='running'
        return f"{self.name}, {active_str}, {running_str}, {self.http}, {self.p2p}, {self.ship}"


    def string(self, file=sys.stdout, sep=',', simple=True, name_width=0):
        active_str='N'
        if self.is_active:
            active_str='Y'
        running_str='N'
        if self.is_running:
            running_str='Y'
        if simple:
            return f"{self.name}{sep}{active_str}{sep}{running_str}{sep}{self.http}{sep}{self.p2p}{sep}{self.ship}"
        return f"{self.name}{' ' * (name_width - len(self.name))}" + \
            f"{sep}    {active_str}    {sep}    {running_str}     {sep} {self.http} {sep} {self.p2p} {sep} {self.ship}"
tools\src\dune\__main__.py
import os   # path
import sys  # sys.exit()
import importlib.util

from args import arg_parser
from args import parse_optional
from dune import dune
from dune import dune_error
from dune import dune_node_not_found
from dune import node
from dune import version_full

def handle_version():
    print("DUNE " + version_full())

def handle_simple_args():
    # Handle args that do not require docker started up
    if args.version is True:
        handle_version()
        sys.exit(0)

def load_module(absolute_path):
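    # Load a Python module from an absolute file path, adding its directory to sys.path first.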
    module_name, _ = os.path.splitext(os.path.split(absolute_path)[-1])
    module_root = os.path.dirname(absolute_path)

    sys.path.append(module_root)
    spec = importlib.util.spec_from_file_location(module_name, absolute_path)
    py_mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(py_mod)
    return py_mod

def load_all_modules_from_dir(plugin_dir):
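    # Discover plugins: each subdirectory of plugin_dir must provide a main.py that
    # defines add_parsing() and handle_args(); set_dune() is optional.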
    loaded_modules = []

    if not os.path.exists(plugin_dir):
        return loaded_modules

    for subdir in os.listdir(plugin_dir):
        subdir_path = os.path.join(plugin_dir, subdir)
        if not os.path.isdir(subdir_path):
            continue

        main_py = os.path.join(subdir_path, 'main.py')
        if not os.path.exists(main_py):
            print(f'main.py not found in {subdir_path}')
            continue

        loaded_module = load_module(main_py)
        if not hasattr(loaded_module, 'handle_args'):
            print('Plugin ' + main_py + ' does not have handle_args() method')
            continue
        if not hasattr(loaded_module, 'add_parsing'):
            print('Plugin ' + main_py + ' does not have add_parsing() method')
            continue

        loaded_modules.append(loaded_module)

    return loaded_modules

if __name__ == '__main__':

    parser = arg_parser()

    current_script_path = os.path.abspath(__file__)
    current_script_dir = os.path.dirname(current_script_path)

    modules = load_all_modules_from_dir(current_script_dir + '/../plugin/')

    for module in modules:
        module.add_parsing(parser.get_parser())

    args = parser.parse()

    handle_simple_args()

    dune_sys = dune(args)

    for module in modules:
        if hasattr(module, 'set_dune'):
            module.set_dune(dune_sys)

    if parser.is_forwarding():
        dune_sys.execute_interactive_cmd(parser.get_forwarded_args())
    else:

        try:
            if args.start is not None:
                n: object
                if args.config is None:
                    n = node(args.start[0])
                elif len(args.config) == 1:
                    cfg_temp = args.config[0]
                    if not os.path.exists(cfg_temp):
                        parser.exit_with_help_message("--config: config.ini path does not exist\n",
                                                      "bad value: ", cfg_temp)
                    if os.path.isdir(cfg_temp):
                        cfg_temp = os.path.join(cfg_temp, "config.ini")
                    if os.path.split(cfg_temp)[1] != "config.ini":
                        parser.exit_with_help_message("--config: config must either be a config.ini "
                                                      "file or a path containing one\n"
                                                      "bad value: ", cfg_temp)
                    if not os.path.exists(cfg_temp):
                        parser.exit_with_help_message("--config: config.ini file must exist\n"
                                                      "bad value: ", cfg_temp)
                    n = node(args.start[0], dune_sys.docker.abs_host_path(cfg_temp))
                else:
                    parser.exit_with_help_message("--start / --config error")
                dune_sys.start_node(n)

            elif args.config is not None:
                parser.exit_with_help_message("--config without --start")

            elif args.remove is not None:
                dune_sys.remove_node(node(args.remove))

            elif args.destroy_container:
                dune_sys.destroy()

            elif args.stop_container:
                dune_sys.stop_container()

            elif args.start_container:
                dune_sys.start_container()

            elif args.stop is not None:
                dune_sys.stop_node(node(args.stop))

            elif args.list:
                dune_sys.list_nodes()

            elif args.simple_list:
                dune_sys.list_nodes(True)

            elif args.set_active is not None:
                dune_sys.set_active(node(args.set_active))

            elif args.get_active:
                print(dune_sys.get_active())

            elif args.monitor:
                dune_sys.monitor()

            elif args.export_wallet:
                dune_sys.export_wallet()

            elif args.import_wallet is not None:
                dune_sys.import_wallet(args.import_wallet)

            elif args.export_node is not None:
                dune_sys.export_node(node(args.export_node[0]),
                                     args.export_node[1])

            elif args.import_node is not None:
                dune_sys.import_node(args.import_node[0],
                                     node(args.import_node[1]))

            elif args.import_dev_key is not None:
                dune_sys.import_key(args.import_dev_key)

            elif args.create_key:
                print(dune_sys.create_key())

            elif args.create_account is not None:
                if len(args.create_account) > 2:
                    dune_sys.create_account(args.create_account[0],
                                            args.create_account[1],
                                            args.create_account[2],
                                            args.create_account[3])
                elif len(args.create_account) > 1:
                    dune_sys.create_account(args.create_account[0],
                                            args.create_account[1])
                else:
                    dune_sys.create_account(args.create_account[0])

            elif args.create_cmake_app is not None:
                dune_sys.init_project(args.create_cmake_app[0],
                                      dune_sys.docker.abs_host_path(
                                          args.create_cmake_app[1]), True)

            elif args.create_bare_app is not None:
                dune_sys.init_project(args.create_bare_app[0],
                                      dune_sys.docker.abs_host_path(
                                          args.create_bare_app[1]),
                                      False)

            elif args.cmake_build is not None:
                dune_sys.build_cmake_proj(args.cmake_build[0],
                                          parse_optional(args.remainder))

            elif args.ctest is not None:
                dune_sys.ctest_runner(args.ctest[0],
                                      parse_optional(args.remainder))

            elif args.gdb is not None:
                dune_sys.gdb(args.gdb[0], parse_optional(args.remainder))

            elif args.deploy is not None:
                dune_sys.deploy_contract(
                    dune_sys.docker.abs_host_path(args.deploy[0]),
                    args.deploy[1])

            elif args.set_bios_contract is not None:
                dune_sys.deploy_contract(
                    '/app/reference-contracts/build/contracts/eosio.bios',
                    args.set_bios_contract)

            elif args.set_core_contract is not None:
                dune_sys.deploy_contract(
                    '/app/reference-contracts/build/contracts/eosio.system',
                    args.set_core_contract)

            elif args.set_token_contract is not None:
                dune_sys.deploy_contract(
                    '/app/reference-contracts/build/contracts/eosio.token',
                    args.set_token_contract)

            elif args.bootstrap_system:
                dune_sys.bootstrap_system(False)

            elif args.bootstrap_system_full:
                dune_sys.bootstrap_system(True)

            elif args.activate_feature is not None:
                dune_sys.activate_feature(args.activate_feature, True)

            elif args.list_features:
                for f in dune_sys.features():
                    print(f)

            elif args.send_action is not None:
                dune_sys.send_action(args.send_action[1], args.send_action[0],
                                     args.send_action[2], args.send_action[3])

            elif args.get_table is not None:
                dune_sys.get_table(args.get_table[0], args.get_table[1],
                                   args.get_table[2])

            elif args.upgrade:
                dune_sys.docker.upgrade()

            elif args.leap:
                dune_sys.execute_cmd(['sh', 'bootstrap_leap.sh', args.leap])

            elif args.cdt:
                dune_sys.execute_cmd(['sh', 'bootstrap_cdt.sh', args.cdt])

            elif args.version_all:
                handle_version()
                dune_sys.execute_interactive_cmd(['apt','list','leap'])
                dune_sys.execute_interactive_cmd(['apt','list','cdt'])

            else:
                for module in modules:
                    module.handle_args(args)

        except KeyboardInterrupt:
            pass
        except dune_node_not_found as err:
            print('Node not found [' + err.name() + ']', file=sys.stderr)
            sys.exit(1)
        except dune_error as err:
            print("Internal Error", file=sys.stderr)
            sys.exit(1)
tools\src\plugin\README.md
[README for plugins can be found in docs/PLUGIN.md](../../docs/PLUGIN.md)
tools\tests\common.py


import os

# Find path for tests:
TEST_PATH = os.path.dirname(os.path.abspath(__file__))

# Set path for executable:
DUNE_EXE = os.path.split(TEST_PATH)[0] + "/dune"
print("Executable path: ", DUNE_EXE)

# Default addresses
DEFAULT_HTTP_ADDR="127.0.0.1:8888"
DEFAULT_P2P_ADDR="0.0.0.0:9876"
DEFAULT_SHIP_ADDR="127.0.0.1:8080"
tools\tests\config.ini
wasm-runtime = eos-vm
abi-serializer-max-time-ms = 15
chain-state-db-size-mb = 65536
# chain-threads = 2
contracts-console = true
http-server-address = 127.0.0.1:9991
p2p-listen-endpoint = 0.0.0.0:9992
state-history-endpoint = 127.0.0.1:9993
verbose-http-errors = true
# http-threads = 2
agent-name = "DUNE Test Node"
net-threads = 2
max-transaction-time = 100
producer-name = eosio
enable-stale-production = true
# producer-threads = 2
# trace-history = false
# chain-state-history = false
resource-monitor-not-shutdown-on-threshold-exceeded=true

plugin = eosio::chain_api_plugin
plugin = eosio::http_plugin
plugin = eosio::producer_plugin
plugin = eosio::producer_api_plugin
tools\tests\container.py
import os
import platform
import subprocess


class container:
    _container_name = ""
    _image_name = ""

    def __init__(self, container_name='dune_container', image_name='dune:latest'):
        self._container_name = container_name
        self._image_name = image_name
        self._debug = True

    @staticmethod
    def abs_host_path(directory):
        abs_path = os.path.abspath(directory)
        if platform.system() == 'Windows':
            # remove the drive letter prefix and replace the separators
            abs_path = abs_path[3:].replace('\\', '/')
        else:
            abs_path = abs_path[1:]

        return '/host/' + abs_path

    def get_container(self):
        return self._container_name

    def get_image(self):
        return self._image_name

    def execute_docker_cmd(self, cmd):
        with subprocess.Popen(['docker'] + cmd,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc:
            stdout, stderr = proc.communicate()
            if self._debug:
                print('docker '+' '.join(cmd))
                print(stdout.decode('UTF-8'))
                print(stderr.decode('UTF-8'))
        return [stdout.decode('UTF-8'), stderr.decode('UTF-8'), proc.poll()]

    def file_exists(self, file_name):
        return self.execute_cmd(['test', '-f', file_name])[2] == 0

    def dir_exists(self, directory):
        return self.execute_cmd(['test', '-d', directory])[2] == 0

    def tar_dir(self, file_name, directory):
        return self.execute_cmd(['tar', 'cvzf', file_name + '.tgz', directory])

    def untar(self, directory):
        return self.execute_cmd(['tar', 'xvzf', directory])

    def cp_to_host(self, container_file, host_file):
        return self.execute_docker_cmd(
            ['cp', self._container_name + ":" + container_file, host_file])

    def cp_from_host(self, host_file, container_file):
        return self.execute_docker_cmd(
            ['cp', host_file, self._container_name + ":" + container_file])

    def rm_file(self, file_name):
        self.execute_cmd(['rm', '-rf', file_name])

    def find_pid(self, process_name):
        stdout, _, _ = self.execute_cmd(['ps', 'ax'])
        for line in stdout.splitlines(True):
            if "PID TTY" in line:
                continue
            if process_name in line:
                return line.split()[0]
        return -1

    def get_container_name(self):
        return self._container_name

    def commit(self):
        self.execute_docker_cmd(['commit', 'dune', 'dune'])

    def start(self):
        print("Starting docker container [" + self._container_name + "]")
        self.execute_docker_cmd(['container', 'start', self._container_name])

    def stop(self):
        print("Stopping docker container [" + self._container_name + "]")
        self.execute_docker_cmd(['container', 'stop', self._container_name])

    def destroy(self):
        print("Destroying docker container [" + self._container_name + "]")
        self.execute_docker_cmd(['container', 'stop', self._container_name])
        self.execute_docker_cmd(['container', 'rm', self._container_name])

    def execute_cmd_at(self, directory, cmd):
        with subprocess.Popen(['docker', 'container', 'exec', '-w', directory,
                               self._container_name] + cmd) as proc:
            proc.communicate()

    def execute_cmd(self, cmd):
        return self.execute_docker_cmd(
            ['container', 'exec', self._container_name] + cmd)

    def execute_interactive_cmd(self, cmd):
        with subprocess.Popen(['docker', 'container',
                               'exec', '-i', self._container_name] + cmd) as proc:
            proc.communicate()

    def execute_cmd2(self, cmd):
        with subprocess.Popen(['docker', 'container',
                               'exec', self._container_name] + cmd) as proc:
            proc.communicate()

    def execute_bg_cmd(self, cmd):
        return self.execute_cmd(cmd + ['&'])

    # possible values for the status: created, restarting, running, removing, paused, exited, dead
    def check_status(self, status):
        stdout, _, _ = self.execute_docker_cmd(['ps', '--filter',
                                                             'status=' + status])
        for line in stdout.splitlines(True):
            if "CONTAINER ID" in line:
                continue
            if self._container_name in line:
                return True
        return False

    # check if the container still exists and was not deleted
    def exists(self):
        stdout, _, _ = self.execute_docker_cmd(['ps', '--filter',
                                                             'name=' + self._container_name])
        for line in stdout.splitlines(True):
            if "CONTAINER ID" in line:
                continue
            if self._container_name in line:
                return True
        return False

    # create a new container
    def create(self):
        print("Creating docker container [" + self._container_name + "]")
        self.execute_docker_cmd(["run", "--name=" + self._container_name,
                                 self._image_name, "exit"])
tools\tests\README.md
# Pytest Validation for DUNE

# WARNING

These tests are destructive. Do NOT run them when there is important data in your container!

## Getting Started

Make sure you have a recent version of `pytest` installed. Development was initially done with version `7.1`.

Pytest can be installed with [pip](https://pypi.org/project/pytest/)
or - very likely - is part of your Linux distribution. Additionally,
the source is available [here](https://github.com/pytest-dev/pytest).

Follow the [instructions](../README.md#getting-started) for building
and installing DUNE.

## Running the Tests

From the root directory, execute all the tests as follows:
```
$ pytest ./tests/
```

If you need to run a single test, it can be done as in the example below:
```
$ pytest ./tests/test_version.py
```

More information regarding running and developing tests with pytest
can be found [here](https://docs.pytest.org/en/stable/).

## What to do when you find a defect?

Check DUNE's GitHub site to see if the issue already exists. Make sure
your issue isn't already fixed in the main DUNE branch by rerunning
the tests there if you haven't already. If the problem is repeatable
in the main branch and no issue already exists, add a new issue that
includes your platform (e.g. Ubuntu 20.04 x86, OSX arm64, etc.),
compiler, Python version, pytest version, and the steps needed to
reproduce the defect.

## Adding new tests

Make sure any new tests follow the same basic format as the existing
ones. You can use [test_version.py](test_version.py) as a sample; a
minimal sketch is also shown below.

Note that you will NEED to run pylint against the tests. This can be
done from the root directory like this:
```
$ pylint ./tests/<my_new_test>.py
```
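
For illustration, here is a minimal test sketch written in the same
style (the file name `test_example.py` and its contents are
hypothetical; it assumes only `DUNE_EXE` from `common.py`):
```
#!/usr/bin/env python3

"""Example test sketch (hypothetical).

This only checks that `--version` exits successfully; real tests
should follow the patterns in the existing test files.
"""

import subprocess

from common import DUNE_EXE


def test_example():
    """Run DUNE with `--version` and verify the exit code."""
    subprocess.run([DUNE_EXE, "--version"], check=True)
```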
tools\tests\show_untested_options.sh
#!/bin/sh

# This file MUST remain co-located with the test files to work.


# Find paths to the tests and the dune executable.
SCRIPT=`readlink -f "$0"`
TEST_DIR=`dirname "$SCRIPT"`
DUNE_DIR=`dirname "$TEST_DIR"`
DUNE="$DUNE_DIR/dune"

# Get a list of the options.
options=`$DUNE --help | grep -o "^  --[a-z\-]*"`

# Get a list of the test files.
files=`find "$TEST_DIR" | grep "[.]py\$"`

# Return value. Initially set to zero/success.
rv=0

# Search for each option.
for opt in $options; do
    if ! grep --quiet \"$opt\" $files ; then
        # Report missing options and set the return value to non-zero/failure.
        echo "Missing option: $opt"
        rv=1
    fi
done

# Report success/fail.
exit $rv
tools\tests\test_all_options_tested.py
#!/usr/bin/env python3

"""Test DUNE Version_

This script tests that the compiled binary produce expected output
in response to `--version` option.
"""

import subprocess
import pytest

from common import TEST_PATH

@pytest.mark.skip(reason="Not implemented options make this test fail")
def test_all_options_tested():
    """Test that all the options from the output of `--help` are in the various test files."""

    script=TEST_PATH+"/show_untested_options.sh"

    subprocess.run(script, check=True)
tools\tests\test_boostrap.py
#!/usr/bin/env python3

"""Test DUNE bootstrap

These options are tested:
  --create-key
  --import-dev-key
  --bootstrap-system-full
  --get-table
"""

import subprocess

from common import DUNE_EXE

# Globals
NODE_NAME = "my_node"
ACCT_NAME = "myaccount"


def test_booststrap():

    # Remove any existing containers.
    subprocess.run([DUNE_EXE, "--destroy-container"], check=True)

    # Start the new node.
    subprocess.run([DUNE_EXE, "--start",NODE_NAME], check=True)

    # Create an account.
    subprocess.run([DUNE_EXE, "--create-account",ACCT_NAME], check=True)

    # Create a key. Get it to a var as well.
    public_key = None
    private_key = None
    stdout_result = subprocess.run([DUNE_EXE,"--create-key"], check=True, stdout=subprocess.PIPE)
    result_list = stdout_result.stdout.decode().split("\n")
    for entry in result_list:
        # ignore empty entries.
        if len(entry) == 0:
            continue
        items = entry.split(': ')
        if len(items) == 2:
            if items[0] == "Private key":
                private_key = items[1]
            elif items[0] == "Public key":
                public_key = items[1]
    assert private_key is not None

    # Import the key.
    subprocess.run([DUNE_EXE, "--import-dev-key",private_key], check=True)

    # Bootstrap the system.
    subprocess.run([DUNE_EXE, "--bootstrap-system-full"], check=True)

    results = subprocess.run([DUNE_EXE, "--get-table", "eosio.token", "eosio", "accounts"], check=True, stdout=subprocess.PIPE)
    assert b'"rows"' in results.stdout
tools\tests\test_container.py
#!/usr/bin/env python3

"""Test DUNE container controls

This script tests working with the Docker container.
"""
import subprocess

from common import DUNE_EXE
from container import container


def test_container_actions():
    """ Test the start, stop, and destroy action for containers. """

    cntr = container('dune_container', 'dune:latest')

    # Remove any container that already exists.
    if cntr.exists():
        subprocess.run([DUNE_EXE, "--destroy-container"], check=True)

    # Create/start the container.
    subprocess.run([DUNE_EXE, "--start-container"], check=True)
    assert cntr.check_status("running") is True

    # Stop the container.
    subprocess.run([DUNE_EXE, "--stop-container"], check=True)
    assert cntr.check_status("exited") is True

    # Restart the container.
    subprocess.run([DUNE_EXE, "--start-container"], check=True)
    assert cntr.check_status("running") is True

    # Destroy the container.
    subprocess.run([DUNE_EXE, "--destroy-container"], check=True)
    assert cntr.exists() is False
tools\tests\test_debug.py
#!/usr/bin/env python3

"""Test DUNE Version

This script tests that the compiled binary produces output in response
to the `--debug` option.

"""



import subprocess

from common import DUNE_EXE


def test_version_debug():
    """Test that the output of `--version --debug` is as expected."""

    # Call DUNE, we only care that `--debug` is available, not that it
    # does anything. For now.
    subprocess.run([DUNE_EXE,"--version","--debug"], check=True)
tools\tests\test_deploy.py
#!/usr/bin/env python3

"""Test DUNE Functions.

This script tests working with the smart contract related options:
  --deploy
  --send-action
"""

import os
import shutil
import subprocess

from common import DUNE_EXE,TEST_PATH

# Globals
NODE_NAME = "my_node"
ACCT_NAME = "myaccount"

PROJECT_NAME = "test_app"
TEST_APP_DIR = TEST_PATH + "/" + PROJECT_NAME
TEST_APP_BLD_DIR = TEST_APP_DIR + "/build/" + PROJECT_NAME
TEST_APP_WASM = TEST_APP_BLD_DIR + "/" + PROJECT_NAME + ".wasm"    # TEST_APP_BLD_DIR + "/test_app.wasm"


def test_deploy():
    """Test `--deploy` key."""

    # Remove any existing containers and old build directories.
    subprocess.run([DUNE_EXE,"--destroy-container"], check=True)
    if os.path.exists(TEST_APP_DIR):
        print("Removing TEST_APP_DIR: ", TEST_APP_DIR)
        shutil.rmtree(TEST_APP_DIR)

    # Create a new node and an account.
    subprocess.run([DUNE_EXE, "--start", NODE_NAME], check=True)
    subprocess.run([DUNE_EXE, "--create-account", ACCT_NAME], check=True)

    # Create and build a test app.
    subprocess.run([DUNE_EXE, "--create-cmake-app", PROJECT_NAME, TEST_PATH], check=True)
    subprocess.run([DUNE_EXE, "--cmake-build", TEST_APP_DIR], check=True)
    assert os.path.isfile(TEST_APP_WASM) is True

    subprocess.run([DUNE_EXE, "--deploy", TEST_APP_BLD_DIR, ACCT_NAME], check=True)

    # Send the action and search for a response in the result.
    #   ./dune --debug --send-action myaccount hi ["test"] eosio@active
    results = subprocess.run([DUNE_EXE, "--send-action", ACCT_NAME, "hi", '["test"]', "eosio@active"], check=True, stdout=subprocess.PIPE)
    assert b'>> Name : test' in results.stdout

    # Clean up after tests.
    shutil.rmtree(TEST_APP_DIR)
tools\tests\test_help.py
#!/usr/bin/env python3

"""Test DUNE Help

This script tests that the compiled binary produces expected output
in response to `-h`, `--help`, and `<some invalid option>` options.
"""



import subprocess

from common import DUNE_EXE


def test_invalid_option():
    """Test that the output of `dune <some invalid option>` is as expected."""

    # List of expected values.
    expect_list = \
        [
            b'usage: dune',
            b'dune: error: unrecognized arguments: --some-invalid-option'
        ]

    # Call the tool, check for failed return code
    # pylint: disable=subprocess-run-check
    completed_process = subprocess.run([DUNE_EXE,"--some-invalid-option"], stderr=subprocess.PIPE)
    assert completed_process.returncode != 0

    # Test for expected values in the captured output.
    for expect in expect_list:
        assert expect in completed_process.stderr


def test_help():
    """Test that the output of `dune -h` and `dune --help` is as expected."""

    # List of expected values.
    expect_list = \
        [
            b'usage: dune',
            b'DUNE: Docker Utilities for Node Execution',
            b'optional arguments:',
            b'-h, --help',
            b'--monitor',
            b'--start',
            b'--stop',
            b'--remove',
            b'--list',
            b'--version',
        ]


    # Call DUNE.
    completed_process_h = subprocess.run([DUNE_EXE,"-h"], check=True, stdout=subprocess.PIPE)

    # Call DUNE.
    completed_process_help = subprocess.run([DUNE_EXE,"--help"], check=True, stdout=subprocess.PIPE)

    # Test that the output of all the above executions is the same
    assert completed_process_h.stdout == completed_process_help.stdout

    # Test for expected values in the captured output.
    #  We need only test ONE output because we ensure the output is the same above.
    for expect in expect_list:
        assert expect in completed_process_h.stdout
tools\tests\test_keys.py
#!/usr/bin/env python3

"""Test DUNE Version

This script tests work with the crypto keys:
--create-key
--import-dev-key

"""
import subprocess

from common import DUNE_EXE
from container import container


def test_create_and_import_keys():
    """Test `--create-key` and `--import-dev-key` key."""

    # Ensure a container exists.
    cntr = container('dune_container', 'dune:latest')
    if not cntr.exists():
        cntr.create()

    # Create a key pair and capture both keys from the output.
    public_key = None
    private_key = None
    stdout_result = subprocess.run([DUNE_EXE,"--create-key"], check=True, stdout=subprocess.PIPE)
    result_list = stdout_result.stdout.decode().split("\n")
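    # Each non-empty line of the output is expected to look like
    # "Private key: <key>" or "Public key: <key>"; the loop below relies on
    # that "label: value" format.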
    for entry in result_list:
        # ignore empty entries.
        if len(entry) == 0:
            continue
        items = entry.split(': ')
        if len(items) == 2:
            if items[0] == "Private key":
                private_key = items[1]
            elif items[0] == "Public key":
                public_key = items[1]
    assert public_key is not None
    assert private_key is not None

    # Import the key.
    subprocess.run([DUNE_EXE,"--import-dev-key",private_key], check=True)
tools\tests\test_list_features.py
#!/usr/bin/env python3

"""Test DUNE List Features

This script tests that the compiled binary produce expected output in
response to the `--list-features` option.
"""



import subprocess

from common import DUNE_EXE


def test_list_features():
    """Test that the output of `dune --list-features` is as expected."""

    # List of expected output lines from `dune --list-features`.
    expect_list = \
	[b"GET_CODE_HASH",
        b"CRYPTO_PRIMITIVES",
        b"GET_BLOCK_NUM",
        b"ACTION_RETURN_VALUE",
        b"CONFIGURABLE_WASM_LIMITS2",
        b"BLOCKCHAIN_PARAMETERS",
        b"GET_SENDER",
        b"FORWARD_SETCODE",
        b"ONLY_BILL_FIRST_AUTHORIZER",
        b"RESTRICT_ACTION_TO_SELF",
        b"DISALLOW_EMPTY_PRODUCER_SCHEDULE",
        b"FIX_LINKAUTH_RESTRICTION",
        b"REPLACE_DEFERRED",
        b"NO_DUPLICATE_DEFERRED_ID",
        b"ONLY_LINK_TO_EXISTING_PERMISSION",
        b"RAM_RESTRICTIONS",
        b"WEBAUTHN_KEY",
        b"WTMSIG_BLOCK_SIGNATURES"]

    # Convert the list to a useful comparison value.
    expect = b''
    for temp in expect_list:
        expect = expect + temp + b'\n'

    # Call the tool, check return code, check expected value.
    completed_process = subprocess.run([DUNE_EXE,"--list-features"],
                                       check=True, stdout=subprocess.PIPE)
    assert completed_process.stdout == expect
tools\tests\test_monitor.py
#!/usr/bin/env python3

"""Test DUNE Version

This script tests --monitor key

"""
import subprocess

from common import DUNE_EXE
from container import container


def test_monitor():
    """Test `--monitor` key."""


    # Remove any container that already exists.
    cntr = container('dune_container', 'dune:latest')
    if cntr.exists():
        subprocess.run([DUNE_EXE, "--destroy-container"], check=True)

    # This will start a container; however, there will be NO active node, so it will fail.
    results = subprocess.run([DUNE_EXE, "--monitor"], capture_output=True, check=False)
    assert results.returncode != 0
    assert cntr.check_status("running") is True

    # Start a node.
    subprocess.run([DUNE_EXE,"--start", "my_node"], check=True)

    # Now try to monitor again.
    results = subprocess.run([DUNE_EXE, "--monitor"], capture_output=True, check=False)
    assert b'server_version' in results.stdout
tools\tests\test_nodes.py
#!/usr/bin/env python3

"""Test various DUNE commands.

This script tests that the compiled binary produces the expected output for these commands:
  --start
  --config
  --stop
  --remove
  --simple-list
  --list
  --get-active
  --set-active
  --export-node
  --import-node
"""


import os                       # mkdir
import shutil                   # rmtree
import subprocess

# pylint: disable=wildcard-import
from common import *                   # local defines
from container import container


# Globals
NODE_ALPHA = "ALPHA_NODE"
NODE_BRAVO = "BRAVO_NODE"
NODE_CHARLIE = "CHARLIE_NODE"

CONFIG_PATH = TEST_PATH
CONFIG_FILE = TEST_PATH + "/config.ini"

EXPORT_DIR = TEST_PATH + "/temp"

ALT_HTTP_ADDR="127.0.0.1:9991"
ALT_P2P_ADDR="0.0.0.0:9992"
ALT_SHIP_ADDR="127.0.0.1:9993"


def remove_all():
    """ Remove any existing nodes. """

    # Call dune, check the return is True.
    completed_process = subprocess.run([DUNE_EXE,"--simple-list"],
                                       check=True, stdout=subprocess.PIPE)

    # Convert the captured stdout to a list.
    result_list = completed_process.stdout.decode().split("\n")

    # Remove the header.
    result_list.pop(0)

    # Remove all the entries in the list.
    for entry in result_list:
        # ignore empty entries.
        if len(entry) == 0:
            continue
        # Remove the entry
        name = entry.split('|')[0]
        print("Removing: ", name)
        subprocess.run([DUNE_EXE,"--remove",name], check=True)


def validate_node_state( node_name, active_state, running_state ):
    """Validate the result of a call to `dune --simple-list` contains the
    node in a given state.

    :param node_name: The node to search for.
    :param active_state: True/False
    :param running_state: True/False
    """

    # Validate the entry
    assert active_state in (True, False)
    assert running_state in (True, False)

    expect = node_name + "|"
    if active_state:
        expect += "Y|"
    else:
        expect += "N|"
    if running_state:
        expect += "Y|"
    else:
        expect += "N|"
    expect += DEFAULT_HTTP_ADDR + "|" + DEFAULT_P2P_ADDR + "|" + DEFAULT_SHIP_ADDR
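    # The resulting string should match one row of `--simple-list` output, e.g.
    # roughly "my_node|Y|N|<http_addr>|<p2p_addr>|<ship_addr>", with the default
    # addresses coming from common.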

    # Call dune, check the return is True.
    completed_process = subprocess.run([DUNE_EXE,"--simple-list"],
                                       check=True, stdout=subprocess.PIPE)

    # Convert the captured stdout to a list for comparison with expected output.
    result_list = completed_process.stdout.decode().split("\n")

    assert expect in result_list


# pylint: disable=too-many-branches
def validate_node_list( node_list ):
    """Validate the result of a call to `dune --simple-list` contains all
    the nodes and states in node_list.

    :param node_list: A list of lists of the form:
        [node_name, active (True/False), running (True/False), http_addr, p2p_addr, ship_addr],
        where node_name is required and the remaining values have reasonable defaults.
    """

    # Test algorithm:
    #   Build the list of expected results.
    #   Get the actual results.
    #   For each entry in the actual results,
    #     Test the entry is in expected and remove it.
    #   Test that all entries are removed from expected results.

    # Create a list of expected strings.
    expect_list = ["Node|Active|Running|HTTP|P2P|SHiP"]
    for entry in node_list:

        # Validate the number of fields in the entry.
        is_valid = True
        if len(entry) not in (1,2,3,4,5,6):
            print("len() should be a value between 1 and 6 but is: ", len(entry), " value: ", entry)
            is_valid = False

        # Determine if this entry should be active.
        active = False
        if len(entry) > 1:
            active = entry[1]
            if active not in (True,False):
                print("Invalid value for Active. Expect True/False, received: ", active)
                is_valid = False

        # Determine if this entry should be running.
        running = False
        if len(entry) > 2:
            running = entry[2]
            if running not in (True,False):
                print("Invalid value for Running. Expect True/False, received: ", running)
                is_valid = False

        # Determine the expected ip addrs.
        http_addr=DEFAULT_HTTP_ADDR
        if len(entry) > 3:
            http_addr = entry[3]
        p2p_addr=DEFAULT_P2P_ADDR
        if len(entry) > 4:
            p2p_addr = entry[4]
        ship_addr=DEFAULT_SHIP_ADDR
        if len(entry) > 5:
            ship_addr = entry[5]

        assert is_valid

        # Make an expected string for this entry and append it to the expected results list.
        # Use the active/running values resolved above so that short entries relying on
        # the defaults do not raise an IndexError.
        temp = entry[0] + "|"
        if active:
            temp += "Y|"
        else:
            temp += "N|"
        if running:
            temp += "Y|"
        else:
            temp += "N|"
        temp += http_addr + "|" + p2p_addr + "|" + ship_addr
        expect_list.append(temp)


    # Call dune, check the return is True.
    completed_process = subprocess.run([DUNE_EXE,"--simple-list"],
                                       check=True, stdout=subprocess.PIPE)

    # Convert the captured stdout to a list for comparison with expected output.
    result_list = completed_process.stdout.decode().split("\n")

    # Iterate over the elements in the results list
    for entry in result_list:
        # Ignore empty lines.
        if entry == "":
            continue
        # Test a value exists in expect_list, then remove it.
        assert entry in expect_list
        expect_list.remove(entry)

    # Test that there are no entries remaining in expect_list.
    assert not expect_list


def expect_empty_verbose_list():
    """Test that the output of list options are empty."""

    # List of expected output lines from `dune --list`.
    empty_verbose_list = \
        "Node Name   | Active? | Running? | HTTP           | P2P          | SHiP          \n" + \
        "---------------------------------------------------------------------------------\n"

    # Call the tool, check expected value.
    completed_process = subprocess.run([DUNE_EXE,"--list"], check=True, stdout=subprocess.PIPE)
    assert completed_process.stdout.decode() == empty_verbose_list


# pylint: disable=too-many-statements
def test_nodes():
    """Run the tests."""

    # Remove any container that already exists and create a fresh one.
    cntr = container('dune_container', 'dune:latest')
    if cntr.exists():
        subprocess.run([DUNE_EXE, "--destroy-container"], check=True)
    subprocess.run([DUNE_EXE, "--start-container"], check=True)

    # Ensure there are no existing nodes.
    #   Tests `--simple-list` and `--list`
    remove_all()
    validate_node_list([])
    expect_empty_verbose_list()

    # Create a node and test its state.
    #   Tests `--start` when the node needs to be created.
    subprocess.run([DUNE_EXE,"--start", NODE_ALPHA], check=True)
    validate_node_state(NODE_ALPHA, True, True)
    # Stop the node and test its state.
    #   Tests `--stop`
    subprocess.run([DUNE_EXE,"--stop", NODE_ALPHA], check=True)
    validate_node_state(NODE_ALPHA, True, False)
    # Restart the node and test its state.
    #   Tests `--start` when the node already exists.
    subprocess.run([DUNE_EXE,"--start", NODE_ALPHA], check=True)
    validate_node_state(NODE_ALPHA, True, True)

    # Create a 2nd node and test the state of both nodes.
    #   Tests the behavior of `--start` on an already active, running node.
    subprocess.run([DUNE_EXE,"--start", NODE_BRAVO], check=True)
    validate_node_state(NODE_BRAVO, True, True)
    validate_node_list([[NODE_ALPHA, False, False],[NODE_BRAVO, True, True]])

    # Test --get-active shows NODE_BRAVO
    #   Tests `--get-active`.
    assert subprocess.run([DUNE_EXE,"--get-active"], check=True, stdout=subprocess.PIPE).stdout.decode() == (NODE_BRAVO + "\n")

    # Test that --set-active switches to NODE_ALPHA and --get-active returns the correct value.
    #   Tests `--set-active` switch active node while run state is left unchanged.
    subprocess.run([DUNE_EXE,"--set-active", NODE_ALPHA], check=True)
    validate_node_list([[NODE_ALPHA, True, False],[NODE_BRAVO, False, True]]) # Note this is TF,FT
    assert subprocess.run([DUNE_EXE,"--get-active"], check=True, stdout=subprocess.PIPE).stdout.decode() == (NODE_ALPHA + "\n")

    # Remove NODE_ALPHA, ensure it is no longer in the list.
    #   Tests `--remove`.
    subprocess.run([DUNE_EXE,"--remove", NODE_ALPHA], check=True)
    validate_node_list([[NODE_BRAVO, False, True]]) # Note the state of NODE_BRAVO is FT

    # Remove anything to get to a clean slate.
    remove_all()

    # Test `--start` where start includes a config path.
    subprocess.run([DUNE_EXE,"--start", NODE_ALPHA, "--config", CONFIG_PATH], check=True)
    validate_node_list([[NODE_ALPHA, True, True, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR]])

    # Test `--start` where start includes a config file.
    subprocess.run([DUNE_EXE,"--start", NODE_BRAVO, "--config", CONFIG_FILE], check=True)
    validate_node_list([[NODE_ALPHA, False, False, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR],
                        [NODE_BRAVO, True, True, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR]])

    # Test `--start` with invalid config file path.
    #   pylint: disable=subprocess-run-check
    completed_process = subprocess.run([DUNE_EXE,"--start", NODE_ALPHA, "--config", "unknown_config"], check=False)
    assert completed_process.returncode != 0

    # Test `--config` alone.
    #   pylint: disable=subprocess-run-check
    completed_process = subprocess.run([DUNE_EXE,"--config", "unknown_config"], check=False)
    assert completed_process.returncode != 0

    #
    # Testing the import and export of nodes may not be sophisticated
    # enough.
    #

    # Remove any existing export directory and ensure a fresh one is created.
    if os.path.exists(EXPORT_DIR):
        print("Removing EXPORT_DIR: ", EXPORT_DIR)
        shutil.rmtree(EXPORT_DIR)
    os.mkdir(EXPORT_DIR)


    # Just add an additional node for export.
    subprocess.run([DUNE_EXE,"--start", NODE_CHARLIE], check=True)


    # Export cases to cover:
    #   a directory (default filename), an explicit filename, and a filename under a not-yet-existing path.

    # Test --export-node using standard filename.
    subprocess.run([DUNE_EXE,"--export-node", NODE_ALPHA, EXPORT_DIR], check=True)
    assert os.path.exists(EXPORT_DIR + "/" + NODE_ALPHA + ".tgz")

    # Test --export-node using provided filename.
    subprocess.run([DUNE_EXE,"--export-node", NODE_BRAVO, EXPORT_DIR + "/bravo_export.tgz"], check=True)
    assert os.path.exists(EXPORT_DIR + "/bravo_export.tgz")

    # Test --export-node using non-existing path.
    subprocess.run([DUNE_EXE,"--export-node", NODE_CHARLIE, EXPORT_DIR + "/new_path/charlie_export.tgz"], check=True)
    assert os.path.exists(EXPORT_DIR + "/new_path/charlie_export.tgz")


    # Clean up before import.
    remove_all()

    # Test --import-node
    #   Import each node from the export tests and validate the resulting node list.
    subprocess.run([DUNE_EXE,"--import-node", EXPORT_DIR + "/ALPHA_NODE.tgz", NODE_ALPHA], check=True)
    validate_node_list([[NODE_ALPHA, True, True, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR]])

    subprocess.run([DUNE_EXE,"--import-node", EXPORT_DIR + "/bravo_export.tgz", NODE_BRAVO], check=True)
    validate_node_list([[NODE_ALPHA, False, False, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR],
                        [NODE_BRAVO, True, True, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR]])

    subprocess.run([DUNE_EXE,"--import-node", EXPORT_DIR + "/new_path/charlie_export.tgz", NODE_CHARLIE], check=True)
    validate_node_list([[NODE_ALPHA, False, False, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR],
                        [NODE_BRAVO, False, True, ALT_HTTP_ADDR, ALT_P2P_ADDR, ALT_SHIP_ADDR],
                        [NODE_CHARLIE, True, True, DEFAULT_HTTP_ADDR, DEFAULT_P2P_ADDR, DEFAULT_SHIP_ADDR]])

    # Finally, clean everything up before final return.
    remove_all()
tools\tests\test_plugin.py
#!/usr/bin/env python3

"""Test DUNE Plugin

This script tests that, after a plugin is copied into src/plugin, DUNE detects it and runs it properly.
"""

import os
import shutil
import subprocess

from common import DUNE_EXE

current_script_path = os.path.abspath(__file__)
current_script_dir = os.path.dirname(current_script_path)

src_dir = current_script_dir + '/../plugin_example/dune_hello'
dst_dir = current_script_dir + '/../src/plugin/dune_hello'

def prepare_plugin():
    """Copy the example plugin into the src/plugin directory."""
    remove_plugin()
    shutil.copytree(src_dir, dst_dir)

def remove_plugin():
    """Remove the example plugin from the src/plugin directory."""
    shutil.rmtree(dst_dir, ignore_errors=True)

def test_plugin_help():
    """Test that the plugin's `--hello` key is listed in the `--help` output."""
    prepare_plugin()

    expect_list = \
        [
            b'--hello',
        ]

    # Call DUNE.
    completed_process = subprocess.run([DUNE_EXE,"--help"], check=True, stdout=subprocess.PIPE)

    # Test for expected values in the captured output.
    for expect in expect_list:
        assert expect in completed_process.stdout

    remove_plugin()


def test_plugin_execution():
    """Test that the plugin's `--hello` key runs and prints its greeting."""
    prepare_plugin()

    expect_list = \
        [
            b'Hello from DUNE',
        ]

    # Call DUNE.
    completed_process = subprocess.run([DUNE_EXE,"--hello"], check=True, stdout=subprocess.PIPE)

    # Test for expected values in the captured output.
    for expect in expect_list:
        assert expect in completed_process.stdout

    remove_plugin()
    
tools\tests\test_project.py
#!/usr/bin/env python3

"""Test DUNE Version

This script tests work with the a smart contract project keys:
  --create-cmake-app
  --create-bare-app
  --cmake-build
  --ctest
  --gdb
"""

import glob
import os
import shutil
import subprocess

from common import DUNE_EXE,TEST_PATH
from container import container


PROJECT_NAME = "test_app"
TEST_APP_DIR = TEST_PATH + "/" + PROJECT_NAME
TEST_APP_BLD_DIR = TEST_APP_DIR + "/build/" + PROJECT_NAME
TEST_APP_WASM = TEST_APP_BLD_DIR + "/" + PROJECT_NAME + ".wasm"    # TEST_APP_BLD_DIR + "/test_app.wasm"


def remove_existing():
    """ Remove an existing `./test_app` dir. """

    cntr = container('dune_container', 'dune:latest')
    cntr.stop()

    if os.path.exists(TEST_APP_DIR):
        print("Removing TEST_APP_DIR: ", TEST_APP_DIR)
        shutil.rmtree(TEST_APP_DIR)


def test_create_cmake_app():
    """Test `--create-cmake-app` key."""

    remove_existing()

    # Expected files.
    filelist = [TEST_APP_DIR + '/',
                TEST_APP_DIR + '/src',
                TEST_APP_DIR + '/src/' + PROJECT_NAME + '.cpp',
                TEST_APP_DIR + '/src/CMakeLists.txt',
                TEST_APP_DIR + '/include',
                TEST_APP_DIR + '/include/' + PROJECT_NAME + '.hpp',
                TEST_APP_DIR + '/ricardian',
                TEST_APP_DIR + '/ricardian/' + PROJECT_NAME + '.contracts.md',
                TEST_APP_DIR + '/build',
                TEST_APP_DIR + '/CMakeLists.txt',
                TEST_APP_DIR + '/README.txt']

    # Create the test app.
    completed_process = subprocess.run([DUNE_EXE, "--create-cmake-app", PROJECT_NAME, TEST_PATH], check=True)
    assert completed_process.returncode == 0
    assert os.path.isdir(TEST_APP_DIR) is True

    # Get a list of the files created.
    lst = glob.glob(TEST_APP_DIR + "/**", recursive=True)

    # Sort the lists and compare.
    filelist.sort()
    lst.sort()
    assert filelist == lst

    # Cleanup
    shutil.rmtree(TEST_APP_DIR)


def test_create_bare_app():
    """Test `--create-bare-app` key."""

    remove_existing()

    # Expected file list.
    filelist = [TEST_APP_DIR + '/',
                TEST_APP_DIR + '/' + PROJECT_NAME + '.hpp',
                TEST_APP_DIR + '/' + PROJECT_NAME + '.cpp',
                TEST_APP_DIR + '/' + PROJECT_NAME + '.contracts.md',
                TEST_APP_DIR + '/README.txt']


    subprocess.run([DUNE_EXE, "--create-bare-app", PROJECT_NAME, TEST_PATH], check=True)
    assert os.path.isdir(TEST_APP_DIR) is True

    # Actual file list.
    lst = glob.glob(TEST_APP_DIR + "/**", recursive=True)

    # Sort and compare expected and actual.
    filelist.sort()
    lst.sort()
    assert filelist == lst

    # Cleanup
    shutil.rmtree(TEST_APP_DIR)



def test_cmake_and_ctest():
    """Test `--cmake` and `--ctest` key."""

    remove_existing()

    # Create the cmake app, test it exists.
    subprocess.run([DUNE_EXE, "--create-cmake-app", PROJECT_NAME, TEST_PATH], check=True)
    assert os.path.isdir(TEST_APP_DIR) is True

    # Build the app, test that the expected output file is created.
    subprocess.run([DUNE_EXE, "--cmake-build", TEST_APP_DIR], check=True)
    assert os.path.isfile(TEST_APP_WASM) is True

    # Test that CTest files are run.
    #    @TODO - This should be updated to create and test some PASSING tests.
    #    @TODO - This should be updated to create and test some FAILING tests.
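    #    Note: the arguments after "--" are presumably forwarded to ctest, and
    #    "--no-tests=ignore" keeps ctest from treating an empty test set as an error.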
    with subprocess.Popen(
            [DUNE_EXE, "--debug", "--ctest", TEST_APP_DIR, "--", "--no-tests=ignore"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8") as proc:
        _, stderr = proc.communicate()

        assert "No tests were found!!!" in stderr


    shutil.rmtree(TEST_APP_DIR)


def test_gdb():
    """Test `--gdb` key."""

    # Simply ensure gdb is run.
    proc = subprocess.run([DUNE_EXE, "--gdb", "/bin/sh"], capture_output=True, encoding="utf-8", check=True)
    assert "GNU gdb" in proc.stdout
tools\tests\test_version.py
#!/usr/bin/env python3

"""Test DUNE Version_

This script tests that the compiled binary produce expected output
in response to `--version` option.
"""



import subprocess

from common import DUNE_EXE


def test_version():
    """Test that the output of `--version` is as expected."""

    # List of expected values.
    expect_list = \
        [
            b'DUNE v1.',
        ]

    # Call DUNE.
    completed_process = subprocess.run([DUNE_EXE,"--version"], check=True, stdout=subprocess.PIPE)

    # Test for expected values in the captured output.
    for expect in expect_list:
        assert expect in completed_process.stdout
tools\tests\test_wallet.py
#!/usr/bin/env python3

"""Test DUNE Version

This script tests export and import of the wallet
"""

import os
import shutil
import subprocess

from container import container
from common import DUNE_EXE


def tar_dir(file_name, directory):
    """Create `<file_name>.tgz` from the given directory."""
    subprocess.run(['tar', 'cvzf', file_name + '.tgz', directory], check=True)


def untar(file_name):
    """Extract the given archive into the current directory."""
    subprocess.run(['tar', 'xvzf', file_name], check=True)


def test_export():
    """Test `--export-wallet` key."""

    subprocess.run([DUNE_EXE, "--export-wallet"], check=True)

    assert os.path.exists("wallet.tgz") is True


def test_import():
    """Test `--import-wallet` key."""

    cntr = container('dune_container', 'dune:latest')

    cntr.rm_file("/app/wallet.tgz")

    assert os.path.exists("wallet.tgz") is True

    untar("wallet.tgz")

    # Platform dependent locale encoding is acceptable here.
    #   pylint: disable=unspecified-encoding
    with open("_wallet/eosio-wallet/import_test_file", "w") as flag_file:
        flag_file.write("this is a flag file for testing purposes")

    tar_dir("wallet", "_wallet")

    # Use the wallet.tgz created by the successfully completed export test.
    subprocess.run([DUNE_EXE, "--debug",  "--import-wallet", "./wallet.tgz"], check=True)

    os.remove("wallet.tgz")
    shutil.rmtree("_wallet")

    assert cntr.file_exists("/root/eosio-wallet/import_test_file") is True

    cntr.rm_file("/root/eosio-wallet/import_test_file")
tools\tests\untested.py
#!/usr/bin/env python3

"""
This file exists solely to allow `show_untested_options.sh` to ignore certain keys.

These keys
  "--set-core-contract"
  "--set-bios-contract"
  "--set-token-contract"

are not tested because:

  Alex: I already do deployment of a contract and calling of
  its action during the test of the --deploy key. Does that mean the keys above don't need to be
  tested, because they actually just do deployment of contracts?

  Bucky: Yes, I think it's fine to not worry about testing those specifically as they are just
  deploying contracts. When we get to creating the EOS specific plugin system we will
  need to validate the bootstrapping, but it will be more than deploying at that point.

"""
tools\chocolateyinstall.ps1
$ErrorActionPreference = 'Stop'; # stop on all errors

$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"

# Prepend the package tools directory to the machine-wide PATH stored in the registry.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name Path -Value "$toolsDir;$($env:Path)"
tools\chocolateyuninstall.ps1
$ErrorActionPreference = 'Stop'; # stop on all errors

$currentPath = $env:Path

$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"

# Escape the directory so it can be used safely as a regex pattern.
$escapedDirectory = [regex]::Escape($toolsDir)

# Remove the tools directory from the PATH value.
$newPath = $currentPath -replace $escapedDirectory,''

# Write the updated machine-wide PATH back to the registry.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name Path -Value $newPath

In cases where actual malware is found, the packages are subject to removal. Software sometimes has false positives. Moderators do not necessarily validate the safety of the underlying software, only that a package retrieves software from the official distribution point and/or validate embedded software against official distribution point (where distribution rights allow redistribution).

Chocolatey Pro provides runtime protection from possible malware.

Discussion for the AntelopeIO DUNE Package

Ground Rules:

  • This discussion is only about AntelopeIO DUNE and the AntelopeIO DUNE package. If you have feedback for Chocolatey, please contact the Google Group.
  • This discussion will carry over multiple versions. If you have a comment about a particular version, please note that in your comments.
  • The maintainers of this Chocolatey Package will be notified about new comments that are posted to this Disqus thread; however, it is NOT a guarantee that you will get a response. If you do not hear back from the maintainers after posting a message below, please follow up by using the link on the left side of this page or follow this link to contact maintainers. If you still hear nothing back, please follow the package triage process.
  • Tell us what you love about the package or AntelopeIO DUNE, or tell us what needs improvement.
  • Share your experiences with the package, or extra configuration or gotchas that you've found.
  • If you use a URL, the comment will be flagged for moderation until you've been whitelisted. Disqus moderated comments are approved on a weekly schedule if not sooner. It could take between 1 and 5 days for your comment to show up.