Running an ONNX Model on Infernet

Welcome to this comprehensive guide, where we'll explore how to run an ONNX model on Infernet using our infernet-container-starter examples repository. This tutorial is designed to give you an end-to-end understanding of how to run your own custom pre-trained models and interact with them on-chain and off-chain.

Model: This example uses a pre-trained model to classify iris flowers. The code for the model is located at our simple-ml-models repository.

Hardware Requirements

Any laptop or desktop computer should be able to run this tutorial.

Install Pre-requisites

For this tutorial, you'll need to have the following installed:

  1. Docker
  2. Foundry

Web2 Off-Chain Compute

Ensure docker & foundry exist

To check for docker, run the following command in your terminal:

docker --version
# Docker version 25.0.2, build 29cf629 (example output)

You'll also need to ensure that docker-compose is available on your PATH:

which docker-compose
# /usr/local/bin/docker-compose (example output)

To check for foundry, run the following command in your terminal:

forge --version
# forge 0.2.0 (551bcb5 2024-02-28T07:40:42.782478000Z) (example output)

Clone the starter repository

Much like in our hello-world example, we'll clone the starter repository. All of the code and instructions for this tutorial can be found in the projects/onnx-iris directory of the repository.

# Clone locally
git clone --recurse-submodules https://github.com/ritual-net/infernet-container-starter
# Navigate to the repository
cd infernet-container-starter

Build the onnx-iris container

Simply run the following command to build the onnx-iris container:

make build-container project=onnx-iris

Deploy the onnx-iris container with Infernet

Run a single command to deploy the onnx-iris container and bootstrap the rest of the Infernet node stack in one go:

make deploy-container project=onnx-iris

Check the running containers

At this point it makes sense to check the running containers to ensure everything is running as expected.

# > docker container ps
CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS          PORTS                                                     NAMES
0dbc30f67e1e   ritualnetwork/example-onnx-iris-infernet:latest   "hypercorn app:creat…"   8 seconds ago   Up 7 seconds    0.0.0.0:3000->3000/tcp                                    onnx-iris
0c5140e0f41b   ritualnetwork/infernet-anvil:0.0.0                "anvil --host 0.0.0.…"   23 hours ago    Up 23 hours     0.0.0.0:8545->3000/tcp                                    anvil-node
f5682ec2ad31   ritualnetwork/infernet-node:latest                "/app/entrypoint.sh"     23 hours ago    Up 9 seconds    0.0.0.0:4000->4000/tcp                                    deploy-node-1
c1ece27ba112   fluent/fluent-bit:latest                          "/fluent-bit/bin/flu…"   23 hours ago    Up 10 seconds   2020/tcp, 0.0.0.0:24224->24224/tcp, :::24224->24224/tcp   deploy-fluentbit-1
3cccea24a303   redis:latest                                      "docker-entrypoint.s…"   23 hours ago    Up 10 seconds   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp                 deploy-redis-1

You should see five containers running, including the Infernet node and the onnx-iris model container we just deployed.

Create an off-chain compute job

With the onnx-iris container running, we can make a POST request to the Infernet node to create a job.

curl -X POST "http://127.0.0.1:4000/api/jobs" \
-H "Content-Type: application/json" \
-d '{"containers":["onnx-iris"], "data": {"input": [[1.0380048, 0.5586108, 1.1037828, 1.712096]]}}'

You should get an output similar to the following:

{
    "id": "074b9e98-f1f6-463c-b185-651878f3b4f6"
}
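As an alternative to curl, the same request can be sketched in Python using only the standard library. The helper names below (build_job_request, submit_job) are our own illustration, not part of the Infernet API:

```python
import json
from urllib.request import Request, urlopen

def build_job_request(container: str, features: list[float]) -> dict:
    """Build the JSON body expected by the node's /api/jobs endpoint."""
    return {"containers": [container], "data": {"input": [features]}}

def submit_job(node_url: str, payload: dict) -> str:
    """POST the job to the node and return the job ID from the response."""
    req = Request(
        f"{node_url}/api/jobs",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return json.load(resp)["id"]

payload = build_job_request("onnx-iris", [1.0380048, 0.5586108, 1.1037828, 1.712096])
# job_id = submit_job("http://127.0.0.1:4000", payload)  # requires a running node
```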
ℹ️ The inputs provided above correspond to an iris flower with the following characteristics:

  1. Sepal Length: 5.5cm
  2. Sepal Width: 2.4cm
  3. Petal Length: 3.8cm
  4. Petal Width: 1.1cm

By vectorizing & standardizing these inputs, we get the following vector:

[1.0380048, 0.5586108, 1.1037828, 1.712096]

Refer to this function in the model's repository for more information on how the input is scaled.

For more context on the Iris dataset, refer to the UCI Machine Learning Repository.
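As a rough illustration of this kind of z-score standardization: each raw measurement is shifted by the feature's mean and divided by its standard deviation. The scaling parameters below are hypothetical placeholders (the real values live in the model's repository), so this sketch will not reproduce the exact vector above:

```python
def standardize(features: list[float], means: list[float], stds: list[float]) -> list[float]:
    """Z-score each raw measurement: (x - mean) / std."""
    return [(x - m) / s for x, m, s in zip(features, means, stds)]

# Hypothetical scaling parameters, for illustration only.
MEANS = [5.0, 3.0, 3.5, 1.0]
STDS = [0.8, 0.4, 1.8, 0.75]

raw = [5.5, 2.4, 3.8, 1.1]  # sepal length, sepal width, petal length, petal width (cm)
scaled = standardize(raw, MEANS, STDS)
```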

Collect job status

To check the current status of the job (including any results or errors), we can poll the /api/jobs endpoint with the ID of our job from above:

curl -X GET "http://127.0.0.1:4000/api/jobs?id=074b9e98-f1f6-463c-b185-651878f3b4f6"

If our request was successful, we should see a response similar to the following:

[
    {
        "id": "074b9e98-f1f6-463c-b185-651878f3b4f6",
        "result": {
            "container": "onnx-iris",
            "output": [
                [
                    [
                        0.0010151526657864451,
                        0.014391022734344006,
                        0.9845937490463257
                    ]
                ]
            ]
        },
        "status": "success"
    }
]
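Assuming the three output values are class probabilities in the order Setosa, Versicolor, Virginica (the on-chain example later in this tutorial shows the same ordering), a small sketch for picking the predicted class from the job response might look like:

```python
CLASSES = ["Setosa", "Versicolor", "Virginica"]

def classify(job_response: dict) -> tuple[str, float]:
    """Extract the probability vector from a job result and return the top class."""
    probs = job_response["result"]["output"][0][0]
    best = probs.index(max(probs))
    return CLASSES[best], probs[best]

# Sample response from the tutorial above.
response = {
    "id": "074b9e98-f1f6-463c-b185-651878f3b4f6",
    "result": {
        "container": "onnx-iris",
        "output": [[[0.0010151526657864451, 0.014391022734344006, 0.9845937490463257]]],
    },
    "status": "success",
}
label, prob = classify(response)  # ('Virginica', 0.9845937490463257)
```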

Additional Resources

  1. To look at the node configuration, code, and other resources for the onnx-iris container, reference the Infernet Container Starter repository.
  2. To build an Infernet-compatible container image, reference the Container documentation.

Web3 On-Chain Subscriptions

To test the Web3 workflow of the Infernet Node, we will need to scaffold some additional components. Below, we:

  1. Deploy an Anvil local testnet node with the Infernet SDK contracts already set up (via our anvil-node image).
  2. Deploy a simple Infernet Consumer contract for our Web3 demo application.
  3. Deploy an Infernet Node ready to service our requests.

Then, once this preliminary infrastructure is set up, we:

  1. Make a subscription request to our newly-deployed consumer contract, creating an on-chain subscription.
  2. Monitor the full lifecycle of the subscription, from the:
  • node listening for the subscription event,
  • to the node processing the subscription,
  • to the node sending the result back to our consumer contract on-chain.

Setup

Follow the steps in the Web2 Off-Chain Compute section above to build and deploy the onnx-iris container.

Inspect the Anvil node

In another terminal, you can run docker container ls to see a list of the now running containers:

# > docker container ps
 
CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS          PORTS                                                     NAMES
0dbc30f67e1e   ritualnetwork/example-onnx-iris-infernet:latest   "hypercorn app:creat…"   8 seconds ago   Up 7 seconds    0.0.0.0:3000->3000/tcp                                    onnx-iris
0c5140e0f41b   ritualnetwork/infernet-anvil:0.0.0                "anvil --host 0.0.0.…"   23 hours ago    Up 23 hours     0.0.0.0:8545->3000/tcp                                    anvil-node
f5682ec2ad31   ritualnetwork/infernet-node:latest                "/app/entrypoint.sh"     23 hours ago    Up 9 seconds    0.0.0.0:4000->4000/tcp                                    deploy-node-1
c1ece27ba112   fluent/fluent-bit:latest                          "/fluent-bit/bin/flu…"   23 hours ago    Up 10 seconds   2020/tcp, 0.0.0.0:24224->24224/tcp, :::24224->24224/tcp   deploy-fluentbit-1
3cccea24a303   redis:latest                                      "docker-entrypoint.s…"   23 hours ago    Up 10 seconds   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp                 deploy-redis-1

Notice that you now have an Anvil node running on port 8545 and an Infernet Node on port 4000.

By default, the anvil-node image deploys the Infernet SDK and other relevant contracts for you:

  • Coordinator: 0x5FbDB2315678afecb367f032d93F642f64180aa3
  • Primary node: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8
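To sanity-check that the Coordinator contract is actually deployed on the Anvil node, you could query its bytecode over JSON-RPC (the standard eth_getCode method). This is an optional sketch using only the standard library; the helper names are our own:

```python
import json
from urllib.request import Request, urlopen

COORDINATOR = "0x5FbDB2315678afecb367f032d93F642f64180aa3"

def build_rpc_request(method: str, params: list) -> dict:
    """Build a JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

def get_code(rpc_url: str, address: str) -> str:
    """Return the bytecode deployed at `address` ("0x" means nothing is there)."""
    body = build_rpc_request("eth_getCode", [address, "latest"])
    req = Request(rpc_url, data=json.dumps(body).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["result"]

# code = get_code("http://127.0.0.1:8545", COORDINATOR)  # requires the Anvil node
# non-empty bytecode (code != "0x") means the Coordinator is deployed
```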

Deploy a consumer

Next, we must deploy a consumer contract that implements the simple Infernet SDK interface.

Our IrisClassifier contract is a simple example consumer. All this contract does is request compute output from our Infernet node to classify an iris flower based on its sepal and petal dimensions and, upon receiving the result, print it using the forge console. We can deploy this contract via the associated forge project.

Anvil logs

During this process, it is useful to look at the logs of the Anvil node to see what's going on. To follow the logs, in a new terminal, run:

docker logs -f anvil-node

Deploying the contract

Once ready, to deploy the IrisClassifier consumer contract, in another terminal, run:

make deploy-contracts project=onnx-iris

You should expect to see similar Anvil logs:

# > make deploy-contracts project=onnx-iris
 
eth_sendRawTransaction
eth_getTransactionReceipt
 
Transaction: 0x23ca6b1d1823ad5af175c207c2505112f60038fc000e1e22509816fa29a3afd6
Contract created: 0x663f3ad617193148711d28f5334ee4ed07016602
Gas used: 476669
 
Block Number: 1
Block Hash: 0x6b026b70fbe97b4a733d4812ccd6e8e25899a1f6c622430c3fb07a2e5c5c96b7
Block Time: "Wed, 17 Jan 2024 22:17:31 +0000"
 
eth_getTransactionByHash
eth_getTransactionReceipt
eth_blockNumber

From our logs, we can see that the IrisClassifier contract has been deployed to address 0x663f3ad617193148711d28f5334ee4ed07016602.

Call the contract

Now, let's call the contract to initiate a request to the Infernet Node. In the same terminal, run:

make call-contract project=onnx-iris

You should first expect to see an initiation transaction sent to the IrisClassifier contract:

eth_getTransactionReceipt
 
Transaction: 0xe56b5b6ac713a978a1631a44d6a0c9eb6941dce929e1b66b4a2f7a61b0349d65
Gas used: 123323
 
Block Number: 2
Block Hash: 0x3d6678424adcdecfa0a8edd51e014290e5f54ee4707d4779e710a2a4d9867c08
Block Time: "Wed, 17 Jan 2024 22:18:39 +0000"
eth_getTransactionByHash

Shortly after that, you should see another transaction submitted by the Infernet Node, which is the result of your on-chain subscription and its associated job request:

eth_sendRawTransaction
 
 
_____  _____ _______ _    _         _
|  __ \|_   _|__   __| |  | |  /\   | |
| |__) | | |    | |  | |  | | /  \  | |
|  _  /  | |    | |  | |  | |/ /\ \ | |
| | \ \ _| |_   | |  | |__| / ____ \| |____
|_|  \_\_____|  |_|   \____/_/    \_\______|
 
 
predictions: (adjusted by 6 decimals, 1_000_000 = 100%, 1_000 = 0.1%)
Setosa:  1015
Versicolor:  14391
Virginica:  984593
 
Transaction: 0x77c7ff26ed20ffb1a32baf467a3cead6ed81fe5ae7d2e419491ca92b4ac826f0
Gas used: 111091
 
Block Number: 3
Block Hash: 0x78f98f4d54ebdca2a8aa46c3b9b7e7ae36348373dbeb83c91a4600dd6aba2c55
Block Time: "Mon, 19 Feb 2024 20:33:00 +0000"
 
eth_blockNumber
eth_newFilter
eth_getFilterLogs

We can now confirm that the address of the Infernet Node that submitted this transaction matches the primary node address we noted earlier (0x70997970C51812dc3A010C7d01b50e0d17dc79C8).
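Since the contract reports predictions as fixed-point integers with 6 decimals (1_000_000 = 100%), converting them back to probabilities is a single division. A minimal sketch:

```python
SCALE = 1_000_000  # 6 decimals: 1_000_000 == 100%, 1_000 == 0.1%

def to_probabilities(fixed_point: list[int]) -> list[float]:
    """Convert the contract's fixed-point predictions back to floats."""
    return [p / SCALE for p in fixed_point]

# Setosa, Versicolor, Virginica values from the Anvil logs above.
onchain = [1015, 14391, 984593]
probs = to_probabilities(onchain)
```

These match the off-chain output from earlier in the tutorial, up to the precision lost in the 6-decimal truncation.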

Congratulations! 🎉 You have successfully created an on-chain subscription request!