# Running OpenAI's GPT-4 on Infernet
In this tutorial, we will integrate OpenAI's GPT-4 into Infernet. Specifically, we will:

- Obtain an API key from OpenAI
- Configure the `gpt4` service, then build & deploy it with Infernet
- Make a Web2 request by directly prompting the `gpt4` service
- Make a Web3 request by integrating a sample `PromptsGPT.sol` smart contract. This contract makes a request to Infernet with a prompt and receives the result of the request.
## Hardware Requirements

Any laptop or desktop computer should be able to run this tutorial.
## Install Prerequisites

For this tutorial, you'll need to have the following installed:

- Docker (including `docker-compose`)
- Foundry
## Setting up the Infernet Node along with the `gpt4` container

You can follow these steps on your local machine to set up the Infernet Node and the `gpt4` container.
### Obtain an API key from OpenAI

First, you'll need to get an API key from OpenAI. You can do this by creating an OpenAI account. After signing in, head over to their platform to create an API key.

You will need a paid account to use the GPT-4 API.
### Ensure `docker` & `foundry` exist

To check for `docker`, run the following command in your terminal:

```bash
docker --version
# Docker version 25.0.2, build 29cf629 (example output)
```

You'll also need to ensure that `docker-compose` is available in your terminal:

```bash
which docker-compose
# /usr/local/bin/docker-compose (example output)
```

To check for `foundry`, run the following command in your terminal:

```bash
forge --version
# forge 0.2.0 (551bcb5 2024-02-28T07:40:42.782478000Z) (example output)
```
### Clone the starter repository

Just like our other examples, we're going to clone the starter repository. All of the code and instructions for this tutorial can be found in the `projects/gpt4` directory of the repository.

```bash
# Clone locally
git clone --recurse-submodules https://github.com/ritual-net/infernet-container-starter
# Navigate to the repository
cd infernet-container-starter
```
### Configure the `gpt4` container

#### Configure API key

This is where we'll use the API key we obtained from OpenAI.

```bash
cd projects/gpt4/container
cp config.sample.json config.json
```

In the `containers` field of `config.json`, you will see the following. Replace `your-openai-key` with your OpenAI API key.

```json
"containers": [
    {
        // ...
        "env": {
            // TODO: replace with your OpenAI API key
            "OPENAI_API_KEY": "your-openai-key"
        }
    }
],
```
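If you'd rather not hard-code the key into `config.json`, you can copy it in from an environment variable. The following is a minimal sketch (not part of the starter repository); the `inject_api_key` helper is my own name, and it assumes the config follows the `containers`/`env` shape shown above:

```python
import json

def inject_api_key(config: dict, api_key: str) -> dict:
    """Return a copy of the Infernet config with OPENAI_API_KEY set
    in the env of every container entry."""
    updated = json.loads(json.dumps(config))  # cheap deep copy via JSON round-trip
    for container in updated.get("containers", []):
        container.setdefault("env", {})["OPENAI_API_KEY"] = api_key
    return updated

# Example usage, rewriting config.json in place (assumes OPENAI_API_KEY is
# exported in your shell and `import os` at the top):
#   with open("config.json") as f:
#       config = json.load(f)
#   with open("config.json", "w") as f:
#       json.dump(inject_api_key(config, os.environ["OPENAI_API_KEY"]), f, indent=2)
```

This keeps the secret out of version control while leaving the sample config untouched.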
### Build the `gpt4` container

First, navigate back to the root of the repository. Then simply run the following command to build the `gpt4` container:

```bash
cd ../../..
make build-container project=gpt4
```
### Deploy the `gpt4` container with Infernet

You can run a single command to deploy the `gpt4` container and bootstrap the rest of the Infernet node stack in one go:

```bash
make deploy-container project=gpt4
```
### Check the running containers

At this point, it makes sense to check the running containers to ensure everything is running as expected.

```bash
docker container ps
```

You should expect to see something like this:

```
CONTAINER ID   IMAGE                                        COMMAND                  CREATED         STATUS         PORTS                    NAMES
55071f2f5926   ritualnetwork/example-gpt4-infernet:latest   "hypercorn app:creat…"   4 seconds ago   Up 3 seconds   0.0.0.0:3000->3000/tcp   gpt4
ace176281304   ritualnetwork/infernet-node:1.3.1            "/app/entrypoint.sh"     6 seconds ago   Up 5 seconds   0.0.0.0:4000->4000/tcp   infernet-node
36d3e2dadc20   ritualnetwork/infernet-anvil:1.0.0           "anvil --host 0.0.0.…"   6 seconds ago   Up 5 seconds   0.0.0.0:8545->3000/tcp   infernet-anvil
c09ac450572e   redis:7.4.0                                  "docker-entrypoint.s…"   6 seconds ago   Up 5 seconds   0.0.0.0:6379->6379/tcp   infernet-redis
3df662d987b1   fluent/fluent-bit:3.1.4                      "/fluent-bit/bin/flu…"   6 seconds ago   Up 5 seconds   2020/tcp, 24224/tcp      infernet-fluentbit
```

Notice that five different containers are running, including the `infernet-node` and `gpt4` containers.
### Send a job request to the `gpt4` container

From here, we can make a Web2 job request to the container by posting a request to the `api/jobs` endpoint.

```bash
curl -X POST http://127.0.0.1:4000/api/jobs \
  -H "Content-Type: application/json" \
  -d '{"containers": ["gpt4"], "data": {"prompt": "Can shrimp actually fry rice fr?"}}'
```

You will get a job ID in response. You can use this ID to check the status of the job:

```json
{"id": "e1006f58-22a9-461c-bfb7-3a0e87b34377"}
```
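The same request can be made from Python. Below is a minimal stdlib-only sketch (the helper names are my own; it assumes the node is listening on `127.0.0.1:4000` as in the `curl` example above):

```python
import json
from urllib import request

def build_job_payload(container: str, prompt: str) -> dict:
    """Build the JSON body expected by the node's POST /api/jobs endpoint."""
    return {"containers": [container], "data": {"prompt": prompt}}

def submit_job(base_url: str, container: str, prompt: str) -> str:
    """POST a job to the node and return the job ID from the response."""
    body = json.dumps(build_job_payload(container, prompt)).encode()
    req = request.Request(
        f"{base_url}/api/jobs",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["id"]

# Usage (requires the node stack from the previous steps to be running):
#   job_id = submit_job("http://127.0.0.1:4000", "gpt4",
#                       "Can shrimp actually fry rice fr?")
```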
### Check the status of the job

You can make a `GET` request to the `api/jobs` endpoint to check the status of the job.

```bash
curl -X GET "http://127.0.0.1:4000/api/jobs?id=e1006f58-22a9-461c-bfb7-3a0e87b34377"
```

You will get a response similar to this:

```json
[
  {
    "id": "e1006f58-22a9-461c-bfb7-3a0e87b34377",
    "result": {
      "container": "gpt4",
      "output": {
        "message": "No, the phrase \"shrimp fried rice\" refers to the method of cooking, not to the fact that shrimp would actually fry the rice. In this case, the shrimp and rice are fried together, typically along with other ingredients like vegetables and sauce. Usually humans will do the cooking, not the shrimp!"
      }
    },
    "status": "success"
  }
]
```
Disappointing response from GPT-4, but it's working! 🎉
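Since jobs run asynchronously, it can be convenient to poll until the job leaves its in-flight state rather than re-running `curl` by hand. A sketch with an injectable fetch function (the helper names are mine, and it assumes an in-flight job reports `"status": "running"`, as in this node version):

```python
import json
import time
from urllib import request

def fetch_job(base_url: str, job_id: str) -> dict:
    """GET a single job record from the node's /api/jobs endpoint."""
    with request.urlopen(f"{base_url}/api/jobs?id={job_id}") as resp:
        return json.load(resp)[0]  # endpoint returns a list of job records

def wait_for_job(job_id: str, fetch, interval: float = 1.0,
                 timeout: float = 60.0) -> dict:
    """Poll fetch(job_id) until the job is no longer 'running'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch(job_id)
        if job.get("status") != "running":
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Usage against the local node from the steps above:
#   job = wait_for_job(job_id, lambda i: fetch_job("http://127.0.0.1:4000", i))
#   print(job["result"]["output"]["message"])
```

Injecting `fetch` keeps the polling logic testable without a live node.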
## Calling our service from a smart contract

In the following steps, we will deploy our consumer contract and make a subscription request by calling the contract.

### Setup

Ensure that you have followed the steps in the previous section to set up the Infernet Node and the `gpt4` container.
Notice that in one of the steps above we have an Anvil node running on port `8545`.

By default, the `infernet-anvil` image deploys the Infernet SDK and other relevant contracts for you:

- Coordinator: `0x5FbDB2315678afecb367f032d93F642f64180aa3`
- Primary node: `0x70997970C51812dc3A010C7d01b50e0d17dc79C8`
### Deploy our `PromptsGPT` smart contract

In this step, we will deploy our `PromptsGPT.sol` contract to the Anvil node. This contract simply allows us to submit a prompt to the LLM; it then receives the result of the prompt and prints it to the Anvil console.

#### Anvil logs

During this process, it is useful to watch the logs of the Anvil node to see what's going on. To follow the logs, run the following in a new terminal:

```bash
docker logs -f infernet-anvil
```
#### Deploying the contract

Once ready, deploy the `PromptsGPT` consumer contract by running the following in another terminal:

```bash
make deploy-contracts project=gpt4
```

You should expect to see similar Anvil logs:

```
eth_getTransactionReceipt

Transaction: 0x7055d707c0b66e0a49f4a686af2c64434a69242c44547a08bf9f212ac091cabc
Contract created: 0x663f3ad617193148711d28f5334ee4ed07016602
Gas used: 730664

Block Number: 1
Block Hash: 0x730f64803600476f6b0b314d3d3e4fcd51b93f29fc9d99b4f0993f4ede6b4b55
Block Time: "Wed, 6 Mar 2024 18:48:06 +0000"

eth_getTransactionByHash
```
From these logs, we can see that the `PromptsGPT` contract has been deployed to address `0x663f3ad617193148711d28f5334ee4ed07016602`.
### Call the contract

Now, let's call the contract with a prompt! In the same terminal, run:

```bash
make call-contract project=gpt4 prompt="How can I make a cake?"
```

You should first expect to see an initiation transaction sent to the `PromptsGPT` contract:

```
eth_sendRawTransaction

Transaction: 0x4e85c69eeccd44af35e11d5c82f1868c97659dd3cfc508b028efb16e7ffef0d1
Gas used: 191018

Block Number: 3
Block Hash: 0xbd424c32b709a95e945a72335c45870c43aade59ca69168676325b6a1ab378f9
Block Time: "Wed, 6 Mar 2024 18:49:09 +0000"

eth_getTransactionReceipt
```
Shortly after that, you should see another transaction submitted from the Infernet Node. This is the result of your on-chain subscription and its associated job request:

```
eth_sendRawTransaction

 _____  _____ _______ _    _          _
|  __ \|_   _|__   __| |  | |   /\   | |
| |__) | | |    | |  | |  | |  /  \  | |
|  _  /  | |    | |  | |  | | / /\ \ | |
| | \ \ _| |_   | |  | |__| / ____ \ | |____
|_|  \_\_____|  |_|   \____/_/    \_\|______|

subscription Id 1
interval 1
redundancy 1
node 0x70997970C51812dc3A010C7d01b50e0d17dc79C8
output: Sure, I can guide you through a basic vanilla cake recipe. Here's a step-by-step guide:

Ingredients:
1. 2 cups of all-purpose flour
2. 1 1/2 cups of granulated sugar
3. 1/2 cup of butter at room temperature
4. 1 cup of milk
5. 3 1/2 teaspoons of baking powder
6. 1 teaspoon of vanilla extract
7. 1/2 teaspoon of salt
8. 3 large eggs

Instructions:
1. Preheat your oven to 350 degrees F (175 degrees C).
2. Grease and flour a 9x13 inch pan or line with parchment paper.
3. In a medium bowl, cream together the sugar and butter.
4. Beat in the eggs, one at a time, mixing well after each.
5. Combine the flour, baking powder, and salt; stir into the butter mixture alternately with the milk, beginning and ending with the flour mixture.
6. Stir in the vanilla extract.
7. Pour batter into the prepared pan.
8. Bake for 30 to 40 minutes in the preheated oven.
9. Cake is done when it springs back to the touch, or a toothpick inserted into the center comes out clean.
10. Let cool in pan for at least 10 minutes, then turn out onto a wire rack and cool completely.

Remember, all ovens vary so keep a close eye on your cake to ensure it doesn't overcook. Enjoy baking!

Transaction: 0xfe7b13a50e4ee427db280a3ea0f6e01bc3e34d4bff6dc567a176bfb059cd814b
Gas used: 139840
```
🎉 Congratulations! You have successfully enabled a contract to access OpenAI's GPT-4 service!
## Next steps

This container is for demonstration purposes only, and is purposefully simplified for readability and ease of comprehension. For a production-ready version of this code, check out:

- The CSS Inference Workflow: a Python class that supports multiple API providers, including OpenAI, and can be used to build production-ready containers.
- The CSS Inference Service: a production-ready, Infernet-compatible container that works out of the box with minimal configuration and serves inference using the CSS Inference Workflow.