Multinode - Rapidly build distributed cloud applications in Python

Core concepts


In the previous section, we wrote distributed code using the function primitive. However, the code execution has always been triggered by a developer running a command in a terminal.

In this section, we will expose these distributed computations as APIs, conferring the same efficiency savings on users across the internet. To build these APIs, we will introduce an additional multinode compute primitive, known as a service.


Here is an example of a FastAPI API, implemented as a multinode service. (We are using uvicorn as the web server, but Daphne and gunicorn can also be used.)

import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def hello_world():
    return "<p>Hello, World!</p>"

@mn.service(port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)

The @mn.service decorator indicates that the api function should be run in multinode's hosted cloud environment. The port=80 argument indicates that port 80 should be exposed to the outside world.


A Flask API can be wrapped up as a service in a similar manner using the @mn.service decorator.

import multinode as mn
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"

@mn.service(port=80)
def api():
    app.run(host="0.0.0.0", port=80)

Running the service

Save the FastAPI example code to a file. Also save the following dependencies to a file called requirements.txt:

# file: requirements.txt
multinode
fastapi
uvicorn

Now run this command:

`multinode run`

The domain name of the service should be printed to the console. If you open this domain name in a browser, you should see the Hello, World! message.

When you are finished, press CTRL+C to tear down the application.

To create a deployment that persists outside the lifetime of your terminal process, see the section on persistent deployments.

Invoking functions from a service

Now that we have grasped the basic mechanics of services, let's bundle up some of the distributed computations from the functions section as APIs.

Example 1: Expensive resources

import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/light")
def light(x: int):
    return x + 1

@app.get("/heavy")
def heavy(x: int):
    return heavy_computation(x)

@mn.function(cpu=32, memory="128GiB")
def heavy_computation(x):
    ans = do_hard_maths(x)
    return ans

@mn.service(cpu=1, memory="1GiB", port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)
  • The /light endpoint performs some light computation, which runs inside the service process itself. This service process has 1 CPU and 1GiB of memory.

  • The /heavy endpoint delegates some heavy computation to the heavy_computation function, which runs in a separate process, endowed with 32 CPUs and 128GiB of memory.
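The split between the two endpoints can be sketched locally with the standard library: a process pool plays the role of the remote 32-CPU worker. This is an illustration of the delegation pattern only, not multinode's implementation, and `heavy_computation` here is a stand-in for the undefined `do_hard_maths`.

```python
from concurrent.futures import ProcessPoolExecutor

def light(x):
    # cheap enough to run inside the service process itself
    return x + 1

def heavy_computation(x):
    # stand-in for do_hard_maths: a CPU-bound calculation that, under
    # multinode, would run on a separate 32-CPU / 128GiB worker
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=1) as pool:
        print(light(5))  # handled in-process, like /light
        # delegated to a separate process, like /heavy
        print(pool.submit(heavy_computation, 1000).result())
```

The key point is that the caller's resource allocation is independent of the worker's: the service process stays small while the delegated computation runs elsewhere.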

Example 2: Parallelisation

import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/sum_of_squares")
def calculate_sum_of_squares(x: int):
    # Fan the square invocations out in parallel. (The exact multinode
    # call was elided in the original; a parallel map is assumed.)
    squares = square.map(range(x))
    return sum(squares)

@mn.function()
def square(x):
    return x ** 2

@mn.service(port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)

GET requests will typically return in around one second, since the calculation is parallelised. (This is assuming we have taken precautions against cold starts.)
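To see why latency is roughly that of a single square call rather than the sum of all of them, here is a local analogue: a thread pool stands in for multinode's parallel fan-out, and a 0.1-second sleep stands in for one remote invocation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def square(x):
    time.sleep(0.1)  # stand-in for the latency of one remote invocation
    return x ** 2

def calculate_sum_of_squares(x):
    # All invocations run concurrently, so total latency is close to
    # that of the slowest single invocation.
    with ThreadPoolExecutor(max_workers=x) as pool:
        return sum(pool.map(square, range(x)))

start = time.perf_counter()
result = calculate_sum_of_squares(10)
elapsed = time.perf_counter() - start
print(result)  # 285
print(round(elapsed, 2))  # roughly 0.1, not 1.0
```

Ten sequential 0.1-second calls would take about a second; fanned out, the wall-clock time collapses to roughly one call's worth.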

Example 3: Asynchronous invocation

import multinode as mn
import uvicorn
from fastapi import FastAPI
import psycopg2

app = FastAPI()

@app.post("/orders")
def submit_order(order_details: dict):
    # Do not await the result - return acknowledgement immediately.
    # (The exact multinode call for asynchronous invocation was elided
    # in the original; a fire-and-forget start is assumed.)
    fulfil_order.start(order_details)
    return "Order acknowledged"

@mn.function()
def fulfil_order(order_details):
    # process the order
    ...

@mn.service(port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)

When the user submits an order, they receive an acknowledgement immediately; the order is fulfilled later.

Note: In the exceptionally rare event of a hardware failure while fulfil_order is running, the framework guarantees that the function will be retried.
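The acknowledge-now, fulfil-later flow can be sketched locally with a background thread. Here a thread stands in for the asynchronously invoked multinode function; the retry-on-hardware-failure guarantee is not modelled.

```python
import queue
import threading

fulfilled = queue.Queue()

def fulfil_order(order_details):
    # process the order (runs in the background, after acknowledgement)
    fulfilled.put(f"fulfilled: {order_details}")

def submit_order(order_details):
    # Hand the work off and return immediately, without waiting for it
    threading.Thread(target=fulfil_order, args=(order_details,)).start()
    return "Order acknowledged"

ack = submit_order("order-123")
print(ack)  # Order acknowledged
print(fulfilled.get(timeout=1))  # fulfilled: order-123
```

The caller's latency is just the cost of enqueueing the work; fulfilment completes on its own schedule.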