Core concepts
Services
In the previous section, we wrote distributed code using the function primitive. However, execution has always been triggered by a developer running a command in a terminal.
In this section, we will expose these distributed computations as APIs, so that users on the internet benefit from the same efficiency savings. To build these APIs, we will introduce an additional multinode compute primitive, known as a service.
Hello world services in popular web frameworks
FastAPI
Here is an example of a FastAPI API, implemented as a multinode service. (We are using uvicorn as the web server, but Daphne and gunicorn can also be used.)
```python
import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def hello_world():
    return "<p>Hello, World!</p>"

@mn.service(port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)
```
The `@mn.service` decorator indicates that the `api` function should be run in multinode's hosted cloud environment. The `port=80` argument specifies that port 80 should be exposed to the outside world.
Flask
A Flask API can be wrapped up as a service in a similar manner, using the `@mn.service` decorator.
```python
import multinode as mn
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"

@mn.service(port=80)
def api():
    app.run(host="0.0.0.0", port=80)
```
Running the service
Save the FastAPI example code to a file called `main.py`. Also save the following dependencies to a file called `requirements.txt`:
```
# file: requirements.txt
fastapi==0.103.1
uvicorn==0.23.2
```
Now run this command:
`multinode run main.py`
The domain name of the service should be printed to the console. If you open this domain name in a browser, you should see the `Hello, World!` page.
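To sanity-check the deployment from code rather than a browser, you can fetch the page programmatically. (The domain below is a hypothetical placeholder; substitute the one printed by the CLI.)
```python
# Minimal smoke test, assuming the CLI printed the hypothetical domain
# https://my-service.example.com for this deployment.
import urllib.request

with urllib.request.urlopen("https://my-service.example.com/") as response:
    print(response.read().decode())  # expected: "<p>Hello, World!</p>"
```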
When you are finished, press `CTRL+C` to tear down the application.
To create a deployment that persists outside the lifetime of your terminal process, see the section on persistent deployments.
Invoking functions from a service
Now that we have grasped the basic mechanics of services, let's bundle up some of the distributed computations from the functions section as APIs.
Example 1: Expensive resources
```python
import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/light")
def light(x: int):
    return x + 1

@app.get("/heavy")
def heavy(x: int):
    return heavy_computation.call(x)

@mn.function(cpu=32, memory="128GiB")
def heavy_computation(x):
    ans = do_hard_maths(x)
    return ans

@mn.service(cpu=1, memory="1GiB", port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)
```
The `/light` endpoint performs some light computation, which runs inside the service process itself. This service process has 1 CPU and 1GiB of memory. The `/heavy` endpoint delegates some heavy computation to the `heavy_computation` function, which runs in a separate process, endowed with 32 CPUs and 128GiB of memory.
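`do_hard_maths` is left undefined in the snippet; here is a hypothetical stand-in, purely to make the example self-contained. Any computation heavy enough to justify the larger worker could take its place.
```python
# Hypothetical stand-in for do_hard_maths, introduced only for illustration.
def do_hard_maths(x):
    # A deliberately brute-force computation.
    return sum(i * i for i in range(x))
```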
Example 2: Parallelisation
```python
import time

import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/sum_of_squares")
def calculate_sum_of_squares(x: int):
    squares = square.map(range(x))
    return sum(squares)

@mn.function()
def square(x):
    time.sleep(1)
    return x ** 2

@mn.service(port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)
```
`GET` requests will typically return in around one second, since the calculation is parallelised. (This assumes we have taken precautions against cold starts.)
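To see why, consider a request with `x=1000`: the `square` invocations sleep for 1000 seconds in total, but they run in parallel, so the wall-clock time stays near one second. The result can also be checked against a local calculation. (The domain is again a hypothetical placeholder.)
```python
# Query the (hypothetical) deployment and compare against a local calculation.
import urllib.request

url = "https://my-service.example.com/sum_of_squares?x=1000"
with urllib.request.urlopen(url) as response:
    remote = int(response.read().decode())

assert remote == sum(i ** 2 for i in range(1000))  # both equal 332833500
```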
Example 3: Asynchronous invocation
```python
import multinode as mn
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.post("/orders")
def submit_order(order_details: dict):
    fulfil_order.async_call(order_details)
    # Do not await the result - return acknowledgement immediately
    return "Order acknowledged"

@mn.function()
def fulfil_order(order_details):
    ...  # process the order

@mn.service(port=80)
def api():
    uvicorn.run(app, host="0.0.0.0", port=80)
```
When the user submits an order, they receive an acknowledgement immediately; the order is fulfilled later.
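For instance, a client submitting an order might look like the following, with a made-up payload and a hypothetical placeholder domain:
```python
# Submit an order as a JSON body; the response arrives as soon as the order
# is acknowledged, while fulfilment continues asynchronously on a worker.
import json
import urllib.request

payload = json.dumps({"item": "widget", "quantity": 3}).encode()
request = urllib.request.Request(
    "https://my-service.example.com/orders",
    data=payload,  # providing data makes this a POST request
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # expected: "Order acknowledged"
```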
Note: In the exceptionally rare event of a hardware failure while `fulfil_order` is running, the framework guarantees that the function will be retried.