Core concepts
Functions
A function is the simplest building block in the multinode framework. Just like a standard Python function, a multinode function can run any piece of code. But what's special about a multinode function is that its code runs on dynamically-provisioned hardware in our hosted cloud environment.
import multinode as mn

@mn.function()
def say_hello(name):
    # Runs on dynamically-provisioned hardware in multinode's cloud
    message = f"Hello, {name}!"
    return message

result = say_hello.call("Alice")

# Runs on your laptop
print(result)  # Hello, Alice!
To run this example, save the code to a file called main.py, then run the command multinode run main.py.
Provisioning expensive resources on demand
Suppose that our function requires a large amount of CPU or memory. Ideally, we want to provision these resources only while the function is running, so that they do not sit idle and incur costs.
@mn.function(cpu=32, memory="128GiB")
def heavy_computation(data):
    # Do some CPU- and memory-intensive work
    ...

# Provisions resources solely for the duration of the function call
result = heavy_computation.call(data)
The same principle applies if we require a GPU.
@mn.function(gpu="Tesla T4")
def gpu_computation(input_data):
    # Use a GPU for ML model inference
    ...

result = gpu_computation.call(input_data)
See the CPUs, GPUs and memory page for more details about the kinds of resources that you can provision.
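If a workload needs several of these resources at once, the keyword arguments can be combined in a single decorator. The values below are purely illustrative, and the assumption that cpu, memory and gpu can be passed together is extrapolated from the examples above rather than a documented guarantee:

import multinode as mn

# Assumption: the resource keyword arguments shown above can be combined
@mn.function(cpu=8, memory="32GiB", gpu="Tesla T4")
def preprocess_and_infer(input_data):
    # CPU-heavy preprocessing followed by GPU inference
    ...

result = preprocess_and_infer.call(input_data)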
Distributed parallel compute
If we want to run the same function on a number of different inputs, we can get the answers faster by processing in parallel across multiple machines.
@mn.function()
def slow_function(data):
    # Perform a slow computation
    ...

# Distributes the computation across multiple machines
results_list = slow_function.map(datapoints_list)
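As a more concrete sketch, here is a self-contained example. It assumes, consistent with the naming above, that .map blocks until every invocation has finished and returns the results as a list in input order; the function body is just an illustrative stand-in for a slow computation:

import multinode as mn

@mn.function()
def cube(x):
    # Stand-in for a slow computation
    return x ** 3

datapoints_list = [1, 2, 3, 4, 5]

# Each input can be processed on a separate machine
results_list = cube.map(datapoints_list)
print(results_list)  # [1, 8, 27, 64, 125]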
See the workers and autoscaling page for further information about how multinode scales the underlying worker pool.
Asynchronous function calls
Both .call and .map have asynchronous equivalents - namely, .async_call and .async_map.
.async_call
.async_call returns a FunctionInvocationHandler object, which you can use to await the result of the function invocation at a later point.
import multinode as mn

@mn.function()
def slow_function(fn_input):
    # Run some code that can take a couple of seconds or more
    ...

fn_invocation = slow_function.async_call(fn_input)

# Perform some other operations in the meantime
# ...

# Now wait until the function finishes and get the result
result = fn_invocation.await_result()
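Since .async_call returns immediately, you can also start several invocations before awaiting any of them. The sketch below uses only the .async_call and .await_result methods shown above; the function body and input list are illustrative:

import multinode as mn

@mn.function()
def slow_function(fn_input):
    # Stand-in for a computation that takes a couple of seconds or more
    return fn_input * 2

inputs = [1, 2, 3]

# Start all of the invocations up front; none of these calls block
invocations = [slow_function.async_call(x) for x in inputs]

# Await each result once it is actually needed
results = [inv.await_result() for inv in invocations]
print(results)  # [2, 4, 6]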
.async_map
.async_map, on the other hand, returns a generator, which yields results as soon as they are ready.
import multinode as mn

@mn.function()
def square(x):
    return x ** 2

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

squared_numbers_generator = square.async_map(numbers)

# Perform some other operations in the meantime

for n in squared_numbers_generator:
    # 1, 4, 9, 16, 25, 36, 49, 64, 81, 100
    print(n)
By default, .async_map yields results in the same order as the inputs. But sometimes you may want to do some downstream processing on the results as soon as they are available, and the order in which they are processed does not matter. In these situations, you can set the return_in_order flag to False, so that results are yielded in the order in which they become available.
squared_numbers_generator = square.async_map(
    numbers,
    return_in_order=False,
)

# Perform some other operations in the meantime

for n in squared_numbers_generator:
    # e.g. 64, 100, 4, 9, 16, 49, 1, 25, 81, 36
    print(n)
Note: for tasks that take several minutes or more, it may be more appropriate to use a job instead of a function.