Resource management
CPUs, GPUs and memory
Every multinode workload (function, service, job, schedule or daemon) can request a specific amount of CPU, GPU or memory. This is done using the cpu, gpu and memory keyword arguments.
import multinode as mn

@mn.function(cpu=16, memory="500 MiB")
def do_heavy_cpu_computation(input_data):
    # your CPU code
    ...

@mn.job(gpu="Tesla T4")
def do_gpu_computation(input_data):
    # your GPU code
    ...
If you do not explicitly specify the resource requirements, multinode will use the minimum possible values: 0.1 CPUs and 100 MiB of memory.
For now, the maximum you can request for a single workload is 16 CPUs and 120 GiB of memory. Higher requests will be rejected at deployment time.
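As a minimal sketch of how the defaults and limits fit together, the snippet below declares one workload that relies on the default resources and one that requests the current per-workload maximum. The function names are illustrative, and calling the decorator with no resource arguments (falling back to the defaults) is our assumption about the API rather than something stated above.

import multinode as mn

# Assumed usage: no resources specified, so the workload falls back to the
# defaults of 0.1 CPUs and 100 MiB of memory.
@mn.function()
def parse_small_payload(input_data):
    ...

# Requesting the current per-workload maximum of 16 CPUs and 120 GiB of memory.
# Anything above these values is rejected at deployment time.
@mn.function(cpu=16, memory="120 GiB")
def run_large_batch(input_data):
    ...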
Coming soon
GPUs are not generally available yet. We are also working hard on raising the CPU and memory limits.