Easily scale up or down to match your compute needs, and pay only for the time you use.
Upload your job into Dis.co via the command line or web interface. Jobs can include AI model training, image and video rendering, or IoT edge computing.
The job's subtasks are scheduled and routed to compute nodes. Dis.co parallelizes and optimizes the job for time and cost.
There are no limitations on requests, code size, memory, execution time, or runtime environments.
Monitor the progress of your jobs in the CLI or web UI, and download all your results at once.
Easily run parallel workloads and accelerate time-to-results without managing infrastructure.
Get results faster with the power of 1-8 Nvidia Tesla V100 GPUs per job, for AI modeling, media rendering, or any framework that supports GPUs. Keep huge arrays in memory for faster processing.
Leverage your existing infrastructure, on-prem and cloud, as a single resource. Auto-scaling infrastructure with nothing to maintain.
Run jobs from the web UI or CLI. Use your existing codebase and CI/CD toolsets. Parallelize your Python projects without learning multiprocessing.
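To illustrate the kind of code that parallelizes cleanly, a job can often be written as a pure function applied independently to many inputs. The sketch below is generic, not Dis.co's API: the function names and the local loop are hypothetical stand-ins for the fan-out a platform like Dis.co performs across compute nodes.

```python
# Generic sketch: a per-subtask function plus a fan-out over inputs.
# A distributed platform runs each subtask on its own node; the plain
# loop here is only a local stand-in for that scheduling.

def process(task: int) -> int:
    # Hypothetical per-subtask work, e.g. rendering one frame or
    # training on one data shard; here just a placeholder computation.
    return task * task

def run_job(tasks):
    # Each call is independent, so subtasks can run in parallel
    # without any shared state or explicit multiprocessing code.
    return [process(t) for t in tasks]

if __name__ == "__main__":
    print(run_job(range(5)))  # -> [0, 1, 4, 9, 16]
```

Because each subtask depends only on its own input, the same function works whether the loop runs locally or is fanned out across many machines.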
Bring your own Docker images, or let us build one based on your requirements.
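If you bring your own image, a minimal Dockerfile is often all that is needed. The fragment below is purely illustrative; the base image and dependency files are placeholders, not Dis.co requirements.

```dockerfile
# Illustrative only: choose the base image and dependencies your job needs.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python3 python3-pip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . /app
WORKDIR /app
```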
Stop building home-grown scheduling and orchestration tools. Automatically parallelize large-compute workloads at scale.
No servers to provision, patch, manage, or monitor. Pay only for the compute time you use.
Dis.co integrates with all of the major platforms and technologies that you need to build and scale your applications. Choose the options below that fit your needs.
AWS, Azure, GCP, Packet
Github, Gitlab, Bitbucket
Ubuntu 18.04, CentOS 7.7
Pricing for CPU and GPU power is based on machine size.
Pay for the compute time used: by the hour for cloud compute, and by the month per installed agent for on-prem machines. Volume discounts are available. Speak with a solutions expert for a customized price quote.