In the first blog post of this series, we reviewed the history of graphics rendering and how computers never seem to be able to outpace the demand for processing power.
Dis.co works hard to stay on the cutting edge of serverless solutions. Our team is constantly building new features and updates into the platform to make sure it remains the agile, dynamic solution our customers need. At the same time, users have been requesting new functions and applications as they gain more experience with Dis.co’s distributed computing solutions. To meet the growing demand for distributed solutions for GPU compute stacks, we’ve updated Dis.co to provide a serverless solution for processing jobs on GPU.
Dis.co’s Serverless GPU Solutions
Machine learning and deep learning applications are driving new innovations in artificial intelligence. However, a typical ML or DL job demands a great deal of computing power. Engineers working with deep learning applications, for example, may need to train models with around 1 billion parameters to uncover previously unknown insights about test objects, and the computational intensity of DL jobs only increases with the depth of the neural network employed.
Constraints in computing power represent meaningful barriers to the development of novel DL and ML applications. Dis.co has already harnessed the power of distributed compute to maximize computational power with minimal resource demand. With our latest build, we’ve gone one step further to give users the option to run jobs on up to 8 GPUs in a single instance.
The potential applications for this new feature are endless; nearly any data-intensive application, from ML and AI jobs to image/video processing and complex simulations, will run faster and more efficiently on GPU. By giving users a way to run jobs using GPUs, Dis.co has added a new level of functionality to an already agile and dynamic platform.
Using the GPU Feature
To access the new GPU feature on the Dis.co platform, start with compatible code. Any job that utilizes GPUs on Dis.co must run from a Docker image containing CUDA. TensorFlow 2.0, using the image tensorflow/tensorflow:2.0.0-gpu-py3, has been verified for use on the platform.
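As a minimal sketch, a Dockerfile for a TensorFlow GPU job might look like the following. The base image is the one verified for the platform; the script name `train.py` and the `requirements.txt` file are assumptions for illustration, not Dis.co requirements:

```dockerfile
# Start from the TensorFlow 2.0 GPU image verified for Dis.co (includes CUDA).
FROM tensorflow/tensorflow:2.0.0-gpu-py3

WORKDIR /app

# Install any extra Python dependencies your job needs (hypothetical file).
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the job script (hypothetical name).
COPY train.py .

CMD ["python", "train.py"]
```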
Once your GPU-ready code is prepared, simply create a new job in Dis.co. The UI for GPU instances on Dis.co is just as user-friendly as it is for CPU instances. To run a job, simply select small_gpu, medium_gpu, or large_gpu from the “Job Size” drop-down based on the job’s computing demand. Small GPU instances provide 1 GPU, medium instances provide 4 GPUs, and large instances provide up to 8. Give the instance a job title, and select the preferred cloud from the drop-down menu of the same name.
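To illustrate how a job might make use of the 1, 4, or 8 GPUs in an instance, here is a hypothetical sketch of a job script sharding its tasks round-robin across the available devices. The size-to-GPU mapping mirrors the job sizes above; the function and variable names are illustrative only, not part of the Dis.co API:

```python
# Hypothetical sketch: shard a list of tasks across the GPUs in an instance.
# The mapping mirrors the Dis.co job sizes described above; the helper
# names here are for illustration and are not Dis.co API calls.

GPUS_PER_SIZE = {"small_gpu": 1, "medium_gpu": 4, "large_gpu": 8}

def shard_tasks(tasks, job_size):
    """Assign tasks round-robin to GPU indices for the given job size."""
    n_gpus = GPUS_PER_SIZE[job_size]
    shards = {gpu: [] for gpu in range(n_gpus)}
    for i, task in enumerate(tasks):
        shards[i % n_gpus].append(task)
    return shards

# Example: 10 tasks spread across a medium (4-GPU) instance.
shards = shard_tasks(list(range(10)), "medium_gpu")
```

Each worker process would then pin itself to its assigned GPU index and process only its own shard.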
To ensure users do not face any risk of vendor lock-in for this service, GPU cloud instances are supported on AWS and Google Cloud in addition to Dis.co’s own discoCloud. For additional information on using Dis.co with AWS or GCP, check out our demo videos: AWS or GCP.
How Customers Benefit from New Features
Dis.co’s customers are among the most innovative, groundbreaking data scientists and tech professionals in the business, and we developed the GPU feature in direct response to their requests. Many sophisticated ML applications, such as the management of LIDAR and sensor data in autonomous vehicles, inherently require distributed computing solutions to process massive amounts of data in near real-time.
Developers should be free to experiment with different applications, and this is true whether they run on CPUs or GPUs. Dis.co has designed its platform to facilitate this type of flexibility without compromising functionality. Users can select different resources for different jobs, including moving from cloud to cloud depending on whether AWS, GCP, or discoCloud best suits their needs. With the next update, users will be able to use different images on different jobs, which further refines efficiency and ease of use. To discover what Dis.co can offer your enterprise, check out our demo or follow us on Facebook, Twitter, or LinkedIn.