Running a Scalable “Hello World” in Minutes on Disco

Raymond Lo


April 3, 2020 · 5 min read

Setting up a cloud service such as AWS, Google Cloud, Microsoft Azure, or any other cloud computing platform can be an adventure in itself. Today, I can only vaguely remember all the “mandatory” steps I had to follow before getting a server running. To get a cloud environment set up properly to run my first job, I had to track down tens of pages of documentation and several third-party blogs to get around all the ‘undocumented’ issues. By now, all that documentation has become obsolete because the user interfaces have changed. The tens to hundreds of pages that often need to be read before the machine can run anything meaningful are just too painful for a developer like me.

For many cloud-related jobs, particularly for a junior cloud infrastructure developer, taking certification courses to learn how to set up a cloud environment is a huge investment of money and time for the organization. The bitter reality is that these courses are long, tedious, and definitely not exciting. I often feel that the way these week-long courses teach the complexity of the system is simply ineffective. Of course, there may be exceptions, but “simplicity is the ultimate sophistication”, and I am a true believer in that.

So, what can we do better? Let’s take a look at what Disco offers. I will start with the most basic code: a simple ‘hello world’ example in Python.

The Hello World Code

Without further ado, here is the infamous “Hello World” code.
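In its simplest form, the script is a single print statement (the filename used here is just an example):

```python
# hello.py (example filename) -- the entire job script.
message = "Hello World"
print(message)
```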

Let’s first save this file locally. With this file in place, you will learn three different ways to run this code on Disco.

Method 1: Web UI

Arguably, the simplest way to use Disco for the first time is through the drag-and-drop Web interface. To get started, you simply log in and then follow the video instructions.

In less than a minute, you get the result back, with no servers to manage. That’s all it takes to get your first job running on the cloud.

Method 2: CLI 

For many developers, scripting is usually a more desirable way of maintaining workflows and automation. Disco provides a command-line interface (CLI) that developers can use to manage job creation.

To install, you can run the following command. 
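Assuming Disco’s package is published on PyPI under the name suggested by the CLI, the install is a single pip command:

```shell
# Installs Disco's Python SDK and the `disco` CLI in one step.
# The package name "disco" is an assumption based on the CLI name.
pip install disco
```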

This will install not only Disco’s Python SDK libraries but also the CLI. To log in, type the following command; you will be prompted to enter the same username and password as in the Web UI.
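The exact subcommand name is an assumption; check `disco --help` if it differs in your CLI version:

```shell
# Prompts for the username and password used in the Web UI.
disco login
```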


To create a job, run the following command in the same directory where you saved the file. Upon success, the command returns a unique job ID.
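The flag names below are assumptions, not the documented syntax; consult `disco job create --help` for the exact invocation. The script filename is likewise an example:

```shell
# Creates a job from the local script; prints a unique job ID on success.
# Flag names are assumptions -- verify with `disco job create --help`.
disco job create --script hello.py
```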

The job ID is worth keeping, as many of the commands under “disco job” require it later on.

Lastly, you can download the results back to your local machine with the “download-result” command.
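A typical invocation looks like the following, with the job ID from the create step as a placeholder (the argument form is an assumption; check `disco job download-result --help`):

```shell
# Downloads the finished job's output files to the current directory.
# <job-id> is the ID printed when the job was created.
disco job download-result <job-id>
```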

There you will find a text file called IqoqoTask.stdout.0.txt with the “Hello World” text inside.

Method 3: Python SDK 

For developers who are looking for full features and control, Disco’s Python SDK is the ideal choice. The SDK provides features such as job management and Docker image management (see our Python SDK Custom Docker blog) that you can use natively from Python.

For example, below is example code for executing the same script on Disco with merely 10 lines of code in total.
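Every SDK name in this sketch (set_credentials, Job.create, start, wait_for_finish, get_results) is an assumption inferred from the CLI workflow above, not the documented API; consult Disco’s SDK reference for the real signatures before using it:

```python
# Sketch only: all Disco SDK names below are assumptions, not the
# documented API -- check the official SDK reference before use.
import disco

disco.set_credentials("user@example.com", "password")  # assumed auth helper

job = disco.Job.create(script="hello.py")  # upload the script as a new job
job.start()                                # queue it on the Disco Cloud
job.wait_for_finish()                      # block until the job completes

for task_result in job.get_results():      # fetch the output of each task
    print(task_result)
```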

By executing that Python code, you can easily offload many workloads directly to the Disco Cloud without learning how to manage cloud infrastructure at scale. To put it in perspective, you can now execute hundreds of these jobs quickly and pay only for the hours you used. That is a significant cost and time saving for organizations that have a hard time managing DevOps in-house.

Yes, those are the three ways of getting things running on Disco. Perhaps this is the kind of “Hello World” example that makes you ask why I would even bother. But take it as a challenge and try to run “Hello World” on any other platform today; I’m sure you will hit random errors and obstacles along the way. More importantly, you will often have a hard time scaling the solution afterward without significant effort. For example, if I ask you to run this same task on 100 different machines every morning at 10 am, you know immediately that it is not a simple task. Problems such as spinning servers up and down and managing tasks can become very complicated and hard to maintain, and that’s why Disco is valuable to those of us who would like to reduce internal DevOps workloads.

Lastly, you can find the examples described here in our Example GitHub repository. To get more information about Disco, feel free to sign up for a free account and get your solution running on the serverless cloud in minutes.
