Getting started with cloudknot

To get started with cloudknot, you will need an AWS account with the proper permissions, and you will need to install cloudknot.

Obtaining an AWS account

If you haven’t already done so, create an Amazon Web Services (AWS) account.

Installation and configuration

First, you should install Docker and start the Docker daemon so that cloudknot works properly. In particular, if you plan to run cloudknot from an Amazon EC2 instance, you need to install Docker such that you can run Docker commands without sudo:

$ # Install Docker
$ curl -sSL https://get.docker.com/ | sudo sh
$ # Add the current user to the group `docker`
$ sudo usermod -a -G docker $USER
$ # Restart the Docker daemon
$ sudo service docker restart
$ # Log out and then back in again to make the above changes take effect
$ logout

You should do all of this first, before installing cloudknot. If everything went well, you should be able to run docker run hello-world without using sudo. Then you can install cloudknot from PyPI (recommended):

$ pip install cloudknot

or from the GitHub repository. This will install cloudknot and its Python dependencies.

After installation, you must configure cloudknot by running

$ cloudknot configure

This runs the AWS CLI configuration tool and also installs some cloudknot infrastructure on AWS. Follow the prompts or simply press <ENTER> to accept the default values.


To use cloudknot, you must have the same permissions required to use AWS Batch. You can attach a managed policy, such as AWSBatchFullAccess or AWSBatchUserPolicy. If you prefer to write your own policies, the minimal permissions required for a cloudknot user should be contained in the following policy summary:

policy summary

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                ...
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                ...
            ],
            "Resource": [
                ...
            ]
        }
    ]
}

Using multiple AWS profiles

If you want to use cloudknot with multiple AWS profiles, make sure that you have the profiles configured in the AWS credentials file, e.g.:

# ~/.aws/credentials

[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

[another-profile-i-use]
aws_access_key_id = <another-access-key-id>
aws_secret_access_key = <another-secret-access-key>

or in the AWS config file, e.g.:

# ~/.aws/config


[profile default]

[profile another-profile-i-use]

[profile project-specific-profile]

[profile profile-with-another-organization]

Then you can use the cloudknot functions cloudknot.set_profile, cloudknot.get_profile, and cloudknot.list_profiles to interact with your various AWS profiles.
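
These two files are where AWS profiles live, so profile discovery amounts to reading their sections. As an illustrative sketch only — not cloudknot's actual implementation, and the function name list_aws_profiles is hypothetical — here is how profile names like those above could be collected with Python's standard configparser (sections in the credentials file are profile names as-is, while sections in the config file carry a "profile " prefix):

```python
import configparser


def list_aws_profiles(credentials_path, config_path):
    """Collect AWS profile names from the credentials and config files."""
    profiles = []

    # Profiles in the credentials file: section names are profile names.
    cred = configparser.ConfigParser()
    cred.read(credentials_path)  # silently skips missing files
    profiles.extend(cred.sections())

    # Profiles in the config file: strip the "profile " prefix.
    conf = configparser.ConfigParser()
    conf.read(config_path)
    for section in conf.sections():
        name = section.removeprefix("profile ")
        if name not in profiles:
            profiles.append(name)

    return profiles
```

For the real files, you would pass the paths ~/.aws/credentials and ~/.aws/config (expanded with os.path.expanduser).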

Region shopping

You may want to shop the AWS regions for the cheapest spot instance pricing (see the Amazon EC2 spot pricing page for details). You can view and change the region in which your cloudknot resources reside and in which you will launch your AWS Batch jobs by using the cloudknot.get_region and cloudknot.set_region functions.


See the examples directory on GitHub for (you guessed it) examples of how to use cloudknot.

Cloudknot S3 Bucket

Cloudknot has some methods to return the results of AWS Batch jobs to the user. See, for example, cloudknot.BatchJob.done and cloudknot.BatchJob.result. Under the hood, these methods pass results through an Amazon S3 bucket. You can get and set the name of this S3 bucket using cloudknot.get_s3_params and cloudknot.set_s3_params.
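
As a simplified illustration of that round trip — not cloudknot's actual code, and the helper names here are hypothetical — a job result can be serialized to bytes (the form in which it would be stored as an object in the S3 bucket) and deserialized on retrieval:

```python
import pickle


def store_result(result):
    """Serialize a job result to bytes, as it would be written to S3."""
    return pickle.dumps(result)


def retrieve_result(blob):
    """Deserialize a stored result on the way back to the user."""
    return pickle.loads(blob)


# Round-trip a toy "job result" through its serialized form.
blob = store_result({"job_id": "example", "output": [1, 2, 3]})
assert retrieve_result(blob) == {"job_id": "example", "output": [1, 2, 3]}
```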

AWS resource limitations

AWS places some limits on the number of resources for each service in each region of an AWS account. The most relevant limits for cloudknot users are the AWS Virtual Private Cloud (VPC) limits, which cap the number of VPCs per region at five, and the AWS Batch limits, which cap the number of compute environments and job queues. For most use cases, these limits should not be a problem. However, if you are using cloudknot alongside other users in the same organization, you might bump up against them. To avoid the VPC limit, always try to use the default VPC, or agree with your coworkers on an organization-wide PARS name. To avoid the Batch limits, clobber old knots that you are no longer using. If none of those approaches work for you, you can request increases to some service limits.

If the terms “knot,” “PARS,” and “clobber” in the preceding paragraph confound you, take a look at the cloudknot documentation.

Debugging and logging

You can make cloudknot print logging information to the console by setting the CLOUDKNOT_LOGLEVEL environment variable, e.g.:

$ export CLOUDKNOT_LOGLEVEL=DEBUG

Cloudknot also writes a much more verbose log for the current session to a file in the user’s home directory, at the path returned by:

import os.path as op

# The verbose session log lives at ~/.cloudknot/cloudknot.log
op.join(op.expanduser('~'), '.cloudknot', 'cloudknot.log')
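
If you want to inspect the end of that session log from Python, a generic standard-library sketch might look like the following (the tail_log helper is not part of cloudknot):

```python
import os.path as op
from collections import deque


def tail_log(path, n=20):
    """Return the last n lines of a log file."""
    with open(path) as f:
        # deque with maxlen keeps only the final n lines while streaming.
        return list(deque(f, maxlen=n))


# Path where cloudknot writes its verbose session log.
log_path = op.join(op.expanduser('~'), '.cloudknot', 'cloudknot.log')
```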

If something goes wrong with an AWS Batch job, you might want to inspect the job’s log on Amazon CloudWatch. You can get a URL for each job attempt’s CloudWatch log using the cloudknot.BatchJob.log_urls property.

Anything else

Did you run into an issue that isn’t addressed here? Check out the Frequently Asked Questions or open up a new issue.