Connect Ocean to your EKS Cluster

This workshop has been deprecated and archived. The new Amazon EKS Workshop is now available at www.eksworkshop.com.

In this section we will create a new Ocean cluster, associated with your existing EKS cluster.

Step 1: Create A New Cluster

  • To get started with the Ocean Creation Wizard, select “Cloud Clusters” from the side menu, under “Ocean”, and click the “Create Cluster” button on the top right.
  • On the Use Cases page, select “Migrate Worker Nodes' Configuration” under “Join an Existing Cluster”.

Step 2: General Settings

  • Enter a Cluster Name and Identifier and select the Region of your EKS cluster.

    The Cluster Identifier for your Ocean cluster should be unique within the account, and defaults to the Ocean Cluster Name.

  • Select an EKS Auto Scaling Group (or, alternatively, an Instance that is an existing worker node) to import the cluster configuration from.
  • Click on Next. Ocean will now import the configuration of your EKS cluster.

    If this is a new cluster and the managed Auto Scaling group is not showing up in the Spot console, use one of the running instance IDs from your node group instead (due to API rate limits, AWS account information is cached). A scripted way to look up these values is sketched at the end of this step.

    When importing a cluster, Ocean will clone your cluster and node pools configuration. New instances will then be launched and registered directly to your cluster, and will not be visible via your node pools. Your existing instances and applications will remain unchanged.
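
If you prefer to look up these values from a terminal, the following boto3 sketch lists the Auto Scaling group(s) behind a managed node group, as well as the running worker instance IDs. The region, cluster name, and node group name are placeholders for your own values, and the snippet assumes your AWS credentials can describe EKS and EC2 resources.

```python
# Minimal sketch: find the ASG name or a worker instance ID to import from.
# The region, cluster, and node group names below are placeholders.
import boto3

REGION = "us-west-2"
CLUSTER = "my-eks-cluster"
NODEGROUP = "my-nodegroup"

eks = boto3.client("eks", region_name=REGION)
ec2 = boto3.client("ec2", region_name=REGION)

# A managed node group exposes the Auto Scaling group(s) it created.
ng = eks.describe_nodegroup(clusterName=CLUSTER, nodegroupName=NODEGROUP)
asg_names = [g["name"] for g in ng["nodegroup"]["resources"]["autoScalingGroups"]]
print("Auto Scaling groups:", asg_names)

# If the ASG is not visible in the Spot console yet, use a running worker
# instance ID instead. EKS worker nodes carry the kubernetes.io/cluster tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:kubernetes.io/cluster/" + CLUSTER, "Values": ["owned"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
print("Running worker instance IDs:", instance_ids)
```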

Step 3: Compute Settings

  • Confirm or change the settings imported by the Ocean Creation Wizard.

    By default, Ocean will use as wide a selection of instance types as possible, in order to ensure optimal pricing and availability for your worker nodes by tapping into many EC2 Spot capacity pools. If you wish, you can exclude certain types from the pool of instances used by the cluster, by clicking on “Customize” under “Machine Types”.
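
If it helps to see what is available before excluding anything, here is a small boto3 sketch that lists the instance types EC2 advertises in a region; the region and the example family filter are placeholders, not recommendations.

```python
# Minimal sketch: list the instance types EC2 offers in a region, to help
# decide whether to exclude any families under "Customize" > "Machine Types".
# The region and the example family filter are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

offered = set()
paginator = ec2.get_paginator("describe_instance_type_offerings")
for page in paginator.paginate(LocationType="region"):
    for offering in page["InstanceTypeOfferings"]:
        offered.add(offering["InstanceType"])

print(f"{len(offered)} instance types offered in this region")

# Example: see which current-generation general/compute/memory types exist.
families = {"m5", "c5", "r5"}
print(sorted(t for t in offered if t.split(".")[0] in families))
```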

Step 4: Connectivity Configuration

  • Create a token with the “Generate Token” link, or use an existing one.
  • Install the Controller pod. Learn more about the Controller Pod and Ocean here.
  • Click Test Connectivity to verify that the controller is up and reporting (a scripted check is sketched after this list).
  • Once the connectivity test is successful, click Next to proceed to the Review screen.
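
If you also want to confirm the controller from your own terminal, the sketch below uses the Kubernetes Python client to look for the controller pod. It assumes the default installation (a pod with “spotinst” in its name running in kube-system), which may differ in your environment.

```python
# Minimal sketch: check that the Ocean controller pod is running.
# Assumes the default install places a pod containing "spotinst" in its name
# in the kube-system namespace; adjust if your install differs.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("kube-system").items
controller_pods = [p for p in pods if "spotinst" in p.metadata.name]

for pod in controller_pods:
    print(pod.metadata.name, pod.status.phase)

if not controller_pods:
    print("No Ocean controller pod found -- re-check the installation step.")
```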

Step 5: Review And Create

  • On this page you can review your configuration, and check it out in JSON or Terraform formats.
  • When you are satisfied with all settings, click Create.

Step 6: Migrating Existing Nodes

In order to fully migrate any existing workloads to Ocean, the original EKS Auto Scaling group(s) should be gradually drained and scaled down, while replacement nodes are launched by Ocean. To make this process automatic and safe, and to migrate all workloads gradually while maintaining high availability for the application, Ocean provides the “Workload Migration” feature. You can read about it here, or watch the video tutorial here.

In the interest of stability, the Workload Migration process is very gradual and therefore takes a while (up to half an hour), even for small workloads. For the purposes of this workshop we will assume that our workloads can tolerate more aggressive rescheduling. Therefore, proceed with the following steps:

  1. If you have installed “Cluster Autoscaler” or set up any scaling policies for the original ASG managed by your EKS cluster, go ahead and disable them. Ocean’s autoscaler will take their place.
  2. Find the ASG associated with your EKS cluster in the EC2 console, right-click it and select “Edit”. Set the Desired Capacity, Min, and Max values to 0 (a scripted alternative is sketched below this list). If you have any pods running, Ocean’s autoscaler will pick them up and scale up appropriately.
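
The scale-down in step 2 can also be scripted. The boto3 sketch below assumes a placeholder region and ASG name, which you would replace with your own.

```python
# Minimal sketch: scale the original EKS Auto Scaling group down to zero so
# Ocean's autoscaler launches replacement nodes for the evicted pods.
# The region and ASG name are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-eks-nodegroup-asg",
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)
```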

If you have several node groups configured with different sets of labels, taints, or launch specifications, make sure to configure matching “Launch Specifications” in Ocean before scaling them down. Have a look at the next page in the workshop to see how.

You’re all set! Ocean will now ensure your EKS cluster worker nodes are running on the most cost-effective, optimally sized instances possible.