Introduction

Wave Autoscale is a unified autoscaling and traffic-management tool for cloud and Kubernetes environments. It lets you customize the autoscaling and traffic control of cloud resources to meet your availability and cost-efficiency goals. With Wave Autoscale, you can manage the scaling of your applications and maintain optimal performance and resource utilization.

Understanding Key Concepts

Metrics Definitions

Wave Autoscale allows engineers to define custom metrics that will be used as triggers for scaling events. These metrics can be based on various factors such as CPU utilization, memory usage, network traffic, or any other relevant performance indicators. By defining metrics tailored to their applications, engineers can ensure that scaling decisions are made based on accurate and relevant data.
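As a rough sketch, a metric definition might look like the following YAML. The collector name and metadata fields here are illustrative assumptions, not the authoritative schema; consult the official documentation for the exact fields your version supports.

```yaml
# Illustrative metric definition (field names are examples,
# not the authoritative Wave Autoscale schema).
kind: Metric
id: cpu_average
collector: telegraf        # hypothetical: the agent that gathers the metric
metadata:
  input: cpu               # hypothetical: watch CPU utilization
  interval: 10s            # hypothetical: poll every 10 seconds
```

The `id` is what later definitions use to reference this metric.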

Scaling Component Definitions

Scaling components represent the resources that will be scaled up or down based on the defined metrics. These components can include virtual machines, containers, or any other cloud resources that are part of the application infrastructure. Engineers can define the desired scaling behavior for each component, specifying the minimum and maximum number of instances, as well as any other relevant scaling parameters.
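A scaling component definition might be sketched as follows. The component kind and metadata keys are illustrative assumptions for an AWS EC2 Auto Scaling group; other resource types would carry their own settings.

```yaml
# Illustrative scaling component (values are hypothetical).
kind: ScalingComponent
id: web_server_group
component_kind: aws-ec2-autoscaling   # hypothetical: the resource type being scaled
metadata:
  region: us-east-1                   # hypothetical region
  asg_name: web-servers               # hypothetical Auto Scaling group name
```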

Scaling Plan Definitions

Scaling plans are the core of Wave Autoscale's functionality. These plans define the rules and conditions under which scaling events will occur. Engineers can create complex scaling plans that take into account multiple metrics, scaling components, and scaling strategies. For example, a scaling plan could be configured to scale out virtual machines when CPU utilization exceeds a certain threshold, while also scaling in containers when memory usage drops below a certain level.
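The CPU example above could be sketched as a scaling plan like the following. The expression syntax, priority field, and target values are illustrative assumptions meant to show how a plan ties a metric to a scaling component.

```yaml
# Illustrative scaling plan connecting a metric to a component.
kind: ScalingPlan
id: scale_on_cpu
plans:
  - id: scale-out-on-high-cpu
    expression: "cpu_average > 70"    # hypothetical trigger condition
    priority: 1                       # hypothetical: higher wins when plans conflict
    scaling_components:
      - component_id: web_server_group
        desired: 5                    # hypothetical target instance count
```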

Get Started with Wave Autoscale

Setting up and running Wave Autoscale is a straightforward process:

  1. Install Wave Autoscale: Follow the installation instructions provided in the official documentation for your specific cloud environment or Kubernetes cluster.

  2. Define Metrics: Specify the metrics that you want to monitor and use for scaling decisions. You can define these metrics using a simple YAML configuration file.

  3. Define Scaling Components: Identify the resources or entities that you want to scale.

  4. Define Scaling Plans: Combine your metrics and scaling components into scaling plans that define the conditions and actions for scaling your applications.

  5. Deploy and Run: Deploy your configurations to Wave Autoscale, and it will automatically monitor your applications and perform scaling actions based on your defined plans.
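Putting steps 2 through 4 together, the definitions can be sketched as a single YAML file with one document per definition, separated by `---`. All ids and fields below are illustrative, not the authoritative schema.

```yaml
# definitions.yaml -- illustrative only; ids and fields are examples.
kind: Metric
id: cpu_average
collector: telegraf
---
kind: ScalingComponent
id: web_server_group
component_kind: aws-ec2-autoscaling
---
kind: ScalingPlan
id: scale_on_cpu
plans:
  - id: scale-out-on-high-cpu
    expression: "cpu_average > 70"
    priority: 1
    scaling_components:
      - component_id: web_server_group
        desired: 5
```

Once deployed, Wave Autoscale evaluates the plan against the collected metric and adjusts the component accordingly.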