Maximizing Cloud Efficiency: Turn it Off When Not in Use

Mahesh - Aug 5 - Dev Community

In the realm of cloud computing, managing costs effectively while maintaining high performance is a constant challenge. One of the simplest yet most effective cost-saving practices is to turn off resources when they are not in use. This strategy is particularly relevant for pre-production environments used for testing and experimentation.

Prerequisites and Skills Needed

To effectively implement these strategies, the following prerequisites and skills are necessary:

  1. Basic Understanding of Kubernetes (K8s): Familiarity with Kubernetes operations, including deployment and scaling.
  2. Experience with CI/CD Tools: Knowledge of continuous integration and continuous deployment tools such as Argo CD and GitHub Actions workflows.
  3. Scripting Skills: Ability to write and understand scripts, particularly for automation tasks.
  4. Familiarity with Cloud Services: Understanding of cloud service providers (e.g., AWS, Azure, GCP) and their respective cost management practices.
  5. Knowledge of GitOps: Understanding of the GitOps model for managing Kubernetes clusters.

Understanding Cloud Environments

Cloud projects typically involve various environments for different purposes. Many of these are pre-production environments used to ensure the quality of deliverables before they go live. Unlike production environments, which must be available 24/7, pre-production environments are often only needed during specific hours, such as 9 AM to 5 PM on weekdays.

The Cost Implications

Running cloud resources continuously, even when they are not in use, leads to significant costs, especially for the memory and CPU reserved by pods. An environment needed only from 9 AM to 5 PM on weekdays is actually in use for roughly 40 of the 168 hours in a week, so implementing simple sleep scheduling can dramatically cut these costs, potentially by half or more.

Implementation Options

To optimize cloud usage and reduce costs, consider the following implementation options:

  • Daily Deployments: Delete your deployments at the end of each day and re-deploy them each morning. This approach is effective and can be automated with Argo CD or another GitOps workflow.

  • Cron Schedule with Automation Server: Run a scaledown job on a cron schedule using an automation server such as GitHub Actions workflows (a sample workflow is sketched after this list). This method stands out for its simplicity and efficiency:

Team members already have the necessary knowledge to implement it.
The kubectl scripting required is straightforward:

# Scale every Deployment in every namespace down to zero replicas
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl scale deployment --all -n "$ns" --replicas=0
done

No need to manage namespace state by re-deploying the same versions daily.
Avoids running multiple deployment jobs.

  • Automated Scaledown Tools: Utilize tools such as Kube SleepSchedule to schedule a scaledown from within the cluster.
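For the cron-schedule option above, a scheduled GitHub Actions workflow is one way to run the scaledown loop. The sketch below is a minimal example rather than a drop-in solution: the workflow name, the 7 PM UTC weekday schedule, and the KUBECONFIG secret holding cluster credentials are all assumptions you would adapt to your own setup.

# Hypothetical scheduled workflow: scale pre-production Deployments to zero each weekday evening
name: nightly-scaledown
on:
  schedule:
    - cron: "0 19 * * 1-5"   # 19:00 UTC, Monday to Friday (adjust to your time zone)
jobs:
  scaledown:
    runs-on: ubuntu-latest
    steps:
      - name: Scale all deployments to zero
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}   # assumed secret containing the cluster kubeconfig
        run: |
          echo "$KUBECONFIG_DATA" > kubeconfig
          export KUBECONFIG="$PWD/kubeconfig"
          for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
            kubectl scale deployment --all -n "$ns" --replicas=0
          done

A matching morning job can bring the environment back, for example by letting Argo CD or your GitOps tooling re-sync the manifests so that the original replica counts are restored from Git.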

Guide for Automated Scaledown Tools

Kube SleepSchedule automatically scales down Kubernetes Deployments, StatefulSets, and/or HorizontalPodAutoscalers during non-work hours, from within the cluster.
Deploy it to a test (non-prod) cluster with a default uptime or downtime time range to scale all deployments down during nights and weekends.

Installing Kube SleepSchedule

kubectl apply -f https://raw.githubusercontent.com/UmamaheswarKalagotla/kubernetes-sleepschedule/main/kubernetes-sleepschedule.yaml
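After applying the manifest, it is worth confirming that the controller is actually running. The exact namespace and deployment name depend on the manifest, so the commands below only illustrate the kind of check to run; the placeholders are assumptions.

# Find the controller workload created by the manifest, then peek at its logs
kubectl get deployments --all-namespaces | grep -i sleepschedule
kubectl logs deployment/<sleepschedule-deployment> -n <its-namespace> --tail=20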

Architecture
The diagram below depicts how the Kube SleepSchedule agent controls applications.
[Architecture diagram]

In the description below, Kubernetes Deployments are interchangeable with any other supported workload kind. Kube SleepSchedule will scale a workload's replicas down when the following condition is met:

  • the current time is not part of the "uptime" schedule, or is part of the "downtime" schedule.

To determine which schedule applies to a workload, the annotations are evaluated in the following order (an illustration follows this list):

  • downscaler/downtime annotation on the workload definition.
  • downscaler/uptime annotation on the workload definition.
  • downscaler/downtime annotation on the workload's Namespace.
  • downscaler/uptime annotation on the workload's Namespace.
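As a hypothetical illustration, assuming the earlier entries in that list take precedence, consider a Namespace with a default uptime schedule and one Deployment inside it that carries its own annotation: the workload-level annotation is checked before the namespace-level one, so it governs that Deployment. All names, images, and time ranges here are invented for the example.

apiVersion: v1
kind: Namespace
metadata:
  name: preprod                  # hypothetical namespace
  annotations:
    downscaler/uptime: "Mon-Fri 07:00-19:00 US/Central"    # default schedule for the namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nightly-batch            # hypothetical workload that needs longer hours
  namespace: preprod
  annotations:
    downscaler/uptime: "Mon-Sat 06:00-22:00 US/Central"    # workload-level annotation, checked before the namespace's
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nightly-batch
  template:
    metadata:
      labels:
        app: nightly-batch
    spec:
      containers:
        - name: app
          image: nginx:1.27      # placeholder image
          ports:
            - containerPort: 80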

Configuring your Deployments/workload definitions to downscale
Add the annotation below, using the time zone in which your deployment should be running:

metadata:
  annotations:
    downscaler/uptime: "Mon-Fri 07:00-19:00 US/Central"
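The same annotation can also be applied to an existing Deployment without editing its manifest. The deployment name below is a placeholder:

# Hypothetical deployment name; quote the value so the shell keeps it intact
kubectl annotate deployment <deployment-name> 'downscaler/uptime=Mon-Fri 07:00-19:00 US/Central'

Keep in mind that in a GitOps-managed cluster an imperative change like this may be reverted on the next sync, so adding the annotation to the manifest in Git is the more durable option.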

Configuring your workload's Namespace to downscale
Add the annotation below, using the time zone in which the workloads in that Namespace should be running:

apiVersion: v1
kind: Namespace
metadata:
  name: foo
  labels:
    name: foo
  annotations:
    downscaler/uptime: "Mon-Fri 07:00-19:00 US/Central"

Using HPA to keep Minimum Replicas
To keep a minimum number of replicas during downtime instead of scaling to zero, let the Sleep Schedule act on the HPA with a downtime replica count of 1 and exclude the Deployment itself from downscaling. Add the following annotations to the Deployment and its HPA:

kubectl annotate deploy <deployment-name> 'downscaler/exclude=true'
kubectl annotate hpa <hpa-name> 'downscaler/downtime-replicas=1'
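If you prefer to keep these settings in Git rather than applying them imperatively, the same annotations can live in the manifests: the Deployment gets downscaler/exclude: "true" in its metadata.annotations (in the same way as the uptime annotation shown earlier), and the HPA carries the downtime replica count. The sketch below shows such an HPA; the name, target, and scaling numbers are placeholders.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service                      # placeholder; match your Deployment
  annotations:
    downscaler/downtime-replicas: "1"   # keep one replica during downtime
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80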

Engage with Us!

We'd love to hear your experiences and insights:

1. Have you experienced high cloud costs due to unused pre-production environments?
Share your stories and how it impacted your budget.

2. What tools or strategies have you used to manage and reduce cloud costs?
Let us know which tools and methods have worked best for you.

3. What challenges have you faced in implementing cost-saving strategies in your cloud infrastructure?
Tell us about the obstacles you've encountered and how you overcame them.

By implementing these strategies and considering the prerequisites, you can significantly reduce your cloud computing costs while maintaining the quality and efficiency of your operations. Remember, if you aren't using it, turn it off. It's a simple yet powerful approach to cloud cost management.
