
What is Site Reliability Engineering?

When I first heard the term "Site Reliability Engineering" or SRE, I was skeptical. Was this just another tech buzzword? Another rebrand of traditional operations? After leading multiple cloud transformations and building reliability-focused teams, I can tell you it's much more than that. Let me share my personal journey with SRE and how I've implemented it using AWS Cloud, Lambda functions, and GitLab pipelines.

How I Discovered SRE Was the Missing Piece

Three years ago, my team was drowning in 3 AM alerts. Our cloud infrastructure on AWS was growing exponentially, but our approach to managing it wasn't scaling with it. We were stuck in a reactive cycle: build features fast, push to production, then frantically fix the inevitable problems.

That's when I discovered Google's SRE practices and realized we needed to apply software engineering principles to our operations challenges. The transformation wasn't easy, but the results have been remarkable: 78% fewer incidents, 92% reduction in mean time to recovery (MTTR), and happier engineers who now sleep through the night!

The SRE Principles That Changed How I Work

Through my implementation journey, I've found these core SRE principles to be game-changers:

1. Automation Is Non-Negotiable

I used to hear "we don't have time to automate" from my team. Now we live by "we don't have time NOT to automate." On our AWS infrastructure, nothing gets done manually twice. For example, we replaced our error-prone manual EC2 scaling process with this automated Lambda function:

from datetime import datetime, timedelta

import boto3

def lambda_handler(event, context):
    # Get current metrics from CloudWatch
    cloudwatch = boto3.client('cloudwatch')
    
    # Get CPU utilization metrics
    response = cloudwatch.get_metric_data(
        MetricDataQueries=[
            {
                'Id': 'cpu',
                'MetricStat': {
                    'Metric': {
                        'Namespace': 'AWS/EC2',
                        'MetricName': 'CPUUtilization',
                        'Dimensions': [
                            {'Name': 'AutoScalingGroupName', 'Value': 'my-app-asg'},
                        ]
                    },
                    'Period': 300,
                    'Stat': 'Average',
                }
            },
        ],
        StartTime=datetime.utcnow() - timedelta(minutes=10),  # 10 minutes ago
        EndTime=datetime.utcnow()
    )
    
    values = response['MetricDataResults'][0]['Values']
    if not values:
        return "No CPU data available - skipping scaling decision"
    cpu_utilization = values[0]
    
    # Decision logic
    if cpu_utilization > 75:
        # Scale up
        print(f"CPU at {cpu_utilization}% - Scaling up")
        autoscaling = boto3.client('autoscaling')
        autoscaling.set_desired_capacity(
            AutoScalingGroupName='my-app-asg',
            DesiredCapacity=10,
            HonorCooldown=True
        )
        return "Scaled up due to high CPU"
    
    return "No scaling action needed"

This Lambda function runs every 5 minutes and has eliminated both our late-night scaling emergencies and the overprovisioning that was costing us thousands each month.
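
The five-minute schedule itself is just an EventBridge rule targeting the function. Here's a minimal sketch of wiring that up with boto3; the rule name and function ARN are placeholders, not our real values:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Placeholder identifiers - substitute your own function ARN and rule name
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:ec2-autoscaler'
RULE_NAME = 'ec2-autoscaler-every-5-minutes'

# Create (or update) a rule that fires every 5 minutes
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression='rate(5 minutes)',
    State='ENABLED',
)

# Point the rule at the scaling Lambda
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{'Id': 'ec2-autoscaler', 'Arn': FUNCTION_ARN}],
)

# Allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='allow-eventbridge-schedule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)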

2. Embracing Service Level Objectives (SLOs)

I used to ask, "Is our service up or down?" Now I know that's the wrong question. Instead, I ask, "Is our service meeting its reliability targets?" We define SLOs for each service and measure them meticulously through AWS CloudWatch metrics.

For our critical payment API, we set an SLO of 99.95% availability and 300ms latency for 99% of requests. When we approach our error budget, our GitLab pipeline automatically halts new deployments:

# GitLab CI pipeline with SLO checks
stages:
  - build
  - test
  - slo-check
  - deploy

# Build and test stages omitted for brevity

slo-check:
  stage: slo-check
  image: python:3.9
  script:
    - pip install boto3 pandas matplotlib
    - python slo_check.py
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

deploy-to-production:
  stage: deploy
  script:
    - aws lambda update-function-code --function-name payment-api --zip-file fileb://function.zip
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success
  needs:
    - job: slo-check
      artifacts: true

Our slo_check.py script pulls metrics from CloudWatch and verifies we have enough error budget remaining before allowing the deploy to proceed.
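
For context, here's a stripped-down sketch of what that check can look like. It assumes the payment API sits behind API Gateway (so the AWS/ApiGateway Count and 5XXError metrics are available) and that an 80% burn threshold blocks deploys; the API name and thresholds below are illustrative, not our exact values:

import sys
from datetime import datetime, timedelta

import boto3

SLO_AVAILABILITY = 0.9995      # 99.95% availability target from the SLO
BURN_THRESHOLD = 0.80          # block deploys once 80% of the budget is spent
WINDOW_DAYS = 30               # rolling SLO window

cloudwatch = boto3.client('cloudwatch')
end = datetime.utcnow()
start = end - timedelta(days=WINDOW_DAYS)


def metric_sum(metric_name):
    """Total of an API Gateway metric for the payment API over the SLO window."""
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/ApiGateway',
        MetricName=metric_name,
        Dimensions=[{'Name': 'ApiName', 'Value': 'payment-api'}],  # assumed API name
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=['Sum'],
    )
    return sum(point['Sum'] for point in response['Datapoints'])


total_requests = metric_sum('Count')
failed_requests = metric_sum('5XXError')

if total_requests == 0:
    print("No traffic in the window - allowing deploy")
    sys.exit(0)

availability = 1 - (failed_requests / total_requests)
budget_burned = (1 - availability) / (1 - SLO_AVAILABILITY)

print(f"Availability: {availability:.5%}, error budget burned: {budget_burned:.1%}")

if budget_burned >= BURN_THRESHOLD:
    print("Error budget nearly exhausted - blocking deploy")
    sys.exit(1)

Exiting non-zero is what fails the slo-check job, which in turn keeps deploy-to-production from running.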

3. Observability as a Superpower

Before my SRE journey, we'd spend hours in war rooms asking, "What's happening?" Now, our AWS-based observability stack gives us immediate insights. Every Lambda function, API Gateway, and EC2 instance exports structured logs, metrics, and traces that paint a complete picture.

Here's how we configure our Lambda functions for proper observability:

Resources:
  PaymentProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/
      Handler: app.lambda_handler
      Runtime: python3.9
      Tracing: Active  # Enables X-Ray tracing
      Environment:
        Variables:
          LOG_LEVEL: INFO
          POWERTOOLS_SERVICE_NAME: payment-service
          POWERTOOLS_METRICS_NAMESPACE: PaymentAPI
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:017000801446:layer:AWSLambdaPowertoolsPython:21
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /process
            Method: post

With this setup, our AWS Lambda Powertools automatically provide structured logging, metrics, and distributed tracing. When an incident occurs, we typically identify the root cause in minutes rather than hours.
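
Inside the function, the Powertools decorators pick up POWERTOOLS_SERVICE_NAME and POWERTOOLS_METRICS_NAMESPACE from the template above. A simplified handler sketch (the payment logic is omitted and the field names are illustrative):

from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit

# Service name and metrics namespace come from the environment variables in the SAM template
logger = Logger()
tracer = Tracer()
metrics = Metrics()


@logger.inject_lambda_context          # adds request ID, cold start flag, etc. to every log line
@tracer.capture_lambda_handler         # emits an X-Ray segment for each invocation
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event, context):
    logger.info("Processing payment", extra={"order_id": event.get("order_id")})

    # ... actual payment processing would go here ...

    metrics.add_metric(name="PaymentProcessed", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200, "body": "ok"}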

My GitLab-AWS Integration for SRE Success

The glue that holds our SRE practice together is GitLab CI/CD pipelines integrated with AWS services. Every infrastructure change, from Lambda function updates to EC2 configuration, goes through this pipeline:

stages:
  - validate
  - test
  - security
  - deploy-staging
  - canary
  - deploy-production
  - post-deploy

validate:
  stage: validate
  script:
    - cfn-lint templates/*.yaml
    - terraform validate

test:
  stage: test
  script:
    - pytest tests/

security:
  stage: security
  script:
    - bandit -r src/
    - checkov -d ./terraform
    - trivy fs --security-checks vuln,config,secret .

deploy-staging:
  stage: deploy-staging
  script:
    - aws cloudformation deploy --template-file template.yaml --stack-name $STACK_NAME-staging

canary:
  stage: canary
  script:
    - aws lambda update-function-code --function-name payment-api-canary --zip-file fileb://function.zip
    - python canary_test.py --wait 5 --threshold 99
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

deploy-production:
  stage: deploy-production
  script:
    - aws cloudformation deploy --template-file template.yaml --stack-name $STACK_NAME-production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual

post-deploy:
  stage: post-deploy
  script:
    - python monitor_deployment.py --duration 30 --alert-threshold 200
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

This pipeline catches 98% of potential issues before they reach production. The remaining 2%? That's where our error budgets and observability come into play.
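
The canary_test.py gate in the canary stage above is one of the simpler but more valuable pieces. A rough sketch of the idea, assuming the canary publishes a success-rate data point to a custom CloudWatch metric (the namespace, metric name, and dimension here are assumptions, not our real ones):

import argparse
import sys
import time
from datetime import datetime, timedelta

import boto3


def canary_success_rate(window_minutes):
    """Average success rate reported by the canary over the last few minutes."""
    cloudwatch = boto3.client('cloudwatch')
    end = datetime.utcnow()
    response = cloudwatch.get_metric_statistics(
        Namespace='PaymentAPI/Canary',      # assumed custom namespace
        MetricName='SuccessRate',           # assumed custom metric
        Dimensions=[{'Name': 'FunctionName', 'Value': 'payment-api-canary'}],
        StartTime=end - timedelta(minutes=window_minutes),
        EndTime=end,
        Period=60,
        Statistics=['Average'],
    )
    datapoints = response['Datapoints']
    if not datapoints:
        return None
    return sum(p['Average'] for p in datapoints) / len(datapoints)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--wait', type=int, default=5, help='minutes to let the canary bake')
    parser.add_argument('--threshold', type=float, default=99, help='minimum success rate (%)')
    args = parser.parse_args()

    time.sleep(args.wait * 60)  # let the canary take real traffic first

    rate = canary_success_rate(args.wait)
    if rate is None or rate < args.threshold:
        print(f"Canary unhealthy (success rate: {rate}) - failing the stage")
        sys.exit(1)

    print(f"Canary healthy at {rate:.2f}% - safe to proceed to the production gate")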

Lessons Learned from My SRE Journey with AWS and GitLab

After implementing SRE practices across multiple AWS-powered applications, I've learned some hard-earned lessons:

  1. Start with toil reduction: Before fancy SLOs and error budgets, automate the manual tasks draining your team. For us, that meant automating AWS instance provisioning and certificate renewals first.

  2. AWS Lambda isn't a silver bullet: While we use Lambda extensively for automation, we've learned its limitations. Stateful operations and tasks that run longer than 15 minutes still run on container-based solutions.

  3. Simple is reliable: Our most reliable services are the simplest ones. We've moved from complex microservices to "right-sized" services with clear boundaries, all documented as architecture diagrams in our GitLab repositories.

  4. Incremental improvement works: We didn't transform overnight. We started with one service, one SLO, and one automated task. Three years later, our entire platform follows SRE practices.

My AWS SRE Toolkit

For those looking to start their own SRE journey on AWS, here are the tools that have been game-changers for my team:

  1. AWS Lambda for automation, scaling responses, and periodic maintenance tasks

  2. CloudWatch Synthetic Canaries for continuous verification of critical paths

  3. X-Ray for distributed tracing across services

  4. EventBridge for event-driven automations and responses

  5. GitLab CI/CD with custom runners on EC2 for our deployment pipeline

  6. SSM Parameter Store for configuration management

  7. DynamoDB for storing SLO data and error budgets (see the sketch just after this list)
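
For that last item, here's a minimal sketch of recording an error-budget snapshot in DynamoDB, assuming a hypothetical slo-error-budgets table keyed on service name and date (the schema and numbers are illustrative):

from datetime import datetime, timezone
from decimal import Decimal

import boto3

table = boto3.resource('dynamodb').Table('slo-error-budgets')  # hypothetical table name

# One snapshot per service per day so burn rate can be trended over time
table.put_item(
    Item={
        'service': 'payment-api',                                 # partition key (assumed schema)
        'date': datetime.now(timezone.utc).strftime('%Y-%m-%d'),  # sort key (assumed schema)
        'slo_target': Decimal('0.9995'),
        'availability': Decimal('0.99972'),
        'error_budget_burned': Decimal('0.56'),                   # 56% of the monthly budget spent
    }
)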

Closing Thoughts

Site Reliability Engineering isn't just a job title or methodology; it's a mindset shift. My journey from traditional ops to SRE on AWS hasn't always been smooth, but the results speak for themselves: more reliable systems, happier customers, and engineers who can focus on innovation rather than firefighting.

If you're just starting your SRE journey, remember that perfect is the enemy of good. Start small, measure everything, and keep pushing the reliability envelope further. Your future self (especially at 3 AM) will thank you!

Next, I'll be diving deeper into error budgets and how we calculate them using CloudWatch metrics. Stay tuned!
