Understanding PKCE in OAuth 2.0

When I began implementing authentication for my first single-page application (SPA), I thought the standard OAuth 2.0 flows would be sufficient. I was wrong, and that mistake nearly led to a serious security breach at our company. That's when I discovered PKCE (Proof Key for Code Exchange), an extension that transformed how I approach authentication in public clients. Let me share what I've learned along this journey.

OAuth 2.0 vs. PKCE: Why I Changed My Approach

In my early days as an identity architect, I implemented traditional OAuth 2.0 authorization code flows for our web applications. The flow worked fine for server-side applications where we could securely store a client secret. The pattern was simple:

  1. User clicks "Login"

  2. Our application redirects to the authorization server

  3. User authenticates

  4. Authorization server redirects back with an authorization code

  5. Our server exchanges this code + client secret for tokens (I sketch this exchange just below)

  6. Authentication complete!
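
For context, here is roughly what that step 5 exchange looks like for a confidential (server-side) client. This is a simplified sketch rather than code from one of my apps; the endpoint and environment variable names are placeholders, and axios is used only because it appears in the full example later:

const axios = require('axios');

// Hypothetical confidential-client token exchange: note the client_secret,
// which has to live somewhere end users can never reach (i.e. on the server)
async function exchangeCodeForTokens(code) {
  const response = await axios.post(process.env.TOKEN_ENDPOINT, new URLSearchParams({
    grant_type: 'authorization_code',
    client_id: process.env.CLIENT_ID,
    client_secret: process.env.CLIENT_SECRET,  // the part a SPA or mobile app cannot hide
    code: code,
    redirect_uri: process.env.REDIRECT_URI
  }), {
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
  });
  return response.data; // access_token, refresh_token, id_token, expires_in
}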

But then we built our first mobile app and modern SPA, and I hit a roadblock: where do I securely store the client secret? The answer was uncomfortable: I couldn't. Storing it in JavaScript or a mobile app meant anyone could extract it.

The standard recommendation was to use the Implicit flow (where the authorization server returns tokens directly in the URL fragment), but this introduced new security issues with token leakage. I needed something better.

That's when I discovered PKCE, an elegant solution that allows public clients to use the authorization code flow securely without a client secret. After implementing it across our application ecosystem, I'll never go back to basic OAuth flows for public clients.

The Key Differences I've Learned

From hands-on experience building dozens of authentication flows, here are the crucial differences between standard OAuth 2.0 and PKCE:

| Standard OAuth 2.0 Auth Code Flow | OAuth 2.0 with PKCE |
| --- | --- |
| Requires a client secret for token exchange | Uses a dynamically generated code verifier/challenge pair instead |
| Not secure for public clients (mobile apps, SPAs) | Secure for all client types |
| Vulnerable to authorization code interception attacks | Resistant to code interception attacks |
| Simple setup for confidential clients | Slightly more complex but more secure |
| Static credentials | Dynamic proof-of-possession model |

When I Faced a Real Security Challenge

I'll never forget the day our security team reported a potential OAuth flow exploitation attempt. Someone had intercepted an authorization code through a compromised network and attempted to exchange it for tokens. With a traditional OAuth implementation, they might have succeeded. But since we had implemented PKCE, the attack failed: without the original code verifier that only existed in the user's legitimate browser session, the exchange was rejected.

Building a Secure OIDC Application with Keycloak and Node.js

Let me walk you through how I implement PKCE with OpenID Connect (OIDC) using Keycloak and Node.js in my projects.

The Authentication Flow I Use

At a high level, every login in my PKCE-enabled apps follows the same sequence:

  1. The app generates a random code verifier and derives a code challenge from it (SHA-256, Base64URL-encoded)

  2. The app redirects the user to Keycloak with the code challenge and a state value

  3. The user authenticates, and Keycloak redirects back with an authorization code

  4. The app exchanges the code plus the original code verifier for tokens

  5. Keycloak hashes the submitted verifier, compares it to the challenge it stored, and only then issues tokens

Step 1: Setting Up Keycloak

First, I configure Keycloak to support PKCE. Here's what I've learned works best:

  1. Create a dedicated realm for each environment (dev, staging, prod)

  2. Configure the client settings carefully:

    Client ID: my-spa-client
    Client Protocol: openid-connect
    Access Type: public      # Critical for PKCE usage
    Standard Flow Enabled: ON
    Direct Access Grants: OFF  # I disable this for security
    PKCE Code Challenge Method: S256  # Always use S256, not plain
    Valid Redirect URIs: https://myapp.com/callback
  3. Create roles and scopes that match your application's permission model

The key insight I learned: always use the S256 challenge method rather than plain. With plain, the challenge is the verifier itself, so anyone who can observe the authorization request can replay the code exchange; with S256, an intercepted challenge reveals nothing about the verifier.
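
If you prefer scripting the client creation over clicking through the admin console, the same settings can be posted to Keycloak's Admin REST API. This is a rough sketch: it assumes a recent Keycloak version (older builds prefix the path with /auth) and that you already hold an admin access token in ADMIN_TOKEN.

const axios = require('axios');

// Sketch: create the public, PKCE-enforcing client via the Admin REST API.
// The attribute below forces S256 for this client; the other fields mirror
// the console settings listed above.
async function createSpaClient() {
  await axios.post(
    `${process.env.KEYCLOAK_URL}/admin/realms/${process.env.KEYCLOAK_REALM}/clients`,
    {
      clientId: 'my-spa-client',
      protocol: 'openid-connect',
      publicClient: true,                 // Access Type: public
      standardFlowEnabled: true,          // Standard Flow Enabled: ON
      directAccessGrantsEnabled: false,   // Direct Access Grants: OFF
      redirectUris: ['https://myapp.com/callback'],
      attributes: { 'pkce.code.challenge.method': 'S256' }
    },
    { headers: { Authorization: `Bearer ${process.env.ADMIN_TOKEN}` } }
  );
}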

Step 2: My Node.js Implementation

Here's how I implement the client-side of PKCE in my Node.js applications:

const express = require('express');
const session = require('express-session');
const axios = require('axios');
const crypto = require('crypto');
const app = express();

// I always use secure sessions
app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: true,
  cookie: { secure: process.env.NODE_ENV === 'production' }
}));

// My Keycloak configuration
const config = {
  clientId: process.env.KEYCLOAK_CLIENT_ID,
  realm: process.env.KEYCLOAK_REALM,
  baseUrl: process.env.KEYCLOAK_URL,
  redirectUri: process.env.APP_URL + '/callback'
};

// Generate the OIDC endpoints from base configuration
const endpoints = {
  authorization: `${config.baseUrl}/realms/${config.realm}/protocol/openid-connect/auth`,
  token: `${config.baseUrl}/realms/${config.realm}/protocol/openid-connect/token`
};

// A helper to generate a secure random string (hex-encoded, so 2x `length` characters;
// the 64-byte default yields a 128-character verifier, the RFC 7636 maximum)
function generateRandomString(length = 64) {
  return crypto.randomBytes(length).toString('hex');
}

// A helper to create a code challenge from verifier
function generateCodeChallenge(verifier) {
  return crypto
    .createHash('sha256')
    .update(verifier)
    .digest('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=/g, '');
}

// Step 1: Initiate login and generate PKCE values
app.get('/login', (req, res) => {
  // I store the verifier in the session, never in localStorage or cookies
  const codeVerifier = generateRandomString();
  req.session.codeVerifier = codeVerifier;
  
  const codeChallenge = generateCodeChallenge(codeVerifier);
  
  // Construct authorization URL with PKCE parameters
  const authUrl = new URL(endpoints.authorization);
  authUrl.searchParams.append('client_id', config.clientId);
  authUrl.searchParams.append('redirect_uri', config.redirectUri);
  authUrl.searchParams.append('response_type', 'code');
  authUrl.searchParams.append('scope', 'openid profile email');
  authUrl.searchParams.append('code_challenge', codeChallenge);
  authUrl.searchParams.append('code_challenge_method', 'S256');
  
  // I add state to prevent CSRF attacks - a practice I always follow
  const state = generateRandomString(32);
  req.session.oauthState = state;
  authUrl.searchParams.append('state', state);
  
  res.redirect(authUrl.toString());
});

// Step 2: Handle the callback and exchange code for tokens
app.get('/callback', async (req, res) => {
  // Validate state to prevent CSRF attacks
  if (req.query.state !== req.session.oauthState) {
    return res.status(400).send('Invalid state parameter. Possible CSRF attack.');
  }
  
  const code = req.query.code;
  const codeVerifier = req.session.codeVerifier;
  
  // Clean up session after use
  delete req.session.codeVerifier;
  delete req.session.oauthState;
  
  try {
    // Exchange code + verifier for tokens
    const tokenResponse = await axios.post(endpoints.token, new URLSearchParams({
      grant_type: 'authorization_code',
      client_id: config.clientId,
      code: code,
      redirect_uri: config.redirectUri,
      code_verifier: codeVerifier
    }), {
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded'
      }
    });
    
    // Store tokens securely in session (never expose to frontend!)
    req.session.tokens = {
      access_token: tokenResponse.data.access_token,
      refresh_token: tokenResponse.data.refresh_token,
      id_token: tokenResponse.data.id_token,
      expires_at: Date.now() + tokenResponse.data.expires_in * 1000
    };
    
    // Parse the ID token claims. Skipping signature verification is acceptable here
    // only because the token came straight from the token endpoint over TLS; tokens
    // received any other way should be verified against Keycloak's JWKS.
    const idToken = tokenResponse.data.id_token;
    const [, payload] = idToken.split('.');
    const claims = JSON.parse(Buffer.from(payload, 'base64').toString());
    
    // Store user info in session
    req.session.user = {
      sub: claims.sub,
      name: claims.name,
      email: claims.email
    };
    
    res.redirect('/dashboard');
  } catch (error) {
    console.error('Token exchange failed:', error.response?.data || error.message);
    res.status(500).send('Authentication failed');
  }
});

// Protected route that requires authentication
app.get('/dashboard', (req, res) => {
  if (!req.session.user) {
    return res.redirect('/login');
  }
  
  res.send(`
    <h1>Welcome, ${req.session.user.name}!</h1>
    <p>Your email: ${req.session.user.email}</p>
    <a href="/logout">Log out</a>
  `);
});

app.get('/logout', (req, res) => {
  // Destroy the session (and the tokens it holds) before redirecting
  req.session.destroy(() => res.redirect('/'));
});

app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
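
One thing the listing above doesn't show is what to do once the access token expires, even though it stores expires_at and the refresh_token. Here's a sketch of the middleware I might put in front of protected routes; it reuses the config and endpoints objects from the listing and the standard refresh_token grant, so treat it as an illustration rather than a drop-in addition:

// Sketch: refresh the access token shortly before it expires, using the
// refresh_token grant against the same Keycloak token endpoint
async function ensureFreshTokens(req, res, next) {
  const tokens = req.session.tokens;
  if (!tokens) return res.redirect('/login');

  // Refresh a little early (30s) to avoid using a token that expires mid-request
  if (Date.now() < tokens.expires_at - 30000) return next();

  try {
    const response = await axios.post(endpoints.token, new URLSearchParams({
      grant_type: 'refresh_token',
      client_id: config.clientId,
      refresh_token: tokens.refresh_token
    }), {
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
    });

    req.session.tokens = {
      ...tokens,
      access_token: response.data.access_token,
      refresh_token: response.data.refresh_token,
      expires_at: Date.now() + response.data.expires_in * 1000
    };
    next();
  } catch (error) {
    // If the refresh token is expired or revoked, force a fresh login
    req.session.destroy(() => res.redirect('/login'));
  }
}

// Usage: app.get('/dashboard', ensureFreshTokens, (req, res) => { ... });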

Key Lessons I've Learned About PKCE Implementation

Through implementing PKCE across multiple projects, here are the crucial insights I've gained:

1. Security Best Practices

  • Always use S256 challenge method - The 'plain' method is vulnerable to interception

  • Store the code verifier server-side - I use server-side sessions, never client storage

  • Include state parameter - Essential for CSRF protection

  • Validate all inputs - I check every parameter from the authorization server (see the callback-validation sketch after this list)

  • Use HTTPS everywhere - Even in development environments
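
To make the "validate all inputs" point concrete, here is roughly the guard I put at the top of the callback handler. The main listing above only checks state; this sketch also handles the error parameter Keycloak sends back when a user cancels, and a missing code:

// Sketch: defensive checks at the top of the existing app.get('/callback', ...) handler
app.get('/callback', async (req, res) => {
  // The authorization server reports failures (e.g. the user pressed "Cancel")
  // via an error query parameter instead of a code
  if (req.query.error) {
    console.warn('Authorization failed:', req.query.error, req.query.error_description);
    return res.status(401).send('Login was cancelled or rejected.');
  }

  // A callback without a code, or without our stored PKCE/state values, is invalid
  if (!req.query.code || !req.session.codeVerifier || !req.session.oauthState) {
    return res.status(400).send('Invalid callback request.');
  }

  if (req.query.state !== req.session.oauthState) {
    return res.status(400).send('Invalid state parameter. Possible CSRF attack.');
  }

  // ...continue with the token exchange shown earlier
});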

2. Common Pitfalls I've Encountered

  • Base64URL encoding issues - Standard Base64 won't work; you must replace +, /, and = characters (or use Node's built-in base64url encoding, shown after this list)

  • Session management problems - If you're using stateless architectures, securely passing the verifier becomes challenging

  • Missing CORS headers - Often an issue when your app and auth server are on different domains

  • Improper token storage - Never store tokens in localStorage; use HTTP-only cookies or server sessions
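
On the Base64URL point: the manual character replacement in generateCodeChallenge works everywhere, but on Node 16+ I find the built-in base64url encoding simpler and harder to get wrong:

const crypto = require('crypto');

// Equivalent to generateCodeChallenge above, using Node's native base64url
// digest encoding (available since roughly Node 15.7) instead of manual replacement
function generateCodeChallenge(verifier) {
  return crypto.createHash('sha256').update(verifier).digest('base64url');
}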

Why I'll Never Go Back to Basic OAuth

Before PKCE, securing SPAs and mobile apps required uncomfortable security compromises. My options were:

  1. Use the Implicit flow (less secure, tokens in URL)

  2. Use a backend-for-frontend proxy (more complex architecture)

  3. Risk leaking a client secret in public code (security vulnerability)

PKCE eliminates these compromises. It provides the security benefits of the authorization code flow without requiring a client secret. It's the best of both worlds.

For any public client application I build today, whether it's a mobile app, SPA, or native desktop application, PKCE is my default choice. The additional security it provides against code interception attacks is well worth the minimal extra implementation effort.

If you've been using the Implicit flow or considering storing client secrets in public code, I strongly encourage you to make the switch to PKCE. Your security team will thank you, and you'll sleep better at night knowing your authentication flow is using modern best practices.

Conclusion: The Future of OAuth Security

PKCE has evolved from an optional extension to a critical security component for modern authentication. Major identity providers now recommend or require it for public clients. Based on my experience, I believe it will become the default approach for all OAuth 2.0 implementations in the future, even for confidential clients.

OAuth 2.1, which is currently in draft, is expected to make PKCE mandatory for all clients using the authorization code flow. This official recognition validates what security practitioners like me have been advocating for years: PKCE represents a significant improvement to OAuth security.

By embracing PKCE today, you're not just implementing a security best practice; you're also future-proofing your authentication architecture for the evolving identity landscape.
