Part 4: Kibana - Visualization and Exploration

Part of the ELK Stack 101 Series

The Dashboard That Saved Production

3 AM. Production is on fire. Error rates spiking. Users complaining.

Before Kibana: SSH into 20 servers, grep log files, piece together what's happening, 30+ minutes to understand the issue.

With Kibana: Open dashboard, see error spike correlated with deployment at 2:47 AM, identify failing service, find exact error in 90 seconds.

We rolled back in 5 minutes. Incident resolved.

That's the power of Kibana - turning raw logs into actionable insights through visualizations, dashboards, and powerful search tools.

In this article, I'll show you everything I use in Kibana - from basic searches to building production monitoring dashboards.

What is Kibana?

Kibana is the visualization and exploration layer for Elasticsearch. It's:

  • Search UI: Query and explore data

  • Visualization tool: Charts, graphs, maps

  • Dashboard platform: Combine visualizations

  • Management interface: Configure Elasticsearch, ILM, index patterns

Written in Node.js, runs as a web application, connects to Elasticsearch.

Installing Kibana

Method 1: Docker
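A minimal way to run it, assuming an Elasticsearch container named elasticsearch on the same Docker network (the version tag is an example - match your cluster's version):

  docker run -d --name kibana \
    -p 5601:5601 \
    -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
    docker.elastic.co/kibana/kibana:8.13.0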

Access: http://localhost:5601

Method 2: Linux Installation

On Ubuntu/Debian:
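A typical install from Elastic's APT repository (the 8.x repo here is an assumption - use the major version you run):

  wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
    sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
  echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
    sudo tee /etc/apt/sources.list.d/elastic-8.x.list
  sudo apt-get update && sudo apt-get install kibana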

kibana.yml:
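A minimal configuration for a single-node setup (host and Elasticsearch URL are assumptions - adjust for your environment):

  server.port: 5601
  server.host: "0.0.0.0"
  elasticsearch.hosts: ["http://localhost:9200"]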

Start service:
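Using the systemd unit installed by the package:

  sudo systemctl daemon-reload
  sudo systemctl enable kibana
  sudo systemctl start kibana
  sudo systemctl status kibana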

Method 3: Docker Compose (Full Stack)

docker-compose.yml:
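A single-node sketch for local testing (version tags and heap size are assumptions, and security is disabled here only for convenience). The Logstash service from Part 3 can be added alongside these two:

  services:
    elasticsearch:
      image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
      environment:
        - discovery.type=single-node
        - xpack.security.enabled=false
        - ES_JAVA_OPTS=-Xms1g -Xmx1g
      ports:
        - "9200:9200"
    kibana:
      image: docker.elastic.co/kibana/kibana:8.13.0
      environment:
        - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      ports:
        - "5601:5601"
      depends_on:
        - elasticsearch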

Access Kibana: http://localhost:5601

First Time Setup

1. Index Patterns

Index patterns tell Kibana which Elasticsearch indices to use.

Create index pattern:

  1. Navigate to Stack Management → Data Views

  2. Click Create data view

  3. Name: logs-*

  4. Index pattern: logs-*

  5. Timestamp field: @timestamp

  6. Click Create data view

Now Kibana can query all logs-* indices.

2. Sample Data (Optional)

For testing, load sample data:

  1. Home → Add sample data

  2. Choose "Sample web logs" or "Sample flight data"

  3. Click Add data

Great for exploring Kibana features.

Discover - Searching Logs

Discover is where I spend 80% of my time - searching and exploring logs.

  1. Navigate to Discover

  2. Select data view: logs-*

  3. Set time range (top right): Last 15 minutes, Last 24 hours, etc.

You'll see:

  • Histogram of log volume over time

  • Table of recent logs

  • Field list on left

KQL (Kibana Query Language)

My preferred search syntax:

Basic queries:
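These use the same level, service, and message fields as the rest of this series:

  level: ERROR
  service: "payment-service"
  message: "connection timeout"
  level: ERROR and service: payment*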

Complex queries:
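Boolean logic, negation, and numeric ranges combine naturally:

  (level: ERROR or level: WARN) and service: "payment-service"
  level: ERROR and not message: "health check"
  response_time > 500 and service: payment*
  level: ERROR and (service: "payment-service" or service: "order-service")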

Lucene Query Syntax (Alternative)

Toggle to Lucene for advanced queries:
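Lucene adds regular expressions, explicit ranges, and fuzzy matching that KQL doesn't cover (same field names as above):

  message:/timeout.*payment/
  response_time:[500 TO 1000]
  service:payment~1
  level:ERROR AND NOT message:"health check"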

I use KQL 90% of the time - simpler syntax.

Filtering

Click field values to filter:

  1. Find field in left sidebar

  2. Click value to add filter

  3. Click + to include, - to exclude

Edit filters:

  • Click filter to edit

  • Toggle enable/disable

  • Pin across views

Time Filtering

Time picker (top right):

Quick select:

  • Last 15 minutes

  • Last 1 hour

  • Last 24 hours

  • Last 7 days

Relative:

  • now-15m to now

  • now-1h to now

Absolute:

  • Select start and end dates

Refresh:

  • Set auto-refresh interval (10s, 30s, 1m)

Saved Searches

Save frequently used queries:

  1. Build query + filters

  2. Click Save (top right)

  3. Name: "Payment Service Errors"

  4. Click Save

Reload anytime from Discover sidebar.

Visualizations

Kibana supports many visualization types. Let me show you the ones I use most.

Creating a Visualization

Two ways:

Method 1: From Discover

  1. Build search

  2. Click Visualize

Method 2: From Visualize

  1. Navigate to Visualize Library

  2. Click Create visualization

  3. Choose type

Visualization Types

1. Metric (Single Number)

Show total error count:

Configuration:

  • Data view: logs-*

  • Aggregation: Count

  • Filter: level: ERROR

Result: Big number showing total errors

Use case: KPI tiles on dashboards

2. Line Chart (Time Series)

Logs over time by level:

Configuration:

  • X-axis: Date histogram on @timestamp (interval: auto)

  • Y-axis: Count

  • Split series: Terms on level field

Result: Line chart showing ERROR, WARN, INFO trends

This is my most-used visualization.

3. Bar Chart

Top 10 services by error count:

Configuration:

  • X-axis: Terms on service (size: 10, order by count desc)

  • Y-axis: Count

  • Filter: level: ERROR

Result: Bar chart ranking services by errors

4. Pie Chart

Error distribution by service:

Configuration:

  • Slice by: Terms on service

  • Metrics: Count

Result: Pie chart showing proportional error counts

I use these for quick overviews.

5. Data Table

Top error messages:

Configuration:

  • Rows: Terms on message.keyword (size: 20)

  • Metrics: Count

  • Columns: Terms on service

Result: Table of most common errors

Great for drilling into specifics.

6. Area Chart

Stacked logs by level:

Configuration:

  • X-axis: Date histogram on @timestamp

  • Y-axis: Count

  • Split series: Terms on level

  • Chart type: Area (stacked)

Result: Stacked area showing log volume composition

7. Heat Map

Response time by hour and service:

Configuration:

  • X-axis: Date histogram on @timestamp (interval: 1 hour)

  • Y-axis: Terms on service

  • Cell value: Average response_time

Result: Heat map showing when/where slowdowns occur

Perfect for identifying patterns.

8. Maps (with GeoIP)

User locations:

Configuration:

  • Map type: Coordinate map

  • Geo coordinates: geoip.location

  • Metrics: Count

Result: World map with user activity dots

Requires GeoIP enrichment in Logstash.
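A minimal Logstash geoip filter looks like this (the source field name is an assumption - use whatever field holds the client IP in your pipeline):

  filter {
    geoip {
      source => "client_ip"
      target => "geoip"
    }
  }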

Lens - Modern Visualization Builder

Lens is the new drag-and-drop visualization tool.

Create visualization:

  1. Go to Visualize Library → Create → Lens

  2. Drag fields to workspace

  3. Kibana suggests visualization types

  4. Customize as needed

Example: Drag @timestamp to X-axis, Kibana creates time series chart automatically.

I use Lens for 90% of new visualizations - it's intuitive.

Dashboards

Dashboards combine multiple visualizations into a single view.

Creating a Dashboard

  1. Navigate to Dashboard

  2. Click Create dashboard

  3. Click Add from library or Create visualization

  4. Arrange visualizations

  5. Save

My Production Monitoring Dashboard

"Microservices Health Dashboard":

Layout:
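Roughly like this (an illustrative sketch, not an export from Kibana):

  +------------+--------------+----------------+-----------------+
  | Total logs | Errors (1h)  | Warnings (1h)  | Avg resp. time  |
  +------------+--------------+----------------+-----------------+
  |            Logs over time (line, split by level)             |
  +-------------------------------+-------------------------------+
  | Top services by error count   | Error heat map                |
  +-------------------------------+-------------------------------+
  |            Recent error messages (data table)                |
  +---------------------------------------------------------------+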

Visualizations:

1. Metrics (top row):

  • Total log count

  • Error count (1 hour)

  • Warning count (1 hour)

  • Average response time

2. Time series:

  • Line chart: Logs over time split by level

3. Analysis:

  • Bar chart: Top services by error count

  • Heat map: Error rate by service and time

4. Details:

  • Data table: Recent error messages with service, timestamp

Dashboard Filters

Apply filters to entire dashboard:

  1. Click Add filter

  2. Field: environment

  3. Value: production

  4. Apply

All visualizations update to show only production logs.

Time Controls

Set time range for entire dashboard:

  • Use time picker (top right)

  • All visualizations sync to same time range

Dashboard Drilldown

Click on visualization to filter:

  1. Click on "payment-service" in bar chart

  2. Entire dashboard filters to payment-service

  3. See related errors, time series, etc.

Clear filter to return to full view.

Saving and Sharing

Save dashboard:

  1. Click Save

  2. Name: "Microservices Health"

  3. Description (optional)

  4. Save

Share dashboard:

  1. Click Share

  2. Copy link (includes filters and time range)

  3. Send to team

Export PDF (with X-Pack):

  1. Share → PDF Reports

  2. Generate report

  3. Download or email

Canvas - Custom Infographics

Canvas is for pixel-perfect, presentation-ready dashboards.

Use cases:

  • Executive dashboards

  • NOC displays

  • Custom branding

Example: Create a "war room" display with:

  • Real-time metrics

  • Alert status

  • Service topology diagram

  • Custom graphics and logos

I use Canvas for high-visibility displays, not day-to-day monitoring.

Alerts and Actions

Monitor data and trigger actions (requires X-Pack Basic+).

Creating an Alert

  1. Navigate to Stack Management → Rules and Connectors

  2. Click Create rule

  3. Choose rule type: Elasticsearch query

Example alert: "High Error Rate"

Configuration:

  • Name: High Error Rate Alert

  • Check every: 1 minute

  • Index: logs-*

  • Time field: @timestamp

  • Query: level: ERROR

  • Threshold: Count > 100 in last 5 minutes

  • Action: Send email / Slack / PagerDuty

When triggered, alert sends notification.

Connectors

Integrate with external services:

  • Email: SMTP

  • Slack: Webhook

  • PagerDuty: API

  • Webhook: Custom HTTP endpoint

  • Microsoft Teams: Webhook

Configure in Stack Management → Connectors.

Watcher (Advanced Alerting)

For complex alerting logic, use Watcher (X-Pack):
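A sketch of a watch that mirrors the "High Error Rate" rule above (the webhook URL is a placeholder, and this assumes level is mapped as a keyword field):

  PUT _watcher/watch/high_error_rate
  {
    "trigger": { "schedule": { "interval": "1m" } },
    "input": {
      "search": {
        "request": {
          "indices": ["logs-*"],
          "body": {
            "query": {
              "bool": {
                "filter": [
                  { "term": { "level": "ERROR" } },
                  { "range": { "@timestamp": { "gte": "now-5m" } } }
                ]
              }
            }
          }
        }
      }
    },
    "condition": {
      "compare": { "ctx.payload.hits.total": { "gt": 100 } }
    },
    "actions": {
      "notify_team": {
        "webhook": {
          "method": "POST",
          "url": "https://hooks.example.com/alerts",
          "body": "{{ctx.payload.hits.total}} errors in the last 5 minutes"
        }
      }
    }
  }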

Dev Tools Console

Dev Tools is where I interact directly with Elasticsearch REST API.

Using Console

  1. Navigate to Dev Tools

  2. Type query in left pane

  3. Click green play button or Ctrl+Enter

  4. See response in right pane

Example queries:
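A few typical ones (the terms aggregation assumes service is mapped as a keyword):

  # Cluster health
  GET _cluster/health

  # List indices with sizes
  GET _cat/indices?v

  # Latest 10 errors
  GET logs-*/_search
  {
    "size": 10,
    "sort": [{ "@timestamp": "desc" }],
    "query": { "match": { "level": "ERROR" } }
  }

  # Error count per service
  GET logs-*/_search
  {
    "size": 0,
    "query": { "match": { "level": "ERROR" } },
    "aggs": {
      "by_service": { "terms": { "field": "service", "size": 10 } }
    }
  }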

I use Dev Tools constantly for testing queries before adding to dashboards.

Stack Management

Centralized configuration for Elasticsearch and Kibana.

Key Sections

Index Management:

  • View indices

  • Delete indices

  • Adjust settings

  • Manage ILM policies

Data Views:

  • Create/edit index patterns

  • Manage field formatters

Advanced Settings:

  • Configure Kibana behavior

  • Default index pattern

  • Date formats

Saved Objects:

  • Import/export dashboards, visualizations, searches

  • Share configurations between environments
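
Exports can also be scripted against the Kibana saved objects API - handy for promoting dashboards from staging to production (URL and object types here are examples):

  curl -X POST "http://localhost:5601/api/saved_objects/_export" \
    -H "kbn-xsrf: true" \
    -H "Content-Type: application/json" \
    -d '{ "type": "dashboard", "includeReferencesDeep": true }' \
    > dashboards.ndjson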

My Daily Kibana Workflows

Workflow 1: Investigating Production Issues

  1. Open main dashboard - see current state

  2. Notice error spike at specific time

  3. Click on spike to drill down

  4. Filter by service showing errors

  5. Switch to Discover to see actual error messages

  6. Search for specific error pattern using KQL

  7. Expand log entry to see full details (trace ID, stack trace)

  8. Follow trace ID to related logs across services

  9. Identify root cause

Time: 2-5 minutes

Workflow 2: Building New Dashboard

  1. Identify metrics needed (error rate, response time, etc.)

  2. Create visualizations in Lens

  3. Test queries in Discover

  4. Combine into dashboard

  5. Add filters and time controls

  6. Test with team

  7. Save and share

Workflow 3: Reviewing Weekly Trends

  1. Open historical dashboard

  2. Set time range to last 7 days

  3. Look for patterns (daily peaks, weekly trends)

  4. Create visualizations for anomalies

  5. Export insights to share with team

Kibana Spaces

Organize dashboards by team or use case (requires X-Pack).

Creating a Space

  1. Stack Management → Spaces

  2. Create space

  3. Name: "platform-team" or "prod-monitoring"

  4. Choose which features to enable

Each space has separate:

  • Dashboards

  • Visualizations

  • Searches

  • Index patterns

Use for multi-team environments.

Security (X-Pack)

Control access to Kibana and data.

Users and Roles

Create role:

  1. Stack Management → Roles

  2. Create role: log_viewer

  3. Cluster privileges: monitor

  4. Index privileges:

    • Indices: logs-*

    • Privileges: read, view_index_metadata

  5. Kibana privileges: Read access to Discover, Dashboard

Create user:

  1. Stack Management → Users

  2. Create user: john.doe

  3. Assign role: log_viewer

Now the user can view logs but not modify the cluster.
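The same Elasticsearch-side role and user can also be created from Dev Tools (Kibana feature privileges still need to be granted through the role UI, as above - this sketch covers only the index and cluster privileges):

  POST _security/role/log_viewer
  {
    "cluster": ["monitor"],
    "indices": [
      {
        "names": ["logs-*"],
        "privileges": ["read", "view_index_metadata"]
      }
    ]
  }

  POST _security/user/john.doe
  {
    "password": "a-strong-password",
    "roles": ["log_viewer"],
    "full_name": "John Doe"
  }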

Performance Tips

1. Limit Time Ranges

Searching all data is slow:

  • Default to last 24 hours

  • Use relative time ranges

  • Avoid "Last 90 days" unless needed

2. Use Filters, Not Queries

Filters are cached, queries are not:

  • Use KQL for free-text search

  • Use filters for exact matches

3. Limit Visualization Buckets

Too many buckets slow down visualizations:

  • Terms aggregation: Limit to 10-20 terms

  • Date histogram: Use appropriate intervals (auto, 1h, 1d)

4. Disable Auto-Refresh in Production

Auto-refresh hits Elasticsearch repeatedly:

  • Use manual refresh

  • Enable auto-refresh only when actively monitoring

Common Issues

Issue 1: "No results found"

Check:

  • Time range (are logs in this range?)

  • Index pattern (does it match indices?)

  • Filters (are they too restrictive?)

  • Field name (case-sensitive)

Issue 2: Visualization shows no data

Check:

  • Time range

  • Filters

  • Aggregation field (is it mapped correctly?)

  • Data actually exists in Elasticsearch

Issue 3: Kibana slow

Solutions:

  • Limit time range

  • Reduce visualization complexity

  • Check Elasticsearch cluster health

  • Increase Kibana memory

Conclusion

Kibana turns raw logs into insights through powerful search, visualizations, and dashboards. Key takeaways:

Discovery:

  • KQL for searching logs

  • Filters for drilling down

  • Saved searches for common queries

Visualizations:

  • Line charts for time series

  • Bar charts for rankings

  • Tables for details

  • Lens for easy creation

Dashboards:

  • Combine multiple visualizations

  • Filter entire dashboard

  • Share with team

  • Export reports

Advanced:

  • Alerts for proactive monitoring

  • Dev Tools for direct Elasticsearch access

  • Canvas for custom displays

  • Spaces for organization

In the next article, we'll cover production best practices - security, scaling, backup, and running ELK at scale.

Previous: Part 3 - Logstash Pipeline
Next: Part 5 - Production Best Practices


This article is part of the ELK Stack 101 series. Check out the series overview for more content.
