Setting Up a Home Lab Logging Stack with Loki, Promtail, and Grafana on K3s
Deploy a robust logging system in your home lab with Loki for log aggregation, Promtail for collection, and Grafana for visualization on K3s.
Whether you're a hobbyist delving into the depths of home networking and server management, or simply someone passionate about maintaining a robust home lab setup, having a reliable logging system is crucial. It not only aids in troubleshooting but also helps in optimizing the performance of your applications. This guide is tailored for home lab enthusiasts looking to deploy Loki for log aggregation, Promtail for log collection, and Grafana for visualizing and querying logs, all on a lightweight Kubernetes platform, K3s. Let's dive into setting up a streamlined logging stack that can offer deep insights into your home lab's workings.
Prerequisites
Before diving into the deployment process, ensure you have the following:
- A running K3s cluster
- `kubectl` configured to communicate with your K3s cluster
- Basic familiarity with Kubernetes concepts like Deployments, Services, and PersistentVolumeClaims
Step 1: Deploying Loki
Loki is designed to aggregate logs from various sources, making log queries fast and efficient. We start by deploying Loki to the cluster.
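To make the data flow concrete before deploying anything: Loki's push API (`/loki/api/v1/push`) accepts JSON in which each stream is a set of labels plus timestamped log lines. Here is a minimal sketch in Python of the payload shape (no cluster needed; the label values are made up):

```python
import json
import time

def build_push_payload(labels: dict, lines: list) -> dict:
    # Each stream pairs a label set with entries of
    # [unix-time-in-nanoseconds-as-a-string, log line].
    ts = str(time.time_ns())
    return {
        "streams": [
            {"stream": labels, "values": [[ts, line] for line in lines]}
        ]
    }

payload = build_push_payload({"job": "homelab", "host": "node1"}, ["hello loki"])
print(json.dumps(payload, indent=2))
```

Promtail builds exactly this kind of payload for every batch it ships, which is why the relabeling rules later in this guide matter: they decide which labels end up on each stream, and labels are the only thing Loki indexes.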
Loki Deployment
Create a file named `loki-deployment.yaml` and paste the following content:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-configmap
data:
  local-config.yaml: |
    auth_enabled: false

    server:
      http_listen_port: 3100
      grpc_listen_port: 9096

    common:
      instance_addr: 127.0.0.1
      path_prefix: /tmp/loki
      storage:
        filesystem:
          chunks_directory: /tmp/loki/chunks
          rules_directory: /tmp/loki/rules
      replication_factor: 1
      ring:
        kvstore:
          store: inmemory

    query_range:
      results_cache:
        cache:
          embedded_cache:
            enabled: true
            max_size_mb: 100

    schema_config:
      configs:
        - from: 2020-10-24
          store: boltdb-shipper
          object_store: filesystem
          schema: v11
          index:
            prefix: index_
            period: 24h

    ruler:
      alertmanager_url: http://localhost:9093

    # By default, Loki sends anonymous, but uniquely-identifiable, usage and
    # configuration analytics to Grafana Labs at https://stats.grafana.org/.
    # For details on what is sent, see
    # https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
    # (the buildReport method shows what goes into a report).
    #
    # If you would like to disable reporting, uncomment the following lines:
    #analytics:
    #  reporting_enabled: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loki
spec:
  selector:
    matchLabels:
      app: loki
  replicas: 1
  template:
    metadata:
      labels:
        app: loki
    spec:
      containers:
        - name: loki
          image: grafana/loki:2.9.4
          args:
            - "-config.file=/etc/loki/local-config.yaml"
          ports:
            - containerPort: 3100
          volumeMounts:
            - name: loki-config
              mountPath: /etc/loki
              readOnly: true
      volumes:
        - name: loki-config
          configMap:
            name: loki-configmap
---
apiVersion: v1
kind: Service
metadata:
  name: loki
spec:
  selector:
    app: loki
  ports:
    - protocol: TCP
      port: 3100
      targetPort: 3100
```
Deploy Loki by running:
```shell
kubectl apply -f loki-deployment.yaml
```
Loki Service
The Service manifest at the end of `loki-deployment.yaml` already exposes Loki inside the cluster on port 3100, so Promtail and Grafana can reach it at `http://loki:3100`; it was created by the `kubectl apply` above. If you prefer to manage it separately, move the Service block into its own `loki-service.yaml` and apply it with:

```shell
kubectl apply -f loki-service.yaml
```
Step 2: Setting Up Promtail
Promtail collects logs from every node in your cluster and pushes them to Loki. We need to give it the RBAC permissions to discover pods through the Kubernetes API, and point its client at the Loki service.
Promtail Deployment
Define `promtail-deployment.yaml` with the following:
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-configmap
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0

    clients:
      - url: http://loki:3100/loki/api/v1/push

    positions:
      filename: /tmp/positions.yaml

    target_config:
      sync_period: 10s

    scrape_configs:
      - job_name: pod-logs
        kubernetes_sd_configs:
          - role: pod
        pipeline_stages:
          # K3s uses containerd, so node logs are in CRI format; the cri
          # stage parses them (the docker stage only handles Docker's
          # JSON log format).
          - cri: {}
        relabel_configs:
          - source_labels:
              - __meta_kubernetes_pod_node_name
            target_label: __host__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - action: replace
            replacement: $1
            separator: /
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_pod_name
            target_label: job
          - action: replace
            source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_container_name
            target_label: container
          - replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
              - __meta_kubernetes_pod_uid
              - __meta_kubernetes_pod_container_name
            target_label: __path__
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail-daemonset
spec:
  selector:
    matchLabels:
      name: promtail
  template:
    metadata:
      labels:
        name: promtail
    spec:
      serviceAccountName: promtail-serviceaccount
      containers:
        - name: promtail-container
          image: grafana/promtail:2.9.4
          args:
            - -config.file=/etc/promtail/promtail.yaml
          env:
            - name: "HOSTNAME" # needed when using kubernetes_sd_configs
              valueFrom:
                fieldRef:
                  fieldPath: "spec.nodeName"
          volumeMounts:
            - name: logs
              mountPath: /var/log
            - name: promtail-config
              mountPath: /etc/promtail
            # Only relevant on Docker-based nodes; harmless on K3s.
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: logs
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: promtail-config
          configMap:
            name: promtail-configmap
--- # ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail-clusterrole
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - services
      - pods
    verbs:
      - get
      - watch
      - list
--- # ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail-serviceaccount
--- # ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: promtail-serviceaccount
    namespace: default
roleRef:
  kind: ClusterRole
  name: promtail-clusterrole
  apiGroup: rbac.authorization.k8s.io
```
Apply this deployment:
```shell
kubectl apply -f promtail-deployment.yaml
```
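The least obvious part of the config above is the final relabel rule, which builds the glob of log files Promtail tails: the two source labels are joined with the separator, captured by the default `(.*)` regex, and substituted for `$1` in the replacement. A Python sketch of that string logic (this mirrors the rule's behavior, not Promtail's actual code; the UID below is made up):

```python
def build_path_glob(pod_uid: str, container: str) -> str:
    # separator "/" joins the source labels __meta_kubernetes_pod_uid and
    # __meta_kubernetes_pod_container_name; the joined value replaces $1
    # in the replacement /var/log/pods/*$1/*.log.
    joined = "/".join([pod_uid, container])
    return "/var/log/pods/*{}/*.log".format(joined)

# On disk, kubelet writes pod logs under
# /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/,
# so the leading * glob absorbs the "<namespace>_<pod-name>_" prefix.
print(build_path_glob("0000-fake-uid", "loki"))
# -> /var/log/pods/*0000-fake-uid/loki/*.log
```

This is why the DaemonSet mounts the host's `/var/log`: the resulting glob has to resolve against real files on each node.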
Step 3: Deploying Grafana
Grafana is your window to the data collected and aggregated by Loki and Promtail.
Grafana Deployment
Create `grafana-deployment.yaml` with:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
data:
  ds.yaml: |
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        orgId: 1
        url: http://loki:3100
        basicAuth: false
        isDefault: true
        version: 1
        editable: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - containerPort: 3000
          env:
            - name: GF_PATHS_PROVISIONING
              value: "/etc/grafana/provisioning"
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "true"
            - name: GF_AUTH_ANONYMOUS_ORG_ROLE
              value: "Admin"
          volumeMounts:
            - name: grafana-provisioning
              mountPath: /etc/grafana/provisioning/datasources
              readOnly: true
            - name: grafana-storage
              mountPath: /var/lib/grafana
      volumes:
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: grafana-pvc
        - name: grafana-provisioning
          configMap:
            name: grafana-datasources
```
And deploy it:
```shell
kubectl apply -f grafana-deployment.yaml
```
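The `ds.yaml` block in the ConfigMap above follows Grafana's datasource provisioning schema. If you tweak it, a quick offline sanity check of the fields is cheap; here is a sketch that mirrors the YAML as a plain dict (the checks are my own, not Grafana's validation):

```python
# Mirrors the provisioned datasource from ds.yaml above.
datasource = {
    "name": "Loki",
    "type": "loki",
    "access": "proxy",
    "url": "http://loki:3100",
    "isDefault": True,
}

def check_datasource(ds: dict) -> bool:
    # "proxy" access means the Grafana backend talks to Loki itself,
    # so the in-cluster Service DNS name is the right URL to use.
    required = {"name", "type", "access", "url"}
    return required <= set(ds) and ds["access"] == "proxy" and ds["url"].startswith("http")

print(check_datasource(datasource))  # -> True
```

Because the datasource is provisioned with `editable: false`, any change must go through this ConfigMap and a pod restart rather than the Grafana UI.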
Setting Up MetalLB
This guide uses MetalLB so that Grafana can be reached on a local network IP. With MetalLB installed in your cluster, define an address pool (pick a range outside your router's DHCP scope) and an L2 advertisement for it:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: main-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.229-192.168.1.245
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: main-ad
  namespace: metallb-system
spec:
  ipAddressPools:
    - main-pool
```
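The pool above hands MetalLB 17 addresses. Before applying it, it is worth checking that the range is well-formed and sized the way you expect; a small offline check using Python's stdlib `ipaddress` module (adjust the range to your own network):

```python
import ipaddress

def pool_size(addr_range: str) -> int:
    # MetalLB pool ranges are written "first-last", inclusive on both ends.
    first_s, last_s = addr_range.split("-")
    first = ipaddress.ip_address(first_s)
    last = ipaddress.ip_address(last_s)
    if last < first:
        raise ValueError("range is reversed")
    return int(last) - int(first) + 1

print(pool_size("192.168.1.229-192.168.1.245"))  # -> 17
```

Keeping the pool outside your router's DHCP scope avoids the same address being handed out twice.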
Grafana Service
To access Grafana from your network, create a Service of type LoadBalancer in `grafana-service.yaml`; MetalLB will assign it a local network IP. Replace the `loadBalancerIP` below with an address from your pool:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-lb
spec:
  selector:
    app: grafana
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: 192.168.1.242
```
Apply this to make Grafana accessible:
```shell
kubectl apply -f grafana-service.yaml
```
Accessing Grafana
Grafana is now reachable at `http://192.168.1.242:3000` (or whichever pool address you chose) in your web browser. Because the deployment enables anonymous access with the Admin role, no login is required; if you disable anonymous auth, log in as `admin` (default password `admin`). The Loki datasource is already provisioned from the `grafana-datasources` ConfigMap, so head straight to Explore, pick Loki, and start querying your logs.
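If you would rather check Loki without going through Grafana, you can query its HTTP API directly. The sketch below only builds the `query_range` URL; URL-encoding matters because LogQL uses braces and pipes. Actually running the request assumes a route to Loki, e.g. `kubectl port-forward svc/loki 3100:3100`:

```python
from urllib.parse import urlencode

# In-cluster, the Loki Service resolves as http://loki:3100; from a
# workstation, port-forward first and use http://localhost:3100 instead.
base = "http://loki:3100/loki/api/v1/query_range"
params = {
    # LogQL: all logs from the default namespace containing "error"
    "query": '{namespace="default"} |= "error"',
    "limit": "100",
}
url = base + "?" + urlencode(params)
print(url)
```

Fetching this URL (with `curl` or any HTTP client) returns a JSON result whose streams carry the same labels you saw Promtail attach earlier.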
Conclusion
Deploying Loki, Promtail, and Grafana on a K3s cluster provides a scalable and efficient logging solution for Kubernetes environments. By following the steps outlined in this guide, you'll have a powerful toolset at your disposal for observability and log management.
Remember to monitor your deployments and adjust resources as necessary to ensure optimal performance. Happy logging!