The Kubernetes Terraform provider allows you to manage Kubernetes resources using the same Infrastructure as Code principles you apply to your cloud infrastructure. This approach provides unified lifecycle management, dependency tracking, and drift detection for your Kubernetes workloads.
When to Use the Kubernetes Provider
The Kubernetes provider is ideal when you want to:
- Manage both infrastructure and applications in a single Terraform workflow
- Leverage Terraform’s dependency graph for deployments
- Use consistent tooling across your entire stack
- Implement GitOps workflows with infrastructure and application code together
For complex application deployments, consider dedicated tools like Helm, Kustomize, or ArgoCD.
Authentication and Setup
The provider supports multiple authentication methods, listed in order of preference:
Cloud Provider Integration (Recommended)
For managed Kubernetes services, use cloud-specific authentication:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.16.1"
    }
  }
}

# EKS Authentication
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

# Alternative to the exec block below: pass data.aws_eks_cluster_auth.cluster.token
# to the provider's token argument.
data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # The exec plugin fetches a fresh token on every Terraform run
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
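
Other managed services follow the same pattern. For example, a GKE cluster can be wired up through the Google provider's data sources; the sketch below assumes the hashicorp/google provider and a var.cluster_location variable, so adapt the names to your setup:

# GKE authentication (sketch)
data "google_client_config" "default" {}

data "google_container_cluster" "cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}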
Kubeconfig File
For local development or when using kubectl:
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "my-cluster-context"
}
In-Cluster Configuration
For Terraform running inside a Kubernetes cluster:
provider "kubernetes" {
# Uses service account token automatically
}
Core Kubernetes Resources
Namespaces
Create isolated environments:
resource "kubernetes_namespace" "app" {
metadata {
name = "my-application"
labels = {
environment = "production"
managed-by = "terraform"
}
annotations = {
"description" = "Application namespace"
}
}
}
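
Because namespaces are plain Terraform resources, namespace-level guardrails such as resource quotas can live alongside them. A minimal sketch; the limits shown are illustrative values, not recommendations:

resource "kubernetes_resource_quota" "app" {
  metadata {
    name      = "app-quota"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    hard = {
      "requests.cpu"    = "4"
      "requests.memory" = "8Gi"
      "limits.cpu"      = "8"
      "limits.memory"   = "16Gi"
      pods              = "20"
    }
  }
}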
Deployments
Manage application deployments:
resource "kubernetes_deployment" "app" {
metadata {
name = "nginx-deployment"
namespace = kubernetes_namespace.app.metadata[0].name
labels = {
app = "nginx"
}
}
spec {
replicas = 3
selector {
match_labels = {
app = "nginx"
}
}
template {
metadata {
labels = {
app = "nginx"
}
}
spec {
container {
name = "nginx"
image = "nginx:1.26"
port {
container_port = 80
}
resources {
limits = {
cpu = "500m"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "128Mi"
}
}
liveness_probe {
http_get {
path = "/"
port = 80
}
initial_delay_seconds = 30
period_seconds = 10
}
readiness_probe {
http_get {
path = "/"
port = 80
}
initial_delay_seconds = 5
period_seconds = 5
}
}
}
}
}
}
Services
Expose applications within the cluster:
resource "kubernetes_service" "app" {
metadata {
name = "nginx-service"
namespace = kubernetes_namespace.app.metadata[0].name
}
spec {
selector = {
app = "nginx"
}
port {
port = 80
target_port = 80
protocol = "TCP"
}
type = "ClusterIP"
}
}
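
Since the service is a typed resource, its computed attributes can be referenced or exported like any other Terraform value. For example, a sketch exposing the assigned cluster IP:

output "nginx_service_cluster_ip" {
  description = "ClusterIP assigned to the nginx service"
  value       = kubernetes_service.app.spec[0].cluster_ip
}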
Configuration Management
ConfigMaps and Secrets
Manage configuration and sensitive data:
resource "kubernetes_config_map" "app_config" {
metadata {
name = "app-config"
namespace = kubernetes_namespace.app.metadata[0].name
}
data = {
"app.properties" = <<-EOF
database.host=db.example.com
database.port=5432
log.level=INFO
EOF
"config.json" = jsonencode({
debug = false
timeout = 30
retries = 3
})
}
}
resource "kubernetes_secret" "app_secrets" {
metadata {
name = "app-secrets"
namespace = kubernetes_namespace.app.metadata[0].name
}
type = "Opaque"
data = {
database-password = base64encode(var.db_password)
api-key = base64encode(var.api_key)
}
}
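
To wire this configuration into the deployment shown earlier, the pod can mount the ConfigMap as files and read the secret through environment variables. A sketch, eliding the parts already shown above:

resource "kubernetes_deployment" "app" {
  # ... metadata as above

  spec {
    # ... replicas, selector as above

    template {
      # ... pod metadata as above

      spec {
        volume {
          name = "app-config"

          config_map {
            name = kubernetes_config_map.app_config.metadata[0].name
          }
        }

        container {
          # ... name, image, ports as above

          volume_mount {
            name       = "app-config"
            mount_path = "/etc/app"
            read_only  = true
          }

          env {
            name = "DATABASE_PASSWORD"

            value_from {
              secret_key_ref {
                name = kubernetes_secret.app_secrets.metadata[0].name
                key  = "database-password"
              }
            }
          }
        }
      }
    }
  }
}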
Scaling and Traffic Management
Horizontal Pod Autoscaler
Automatically scale based on metrics:
resource "kubernetes_horizontal_pod_autoscaler_v2" "app_hpa" {
metadata {
name = "nginx-hpa"
namespace = kubernetes_namespace.app.metadata[0].name
}
spec {
min_replicas = 2
max_replicas = 10
scale_target_ref {
api_version = "apps/v1"
kind = "Deployment"
name = kubernetes_deployment.app.metadata[0].name
}
metric {
type = "Resource"
resource {
name = "cpu"
target {
type = "Utilization"
average_utilisation = 70
}
}
}
metric {
type = "Resource"
resource {
name = "memory"
target {
type = "Utilization"
average_utilisation = 80
}
}
}
}
}
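
Once the autoscaler owns the replica count, the hard-coded replicas in the deployment will look like drift on every plan. A common way to avoid that tug-of-war is to ignore the field, sketched below against the deployment defined earlier:

resource "kubernetes_deployment" "app" {
  # ... metadata and spec as above

  lifecycle {
    # Let the HPA manage the replica count instead of Terraform resetting it
    ignore_changes = [spec[0].replicas]
  }
}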
Ingress
Expose applications externally:
resource "kubernetes_ingress_v1" "app_ingress" {
metadata {
name = "app-ingress"
namespace = kubernetes_namespace.app.metadata[0].name
annotations = {
"kubernetes.io/ingress.class" = "nginx"
"cert-manager.io/cluster-issuer" = "letsencrypt-prod"
"nginx.ingress.kubernetes.io/rewrite-target" = "/"
}
}
spec {
tls {
hosts = ["app.example.com"]
secret_name = "app-tls"
}
rule {
host = "app.example.com"
http {
path {
path = "/"
path_type = "Prefix"
backend {
service {
name = kubernetes_service.app.metadata[0].name
port {
number = 80
}
}
}
}
}
}
}
}
Migrating from YAML to Terraform
When migrating existing Kubernetes YAML manifests to Terraform, several tools can help automate the conversion process and provide a structured approach to adopting Infrastructure as Code.
Migration Tools Comparison
There are two main tools for converting Kubernetes YAML to Terraform, each with very different approaches:
k2tf (Recommended for Better Code)
k2tf generates native, typed Terraform resources such as kubernetes_deployment and kubernetes_service.
Installation
go install github.com/sl1pm4t/k2tf@latest
What k2tf Generates
# k2tf creates native, typed resources
resource "kubernetes_deployment" "nginx_deployment" {
metadata {
name = "nginx-deployment"
namespace = "default"
labels = {
app = "nginx"
}
}
spec {
replicas = 3
selector {
match_labels = {
app = "nginx"
}
}
template {
metadata {
labels = {
app = "nginx"
}
}
spec {
container {
name = "nginx"
image = "nginx:1.26"
port {
container_port = 80
}
resources {
limits = {
cpu = "500m"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "128Mi"
}
}
}
}
}
}
}
resource "kubernetes_service" "nginx_service" {
metadata {
name = "nginx-service"
namespace = "default"
}
spec {
selector = {
app = "nginx"
}
port {
port = 80
target_port = 80
protocol = "TCP"
}
type = "ClusterIP"
}
}
tfk8s (Quick but Generic)
tfk8s generates kubernetes_manifest resources - essentially YAML-in-HCL format.
Installation
brew install tfk8s
What tfk8s Generates
# tfk8s creates generic manifest resources
resource "kubernetes_manifest" "nginx_deployment" {
manifest = {
apiVersion = "apps/v1"
kind = "Deployment"
metadata = {
name = "nginx-deployment"
namespace = "default"
labels = {
app = "nginx"
}
}
spec = {
# ... YAML structure preserved in HCL
}
}
}
Which Tool Should You Choose?
Use k2tf if:
- ✅ You want proper Terraform resources with validation
- ✅ You need IDE support and autocompletion
- ✅ You want to reference resources easily (kubernetes_service.nginx.metadata[0].name)
- ✅ You prefer idiomatic Terraform code
Use tfk8s if:
- ✅ You need to migrate quickly with minimal changes
- ✅ You’re using Custom Resource Definitions (CRDs)
- ✅ You want to copy-paste from Kubernetes docs easily
Converting Your YAML
Starting YAML:
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 3
  # ... rest of your YAML
Convert with k2tf (native resources):
k2tf -f nginx-deployment.yaml -o nginx.tf
Convert with tfk8s (manifest resources):
tfk8s -f nginx-deployment.yaml -o nginx.tf
Advanced Migration Strategies
Working with Kustomize
Build the final manifests first, then convert:
# Build kustomized manifests
kustomize build overlays/production > production-manifests.yaml
# Convert with k2tf (recommended)
k2tf -f production-manifests.yaml -o production.tf
# Or convert with tfk8s (quick but generic)
tfk8s -f production-manifests.yaml -o production.tf
Migrating Helm Charts
Render charts first, then convert:
# Render Helm chart
helm template my-app ./my-chart --values values.yaml > helm-manifests.yaml
# Convert with your preferred tool
k2tf -f helm-manifests.yaml -o helm-converted.tf
Migrating Existing Cluster Resources
Extract and clean existing resources:
# Extract existing resources
kubectl get deployment nginx-deployment -o yaml > current-deployment.yaml

# Strip server-generated fields (requires yq; kubectl convert is no longer part of core kubectl)
yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.managedFields, .status)' \
  current-deployment.yaml > clean-deployment.yaml

# Convert to native Terraform resources
k2tf -f clean-deployment.yaml -o converted.tf
Post-Migration Optimisation
After conversion, optimise the Terraform configuration:
1. Add Variables and Locals
locals {
  app_name      = "nginx"
  app_namespace = "production"

  app_labels = {
    app         = local.app_name
    environment = "production"
    managed-by  = "terraform"
  }
}

variable "replica_count" {
  description = "Number of replicas for the deployment"
  type        = number
  default     = 3
}
2. Use Resource Dependencies
resource "kubernetes_namespace" "app" {
metadata {
name = local.app_namespace
}
}
resource "kubernetes_deployment" "nginx_deployment" {
metadata {
name = local.app_name
namespace = kubernetes_namespace.app.metadata[0].name
labels = local.app_labels
}
# ... rest of configuration
}
3. Import Existing Resources
# Import existing Kubernetes resources into Terraform state
terraform import kubernetes_deployment.nginx_deployment default/nginx-deployment
terraform import kubernetes_service.nginx_service default/nginx-service
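
On Terraform 1.5 or newer, the same imports can be written declaratively as import blocks so they show up in terraform plan; the IDs follow the provider's namespace/name convention used above:

import {
  to = kubernetes_deployment.nginx_deployment
  id = "default/nginx-deployment"
}

import {
  to = kubernetes_service.nginx_service
  id = "default/nginx-service"
}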
Migration Best Practices
- Start with non-critical environments to test the conversion process
- Use terraform plan to verify the conversion before applying
- Implement gradual migration by moving one application at a time
- Maintain backup YAML files during the transition period
- Test thoroughly in staging environments before production migration
- Consider resource dependencies when structuring Terraform modules
- Use data sources for existing cluster resources that won’t be managed by Terraform (see the sketch after this list)
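
For instance, a namespace owned by another team can be read with a data source and referenced without ever being managed by Terraform; the names below are purely illustrative:

# Read-only reference to a namespace managed outside Terraform
data "kubernetes_namespace" "shared" {
  metadata {
    name = "shared-services"
  }
}

resource "kubernetes_config_map" "example" {
  metadata {
    name      = "example-config"
    namespace = data.kubernetes_namespace.shared.metadata[0].name
  }

  data = {
    key = "value"
  }
}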
Implementation Best Practices
- Use cloud provider authentication for managed Kubernetes services
- Pin the provider version to ensure consistent behavior
- Set resource limits and requests for all containers
- Use namespaces to organise and isolate resources
- Implement health checks with liveness and readiness probes
- Manage dependencies using Terraform’s resource references
- Use data sources to reference existing cluster resources
- Consider security contexts and pod security standards (see the sketch after this list)
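
To illustrate the last point, a restrictive container-level security context in the provider's schema looks roughly like this; the values are a common hardening baseline rather than a universal policy, and run_as_user = 101 (the unprivileged nginx user) is an assumption to adjust for your image:

# Goes inside the container block of a kubernetes_deployment
security_context {
  run_as_non_root            = true
  run_as_user                = 101
  allow_privilege_escalation = false
  read_only_root_filesystem  = true

  capabilities {
    drop = ["ALL"]
  }
}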
Important Considerations
- Limited support for some Kubernetes features compared to kubectl
- State management can be complex with frequent manual cluster changes
- Better suited for infrastructure-like resources than application deployments
Conclusion
The Kubernetes Terraform provider excels at managing infrastructure-like Kubernetes resources alongside your cloud infrastructure. Use it for namespaces, services, ingress controllers, and cluster-wide resources where Terraform’s lifecycle management and dependency tracking provide clear benefits.
For complex application deployments, consider hybrid approaches where Terraform manages the infrastructure and dedicated Kubernetes tools handle application lifecycles.