GCP & GKE¶
Before You Read¶
This page explains the GCP project structure and Kubernetes cluster configuration. For networking details see Networking. For security/IAM see Security Model.
GCP Project Structure¶
The platform uses three environment projects plus one shared project for isolation. Resources in one project cannot access resources in another without explicit IAM grants.
orofi-dev-cloud ← Development workloads, all Artifact Registries
orofi-stage-cloud ← Staging workloads
orofi-prod ← Production workloads [NEEDS TEAM INPUT: confirm project ID]
orofi-cloud ← Shared DNS zones (all *.orofi.xyz domains live here)
Artifact Registry Lives in Dev¶
All container images — including those deployed to staging and production — are built and pushed to us-central1-docker.pkg.dev/orofi-dev-cloud/orofi/. Staging and production pull images from this registry in the dev project. The Bitbucket service account (bitbucket@orofi-dev-cloud) has write access; GKE service accounts in staging/prod have read access.
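The cross-project read access described above amounts to an IAM binding on the dev-project repository. A hedged sketch of what granting it might look like (the repository name `orofi` comes from the path above; the staging service-account name is a placeholder — check the Terraform modules for the real binding):

```shell
# Allow a staging-project service account to pull images from the
# dev-project Artifact Registry repository "orofi".
# SERVICE_ACCOUNT is a hypothetical placeholder name.
gcloud artifacts repositories add-iam-policy-binding orofi \
  --project orofi-dev-cloud \
  --location us-central1 \
  --role roles/artifactregistry.reader \
  --member "serviceAccount:SERVICE_ACCOUNT@orofi-stage-cloud.iam.gserviceaccount.com"
```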
GKE Clusters¶
Cluster Provisioning¶
Clusters are provisioned via infrastructure-management/modules/k8s/. Key inputs:
module "k8s" {
source = "../../modules/k8s"
project_id = var.project_id
name = "${var.project_id}-${var.env}-k8s-cluster"
region = var.region # us-central1
zone = var.zone # us-central1-a
node_count = var.node_count # 1 (scales up via autoscaler)
min_nodes = 0 # can scale to zero
max_nodes = 15
network = module.network.vpc_name
subnetwork = module.network.subnet_name
zero_trust = var.zero_trust # true for dev, false for staging
trusted_ips = var.trusted_ips # Bitbucket + private ranges
}
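To confirm what the module actually provisioned (autoscaling bounds, restricted control-plane access), the cluster can be inspected after apply. A sketch using the dev cluster name from the table below (the `--format` projection selects standard GKE fields):

```shell
# Show node-pool autoscaling settings and the authorized-networks config
# for the dev cluster.
gcloud container clusters describe orofi-dev-cloud-dev-k8s-cluster \
  --zone us-central1-a \
  --project orofi-dev-cloud \
  --format "yaml(nodePools[].autoscaling, masterAuthorizedNetworksConfig)"
```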
Cluster Details¶
| Property | Dev | Staging |
|---|---|---|
| Cluster name | orofi-dev-cloud-dev-k8s-cluster | orofi-stage-cloud-stage-k8s-cluster |
| Location type | Zonal | Zonal |
| Zone | us-central1-a | us-central1-a |
| Node pool min | 0 | 0 |
| Node pool max | 15 | 15 |
| Node pool initial | 1 | 1 |
| GKE control plane access | Restricted | Restricted |
| Workload Identity | Enabled | Enabled |
Node Pool¶
[NEEDS TEAM INPUT: machine type for node pool (e.g., e2-standard-4), disk size, OS image. This is configured in modules/k8s/main.tf but the specific machine type was not visible in the module inputs.]
Workload Identity¶
Workload Identity is the mechanism that allows Kubernetes pods to impersonate GCP service accounts without static credentials. Each application namespace has:
- A Kubernetes `ServiceAccount` (created by the Helm chart)
- A GCP service account (created by Terraform via `modules/service-accounts`)
- An IAM binding: `{gcp-sa}` allows `{k8s-namespace}/{k8s-sa}` to use it
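The link is two-sided: the Kubernetes ServiceAccount is annotated with the GCP service account it maps to, and the GCP service account carries a `roles/iam.workloadIdentityUser` binding for that KSA. A sketch of the Kubernetes side, with placeholder namespace and names (the Helm chart generates the real manifest):

```yaml
# Hypothetical namespace/SA names; the annotation key is the standard
# Workload Identity link to a GCP service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service          # placeholder KSA name
  namespace: my-namespace   # placeholder namespace
  annotations:
    iam.gke.io/gcp-service-account: my-service@orofi-dev-cloud.iam.gserviceaccount.com
```

On the GCP side, the corresponding binding grants member `serviceAccount:orofi-dev-cloud.svc.id.goog[my-namespace/my-service]` the role `roles/iam.workloadIdentityUser` on that service account (presumably what `modules/service-accounts` creates).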
sequenceDiagram
participant Pod
participant K8s as Kubernetes
participant Meta as GCP Metadata Server
participant IAM as GCP IAM
participant SecMgr as Secret Manager
Pod->>K8s: Request token (projected volume)
K8s-->>Pod: Service account JWT
Pod->>Meta: Exchange JWT for GCP credentials
Meta->>IAM: Validate Workload Identity binding
IAM-->>Meta: Approved
Meta-->>Pod: Short-lived GCP access token
Pod->>SecMgr: Access secret (using GCP token)
SecMgr-->>Pod: Secret value
Artifact Registry¶
Docker Images¶
All microservice images are stored in the dev-project registry: us-central1-docker.pkg.dev/orofi-dev-cloud/orofi/
The init container image (used by services during startup) is built from shared-workflows/Dockerfiles/Dockerfile-init and pushed to the same registry.
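Assuming the conventional `{registry}/{image}:{tag}` layout (an assumption — the exact tagging scheme is not documented on this page), a CI step might assemble a full image reference like this; the service name and tag below are made-up placeholders:

```python
# Registry path taken from this page; helper and example values are hypothetical.
REGISTRY = "us-central1-docker.pkg.dev/orofi-dev-cloud/orofi"

def image_ref(service: str, tag: str) -> str:
    """Build a full Artifact Registry image reference (hypothetical helper)."""
    return f"{REGISTRY}/{service}:{tag}"

print(image_ref("orders-api", "1.4.2"))
# -> us-central1-docker.pkg.dev/orofi-dev-cloud/orofi/orders-api:1.4.2
```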
Maven Packages¶
Java/Kotlin shared libraries are published to a Maven repository in the same Artifact Registry. Microservices that share common Java libraries pull these packages at build time.
Bitbucket Push Permissions¶
The bitbucket service account has `roles/artifactregistry.writer` to push images during CI/CD builds. It also has `roles/iam.serviceAccountTokenCreator` to impersonate other service accounts when needed during pipeline execution.
Connecting to Clusters¶
# Development cluster
gcloud container clusters get-credentials orofi-dev-cloud-dev-k8s-cluster \
--zone us-central1-a \
--project orofi-dev-cloud
# Staging cluster
gcloud container clusters get-credentials orofi-stage-cloud-stage-k8s-cluster \
--zone us-central1-a \
--project orofi-stage-cloud
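After fetching credentials, the active kubectl context should point at the chosen cluster; GKE contexts follow the standard `gke_{project}_{zone}_{cluster}` naming:

```shell
# Verify which cluster kubectl is currently talking to.
kubectl config current-context
# e.g. gke_orofi-dev-cloud_us-central1-a_orofi-dev-cloud-dev-k8s-cluster
```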
For full setup instructions see Onboarding Guide.
Cluster Addons Installed¶
The following components are deployed to each cluster via Helm, managed either by ArgoCD or by a direct Terraform apply:
| Component | Method | Namespace |
|---|---|---|
| Istio (base, istiod, gateways) | Helm via Terraform (modules/helm) | istio-system |
| cert-manager | Helm via Terraform | cert-manager |
| ArgoCD | Helm via Terraform | argocd |
| External Secrets Operator | [NEEDS TEAM INPUT: Helm or Terraform] | external-secrets |
| KEDA | [NEEDS TEAM INPUT] | [NEEDS TEAM INPUT] |
| Prometheus | Helm via ArgoCD | prometheus |
| Grafana | Helm via ArgoCD | grafana |
| Loki | Helm via ArgoCD | loki |
| kube-state-metrics | [NEEDS TEAM INPUT] | kube-state-metrics |
| node-exporter | [NEEDS TEAM INPUT] | node-exporter |
| KubeCost | Helm via ArgoCD | kubecost |
| K6 Operator | Helm via ArgoCD | k6-operator |
| MongoDB Operator (PSMDB) | Helm via ArgoCD | mongo-db |
| Kafka (Bitnami) | Helm via ArgoCD | kafka |