# Databases

## Before You Read

This page is a reference for database configuration. For connection instructions, see Database Access Patterns; for migrations, see Database Migrations.

## Database-per-Service Pattern

Each core microservice owns exactly one MySQL database schema in the shared Cloud SQL instance. No service accesses another service's database directly. Cross-service data access happens through service APIs or Kafka events.

```
Cloud SQL Instance (orofi-{env}-cloud-{env}-oro-mysql-instance)
├── db_microservice_communication   ← owned by microservice-communication
├── db_microservice_identity        ← owned by microservice-identity
├── db_microservice_monolith        ← owned by microservice-monolith
└── db_microservice_analytics       ← owned by microservice-analytics
```

## Cloud SQL MySQL

### Instance Configuration

Provisioned by `infrastructure-management/modules/datastore/`.

| Property | Dev | Staging |
|---|---|---|
| Instance ID | `orofi-dev-cloud-dev-oro-mysql-instance` | `orofi-stage-cloud-stage-oro-mysql-instance` |
| Engine | MySQL 8.0 | MySQL 8.0 |
| Machine type | `db-f1-micro` | `db-n1-standard-1` |
| Disk | 20 GB HDD | 100 GB SSD |
| Availability | Zonal | Regional HA |
| Backup retention | [NEEDS TEAM INPUT] | 30 backups |
| PITR | Binary logging | Binary logging |
| SSL | `ENCRYPTED_ONLY` | `ENCRYPTED_ONLY` |
| Public IP | Disabled | Disabled |
| PSC | Disabled | Enabled |

### Database Users

| User | Access | Secret Name | Used By |
|---|---|---|---|
| `microservice-communication` | `db_microservice_communication` | `{env}-microservice-communication-db-connection` | communication service |
| `microservice-identity` | `db_microservice_identity` | `{env}-microservice-identity-db-connection` | identity service |
| `microservice-monolith` | `db_microservice_monolith` | `{env}-microservice-monolith-db-connection` | monolith service |
| `microservice-analytics` | `db_microservice_analytics` | `{env}-microservice-analytics-db-connection` | analytics service |
| `flyway_admin` | All databases | Bitbucket secret `FLYWAY_PASSWORD` | Migration runner (`oro-database-migrations` pipeline) |
| `oro-ext-user` | All databases | `{env}-oro-ext-user-db-connection` | External integrations |
| `root` | Full instance | `{env}-cloudsql-root-password` | Emergency DBA access only |

### Connection Endpoints

Services connect using DNS names that resolve to the Cloud SQL private IP within the VPC:

```
# Per-service (preferred, least privilege)
microservice-identity-db.{env}.orofi.xyz

# Shared endpoint
db.{env}.orofi.xyz

# Internal (migration runner, direct access)
db-int.{env}.orofi.xyz  →  10.128.0.11 (staging) / 10.128.0.12 (dev)
```

The DNS names are managed by `infrastructure-management/modules/dns/` and defined in `projects/orofi-{env}/dns.tf`.

### Connection String Format

Stored in GCP Secret Manager as JSON (example):

```json
{
  "host": "microservice-identity-db.stage.orofi.xyz",
  "port": 3306,
  "database": "db_microservice_identity",
  "username": "microservice-identity",
  "password": "<redacted>",
  "ssl": true,
  "sslMode": "VERIFY_CA"
}
```

[NEEDS TEAM INPUT: confirm exact secret format — is it JSON, a JDBC URL, or a connection string format specific to the framework used?]
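Assuming the JSON layout shown above (which is itself unconfirmed), the secret maps onto driver connection settings straightforwardly. A minimal sketch using PyMySQL-style keyword arguments; the key names on the right are PyMySQL's, and the translation of `sslMode` is an assumption:

```python
import json

def parse_db_secret(payload: str) -> dict:
    """Map the JSON connection secret onto PyMySQL-style connect() kwargs.
    Assumes the (unconfirmed) JSON format documented above."""
    cfg = json.loads(payload)
    return {
        "host": cfg["host"],
        "port": cfg["port"],
        "database": cfg["database"],
        "user": cfg["username"],
        "password": cfg["password"],
        # Assumption: VERIFY_CA maps to server-certificate verification
        "ssl_verify_cert": cfg.get("sslMode") == "VERIFY_CA",
    }

# Example payload matching the documented format
secret = json.dumps({
    "host": "microservice-identity-db.stage.orofi.xyz",
    "port": 3306,
    "database": "db_microservice_identity",
    "username": "microservice-identity",
    "password": "<redacted>",
    "ssl": True,
    "sslMode": "VERIFY_CA",
})
kwargs = parse_db_secret(secret)
print(kwargs["host"])  # microservice-identity-db.stage.orofi.xyz
```

If the secret turns out to be a JDBC URL or framework-specific string instead, only the parsing step changes; the resulting fields are the same.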

## MongoDB

### Operator & Replica Set

MongoDB is managed by the Percona Server for MongoDB Operator (PSMDB). The operator runs in the `mongo-db` namespace and manages a `PerconaServerMongoDB` custom resource.

| Property | Dev | Staging |
|---|---|---|
| Replica set size | 1 (single node) | 3 (HA) |
| WiredTiger cache | 0.2 GB | [NEEDS TEAM INPUT] |
| Max unavailable | 0 | 0 |

### KEDA Autoscaling

The MongoDB replica set uses KEDA to scale the number of replicas based on:

| Trigger | Metric | Threshold |
|---|---|---|
| MongoDB connections | `db.serverStatus().connections.current` | 50 connections |
| CPU | Pod CPU utilization | 70% |
| Global lock queue | `db.serverStatus().globalLock.currentQueue.total` | 5 |

KEDA scales between min=1, max=5 replicas.
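For orientation, the replica bounds and the CPU trigger could be expressed in a KEDA `ScaledObject` roughly as below. This is a hypothetical sketch: the resource names are placeholders, and the connection-count and lock-queue triggers would need a metrics-backed scaler (e.g. Prometheus) whose configuration is not documented here.

```
# Hypothetical sketch; resource names are placeholders
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: mongodb-rs
  namespace: mongo-db
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: mongodb-rs
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: cpu              # 70% pod CPU utilization
      metricType: Utilization
      metadata:
        value: "70"
    # connection-count and global-lock triggers would be wired up via a
    # metrics-backed scaler (assumption); config not shown
```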

### External Access

MongoDB is exposed outside the cluster via Istio on internal port 32017, which is mapped through the load balancer. The DNS entry:

```
mongodb-ext.{env}.orofi.xyz  →  Load balancer static IP (port 32017)
```

This is used for developer access (Mongo Express web UI) and external tooling.

### Connection Secret

`{env}-mongodb-connection`  →  full MongoDB connection URI

## Apache Kafka

### Topology

Kafka runs in KRaft mode (no ZooKeeper) using the Bitnami chart (`kafka-new` in `infrastructure-configuration/projects/orofi/tools/kafka-new/`).

| Property | Dev | Staging |
|---|---|---|
| Kafka version | 4.0.0 | 4.0.0 |
| Helm chart | Bitnami 32.4.3 | Bitnami 32.4.3 |
| Controllers | 1 | 3 |
| Brokers | 1 | 3 |
| Controller resources | 250m CPU / 512Mi mem (req), 500m / 1Gi (limit) | [NEEDS TEAM INPUT] |
| Broker resources | 1000m CPU / 1Gi mem (req), 2000m / 4Gi (limit) | [NEEDS TEAM INPUT] |
| Controller storage | 2 Gi | [NEEDS TEAM INPUT] |
| Broker storage | 10 Gi | [NEEDS TEAM INPUT] |
| Replication factor | 1 | 3 |
| Min ISR | N/A | 2 |
| Network threads | default | 8 |
| IO threads | default | 16 |

### Topics

| Topic | Partitions | Replication (Dev/Stage) | Purpose |
|---|---|---|---|
| `check-health-topic` | 1 | 1 / 3 | Service health events |
| `service-log-topic` | 3 | 1 / 3 | Service-level log events |
| `client-log-topic` | 3 | 1 / 3 | Client-level log events |
| `account-event-log-topic` | 3 | 1 / 3 | Account lifecycle events |

[NEEDS TEAM INPUT: are there additional topics created by the microservices themselves at runtime?]

### Listeners

| Listener | Port | Protocol | Used By |
|---|---|---|---|
| CLIENT | 9092 | PLAINTEXT | Internal cluster services |
| EXTERNAL | 9095 | PLAINTEXT | External tools (Kafka UI), mapped to port 39092 via Istio |
| CONTROLLER | 9093 | SASL_PLAINTEXT | KRaft controller-to-controller communication |

### Authentication

External connections use SASL (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512). Credentials are stored in `{env}-kafka-secrets`.
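A sketch of the resulting client settings for both listener paths, in kafka-python style. The external hostname below is a placeholder (the docs give only the Istio port), and in practice the broker list and credentials come from `{env}-kafka-secrets`:

```python
def kafka_client_config(env: str, internal: bool = True) -> dict:
    """Build kafka-python style settings for the internal CLIENT
    listener or the SASL-authenticated external path. The external
    hostname is a hypothetical placeholder."""
    if internal:
        # CLIENT listener: PLAINTEXT inside the cluster
        return {"bootstrap_servers": "kafka.kafka.svc.cluster.local:9092"}
    # External path via Istio TCP passthrough on 39092, per the
    # Authentication note above
    return {
        "bootstrap_servers": f"kafka-ext.{env}.orofi.xyz:39092",  # hypothetical DNS name
        "security_protocol": "SASL_PLAINTEXT",
        "sasl_mechanism": "SCRAM-SHA-256",  # any mechanism listed above
        "sasl_plain_username": "<from {env}-kafka-secrets>",
        "sasl_plain_password": "<from {env}-kafka-secrets>",
    }
```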

### External Access

Kafka is exposed via Istio on TCP port 39092:

```yaml
# istio-system/oro-gateway
- port: 39092
  protocol: TCP
  name: tcp-kafka
```

The Kafka UI at `kafka-ui.{env}.orofi.xyz` connects to `kafka.kafka.svc.cluster.local:9092` internally.

### Connection Secret

`{env}-kafka-secrets`  →  SASL credentials and broker list

## Redis Cache

Redis is a shared cache used by all microservices. It runs as Cloud Memorystore (managed service, not in Kubernetes).

| Property | Dev | Staging |
|---|---|---|
| Tier | STANDARD_HA | STANDARD_HA |
| Memory | 1 GB | 1 GB |
| Replicas | 1 | 1 |
| Auth | Enabled | Enabled |
| Persistence | RDB, 12h | RDB, 6h |
| DNS | `redis.dev.orofi.xyz` | `redis.stage.orofi.xyz` |

### Redis Connection Note

**Istio sidecar bypass**

Outbound traffic on port 6379 is excluded from the Istio sidecar proxy (configured in the Helm chart deployment template). Redis connections go directly from the application pod to Cloud Memorystore, bypassing mTLS. This is intentional: Redis performs its own authentication via the AUTH password.
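This kind of bypass is typically expressed with Istio's outbound-port exclusion annotation on the pod template; a sketch of what the relevant fragment likely looks like (the exact location in the chart's deployment template may differ):

```
# Pod template annotation (sketch)
template:
  metadata:
    annotations:
      traffic.sidecar.istio.io/excludeOutboundPorts: "6379"
```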

### Connection Secret

`{env}-redis-auth-password`  →  Redis AUTH password

The Redis connection URL format is: `redis://:{password}@redis.{env}.orofi.xyz:6379`
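A small helper for assembling that URL from the environment name and the secret's password. The percent-encoding step is an addition beyond the documented format, since an AUTH password containing `@` or `/` would otherwise break URL parsing:

```python
from urllib.parse import quote

def redis_url(env: str, password: str) -> str:
    """Build the documented Redis URL, percent-encoding the AUTH
    password so special characters survive URL parsing."""
    return f"redis://:{quote(password, safe='')}@redis.{env}.orofi.xyz:6379"

print(redis_url("stage", "s3cret"))  # redis://:s3cret@redis.stage.orofi.xyz:6379
```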

## See Also