Complete Onboarding¶
Welcome to Orofi engineering. This guide takes you from zero to:
- [ ] GCP access configured
- [ ] Kubernetes clusters connected
- [ ] Repositories cloned
- [ ] Local toolchain installed
- [ ] First deployment understood
Expected time: 2–4 hours (mostly waiting for access approvals).
Step 1: Request Access¶
Before you can do anything, you need access to the right systems.
Follow the Access & Permissions Guide to request:
- GCP project access (dev and staging)
- Bitbucket repository access
- ArgoCD access
- Grafana access
Come back here once your access is confirmed.
Step 2: Install Required Tools¶
Install these tools on your local machine:
# Google Cloud SDK
brew install --cask google-cloud-sdk
# kubectl
brew install kubectl
# Terraform
brew install terraform
# Terragrunt (if making infra changes)
brew install terragrunt
# Helm
brew install helm
# ArgoCD CLI (optional but useful)
brew install argocd
# K9s (optional — terminal Kubernetes UI, highly recommended)
brew install derailed/k9s/k9s
Verify installations:
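A quick sanity check that each tool landed on your PATH (version outputs will vary; the last two only apply if you installed the optional tools):

```shell
# Each command should print a version string; "command not found"
# means the corresponding install step needs another look.
gcloud version
kubectl version --client
terraform version
terragrunt --version
helm version
argocd version --client   # optional tool
k9s version               # optional tool
```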
Step 3: Configure GCP Authentication¶
# Log in to GCP
gcloud auth login
# Set up Application Default Credentials (used by Terraform and SDK clients)
gcloud auth application-default login
# Set your default project (use dev for day-to-day work)
gcloud config set project orofi-dev-cloud
# Verify
gcloud config list
Step 4: Connect to Kubernetes Clusters¶
# Development cluster
gcloud container clusters get-credentials orofi-dev-cloud-dev-k8s-cluster \
--zone us-central1-a \
--project orofi-dev-cloud
# Staging cluster (you'll need this for incident response)
gcloud container clusters get-credentials orofi-stage-cloud-stage-k8s-cluster \
--zone us-central1-a \
--project orofi-stage-cloud
# Verify — you should see both clusters
kubectl config get-contexts
Switch between clusters:
# Switch to dev
kubectl config use-context gke_orofi-dev-cloud_us-central1-a_orofi-dev-cloud-dev-k8s-cluster
# Switch to staging
kubectl config use-context gke_orofi-stage-cloud_us-central1-a_orofi-stage-cloud-stage-k8s-cluster
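The auto-generated GKE context names are long. If you prefer shorter ones, kubectl can rename them (the `dev`/`staging` aliases below are just a suggestion, not a team convention):

```shell
# Rename the auto-generated contexts to something typeable
kubectl config rename-context \
  gke_orofi-dev-cloud_us-central1-a_orofi-dev-cloud-dev-k8s-cluster dev
kubectl config rename-context \
  gke_orofi-stage-cloud_us-central1-a_orofi-stage-cloud-stage-k8s-cluster staging

# Switching then becomes:
kubectl config use-context dev
kubectl config use-context staging
```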
Step 5: Clone Repositories¶
# Infrastructure repo (this repo — manifests, Terraform, pipelines)
git clone git@bitbucket.org:oro-codebase/infra.git
# [NEEDS TEAM INPUT: list the microservice repositories the engineer needs]
# Example:
# git clone git@bitbucket.org:oro-codebase/microservice-identity.git
# git clone git@bitbucket.org:oro-codebase/microservice-monolith.git
Step 6: Explore the Running System¶
Get familiar with what's running before making changes.
# List all namespaces
kubectl get namespaces
# See all running pods across all namespaces
kubectl get pods -A
# Check ArgoCD application status
kubectl get applications -n argocd
# View Istio services
kubectl get services -n istio-system
Open the web UIs:
- ArgoCD (dev): https://argocd.dev.orofi.xyz — see all deployed applications and their sync status
- Grafana (dev): https://grafana.dev.orofi.xyz — metrics and logs
- Kafka UI (dev): https://kafka-ui.dev.orofi.xyz — browse Kafka topics and consumers
- Mongo Express (dev): https://mongoexpress.dev.orofi.xyz — browse MongoDB collections
Authentication: all web UIs require Google OAuth2 login with your @orofi.xyz Google account.
Step 7: Understand the Deployment Workflow¶
Read these two pages to understand how code goes from your laptop to production:
- GitOps Workflow — how deployments work
- ArgoCD & GitOps — why we use this pattern
Key points:
- Code changes go through the microservice repo → Bitbucket CI builds the image → ArgoCD deploys
- Infrastructure changes go through the infra repo → manual Terraform apply
- You never run kubectl apply or helm upgrade manually in production
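You can watch the last leg of that pipeline from the terminal with the optional ArgoCD CLI (the `--sso` flag assumes the Google OAuth2 login described above; the application name shown is an example):

```shell
# Log in to the dev ArgoCD instance via SSO (opens a browser window)
argocd login argocd.dev.orofi.xyz --sso

# List all applications with their sync and health status
argocd app list

# Inspect one application in detail (name is an example)
argocd app get microservice-identity
```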
Step 8: Make a Test Change¶
[NEEDS TEAM INPUT: describe a safe, low-risk change a new engineer can make to verify their setup end-to-end. For example: "Update the replica count of the analytics service in dev and verify the deployment."]
Useful Commands Reference¶
# Get logs from a service
kubectl logs -n microservice-identity -l app=microservice-identity --tail=100
# Exec into a running pod
kubectl exec -it -n microservice-identity \
$(kubectl get pod -n microservice-identity -l app=microservice-identity -o name | head -1) \
-- /bin/sh
# Port-forward a service to localhost
kubectl port-forward -n microservice-identity svc/microservice-identity 8080:80
# Get events (useful for debugging deployment issues)
kubectl get events -n microservice-identity --sort-by='.lastTimestamp'
# Scale a deployment (dev only — do NOT do this in staging/prod manually)
kubectl scale deployment microservice-identity -n microservice-identity --replicas=2
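A few more commands in the same vein that often help when a deployment misbehaves (same namespace and label conventions assumed as above):

```shell
# Describe pods — shows container states, restart reasons, and recent events
kubectl describe pod -n microservice-identity -l app=microservice-identity

# Watch a rollout until it completes (or times out)
kubectl rollout status deployment/microservice-identity -n microservice-identity

# Restart all pods in a deployment without changing its spec (dev only)
kubectl rollout restart deployment/microservice-identity -n microservice-identity
```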
What to Read Next¶
After completing this guide:
- Architecture Overview — understand the full system
- Secrets Management — how to work with secrets
- Debugging Guide — top issues and solutions
- Runbooks — incident response procedures
Getting Help¶
[NEEDS TEAM INPUT: where to ask questions — Slack channel, who the platform team is, how to escalate issues.]