Cluster Sizing Guidelines
This is a guide to general Kubernetes cluster sizing for your Cortex Fabric installation.
Additional sizing instructions for specific platforms are included on the prerequisites pages.
Baseline System Configuration
| Consideration | Value |
| --- | --- |
| Kubernetes version | v1.22 or higher |
| Number of nodes | 3 |
| Number of CPUs/vCPUs | 4 |
| RAM | 16GB |
| Container runtime | docker://19.3.6 |
| Disk space per host | 128GB |
| Kubernetes | Installed and running |
| MongoDB storage | 50GB per replica PVC |
| Redis storage | 8GB per replica PVC |
| Blob storage | 100GB |
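The storage rows in the table above can be turned into a rough capacity estimate. The sketch below assumes a 3-member MongoDB replica set and 3 Redis replicas (the baseline table gives per-replica PVC sizes, not replica counts); adjust the counts to match your deployment.

```shell
#!/bin/sh
# Rough persistent-storage estimate from the baseline table.
# ASSUMPTION: 3 MongoDB replicas and 3 Redis replicas; edit as needed.
MONGO_REPLICAS=3
REDIS_REPLICAS=3
MONGO_PVC_GB=50   # per-replica PVC size from the table
REDIS_PVC_GB=8    # per-replica PVC size from the table
BLOB_GB=100

TOTAL=$(( MONGO_REPLICAS * MONGO_PVC_GB + REDIS_REPLICAS * REDIS_PVC_GB + BLOB_GB ))
echo "Estimated persistent storage: ${TOTAL}GB"
```

With the assumed replica counts this prints an estimate of 274GB, before any application data running on top of Fabric is accounted for.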
Baseline Considerations
The baseline footprint for a Fabric installation reflects the following:
- Fabric deploys 8 services plus an operator, each scaled to 1 replica.
- Each of the 9 services processes Istio sidecar requests.
- Dex, OpenLDAP, and Docker-registry also process sidecar requests.
- Each of the 9 services processes Vault sidecar requests.
- The Fabric API can be configured with the following five JVM options:
  - -Xms1g
  - -Xmx8g
  - -XX:+UseG1GC
  - -XX:+UseStringDeduplication
  - -XX:+OptimizeStringConcat
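How Fabric injects these flags into the API container is not specified here; as one hedged illustration, a standard mechanism that most containerized Java services honor is the `JAVA_TOOL_OPTIONS` environment variable:

```shell
# ASSUMPTION: JAVA_TOOL_OPTIONS is only one possible delivery mechanism;
# consult your deployment's configuration for the actual one.
export JAVA_TOOL_OPTIONS="-Xms1g -Xmx8g -XX:+UseG1GC -XX:+UseStringDeduplication -XX:+OptimizeStringConcat"
echo "$JAVA_TOOL_OPTIONS"
```

Note that `-Xmx8g` caps the API's heap at 8GB, which is half of a baseline 16GB node; size nodes accordingly if the API is co-scheduled with other services.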
Requirements beyond this baseline determine the need for additional resources.
IMPORTANT
Additional sizing considerations must account for applications that are running on top of Fabric.
Platform sizing analysis recommendations
- AWS EKS cluster sizing recommendations
- Azure AKS cluster sizing recommendations
- GCP GKE cluster sizing recommendations
Disaster Recovery (DR) Considerations
| Configuration | Value |
| --- | --- |
| Availability Zones | Primary and secondary zones configured |
| Shared Service: MongoDB | Approach 1: VPC peering (1 cluster with 6 nodes: 3 in geo1 and 3 in geo2). Approach 2: synced to a separate cluster |
| Shared Service: Redis | TBD based on MongoDB's configuration |
| Corruption protection: MongoDB and Redis | Enable backup to S3; restore upon corruption |
More information on Disaster Recovery
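The backup-to-S3 row above can be sketched with the standard MongoDB tools and the AWS CLI. This is illustrative only: the connection URI, bucket name, and key are placeholders, and your backup tooling may differ.

```shell
# Back up: mongodump streams a gzipped archive to stdout,
# and the AWS CLI uploads it from stdin.
# PLACEHOLDERS: connection URI and bucket/key are hypothetical.
mongodump --uri="mongodb://cortex-mongo:27017" --archive --gzip \
  | aws s3 cp - "s3://example-backup-bucket/mongo/backup-$(date +%F).archive.gz"

# Restore upon corruption: download the archive and replay it.
# --drop replaces existing collections with the backed-up copies.
aws s3 cp "s3://example-backup-bucket/mongo/backup-2024-01-01.archive.gz" - \
  | mongorestore --uri="mongodb://cortex-mongo:27017" --archive --gzip --drop
```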
High Availability (HA) Considerations
We recommend scaling all Cortex services to 3 replicas, or enabling autoscaling, for HA.
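Either approach can be applied with standard kubectl commands. The deployment name `cortex-api` and namespace `cortex` below are hypothetical; substitute the actual names from your installation.

```shell
# Fixed scaling: pin a service to 3 replicas.
# PLACEHOLDERS: namespace and deployment name are hypothetical.
kubectl -n cortex scale deployment cortex-api --replicas=3

# Or enable autoscaling: keep at least 3 replicas and scale up
# to 6 when average CPU utilization exceeds 70%.
kubectl -n cortex autoscale deployment cortex-api --min=3 --max=6 --cpu-percent=70
```

Repeat for each Cortex service, and confirm the cluster has spare node capacity for the extra replicas (see the baseline table above).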