Scalability

CertChain is designed for horizontal scalability at multiple layers — from scaling individual components to adding entirely new organizations.

This section includes hands-on exercises. You’ll use the Terminal tab and the Git Repository (Gitea) to make changes and observe ArgoCD syncing them to the cluster.

Prerequisites

First, log in to OpenShift from the Terminal tab:

oc login https://:6443 -u admin

When prompted, enter the password from your lab provisioning email (RHDP) or the one you set during installation.

Verify the CertChain namespaces are healthy:

for ns in certchain certchain-techpulse certchain-dataforge certchain-neuralpath; do
  echo "=== $ns ==="
  oc get pods -n $ns --no-headers | grep -v Completed | awk '{print $1, $3}'
done
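The loop above prints every pod's name and status. If you want the check to flag only problem pods, the same awk column logic can filter instead of print. A minimal sketch, run here against sample text so it is self-contained; pipe real `oc get pods --no-headers` output through the same filter:

```shell
# Flag any pod whose STATUS column is neither Running nor Completed.
# The sample input below imitates `oc get pods --no-headers` output.
sample='verify-api-7c9f 1/1 Running 0 5m
peer0-techpulse-0 0/1 CrashLoopBackOff 3 5m
cert-job-xyz 0/1 Completed 0 9m'

echo "$sample" | awk '$3 != "Running" && $3 != "Completed" {print "NOT READY:", $1, $3}'
```

An empty result means every pod in the namespace is healthy.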

Next, clone the project from Gitea so you can make GitOps-driven changes:

cd ~
git clone https:///certchain/hyperledger-on-openshift-demo.git certchain-repo
cd certchain-repo
git config user.name "Lab User"
git config user.email "lab-user@local.demo"

Horizontal Scaling

Fabric peers and orderers are standard Kubernetes Deployments, so scaling them is as simple as changing the replica count. Let’s try it.

Important: ArgoCD Auto-Sync

CertChain’s ArgoCD Applications have automated sync enabled. If you scale a deployment imperatively with oc scale, ArgoCD will detect the drift and revert it within minutes. To demonstrate manual scaling, you must first pause auto-sync on the target application.

Exercise: Scale verify-api (Stateless Service)

First, pause ArgoCD auto-sync so your manual changes stick:

oc patch application certchain-central -n openshift-gitops --type=merge \
  -p '{"spec":{"syncPolicy":{"automated":null}}}'

Now scale verify-api to 3 replicas:

oc scale deployment verify-api -n certchain --replicas=3

Watch the pods come up:

oc get pods -n certchain -l app.kubernetes.io/name=verify-api -w

Press Ctrl+C once all 3 pods show Running. Open the ArgoCD tab — the certchain-central app now shows OutOfSync because the live replica count differs from Git.
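You can read the same sync state from the CLI instead of the UI: ArgoCD records it in the Application resource under `.status.sync.status`. To keep this snippet runnable anywhere, it parses a sample Application JSON; the live `oc` equivalent is shown in the comment:

```shell
# Sample of the relevant fragment of an ArgoCD Application resource;
# on the cluster, `oc get application ... -o json` returns the full object.
app_json='{"status":{"sync":{"status":"OutOfSync"}}}'

echo "$app_json" | python3 -c "import sys, json; print(json.load(sys.stdin)['status']['sync']['status'])"

# Live equivalent:
#   oc get application certchain-central -n openshift-gitops -o jsonpath='{.status.sync.status}'
```

This prints OutOfSync while your manual scale diverges from Git, and Synced again once you revert or ArgoCD reconciles.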

Test the scaled service — the OpenShift Service load-balances across all 3 replicas:

# Verify a known certificate — the request is routed to any of the 3 replicas
for i in 1 2 3; do
  curl -sk "https:///api/v1/verify/TP-FSWD-001" \
    | python3 -c "import sys,json; d=json.load(sys.stdin); print('Request $i: status=' + d['status'])"
done

All 3 requests return VALID — the stateless verify-api scales horizontally with zero configuration.
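If you want a scripted pass/fail check rather than printed output, the same response can drive an exit code. A minimal sketch, assuming the response carries a top-level "status" field as in the loop above (the "certificateId" field in the sample is illustrative):

```shell
# Sample verify-api response body; substitute the real curl output on the cluster.
response='{"certificateId":"TP-FSWD-001","status":"VALID"}'

echo "$response" | python3 -c "
import sys, json
d = json.load(sys.stdin)
# Exit non-zero if the certificate is not VALID, handy in CI-style checks.
sys.exit(0 if d.get('status') == 'VALID' else 1)
" && echo "certificate VALID"
```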

Scale back and re-enable auto-sync:

oc scale deployment verify-api -n certchain --replicas=1
oc patch application certchain-central -n openshift-gitops --type=merge \
  -p '{"spec":{"syncPolicy":{"automated":{"prune":true,"selfHeal":true}}}}'

Component Scaling Notes

verify-api

Stateless — each replica connects to the same peer. OpenShift Service load-balances.

cert-admin-api

Each replica submits transactions independently to its org’s peer.

cert-portal / course-manager-ui

Static frontends — scale freely, no backend state.

Chaincode (CcaaS)

Deployed as a Kubernetes Deployment — scale replicas for concurrent endorsement execution.

Scaling Peers

Each organization runs a single peer. In production, you’d run 2+ peers per org for endorsement availability. Each peer replica:

  • Maintains a full copy of the ledger via gossip sync

  • Can independently endorse transactions and serve queries

  • Is load-balanced by the Kubernetes Service

Peer scaling requires ReadWriteMany (RWX) persistent volumes because multiple peer pods share the same ledger PVC. This demo uses ReadWriteOnce (RWO) storage, so peer replicas > 1 will fail with a multi-attach error. In production, use RWX-capable storage (e.g., OpenShift Data Foundation, NFS, or EFS).
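Before raising peer replicas, it is worth confirming the ledger PVC's access mode. A minimal check, parsing a sample PVC JSON so it runs anywhere; the PVC name in the live command comment is illustrative, adjust it to your chart:

```shell
# Sample of the relevant PVC fragment; on the cluster, `oc get pvc ... -o json`
# returns the full object.
pvc_json='{"spec":{"accessModes":["ReadWriteOnce"]}}'

modes=$(echo "$pvc_json" | python3 -c "import sys, json; print(','.join(json.load(sys.stdin)['spec']['accessModes']))")
if [ "$modes" = "ReadWriteMany" ]; then
  echo "RWX: safe to run multiple peer replicas"
else
  echo "WARNING: $modes storage; peer replicas > 1 will hit a multi-attach error"
fi

# Live equivalent (PVC name is hypothetical):
#   oc get pvc peer0-ledger -n certchain-techpulse -o jsonpath='{.spec.accessModes}'
```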

Scaling Orderers

The orderer cluster uses SmartBFT consensus with 4 nodes (tolerating 1 Byzantine fault). Adding orderers requires channel configuration updates — it’s not a simple replica scale:

Orderers        Faults Tolerated   Notes
4 (this demo)   1                  Default — one orderer per org + central
7               2                  Add 3 orderers, update configtx.yaml consenter set
10              3                  Add 6 orderers, requires channel config update transaction
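The fault-tolerance column follows from the standard BFT quorum bound: a cluster of n nodes tolerates f Byzantine faults only when n >= 3f + 1, so f = (n - 1) / 3 with integer division. The table's rows can be reproduced directly:

```shell
# BFT bound: n >= 3f + 1, hence f = floor((n - 1) / 3).
for n in 4 7 10; do
  echo "orderers=$n faults_tolerated=$(( (n - 1) / 3 ))"
done
```

This is also why 5 or 6 orderers buy you nothing over 4: tolerance only steps up at 7.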

GitOps-Driven Scaling (ArgoCD)

The imperative approach above is great for learning. In production, you’d make changes in Git and let ArgoCD sync them — this is the GitOps way.

Exercise: Scale cert-admin-api via Git Push

Change into the repo you cloned from Gitea in the Prerequisites (clone it now if you skipped that step) and make a Git-driven change:

cd ~/certchain-repo

Edit the cert-admin-api Deployment to scale from 1 to 2 replicas:

sed -i 's/replicas: 1/replicas: 2/' helm/components/certchain-org/templates/cert-admin-api/deployment.yaml

Verify the change:

grep -n replicas helm/components/certchain-org/templates/cert-admin-api/deployment.yaml

Commit and push to Gitea:

git add helm/components/certchain-org/templates/cert-admin-api/deployment.yaml
git commit -m "feat: scale cert-admin-api to 2 replicas"
git push origin main

Gitea credentials: username gitea, password openshift.

Now open the ArgoCD tab (ArgoCD Dashboard) and observe:

  1. Click the certchain-techpulse Application tile

  2. Within ~3 minutes, ArgoCD detects the Git change and starts syncing

  3. The cert-admin-api Deployment shows OutOfSync → Syncing → Synced

  4. A second cert-admin-api pod appears

Watch from the terminal:

oc get pods -n certchain-techpulse -l app.kubernetes.io/name=cert-admin-api -w

Since the org chart is shared across all 3 organizations, this change scales cert-admin-api in DataForge and NeuralPath too. In a real deployment, you’d use per-org values overrides.
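A per-org override would pin the replica count in a values file instead of the shared template. A sketch only: the file path and the certAdminApi.replicaCount key below are hypothetical, not the chart's real schema:

```yaml
# hypothetical helm/values-techpulse.yaml, referenced by that org's
# ArgoCD Application; key names are illustrative
certAdminApi:
  replicaCount: 2
```

With this pattern, TechPulse scales independently while DataForge and NeuralPath keep their own defaults.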

Revert the change:

cd ~/certchain-repo
git revert HEAD --no-edit
git push origin main

Adding New Organizations

The App-of-Apps pattern makes adding organizations straightforward — see the Adding Organizations reference for a hands-on exercise with ready-to-run scripts.