Self-Hosting configure8
This guide delineates the steps to deploy the Configure8 (C8) application on a Kubernetes cluster using a Helm chart.

Prerequisites:
1. A running Kubernetes cluster, version 1.22 or above, to guarantee compatibility with the C8 app. Ensure the cluster has public internet access to fetch Docker images from repositories, specifically from GitHub.
2. A Kubernetes user with sufficient cluster access privileges to install the C8 app.
3. A token provided by the C8 team for adding image pull secrets to the cluster.
4. A MongoDB database, set up and accessible by the Kubernetes cluster.
5. A RabbitMQ cluster set up to manage message queues within the C8 application.
6. An OpenSearch cluster set up for search functionality and data analytics within the C8 app.
Isolate the C8 application by creating a Kubernetes namespace named "c8":
kubectl create namespace c8
Create a Kubernetes secret to access the C8 Docker registry. Replace <Token provided to you by the C8 team> and <your email> with your specific token and email address, respectively:
kubectl create secret docker-registry c8-docker-registry-secret \
--docker-server=ghcr.io \
--docker-username=c8-user \
--docker-password=<Token provided to you by the C8 team> \
--docker-email=<your email> \
-n c8
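To verify the pull secret was created correctly, you can decode and inspect it:
# Decode the stored registry credentials for a quick sanity check
kubectl get secret c8-docker-registry-secret -n c8 \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d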
Generate a Kubernetes secret for the C8 application, which will contain sensitive data such as API keys and database credentials. Replace 'value' with the actual values:
kubectl create secret generic c8-secret \
--from-literal=API_KEY='value' \
--from-literal=CRYPTO_IV='value' \
--from-literal=CRYPTO_SECRET='value' \
--from-literal=JWT_SECRET='value' \
--from-literal=DB_USERNAME='value' \
--from-literal=DB_PASSWORD='value' \
--from-literal=RABBITMQ_USERNAME='value' \
--from-literal=RABBITMQ_PASSWORD='value' \
--from-literal=SMTP_USERNAME='value' \
--from-literal=SMTP_PASSWORD='value' \
-n c8 --dry-run=client -o yaml | kubectl apply -f -
Name | Type | Default | Description |
---|---|---|---|
API_KEY | string | "" | Unique secret key |
CRYPTO_IV | string | "" | Crypto initialization vector |
CRYPTO_SECRET | string | "" | Crypto password |
DB_PASSWORD | string | "" | Database password |
DB_USERNAME | string | "" | Database username |
GITHUB_APP_CLIENT_ID | string | "" | GitHub application client ID. Should be created per installation in advance (optional) |
GITHUB_APP_CLIENT_SECRET | string | "" | GitHub application client secret. (optional) |
GITHUB_APP_INSTALL_URL | string | "" | GitHub application installation url. (optional) |
GOOGLE_KEY | string | "" | Google application key. Required for sign-in with Google (optional) |
GOOGLE_SECRET | string | "" | Google application secret. Required for sign-in with Google (optional) |
JWT_SECRET | string | "" | Unique secret used to sign users' JWT tokens |
RABBITMQ_PASSWORD | string | "" | RabbitMQ password |
RABBITMQ_USERNAME | string | "" | RabbitMQ user |
SMTP_USERNAME | string | "" | Username for SMTP server. |
SMTP_PASSWORD | string | "" | Password or token for SMTP authentication. |
AWS_ACCESS_KEY_ID | string | "" | A unique identifier associated with an AWS User. (optional, see discovery configuration) |
AWS_SECRET_ACCESS_KEY | string | "" | A secret string associated with the AWS_ACCESS_KEY_ID for an AWS IAM user or role. (optional, see discovery configuration) |
Warning: You need to generate your own API_KEY, CRYPTO_IV, JWT_SECRET, and CRYPTO_SECRET; each can be any cryptographically secure random string. Refer to the Open Web Application Security Project (OWASP) recommendations for secure random number generation: https://cheatsheetseries.owasp.org/cheatsheets/Cryptographic_Storage_Cheat_Sheet.html#secure-random-number-generation
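For example, openssl can generate suitable values (one possible approach; the 16-byte IV length shown is an assumption, confirm the expected sizes with the C8 team):
# Generate cryptographically secure random strings
API_KEY=$(openssl rand -hex 32)
CRYPTO_SECRET=$(openssl rand -hex 32)
JWT_SECRET=$(openssl rand -hex 32)
# Assumed IV size: 16 bytes (32 hex characters)
CRYPTO_IV=$(openssl rand -hex 16)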
Set the following shell variables, replacing the placeholders ($AWS_EKS_CLUSTER_NAME, $AWS_EKS_CLUSTER_REGION, and $APP_NAMESPACE) with your specific values:
account_id=$(aws sts get-caller-identity --query "Account" --output text)
oidc_provider=$(aws eks describe-cluster --name $AWS_EKS_CLUSTER_NAME --region $AWS_EKS_CLUSTER_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
namespace=$APP_NAMESPACE
service_account_c8_app=c8-backend
service_account_c8_djw=c8-djw
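For example, before running the commands above, the placeholders could be set to hypothetical values like:
# Hypothetical example values; substitute your own cluster name, region, and namespace
AWS_EKS_CLUSTER_NAME=my-eks-cluster
AWS_EKS_CLUSTER_REGION=us-east-1
APP_NAMESPACE=c8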
Create a trust relationship for the IAM role:
# Generate a JSON file for the trust relationship
cat >trust-relationship-sa.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${account_id}:oidc-provider/${oidc_provider}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${oidc_provider}:aud": "sts.amazonaws.com",
          "${oidc_provider}:sub": [
            "system:serviceaccount:${namespace}:${service_account_c8_app}",
            "system:serviceaccount:${namespace}:${service_account_c8_djw}"
          ]
        }
      }
    }
  ]
}
EOF
# Create an IAM role with a defined trust relationship and description
aws iam create-role --role-name sh-c8-service-account --assume-role-policy-document file://trust-relationship-sa.json --description "The role for the Configure8 pods service account"
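The role's ARN will be needed later for the Helm service-account annotations; it can be retrieved with:
# Print the ARN of the newly created role
aws iam get-role --role-name sh-c8-service-account --query "Role.Arn" --output text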
Download the IAM policy that grants read permissions to all AWS resources:
curl -o sh-c8-discovery-policy.json https://configure8-resources.s3.us-east-2.amazonaws.com/iam/sh-c8-discovery-policy.json
Create the IAM policy:
aws iam create-policy --policy-name sh-c8-discovery-policy --policy-document file://sh-c8-discovery-policy.json
Create an IAM role for discovery that can be assumed by the sh-c8-service-account role created above:
# Generate a JSON file for the trust relationship
cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${account_id}:role/sh-c8-service-account"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name sh-c8-discovery --assume-role-policy-document file://trust-relationship.json --description "sh-c8-discovery"
aws iam attach-role-policy --role-name sh-c8-discovery --policy-arn=arn:aws:iam::$account_id:policy/sh-c8-discovery-policy
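Optionally, verify that the policy is attached and the trust relationship is in place:
# Confirm the discovery policy is attached to the role
aws iam list-attached-role-policies --role-name sh-c8-discovery
# Review the role's trust relationship
aws iam get-role --role-name sh-c8-discovery --query "Role.AssumeRolePolicyDocument"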
Note: If you want to discover additional AWS accounts, repeat step 2 for each account.
Download the IAM policy that grants read permissions to all AWS resources:
curl -o sh-c8-discovery-policy.json https://configure8-resources.s3.us-east-2.amazonaws.com/iam/sh-c8-discovery-policy.json
Create the IAM policy:
aws iam create-policy --policy-name sh-c8-discovery-policy --policy-document file://sh-c8-discovery-policy.json
Create an IAM role that can be assumed by EC2 roles. Replace the placeholders:
- $account_id - the ID of the AWS account from which discovery is allowed to run
- $ec2_role - the name of the AWS role from which discovery is allowed to run
# Generate a JSON file for the trust relationship
cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${account_id}:role/${ec2_role}"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name sh-c8-discovery --assume-role-policy-document file://trust-relationship.json --description "sh-c8-discovery"
aws iam attach-role-policy --role-name sh-c8-discovery --policy-arn=arn:aws:iam::${account_id}:policy/sh-c8-discovery-policy
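To confirm the trust relationship works, you can attempt to assume the discovery role from an EC2 instance running under $ec2_role (sh-c8-test is an arbitrary session name):
# From an instance using $ec2_role, assume the discovery role
aws sts assume-role \
  --role-arn arn:aws:iam::${account_id}:role/sh-c8-discovery \
  --role-session-name sh-c8-test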
Note: If you want to discover additional AWS accounts, repeat step 2 for each account.
Important: As a best practice, use temporary security credentials (such as IAM roles) instead of creating long-term credentials like access keys.
Download the IAM policy that grants read permissions to all AWS resources:
curl -o sh-c8-discovery-policy.json https://configure8-resources.s3.us-east-2.amazonaws.com/iam/sh-c8-discovery-policy.json
Create the IAM policy:
aws iam create-policy --policy-name sh-c8-discovery-policy --policy-document file://sh-c8-discovery-policy.json
Create an IAM role that can be assumed by an IAM user. Replace the placeholders:
- $account_id - the ID of the AWS account from which discovery is allowed to run
- $iam_user - the name of the AWS IAM user from which discovery is allowed to run
# Generate a JSON file for the trust relationship
cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${account_id}:user/${iam_user}"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name sh-c8-discovery --assume-role-policy-document file://trust-relationship.json --description "sh-c8-discovery"
aws iam attach-role-policy --role-name sh-c8-discovery --policy-arn=arn:aws:iam::${account_id}:policy/sh-c8-discovery-policy
Note: If you want to discover additional AWS accounts, repeat step 2 for each account.
Important: Don't forget to add the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables to the c8-secret secret in step 3.
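One way to add the keys to the existing secret is with kubectl patch; a sketch, with 'value' standing in for the access keys of the IAM user created above:
# Merge the AWS credentials into the existing c8-secret
kubectl patch secret c8-secret -n c8 --type merge \
  -p '{"stringData":{"AWS_ACCESS_KEY_ID":"value","AWS_SECRET_ACCESS_KEY":"value"}}'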
Add the C8 Helm repository and update the local chart index:
helm repo add c8 https://helm.configure8.io/store/
helm repo update
Install the Helm chart with the desired configurations. Replace the placeholders with your specific values:
helm upgrade -i sh-c8 c8/c8 \
-n c8 \
--set variables.AWS_REGION='value' \
--set variables.DB_HOST='value' \
--set variables.DB_DATABASE='value' \
--set variables.DEEPLINK_URL='value' \
--set variables.HOOKS_CALLBACK_URL='value' \
--set variables.OPENSEARCH_NODE='value' \
--set variables.RABBITMQ_HOST='value' \
--set common.ingress.ingressClassName='value' \
--set djm.serviceAccount.job_worker.annotations."eks\.amazonaws\.com/role-arn"='<ARN of the sh-c8-service-account IAM role created above>' \
--set backend.serviceAccount.annotations."eks\.amazonaws\.com/role-arn"='<ARN of the sh-c8-service-account IAM role created above>'
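Once the release is installed, a quick check that the pods are starting can help catch configuration mistakes early:
# Inspect the release and the pods in the c8 namespace
helm status sh-c8 -n c8
kubectl get pods -n c8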
Warning: You don't need to set djm.serviceAccount.job_worker.annotations."eks.amazonaws.com/role-arn" or backend.serviceAccount.annotations."eks.amazonaws.com/role-arn" if you use the EC2 role or AWS user access keys access type.
Once you have successfully installed the Helm chart with its Ingress configuration, the next step is to create a CNAME record in your DNS settings that maps your domain name to the Ingress controller's service endpoint. Making sure the new record propagates correctly, and securing it with TLS/SSL certificates where applicable, will bolster both usability and security for end users navigating to your C8 application.
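The address to point the CNAME record at can usually be read from the Ingress resource itself (output depends on your ingress controller):
# The ADDRESS column shows the endpoint to use for the CNAME record
kubectl get ingress -n c8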
The table below lists the key application variables that can be configured during deployment:
Key | Type | Default | Description |
---|---|---|---|
variables.AWS_REGION | string | "us-east-1" | The AWS region used for the AWS integration |
variables.DB_AUTH_MECHANISM | string | "SCRAM-SHA-1" | Database authentication mechanism. Can be SCRAM-SHA-1 or any other mechanism supported by MongoDB |
variables.DB_DATABASE | string | "c8" | Database name |
variables.DB_HOST | string | "" | Database host |
variables.DB_PORT | string | "27017" | Database port |
variables.DEEPLINK_URL | string | "" | URL at which the application will be available, e.g., https://configure8.my-company.io |
variables.DEFAULT_SENDER | string | | Default email for sending notifications |
variables.HOOKS_CALLBACK_URL | string | "" | URL at which the application will be available. Usually the same as DEEPLINK_URL, e.g., https://configure8.my-company.io |
variables.MONGO_DRIVER_TYPE | string | "mongoDb" | Driver type: mongoDbAtlas for MongoDB Atlas, mongoDb for a regular instance |
variables.OPENSEARCH_NODE | string | "" | OpenSearch URL |
variables.RABBITMQ_HOST | string | "" | RabbitMQ host |
variables.RABBITMQ_PORT | int | 5672 | RabbitMQ port |
variables.SEGMENT_KEY | string | "na" | Application analytics segment key |
variables.USE_SMTP_STRATEGY | string | "true" | Flag to use SMTP for emails |
variables.SMTP_HOST | string | "smtp.sendgrid.net" | Address of the SMTP server (e.g., SendGrid's server). |
variables.SMTP_PORT | string | "587" | Port for connecting to the SMTP server |
variables.SSA_SWAGGER_ENABLED | string | "false" | Enable or disable swagger documentation |
variables.SWAGGER_ENABLED | string | "false" | Enable or disable swagger documentation |
variables.TZ | string | "America/New_York" | Application timezone |
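As an alternative to passing each variable with --set, the same values can be collected in a values file; the following is a minimal sketch with hypothetical hostnames, adjust to your environment:
# Write a values file with the application variables and use it during install
cat >c8-values.yaml <<EOF
variables:
  AWS_REGION: us-east-1
  DB_HOST: mongodb.internal.example.com
  DB_DATABASE: c8
  DEEPLINK_URL: https://configure8.my-company.io
  HOOKS_CALLBACK_URL: https://configure8.my-company.io
  OPENSEARCH_NODE: https://opensearch.internal.example.com:9200
  RABBITMQ_HOST: rabbitmq.internal.example.com
EOF
helm upgrade -i sh-c8 c8/c8 -n c8 -f c8-values.yaml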
The table below shows configurable parameters when deploying the C8 Helm chart:
Key | Type | Default | Description |
---|---|---|---|
backend.affinity | object | {} | Affinity for pod assignment https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity |
backend.autoscaling.enabled | bool | false | |
backend.autoscaling.maxReplicas | int | 10 | |
backend.autoscaling.minReplicas | int | 1 | |
backend.autoscaling.targetCPUUtilizationPercentage | int | 80 | |
backend.autoscaling.targetMemoryUtilizationPercentage | int | 80 | |
backend.enabled | bool | true | |
backend.image.pullPolicy | string | "IfNotPresent" | |
backend.image.repository | string | "ghcr.io/configure8inc/c8-backend" | |
backend.image.tag | string | "1.0.0" | |
backend.livenessProbe.failureThreshold | int | 3 | |
backend.livenessProbe.httpGet.path | string | "/api/v1/ping" | |
backend.livenessProbe.httpGet.port | int | 5000 | |
backend.livenessProbe.periodSeconds | int | 10 | |
backend.livenessProbe.timeoutSeconds | int | 10 | |
backend.nodeSelector | object | {} | Node labels for pod assignment https://kubernetes.io/docs/user-guide/node-selection/ |
backend.podAnnotations | object | {} | |
backend.podDisruptionBudget.enabled | bool | false | Specifies whether pod disruption budget should be created |
backend.podDisruptionBudget.minAvailable | string | "50%" | Number or percentage of pods that must be available |
backend.podSecurityContext | object | {} | |
backend.readinessProbe.failureThreshold | int | 3 | |
backend.readinessProbe.httpGet.path | string | "/api/v1/ping" | |