Unverified commit bbdf4f32 authored by Shaun Elliott, committed by GitHub

[amundsen-kube-helm] HELM Chart v1.0.0 PR (#336)

* * setting up a pvc to allow for neo4j backups

* * allowing pvc to be turned off
* allowing pvc size to be configurable

* ISSUE 196
* added neo4j plugins
* added chart versions
* removed image pull policy never from metadata service

* * PR comment fix; using the correct maven repo urls

* (ISSUE 196) - added neo4j s3 backup cron pod

* * added missing restartPolicy

* (ISSUE 196)
* updated for PR comments
* using export schema, export graphml functions

* * using neo4j image instead of aws cli image
* updated pvc to point at /data for mount path, to avoid path collision of neo4j image
* made commands chained/dependent

* * removing full path from ls command

* * increased the sleep time to 30s

* (ISSUE 196)
* removed defaulting of persistence
* moved neo4j persistence size to dedicated block
* removed manual/host mapping of pv/pvc

* * turning on neo4j-shell and its port
* using neo4j-shell for backups

* HELM Chart v1.0.0;
FIXES for: ISSUE-327, ISSUE-282, ISSUE-328, ISSUE-234

* removed es, neo charts; added es dependency chart
* moved neo templates in with amundsen templates
* minor resource bug fix (328)
* lots more comments
* added minimum helm version to docs (234)

* * added better reasoning on why we're not using neo4j chart

* * changed the proxy endpoint for the search module to respect the namespace

* * respecting the namespace, just a little bit more
parent 2a891b39
# Amundsen k8s Helm Charts
The current chart version is `1.0.0`.
Source code can be found [here](https://github.com/lyft/amundsen).
## What is this?
These are setup templates for deploying [amundsen](https://github.com/lyft/amundsen) on [k8s (Kubernetes)](https://kubernetes.io/), using [Helm](https://helm.sh/).
@@ -11,21 +15,81 @@ This is setup templates for deploying [amundsen](https://github.com/lyft/amundse
2. Build out a cloud-based k8s cluster, such as [Amazon EKS](https://aws.amazon.com/eks/).
3. Ensure you can connect to your cluster with the CLI tools from step 1.
## How do I use this?
You will need a values file to merge with these templates in order to create the infrastructure. Here is an example:
```
environment: "dev"
provider: aws
dnsZone: teamname.company.com
dockerhubImagePath: amundsendev
searchServiceName: search
searchImageVersion: 1.4.2
metadataServiceName: metadata
metadataImageVersion: 1.1.5
frontEndServiceName: frontend
frontEndImageVersion: 1.1.1
frontEndServicePort: 80
```
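In chart version `1.0.0`, most settings are nested under their service, as documented in the Chart Values table below. A minimal sketch of a values file in that layout, using the chart's documented defaults (the image versions shown are just the current defaults, not recommendations):
```
provider: aws
dockerhubImagePath: amundsendev
search:
  serviceName: search
  imageVersion: 2.0.0
metadata:
  serviceName: metadata
  imageVersion: 2.0.0
frontEnd:
  serviceName: frontend
  imageVersion: 2.0.0
  servicePort: 80
neo4j:
  enabled: true
elasticsearch:
  enabled: true
```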
## Prerequisites
1. 2.14 < Helm < 3
2. Kubernetes 1.14+
## Chart Requirements
| Repository | Name | Version |
|------------|------|---------|
| https://kubernetes-charts.storage.googleapis.com/ | elasticsearch | 1.24.0 |
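The chart declares this dependency roughly as follows, gated on `elasticsearch.enabled` so that you can disable the bundled Elasticsearch and point `search.elasticsearchEndpoint` at your own cluster instead:
```
dependencies:
  - name: elasticsearch
    version: 1.24.0
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: elasticsearch.enabled
```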
## Chart Values
The following table lists the configurable parameters of the Amundsen charts and their default values.
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| LONG_RANDOM_STRING | int | `1234` | A long random string. You should probably provide your own. This is needed for OIDC. |
| affinity | object | `{}` | amundsen application-wide configuration of affinity. This applies to search, metadata, frontend and neo4j. Elasticsearch has its own configuration properties for this. [ref](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) |
| dnsZone | string | `"teamname.company.com"` | **DEPRECATED - it's not standard to pre-construct URLs this way.** The dns zone (e.g. group-qa.myaccount.company.com) the app is running in. Used to construct dns hostnames (on aws only). |
| dockerhubImagePath | string | `"amundsendev"` | **DEPRECATED - this is not useful; it would be better to allow the whole image to be swapped instead.** The image path for dockerhub. |
| elasticsearch.client.replicas | int | `1` | only running amundsen on 1 client replica |
| elasticsearch.cluster.env.EXPECTED_MASTER_NODES | int | `1` | required to match master.replicas |
| elasticsearch.cluster.env.MINIMUM_MASTER_NODES | int | `1` | required to match master.replicas |
| elasticsearch.cluster.env.RECOVER_AFTER_MASTER_NODES | int | `1` | required to match master.replicas |
| elasticsearch.data.replicas | int | `1` | only running amundsen on 1 data replica |
| elasticsearch.enabled | bool | `true` | Set this to false if you want to provide your own ES instance. |
| elasticsearch.master.replicas | int | `1` | only running amundsen on 1 master replica |
| environment | string | `"dev"` | **DEPRECATED - it's not standard to pre-construct URLs this way.** The environment the app is running in. Used to construct dns hostnames (on aws only) and ports. |
| frontEnd.OIDC_AUTH_SERVER_ID | string | `nil` | The authorization server id for OIDC. |
| frontEnd.OIDC_CLIENT_ID | string | `nil` | The client id for OIDC. |
| frontEnd.OIDC_CLIENT_SECRET | string | `""` | The client secret for OIDC. |
| frontEnd.OIDC_ORG_URL | string | `nil` | The organization URL for OIDC. |
| frontEnd.affinity | object | `{}` | Frontend pod specific affinity. |
| frontEnd.createOidcSecret | bool | `false` | OIDC needs some configuration. If you want the chart to make your secrets, set this to true and set the next four values. If you don't want to configure your secrets via helm, you can still use the amundsen-oidc-config.yaml as a template. |
| frontEnd.imageVersion | string | `"2.0.0"` | The image version of the frontend container. |
| frontEnd.nodeSelector | object | `{}` | Frontend pod specific nodeSelector. |
| frontEnd.oidcEnabled | bool | `false` | To enable auth via OIDC, set this to true. |
| frontEnd.replicas | int | `1` | How many replicas of the frontend service to run. |
| frontEnd.resources | object | `{}` | See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) |
| frontEnd.serviceName | string | `"frontend"` | The frontend service name. |
| frontEnd.servicePort | int | `80` | The port the frontend service will be exposed on via the loadbalancer. |
| frontEnd.tolerations | list | `[]` | Frontend pod specific tolerations. |
| metadata.affinity | object | `{}` | Metadata pod specific affinity. |
| metadata.imageVersion | string | `"2.0.0"` | The image version of the metadata container. |
| metadata.nodeSelector | object | `{}` | Metadata pod specific nodeSelector. |
| metadata.replicas | int | `1` | How many replicas of the metadata service to run. |
| metadata.resources | object | `{}` | See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) |
| metadata.serviceName | string | `"metadata"` | The metadata service name. |
| metadata.tolerations | list | `[]` | Metadata pod specific tolerations. |
| neo4j.affinity | object | `{}` | neo4j specific affinity. |
| neo4j.backup | object | `{"enabled":false,"s3Path":"s3://dev/null","schedule":"0 * * * *"}` | If enabled is set to true, make sure to set the s3 path as well. |
| neo4j.backup.s3Path | string | `"s3://dev/null"` | The S3 path to write backups to. |
| neo4j.backup.schedule | string | `"0 * * * *"` | The schedule to run backups on. Defaults to hourly. |
| neo4j.config | object | `{"dbms":{"heap_initial_size":"23000m","heap_max_size":"23000m","pagecache_size":"26600m"}}` | Neo4j application specific configuration. This type of configuration is why the charts/stable version is not used. See [ref](https://github.com/helm/charts/issues/21439) |
| neo4j.config.dbms | object | `{"heap_initial_size":"23000m","heap_max_size":"23000m","pagecache_size":"26600m"}` | dbms config for neo4j |
| neo4j.config.dbms.heap_initial_size | string | `"23000m"` | the initial java heap for neo4j |
| neo4j.config.dbms.heap_max_size | string | `"23000m"` | the max java heap for neo4j |
| neo4j.config.dbms.pagecache_size | string | `"26600m"` | the page cache size for neo4j |
| neo4j.enabled | bool | `true` | Whether neo4j is enabled as part of this chart. Set this to false if you want to provide your own instance; note that it must run in the same namespace. |
| neo4j.nodeSelector | object | `{}` | neo4j specific nodeSelector. |
| neo4j.persistence | object | `{}` | Neo4j persistence. Turn this on to keep your data between pod crashes, etc. This is also needed for backups. |
| neo4j.resources | object | `{}` | See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) |
| neo4j.tolerations | list | `[]` | neo4j specific tolerations. |
| neo4j.version | string | `"3.3.0"` | The neo4j application version used by amundsen. |
| nodeSelector | object | `{}` | amundsen application-wide configuration of nodeSelector. This applies to search, metadata, frontend and neo4j. Elasticsearch has its own configuration properties for this. [ref](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) |
| provider | string | `"aws"` | The cloud provider the app is running in. Used to construct dns hostnames (on aws only). |
| search.affinity | object | `{}` | Search pod specific affinity. |
| search.elasticsearchEndpoint | string | `"amundsen-elasticsearch-client"` | The name of the service hosting elasticsearch on your cluster, if you bring your own. You should only need to change this if you don't use the version in this chart. |
| search.imageVersion | string | `"2.0.0"` | The image version of the search container. |
| search.nodeSelector | object | `{}` | Search pod specific nodeSelector. |
| search.replicas | int | `1` | How many replicas of the search service to run. |
| search.resources | object | `{}` | See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) |
| search.serviceName | string | `"search"` | The search service name. |
| search.tolerations | list | `[]` | Search pod specific tolerations. |
| tolerations | list | `[]` | amundsen application-wide configuration of tolerations. This applies to search, metadata, frontend and neo4j. Elasticsearch has its own configuration properties for this. [ref](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature) |
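For example, to enable OIDC and have the chart create the secret for you, the relevant values look roughly like this (the client id, secret, URL and server id are placeholders, and `LONG_RANDOM_STRING` should be your own random value, not the default):
```
LONG_RANDOM_STRING: 52260380267
frontEnd:
  oidcEnabled: true
  createOidcSecret: true
  OIDC_CLIENT_ID: my-client-id
  OIDC_CLIENT_SECRET: my-client-secret
  OIDC_ORG_URL: https://auth.example.com
  OIDC_AUTH_SERVER_ID: default
```
With `frontEnd.oidcEnabled` set, the frontend and metadata deployments switch to the `-oidc` variants of their images.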
## Neo4j DBMS Config?
You may want to override the default memory usage for Neo4J. In particular, if you're just test-driving a deployment and your node exits with status 137, you should set the usage to smaller values:
@@ -38,11 +102,11 @@ config:
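For example, an override along the following lines, using the `neo4j.config.dbms` keys documented above, keeps Neo4j within a small node (the sizes shown are illustrative only):
```
neo4j:
  config:
    dbms:
      heap_initial_size: 2048m
      heap_max_size: 2048m
      pagecache_size: 1024m
```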
With your values file in hand, you can then set up amundsen with these commands:
```
helm install templates/helm/neo4j --values impl/helm/dev/values.yaml
helm install templates/helm/elasticsearch --values impl/helm/dev/values.yaml
helm install templates/helm/amundsen --values impl/helm/dev/values.yaml
```
## Other Notes
* For aws setups, you will also need to set up the [external-dns plugin](https://github.com/kubernetes-incubator/external-dns)
* There are existing helm charts for neo4j and elasticsearch. Future versions of amundsen may use them instead.
* There is an existing helm chart for neo4j, but it is missing some features necessary for use, such as:
* [\[stable/neo4j\] make neo4j service definition more extensible](https://github.com/helm/charts/issues/21441); without this, it is not possible to set up external load balancers, external-dns, etc.
* [\[stable/neo4j\] allow custom configuration of neo4j](https://github.com/helm/charts/issues/21439); without this, custom configuration is not possible, including configmap-based settings such as turning on apoc.
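* The optional neo4j S3 backup CronJob only renders when `provider` is `aws`, `neo4j.persistence` is configured, and `neo4j.backup` is enabled with an `s3Path`. A minimal sketch of the relevant values (the bucket and storage class are placeholders):
```
neo4j:
  persistence:
    storageClass: gp2
    size: 10Gi
  backup:
    enabled: true
    s3Path: "s3://my-bucket/my-prefix"
    schedule: "0 * * * *"
```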
apiVersion: v1
description: Amundsen is a metadata driven application for improving the productivity of data analysts, data scientists and engineers when interacting with data.
name: amundsen
version: 1.0.0
icon: https://github.com/lyft/amundsen/blob/master/docs/img/logos/amundsen_logo_on_light.svg
home: https://github.com/lyft/amundsen
maintainers:
- name: Shaun Elliott
email: javamonkey79@gmail.com
sources:
- https://github.com/lyft/amundsen
keywords:
- metadata
- data
\ No newline at end of file
name: amundsen
version: 0.1.0
home: https://github.com/lyft/amundsen
{{- if .Values.createOidcSecret }}
apiVersion: v1
kind: Secret
metadata:
name: oidc-config
namespace: {{ .Release.Namespace }}
stringData:
OIDC_CLIENT_SECRET: {{ .Values.OIDC_CLIENT_SECRET }}
client_secrets.json: |-
{
"web": {
"client_id": "{{ .Values.OIDC_CLIENT_ID }}",
"client_secret": "{{ .Values.OIDC_CLIENT_SECRET }}",
"auth_uri": "{{ .Values.OIDC_ORG_URL }}/oauth2/{{ .Values.OIDC_AUTH_SERVER_ID }}/v1/authorize",
"token_uri": "{{ .Values.OIDC_ORG_URL }}/oauth2/{{ .Values.OIDC_AUTH_SERVER_ID }}/v1/token",
"issuer": "{{ .Values.OIDC_ORG_URL }}/oauth2/{{ .Values.OIDC_AUTH_SERVER_ID }}",
"userinfo_uri": "{{ .Values.OIDC_ORG_URL }}/oauth2/{{ .Values.OIDC_AUTH_SERVER_ID }}/v1/userinfo",
"redirect_uris": [
"http://localhost/oidc_callback"
],
"token_introspection_uri": "{{ .Values.OIDC_ORG_URL }}/oauth2/{{ .Values.OIDC_AUTH_SERVER_ID }}/v1/introspect"
}
}
{{- end }}
environment: "dev"
provider: aws
dnsZone: teamname.company.com
dockerhubImagePath: amundsendev
LONG_RANDOM_STRING: 1234
# To enable auth via OIDC, set this to true.
oidcEnabled: false
# OIDC needs some configuration. If you want the chart to make your secrets, set this to true and set the next four values.
# If you don't want to configure your secrets via helm, you can still use the oidc_config.yaml as a template
createOidcSecret: false
# OIDC_CLIENT_ID:
# OIDC_CLIENT_SECRET:
# OIDC_ORG_URL:
# OIDC_AUTH_SERVER_ID:
## Support Node, affinity and tolerations for scheduler pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
affinity: {}
tolerations: []
searchServiceName: search
searchImageVersion: 1.4.2
search:
replicas: 1
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
nodeSelector: {}
affinity: {}
tolerations: []
metadataServiceName: metadata
metadataImageVersion: 1.1.6
metadata:
replicas: 1
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
# You can also set these properties to override the ones set above
nodeSelector: {}
affinity: {}
tolerations: []
frontEndServiceName: frontend
frontEndImageVersion: 1.2.0
frontEndServicePort: 80
frontEnd:
replicas: 1
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
# You can also set these properties to override the ones set above
nodeSelector: {}
affinity: {}
tolerations: []
name: elasticsearch
home: https://www.elastic.co
appVersion: 6.7.0
version: 0.1.0
\ No newline at end of file
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}
spec:
selector:
matchLabels:
run: {{ .Chart.Name }}
replicas: 1
template:
metadata:
labels:
run: {{ .Chart.Name }}
spec:
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
initContainers:
- name: init-map-count
image: busybox:1.31
securityContext:
privileged: true
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.initContainer.vmMaxMapCount }}"]
containers:
- name: {{ .Chart.Name }}
image: {{ .Chart.Name }}:{{ .Chart.AppVersion }}
ports:
- containerPort: 9200
{{- with .Values.resources }}
resources:
{{ toYaml . | indent 10 }}
{{- end }}
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}
labels:
run: {{ .Chart.Name }}
annotations:
{{- if (eq .Values.provider "aws") }}
external-dns.alpha.kubernetes.io/hostname: amundsen-{{ .Chart.Name }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-type: nlb
{{- end }}
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- port: 9200
name: {{ .Chart.Name }}-{{ .Values.environment }}-http
targetPort: 9200
nodePort: 30200
selector:
run: {{ .Chart.Name }}
\ No newline at end of file
provider: aws
# nodeSelector
# affinity
# tolerations
# resources
# annotations
environment: staging
dnsZone: teamname.company.com
initContainer:
vmMaxMapCount: 262144
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
name: neo4j
home: https://www.neo4j.com
appVersion: 3.3.0
version: 0.3.0
\ No newline at end of file
provider: aws
# nodeSelector
# affinity
# tolerations
# resources
# annotations
environment: staging
dnsZone: teamname.company.com
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 1
memory: 1Gi
config:
dbms:
heap_initial_size: 23000m
heap_max_size: 23000m
pagecache_size: 26600m
neo4j:
persistence: {}
# persistence:
# storageClass: gp2
# size: 10Gi
backup:
enabled: false
# backup:
# enabled: true
# s3Path: "s3://my-bucket/my-prefix"
# schedule: "0 * * * *"
dependencies:
# - name: neo4j
# version: 1.2.2
# repository: https://kubernetes-charts.storage.googleapis.com/
- name: elasticsearch
version: 1.24.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: elasticsearch.enabled
\ No newline at end of file
@@ -2,16 +2,16 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
name: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
spec:
selector:
matchLabels:
run: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
run: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
replicas: {{ default 1 .Values.search.replicas }}
template:
metadata:
labels:
run: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
run: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
spec:
{{- with default .Values.nodeSelector .Values.search.nodeSelector }}
nodeSelector:
@@ -26,13 +26,13 @@ spec:
{{ toYaml . | indent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
image: {{- if .Values.searchServiceImage }} {{.Values.searchServiceImage}}{{- else }} {{ .Values.dockerhubImagePath }}/{{ .Chart.Name }}-{{ .Values.searchServiceName }}:{{ .Values.searchImageVersion }}{{- end }}
- name: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
image: {{ .Values.dockerhubImagePath }}/{{ .Chart.Name }}-{{ .Values.search.serviceName }}:{{ .Values.search.imageVersion }}
ports:
- containerPort: 5000
env:
- name: PROXY_ENDPOINT
value: elasticsearch
value: {{ if .Values.search.elasticsearchEndpoint }}{{ .Values.search.elasticsearchEndpoint }}{{ else }}{{ .Release.Namespace }}-elasticsearch-client.{{ .Release.Namespace }}.svc.cluster.local{{ end }}
{{- with .Values.search.resources }}
resources:
{{ toYaml . | indent 10 }}
@@ -41,16 +41,16 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
name: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
spec:
selector:
matchLabels:
run: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
run: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
replicas: {{ default 1 .Values.metadata.replicas }}
template:
metadata:
labels:
run: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
run: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
spec:
{{- with default .Values.nodeSelector .Values.metadata.nodeSelector }}
nodeSelector:
@@ -65,24 +65,24 @@ spec:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
{{- if .Values.oidcEnabled }}
{{- if .Values.frontEnd.oidcEnabled }}
- name: oidc-config
secret:
secretName: oidc-config
{{- end }}
containers:
- name: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
- name: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
{{- with .Values.metadataServiceImage }}
image: {{ . }}
{{- else }}
image: {{ .Values.dockerhubImagePath }}/{{ .Chart.Name }}-{{ .Values.metadataServiceName }}{{ if .Values.oidcEnabled }}-oidc{{ end }}:{{ .Values.metadataImageVersion }}
image: {{ .Values.dockerhubImagePath }}/{{ .Chart.Name }}-{{ .Values.metadata.serviceName }}{{ if .Values.frontEnd.oidcEnabled }}-oidc{{ end }}:{{ .Values.metadata.imageVersion }}
{{- end }}
ports:
- containerPort: 5000
env:
- name: PROXY_HOST
value: bolt://neo4j
{{- if .Values.oidcEnabled }}
{{- if .Values.frontEnd.oidcEnabled }}
- name: FLASK_OIDC_CLIENT_SECRETS
value: /etc/client_secrets.json
- name: FLASK_OIDC_SECRET_KEY
@@ -92,7 +92,7 @@ spec:
key: OIDC_CLIENT_SECRET
{{- end }}
volumeMounts:
{{- if .Values.oidcEnabled }}
{{- if .Values.frontEnd.oidcEnabled }}
- name: oidc-config
mountPath: /etc/client_secrets.json
subPath: client_secrets.json
@@ -105,16 +105,16 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
name: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
spec:
selector:
matchLabels:
run: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
run: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
replicas: {{ default 1 .Values.frontEnd.replicas }}
template:
metadata:
labels:
run: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
run: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
spec:
{{- with default .Values.nodeSelector .Values.frontEnd.nodeSelector }}
nodeSelector:
@@ -129,33 +129,33 @@ spec:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
{{- if .Values.oidcEnabled }}
{{- if .Values.frontEnd.oidcEnabled }}
- name: oidc-config
secret:
secretName: oidc-config
{{- end }}
containers:
- name: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
{{- with .Values.frontEndServiceImage }}
- name: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
{{- with .Values.frontEnd.serviceImage }}
image: {{ . }}
{{- else }}
image: {{ .Values.dockerhubImagePath }}/{{ .Chart.Name }}-{{ .Values.frontEndServiceName }}{{ if .Values.oidcEnabled }}-oidc{{ end }}:{{ .Values.frontEndImageVersion }}
image: {{ .Values.dockerhubImagePath }}/{{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}{{ if .Values.frontEnd.oidcEnabled }}-oidc{{ end }}:{{ .Values.frontEnd.imageVersion }}
{{- end }}
ports:
- containerPort: 5000
env:
# FRONTEND_BASE is used by the notifications util to provide links to amundsen pages in emails. If it's not set, it will default to localhost.
{{ if .Values.FRONTEND_BASE }}
{{ if .Values.frontEnd.FRONTEND_BASE }}
- name: FRONTEND_BASE
value: http://{{ .Values.FRONTEND_BASE }}
value: http://{{ .Values.frontEnd.FRONTEND_BASE }}
{{ end }}
- name: SEARCHSERVICE_BASE
value: http://{{ .Chart.Name }}-{{ .Values.searchServiceName }}:5001
value: http://{{ .Chart.Name }}-{{ .Values.search.serviceName }}:5001
- name: METADATASERVICE_BASE
value: http://{{ .Chart.Name }}-{{ .Values.metadataServiceName }}:5002
value: http://{{ .Chart.Name }}-{{ .Values.metadata.serviceName }}:5002
- name: LONG_RANDOM_STRING
value: {{ quote .Values.LONG_RANDOM_STRING }}
{{- if .Values.oidcEnabled }}
{{- if .Values.frontEnd.oidcEnabled }}
- name: FLASK_OIDC_CLIENT_SECRETS
value: /etc/client_secrets.json
- name: FLASK_OIDC_SECRET_KEY
@@ -165,7 +165,7 @@ spec:
key: OIDC_CLIENT_SECRET
{{- end }}
volumeMounts:
{{- if .Values.oidcEnabled }}
{{- if .Values.frontEnd.oidcEnabled }}
- name: oidc-config
mountPath: /etc/client_secrets.json
subPath: client_secrets.json
......
{{- if .Values.frontEnd.createOidcSecret }}
apiVersion: v1
kind: Secret
metadata:
name: oidc-config
namespace: {{ .Release.Namespace }}
stringData:
OIDC_CLIENT_SECRET: {{ .Values.frontEnd.OIDC_CLIENT_SECRET }}
client_secrets.json: |-
{
"web": {
"client_id": "{{ .Values.frontEnd.OIDC_CLIENT_ID }}",
"client_secret": "{{ .Values.frontEnd.OIDC_CLIENT_SECRET }}",
"auth_uri": "{{ .Values.frontEnd.OIDC_ORG_URL }}/oauth2/{{ .Values.frontEnd.OIDC_AUTH_SERVER_ID }}/v1/authorize",
"token_uri": "{{ .Values.frontEnd.OIDC_ORG_URL }}/oauth2/{{ .Values.frontEnd.OIDC_AUTH_SERVER_ID }}/v1/token",
"issuer": "{{ .Values.frontEnd.OIDC_ORG_URL }}/oauth2/{{ .Values.frontEnd.OIDC_AUTH_SERVER_ID }}",
"userinfo_uri": "{{ .Values.frontEnd.OIDC_ORG_URL }}/oauth2/{{ .Values.frontEnd.OIDC_AUTH_SERVER_ID }}/v1/userinfo",
"redirect_uris": [
"http://localhost/oidc_callback"
],
"token_introspection_uri": "{{ .Values.frontEnd.OIDC_ORG_URL }}/oauth2/{{ .Values.frontEnd.OIDC_AUTH_SERVER_ID }}/v1/introspect"
}
}
{{- end }}
@@ -2,12 +2,12 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
name: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
labels:
run: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
run: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
annotations:
{{- if (eq .Values.provider "aws") }}
external-dns.alpha.kubernetes.io/hostname: {{ .Chart.Name }}-{{ .Values.searchServiceName }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
external-dns.alpha.kubernetes.io/hostname: {{ .Chart.Name }}-{{ .Values.search.serviceName }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-type: nlb
{{- end }}
@@ -16,21 +16,21 @@ spec:
externalTrafficPolicy: Local
ports:
- port: 5001
name: {{ .Chart.Name }}-{{ .Values.searchServiceName }}-{{ .Values.environment }}-http
name: {{ .Chart.Name }}-{{ .Values.search.serviceName }}-{{ .Values.environment }}-http
targetPort: 5001
nodePort: 30001
selector:
run: {{ .Chart.Name }}-{{ .Values.searchServiceName }}
run: {{ .Chart.Name }}-{{ .Values.search.serviceName }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
name: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
labels:
run: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
run: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
annotations:
{{- if (eq .Values.provider "aws") }}
external-dns.alpha.kubernetes.io/hostname: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
external-dns.alpha.kubernetes.io/hostname: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-type: nlb
{{- end }}
@@ -39,21 +39,21 @@ spec:
externalTrafficPolicy: Local
ports:
- port: 5002
name: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}-{{ .Values.environment }}-http
name: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}-{{ .Values.environment }}-http
targetPort: 5002
nodePort: 30002
selector:
run: {{ .Chart.Name }}-{{ .Values.metadataServiceName }}
run: {{ .Chart.Name }}-{{ .Values.metadata.serviceName }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
name: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
labels:
run: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
run: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
annotations:
{{- if (eq .Values.provider "aws") }}
external-dns.alpha.kubernetes.io/hostname: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
external-dns.alpha.kubernetes.io/hostname: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-type: nlb
{{- end }}
@@ -61,10 +61,10 @@ spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- port: {{ .Values.frontEndServicePort }}
name: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}-{{ .Values.environment }}-http
- port: {{ .Values.frontEnd.servicePort }}
name: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}-{{ .Values.environment }}-http
targetPort: 5000
nodePort: 30003
selector:
run: {{ .Chart.Name }}-{{ .Values.frontEndServiceName }}
run: {{ .Chart.Name }}-{{ .Values.frontEnd.serviceName }}
---
\ No newline at end of file
{{ if .Values.neo4j.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Chart.Name }}-configmap
name: neo4j-configmap
labels:
app: "neo4j"
data:
@@ -23,11 +24,12 @@ data:
dbms.logs.query.enabled=true
dbms.logs.query.rotation.keep_number=7
dbms.logs.query.rotation.size=20m
dbms.memory.heap.initial_size={{ .Values.config.dbms.heap_initial_size }}
dbms.memory.heap.max_size={{ .Values.config.dbms.heap_max_size }}
dbms.memory.pagecache.size={{ .Values.config.dbms.pagecache_size }}
dbms.memory.heap.initial_size={{ .Values.neo4j.config.dbms.heap_initial_size }}
dbms.memory.heap.max_size={{ .Values.neo4j.config.dbms.heap_max_size }}
dbms.memory.pagecache.size={{ .Values.neo4j.config.dbms.pagecache_size }}
dbms.security.allow_csv_import_from_file_urls=true
dbms.security.procedures.unrestricted=algo.*,apoc.*
dbms.windows_service_name=neo4j
apoc.export.file.enabled=true
apoc.import.file.enabled=true
\ No newline at end of file
apoc.import.file.enabled=true
{{ end }}
\ No newline at end of file
{{ if .Values.neo4j.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}
name: neo4j
spec:
selector:
matchLabels:
run: {{ .Chart.Name }}
run: neo4j
replicas: 1
template:
metadata:
labels:
run: {{ .Chart.Name }}
run: neo4j
spec:
{{- with .Values.nodeSelector }}
{{- with .Values.neo4j.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
{{- with .Values.neo4j.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
{{- with .Values.neo4j.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
@@ -45,8 +46,8 @@ spec:
- name: plugins
mountPath: /var/lib/neo4j/plugins
containers:
- name: {{ .Chart.Name }}
image: {{ .Chart.Name }}:{{ .Chart.AppVersion }}
- name: neo4j
image: neo4j:{{ .Values.neo4j.version }}
ports:
- containerPort: 7474
- containerPort: 7687
@@ -65,10 +66,14 @@ spec:
{{- end}}
- name: plugins
mountPath: /var/lib/neo4j/plugins
{{- with .Values.neo4j.resources }}
resources:
{{ toYaml . | indent 10 }}
{{- end}}
volumes:
- name: conf
configMap:
name: {{ .Chart.Name }}-configmap
name: neo4j-configmap
{{- if .Values.neo4j.persistence }}
- name: data
persistentVolumeClaim:
@@ -78,7 +83,4 @@ spec:
hostPath:
path: "/mnt/ephemeral/neo4j/plugins"
type: DirectoryOrCreate
{{- with .Values.resources }}
resources:
{{ toYaml . | indent 8 }}
{{- end }}
{{ end }}
\ No newline at end of file
{{- if .Values.neo4j.persistence }}
{{- if and .Values.neo4j.enabled .Values.neo4j.persistence }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
......
{{ if and .Values.neo4j.backup.enabled .Values.neo4j.backup.s3Path .Values.neo4j.persistence (eq .Values.provider "aws") }}
{{ if and .Values.neo4j.enabled (and .Values.neo4j.backup.enabled .Values.neo4j.backup.s3Path .Values.neo4j.persistence (eq .Values.provider "aws")) }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: neo4j-s3-backup
spec:
schedule: {{ default "0 * * * *" (.Values.neo4j.backup.schedule | quote) }}
schedule: {{ .Values.neo4j.backup.schedule | quote }}
jobTemplate:
spec:
template:
......
{{ if .Values.neo4j.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ .Chart.Name }}
name: neo4j
labels:
run: {{ .Chart.Name }}
run: neo4j
annotations:
{{- if (eq .Values.provider "aws") }}
external-dns.alpha.kubernetes.io/hostname: amundsen-{{ .Chart.Name }}-{{ .Values.environment }}.{{ .Values.dnsZone }}
external-dns.alpha.kubernetes.io/hostname: amundsen-neo4j-{{ .Values.environment }}.{{ .Values.dnsZone }}
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-type: nlb
{{- end }}
@@ -15,20 +16,21 @@ spec:
externalTrafficPolicy: Local
ports:
- port: 7473
name: {{ .Chart.Name }}-{{ .Values.environment }}-https
name: neo4j-{{ .Values.environment }}-https
targetPort: 7473
nodePort: 30473
- port: 7474
name: {{ .Chart.Name }}-{{ .Values.environment }}-http
name: neo4j-{{ .Values.environment }}-http
targetPort: 7474
nodePort: 30474
- port: 7687
name: {{ .Chart.Name }}-{{ .Values.environment }}-bolt
name: neo4j-{{ .Values.environment }}-bolt
targetPort: 7687
nodePort: 30687
- port: 1337
name: {{ .Chart.Name }}-{{ .Values.environment }}-shell
name: neo4j-{{ .Values.environment }}-shell
targetPort: 1337
nodePort: 31337
selector:
run: {{ .Chart.Name }}
\ No newline at end of file
run: neo4j
{{ end }}
\ No newline at end of file
# Duplicate this file and put your customization here
##
## common settings for all apps
##
## NOTE - README table was generated with https://github.com/norwoodj/helm-docs
##
## environment -- **DEPRECATED - it's not standard to pre-construct URLs this way.** The environment the app is running in. Used to construct dns hostnames (on aws only) and ports.
##
environment: "dev"
##
## DEPRECATED - it's not standard to pre-construct URLs this way
## provider -- The cloud provider the app is running in. Used to construct dns hostnames (on aws only).
##
provider: aws
##
## dnsZone -- **DEPRECATED - it's not standard to pre-construct URLs this way.** The dns zone (e.g. group-qa.myaccount.company.com) the app is running in. Used to construct dns hostnames (on aws only).
##
dnsZone: teamname.company.com
##
## dockerhubImagePath -- **DEPRECATED - this is not useful; it would be better to allow the whole image to be swapped instead.** The image path for dockerhub.
##
dockerhubImagePath: amundsendev
##
## LONG_RANDOM_STRING -- A long random string. You should probably provide your own. This is needed for OIDC.
##
LONG_RANDOM_STRING: 1234
##
## nodeSelector -- amundsen application-wide configuration of nodeSelector. This applies to search, metadata, frontend and neo4j. Elasticsearch has its own configuration properties for this. [ref](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector)
##
nodeSelector: {}
##
## affinity -- amundsen application-wide configuration of affinity. This applies to search, metadata, frontend and neo4j. Elasticsearch has its own configuration properties for this. [ref](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
##
affinity: {}
##
## tolerations -- amundsen application-wide configuration of tolerations. This applies to search, metadata, frontend and neo4j. Elasticsearch has its own configuration properties for this. [ref](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature)
##
tolerations: []
##
## Configuration related to the search service.
##
search:
##
## search.serviceName -- The search service name.
##
serviceName: search
##
## search.elasticsearchEndpoint -- The name of the service hosting elasticsearch on your cluster, if you bring your own. You should only need to change this if you don't use the version in this chart.
##
elasticsearchEndpoint:
##
## search.imageVersion -- The image version of the search container.
##
imageVersion: 2.0.0
##
## search.replicas -- How many replicas of the search service to run.
##
replicas: 1
##
## search.resources -- See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
##
resources: {}
# limits:
# cpu: 2
# memory: 2Gi
# requests:
# cpu: 1
# memory: 1Gi
##
## search.nodeSelector -- Search pod specific nodeSelector.
##
nodeSelector: {}
##
## search.affinity -- Search pod specific affinity.
##
affinity: {}
##
## search.tolerations -- Search pod specific tolerations.
##
tolerations: []
##
## Configuration related to the metadata service.
##
metadata:
##
## metadata.serviceName -- The metadata service name.
##
serviceName: metadata
##
## metadata.imageVersion -- The image version of the metadata container.
##
imageVersion: 2.0.0
##
## metadata.replicas -- How many replicas of the metadata service to run.
##
replicas: 1
##
## metadata.resources -- See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
##
resources: {}
# limits:
# cpu: 2
# memory: 2Gi
# requests:
# cpu: 1
# memory: 1Gi
##
## metadata.nodeSelector -- Metadata pod specific nodeSelector.
##
nodeSelector: {}
##
## metadata.affinity -- Metadata pod specific affinity.
##
affinity: {}
##
## metadata.tolerations -- Metadata pod specific tolerations.
##
tolerations: []
##
## Configuration related to the frontEnd service.
##
frontEnd:
##
## frontEnd.serviceName -- The frontend service name.
##
serviceName: frontend
##
## frontEnd.imageVersion -- The image version of the frontend container.
##
imageVersion: 2.0.0
##
## frontEnd.servicePort -- The port the frontend service will be exposed on via the loadbalancer.
##
servicePort: 80
##
## frontEnd.replicas -- How many replicas of the frontend service to run.
##
replicas: 1
##
## frontEnd.oidcEnabled -- To enable auth via OIDC, set this to true.
##
oidcEnabled: false
##
## frontEnd.createOidcSecret -- OIDC needs some configuration. If you want the chart to make your secrets, set this to true and set the next four values. If you don't want to configure your secrets via helm, you can still use the amundsen-oidc-config.yaml as a template
##
createOidcSecret: false
##
## frontEnd.OIDC_CLIENT_ID -- The client id for OIDC.
##
OIDC_CLIENT_ID:
##
## frontEnd.OIDC_CLIENT_SECRET -- The client secret for OIDC.
##
OIDC_CLIENT_SECRET: ""
##
## frontEnd.OIDC_ORG_URL -- The organization URL for OIDC.
##
OIDC_ORG_URL:
##
## frontEnd.OIDC_AUTH_SERVER_ID -- The authorization server id for OIDC.
##
OIDC_AUTH_SERVER_ID:
##
## frontEnd.resources -- See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
##
resources: {}
# limits:
# cpu: 2
# memory: 2Gi
# requests:
# cpu: 1
# memory: 1Gi
##
## frontEnd.nodeSelector -- Frontend pod specific nodeSelector.
##
nodeSelector: {}
##
## frontEnd.affinity -- Frontend pod specific affinity.
##
affinity: {}
##
## frontEnd.tolerations -- Frontend pod specific tolerations.
##
tolerations: []
##
## Configuration related to neo4j.
##
neo4j:
##
## neo4j.enabled -- Whether neo4j is enabled as part of this chart. Set this to false if you want to provide your own instance; note that it must run in the same namespace.
##
enabled: true
##
## neo4j.version -- The neo4j application version used by amundsen.
##
version: 3.3.0
##
## neo4j.resources -- See pod resourcing [ref](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
##
resources: {}
#resources:
# limits:
# cpu: 2
# memory: 2Gi
# requests:
# cpu: 1
# memory: 1Gi
##
## neo4j.config -- Neo4j application specific configuration. This type of configuration is why the charts/stable version is not used. See [ref](https://github.com/helm/charts/issues/21439)
##
config:
##
## neo4j.config.dbms -- dbms config for neo4j
##
dbms:
## neo4j.config.dbms.heap_initial_size -- the initial java heap for neo4j
heap_initial_size: 23000m
## neo4j.config.dbms.heap_max_size -- the max java heap for neo4j
heap_max_size: 23000m
## neo4j.config.dbms.pagecache_size -- the page cache size for neo4j
pagecache_size: 26600m
##
## neo4j.persistence -- Neo4j persistence. Turn this on to keep your data between pod crashes, etc. This is also needed for backups.
##
persistence: {}
# storageClass: gp2
# size: 10Gi
##
## neo4j.backup -- If enabled is set to true, make sure to set the s3 path as well.
##
backup:
## neo4j.backup.enabled -- Whether to include the neo4j backup cron pod. If set to true, s3Path is required.
enabled: false
##
## neo4j.backup.s3Path -- The S3 path to write backups to.
##
s3Path: "s3://dev/null"
##
## neo4j.backup.schedule -- The schedule to run backups on. Defaults to hourly.
##
schedule: "0 * * * *"
##
## neo4j.nodeSelector -- neo4j specific nodeSelector.
##
nodeSelector: {}
##
## neo4j.affinity -- neo4j specific affinity.
##
affinity: {}
##
## neo4j.tolerations -- neo4j specific tolerations.
##
tolerations: []
##
## Configuration related to elasticsearch.
##
## To add values to dependent charts, prefix the value with the chart name (e.g. elasticsearch)
## By default, the ES chart runs with 3,3,2 nodes for master, data, client. Amundsen likely does not need so much,
## so this has been tuned down to 1,1,1.
##
elasticsearch:
# elasticsearch.enabled -- Set this to false if you want to provide your own ES instance.
enabled: true
cluster:
env:
## elasticsearch.cluster.env.MINIMUM_MASTER_NODES -- required to match master.replicas
MINIMUM_MASTER_NODES: 1
## elasticsearch.cluster.env.EXPECTED_MASTER_NODES -- required to match master.replicas
EXPECTED_MASTER_NODES: 1
## elasticsearch.cluster.env.RECOVER_AFTER_MASTER_NODES -- required to match master.replicas
RECOVER_AFTER_MASTER_NODES: 1
master:
## elasticsearch.master.replicas -- only running amundsen on 1 master replica
replicas: 1
data:
## elasticsearch.data.replicas -- only running amundsen on 1 data replica
replicas: 1
client:
## elasticsearch.client.replicas -- only running amundsen on 1 client replica
replicas: 1
# serviceType: LoadBalancer
# serviceAnnotations:
# external-dns.alpha.kubernetes.io/hostname: amundsen-elasticsearch.dev.teamname.company.com
# service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
# service.beta.kubernetes.io/aws-load-balancer-type: nlb
# nodeAffinity: high
# resources:
# limits:
# cpu: 2
# memory: 2Gi
\ No newline at end of file