Problem: You have twenty or thirty Helm releases, all of which you template semi-manually. Deploying new applications involves tremendous amounts of copy-pasta.
Solution: Use Nix. With Nix, you can ensure chart integrity, generate repetitive data in subroutines, and easily reuse variable data.
Turboprop templates your Helm charts for you, making an individual Nix derivation of each one; each of these derivations is then gathered into a mega-derivation complete with Kustomizations for every namespace and service. In short, you're two commands away from full cluster reconciliation:
nix build && kubectl diff -k ./result
First, define services in `./services`. Ensure that CRD-providing services are evaluated first, usually with ordered directories like `./services/01-service-mesh`.
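For instance, a tree might look like this (hypothetical names; each namespace directory holds one module per service):

services/
├── 01-service-mesh/
│   └── istio-system/
│       └── istiod.nix
└── 02-apps/
    └── default/
        └── myapp.nix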
Then, in your flake, call:

inputs.turboprop.lib.${system}.mkDerivation {charts} {pname, version, src, serviceRoot}

Add Turboprop to your flake's inputs, along with flake-utils and nixhelm:
{
  inputs = {
    flake-utils.url = "github:numtide/flake-utils";
    nixhelm.url = "github:farcaller/nixhelm";
    turboprop = {
      url = "sourcehut:~goorzhel/turboprop";
      inputs.nixhelm.follows = "nixhelm";
    };
  };
  <...>
}
Next, put it to use in your flake's output:
{
  <...>
  outputs = {self, flake-utils, nixhelm, turboprop}:
    flake-utils.lib.eachDefaultSystem (system: let
      turbo = turboprop.lib.${system};
    in {
      packages.default = let
        pname = "my-k8s-flake";
      in
        turbo.mkDerivation {
          charts = nixhelm.chartsDerivations.${system};
        } {
          inherit pname;
          version = "rolling";
          src = builtins.path {
            path = ./.;
            name = pname;
          };
          serviceRoot = ./services;
          nsMetadata = {};
        };
    }
  );
}
Now set that aside for the time being.
This is a module that defines a service derivation:
{ charts, lib, userData, ... }: {          # I
  builder = lib.builders.helmChart;        # I2; O1
  args = {                                 # O2
    chart = charts.jetstack.cert-manager;  # I1
    values = {
      featureGates = "ExperimentalGatewayAPISupport=true";
      installCRDs = true;
      prometheus = {
        enabled = true;
        servicemonitor = {
          enabled = true;
          prometheusInstance = "monitoring";
        };
      };
      startupapicheck.podLabels."sidecar.istio.io/inject" = "false";
    };
  };
  extraObjects = [                         # O3
    {
      apiVersion = "cert-manager.io/v1";
      kind = "ClusterIssuer";
      metadata.name = userData.vars.k8sCert.name;  # I3
      spec.ca.secretName = userData.vars.k8sCert.name;
    }
  ];
}
The module takes as input these attributes, any of which you may omit:

- a tree of chart derivations (`charts`);
- the Turboprop library (`lib`);
- the Nixpkgs for the current system (`pkgs`);
- the name and namespace of the service (`name`, `namespace`); and
- user data specific to your flake (`userData`).
The output signature is `{builder, args, extraObjects}`:

- `builder` is the Turboprop builder that will create your service derivation. Most often, you will use `helmChart`; other builders exist for scenarios such as deploying a collection of Kubernetes objects or a single remote YAML file. You may even define your own builder.
- `args` are arguments passed to the builder. Refer to each builder's signature below.
- `extraObjects` are objects to deploy alongside the chart.
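As for defining your own builder, here is a minimal, hypothetical sketch. It assumes, per the builder reference below, that `args` is merged with `name` and `namespace` before the builder is called, and it leans on JSON being valid YAML:

# A service module with a hand-rolled builder. `pkgs` arrives via the
# module's inputs; the builder must return a derivation containing a
# single YAML file.
{pkgs, ...}: {
  builder = {name, namespace, obj, ...}:
    pkgs.writeText "${namespace}-${name}.yaml" (builtins.toJSON obj);
  args.obj = {
    apiVersion = "v1";
    kind = "ConfigMap";
    metadata.name = "example";  # hypothetical object
    data.greeting = "hello";
  };
}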
Turboprop operates on trees of Nix modules, both in the filesystem sense (nested directories) and the Nix sense (nested attrsets), and uses Haumea to do so. A service tree consists of `./services`, which contains a directory per namespace (optionally grouped under ordered directories, as above), each holding one module per service.

We'll start with building a flake containing two applications:
Normally, one would also deploy a Gateway controller, but this suffices for the example.
# services/gateway-system/gateway-api.nix
{lib, ...}: {
  # Any function can be used as a builder so long as it has variable arity
  # and produces a derivation consisting of a single YAML file.
  builder = lib.fetchers.remoteYAMLFile;
  args = rec {
    version = "1.0.0";
    url = "https://github.com/kubernetes-sigs/gateway-api/releases/download/v${version}/experimental-install.yaml";
    hash = "sha256-bGAdzteHKpQNdvpmeuEmunGMtMbblw0Lq0kSjswRkqM=";
  };
}
# services/default/breezewiki.nix
{charts, lib, name, namespace, ...}: {
  builder = lib.app-template.build;
  args = {
    mainImage = "quay.io/pussthecatorg/breezewiki:latest";
    values = let
      port = 10416;
    in {
      # app-template's schema can be found here:
      # https://github.com/bjw-s/helm-charts/blob/app-template-2.3.0/charts/library/common/values.yaml
      service.main.ports.http.port = port;
      route.main = {
        enabled = true;
        hostnames = ["${name}.example.com"];
        parentRefs = [
          {
            name = "gateway";
            inherit namespace;
            sectionName = "https";
          }
        ];
        rules = [
          {backendRefs = [{inherit name namespace port;}];}
        ];
      };
    };
  };
}
Now build the flake:
$ nix build
$ ls -l result/*/*
-r--r--r-- 3 root root 88 Dec 31 1969 result/default/kustomization.yaml
-r--r--r-- 130 root root 89 Dec 31 1969 result/gateway-system/kustomization.yaml
result/default/breezewiki:
total 12
-r--r--r-- 1364 root root 90 Dec 31 1969 kustomization.yaml
-r--r--r-- 5 root root 2795 Dec 31 1969 SERVICE.yaml
lrwxrwxrwx 4 root root 74 Dec 31 1969 SERVICE.yaml.drv -> /nix/store/sijp95rfkbijnrklmrb4smb9qvl7bd4v-yaml-stream-default-breezewiki
result/gateway-system/gateway-api:
total 768
-r--r--r-- 1364 root root 90 Dec 31 1969 kustomization.yaml
-r--r--r-- 14 root root 775478 Dec 31 1969 SERVICE.yaml
lrwxrwxrwx 11 root root 87 Dec 31 1969 SERVICE.yaml.drv -> /nix/store/0yi3y3b0lrgd71yrglgi7mjaxhk8khsm-copied-drv-gateway-system-gateway-api-1.0.0
$ sha256sum result/gateway-system/gateway-api/SERVICE.yaml
6c601dced7872a940d76fa667ae126ba718cb4c6db970d0bab49128ecc1192a3 result/gateway-system/gateway-api/SERVICE.yaml
Pretty cool, huh? Now to install the services...
$ kubectl apply -f result/namespaces.yaml
namespace/default configured
namespace/gateway-system created
$ kubectl apply -k result/gateway-system/
<...>
$ kubectl apply -k result/default/breezewiki
service/breezewiki created
deployment.apps/breezewiki created
error: resource mapping not found for name: "breezewiki" namespace: "default" from "result/default": no matches for kind "HTTPRoute" in version "gateway.networking.k8s.io/v1alpha2"
ensure CRDs are installed first
Wait, what? A v1alpha2 HTTPRoute? That API isn't even in Gateway API v1. What gives?
Like most things in Nix, Helm derivations are pure functions: they have no room for external state. This means Helm cannot poll a Kubernetes cluster for data such as supported APIs, upon which charts such as app-template depend to calculate their output:
{{- $routeKind := $routeObject.kind | default "HTTPRoute" -}}
{{- $apiVersion := "gateway.networking.k8s.io/v1alpha2" -}}
{{- if $rootContext.Capabilities.APIVersions.Has (printf "gateway.networking.k8s.io/v1beta1/%s" $routeKind) }}
{{- $apiVersion = "gateway.networking.k8s.io/v1beta1" -}}
{{- end -}}
{{- if $rootContext.Capabilities.APIVersions.Has (printf "gateway.networking.k8s.io/v1/%s" $routeKind) }}
{{- $apiVersion = "gateway.networking.k8s.io/v1" -}}
{{- end -}}
This is a problem solved by Turboprop and its dependencies: Helm accepts `--api-versions` and `--kube-version` with which to declare capabilities, and Turboprop exposes these as `apiVersions` and `kubeVersion` with reasonable defaults, among them the API versions declared by the services already evaluated.

Which order? Well, Haumea loads, and Turboprop evaluates, in alphabetical order. And thus we arrive at the crux of the problem: `gateway-api` > `default`, so breezewiki is templated before the Gateway API's CRDs are declared. Luckily, it's trivial to solve:
$ mkdir services/{1-gateway,2-main}
$ mv services/gateway-system services/1-gateway
$ mv services/default services/2-main
$ nix build
<...>
$ grep -A1 'apiVersion: gateway' result/2-main/default/breezewiki/SERVICE.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
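Reordering directories is one fix; alternatively, since `mkDerivation` accepts capability declarations directly (see the reference below), you could declare the API up front. A sketch with assumed values:

turbo.mkDerivation {
  charts = nixhelm.chartsDerivations.${system};
} {
  # <...> pname, version, src, serviceRoot as in the flake above
  kubeVersion = "1.29.0";  # hypothetical cluster version
  # group/version/Kind, matching what app-template probes for
  apiVersions = ["gateway.networking.k8s.io/v1/HTTPRoute"];
}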
And there you have it: a Helm deployment supercharged with inheritance, functional purity, integrity-checking, and all else that is great about the Nix language.
These functions are only available outside of service modules.
mkDerivation: {charts, userData?} -> {pname, version, src, serviceRoot, nsMetadata?, kubeVersion?, apiVersions?} -> <derivation: a dir of Kustomization dirs>

The main interface to Turboprop.

The first attrset instantiates the derivation builder:

- `charts`: A tree of chart derivations.
- `userData` (default `{}`): Additional data to be used by the service modules.

The second attrset specifies the derivation to build:

- `pname`, `version`, `src`: The usual derivation attributes.
- `serviceRoot`: The directory containing the service tree, e.g. `./services`.
- `nsMetadata` (default `{}`): Additional metadata to attach to the generated namespaces (see "Namespace metadata" below).
- `kubeVersion` (default `pkgs.kubernetes.version`): The version of the Kubernetes cluster to target.
- `apiVersions` (default `[]`): API versions to declare in addition to those provided by generated services.

mkCharts: src -> attrs
Searches a directory tree for Nix modules describing a chart and fetches each chart, returning the tree as an attrset of derivations.

Each module must be an attrset with the signature `{repo, chart, version, chartHash?}`; see the documentation of `lib.fetchers.helmChart` for more.
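For instance, a module at a hypothetical path `charts/jetstack/cert-manager.nix` would surface the chart as `charts.jetstack.cert-manager`:

# charts/jetstack/cert-manager.nix; version and hash are illustrative.
{
  repo = "https://charts.jetstack.io";
  chart = "cert-manager";
  version = "v1.14.2";
  chartHash = "sha256-...";  # placeholder; substitute the real hash
}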
src -> attrs

Same as `mkCharts`, but overlays the fetched charts onto the ones provided by Nixhelm through the flake input.
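Either way, the resulting tree feeds `mkDerivation`'s first attrset. A sketch, assuming `./charts` holds modules like the one above and `turbo` is bound as in the flake example:

turbo.mkDerivation {
  charts = turbo.mkCharts ./charts;
} {
  # <...>
}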
Fetcher functions download a resource into the Nix store. A fetcher may also serve as a builder for resources intended to be used without modification or processing, such as a YAML file.
{name, version, url, hash, chartPath, vPrefixInRef?, ...} -> <derivation: a dir containing a Helm chart>

Fetch a Helm chart from a Git repository. Useful in the absence of a published Helm repo.

- `version`: The chart's version, e.g. `1.0.0`.
- `vPrefixInRef` (default `false`): Whether the Git tag begins with an utterly redundant `v`.

helmChart: {repo, chart, version, chartHash?} -> <derivation: a dir containing a Helm chart>

Re-export of `kubelib.downloadHelmChart`.
remoteYAMLFile: {version, url, hash, ...} -> <derivation: a YAML file>
Fetch a remote file. Useful for applications distributed as a YAML stream, e.g., the Gateway API.
Builder functions build a service derivation.

Builders receive `name` and `namespace` through Turboprop, so these two variables will be documented once:

- `name`: The service's name. Sets the `app.kubernetes.io/instance` label, as well as the derivation's name.
- `namespace`: The namespace into which the service deploys.

{name, namespace, src, ...} -> <derivation>

Copy a derivation verbatim.
helmChart: {name, namespace, chart, values?, includeCRDs?, kubeVersion?, apiVersions?, extraOpts?, ...} -> <derivation: a YAML file of Helm output>

Re-export of `kubelib.buildHelmChart`.

- `values` (default `{}`): Values to pass into the chart.
- `includeCRDs` (default `true`): Whether to include CustomResourceDefinitions in the template output.
- `kubeVersion` (default `pkgs.kubernetes.version`): Target Kubernetes version.
- `apiVersions` (default `[]`): Sets Capabilities.APIVersions.
- `extraOpts` (default `[]`): Additional flags for `helm template`.

{name, namespace, objs, ...} -> <derivation: a YAML file>

Converts Kubernetes objects from Nix to YAML.
app-template.build: {name, namespace, mainImage, values?, kubeVersion?, apiVersions?} -> {builder, args, extraObjects}

Wrapper of `helmChart` that deploys a container image using the app-template chart.

- `mainImage`: The image for the main container.
- `values` (default `{}`): Values to pass into the chart.
- `kubeVersion` (default `pkgs.kubernetes.version`): Target Kubernetes version.
- `apiVersions` (default `[]`): Sets Capabilities.APIVersions.
- `extraOpts` (default `[]`): Additional flags for `helm template`.

{name, namespace, charts?, lib?, pkgs?, userData?} -> {builder, args, extraObjects?}

A service module as defined in your flake.

Input attrset, of which any of its attributes may be omitted if unused:

- `name`, `namespace`: The service's name and namespace.
- `charts`: A tree of chart derivations.
- `lib`: The Turboprop library.
- `pkgs`: The Nixpkgs for the current system.
- `userData`: Data specific to your flake.

Output attrset:

- `builder`: The builder that will create the service derivation.
- `args`: Arguments passed to the builder.
- `extraObjects` (default `null`): Kubernetes objects to deploy alongside the service.

{name, namespace, kubeVersion, apiVersions} -> {out, extra}
A service module loaded by Turboprop and ready to produce derivations.

Input attrset: `name`, `namespace`, `kubeVersion`, and `apiVersions`, as documented above.

Output attrset:

- `out`: The service's derivation.
- `extra` (default `null`): Extra objects as a YAML file.

{name, namespace, kubeVersion, apiVersions, ...} -> <derivation: a YAML file>
The signature of a generic builder.
{DEFAULT?, ...}

The signature of the `nsMetadata` argument to `mkDerivation`.

Each namespace is represented by an attrset; this attrset is copied to the resulting namespace's `metadata` key at build time. For example, this is equivalent to `k label ns/default istio.io/rev=stable`:
default = {
  labels = {
    "istio.io/rev" = "stable";
  };
};
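At build time, this surfaces in `result/namespaces.yaml` as roughly the following (a sketch of the generated object):

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio.io/rev: stable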
Metadata to be applied to all namespaces can be set in the special attrset `DEFAULT`:
DEFAULT = {
  labels = {
    "istio.io/rev" = "stable";
  };
};

# Opt a namespace out of the defaults.
gateway-system = {};
kube-system = {};
longhorn-system = {};

# To set data beyond the defaults,
# opt the namespace back in.
default =
  DEFAULT
  // {
    labels = {
      "words-words-words-words" = "punchline";
    };
  };
N.B.: namespaces set in `nsMetadata` but not present in the service tree aren't created.
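To tie it together, a sketch of passing the above through `mkDerivation` (values illustrative):

turbo.mkDerivation {
  charts = nixhelm.chartsDerivations.${system};
} {
  # <...> pname, version, src, serviceRoot as before
  nsMetadata = {
    DEFAULT.labels."istio.io/rev" = "stable";
    gateway-system = {};  # opt out of the defaults
  };
}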