
Convert to RST and significantly expand

Antonio Gurgel 2023-11-28 20:49:12 -08:00
parent ce881f0c0d
commit f9683b4411
2 changed files with 176 additions and 69 deletions

README.md (deleted)

@@ -1,69 +0,0 @@
# Turboprop
Problem: I have twenty or thirty Helm releases, all of which I template semi-manually to [retain WYSIWYG control](https://github.com/kubernetes-sigs/kustomize/blob/bfb00ecb2747dc711abfc27d9cf788ca1d7c637b/examples/chart.md#best-practice). Deploying new applications involves a lot of copy-pasta.
Solution: Use Nix. With Nix, I can [ensure chart integrity](), [generate repetitive data in subroutines](), and [easily inherit data from elsewhere]().
## Prior art
Without [farcaller's "Nix and Kubernetes: Deployments Done Right"](https://media.ccc.de/v/nixcon-2023-35290-nix-and-kubernetes-deployments-done-right) ([notes](https://gist.github.com/farcaller/c87c03fbb55eaeaeb840b938455f37ff)), this project would not exist.
I also used heywoodlh's [Kubernetes flake](https://github.com/heywoodlh/flakes/blob/aa5a52a/kube/flake.nix) as a starting point early on.
## Usage
```nix
{ charts, lib, user, ... }: { # 1
  builder = lib.builders.helmChart; # 2
  args = { # 3
    chart = charts.jetstack.cert-manager;
    values = {
      featureGates = "ExperimentalGatewayAPISupport=true";
      installCRDs = true;
      prometheus = {
        enabled = true;
        servicemonitor = {
          enabled = true;
          prometheusInstance = "monitoring";
        };
      };
      startupapicheck.podLabels."sidecar.istio.io/inject" = "false";
    };
  };
  extraObjects = [ # 4
    {
      apiVersion = "cert-manager.io/v1";
      kind = "ClusterIssuer";
      metadata.name = user.vars.k8sCert.name; # 5
      spec.ca.secretName = user.vars.k8sCert.name;
    }
  ];
}
```
### lib
#### flake builders
##### charts
Signature, etc.
## Architecture
Services expected to provide custom APIs (e.g.: Gateway API,
Istio, Longhorn) go in `./system`. All others in `./services`,
including system-service charts dependent on other APIs.
This prevents infinite recursion when gathering APIs.
Each of the leaves of the `services` attrsets is a derivation
(explained better in `lib/flake-builders.nix`).
Here, they are gathered into one mega-derivation, with Kustomizations
at each level for usage with `k apply -k $path`.
### namespaces
Assign extra metadata in `namespaces.nix`. For example,
`svc = {labels."istio.io/rev" = "1-18-1"}`
is the equivalent of
`k label ns/svc istio.io/rev=1-18-1`

README.rst (new file)

@@ -0,0 +1,176 @@
.. vim: set et sw=2:
#########
Turboprop
#########
Problem: You have twenty or thirty Helm releases, all of which you template semi-manually to `retain WYSIWYG control`_. Deploying new applications involves a lot of copy-pasta.
Solution: Use Nix. With Nix, you can `ensure chart integrity`_, `generate repetitive data in subroutines`_, and `easily reuse variable data`_.
Turboprop templates your Helm charts for you, making an individual Nix derivation of each one; each of these derivations is then gathered into a mega-derivation complete with Kustomizations. In short, you're two commands away from instant cluster reconciliation::
  nix build && kubectl diff -k ./result
.. _retain WYSIWYG control: https://github.com/kubernetes-sigs/kustomize/blob/bfb00ecb2747dc711abfc27d9cf788ca1d7c637b/examples/chart.md#best-practice
.. _ensure chart integrity: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/charts/intel/device-plugins-gpu/default.nix
.. _generate repetitive data in subroutines: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/services/svc/gateway/default.nix#L8-9
.. _easily reuse variable data: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/system/kube-system/csi-driver-nfs/default.nix#L16
*********
Prior art
*********
Without `Vladimir Pouzanov's "Nix and Kubernetes: Deployments Done Right"`_ (and `its notes`_), this project would not exist.
I also used `heywoodlh's Kubernetes flake`_ as a starting point early on.
.. _`Vladimir Pouzanov's "Nix and Kubernetes: Deployments Done Right"`: https://media.ccc.de/v/nixcon-2023-35290-nix-and-kubernetes-deployments-done-right
.. _its notes: https://gist.github.com/farcaller/c87c03fbb55eaeaeb840b938455f37ff
.. _heywoodlh's Kubernetes flake: https://github.com/heywoodlh/flakes/blob/aa5a52a/kube/flake.nix
*****
Usage
*****
===============
Getting started
===============
Add this flake to your flake's inputs, along with ``nixpkgs`` and ``flake-utils``::
  {
    inputs = {
      nixpkgs.url = "github:NixOS/nixpkgs";
      flake-utils.url = "github:numtide/flake-utils";
      turboprop = {
        url = "sourcehut:~goorzhel/turboprop";
        inputs.nixpkgs.follows = "nixpkgs";
      };
    };
    <...>
  }
Next, split your flake's ``outputs`` into two parts:

#. one that "rakes" your module data into a tree (nested attrset) and runs in pure Nix; and
#. one that builds derivations from that data, using your current system's ``nixpkgs``.
In practice::
  {
    <...>
    outputs = {self, nixpkgs, flake-utils, turboprop}:
      let rake = turboprop.rake; in
      {
        systemServiceData = rake.leaves ./system;
        serviceData = rake.leaves ./services;
        repos = rake.leaves ./charts;
        namespaces = rake.namespaces {
          roots = [./system ./services];
          extraMetadata = import ./namespaces.nix;
        };
      }
      // flake-utils.lib.eachDefaultSystem (system: let
        pkgs = import nixpkgs { inherit system; };
        turbo = turboprop.packages.${system};
        # We'll get back to the rest of this
        namespaces = {};
        paths = {};
      in {
        packages = {
          default = turbo.mkDerivation {
            # This too
          };
        };
      });
  }
Now set that aside for the time being.
======================
Example service module
======================
This is a module that creates a *templated chart derivation*, namely of cert-manager::
  { charts, lib, user, ... }: { # 1
    builder = lib.builders.helmChart; # 2
    args = { # 3
      chart = charts.jetstack.cert-manager;
      values = {
        featureGates = "ExperimentalGatewayAPISupport=true";
        installCRDs = true;
        prometheus = {
          enabled = true;
          servicemonitor = {
            enabled = true;
            prometheusInstance = "monitoring";
          };
        };
        startupapicheck.podLabels."sidecar.istio.io/inject" = "false";
      };
    };
    extraObjects = [ # 4
      {
        apiVersion = "cert-manager.io/v1";
        kind = "ClusterIssuer";
        metadata.name = user.vars.k8sCert.name; # 5
        spec.ca.secretName = user.vars.k8sCert.name;
      }
    ];
  }
#. The module takes as input:

   #. a tree (nested attrset) of *untemplated* chart derivations;
   #. the Turboprop library; and
   #. user data specific to your flake. If you have none, you may pass ``{ charts, lib, ... }`` instead (see the sketch below).

#. The module has the output signature ``{builder, args, extraObjects}``.
#. ``builder`` is the Turboprop builder that will create your derivation. Most often, you will use ``helmChart``; other builders exist for scenarios such as deploying a `collection of Kubernetes objects`_ or a `single remote YAML file`_. You may even `define your own builder`_.
#. ``args`` are arguments passed to the builder. Refer to each builder's signature below.
#. ``extraObjects`` are objects to deploy alongside the chart.
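For instance, a minimal module that needs no user data might look like the following sketch; the ``metallb`` chart attribute and the Helm value shown are illustrative placeholders, not anything this flake ships::

  # A sketch only: it assumes a chart exists at charts.metallb.metallb in your
  # ./charts tree; substitute whatever chart you actually mirror there.
  { charts, lib, ... }: {
    builder = lib.builders.helmChart;
    args = {
      chart = charts.metallb.metallb;
      values.speaker.logLevel = "info";  # hypothetical Helm value
    };
    extraObjects = [ ];                  # nothing extra to deploy
  }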
.. _collection of Kubernetes objects:
.. _single remote YAML file:
.. _define your own builder:
==============
flake builders
==============

------
charts
------
Signature, etc.
************
Architecture
************
Services expected to provide custom APIs (e.g., Gateway API, Istio, Longhorn) go in ``./system``; everything else goes in ``./services``, including system-level charts that themselves depend on such APIs. This split prevents infinite recursion when gathering APIs.

Each leaf of the ``services`` attrsets is a derivation (explained in more detail in ``lib/flake-builders.nix``). These are then gathered into one mega-derivation, with a Kustomization at each level for use with ``kubectl apply -k $path``.
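For example, the repository linked in the introduction arranges its modules along these lines (the paths are taken from the links above, plus the ``namespaces.nix`` imported in the flake skeleton)::

  charts/intel/device-plugins-gpu/default.nix     # an untemplated chart
  system/kube-system/csi-driver-nfs/default.nix   # a system-level service
  services/svc/gateway/default.nix                # an ordinary service
  namespaces.nix                                  # extra namespace metadata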
==========
namespaces
==========
Assign extra namespace metadata in ``namespaces.nix``. For example, ``svc = { labels."istio.io/rev" = "1-18-1"; }`` is the equivalent of ``kubectl label ns/svc istio.io/rev=1-18-1``.
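Putting that together, a ``namespaces.nix`` might look like the sketch below. This assumes the file is a plain attribute set, as the ``import ./namespaces.nix`` in the flake skeleton suggests; the ``monitoring`` entry is purely illustrative::

  # namespaces.nix -- a sketch; only the svc label comes from this README.
  {
    svc.labels."istio.io/rev" = "1-18-1";
    monitoring.labels."example.com/managed-by" = "turboprop";  # hypothetical
  }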