.. vim: set et sw=2:

#########
Turboprop
#########

Problem: You have twenty or thirty Helm releases, all of which you template semi-manually to `retain WYSIWYG control`_. Deploying new applications involves a lot of copy-pasta.

Solution: Use Nix. With Nix, you can `ensure chart integrity`_, `generate repetitive data in subroutines`_, and `easily reuse variable data`_.

Turboprop templates your Helm charts for you, making an individual Nix derivation of each one; each of these derivations is then gathered into a mega-derivation complete with Kustomizations. In short, you're two commands away from instant cluster reconciliation::

  nix build && kubectl diff -k ./result

.. _retain WYSIWYG control: https://github.com/kubernetes-sigs/kustomize/blob/bfb00ecb2747dc711abfc27d9cf788ca1d7c637b/examples/chart.md#best-practice
.. _ensure chart integrity: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/charts/intel/device-plugins-gpu/default.nix
.. _generate repetitive data in subroutines: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/services/svc/gateway/default.nix#L8-9
.. _easily reuse variable data: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/system/kube-system/csi-driver-nfs/default.nix#L16

*********
Prior art
*********

Without `Vladimir Pouzanov`_'s "`Nix and Kubernetes\: Deployments Done Right`_" (and `its notes`_), this project would not exist.

I also used `heywoodlh's Kubernetes flake`_ as a starting point early on.

.. _Vladimir Pouzanov: https://github.com/farcaller
.. _Nix and Kubernetes\: Deployments Done Right: https://media.ccc.de/v/nixcon-2023-35290-nix-and-kubernetes-deployments-done-right
.. _its notes: https://gist.github.com/farcaller/c87c03fbb55eaeaeb840b938455f37ff
.. _heywoodlh's Kubernetes flake: https://github.com/heywoodlh/flakes/blob/aa5a52a/kube/flake.nix 


**********************
Usage and architecture
**********************

Getting started
===============

Add this flake to your flake's inputs, along with ``nixpkgs`` and ``flake-utils``::

  {
    inputs = {
      nixpkgs.url = "github:NixOS/nixpkgs";
      flake-utils.url = "github:numtide/flake-utils";
      turboprop = {
        url = "sourcehut:~goorzhel/turboprop";
        inputs.nixpkgs.follows = "nixpkgs";
      };
    };
    <...>
  }


Next, split your flake's ``outputs`` into two sections:

#. One running in pure Nix that "rakes" your module data into a *tree* (more on that later); and
#. one that builds derivations from your module data, using your current system's ``nixpkgs``.

In practice::

  {
    <...>
    outputs = {self, nixpkgs, flake-utils, turboprop}:
      let rake = turboprop.rake; in
    {
      # I'll explain the distinction in a later section
      systemServiceData = rake.leaves ./system;
      serviceData = rake.leaves ./services;

      repos = rake.leaves ./charts;

      namespaces = rake.namespaces {
        roots = [./system ./services];
        extraMetadata = import ./namespaces.nix;
      };
    }
    // flake-utils.lib.eachDefaultSystem (system: let
      pkgs = import nixpkgs {inherit system;};

      turbo = turboprop.packages.${system};

      # We'll get back to this too
      flakeBuilders = turbo.flakeBuilders {};
      namespaces = <...>;
      paths = <...>;
    in {
      packages.default = turbo.mkDerivation {
        # This too
      };
    }
    );
  }

Now set that aside for the time being.

Example service module
======================

This is a module that creates a *templated chart derivation*, in this case for cert-manager::

    { charts, lib, user, ... }: {  # 1
      builder = lib.builders.helmChart; # 1.2; 2.1
      args = {                          # 2.2
        chart = charts.jetstack.cert-manager; # 1.1
        values = {
          featureGates = "ExperimentalGatewayAPISupport=true";
          installCRDs = true;
          prometheus = {
            enabled = true;
            servicemonitor = {
              enabled = true;
              prometheusInstance = "monitoring";
            };
          };
          startupapicheck.podLabels."sidecar.istio.io/inject" = "false";
        };
      };
      extraObjects = [  # 2.3
        {
          apiVersion = "cert-manager.io/v1";
          kind = "ClusterIssuer";
          metadata.name = user.vars.k8sCert.name; # 1.3
          spec.ca.secretName = user.vars.k8sCert.name;
        }
      ];
    }


1. The module takes as input:

  #. A tree of *untemplated* chart derivations;
  #. the Turboprop library; and
  #. user data specific to your flake. You may `omit any of these`_ if they're not used.

2. The module has the output signature ``{builder, args, extraObjects}``.

  #. ``builder`` is the Turboprop builder that will create your derivation. Most often, you will use ``helmChart``; other builders exist for scenarios such as deploying a `collection of Kubernetes objects`_ or a `single remote YAML file`_. You may even `define your own builder`_.
  #. ``args`` are arguments passed to the builder. Refer to each builder's signature below.
  #. ``extraObjects`` are objects to deploy alongside the chart.

.. _omit any of these: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/system/gateway-system/gateway-api/default.nix#L1
.. _collection of Kubernetes objects: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/services/svc/gateway/default.nix#L12
.. _single remote YAML file: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/system/gateway-system/gateway-api/default.nix#L2
.. _define your own builder: https://git.sr.ht/~goorzhel/kubernetes/tree/f3cba6831621288228581b7ad7b6762d6d58a966/item/services/svc/breezewiki/default.nix#L6
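
For contrast, a module can be nearly as small as its output signature. A
minimal sketch (the chart coordinates are illustrative; any leaf of the
chart tree described in the next section works)::

  {charts, lib, ...}: {
    builder = lib.builders.helmChart;
    args = {
      chart = charts.kubernetes-dashboard.kubernetes-dashboard;
      values = {};  # template with the chart's default values
    };
    extraObjects = [];
  }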

Trees of Nix modules
====================

Turboprop's main operating concept is a *tree* of Nix modules, both in the filesystem sense (nested directories) and the Nix sense (nested attrsets). A service tree, then, consists of

#. an arbitrarily-named root, such as ``./services``, which contains
#. directories representing Kubernetes namespaces, which each contain
#. Nix modules representing a templated deployment.

This metaphor extends to charts, too. Both Turboprop and nixhelm, from which Turboprop borrows heavily, contain a chart tree:

#. an arbitrarily-named root, ``./charts``, which contains
#. directories representing Helm repositories, which each contain
#. Nix modules representing a(n untemplated) chart.

In practice::

  $ nix run nixpkgs#tree -- ~/src/kubernetes/{chart,service}s --noreport
  /home/ag/src/kubernetes/charts
  ├── kubernetes-dashboard
  │   └── kubernetes-dashboard
  │       └── default.nix
  <...>
  /home/ag/src/kubernetes/services
  <...>
  ├── istio-system
  │   ├── 1-18-1
  │   │   └── default.nix
  │   └── kiali
  │       └── default.nix
  └── svc
      ├── breezewiki
      │   └── default.nix
      <...>
      └── syncthing
          └── default.nix

You may have noticed that if neither nixhelm nor Turboprop provides a chart you need, you can supply your own within your flake. (PRs welcome, though.)
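
A chart module pins a chart to a repository, version, and hash. Its exact
signature belongs in the Fetchers reference below; as a sketch, the following
borrows nixhelm's attribute names (``repo``, ``chart``, ``version``, and
``chartHash`` are an assumption, not confirmed Turboprop API), with the
placeholders left for you to fill in::

  # charts/<repo-name>/<chart-name>/default.nix
  {
    repo = "https://kubernetes.github.io/dashboard";
    chart = "kubernetes-dashboard";
    version = "<version>";
    chartHash = "sha256-<...>";
  }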

The modules' signatures will be covered in the following section.

Builders and flake builders
===========================

*********
Reference
*********

Fetchers
========

Builders
========

Flake builders
==============

************
Architecture
************

Services expected to provide custom APIs (e.g., Gateway API,
Istio, Longhorn) go in ``./system``. All others go in ``./services``,
including system-service charts that depend on other services' APIs.
This split prevents infinite recursion when gathering APIs.

Each leaf of the service attrsets is a derivation
(explained in more detail in ``lib/flake-builders.nix``).
These are gathered into one mega-derivation, with a Kustomization
at each level, for use with ``kubectl apply -k $path``.
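
Illustratively, a built ``./result`` might be laid out as follows; the exact
file names are a sketch based on the service tree above::

  result
  ├── kustomization.yaml
  ├── istio-system
  │   ├── kustomization.yaml
  │   └── <...>
  └── svc
      ├── kustomization.yaml
      ├── breezewiki.yaml
      └── syncthing.yaml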

Namespaces
==========

Assign extra namespace metadata in ``namespaces.nix``. For example,
``svc = {labels."istio.io/rev" = "1-18-1";}``
is the equivalent of
``kubectl label ns/svc istio.io/rev=1-18-1``.
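
Putting it together, a minimal ``namespaces.nix`` might read as follows (the
``svc`` entry comes from the example above; anything beyond labels is an
assumption about the metadata shape)::

  # namespaces.nix: extra metadata to attach to each Namespace.
  {
    svc = {
      labels."istio.io/rev" = "1-18-1";  # kubectl label ns/svc istio.io/rev=1-18-1
    };
  }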