32 Commits

Author SHA1 Message Date
UGA Innovation Factory
6f7e95b9f9 fix: Fail the CI if formatting fails
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m40s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 15s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 8s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 9s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 20s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 13s
CI / Build and Publish Documentation (push) Successful in 11s
2026-01-30 18:26:26 -05:00
UGA Innovation Factory
7c07727150 feat: USDA-dash now uses encrypted .env files
All checks were successful
2026-01-30 23:19:38 +00:00
UGA Innovation Factory
7e6e8d5e0f chore: Update flake lock
Some checks failed
2026-01-30 23:07:40 +00:00
UGA Innovation Factory
c6e0a0aedf chore: Update flake lock
Some checks failed
2026-01-30 22:55:30 +00:00
UGA Innovation Factory
4b4e6a2873 chore: Update flake lock
Some checks failed
2026-01-30 22:53:59 +00:00
UGA Innovation Factory
40a9f9f5a6 chore: Update flake lock
Some checks failed
2026-01-30 22:52:27 +00:00
UGA Innovation Factory
14a61da9ed chore: Update flake lock
Some checks failed
2026-01-30 22:38:14 +00:00
UGA Innovation Factory
a3c8e0640a chore: Update flake lock
Some checks failed
2026-01-30 22:26:29 +00:00
UGA Innovation Factory
01fc5518c1 chore: Update flake lock
Some checks failed
2026-01-30 22:24:31 +00:00
UGA Innovation Factory
a2d4f71a77 chore: Update flake lock
Some checks failed
2026-01-30 22:21:13 +00:00
UGA Innovation Factory
e0cafb7f66 chore: Update usda-docker hash
Some checks failed
2026-01-30 22:18:59 +00:00
UGA Innovation Factory
ffbd7a221d Set default timezone for LXC containers to fix Docker /etc/localtime mounts
Some checks failed
2026-01-30 22:00:15 +00:00
UGA Innovation Factory
d7922247d2 Fix activation script to always regenerate age keys
Some checks failed
2026-01-30 21:51:19 +00:00
UGA Innovation Factory
31c829f502 Add SSH-to-age conversion activation script for reliable secret decryption
Some checks failed
2026-01-30 21:48:57 +00:00
UGA Innovation Factory
e3bae02f58 Re-encrypt usda-vision-env with correct host key
Some checks failed
2026-01-30 21:46:02 +00:00
UGA Innovation Factory
aa6d9d5691 Revert experimental changes, use ragenix defaults
Some checks failed
2026-01-30 21:45:55 +00:00
UGA Innovation Factory
87045a518f Use rage instead of age for SSH key decryption support
Some checks failed
2026-01-30 21:40:04 +00:00
UGA Innovation Factory
dffe817e47 Update usda-dash host key and re-encrypt secret
Some checks failed
2026-01-30 21:29:39 +00:00
UGA Innovation Factory
23da829033 feat: Use age for env secret management
Some checks failed
2026-01-30 20:54:31 +00:00
UGA Innovation Factory
dd19d1488a fix: Convert ssh keys to age keys
All checks were successful
2026-01-30 19:41:34 +00:00
UGA Innovation Factory
862ae2c864 chore: Run nix fmt
All checks were successful
2026-01-30 19:19:38 +00:00
UGA Innovation Factory
3efba93424 feat: Ragenix secret management per host
Some checks failed
2026-01-30 19:19:20 +00:00
UGA Innovation Factory
2e4602cbf3 refactor: Move macCaseBuilder into athenix.lib
All checks were successful
2026-01-27 22:13:32 +00:00
UGA Innovation Factory
ab3710b5f6 chore: Run nix fmt
All checks were successful
2026-01-27 21:44:23 +00:00
UGA Innovation Factory
863cd1ea95 fix: Remove unused or broken config outputs for nix eval of flake components
Some checks failed
2026-01-27 21:43:58 +00:00
UGA Innovation Factory
d8cee7e79b refactor: Make hw definitions modules with mkIf guards
Some checks failed
2026-01-27 16:30:54 -05:00
UGA Innovation Factory
063336f736 refactor: Fleet and sw behind mkIf guards 2026-01-27 16:11:36 -05:00
UGA Innovation Factory
85653e632f fix: Enable sw by default when imported
All checks were successful
2026-01-27 15:36:31 -05:00
Hunter David Halloran
1533382ff2 Merge pull request 'fix: Lazily fetch external modules only if needed' (#32) from external-refactor into main
All checks were successful
Reviewed-on: http://git.factory.uga.edu/UGA-Innovation-Factory/athenix/pulls/32
2026-01-27 20:06:09 +00:00
UGA Innovation Factory
540f5feb78 fix: Lazily fetch external modules only if needed 2026-01-27 15:05:52 -05:00
UGA Innovation Factory
1a7bf29448 docs: Update inline code docs for LSP help
All checks were successful
2026-01-27 14:48:07 -05:00
UGA Innovation Factory
13fdc3a7a1 feat: Update auto-docs
All checks were successful
2026-01-27 14:25:37 -05:00
52 changed files with 2824 additions and 1031 deletions

View File

@@ -26,18 +26,23 @@ jobs:
   format-check:
     name: Format Check
     runs-on: [self-hosted, nix-builder]
+    timeout-minutes: 5
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
       - name: Check formatting
+        timeout-minutes: 3
        run: |
-          nix fmt **/*.nix
-          if ! git diff --quiet; then
+          set -euo pipefail
+          echo "Checking code formatting..."
+          output=$(nix fmt **/*.nix 2>&1)
+          if [ -n "$output" ]; then
            echo "::error::Code is not formatted. Please run 'nix fmt **/*.nix' locally."
-            git diff
+            echo "$output"
            exit 1
          fi
+          echo "All files are properly formatted"
 
   eval-configs:
     name: Evaluate Key Configurations
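The rewritten step above stops relying on `git diff --quiet` (which could pass even after the formatter rewrote files) and instead fails whenever the formatter prints anything. A minimal sketch of that control flow, with a hypothetical `run_format_check` helper standing in for the real `output=$(nix fmt **/*.nix 2>&1)` capture (this is not the repository's actual script):

```shell
# Sketch only: $1 stands in for the captured formatter output.
run_format_check() {
  format_output="$1"
  if [ -n "$format_output" ]; then
    # mirrors: echo "::error::Code is not formatted..." ; exit 1
    echo "not formatted"
    return 1
  fi
  echo "All files are properly formatted"
}

run_format_check ""                    # clean tree: check passes
run_format_check "dirty.nix" || true   # formatter produced output: check fails
```

The design point is that any output at all from the formatter is treated as "files were not formatted", which makes the check independent of git state.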

flake.lock generated
View File

@@ -239,6 +239,24 @@
     "inputs": {
       "systems": "systems_4"
     },
+    "locked": {
+      "lastModified": 1731533236,
+      "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
+      "owner": "numtide",
+      "repo": "flake-utils",
+      "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
+      "type": "github"
+    },
+    "original": {
+      "owner": "numtide",
+      "repo": "flake-utils",
+      "type": "github"
+    }
+  },
+  "flake-utils_4": {
+    "inputs": {
+      "systems": "systems_5"
+    },
     "locked": {
       "lastModified": 1681202837,
       "narHash": "sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=",
@@ -636,6 +654,7 @@
       "nixos-wsl": "nixos-wsl",
       "nixpkgs": "nixpkgs_2",
       "nixpkgs-old-kernel": "nixpkgs-old-kernel",
+      "usda-vision": "usda-vision",
       "vscode-server": "vscode-server"
     }
   },
@@ -720,6 +739,21 @@
       "type": "github"
     }
   },
+  "systems_5": {
+    "locked": {
+      "lastModified": 1681028828,
+      "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
+      "owner": "nix-systems",
+      "repo": "default",
+      "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
+      "type": "github"
+    },
+    "original": {
+      "owner": "nix-systems",
+      "repo": "default",
+      "type": "github"
+    }
+  },
   "treefmt-nix": {
     "inputs": {
       "nixpkgs": [
@@ -742,13 +776,34 @@
       "type": "github"
     }
   },
-  "vscode-server": {
+  "usda-vision": {
     "inputs": {
       "flake-utils": "flake-utils_3",
       "nixpkgs": [
         "nixpkgs"
       ]
     },
+    "locked": {
+      "lastModified": 1769814438,
+      "narHash": "sha256-DEZrmqpqbrd996W5p1r4GA1C8Jmo31n3N642ccS0deY=",
+      "ref": "refs/heads/main",
+      "rev": "78bfcf02612817a2cee1edbf92deeac9bf657613",
+      "revCount": 126,
+      "type": "git",
+      "url": "https://git.factory.uga.edu/MODEL/usda-vision.git"
+    },
+    "original": {
+      "type": "git",
+      "url": "https://git.factory.uga.edu/MODEL/usda-vision.git"
+    }
+  },
+  "vscode-server": {
+    "inputs": {
+      "flake-utils": "flake-utils_4",
+      "nixpkgs": [
+        "nixpkgs"
+      ]
+    },
     "locked": {
       "lastModified": 1753541826,
       "narHash": "sha256-foGgZu8+bCNIGeuDqQ84jNbmKZpd+JvnrL2WlyU4tuU=",

View File

@@ -59,6 +59,12 @@
       url = "github:nix-community/NixOS-WSL/main";
       inputs.nixpkgs.follows = "nixpkgs";
     };
+
+    # USDA Vision Dashboard application
+    usda-vision = {
+      url = "git+https://git.factory.uga.edu/MODEL/usda-vision.git";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
   };
 
   outputs =

View File

@@ -5,44 +5,60 @@
 # - Bootloader configuration (systemd-boot with Plymouth)
 # - Timezone and locale settings
 # - Systemd sleep configuration
-{ lib, ... }:
+#
+# Only applies to:
+# - Linux systems (not Darwin/macOS)
+# - Systems with actual boot hardware (not containers/WSL)
+{
+  config,
+  lib,
+  pkgs,
+  ...
+}:
+let
+  # Check if this is a bootable system (not container, not WSL)
+  isBootable = !(config.boot.isContainer or false) && (pkgs.stdenv.isLinux);
+in
 {
-  boot = {
-    loader.systemd-boot.enable = lib.mkDefault true;
-    loader.efi.canTouchEfiVariables = lib.mkDefault true;
-    plymouth.enable = lib.mkDefault true;
+  config = lib.mkIf isBootable {
+    boot = {
+      loader.systemd-boot.enable = lib.mkDefault true;
+      loader.efi.canTouchEfiVariables = lib.mkDefault true;
+      plymouth.enable = lib.mkDefault true;
 
-    # Enable "Silent boot"
-    consoleLogLevel = 3;
-    initrd.verbose = false;
+      # Enable "Silent boot"
+      consoleLogLevel = 3;
+      initrd.verbose = false;
 
-    # Hide the OS choice for bootloaders.
-    # It's still possible to open the bootloader list by pressing any key
-    # It will just not appear on screen unless a key is pressed
-    loader.timeout = lib.mkDefault 0;
-  };
+      # Hide the OS choice for bootloaders.
+      # It's still possible to open the bootloader list by pressing any key
+      # It will just not appear on screen unless a key is pressed
+      loader.timeout = lib.mkDefault 0;
+    };
 
-  # Set your time zone.
-  time.timeZone = "America/New_York";
+    # Set your time zone.
+    time.timeZone = "America/New_York";
 
-  # Select internationalisation properties.
-  i18n.defaultLocale = "en_US.UTF-8";
-  i18n.extraLocaleSettings = {
-    LC_ADDRESS = "en_US.UTF-8";
-    LC_IDENTIFICATION = "en_US.UTF-8";
-    LC_MEASUREMENT = "en_US.UTF-8";
-    LC_MONETARY = "en_US.UTF-8";
-    LC_NAME = "en_US.UTF-8";
-    LC_NUMERIC = "en_US.UTF-8";
-    LC_PAPER = "en_US.UTF-8";
-    LC_TELEPHONE = "en_US.UTF-8";
-    LC_TIME = "en_US.UTF-8";
-  };
+    # Select internationalisation properties.
+    i18n.defaultLocale = "en_US.UTF-8";
+    i18n.extraLocaleSettings = {
+      LC_ADDRESS = "en_US.UTF-8";
+      LC_IDENTIFICATION = "en_US.UTF-8";
+      LC_MEASUREMENT = "en_US.UTF-8";
+      LC_MONETARY = "en_US.UTF-8";
+      LC_NAME = "en_US.UTF-8";
+      LC_NUMERIC = "en_US.UTF-8";
+      LC_PAPER = "en_US.UTF-8";
+      LC_TELEPHONE = "en_US.UTF-8";
+      LC_TIME = "en_US.UTF-8";
+    };
 
-  systemd.sleep.extraConfig = ''
-    SuspendState=freeze
-    HibernateDelaySec=2h
-  '';
+    systemd.sleep.extraConfig = ''
+      SuspendState=freeze
+      HibernateDelaySec=2h
+    '';
+  };
 }

View File

@@ -7,16 +7,148 @@
 {
   config,
   lib,
+  inputs,
   ...
 }:
+let
+  # Import all hardware modules so they're available for enabling
+  hwTypes = import ../hw { inherit inputs; };
+  hwModules = lib.attrValues hwTypes;
+
+  # User account submodule definition
+  userSubmodule = lib.types.submodule {
+    options = {
+      enable = lib.mkOption {
+        type = lib.types.bool;
+        default = false;
+        description = "Whether this user account is enabled on this system.";
+      };
+      isNormalUser = lib.mkOption {
+        type = lib.types.bool;
+        default = true;
+        description = "Whether this is a normal user account (vs system user).";
+      };
+      description = lib.mkOption {
+        type = lib.types.nullOr lib.types.str;
+        default = null;
+        description = "Full name or description of the user (GECOS field).";
+        example = "John Doe";
+      };
+      extraGroups = lib.mkOption {
+        type = lib.types.listOf lib.types.str;
+        default = [ ];
+        description = "Additional groups for the user (wheel, docker, etc.).";
+      };
+      hashedPassword = lib.mkOption {
+        type = lib.types.str;
+        default = "!";
+        description = "Hashed password for the user account. Default '!' means locked.";
+      };
+      extraPackages = lib.mkOption {
+        type = lib.types.listOf lib.types.package;
+        default = [ ];
+        description = "Additional system packages available to this user.";
+      };
+      excludePackages = lib.mkOption {
+        type = lib.types.listOf lib.types.package;
+        default = [ ];
+        description = "System packages to exclude for this user.";
+      };
+      homePackages = lib.mkOption {
+        type = lib.types.listOf lib.types.package;
+        default = [ ];
+        description = "Packages to install in the user's home-manager profile.";
+      };
+      extraImports = lib.mkOption {
+        type = lib.types.listOf lib.types.path;
+        default = [ ];
+        description = "Additional home-manager modules to import for this user.";
+      };
+      external = lib.mkOption {
+        type = lib.types.nullOr (
+          lib.types.oneOf [
+            lib.types.path
+            (lib.types.submodule {
+              options = {
+                url = lib.mkOption {
+                  type = lib.types.str;
+                  description = "Git repository URL to fetch user configuration from.";
+                };
+                rev = lib.mkOption {
+                  type = lib.types.str;
+                  description = "Git commit hash, tag, or branch to fetch.";
+                };
+                submodules = lib.mkOption {
+                  type = lib.types.bool;
+                  default = false;
+                  description = "Whether to fetch Git submodules.";
+                };
+              };
+            })
+          ]
+        );
+        default = null;
+        description = "External dotfiles repository (user.nix + optional nixos.nix).";
+      };
+      opensshKeys = lib.mkOption {
+        type = lib.types.listOf lib.types.str;
+        default = [ ];
+        description = "SSH public keys for the user (authorized_keys).";
+      };
+      shell = lib.mkOption {
+        type = lib.types.nullOr (
+          lib.types.enum [
+            "bash"
+            "zsh"
+            "fish"
+            "tcsh"
+          ]
+        );
+        default = "bash";
+        description = "Default shell for the user.";
+      };
+      editor = lib.mkOption {
+        type = lib.types.nullOr (
+          lib.types.enum [
+            "vim"
+            "neovim"
+            "emacs"
+            "nano"
+            "code"
+          ]
+        );
+        default = "neovim";
+        description = "Default text editor for the user (sets EDITOR).";
+      };
+      useZshTheme = lib.mkOption {
+        type = lib.types.bool;
+        default = true;
+        description = "Whether to apply the system Zsh theme (Oh My Posh).";
+      };
+      useNvimPlugins = lib.mkOption {
+        type = lib.types.bool;
+        default = true;
+        description = "Whether to apply the system Neovim configuration.";
+      };
+    };
+  };
+in
 {
   imports = [
     ./fs.nix
     ./boot.nix
     ./user-config.nix
+    ./fleet-option.nix
     ../sw
-  ];
+    inputs.vscode-server.nixosModules.default
+    inputs.nixos-wsl.nixosModules.default
+  ]
+  ++ hwModules;
+
+  options.athenix.users = lib.mkOption {
+    type = lib.types.attrsOf userSubmodule;
+    default = { };
+    description = "User accounts configuration. Set enable=true for users that should exist on this system.";
+  };
 
   options.athenix = {
     forUser = lib.mkOption {
View File

@@ -20,8 +20,6 @@ let
   # Import fleet-option.nix (defines athenix.fleet) and inventory.nix (sets values)
   # We use a minimal module here to avoid circular dependencies from common.nix's imports
 
-  hostTypes = config.athenix.hwTypes;
-
   # Helper to create a single NixOS system configuration
   mkHost =
     {
@@ -36,8 +34,20 @@
       externalModulePath =
         if externalModuleThunk != null then
           let
-            # Force evaluation of the thunk (fetchGit, fetchTarball, etc.)
-            fetchedPath = externalModuleThunk;
+            # Force evaluation of the thunk
+            fetchedPath =
+              if
+                builtins.isAttrs externalModuleThunk
+                && externalModuleThunk ? _type
+                && externalModuleThunk._type == "lazy-fetchGit"
+              then
+                # New format: lazy fetchGit - only execute when needed
+                (builtins.fetchGit {
+                  inherit (externalModuleThunk) url rev submodules;
+                }).outPath
+              else
+                # Legacy: pre-fetched derivation or path
+                externalModuleThunk;
             # Extract outPath from fetchGit/fetchTarball results
             extractedPath =
               if builtins.isAttrs fetchedPath && fetchedPath ? outPath then fetchedPath.outPath else fetchedPath;
@@ -61,10 +71,19 @@
           name: user:
           if (user ? external && user.external != null) then
            let
+              # Resolve external path (lazy fetchGit if needed)
              externalPath =
-                if builtins.isAttrs user.external && user.external ? outPath then
+                if builtins.isAttrs user.external && user.external ? url && user.external ? rev then
+                  # New format: lazy fetchGit
+                  (builtins.fetchGit {
+                    inherit (user.external) url rev;
+                    submodules = user.external.submodules or false;
+                  }).outPath
+                else if builtins.isAttrs user.external && user.external ? outPath then
+                  # Legacy: pre-fetched
                  user.external.outPath
                else
+                  # Direct path
                  user.external;
              nixosModulePath = externalPath + "/nixos.nix";
            in
@@ -102,11 +121,6 @@
           }
         ) userNixosModulePaths;
 
-      # Get the host type module from the hostTypes attribute set
-      typeModule =
-        hostTypes.${hostType}
-          or (throw "Host type '${hostType}' not found. Available types: ${lib.concatStringsSep ", " (lib.attrNames hostTypes)}");
-
       # External module from fetchGit/fetchurl
       externalPathModule =
         if externalModulePath != null then import externalModulePath { inherit inputs; } else { };
@@ -134,18 +148,27 @@
         ];
       };
 
+      # Hardware-specific external modules
+      hwSpecificModules =
+        lib.optional (hostType == "nix-lxc")
+          "${inputs.nixpkgs.legacyPackages.${system}.path}/nixos/modules/virtualisation/proxmox-lxc.nix";
+
       allModules =
         userNixosModules
        ++ [
          ./common.nix
-          typeModule
          overrideModule
          { networking.hostName = hostName; }
+          # Set athenix.host.name for secrets and other modules to use
+          { athenix.host.name = hostName; }
          {
            # Inject user definitions from flake-parts level
            config.athenix.users = lib.mapAttrs (_: user: lib.mapAttrs (_: lib.mkDefault) user) users;
          }
+          # Enable the appropriate hardware module based on hostType
+          { config.athenix.hw.${hostType}.enable = lib.mkDefault true; }
        ]
+        ++ hwSpecificModules
        ++ lib.optional (externalModulePath != null) externalPathModule;
    in
    {
@@ -205,8 +228,24 @@
      # Check if deviceConfig has an 'external' field for lazy evaluation
      hasExternalField = builtins.isAttrs deviceConfig && deviceConfig ? external;
 
-      # Extract external module thunk if present (don't evaluate yet!)
-      externalModuleThunk = if hasExternalField then deviceConfig.external else null;
+      # Extract external module spec (don't evaluate fetchGit yet!)
+      externalModuleThunk =
+        if hasExternalField then
+          let
+            ext = deviceConfig.external;
+          in
+          # New format: { url, rev, submodules? } - create lazy fetchGit thunk
+          if builtins.isAttrs ext && ext ? url && ext ? rev then
{
_type = "lazy-fetchGit";
inherit (ext) url rev;
submodules = ext.submodules or false;
}
# Legacy: pre-fetched or path
else
ext
else
null;
# Remove 'external' from config to avoid conflicts # Remove 'external' from config to avoid conflicts
cleanDeviceConfig = cleanDeviceConfig =
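Read together, the thunk creation above and the forcing logic at the top of this diff form one round trip. A minimal standalone sketch with hypothetical values (not code from this repo) of how the spec is wrapped without touching the network:

```nix
# Standalone sketch of the lazy-fetch round trip (spec values are hypothetical).
let
  spec = {
    url = "https://example.com/config.git";
    rev = "0000000000000000000000000000000000000000";
  };

  # Step 1: wrap the spec in a marker attrset instead of calling fetchGit eagerly
  thunk = {
    _type = "lazy-fetchGit";
    inherit (spec) url rev;
    submodules = spec.submodules or false;
  };
in
# Step 2: because Nix is lazy, inspecting the marker stays purely in-memory;
# builtins.fetchGit only runs when some consumer forces the thunk into a path.
thunk._type == "lazy-fetchGit"
```

The practical consequence is that hosts whose configurations are never evaluated never trigger a Git fetch.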


@@ -125,21 +125,45 @@
           type = lib.types.nullOr (
             lib.types.oneOf [
               lib.types.path
-              lib.types.package
-              lib.types.attrs
+              (lib.types.submodule {
+                options = {
+                  url = lib.mkOption {
+                    type = lib.types.str;
+                    description = "Git repository URL to fetch user configuration from.";
+                    example = "https://github.com/username/dotfiles";
+                  };
+                  rev = lib.mkOption {
+                    type = lib.types.str;
+                    description = "Git commit hash, tag, or branch to fetch.";
+                    example = "abc123def456...";
+                  };
+                  submodules = lib.mkOption {
+                    type = lib.types.bool;
+                    default = false;
+                    description = "Whether to fetch Git submodules.";
+                  };
+                };
+              })
             ]
           );
           default = null;
           description = ''
             External user configuration module from Git or local path.
+
+            Can be either:
+            - A local path: /path/to/config
+            - A Git repository: { url = "..."; rev = "..."; submodules? = false; }
+
+            The Git repository is only fetched when the user is actually enabled.
+
             Should contain user.nix (user options + home-manager config)
             and optionally nixos.nix (system-level config).
           '';
           example = lib.literalExpression ''
-            builtins.fetchGit {
+            {
               url = "https://github.com/username/dotfiles";
-              rev = "abc123...";
+              rev = "abc123def456789abcdef0123456789abcdef012";
+              submodules = false;
             }'';
         };

         opensshKeys = lib.mkOption {


@@ -4,108 +4,129 @@
 # This module defines:
 # - Disko partition layout (EFI, swap, root)
 # - Filesystem options (device, swap size)
+#
+# Only applies to systems with physical disk management needs
+# (not containers, not WSL, not systems without a configured device)
 { config, lib, ... }:
+let
+  cfg = config.athenix.host.filesystem;
+
+  # Only enable disk config if device is set and disko is enabled
+  hasDiskConfig = cfg.device != null && config.disko.enableConfig;
+in
 {
   options.athenix = {
-    host.filesystem = {
+    host = {
+      name = lib.mkOption {
+        type = lib.types.str;
+        description = ''
+          Fleet-assigned hostname for this system.
+          Used for secrets discovery and other host-specific configurations.
+        '';
+      };
+
+      filesystem = {
         device = lib.mkOption {
           type = lib.types.nullOr lib.types.str;
           default = null;
           description = ''
             The main disk device to use for automated partitioning and installation.

             When set, enables disko for declarative disk management with:
             - 1GB EFI boot partition
             - Optional swap partition (see swapSize)
             - Root partition using remaining space

             Leave null for systems that don't need disk partitioning (containers, WSL).
           '';
           example = "/dev/nvme0n1";
         };
         useSwap = lib.mkOption {
           type = lib.types.bool;
           default = true;
           description = ''
             Whether to create and use a swap partition.
             Disable for systems with ample RAM or SSDs where swap is undesirable.
           '';
         };
         swapSize = lib.mkOption {
           type = lib.types.nullOr lib.types.str;
           default = null;
           description = ''
             Size of the swap partition (e.g., "16G", "32G").

             Recommended sizes:
             - 8-16GB for desktops with 16GB+ RAM
             - 32GB for laptops (enables hibernation)
             - Match RAM size for systems <8GB RAM
           '';
           example = "32G";
         };
+      };
     };
   };

-  config = {
-    # ========== Disk Partitioning (Disko) ==========
-    disko.enableConfig = lib.mkDefault (config.athenix.host.filesystem.device != null);
+  config = lib.mkMerge [
+    {
+      # ========== Disk Partitioning (Disko) ==========
+      disko.enableConfig = lib.mkDefault (cfg.device != null);
+    }

-    disko.devices = {
+    (lib.mkIf hasDiskConfig {
+      disko.devices = {
         disk.main = {
           type = "disk";
-          device = config.athenix.host.filesystem.device;
+          device = cfg.device;
           content = {
             type = "gpt";
             partitions = {
               # EFI System Partition
               ESP = {
                 name = "ESP";
                 label = "BOOT";
                 size = "1G";
                 type = "EF00";
                 content = {
                   type = "filesystem";
                   format = "vfat";
                   mountpoint = "/boot";
                   mountOptions = [ "umask=0077" ];
                   extraArgs = [
                     "-n"
                     "BOOT"
                   ];
                 };
               };
               # Swap Partition (size configurable per host)
-              swap = lib.mkIf config.athenix.host.filesystem.useSwap {
+              swap = lib.mkIf cfg.useSwap {
                 name = "swap";
                 label = "swap";
-                size = config.athenix.host.filesystem.swapSize;
+                size = cfg.swapSize;
                 content = {
                   type = "swap";
                 };
               };
               # Root Partition (takes remaining space)
               root = {
                 name = "root";
                 label = "root";
                 size = "100%";
                 content = {
                   type = "filesystem";
                   format = "ext4";
                   mountpoint = "/";
                   extraArgs = [
                     "-L"
                     "ROOT"
                   ];
                 };
               };
             };
           };
         };
       };
-  };
+    })
+  ];
 }


@@ -13,13 +13,21 @@
 # Options are defined in fleet-option.nix for early availability.
 let
-  # Helper: Resolve external module path from fetchGit/fetchTarball/path
+  # Helper: Resolve external module path (with lazy Git fetching)
   resolveExternalPath =
     external:
     if external == null then
       null
+    # New format: { url, rev, submodules? } - only fetch when needed
+    else if builtins.isAttrs external && external ? url && external ? rev then
+      (builtins.fetchGit {
+        inherit (external) url rev;
+        submodules = external.submodules or false;
+      }).outPath
+    # Legacy: pre-fetched derivation/package
     else if builtins.isAttrs external && external ? outPath then
       external.outPath
+    # Direct path
     else
       external;
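The shapes the helper accepts can be sketched as follows; this assumes `resolveExternalPath` is in scope, and the URL and revision are hypothetical:

```nix
# Sketch of the three accepted input shapes (values hypothetical).
let
  fromNull = resolveExternalPath null; # null passes through unchanged
  fromPath = resolveExternalPath ./users/alice; # direct path, returned as-is
  fromSpec = resolveExternalPath {
    url = "https://example.com/dotfiles.git"; # new format: fetched lazily,
    rev = "0000000000000000000000000000000000000000"; # result is the store path
  };
in
{ inherit fromNull fromPath fromSpec; }
```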


@@ -10,41 +10,64 @@
   modulesPath,
   ...
 }:
+with lib;
+let
+  cfg = config.athenix.hw.nix-desktop;
+in
 {
   imports = [
     (modulesPath + "/installer/scan/not-detected.nix")
   ];

+  options.athenix.hw.nix-desktop = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable desktop workstation hardware configuration.";
+        };
+      };
+    };
+    default = { };
+    description = "Desktop workstation hardware type configuration.";
+  };
+
+  config = mkIf cfg.enable {
     # ========== Boot Configuration ==========
     boot.initrd.availableKernelModules = [
       "xhci_pci" # USB 3.0 support
       "nvme" # NVMe SSD support
       "usb_storage" # USB storage devices
       "sd_mod" # SD card support
       "sdhci_pci" # SD card host controller
     ];
     boot.initrd.kernelModules = [ ];
     boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
     boot.extraModulePackages = [ ];
     boot.kernelParams = [
       "quiet" # Minimal boot messages
       "splash" # Show Plymouth boot splash
       "boot.shell_on_fail" # Emergency shell on boot failure
       "udev.log_priority=3" # Reduce udev logging
       "rd.systemd.show_status=auto" # Show systemd status during boot
     ];

     # ========== Filesystem Configuration ==========
     athenix.host.filesystem.swapSize = lib.mkDefault "16G";
     athenix.host.filesystem.device = lib.mkDefault "/dev/nvme0n1";
     athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
     nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";

     # ========== Hardware Configuration ==========
     hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

     # ========== Software Profile ==========
     athenix.sw.enable = lib.mkDefault true;
     athenix.sw.desktop.enable = lib.mkDefault true;
+  };
 }


@@ -11,56 +11,78 @@
   modulesPath,
   ...
 }:
+with lib;
+let
+  cfg = config.athenix.hw.nix-ephemeral;
+in
 {
   imports = [
     (modulesPath + "/installer/scan/not-detected.nix")
   ];

+  options.athenix.hw.nix-ephemeral = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable ephemeral/diskless system hardware configuration.";
+        };
+      };
+    };
+    default = { };
+    description = "Ephemeral hardware type configuration.";
+  };
+
+  config = mkIf cfg.enable {
     # ========== Boot Configuration ==========
     boot.initrd.availableKernelModules = [
       "xhci_pci" # USB 3.0 support
       "nvme" # NVMe support
       "usb_storage" # USB storage devices
       "sd_mod" # SD card support
       "sdhci_pci" # SD card host controller
     ];
     boot.initrd.kernelModules = [ ];
     boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
     boot.extraModulePackages = [ ];
     boot.kernelParams = [
       "quiet" # Minimal boot messages
       "splash" # Show Plymouth boot splash
       "boot.shell_on_fail" # Emergency shell on boot failure
       "udev.log_priority=3" # Reduce udev logging
       "rd.systemd.show_status=auto" # Show systemd status during boot
     ];

     # ========== Ephemeral Configuration ==========
     # No persistent storage - everything runs from RAM
     athenix.host.filesystem.swapSize = lib.mkForce "0G";
     athenix.host.filesystem.device = lib.mkForce "/dev/null"; # Dummy device
     athenix.host.buildMethods = lib.mkDefault [
       "iso" # Live ISO image
       "ipxe" # Network boot
     ];

     # Disable disk management for RAM-only systems
     disko.enableConfig = lib.mkForce false;

     # Define tmpfs root filesystem
     fileSystems."/" = {
       device = "none";
       fsType = "tmpfs";
       options = [
         "defaults"
         "size=50%"
         "mode=755"
       ];
     };

     nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
     hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

     athenix.sw.enable = lib.mkDefault true;
     athenix.sw.stateless-kiosk.enable = lib.mkDefault true;
+  };
 }


@@ -10,54 +10,76 @@
   modulesPath,
   ...
 }:
+with lib;
+let
+  cfg = config.athenix.hw.nix-laptop;
+in
 {
   imports = [
     (modulesPath + "/installer/scan/not-detected.nix")
   ];

+  options.athenix.hw.nix-laptop = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable laptop hardware configuration with power management.";
+        };
+      };
+    };
+    default = { };
+    description = "Laptop hardware type configuration.";
+  };
+
+  config = mkIf cfg.enable {
     # ========== Boot Configuration ==========
     boot.initrd.availableKernelModules = [
       "xhci_pci" # USB 3.0 support
       "thunderbolt" # Thunderbolt support
       "nvme" # NVMe SSD support
       "usb_storage" # USB storage devices
       "sd_mod" # SD card support
       "sdhci_pci" # SD card host controller
     ];
     boot.initrd.kernelModules = [ ];
     boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
     boot.extraModulePackages = [ ];
     boot.kernelParams = [
       "quiet" # Minimal boot messages
       "splash" # Show Plymouth boot splash
       "boot.shell_on_fail" # Emergency shell on boot failure
       "udev.log_priority=3" # Reduce udev logging
       "rd.systemd.show_status=auto" # Show systemd status during boot
       "i915.enable_psr=0" # Disable Panel Self Refresh (stability)
       "i915.enable_dc=0" # Disable display power saving
       "i915.enable_fbc=0" # Disable framebuffer compression
     ];

     # ========== Hardware Configuration ==========
     nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
     hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

     # ========== Filesystem Configuration ==========
     athenix.host.filesystem.device = lib.mkDefault "/dev/nvme0n1";
     athenix.host.filesystem.swapSize = lib.mkDefault "34G"; # Larger swap for hibernation
     athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];

     # ========== Power Management ==========
     services.upower.enable = lib.mkDefault true;
     services.logind.settings = {
       Login = {
         HandleLidSwitch = "suspend";
         HandleLidSwitchExternalPower = "suspend";
         HandleLidSwitchDocked = "ignore";
       };
     };

     athenix.sw.enable = lib.mkDefault true;
     athenix.sw.desktop.enable = lib.mkDefault true;
+  };
 }


@@ -5,56 +5,75 @@
 # Disables boot/disk management and enables remote development support.
 {
+  config,
   lib,
-  modulesPath,
-  inputs,
   ...
 }:
+with lib;
+let
+  cfg = config.athenix.hw.nix-lxc;
+in
 {
-  imports = [
-    inputs.vscode-server.nixosModules.default
-    "${modulesPath}/virtualisation/proxmox-lxc.nix"
-  ];
+  options.athenix.hw.nix-lxc = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable Proxmox LXC container hardware configuration.";
+        };
+      };
+    };
+    default = { };
+    description = "Proxmox LXC hardware type configuration.";
+  };

+  config = mkIf cfg.enable {
     # ========== Nix Configuration ==========
     nix.settings.trusted-users = [
       "root"
       "engr-ugaif"
     ];
     nix.settings.experimental-features = [
       "nix-command"
       "flakes"
     ];

     # ========== Container-Specific Configuration ==========
     boot.isContainer = true;
     boot.loader.systemd-boot.enable = lib.mkForce false; # No bootloader in container
     disko.enableConfig = lib.mkForce false; # No disk management in container
     console.enable = true;

+    # Set timezone to fix /etc/localtime for Docker containers
+    time.timeZone = lib.mkDefault "America/New_York";
+
     # Allow getty to work in containers
     systemd.services."getty@".unitConfig.ConditionPathExists = [
       ""
       "/dev/%I"
     ];

     # Suppress unnecessary systemd units for containers
     systemd.suppressedSystemUnits = [
       "dev-mqueue.mount"
       "sys-kernel-debug.mount"
       "sys-fs-fuse-connections.mount"
     ];

     # ========== Remote Development ==========
     services.vscode-server.enable = true;

     # ========== System Configuration ==========
     system.stateVersion = "25.11";
     athenix.host.buildMethods = lib.mkDefault [
       "lxc" # LXC container tarball
       "proxmox" # Proxmox VMA archive
     ];
     athenix.sw.enable = lib.mkDefault true;
     athenix.sw.headless.enable = lib.mkDefault true;
+  };
 }


@@ -12,7 +12,11 @@
   inputs,
   ...
 }:
+with lib;
 let
+  cfg = config.athenix.hw.nix-surface;
+
   # Use older kernel version for better Surface Go compatibility
   refSystem = inputs.nixpkgs-old-kernel.lib.nixosSystem {
     system = pkgs.stdenv.hostPlatform.system;

@@ -26,44 +30,60 @@ in
     inputs.nixos-hardware.nixosModules.microsoft-surface-go
   ];

+  options.athenix.hw.nix-surface = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable Microsoft Surface tablet hardware configuration.";
+        };
+      };
+    };
+    default = { };
+    description = "Microsoft Surface hardware type configuration.";
+  };
+
+  config = mkIf cfg.enable {
     # ========== Boot Configuration ==========
     boot.initrd.availableKernelModules = [
       "xhci_pci" # USB 3.0 support
       "nvme" # NVMe support (though Surface uses eMMC)
       "usb_storage" # USB storage devices
       "sd_mod" # SD card support
       "sdhci_pci" # SD card host controller
     ];
     boot.initrd.kernelModules = [ ];
     boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
     boot.extraModulePackages = [ ];
     boot.kernelParams = [
       "quiet" # Minimal boot messages
       "splash" # Show Plymouth boot splash
       "boot.shell_on_fail" # Emergency shell on boot failure
       "udev.log_priority=3" # Reduce udev logging
       "rd.systemd.show_status=auto" # Show systemd status during boot
       "intel_ipu3_imgu" # Intel camera image processing
       "intel_ipu3_isys" # Intel camera sensor interface
       "fbcon=map:1" # Framebuffer console mapping
       "i915.enable_psr=0" # Disable Panel Self Refresh (breaks resume)
       "i915.enable_dc=0" # Disable display power saving
     ];

     # Use older kernel for better Surface hardware support
     boot.kernelPackages = lib.mkForce refKernelPackages;

     # ========== Filesystem Configuration ==========
     athenix.host.filesystem.swapSize = lib.mkDefault "8G";
     athenix.host.filesystem.device = lib.mkDefault "/dev/mmcblk0"; # eMMC storage
     athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
     nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";

     # ========== Hardware Configuration ==========
     hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

     # ========== Software Profile ==========
     athenix.sw.enable = lib.mkDefault true;
     athenix.sw.tablet-kiosk.enable = lib.mkDefault true; # Touch-optimized kiosk mode
+  };
 }


@@ -7,16 +7,30 @@
 {
   lib,
   config,
-  inputs,
   ...
 }:
-{
-  imports = [
-    inputs.nixos-wsl.nixosModules.default
-    inputs.vscode-server.nixosModules.default
-  ];
-
-  # ========== Options ==========
+with lib;
+let
+  cfg = config.athenix.hw.nix-wsl;
+in
+{
+  options.athenix.hw.nix-wsl = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable Windows Subsystem for Linux hardware configuration.";
+        };
+      };
+    };
+    default = { };
+    description = "WSL hardware type configuration.";
+  };
+
+  # WSL user option (at module level, not inside config)
   options.athenix.host.wsl.user = lib.mkOption {
     type = lib.types.str;
     default = "engr-ugaif";

@@ -29,7 +43,7 @@
     example = "alice";
   };

-  config = {
+  config = mkIf cfg.enable {
     # ========== WSL Configuration ==========
     wsl.enable = true;
     # Use forUser if set, otherwise fall back to wsl.user option

@@ -55,5 +69,8 @@
     # Provide dummy values for required options from boot.nix
     athenix.host.filesystem.device = "/dev/null";
     athenix.host.filesystem.swapSize = "0G";
+
+    # WSL doesn't use installer ISOs
+    athenix.host.buildMethods = lib.mkDefault [ ];
   };
 }


@@ -10,40 +10,62 @@
   modulesPath,
   ...
 }:
+with lib;
+let
+  cfg = config.athenix.hw.nix-zima;
+in
 {
   imports = [
     (modulesPath + "/installer/scan/not-detected.nix")
   ];

+  options.athenix.hw.nix-zima = mkOption {
+    type = types.submodule {
+      options = {
+        enable = mkOption {
+          type = types.bool;
+          default = false;
+          description = "Enable Zima-specific hardware configuration.";
+        };
+      };
+    };
+    default = { };
+    description = "Zima hardware type configuration.";
+  };
+
+  config = mkIf cfg.enable {
     # ========== Boot Configuration ==========
     boot.initrd.availableKernelModules = [
       "xhci_pci" # USB 3.0 support
       "usb_storage" # USB storage devices
       "sd_mod" # SD card support
       "sdhci_pci" # SD card host controller
     ];
     boot.initrd.kernelModules = [ ];
     boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
     boot.extraModulePackages = [ ];
     boot.kernelParams = [
       "quiet" # Minimal boot messages
       "splash" # Show Plymouth boot splash
       "boot.shell_on_fail" # Emergency shell on boot failure
       "udev.log_priority=3" # Reduce udev logging
       "rd.systemd.show_status=auto" # Show systemd status during boot
     ];

     # ========== Filesystem Configuration ==========
     athenix.host.filesystem.useSwap = lib.mkDefault false;
     athenix.host.filesystem.device = lib.mkDefault "/dev/mmcblk0";
     athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
     nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";

     # ========== Hardware Configuration ==========
     hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;

     # ========== Software Profile ==========
     athenix.sw.enable = lib.mkDefault true;
     athenix.sw.desktop.enable = lib.mkDefault true;
+  };
 }


@@ -45,7 +45,7 @@
# External modules (instead of config): # External modules (instead of config):
# Device values can be a config attrset with an optional 'external' field: # Device values can be a config attrset with an optional 'external' field:
# devices."hostname" = { # devices."hostname" = {
# external = builtins.fetchGit { ... }; # Lazy: only fetched when building this host # external = { url = "..."; rev = "..."; submodules? = false; }; # Lazy: only fetched when building this host
# # ... additional config options # # ... additional config options
# }; # };
# The external module will be imported and evaluated only when this specific host is built. # The external module will be imported and evaluated only when this specific host is built.
@@ -65,7 +65,7 @@
# devices."alice".athenix.forUser = "alice123";  # Sets up for user alice123
# };
# "external" = {
#   devices."remote".external = { url = "..."; rev = "..."; };  # External module via Git (lazy)
#     url = "https://github.com/example/config";
#     rev = "e1ccd7cc3e709afe4f50b0627e1c4bde49165014";
#   };
@@ -127,10 +127,10 @@
    };
  };
};
"usda-dash".external = {
  url = "https://git.factory.uga.edu/MODEL/usda-dash-config.git";
  rev = "ce2700b0196e106f7c013bbcee851a5f96b146a3";
  submodules = false;
};
};
overrides = {


@@ -1,4 +1,8 @@
{
  lib,
  ...
}:
{
  mkFleet = import ./mkFleet.nix;
  macCaseBuilder = import ./macCaseBuilder.nix { inherit lib; };
}

33
lib/macCaseBuilder.nix Normal file

@@ -0,0 +1,33 @@
{ lib }:
let
# Default MAC address to station number mapping
defaultHostmap = {
"00:e0:4c:46:0b:32" = "1";
"00:e0:4c:46:07:26" = "2";
"00:e0:4c:46:05:94" = "3";
"00:e0:4c:46:07:11" = "4";
"00:e0:4c:46:08:02" = "5";
"00:e0:4c:46:08:5c" = "6";
};
# macCaseBuilder: builds a shell case statement from a hostmap
# Parameters:
# varName: the shell variable to assign
# prefix: optional string to prepend to the value (default: "")
# hostmap: optional attribute set to use (default: built-in hostmap)
#
# Example:
# macCaseBuilder { varName = "STATION"; prefix = "nix-"; }
# # Generates case statements like: 00:e0:4c:46:0b:32) STATION=nix-1 ;;
builder =
{
varName,
prefix ? "",
hostmap ? defaultHostmap,
}:
lib.concatStringsSep "\n" (
lib.mapAttrsToList (mac: val: " ${mac}) ${varName}=${prefix}${val} ;;") hostmap
);
in
# Export the builder function with hostmap as an accessible attribute
lib.setFunctionArgs builder { } // { hostmap = defaultHostmap; }
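
For context, here is a hedged sketch of how the exported builder might be consumed from a NixOS module. The option path (`system.activationScripts`), the `flake.lib` attribute path, and the interface name `eth0` are illustrative assumptions, not taken from this repository:

```nix
{ lib, flake, ... }:
{
  # Sketch: embed the generated case arms in a script that derives a
  # station name from the primary NIC's MAC address at activation time.
  system.activationScripts.setStation.text = ''
    MAC="$(cat /sys/class/net/eth0/address)"
    case "$MAC" in
${flake.lib.macCaseBuilder { varName = "STATION"; prefix = "nix-"; }}
      *) STATION="nix-unknown" ;;
    esac
    echo "$STATION" > /etc/station-name
  '';
}
```

With the built-in hostmap, the interpolation expands to arms such as `00:e0:4c:46:0b:32) STATION=nix-1 ;;`, so a machine whose MAC matches station 1 writes `nix-1` to `/etc/station-name`.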


@@ -20,11 +20,61 @@ let
in
athenixOptions;
# Generate wiki home page
wikiHome = pkgs.writeText "Home.md" ''
# Athenix - NixOS Fleet Management
Athenix is a NixOS configuration system for managing the UGA Innovation Factory's fleet of devices using Nix flakes and a custom configuration framework.
## Quick Start
- [Configuration Options](Configuration-Options) - All available `athenix.*` options
- [User Guide](User-Configuration) - Setting up user accounts and dotfiles
- [Building](Building) - Creating installers and system images
- [Development](Development) - Contributing to Athenix
## Features
- **Inventory-based fleet management** - Define entire device fleets in a single file
- **Multiple hardware types** - Desktops, laptops, Surface tablets, LXC containers, WSL
- **Flexible software configurations** - Desktop, headless, kiosk, and builder modes
- **External module support** - Load user dotfiles and system configs from Git repos
- **Declarative everything** - Reproducible builds with pinned dependencies
## Software Types
Enable different system configurations:
- **desktop** - Full KDE Plasma 6 desktop environment
- **headless** - Minimal server/container configuration
- **tablet-kiosk** - Touch-optimized kiosk for Surface tablets
- **stateless-kiosk** - Diskless PXE boot kiosk
- **builders** - CI/CD build server with Gitea Actions runner
## Hardware Types
- **nix-desktop** - Desktop workstations
- **nix-laptop** - Laptop computers
- **nix-surface** - Microsoft Surface Pro tablets
- **nix-lxc** - LXC containers (Proxmox)
- **nix-wsl** - Windows Subsystem for Linux
- **nix-ephemeral** - Stateless systems (PXE boot)
## Documentation
Browse the documentation using the sidebar or start with:
- [README](README) - Repository overview and getting started
- [Configuration Options](Configuration-Options) - Complete option reference
- [Inventory Guide](Inventory) - Managing the device fleet
- [External Modules](External-Modules) - Using external configurations
'';
# Generate markdown documentation from options
optionsToMarkdown =
  options:
  pkgs.writeText "options.md" ''
    # Configuration Options

    This document describes all available configuration options in the Athenix namespace.
@@ -77,18 +127,30 @@ in
    nativeBuildInputs = [ pkgs.jq ];
  }
  ''
    mkdir -p $out

    # Generate wiki home page
    cat > $out/Home.md << 'EOF'
    ${builtins.readFile wikiHome}
    EOF

    # Copy main README
    cp ${../README.md} $out/README.md

    # Copy documentation with wiki-friendly names
    cp ${../docs/BUILDING.md} $out/Building.md
    cp ${../docs/DEVELOPMENT.md} $out/Development.md
    cp ${../docs/EXTERNAL_MODULES.md} $out/External-Modules.md
    cp ${../docs/INVENTORY.md} $out/Inventory.md
    cp ${../docs/NAMESPACE.md} $out/Namespace.md
    cp ${../docs/USER_CONFIGURATION.md} $out/User-Configuration.md

    # Generate options reference
    cat > $out/Configuration-Options.md << 'EOF'
    ${builtins.readFile (optionsToMarkdown (getAthenixOptions "nix-desktop1"))}
    EOF

    echo "Documentation generated in $out"
  '';

# Extract just the athenix namespace options as JSON
# Extract just the athenix namespace options as JSON # Extract just the athenix namespace options as JSON


@@ -1,5 +1,8 @@
# Library functions for flake-parts
{ inputs, ... }:
{
  flake.lib = import ../lib {
    inherit inputs;
    lib = inputs.nixpkgs.lib;
  };
}

187
secrets.nix Normal file

@@ -0,0 +1,187 @@
# ============================================================================
# Agenix Secret Recipients Configuration (Auto-Generated)
# ============================================================================
# This file automatically discovers hosts and their public keys from the
# secrets/ directory structure and generates recipient configurations.
#
# Directory structure:
# secrets/{hostname}/*.pub -> SSH/age public keys for that host
# secrets/global/*.pub -> Keys accessible to all hosts
#
# Usage:
# ragenix -e secrets/global/example.age # Edit/create secret
# ragenix -r # Re-key all secrets
#
# To add admin keys for editing secrets, create secrets/admins/*.pub files
# with your personal age public keys (generated with: age-keygen)
let
lib = builtins;
# Helper functions not in builtins
filterAttrs =
pred: set:
lib.listToAttrs (
lib.filter (item: pred item.name item.value) (
lib.map (name: {
inherit name;
value = set.${name};
}) (lib.attrNames set)
)
);
concatLists = lists: lib.foldl' (acc: list: acc ++ list) [ ] lists;
unique =
list:
let
go =
acc: remaining:
if remaining == [ ] then
acc
else if lib.elem (lib.head remaining) acc then
go acc (lib.tail remaining)
else
go (acc ++ [ (lib.head remaining) ]) (lib.tail remaining);
in
go [ ] list;
hasSuffix =
suffix: str:
let
lenStr = lib.stringLength str;
lenSuffix = lib.stringLength suffix;
in
lenStr >= lenSuffix && lib.substring (lenStr - lenSuffix) lenSuffix str == suffix;
nameValuePair = name: value: { inherit name value; };
secretsPath = ./secrets;
# Read all directories in secrets/
secretDirs = if lib.pathExists secretsPath then lib.readDir secretsPath else { };
# Filter to only directories (excludes files)
isDirectory = name: type: type == "directory";
directories = lib.filter (name: isDirectory name secretDirs.${name}) (lib.attrNames secretDirs);
# Read public keys from a directory and convert to age format
readHostKeys =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
# Prefer .age.pub files (pre-converted), fall back to .pub files
agePubFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age.pub" name) files;
sshPubFiles = filterAttrs (
name: type: type == "regular" && hasSuffix ".pub" name && !(hasSuffix ".age.pub" name)
) files;
# Read age public keys (already in correct format)
ageKeys = lib.map (
name:
let
content = lib.readFile (dirPath + "/${name}");
# Trim whitespace/newlines
trimmed = lib.replaceStrings [ "\n" " " "\r" "\t" ] [ "" "" "" "" ] content;
in
trimmed
) (lib.attrNames agePubFiles);
# For SSH keys, just include them as-is (user needs to convert with ssh-to-age)
# Or they can run the update-age-keys.sh script
sshKeys =
if (lib.length (lib.attrNames sshPubFiles)) > 0 then
lib.trace "Warning: ${dirName} has unconverted SSH keys. Run secrets/update-age-keys.sh" [ ]
else
[ ];
in
lib.filter (k: k != null && k != "") (ageKeys ++ sshKeys);
# Build host key mappings: { hostname = [ "age1..." "age2..." ]; }
hostKeys = lib.listToAttrs (
lib.map (dir: nameValuePair dir (readHostKeys dir)) (
lib.filter (d: d != "global" && d != "admins") directories
)
);
# Global keys that all hosts can use
globalKeys = if lib.elem "global" directories then readHostKeys "global" else [ ];
# Admin keys for editing secrets
adminKeys = if lib.elem "admins" directories then readHostKeys "admins" else [ ];
# All host keys combined
allHostKeys = concatLists (lib.attrValues hostKeys);
# Find all .age files in the secrets directory
findSecrets =
dir:
let
dirPath = secretsPath + "/${dir}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
ageFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age" name) files;
in
lib.map (name: "secrets/${dir}/${name}") (lib.attrNames ageFiles);
# Generate recipient list for a secret based on its location
getRecipients =
secretPath:
let
# Extract directory name from path: "secrets/nix-builder/foo.age" -> "nix-builder"
pathParts = lib.split "/" secretPath;
dirName = lib.elemAt pathParts 2;
in
if dirName == "global" then
# Global secrets: all hosts + admins
allHostKeys ++ globalKeys ++ adminKeys
else if hostKeys ? ${dirName} then
# Host-specific secrets: that host + global keys + admins
hostKeys.${dirName} ++ globalKeys ++ adminKeys
else
# Fallback: just admins
adminKeys;
# Find all secrets across all directories
allSecrets = concatLists (lib.map findSecrets directories);
# Generate the configuration
secretsConfig = lib.listToAttrs (
lib.map (
secretPath:
let
recipients = getRecipients secretPath;
# Remove duplicates and empty keys
uniqueRecipients = unique (lib.filter (k: k != null && k != "") recipients);
in
nameValuePair secretPath {
publicKeys = uniqueRecipients;
}
) allSecrets
);
# Generate wildcard rules for each directory to allow creating new secrets
wildcardRules = lib.listToAttrs (
lib.concatMap (dir: [
# Match with and without .age extension for ragenix compatibility
(nameValuePair "secrets/${dir}/*" {
publicKeys =
let
recipients = getRecipients "secrets/${dir}/dummy.age";
in
unique (lib.filter (k: k != null && k != "") recipients);
})
(nameValuePair "secrets/${dir}/*.age" {
publicKeys =
let
recipients = getRecipients "secrets/${dir}/dummy.age";
in
unique (lib.filter (k: k != null && k != "") recipients);
})
]) (lib.filter (d: d != "admins") directories)
);
in
secretsConfig // wildcardRules
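
To make the data flow concrete, the expression above evaluates to an attrset keyed by secret path, each value carrying the computed recipient list. The shape below is a hypothetical sketch — the secret names and `age1...` keys are placeholders, not real keys from this repository:

```nix
# Hypothetical evaluation result (names and keys are placeholders):
{
  # Global secret: every host key + global keys + admin keys
  "secrets/global/api-key.age".publicKeys = [
    "age1builderkey"  # nix-builder host key
    "age1dashkey"     # usda-dash host key
    "age1adminkey"    # admins can always edit
  ];
  # Host-specific secret: that host + global keys + admin keys
  "secrets/nix-builder/token.age".publicKeys = [
    "age1builderkey"
    "age1adminkey"
  ];
  # Wildcard rules let ragenix create new files under each directory
  "secrets/nix-builder/*".publicKeys = [ "age1builderkey" "age1adminkey" ];
}
```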

174
secrets/DESIGN.md Normal file

@@ -0,0 +1,174 @@
# Athenix Secrets System Design
## Overview
The Athenix secrets management system integrates ragenix (agenix) with automatic host discovery based on the repository's fleet inventory structure. It provides a seamless workflow for managing encrypted secrets across all systems.
## Architecture
### Auto-Discovery Module (`sw/secrets.nix`)
**Purpose**: Automatically load and configure secrets at system deployment time.
**Features**:
- Discovers `.age` encrypted files from `secrets/` directories
- Loads global secrets from `secrets/global/` on ALL systems
- Loads host-specific secrets from `secrets/{hostname}/` on matching hosts
- Auto-configures decryption keys based on `.pub` files in directories
- Supports custom secret configuration via `default.nix` in each directory
**Key Behaviors**:
- Secrets are decrypted to `/run/agenix/{name}` at boot
- Identity paths include: system SSH keys + global keys + host-specific keys
- Host-specific secrets override global secrets with the same name
### Dynamic Recipients Configuration (`secrets/secrets.nix`)
**Purpose**: Generate ragenix recipient configuration from directory structure.
**Features**:
- Automatically discovers hosts from `secrets/` subdirectories
- Reads age public keys from `.age.pub` files (converted from SSH keys)
- Generates recipient lists based on secret location:
- `secrets/global/*.age` → ALL hosts + admins
- `secrets/{hostname}/*.age` → that host + global keys + admins
- Supports admin keys in `secrets/admins/` for secret editing
**Key Behaviors**:
- No manual recipient list maintenance required
- Adding a new host = create directory + add .pub key + run `update-age-keys.sh`
- Works with ragenix CLI: `ragenix -e`, `ragenix -r`
## Workflow
### Adding a New Host
1. **Capture SSH host key**:
```bash
# From the running system
cat /etc/ssh/ssh_host_ed25519_key.pub > secrets/new-host/ssh_host_ed25519_key.pub
```
2. **Convert to age format**:
```bash
cd secrets/
./update-age-keys.sh
```
3. **Re-key existing secrets** (if needed):
```bash
ragenix -r
```
### Creating a New Secret
1. **Choose location**:
- `secrets/global/` → all systems can decrypt
- `secrets/{hostname}/` → only that host can decrypt
2. **Create/edit secret**:
```bash
ragenix -e secrets/global/my-secret.age
```
3. **Recipients are auto-determined** from `secrets.nix`:
- Global secrets: all host keys + admin keys
- Host-specific: that host + global keys + admin keys
### Cross-Host Secret Management
Any Athenix host can manage secrets for other hosts because:
- All public keys are in the repository (`*.age.pub` files)
- `secrets/secrets.nix` auto-generates recipient lists
- Hosts decrypt using their own private keys (not shared)
Example: From `nix-builder`, create a secret for `usda-dash`:
```bash
ragenix -e secrets/usda-dash/database-password.age
# Encrypted for usda-dash's public key + admins
# usda-dash will decrypt using its private key at /etc/ssh/ssh_host_ed25519_key
```
## Directory Structure
```
secrets/
├── secrets.nix # Auto-generated recipient config
├── update-age-keys.sh # Helper to convert SSH → age keys
├── README.md # User documentation
├── DESIGN.md # This file
├── global/ # Secrets for ALL hosts
│ ├── *.pub # SSH public keys
│ ├── *.age.pub # Age public keys (generated)
│ ├── *.age # Encrypted secrets
│ └── default.nix # Optional: custom secret config
├── {hostname}/ # Host-specific secrets
│ ├── *.pub
│ ├── *.age.pub
│ ├── *.age
│ └── default.nix
└── admins/ # Admin keys for editing
└── *.age.pub
```
## Security Model
1. **Public keys in git**: Safe to commit (only public keys, `.age.pub` and `.pub`)
2. **Private keys on hosts**: Never leave the system (`/etc/ssh/ssh_host_*_key`, `/etc/age/identity.key`)
3. **Encrypted secrets in git**: Safe to commit (`.age` files)
4. **Decrypted secrets**: Only in memory/tmpfs (`/run/agenix/*`)
## Integration Points
### With NixOS Configuration
```nix
# Access decrypted secrets in any NixOS module
config.age.secrets.my-secret.path # => /run/agenix/my-secret
# Example usage
services.myapp.passwordFile = config.age.secrets.database-password.path;
```
### With Inventory System
The system automatically matches `secrets/{hostname}/` to hostnames from `inventory.nix`. No manual configuration needed.
### With External Modules
External user/system modules can reference secrets:
```nix
# In external module
{ config, ... }:
{
programs.git.extraConfig.credential.helper =
"store --file ${config.age.secrets.git-credentials.path}";
}
```
## Advantages
1. **Zero manual recipient management**: Just add directories and keys
2. **Cross-host secret creation**: Any host can manage secrets for others
3. **Automatic host discovery**: Syncs with inventory structure
4. **Flexible permission model**: Global vs host-specific + custom configs
5. **Version controlled**: All public data in git, auditable history
6. **Secure by default**: Private keys never shared, secrets encrypted at rest
## Limitations
1. **Requires age key conversion**: SSH keys must be converted to age format (automated by script)
2. **Bootstrap chicken-and-egg problem**: Need initial host key before encrypting secrets (capture from first boot or generate locally)
3. **No secret rotation automation**: Must manually re-key with `ragenix -r`
4. **Git history contains old encrypted versions**: Rotating keys doesn't remove old ciphertexts from history
## Future Enhancements
- Auto-run `update-age-keys.sh` in pre-commit hook
- Integrate with inventory.nix to auto-generate host directories
- Support for multiple identity types per host
- Automated secret rotation scheduling
- Integration with hardware security modules (YubiKey, etc.)

250
secrets/README.md Normal file

@@ -0,0 +1,250 @@
# Secrets Management with Agenix
This directory contains age-encrypted secrets for Athenix hosts. Secrets are automatically loaded based on directory structure.
## Directory Structure
```
secrets/
├── global/ # Secrets installed on ALL systems
│ ├── default.nix # Optional: Custom config for global secrets
│ └── example.age # Decrypted to /run/agenix/example on all hosts
├── nix-builder/ # Secrets only for nix-builder host
│ ├── default.nix # Optional: Custom config for nix-builder secrets
│ └── ssh_host_ed25519_key.age
└── usda-dash/ # Secrets only for usda-dash host
└── ssh_host_ed25519_key.age
```
## How It Works
1. **Global secrets** (`./secrets/global/*.age`) are installed on every system
2. **Host-specific secrets** (`./secrets/{hostname}/*.age`) are only installed on matching hosts
3. Only `.age` encrypted files are loaded; `.pub` public keys are ignored
4. Secrets are decrypted at boot to `/run/agenix/{secret-name}` with mode `0400` and owner `root:root`
5. **Custom configurations** can be defined in `default.nix` files within each directory
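
Put differently, for a host named `nix-builder` the auto-discovery module behaves roughly as if the following had been written by hand. This is a sketch only — the secret names are examples, and the identity paths shown are assumptions based on the key locations mentioned elsewhere in this document:

```nix
{
  # Sketch of the effective configuration auto-generated for nix-builder:
  age.secrets."example".file = ./secrets/global/example.age;  # global secret, loaded everywhere
  age.secrets."ssh_host_ed25519_key".file =
    ./secrets/nix-builder/ssh_host_ed25519_key.age;           # host-specific secret
  # Decryption identities: system SSH host key plus any dedicated age identity
  age.identityPaths = [
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/age/identity.key"
  ];
}
```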
## Creating Secrets
### 1. Generate Age Keys
For a new host, generate an age identity:
```bash
# On the target system
mkdir -p /etc/age
age-keygen -o /etc/age/identity.key
chmod 600 /etc/age/identity.key
```
Or use SSH host keys (automatically done by Athenix):
```bash
# Get the age public key from SSH host key
nix shell nixpkgs#ssh-to-age -c sh -c 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
### 2. Store Public Keys
Save the public key to `secrets/{hostname}/` for reference:
```bash
# Example for nix-builder
echo "age1..." > secrets/nix-builder/identity.pub
```
Or from SSH host key:
```bash
cat /etc/ssh/ssh_host_ed25519_key.pub > secrets/nix-builder/ssh_host_ed25519_key.pub
```
**Then convert SSH keys to age format:**
```bash
cd secrets/
./update-age-keys.sh
```
This creates `.age.pub` files that `secrets.nix` uses for ragenix recipient configuration.
### 3. Encrypt Secrets
Encrypt a secret for specific hosts:
```bash
# For a single host
age -r age1publickey... -o secrets/nix-builder/my-secret.age <<< "secret value"
# For multiple hosts (recipient list)
age -R recipients.txt -o secrets/global/shared-secret.age < plaintext-file
# Using SSH public keys
age -R secrets/nix-builder/ssh_host_ed25519_key.pub \
-o secrets/nix-builder/ssh_host_key.age < /etc/ssh/ssh_host_ed25519_key
```
### 4. Creating and Editing Secrets
**For new secrets**, use the helper script (automatically determines recipients):
```bash
cd secrets/
# Create a host-specific secret
./create-secret.sh usda-dash/database-url.age <<< "postgresql://..."
# Create a global secret
echo "shared-api-key" | ./create-secret.sh global/api-key.age
# From a file
./create-secret.sh nix-builder/ssh-key.age < ~/.ssh/id_ed25519
```
The script automatically includes the correct recipients:
- **Host-specific**: that host's keys + global keys + admin keys
- **Global**: all host keys + admin keys
**To edit existing secrets**, use `ragenix`:
```bash
# Install ragenix
nix shell github:yaxitech/ragenix
# Edit an existing secret (you must have a decryption key)
ragenix -e secrets/global/existing-secret.age
# Re-key all secrets after adding new hosts
ragenix -r
```
**Why create with `age` first?** Ragenix requires the `.age` file to exist before editing. The `secrets/secrets.nix` configuration auto-discovers recipients from the directory structure, but ragenix doesn't support wildcard patterns for creating new files.
**Recipient management** is automatic:
- **Global secrets** (`secrets/global/*.age`): encrypted for ALL hosts + admins
- **Host secrets** (`secrets/{hostname}/*.age`): encrypted for that host + global keys + admins
- **Admin keys** from `secrets/admins/*.age.pub` allow editing from your workstation
After creating new .age files with `age`, use `ragenix -r` to re-key all secrets with the updated recipient configuration.
To add admin keys for editing secrets:
```bash
# Generate personal age key
age-keygen -o ~/.config/age/personal.key
# Extract public key and add to secrets
grep "public key:" ~/.config/age/personal.key | cut -d: -f2 | tr -d ' ' > secrets/admins/your-name.age.pub
```
## Using Secrets in Configuration
Secrets are automatically loaded. Reference them in your NixOS configuration:
```nix
# Example: Using a secret for a service
services.myservice = {
enable = true;
passwordFile = config.age.secrets.my-password.path; # /run/agenix/my-password
};
# Example: Setting up SSH host key from secret
services.openssh = {
hostKeys = [{
path = config.age.secrets.ssh_host_ed25519_key.path;
type = "ed25519";
}];
};
```
## Custom Secret Configuration
For secrets needing custom permissions, use `athenix.sw.secrets.extraSecrets`:
```nix
# In inventory.nix or host config
athenix.sw.secrets.extraSecrets = {
"nginx-cert" = {
file = ./secrets/custom/cert.age;
mode = "0440";
owner = "nginx";
group = "nginx";
};
};
```
### Using default.nix in Secret Directories
Alternatively, create a `default.nix` file in the secret directory to configure all secrets in that directory:
```nix
# secrets/global/default.nix
{
"example" = {
mode = "0440"; # Custom file mode (default: "0400")
owner = "nginx"; # Custom owner (default: "root")
group = "nginx"; # Custom group (default: "root")
path = "/run/secrets/example"; # Custom path (default: /run/agenix/{name})
};
"api-key" = {
mode = "0400";
owner = "myservice";
group = "myservice";
};
}
```
The `default.nix` file should return an attribute set where:
- **Keys** are secret names (without the `.age` extension)
- **Values** are configuration objects with optional fields:
- `mode` - File permissions (string, e.g., `"0440"`)
- `owner` - File owner (string, e.g., `"nginx"`)
- `group` - File group (string, e.g., `"nginx"`)
- `path` - Custom installation path (string, e.g., `"/custom/path"`)
Secrets not listed in `default.nix` will use default settings.
## Security Best Practices
1. **Never commit unencrypted secrets** - Only `.age` and `.pub` files belong in this directory
2. **Use host-specific secrets** when possible - Limit exposure by using hostname directories
3. **Rotate secrets regularly** - Re-encrypt with new keys periodically
4. **Backup age identity keys** - Store `/etc/age/identity.key` securely offline
5. **Use SSH keys** - Leverage existing SSH host keys for age encryption when possible
6. **Pin to commits** - When using external secrets modules, always use `rev = "commit-hash"`
## Converting SSH Keys to Age Format
```bash
# Convert SSH public key to age public key
nix shell nixpkgs#ssh-to-age -c ssh-to-age < secrets/nix-builder/ssh_host_ed25519_key.pub
# Convert SSH private key to age identity (for editing secrets)
nix shell nixpkgs#ssh-to-age -c ssh-to-age -private-key -i ~/.ssh/id_ed25519
```
## Disabling Automatic Secrets
To disable automatic secret loading:
```nix
# In inventory.nix or host config
athenix.sw.secrets.enable = false;
```
## Troubleshooting
### Secret not found
- Ensure the `.age` file exists in `secrets/global/` or `secrets/{hostname}/`
- Check `hostname` matches directory name: `echo $HOSTNAME` on the target system
- Run `nix flake check` to verify secrets are discovered
### Permission denied
- Verify secret permissions in `/run/agenix/`
- Check if custom permissions are needed (use `extraSecrets`)
- Ensure the service user/group has access to the secret file
### Age decrypt failed
- Verify the host's age identity exists: `ls -l /etc/age/identity.key`
- Check that the secret was encrypted with the host's public key
- Confirm SSH host key hasn't changed (would change derived age key)
## References
- [ragenix GitHub](https://github.com/yaxitech/ragenix)
- [agenix upstream](https://github.com/ryantm/agenix)
- [age encryption tool](https://age-encryption.org/)


@@ -0,0 +1 @@
age14emzyraytqzmre58c452t07rtcj87cwqwmd9z3gj7upugtxk8s3sda5tju

BIN
secrets/core Normal file

Binary file not shown.

121
secrets/create-secret.sh Executable file

@@ -0,0 +1,121 @@
#!/usr/bin/env bash
set -euo pipefail
# Create a new age-encrypted secret with auto-determined recipients
# Usage: ./create-secret.sh <path> [content]
# path: relative to secrets/ (e.g., "usda-dash/my-secret.age" or "global/shared.age")
# content: stdin if not provided
SECRETS_DIR="$(cd "$(dirname "$0")" && pwd)"
if [ $# -lt 1 ]; then
echo "Usage: $0 <path> [content]" >&2
echo "Examples:" >&2
echo " $0 usda-dash/database-url.age <<< 'postgresql://...'" >&2
echo " $0 global/api-key.age < secret-file.txt" >&2
echo " echo 'secret' | $0 nix-builder/token.age" >&2
exit 1
fi
SECRET_PATH="$1"
shift
# Extract directory from path (e.g., "usda-dash/file.age" -> "usda-dash")
SECRET_DIR="$(dirname "$SECRET_PATH")"
SECRET_FILE="$(basename "$SECRET_PATH")"
# Ensure .age extension
if [[ ! "$SECRET_FILE" =~ \.age$ ]]; then
echo "Error: Secret file must have .age extension" >&2
exit 1
fi
TARGET_FILE="$SECRETS_DIR/$SECRET_PATH"
# Ensure target directory exists
mkdir -p "$(dirname "$TARGET_FILE")"
# Collect recipient keys
RECIPIENTS=()
if [ "$SECRET_DIR" = "global" ]; then
echo "Creating global secret (encrypted for all hosts + admins)..." >&2
# Add all host keys
for host_dir in "$SECRETS_DIR"/*/; do
host_name="$(basename "$host_dir")"
# Skip non-host directories
if [ "$host_name" = "admins" ] || [ "$host_name" = "global" ]; then
continue
fi
# Add all .age.pub files from this host
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$host_dir" -maxdepth 1 -name "*.age.pub" -print0)
done
# Add global keys
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/global" -maxdepth 1 -name "*.age.pub" -print0 2>/dev/null || true)
else
echo "Creating host-specific secret for $SECRET_DIR..." >&2
# Check if host directory exists
if [ ! -d "$SECRETS_DIR/$SECRET_DIR" ]; then
echo "Error: Host directory $SECRET_DIR does not exist" >&2
echo "Create it first: mkdir -p secrets/$SECRET_DIR" >&2
exit 1
fi
# Add this host's keys
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/$SECRET_DIR" -maxdepth 1 -name "*.age.pub" -print0)
# Add global keys (so global hosts can also decrypt)
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/global" -maxdepth 1 -name "*.age.pub" -print0 2>/dev/null || true)
fi
# Add admin keys (for editing from workstations)
if [ -d "$SECRETS_DIR/admins" ]; then
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/admins" -maxdepth 1 -name "*.age.pub" -print0 2>/dev/null || true)
fi
# Check if we have any recipients
if [ ${#RECIPIENTS[@]} -eq 0 ]; then
echo "Error: No recipient keys found!" >&2
echo "Run ./update-age-keys.sh first to generate .age.pub files" >&2
exit 1
fi
echo "Found ${#RECIPIENTS[@]} recipient key(s):" >&2
for key in "${RECIPIENTS[@]}"; do
echo " - $(basename "$key")" >&2
done
# Create recipient list file (temporary)
RECIPIENT_LIST=$(mktemp)
trap "rm -f $RECIPIENT_LIST" EXIT
for key in "${RECIPIENTS[@]}"; do
cat "$key" >> "$RECIPIENT_LIST"
done
# Encrypt the secret
if [ $# -gt 0 ]; then
# Content provided as argument
echo "$@" | age -R "$RECIPIENT_LIST" -o "$TARGET_FILE"
else
# Content from stdin
age -R "$RECIPIENT_LIST" -o "$TARGET_FILE"
fi
echo "✓ Created $TARGET_FILE" >&2
echo " Edit with: ragenix -e secrets/$SECRET_PATH" >&2


@@ -0,0 +1 @@
age1udmpqkedupd33gyut85ud3nvppydzeg04kkuneymkvxcjjej244s4v8xjc


@@ -0,0 +1,10 @@
# Host-specific secret configuration for nix-builder
{
# SSH host key should be readable by sshd
ssh_host_ed25519_key = {
mode = "0600";
owner = "root";
group = "root";
path = "/etc/ssh/ssh_host_ed25519_key";
};
}


@@ -0,0 +1 @@
age1u5tczg2sx90n03uuz9h549f4h3h7sq5uehhqpampzs7vj8ew7y6s2mjwz0

176
secrets/secrets.nix Normal file

@@ -0,0 +1,176 @@
# ============================================================================
# Agenix Secret Recipients Configuration (Auto-Generated)
# ============================================================================
# This file automatically discovers hosts and their public keys from the
# secrets/ directory structure and generates recipient configurations.
#
# Directory structure:
# secrets/{hostname}/*.pub -> SSH/age public keys for that host
# secrets/global/*.pub -> Keys accessible to all hosts
#
# Usage:
# ragenix -e secrets/global/example.age # Edit/create secret
# ragenix -r # Re-key all secrets
#
# To add admin keys for editing secrets, create secrets/admins/*.pub files
# with your personal age public keys (generated with: age-keygen)
let
lib = builtins;
# Helper functions not in builtins
filterAttrs =
pred: set:
lib.listToAttrs (
lib.filter (item: pred item.name item.value) (
lib.map (name: {
inherit name;
value = set.${name};
}) (lib.attrNames set)
)
);
concatLists = lists: lib.foldl' (acc: list: acc ++ list) [ ] lists;
unique =
list:
let
go =
acc: remaining:
if remaining == [ ] then
acc
else if lib.elem (lib.head remaining) acc then
go acc (lib.tail remaining)
else
go (acc ++ [ (lib.head remaining) ]) (lib.tail remaining);
in
go [ ] list;
hasSuffix =
suffix: str:
let
lenStr = lib.stringLength str;
lenSuffix = lib.stringLength suffix;
in
lenStr >= lenSuffix && lib.substring (lenStr - lenSuffix) lenSuffix str == suffix;
nameValuePair = name: value: { inherit name value; };
secretsPath = ./secrets;
# Read all directories in secrets/
secretDirs = if lib.pathExists secretsPath then lib.readDir secretsPath else { };
# Filter to only directories (excludes files)
isDirectory = name: type: type == "directory";
directories = lib.filter (name: isDirectory name secretDirs.${name}) (lib.attrNames secretDirs);
# Read public keys from a directory and convert to age format
readHostKeys =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
# Prefer .age.pub files (pre-converted), fall back to .pub files
agePubFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age.pub" name) files;
sshPubFiles = filterAttrs (
name: type: type == "regular" && hasSuffix ".pub" name && !(hasSuffix ".age.pub" name)
) files;
# Read age public keys (already in correct format)
ageKeys = lib.map (
name:
let
content = lib.readFile (dirPath + "/${name}");
# Trim whitespace/newlines
trimmed = lib.replaceStrings [ "\n" " " "\r" "\t" ] [ "" "" "" "" ] content;
in
trimmed
) (lib.attrNames agePubFiles);
# For SSH keys, just include them as-is (user needs to convert with ssh-to-age)
# Or they can run the update-age-keys.sh script
sshKeys =
if (lib.length (lib.attrNames sshPubFiles)) > 0 then
lib.trace "Warning: ${dirName} has unconverted SSH keys. Run secrets/update-age-keys.sh" [ ]
else
[ ];
in
lib.filter (k: k != null && k != "") (ageKeys ++ sshKeys);
# Build host key mappings: { hostname = [ "age1..." "age2..." ]; }
hostKeys = lib.listToAttrs (
lib.map (dir: nameValuePair dir (readHostKeys dir)) (
lib.filter (d: d != "global" && d != "admins") directories
)
);
# Global keys that all hosts can use
globalKeys = if lib.elem "global" directories then readHostKeys "global" else [ ];
# Admin keys for editing secrets
adminKeys = if lib.elem "admins" directories then readHostKeys "admins" else [ ];
# All host keys combined
allHostKeys = concatLists (lib.attrValues hostKeys);
# Find all .age files in the secrets directory
findSecrets =
dir:
let
dirPath = secretsPath + "/${dir}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
ageFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age" name) files;
in
lib.map (name: "secrets/${dir}/${name}") (lib.attrNames ageFiles);
# Generate recipient list for a secret based on its location
getRecipients =
secretPath:
let
# Extract directory name from path: "secrets/nix-builder/foo.age" -> "nix-builder"
pathParts = lib.split "/" secretPath;
dirName = lib.elemAt pathParts 2;
in
if dirName == "global" then
# Global secrets: all hosts + admins
allHostKeys ++ globalKeys ++ adminKeys
else if hostKeys ? ${dirName} then
# Host-specific secrets: that host + global keys + admins
hostKeys.${dirName} ++ globalKeys ++ adminKeys
else
# Fallback: just admins
adminKeys;
# Find all secrets across all directories
allSecrets = concatLists (lib.map findSecrets directories);
# Generate the configuration
secretsConfig = lib.listToAttrs (
lib.map (
secretPath:
let
recipients = getRecipients secretPath;
# Remove duplicates and empty keys
uniqueRecipients = unique (lib.filter (k: k != null && k != "") recipients);
in
nameValuePair secretPath {
publicKeys = uniqueRecipients;
}
) allSecrets
);
in
secretsConfig
// {
# Export helper information for debugging
_meta = {
hostKeys = hostKeys;
globalKeys = globalKeys;
adminKeys = adminKeys;
allHostKeys = allHostKeys;
discoveredSecrets = allSecrets;
};
}
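The directory-to-recipient rule implemented by `getRecipients` above (global secrets are readable by every host plus admins; host-scoped secrets by that host, the global keys, and admins; anything else falls back to admins only) can be mirrored outside Nix. This Python sketch is illustrative only; the key strings and host names are hypothetical placeholders, and note that plain `str.split` uses index 1 where Nix's `lib.split` uses index 2 (because `lib.split` interleaves match lists):

```python
# Illustrative mirror of the Nix getRecipients logic above.
# Key values and host names are hypothetical placeholders, not real keys.
host_keys = {"nix-builder": ["age1builder"], "usda-dash": ["age1dash"]}
global_keys = ["age1global"]
admin_keys = ["age1admin"]
all_host_keys = [k for keys in host_keys.values() for k in keys]

def get_recipients(secret_path: str) -> list[str]:
    # "secrets/nix-builder/foo.age" -> "nix-builder"
    # (index 1 here; lib.split in Nix yields the directory at index 2)
    dir_name = secret_path.split("/")[1]
    if dir_name == "global":
        # Global secrets: every host + global keys + admins
        return all_host_keys + global_keys + admin_keys
    if dir_name in host_keys:
        # Host-specific secrets: that host + global keys + admins
        return host_keys[dir_name] + global_keys + admin_keys
    # Fallback: admins only
    return admin_keys

print(get_recipients("secrets/usda-dash/usda-vision-azure-env.age"))
```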

secrets/update-age-keys.sh Executable file
@@ -0,0 +1,36 @@
#!/usr/bin/env bash
# ============================================================================
# Update Age Keys from SSH Public Keys
# ============================================================================
# This script converts SSH public keys to age format for use with ragenix.
# Run this after adding new SSH .pub files to create corresponding .age.pub files.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
echo "Converting SSH public keys to age format..."
# Find all .pub files that are SSH keys (not already .age.pub)
find . -name "*.pub" -not -name "*.age.pub" -type f | while read -r pubkey; do
# Check if it's an SSH key
if grep -q "^ssh-" "$pubkey" 2>/dev/null || grep -q "^ecdsa-" "$pubkey" 2>/dev/null; then
age_key=$(nix shell nixpkgs#ssh-to-age -c ssh-to-age < "$pubkey" 2>/dev/null || true)
if [ -n "$age_key" ]; then
# Create .age.pub file with the age key
age_file="${pubkey%.pub}.age.pub"
echo "$age_key" > "$age_file"
echo "✓ Converted: $pubkey -> $age_file"
else
echo "⚠ Skipped: $pubkey (conversion failed)"
fi
fi
done
echo ""
echo "Done! Age public keys have been generated."
echo "You can now use ragenix to manage secrets:"
echo " ragenix -e secrets/global/my-secret.age"
echo " ragenix -r # Re-key all secrets with updated keys"
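The `"${pubkey%.pub}.age.pub"` naming in the script is plain POSIX parameter expansion; a quick sketch of just the filename mapping (no ssh-to-age needed, path is a hypothetical example):

```shell
# Derive the .age.pub filename the script would write, without running ssh-to-age.
pubkey="nix-builder/ssh_host_ed25519_key.pub"
age_file="${pubkey%.pub}.age.pub"   # strip ".pub", append ".age.pub"
echo "$age_file"
```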


@@ -0,0 +1,8 @@
# Host-specific secret configuration for usda-dash
{
usda-vision-azure-env = {
mode = "0600";
owner = "root";
group = "root";
};
}


@@ -0,0 +1 @@
age1lr24yvk7rdfh5wkle7h32jpxqxm2e8vk85mc4plv370u2sh4yfmszaaejx


@@ -0,0 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHI73LOEK2RgfjhZWpryntlLbx0LouHrhQ6v0vZu4Etr root@usda-dash

Binary file not shown.


@@ -17,87 +17,99 @@ let
  cfg = config.athenix.sw.builders;
in
{
  options.athenix.sw.builders = mkOption {
    type = lib.types.submodule {
      options = {
        enable = mkOption {
          type = lib.types.bool;
          default = false;
          description = ''
            Enable build server configuration.

            Includes:
            - SSH host keys for common Git servers (factory.uga.edu, github.com)
            - Gitea Actions runner support (optional)
            - Build tools and dependencies

            Recommended for: CI/CD servers, build containers, development infrastructure
          '';
          example = true;
        };
        giteaRunner = mkOption {
          type = lib.types.submodule {
            options = {
              enable = mkOption {
                type = lib.types.bool;
                default = false;
                description = ''
                  Enable Gitea Actions self-hosted runner.

                  This runner will connect to a Gitea instance and execute CI/CD workflows.
                  Requires manual setup of the token file before the service will start.
                '';
                example = true;
              };
              url = mkOption {
                type = lib.types.str;
                description = ''
                  URL of the Gitea instance to connect to.
                  This should be the base URL without any path components.
                '';
                example = "https://git.factory.uga.edu";
              };
              tokenFile = mkOption {
                type = lib.types.path;
                default = "/var/lib/gitea-runner-token";
                description = ''
                  Path to file containing Gitea runner registration token.

                  To generate:
                  1. Go to your Gitea repository settings
                  2. Navigate to Actions > Runners
                  3. Click "Create new Runner"
                  4. Save the token to this file:
                     echo "TOKEN=your-token-here" | sudo tee /var/lib/gitea-runner-token > /dev/null

                  The service will not start until this file exists.
                '';
                example = "/var/secrets/gitea-runner-token";
              };
              extraLabels = mkOption {
                type = lib.types.listOf lib.types.str;
                default = [ ];
                description = ''
                  Additional labels to identify this runner in workflow files.
                  Use labels to target specific runners for different job types.
                '';
                example = [
                  "self-hosted"
                  "nix"
                  "x86_64-linux"
                ];
              };
              name = mkOption {
                type = lib.types.str;
                default = "athenix";
                description = ''
                  Unique name for this runner instance.
                  Shown in Gitea's runner list and logs.
                '';
                example = "nix-builder-1";
              };
            };
          };
          default = { };
          description = "Gitea Actions runner configuration.";
        };
      };
    };
    default = { };
    description = "Build server configuration (CI/CD, Gitea Actions).";
  };

  config = mkIf cfg.enable (mkMerge [
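With the options now wrapped in a `types.submodule`, consumers set them as one nested attribute set. A hypothetical host configuration (a sketch, not taken from the repo) might look like:

```nix
# Hypothetical host config enabling the builder role and a Gitea runner.
{
  athenix.sw.builders = {
    enable = true;
    giteaRunner = {
      enable = true;
      url = "https://git.factory.uga.edu";
      extraLabels = [
        "nix"
        "x86_64-linux"
      ];
    };
  };
}
```

The `default = { };` on each submodule option keeps evaluation working when a host sets nothing at all.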


@@ -26,6 +26,7 @@ in
./gc.nix ./gc.nix
./updater.nix ./updater.nix
./update-ref.nix ./update-ref.nix
./secrets.nix
./desktop ./desktop
./headless ./headless
./builders ./builders
@@ -38,8 +39,8 @@ in
options.athenix.sw = { options.athenix.sw = {
enable = mkOption { enable = mkOption {
type = types.bool; type = lib.types.bool;
default = false; default = true;
description = '' description = ''
Enable standard workstation configuration with base packages. Enable standard workstation configuration with base packages.
@@ -53,17 +54,15 @@ in
''; '';
}; };
# DEPRECATED: Backwards compatibility for external modules
# Use athenix.sw.<type>.enable instead
type = mkOption { type = mkOption {
type = types.nullOr (types.either types.str (types.listOf types.str)); type = lib.types.nullOr (lib.types.either lib.types.str (lib.types.listOf lib.types.str));
default = null; default = null;
description = "DEPRECATED: Use athenix.sw.<type>.enable instead. Legacy type selection."; description = "DEPRECATED: Use athenix.sw.<type>.enable instead. Legacy type selection.";
visible = false; visible = false;
}; };
extraPackages = mkOption { extraPackages = mkOption {
type = types.listOf types.package; type = lib.types.listOf lib.types.package;
default = [ ]; default = [ ];
description = '' description = ''
Additional system packages to install beyond the defaults. Additional system packages to install beyond the defaults.
@@ -73,7 +72,7 @@ in
}; };
excludePackages = mkOption { excludePackages = mkOption {
type = types.listOf types.package; type = lib.types.listOf lib.types.package;
default = [ ]; default = [ ];
description = '' description = ''
Packages to exclude from the default package list. Packages to exclude from the default package list.


@@ -17,25 +17,31 @@ let
  cfg = config.athenix.sw.desktop;
in
{
  options.athenix.sw.desktop = mkOption {
    type = lib.types.submodule {
      options = {
        enable = mkOption {
          type = lib.types.bool;
          default = false;
          description = ''
            Enable full desktop environment with KDE Plasma 6.

            Includes:
            - KDE Plasma 6 desktop with SDDM display manager
            - Full graphical software suite (Firefox, Chromium, LibreOffice)
            - Printing and scanning support (CUPS)
            - Virtualization (libvirt, virt-manager)
            - Bluetooth and audio (PipeWire)
            - Video conferencing (Zoom, Teams)

            Recommended for: Workstations, development machines, user desktops
          '';
          example = true;
        };
      };
    };
    default = { };
    description = "Desktop environment configuration (KDE Plasma 6).";
  };

  config = mkIf cfg.enable (mkMerge [


@@ -1,4 +1,6 @@
{
  config,
  lib,
  pkgs,
  ...
}:
@@ -10,7 +12,11 @@
# It reconstructs the terminfo database from the provided definition and
# adds it to the system packages.
with lib;
let
  cfg = config.athenix.sw;
  ghostty-terminfo = pkgs.runCommand "ghostty-terminfo" { } ''
    mkdir -p $out/share/terminfo
    cat > ghostty.info <<'EOF'
@@ -99,5 +105,7 @@ let
  '';
in
{
  config = mkIf cfg.enable {
    environment.systemPackages = [ ghostty-terminfo ];
  };
}


@@ -17,23 +17,29 @@ let
  cfg = config.athenix.sw.headless;
in
{
  options.athenix.sw.headless = mkOption {
    type = lib.types.submodule {
      options = {
        enable = mkOption {
          type = lib.types.bool;
          default = false;
          description = ''
            Enable minimal headless server configuration.

            Includes:
            - SSH server with password authentication
            - Minimal CLI tools (tmux, man)
            - Systemd-networkd for networking
            - No graphical environment

            Recommended for: Servers, containers (LXC), WSL, remote systems
          '';
          example = true;
        };
      };
    };
    default = { };
    description = "Headless server configuration (SSH, minimal CLI tools).";
  };

  config = mkIf cfg.enable (mkMerge [


@@ -18,10 +18,27 @@ let
  cfg = config.athenix.sw.python;
in
{
  options.athenix.sw.python = lib.mkOption {
    type = lib.types.submodule {
      options = {
        enable = lib.mkOption {
          type = lib.types.bool;
          default = true;
          description = ''
            Enable Python development tools (pixi, uv).

            Provides:
            - pixi: Fast, cross-platform package manager for Python
            - uv: Extremely fast Python package installer and resolver

            These tools manage project-based dependencies rather than global
            Python packages, avoiding conflicts and improving reproducibility.
          '';
        };
      };
    };
    default = { };
    description = "Python development environment configuration.";
  };

  config = mkIf cfg.enable {

sw/secrets.nix Normal file

@@ -0,0 +1,230 @@
# ============================================================================
# Automatic Secret Management with Agenix
# ============================================================================
# This module automatically loads age-encrypted secrets from ./secrets based on
# the hostname. Secrets are organized by directory:
# - ./secrets/global/ -> Installed on ALL systems
# - ./secrets/{hostname}/ -> Installed only on matching host
#
# Secret files should be .age encrypted files. Public keys (.pub) are ignored.
{
config,
lib,
pkgs,
...
}:
with lib;
let
cfg = config.athenix.sw;
secretsPath = ../secrets;
# Get the fleet-assigned hostname (avoids issues with LXC empty hostnames)
hostname = config.athenix.host.name;
# Read all directories in ./secrets
secretDirs = if builtins.pathExists secretsPath then builtins.readDir secretsPath else { };
# Filter to only directories (excludes files)
isDirectory = name: type: type == "directory";
directories = lib.filterAttrs isDirectory secretDirs;
# Read secrets from a specific directory
readSecretsFromDir =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = builtins.readDir dirPath;
# Check if there's a default.nix with custom secret configurations
hasDefaultNix = files ? "default.nix";
customConfigs = if hasDefaultNix then import (dirPath + "/default.nix") else { };
# Only include .age files (exclude .pub public keys and other files)
secretFiles = lib.filterAttrs (name: type: type == "regular" && lib.hasSuffix ".age" name) files;
in
lib.mapAttrs' (
name: _:
let
# Remove .age extension for the secret name
secretName = lib.removeSuffix ".age" name;
# Get custom config for this secret if defined
customConfig = customConfigs.${secretName} or { };
# Base configuration with file path
baseConfig = {
file = dirPath + "/${name}";
};
in
lib.nameValuePair secretName (baseConfig // customConfig)
) secretFiles;
# Read public keys from a specific directory and map to private key paths
readIdentityPathsFromDir =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = if builtins.pathExists dirPath then builtins.readDir dirPath else { };
# Only include .pub public key files
pubKeyFiles = lib.filterAttrs (name: type: type == "regular" && lib.hasSuffix ".pub" name) files;
in
lib.mapAttrsToList (
name: _:
let
# Map public key filename to expected private key location
baseName = lib.removeSuffix ".pub" name;
filePath = dirPath + "/${name}";
fileContent = builtins.readFile filePath;
# Check if it's an SSH key by looking at the content
isSSHKey = lib.hasPrefix "ssh-" fileContent || lib.hasPrefix "ecdsa-" fileContent;
in
if lib.hasPrefix "ssh_host_" name then
# SSH host keys: ssh_host_ed25519_key.pub -> /etc/ssh/ssh_host_ed25519_key
"/etc/ssh/${baseName}"
else if name == "identity.pub" then
# Standard age identity: identity.pub -> /etc/age/identity.key
"/etc/age/identity.key"
else if isSSHKey then
# Other SSH keys (user keys, etc.): hunter_halloran_key.pub -> /etc/ssh/hunter_halloran_key
"/etc/ssh/${baseName}"
else
# Generic age keys: key.pub -> /etc/age/key
"/etc/age/${baseName}"
) pubKeyFiles;
# Determine which secrets apply to this host
applicableSecrets =
let
# Global secrets apply to all hosts
globalSecrets = if directories ? "global" then readSecretsFromDir "global" else { };
# Host-specific secrets
hostSecrets = if directories ? ${hostname} then readSecretsFromDir hostname else { };
in
globalSecrets // hostSecrets; # Host-specific secrets override global if same name
# Determine which identity paths (private keys) to use for decryption
identityPaths =
let
# Global identity paths (keys in global/ that all hosts can use)
globalPaths = if directories ? "global" then readIdentityPathsFromDir "global" else [ ];
# Host-specific identity paths
hostPaths = if directories ? ${hostname} then readIdentityPathsFromDir hostname else [ ];
# Default paths that NixOS/agenix use
defaultPaths = [
"/etc/ssh/ssh_host_rsa_key"
"/etc/ssh/ssh_host_ed25519_key"
"/etc/age/identity.key"
];
# Combine all paths and remove duplicates
allPaths = lib.unique (defaultPaths ++ globalPaths ++ hostPaths);
in
allPaths;
in
{
options.athenix.sw.secrets = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Enable automatic secret management using agenix.
Secrets are loaded from ./secrets based on directory structure:
- ./secrets/global/ -> All systems
- ./secrets/{hostname}/ -> Specific host only
Only .age encrypted files are loaded; .pub files are ignored.
'';
};
extraSecrets = mkOption {
type = types.attrsOf (
types.submodule {
options = {
file = mkOption {
type = types.path;
description = "Path to the encrypted secret file";
};
mode = mkOption {
type = types.str;
default = "0400";
description = "Permissions mode for the decrypted secret";
};
owner = mkOption {
type = types.str;
default = "root";
description = "Owner of the decrypted secret file";
};
group = mkOption {
type = types.str;
default = "root";
description = "Group of the decrypted secret file";
};
};
}
);
default = { };
description = ''
Additional secrets to define manually, beyond the auto-discovered ones.
Use this for secrets that need custom permissions or are stored elsewhere.
'';
example = lib.literalExpression ''
{
"my-secret" = {
file = ./secrets/custom/secret.age;
mode = "0440";
owner = "nginx";
group = "nginx";
};
}
'';
};
};
config = mkIf (cfg.enable && cfg.secrets.enable) {
# Auto-discovered secrets with default permissions
age.secrets = applicableSecrets // cfg.secrets.extraSecrets;
# Generate age identity files from SSH host keys at boot
# This is needed because age can't reliably use OpenSSH private keys directly
# Must run before agenix tries to decrypt secrets
system.activationScripts.convertSshToAge = {
deps = [
"users"
"groups"
];
text = ''
mkdir -p /etc/age
if [ -f /etc/ssh/ssh_host_ed25519_key ]; then
${pkgs.ssh-to-age}/bin/ssh-to-age -private-key -i /etc/ssh/ssh_host_ed25519_key > /etc/age/ssh_host_ed25519.age || true
chmod 600 /etc/age/ssh_host_ed25519.age 2>/dev/null || true
fi
if [ -f /etc/ssh/ssh_host_rsa_key ]; then
${pkgs.ssh-to-age}/bin/ssh-to-age -private-key -i /etc/ssh/ssh_host_rsa_key > /etc/age/ssh_host_rsa.age 2>/dev/null || true
chmod 600 /etc/age/ssh_host_rsa.age 2>/dev/null || true
fi
'';
};
# Add the converted age keys to identity paths (in addition to auto-discovered ones)
age.identityPaths = identityPaths ++ [
"/etc/age/ssh_host_ed25519.age"
"/etc/age/ssh_host_rsa.age"
];
# Optional: Add assertion to warn if no secrets found
warnings =
let
hasSecrets = (builtins.length (builtins.attrNames applicableSecrets)) > 0;
in
lib.optional (
!hasSecrets
) "No age-encrypted secrets found in ./secrets/global/ or ./secrets/${hostname}/";
};
}
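The filename-to-private-key-path convention in `readIdentityPathsFromDir` above can be mirrored for clarity. This Python sketch is illustrative only; the `is_ssh_key` flag stands in for the module's content check (`hasPrefix "ssh-"` / `"ecdsa-"` on the `.pub` file):

```python
# Illustrative mirror of the .pub -> private-key-path mapping above.
def identity_path(pub_name: str, is_ssh_key: bool) -> str:
    base = pub_name.removesuffix(".pub")
    if pub_name.startswith("ssh_host_"):
        # SSH host keys: ssh_host_ed25519_key.pub -> /etc/ssh/ssh_host_ed25519_key
        return f"/etc/ssh/{base}"
    if pub_name == "identity.pub":
        # Standard age identity: identity.pub -> /etc/age/identity.key
        return "/etc/age/identity.key"
    if is_ssh_key:
        # Other SSH public keys (user keys, etc.)
        return f"/etc/ssh/{base}"
    # Generic age keys: key.pub -> /etc/age/key
    return f"/etc/age/{base}"

print(identity_path("ssh_host_ed25519_key.pub", True))
```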


@@ -14,36 +14,42 @@ let
  cfg = config.athenix.sw.stateless-kiosk;
in
{
  options.athenix.sw.stateless-kiosk = mkOption {
    type = lib.types.submodule {
      options = {
        enable = mkOption {
          type = lib.types.bool;
          default = false;
          description = ''
            Enable stateless kiosk mode for diskless PXE boot systems.

            Includes:
            - Sway (Wayland compositor)
            - Chromium in fullscreen kiosk mode
            - MAC address-based URL routing
            - Network-only boot (no local storage)
            - Auto-start browser on boot

            Recommended for: Assembly line stations, diskless kiosks, PXE boot displays
          '';
          example = true;
        };
        kioskUrl = mkOption {
          type = lib.types.str;
          default = "https://ha.factory.uga.edu";
          description = ''
            Default URL to display in the kiosk browser.
            Note: For stateless-kiosk, MAC address-based routing may override this.
            See sw/stateless-kiosk/mac-hostmap.nix for MAC-to-URL mappings.
          '';
          example = "https://homeassistant.lan:8123/lovelace/dashboard";
        };
      };
    };
    default = { };
    description = "Stateless kiosk configuration (PXE boot, Sway, MAC-based routing).";
  };

  config = mkIf cfg.enable (mkMerge [


@@ -1,13 +1,13 @@
# This module configures Chromium for kiosk mode under Sway.
# It includes a startup script that determines the kiosk URL based on the machine's MAC address.
{
  pkgs,
  inputs,
  ...
}:
let
  macCaseBuilder = inputs.self.lib.macCaseBuilder;
  macCases = macCaseBuilder {
    varName = "STATION";
  };


@@ -1,28 +0,0 @@
# Shared MAC address to station mapping and case builder for stateless-kiosk modules
{ lib }:
let
hostmap = {
"00:e0:4c:46:0b:32" = "1";
"00:e0:4c:46:07:26" = "2";
"00:e0:4c:46:05:94" = "3";
"00:e0:4c:46:07:11" = "4";
"00:e0:4c:46:08:02" = "5";
"00:e0:4c:46:08:5c" = "6";
};
# macCaseBuilder: builds a shell case statement from a hostmap
# varName: the shell variable to assign
# prefix: optional string to prepend to the value (default: "")
# attrset: attribute set to use (default: hostmap)
macCaseBuilder =
{
varName,
prefix ? "",
attrset ? hostmap,
}:
lib.concatStringsSep "\n" (
lib.mapAttrsToList (mac: val: " ${mac}) ${varName}=${prefix}${val} ;;") attrset
);
in
{
inherit hostmap macCaseBuilder;
}
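The removed `macCaseBuilder` (now expected at `inputs.self.lib.macCaseBuilder`) simply renders shell `case` arms from the hostmap. A Python mirror of that string building, using a trimmed copy of the hostmap above for illustration:

```python
# Illustrative mirror of the Nix macCaseBuilder: render shell case arms
# from a MAC -> value mapping (the two MACs come from the hostmap above).
def mac_case_builder(var_name: str, attrset: dict, prefix: str = "") -> str:
    return "\n".join(
        f"  {mac}) {var_name}={prefix}{val} ;;" for mac, val in attrset.items()
    )

hostmap = {"00:e0:4c:46:0b:32": "1", "00:e0:4c:46:07:26": "2"}
print(mac_case_builder("NEW_HOST", hostmap, prefix="nix-station"))
```

Each rendered arm is meant to be spliced into a `case "$MAC" in … esac` block by the calling module, as the kiosk scripts do.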


@@ -1,7 +1,4 @@
{
  ...
}:
{


@@ -1,11 +1,10 @@
{
  pkgs,
  inputs,
  ...
}:
let
  macCaseBuilder = inputs.self.lib.macCaseBuilder;
  shellCases = macCaseBuilder {
    varName = "NEW_HOST";
    prefix = "nix-station";


@@ -12,35 +12,41 @@ let
  cfg = config.athenix.sw.tablet-kiosk;
in
{
  options.athenix.sw.tablet-kiosk = mkOption {
    type = lib.types.submodule {
      options = {
        enable = mkOption {
          type = lib.types.bool;
          default = false;
          description = ''
            Enable tablet kiosk mode with touch-optimized interface.

            Includes:
            - Phosh mobile desktop environment
            - Chromium in fullscreen kiosk mode
            - On-screen keyboard (Squeekboard)
            - Auto-login and auto-start browser
            - Touch gesture support
            - Optimized for Surface Pro tablets

            Recommended for: Surface tablets, touchscreen kiosks, interactive displays
          '';
          example = true;
        };
        kioskUrl = mkOption {
          type = lib.types.str;
          default = "https://ha.factory.uga.edu";
          description = ''
            URL to display in the kiosk browser on startup.
            The browser will automatically navigate to this URL in fullscreen mode.
          '';
          example = "https://dashboard.example.com";
        };
      };
    };
    default = { };
    description = "Tablet kiosk configuration (Phosh, touch interface).";
  };

  config = mkIf cfg.enable (mkMerge [


@@ -1,511 +1,524 @@
{ pkgs, ... }:
{ {
environment.systemPackages = with pkgs; [ config,
python3 lib,
git pkgs,
(pkgs.writeShellScriptBin "update-ref" '' ...
set -euo pipefail }:
RED='\033[31m'; YEL='\033[33m'; NC='\033[0m' with lib;
die() { printf "''${RED}error:''${NC} %s\n" "$*" >&2; exit 2; }
warn() { printf "''${YEL}warning:''${NC} %s\n" "$*" >&2; }
usage() { let
cat >&2 <<'EOF' cfg = config.athenix.sw;
usage: in
update-ref [-R PATH|--athenix-repo=PATH] [-b BRANCH|--athenix-branch=BRANCH] {
[-m "msg"|--message "msg"] config = mkIf cfg.enable {
[-p[=false] [remote[=URL]]|--push[=false] [remote[=URL]]] environment.systemPackages = with pkgs; [
[--make-local|-l] [--make-remote|-r] [--ssh] python3
user=<username> | system=<device-type>:<hostkey> git
EOF (pkgs.writeShellScriptBin "update-ref" ''
exit 2 set -euo pipefail
}
# --- must be in a git repo (current dir) --- RED='\033[31m'; YEL='\033[33m'; NC='\033[0m'
git rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "This directory is not a git project" die() { printf "''${RED}error:''${NC} %s\n" "$*" >&2; exit 2; }
CUR_REPO_ROOT="$(git rev-parse --show-toplevel)" warn() { printf "''${YEL}warning:''${NC} %s\n" "$*" >&2; }
CUR_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
# --- athenix checkout (working tree) --- usage() {
ATHENIX_DIR="$HOME/athenix" cat >&2 <<'EOF'
ATHENIX_BRANCH="" usage:
update-ref [-R PATH|--athenix-repo=PATH] [-b BRANCH|--athenix-branch=BRANCH]
[-m "msg"|--message "msg"]
[-p[=false] [remote[=URL]]|--push[=false] [remote[=URL]]]
[--make-local|-l] [--make-remote|-r] [--ssh]
user=<username> | system=<device-type>:<hostkey>
EOF
exit 2
}
# --- current repo automation --- # --- must be in a git repo (current dir) ---
COMMIT_MSG="" git rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "This directory is not a git project"
PUSH_SPEC="" CUR_REPO_ROOT="$(git rev-parse --show-toplevel)"
CUR_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
# --- push / url mode --- # --- athenix checkout (working tree) ---
PUSH_SET=0 ATHENIX_DIR="$HOME/athenix"
DO_PUSH=0 ATHENIX_BRANCH=""
MODE_FORCE="" # "", local, remote
TARGET="" # --- current repo automation ---
COMMIT_MSG=""
PUSH_SPEC=""
is_remote_url() { # --- push / url mode ---
# https://, http://, ssh://, or scp-style git@host:org/repo PUSH_SET=0
printf "%s" "$1" | grep -qE '^(https?|ssh)://|^[^/@:]+@[^/:]+:' DO_PUSH=0
} MODE_FORCE="" # "", local, remote
derive_full_hostname() { TARGET=""
devtype="$1"; hostkey="$2"
if printf "%s" "$hostkey" | grep -q '-' || printf "%s" "$hostkey" | grep -q "^$devtype"; then
printf "%s" "$hostkey"
elif printf "%s" "$hostkey" | grep -qE '^[0-9]+$'; then
printf "%s" "$devtype$hostkey"
else
printf "%s" "$devtype-$hostkey"
fi
}
extract_existing_fetch_url() { is_remote_url() {
# args: mode file username key # https://, http://, ssh://, or scp-style git@host:org/repo
python3 - "$1" "$2" "$3" "$4" "$5"<<'PY' printf "%s" "$1" | grep -qE '^(https?|ssh)://|^[^/@:]+@[^/:]+:'
import sys, re, pathlib }
mode, file, username, key, use_ssh = sys.argv[1:5]
t = pathlib.Path(file).read_text()
def url_from_block(block: str) -> str: derive_full_hostname() {
if not block: devtype="$1"; hostkey="$2"
return "" if printf "%s" "$hostkey" | grep -q '-' || printf "%s" "$hostkey" | grep -q "^$devtype"; then
m = re.search(r'url\s*=\s*"([^"]+)"\s*;', block) printf "%s" "$hostkey"
url = m.group(1) if m else "" elif printf "%s" "$hostkey" | grep -qE '^[0-9]+$'; then
printf "%s" "$devtype$hostkey"
else
printf "%s" "$devtype-$hostkey"
fi
}
if use_ssh = "true": extract_existing_fetch_url() {
return url # args: mode file username key
python3 - "$1" "$2" "$3" "$4" "$5"<<'PY'
import sys, re, pathlib
mode, file, username, key, use_ssh = sys.argv[1:5]
t = pathlib.Path(file).read_text()
# Already https def url_from_block(block: str) -> str:
if url.startswith("https://"): if not block:
return ""
m = re.search(r'url\s*=\s*"([^"]+)"\s*;', block)
url = m.group(1) if m else ""
if use_ssh = "true":
return url return url
# ssh://git@host/org/repo.git # Already https
m = re.match(r"ssh://(?:.+?)@([^/]+)/(.+)", url) if url.startswith("https://"):
if m: return url
host, path = m.groups()
return f"https://{host}/{path}"
# git@host:org/repo.git # ssh://git@host/org/repo.git
m = re.match(r"(?:.+?)@([^:]+):(.+)", url) m = re.match(r"ssh://(?:.+?)@([^/]+)/(.+)", url)
if m: if m:
host, path = m.groups() host, path = m.groups()
return f"https://{host}/{path}" return f"https://{host}/{path}"
# If you gave me something cursed # git@host:org/repo.git
raise ValueError(f"Unrecognized SSH git URL format: {url}") m = re.match(r"(?:.+?)@([^:]+):(.+)", url)
if m:
host, path = m.groups()
return f"https://{host}/{path}"
# If you gave me something cursed
raise ValueError(f"Unrecognized SSH git URL format: {url}")
if mode == "user": if mode == "user":
m = re.search(r'(?s)\n\s*' + re.escape(username) + r'\.external\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t) m = re.search(r'(?s)\n\s*' + re.escape(username) + r'\.external\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t)
block = m.group(1) if m else "" block = m.group(1) if m else ""
print(url_from_block(block)) print(url_from_block(block))
else: else:
m = re.search(r'(?s)\n\s*"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t) m = re.search(r'(?s)\n\s*"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t)
block = m.group(1) if m else "" block = m.group(1) if m else ""
print(url_from_block(block)) print(url_from_block(block))
PY PY
} }
# --- parse args ---
while [ "$#" -gt 0 ]; do
  case "$1" in
    user=*|system=*)
      [ -z "$TARGET" ] || die "Only one subcommand allowed (user=... or system=...)"
      TARGET="$1"; shift
      ;;
    --athenix-repo=*)
      ATHENIX_DIR="''${1#*=}"; shift
      ;;
    -R)
      [ "$#" -ge 2 ] || usage
      ATHENIX_DIR="$2"; shift 2
      ;;
    --athenix-branch=*)
      ATHENIX_BRANCH="''${1#*=}"; shift
      ;;
    -b)
      [ "$#" -ge 2 ] || usage
      ATHENIX_BRANCH="$2"; shift 2
      ;;
    -m|--message)
      [ "$#" -ge 2 ] || usage
      COMMIT_MSG="$2"; shift 2
      ;;
    -p|--push)
      PUSH_SET=1
      DO_PUSH=1
      PUSH_SPEC=""
      # If there is a next token, only consume it if it is a remote spec
      # and not another flag or the subcommand.
      if [ "$#" -ge 2 ]; then
        nxt="$2"
        if printf "%s" "$nxt" | grep -qE '^(user=|system=)'; then
          # next token is the subcommand; don't consume it
          shift
        elif printf "%s" "$nxt" | grep -qE '^-'; then
          # next token is another flag; don't consume it
          shift
        elif printf "%s" "$nxt" | grep -qE '^[A-Za-z0-9._-]+$'; then
          # remote name
          PUSH_SPEC="$nxt"
          shift 2
        elif printf "%s" "$nxt" | grep -qE '^[A-Za-z0-9._-]+=.+$'; then
          # remote=URL
          PUSH_SPEC="$nxt"
          shift 2
        else
          # unknown token; treat as not-a-push-spec and don't consume it
          shift
        fi
      else
        shift
      fi
      ;;
    -p=*|--push=*)
      PUSH_SET=1
      val="''${1#*=}"
      case "$val" in
        false|0|no|off) DO_PUSH=0 ;;
        true|1|yes|on|"") DO_PUSH=1 ;;
        *) die "Invalid value for --push: $val (use true/false)" ;;
      esac
      shift
      ;;
    --make-local|-l) MODE_FORCE="local"; shift ;;
    --make-remote|-r) MODE_FORCE="remote"; shift ;;
    --ssh) USE_SSH="true"; shift ;;
    -h|--help) usage ;;
    *) die "Unknown argument: $1" ;;
  esac
done
[ -n "$TARGET" ] || die "Missing required subcommand: user=<username> or system=<device-type>:<hostkey>"
# --- validate athenix working tree path ---
[ -d "$ATHENIX_DIR" ] || die "$ATHENIX_DIR does not exist"
git -C "$ATHENIX_DIR" rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "$ATHENIX_DIR is not a git project (athenix checkout)"
# --- -b behavior: fork/switch athenix working tree into branch ---
if [ -n "$ATHENIX_BRANCH" ]; then
  ATH_CUR_BRANCH="$(git -C "$ATHENIX_DIR" rev-parse --abbrev-ref HEAD)"
  if [ "$ATH_CUR_BRANCH" != "$ATHENIX_BRANCH" ]; then
    if git -C "$ATHENIX_DIR" show-ref --verify --quiet "refs/heads/$ATHENIX_BRANCH"; then
      warn "Branch '$ATHENIX_BRANCH' already exists in $ATHENIX_DIR."
      warn "Delete and recreate it from current branch '$ATH_CUR_BRANCH' state? [y/N] "
      read -r ans || true
      case "''${ans:-N}" in
        y|Y|yes|YES)
          git -C "$ATHENIX_DIR" branch -D "$ATHENIX_BRANCH"
          git -C "$ATHENIX_DIR" switch -c "$ATHENIX_BRANCH"
          ;;
        *)
          git -C "$ATHENIX_DIR" switch "$ATHENIX_BRANCH"
          ;;
      esac
    else
      git -C "$ATHENIX_DIR" switch -c "$ATHENIX_BRANCH"
    fi
  fi
fi
# --- target file + identifiers ---
MODE=""; FILE=""; USERNAME=""; DEVTYPE=""; HOSTKEY=""
case "$TARGET" in
  user=*)
    MODE="user"
    USERNAME="''${TARGET#user=}"
    [ -n "$USERNAME" ] || die "user=<username>: username missing"
    FILE="$ATHENIX_DIR/users.nix"
    ;;
  system=*)
    MODE="system"
    RHS="''${TARGET#system=}"
    printf "%s" "$RHS" | grep -q ':' || die "system=... must be system=<device-type>:<hostkey>"
    DEVTYPE="''${RHS%%:*}"
    HOSTKEY="''${RHS#*:}"
    [ -n "$DEVTYPE" ] || die "system=<device-type>:<hostkey>: device-type missing"
    [ -n "$HOSTKEY" ] || die "system=<device-type>:<hostkey>: hostkey missing"
    FILE="$ATHENIX_DIR/inventory.nix"
    ;;
esac
[ -f "$FILE" ] || die "File not found: $FILE"
# --- push default based on existing entry url in the target file ---
EXISTING_URL=""
ENTRY_EXISTS=0
if [ "$MODE" = "user" ]; then
  EXISTING_URL="$(extract_existing_fetch_url user "$FILE" "$USERNAME" "" "false")"
  [ -n "$EXISTING_URL" ] && ENTRY_EXISTS=1 || true
else
  FULL="$(derive_full_hostname "$DEVTYPE" "$HOSTKEY")"
  EXISTING_URL="$(extract_existing_fetch_url system "$FILE" "" "$HOSTKEY")"
  if [ -n "$EXISTING_URL" ]; then
    ENTRY_EXISTS=1
  elif [ "$FULL" != "$HOSTKEY" ]; then
    EXISTING_URL="$(extract_existing_fetch_url system "$FILE" "" "$FULL")"
    [ -n "$EXISTING_URL" ] && ENTRY_EXISTS=1 || true
  fi
fi
if [ "$PUSH_SET" -eq 0 ]; then
  if [ "$ENTRY_EXISTS" -eq 1 ] && is_remote_url "$EXISTING_URL"; then
    DO_PUSH=1
  else
    DO_PUSH=0
    [ "$MODE_FORCE" = "remote" ] && DO_PUSH=1 || true
  fi
fi
if [ "$MODE_FORCE" = "local" ] && [ "$PUSH_SET" -eq 0 ]; then
  DO_PUSH=0
fi
# --- if current repo dirty, prompt ---
if [ -n "$(git status --porcelain)" ]; then
  warn "This branch has untracked or uncommitted changes. Would you like to add, commit''${DO_PUSH:+, and push}? [y/N] "
  read -r ans || true
  case "''${ans:-N}" in
    y|Y|yes|YES)
      git add -A
      if ! git diff --cached --quiet; then
        if [ -n "$COMMIT_MSG" ]; then git commit -m "$COMMIT_MSG"; else git commit; fi
      else
        warn "No staged changes to commit."
      fi
      ;;
    *) warn "Proceeding without committing. (rev will be last committed HEAD.)" ;;
  esac
fi
# --- push current repo if requested ---
PUSH_REMOTE_URL=""
if [ "$DO_PUSH" -eq 1 ]; then
  if [ -n "$PUSH_SPEC" ]; then
    if printf "%s" "$PUSH_SPEC" | grep -q '='; then
      REM_NAME="''${PUSH_SPEC%%=*}"
      REM_URL="''${PUSH_SPEC#*=}"
      [ -n "$REM_NAME" ] || die "--push remote-name=URL: remote-name missing"
      [ -n "$REM_URL" ] || die "--push remote-name=URL: URL missing"
      if git remote get-url "$REM_NAME" >/dev/null 2>&1; then
        git remote set-url "$REM_NAME" "$REM_URL"
      else
        git remote add "$REM_NAME" "$REM_URL"
      fi
      git push -u "$REM_NAME" "$CUR_BRANCH"
      PUSH_REMOTE_URL="$REM_URL"
    else
      REM_NAME="$PUSH_SPEC"
      git push -u "$REM_NAME" "$CUR_BRANCH"
      PUSH_REMOTE_URL="$(git remote get-url "$REM_NAME")"
    fi
  else
    if ! git rev-parse --abbrev-ref --symbolic-full-name @{u} >/dev/null 2>&1; then
      die "No upstream is set. Set a default upstream with \"git branch -u <remote>/<remote_branch_name>\""
    fi
    git push
    UPSTREAM_REMOTE="$(git rev-parse --abbrev-ref --symbolic-full-name @{u} | cut -d/ -f1)"
    PUSH_REMOTE_URL="$(git remote get-url "$UPSTREAM_REMOTE")"
  fi
fi
CUR_REV="$(git -C "$CUR_REPO_ROOT" rev-parse HEAD)"
# --- choose URL to write into fetchGit ---
if [ "$MODE_FORCE" = "local" ]; then
  FETCH_URL="file://$CUR_REPO_ROOT"
elif [ "$MODE_FORCE" = "remote" ]; then
  if [ "$DO_PUSH" -eq 1 ]; then
    FETCH_URL="$PUSH_REMOTE_URL"
  elif [ "$ENTRY_EXISTS" -eq 1 ] && [ -n "$EXISTING_URL" ] && is_remote_url "$EXISTING_URL"; then
    FETCH_URL="$EXISTING_URL"
  else
    CUR_ORIGIN="$(git remote get-url origin 2>/dev/null || true)"
    [ -n "$CUR_ORIGIN" ] && is_remote_url "$CUR_ORIGIN" || die "--make-remote requires a remote url (set origin or use -p remote=URL)"
    FETCH_URL="$CUR_ORIGIN"
  fi
else
  if [ "$DO_PUSH" -eq 1 ]; then FETCH_URL="$PUSH_REMOTE_URL"; else FETCH_URL="file://$CUR_REPO_ROOT"; fi
fi
# --- rewrite users.nix or inventory.nix ---
python3 - "$MODE" "$FILE" "$FETCH_URL" "$CUR_REV" "$USERNAME" "$DEVTYPE" "$HOSTKEY" <<'PY'
import sys, re, pathlib

mode = sys.argv[1]
path = pathlib.Path(sys.argv[2])
fetch_url = sys.argv[3]
rev = sys.argv[4]
username = sys.argv[5]
devtype = sys.argv[6]
hostkey = sys.argv[7]
text = path.read_text()

def find_matching_brace(s: str, start: int) -> int:
    depth = 0
    i = start
    in_str = False
    while i < len(s):
        ch = s[i]
        if in_str:
            if ch == '\\':
                i += 2
                continue
            if ch == '"':
                in_str = False
            i += 1
            continue
        if ch == '"':
            in_str = True
            i += 1
            continue
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth == 0:
                return i
        i += 1
    raise ValueError("Could not find matching '}'")

def mk_fetch(entry_indent: str) -> str:
    # entry_indent is indentation for the whole `"key" = <here>;` line.
    # The attrset contents should be indented one level deeper.
    inner = entry_indent + "  "
    return (
        'builtins.fetchGit {\n'
        f'{inner}url = "{fetch_url}";\n'
        f'{inner}rev = "{rev}";\n'
        f'{inner}submodules = true;\n'
        f'{entry_indent}}}'
    )

def full_hostname(devtype: str, hostkey: str) -> str:
    if hostkey.startswith(devtype) or "-" in hostkey:
        return hostkey
    if hostkey.isdigit():
        return f"{devtype}{hostkey}"
    return f"{devtype}-{hostkey}"

def update_user(t: str) -> str:
    mblock = re.search(r"(?s)athenix\.users\s*=\s*\{(.*?)\n\s*\};", t)
    if not mblock:
        raise SystemExit("error: could not locate `athenix.users = { ... };` block")
    # locate the full span of the users block to edit inside it
    # (re-find with groups for reconstruction)
    m2 = re.search(r"(?s)(athenix\.users\s*=\s*\{)(.*?)(\n\s*\};)", t)
    head, body, tail = m2.group(1), m2.group(2), m2.group(3)
    entry_re = re.search(
        r"(?s)(\n[ \t]*" + re.escape(username) + r"\.external\s*=\s*)builtins\.fetchGit\s*\{",
        body
    )
    if entry_re:
        brace = body.rfind("{", 0, entry_re.end())
        end = find_matching_brace(body, brace)
        semi = re.match(r"\s*;", body[end+1:])
        if not semi:
            raise SystemExit("error: expected ';' after fetchGit attrset")
        semi_end = end + 1 + semi.end()
        line_start = body.rfind("\n", 0, entry_re.start()) + 1
        indent = re.match(r"[ \t]*", body[line_start:entry_re.start()]).group(0)
        new_body = body[:entry_re.start()] + entry_re.group(1) + mk_fetch(indent) + ";" + body[semi_end:]
    else:
        indent = "  "
        new_body = body + f"\n{indent}{username}.external = {mk_fetch(indent)};\n"
    return t[:m2.start()] + head + new_body + tail + t[m2.end():]

def update_system(t: str) -> str:
    # Find devtype block robustly: start-of-file or newline.
    m = re.search(r"(?s)(^|\n)[ \t]*" + re.escape(devtype) + r"\s*=\s*\{", t)
    if not m:
        raise SystemExit(f"error: could not locate `{devtype} = {{ ... }};` block")
    dev_open = t.find("{", m.end() - 1)
    dev_close = find_matching_brace(t, dev_open)
    dev = t[dev_open:dev_close+1]
    # Find devices attrset inside dev
    dm = re.search(r"(?s)(^|\n)[ \t]*devices\s*=\s*\{", dev)
    if not dm:
        raise SystemExit(f"error: could not locate `devices = {{ ... }};` inside `{devtype}`")
    devices_open = dev.find("{", dm.end() - 1)
    devices_close = find_matching_brace(dev, devices_open)
    devices = dev[devices_open:devices_close+1]
    # indentation for entries in devices
    # find indent of the 'devices' line, then add 2 spaces
    candidates = [hostkey, full_hostname(devtype, hostkey)]
    seen = set()
    candidates = [c for c in candidates if not (c in seen or seen.add(c))]
    for key in candidates:
        entry = re.search(
            r'(?s)\n([ ]*)"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{',
            devices
        )
        if entry:
            entry_indent = entry.group(1)
            # find the '{' we matched
            brace = devices.find("{", entry.end() - 1)
            end = find_matching_brace(devices, brace)
            semi = re.match(r"\s*;", devices[end+1:])
            if not semi:
                raise SystemExit("error: expected ';' after fetchGit attrset in devices")
            semi_end = end + 1 + semi.end()
            # Reconstruct the prefix: newline + indent + "key" =
            prefix = f'\n{entry_indent}"{key}" = '
            new_devices = (
                devices[:entry.start()]
                + prefix
                + mk_fetch(entry_indent)
                + ";"
                + devices[semi_end:]
            )
            new_dev = dev[:devices_open] + new_devices + dev[devices_close+1:]
            return t[:dev_open] + new_dev + t[dev_close+1:]
    # Not found: append into devices (exact hostkey)
    # Indent for new entries: take indent of the closing '}' of devices, add 2 spaces.
    close_line_start = devices.rfind("\n", 0, len(devices)-1) + 1
    close_indent = re.match(r"[ ]*", devices[close_line_start:]).group(0)
    entry_indent = close_indent + "  "
    insertion = f'\n{entry_indent}"{hostkey}" = {mk_fetch(entry_indent)};\n'
    new_devices = devices[:-1].rstrip() + insertion + close_indent + "}"
    new_dev = dev[:devices_open] + new_devices + dev[devices_close+1:]
    return t[:dev_open] + new_dev + t[dev_close+1:]

if mode == "user":
    out = update_user(text)
elif mode == "system":
    out = update_system(text)
else:
    raise SystemExit("error: unknown mode")

path.write_text(out)
PY
cd "$ATHENIX_DIR"
nix fmt **/*.nix
cd "$CUR_REPO_ROOT"
printf "updated %s\n" "$FILE" >&2
printf "  url = %s\n" "$FETCH_URL" >&2
printf "  rev = %s\n" "$CUR_REV" >&2
'')
];
};
}
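The SSH-to-HTTPS URL normalization embedded in `extract_existing_fetch_url` can be exercised on its own. The sketch below reuses the same regexes from the script's Python heredoc; the function name `to_https` and the sample URLs are illustrative, not part of the repository:

```python
import re

def to_https(url: str, use_ssh: bool = False) -> str:
    """Normalize an ssh-style git URL to https, mirroring the regexes
    in the script's embedded Python helper."""
    if use_ssh:
        # --ssh mode keeps the original URL untouched
        return url
    if url.startswith("https://"):
        return url  # already https
    # ssh://git@host/org/repo.git
    m = re.match(r"ssh://(?:.+?)@([^/]+)/(.+)", url)
    if m:
        host, path = m.groups()
        return f"https://{host}/{path}"
    # git@host:org/repo.git (scp-like syntax)
    m = re.match(r"(?:.+?)@([^:]+):(.+)", url)
    if m:
        host, path = m.groups()
        return f"https://{host}/{path}"
    raise ValueError(f"Unrecognized SSH git URL format: {url}")

print(to_https("ssh://git@git.factory.uga.edu/org/repo.git"))
print(to_https("git@git.factory.uga.edu:org/repo.git"))
```

Both sample calls resolve to the same `https://git.factory.uga.edu/org/repo.git` form, which is what ends up in the generated `fetchGit` entry unless `--ssh` was passed.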

View File

@@ -9,10 +9,10 @@ with lib;
 {
   options.athenix.sw.remoteBuild = lib.mkOption {
-    type = types.submodule {
+    type = lib.types.submodule {
       options = {
         hosts = mkOption {
-          type = types.listOf types.str;
+          type = lib.types.listOf lib.types.str;
           default = [ "engr-ugaif@192.168.11.133 x86_64-linux" ];
           description = ''
             List of remote build hosts for system rebuilding.
@@ -31,7 +31,7 @@ with lib;
         };
         enable = mkOption {
-          type = types.bool;
+          type = lib.types.bool;
           default = false;
           description = ''
             Whether to enable remote builds for the 'update-system' command.
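With the types qualified as `lib.types.*`, a consumer would set the option roughly as below. This is a sketch based only on the option declarations visible in the hunks above (the host string is the option's own default), not a verified configuration:

```nix
{
  athenix.sw.remoteBuild = {
    enable = true;
    hosts = [ "engr-ugaif@192.168.11.133 x86_64-linux" ];
  };
}
```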

View File

@@ -13,8 +13,9 @@
 #
 # External User Configuration:
 # Users can specify external configuration modules via the 'external' attribute:
-#   external = builtins.fetchGit { url = "..."; rev = "..."; };
+#   external = { url = "..."; rev = "..."; submodules? = false; };
 #   external = /path/to/local/config;
+#   external = builtins.fetchGit { ... }; # legacy, still supported
 #
 # External repositories should contain:
 # - user.nix (required): Defines athenix.users.<name> options AND home-manager config
@@ -47,9 +48,10 @@
     enable = true; # Default user, enabled everywhere
   };
   hdh20267 = {
-    external = builtins.fetchGit {
+    external = {
       url = "https://git.factory.uga.edu/hdh20267/hdh20267-nix";
       rev = "dbdf65c7bd59e646719f724a3acd2330e0c922ec";
+      # submodules = false; # optional, defaults to false
     };
   };
   sv22900 = {