36 Commits

Author SHA1 Message Date
UGA Innovation Factory
6f7e95b9f9 fix: Fail the CI if formatting fails
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m40s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 15s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 8s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 9s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 20s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 13s
CI / Build and Publish Documentation (push) Successful in 11s
2026-01-30 18:26:26 -05:00
UGA Innovation Factory
7c07727150 feat: USDA-dash now uses encrypted .env files
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m42s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 14s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 8s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 9s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 20s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 13s
CI / Build and Publish Documentation (push) Successful in 10s
2026-01-30 23:19:38 +00:00
UGA Innovation Factory
7e6e8d5e0f chore: Update flake lock
Some checks failed (Format Check waiting to run; all other CI jobs cancelled)
2026-01-30 23:07:40 +00:00
UGA Innovation Factory
c6e0a0aedf chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:55:30 +00:00
UGA Innovation Factory
4b4e6a2873 chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:53:59 +00:00
UGA Innovation Factory
40a9f9f5a6 chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:52:27 +00:00
UGA Innovation Factory
14a61da9ed chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:38:14 +00:00
UGA Innovation Factory
a3c8e0640a chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:26:29 +00:00
UGA Innovation Factory
01fc5518c1 chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:24:31 +00:00
UGA Innovation Factory
a2d4f71a77 chore: Update flake lock
Some checks failed (all CI jobs cancelled)
2026-01-30 22:21:13 +00:00
UGA Innovation Factory
e0cafb7f66 chore: Update usda-docker hash
Some checks failed (all CI jobs cancelled)
2026-01-30 22:18:59 +00:00
UGA Innovation Factory
ffbd7a221d Set default timezone for LXC containers to fix Docker /etc/localtime mounts
Some checks failed (all CI jobs cancelled)
2026-01-30 22:00:15 +00:00
UGA Innovation Factory
d7922247d2 Fix activation script to always regenerate age keys
Some checks failed (all CI jobs cancelled)
2026-01-30 21:51:19 +00:00
UGA Innovation Factory
31c829f502 Add SSH-to-age conversion activation script for reliable secret decryption
Some checks failed (all CI jobs cancelled)
2026-01-30 21:48:57 +00:00
UGA Innovation Factory
e3bae02f58 Re-encrypt usda-vision-env with correct host key
Some checks failed (all CI jobs cancelled)
2026-01-30 21:46:02 +00:00
UGA Innovation Factory
aa6d9d5691 Revert experimental changes, use ragenix defaults
Some checks failed (all CI jobs cancelled)
2026-01-30 21:45:55 +00:00
UGA Innovation Factory
87045a518f Use rage instead of age for SSH key decryption support
Some checks failed (all CI jobs cancelled)
2026-01-30 21:40:04 +00:00
UGA Innovation Factory
dffe817e47 Update usda-dash host key and re-encrypt secret
Some checks failed (all CI jobs cancelled)
2026-01-30 21:29:39 +00:00
UGA Innovation Factory
23da829033 feat: Use age for env secret management
Some checks failed (all CI jobs cancelled)
2026-01-30 20:54:31 +00:00
UGA Innovation Factory
dd19d1488a fix: Convert ssh keys to age keys
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m42s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 14s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 8s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 20s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 13s
CI / Build and Publish Documentation (push) Successful in 11s
2026-01-30 19:41:34 +00:00
UGA Innovation Factory
862ae2c864 chore: Run nix fmt
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m42s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 13s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 8s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 22s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 14s
CI / Build and Publish Documentation (push) Successful in 10s
2026-01-30 19:19:38 +00:00
UGA Innovation Factory
3efba93424 feat: Ragenix secret management per host
Some checks failed (all CI jobs cancelled)
2026-01-30 19:19:20 +00:00
UGA Innovation Factory
2e4602cbf3 refactor: Move macCaseBuilder into athenix.lib
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m44s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 14s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 8s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 8s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 19s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 13s
CI / Build and Publish Documentation (push) Successful in 10s
2026-01-27 22:13:32 +00:00
UGA Innovation Factory
ab3710b5f6 chore: Run nix fmt
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m43s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 14s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 8s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 8s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 20s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 13s
CI / Build and Publish Documentation (push) Successful in 11s
2026-01-27 21:44:23 +00:00
UGA Innovation Factory
863cd1ea95 fix: Remove unused or broken config outputs for nix eval of flake components
Some checks failed (all CI jobs cancelled)
2026-01-27 21:43:58 +00:00
UGA Innovation Factory
d8cee7e79b refactor: Make hw definitions modules with mkIf guards
Some checks failed (all CI jobs cancelled)
2026-01-27 16:30:54 -05:00
UGA Innovation Factory
063336f736 refactor: Fleet and sw behind mkIf guards 2026-01-27 16:11:36 -05:00
UGA Innovation Factory
85653e632f fix: Enable sw by default when imported
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m47s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 11s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 7s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 18s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 11s
CI / Build and Publish Documentation (push) Successful in 8s
2026-01-27 15:36:31 -05:00
Hunter David Halloran
1533382ff2 Merge pull request 'fix: Lazily fetch external modules only if needed' (#32) from external-refactor into main
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m45s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 11s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 8s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 18s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 12s
CI / Build and Publish Documentation (push) Successful in 8s
Reviewed-on: http://git.factory.uga.edu/UGA-Innovation-Factory/athenix/pulls/32
2026-01-27 20:06:09 +00:00
UGA Innovation Factory
540f5feb78 fix: Lazily fetch external modules only if needed 2026-01-27 15:05:52 -05:00
UGA Innovation Factory
1a7bf29448 docs: Update inline code docs for LSP help
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m39s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 8s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 7s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 14s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 8s
CI / Build and Publish Documentation (push) Successful in 5s
2026-01-27 14:48:07 -05:00
UGA Innovation Factory
13fdc3a7a1 feat: Update auto-docs
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m39s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 9s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 7s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 14s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 9s
CI / Build and Publish Documentation (push) Successful in 6s
2026-01-27 14:25:37 -05:00
Hunter David Halloran
01fdfbf913 Merge pull request 'fix: Change CI to ssh git' (#31) from sw-refactor into main
All checks were successful
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m40s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 9s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 8s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 15s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 8s
CI / Build and Publish Documentation (push) Successful in 6s
Reviewed-on: http://git.factory.uga.edu/UGA-Innovation-Factory/athenix/pulls/31
2026-01-27 19:07:29 +00:00
UGA Innovation Factory
9d0683165f fix: Change CI to ssh git 2026-01-27 14:07:03 -05:00
Hunter David Halloran
b1bc354160 Merge pull request 'refactor: Move sw into properly nested modules with unconditional import' (#30) from sw-refactor into main
Some checks failed
CI / Format Check (push) Successful in 2s
CI / Flake Check (push) Successful in 1m39s
CI / Evaluate Key Configurations (nix-builder) (push) Successful in 9s
CI / Evaluate Key Configurations (nix-desktop1) (push) Successful in 7s
CI / Evaluate Key Configurations (nix-laptop1) (push) Successful in 7s
CI / Evaluate Artifacts (installer-iso-nix-laptop1) (push) Successful in 14s
CI / Evaluate Artifacts (lxc-nix-builder) (push) Successful in 8s
CI / Build and Publish Documentation (push) Has been cancelled
Reviewed-on: http://git.factory.uga.edu/UGA-Innovation-Factory/athenix/pulls/30
2026-01-27 19:00:23 +00:00
UGA Innovation Factory
f669845bf7 refactor: Move sw into properly nested modules with unconditional import 2026-01-27 13:59:57 -05:00
64 changed files with 3411 additions and 1177 deletions

View File

@@ -26,18 +26,23 @@ jobs:
format-check:
name: Format Check
runs-on: [self-hosted, nix-builder]
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Check formatting
timeout-minutes: 3
run: |
nix fmt **/*.nix
if ! git diff --quiet; then
set -euo pipefail
echo "Checking code formatting..."
output=$(nix fmt **/*.nix 2>&1)
if [ -n "$output" ]; then
echo "::error::Code is not formatted. Please run 'nix fmt **/*.nix' locally."
git diff
echo "$output"
exit 1
fi
echo "All files are properly formatted"
eval-configs:
name: Evaluate Key Configurations
@@ -79,3 +84,39 @@ jobs:
echo "Evaluating artifact ${{ matrix.artifact }}"
nix eval .#${{ matrix.artifact }}.drvPath \
--show-trace
build-docs:
name: Build and Publish Documentation
runs-on: [self-hosted, nix-builder]
needs: [flake-check]
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Build documentation
run: |
echo "Building Athenix documentation"
nix build .#docs --print-build-logs
- name: Clone wiki repository
run: |
git clone git@factory.uga.edu:UGA-Innovation-Factory/athenix.wiki.git wiki
cd wiki
git config user.name "Athenix CI"
git config user.email "ci@athenix.factory.uga.edu"
- name: Update wiki with documentation
run: |
# Copy documentation to wiki
cp -r result/* wiki/
# Commit and push changes
cd wiki
git add .
if git diff --staged --quiet; then
echo "No documentation changes to commit"
else
git commit -m "Update documentation from commit ${{ github.sha }}"
git push
fi
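The format-check step above runs nix fmt and fails the job as soon as any output is produced. A complementary approach, sketched below on the assumption that the flake's formatter is nixfmt-rfc-style (the attribute names are illustrative, not taken from this repository), is to publish the same check as a flake output so that nix flake check catches unformatted files locally before CI does:

{
  outputs = { self, nixpkgs, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in
    {
      # Formatter used by `nix fmt` (assumption: nixfmt-rfc-style)
      formatter.${system} = pkgs.nixfmt-rfc-style;

      # `nix flake check` fails if any .nix file is not formatted
      checks.${system}.formatting =
        pkgs.runCommand "check-formatting"
          { nativeBuildInputs = [ pkgs.nixfmt-rfc-style ]; }
          ''
            find ${self} -name '*.nix' -print0 | xargs -0 -r nixfmt --check
            touch $out
          '';
    };
}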

.github/workflows/docs.yml (new file, 34 lines)

@@ -0,0 +1,34 @@
name: Build Documentation
on:
push:
branches: [main]
pull_request:
jobs:
build-docs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- name: Build documentation
run: |
nix build .#docs
nix build .#athenix-options
- name: Upload documentation
uses: actions/upload-artifact@v4
with:
name: athenix-docs
path: result/
- name: Deploy to GitHub Pages
if: github.ref == 'refs/heads/main'
uses: peaceiris/actions-gh-pages@v4
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./result
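Both workflows build a docs output from the flake. As a rough illustration of what such an output can look like (a hypothetical sketch, not this repository's parts/docs.nix), module options can be rendered to Markdown with nixpkgs' nixosOptionsDoc helper:

{ pkgs, lib }:
let
  # Evaluate a module declaring the options to document.
  # A stub option stands in for the real athenix.* option tree.
  eval = lib.evalModules {
    modules = [
      {
        options.athenix.example = lib.mkOption {
          type = lib.types.bool;
          default = false;
          description = "Placeholder option used only for this sketch.";
        };
      }
    ];
  };
  optionsDoc = pkgs.nixosOptionsDoc { inherit (eval) options; };
in
pkgs.runCommand "athenix-options-md" { } ''
  mkdir -p $out
  cp ${optionsDoc.optionsCommonMark} $out/options.md
''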

.nixd.json (new file, 18 lines)

@@ -0,0 +1,18 @@
{
"eval": {
"target": {
"installable": ".#nixosConfigurations.nix-desktop1.options"
}
},
"formatting": {
"command": ["nixfmt"]
},
"options": {
"nixos": {
"expr": "(builtins.getFlake \"/home/engr-ugaif/athenix\").nixosConfigurations.nix-desktop1.options"
},
"home-manager": {
"expr": "(builtins.getFlake \"/home/engr-ugaif/athenix\").nixosConfigurations.nix-desktop1.config.home-manager.users.engr-ugaif.options"
}
}
}

flake.lock (generated, 57 lines changed)

@@ -239,6 +239,24 @@
"inputs": {
"systems": "systems_4"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_4": {
"inputs": {
"systems": "systems_5"
},
"locked": {
"lastModified": 1681202837,
"narHash": "sha256-H+Rh19JDwRtpVPAWp64F+rlEtxUWBAQW28eAi3SRSzg=",
@@ -636,6 +654,7 @@
"nixos-wsl": "nixos-wsl",
"nixpkgs": "nixpkgs_2",
"nixpkgs-old-kernel": "nixpkgs-old-kernel",
"usda-vision": "usda-vision",
"vscode-server": "vscode-server"
}
},
@@ -720,6 +739,21 @@
"type": "github"
}
},
"systems_5": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"treefmt-nix": {
"inputs": {
"nixpkgs": [
@@ -742,13 +776,34 @@
"type": "github"
}
},
"vscode-server": {
"usda-vision": {
"inputs": {
"flake-utils": "flake-utils_3",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1769814438,
"narHash": "sha256-DEZrmqpqbrd996W5p1r4GA1C8Jmo31n3N642ccS0deY=",
"ref": "refs/heads/main",
"rev": "78bfcf02612817a2cee1edbf92deeac9bf657613",
"revCount": 126,
"type": "git",
"url": "https://git.factory.uga.edu/MODEL/usda-vision.git"
},
"original": {
"type": "git",
"url": "https://git.factory.uga.edu/MODEL/usda-vision.git"
}
},
"vscode-server": {
"inputs": {
"flake-utils": "flake-utils_4",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1753541826,
"narHash": "sha256-foGgZu8+bCNIGeuDqQ84jNbmKZpd+JvnrL2WlyU4tuU=",

View File

@@ -59,6 +59,12 @@
url = "github:nix-community/NixOS-WSL/main";
inputs.nixpkgs.follows = "nixpkgs";
};
# USDA Vision Dashboard application
usda-vision = {
url = "git+https://git.factory.uga.edu/MODEL/usda-vision.git";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs =
@@ -80,6 +86,7 @@
./parts/nixos-modules.nix
./parts/packages.nix
./parts/templates.nix
./parts/docs.nix
./inventory.nix
./users.nix
];
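With the usda-vision flake pinned as an input (and following the repository's nixpkgs), a host module could consume it roughly as below; the packages...default attribute path is an assumption for illustration, since the input's actual outputs are not shown in this diff.

{ inputs, pkgs, ... }:
{
  environment.systemPackages = [
    # Assumed attribute path; the real output name may differ.
    inputs.usda-vision.packages.${pkgs.stdenv.hostPlatform.system}.default
  ];
}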

View File

@@ -5,44 +5,60 @@
# - Bootloader configuration (systemd-boot with Plymouth)
# - Timezone and locale settings
# - Systemd sleep configuration
#
# Only applies to:
# - Linux systems (not Darwin/macOS)
# - Systems with actual boot hardware (not containers/WSL)
{ lib, ... }:
{
boot = {
loader.systemd-boot.enable = lib.mkDefault true;
loader.efi.canTouchEfiVariables = lib.mkDefault true;
plymouth.enable = lib.mkDefault true;
config,
lib,
pkgs,
...
}:
# Enable "Silent boot"
consoleLogLevel = 3;
initrd.verbose = false;
let
# Check if this is a bootable system (not container, not WSL)
isBootable = !(config.boot.isContainer or false) && (pkgs.stdenv.isLinux);
in
{
config = lib.mkIf isBootable {
boot = {
loader.systemd-boot.enable = lib.mkDefault true;
loader.efi.canTouchEfiVariables = lib.mkDefault true;
plymouth.enable = lib.mkDefault true;
# Hide the OS choice for bootloaders.
# It's still possible to open the bootloader list by pressing any key
# It will just not appear on screen unless a key is pressed
loader.timeout = lib.mkDefault 0;
# Enable "Silent boot"
consoleLogLevel = 3;
initrd.verbose = false;
# Hide the OS choice for bootloaders.
# It's still possible to open the bootloader list by pressing any key
# It will just not appear on screen unless a key is pressed
loader.timeout = lib.mkDefault 0;
};
# Set your time zone.
time.timeZone = "America/New_York";
# Select internationalisation properties.
i18n.defaultLocale = "en_US.UTF-8";
i18n.extraLocaleSettings = {
LC_ADDRESS = "en_US.UTF-8";
LC_IDENTIFICATION = "en_US.UTF-8";
LC_MEASUREMENT = "en_US.UTF-8";
LC_MONETARY = "en_US.UTF-8";
LC_NAME = "en_US.UTF-8";
LC_NUMERIC = "en_US.UTF-8";
LC_PAPER = "en_US.UTF-8";
LC_TELEPHONE = "en_US.UTF-8";
LC_TIME = "en_US.UTF-8";
};
systemd.sleep.extraConfig = ''
SuspendState=freeze
HibernateDelaySec=2h
'';
};
# Set your time zone.
time.timeZone = "America/New_York";
# Select internationalisation properties.
i18n.defaultLocale = "en_US.UTF-8";
i18n.extraLocaleSettings = {
LC_ADDRESS = "en_US.UTF-8";
LC_IDENTIFICATION = "en_US.UTF-8";
LC_MEASUREMENT = "en_US.UTF-8";
LC_MONETARY = "en_US.UTF-8";
LC_NAME = "en_US.UTF-8";
LC_NUMERIC = "en_US.UTF-8";
LC_PAPER = "en_US.UTF-8";
LC_TELEPHONE = "en_US.UTF-8";
LC_TIME = "en_US.UTF-8";
};
systemd.sleep.extraConfig = ''
SuspendState=freeze
HibernateDelaySec=2h
'';
}
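The guard introduced above skips bootloader, locale, and sleep settings on systems without real boot hardware. Stripped of the diff context, the pattern is a short, self-contained module (a minimal sketch using only the standard NixOS module arguments):

{ config, lib, pkgs, ... }:
let
  # Containers have no bootloader of their own to manage.
  isBootable = !(config.boot.isContainer or false) && pkgs.stdenv.isLinux;
in
{
  config = lib.mkIf isBootable {
    boot.loader.systemd-boot.enable = lib.mkDefault true;
    boot.loader.efi.canTouchEfiVariables = lib.mkDefault true;
  };
}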

View File

@@ -7,16 +7,148 @@
{
config,
lib,
inputs,
...
}:
let
# Import all hardware modules so they're available for enabling
hwTypes = import ../hw { inherit inputs; };
hwModules = lib.attrValues hwTypes;
# User account submodule definition
userSubmodule = lib.types.submodule {
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether this user account is enabled on this system.";
};
isNormalUser = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether this is a normal user account (vs system user).";
};
description = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Full name or description of the user (GECOS field).";
example = "John Doe";
};
extraGroups = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Additional groups for the user (wheel, docker, etc.).";
};
hashedPassword = lib.mkOption {
type = lib.types.str;
default = "!";
description = "Hashed password for the user account. Default '!' means locked.";
};
extraPackages = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = [ ];
description = "Additional system packages available to this user.";
};
excludePackages = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = [ ];
description = "System packages to exclude for this user.";
};
homePackages = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = [ ];
description = "Packages to install in the user's home-manager profile.";
};
extraImports = lib.mkOption {
type = lib.types.listOf lib.types.path;
default = [ ];
description = "Additional home-manager modules to import for this user.";
};
external = lib.mkOption {
type = lib.types.nullOr (
lib.types.oneOf [
lib.types.path
(lib.types.submodule {
options = {
url = lib.mkOption {
type = lib.types.str;
description = "Git repository URL to fetch user configuration from.";
};
rev = lib.mkOption {
type = lib.types.str;
description = "Git commit hash, tag, or branch to fetch.";
};
submodules = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to fetch Git submodules.";
};
};
})
]
);
default = null;
description = "External dotfiles repository (user.nix + optional nixos.nix).";
};
opensshKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "SSH public keys for the user (authorized_keys).";
};
shell = lib.mkOption {
type = lib.types.nullOr (
lib.types.enum [
"bash"
"zsh"
"fish"
"tcsh"
]
);
default = "bash";
description = "Default shell for the user.";
};
editor = lib.mkOption {
type = lib.types.nullOr (
lib.types.enum [
"vim"
"neovim"
"emacs"
"nano"
"code"
]
);
default = "neovim";
description = "Default text editor for the user (sets EDITOR).";
};
useZshTheme = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to apply the system Zsh theme (Oh My Posh).";
};
useNvimPlugins = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to apply the system Neovim configuration.";
};
};
};
in
{
imports = [
./fs.nix
./boot.nix
./user-config.nix
./fleet-option.nix
../sw
];
inputs.vscode-server.nixosModules.default
inputs.nixos-wsl.nixosModules.default
]
++ hwModules;
options.athenix.users = lib.mkOption {
type = lib.types.attrsOf userSubmodule;
default = { };
description = "User accounts configuration. Set enable=true for users that should exist on this system.";
};
options.athenix = {
forUser = lib.mkOption {
@@ -24,14 +156,28 @@
default = null;
description = ''
Convenience option to configure a host for a specific user.
Automatically enables the user (sets athenix.users.username.enable = true).
Value should be a username from athenix.users.accounts.
When set, automatically:
- Enables the user account (athenix.users.<username>.enable = true)
- Sets as default WSL user (on WSL systems)
The username must exist in athenix.users (defined in users.nix).
'';
example = "engr-ugaif";
};
host.useHostPrefix = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to prepend the host prefix to the hostname (used in inventory and hosts/default.nix).";
description = ''
Whether to prepend the hardware type prefix to the hostname.
When true:
- "nix-laptop" with device "1" hostname "nix-laptop1"
- "nix-wsl" with device "alice" hostname "nix-wsl-alice"
When false:
- Device name becomes the full hostname (useful for custom names)
'';
};
};
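The athenix.users schema above is populated from users.nix at the flake level; a host then opts in via enable or athenix.forUser. A hypothetical entry (the username, groups, and key are made up for illustration) looks like:

{
  athenix.users.jdoe = {
    description = "Jane Doe";
    extraGroups = [ "wheel" "docker" ];
    shell = "zsh";
    editor = "neovim";
    opensshKeys = [ "ssh-ed25519 AAAAC3Nza... jdoe@example" ];
    # enable stays false here; a host opts in with athenix.forUser = "jdoe";
  };
}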

View File

@@ -20,8 +20,6 @@ let
# Import fleet-option.nix (defines athenix.fleet) and inventory.nix (sets values)
# We use a minimal module here to avoid circular dependencies from common.nix's imports
hostTypes = config.athenix.hwTypes;
# Helper to create a single NixOS system configuration
mkHost =
{
@@ -36,8 +34,20 @@ let
externalModulePath =
if externalModuleThunk != null then
let
# Force evaluation of the thunk (fetchGit, fetchTarball, etc.)
fetchedPath = externalModuleThunk;
# Force evaluation of the thunk
fetchedPath =
if
builtins.isAttrs externalModuleThunk
&& externalModuleThunk ? _type
&& externalModuleThunk._type == "lazy-fetchGit"
then
# New format: lazy fetchGit - only execute when needed
(builtins.fetchGit {
inherit (externalModuleThunk) url rev submodules;
}).outPath
else
# Legacy: pre-fetched derivation or path
externalModuleThunk;
# Extract outPath from fetchGit/fetchTarball results
extractedPath =
if builtins.isAttrs fetchedPath && fetchedPath ? outPath then fetchedPath.outPath else fetchedPath;
@@ -61,10 +71,19 @@ let
name: user:
if (user ? external && user.external != null) then
let
# Resolve external path (lazy fetchGit if needed)
externalPath =
if builtins.isAttrs user.external && user.external ? outPath then
if builtins.isAttrs user.external && user.external ? url && user.external ? rev then
# New format: lazy fetchGit
(builtins.fetchGit {
inherit (user.external) url rev;
submodules = user.external.submodules or false;
}).outPath
else if builtins.isAttrs user.external && user.external ? outPath then
# Legacy: pre-fetched
user.external.outPath
else
# Direct path
user.external;
nixosModulePath = externalPath + "/nixos.nix";
in
@@ -102,11 +121,6 @@ let
}
) userNixosModulePaths;
# Get the host type module from the hostTypes attribute set
typeModule =
hostTypes.${hostType}
or (throw "Host type '${hostType}' not found. Available types: ${lib.concatStringsSep ", " (lib.attrNames hostTypes)}");
# External module from fetchGit/fetchurl
externalPathModule =
if externalModulePath != null then import externalModulePath { inherit inputs; } else { };
@@ -134,18 +148,27 @@ let
];
};
# Hardware-specific external modules
hwSpecificModules =
lib.optional (hostType == "nix-lxc")
"${inputs.nixpkgs.legacyPackages.${system}.path}/nixos/modules/virtualisation/proxmox-lxc.nix";
allModules =
userNixosModules
++ [
./common.nix
typeModule
overrideModule
{ networking.hostName = hostName; }
# Set athenix.host.name for secrets and other modules to use
{ athenix.host.name = hostName; }
{
# Inject user definitions from flake-parts level
config.athenix.users = lib.mapAttrs (_: user: lib.mapAttrs (_: lib.mkDefault) user) users;
}
# Enable the appropriate hardware module based on hostType
{ config.athenix.hw.${hostType}.enable = lib.mkDefault true; }
]
++ hwSpecificModules
++ lib.optional (externalModulePath != null) externalPathModule;
in
{
@@ -205,8 +228,24 @@ let
# Check if deviceConfig has an 'external' field for lazy evaluation
hasExternalField = builtins.isAttrs deviceConfig && deviceConfig ? external;
# Extract external module thunk if present (don't evaluate yet!)
externalModuleThunk = if hasExternalField then deviceConfig.external else null;
# Extract external module spec (don't evaluate fetchGit yet!)
externalModuleThunk =
if hasExternalField then
let
ext = deviceConfig.external;
in
# New format: { url, rev, submodules? } - create lazy fetchGit thunk
if builtins.isAttrs ext && ext ? url && ext ? rev then
{
_type = "lazy-fetchGit";
inherit (ext) url rev;
submodules = ext.submodules or false;
}
# Legacy: pre-fetched or path
else
ext
else
null;
# Remove 'external' from config to avoid conflicts
cleanDeviceConfig =
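The thunk format above defers builtins.fetchGit until a host configuration is actually evaluated. A minimal standalone sketch of the same resolve step (URL and rev are placeholders):

let
  # Plain-data spec; nothing is fetched when this attrset is constructed.
  spec = {
    _type = "lazy-fetchGit";
    url = "https://git.example.com/org/dotfiles.git";   # placeholder
    rev = "0000000000000000000000000000000000000000";   # placeholder
    submodules = false;
  };

  # Fetch only when the caller asks for the path.
  resolve =
    thunk:
    if builtins.isAttrs thunk && (thunk._type or null) == "lazy-fetchGit" then
      (builtins.fetchGit { inherit (thunk) url rev submodules; }).outPath
    else
      thunk;
in
resolve spec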

View File

@@ -1,8 +1,8 @@
# ============================================================================
# Fleet Option Definition
# ============================================================================
# This module only defines the athenix.fleet option without any dependencies.
# Used by fleet/default.nix to evaluate inventory data without circular dependencies.
# This module defines the athenix.fleet and athenix.hwTypes options.
# Self-contained fleet management without dependencies on user configuration.
{ inputs, lib, ... }:
let
fleetDefinition = lib.mkOption {
@@ -59,66 +59,118 @@ let
)
);
};
# Submodule defining the structure of a user account
# Forward declaration for user options (full definition in user-config.nix)
# This allows users.nix to be evaluated at flake level
userSubmodule = lib.types.submodule {
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether this user account is enabled on this system.";
};
isNormalUser = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether this is a normal user account (vs system user).";
};
description = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "Full name or description of the user (GECOS field).";
example = "John Doe";
};
extraGroups = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Additional groups for the user (wheel, docker, etc.).";
example = [
"wheel"
"networkmanager"
"docker"
];
};
hashedPassword = lib.mkOption {
type = lib.types.str;
default = "!";
description = ''
Hashed password for the user account.
Generate with: mkpasswd -m sha-512
Default "!" means account is locked (SSH key only).
'';
};
extraPackages = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = [ ];
description = "Additional system packages available to this user.";
example = lib.literalExpression "[ pkgs.vim pkgs.git ]";
};
excludePackages = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = [ ];
description = "System packages to exclude for this user.";
};
homePackages = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = [ ];
description = "Packages to install in the user's home-manager profile.";
example = lib.literalExpression "[ pkgs.firefox pkgs.vscode ]";
};
extraImports = lib.mkOption {
type = lib.types.listOf lib.types.path;
default = [ ];
description = "Additional home-manager modules to import for this user.";
};
external = lib.mkOption {
type = lib.types.nullOr (
lib.types.oneOf [
lib.types.path
lib.types.package
lib.types.attrs
(lib.types.submodule {
options = {
url = lib.mkOption {
type = lib.types.str;
description = "Git repository URL to fetch user configuration from.";
example = "https://github.com/username/dotfiles";
};
rev = lib.mkOption {
type = lib.types.str;
description = "Git commit hash, tag, or branch to fetch.";
example = "abc123def456...";
};
submodules = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to fetch Git submodules.";
};
};
})
]
);
default = null;
description = ''
External user configuration module. Can be:
- A path to a local module directory
- A fetchGit/fetchTarball result pointing to a repository
External user configuration module from Git or local path.
The external module can contain:
- user.nix (optional): Sets athenix.users.<name> options AND home-manager config
- nixos.nix (optional): System-level NixOS configuration
Can be either:
- A local path: /path/to/config
- A Git repository: { url = "..."; rev = "..."; submodules? = false; }
Example: builtins.fetchGit { url = "https://github.com/user/dotfiles"; rev = "..."; }
The Git repository is only fetched when the user is actually enabled.
Should contain user.nix (user options + home-manager config)
and optionally nixos.nix (system-level config).
'';
example = lib.literalExpression ''
{
url = "https://github.com/username/dotfiles";
rev = "abc123def456789abcdef0123456789abcdef012";
submodules = false;
}'';
};
opensshKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "List of SSH public keys for the user.";
description = "SSH public keys for the user (authorized_keys).";
example = [ "ssh-ed25519 AAAAC3Nza... user@host" ];
};
shell = lib.mkOption {
type = lib.types.nullOr (
@@ -130,7 +182,7 @@ let
]
);
default = "bash";
description = "The shell for this user.";
description = "Default shell for the user.";
};
editor = lib.mkOption {
type = lib.types.nullOr (
@@ -143,23 +195,18 @@ let
]
);
default = "neovim";
description = "The default editor for this user.";
description = "Default text editor for the user (sets EDITOR).";
};
useZshTheme = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to apply the system Zsh theme.";
description = "Whether to apply the system Zsh theme (Oh My Posh).";
};
useNvimPlugins = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to apply the system Neovim configuration.";
};
enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether this user account is enabled on this system.";
};
};
};
in
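Per the external option's type above, a user's dotfiles can be referenced either as a local path or as a Git spec that is fetched lazily. Two hypothetical forms (usernames, paths, URL, and rev are illustrative):

{
  # Local checkout, used directly:
  athenix.users.alice.external = /home/alice/dotfiles;

  # Remote repository, fetched only when alice is enabled on a host:
  athenix.users.bob.external = {
    url = "https://github.com/bob/dotfiles";
    rev = "abc123def4567890abc123def4567890abc123de";
    submodules = false;
  };
}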

View File

@@ -4,87 +4,129 @@
# This module defines:
# - Disko partition layout (EFI, swap, root)
# - Filesystem options (device, swap size)
#
# Only applies to systems with physical disk management needs
# (not containers, not WSL, not systems without a configured device)
{ config, lib, ... }:
let
cfg = config.athenix.host.filesystem;
# Only enable disk config if device is set and disko is enabled
hasDiskConfig = cfg.device != null && config.disko.enableConfig;
in
{
options.athenix = {
host.filesystem = {
device = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "The main disk device to use for installation.";
host = {
name = lib.mkOption {
type = lib.types.str;
description = ''
Fleet-assigned hostname for this system.
Used for secrets discovery and other host-specific configurations.
'';
};
useSwap = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to create and use a swap partition.";
};
swapSize = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = "The size of the swap partition.";
filesystem = {
device = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = ''
The main disk device to use for automated partitioning and installation.
When set, enables disko for declarative disk management with:
- 1GB EFI boot partition
- Optional swap partition (see swapSize)
- Root partition using remaining space
Leave null for systems that don't need disk partitioning (containers, WSL).
'';
example = "/dev/nvme0n1";
};
useSwap = lib.mkOption {
type = lib.types.bool;
default = true;
description = ''
Whether to create and use a swap partition.
Disable for systems with ample RAM or SSDs where swap is undesirable.
'';
};
swapSize = lib.mkOption {
type = lib.types.nullOr lib.types.str;
default = null;
description = ''
Size of the swap partition (e.g., "16G", "32G").
Recommended sizes:
- 8-16GB for desktops with 16GB+ RAM
- 32GB for laptops (enables hibernation)
- Match RAM size for systems <8GB RAM
'';
example = "32G";
};
};
};
};
config = {
# ========== Disk Partitioning (Disko) ==========
disko.enableConfig = lib.mkDefault (config.athenix.host.filesystem.device != null);
config = lib.mkMerge [
{
# ========== Disk Partitioning (Disko) ==========
disko.enableConfig = lib.mkDefault (cfg.device != null);
}
disko.devices = {
disk.main = {
type = "disk";
device = config.athenix.host.filesystem.device;
content = {
type = "gpt";
partitions = {
# EFI System Partition
ESP = {
name = "ESP";
label = "BOOT";
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
extraArgs = [
"-n"
"BOOT"
];
(lib.mkIf hasDiskConfig {
disko.devices = {
disk.main = {
type = "disk";
device = cfg.device;
content = {
type = "gpt";
partitions = {
# EFI System Partition
ESP = {
name = "ESP";
label = "BOOT";
size = "1G";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
mountOptions = [ "umask=0077" ];
extraArgs = [
"-n"
"BOOT"
];
};
};
};
# Swap Partition (size configurable per host)
swap = lib.mkIf config.athenix.host.filesystem.useSwap {
name = "swap";
label = "swap";
size = config.athenix.host.filesystem.swapSize;
content = {
type = "swap";
# Swap Partition (size configurable per host)
swap = lib.mkIf cfg.useSwap {
name = "swap";
label = "swap";
size = cfg.swapSize;
content = {
type = "swap";
};
};
};
# Root Partition (takes remaining space)
root = {
name = "root";
label = "root";
size = "100%";
content = {
type = "filesystem";
format = "ext4";
mountpoint = "/";
extraArgs = [
"-L"
"ROOT"
];
# Root Partition (takes remaining space)
root = {
name = "root";
label = "root";
size = "100%";
content = {
type = "filesystem";
format = "ext4";
mountpoint = "/";
extraArgs = [
"-L"
"ROOT"
];
};
};
};
};
};
};
};
};
})
];
}
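With these options, a host only gets a disko layout when a device is configured; containers and WSL leave device at null. A hypothetical per-host override (the device name is assumed):

{
  athenix.host.filesystem = {
    device = "/dev/sda";   # assumed device; hosts shown elsewhere in this diff use NVMe paths
    useSwap = true;
    swapSize = "16G";
  };
}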

View File

@@ -9,18 +9,25 @@
# ============================================================================
# User Configuration Module
# ============================================================================
# This module defines the schema for user accounts and handles their creation.
# It bridges the gap between the data in 'users.nix' and the actual NixOS
# and Home Manager configuration.
# This module implements user account creation and home-manager setup.
# Options are defined in fleet-option.nix for early availability.
let
# Helper: Resolve external module path from fetchGit/fetchTarball/path
# Helper: Resolve external module path (with lazy Git fetching)
resolveExternalPath =
external:
if external == null then
null
# New format: { url, rev, submodules? } - only fetch when needed
else if builtins.isAttrs external && external ? url && external ? rev then
(builtins.fetchGit {
inherit (external) url rev;
submodules = external.submodules or false;
}).outPath
# Legacy: pre-fetched derivation/package
else if builtins.isAttrs external && external ? outPath then
external.outPath
# Direct path
else
external;

View File

@@ -10,41 +10,64 @@
modulesPath,
...
}:
with lib;
let
cfg = config.athenix.hw.nix-desktop;
in
{
imports = [
(modulesPath + "/installer/scan/not-detected.nix")
];
# ========== Boot Configuration ==========
options.athenix.hw.nix-desktop = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable desktop workstation hardware configuration.";
};
};
};
default = { };
description = "Desktop workstation hardware type configuration.";
};
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"nvme" # NVMe SSD support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
];
config = mkIf cfg.enable {
# ========== Filesystem Configuration ==========
athenix.host.filesystem.swapSize = lib.mkDefault "16G";
athenix.host.filesystem.device = lib.mkDefault "/dev/nvme0n1";
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
# ========== Boot Configuration ==========
# ========== Hardware Configuration ==========
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"nvme" # NVMe SSD support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
];
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "desktop";
# ========== Filesystem Configuration ==========
athenix.host.filesystem.swapSize = lib.mkDefault "16G";
athenix.host.filesystem.device = lib.mkDefault "/dev/nvme0n1";
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
# ========== Hardware Configuration ==========
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.desktop.enable = lib.mkDefault true;
};
}
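Each hardware module now follows the same shape: declare athenix.hw.<type>.enable, then wrap its settings in lib.mkIf. A compressed sketch of that pattern, using lib.mkEnableOption and a made-up type name rather than the explicit submodule spelled out above:

{ config, lib, ... }:
let
  cfg = config.athenix.hw.example-hw;   # "example-hw" is a hypothetical type name
in
{
  options.athenix.hw.example-hw.enable = lib.mkEnableOption "example hardware profile";

  config = lib.mkIf cfg.enable {
    nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
  };
}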

View File

@@ -11,56 +11,78 @@
modulesPath,
...
}:
with lib;
let
cfg = config.athenix.hw.nix-ephemeral;
in
{
imports = [
(modulesPath + "/installer/scan/not-detected.nix")
];
# ========== Boot Configuration ==========
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"nvme" # NVMe support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
];
# ========== Ephemeral Configuration ==========
# No persistent storage - everything runs from RAM
athenix.host.filesystem.swapSize = lib.mkForce "0G";
athenix.host.filesystem.device = lib.mkForce "/dev/null"; # Dummy device
athenix.host.buildMethods = lib.mkDefault [
"iso" # Live ISO image
"ipxe" # Network boot
];
# Disable disk management for RAM-only systems
disko.enableConfig = lib.mkForce false;
# Define tmpfs root filesystem
fileSystems."/" = {
device = "none";
fsType = "tmpfs";
options = [
"defaults"
"size=50%"
"mode=755"
];
options.athenix.hw.nix-ephemeral = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable ephemeral/diskless system hardware configuration.";
};
};
};
default = { };
description = "Ephemeral hardware type configuration.";
};
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
config = mkIf cfg.enable {
# ========== Boot Configuration ==========
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"nvme" # NVMe support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
];
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "stateless-kiosk";
# ========== Ephemeral Configuration ==========
# No persistent storage - everything runs from RAM
athenix.host.filesystem.swapSize = lib.mkForce "0G";
athenix.host.filesystem.device = lib.mkForce "/dev/null"; # Dummy device
athenix.host.buildMethods = lib.mkDefault [
"iso" # Live ISO image
"ipxe" # Network boot
];
# Disable disk management for RAM-only systems
disko.enableConfig = lib.mkForce false;
# Define tmpfs root filesystem
fileSystems."/" = {
device = "none";
fsType = "tmpfs";
options = [
"defaults"
"size=50%"
"mode=755"
];
};
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
athenix.sw.enable = lib.mkDefault true;
athenix.sw.stateless-kiosk.enable = lib.mkDefault true;
};
}

View File

@@ -10,54 +10,76 @@
modulesPath,
...
}:
with lib;
let
cfg = config.athenix.hw.nix-laptop;
in
{
imports = [
(modulesPath + "/installer/scan/not-detected.nix")
];
# ========== Boot Configuration ==========
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"thunderbolt" # Thunderbolt support
"nvme" # NVMe SSD support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
"i915.enable_psr=0" # Disable Panel Self Refresh (stability)
"i915.enable_dc=0" # Disable display power saving
"i915.enable_fbc=0" # Disable framebuffer compression
];
# ========== Hardware Configuration ==========
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Filesystem Configuration ==========
athenix.host.filesystem.device = lib.mkDefault "/dev/nvme0n1";
athenix.host.filesystem.swapSize = lib.mkDefault "34G"; # Larger swap for hibernation
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
# ========== Power Management ==========
services.upower.enable = lib.mkDefault true;
services.logind.settings = {
Login = {
HandleLidSwitch = "suspend";
HandleLidSwitchExternalPower = "suspend";
HandleLidSwitchDocked = "ignore";
options.athenix.hw.nix-laptop = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable laptop hardware configuration with power management.";
};
};
};
default = { };
description = "Laptop hardware type configuration.";
};
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "desktop";
config = mkIf cfg.enable {
# ========== Boot Configuration ==========
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"thunderbolt" # Thunderbolt support
"nvme" # NVMe SSD support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
"i915.enable_psr=0" # Disable Panel Self Refresh (stability)
"i915.enable_dc=0" # Disable display power saving
"i915.enable_fbc=0" # Disable framebuffer compression
];
# ========== Hardware Configuration ==========
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Filesystem Configuration ==========
athenix.host.filesystem.device = lib.mkDefault "/dev/nvme0n1";
athenix.host.filesystem.swapSize = lib.mkDefault "34G"; # Larger swap for hibernation
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
# ========== Power Management ==========
services.upower.enable = lib.mkDefault true;
services.logind.settings = {
Login = {
HandleLidSwitch = "suspend";
HandleLidSwitchExternalPower = "suspend";
HandleLidSwitchDocked = "ignore";
};
};
athenix.sw.enable = lib.mkDefault true;
athenix.sw.desktop.enable = lib.mkDefault true;
};
}

View File

@@ -5,56 +5,75 @@
# Disables boot/disk management and enables remote development support.
{
config,
lib,
modulesPath,
inputs,
...
}:
with lib;
let
cfg = config.athenix.hw.nix-lxc;
in
{
imports = [
inputs.vscode-server.nixosModules.default
"${modulesPath}/virtualisation/proxmox-lxc.nix"
];
options.athenix.hw.nix-lxc = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable Proxmox LXC container hardware configuration.";
};
};
};
default = { };
description = "Proxmox LXC hardware type configuration.";
};
# ========== Nix Configuration ==========
nix.settings.trusted-users = [
"root"
"engr-ugaif"
];
nix.settings.experimental-features = [
"nix-command"
"flakes"
];
config = mkIf cfg.enable {
# ========== Nix Configuration ==========
nix.settings.trusted-users = [
"root"
"engr-ugaif"
];
nix.settings.experimental-features = [
"nix-command"
"flakes"
];
# ========== Container-Specific Configuration ==========
boot.isContainer = true;
boot.loader.systemd-boot.enable = lib.mkForce false; # No bootloader in container
disko.enableConfig = lib.mkForce false; # No disk management in container
console.enable = true;
# ========== Container-Specific Configuration ==========
boot.isContainer = true;
boot.loader.systemd-boot.enable = lib.mkForce false; # No bootloader in container
disko.enableConfig = lib.mkForce false; # No disk management in container
console.enable = true;
# Allow getty to work in containers
systemd.services."getty@".unitConfig.ConditionPathExists = [
""
"/dev/%I"
];
# Set timezone to fix /etc/localtime for Docker containers
time.timeZone = lib.mkDefault "America/New_York";
# Suppress unnecessary systemd units for containers
systemd.suppressedSystemUnits = [
"dev-mqueue.mount"
"sys-kernel-debug.mount"
"sys-fs-fuse-connections.mount"
];
# Allow getty to work in containers
systemd.services."getty@".unitConfig.ConditionPathExists = [
""
"/dev/%I"
];
# ========== Remote Development ==========
services.vscode-server.enable = true;
# Suppress unnecessary systemd units for containers
systemd.suppressedSystemUnits = [
"dev-mqueue.mount"
"sys-kernel-debug.mount"
"sys-fs-fuse-connections.mount"
];
# ========== System Configuration ==========
system.stateVersion = "25.11";
athenix.host.buildMethods = lib.mkDefault [
"lxc" # LXC container tarball
"proxmox" # Proxmox VMA archive
];
# ========== Remote Development ==========
services.vscode-server.enable = true;
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "headless";
# ========== System Configuration ==========
system.stateVersion = "25.11";
athenix.host.buildMethods = lib.mkDefault [
"lxc" # LXC container tarball
"proxmox" # Proxmox VMA archive
];
athenix.sw.enable = lib.mkDefault true;
athenix.sw.headless.enable = lib.mkDefault true;
};
}

View File

@@ -12,7 +12,11 @@
inputs,
...
}:
with lib;
let
cfg = config.athenix.hw.nix-surface;
# Use older kernel version for better Surface Go compatibility
refSystem = inputs.nixpkgs-old-kernel.lib.nixosSystem {
system = pkgs.stdenv.hostPlatform.system;
@@ -26,44 +30,60 @@ in
inputs.nixos-hardware.nixosModules.microsoft-surface-go
];
# ========== Boot Configuration ==========
options.athenix.hw.nix-surface = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable Microsoft Surface tablet hardware configuration.";
};
};
};
default = { };
description = "Microsoft Surface hardware type configuration.";
};
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"nvme" # NVMe support (though Surface uses eMMC)
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
"intel_ipu3_imgu" # Intel camera image processing
"intel_ipu3_isys" # Intel camera sensor interface
"fbcon=map:1" # Framebuffer console mapping
"i915.enable_psr=0" # Disable Panel Self Refresh (breaks resume)
"i915.enable_dc=0" # Disable display power saving
];
config = mkIf cfg.enable {
# ========== Boot Configuration ==========
# Use older kernel for better Surface hardware support
boot.kernelPackages = lib.mkForce refKernelPackages;
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"nvme" # NVMe support (though Surface uses eMMC)
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
"intel_ipu3_imgu" # Intel camera image processing
"intel_ipu3_isys" # Intel camera sensor interface
"fbcon=map:1" # Framebuffer console mapping
"i915.enable_psr=0" # Disable Panel Self Refresh (breaks resume)
"i915.enable_dc=0" # Disable display power saving
];
# ========== Filesystem Configuration ==========
athenix.host.filesystem.swapSize = lib.mkDefault "8G";
athenix.host.filesystem.device = lib.mkDefault "/dev/mmcblk0"; # eMMC storage
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
# Use older kernel for better Surface hardware support
boot.kernelPackages = lib.mkForce refKernelPackages;
# ========== Hardware Configuration ==========
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Filesystem Configuration ==========
athenix.host.filesystem.swapSize = lib.mkDefault "8G";
athenix.host.filesystem.device = lib.mkDefault "/dev/mmcblk0"; # eMMC storage
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "tablet-kiosk"; # Touch-optimized kiosk mode
# ========== Hardware Configuration ==========
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.tablet-kiosk.enable = lib.mkDefault true; # Touch-optimized kiosk mode
};
}

View File

@@ -7,23 +7,43 @@
{
lib,
config,
inputs,
...
}:
{
imports = [
inputs.nixos-wsl.nixosModules.default
inputs.vscode-server.nixosModules.default
];
# ========== Options ==========
with lib;
let
cfg = config.athenix.hw.nix-wsl;
in
{
options.athenix.hw.nix-wsl = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable Windows Subsystem for Linux hardware configuration.";
};
};
};
default = { };
description = "WSL hardware type configuration.";
};
# WSL user option (at module level, not inside config)
options.athenix.host.wsl.user = lib.mkOption {
type = lib.types.str;
default = "engr-ugaif";
description = "The default user to log in as in WSL.";
description = ''
The default user to automatically log in as when starting WSL.
This user must be enabled via athenix.users.<username>.enable = true.
Tip: Use athenix.forUser = "username" as a shortcut to set both.
'';
example = "alice";
};
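A minimal sketch of the two configuration paths the description above points at for picking the WSL login user (the username is a placeholder):

```nix
# Shortcut: enables the user and sets the WSL login user in one option
{ athenix.forUser = "alice"; }

# Explicit equivalent, per the option description above
{
  athenix.users.alice.enable = true;
  athenix.host.wsl.user = "alice";
}
```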
config = {
config = mkIf cfg.enable {
# ========== WSL Configuration ==========
wsl.enable = true;
# Use forUser if set, otherwise fall back to wsl.user option
@@ -32,7 +52,7 @@
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "headless";
athenix.sw.headless.enable = lib.mkDefault true;
# ========== Remote Development ==========
services.vscode-server.enable = true;
@@ -49,5 +69,8 @@
# Provide dummy values for required options from boot.nix
athenix.host.filesystem.device = "/dev/null";
athenix.host.filesystem.swapSize = "0G";
# WSL doesn't use installer ISOs
athenix.host.buildMethods = lib.mkDefault [ ];
};
}

View File

@@ -10,40 +10,62 @@
modulesPath,
...
}:
with lib;
let
cfg = config.athenix.hw.nix-zima;
in
{
imports = [
(modulesPath + "/installer/scan/not-detected.nix")
];
# ========== Boot Configuration ==========
options.athenix.hw.nix-zima = mkOption {
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable Zima-specific hardware configuration.";
};
};
};
default = { };
description = "Zima hardware type configuration.";
};
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
];
config = mkIf cfg.enable {
# ========== Boot Configuration ==========
# ========== Filesystem Configuration ==========
athenix.host.filesystem.useSwap = lib.mkDefault false;
athenix.host.filesystem.device = lib.mkDefault "/dev/mmcblk0";
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
boot.initrd.availableKernelModules = [
"xhci_pci" # USB 3.0 support
"usb_storage" # USB storage devices
"sd_mod" # SD card support
"sdhci_pci" # SD card host controller
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ "kvm-intel" ]; # Intel virtualization support
boot.extraModulePackages = [ ];
boot.kernelParams = [
"quiet" # Minimal boot messages
"splash" # Show Plymouth boot splash
"boot.shell_on_fail" # Emergency shell on boot failure
"udev.log_priority=3" # Reduce udev logging
"rd.systemd.show_status=auto" # Show systemd status during boot
];
# ========== Hardware Configuration ==========
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Filesystem Configuration ==========
athenix.host.filesystem.useSwap = lib.mkDefault false;
athenix.host.filesystem.device = lib.mkDefault "/dev/mmcblk0";
athenix.host.buildMethods = lib.mkDefault [ "installer-iso" ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.type = lib.mkDefault "desktop";
# ========== Hardware Configuration ==========
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
# ========== Software Profile ==========
athenix.sw.enable = lib.mkDefault true;
athenix.sw.desktop.enable = lib.mkDefault true;
};
}

View File

@@ -20,7 +20,7 @@ let
in
{
# Software configuration module - main module with all athenix.sw options
# Use athenix.sw.type to select profile: "desktop", "tablet-kiosk", "headless", "stateless-kiosk"
# Use athenix.sw.<type>.enable to enable software profiles: desktop, tablet-kiosk, headless, stateless-kiosk, builders
hw = hostTypes;
sw =
{
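The revised comment above switches profile selection from a single `athenix.sw.type` value to per-type enable flags; a minimal, hypothetical host snippet under the new scheme (profile names are the ones listed in the comment):

```nix
{
  athenix.sw.enable = true;           # base workstation configuration
  athenix.sw.desktop.enable = true;   # replaces athenix.sw.type = "desktop"
  athenix.sw.builders.enable = true;  # profiles can now be combined independently
}
```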

View File

@@ -45,7 +45,7 @@
# External modules (instead of config):
# Device values can be a config attrset with an optional 'external' field:
# devices."hostname" = {
# external = builtins.fetchGit { ... }; # Lazy: only fetched when building this host
# external = { url = "..."; rev = "..."; submodules? = false; }; # Lazy: only fetched when building this host
# # ... additional config options
# };
# The external module will be imported and evaluated only when this specific host is built.
@@ -65,7 +65,7 @@
# devices."alice".athenix.forUser = "alice123"; # Sets up for user alice123
# };
# "external" = {
# devices."remote".external = builtins.fetchGit { # External module via Git (lazy)
# devices."remote".external = { url = "..."; rev = "..."; }; # External module via Git (lazy)
# url = "https://github.com/example/config";
# rev = "e1ccd7cc3e709afe4f50b0627e1c4bde49165014";
# };
@@ -92,10 +92,10 @@
nix-surface = {
defaultCount = 3;
devices = {
"1".athenix.sw.kioskUrl = "https://google.com";
"1".athenix.sw.tablet-kiosk.kioskUrl = "https://google.com";
};
overrides = {
athenix.sw.kioskUrl = "https://yahoo.com";
athenix.sw.tablet-kiosk.kioskUrl = "https://yahoo.com";
};
};
@@ -106,31 +106,31 @@
"nix-builder" = {
# Gitea Actions self-hosted runner configuration
athenix.sw = {
type = [
"headless"
"builders"
];
builders.giteaRunner = {
headless.enable = true;
builders = {
enable = true;
url = "https://git.factory.uga.edu";
# Token file must be created manually at this path with a Gitea runner token
# Generate in repository settings: Settings > Actions > Runners > Create new Runner
# echo "TOKEN=YOUR_TOKEN_HERE" | sudo tee /var/lib/gitea-runner-token > /dev/null
tokenFile = "/var/lib/gitea-runner-token";
# Labels to identify this runner in workflows
extraLabels = [
"self-hosted"
"nix-builder"
];
# Runner service name
name = "athenix";
giteaRunner = {
enable = true;
url = "https://git.factory.uga.edu";
# Token file must be created manually at this path with a Gitea runner token
# Generate in repository settings: Settings > Actions > Runners > Create new Runner
# echo "TOKEN=YOUR_TOKEN_HERE" | sudo tee /var/lib/gitea-runner-token > /dev/null
tokenFile = "/var/lib/gitea-runner-token";
# Labels to identify this runner in workflows
extraLabels = [
"self-hosted"
"nix-builder"
];
# Runner service name
name = "athenix";
};
};
};
};
"usda-dash".external = builtins.fetchGit {
"usda-dash".external = {
url = "https://git.factory.uga.edu/MODEL/usda-dash-config.git";
rev = "dab32f5884895cead0fae28cb7d88d17951d0c12";
submodules = true;
rev = "ce2700b0196e106f7c013bbcee851a5f96b146a3";
submodules = false;
};
};
overrides = {

View File

@@ -1,4 +1,8 @@
{ ... }:
{
lib,
...
}:
{
mkFleet = import ./mkFleet.nix;
macCaseBuilder = import ./macCaseBuilder.nix { inherit lib; };
}

33
lib/macCaseBuilder.nix Normal file
View File

@@ -0,0 +1,33 @@
{ lib }:
let
# Default MAC address to station number mapping
defaultHostmap = {
"00:e0:4c:46:0b:32" = "1";
"00:e0:4c:46:07:26" = "2";
"00:e0:4c:46:05:94" = "3";
"00:e0:4c:46:07:11" = "4";
"00:e0:4c:46:08:02" = "5";
"00:e0:4c:46:08:5c" = "6";
};
# macCaseBuilder: builds a shell case statement from a hostmap
# Parameters:
# varName: the shell variable to assign
# prefix: optional string to prepend to the value (default: "")
# hostmap: optional attribute set to use (default: built-in hostmap)
#
# Example:
# macCaseBuilder { varName = "STATION"; prefix = "nix-"; }
# # Generates case statements like: 00:e0:4c:46:0b:32) STATION=nix-1 ;;
builder =
{
varName,
prefix ? "",
hostmap ? defaultHostmap,
}:
lib.concatStringsSep "\n" (
lib.mapAttrsToList (mac: val: " ${mac}) ${varName}=${prefix}${val} ;;") hostmap
);
in
# Export the builder function with hostmap as an accessible attribute
lib.setFunctionArgs builder { } // { hostmap = defaultHostmap; }
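A sketch of how the exported builder could be consumed from a NixOS module; the import path, network interface, and activation-script name are illustrative assumptions, not part of this change:

```nix
{ lib, ... }:
let
  # Assumption: importing the file directly; the flake also exposes it as flake.lib.macCaseBuilder
  macCaseBuilder = import ./lib/macCaseBuilder.nix { inherit lib; };
in
{
  # Hypothetical activation script that derives a station name from the machine's MAC address
  system.activationScripts.setStation.text = ''
    MAC=$(cat /sys/class/net/eth0/address)   # interface name is an assumption
    case "$MAC" in
    ${macCaseBuilder { varName = "STATION"; prefix = "nix-"; }}
      *) STATION=unknown ;;
    esac
    echo "$STATION" > /run/station
  '';
}
```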

194
parts/docs.nix Normal file
View File

@@ -0,0 +1,194 @@
# Documentation generation
{
inputs,
self,
lib,
...
}:
let
pkgs = inputs.nixpkgs.legacyPackages.x86_64-linux;
# Extract options from a sample configuration
getAthenixOptions =
configName:
let
nixosConfig = self.nixosConfigurations.${configName};
evaledOptions = nixosConfig.options;
# Filter to just athenix namespace
athenixOptions = evaledOptions.athenix or { };
in
athenixOptions;
# Generate wiki home page
wikiHome = pkgs.writeText "Home.md" ''
# Athenix - NixOS Fleet Management
Athenix is a NixOS configuration system for managing the UGA Innovation Factory's fleet of devices using Nix flakes and a custom configuration framework.
## Quick Start
- [Configuration Options](Configuration-Options) - All available `athenix.*` options
- [User Guide](User-Configuration) - Setting up user accounts and dotfiles
- [Building](Building) - Creating installers and system images
- [Development](Development) - Contributing to Athenix
## Features
- **Inventory-based fleet management** - Define entire device fleets in a single file
- **Multiple hardware types** - Desktops, laptops, Surface tablets, LXC containers, WSL
- **Flexible software configurations** - Desktop, headless, kiosk, and builder modes
- **External module support** - Load user dotfiles and system configs from Git repos
- **Declarative everything** - Reproducible builds with pinned dependencies
## Software Types
Enable different system configurations:
- **desktop** - Full KDE Plasma 6 desktop environment
- **headless** - Minimal server/container configuration
- **tablet-kiosk** - Touch-optimized kiosk for Surface tablets
- **stateless-kiosk** - Diskless PXE boot kiosk
- **builders** - CI/CD build server with Gitea Actions runner
## Hardware Types
- **nix-desktop** - Desktop workstations
- **nix-laptop** - Laptop computers
- **nix-surface** - Microsoft Surface Pro tablets
- **nix-lxc** - LXC containers (Proxmox)
- **nix-wsl** - Windows Subsystem for Linux
- **nix-ephemeral** - Stateless systems (PXE boot)
## Documentation
Browse the documentation using the sidebar or start with:
- [README](README) - Repository overview and getting started
- [Configuration Options](Configuration-Options) - Complete option reference
- [Inventory Guide](Inventory) - Managing the device fleet
- [External Modules](External-Modules) - Using external configurations
'';
# Generate markdown documentation from options
optionsToMarkdown =
options:
pkgs.writeText "options.md" ''
# Configuration Options
This document describes all available configuration options in the Athenix namespace.
## Quick Reference
- **athenix.sw** - Software configuration (desktop, headless, kiosk modes)
- **athenix.users** - User account management
- **athenix.host** - Host-specific settings (filesystem, build methods)
- **athenix.fleet** - Fleet inventory definitions
- **athenix.forUser** - Convenience option to enable a user
- **athenix.system.gc** - Garbage collection settings
## Detailed Options
For detailed option information, use:
```bash
# View all athenix options
nix eval .#nixosConfigurations.nix-desktop1.options.athenix --apply builtins.attrNames
# View specific option description
nix eval .#nixosConfigurations.nix-desktop1.options.athenix.sw.desktop.enable.description
# Export all options to JSON
nix build .#athenix-options
cat result | jq
```
## Software Types
Enable different system configurations:
- **desktop** - Full KDE Plasma desktop environment
- **headless** - Server/container configuration
- **tablet-kiosk** - Touch-optimized kiosk for tablets
- **stateless-kiosk** - Diskless PXE boot kiosk
- **builders** - Build server with optional Gitea Actions runner
See the individual option descriptions for detailed information.
'';
in
{
perSystem =
{ system, ... }:
lib.mkIf (system == "x86_64-linux") {
packages = {
# Generate option documentation in markdown
docs =
pkgs.runCommand "athenix-docs"
{
nativeBuildInputs = [ pkgs.jq ];
}
''
mkdir -p $out
# Generate wiki home page
cat > $out/Home.md << 'EOF'
${builtins.readFile wikiHome}
EOF
# Copy main README
cp ${../README.md} $out/README.md
# Copy documentation with wiki-friendly names
cp ${../docs/BUILDING.md} $out/Building.md
cp ${../docs/DEVELOPMENT.md} $out/Development.md
cp ${../docs/EXTERNAL_MODULES.md} $out/External-Modules.md
cp ${../docs/INVENTORY.md} $out/Inventory.md
cp ${../docs/NAMESPACE.md} $out/Namespace.md
cp ${../docs/USER_CONFIGURATION.md} $out/User-Configuration.md
# Generate options reference
cat > $out/Configuration-Options.md << 'EOF'
${builtins.readFile (optionsToMarkdown (getAthenixOptions "nix-desktop1"))}
EOF
echo "Documentation generated in $out"
'';
# Extract just the athenix namespace options as JSON
athenix-options =
let
nixosConfig =
self.nixosConfigurations.nix-desktop1
or (builtins.head (builtins.attrValues self.nixosConfigurations));
# Recursively extract option information
extractOption =
opt:
if opt ? _type && opt._type == "option" then
{
inherit (opt) description;
type = opt.type.description or (opt.type.name or "unknown");
default =
if opt ? default then
if builtins.isAttrs opt.default && opt.default ? _type then "<special>" else opt.default
else
null;
example =
if opt ? example then
if builtins.isAttrs opt.example && opt.example ? _type then "<special>" else opt.example
else
null;
}
else if builtins.isAttrs opt then
lib.mapAttrs (name: value: extractOption value) (
# Filter out internal attributes
lib.filterAttrs (n: _: !lib.hasPrefix "_" n) opt
)
else
null;
athenixOpts = nixosConfig.options.athenix or { };
in
pkgs.writeText "athenix-options.json" (builtins.toJSON (extractOption athenixOpts));
};
};
}

View File

@@ -1,5 +1,8 @@
# Library functions for flake-parts
{ inputs, ... }:
{
flake.lib = import ../lib { inherit inputs; };
flake.lib = import ../lib {
inherit inputs;
lib = inputs.nixpkgs.lib;
};
}

187
secrets.nix Normal file
View File

@@ -0,0 +1,187 @@
# ============================================================================
# Agenix Secret Recipients Configuration (Auto-Generated)
# ============================================================================
# This file automatically discovers hosts and their public keys from the
# secrets/ directory structure and generates recipient configurations.
#
# Directory structure:
# secrets/{hostname}/*.pub -> SSH/age public keys for that host
# secrets/global/*.pub -> Keys accessible to all hosts
#
# Usage:
# ragenix -e secrets/global/example.age # Edit/create secret
# ragenix -r # Re-key all secrets
#
# To add admin keys for editing secrets, create secrets/admins/*.pub files
# with your personal age public keys (generated with: age-keygen)
let
lib = builtins;
# Helper functions not in builtins
filterAttrs =
pred: set:
lib.listToAttrs (
lib.filter (item: pred item.name item.value) (
lib.map (name: {
inherit name;
value = set.${name};
}) (lib.attrNames set)
)
);
concatLists = lists: lib.foldl' (acc: list: acc ++ list) [ ] lists;
unique =
list:
let
go =
acc: remaining:
if remaining == [ ] then
acc
else if lib.elem (lib.head remaining) acc then
go acc (lib.tail remaining)
else
go (acc ++ [ (lib.head remaining) ]) (lib.tail remaining);
in
go [ ] list;
hasSuffix =
suffix: str:
let
lenStr = lib.stringLength str;
lenSuffix = lib.stringLength suffix;
in
lenStr >= lenSuffix && lib.substring (lenStr - lenSuffix) lenSuffix str == suffix;
nameValuePair = name: value: { inherit name value; };
secretsPath = ./secrets;
# Read all directories in secrets/
secretDirs = if lib.pathExists secretsPath then lib.readDir secretsPath else { };
# Filter to only directories (excludes files)
isDirectory = name: type: type == "directory";
directories = lib.filter (name: isDirectory name secretDirs.${name}) (lib.attrNames secretDirs);
# Read public keys from a directory and convert to age format
readHostKeys =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
# Prefer .age.pub files (pre-converted), fall back to .pub files
agePubFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age.pub" name) files;
sshPubFiles = filterAttrs (
name: type: type == "regular" && hasSuffix ".pub" name && !(hasSuffix ".age.pub" name)
) files;
# Read age public keys (already in correct format)
ageKeys = lib.map (
name:
let
content = lib.readFile (dirPath + "/${name}");
# Trim whitespace/newlines
trimmed = lib.replaceStrings [ "\n" " " "\r" "\t" ] [ "" "" "" "" ] content;
in
trimmed
) (lib.attrNames agePubFiles);
# For SSH keys, just include them as-is (user needs to convert with ssh-to-age)
# Or they can run the update-age-keys.sh script
sshKeys =
if (lib.length (lib.attrNames sshPubFiles)) > 0 then
lib.trace "Warning: ${dirName} has unconverted SSH keys. Run secrets/update-age-keys.sh" [ ]
else
[ ];
in
lib.filter (k: k != null && k != "") (ageKeys ++ sshKeys);
# Build host key mappings: { hostname = [ "age1..." "age2..." ]; }
hostKeys = lib.listToAttrs (
lib.map (dir: nameValuePair dir (readHostKeys dir)) (
lib.filter (d: d != "global" && d != "admins") directories
)
);
# Global keys that all hosts can use
globalKeys = if lib.elem "global" directories then readHostKeys "global" else [ ];
# Admin keys for editing secrets
adminKeys = if lib.elem "admins" directories then readHostKeys "admins" else [ ];
# All host keys combined
allHostKeys = concatLists (lib.attrValues hostKeys);
# Find all .age files in the secrets directory
findSecrets =
dir:
let
dirPath = secretsPath + "/${dir}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
ageFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age" name) files;
in
lib.map (name: "secrets/${dir}/${name}") (lib.attrNames ageFiles);
# Generate recipient list for a secret based on its location
getRecipients =
secretPath:
let
# Extract directory name from path: "secrets/nix-builder/foo.age" -> "nix-builder"
pathParts = lib.split "/" secretPath;
dirName = lib.elemAt pathParts 2;
in
if dirName == "global" then
# Global secrets: all hosts + admins
allHostKeys ++ globalKeys ++ adminKeys
else if hostKeys ? ${dirName} then
# Host-specific secrets: that host + global keys + admins
hostKeys.${dirName} ++ globalKeys ++ adminKeys
else
# Fallback: just admins
adminKeys;
# Find all secrets across all directories
allSecrets = concatLists (lib.map findSecrets directories);
# Generate the configuration
secretsConfig = lib.listToAttrs (
lib.map (
secretPath:
let
recipients = getRecipients secretPath;
# Remove duplicates and empty keys
uniqueRecipients = unique (lib.filter (k: k != null && k != "") recipients);
in
nameValuePair secretPath {
publicKeys = uniqueRecipients;
}
) allSecrets
);
# Generate wildcard rules for each directory to allow creating new secrets
wildcardRules = lib.listToAttrs (
lib.concatMap (dir: [
# Match with and without .age extension for ragenix compatibility
(nameValuePair "secrets/${dir}/*" {
publicKeys =
let
recipients = getRecipients "secrets/${dir}/dummy.age";
in
unique (lib.filter (k: k != null && k != "") recipients);
})
(nameValuePair "secrets/${dir}/*.age" {
publicKeys =
let
recipients = getRecipients "secrets/${dir}/dummy.age";
in
unique (lib.filter (k: k != null && k != "") recipients);
})
]) (lib.filter (d: d != "admins") directories)
);
in
secretsConfig // wildcardRules

174
secrets/DESIGN.md Normal file
View File

@@ -0,0 +1,174 @@
# Athenix Secrets System Design
## Overview
The Athenix secrets management system integrates ragenix (agenix) with automatic host discovery based on the repository's fleet inventory structure. It provides a seamless workflow for managing encrypted secrets across all systems.
## Architecture
### Auto-Discovery Module (`sw/secrets.nix`)
**Purpose**: Automatically load and configure secrets at system deployment time.
**Features**:
- Discovers `.age` encrypted files from `secrets/` directories
- Loads global secrets from `secrets/global/` on ALL systems
- Loads host-specific secrets from `secrets/{hostname}/` on matching hosts
- Auto-configures decryption keys based on `.pub` files in directories
- Supports custom secret configuration via `default.nix` in each directory
**Key Behaviors**:
- Secrets are decrypted to `/run/agenix/{name}` at boot
- Identity paths include: system SSH keys + global keys + host-specific keys
- Host-specific secrets override global secrets with the same name
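For orientation, the module effectively produces configuration of roughly this shape on a host (secret names and relative paths are illustrative; real entries come from whatever `.age` files are present):

```nix
{
  # Global secret, decrypted to /run/agenix/example on every host
  age.secrets."example".file = ../secrets/global/example.age;

  # Host-specific secret; on a name clash this entry wins over the global one
  age.secrets."ssh_host_ed25519_key".file = ../secrets/nix-builder/ssh_host_ed25519_key.age;

  # Identities used for decryption on the host
  age.identityPaths = [
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/age/identity.key"
  ];
}
```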
### Dynamic Recipients Configuration (`secrets/secrets.nix`)
**Purpose**: Generate ragenix recipient configuration from directory structure.
**Features**:
- Automatically discovers hosts from `secrets/` subdirectories
- Reads age public keys from `.age.pub` files (converted from SSH keys)
- Generates recipient lists based on secret location:
- `secrets/global/*.age` → ALL hosts + admins
- `secrets/{hostname}/*.age` → that host + global keys + admins
- Supports admin keys in `secrets/admins/` for secret editing
**Key Behaviors**:
- No manual recipient list maintenance required
- Adding a new host = create directory + add .pub key + run `update-age-keys.sh`
- Works with ragenix CLI: `ragenix -e`, `ragenix -r`
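For illustration, the generated recipient attrset has roughly this shape (keys abbreviated, hostnames taken from this repository):

```nix
{
  "secrets/global/example.age".publicKeys = [
    "age1...builder" "age1...dash" "age1...admin"   # all hosts + admins
  ];
  "secrets/nix-builder/ssh_host_ed25519_key.age".publicKeys = [
    "age1...builder" "age1...admin"                 # that host + global keys + admins
  ];
}
```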
## Workflow
### Adding a New Host
1. **Capture SSH host key**:
```bash
# From the running system
cat /etc/ssh/ssh_host_ed25519_key.pub > secrets/new-host/ssh_host_ed25519_key.pub
```
2. **Convert to age format**:
```bash
cd secrets/
./update-age-keys.sh
```
3. **Re-key existing secrets** (if needed):
```bash
ragenix -r
```
### Creating a New Secret
1. **Choose location**:
- `secrets/global/` → all systems can decrypt
- `secrets/{hostname}/` → only that host can decrypt
2. **Create/edit secret**:
```bash
ragenix -e secrets/global/my-secret.age
```
3. **Recipients are auto-determined** from `secrets.nix`:
- Global secrets: all host keys + admin keys
- Host-specific: that host + global keys + admin keys
### Cross-Host Secret Management
Any Athenix host can manage secrets for other hosts because:
- All public keys are in the repository (`*.age.pub` files)
- `secrets/secrets.nix` auto-generates recipient lists
- Hosts decrypt using their own private keys (not shared)
Example: From `nix-builder`, create a secret for `usda-dash`:
```bash
ragenix -e secrets/usda-dash/database-password.age
# Encrypted for usda-dash's public key + admins
# usda-dash will decrypt using its private key at /etc/ssh/ssh_host_ed25519_key
```
## Directory Structure
```
secrets/
├── secrets.nix # Auto-generated recipient config
├── update-age-keys.sh # Helper to convert SSH → age keys
├── README.md # User documentation
├── DESIGN.md # This file
├── global/ # Secrets for ALL hosts
│ ├── *.pub # SSH public keys
│ ├── *.age.pub # Age public keys (generated)
│ ├── *.age # Encrypted secrets
│ └── default.nix # Optional: custom secret config
├── {hostname}/ # Host-specific secrets
│ ├── *.pub
│ ├── *.age.pub
│ ├── *.age
│ └── default.nix
└── admins/ # Admin keys for editing
└── *.age.pub
```
## Security Model
1. **Public keys in git**: Safe to commit (only public keys, `.age.pub` and `.pub`)
2. **Private keys on hosts**: Never leave the system (`/etc/ssh/ssh_host_*_key`, `/etc/age/identity.key`)
3. **Encrypted secrets in git**: Safe to commit (`.age` files)
4. **Decrypted secrets**: Only in memory/tmpfs (`/run/agenix/*`)
## Integration Points
### With NixOS Configuration
```nix
# Access decrypted secrets in any NixOS module
config.age.secrets.my-secret.path # => /run/agenix/my-secret
# Example usage
services.myapp.passwordFile = config.age.secrets.database-password.path;
```
### With Inventory System
The system automatically matches `secrets/{hostname}/` to hostnames from `inventory.nix`. No manual configuration needed.
### With External Modules
External user/system modules can reference secrets:
```nix
# In external module
{ config, ... }:
{
programs.git.extraConfig.credential.helper =
"store --file ${config.age.secrets.git-credentials.path}";
}
```
## Advantages
1. **Zero manual recipient management**: Just add directories and keys
2. **Cross-host secret creation**: Any host can manage secrets for others
3. **Automatic host discovery**: Syncs with inventory structure
4. **Flexible permission model**: Global vs host-specific + custom configs
5. **Version controlled**: All public data in git, auditable history
6. **Secure by default**: Private keys never shared, secrets encrypted at rest
## Limitations
1. **Requires age key conversion**: SSH keys must be converted to age format (automated by script)
2. **Bootstrap chicken-and-egg**: An initial host key is needed before secrets can be encrypted for that host (capture it from first boot or generate it locally)
3. **No secret rotation automation**: Must manually re-key with `ragenix -r`
4. **Git history contains old encrypted versions**: Rotating keys doesn't remove old ciphertexts from history
## Future Enhancements
- Auto-run `update-age-keys.sh` in pre-commit hook
- Integrate with inventory.nix to auto-generate host directories
- Support for multiple identity types per host
- Automated secret rotation scheduling
- Integration with hardware security modules (YubiKey, etc.)

250
secrets/README.md Normal file
View File

@@ -0,0 +1,250 @@
# Secrets Management with Agenix
This directory contains age-encrypted secrets for Athenix hosts. Secrets are automatically loaded based on directory structure.
## Directory Structure
```
secrets/
├── global/ # Secrets installed on ALL systems
│ ├── default.nix # Optional: Custom config for global secrets
│ └── example.age # Decrypted to /run/agenix/example on all hosts
├── nix-builder/ # Secrets only for nix-builder host
│ ├── default.nix # Optional: Custom config for nix-builder secrets
│ └── ssh_host_ed25519_key.age
└── usda-dash/ # Secrets only for usda-dash host
└── ssh_host_ed25519_key.age
```
## How It Works
1. **Global secrets** (`./secrets/global/*.age`) are installed on every system
2. **Host-specific secrets** (`./secrets/{hostname}/*.age`) are only installed on matching hosts
3. Only `.age` encrypted files are loaded; `.pub` public keys are ignored
4. Secrets are decrypted at boot to `/run/agenix/{secret-name}` with mode `0400` and owner `root:root`
5. **Custom configurations** can be defined in `default.nix` files within each directory
## Creating Secrets
### 1. Generate Age Keys
For a new host, generate an age identity:
```bash
# On the target system
mkdir -p /etc/age
age-keygen -o /etc/age/identity.key
chmod 600 /etc/age/identity.key
```
Or use SSH host keys (automatically done by Athenix):
```bash
# Get the age public key from SSH host key
nix shell nixpkgs#ssh-to-age -c sh -c 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
### 2. Store Public Keys
Save the public key to `secrets/{hostname}/` for reference:
```bash
# Example for nix-builder
echo "age1..." > secrets/nix-builder/identity.pub
```
Or from SSH host key:
```bash
cat /etc/ssh/ssh_host_ed25519_key.pub > secrets/nix-builder/ssh_host_ed25519_key.pub
```
**Then convert SSH keys to age format:**
```bash
cd secrets/
./update-age-keys.sh
```
This creates `.age.pub` files that `secrets.nix` uses for ragenix recipient configuration.
### 3. Encrypt Secrets
Encrypt a secret for specific hosts:
```bash
# For a single host
age -r age1publickey... -o secrets/nix-builder/my-secret.age <<< "secret value"
# For multiple hosts (recipient list)
age -R recipients.txt -o secrets/global/shared-secret.age < plaintext-file
# Using SSH public keys
age -R secrets/nix-builder/ssh_host_ed25519_key.pub \
-o secrets/nix-builder/ssh_host_key.age < /etc/ssh/ssh_host_ed25519_key
```
### 4. Creating and Editing Secrets
**For new secrets**, use the helper script (automatically determines recipients):
```bash
cd secrets/
# Create a host-specific secret
./create-secret.sh usda-dash/database-url.age <<< "postgresql://..."
# Create a global secret
echo "shared-api-key" | ./create-secret.sh global/api-key.age
# From a file
./create-secret.sh nix-builder/ssh-key.age < ~/.ssh/id_ed25519
```
The script automatically includes the correct recipients:
- **Host-specific**: that host's keys + global keys + admin keys
- **Global**: all host keys + admin keys
**To edit existing secrets**, use `ragenix`:
```bash
# Install ragenix
nix shell github:yaxitech/ragenix
# Edit an existing secret (you must have a decryption key)
ragenix -e secrets/global/existing-secret.age
# Re-key all secrets after adding new hosts
ragenix -r
```
**Why create with `age` first?** Ragenix requires the `.age` file to exist before editing. The `secrets/secrets.nix` configuration auto-discovers recipients from the directory structure, but ragenix doesn't support wildcard patterns for creating new files.
**Recipient management** is automatic:
- **Global secrets** (`secrets/global/*.age`): encrypted for ALL hosts + admins
- **Host secrets** (`secrets/{hostname}/*.age`): encrypted for that host + global keys + admins
- **Admin keys** from `secrets/admins/*.age.pub` allow editing from your workstation
After creating new .age files with `age`, use `ragenix -r` to re-key all secrets with the updated recipient configuration.
To add admin keys for editing secrets:
```bash
# Generate personal age key
age-keygen -o ~/.config/age/personal.key
# Extract public key and add to secrets
grep "public key:" ~/.config/age/personal.key | cut -d: -f2 | tr -d ' ' > secrets/admins/your-name.age.pub
```
## Using Secrets in Configuration
Secrets are automatically loaded. Reference them in your NixOS configuration:
```nix
# Example: Using a secret for a service
services.myservice = {
enable = true;
passwordFile = config.age.secrets.my-password.path; # /run/agenix/my-password
};
# Example: Setting up SSH host key from secret
services.openssh = {
hostKeys = [{
path = config.age.secrets.ssh_host_ed25519_key.path;
type = "ed25519";
}];
};
```
## Custom Secret Configuration
For secrets needing custom permissions, use `athenix.sw.secrets.extraSecrets`:
```nix
# In inventory.nix or host config
athenix.sw.secrets.extraSecrets = {
"nginx-cert" = {
file = ./secrets/custom/cert.age;
mode = "0440";
owner = "nginx";
group = "nginx";
};
};
```
### Using default.nix in Secret Directories
Alternatively, create a `default.nix` file in the secret directory to configure all secrets in that directory:
```nix
# secrets/global/default.nix
{
"example" = {
mode = "0440"; # Custom file mode (default: "0400")
owner = "nginx"; # Custom owner (default: "root")
group = "nginx"; # Custom group (default: "root")
path = "/run/secrets/example"; # Custom path (default: /run/agenix/{name})
};
"api-key" = {
mode = "0400";
owner = "myservice";
group = "myservice";
};
}
```
The `default.nix` file should return an attribute set where:
- **Keys** are secret names (without the `.age` extension)
- **Values** are configuration objects with optional fields:
- `mode` - File permissions (string, e.g., `"0440"`)
- `owner` - File owner (string, e.g., `"nginx"`)
- `group` - File group (string, e.g., `"nginx"`)
- `path` - Custom installation path (string, e.g., `"/custom/path"`)
Secrets not listed in `default.nix` will use default settings.
## Security Best Practices
1. **Never commit unencrypted secrets** - Only `.age` and `.pub` files belong in this directory
2. **Use host-specific secrets** when possible - Limit exposure by using hostname directories
3. **Rotate secrets regularly** - Re-encrypt with new keys periodically
4. **Backup age identity keys** - Store `/etc/age/identity.key` securely offline
5. **Use SSH keys** - Leverage existing SSH host keys for age encryption when possible
6. **Pin to commits** - When using external secrets modules, always use `rev = "commit-hash"`
## Converting SSH Keys to Age Format
```bash
# Convert SSH public key to age public key
nix shell nixpkgs#ssh-to-age -c ssh-to-age < secrets/nix-builder/ssh_host_ed25519_key.pub
# Convert SSH private key to age identity (for editing secrets)
nix shell nixpkgs#ssh-to-age -c ssh-to-age -private-key -i ~/.ssh/id_ed25519
```
## Disabling Automatic Secrets
To disable automatic secret loading:
```nix
# In inventory.nix or host config
athenix.sw.secrets.enable = false;
```
## Troubleshooting
### Secret not found
- Ensure the `.age` file exists in `secrets/global/` or `secrets/{hostname}/`
- Check `hostname` matches directory name: `echo $HOSTNAME` on the target system
- Run `nix flake check` to verify secrets are discovered
### Permission denied
- Verify secret permissions in `/run/agenix/`
- Check if custom permissions are needed (use `extraSecrets`)
- Ensure the service user/group has access to the secret file
### Age decrypt failed
- Verify the host's age identity exists: `ls -l /etc/age/identity.key`
- Check that the secret was encrypted with the host's public key
- Confirm SSH host key hasn't changed (would change derived age key)
## References
- [ragenix GitHub](https://github.com/yaxitech/ragenix)
- [agenix upstream](https://github.com/ryantm/agenix)
- [age encryption tool](https://age-encryption.org/)

View File

@@ -0,0 +1 @@
age14emzyraytqzmre58c452t07rtcj87cwqwmd9z3gj7upugtxk8s3sda5tju

BIN
secrets/core Normal file

Binary file not shown.

121
secrets/create-secret.sh Executable file
View File

@@ -0,0 +1,121 @@
#!/usr/bin/env bash
set -euo pipefail
# Create a new age-encrypted secret with auto-determined recipients
# Usage: ./create-secret.sh <path> [content]
# path: relative to secrets/ (e.g., "usda-dash/my-secret.age" or "global/shared.age")
# content: stdin if not provided
SECRETS_DIR="$(cd "$(dirname "$0")" && pwd)"
if [ $# -lt 1 ]; then
echo "Usage: $0 <path> [content]" >&2
echo "Examples:" >&2
echo " $0 usda-dash/database-url.age <<< 'postgresql://...'" >&2
echo " $0 global/api-key.age < secret-file.txt" >&2
echo " echo 'secret' | $0 nix-builder/token.age" >&2
exit 1
fi
SECRET_PATH="$1"
shift
# Extract directory from path (e.g., "usda-dash/file.age" -> "usda-dash")
SECRET_DIR="$(dirname "$SECRET_PATH")"
SECRET_FILE="$(basename "$SECRET_PATH")"
# Ensure .age extension
if [[ ! "$SECRET_FILE" =~ \.age$ ]]; then
echo "Error: Secret file must have .age extension" >&2
exit 1
fi
TARGET_FILE="$SECRETS_DIR/$SECRET_PATH"
# Ensure target directory exists
mkdir -p "$(dirname "$TARGET_FILE")"
# Collect recipient keys
RECIPIENTS=()
if [ "$SECRET_DIR" = "global" ]; then
echo "Creating global secret (encrypted for all hosts + admins)..." >&2
# Add all host keys
for host_dir in "$SECRETS_DIR"/*/; do
host_name="$(basename "$host_dir")"
# Skip non-host directories
if [ "$host_name" = "admins" ] || [ "$host_name" = "global" ]; then
continue
fi
# Add all .age.pub files from this host
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$host_dir" -maxdepth 1 -name "*.age.pub" -print0)
done
# Add global keys
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/global" -maxdepth 1 -name "*.age.pub" -print0 2>/dev/null || true)
else
echo "Creating host-specific secret for $SECRET_DIR..." >&2
# Check if host directory exists
if [ ! -d "$SECRETS_DIR/$SECRET_DIR" ]; then
echo "Error: Host directory $SECRET_DIR does not exist" >&2
echo "Create it first: mkdir -p secrets/$SECRET_DIR" >&2
exit 1
fi
# Add this host's keys
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/$SECRET_DIR" -maxdepth 1 -name "*.age.pub" -print0)
# Add global keys (so global hosts can also decrypt)
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/global" -maxdepth 1 -name "*.age.pub" -print0 2>/dev/null || true)
fi
# Add admin keys (for editing from workstations)
if [ -d "$SECRETS_DIR/admins" ]; then
while IFS= read -r -d '' key_file; do
RECIPIENTS+=("$key_file")
done < <(find "$SECRETS_DIR/admins" -maxdepth 1 -name "*.age.pub" -print0 2>/dev/null || true)
fi
# Check if we have any recipients
if [ ${#RECIPIENTS[@]} -eq 0 ]; then
echo "Error: No recipient keys found!" >&2
echo "Run ./update-age-keys.sh first to generate .age.pub files" >&2
exit 1
fi
echo "Found ${#RECIPIENTS[@]} recipient key(s):" >&2
for key in "${RECIPIENTS[@]}"; do
echo " - $(basename "$key")" >&2
done
# Create recipient list file (temporary)
RECIPIENT_LIST=$(mktemp)
trap "rm -f $RECIPIENT_LIST" EXIT
for key in "${RECIPIENTS[@]}"; do
cat "$key" >> "$RECIPIENT_LIST"
done
# Encrypt the secret
if [ $# -gt 0 ]; then
# Content provided as argument
echo "$@" | age -R "$RECIPIENT_LIST" -o "$TARGET_FILE"
else
# Content from stdin
age -R "$RECIPIENT_LIST" -o "$TARGET_FILE"
fi
echo "✓ Created $TARGET_FILE" >&2
echo " Edit with: ragenix -e secrets/$SECRET_PATH" >&2

View File

@@ -0,0 +1 @@
age1udmpqkedupd33gyut85ud3nvppydzeg04kkuneymkvxcjjej244s4v8xjc

View File

@@ -0,0 +1,10 @@
# Host-specific secret configuration for nix-builder
{
# SSH host key should be readable by sshd
ssh_host_ed25519_key = {
mode = "0600";
owner = "root";
group = "root";
path = "/etc/ssh/ssh_host_ed25519_key";
};
}

View File

@@ -0,0 +1 @@
age1u5tczg2sx90n03uuz9h549f4h3h7sq5uehhqpampzs7vj8ew7y6s2mjwz0

176
secrets/secrets.nix Normal file
View File

@@ -0,0 +1,176 @@
# ============================================================================
# Agenix Secret Recipients Configuration (Auto-Generated)
# ============================================================================
# This file automatically discovers hosts and their public keys from the
# secrets/ directory structure and generates recipient configurations.
#
# Directory structure:
# secrets/{hostname}/*.pub -> SSH/age public keys for that host
# secrets/global/*.pub -> Keys accessible to all hosts
#
# Usage:
# ragenix -e secrets/global/example.age # Edit/create secret
# ragenix -r # Re-key all secrets
#
# To add admin keys for editing secrets, create secrets/admins/*.pub files
# with your personal age public keys (generated with: age-keygen)
let
lib = builtins;
# Helper functions not in builtins
filterAttrs =
pred: set:
lib.listToAttrs (
lib.filter (item: pred item.name item.value) (
lib.map (name: {
inherit name;
value = set.${name};
}) (lib.attrNames set)
)
);
concatLists = lists: lib.foldl' (acc: list: acc ++ list) [ ] lists;
unique =
list:
let
go =
acc: remaining:
if remaining == [ ] then
acc
else if lib.elem (lib.head remaining) acc then
go acc (lib.tail remaining)
else
go (acc ++ [ (lib.head remaining) ]) (lib.tail remaining);
in
go [ ] list;
hasSuffix =
suffix: str:
let
lenStr = lib.stringLength str;
lenSuffix = lib.stringLength suffix;
in
lenStr >= lenSuffix && lib.substring (lenStr - lenSuffix) lenSuffix str == suffix;
nameValuePair = name: value: { inherit name value; };
secretsPath = ./secrets;
# Read all directories in secrets/
secretDirs = if lib.pathExists secretsPath then lib.readDir secretsPath else { };
# Filter to only directories (excludes files)
isDirectory = name: type: type == "directory";
directories = lib.filter (name: isDirectory name secretDirs.${name}) (lib.attrNames secretDirs);
# Read public keys from a directory and convert to age format
readHostKeys =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
# Prefer .age.pub files (pre-converted), fall back to .pub files
agePubFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age.pub" name) files;
sshPubFiles = filterAttrs (
name: type: type == "regular" && hasSuffix ".pub" name && !(hasSuffix ".age.pub" name)
) files;
# Read age public keys (already in correct format)
ageKeys = lib.map (
name:
let
content = lib.readFile (dirPath + "/${name}");
# Trim whitespace/newlines
trimmed = lib.replaceStrings [ "\n" " " "\r" "\t" ] [ "" "" "" "" ] content;
in
trimmed
) (lib.attrNames agePubFiles);
# For SSH keys, just include them as-is (user needs to convert with ssh-to-age)
# Or they can run the update-age-keys.sh script
sshKeys =
if (lib.length (lib.attrNames sshPubFiles)) > 0 then
lib.trace "Warning: ${dirName} has unconverted SSH keys. Run secrets/update-age-keys.sh" [ ]
else
[ ];
in
lib.filter (k: k != null && k != "") (ageKeys ++ sshKeys);
# Build host key mappings: { hostname = [ "age1..." "age2..." ]; }
hostKeys = lib.listToAttrs (
lib.map (dir: nameValuePair dir (readHostKeys dir)) (
lib.filter (d: d != "global" && d != "admins") directories
)
);
# Global keys that all hosts can use
globalKeys = if lib.elem "global" directories then readHostKeys "global" else [ ];
# Admin keys for editing secrets
adminKeys = if lib.elem "admins" directories then readHostKeys "admins" else [ ];
# All host keys combined
allHostKeys = concatLists (lib.attrValues hostKeys);
# Find all .age files in the secrets directory
findSecrets =
dir:
let
dirPath = secretsPath + "/${dir}";
files = if lib.pathExists dirPath then lib.readDir dirPath else { };
ageFiles = filterAttrs (name: type: type == "regular" && hasSuffix ".age" name) files;
in
lib.map (name: "secrets/${dir}/${name}") (lib.attrNames ageFiles);
# Generate recipient list for a secret based on its location
getRecipients =
secretPath:
let
# Extract directory name from path: "secrets/nix-builder/foo.age" -> "nix-builder"
pathParts = lib.split "/" secretPath;
dirName = lib.elemAt pathParts 2;
in
if dirName == "global" then
# Global secrets: all hosts + admins
allHostKeys ++ globalKeys ++ adminKeys
else if hostKeys ? ${dirName} then
# Host-specific secrets: that host + global keys + admins
hostKeys.${dirName} ++ globalKeys ++ adminKeys
else
# Fallback: just admins
adminKeys;
# Find all secrets across all directories
allSecrets = concatLists (lib.map findSecrets directories);
# Generate the configuration
secretsConfig = lib.listToAttrs (
lib.map (
secretPath:
let
recipients = getRecipients secretPath;
# Remove duplicates and empty keys
uniqueRecipients = unique (lib.filter (k: k != null && k != "") recipients);
in
nameValuePair secretPath {
publicKeys = uniqueRecipients;
}
) allSecrets
);
in
secretsConfig
// {
# Export helper information for debugging
_meta = {
hostKeys = hostKeys;
globalKeys = globalKeys;
adminKeys = adminKeys;
allHostKeys = allHostKeys;
discoveredSecrets = allSecrets;
};
}

36
secrets/update-age-keys.sh Executable file
View File

@@ -0,0 +1,36 @@
#!/usr/bin/env bash
# ============================================================================
# Update Age Keys from SSH Public Keys
# ============================================================================
# This script converts SSH public keys to age format for use with ragenix.
# Run this after adding new SSH .pub files to create corresponding .age.pub files.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
echo "Converting SSH public keys to age format..."
# Find all .pub files that are SSH keys (not already .age.pub)
find . -name "*.pub" -not -name "*.age.pub" -type f | while read -r pubkey; do
# Check if it's an SSH key
if grep -q "^ssh-" "$pubkey" 2>/dev/null || grep -q "^ecdsa-" "$pubkey" 2>/dev/null; then
age_key=$(nix shell nixpkgs#ssh-to-age -c ssh-to-age < "$pubkey" 2>/dev/null || true)
if [ -n "$age_key" ]; then
# Create .age.pub file with the age key
age_file="${pubkey%.pub}.age.pub"
echo "$age_key" > "$age_file"
echo "✓ Converted: $pubkey -> $age_file"
else
echo "⚠ Skipped: $pubkey (conversion failed)"
fi
fi
done
echo ""
echo "Done! Age public keys have been generated."
echo "You can now use ragenix to manage secrets:"
echo " ragenix -e secrets/global/my-secret.age"
echo " ragenix -r # Re-key all secrets with updated keys"

View File

@@ -0,0 +1,8 @@
# Host-specific secret configuration for usda-dash
{
usda-vision-azure-env = {
mode = "0600";
owner = "root";
group = "root";
};
}

View File

@@ -0,0 +1 @@
age1lr24yvk7rdfh5wkle7h32jpxqxm2e8vk85mc4plv370u2sh4yfmszaaejx

View File

@@ -0,0 +1 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHI73LOEK2RgfjhZWpryntlLbx0LouHrhQ6v0vZu4Etr root@usda-dash

Binary file not shown.

View File

@@ -11,21 +11,123 @@
...
}:
lib.mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
]
with lib;
let
cfg = config.athenix.sw.builders;
in
{
options.athenix.sw.builders = mkOption {
type = lib.types.submodule {
options = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable build server configuration.
Includes:
- SSH host keys for common Git servers (factory.uga.edu, github.com)
- Gitea Actions runner support (optional)
- Build tools and dependencies
Recommended for: CI/CD servers, build containers, development infrastructure
'';
example = true;
};
giteaRunner = mkOption {
type = lib.types.submodule {
options = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable Gitea Actions self-hosted runner.
This runner will connect to a Gitea instance and execute CI/CD workflows.
Requires manual setup of the token file before the service will start.
'';
example = true;
};
url = mkOption {
type = lib.types.str;
description = ''
URL of the Gitea instance to connect to.
This should be the base URL without any path components.
'';
example = "https://git.factory.uga.edu";
};
tokenFile = mkOption {
type = lib.types.path;
default = "/var/lib/gitea-runner-token";
description = ''
Path to file containing Gitea runner registration token.
To generate:
1. Go to your Gitea repository settings
2. Navigate to Actions > Runners
3. Click "Create new Runner"
4. Save the token to this file:
echo "TOKEN=your-token-here" | sudo tee /var/lib/gitea-runner-token > /dev/null
The service will not start until this file exists.
'';
example = "/var/secrets/gitea-runner-token";
};
extraLabels = mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = ''
Additional labels to identify this runner in workflow files.
Use labels to target specific runners for different job types.
'';
example = [
"self-hosted"
"nix"
"x86_64-linux"
];
};
name = mkOption {
type = lib.types.str;
default = "athenix";
description = ''
Unique name for this runner instance.
Shown in Gitea's runner list and logs.
'';
example = "nix-builder-1";
};
};
};
default = { };
description = "Gitea Actions runner configuration.";
};
};
};
default = { };
description = "Build server configuration (CI/CD, Gitea Actions).";
};
config = mkIf cfg.enable (mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
]);
}

View File

@@ -1,8 +1,6 @@
{
config,
lib,
pkgs,
inputs,
...
}:
@@ -10,7 +8,7 @@ with lib;
let
cfg = config.athenix.sw;
basePackages = with pkgs; [
basePackages = [
# Build-related packages can be added here if needed
];
in

View File

@@ -10,19 +10,14 @@
# Software Module Entry Point
# ============================================================================
# This module manages the software configuration for the system. It provides
# options to select the system type ('desktop' or 'kiosk') and handles
# the conditional importation of the appropriate sub-modules.
# enable options for each system type (desktop, headless, builders, etc.)
# that can be enabled independently or in combination. Each type is a proper
# NixOS submodule with its own enable flag and type-specific options.
with lib;
let
cfg = config.athenix.sw;
# Normalize type to always be a list
swTypes = if isList cfg.type then cfg.type else [ cfg.type ];
# Helper to check if a type is enabled
hasType = type: elem type swTypes;
in
{
imports = [
@@ -31,169 +26,82 @@ in
./gc.nix
./updater.nix
./update-ref.nix
./secrets.nix
./desktop
./headless
./builders
./tablet-kiosk
./stateless-kiosk
inputs.home-manager.nixosModules.home-manager
inputs.agenix.nixosModules.default
inputs.disko.nixosModules.disko
];
options.athenix.sw = {
enable = mkEnableOption "Standard Workstation Configuration";
enable = mkOption {
type = lib.types.bool;
default = true;
description = ''
Enable standard workstation configuration with base packages.
Provides:
- Base CLI tools (htop, git, binutils)
- Shell configuration (Zsh)
- Secret management (agenix)
- Oh My Posh shell theme
This is typically enabled automatically when any sw type is enabled.
'';
};
type = mkOption {
type = types.oneOf [
(types.enum [
"desktop"
"tablet-kiosk"
"headless"
"stateless-kiosk"
"builders"
])
(types.listOf (
types.enum [
"desktop"
"tablet-kiosk"
"headless"
"stateless-kiosk"
"builders"
]
))
];
default = "desktop";
description = "Type(s) of system configuration. Can be a single type or a list of types to combine multiple configurations.";
type = lib.types.nullOr (lib.types.either lib.types.str (lib.types.listOf lib.types.str));
default = null;
description = "DEPRECATED: Use athenix.sw.<type>.enable instead. Legacy type selection.";
visible = false;
};
extraPackages = mkOption {
type = types.listOf types.package;
type = lib.types.listOf lib.types.package;
default = [ ];
description = "Extra packages to install.";
description = ''
Additional system packages to install beyond the defaults.
These packages are added to environment.systemPackages.
'';
example = lib.literalExpression "[ pkgs.vim pkgs.wget pkgs.curl ]";
};
excludePackages = mkOption {
type = types.listOf types.package;
type = lib.types.listOf lib.types.package;
default = [ ];
description = "Packages to exclude from the default list.";
};
kioskUrl = mkOption {
type = types.str;
default = "https://ha.factory.uga.edu";
description = "URL to open in Chromium kiosk mode.";
};
# Builders-specific options
builders = mkOption {
type = types.submodule {
options = {
giteaRunner = {
enable = mkEnableOption "Gitea Actions self-hosted runner";
url = mkOption {
type = types.str;
description = "Gitea instance URL for the runner";
};
tokenFile = mkOption {
type = types.path;
default = "/var/lib/gitea-runner-token";
description = ''
Path to file containing Gitea runner token.
Generate in Gitea repository settings under Actions > Runners.
The token must have runner registration access.
'';
};
extraLabels = mkOption {
type = types.listOf types.str;
default = [ ];
description = "Extra labels to identify this runner in workflows";
};
name = mkOption {
type = types.str;
default = "athenix";
description = "Name of the Gitea runner service";
};
};
};
};
default = { };
description = "Builder-specific configuration options";
description = ''
Packages to exclude from the default package list.
Useful for removing unwanted default packages.
'';
example = lib.literalExpression "[ pkgs.htop ]";
};
};
config = mkIf cfg.enable (mkMerge [
{
# ========== System-Wide Configuration ==========
nixpkgs.config.allowUnfree = true;
config = mkIf cfg.enable {
# ========== System-Wide Configuration ==========
nixpkgs.config.allowUnfree = true;
# ========== Shell Configuration ==========
programs.zsh.enable = true;
programs.nix-ld.enable = true; # Allow running non-NixOS binaries
# ========== Shell Configuration ==========
programs.zsh.enable = true;
programs.nix-ld.enable = true; # Allow running non-NixOS binaries
# ========== Base Packages ==========
environment.systemPackages =
with pkgs;
subtractLists cfg.excludePackages [
htop # System monitor
binutils # Binary utilities
zsh # Z shell
git # Version control
oh-my-posh # Shell prompt theme
age # Simple file encryption tool
age-plugin-fido2-hmac # age FIDO2 support
inputs.agenix.packages.${stdenv.hostPlatform.system}.default # Secret management
];
}
# ========== Software Profile Imports ==========
(mkIf (hasType "desktop") (
import ./desktop {
inherit
config
lib
pkgs
inputs
;
}
))
(mkIf (hasType "tablet-kiosk") (
import ./tablet-kiosk {
inherit
config
lib
pkgs
inputs
;
}
))
(mkIf (hasType "headless") (
import ./headless {
inherit
config
lib
pkgs
inputs
;
}
))
(mkIf (hasType "stateless-kiosk") (
import ./stateless-kiosk {
inherit
config
lib
pkgs
inputs
;
}
))
(mkIf (hasType "builders") (
import ./builders {
inherit
config
lib
pkgs
inputs
;
}
))
]);
# ========== Base Packages ==========
environment.systemPackages =
with pkgs;
subtractLists cfg.excludePackages [
htop # System monitor
binutils # Binary utilities
zsh # Z shell
git # Version control
oh-my-posh # Shell prompt theme
age # Simple file encryption tool
age-plugin-fido2-hmac # age FIDO2 support
inputs.agenix.packages.${stdenv.hostPlatform.system}.default # Secret management
];
};
}
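
Taken together, the new options replace the old type selector. A sketch of migrating a host configuration, with purely illustrative package choices:

{ pkgs, ... }:
{
  # Previously: athenix.sw.type = [ "desktop" "builders" ];  (now deprecated)
  athenix.sw.desktop.enable = true;
  athenix.sw.builders.enable = true;

  # Add a few extras on top of the base package set...
  athenix.sw.extraPackages = with pkgs; [ vim wget ];
  # ...and drop one of the defaults.
  athenix.sw.excludePackages = with pkgs; [ htop ];
}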

View File

@@ -10,21 +10,56 @@
inputs,
...
}:
lib.mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
]
with lib;
let
cfg = config.athenix.sw.desktop;
in
{
options.athenix.sw.desktop = mkOption {
type = lib.types.submodule {
options = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable full desktop environment with KDE Plasma 6.
Includes:
- KDE Plasma 6 desktop with SDDM display manager
- Full graphical software suite (Firefox, Chromium, LibreOffice)
- Printing and scanning support (CUPS)
- Virtualization (libvirt, virt-manager)
- Bluetooth and audio (PipeWire)
- Video conferencing (Zoom, Teams)
Recommended for: Workstations, development machines, user desktops
'';
example = true;
};
};
};
default = { };
description = "Desktop environment configuration (KDE Plasma 6).";
};
config = mkIf cfg.enable (mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
]);
}

View File

@@ -2,7 +2,6 @@
config,
lib,
pkgs,
inputs,
...
}:

View File

@@ -1,5 +1,4 @@
{
config,
lib,
pkgs,
...

View File

@@ -1,7 +1,6 @@
{
config,
lib,
pkgs,
...
}:
{
@@ -10,22 +9,40 @@
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to enable automatic garbage collection.";
description = ''
Enable automatic garbage collection of old NixOS generations.
Helps keep disk usage under control on long-running systems.
'';
};
frequency = lib.mkOption {
type = lib.types.str;
default = "weekly";
description = "How often to run garbage collection (systemd timer format).";
description = ''
How often to run garbage collection (systemd timer format).
Common values: "daily", "weekly", "monthly"
Advanced: "*-*-* 03:00:00" (daily at 3 AM)
'';
example = "daily";
};
retentionDays = lib.mkOption {
type = lib.types.int;
default = 30;
description = "Number of days to keep old generations before deletion.";
description = ''
Number of days to keep old system generations before deletion.
Older generations allow rolling back system changes.
Recommended: 30-90 days for workstations, 7-14 for servers.
'';
example = 60;
};
optimise = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether to automatically optimize the Nix store.";
description = ''
Whether to automatically hard-link identical files in the Nix store.
Can save significant disk space but uses CPU during optimization.
'';
};
};
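
As a sketch, a server might tighten these settings as below. The option path athenix.sw.gc is an assumption based on the file name and the sw module's imports; the values follow the recommendations in the descriptions above:

{ ... }:
{
  # Assumed option path for the gc.nix module shown above.
  athenix.sw.gc = {
    enable = true;
    frequency = "daily";
    retentionDays = 14; # shorter retention for servers
    optimise = true;
  };
}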

View File

@@ -12,7 +12,11 @@
# It reconstructs the terminfo database from the provided definition and
# adds it to the system packages.
with lib;
let
cfg = config.athenix.sw;
ghostty-terminfo = pkgs.runCommand "ghostty-terminfo" { } ''
mkdir -p $out/share/terminfo
cat > ghostty.info <<'EOF'
@@ -101,5 +105,7 @@ let
'';
in
{
environment.systemPackages = [ ghostty-terminfo ];
config = mkIf cfg.enable {
environment.systemPackages = [ ghostty-terminfo ];
};
}

View File

@@ -11,21 +11,53 @@
...
}:
lib.mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
]
with lib;
let
cfg = config.athenix.sw.headless;
in
{
options.athenix.sw.headless = mkOption {
type = lib.types.submodule {
options = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable minimal headless server configuration.
Includes:
- SSH server with password authentication
- Minimal CLI tools (tmux, man)
- Systemd-networkd for networking
- No graphical environment
Recommended for: Servers, containers (LXC), WSL, remote systems
'';
example = true;
};
};
};
default = { };
description = "Headless server configuration (SSH, minimal CLI tools).";
};
config = mkIf cfg.enable (mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
]);
}
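
Because each profile is just an enable flag, profiles can be combined on a single host. A sketch for an LXC build container that is both headless and a builder (the host's purpose is illustrative):

{ ... }:
{
  athenix.sw.headless.enable = true;
  athenix.sw.builders.enable = true;
}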

View File

@@ -2,7 +2,6 @@
config,
lib,
pkgs,
inputs,
...
}:

View File

@@ -1,7 +1,4 @@
{
config,
lib,
pkgs,
...
}:

View File

@@ -18,10 +18,27 @@ let
cfg = config.athenix.sw.python;
in
{
options.athenix.sw.python = {
enable = mkEnableOption "Python development tools (pixi, uv)" // {
default = true;
options.athenix.sw.python = lib.mkOption {
type = lib.types.submodule {
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = ''
Enable Python development tools (pixi, uv).
Provides:
- pixi: Fast, cross-platform package manager for Python
- uv: Extremely fast Python package installer and resolver
These tools manage project-based dependencies rather than global
Python packages, avoiding conflicts and improving reproducibility.
'';
};
};
};
default = { };
description = "Python development environment configuration.";
};
config = mkIf cfg.enable {
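
Since these tools default to enabled, a minimal or kiosk host can opt out with a one-liner; a sketch:

{ ... }:
{
  athenix.sw.python.enable = false;
}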

sw/secrets.nix (new file, 230 lines)
View File

@@ -0,0 +1,230 @@
# ============================================================================
# Automatic Secret Management with Agenix
# ============================================================================
# This module automatically loads age-encrypted secrets from ./secrets based on
# the hostname. Secrets are organized by directory:
# - ./secrets/global/ -> Installed on ALL systems
# - ./secrets/{hostname}/ -> Installed only on matching host
#
# Secret files should be .age encrypted files. Public keys (.pub) are ignored.
{
config,
lib,
pkgs,
...
}:
with lib;
let
cfg = config.athenix.sw;
secretsPath = ../secrets;
# Get the fleet-assigned hostname (avoids issues with LXC empty hostnames)
hostname = config.athenix.host.name;
# Read all directories in ./secrets
secretDirs = if builtins.pathExists secretsPath then builtins.readDir secretsPath else { };
# Filter to only directories (excludes files)
isDirectory = name: type: type == "directory";
directories = lib.filterAttrs isDirectory secretDirs;
# Read secrets from a specific directory
readSecretsFromDir =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = builtins.readDir dirPath;
# Check if there's a default.nix with custom secret configurations
hasDefaultNix = files ? "default.nix";
customConfigs = if hasDefaultNix then import (dirPath + "/default.nix") else { };
# Only include .age files (exclude .pub public keys and other files)
secretFiles = lib.filterAttrs (name: type: type == "regular" && lib.hasSuffix ".age" name) files;
in
lib.mapAttrs' (
name: _:
let
# Remove .age extension for the secret name
secretName = lib.removeSuffix ".age" name;
# Get custom config for this secret if defined
customConfig = customConfigs.${secretName} or { };
# Base configuration with file path
baseConfig = {
file = dirPath + "/${name}";
};
in
lib.nameValuePair secretName (baseConfig // customConfig)
) secretFiles;
# Read public keys from a specific directory and map to private key paths
readIdentityPathsFromDir =
dirName:
let
dirPath = secretsPath + "/${dirName}";
files = if builtins.pathExists dirPath then builtins.readDir dirPath else { };
# Only include .pub public key files
pubKeyFiles = lib.filterAttrs (name: type: type == "regular" && lib.hasSuffix ".pub" name) files;
in
lib.mapAttrsToList (
name: _:
let
# Map public key filename to expected private key location
baseName = lib.removeSuffix ".pub" name;
filePath = dirPath + "/${name}";
fileContent = builtins.readFile filePath;
# Check if it's an SSH key by looking at the content
isSSHKey = lib.hasPrefix "ssh-" fileContent || lib.hasPrefix "ecdsa-" fileContent;
in
if lib.hasPrefix "ssh_host_" name then
# SSH host keys: ssh_host_ed25519_key.pub -> /etc/ssh/ssh_host_ed25519_key
"/etc/ssh/${baseName}"
else if name == "identity.pub" then
# Standard age identity: identity.pub -> /etc/age/identity.key
"/etc/age/identity.key"
else if isSSHKey then
# Other SSH keys (user keys, etc.): hunter_halloran_key.pub -> /etc/ssh/hunter_halloran_key
"/etc/ssh/${baseName}"
else
# Generic age keys: key.pub -> /etc/age/key
"/etc/age/${baseName}"
) pubKeyFiles;
# Determine which secrets apply to this host
applicableSecrets =
let
# Global secrets apply to all hosts
globalSecrets = if directories ? "global" then readSecretsFromDir "global" else { };
# Host-specific secrets
hostSecrets = if directories ? ${hostname} then readSecretsFromDir hostname else { };
in
globalSecrets // hostSecrets; # Host-specific secrets override global if same name
# Determine which identity paths (private keys) to use for decryption
identityPaths =
let
# Global identity paths (keys in global/ that all hosts can use)
globalPaths = if directories ? "global" then readIdentityPathsFromDir "global" else [ ];
# Host-specific identity paths
hostPaths = if directories ? ${hostname} then readIdentityPathsFromDir hostname else [ ];
# Default paths that NixOS/agenix use
defaultPaths = [
"/etc/ssh/ssh_host_rsa_key"
"/etc/ssh/ssh_host_ed25519_key"
"/etc/age/identity.key"
];
# Combine all paths and remove duplicates
allPaths = lib.unique (defaultPaths ++ globalPaths ++ hostPaths);
in
allPaths;
in
{
options.athenix.sw.secrets = {
enable = mkOption {
type = types.bool;
default = true;
description = ''
Enable automatic secret management using agenix.
Secrets are loaded from ./secrets based on directory structure:
- ./secrets/global/ -> All systems
- ./secrets/{hostname}/ -> Specific host only
Only .age encrypted files are loaded; .pub files are ignored.
'';
};
extraSecrets = mkOption {
type = types.attrsOf (
types.submodule {
options = {
file = mkOption {
type = types.path;
description = "Path to the encrypted secret file";
};
mode = mkOption {
type = types.str;
default = "0400";
description = "Permissions mode for the decrypted secret";
};
owner = mkOption {
type = types.str;
default = "root";
description = "Owner of the decrypted secret file";
};
group = mkOption {
type = types.str;
default = "root";
description = "Group of the decrypted secret file";
};
};
}
);
default = { };
description = ''
Additional secrets to define manually, beyond the auto-discovered ones.
Use this for secrets that need custom permissions or are stored elsewhere.
'';
example = lib.literalExpression ''
{
"my-secret" = {
file = ./secrets/custom/secret.age;
mode = "0440";
owner = "nginx";
group = "nginx";
};
}
'';
};
};
config = mkIf (cfg.enable && cfg.secrets.enable) {
# Auto-discovered secrets with default permissions
age.secrets = applicableSecrets // cfg.secrets.extraSecrets;
# Generate age identity files from SSH host keys at boot
# This is needed because age can't reliably use OpenSSH private keys directly
# Must run before agenix tries to decrypt secrets
system.activationScripts.convertSshToAge = {
deps = [
"users"
"groups"
];
text = ''
mkdir -p /etc/age
if [ -f /etc/ssh/ssh_host_ed25519_key ]; then
${pkgs.ssh-to-age}/bin/ssh-to-age -private-key -i /etc/ssh/ssh_host_ed25519_key > /etc/age/ssh_host_ed25519.age || true
chmod 600 /etc/age/ssh_host_ed25519.age 2>/dev/null || true
fi
if [ -f /etc/ssh/ssh_host_rsa_key ]; then
${pkgs.ssh-to-age}/bin/ssh-to-age -private-key -i /etc/ssh/ssh_host_rsa_key > /etc/age/ssh_host_rsa.age 2>/dev/null || true
chmod 600 /etc/age/ssh_host_rsa.age 2>/dev/null || true
fi
'';
};
# Add the converted age keys to identity paths (in addition to auto-discovered ones)
age.identityPaths = identityPaths ++ [
"/etc/age/ssh_host_ed25519.age"
"/etc/age/ssh_host_rsa.age"
];
# Optional: warn at evaluation time if no secrets were found for this host
warnings =
let
hasSecrets = (builtins.length (builtins.attrNames applicableSecrets)) > 0;
in
lib.optional (
!hasSecrets
) "No age-encrypted secrets found in ./secrets/global/ or ./secrets/${hostname}/";
};
}
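
Besides extraSecrets, the loader also honors an optional default.nix inside a secrets directory (the customConfigs import above), so an auto-discovered .age file can carry non-default ownership or permissions. A sketch of ./secrets/global/default.nix; the secret name and group are hypothetical:

{
  # Attribute names must match the .age file name without its extension.
  # Each attrset is merged over the auto-generated { file = ...; } entry.
  "wifi-password" = {
    mode = "0440";
    group = "networkmanager";
  };
}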

View File

@@ -7,37 +7,83 @@
inputs,
...
}:
lib.mkMerge [
(import ./kiosk-browser.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./net.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
]
with lib;
let
cfg = config.athenix.sw.stateless-kiosk;
in
{
options.athenix.sw.stateless-kiosk = mkOption {
type = lib.types.submodule {
options = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable stateless kiosk mode for diskless PXE boot systems.
Includes:
- Sway (Wayland compositor)
- Chromium in fullscreen kiosk mode
- MAC address-based URL routing
- Network-only boot (no local storage)
- Auto-start browser on boot
Recommended for: Assembly line stations, diskless kiosks, PXE boot displays
'';
example = true;
};
kioskUrl = mkOption {
type = lib.types.str;
default = "https://ha.factory.uga.edu";
description = ''
Default URL to display in the kiosk browser.
Note: For stateless-kiosk, MAC address-based routing may override this.
See sw/stateless-kiosk/mac-hostmap.nix for MAC-to-URL mappings.
'';
example = "https://homeassistant.lan:8123/lovelace/dashboard";
};
};
};
default = { };
description = "Stateless kiosk configuration (PXE boot, Sway, MAC-based routing).";
};
config = mkIf cfg.enable (mkMerge [
(import ./kiosk-browser.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./net.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
]);
}

View File

@@ -1,14 +1,13 @@
# This module configures Chromium for kiosk mode under Sway.
# It includes a startup script that determines the kiosk URL based on the machine's MAC address.
{
config,
lib,
pkgs,
inputs,
...
}:
let
macCaseBuilder = (import ./mac-hostmap.nix { inherit lib; }).macCaseBuilder;
macCaseBuilder = inputs.self.lib.macCaseBuilder;
macCases = macCaseBuilder {
varName = "STATION";
};

View File

@@ -1,28 +0,0 @@
# Shared MAC address to station mapping and case builder for stateless-kiosk modules
{ lib }:
let
hostmap = {
"00:e0:4c:46:0b:32" = "1";
"00:e0:4c:46:07:26" = "2";
"00:e0:4c:46:05:94" = "3";
"00:e0:4c:46:07:11" = "4";
"00:e0:4c:46:08:02" = "5";
"00:e0:4c:46:08:5c" = "6";
};
# macCaseBuilder: builds a shell case statement from a hostmap
# varName: the shell variable to assign
# prefix: optional string to prepend to the value (default: "")
# attrset: attribute set to use (default: hostmap)
macCaseBuilder =
{
varName,
prefix ? "",
attrset ? hostmap,
}:
lib.concatStringsSep "\n" (
lib.mapAttrsToList (mac: val: " ${mac}) ${varName}=${prefix}${val} ;;") attrset
);
in
{
inherit hostmap macCaseBuilder;
}
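
The helper now lives in the flake's own lib as inputs.self.lib.macCaseBuilder rather than in this local file; its interface is presumably unchanged from the definition above. A sketch of the call made by kiosk-browser.nix and the kind of string it produces:

{ inputs, ... }:
let
  # Assumes the flake lib kept the { varName, prefix ? "", attrset ? hostmap }
  # signature from the file removed above.
  stationCases = inputs.self.lib.macCaseBuilder { varName = "STATION"; };
in
# stationCases is a newline-joined string of shell case branches such as:
#   00:e0:4c:46:0b:32) STATION=1 ;;
#   00:e0:4c:46:07:26) STATION=2 ;;
stationCases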

View File

@@ -1,7 +1,4 @@
{
config,
lib,
pkgs,
...
}:
{

View File

@@ -1,11 +1,10 @@
{
config,
lib,
pkgs,
inputs,
...
}:
let
macCaseBuilder = (import ./mac-hostmap.nix { inherit lib; }).macCaseBuilder;
macCaseBuilder = inputs.self.lib.macCaseBuilder;
shellCases = macCaseBuilder {
varName = "NEW_HOST";
prefix = "nix-station";

View File

@@ -5,29 +5,74 @@
inputs,
...
}:
lib.mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./gsettings.nix {
inherit
config
lib
pkgs
inputs
;
})
]
with lib;
let
cfg = config.athenix.sw.tablet-kiosk;
in
{
options.athenix.sw.tablet-kiosk = mkOption {
type = lib.types.submodule {
options = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable tablet kiosk mode with touch-optimized interface.
Includes:
- Phosh mobile desktop environment
- Chromium in fullscreen kiosk mode
- On-screen keyboard (Squeekboard)
- Auto-login and auto-start browser
- Touch gesture support
- Optimized for Surface Pro tablets
Recommended for: Surface tablets, touchscreen kiosks, interactive displays
'';
example = true;
};
kioskUrl = mkOption {
type = lib.types.str;
default = "https://ha.factory.uga.edu";
description = ''
URL to display in the kiosk browser on startup.
The browser will automatically navigate to this URL in fullscreen mode.
'';
example = "https://dashboard.example.com";
};
};
};
default = { };
description = "Tablet kiosk configuration (Phosh, touch interface).";
};
config = mkIf cfg.enable (mkMerge [
(import ./programs.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./services.nix {
inherit
config
lib
pkgs
inputs
;
})
(import ./gsettings.nix {
inherit
config
lib
pkgs
inputs
;
})
]);
}
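
A sketch of a tablet host using this profile with its own dashboard URL (the URL is illustrative, matching the option's example above):

{ ... }:
{
  athenix.sw.tablet-kiosk = {
    enable = true;
    kioskUrl = "https://dashboard.example.com";
  };
}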

View File

@@ -155,7 +155,7 @@
--noerrdialogs \
--disable-session-crashed-bubble \
--disable-infobars \
${config.athenix.sw.kioskUrl}
${config.athenix.sw.tablet-kiosk.kioskUrl}
'';
};
};

View File

@@ -1,7 +1,6 @@
{
pkgs,
config,
osConfig,
lib,
...
}:

View File

@@ -1,511 +1,524 @@
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [
python3
git
(pkgs.writeShellScriptBin "update-ref" ''
set -euo pipefail
config,
lib,
pkgs,
...
}:
RED='\033[31m'; YEL='\033[33m'; NC='\033[0m'
die() { printf "''${RED}error:''${NC} %s\n" "$*" >&2; exit 2; }
warn() { printf "''${YEL}warning:''${NC} %s\n" "$*" >&2; }
with lib;
usage() {
cat >&2 <<'EOF'
usage:
update-ref [-R PATH|--athenix-repo=PATH] [-b BRANCH|--athenix-branch=BRANCH]
[-m "msg"|--message "msg"]
[-p[=false] [remote[=URL]]|--push[=false] [remote[=URL]]]
[--make-local|-l] [--make-remote|-r] [--ssh]
user=<username> | system=<device-type>:<hostkey>
EOF
exit 2
}
let
cfg = config.athenix.sw;
in
{
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [
python3
git
(pkgs.writeShellScriptBin "update-ref" ''
set -euo pipefail
# --- must be in a git repo (current dir) ---
git rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "This directory is not a git project"
CUR_REPO_ROOT="$(git rev-parse --show-toplevel)"
CUR_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
RED='\033[31m'; YEL='\033[33m'; NC='\033[0m'
die() { printf "''${RED}error:''${NC} %s\n" "$*" >&2; exit 2; }
warn() { printf "''${YEL}warning:''${NC} %s\n" "$*" >&2; }
# --- athenix checkout (working tree) ---
ATHENIX_DIR="$HOME/athenix"
ATHENIX_BRANCH=""
usage() {
cat >&2 <<'EOF'
usage:
update-ref [-R PATH|--athenix-repo=PATH] [-b BRANCH|--athenix-branch=BRANCH]
[-m "msg"|--message "msg"]
[-p[=false] [remote[=URL]]|--push[=false] [remote[=URL]]]
[--make-local|-l] [--make-remote|-r] [--ssh]
user=<username> | system=<device-type>:<hostkey>
EOF
exit 2
}
# --- current repo automation ---
COMMIT_MSG=""
PUSH_SPEC=""
# --- must be in a git repo (current dir) ---
git rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "This directory is not a git project"
CUR_REPO_ROOT="$(git rev-parse --show-toplevel)"
CUR_BRANCH="$(git rev-parse --abbrev-ref HEAD)"
# --- push / url mode ---
PUSH_SET=0
DO_PUSH=0
MODE_FORCE="" # "", local, remote
# --- athenix checkout (working tree) ---
ATHENIX_DIR="$HOME/athenix"
ATHENIX_BRANCH=""
TARGET=""
# --- current repo automation ---
COMMIT_MSG=""
PUSH_SPEC=""
is_remote_url() {
# https://, http://, ssh://, or scp-style git@host:org/repo
printf "%s" "$1" | grep -qE '^(https?|ssh)://|^[^/@:]+@[^/:]+:'
}
# --- push / url mode ---
PUSH_SET=0
DO_PUSH=0
MODE_FORCE="" # "", local, remote
derive_full_hostname() {
devtype="$1"; hostkey="$2"
if printf "%s" "$hostkey" | grep -q '-' || printf "%s" "$hostkey" | grep -q "^$devtype"; then
printf "%s" "$hostkey"
elif printf "%s" "$hostkey" | grep -qE '^[0-9]+$'; then
printf "%s" "$devtype$hostkey"
else
printf "%s" "$devtype-$hostkey"
fi
}
TARGET=""
extract_existing_fetch_url() {
# args: mode file username key use_ssh
python3 - "$1" "$2" "$3" "$4" "$5" <<'PY'
import sys, re, pathlib
mode, file, username, key, use_ssh = sys.argv[1:6]
t = pathlib.Path(file).read_text()
is_remote_url() {
# https://, http://, ssh://, or scp-style git@host:org/repo
printf "%s" "$1" | grep -qE '^(https?|ssh)://|^[^/@:]+@[^/:]+:'
}
def url_from_block(block: str) -> str:
if not block:
return ""
m = re.search(r'url\s*=\s*"([^"]+)"\s*;', block)
url = m.group(1) if m else ""
derive_full_hostname() {
devtype="$1"; hostkey="$2"
if printf "%s" "$hostkey" | grep -q '-' || printf "%s" "$hostkey" | grep -q "^$devtype"; then
printf "%s" "$hostkey"
elif printf "%s" "$hostkey" | grep -qE '^[0-9]+$'; then
printf "%s" "$devtype$hostkey"
else
printf "%s" "$devtype-$hostkey"
fi
}
if use_ssh == "true":
return url
extract_existing_fetch_url() {
# args: mode file username key use_ssh
python3 - "$1" "$2" "$3" "$4" "$5" <<'PY'
import sys, re, pathlib
mode, file, username, key, use_ssh = sys.argv[1:6]
t = pathlib.Path(file).read_text()
# Already https
if url.startswith("https://"):
def url_from_block(block: str) -> str:
if not block:
return ""
m = re.search(r'url\s*=\s*"([^"]+)"\s*;', block)
url = m.group(1) if m else ""
if use_ssh == "true":
return url
# ssh://git@host/org/repo.git
m = re.match(r"ssh://(?:.+?)@([^/]+)/(.+)", url)
if m:
host, path = m.groups()
return f"https://{host}/{path}"
# Already https
if url.startswith("https://"):
return url
# git@host:org/repo.git
m = re.match(r"(?:.+?)@([^:]+):(.+)", url)
if m:
host, path = m.groups()
return f"https://{host}/{path}"
# ssh://git@host/org/repo.git
m = re.match(r"ssh://(?:.+?)@([^/]+)/(.+)", url)
if m:
host, path = m.groups()
return f"https://{host}/{path}"
# If you gave me something cursed
raise ValueError(f"Unrecognized SSH git URL format: {url}")
# git@host:org/repo.git
m = re.match(r"(?:.+?)@([^:]+):(.+)", url)
if m:
host, path = m.groups()
return f"https://{host}/{path}"
# If you gave me something cursed
raise ValueError(f"Unrecognized SSH git URL format: {url}")
if mode == "user":
m = re.search(r'(?s)\n\s*' + re.escape(username) + r'\.external\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t)
block = m.group(1) if m else ""
print(url_from_block(block))
else:
m = re.search(r'(?s)\n\s*"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t)
block = m.group(1) if m else ""
print(url_from_block(block))
PY
}
if mode == "user":
m = re.search(r'(?s)\n\s*' + re.escape(username) + r'\.external\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t)
block = m.group(1) if m else ""
print(url_from_block(block))
else:
m = re.search(r'(?s)\n\s*"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{(.*?)\n\s*\};', t)
block = m.group(1) if m else ""
print(url_from_block(block))
PY
}
# --- parse args ---
while [ "$#" -gt 0 ]; do
case "$1" in
user=*|system=*)
[ -z "$TARGET" ] || die "Only one subcommand allowed (user=... or system=...)"
TARGET="$1"; shift
;;
--athenix-repo=*)
ATHENIX_DIR="''${1#*=}"; shift
;;
-R)
[ "$#" -ge 2 ] || usage
ATHENIX_DIR="$2"; shift 2
;;
--athenix-branch=*)
ATHENIX_BRANCH="''${1#*=}"; shift
;;
-b)
[ "$#" -ge 2 ] || usage
ATHENIX_BRANCH="$2"; shift 2
;;
-m|--message)
[ "$#" -ge 2 ] || usage
COMMIT_MSG="$2"; shift 2
;;
# --- parse args ---
while [ "$#" -gt 0 ]; do
case "$1" in
user=*|system=*)
[ -z "$TARGET" ] || die "Only one subcommand allowed (user=... or system=...)"
TARGET="$1"; shift
;;
--athenix-repo=*)
ATHENIX_DIR="''${1#*=}"; shift
;;
-R)
[ "$#" -ge 2 ] || usage
ATHENIX_DIR="$2"; shift 2
;;
--athenix-branch=*)
ATHENIX_BRANCH="''${1#*=}"; shift
;;
-b)
[ "$#" -ge 2 ] || usage
ATHENIX_BRANCH="$2"; shift 2
;;
-m|--message)
[ "$#" -ge 2 ] || usage
COMMIT_MSG="$2"; shift 2
;;
-p|--push)
PUSH_SET=1
DO_PUSH=1
PUSH_SPEC=""
-p|--push)
PUSH_SET=1
DO_PUSH=1
PUSH_SPEC=""
# If there is a next token, only consume it if it is a remote spec
# and not another flag or the subcommand.
if [ "$#" -ge 2 ]; then
nxt="$2"
# If there is a next token, only consume it if it is a remote spec
# and not another flag or the subcommand.
if [ "$#" -ge 2 ]; then
nxt="$2"
if printf "%s" "$nxt" | grep -qE '^(user=|system=)'; then
# next token is the subcommand; don't consume it
shift
elif printf "%s" "$nxt" | grep -qE '^-'; then
# next token is another flag; don't consume it
shift
elif printf "%s" "$nxt" | grep -qE '^[A-Za-z0-9._-]+$'; then
# remote name
PUSH_SPEC="$nxt"
shift 2
elif printf "%s" "$nxt" | grep -qE '^[A-Za-z0-9._-]+=.+$'; then
# remote=URL
PUSH_SPEC="$nxt"
shift 2
if printf "%s" "$nxt" | grep -qE '^(user=|system=)'; then
# next token is the subcommand; don't consume it
shift
elif printf "%s" "$nxt" | grep -qE '^-'; then
# next token is another flag; don't consume it
shift
elif printf "%s" "$nxt" | grep -qE '^[A-Za-z0-9._-]+$'; then
# remote name
PUSH_SPEC="$nxt"
shift 2
elif printf "%s" "$nxt" | grep -qE '^[A-Za-z0-9._-]+=.+$'; then
# remote=URL
PUSH_SPEC="$nxt"
shift 2
else
# unknown token; treat as not-a-push-spec and don't consume it
shift
fi
else
# unknown token; treat as not-a-push-spec and don't consume it
shift
fi
else
;;
-p=*|--push=*)
PUSH_SET=1
val="''${1#*=}"
case "$val" in
false|0|no|off) DO_PUSH=0 ;;
true|1|yes|on|"") DO_PUSH=1 ;;
*) die "Invalid value for --push: $val (use true/false)" ;;
esac
shift
;;
--make-local|-l) MODE_FORCE="local"; shift ;;
--make-remote|-r) MODE_FORCE="remote"; shift ;;
--ssh) USE_SSH="true"; shift ;;
-h|--help) usage ;;
*) die "Unknown argument: $1" ;;
esac
done
[ -n "$TARGET" ] || die "Missing required subcommand: user=<username> or system=<device-type>:<hostkey>"
# --- validate athenix working tree path ---
[ -d "$ATHENIX_DIR" ] || die "$ATHENIX_DIR does not exist"
git -C "$ATHENIX_DIR" rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "$ATHENIX_DIR is not a git project (athenix checkout)"
# --- -b behavior: fork/switch athenix working tree into branch ---
if [ -n "$ATHENIX_BRANCH" ]; then
ATH_CUR_BRANCH="$(git -C "$ATHENIX_DIR" rev-parse --abbrev-ref HEAD)"
if [ "$ATH_CUR_BRANCH" != "$ATHENIX_BRANCH" ]; then
if git -C "$ATHENIX_DIR" show-ref --verify --quiet "refs/heads/$ATHENIX_BRANCH"; then
warn "Branch '$ATHENIX_BRANCH' already exists in $ATHENIX_DIR."
warn "Delete and recreate it from current branch '$ATH_CUR_BRANCH' state? [y/N] "
read -r ans || true
case "''${ans:-N}" in
y|Y|yes|YES)
git -C "$ATHENIX_DIR" branch -D "$ATHENIX_BRANCH"
git -C "$ATHENIX_DIR" switch -c "$ATHENIX_BRANCH"
;;
*)
git -C "$ATHENIX_DIR" switch "$ATHENIX_BRANCH"
;;
esac
else
git -C "$ATHENIX_DIR" switch -c "$ATHENIX_BRANCH"
fi
;;
-p=*|--push=*)
PUSH_SET=1
val="''${1#*=}"
case "$val" in
false|0|no|off) DO_PUSH=0 ;;
true|1|yes|on|"") DO_PUSH=1 ;;
*) die "Invalid value for --push: $val (use true/false)" ;;
esac
shift
;;
--make-local|-l) MODE_FORCE="local"; shift ;;
--make-remote|-r) MODE_FORCE="remote"; shift ;;
--ssh) USE_SSH="true"; shift ;;
-h|--help) usage ;;
*) die "Unknown argument: $1" ;;
esac
done
[ -n "$TARGET" ] || die "Missing required subcommand: user=<username> or system=<device-type>:<hostkey>"
# --- validate athenix working tree path ---
[ -d "$ATHENIX_DIR" ] || die "$ATHENIX_DIR does not exist"
git -C "$ATHENIX_DIR" rev-parse --is-inside-work-tree >/dev/null 2>&1 || die "$ATHENIX_DIR is not a git project (athenix checkout)"
# --- -b behavior: fork/switch athenix working tree into branch ---
if [ -n "$ATHENIX_BRANCH" ]; then
ATH_CUR_BRANCH="$(git -C "$ATHENIX_DIR" rev-parse --abbrev-ref HEAD)"
if [ "$ATH_CUR_BRANCH" != "$ATHENIX_BRANCH" ]; then
if git -C "$ATHENIX_DIR" show-ref --verify --quiet "refs/heads/$ATHENIX_BRANCH"; then
warn "Branch '$ATHENIX_BRANCH' already exists in $ATHENIX_DIR."
warn "Delete and recreate it from current branch '$ATH_CUR_BRANCH' state? [y/N] "
read -r ans || true
case "''${ans:-N}" in
y|Y|yes|YES)
git -C "$ATHENIX_DIR" branch -D "$ATHENIX_BRANCH"
git -C "$ATHENIX_DIR" switch -c "$ATHENIX_BRANCH"
;;
*)
git -C "$ATHENIX_DIR" switch "$ATHENIX_BRANCH"
;;
esac
else
git -C "$ATHENIX_DIR" switch -c "$ATHENIX_BRANCH"
fi
fi
fi
# --- target file + identifiers ---
MODE=""; FILE=""; USERNAME=""; DEVTYPE=""; HOSTKEY=""
case "$TARGET" in
user=*)
MODE="user"
USERNAME="''${TARGET#user=}"
[ -n "$USERNAME" ] || die "user=<username>: username missing"
FILE="$ATHENIX_DIR/users.nix"
;;
system=*)
MODE="system"
RHS="''${TARGET#system=}"
printf "%s" "$RHS" | grep -q ':' || die "system=... must be system=<device-type>:<hostkey>"
DEVTYPE="''${RHS%%:*}"
HOSTKEY="''${RHS#*:}"
[ -n "$DEVTYPE" ] || die "system=<device-type>:<hostkey>: device-type missing"
[ -n "$HOSTKEY" ] || die "system=<device-type>:<hostkey>: hostkey missing"
FILE="$ATHENIX_DIR/inventory.nix"
;;
esac
[ -f "$FILE" ] || die "File not found: $FILE"
# --- target file + identifiers ---
MODE=""; FILE=""; USERNAME=""; DEVTYPE=""; HOSTKEY=""
case "$TARGET" in
user=*)
MODE="user"
USERNAME="''${TARGET#user=}"
[ -n "$USERNAME" ] || die "user=<username>: username missing"
FILE="$ATHENIX_DIR/users.nix"
;;
system=*)
MODE="system"
RHS="''${TARGET#system=}"
printf "%s" "$RHS" | grep -q ':' || die "system=... must be system=<device-type>:<hostkey>"
DEVTYPE="''${RHS%%:*}"
HOSTKEY="''${RHS#*:}"
[ -n "$DEVTYPE" ] || die "system=<device-type>:<hostkey>: device-type missing"
[ -n "$HOSTKEY" ] || die "system=<device-type>:<hostkey>: hostkey missing"
FILE="$ATHENIX_DIR/inventory.nix"
;;
esac
[ -f "$FILE" ] || die "File not found: $FILE"
# --- push default based on existing entry url in the target file ---
EXISTING_URL=""
ENTRY_EXISTS=0
if [ "$MODE" = "user" ]; then
EXISTING_URL="$(extract_existing_fetch_url user "$FILE" "$USERNAME" "" "false")"
[ -n "$EXISTING_URL" ] && ENTRY_EXISTS=1 || true
else
FULL="$(derive_full_hostname "$DEVTYPE" "$HOSTKEY")"
EXISTING_URL="$(extract_existing_fetch_url system "$FILE" "" "$HOSTKEY")"
if [ -n "$EXISTING_URL" ]; then
ENTRY_EXISTS=1
elif [ "$FULL" != "$HOSTKEY" ]; then
EXISTING_URL="$(extract_existing_fetch_url system "$FILE" "" "$FULL")"
# --- push default based on existing entry url in the target file ---
EXISTING_URL=""
ENTRY_EXISTS=0
if [ "$MODE" = "user" ]; then
EXISTING_URL="$(extract_existing_fetch_url user "$FILE" "$USERNAME" "" "false")"
[ -n "$EXISTING_URL" ] && ENTRY_EXISTS=1 || true
fi
fi
if [ "$PUSH_SET" -eq 0 ]; then
if [ "$ENTRY_EXISTS" -eq 1 ] && is_remote_url "$EXISTING_URL"; then
DO_PUSH=1
else
DO_PUSH=0
[ "$MODE_FORCE" = "remote" ] && DO_PUSH=1 || true
FULL="$(derive_full_hostname "$DEVTYPE" "$HOSTKEY")"
EXISTING_URL="$(extract_existing_fetch_url system "$FILE" "" "$HOSTKEY")"
if [ -n "$EXISTING_URL" ]; then
ENTRY_EXISTS=1
elif [ "$FULL" != "$HOSTKEY" ]; then
EXISTING_URL="$(extract_existing_fetch_url system "$FILE" "" "$FULL")"
[ -n "$EXISTING_URL" ] && ENTRY_EXISTS=1 || true
fi
fi
fi
if [ "$MODE_FORCE" = "local" ] && [ "$PUSH_SET" -eq 0 ]; then
DO_PUSH=0
fi
# --- if current repo dirty, prompt ---
if [ -n "$(git status --porcelain)" ]; then
warn "This branch has untracked or uncommitted changes. Would you like to add, commit''${DO_PUSH:+, and push}? [y/N] "
read -r ans || true
case "''${ans:-N}" in
y|Y|yes|YES)
git add -A
if ! git diff --cached --quiet; then
if [ -n "$COMMIT_MSG" ]; then git commit -m "$COMMIT_MSG"; else git commit; fi
else
warn "No staged changes to commit."
fi
;;
*) warn "Proceeding without committing. (rev will be last committed HEAD.)" ;;
esac
fi
# --- push current repo if requested ---
PUSH_REMOTE_URL=""
if [ "$DO_PUSH" -eq 1 ]; then
if [ -n "$PUSH_SPEC" ]; then
if printf "%s" "$PUSH_SPEC" | grep -q '='; then
REM_NAME="''${PUSH_SPEC%%=*}"
REM_URL="''${PUSH_SPEC#*=}"
[ -n "$REM_NAME" ] || die "--push remote-name=URL: remote-name missing"
[ -n "$REM_URL" ] || die "--push remote-name=URL: URL missing"
if git remote get-url "$REM_NAME" >/dev/null 2>&1; then
git remote set-url "$REM_NAME" "$REM_URL"
else
git remote add "$REM_NAME" "$REM_URL"
fi
git push -u "$REM_NAME" "$CUR_BRANCH"
PUSH_REMOTE_URL="$REM_URL"
if [ "$PUSH_SET" -eq 0 ]; then
if [ "$ENTRY_EXISTS" -eq 1 ] && is_remote_url "$EXISTING_URL"; then
DO_PUSH=1
else
REM_NAME="$PUSH_SPEC"
git push -u "$REM_NAME" "$CUR_BRANCH"
PUSH_REMOTE_URL="$(git remote get-url "$REM_NAME")"
DO_PUSH=0
[ "$MODE_FORCE" = "remote" ] && DO_PUSH=1 || true
fi
else
if ! git rev-parse --abbrev-ref --symbolic-full-name @{u} >/dev/null 2>&1; then
die "No upstream is set. Set a default upstream with \"git branch -u <remote>/<remote_branch_name>\""
fi
git push
UPSTREAM_REMOTE="$(git rev-parse --abbrev-ref --symbolic-full-name @{u} | cut -d/ -f1)"
PUSH_REMOTE_URL="$(git remote get-url "$UPSTREAM_REMOTE")"
fi
fi
if [ "$MODE_FORCE" = "local" ] && [ "$PUSH_SET" -eq 0 ]; then
DO_PUSH=0
fi
CUR_REV="$(git -C "$CUR_REPO_ROOT" rev-parse HEAD)"
# --- if current repo dirty, prompt ---
if [ -n "$(git status --porcelain)" ]; then
warn "This branch has untracked or uncommitted changes. Would you like to add, commit''${DO_PUSH:+, and push}? [y/N] "
read -r ans || true
case "''${ans:-N}" in
y|Y|yes|YES)
git add -A
if ! git diff --cached --quiet; then
if [ -n "$COMMIT_MSG" ]; then git commit -m "$COMMIT_MSG"; else git commit; fi
else
warn "No staged changes to commit."
fi
;;
*) warn "Proceeding without committing. (rev will be last committed HEAD.)" ;;
esac
fi
# --- choose URL to write into fetchGit ---
if [ "$MODE_FORCE" = "local" ]; then
FETCH_URL="file://$CUR_REPO_ROOT"
elif [ "$MODE_FORCE" = "remote" ]; then
# --- push current repo if requested ---
PUSH_REMOTE_URL=""
if [ "$DO_PUSH" -eq 1 ]; then
FETCH_URL="$PUSH_REMOTE_URL"
elif [ "$ENTRY_EXISTS" -eq 1 ] && [ -n "$EXISTING_URL" ] && is_remote_url "$EXISTING_URL"; then
FETCH_URL="$EXISTING_URL"
else
CUR_ORIGIN="$(git remote get-url origin 2>/dev/null || true)"
[ -n "$CUR_ORIGIN" ] && is_remote_url "$CUR_ORIGIN" || die "--make-remote requires a remote url (set origin or use -p remote=URL)"
FETCH_URL="$CUR_ORIGIN"
if [ -n "$PUSH_SPEC" ]; then
if printf "%s" "$PUSH_SPEC" | grep -q '='; then
REM_NAME="''${PUSH_SPEC%%=*}"
REM_URL="''${PUSH_SPEC#*=}"
[ -n "$REM_NAME" ] || die "--push remote-name=URL: remote-name missing"
[ -n "$REM_URL" ] || die "--push remote-name=URL: URL missing"
if git remote get-url "$REM_NAME" >/dev/null 2>&1; then
git remote set-url "$REM_NAME" "$REM_URL"
else
git remote add "$REM_NAME" "$REM_URL"
fi
git push -u "$REM_NAME" "$CUR_BRANCH"
PUSH_REMOTE_URL="$REM_URL"
else
REM_NAME="$PUSH_SPEC"
git push -u "$REM_NAME" "$CUR_BRANCH"
PUSH_REMOTE_URL="$(git remote get-url "$REM_NAME")"
fi
else
if ! git rev-parse --abbrev-ref --symbolic-full-name @{u} >/dev/null 2>&1; then
die "No upstream is set. Set a default upstream with \"git branch -u <remote>/<remote_branch_name>\""
fi
git push
UPSTREAM_REMOTE="$(git rev-parse --abbrev-ref --symbolic-full-name @{u} | cut -d/ -f1)"
PUSH_REMOTE_URL="$(git remote get-url "$UPSTREAM_REMOTE")"
fi
fi
else
if [ "$DO_PUSH" -eq 1 ]; then FETCH_URL="$PUSH_REMOTE_URL"; else FETCH_URL="file://$CUR_REPO_ROOT"; fi
fi
# --- rewrite users.nix or inventory.nix ---
python3 - "$MODE" "$FILE" "$FETCH_URL" "$CUR_REV" "$USERNAME" "$DEVTYPE" "$HOSTKEY" <<'PY'
import sys, re, pathlib
CUR_REV="$(git -C "$CUR_REPO_ROOT" rev-parse HEAD)"
mode = sys.argv[1]
path = pathlib.Path(sys.argv[2])
fetch_url = sys.argv[3]
rev = sys.argv[4]
username = sys.argv[5]
devtype = sys.argv[6]
hostkey = sys.argv[7]
text = path.read_text()
# --- choose URL to write into fetchGit ---
if [ "$MODE_FORCE" = "local" ]; then
FETCH_URL="file://$CUR_REPO_ROOT"
elif [ "$MODE_FORCE" = "remote" ]; then
if [ "$DO_PUSH" -eq 1 ]; then
FETCH_URL="$PUSH_REMOTE_URL"
elif [ "$ENTRY_EXISTS" -eq 1 ] && [ -n "$EXISTING_URL" ] && is_remote_url "$EXISTING_URL"; then
FETCH_URL="$EXISTING_URL"
else
CUR_ORIGIN="$(git remote get-url origin 2>/dev/null || true)"
[ -n "$CUR_ORIGIN" ] && is_remote_url "$CUR_ORIGIN" || die "--make-remote requires a remote url (set origin or use -p remote=URL)"
FETCH_URL="$CUR_ORIGIN"
fi
else
if [ "$DO_PUSH" -eq 1 ]; then FETCH_URL="$PUSH_REMOTE_URL"; else FETCH_URL="file://$CUR_REPO_ROOT"; fi
fi
def find_matching_brace(s: str, start: int) -> int:
depth = 0
i = start
in_str = False
while i < len(s):
ch = s[i]
if in_str:
if ch == '\\':
i += 2
continue
if ch == '"':
in_str = False
i += 1
continue
if ch == '"':
in_str = True
i += 1
continue
if ch == '{':
depth += 1
elif ch == '}':
depth -= 1
if depth == 0:
return i
i += 1
raise ValueError("Could not find matching '}'")
# --- rewrite users.nix or inventory.nix ---
python3 - "$MODE" "$FILE" "$FETCH_URL" "$CUR_REV" "$USERNAME" "$DEVTYPE" "$HOSTKEY" <<'PY'
import sys, re, pathlib
def mk_fetch(entry_indent: str) -> str:
# entry_indent is indentation for the whole `"key" = <here>;` line.
# The attrset contents should be indented one level deeper.
inner = entry_indent + " "
return (
'builtins.fetchGit {\n'
f'{inner}url = "{fetch_url}";\n'
f'{inner}rev = "{rev}";\n'
f'{inner}submodules = true;\n'
f'{entry_indent}}}'
)
mode = sys.argv[1]
path = pathlib.Path(sys.argv[2])
fetch_url = sys.argv[3]
rev = sys.argv[4]
username = sys.argv[5]
devtype = sys.argv[6]
hostkey = sys.argv[7]
text = path.read_text()
def full_hostname(devtype: str, hostkey: str) -> str:
if hostkey.startswith(devtype) or "-" in hostkey:
return hostkey
if hostkey.isdigit():
return f"{devtype}{hostkey}"
return f"{devtype}-{hostkey}"
def find_matching_brace(s: str, start: int) -> int:
depth = 0
i = start
in_str = False
while i < len(s):
ch = s[i]
if in_str:
if ch == '\\':
i += 2
continue
if ch == '"':
in_str = False
i += 1
continue
if ch == '"':
in_str = True
i += 1
continue
if ch == '{':
depth += 1
elif ch == '}':
depth -= 1
if depth == 0:
return i
i += 1
raise ValueError("Could not find matching '}'")
def update_user(t: str) -> str:
mblock = re.search(r"(?s)athenix\.users\s*=\s*\{(.*?)\n\s*\};", t)
if not mblock:
raise SystemExit("error: could not locate `athenix.users = { ... };` block")
# locate the full span of the users block to edit inside it
# (re-find with groups for reconstruction)
m2 = re.search(r"(?s)(athenix\.users\s*=\s*\{)(.*?)(\n\s*\};)", t)
head, body, tail = m2.group(1), m2.group(2), m2.group(3)
entry_re = re.search(
r"(?s)(\n[ \t]*" + re.escape(username) + r"\.external\s*=\s*)builtins\.fetchGit\s*\{",
body
def mk_fetch(entry_indent: str) -> str:
# entry_indent is indentation for the whole `"key" = <here>;` line.
# The attrset contents should be indented one level deeper.
inner = entry_indent + " "
return (
'builtins.fetchGit {\n'
f'{inner}url = "{fetch_url}";\n'
f'{inner}rev = "{rev}";\n'
f'{inner}submodules = true;\n'
f'{entry_indent}}}'
)
if entry_re:
brace = body.rfind("{", 0, entry_re.end())
end = find_matching_brace(body, brace)
semi = re.match(r"\s*;", body[end+1:])
if not semi:
raise SystemExit("error: expected ';' after fetchGit attrset")
semi_end = end + 1 + semi.end()
line_start = body.rfind("\n", 0, entry_re.start()) + 1
indent = re.match(r"[ \t]*", body[line_start:entry_re.start()]).group(0)
def full_hostname(devtype: str, hostkey: str) -> str:
if hostkey.startswith(devtype) or "-" in hostkey:
return hostkey
if hostkey.isdigit():
return f"{devtype}{hostkey}"
return f"{devtype}-{hostkey}"
new_body = body[:entry_re.start()] + entry_re.group(1) + mk_fetch(indent) + ";" + body[semi_end:]
else:
indent = " "
new_body = body + f"\n{indent}{username}.external = {mk_fetch(indent)};\n"
def update_user(t: str) -> str:
mblock = re.search(r"(?s)athenix\.users\s*=\s*\{(.*?)\n\s*\};", t)
if not mblock:
raise SystemExit("error: could not locate `athenix.users = { ... };` block")
return t[:m2.start()] + head + new_body + tail + t[m2.end():]
# locate the full span of the users block to edit inside it
# (re-find with groups for reconstruction)
m2 = re.search(r"(?s)(athenix\.users\s*=\s*\{)(.*?)(\n\s*\};)", t)
head, body, tail = m2.group(1), m2.group(2), m2.group(3)
def update_system(t: str) -> str:
# Find devtype block robustly: start-of-file or newline.
m = re.search(r"(?s)(^|\n)[ \t]*" + re.escape(devtype) + r"\s*=\s*\{", t)
if not m:
raise SystemExit(f"error: could not locate `{devtype} = {{ ... }};` block")
dev_open = t.find("{", m.end() - 1)
dev_close = find_matching_brace(t, dev_open)
dev = t[dev_open:dev_close+1]
# Find devices attrset inside dev
dm = re.search(r"(?s)(^|\n)[ \t]*devices\s*=\s*\{", dev)
if not dm:
raise SystemExit(f"error: could not locate `devices = {{ ... }};` inside `{devtype}`")
devices_open = dev.find("{", dm.end() - 1)
devices_close = find_matching_brace(dev, devices_open)
devices = dev[devices_open:devices_close+1]
# indentation for entries in devices
# find indent of the 'devices' line, then add 2 spaces
candidates = [hostkey, full_hostname(devtype, hostkey)]
seen = set()
candidates = [c for c in candidates if not (c in seen or seen.add(c))]
for key in candidates:
entry = re.search(
r'(?s)\n([ ]*)"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{',
devices
)
if entry:
entry_indent = entry.group(1)
# find the '{' we matched
brace = devices.find("{", entry.end() - 1)
end = find_matching_brace(devices, brace)
semi = re.match(r"\s*;", devices[end+1:])
entry_re = re.search(
r"(?s)(\n[ \t]*" + re.escape(username) + r"\.external\s*=\s*)builtins\.fetchGit\s*\{",
body
)
if entry_re:
brace = body.rfind("{", 0, entry_re.end())
end = find_matching_brace(body, brace)
semi = re.match(r"\s*;", body[end+1:])
if not semi:
raise SystemExit("error: expected ';' after fetchGit attrset in devices")
raise SystemExit("error: expected ';' after fetchGit attrset")
semi_end = end + 1 + semi.end()
# Reconstruct the prefix: newline + indent + "key" =
prefix = f'\n{entry_indent}"{key}" = '
line_start = body.rfind("\n", 0, entry_re.start()) + 1
indent = re.match(r"[ \t]*", body[line_start:entry_re.start()]).group(0)
new_devices = (
devices[:entry.start()]
+ prefix
+ mk_fetch(entry_indent)
+ ";"
+ devices[semi_end:]
new_body = body[:entry_re.start()] + entry_re.group(1) + mk_fetch(indent) + ";" + body[semi_end:]
else:
indent = " "
new_body = body + f"\n{indent}{username}.external = {mk_fetch(indent)};\n"
return t[:m2.start()] + head + new_body + tail + t[m2.end():]
def update_system(t: str) -> str:
# Find devtype block robustly: start-of-file or newline.
m = re.search(r"(?s)(^|\n)[ \t]*" + re.escape(devtype) + r"\s*=\s*\{", t)
if not m:
raise SystemExit(f"error: could not locate `{devtype} = {{ ... }};` block")
dev_open = t.find("{", m.end() - 1)
dev_close = find_matching_brace(t, dev_open)
dev = t[dev_open:dev_close+1]
# Find devices attrset inside dev
dm = re.search(r"(?s)(^|\n)[ \t]*devices\s*=\s*\{", dev)
if not dm:
raise SystemExit(f"error: could not locate `devices = {{ ... }};` inside `{devtype}`")
devices_open = dev.find("{", dm.end() - 1)
devices_close = find_matching_brace(dev, devices_open)
devices = dev[devices_open:devices_close+1]
# indentation for entries in devices
# find indent of the 'devices' line, then add 2 spaces
candidates = [hostkey, full_hostname(devtype, hostkey)]
seen = set()
candidates = [c for c in candidates if not (c in seen or seen.add(c))]
for key in candidates:
entry = re.search(
r'(?s)\n([ ]*)"' + re.escape(key) + r'"\s*=\s*builtins\.fetchGit\s*\{',
devices
)
new_dev = dev[:devices_open] + new_devices + dev[devices_close+1:]
if entry:
entry_indent = entry.group(1)
return t[:dev_open] + new_dev + t[dev_close+1:]
# find the '{' we matched
brace = devices.find("{", entry.end() - 1)
end = find_matching_brace(devices, brace)
# Not found: append into devices (exact hostkey)
# Indent for new entries: take indent of the closing '}' of devices, add 2 spaces.
close_line_start = devices.rfind("\n", 0, len(devices)-1) + 1
close_indent = re.match(r"[ ]*", devices[close_line_start:]).group(0)
entry_indent = close_indent + " "
semi = re.match(r"\s*;", devices[end+1:])
if not semi:
raise SystemExit("error: expected ';' after fetchGit attrset in devices")
semi_end = end + 1 + semi.end()
insertion = f'\n{entry_indent}"{hostkey}" = {mk_fetch(entry_indent)};\n'
new_devices = devices[:-1].rstrip() + insertion + close_indent + "}"
new_dev = dev[:devices_open] + new_devices + dev[devices_close+1:]
return t[:dev_open] + new_dev + t[dev_close+1:]
# Reconstruct the prefix: newline + indent + "key" =
prefix = f'\n{entry_indent}"{key}" = '
if mode == "user":
out = update_user(text)
elif mode == "system":
out = update_system(text)
else:
raise SystemExit("error: unknown mode")
new_devices = (
devices[:entry.start()]
+ prefix
+ mk_fetch(entry_indent)
+ ";"
+ devices[semi_end:]
)
new_dev = dev[:devices_open] + new_devices + dev[devices_close+1:]
path.write_text(out)
PY
return t[:dev_open] + new_dev + t[dev_close+1:]
cd "$ATHENIX_DIR"
nix fmt **/*.nix
cd "$CUR_REPO_ROOT"
# Not found: append into devices (exact hostkey)
# Indent for new entries: take indent of the closing '}' of devices, add 2 spaces.
close_line_start = devices.rfind("\n", 0, len(devices)-1) + 1
close_indent = re.match(r"[ ]*", devices[close_line_start:]).group(0)
entry_indent = close_indent + " "
printf "updated %s\n" "$FILE" >&2
printf " url = %s\n" "$FETCH_URL" >&2
printf " rev = %s\n" "$CUR_REV" >&2
'')
];
insertion = f'\n{entry_indent}"{hostkey}" = {mk_fetch(entry_indent)};\n'
new_devices = devices[:-1].rstrip() + insertion + close_indent + "}"
new_dev = dev[:devices_open] + new_devices + dev[devices_close+1:]
return t[:dev_open] + new_dev + t[dev_close+1:]
if mode == "user":
out = update_user(text)
elif mode == "system":
out = update_system(text)
else:
raise SystemExit("error: unknown mode")
path.write_text(out)
PY
cd "$ATHENIX_DIR"
nix fmt **/*.nix
cd "$CUR_REPO_ROOT"
printf "updated %s\n" "$FILE" >&2
printf " url = %s\n" "$FETCH_URL" >&2
printf " rev = %s\n" "$CUR_REV" >&2
'')
];
};
}
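
For orientation, the fetchGit entries that update-ref writes (via mk_fetch) into inventory.nix have the shape sketched below; the device type, host key, URL, and rev are placeholders, and the surrounding file structure is inferred from the update_system matcher above:

desktop = {
  devices = {
    "nix-desktop1" = builtins.fetchGit {
      url = "https://git.example.org/org/host-config.git";
      rev = "0000000000000000000000000000000000000000";
      submodules = true;
    };
  };
};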

View File

@@ -9,27 +9,47 @@ with lib;
{
options.athenix.sw.remoteBuild = lib.mkOption {
type = types.submodule {
type = lib.types.submodule {
options = {
hosts = mkOption {
type = types.listOf types.str;
type = lib.types.listOf lib.types.str;
default = [ "engr-ugaif@192.168.11.133 x86_64-linux" ];
description = "List of remote build hosts for system rebuilding.";
description = ''
List of remote build hosts for system rebuilding.
Format: "user@hostname architecture"
Each host must have SSH access and nix-daemon available.
Useful for offloading builds from low-power devices (tablets, laptops)
to more powerful build servers.
'';
example = lib.literalExpression ''
[
"builder@nix-builder x86_64-linux"
"user@192.168.1.100 aarch64-linux"
]'';
};
enable = mkOption {
type = types.bool;
type = lib.types.bool;
default = false;
description = "Whether to enable remote build for 'update-system' command.";
description = ''
Whether to enable remote builds for the 'update-system' command.
When enabled, 'update-system' will use the configured remote hosts
to build the new system configuration instead of building locally.
Automatically enabled for tablet-kiosk systems.
'';
};
};
};
default = { };
description = "Remote build configuration";
description = "Remote build configuration for system updates.";
};
config = {
athenix.sw.remoteBuild.enable = lib.mkDefault (config.athenix.sw.type == "tablet-kiosk");
athenix.sw.remoteBuild.enable = lib.mkDefault (config.athenix.sw.tablet-kiosk.enable);
environment.systemPackages = [
(pkgs.writeShellScriptBin "update-system" ''
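
A sketch of enabling remote builds explicitly on a low-power host, reusing the host string format documented above (the builder address is illustrative):

{ ... }:
{
  athenix.sw.remoteBuild = {
    enable = true;
    hosts = [ "builder@nix-builder x86_64-linux" ];
  };
}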

View File

@@ -1,4 +1,4 @@
{ inputs, ... }:
{ ... }:
# ============================================================================
# User Configuration
@@ -15,7 +15,6 @@
# nixos-systems configuration (nixpkgs, home-manager, etc.).
{
config,
lib,
pkgs,
osConfig ? null, # Only available in home-manager context
@@ -60,7 +59,7 @@
fd
bat
]
++ lib.optional (osConfig.athenix.sw.type or null == "desktop") firefox;
++ lib.optional (osConfig.athenix.sw.desktop.enable or false) firefox;
# Conditionally add packages based on system type
# ========== Programs ==========

View File

@@ -13,8 +13,9 @@
#
# External User Configuration:
# Users can specify external configuration modules via the 'external' attribute:
# external = builtins.fetchGit { url = "..."; rev = "..."; };
# external = { url = "..."; rev = "..."; submodules? = false; };
# external = /path/to/local/config;
# external = builtins.fetchGit { ... }; # legacy, still supported
#
# External repositories should contain:
# - user.nix (required): Defines athenix.users.<name> options AND home-manager config
@@ -47,9 +48,10 @@
enable = true; # Default user, enabled everywhere
};
hdh20267 = {
external = builtins.fetchGit {
external = {
url = "https://git.factory.uga.edu/hdh20267/hdh20267-nix";
rev = "dbdf65c7bd59e646719f724a3acd2330e0c922ec";
# submodules = false; # optional, defaults to false
};
};
sv22900 = {