haccfiles/pkgs/pluto/fetchgit/default.nix
stuebinm a00e28d85a
add pluto notebook server
Pluto [1] is one of these interactive notebook thingies that have become
so unreasonably popular with people doing machine learning or data
analysis, but – somewhat surprisingly – it's actually not shit (e.g. no
global mutable state in the notebook, no weird unreadable file format
that doesn't play well with version control, etc.)

In particular, it can be used collaboratively (while it doesn't do
real-time collaborative editing like a pad, it /does/ push out global
updates each time someone executes a cell, so it's reasonably close),
and I think it may be useful to have for julia-hacking sessions.

It may also be useful for people running low-end laptops, since code is
executed on the host — and I guess hainich has enough unused resources
lying around that we can spare a few.

After deploying this, the notebook server should be reachable via:
  ssh hainich -L 9999:localhost:9999
and then visiting http://localhost:9999

Caveats: by design, pluto allows a user to execute arbitrary code on the
host. That is its main function, and not something we can prevent. I've
tried to mitigate this as far as possible by:
 - only allowing access via ssh port forwarding. In theory pluto does
   have basic access control, but that works via a secret link that
   it'll spit to stdout on startup (i.e. the journal), which cannot be
   set in advance, nor regenerated without restarting the entire process.
   Unfortunately, this means we won't be able to use it at e.g.
   conference sessions with people who don't have access to our infra
 - running it in a nixos-container as its own user, so it should never
   get any kind of access to the "main" directory tree apart from a
   single directory that we can keep notebooks in (which is currently a
   bind mount set to /data/pluto)
 - limiting memory and cpu for that container via systemd (less out of
   worry for exploits, and more so that a few accidental while-true
   loops will never consume enough cpu time to noticeably slow down
   anything else). The current limits for both are chosen relatively
   low; we'll have to see if they become too limiting should anyone
   run an actual weather model on this. (See the config sketch below.)
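
For illustration, the host config has roughly the following shape (the
option names are real NixOS ones, but the limits and the omitted service
definition are placeholders, not the exact deployed values):

  containers.pluto = {
    autoStart = true;
    # the single piece of host state the container may touch
    bindMounts."/data/pluto" = {
      hostPath = "/data/pluto";
      isReadOnly = false;
    };
    config = { pkgs, ... }: {
      # pluto runs as its own unprivileged user inside the container;
      # the actual pluto service definition is omitted here
      users.users.pluto.isNormalUser = true;
    };
  };

  # resource limits go on the container's systemd unit
  systemd.services."container@pluto".serviceConfig = {
    MemoryMax = "4G";   # placeholder value
    CPUQuota = "200%";  # placeholder value
  };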

Things we could also do:
 - currently, the container does not have its own network (mostly since
   that would make it slightly less convenient to use with port
   forwarding); in theory, pluto should even be able to run entirely
   without internet access of its own, but I'm not sure if this would
   break things like loading images / raw data into a notebook
 - make the container ephemeral, and only keep the directory containing
   the notebooks. I haven't done this since it would require
   recompilation of pluto each time the container is wiped, which makes
   for a potentially inconvenient startup time (though still < 3-5
   mins). Both options are sketched below.
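
For completeness, both of these would be one-line changes to the
container definition above:

  containers.pluto.privateNetwork = true;  # own veth pair instead of host networking
  containers.pluto.ephemeral = true;       # container root is discarded on restart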

Questions:
 - have I missed anything important that should definitely also be
   sandboxed / limited in some way?
 - in general, are we comfortable running something like this?
 - would we (in principle) be comfortable opening this up to other
   people for congress sessions (assuming we figure out a reasonable
   access control)?

Notes to deployer:
 - while I have not tested this on hainich, it works on my own server
 - you will probably have to create the /data/pluto directory for the
   bind mount, and make it world-writable (or chown it to the pluto user
   inside the container)
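
Concretely, something along these lines should do (the chown variant
requires first looking up the uid that the container's pluto user ends
up with on the host):

  mkdir -p /data/pluto
  chmod a+rwx /data/pluto   # or chown it to the container's pluto uid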

[1] https://github.com/fonsp/Pluto.jl/
2021-08-26 21:27:49 +02:00

{stdenvNoCC, git, cacert}: let
  urlToName = url: rev: let
    inherit (stdenvNoCC.lib) removeSuffix splitString last;
    base = last (splitString ":" (baseNameOf (removeSuffix "/" url)));

    matched = builtins.match "(.*)\\.git" base;

    short = builtins.substring 0 7 rev;

    appendShort = if (builtins.match "[a-f0-9]*" rev) != null
      then "-${short}"
      else "";
  in "${if matched == null then base else builtins.head matched}${appendShort}";
in
{ url, rev ? "HEAD", md5 ? "", sha256 ? "", leaveDotGit ? deepClone
, fetchSubmodules ? true, deepClone ? false
, branchName ? null
, name ? urlToName url rev
, # Shell code executed after the file has been fetched
  # successfully. This can do things like check or transform the file.
  postFetch ? ""
, preferLocalBuild ? true
}:

/* NOTE:
   fetchgit has one problem: git fetch only works for refs.
   This is because fetching arbitrary (maybe dangling) commits may be a
   security risk and checking whether a commit belongs to a ref is
   expensive. This may change in the future when some caching is added
   to git (?)

   Usually refs are either tags (refs/tags/*) or branches (refs/heads/*).
   Cloning branches will make the hash check fail when there is an update.
   But not all patches we want can be accessed by tags.

   The workaround is getting the last n commits so that it's likely that
   they still contain the hash we want.

   for now: increase depth iteratively (TODO)

   real fix: ask git folks to add a
     git fetch $HASH contained in $BRANCH
   facility, because checking that $HASH is contained in $BRANCH is less
   expensive than fetching --depth $N. Even if git folks implemented
   this feature soon, it may take years until server admins start using
   the new version.
*/

assert deepClone -> leaveDotGit;

if md5 != "" then
  throw "fetchgit does not support md5 anymore, please use sha256"
else
stdenvNoCC.mkDerivation {
  inherit name;
  builder = ./builder.sh;
  fetcher = ./nix-prefetch-git;  # This must be a string to ensure it's called with bash.

  nativeBuildInputs = [git];

  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
  outputHash = sha256;

  inherit url rev leaveDotGit fetchSubmodules deepClone branchName postFetch;

  GIT_SSL_CAINFO = "${cacert}/etc/ssl/certs/ca-bundle.crt";

  impureEnvVars = stdenvNoCC.lib.fetchers.proxyImpureEnvVars ++ [
    "GIT_PROXY_COMMAND" "SOCKS_SERVER"
  ];

  inherit preferLocalBuild;
}
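
This is a vendored copy of nixpkgs' fetchgit, so it is called the same
way; a hypothetical invocation pinning Pluto.jl (rev and hash below are
placeholders, not the actual pin) might look like:

  let fetchgit = pkgs.callPackage ./pkgs/pluto/fetchgit { }; in
  fetchgit {
    url = "https://github.com/fonsp/Pluto.jl";
    rev = "v0.15.1";  # placeholder tag
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  }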