Yumechi-no-kuni-proxy-worker

This is a Misskey proxy worker for the ゆめちのくに (Yumechi-no-kuni) instance. It runs natively in both local and Cloudflare Workers environments!

It has been deployed on my instance since 11/14 under the AppArmor deployment profile.

Currently to do:

  • Content-Type sniffing
  • Preset image resizing
  • Opportunistic Redirection on large video files
  • RFC 9110-compliant proxy loop detection with defensive programming against known vulnerable proxies
  • HTTPS-only mode and X-Forwarded-Proto reflection
  • Cache-Control header
  • Rate-limiting on local deployment
  • Read config from Cloudflare
  • Timing and Rate-limiting headers (some not available on Cloudflare Workers)
    • Tiered rate-limiting
  • Lossy WebP on CF Workers (maybe already works?)
  • Cache results
  • Handle all possible panics reported by Clippy
  • Sandboxing the image rendering
  • Prometheus-format metrics
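One item above, RFC 9110 proxy loop detection, typically works by checking whether the proxy's own pseudonym already appears in the incoming Via header before forwarding. A minimal sketch of that check (the function name and pseudonym handling are assumptions, not this project's actual implementation):

```rust
/// Check whether our pseudonym already appears in a `Via` header value.
/// Each RFC 9110 `Via` entry has the form `protocol-version received-by [comment]`,
/// so the received-by token is the second whitespace-separated field of each
/// comma-separated entry.
fn via_contains(via_header: &str, pseudonym: &str) -> bool {
    via_header
        .split(',')
        .filter_map(|entry| entry.trim().split_whitespace().nth(1))
        .any(|by| by.eq_ignore_ascii_case(pseudonym))
}
```

A request whose Via chain already names this proxy would then be rejected (for example with 508 Loop Detected) instead of being forwarded again.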

Spec Compliance

This project is designed to match the upstream specification; however, a few deviations are made:

  • We do not honor remote Content-Disposition headers; instead we reply with the actual filename from the request URL.
  • Remote Content-Type headers are used only as a hint rather than as authoritative, and re-sniffing is unconditionally performed against the file utility's magic database, using purely masked signature matching.
  • SVG rasterization is removed from the proxy in favor of sanitization and CSP enforcement.
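The masked signature matching mentioned above can be sketched roughly as follows. The types and the signature table here are illustrative assumptions; the real implementation draws its signatures from the file utility's database:

```rust
/// A magic signature: the data matches when, at `offset`,
/// `(byte & mask) == value` holds for every byte of the pattern.
struct Signature {
    offset: usize,
    mask: &'static [u8],
    /// Pre-masked pattern, i.e. value[i] == raw_pattern[i] & mask[i].
    value: &'static [u8],
    mime: &'static str,
}

/// Return the MIME type of the first signature that matches `data`.
fn sniff(data: &[u8], sigs: &[Signature]) -> Option<&'static str> {
    sigs.iter().find_map(|s| {
        let window = data.get(s.offset..s.offset + s.value.len())?;
        window
            .iter()
            .zip(s.mask.iter().zip(s.value))
            .all(|(b, (m, v))| b & m == *v)
            .then_some(s.mime)
    })
}
```

Because matching is purely positional byte comparison under a mask, the remote server's claimed Content-Type never influences the result.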

Demo

Avatar resizing

Preview at:

CF Worker: https://yumechi-no-kuni-proxy-worker.eternal-flame-ad.workers.dev/proxy/avatar.webp?url=https://media.misskeyusercontent.com/io/274cc4f7-4674-4db1-9439-9fac08a66aa1.png

Local: https://mproxy.mi.yumechi.jp/proxy/avatar.webp?url=https://media.misskeyusercontent.com/io/274cc4f7-4674-4db1-9439-9fac08a66aa1.png

Image:

Syuilo Avatar resized.png

SVG rendering

CF Worker: https://yumechi-no-kuni-proxy-worker.eternal-flame-ad.workers.dev/proxy/static.webp?url=https://upload.wikimedia.org/wikipedia/commons/a/ad/AES-AddRoundKey.svg

Local: https://mproxy.mi.yumechi.jp/proxy/static.webp?url=https://upload.wikimedia.org/wikipedia/commons/a/ad/AES-AddRoundKey.svg

AES-AddRoundKey.svg

Setup and Deployment

  1. Clone this repository. Load the submodules with git submodule update --init.

  2. Install Rust and Cargo; using rustup is recommended. If you do not plan to deploy to Cloudflare Workers, you can remove the rust-toolchain file, which is intended to work around cloudflare/workers-rs#668. Otherwise you may need to install that specific version of Rust with rustup install $(cat rust-toolchain).

  3. IF deploying locally:

    1. Edit local.toml to your liking. The documentation can be opened with cargo doc --open.

    2. Test run with cargo run --features env-local -- -c local.toml. Additional features apparmor and reuse-port are available for Linux users.

      If you do not use the apparmor feature, you must remove the apparmor stanza from the configuration file, or the program will refuse to start. The reuse-port feature is not required but may improve performance on Linux in high-traffic environments.

    3. Build with cargo build --features env-local --profile release-local. The built binary will be in target/release-local/yumechi-no-kuni-proxy-worker. You can consider setting RUSTFLAGS="-Ctarget-cpu=native" for better performance. Be prepared for ~5 minutes of build time due to link time optimization.

    4. The only flag the binary understands is -c, which names the configuration file; the configuration file is in TOML format. Additionally, the RUST_LOG environment variable changes the log level, which defaults to info when the variable is not set.
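The fallback behavior for the log level can be sketched as follows (a hypothetical helper illustrating the default, not the project's actual code):

```rust
/// Choose the effective log level given the value of the RUST_LOG
/// environment variable (e.g. std::env::var("RUST_LOG").ok()),
/// falling back to "info" when it is unset.
fn effective_log_level(rust_log: Option<String>) -> String {
    rust_log.unwrap_or_else(|| "info".to_owned())
}
```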

    IF deploying to Cloudflare Workers:

    First, I don't recommend deploying on the free plan: there are much faster implementations that perform little or no image processing, while the CPU time reported by Cloudflare for this worker is consistently over the free plan limit (only 10 ms — probably not even enough to decode an image), so it will likely be throttled or terminated once you deploy it to real workloads. The paid plan is recommended for this worker.

    1. Add the wasm target with rustup +$(cat rust-toolchain) target add wasm32-unknown-unknown.

    2. Have a working JS environment.

    3. Install wrangler with your JS package manager of choice; see https://developers.cloudflare.com/workers/wrangler/install-and-update/. npx also works.

    4. Edit wrangler.toml to your liking. Everything in the [vars] section maps directly into the config section of the TOML configuration file.

    5. Test locally with wrangler dev.

    6. Deploy with wrangler deploy --outdir bundled/.

AppArmor

AppArmor is a Mandatory Access Control Linux security module that can be used to heavily restrict the actions of tasks.

It is much more secure than Docker, and I recommend using AppArmor instead of Docker for isolation, mainly because:

  • Docker is not designed for security but for convenience.
  • Docker only creates new namespaces but does not actually police the actions of the task, and it exposes many more kernel interfaces to the task.
  • There is no dynamic privilege reduction in Docker, so if the image parsing is compromised, at the very least your whole container is compromised.
  • AFAIK there are no known bypasses for AppArmor, but there are known bypasses for Docker.

To use AppArmor, you need the apparmor LSM loaded into the kernel (usually just a kernel parameter) and the mac/apparmor/yumechi-no-kuni-proxy-worker profile loaded into the system. You may want to adjust the paths to your binary and configuration file, or alternatively use the systemd AppArmorProfile directive to confine the worker.
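For the systemd route, a minimal unit sketch might look like this. The file paths and unit layout are assumptions — point them at wherever you installed the binary and configuration file:

```ini
# /etc/systemd/system/yumechi-no-kuni-proxy-worker.service (sketch)
[Unit]
Description=Yumechi-no-kuni proxy worker

[Service]
# Paths are assumptions; adjust to your install locations.
ExecStart=/usr/local/bin/yumechi-no-kuni-proxy-worker -c /etc/yumechi-no-kuni-proxy-worker/local.toml
# Confine the service under the already-loaded AppArmor profile.
AppArmorProfile=yumechi-no-kuni-proxy-worker

[Install]
WantedBy=multi-user.target
```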

All major distros have an easy-to-follow guide on how to do this; typically you add a kernel parameter and install a userspace tool package.

This creates a highly restrictive environment: try it yourself with aa-exec -p yumechi-no-kuni-proxy-worker [initial_foothold] and see if you can break out :). And that is just the first layer of defense; try the more restrictive subprofiles:

  • yumechi-no-kuni-proxy-worker//serve: irreversibly entered before listening on the network begins. Restricts loading additional code and access to configuration files.
  • yumechi-no-kuni-proxy-worker//serve//image: absolutely no file, network, or capability access.

Docker

If you still, for some reason, want to use Docker, you can use the provided Dockerfile.