Compare commits

...

19 commits

Author SHA1 Message Date
earl-warren
d38e069064 Merge pull request 'cascading-pr from https://code.forgejo.org/forgejo/lxc-helpers/pulls/15' (#29) from cascading-pr/act:forgejo/lxc-helpers-15 into main
Reviewed-on: https://code.forgejo.org/forgejo/act/pulls/29
Reviewed-by: earl-warren <earl-warren@noreply.code.forgejo.org>
2023-12-02 16:54:48 +00:00
cascading-pr
d6cdcd4916 cascading-pr update 2023-12-02 15:56:31 +00:00
earl-warren
a996610578 Merge pull request 'cascading-pr from https://code.forgejo.org/forgejo/lxc-helpers/pulls/14' (#28) from cascading-pr/act:forgejo/lxc-helpers-14 into main
Reviewed-on: https://code.forgejo.org/forgejo/act/pulls/28
2023-12-01 23:04:22 +00:00
cascading-pr
fdc68581dd cascading-pr update 2023-12-01 22:45:17 +00:00
cascading-pr
f79ec84c16 cascading-pr update 2023-12-01 22:18:52 +00:00
cascading-pr
d9f81c9e04 cascading-pr update 2023-12-01 21:33:25 +00:00
s3lph
9db5480aad [FORGEJO] feat(docker): Add flag to enable IPv6 in auto-created networks (#24)
Implements one part of forgejo/runner#119. The other part is a corresponding PR in forgejo/runner: forgejo/runner#120.

Reviewed-on: https://code.forgejo.org/forgejo/act/pulls/24
Reviewed-by: earl-warren <earl-warren@noreply.code.forgejo.org>
Co-authored-by: s3lph <codeberg@s3lph.me>
Co-committed-by: s3lph <codeberg@s3lph.me>
2023-11-14 22:02:37 +00:00
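The hunks further down (NewDockerNetworkCreateExecutor, run_context.go, config.go) change the network executor to take a caller-supplied *types.NetworkCreate and add a ContainerNetworkEnableIPv6 flag to the runner config. A minimal sketch of how a caller might wire the two together, assuming the fork keeps the github.com/nektos/act module path and the docker API types vendored at the time; the network name and the hard-coded true are placeholders, not values from this commit:

package main

import (
	"context"

	docker "github.com/docker/docker/api/types"

	"github.com/nektos/act/pkg/container"
)

func main() {
	// The network options are now built by the caller instead of being
	// hard-coded inside the executor.
	cfg := docker.NetworkCreate{
		Driver:     "bridge",
		Scope:      "local",
		EnableIPv6: true, // what rc.Config.ContainerNetworkEnableIPv6 toggles
	}
	// "act-example-network" is an illustrative name; common.Executor is a
	// func(context.Context) error, so it can be invoked directly.
	if err := container.NewDockerNetworkCreateExecutor("act-example-network", &cfg)(context.Background()); err != nil {
		panic(err)
	}
}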
Earl Warren
bf72e67041
[LXC] global lock on start
Since the start script may create LXC templates that are shared, they
may race against each other when running for the first time. A lock
global to the host needs to be used to guarantee that does not happen.
2023-11-11 11:34:47 +01:00
Earl Warren
844f117f31
[LXC] split platform into template, release and config 2023-11-10 21:45:14 +01:00
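This commit introduces the lxc:<template>:<release>:<config> form of the platform label that GetLXCInfo (in the run_context.go hunk further down) splits apart. A stand-alone sketch of that parsing, mirroring the logic shown below; the label value lxc:debian:bookworm is only an example:

package main

import (
	"fmt"
	"strings"
)

const lxcPrefix = "lxc:"

// parseLXCPlatform mirrors RunContext.GetLXCInfo from the diff below.
func parseLXCPlatform(platform string) (isLXC bool, template, release, config string) {
	if !strings.HasPrefix(platform, lxcPrefix) {
		return
	}
	isLXC = true
	s := strings.SplitN(strings.TrimPrefix(platform, lxcPrefix), ":", 3)
	template = s[0]
	if len(s) > 1 {
		release = s[1]
	}
	if len(s) > 2 {
		config = s[2]
	}
	return
}

func main() {
	// Missing parts are left empty: here only template and release are set.
	ok, template, release, config := parseLXCPlatform("lxc:debian:bookworm")
	fmt.Println(ok, template, release, config) // true debian bookworm (empty config)
}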
cascading-pr
25c4ac6b73 [FORGEJO] cascading-pr from https://code.forgejo.org/forgejo/lxc-helpers/pulls/7 (#21)
cascading-pr from https://code.forgejo.org/forgejo/lxc-helpers/pulls/7

Co-authored-by: cascading-pr <cascading-pr@example.com>
Reviewed-on: https://code.forgejo.org/forgejo/act/pulls/21
Co-authored-by: cascading-pr <cascading-pr@noreply.code.forgejo.org>
Co-committed-by: cascading-pr <cascading-pr@noreply.code.forgejo.org>
2023-11-10 20:36:28 +00:00
Earl Warren
0570e040a8
do not cascade if the CASCADE variable is set to no 2023-11-10 19:09:14 +01:00
Earl Warren
61f412a1f1
[FORGEJO] implement lxc separately from self-hosted 2023-11-09 03:52:51 +01:00
Earl Warren
207dff2fe4
no need to checkout before running cascading-pr 2023-11-08 17:59:36 +01:00
Earl Warren
032294082f
[FORGEJO] cascading PR to runner 2023-11-07 17:20:26 +01:00
Earl Warren
6769d4dbc1
[FORGEJO] informative errors when local action is not found
(cherry picked from commit cefe3d8dab)
2023-11-07 17:20:26 +01:00
Earl Warren
de9ef55ef7
[FORGEJO] lifetime 0 converts to infinity
Closes: https://code.forgejo.org/forgejo/act/issues/2
(cherry picked from commit 6dc2a8e888)
2023-11-07 17:20:26 +01:00
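A short sketch of the conversion, mirroring the startJobContainer hunk further down: a zero ContainerMaxLifetime would otherwise become "/bin/sleep 0" and stop the job container immediately, so it is mapped to "sleep infinity". The durations in main are examples only:

package main

import (
	"fmt"
	"time"
)

// sleepArgument reproduces the lifetime computation from run_context.go.
func sleepArgument(max time.Duration) string {
	lifetime := fmt.Sprint(max.Round(time.Second).Seconds())
	if lifetime == "0" {
		lifetime = "infinity"
	}
	return lifetime
}

func main() {
	fmt.Println(sleepArgument(0))             // infinity
	fmt.Println(sleepArgument(1 * time.Hour)) // 3600
}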
Earl Warren
b0c5099580
[FORGEJO] add unit tests
[FORGEJO] workflow cosmetic changes

(cherry picked from commit 143605538d)

[FORGEJO] runs on docker not ubuntu-latest
2023-11-07 17:20:26 +01:00
Earl Warren
70400ccc71
[FORGEJO] wrap self-hosted platform steps in an LXC container
act PR https://github.com/nektos/act/pull/1682

* shell script to start the LXC container
* create and destroy a LXC container
* run commands with lxc-attach
* expose additional devices for docker & libvirt to work
* install node 16 & git for checkout to work

[FORGEJO] start/stop lxc working directory is /tmp

[FORGEJO] use lxc-helpers to create/destroy containers

[FORGEJO] do not setup LXC

(cherry picked from commit c2eaf440f5)

Conflicts:
	pkg/container/host_environment.go

Conflicts:
	pkg/container/host_environment.go

[FORGEJO] upgrade to node20
2023-11-07 17:20:25 +01:00
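The host_environment.go hunk below re-executes each job command through sudo and lxc-attach when the job runs on an lxc: platform. A minimal stand-alone sketch of that wrapping; the container name and command in main are illustrative, not taken from the commit:

package main

import "fmt"

// wrapForLXC mirrors the LXC branch added to HostEnvironment.exec below.
func wrapForLXC(command []string, containerName, user string) []string {
	wrapped := make([]string, len(command))
	copy(wrapped, command)
	if user == "root" {
		// Commands requested as root are only prefixed with sudo (no lxc-attach).
		return append([]string{"/usr/bin/sudo"}, wrapped...)
	}
	// Other commands are run inside the container via lxc-attach, preserving
	// the environment (notably PATH).
	return append([]string{
		"/usr/bin/sudo", "--preserve-env", "--preserve-env=PATH",
		"/usr/bin/lxc-attach", "--keep-env", "--name", containerName, "--",
	}, wrapped...)
}

func main() {
	fmt.Println(wrapForLXC([]string{"node", "/path/to/action.js"}, "a1b2c3d4", "runner"))
}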
Earl Warren
78981ee343
[FORGEJO] sync lxc-helpers ff5a3fc52b5c877427abb8f9867064a5487519ec 2023-11-07 17:17:22 +01:00
14 changed files with 974 additions and 39 deletions

.forgejo/cascading-pr-runner (new executable file, 18 lines added)
View file

@@ -0,0 +1,18 @@
#!/bin/bash
set -ex
runner=$1
runner_pr=$2
act=$3
act_pr=$4
url=$(jq --raw-output .head.repo.html_url < $act_pr)
test "$url" != null
url=${url##http*://}
branch=$(jq --raw-output .head.ref < $act_pr)
test "$branch" != null
cd $runner
sed -i -e "s|^replace github.com/nektos/act.*|replace github.com/nektos/act => $url $branch|" go.mod
GOPROXY=direct go mod tidy
date > last-upgrade

View file

@@ -0,0 +1,30 @@
# SPDX-License-Identifier: MIT
on:
  pull_request_target:
    types:
      - opened
      - synchronize
      - closed
jobs:
  cascade:
    runs-on: docker
    if: vars.CASCADE != 'no'
    container:
      image: 'docker.io/node:20-bookworm'
    steps:
      - uses: https://code.forgejo.org/actions/setup-go@v4
        with:
          go-version: "1.21"
      - uses: actions/cascading-pr@v1
        with:
          origin-url: ${{ env.GITHUB_SERVER_URL }}
          origin-repo: forgejo/act
          origin-token: ${{ secrets.CASCADING_PR_ORIGIN }}
          origin-pr: ${{ github.event.pull_request.number }}
          destination-url: ${{ env.GITHUB_SERVER_URL }}
          destination-repo: forgejo/runner
          destination-fork-repo: cascading-pr/runner
          destination-branch: main
          destination-token: ${{ secrets.CASCADING_PR_DESTINATION }}
          close-merge: true
          update: .forgejo/cascading-pr-runner

View file

@@ -0,0 +1,44 @@
name: checks
on:
  - push
  - pull_request
env:
  GOPROXY: https://goproxy.io,direct
  GOPATH: /go_path
  GOCACHE: /go_cache
jobs:
  lint:
    name: check and test
    runs-on: docker
    steps:
      - name: cache go path
        id: cache-go-path
        uses: https://github.com/actions/cache@v3
        with:
          path: /go_path
          key: go_path-${{ github.repository }}-${{ github.ref_name }}
          restore-keys: |
            go_path-${{ github.repository }}-
            go_path-
      - name: cache go cache
        id: cache-go-cache
        uses: https://github.com/actions/cache@v3
        with:
          path: /go_cache
          key: go_cache-${{ github.repository }}-${{ github.ref_name }}
          restore-keys: |
            go_cache-${{ github.repository }}-
            go_cache-
      - uses: actions/setup-go@v3
        with:
          go-version: 1.20
      - uses: actions/checkout@v3
      - name: vet checks
        run: go vet -v ./...
      - name: build
        run: go build -v ./...
      - name: test
        run: go test -v ./pkg/jobparser
        # TODO test more packages

View file

@@ -1,6 +1,6 @@
## Forking rules
This is a custom fork of [nektos/act](https://github.com/nektos/act/), for the purpose of serving [act_runner](https://gitea.com/gitea/act_runner).
This is a custom fork of [nektos/act](https://github.com/nektos/act/), for the [Forgejo runner](https://code.forgejo.org/forgejo/runner).
It cannot be used as command line tool anymore, but only as a library.

View file

@@ -9,17 +9,14 @@ import (
"github.com/nektos/act/pkg/common"
)
func NewDockerNetworkCreateExecutor(name string) common.Executor {
func NewDockerNetworkCreateExecutor(name string, config *types.NetworkCreate) common.Executor {
return func(ctx context.Context) error {
cli, err := GetDockerClient(ctx)
if err != nil {
return err
}
_, err = cli.NetworkCreate(ctx, name, types.NetworkCreate{
Driver: "bridge",
Scope: "local",
})
_, err = cli.NetworkCreate(ctx, name, *config)
if err != nil {
return err
}

View file

@@ -5,6 +5,9 @@ import "context"
type ExecutionsEnvironment interface {
Container
ToContainerPath(string) string
GetName() string
GetRoot() string
GetLXC() bool
GetActPath() string
GetPathVariableName() string
DefaultPathVariable() string

View file

@@ -25,16 +25,19 @@ import (
)
type HostEnvironment struct {
Name string
Path string
TmpDir string
ToolCache string
Workdir string
ActPath string
Root string
CleanUp func()
StdOut io.Writer
LXC bool
}
func (e *HostEnvironment) Create(_ []string, _ []string) common.Executor {
func (e *HostEnvironment) Create(_, _ []string) common.Executor {
return func(ctx context.Context) error {
return nil
}
@@ -93,7 +96,7 @@ func (e *HostEnvironment) CopyTarStream(ctx context.Context, destPath string, ta
}
}
func (e *HostEnvironment) CopyDir(destPath string, srcPath string, useGitIgnore bool) common.Executor {
func (e *HostEnvironment) CopyDir(destPath, srcPath string, useGitIgnore bool) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
srcPrefix := filepath.Dir(srcPath)
@@ -287,7 +290,7 @@ func getEnvListFromMap(env map[string]string) []string {
return envList
}
func (e *HostEnvironment) exec(ctx context.Context, command []string, cmdline string, env map[string]string, _, workdir string) error {
func (e *HostEnvironment) exec(ctx context.Context, commandparam []string, cmdline string, env map[string]string, user, workdir string) error {
envList := getEnvListFromMap(env)
var wd string
if workdir != "" {
@@ -299,6 +302,23 @@ func (e *HostEnvironment) exec(ctx context.Context, command []string, cmdline st
} else {
wd = e.Path
}
if _, err := os.Stat(wd); err != nil {
common.Logger(ctx).Debugf("Failed to stat working directory %s %v\n", wd, err.Error())
}
command := make([]string, len(commandparam))
copy(command, commandparam)
if e.GetLXC() {
if user == "root" {
command = append([]string{"/usr/bin/sudo"}, command...)
} else {
common.Logger(ctx).Debugf("lxc-attach --name %v %v", e.Name, command)
command = append([]string{"/usr/bin/sudo", "--preserve-env", "--preserve-env=PATH", "/usr/bin/lxc-attach", "--keep-env", "--name", e.Name, "--"}, command...)
}
}
f, err := lookupPathHost(command[0], env, e.StdOut)
if err != nil {
return err
@@ -341,7 +361,7 @@ func (e *HostEnvironment) exec(ctx context.Context, command []string, cmdline st
}
err = cmd.Run()
if err != nil {
return err
return fmt.Errorf("RUN %w", err)
}
if tty != nil {
writer.AutoStop = true
@@ -398,6 +418,18 @@ func (e *HostEnvironment) ToContainerPath(path string) string {
return path
}
func (e *HostEnvironment) GetLXC() bool {
return e.LXC
}
func (e *HostEnvironment) GetName() string {
return e.Name
}
func (e *HostEnvironment) GetRoot() string {
return e.Root
}
func (e *HostEnvironment) GetActPath() string {
actPath := e.ActPath
if runtime.GOOS == "windows" {
@@ -457,7 +489,7 @@ func (e *HostEnvironment) GetRunnerContext(_ context.Context) map[string]interfa
}
}
func (e *HostEnvironment) ReplaceLogWriter(stdout io.Writer, _ io.Writer) (io.Writer, io.Writer) {
func (e *HostEnvironment) ReplaceLogWriter(stdout, _ io.Writer) (io.Writer, io.Writer) {
org := e.StdOut
e.StdOut = stdout
return org, org

View file

@@ -10,8 +10,7 @@ import (
log "github.com/sirupsen/logrus"
)
type LinuxContainerEnvironmentExtensions struct {
}
type LinuxContainerEnvironmentExtensions struct{}
// Resolves the equivalent host path inside the container
// This is required for windows and WSL 2 to translate things like C:\Users\Myproject to /mnt/users/Myproject
@@ -47,6 +46,18 @@ func (*LinuxContainerEnvironmentExtensions) ToContainerPath(path string) string
return result
}
func (*LinuxContainerEnvironmentExtensions) GetName() string {
return "NAME"
}
func (*LinuxContainerEnvironmentExtensions) GetLXC() bool {
return false
}
func (*LinuxContainerEnvironmentExtensions) GetRoot() string {
return "/var/run"
}
func (*LinuxContainerEnvironmentExtensions) GetActPath() string {
return "/var/run/act"
}

View file

@@ -42,6 +42,9 @@ var trampoline embed.FS
func readActionImpl(ctx context.Context, step *model.Step, actionDir string, actionPath string, readFile actionYamlReader, writeFile fileWriter) (*model.Action, error) {
logger := common.Logger(ctx)
reader, closer, err := readFile("action.yml")
if err != nil {
logger.Debugf("readActionImpl actionDir %s actionPath %s failed %v", actionDir, actionPath, err)
}
if os.IsNotExist(err) {
reader, closer, err = readFile("action.yaml")
if err != nil {
@@ -444,11 +447,15 @@ func getContainerActionPaths(step *model.Step, actionDir string, rc *RunContext)
actionName = strings.ReplaceAll(actionName, "\\", "/")
}
}
common.Logger(context.Background()).Debugf("getContainerActionPaths step.Type %s, rc.Config.Workdir %s, actionName %s, containerActionDir %s", step.Type().String(), rc.Config.Workdir, actionName, containerActionDir)
return actionName, containerActionDir
}
func getOsSafeRelativePath(s, prefix string) string {
actionName := strings.TrimPrefix(s, prefix)
if s == actionName {
common.Logger(context.Background()).Errorf("getOsSafeRelativePath %s does not begin with %s", s, prefix)
}
if runtime.GOOS == "windows" {
actionName = strings.ReplaceAll(actionName, "\\", "/")
}

pkg/runner/lxc-helpers-lib.sh (new executable file, 434 lines added)
View file

@@ -0,0 +1,434 @@
#!/bin/bash
# SPDX-License-Identifier: MIT
export DEBIAN_FRONTEND=noninteractive
LXC_SELF_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
LXC_BIN=/usr/local/bin
LXC_CONTAINER_CONFIG_ALL="unprivileged lxc libvirt docker k8s"
LXC_CONTAINER_CONFIG_DEFAULT="lxc libvirt docker"
LXC_IPV6_PREFIX_DEFAULT="fc15"
LXC_DOCKER_PREFIX_DEFAULT="172.17"
LXC_IPV6_DOCKER_PREFIX_DEFAULT="fd00:d0ca"
: ${LXC_SUDO:=}
: ${LXC_CONTAINER_RELEASE:=bookworm}
: ${LXC_CONTAINER_CONFIG:=$LXC_CONTAINER_CONFIG_DEFAULT}
: ${LXC_HOME:=/home}
: ${LXC_VERBOSE:=false}
source /etc/os-release
function lxc_release() {
echo $VERSION_CODENAME
}
function lxc_template_release() {
echo lxc-helpers-$LXC_CONTAINER_RELEASE
}
function lxc_root() {
local name="$1"
echo /var/lib/lxc/$name/rootfs
}
function lxc_config() {
local name="$1"
echo /var/lib/lxc/$name/config
}
function lxc_container_run() {
local name="$1"
shift
$LXC_SUDO lxc-attach --clear-env --name $name -- "$@"
}
function lxc_container_run_script_as() {
local name="$1"
local user="$2"
local script="$3"
$LXC_SUDO chmod +x $(lxc_root $name)$script
$LXC_SUDO lxc-attach --name $name -- sudo --user $user $script
}
function lxc_container_run_script() {
local name="$1"
local script="$2"
$LXC_SUDO chmod +x $(lxc_root $name)$script
lxc_container_run $name $script
}
function lxc_container_inside() {
local name="$1"
shift
lxc_container_run $name $LXC_BIN/lxc-helpers.sh "$@"
}
function lxc_container_user_install() {
local name="$1"
local user_id="$2"
local user="$3"
if test "$user" = root ; then
return
fi
local root=$(lxc_root $name)
if ! $LXC_SUDO grep --quiet "^$user " $root/etc/sudoers ; then
$LXC_SUDO tee $root/usr/local/bin/lxc-helpers-create-user.sh > /dev/null <<EOF
#!/bin/bash
set -ex
mkdir -p $LXC_HOME
useradd --base-dir $LXC_HOME --create-home --shell /bin/bash --uid $user_id $user
for group in docker kvm libvirt ; do
if grep --quiet \$group /etc/group ; then adduser $user \$group ; fi
done
echo "$user ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
sudo --user $user ssh-keygen -b 2048 -N '' -f $LXC_HOME/$user/.ssh/id_rsa
EOF
lxc_container_run_script $name /usr/local/bin/lxc-helpers-create-user.sh
fi
}
function lxc_maybe_sudo() {
if test $(id -u) != 0 ; then
LXC_SUDO=sudo
fi
}
function lxc_prepare_environment() {
lxc_maybe_sudo
if ! $(which lxc-create > /dev/null) ; then
$LXC_SUDO apt-get install -y -qq make git libvirt0 libpam-cgfs bridge-utils uidmap dnsmasq-base dnsmasq dnsmasq-utils qemu-user-static
fi
}
function lxc_container_config_nesting() {
echo 'security.nesting = true'
}
function lxc_container_config_cap() {
echo 'lxc.cap.drop ='
}
function lxc_container_config_net() {
cat <<EOF
#
# /dev/net
#
lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net dev/net none bind,create=dir 0 0
EOF
}
function lxc_container_config_kvm() {
cat <<EOF
#
# /dev/kvm
#
lxc.cgroup2.devices.allow = c 10:232 rwm
lxc.mount.entry = /dev/kvm dev/kvm none bind,create=file 0 0
EOF
}
function lxc_container_config_loop() {
cat <<EOF
#
# /dev/loop
#
lxc.cgroup2.devices.allow = c 10:237 rwm
lxc.cgroup2.devices.allow = b 7:* rwm
lxc.mount.entry = /dev/loop-control dev/loop-control none bind,create=file 0 0
EOF
}
function lxc_container_config_mapper() {
cat <<EOF
#
# /dev/mapper
#
lxc.cgroup2.devices.allow = c 10:236 rwm
lxc.mount.entry = /dev/mapper dev/mapper none bind,create=dir 0 0
EOF
}
function lxc_container_config_fuse() {
cat <<EOF
#
# /dev/fuse
#
lxc.cgroup2.devices.allow = b 10:229 rwm
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
EOF
}
function lxc_container_config_kmsg() {
cat <<EOF
#
# kmsg
#
lxc.cgroup2.devices.allow = c 1:11 rwm
lxc.mount.entry = /dev/kmsg dev/kmsg none bind,create=file 0 0
EOF
}
function lxc_container_config_proc() {
cat <<EOF
#
# /proc
#
#
# Only because k8s tries to write /proc/sys/vm/overcommit_memory
# is there a way to only allow that? Would it be enough for k8s?
#
lxc.mount.auto = proc:rw
EOF
}
function lxc_container_config() {
for config in "$@" ; do
case $config in
unprivileged)
;;
lxc)
echo nesting
echo cap
;;
docker)
echo net
;;
libvirt)
echo cap
echo kvm
echo loop
echo mapper
echo fuse
;;
k8s)
echo cap
echo loop
echo mapper
echo fuse
echo kmsg
echo proc
;;
*)
echo "$config unknown ($LXC_CONTAINER_CONFIG_ALL)"
return 1
;;
esac
done | sort -u | while read config ; do
echo "#"
echo "# include $config config snippet"
echo "#"
lxc_container_config_$config
done
}
function lxc_container_configure() {
local name="$1"
lxc_container_config $LXC_CONTAINER_CONFIG | $LXC_SUDO tee -a $(lxc_config $name)
}
function lxc_container_install_lxc_helpers() {
local name="$1"
$LXC_SUDO cp -a $LXC_SELF_DIR/lxc-helpers*.sh $root/$LXC_BIN
#
# Wait for the network to come up
#
local wait_networking=$(lxc_root $name)/usr/local/bin/lxc-helpers-wait-networking.sh
$LXC_SUDO tee $wait_networking > /dev/null <<'EOF'
#!/bin/sh -e
for d in $(seq 60); do
getent hosts wikipedia.org > /dev/null && break
sleep 1
done
getent hosts wikipedia.org > /dev/null || getent hosts wikipedia.org
EOF
$LXC_SUDO chmod +x $wait_networking
}
function lxc_container_create() {
local name="$1"
lxc_prepare_environment
lxc_build_template $(lxc_template_release) "$name"
}
function lxc_container_mount() {
local name="$1"
local dir="$2"
local config=$(lxc_config $name)
if ! $LXC_SUDO grep --quiet "lxc.mount.entry = $dir" $config ; then
local relative_dir=${dir##/}
$LXC_SUDO tee -a $config > /dev/null <<< "lxc.mount.entry = $dir $relative_dir none bind,create=dir 0 0"
fi
}
function lxc_container_start() {
local name="$1"
if lxc_running $name ; then
return
fi
local logs
if $LXC_VERBOSE; then
logs="--logfile=/dev/tty"
fi
$LXC_SUDO lxc-start $logs $name
$LXC_SUDO lxc-wait --name $name --state RUNNING
lxc_container_run $name /usr/local/bin/lxc-helpers-wait-networking.sh
}
function lxc_container_stop() {
local name="$1"
$LXC_SUDO lxc-ls -1 --running --filter="^$name" | while read container ; do
$LXC_SUDO lxc-stop --kill --name="$container"
done
}
function lxc_container_destroy() {
local name="$1"
local root="$2"
if lxc_exists "$name" ; then
lxc_container_stop $name $root
$LXC_SUDO lxc-destroy --force --name="$name"
fi
}
function lxc_exists() {
local name="$1"
test "$($LXC_SUDO lxc-ls --filter=^$name\$)"
}
function lxc_running() {
local name="$1"
test "$($LXC_SUDO lxc-ls --running --filter=^$name\$)"
}
function lxc_build_template_release() {
local name="$(lxc_template_release)"
if lxc_exists $name ; then
return
fi
local root=$(lxc_root $name)
$LXC_SUDO lxc-create --name $name --template debian -- --release=$LXC_CONTAINER_RELEASE
echo 'lxc.apparmor.profile = unconfined' | $LXC_SUDO tee -a $(lxc_config $name)
lxc_container_install_lxc_helpers $name
lxc_container_start $name
lxc_container_run $name apt-get update -qq
lxc_apt_install $name sudo git python3
lxc_container_stop $name
}
function lxc_build_template() {
local name="$1"
local newname="$2"
if lxc_exists $newname ; then
return
fi
if test "$name" = "$(lxc_template_release)" ; then
lxc_build_template_release
fi
if ! $LXC_SUDO lxc-copy --name=$name --newname=$newname ; then
echo lxc-copy --name=$name --newname=$newname failed
return 1
fi
lxc_container_configure $newname
}
function lxc_apt_install() {
local name="$1"
shift
lxc_container_inside $name lxc_apt_install_inside "$@"
}
function lxc_apt_install_inside() {
apt-get install -y -qq "$@"
}
function lxc_install_lxc() {
local name="$1"
local prefix="$2"
local prefixv6="$3"
lxc_container_inside $name lxc_install_lxc_inside $prefix $prefixv6
}
function lxc_install_lxc_inside() {
local prefix="$1"
local prefixv6="${2:-$LXC_IPV6_PREFIX_DEFAULT}"
local packages="make git libvirt0 libpam-cgfs bridge-utils uidmap dnsmasq-base dnsmasq dnsmasq-utils qemu-user-static lxc-templates debootstrap"
if test "$(lxc_release)" = bookworm ; then
packages="$packages distro-info"
fi
lxc_apt_install_inside $packages
if ! grep --quiet LXC_ADDR=.$prefix.1. /etc/default/lxc-net ; then
systemctl disable --now dnsmasq
apt-get install -y -qq lxc
systemctl stop lxc-net
sed -i -e '/ConditionVirtualization/d' /usr/lib/systemd/system/lxc-net.service
systemctl daemon-reload
cat >> /etc/default/lxc-net <<EOF
LXC_ADDR="$prefix.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="$prefix.0/24"
LXC_DHCP_RANGE="$prefix.2,$prefix.254"
LXC_DHCP_MAX="253"
LXC_IPV6_ADDR="$prefixv6::216:3eff:fe00:1"
LXC_IPV6_MASK="64"
LXC_IPV6_NETWORK="$prefixv6::/64"
LXC_IPV6_NAT="true"
EOF
systemctl start lxc-net
fi
}
function lxc_install_docker() {
local name="$1"
lxc_container_inside $name lxc_install_docker_inside
}
function lxc_install_docker_inside() {
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"ipv6": true,
"fixed-cidr-v6": "$LXC_IPV6_DOCKER_PREFIX_DEFAULT:1::/64",
"default-address-pools": [
{"base": "$LXC_DOCKER_PREFIX_DEFAULT.0.0/16", "size": 24},
{"base": "$LXC_IPV6_DOCKER_PREFIX_DEFAULT:2::/104", "size": 112}
]
}
EOF
lxc_apt_install_inside docker.io docker-compose
}

pkg/runner/lxc-helpers.sh (new executable file, 165 lines added)
View file

@@ -0,0 +1,165 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: MIT
set -e
source $(dirname $0)/lxc-helpers-lib.sh
function verbose() {
set -x
PS4='${BASH_SOURCE[0]}:$LINENO: ${FUNCNAME[0]}: '
LXC_VERBOSE=true
}
function help() {
cat <<'EOF'
lxc-helpers.sh - LXC container management helpers
SYNOPSIS
lxc-helpers.sh [-v|--verbose] [-h|--help]
[-o|--os {bookworm|bullseye} (default bookworm)]
command [arguments]
lxc-helpers.sh [-v|--verbose] [-h|--help]
[-o|--os {bookworm|bullseye} (default bookworm)]
[-c|--config {unprivileged lxc libvirt docker k8s} (default "lxc libvirt docker")]
lxc_container_create [arguments]
DESCRIPTION
A thin shell based layer on top of LXC to create, populate, run and
destroy LXC containers. A container is created from a copy of an
existing container.
The LXC network is configured to provide a NAT'ed IP address (IPv4
and IPv6) to each container, in a configurable private range.
CREATE AND DESTROY
lxc_prepare_environment
Install LXC dependencies.
lxc_container_create `name`
Create the `name` container.
lxc_container_mount `name` `path`
Configure `name` container to bind mount `path` so that it is
also accessible at `path` from within the container.
lxc_container_start `name`
Start the `name` container.
lxc_container_stop `name`
Stop the `name` container.
lxc_container_destroy `name`
Call lxc_container_stop `name` and destroy the container.
lxc_template_release
Echo the name of the container for the Operating System
specified with `--os`.
lxc_build_template `existing_container` `new_container`
Copy `existing_container` into `new_container`. If
`existing_container` is equal to $(lxc-helpers.sh lxc_template_release) it
will be created on demand.
CONFIGURATION
The `--config` option provides preset configurations appended to the `/var/lib/lxc/name/config`
file when the container is created with the `lxc_container_create` command. They are required
to run the corresponding subsystem:
* `docker` https://www.docker.com/
* `lxc` https://linuxcontainers.org/lxc/
* `libvirt` https://libvirt.org/
* `k8s` https://kubernetes.io/
* `unprivileged` none of the above
Example: lxc-helpers.sh --config "docker libvirt" lxc_container_create mycontainer
The `unprivileged` configuration does not add anything.
ACTIONS IN THE CONTAINER
For a command lxc_something `name` that can be called from outside the container,
there is an equivalent function lxc_something_inside that can be called from inside
the container.
lxc_install_lxc `name` `prefix` [`prefixv6`]
lxc_install_lxc_inside `prefix` [`prefixv6`]
Install LXC in the `name` container to allow the creation of
named containers. `prefix` is a class C IP prefix from which
containers will obtain their IP (for instance 10.40.50). `prefixv6`
is an optional IPv6 private address prefix that defaults to fc15.
lxc_container_run `name` command [options...]
Run the `command` within the `name` container.
lxc_container_run_script `name` `path`
lxc_container_run_script_as `name` `user` `path`
Run the script found at `path` within the `name` container. The
environment is cleared before running the script. The first form
will run as root, the second form will impersonate `user`.
lxc_container_user_install `name` `user_id` `user` [`homedir` default `/home`]
Create the `user` with `user_id` in the `name` container with a
HOME at `/homedir/user`. Passwordless sudo permissions are
granted to `user`. It is made a member of the groups docker, kvm
and libvirt if they exist already. A SSH key is created.
Example: lxc_container_user_install mycontainer $(id -u) $USER
EOF
}
function main() {
local options=$(getopt -o hvoc --long help,verbose,os:,config: -- "$@")
[ $? -eq 0 ] || {
echo "Incorrect options provided"
exit 1
}
eval set -- "$options"
while true; do
case "$1" in
-v | --verbose)
verbose
;;
-h | --help)
help
;;
-o | --os)
LXC_CONTAINER_RELEASE=$2
shift
;;
-c | --config)
LXC_CONTAINER_CONFIG="$2"
shift
;;
--)
shift
break
;;
esac
shift
done
lxc_maybe_sudo
"$@"
}
main "$@"

View file

@@ -3,9 +3,11 @@ package runner
import (
"archive/tar"
"bufio"
"bytes"
"context"
"crypto/rand"
"crypto/sha256"
_ "embed"
"encoding/hex"
"encoding/json"
"errors"
@@ -16,8 +18,10 @@ import (
"regexp"
"runtime"
"strings"
"text/template"
"time"
docker "github.com/docker/docker/api/types"
"github.com/opencontainers/selinux/go-selinux"
"github.com/nektos/act/pkg/common"
@@ -178,6 +182,96 @@ func (rc *RunContext) GetBindsAndMounts() ([]string, map[string]string) {
return binds, mounts
}
//go:embed lxc-helpers-lib.sh
var lxcHelpersLib string
//go:embed lxc-helpers.sh
var lxcHelpers string
var startTemplate = template.Must(template.New("start").Parse(`#!/bin/bash -e
exec 5<>/tmp/forgejo-runner-lxc.lock ; flock --timeout 21600 5
LXC_CONTAINER_CONFIG="{{.Config}}"
LXC_CONTAINER_RELEASE="{{.Release}}"
source $(dirname $0)/lxc-helpers-lib.sh
function template_act() {
echo $(lxc_template_release)-act
}
function install_nodejs() {
local name="$1"
local script=/usr/local/bin/lxc-helpers-install-node.sh
cat > $(lxc_root $name)/$script <<'EOF'
#!/bin/sh -e
# https://github.com/nodesource/distributions#debinstall
export DEBIAN_FRONTEND=noninteractive
apt-get install -qq -y ca-certificates curl gnupg git
mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
NODE_MAJOR=20
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
apt-get update -qq
apt-get install -qq -y nodejs
EOF
lxc_container_run_script $name $script
}
function build_template_act() {
local name="$(template_act)"
if lxc_exists $name ; then
return
fi
lxc_build_template $(lxc_template_release) $name
lxc_container_start $name
install_nodejs $name
lxc_container_stop $name
}
lxc_prepare_environment
LXC_CONTAINER_CONFIG="" build_template_act
lxc_build_template $(template_act) "{{.Name}}"
lxc_container_mount "{{.Name}}" "{{ .Root }}"
lxc_container_start "{{.Name}}"
`))
var stopTemplate = template.Must(template.New("stop").Parse(`#!/bin/bash
source $(dirname $0)/lxc-helpers-lib.sh
lxc_container_destroy "{{.Name}}"
`))
func (rc *RunContext) stopHostEnvironment(ctx context.Context) error {
logger := common.Logger(ctx)
logger.Debugf("stopHostEnvironment")
var stopScript bytes.Buffer
if err := stopTemplate.Execute(&stopScript, struct {
Name string
Root string
}{
Name: rc.JobContainer.GetName(),
Root: rc.JobContainer.GetRoot(),
}); err != nil {
return err
}
return common.NewPipelineExecutor(
rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
Name: "workflow/stop-lxc.sh",
Mode: 0755,
Body: stopScript.String(),
}),
rc.JobContainer.Exec([]string{rc.JobContainer.GetActPath() + "/workflow/stop-lxc.sh"}, map[string]string{}, "root", "/tmp"),
)(ctx)
}
func (rc *RunContext) startHostEnvironment() common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
@@ -193,7 +287,8 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
cacheDir := rc.ActionCacheDir()
randBytes := make([]byte, 8)
_, _ = rand.Read(randBytes)
miscpath := filepath.Join(cacheDir, hex.EncodeToString(randBytes))
randName := hex.EncodeToString(randBytes)
miscpath := filepath.Join(cacheDir, randName)
actPath := filepath.Join(miscpath, "act")
if err := os.MkdirAll(actPath, 0o777); err != nil {
return err
@@ -208,6 +303,8 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
}
toolCache := filepath.Join(cacheDir, "tool_cache")
rc.JobContainer = &container.HostEnvironment{
Name: randName,
Root: miscpath,
Path: path,
TmpDir: runnerTmp,
ToolCache: toolCache,
@@ -217,6 +314,7 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
os.RemoveAll(miscpath)
},
StdOut: logWriter,
LXC: rc.IsLXCHostEnv(ctx),
}
rc.cleanUpJobContainer = rc.JobContainer.Remove()
for k, v := range rc.JobContainer.GetRunnerContext(ctx) {
@@ -233,17 +331,65 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
}
}
return common.NewPipelineExecutor(
rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
Name: "workflow/event.json",
Mode: 0o644,
Body: rc.EventJSON,
}, &container.FileEntry{
Name: "workflow/envs.txt",
Mode: 0o666,
Body: "",
}),
)(ctx)
executors := make([]common.Executor, 0, 10)
isLXCHost, LXCTemplate, LXCRelease, LXCConfig := rc.GetLXCInfo(ctx)
if isLXCHost {
var startScript bytes.Buffer
if err := startTemplate.Execute(&startScript, struct {
Name string
Template string
Release string
Config string
Repo string
Root string
TmpDir string
Script string
}{
Name: rc.JobContainer.GetName(),
Template: LXCTemplate,
Release: LXCRelease,
Config: LXCConfig,
Repo: "", // step.Environment["CI_REPO"],
Root: rc.JobContainer.GetRoot(),
TmpDir: runnerTmp,
Script: "", // "commands-" + step.Name,
}); err != nil {
return err
}
executors = append(executors,
rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
Name: "workflow/lxc-helpers-lib.sh",
Mode: 0755,
Body: lxcHelpersLib,
}),
rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
Name: "workflow/lxc-helpers.sh",
Mode: 0755,
Body: lxcHelpers,
}),
rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
Name: "workflow/start-lxc.sh",
Mode: 0755,
Body: startScript.String(),
}),
rc.JobContainer.Exec([]string{rc.JobContainer.GetActPath() + "/workflow/start-lxc.sh"}, map[string]string{}, "root", "/tmp"),
)
}
executors = append(executors, rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
Name: "workflow/event.json",
Mode: 0o644,
Body: rc.EventJSON,
}, &container.FileEntry{
Name: "workflow/envs.txt",
Mode: 0o666,
Body: "",
}))
return common.NewPipelineExecutor(executors...)(ctx)
}
}
@@ -346,9 +492,13 @@ func (rc *RunContext) startJobContainer() common.Executor {
return nil
}
lifetime := fmt.Sprint(rc.Config.ContainerMaxLifetime.Round(time.Second).Seconds())
if lifetime == "0" {
lifetime = "infinity"
}
rc.JobContainer = container.NewContainer(&container.NewContainerInput{
Cmd: nil,
Entrypoint: []string{"/bin/sleep", fmt.Sprint(rc.Config.ContainerMaxLifetime.Round(time.Second).Seconds())},
Entrypoint: []string{"/bin/sleep", lifetime},
WorkingDir: ext.ToContainerPath(rc.Config.Workdir),
Image: image,
Username: username,
@@ -372,10 +522,15 @@ func (rc *RunContext) startJobContainer() common.Executor {
return errors.New("Failed to create job container")
}
networkConfig := docker.NetworkCreate{
Driver: "bridge",
Scope: "local",
EnableIPv6: rc.Config.ContainerNetworkEnableIPv6,
}
return common.NewPipelineExecutor(
rc.pullServicesImages(rc.Config.ForcePull),
rc.JobContainer.Pull(rc.Config.ForcePull),
container.NewDockerNetworkCreateExecutor(networkName).IfBool(!rc.IsHostEnv(ctx) && rc.Config.ContainerNetworkMode == ""), // if the value of `ContainerNetworkMode` is empty string, then will create a new network for containers.
container.NewDockerNetworkCreateExecutor(networkName, &networkConfig).IfBool(!rc.IsHostEnv(ctx) && rc.Config.ContainerNetworkMode == ""), // if the value of `ContainerNetworkMode` is empty string, then will create a new network for containers.
rc.startServiceContainers(networkName),
rc.JobContainer.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
rc.JobContainer.Start(false),
@@ -539,19 +694,55 @@ func (rc *RunContext) startContainer() common.Executor {
}
}
func (rc *RunContext) IsHostEnv(ctx context.Context) bool {
func (rc *RunContext) IsBareHostEnv(ctx context.Context) bool {
platform := rc.runsOnImage(ctx)
image := rc.containerImage(ctx)
return image == "" && strings.EqualFold(platform, "-self-hosted")
}
const lxcPrefix = "lxc:"
func (rc *RunContext) IsLXCHostEnv(ctx context.Context) bool {
platform := rc.runsOnImage(ctx)
return strings.HasPrefix(platform, lxcPrefix)
}
func (rc *RunContext) GetLXCInfo(ctx context.Context) (isLXC bool, template, release, config string) {
platform := rc.runsOnImage(ctx)
if !strings.HasPrefix(platform, lxcPrefix) {
return
}
isLXC = true
s := strings.SplitN(strings.TrimPrefix(platform, lxcPrefix), ":", 3)
template = s[0]
if len(s) > 1 {
release = s[1]
}
if len(s) > 2 {
config = s[2]
}
return
}
func (rc *RunContext) IsHostEnv(ctx context.Context) bool {
return rc.IsBareHostEnv(ctx) || rc.IsLXCHostEnv(ctx)
}
func (rc *RunContext) stopContainer() common.Executor {
return rc.stopJobContainer()
return func(ctx context.Context) error {
if rc.IsLXCHostEnv(ctx) {
return rc.stopHostEnvironment(ctx)
}
return rc.stopJobContainer()(ctx)
}
}
func (rc *RunContext) closeContainer() common.Executor {
return func(ctx context.Context) error {
if rc.JobContainer != nil {
if rc.IsLXCHostEnv(ctx) {
return rc.stopHostEnvironment(ctx)
}
return rc.JobContainer.Close()(ctx)
}
return nil
@@ -573,7 +764,7 @@ func (rc *RunContext) steps() []*model.Step {
// Executor returns a pipeline executor for all the steps in the job
func (rc *RunContext) Executor() (common.Executor, error) {
var executor common.Executor
var jobType, err = rc.Run.Job().Type()
jobType, err := rc.Run.Job().Type()
switch jobType {
case model.JobTypeDefault:

View file

@@ -61,15 +61,16 @@ type Config struct {
ReplaceGheActionTokenWithGithubCom string // Token of private action repo on GitHub.
Matrix map[string]map[string]bool // Matrix config to run
PresetGitHubContext *model.GithubContext // the preset github context, overrides some fields like DefaultBranch, Env, Secrets etc.
EventJSON string // the content of JSON file to use for event.json in containers, overrides EventPath
ContainerNamePrefix string // the prefix of container name
ContainerMaxLifetime time.Duration // the max lifetime of job containers
ContainerNetworkMode docker_container.NetworkMode // the network mode of job containers (the value of --network)
DefaultActionInstance string // the default actions web site
PlatformPicker func(labels []string) string // platform picker, it will take precedence over Platforms if isn't nil
JobLoggerLevel *log.Level // the level of job logger
ValidVolumes []string // only volumes (and bind mounts) in this slice can be mounted on the job container or service containers
PresetGitHubContext *model.GithubContext // the preset github context, overrides some fields like DefaultBranch, Env, Secrets etc.
EventJSON string // the content of JSON file to use for event.json in containers, overrides EventPath
ContainerNamePrefix string // the prefix of container name
ContainerMaxLifetime time.Duration // the max lifetime of job containers
ContainerNetworkMode docker_container.NetworkMode // the network mode of job containers (the value of --network)
ContainerNetworkEnableIPv6 bool // create the network with IPv6 support enabled
DefaultActionInstance string // the default actions web site
PlatformPicker func(labels []string) string // platform picker, it will take precedence over Platforms if isn't nil
JobLoggerLevel *log.Level // the level of job logger
ValidVolumes []string // only volumes (and bind mounts) in this slice can be mounted on the job container or service containers
}
// GetToken: Adapt to Gitea

View file

@@ -44,10 +44,12 @@ func (sal *stepActionLocal) main() common.Executor {
return func(filename string) (io.Reader, io.Closer, error) {
tars, err := sal.RunContext.JobContainer.GetContainerArchive(ctx, path.Join(cpath, filename))
if err != nil {
common.Logger(context.Background()).Debugf("stepActionLocal reader %s failed %v", path.Join(cpath, filename), err)
return nil, nil, os.ErrNotExist
}
treader := tar.NewReader(tars)
if _, err := treader.Next(); err != nil {
common.Logger(context.Background()).Debugf("stepActionLocal reader %s failed %v", path.Join(cpath, filename), err)
return nil, nil, os.ErrNotExist
}
return treader, tars, nil