ceeb6c160c
* Support services (#42)
  Removed createSimpleContainerName and the AutoRemove flag.
  Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/42
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Support services options (#45)
  Reviewed-on: https://gitea.com/gitea/act/pulls/45
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Support interpolation for `env` of `services` (#47)
  Reviewed-on: https://gitea.com/gitea/act/pulls/47
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Support services `credentials` (#51)
  If a service's image comes from a container registry that requires authentication, `act_runner` needs `credentials` to pull the image; see the [documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idcredentials). Currently, `act_runner` incorrectly uses the `credentials` of `containers` to pull the services' images, and the `credentials` of the services themselves are never used; see the related code: 0c1f2edb99/pkg/runner/run_context.go (L228-L269)
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/51
  Reviewed-by: Jason Song <i@wolfogre.com>
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Add ContainerMaxLifetime and ContainerNetworkMode options from: b9c20dcaa4
* Fix container network issue (#56)
  Follow: https://gitea.com/gitea/act_runner/pulls/184
  Close https://gitea.com/gitea/act_runner/issues/177
  - `act` creates new networks only if the value of `NeedCreateNetwork` is true, and removes these networks at the end. `NeedCreateNetwork` is passed by `act_runner` and is true only if `container.network` in the configuration file of `act_runner` is empty.
  - In the `docker create` phase, specify the network to which containers will connect, because if no network is specified, containers connect to the `bridge` network that Docker creates automatically.
  - If the network is a user-defined network (the value of `container.network` is empty or `<custom-network>`; the network created by `act` is also a user-defined network), also specify an alias via `--network-alias`. The alias of a service is `<service-id>`, so service containers can be reached at `<service-id>:<port>` from the steps of a job.
  - No longer run `docker network connect` after `docker start`:
    - `docker network connect` applies only to user-defined networks; `docker network connect host <container-name>` returns an error.
    - Specifying the network at `docker create` time achieves the same effect anyway.
  - No longer try to remove containers and networks before the `docker start` stage, because the names of these containers and networks won't be repeated.
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/56
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: sillyguodong <gedong_1994@163.com>
  Co-committed-by: sillyguodong <gedong_1994@163.com>
* Check volumes (#60)
  This PR adds a `ValidVolumes` config. Users can specify the volumes (including bind mounts) that may be mounted into containers via this config.
  Options related to volumes:
  - [jobs.<job_id>.container.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainervolumes)
  - [jobs.<job_id>.services.<service_id>.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idvolumes)
  In addition, volumes specified by `options` are also checked. Currently, the following default volumes (see a72822b3f8/pkg/runner/run_context.go (L116-L166)) are added to `ValidVolumes`:
  - `act-toolcache`
  - `<container-name>` and `<container-name>-env`
  - `/var/run/docker.sock` (we need a new configuration option to control whether the Docker daemon socket can be mounted)
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/60
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Remove ContainerMaxLifetime; fix lint
* Remove unused ValidVolumes
* Remove ConnectToNetwork
* Add docker stubs
* Close docker clients to prevent file descriptor leaks
* Fix the error when removing network in self-hosted mode (#69)
  Fixes https://gitea.com/gitea/act_runner/issues/255
  Reviewed-on: https://gitea.com/gitea/act/pulls/69
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>
* Move service container and network cleanup to rc.cleanUpJobContainer
* Add --network flag; default to host if not using service containers or set explicitly
* Correctly close executor to prevent fd leak
* Revert to tail instead of full path
* Fix network duplication
* Backport networkingConfig for aliases
* Don't hardcode netMode host
* Convert services test to table-driven tests
* Add failing tests for services
* Expose service container ports onto the host
* Set container network mode in artifacts server test to host mode
* Log container network mode when creating/starting a container
* fix: Correctly handle ContainerNetworkMode
* fix: missing service container network
* Always remove service containers
  Although we usually keep containers running if the workflow errored (unless `--rm` is given) to facilitate debugging, and we have a flag (`--reuse`) to always keep containers running to speed up repeated `act` invocations, these should only apply to job containers and not service containers, because changing the network settings on a service container requires re-creating it anyway.
* Remove networks only if no active endpoints exist
* Ensure job containers are stopped before starting a new job
* fix: go build -tags WITHOUT_DOCKER
---------
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: ZauberNerd <zaubernerd@zaubernerd.de>
169 lines
4.2 KiB
Go
package container

import (
	"bufio"
	"context"
	"io"
	"net"
	"strings"
	"testing"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)

func TestDocker(t *testing.T) {
	ctx := context.Background()
	client, err := GetDockerClient(ctx)
	assert.NoError(t, err)
	defer client.Close()

	dockerBuild := NewDockerBuildExecutor(NewDockerBuildExecutorInput{
		ContextDir: "testdata",
		ImageTag:   "envmergetest",
	})

	err = dockerBuild(ctx)
	assert.NoError(t, err)

	cr := &containerReference{
		cli: client,
		input: &NewContainerInput{
			Image: "envmergetest",
		},
	}
	env := map[string]string{
		"PATH":         "/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin",
		"RANDOM_VAR":   "WITH_VALUE",
		"ANOTHER_VAR":  "",
		"CONFLICT_VAR": "I_EXIST_IN_MULTIPLE_PLACES",
	}

	envExecutor := cr.extractFromImageEnv(&env)
	err = envExecutor(ctx)
	assert.NoError(t, err)
	assert.Equal(t, map[string]string{
		"PATH":            "/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin:/this/path/does/not/exists/anywhere:/this/either",
		"RANDOM_VAR":      "WITH_VALUE",
		"ANOTHER_VAR":     "",
		"SOME_RANDOM_VAR": "",
		"ANOTHER_ONE":     "BUT_I_HAVE_VALUE",
		"CONFLICT_VAR":    "I_EXIST_IN_MULTIPLE_PLACES",
	}, env)
}

type mockDockerClient struct {
	client.APIClient
	mock.Mock
}

func (m *mockDockerClient) ContainerExecCreate(ctx context.Context, id string, opts types.ExecConfig) (types.IDResponse, error) {
	args := m.Called(ctx, id, opts)
	return args.Get(0).(types.IDResponse), args.Error(1)
}

func (m *mockDockerClient) ContainerExecAttach(ctx context.Context, id string, opts types.ExecStartCheck) (types.HijackedResponse, error) {
	args := m.Called(ctx, id, opts)
	return args.Get(0).(types.HijackedResponse), args.Error(1)
}

func (m *mockDockerClient) ContainerExecInspect(ctx context.Context, execID string) (types.ContainerExecInspect, error) {
	args := m.Called(ctx, execID)
	return args.Get(0).(types.ContainerExecInspect), args.Error(1)
}

type endlessReader struct {
	io.Reader
}

func (r endlessReader) Read(_ []byte) (n int, err error) {
	return 1, nil
}

type mockConn struct {
	net.Conn
	mock.Mock
}

func (m *mockConn) Write(b []byte) (n int, err error) {
	args := m.Called(b)
	return args.Int(0), args.Error(1)
}

func (m *mockConn) Close() (err error) {
	return nil
}

func TestDockerExecAbort(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())

	conn := &mockConn{}
	conn.On("Write", mock.AnythingOfType("[]uint8")).Return(1, nil)

	client := &mockDockerClient{}
	client.On("ContainerExecCreate", ctx, "123", mock.AnythingOfType("types.ExecConfig")).Return(types.IDResponse{ID: "id"}, nil)
	client.On("ContainerExecAttach", ctx, "id", mock.AnythingOfType("types.ExecStartCheck")).Return(types.HijackedResponse{
		Conn:   conn,
		Reader: bufio.NewReader(endlessReader{}),
	}, nil)

	cr := &containerReference{
		id:  "123",
		cli: client,
		input: &NewContainerInput{
			Image: "image",
		},
	}

	channel := make(chan error)

	go func() {
		channel <- cr.exec([]string{""}, map[string]string{}, "user", "workdir")(ctx)
	}()

	time.Sleep(500 * time.Millisecond)

	cancel()

	err := <-channel
	assert.ErrorIs(t, err, context.Canceled)

	conn.AssertExpectations(t)
	client.AssertExpectations(t)
}

func TestDockerExecFailure(t *testing.T) {
	ctx := context.Background()

	conn := &mockConn{}

	client := &mockDockerClient{}
	client.On("ContainerExecCreate", ctx, "123", mock.AnythingOfType("types.ExecConfig")).Return(types.IDResponse{ID: "id"}, nil)
	client.On("ContainerExecAttach", ctx, "id", mock.AnythingOfType("types.ExecStartCheck")).Return(types.HijackedResponse{
		Conn:   conn,
		Reader: bufio.NewReader(strings.NewReader("output")),
	}, nil)
	client.On("ContainerExecInspect", ctx, "id").Return(types.ContainerExecInspect{
		ExitCode: 1,
	}, nil)

	cr := &containerReference{
		id:  "123",
		cli: client,
		input: &NewContainerInput{
			Image: "image",
		},
	}

	err := cr.exec([]string{""}, map[string]string{}, "user", "workdir")(ctx)
	assert.Error(t, err, "exit with `FAILURE`: 1")

	conn.AssertExpectations(t)
	client.AssertExpectations(t)
}

// Type assert containerReference implements ExecutionsEnvironment
var _ ExecutionsEnvironment = &containerReference{}