ceeb6c160c

* Support services (#42)

  Removed createSimpleContainerName and the AutoRemove flag.

  Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/42
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support services options (#45)

  Reviewed-on: https://gitea.com/gitea/act/pulls/45
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support interpolation for `env` of `services` (#47)

  Reviewed-on: https://gitea.com/gitea/act/pulls/47
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Support services `credentials` (#51)

  If a service's image comes from a container registry that requires authentication, `act_runner` needs `credentials` to pull the image; see the [documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idcredentials). Currently, `act_runner` incorrectly uses the `credentials` of `containers` to pull services' images, and the `credentials` of the services themselves are never used; see the related code: 0c1f2edb99/pkg/runner/run_context.go (L228-L269)

  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/51
  Reviewed-by: Jason Song <i@wolfogre.com>
  Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Add ContainerMaxLifetime and ContainerNetworkMode options (from: b9c20dcaa4)

* Fix container network issue (#56)

  Follows: https://gitea.com/gitea/act_runner/pulls/184
  Closes: https://gitea.com/gitea/act_runner/issues/177

  - `act` creates new networks only if `NeedCreateNetwork` is true, and removes them at the end. `NeedCreateNetwork` is passed in by `act_runner`, and is true only if `container.network` in the `act_runner` configuration file is empty.
  - In the `docker create` phase, the network the containers will connect to is specified explicitly; if it were not, containers would connect to the `bridge` network created automatically by Docker.
  - If the network is a user-defined network (i.e. `container.network` is empty or names a custom network; the network created by `act` is also user-defined), an alias is also set via `--network-alias`. The alias of a service is its `<service-id>`, so a service container can be reached as `<service-id>:<port>` from the steps of the job (see the sketch after this commit message).
  - `docker network connect` is no longer attempted after `docker start`: it applies only to user-defined networks (`docker network connect host <container-name>` returns an error), and specifying the network at `docker create` time achieves the same effect.
  - Containers and networks are no longer removed before the `docker start` stage, because their names can no longer collide.

  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/56
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: sillyguodong <gedong_1994@163.com>
  Co-committed-by: sillyguodong <gedong_1994@163.com>

* Check volumes (#60)

  This PR adds a `ValidVolumes` config. Users can specify the volumes (including bind mounts) that may be mounted into containers via this config. Options related to volumes:

  - [jobs.<job_id>.container.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idcontainervolumes)
  - [jobs.<job_id>.services.<service_id>.volumes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idservicesservice_idvolumes)

  In addition, volumes specified via `options` are also checked. Currently, the following default volumes (see a72822b3f8/pkg/runner/run_context.go (L116-L166)) are added to `ValidVolumes`:

  - `act-toolcache`
  - `<container-name>` and `<container-name>-env`
  - `/var/run/docker.sock` (a new configuration option is still needed to control whether the Docker daemon socket can be mounted)

  Co-authored-by: Jason Song <i@wolfogre.com>
  Reviewed-on: https://gitea.com/gitea/act/pulls/60
  Reviewed-by: Jason Song <i@wolfogre.com>
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Remove ContainerMaxLifetime; fix lint
* Remove unused ValidVolumes
* Remove ConnectToNetwork
* Add docker stubs
* Close docker clients to prevent file descriptor leaks

* Fix the error when removing a network in self-hosted mode (#69)

  Fixes: https://gitea.com/gitea/act_runner/issues/255

  Reviewed-on: https://gitea.com/gitea/act/pulls/69
  Co-authored-by: Zettat123 <zettat123@gmail.com>
  Co-committed-by: Zettat123 <zettat123@gmail.com>

* Move service container and network cleanup to rc.cleanUpJobContainer
* Add --network flag; default to host if not using service containers or set explicitly
* Correctly close executor to prevent fd leak
* Revert to tail instead of full path
* Fix network duplication
* Backport networkingConfig for aliases
* Don't hardcode netMode host
* Convert services test to table-driven tests
* Add failing tests for services
* Expose service container ports onto the host
* Set container network mode in artifacts server test to host mode
* Log container network mode when creating/starting a container
* fix: correctly handle ContainerNetworkMode
* fix: missing service container network

* Always remove service containers

  Although we usually keep containers running if the workflow errored (unless `--rm` is given) in order to facilitate debugging, and we have a flag (`--reuse`) to always keep containers running in order to speed up repeated `act` invocations, these should only apply to job containers and not to service containers, because changing the network settings on a service container requires re-creating it anyway.

* Remove networks only if no active endpoints exist
* Ensure job containers are stopped before starting a new job
* fix: go build -tags WITHOUT_DOCKER

---------

Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-authored-by: sillyguodong <gedong_1994@163.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: ZauberNerd <zaubernerd@zaubernerd.de>
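The network handling described in "Fix container network issue (#56)" comes down to choosing the network and its alias at container-creation time. Below is a minimal sketch of that idea against a recent Docker Go SDK (github.com/docker/docker); the helper name, network name, and service names are illustrative, not code from this repository.

package main

import (
	"context"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

// createServiceContainer is a hypothetical helper illustrating the approach
// from #56: the network is chosen at `docker create` time (no later
// `docker network connect`), and the service gets a network alias equal to
// its service id, so job steps can reach it as <service-id>:<port>.
func createServiceContainer(ctx context.Context, cli *client.Client, image, networkName, serviceID string) (string, error) {
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{Image: image},
		// Equivalent of `docker create --network <networkName>`.
		&container.HostConfig{NetworkMode: container.NetworkMode(networkName)},
		// Equivalent of `--network-alias <service-id>`; aliases only work on
		// user-defined networks, which is why `act` creates one when needed.
		&network.NetworkingConfig{EndpointsConfig: map[string]*network.EndpointSettings{
			networkName: {Aliases: []string{serviceID}},
		}},
		nil, // default platform
		serviceID,
	)
	if err != nil {
		return "", err
	}
	return resp.ID, cli.ContainerStart(ctx, resp.ID, container.StartOptions{})
}

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Assumes the user-defined network already exists (e.g. created earlier
	// via cli.NetworkCreate). The service is then reachable from job steps
	// as postgres:5432.
	id, err := createServiceContainer(context.Background(), cli, "postgres:15", "act-custom-network", "postgres")
	if err != nil {
		panic(err)
	}
	println("started service container", id)
}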
199 lines · 5.5 KiB · Go
package runner

import (
	"context"
	"fmt"
	"time"

	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/model"
)
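// jobInfo exposes the per-job data and container lifecycle hooks that
// newJobExecutor needs from a RunContext.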
type jobInfo interface {
	matrix() map[string]interface{}
	steps() []*model.Step
	startContainer() common.Executor
	stopContainer() common.Executor
	closeContainer() common.Executor
	interpolateOutputs() common.Executor
	result(result string)
}
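// newJobExecutor builds a single executor for the whole job: pre-steps, then
// the main steps, with post steps, output interpolation, and container
// cleanup attached as Finally stages.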
//nolint:contextcheck,gocyclo
func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
	steps := make([]common.Executor, 0)
	preSteps := make([]common.Executor, 0)
	var postExecutor common.Executor

	steps = append(steps, func(ctx context.Context) error {
		logger := common.Logger(ctx)
		if len(info.matrix()) > 0 {
			logger.Infof("\U0001F9EA Matrix: %v", info.matrix())
		}
		return nil
	})

	infoSteps := info.steps()

	if len(infoSteps) == 0 {
		return common.NewDebugExecutor("No steps found")
	}

	preSteps = append(preSteps, func(ctx context.Context) error {
		// has to be skipped for some tests
		if rc.Run == nil {
			return nil
		}
		rc.ExprEval = rc.NewExpressionEvaluator(ctx)
		// evaluate environment variables since they can contain
		// GitHub's special environment variables.
		for k, v := range rc.GetEnv() {
			rc.Env[k] = rc.ExprEval.Interpolate(ctx, v)
		}
		return nil
	})
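	// Build pre, main, and post executors for each step. Post executors are
	// chained in reverse order, so the last step's post action runs first.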
	for i, stepModel := range infoSteps {
		stepModel := stepModel
		if stepModel == nil {
			return func(ctx context.Context) error {
				return fmt.Errorf("invalid Step %v: missing run or uses key", i)
			}
		}
		if stepModel.ID == "" {
			stepModel.ID = fmt.Sprintf("%d", i)
		}

		step, err := sf.newStep(stepModel, rc)

		if err != nil {
			return common.NewErrorExecutor(err)
		}

		preSteps = append(preSteps, useStepLogger(rc, stepModel, stepStagePre, step.pre()))

		stepExec := step.main()
		steps = append(steps, useStepLogger(rc, stepModel, stepStageMain, func(ctx context.Context) error {
			logger := common.Logger(ctx)
			err := stepExec(ctx)
			if err != nil {
				logger.Errorf("%v", err)
				common.SetJobError(ctx, err)
			} else if ctx.Err() != nil {
				logger.Errorf("%v", ctx.Err())
				common.SetJobError(ctx, ctx.Err())
			}
			return nil
		}))

		postExec := useStepLogger(rc, stepModel, stepStagePost, step.post())
		if postExecutor != nil {
			// run the post executor in reverse order
			postExecutor = postExec.Finally(postExecutor)
		} else {
			postExecutor = postExec
		}
	}
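	// After all post steps have run, stop the job container (when the job
	// succeeded or AutoRemove is set) and record the job result and outputs.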
	postExecutor = postExecutor.Finally(func(ctx context.Context) error {
		jobError := common.JobError(ctx)
		var err error
		if rc.Config.AutoRemove || jobError == nil {
			// always allow 1 min for stopping and removing the runner, even if we were cancelled
			ctx, cancel := context.WithTimeout(common.WithLogger(context.Background(), common.Logger(ctx)), time.Minute)
			defer cancel()

			logger := common.Logger(ctx)
			logger.Infof("Cleaning up container for job %s", rc.JobName)
			if err = info.stopContainer()(ctx); err != nil {
				logger.Errorf("Error while stopping job container: %v", err)
			}
		}
		setJobResult(ctx, info, rc, jobError == nil)
		setJobOutputs(ctx, rc)

		return err
	})
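	// Assemble the final pipeline: container start, then pre-steps and main
	// steps, with the post executor, output interpolation, and container
	// close as Finally stages so they run even on failure.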
	pipeline := make([]common.Executor, 0)
	pipeline = append(pipeline, preSteps...)
	pipeline = append(pipeline, steps...)

	return common.NewPipelineExecutor(info.startContainer(), common.NewPipelineExecutor(pipeline...).
		Finally(func(ctx context.Context) error { //nolint:contextcheck
			var cancel context.CancelFunc
			if ctx.Err() == context.Canceled {
				// in case of an aborted run, we still should execute the
				// post steps to allow cleanup.
				ctx, cancel = context.WithTimeout(common.WithLogger(context.Background(), common.Logger(ctx)), 5*time.Minute)
				defer cancel()
			}
			return postExecutor(ctx)
		}).
		Finally(info.interpolateOutputs()).
		Finally(info.closeContainer()))
}
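// setJobResult records the job's outcome on the jobInfo (and on the caller's
// RunContext for reusable workflows), keeping an earlier failure sticky
// across the runs of a matrix build.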
func setJobResult(ctx context.Context, info jobInfo, rc *RunContext, success bool) {
	logger := common.Logger(ctx)

	jobResult := "success"
	// we have only one result for a whole matrix build, so we need
	// to keep an existing result state if we run a matrix
	if len(info.matrix()) > 0 && rc.Run.Job().Result != "" {
		jobResult = rc.Run.Job().Result
	}

	if !success {
		jobResult = "failure"
	}

	info.result(jobResult)
	if rc.caller != nil {
		// set reusable workflow job result
		rc.caller.runContext.result(jobResult)
	}

	jobResultMessage := "succeeded"
	if jobResult != "success" {
		jobResultMessage = "failed"
	}

	logger.WithField("jobResult", jobResult).Infof("\U0001F3C1 Job %s", jobResultMessage)
}
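// setJobOutputs propagates the outputs declared in a reusable workflow's
// workflow_call config back to the calling job.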
func setJobOutputs(ctx context.Context, rc *RunContext) {
	if rc.caller != nil {
		// map outputs for reusable workflows
		callerOutputs := make(map[string]string)

		ee := rc.NewExpressionEvaluator(ctx)

		for k, v := range rc.Run.Workflow.WorkflowCallConfig().Outputs {
			callerOutputs[k] = ee.Interpolate(ctx, ee.Interpolate(ctx, v.Value))
		}

		rc.caller.runContext.Run.Job().Outputs = callerOutputs
	}
}
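// useStepLogger runs an executor with a step-scoped logger attached to the
// context and the job container's log writers temporarily swapped, so the
// step's raw output flows through the command handler.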
func useStepLogger(rc *RunContext, stepModel *model.Step, stage stepStage, executor common.Executor) common.Executor {
	return func(ctx context.Context) error {
		ctx = withStepLogger(ctx, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())

		rawLogger := common.Logger(ctx).WithField("raw_output", true)
		logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {
			if rc.Config.LogOutput {
				rawLogger.Infof("%s", s)
			} else {
				rawLogger.Debugf("%s", s)
			}
			return true
		})

		oldout, olderr := rc.JobContainer.ReplaceLogWriter(logWriter, logWriter)
		defer rc.JobContainer.ReplaceLogWriter(oldout, olderr)

		return executor(ctx)
	}
}